CHAPTER 10:
UNIVERSAL DESIGN
OVERVIEW
Multi-modal systems are those that use more than one
human input channel in the interaction.
These systems may, for example, use:
– speech
– non-speech sound
– touch
– handwriting
– gestures
Universal design means designing for diversity,
including:
– people with sensory, physical or cognitive impairment
– people of different ages
– people from different cultures and backgrounds.
INTRODUCTION
People have different abilities and weaknesses; they
come from different backgrounds and cultures; they
have different interests, viewpoints and experiences;
they are different ages and sizes. All of these things
have an impact on the way in which an individual will
use a particular computing application and, indeed, on
whether or not they can use it at all.
Universal design is the process of designing products
so that they can be used by as many people as
possible in as many situations as possible.
UNIVERSAL DESIGN PRINCIPLES
Universal design is primarily about trying to ensure that you do not exclude
anyone through the design choices you make but, by giving thought to
these issues, you will invariably make your design better for everyone.
In the late 1990s a group at North Carolina State University in the USA
proposed seven general principles of universal design [333]. These were
intended to cover all areas of design and are equally applicable to the
design of interactive systems. These principles give us a framework in
which to develop universal designs.
THE SEVEN GENERAL PRINCIPLES OF
UNIVERSAL DESIGN
1. Equitable use
2. Flexibility in use
3. Simple and intuitive to use
4. Perceptible information
5. Tolerance for error
6. Low physical effort
7. Size and space for approach and use
MULTI-MODAL INTERACTION
As we have seen in the previous section, providing access to information
through more than one mode of interaction is an important principle of
universal design. Such design relies on multi-modal interaction.
The use of multiple sensory channels increases the bandwidth of the
interaction between the human and the computer, and it also makes
human–computer interaction more like the interaction between humans
and their everyday environment, perhaps making the use of such systems
more natural.
Redundant systems provide the same information through a range of
channels; the aim is to provide at least an equivalent experience to all
users, regardless of their primary channel of interaction.
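As a minimal illustration of the redundancy idea, the following Python
sketch presents one message through three channels at once; the channel
functions are invented stubs, not a real platform API.

def show_on_screen(message: str) -> None:
    print(f"[screen] {message}")    # stand-in for a visual display call

def speak(message: str) -> None:
    print(f"[speech] {message}")    # stand-in for a speech synthesizer

def vibrate() -> None:
    print("[touch] buzz")           # stand-in for a tactile alert

def notify(message: str) -> None:
    # Present the same information redundantly through several channels,
    # so users receive it through whichever channel they can best use.
    show_on_screen(message)
    speak(message)
    vibrate()

notify("Download complete")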
Usable sensory inputs
The visual channel is used as the predominant channel for
communication in most current interfaces.
Sound is already used, to a limited degree, in many interfaces: beeps
are used as warnings and notifications, and recorded or synthesized
speech and music are also used.
Tactile feedback is also important in improving interactivity, so touch
represents another sense that we can utilize more effectively.
SOUND IN THE INTERFACE
Sound is an important contributor to usability. The dual presentation of
information through sound and vision supports universal design by enabling
access for users with visual and hearing impairments respectively. Sound can
convey transient information and does not take up screen space, making it
potentially useful for mobile applications. There are two types of sound that we
could use: speech and non-speech.
SPEECH IN THE INTERFACE
Structure of speech
The English language is made up of 40 phonemes, which are the atomic
elements of speech. Each phoneme represents a distinct sound, there
being 24 consonants and 16 vowel sounds. The tone and quality of
phonemes alter depending on their context within a word or sentence;
this alteration is termed prosody, and it conveys additional emotion
and meaning in speech.
Speech recognition
There have been many attempts at developing speech recognition
systems, but, although commercial systems are now commonly and
cheaply available, their success is still limited to single-user systems that
require considerable training.
THE PHONETIC TYPEWRITER
One early successful speech-based system is the phonetic typewriter.
This uses a neural network that clusters similar sounds together.
Designed to produce typed output from speech input in Finnish, it is
trained on one particular speaker, and then generalizes to others.
Speech synthesis
Complementary to speech recognition is speech synthesis. The notion of being
able to converse naturally with a computer is an appealing one for many users,
especially those who do not regard themselves as computer literate, since it
reflects their natural, daily medium of expression and communication.
Uninterpreted speech
Speech does not have to be recognized by a computer to be useful in the
interface. Fixed pre-recorded messages can be used to supplement or replace
visual information. Segments of speech can be used together to construct
messages, as in the announcements in many airports and railway stations.
If you include speech input in an interface you must decide
what level of speech interaction you wish to support:
recording -- simply recording and replaying messages or
annotations;
transcription -- turning speech into text as in a word
processor;
control -- telling the computer what to do: for example,
‘print this file’ (a minimal sketch of this level follows).
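As a concrete illustration of the control level, here is a minimal Python
sketch; the recognizer stub and the command table are invented for
illustration and are not part of any real speech library.

def recognize_utterance() -> str:
    # Stand-in for a real recognizer; assume it returns the words spoken.
    return "print this file"

COMMANDS = {
    "print this file": lambda: print("sending current file to printer..."),
    "open file": lambda: print("opening file chooser..."),
}

def handle_speech() -> None:
    text = recognize_utterance()
    action = COMMANDS.get(text)
    if action is not None:
        action()    # run the matched command
    else:
        print(f"unrecognized command: {text!r}")

handle_speech()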
NON-SPEECH SOUND
Non-speech sound can be used in a number of ways in
interactive systems. It is often used to provide transitory
information, such as indications of network or system changes, or
of errors.
Auditory icons -- use natural sounds to represent different types
of objects and actions in the interface. Natural sounds are used
because people recognize the source of a sound and its behavior
rather than timbre and pitch.
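The mapping from interface events to natural sounds can be sketched very
simply in Python; the event names, sound files and play() stub below are
invented for illustration.

AUDITORY_ICONS = {
    "file_deleted": "paper_crumple.wav",    # crumpling paper for deletion
    "message_arrived": "door_knock.wav",    # a knock for an incoming message
    "window_opened": "drawer_open.wav",     # a drawer sliding open
}

def play(sound_file: str) -> None:
    print(f"playing {sound_file}")    # stand-in for a real audio call

def on_event(event: str) -> None:
    # Play the natural sound associated with an interface event, if any.
    if event in AUDITORY_ICONS:
        play(AUDITORY_ICONS[event])

on_event("file_deleted")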
EARCONS
An alternative to using natural sounds is to devise synthetic sounds.
Earcons use structured combinations of notes, called motives, to
represent actions and objects. These vary according to rhythm, pitch,
timbre, scale and volume.
There are two ways of combining earcons:
Compound earcons -- combine different motives to build up a specific
action, for example combining the motives for ‘create’ and ‘file’ (see
the sketch at the end of this section).
Family earcons -- represent compound earcons of similar types. As an
example, operating system errors and syntax errors would both be in the
‘error’ family.
Earcons provide a structured approach to designing sound for the
interface. Evidence suggests that people can learn to recognize earcons,
and that the most important element in distinguishing different sounds
is timbre, the characteristic quality of the sound produced by different
instruments and voices.
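A minimal Python sketch of compound earcons follows, assuming each motive
is a short sequence of (frequency, duration) notes; the particular motives
are invented for illustration.

Motive = list[tuple[float, float]]    # notes as (frequency in Hz, duration in s)

MOTIVES: dict[str, Motive] = {
    "create": [(440.0, 0.15), (554.4, 0.15)],     # a rising pair of notes
    "destroy": [(554.4, 0.15), (440.0, 0.15)],    # a falling pair of notes
    "file": [(330.0, 0.3)],                       # a single lower note
}

def compound_earcon(*names: str) -> Motive:
    # A compound earcon concatenates its component motives, so the
    # motives for 'create' and 'file' together mean "create file".
    notes: Motive = []
    for name in names:
        notes.extend(MOTIVES[name])
    return notes

print(compound_earcon("create", "file"))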
TOUCH IN THE INTERFACE
Touch interaction in the interface refers to the ability of users to
interact with digital devices, such as smartphones, tablets, and
touchscreens, by physically touching the screen with their fingers or a
stylus.
Touch is the only sense that can be used to both send and
receive information.
The use of touch in the interface is known as haptic
interaction.
Haptic devices are electronic tools or systems designed to provide
users with tactile feedback, simulating the sense of touch through
vibrations, forces, or motions.
Haptics is a generic term relating to touch, but it can be
roughly divided into two areas:
cutaneous perception, which is concerned with tactile
sensations through the skin; and
kinesthetics, which is the perception of movement and
position.
One example of a tactile device is the electronic braille display.
Braille displays are made up of a number of cells (typically between 20
and 80), each containing six or eight electronically controlled pins
that move up and down to produce braille representations of characters
displayed on the screen.
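A minimal Python sketch of driving one such cell follows; the set_pin()
stub stands in for the display's real driver, while the dot patterns shown
are standard six-dot braille.

# Dot patterns for a few letters; pins 1-3 run down the left column
# of a cell and pins 4-6 down the right.
BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "l": {1, 2, 3},
}

def set_pin(cell: int, pin: int, raised: bool) -> None:
    # Stand-in for the display's real hardware I/O.
    print(f"cell {cell}: pin {pin} {'up' if raised else 'down'}")

def show_char(cell: int, ch: str) -> None:
    # Raise exactly the pins in the character's dot pattern.
    dots = BRAILLE.get(ch.lower(), set())
    for pin in range(1, 7):
        set_pin(cell, pin, pin in dots)

show_char(0, "b")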
The other main type of haptic device is the force feedback device,
which provides kinesthetic information back to the user, allowing them
to feel resistance, textures, friction and so on. One example is the
PHANTOM range from SensAble Technologies, which provides
three-dimensional force feedback, allowing users to touch virtual
objects.
HANDWRITING RECOGNITION
Handwriting is a very natural form of communication. The idea of being
able to interpret handwritten input is very appealing, and handwriting
appears to offer both textual and graphical input using the same tools.
TECHNOLOGY FOR HANDWRITING RECOGNITION
Handwriting is usually captured by a digitizing tablet or
touch-sensitive screen, which records the pen strokes as the text is
written. The Apple Newton was one of the first popular pen-based
computers.
RECOGNIZING HANDWRITING
Handwriting recognition is the process of converting handwritten text
into a digital format, typically using computer algorithms or software.
Handwriting varies widely between individuals, and even one person's
writing is inconsistent, which causes problems for recognition systems.
These work by trying to identify the lines that contain text, and then
segmenting the digitized image into separate characters.
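The column-splitting step can be sketched in a few lines of Python,
assuming a binarized image given as rows of 0/1 pixels; real systems face
much harder cases, such as joined-up writing and overlapping strokes.

def segment_columns(image: list[list[int]]) -> list[tuple[int, int]]:
    # Split a line of text into characters at empty pixel columns.
    width = len(image[0])
    ink = [any(row[x] for row in image) for x in range(width)]
    spans, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                    # a character begins
        elif not has_ink and start is not None:
            spans.append((start, x))     # a character ends
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

# Two tiny "characters" separated by one blank column:
img = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
]
print(segment_columns(img))    # [(0, 2), (3, 4)]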
GESTURE RECOGNITION
Gesture is a component of human–computer interaction that has
become the subject of attention in multi-modal systems.
Being able to control the computer with certain movements of the
hand would be advantageous in many situations where there is no
possibility of typing, or when other senses are fully occupied.
The interpretation of the sampled data is very difficult, since
segmenting the continuous stream of movement into distinct gestures
causes problems: it is hard to tell where one gesture ends and the next
begins, and gestures vary from user to user.
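One naive segmentation approach can be sketched in Python, assuming a
stream of hand-speed samples (say, from a dataglove) in which movement
above a threshold counts as gesturing; the threshold and data are invented
for illustration.

def segment_gestures(speeds: list[float], threshold: float = 0.5) -> list[list[float]]:
    # Group consecutive fast samples into gestures, treating
    # near-stillness as the gap between one gesture and the next.
    gestures, current = [], []
    for s in speeds:
        if s > threshold:
            current.append(s)            # still inside a gesture
        elif current:
            gestures.append(current)     # a pause ends the gesture
            current = []
    if current:
        gestures.append(current)
    return gestures

stream = [0.1, 0.9, 1.2, 0.8, 0.1, 0.05, 1.1, 1.3, 0.2]
print(segment_gestures(stream))    # [[0.9, 1.2, 0.8], [1.1, 1.3]]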
DESIGNING FOR DIVERSITY
Designing for diversity means embedding responsiveness to diversity at
the outset of any design process. It focuses on including people from
different backgrounds, experiences, and abilities during the design
process to create more universally relevant and useful products.
In this section, we will consider briefly some of these factors and the
particular challenges that each raises.
We will consider three key areas:
– disability
– age
– culture
How do you ensure that your designs are accessible to users with
disabilities?
Accessibility starts with a clear and consistent structure. Users must
be able to navigate through the website or application seamlessly.
Consistent placement of navigation menus, headings, and important
content elements helps users predict where they'll find the information
they're seeking.
Visual impairment
Visual impairment is a term experts use to describe any
kind of vision loss, whether it's someone who cannot
see at all or someone who has partial vision loss. Some
people are completely blind, but many others have
what's called legal blindness.
There are two key approaches to extending access: the
use of sound and the use of touch.
A number of systems use sound to provide access to
graphical interfaces for people with visual
impairment. We looked at a range of approaches to
the use of sound, such as speech, earcons and
auditory icons. All of these have been used in
interfaces for blind users.
Soundtrack
Soundtrack is an early example of a word processor with an auditory
interface, designed for users who are blind or partially sighted.
Hearing impairment
A person is said to have hearing loss if they are not able to hear as
well as someone with normal hearing. Compared with a visual impairment,
where the impact on interacting with a graphical interface is
immediately obvious, a hearing impairment may appear to have little
impact on the use of an interface. To an extent this is true, and
computer technology can actually enhance communication opportunities
for people with hearing loss.
Physical impairment
A physical impairment is a condition in which a part of a person's body
is damaged or is not working properly, so that some tasks cannot be
performed without assistance. Users with physical disabilities vary in
the amount of control and movement that they have.
Speech impairment
Speech or language impairment means a communication disorder, such as
stuttering, impaired articulation, a language impairment, or a voice
impairment, that adversely affects a child's educational performance.
For users with speech impairment, textual communication is a common
alternative, but it is slow, which can lower the effectiveness of the
communication.
Dyslexia
Dyslexia is a learning disorder that involves difficulty reading due to
problems identifying speech sounds and learning how they relate to
letters and words (decoding). Also called a reading disability, dyslexia
is a result of individual differences in areas of the brain that process
language.
Autism
Autism affects a person’s ability to communicate and interact with
people around them and to make sense of their environment.
Developmental disability caused by differences in the brain.
Social interaction – problems in relating to others in a meaningful
way or responding appropriately to social situations.
Communication – problems in understanding verbal and textual
language including the use of gestures and expressions.
Imagination – problems with rigidity of thought processes, which
may lead to repetitive behavior and inflexibility.
Designing for different age groups
Diversity is one of the things that make the web great, and every
audience has its own needs and requirements. Age is an influential
factor on the web in terms of not only psychology, but also
accessibility, usability, and user interface design.
Older people
The proportion of older people in the population is growing steadily.
Contrary to popular stereotyping, there is no evidence that older people
are averse to using new technologies, so this group represents a major
and growing market for interactive applications.
Children
Like older people, children have distinct needs when it comes to
technology, and again, as a population, they are diverse. As well as their
likes and dislikes, children’s abilities will also be different from those of
adults. Younger children may have difficulty using a keyboard, for
instance, and may not have well-developed hand–eye coordination.
Designing for cultural differences
The final area of diversity we will consider is cultural difference.
Designing for cultural differences is the process of understanding
cultural differences and dimensions and allowing both to influence the
design. It considers, embraces, and then translates every aspect of the
user's cultural identity into a design they can interact with
intuitively and seamlessly.