TABLE OF CONTENTS
UNIT - 1 – FOUNDATIONS OF HCI
1.1 Introduction 1.1
1.1.1 The Human: Input- Output Channels 1.1
1.1.2 Human Memory 1.4
1.1.3 Reasoning and Problem Solving 1.6
1.1.3.1 Reasoning 1.6
1.1.3.2 Problem solving 1.6
1.1.3.3 Skill Acquisition 1.8
1.1.3.4 Errors And Mental Models 1.8
1.2 The Computer 1.8
1.2.1 Devices 1.9
1.2.1.1 Text entry devices 1.9
1.2.1.2 Display Devices 1.12
1.2.1.3 Devices for Virtual Reality And 3D Interaction 1.13
1.2.1.4 Printing and Scanning 1.14
1.2.2 The Computer Memory 1.14
1.2.3 Processing and Networks 1.16
1.3 Interaction 1.16
1.3.1 Interaction Models 1.17
1.3.2 Interaction Frameworks 1.17
1.3.3 Ergonomics 1.18
1.3.4 Interaction Styles 1.19
1.3.5 Elements of the Wimp Interface 1.21
1.3.6 Interactivity 1.24
1.3.7 Paradigms 1.25
Review Questions and Answers 1.29
UNIT - 2 – DESIGN & SOFTWARE PROCESS
2.1 Interactive Design Basics 2.1
2.1.1 Process 2.2
2.1.2 Scenarios 2.4
2.1.3 Navigation 2.5
2.1.4 Screen Design 2.7
2.1.5 Iteration and Prototyping 2.10
2.2 HCI in Software Process 2.10
2.2.1 The Software Life Cycle: 2.10
2.2.2 Usability Engineering 2.13
2.2.3 Iterative Design And Prototyping 2.15
2.2.4 Design Rationale 2.17
2.3 Design Rules 2.19
2.3.1 Principles 2.20
2.3.2 Standards 2.22
2.3.3 Guidelines 2.22
2.3.4 Golden rules and heuristics 2.23
2.4 Evaluation Techniques 2.24
2.4.1 Evaluation Through Expert Analysis 2.25
2.4.2 Evaluating Through User Participation 2.25
2.4.3 Observational Techniques 2.27
2.4.4 Query Techniques 2.29
2.4.5 Physiological Methods 2.30
2.5 Universal Design 2.30
2.5.1 Universal Design Principles 2.31
2.5.2 Handwriting Recognition 2.33
2.5.3 Gesture Recognition 2.34
2.5.4 Designing For Diversity 2.35
Review Questions and Answers 2.36
UNIT - 3 – MODELS AND THEORIES
3.1. Cognitive Models 3.1
3.1.1. Cognitive Science 3.1
3.1.2 Why Cognitive Science is Interdisciplinary? 3.2
3.1.3 Cognitive Model 3.2
3.1.4. Goal and Task Structure 3.4
3.1.4.1. Goal, Operators, Methods and Selection (GOMS) 3.5
3.1.4.2. Cognitive Complexity theory (CCT) 3.11
3.1.4.3. Problems and Extensions of goal hierarchies 3.12
3.1.5 Linguistic Models 3.12
3.1.6. Physical and Device Models 3.15
3.1.7. Cognitive Architecture Model 3.16
3.2. Socio Organizational Issues And Stakeholder Requirements 3.18
3.2.1. Organizational Issues 3.18
3.2.2. Capturing Requirements 3.21
3.2.3. Participatory Design 3.26
3.2.4. Ethnographic methods 3.27
3.3. Communication And Collaboration Models 3.27
3.3.1. Face to Face Communication 3.28
3.3.2. Conversation 3.30
3.3.3. Text based Communication 3.32
3.3.4. Group Working 3.35
3.4 Hypertext, Multimedia And The World Wide Web 3.37
3.4.1. Understanding Hyper text 3.37
3.4.1.1. Rich Content 3.38
3.4.2. Issues/Finding things in Hyper text 3.42
3.4.2.1 Lost in Hyperspace 3.42
3.4.2.2. Designing Structure 3.42
3.4.2.3 Making Navigation Easier 3.43
3.4.2.4 History, Bookmark and External Link 3.43
3.4.2.5 Indices, Directories and Search 3.44
3.4.3. Web Technology and Issues 3.44
3.4.3.1. Web Servers and Clients 3.44
3.4.3.2. Network Issues 3.45
3.4.4. Static Web Contents 3.46
3.4.5 Dynamic Web Contents 3.49
3.3.5.1 Fixed Content- Local Interaction and Changing views 3.50
Review Questions and Answers 3.51
UNIT - 4 – MOBILE HCI
4.1 Mobile Ecosystem 4.1
4.1.1 Operators 4.1
4.1.2 Networks 4.3
4.1.3 Devices 4.4
4.1.4 Platforms 4.4
4.1.5 Operating Systems 4.6
4.1.6 Application Frameworks 4.7
4.1.7 Applications 4.8
4.1.8 Services 4.9
4.2 Types of Mobile Applications 4.10
4.2.1 Widget 4.11
4.2.2 Games 4.13
4.3 How is Mobile Different? 4.13
4.4 Mobile 2.0 4.19
4.5 Mobile Design Elements 4.24
4.6 Mobile Tools 4.25
Review Questions and Answers 4.27
UNIT - 5 – DESIGNING WEB INTERFACES
5.1 Designing Web Interfaces 5.1
5.1.1 Drag and Drop 5.1
5.1.1.1 The Events 5.1
5.1.1.2 The Actors 5.2
5.1.1.3 Moments Grid - Drag and Drop. 5.2
5.1.1.4 Purpose of Drag and Drop 5.2
5.1.1.5 Drag and Drop Module 5.2
5.1.1.6 Invitation to drag 5.2
5.1.1.7 Placeholder target-Boundary-based placement. 5.2
5.1.1.8 Drag distance 5.3
5.1.2 Drag and Drop List 5.3
5.1.2.1 Insertion target 5.3
5.1.2.2 Non–drag and drop alternative 5.3
5.1.2.3 Hinting at drag and drop 5.4
5.1.2.4 Drag lens 5.4
5.1.2.5 Invitation to drag 5.4
5.2 Direct Selection 5.5
5.2.1 Toggle Selection 5.5
5.2.2 Scrolling versus paging 5.5
5.2.3 Making selection explicit 5.5
5.2.4 Collected Selection 5.5
5.2.5 Object Selection 5.6
5.2.6 Hybrid Selection 5.6
5.2.7 Blending two models 5.6
5.2.8 Keep It Lightweight 5.6
5.3 Contextual Tools-Interaction In Context 5.6
5.3.1 Fitts’s Law 5.7
5.3.2 List of Contextual Tools 5.7
5.3.3 Secondary Menus 5.8
5.3.4 Relative importance 5.8
5.3.5 Discoverability 5.8
5.3.6 Hover-Reveal Tools 5.8
5.3.7 Discoverability 5.8
5.3.8 Contextual Tools in an overlay 5.9
5.3.9 Anti-pattern: Hover and Cover 5.9
5.3.10 Anti-pattern: Mystery Meat 5.9
5.3.11 Activation 5.9
5.3.12 Radial menus 5.9
5.3.13 Activation 5.9
5.3.14 Default action 5.9
5.3.17 Contextual toolbar 5.10
5.3.18 Anti-pattern: Tiny Targets 5.10
5.3.19 Change Blindness 5.11
5.4 Overlays 5.11
5.4.1 Dialog Overlay 5.11
5.4.2 Lightbox Effect 5.11
5.4.3 Modality 5.12
5.4.4 Detail Overlay 5.12
5.4.5 Box shots 5.12
5.4.6 Detail overlay activation 5.12
5.4.7 Detail overlay deactivation 5.12
5.4.8 Anti-pattern: Mouse Traps 5.12
5.4.9 Anti-pattern: Non-Symmetrical Activation/Deactivation 5.12
5.4.10 Anti-pattern: Needless Fanfare 5.12
5.4.11 Input Overlay 5.13
5.4.12 Dialog Inlay 5.13
5.4.13 List Inlay 5.13
5.4.14 Parallel content 5.13
5.4.15 Detail Inlay 5.14
5.4.16 Tabs 5.15
5.4.17 Personal assistant tabs 5.15
5.4.18 Inlay Versus Overlay? 5.15
5.4.19 Patterns that support virtual pages include: 5.15
5.4.21 Progressive loading 5.17
5.4.22 Inline Paging 5.18
5.4.23 Scrolled Paging: Carousel 5.18
5.4.24 Time-based 5.19
5.4.25 Virtual Panning 5.19
5.4.26 Zoomable User Interface 5.20
5.4.27 Paging versus Scrolling 5.22
5.5 Process Flow 5.23
5.5.1 Google Blogger 5.23
5.5.3 Process Flow patterns: 5.24
5.5.4 Interactive Single-Page Process 5.24
5.5.5 Keeping users engaged 5.25
5.5.6 Benefits 5.25
5.5.7 Configurator Process 5.26
5.5.8 Out of view status 5.26
5.5.9 Static Single-Page Process 5.26
5.5.10 An Invitation 5.26
5.6 Case Study- The Magic Principle 5.27
Review Questions and Answers 5.28
UNIT - 1
FOUNDATIONS OF HCI
1.1 INTRODUCTION
The study of the interaction between people and computers originally went under the name man-machine interaction, but this became human-computer interaction in recognition of the specific interest in computers and of the composition of the user population.
•• HCI (Human-Computer Interaction) is the study of how people interact with
computers and to what extent computers are or are not developed for successful
interaction with human beings.
•• HCI involves design, implementation and evaluation of interactive systems in the
context of user’s task and work.
1.1.1 The Human: Input- Output Channels
•• A person interacts with outside world through information being received and sent:
input and output.
•• A user interacts with the computer by receiving information that is output by
computer and responds by providing input to the computer: user’s output becomes
computer’s input and vice versa.
•• Input in the human occurs mainly through the senses and output through the effectors
or responders.
•• The five major senses are vision, hearing, touch, taste and smell. The five major effectors are the limbs, fingers, eyes, head and vocal system.
•• Interaction with computer is possible through Input-Output channels such as using
a GUI based computer, information received by sight, beeps received by ear, feel
keyboard and mouse using fingers.
i) Vision
•• Human vision is a highly complex activity with physical and perceptual limitations.
•• Two stages of visual perception:
(i) physical reception of stimulus from outside world and
(ii) Processing and interpretation of that stimulus. Eye is a physical receptor.
Human Eye – Image formation
•• The cornea and lens focus light into a sharp image on the retina at the back of the eye.
•• The retina is light sensitive and contains two types of photoreceptor: rods and
cones.
•• Rods are sensitive to light and allow us to see under a low level of illumination.
However, they are unable to resolve fine detail and are subject to light saturation.
This is the cause of the temporary blindness we experience when moving from a darkened room into sunlight. There are approximately 120 million rods per eye, situated mainly towards the edges of the retina.
•• Cones are less sensitive to light and can tolerate more light when compared to
rods. There are 3 types of cone each sensitive to a different wavelength allowing
color vision. The eye has approximately 6 million cones, mainly concentrated on
fovea, a small area of retina where images are fixated.
•• There is a blind spot where optic nerve enters the eye and has no rods or cones,
yet our visual system compensates for it.
•• The retina also contains specialized nerve cells called ganglion cells. There are two types: X cells, concentrated in the fovea and responsible for the early detection of pattern, and Y cells, more widely distributed in the retina and responsible for the early detection of movement.
[Figure] The human eye: cornea, pupil, lens, iris, aqueous humour, vitreous humour, retina, fovea, blind spot, ligaments and tendon.
Visual Perception
•• Visual perception involves how we perceive size and depth, brightness, and color.
Capabilities and limitations
•• Visual processing allows transformation and interpretation of a complete image.
•• Its limitations show up as visual illusions, e.g. context illusions, over-compensation illusions, proof-reading illusions, and illusions involving lines and the optical center.
ii) Reading
•• Several stages: visual pattern is perceived and decoded using internal representation
of language
•• interpreted using knowledge of syntax, semantics, pragmatics
iii) Hearing
•• The sense of hearing is often considered secondary to sight, but we tend to
underestimate the amount of information that we receive through our ears.
The human ear
•• Hearing begins with vibrations in the air or sound waves.
•• The ear comprises three sections, commonly known as the outer ear, middle ear
and inner ear.
•• Sound is changes or vibrations in air pressure and has a number of characteristics
such as pitch, loudness and type of sound.
•• The human ear can hear frequencies from about 20 Hz to 15 kHz.
•• The auditory system performs some filtering of the sounds received, allowing us
to ignore background noise and concentrate on important information.
iv) Touch
•• Touch provides us with vital information about our environment.
•• It tells us when we touch something hot or cold, and can therefore act as a warning.
•• We receive stimuli through the skin. The skin contains three types of sensory
receptor: thermoreceptors respond to heat and cold, nociceptors respond to intense
pressure, heat and pain, and mechanoreceptors respond to pressure.
v) Movement
•• A simple action such as hitting a button in response to a question involves a number
of processing stages.
•• The stimulus is received through the sensory receptors and transmitted to the brain.
•• The question is processed and a valid response generated. The brain then tells the
appropriate muscles to respond.
•• Each of these stages takes time, which can be roughly divided into reaction time and
movement time.
•• Movement time is dependent largely on the physical characteristics of the subjects:
their age and fitness. Reaction time varies according to the sensory channel through
which the stimulus is received.
•• Speed and accuracy of movement are important considerations in the design of
interactive systems, primarily in terms of the time taken to move to a particular
target on a screen. The target may be a button, a menu item or an icon.
1.1.2 Human Memory
•• Memory contains our knowledge of actions or procedures.
•• It allows us to repeat actions, to use language, and to use new information received
via our senses.
•• It also gives us our sense of identity, by preserving information from our past
experiences.
[Figure] A model of the structure of memory: sensory memories (iconic, echoic, haptic) pass information to short-term (working) memory, which exchanges information with long-term memory.
Types of memory
(i) Sensory memory – Sensory memory is the shortest term element of memory. It
is the ability to retain impressions of sensory information after the original stimuli
have ended. It acts as a kind of buffer for stimuli received through the five senses of sight, hearing, smell, taste and touch, retained accurately but only very briefly.
•• The sensory memory for visual stimuli is known as iconic memory.
•• The memory for aural stimuli is known as echoic memory.
•• The memory for touch is known as haptic memory.
•• Example – Firework displays where moving sparklers leave a persistent image
in the order of 0.5 seconds.
(ii) Short term memory – Short-term memory can be defined as the ability to remember a small amount of information for a short period of time.
•• Example – When someone is given a phone number and is forced to memorize
it because there is no way to write it down.
•• Also look at the following sequence, 265397620853 and now look at this
sequence 4411 3245 8920. Which one is easier to remember?
•• Chunking of information can lead to an increase in the short-term memory
capacity. Chunking is the organization of material into shorter meaningful
groups to make them more manageable.
•• Example – A hyphenated phone number, split into groups of 3 or 4 digits, tends
to be easier to remember than a single long number.
(iii) Long Term Memory (LTM) –This memory is a permanent store of what an
individual has learned. We store factual information, experiential knowledge, and
procedural rules of behavior.
Types - Episodic and semantic.
Episodic memory – represents our memory of events and experiences in a serial form.
An example would be a memory of our 1st day at school.
Semantic memory – structured record of facts, concepts and skills that we have acquired.
For example, London is the capital of England. It involves conscious thought and is
declarative.
Models of LTM – Frames
•• Information organized in data structures
•• Slots in structure instantiated with values for instance of data
•• Type - subtype relationships
DOG frame:
  Fixed – legs: 4
  Default – diet: carnivorous; sound: bark
  Variable – size; colour

COLLIE frame:
  Fixed – breed of: DOG; type: sheepdog
  Default – size: 65 cm
  Variable – colour
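To make the frame idea concrete, here is a minimal sketch (the Python representation and the lookup helper are our own illustration, not part of the original model) showing the two frames as data structures whose slots can be inherited through the "breed of" link.

# Frames as data structures: slots grouped into fixed, default and variable,
# with a type/subtype link from COLLIE to DOG.
DOG = {
    "fixed":    {"legs": 4},
    "default":  {"diet": "carnivorous", "sound": "bark"},
    "variable": {"size": None, "colour": None},
}

COLLIE = {
    "breed_of": DOG,                          # subtype relationship
    "fixed":    {"type": "sheepdog"},
    "default":  {"size": "65 cm"},
    "variable": {"colour": None},
}

def lookup(frame, slot):
    """Return a slot value, inheriting from the parent frame if necessary."""
    for kind in ("fixed", "default", "variable"):
        value = frame.get(kind, {}).get(slot)
        if value is not None:
            return value
    parent = frame.get("breed_of")
    return lookup(parent, slot) if parent else None

print(lookup(COLLIE, "legs"))   # 4, inherited from the DOG frame
print(lookup(COLLIE, "size"))   # 65 cm, COLLIE's own default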
Models of LTM – Production rules
•• Represents procedural knowledge
•• Condition/ action rules – If condition matches, then use rule to determine action
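As an illustration, the following sketch shows condition/action rules in code; the rules and working-memory contents are invented for the example, not taken from the textbook. When a condition matches the working memory, the rule determines the action.

# Condition/action production rules: if the condition matches, the rule's
# action is applied to working memory.
rules = [
    (lambda wm: wm.get("dog") == "wagging tail", lambda wm: wm.update(action="pat dog")),
    (lambda wm: wm.get("dog") == "growling",     lambda wm: wm.update(action="back away")),
]

working_memory = {"dog": "wagging tail"}

for condition, action in rules:
    if condition(working_memory):     # condition matches...
        action(working_memory)        # ...so use the rule to determine the action
        break

print(working_memory["action"])       # -> pat dog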
1.1.3 Reasoning and Problem Solving
1.1.3.1 Reasoning
The process by which we use the knowledge to derive conclusions is known as reasoning.
Types
(i) Deductive reasoning – derives the logically necessary conclusion from the given premises. Example – If it is Monday then she will go to the office. It is Monday; therefore she will go to the office.
(ii) Inductive reasoning – the process of generalizing from the cases we have seen so far to cases we have not seen.
•• Example – if every dog we have ever seen has a tail, we infer that all dogs have tails.
•• It is a useful process, but it relies only on positive evidence. The inference can be shown to be false by producing negative evidence, i.e. a dog without a tail in the above example.
(iii) Abductive reasoning
•• It is a form of logical inference in which we can extract a theory from an
observation.
•• Example – A doctor hears her patient's symptoms, including shortness of breath on cold days and during exercise, and abduces that the best explanation of these symptoms is that the patient is an asthma sufferer.
1.1.3.2 Problem solving
Problem solving is the process of finding a solution to a problem using the information, knowledge and skills we have.
(i) Gestalt theory – Gestalt psychologists argued that problem solving is more than simply reproducing known responses or trial and error: it is both productive and reproductive.
•• Reproductive problem solving draws on previous experience, whereas productive problem solving involves insight and restructuring of the problem.
•• Similarity occurs when objects look similar to one another; people often perceive them as a group or pattern.
•• A well-known example (a logo containing 11 distinct objects) appears as a single unit because all of the shapes have similarity: unity occurs because the triangular shapes at the bottom of the eagle symbol look similar to the shapes that form the sunburst. When similarity occurs, an object can be emphasized by making it dissimilar to the others; this is called anomaly.
(ii) Problem Space Theory
•• This theory, introduced by Newell and Simon, views problem solving as a search through a problem space consisting of problem states.
•• It involves generating these states using legal state transition operators.
•• The problem has an initial state, all the states in between and a final or goal state.
Operators are used by people to perform transition from one state to another.
•• It can be performed by a method known as Means-End Analysis.
(a) Means-End Analysis
1. Compare the current state with the goal state. If there is no difference between them, the problem is solved.
2. If there is a difference, set a goal to reduce that difference. If there is more than one difference, set a goal to reduce the largest one.
3. Select an operator that will reduce the difference identified in step 2.
4. If the operator can be applied, apply it. If it cannot, set a new goal to reach a state that would allow the operator to be applied.
5. Return to step 1 with the new goal set in step 4.
•• Example – the Tower of Hanoi problem.
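A minimal sketch of the means-ends analysis loop above, applied to a toy numeric problem rather than the Tower of Hanoi; the operators and the goal value are assumptions made for illustration only.

# Means-ends analysis on a toy problem: reach a goal number using the
# "legal state transition operators" +10 and +1.
def means_ends(current, goal):
    operators = [10, 1]
    steps = []
    while current != goal:                    # step 1: compare current and goal state
        difference = goal - current           # step 2: set a goal to reduce the difference
        # steps 3-4: select an operator that can be applied and apply it
        op = next(o for o in operators if o <= difference)
        current += op
        steps.append(f"+{op} -> {current}")
    return steps                              # step 5: repeat until no difference remains

print(means_ends(3, 25))   # ['+10 -> 13', '+10 -> 23', '+1 -> 24', '+1 -> 25']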
(iii) Analogy in Problem Solving
•• Mapping knowledge related to a similar known domain to the new problem
domain is called analogical mapping.
•• Operators are transferred from known domain to the new domain.
•• Example – A general wants to attack a fortress. He can’t send his entire army
as the roads are mined to explode if large numbers of men pass over them.
Therefore the general devises a simple plan. He divides his armies into small
groups and dispatches each group to the head of a different road so that the entire army arrives at the fortress at the same time. In this way, the general captures the fortress.
1.1.3.3 Skill Acquisition
Skill acquisition deals with problem solving by gradually acquiring skills in a particular
domain.
(i) Unconscious incompetence – the first stage, where we know little about the skill and do not even know how much we do not know; we have at best a very basic understanding of it.
(ii) Conscious incompetence – It is the stage where we have learned enough about the
skill to realize how little we know.
(iii) Conscious competence – It is the stage where we find ourselves able to perform the
skill increasingly well but it takes lot of concentration and hard work to do so.
(iv) Unconscious competence – It is the stage where our ability to perform the skill has
become almost second nature and need less conscious effort.
1.1.3.4 Errors And Mental Models
The common errors are slips and mistakes.
A slip tends to occur in those cases where the user does have the right mental model but
accidentally does the wrong thing.
A mistake is where the user has the wrong mental model. Designers can prevent these
mistakes from occurring by providing better feedback and by clearly outlining the options
available to the user.
A mental model is a set of beliefs about how a system works. Users interact with systems based on these beliefs and develop their mental model further through these interactions.
1.2 THE COMPUTER
A computer system is made up of various elements and each of these elements affects
the interaction.
1.2.1 Devices
1.2.1.1 Text entry devices
Keyboards
•• Most common text input device
•• Allows rapid entry of text by experienced users
•• Keypress closes connection, causing a character code to be sent
•• Usually connected by cable, but can be wireless
QWERTY keyboard
1 2 3 4 5 6 7 8 9 0
Q W E R T Y U I O P
A S D F G H J K L
Z X C V B N M , .
SPACE
•• Standardised layout but
•• non-alphanumeric keys are placed differently
•• accented symbols needed for different scripts
•• minor differences between UK and USA keyboards
•• QWERTY arrangement not optimal for typing
•• layout to prevent typewriters jamming
•• Alternative designs allow faster typing but large social base of QWERTY typists
produces reluctance to change.
Phone pad and T9 entry
•• use numeric keys with multiple presses
2 – abc   3 – def   4 – ghi   5 – jkl
6 – mno   7 – pqrs   8 – tuv   9 – wxyz
•• T9 predictive entry
type as if single key for each letter
use dictionary to ‘guess’ the right word
hello = 43556 …
but an ambiguous code such as 26 brings up a menu of candidates, e.g. ‘am’ or ‘an’
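The idea can be illustrated with a short sketch (the word list is invented; the key mapping follows the table above): each letter is encoded as its single key, and a dictionary lookup guesses the word.

# T9-style predictive entry: one key per letter, dictionary used to guess the word.
keys = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
letter_to_key = {c: k for k, letters in keys.items() for c in letters}

def encode(word):
    return "".join(letter_to_key[c] for c in word)

dictionary = ["hello", "am", "an", "co"]
print(encode("hello"))                               # -> 43556
print([w for w in dictionary if encode(w) == "26"])  # ambiguous: ['am', 'an', 'co']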
Mouse
Handheld pointing device
•• very common
•• easy to use
•• Two characteristics: planar movement and buttons
Touchpad
•• Small touch-sensitive tablets that require several strokes to move the cursor across the screen
•• used mainly in laptops
•• fast stroke
•• Lots of pixels per inch moved
•• Initial movement to the target
•• slow stroke
•• Fewer pixels per inch moved
•• For accurate positioning
Trackball and thumbwheels
Trackball
•• ball is rotated inside static housing like an upside down mouse
•• relative motion moves cursor
•• indirect device, fairly accurate
•• separate buttons for picking
•• very fast for gaming
•• Used in some portable and notebook computers
Thumbwheels
•• for accurate CAD – two dials for X-Y cursor position
•• for fast scrolling – single dial on mouse
Joystick and keyboard nipple
Joystick
•• Indirect input device taking up very little space
•• Two types of joystick: the absolute and the isometric. In the absolute joystick,
movement is the important characteristic whereas in the isometric joystick, the
pressure on the stick corresponds to the velocity of the cursor
•• often used for computer games, aircraft controls and 3D navigation
Keyboard nipple
•• controls the rate of movement across the screen
Stylus and light pen
Stylus
•• small pen-like pointer to draw directly on screen
•• may use touch sensitive surface or magnetic detection
•• used in PDAs, tablet PCs and drawing tablets
Light Pen
•• now rarely used
•• uses light from screen to detect location
Digitizing tablet
•• Mouse-like device with cross-hairs
•• used on special surface
•• very accurate and used for digitizing maps
1.2.1.2 Display Devices
Bitmap Displays
•• Display is made of vast numbers of colored dots or pixels in a rectangular grid.
These pixels may be limited to black and white
•• two things to consider: the total number of pixels and the density of pixels
•• Aspect ratio - the ratio between width and height, e.g. 4:3 for most screens, 16:9 for wide-screen TV (a worked example follows this list)
•• Anti-aliasing
•• softens edges by using shades of line colour
•• also used for text
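As a worked example of the two considerations above (the screen size and resolution figures are illustrative, not from the text), the pixel density can be computed from the resolution and the diagonal size.

# Pixel density for an illustrative 1024 x 768 (4:3) display on a 15-inch diagonal.
import math

h_pixels, v_pixels = 1024, 768
diagonal_inches = 15
diagonal_pixels = math.hypot(h_pixels, v_pixels)   # 1280 pixels along the diagonal
density = diagonal_pixels / diagonal_inches
print(f"{h_pixels} x {v_pixels} on a {diagonal_inches}-inch screen is about {density:.0f} pixels per inch")
# -> about 85 pixels per inch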
Technologies
(a) Cathode ray tube
•• Stream of electrons emitted from electron gun, focused and directed by magnetic
fields, hit phosphor-coated screen which glows
•• used in TVs and computer monitors
[Figure] Cathode ray tube: an electron gun emits an electron beam, which is focused and deflected onto a phosphor-coated screen.
Liquid Crystal display
•• Smaller, lighter, and no radiation problems.
•• Found on PDAs, portables and notebooks and increasingly on desktop and even
for home TV
•• also used in dedicated displays: digital watches, mobile phones, HiFi controls
Large displays
•• Used for meetings, lectures, etc.
•• technology used are
•• plasma – usually wide screen
•• video walls – lots of small screens together
•• projected – RGB lights or LCD projector
1.2.1.3 Devices for Virtual Reality And 3D Interaction
(i) Positioning in 3D space
•• cockpit and virtual controls
•• Steering wheels, knobs and dials … just like real!
•• the 3D mouse
•• six-degrees of movement: x, y, z + roll, pitch, yaw
•• data glove
•• fibre optics used to detect finger position
•• VR helmets
•• detect head motion and possibly eye gaze
•• whole body tracking
•• accelerometers strapped to limbs or reflective dots and video processing
(ii) 3D displays
•• Desktop VR
•• ordinary screen, mouse or keyboard control
•• perspective and motion give 3D effect
•• Seeing in 3D
•• use stereoscopic vision
•• VR helmets
•• Screen plus shuttered specs, etc.
1.2.1.4 Printing and Scanning
•• image made from small dots
•• allows any character set or graphic to be printed,
•• critical features: resolution (the size and spacing of the dots, measured in dots per inch, dpi) and speed (usually measured in pages per minute)
Types of dot-based printers
•• dot-matrix printers
•• use inked ribbon (like a typewriter)
•• Line of pins that can strike the ribbon, dotting the paper.
•• typical resolution 80-120 dpi
•• ink-jet and bubble-jet printers
•• tiny blobs of ink sent from print head to paper
•• Typically 300 dpi or better.
•• laser printer
•• like photocopier: dots of electrostatic charge deposited on drum, which picks up
toner (black powder form of ink) rolled onto paper which is then fixed with heat
•• Typically 600 dpi or better.
Two sorts of scanner
•• flat-bed: paper placed on a glass plate, whole page converted into bitmap
•• hand-held: scanner passed over paper, digitising strip typically 3-4” wide
1.2.2 The Computer Memory
•• Computer memory, like the human brain, is used to store data and instructions.
•• Computer memory is the storage space in computer where data is to be processed
and instructions required for processing are stored.
•• Memory is divided into a large number of small parts called cells. Each location or cell has a unique address, which ranges from zero to the memory size minus one.
Memory is primarily of three types
(i) Cache Memory
(ii) Primary Memory/Main Memory
(iii) Secondary Memory
(i) Cache Memory
•• Cache memory is a very high-speed semiconductor memory which can speed up the CPU.
•• It acts as a buffer between the CPU and main memory (a small sketch of this idea follows at the end of this subsection).
•• It is used to hold those parts of data and program that are most frequently used by the CPU.
Advantages
The advantages of cache memory are as follows:
•• Cache memory is faster than main memory.
•• It consumes less access time as compared to main memory.
•• It stores data for temporary use.
Disadvantages
•• Cache memory has limited capacity.
•• It is very expensive.
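The buffering idea can be sketched as a tiny least-recently-used cache in front of a slower main memory; the capacity, addresses and eviction policy here are illustrative assumptions, not a description of real hardware.

# A toy cache sitting between a "CPU" and a slow main memory lookup.
from collections import OrderedDict

class Cache:
    def __init__(self, capacity=3):            # limited capacity
        self.capacity = capacity
        self.entries = OrderedDict()

    def read(self, address, main_memory):
        if address in self.entries:             # cache hit: fast path
            self.entries.move_to_end(address)
            return self.entries[address]
        value = main_memory[address]             # cache miss: fetch from main memory
        self.entries[address] = value
        if len(self.entries) > self.capacity:   # evict the least recently used entry
            self.entries.popitem(last=False)
        return value

main_memory = {addr: addr * 2 for addr in range(100)}
cache = Cache()
print(cache.read(5, main_memory), cache.read(5, main_memory))  # miss, then hit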
(ii) Primary Memory/Main Memory
•• Primary memory holds only those data and instructions on which computer is
currently working.
•• It has limited capacity and data is lost when power is switched off.
•• It is divided into two subcategories RAM and ROM.
Characteristics of Main Memory
•• These are semiconductor memories, known as main memory.
•• Data is lost in case power is switched off.
•• Faster than secondary memories
(iii) Secondary Memory
•• This type of memory is also known as external memory or non-volatile.
•• It is slower than main memory.
•• These are used for storing data/Information permanently.
•• The CPU does not access these memories directly; instead they are accessed via input/output routines.
Characteristic of Secondary Memory
•• These are magnetic and optical memories
•• It is known as backup memory. It is non-volatile memory.
•• Data is permanently stored even if power is switched off.
1.2.3 Processing and Networks
Computers that run interactive programs will process in the order of 100 million
instructions per second.
(i) Finite Processor speed
•• Speed of processing can seriously affect the user interface.
•• These effects must be taken into account when designing an interactive system.
•• There are two sorts of faults due to processing speed: those when it is too slow, and
those when it is too fast
(ii) Limitations on Interactive performance
Computation bound
•• Computation takes ages, causing frustration for the user
Storage channel bound
•• Bottleneck in transference of data from disk to memory
Graphics bound
•• Common bottleneck: updating displays requires a lot of effort - sometimes
helped by adding a graphics co-processor optimised to take on the burden
(iii) Networked computing
Networks allow access to
•• large memory and processing
•• other people (groupware, email)
•• shared resources – esp. the web
Issues
•• network delays – slow feedback
•• conflicts - many people update data
•• unpredictability
1.3 INTERACTION
Interaction involves at least two participants –system and user. Both are complex and
very different from each other in the way they communicate and view the domain and tasks.
A domain consists of concepts that highlight its important aspects. Example: in graphic design, the domain concepts include geometric shapes and the drawing surface.
Tasks are operations to manipulate the concepts of a domain.
The system's language is referred to as the core language, which describes computational attributes of the domain relevant to the system state, whereas the user's language is referred to as the task language, which describes psychological attributes of the domain relevant to the user's state.
1.3.1 Interaction Models
Norman’s Execution-Evaluation cycle
Norman's model of the interaction cycle can be divided into two major phases, execution and evaluation, and further subdivided into seven stages:
•• Establishing the goal – the user forms a goal, expressed in the task language, based on what he or she wants to achieve.
•• Forming the intention – The goal is translated to more specific intention
•• Specifying the action sequence – specification of actual actions to reach the goal
•• Executing the action – execution of the actual actions
•• Perceiving the system state – The user perceives the new state of the system
•• Interpreting the system state – Interprets it in terms of his expectations
•• Evaluating the system state with respect to the goals and intentions – if the user's goal is accomplished, the interaction is successful; otherwise the user needs to establish a new goal and repeat the cycle.
[Figure] Norman's execution-evaluation cycle: the user's goal drives execution of actions on the system, and the resulting system state is evaluated against the goal.
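To make the seven stages concrete, here is a hedged sketch that walks the cycle once for a toy "system" (a simple counter); the stage mapping in the comments follows the list above, while the task itself is invented.

# Walking Norman's seven stages for a toy task: make a counter reach a goal value.
class Counter:                      # a stand-in "system"
    def __init__(self):
        self.state = 0
    def execute(self, action):      # the only available action is "+1"
        if action == "+1":
            self.state += 1

def interaction_cycle(goal=3):
    system = Counter()
    while True:
        # stages 1-2: establish the goal and form the intention (reach `goal`)
        # stage 3: specify the action sequence from the current state
        actions = ["+1"] * (goal - system.state)
        for a in actions:           # stage 4: execute the actions
            system.execute(a)
        perceived = system.state    # stages 5-6: perceive and interpret the new state
        if perceived == goal:       # stage 7: evaluate against the goal
            return "goal accomplished"

print(interaction_cycle())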
Some systems are harder to use than others
•• Gulf of Execution – user’s formulation of actions is not equal to actions allowed by
the system
•• Gulf of Evaluation – user’s expectation of changed system state is not equal to
actual presentation of this state
1.3.2 Interaction Frameworks
The Abowd and Beale framework has four parts: user, input, system and output.
[Figure] The Abowd and Beale interaction framework: System (S), User (U), Input (I) and Output (O).
Each has its own unique language. The user intentions are:
•• translated into actions at the interface
•• translated into alterations of system state
•• reflected in the output display
•• interpreted by the user
It is a general framework for understanding interaction
•• not restricted to electronic computer systems
•• identifies all major components involved in interaction
•• allows comparative assessment of systems
•• an abstraction
1.3.3 Ergonomics
•• Study of the physical characteristics of interaction
•• Ergonomics is good at defining standards and guidelines for constraining the way we design certain aspects of systems
•• Examples - arrangement of controls and displays
•• e.g. controls grouped according to function or frequency of use, or sequentially
•• surrounding environment
•• e.g: seating arrangements adaptable to cope with all sizes of user
•• health issues
•• e.g: physical position, environmental conditions (temperature, humidity), lighting,
noise
•• use of colour
•• e.g: use of red for warning, green for okay, awareness of colour-blindness etc.
1.3.4 Interaction Styles
Common interaction styles are:
•• command line interface
•• menus
•• natural language
•• question/answer and query dialogue
•• form-fills and spreadsheets
•• WIMP
•• point and click
•• three–dimensional interfaces
(i) Command line interface
•• Way of expressing instructions to the computer directly
•• Function keys, single characters, short abbreviations, whole words, or a
combination
•• suitable for repetitive tasks
•• better for expert users than novices
•• offers direct access to system functionality
•• Command names/abbreviations should be meaningful!
•• Typical example: the Unix system
(ii) Menus
•• Set of options displayed on the screen
•• Options visible
•• less recall - easier to use
•• rely on recognition so names should be meaningful
•• Selection by:
•• numbers, letters, arrow keys, mouse
•• combination (e.g. mouse plus accelerators)
•• Often options hierarchically grouped
•• sensible grouping is needed
•• Restricted form of full WIMP system
(iii) Natural language
•• Familiar to user
•• speech recognition or typed natural language
•• Problems
•• vague
•• ambiguous
•• hard to do well
•• Solutions
•• try to understand a subset
•• pick on key words
(iv) Query interfaces
•• Question/answer interfaces
•• user led through interaction via series of questions
•• suitable for novice users but restricted functionality
•• often used in information systems
•• Query languages (e.g. SQL)
•• used to retrieve information from database
•• requires understanding of database structure and language syntax, hence requires
some expertise
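A query language such as SQL can be shown with a minimal, self-contained sketch using Python's built-in sqlite3 module; the table and data are invented for illustration, and the point is that the user must know the table structure and the SQL syntax to pose the question.

# Retrieving information from a database with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, department TEXT)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("Asha", "Sales"), ("Ben", "HR"), ("Chen", "Sales")])

rows = conn.execute(
    "SELECT name FROM staff WHERE department = ?", ("Sales",)).fetchall()
print(rows)   # -> [('Asha',), ('Chen',)]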
(v) Form fills
•• Primarily for data entry or data retrieval
•• Screen like paper form.
•• Data put in relevant place
•• Requires
•• good design
•• obvious correction facilities
Spread sheets
•• the first spreadsheet was VisiCalc, followed by Lotus 1-2-3; MS Excel is the most common today
•• Sophisticated variation of form-filling.
•• grid of cells contain a value or a formula
•• formula can involve values of other cells e.g. sum of all cells in this column
•• the user can enter and alter data; the spreadsheet maintains consistency
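A toy sketch of the spreadsheet idea (the cell names and the formula are invented): cells hold either values or formulas, and a formula cell is recomputed from the current values, so the grid stays consistent when data is altered.

# Cells hold values or formulas; formulas refer to other cells.
cells = {"A1": 10, "A2": 20, "A3": 5,
         "A4": lambda c: c["A1"] + c["A2"] + c["A3"]}   # formula: sum of the column

def value(name):
    v = cells[name]
    return v(cells) if callable(v) else v

print(value("A4"))   # -> 35
cells["A2"] = 50     # the user alters a data cell...
print(value("A4"))   # -> 65: the formula cell stays consistent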
(vi) WIMP interface
•• Windows, icons, mice, and pull-down menus!
•• default style for majority of interactive computer systems, especially PCs and
desktop machines
(vii) Point and click interfaces
•• used in multimedia, web browsers and hypertext
•• Just click something! - icons, text links or location on map
•• minimal typing
(viii) Three dimensional interfaces
•• virtual reality
•• ‘ordinary’ window systems
•• highlighting
•• visual affordance
•• Indiscriminate use just confusing!
•• 3D workspaces
•• use for extra virtual space
•• light and occlusion give depth
•• distance effects
1.3.5 Elements of the Wimp Interface
Windows
•• Areas of the screen that behave as if they were independent
•• can contain text or graphics
•• can be moved or resized
•• can overlap and obscure each other, or can be laid out next to one another (tiled)
•• scrollbars
•• allow the user to move the contents of the window up and down or from side to
side
•• title bars - describe the name of the window
Icons
•• small picture or image
•• represents some object in the interface
•• often a window or action
•• windows can be closed down (iconised)
•• small representation ⇒ many accessible windows
•• icons can be many and various
•• highly stylized
Pointers
•• important component
•• WIMP style relies on pointing and selecting things
•• uses mouse, trackpad, joystick, trackball, cursor keys or keyboard shortcuts
•• wide variety of graphical images
Menus
•• Choice of operations or services offered on the screen
•• Required option selected with pointer
Kinds of Menus
•• Menu Bar at top of screen (normally), menu drags down
•• pull-down menu - mouse hold and drag down menu
•• drop-down menu - mouse click reveals menu
•• Fall-down menus - mouse just moves over bar!
•• Contextual menu appears where you are
•• pop-up menus - actions for selected object
•• pie menus - arranged in a circle
•• easier to select item (larger target area)
•• quicker (same distance to any option)
Buttons
•• individual and isolated regions within a display that can be selected to invoke an
action
•• Special kinds
•• radio buttons
•• set of mutually exclusive choices
Check boxes
•• set of non-exclusive choices
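A minimal sketch contrasting the two kinds of button (the widget toolkit, tkinter, and the particular choices are our own illustration): radio buttons share one variable, so the choices are mutually exclusive, while each check box has its own variable.

# Radio buttons (exclusive) versus check boxes (non-exclusive) in tkinter.
import tkinter as tk

root = tk.Tk()

size = tk.StringVar(value="medium")            # radio buttons share ONE variable
for choice in ("small", "medium", "large"):
    tk.Radiobutton(root, text=choice, variable=size, value=choice).pack(anchor="w")

extras = {name: tk.BooleanVar() for name in ("bold", "italic")}
for name, var in extras.items():               # each check box has its OWN variable
    tk.Checkbutton(root, text=name, variable=var).pack(anchor="w")

root.mainloop()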
Toolbars
•• Long lines of icons but what do they do?
•• fast access to common actions
•• often customizable:
•• choose which toolbars to see
•• choose what options are on it
Palettes and tear-off menus
•• Problem
•• menu not there when you want it
•• Solution
palettes – little windows of actions
•• shown/hidden via menu option
e.g. available shapes in drawing package
tear-off and pin-up menus
•• menu ‘tears off’ to become palette
Dialog boxes
•• Information windows that pop up to inform the user of an important event or to request input, e.g. when saving a file, a dialog box is displayed to allow the user to specify the filename and location. Once the file is saved, the box disappears.
1.3.6 Interactivity
•• Interactivity is the defining feature of an interactive system. This can be seen in
many areas of HCI.
(i) Speech–driven interfaces are rapidly improving but still inaccurate
e.g. airline reservation:
reliable “yes” and “no” and also system reflects back its understanding
“you want a ticket from New York to Boston?”
(ii) Look and feel – WIMP systems have the same elements (windows, icons, menus, pointers, buttons, etc.) but different window systems behave differently
e.g. MacOS vs Windows menus
appearance + behaviour = look and feel
(iii) Initiative
•• The major example is modal dialog boxes. It is often the case that when a dialog box
appears the application will not allow you to do anything else until the dialog box
has been completed or cancelled.
(iv) Error and repair
•• can’t always avoid errors but we can put them right
•• make it easy to detect errors then the user can repair them
(v) Context
Interaction affected by social and organizational context
•• other people
•• desire to impress, competition, fear of failure
•• motivation
•• fear, allegiance, ambition, self-satisfaction
•• inadequate systems
•• cause frustration and lack of motivation
1.3.7 Paradigms
Successful interactive systems are commonly believed to enhance usability and,
therefore, serve as paradigms for the development of future products. Paradigms are predominant theoretical frameworks or scientific world views.
e.g., Aristotelian, Newtonian, Einsteinian (relativistic) paradigms in physics
Paradigms of interaction
•• New computing technologies arrive, creating a new perception of the
human—computer relationship.
•• We can trace some of these shifts in the history of interactive technologies.
Example Paradigm Shifts
•• Batch processing
•• Timesharing
•• Networking
•• Graphical display
•• Microprocessor
•• WWW
•• Ubiquitous Computing
Time-sharing
•• 1940s and 1950s – explosive technological growth- the significant advances in
computing consisted of new hardware technologies
•• 1960s – need to channel the power
•• The concept of time sharing is that of a single computer supporting multiple users
Video Display Units
•• more suitable medium than paper
•• 1962 – Sutherland's Sketchpad
•• computers for visualizing and manipulating data from the computer in the form of
images on a VDU(video display units)
•• one person's contribution could drastically change the history of computing
Programming toolkits
•• Engelbart at Stanford Research Institute in the 1960s worked towards achieving the
manifesto set forth in 1963
•• the right programming toolkit provides building blocks to produce complex
interactive systems
Personal computing
•• 1970s – Papert's LOGO language for simple graphics programming by children
•• A system is more powerful as it becomes easier to use
•• Future of computing in small, powerful machines dedicated to the individual
•• Kay at Xerox PARC – the Dynabook as the ultimate personal computer
Window systems and the WIMP interface
•• humans can pursue more than one task at a time
•• windows used for dialogue partitioning, to “change the topic”
•• 1981 – Xerox Star first commercial windowing system
•• windows, icons, menus and pointers now familiar interaction mechanisms
Metaphor
Relating computing to other real-world activity is an effective teaching technique
•• LOGO's turtle dragging its tail
•• file management on an office desktop
•• word processing as typing
•• financial analysis on spreadsheets
•• virtual reality – user inside the metaphor
Direct manipulation
•• 1982 – Shneiderman describes appeal of graphically-based interaction
•• visibility of objects
•• incremental action and rapid feedback
•• reversibility encourages exploration
•• syntactic correctness of all actions
•• replace language with action
•• 1984 – Apple Macintosh
•• the model-world metaphor
•• What You See Is What You Get (WYSIWYG)
Language versus Action
•• actions do not always speak louder than words!
•• DM – interface replaces underlying system
•• language paradigm
•• interface as mediator
•• interface acts as intelligent agent
•• programming by example is both action and language
Hypertext
•• mid-1960s – Nelson describes hypertext as a non-linear browsing structure
•• hypermedia (or multimedia) is used for non-linear storage of all forms of electronic
media.
Multimodality
•• a mode is a human communication channel
•• emphasis on simultaneous use of multiple channels for input and output
Computer Supported Cooperative Work (CSCW)
•• CSCW removes bias of single user / single computer system
•• Can no longer neglect the social aspects
•• Electronic mail is most prominent success
The World Wide Web
•• Hypertext, as originally realized, was a closed system
•• Simple, universal protocols (e.g. HTTP) and mark-up languages (e.g. HTML) made
publishing and accessing easy
•• Critical mass of users leads to a complete transformation of our information economy.
Agent-based Interfaces
•• Original interfaces
•• Commands given to computer
•• Language-based
•• Direct Manipulation/WIMP
•• Commands performed on “world” representation
•• Action based
•• Agents – a return to language, by instilling proactivity and “intelligence” in the command processor
•• Avatars, natural language processing
Ubiquitous Computing
•• “The most profound technologies are those that disappear.”
•• Mark Weiser, 1991
•• Late 1980’s: computer was very apparent
•• How to make it disappear?
•• Shrink and embed/distribute it in the physical world
•• Design interactions that don’t demand our attention
Sensor-based and Context-aware Interaction
•• Humans are good at recognizing the “context” of a situation and reacting appropriately
•• Automatically sensing physical phenomena (e.g., light, temp, location, identity)
becoming easier
REVIEW QUESTIONS WITH ANSWERS
PART A – 2 MARKS
1. Expand and define HCI.
HCI (human-computer interaction) is the study of how people interact with computers
and to what extent computers are or are not developed for successful interaction with human
beings.
HCI involves design, implementation and evaluation of interactive systems in the
context of user’s task and work.
2. What are the two stages of visual perception?
The two stages of visual perception: i) physical reception of stimulus from outside world
ii) Processing and interpretation of that stimulus.
3. Draw the diagram of human eye.
[Figure] The human eye: cornea, pupil, lens, iris, aqueous humour, vitreous humour, retina, fovea, blind spot, ligaments and tendon.
4. Differentiate between short-term and long-term memory.
Short-term memory:
i) Contains a limited amount of information.
ii) Receives information from either the senses or long-term memory.
Long-term memory:
i) Contains an unlimited amount of information.
ii) Receives information from short-term memory through the learning process.
5. Define mental model.
A mental model is a set of beliefs about how a system works. Users interact with systems based on these beliefs and develop their mental model further through these interactions.
6. List out the various types of reasoning.
(i) Deductive reasoning
(ii) Inductive Reasoning
(iii) Abductive Reasoning
7. Write short notes on Inductive reasoning.
The process of deriving general idea for the unseen cases from the various cases.
Example: if every dog we have ever seen has a tail, we infer that all dogs have tails.
It is a useful process which uses only positive evidence. We can prove that the inference is false by producing negative evidence, i.e. a dog without a tail in the above example.
8. What are slips and mistakes?
A slip tends to occur in those cases where the user does have the right mental model but
accidentally does the wrong thing.
A mistake is where the user has the wrong mental model. Designers can prevent these
mistakes from occurring by providing better feedback and by clearly outlining the options
available to the user.
9. What are the different types of dot-based printers?
i) dot-matrix printers
• use inked ribbon (like a typewriter)
• Line of pins that can strike the ribbon, dotting the paper.
• typical resolution 80-120 dpi
ii) ink-jet and bubble-jet printers
• tiny blobs of ink sent from print head to paper
• Typically 300 dpi or better.
iii) Laser printer
• like photocopier: dots of electrostatic charge deposited on drum, which picks up
toner (black powder form of ink) rolled onto paper which is then fixed with heat
• Typically 600 dpi or better.
10. What are the two sorts of scanners?
The first one is flat-bed: paper placed on a glass plate, whole page converted into bitmap
and second one is hand-held: scanner passed over paper, digitising strip typically 3-4” wide.
11. Mention the types of computer memory.
Memory is primarily of three types
(i) Cache Memory
(ii) Primary Memory/Main Memory
(iii) Secondary Memory
12. List out the advantages and disadvantages of Cache memory.
Advantages:
The advantages of cache memory are as follows:
•• Cache memory is faster than main memory.
•• It consumes less access time as compared to main memory.
•• It stores data for temporary use.
Disadvantages:
•• Cache memory has limited capacity.
•• It is very expensive.
13. Briefly describe the characteristics of main memory.
Characteristics of Main Memory
•• It is known as semiconductor memory.
•• Data is lost in case power is switched off.
•• Faster than secondary memories
14. What are the limitations on Interactive performance?
Limitations on Interactive performance:
Computation bound
•• Computation takes ages, causing frustration for the user
Storage channel bound
•• Bottleneck in transference of data from disk to memory
Graphics bound
•• Common bottleneck: updating displays requires a lot of effort - sometimes
helped by adding a graphics co-processor optimised to take on the burden
15. Give a diagrammatic representation of interaction framework.
[Figure] The Abowd and Beale interaction framework: System (S), User (U), Input (I) and Output (O).
16. List out the 7 stages of Norman’s Execution-Evaluation cycle
•• Establishing the goal
•• Forming the intention
•• Specifying the action sequence
•• Executing the action
•• Perceiving the system state
•• Interpreting the system state
•• Evaluating the system state with respect to the goals and intentions
17. Write short notes on Ergonomics.
Ergonomics is good at defining standards and guidelines for constraining the way we design certain aspects of systems.
Examples: arrangement of controls and displays
18. Mention the common interaction styles.
Common interaction styles are:
•• command line interface
•• menus
•• natural language
•• question/answer and query dialogue
•• form-fills and spreadsheets
•• WIMP
•• point and click
•• three–dimensional interfaces
19. Give short notes on different kinds of menus.
Kinds of Menus:
• Menu Bar at top of screen (normally), menu drags down
• pull-down menu - mouse hold and drag down menu
• drop-down menu - mouse click reveals menu
• Fall-down menus - mouse just moves over bar!
• Contextual menu appears where you are
• pop-up menus - actions for selected object
• pie menus - arranged in a circle
• easier to select item (larger target area)
• quicker (same distance to any option)
20. Write down the examples of Paradigm Shifts.
Examples are:
•• Batch processing
•• Timesharing
•• Networking
•• Graphical display
•• Microprocessor
•• WWW
•• Ubiquitous Computing
21. List out the types of human memory.
(i) Sensory memory
(ii) Short term memory
(iii) Long Term Memory
22. Define Analogical mapping.
Mapping knowledge related to a similar known domain to the new problem domain is
called analogical mapping.
23. Give short notes on Skill acquisition.
Skill acquisition deals with problem solving by gradually acquiring skills in a particular
domain.
(1) Unconscious incompetence
(2) Conscious incompetence
(3) Conscious competence
(4) Unconscious competence
24. Write down few uses of any one text entry device.
Touchpad
•• Small touch-sensitive tablets that require several strokes to move the cursor across the screen
•• used mainly in laptops
•• fast stroke
Lots of pixels per inch moved
Initial movement to the target
•• slow stroke
Fewer pixels per inch moved
For accurate positioning
25. Briefly describe about Liquid Crystal Display
Liquid crystal displays are smaller and lighter, and do not have radiation problems. They are found on PDAs, portables and notebooks, increasingly on desktops and even home TVs, and are also used in dedicated displays: digital watches, mobile phones and Hi-Fi controls.
PART B – 16 MARKS
1. Explain in detail about the Input-Output channels of a human.
2. a) What are the types of human memory? Describe each with example. (8)
b) List out and explain the various types of reasoning with an example. (8)
3. Give notes on the following:
a) Gestalt theory (4)
b) Problem Space theory (4)
c) Skill Acquisition (4)
d) Errors and Mental models (4)
4. Explain some of the text entry devices of a computer.
5. With neat sketch, elaborate the various display devices.
6. Write notes on the following:
a) Types of computer memory (8)
b) Processing and Networks (8)
7. What are the different elements of WIMP interface? Explain.
8. What is interaction? Describe about the various interaction styles.
9. Discuss about the following:
a) Norman’s Execution-Evaluation cycle
b) Abowd and Beale framework
10. Describe about the Paradigms and ergonomics.
UNIT - 2
DESIGN & SOFTWARE PROCESS
Interactive Design basics – process – scenarios – navigation – screen design – Iteration
and prototyping. HCI in software process – software life cycle – usability engineering –
Prototyping in practice – design rationale. Design rules – principles, standards, guidelines,
rules. Evaluation Techniques – Universal Design.
2.1 INTERACTIVE DESIGN BASICS
•• Interaction design is about creating interventions in often complex situations using
technology of many kinds including PC software, the web and physical devices.
•• Interaction design is not just about the artifact that is produced, whether a physical device or a computer program, but about understanding and choosing how it is going to affect the way people work.
What is Design?
Design is achieving goals within constraints.
What is a goal?
A well-designed user interface will provide a good match between the user’s task needs,
skill level and learning ability.
•• What is the purpose of the design?
•• Who is it for?
•• Why do they want it ?
List of design’s constraints:
•• What materials must we use?
•• What standards must we adopt?
•• How much can it cost?
•• How much time do we have to develop it?
In a task-oriented approach, the likely sequence of user-system dialogue must be identified. Once the sequence of the user-system dialogue is understood, the next stage is to plan the design of the sequence of screens or windows which will support the dialogue. We cannot always achieve all our goals within the constraints, so perhaps one of the most important things about design is trade-off: choosing which goals or constraints can be relaxed so that others can be met.
The Golden Rule of Design
The designs we produce may be different, but often the raw materials are the same. This
leads us to the golden rule of design:
Understand your materials
For Human Computer Interaction, the obvious materials are the human and the computer.
•• understand computers
•• limitations, capacities, tools, platforms
•• understand people
•• psychological, social aspects, human error.
To Err is Human
The phrase ‘human error’ is taken to mean ‘operator error’.
Example of human error:
•• Accident reports :
•• air crash, industrial accident, hospital mistake, where the blame is often placed on ‘human error’
•• If you design using a physical material, you need to understand how and where
failures would occur and strengthen the construction, build in safety features or
redundancy.
•• Similarly, if you treat the human with as much consideration as a piece of steel or
concrete, it is obvious that you need to understand the way human failures occur and
build the rest of the interface accordingly.
The Central Message – The User
This is the core of interaction design:
•• Put the user first
•• Keep the user in the center
•• Remember the user at the end
2.1.1 Process
Simplified view of four main phases plus an iteration loop, focused on the design of
interaction.
Figure 2.1 Interaction design process
Interaction Design involves four basic activities:
•• Identifying needs and establishing requirements.
•• Developing alternative designs that meet those requirements.
•• Building interactive versions of the designs so that they can be communicated and
assessed.
•• Evaluating what is being built throughout the process.
Requirements:
What is wanted?
The first stage is establishing exactly what is needed, which means spending time gathering the users' needs. At the task level, the designer gains a greater understanding of the users and the tasks they carry out, and can begin to identify which tasks will be of importance to the proposed system.
There are a number of techniques used for this in HCI:
•• interviewing people,
•• videotaping them,
•• looking at the documents and objects that they work with,
•• observing them directly
Analysis:
At the task level, translate the user’s needs into system requirements and responsibilities.
The way they use the system can provide insight into the user’s requirements.
Example:
One use of the system might be analyzing an incentive payroll system, which will
tell us that this capacity must be included in the system requirements.
Design:
This is the central stage of the interaction design process. Design begins with a problem statement and ends with a design that can be transformed into an operational system.
Iteration And Prototype:
Humans are complex and we cannot expect to get designs right first time. We therefore
need to evaluate a design to see how well it is working and where there can be improvements.
•• A prototype enables the developer to understand fully how easy or difficult it will be to implement some of the features of the system.
•• It also can give users a chance to comment on the usability and usefulness of the user
interface design.
•• Prototyping provides the developer a means to test and refine the user interface and
increase the usability of the system.
Implementation and Deployment:
Implementation refines the detailed design into a deployed system that will satisfy the users' needs.
User Focus
Know about user
How do you get to know your users?
•• Who are they?
•• Probably not like you!
•• talk to them
•• watch them
•• use your imagination
2.1.2 Scenarios
Example scenario:
‘A user intends to press the “save” button, but accidentally presses the “quit” button and so loses his work.’ Other scenarios focus more on describing the situation or context.
Scenarios are stories for design: they force us to think about the design in detail and help us notice potential problems before they happen.
Scenarios can be used to:
•• communicate with clients or user
•• validate other task models, dialogue models and navigation models
•• understand dynamics of individual screen shots and pictures
Scenarios may be organized as networks, hierarchies or linear sequences.
Linearity:
Scenarios – one linear path through the system
Pros:
•• life and time are linear
•• easy to understand (stories and narrative are natural)
•• concrete (errors less likely)
Cons:
•• no choice, no branches, no special conditions, no alternative path of interaction
•• miss the unintended
So:
•• use several scenarios
•• use several methods
2.1.3 Navigation
Users can interact with a GUI at several levels
Figure 2.2 level of interaction
Individual screens or the layout of devices will have their own structure
Two main kinds of issue
•• local structure
•• looking from this screen out
•• global structure
•• structure of site, movement between screens
•• wider still
•• relationship with other applications
Local structure:
Local structure focuses on goal-seeking behaviour. To support this goal seeking, each
state of the system or each screen needs to give the user enough knowledge of what to do
to get closer to their goal.
Figure 2.3 goal-seeking
To get you started, here are four things to look for when looking at a single web page,
screen or state of a device.
•• knowing where you are
•• knowing what you can do
•• knowing where you are going – or what will happen
•• Knowing where you’ve been – or what you’ve done.
Global structure (hierarchical organization):
The overall structure of an application. This is the way the various screens, pages or
device states link to one another.
Figure 2.4 Application functional hierarchy (the system is divided into Info and help,
Management and Messages; Management is further divided into Add user and Remove user)
Global structure – dialog:
In HCI the word ‘dialog’ is used to refer to this pattern of interactions between the user
and a system
A simple way is to use a network diagram showing the principal states or screens linked
together with arrows. This can:
•• show what leads to what
•• show what happens when
•• include branches and loops
•• be more task oriented than a hierarchy
A network diagram illustrating the main screens for adding or deleting a user from the
messaging system.
Figure 2.5 Network of screens/states (main screen, add user, remove user and confirm
screens linked by arrows)
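Such a network can also be written down as data and checked mechanically, for example to confirm that every screen is reachable. A minimal Python sketch follows; the screen names come from the figure, but the arrows shown here are assumed for illustration and are not part of the original design.

# Hypothetical dialog network for the messaging example: each screen maps to
# the screens it can lead to (the arrows are assumed for illustration).
DIALOG = {
    "Main screen": ["Add user", "Remove user"],
    "Add user": ["Main screen"],
    "Remove user": ["Confirm"],
    "Confirm": ["Main screen"],
}

def reachable(start, network):
    """Return every screen reachable from 'start' by following the arrows."""
    seen, frontier = set(), [start]
    while frontier:
        screen = frontier.pop()
        if screen not in seen:
            seen.add(screen)
            frontier.extend(network.get(screen, []))
    return seen

print(reachable("Main screen", DIALOG))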
Wider still
•• style issues:
•• platform standards, consistency
•• functional issues
•• cut and paste
•• navigation issues
•• embedded applications
2.1.4 Screen Design
The basic principles at the screen level reflect those in other areas of interaction design:
•• Ask: What is the user doing?
•• Think: What information is required? What comparisons may the user need to make?
In what order are things likely to be needed?
•• Design: Form follows function: let the required interactions drive the layout
Tools for layout
We have a number of visual tools available to help us suggest to the user appropriate
ways to read and interact with a screen or device.
•• grouping of items
•• order of items
•• Decoration - fonts, boxes etc.
•• alignment of items
•• white space between items
Grouping of items:
Figure 2.6 Grouping related items in an order screen (billing details, delivery details and
order details appear as visually separate groups)
Order of groups and items:
•• think! – what is natural order
•• should match screen order!
•• use boxes, space etc.
•• set up tabbing right!
•• instructions
•• beware the cake recipe syndrome!
… mix milk and flour, add the fruit
… after beating them
•• Decoration
•• use boxes to group logical items
•• use fonts for emphasis, headings
•• but not too many!!
Figure 2.7 Decoration (fonts and boxes used for grouping and emphasis)
Alignment – text:
•• we read from left to right (in English and other European languages)
Figure 2.8 Alignment-text
Alignment – names
Usually scanning for surnames ⇒ make it easy!
Figure 2.9 Alignment – Names
2.1.5 Iteration and Prototyping
•• You never get it right the first time; if at first you don't succeed, try, try again.
Figure 2.10 Role of prototyping (design, prototype, evaluate; if OK then done, otherwise
re-design)
There are two things you need in order for prototyping methods to work:
(1) An understanding of what is wrong and how to improve it.
(2) A good starting point.
2.2 HCI IN SOFTWARE PROCESS
2.2.1 The Software Life Cycle:
•• Software engineering is the discipline for understanding the software design process,
or life cycle
•• Designing for usability occurs at all stages of the life cycle, not as a single isolated
activity
•• software engineering for interactive system design is not simply a matter of adding
one more activity that slots in nicely with the existing activities in the life cycle.
•• Rather, it involves techniques that span the entire life cycle
Phases of software life cycle:
•• Requirement specification
•• Architectural Design
•• Detailed Design
•• Coding and Unit testing
•• Integration and testing
•• Operation and Maintenance
Figure 2.11 The activities in the waterfall model of the software life cycle
Requirements specification:
The designer and customer try to capture what the system is expected to provide. This can
be expressed in natural language or in more precise languages, such as those a task analysis
would provide.
Architectural design:
A high-level description of how the system will provide the services required. The system
is factored into its major components and their interrelations; the design needs to satisfy
both functional and non-functional requirements.
•• Present functionality through a familiar metaphor.
•• Provide similar execution style of analogous operations in different applications.
•• Organize the functionality of a system to support common user tasks.
•• Make invisible parts and processes visible to the user.
Detailed design:
Refinement of the architectural components and their interrelations to identify modules
that can be implemented separately. The refinement is governed by the non-functional
requirements.
Coding and Unit Testing:
The detailed design for a component of the system should be in such a form that it
is possible to implement it in some executable programming language. After coding, the
component can be tested to verify that it performs correctly, according to some test criteria that
were determined in earlier activities
Integration and testing
Testing is done to ensure correct behavior and acceptable use of any shared resources.
Maintenance
After product release, all work on the system is considered under the category of
maintenance, until such time as a new version of the product demands a total redesign or the
product is phased out entirely.
Validation and Verification
Verification: designing the product right.
Validation: designing the right product.
Figure 2.12 The formality gap between the real world and structured design
The formality gap: Validation will always rely to some extent on subjective means of
proof.
Figure: 2.13 Feedback from maintenance activity to other design activities
Interactive systems and the software life cycle
The actual design process is iterative, work in one design activity affecting work in any
other activity both before and after it in the life cycle.
Figure: 2.14 Representing iteration in the waterfall model
2.2.2 Usability Engineering
The ultimate test of usability is based on measurement of the user's experience.
Usability engineering demands that specific usability measures be made explicit as
requirements
Usability specification
•• usability attribute/principle
•• measuring concept
•• measuring method
•• now level/ worst case/ planned level/ best case
Problems
•• usability specification requires a level of detail that may not be possible early in design
•• satisfying a usability specification does not necessarily satisfy usability
Example usability specification for a VCR, for the attribute backward recoverability:
Measuring concept: Undo an erroneous programming sequence
Measuring method: Number of explicit user actions to undo current program
Now level: No current product allows such an undo
Worst case: As many actions as it takes to program-in mistake
Planned level: A maximum of two explicit user actions
Best case: One explicit cancel action
Usability standard ISO 9241
Usability categories:
•• effectiveness
•• can you achieve what you want to?
•• efficiency
•• can you do it without wasting effort?
•• satisfaction
•• do you enjoy the process?
Some metrics from ISO 9241
•• Suitability for the task: effectiveness is measured by the percentage of goals achieved,
efficiency by the time to complete a task, and satisfaction by a rating scale for satisfaction.
•• Appropriateness for trained users: effectiveness is measured by the number of power
features used, efficiency by relative efficiency compared with an expert user, and
satisfaction by a rating scale for satisfaction with power features.
•• Learnability: effectiveness is measured by the percentage of functions learned,
efficiency by the time to learn criterion, and satisfaction by a rating scale for ease of
learning.
•• Error tolerance: effectiveness is measured by the percentage of errors corrected
successfully, efficiency by the time spent on correcting errors, and satisfaction by a
rating scale for error handling.
Table 2.1 Examples of usability metrics from ISO 9241
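Measures such as these can be computed from logged test sessions. The following Python sketch is purely illustrative; the field names and the sample data are assumptions, not anything defined by ISO 9241.

# Illustrative computation of two ISO 9241-style measures from logged sessions.
# Each session records whether the task goal was achieved and how long it took.
sessions = [
    {"goal_achieved": True, "seconds": 95},
    {"goal_achieved": True, "seconds": 120},
    {"goal_achieved": False, "seconds": 240},
]

def effectiveness(logs):
    """Percentage of goals achieved (an effectiveness measure)."""
    return 100.0 * sum(s["goal_achieved"] for s in logs) / len(logs)

def mean_task_time(logs):
    """Mean time to complete a task over successful sessions (an efficiency measure)."""
    times = [s["seconds"] for s in logs if s["goal_achieved"]]
    return sum(times) / len(times)

print(effectiveness(sessions), mean_task_time(sessions))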
2.2.3 Iterative Design and Prototyping
•• Iterative design overcomes inherent problems of incomplete requirements
•• Prototypes
•• simulate or animate some features of intended system
•• different types of prototypes
•• throw-away
•• incremental
•• evolutionary
•• Management issues
•• time
•• planning
•• non-functional features
•• contracts
Throw-away:
The prototype is built and tested. The design knowledge gained from this exercise is
used to build the final product, but the actual prototype is discarded.
Figure:2.15 Throw-away prototyping within requirements specification
Incremental prototype:
The final product is built as separate components, one at a time. There is one overall
design for the final system, but it is partitioned into independent and smaller components. The
final product is then released as a series of products, each subsequent release including one
more component
Figure:2.16 Incremental prototyping within the life cycle
Evolutionary prototype:
Here the prototype is not discarded and serves as the basis for the next iteration of design
Figure: 2.17 Evolutionary prototyping throughout the life cycle
Techniques for prototyping (prototyping in practice):
•• Storyboards
The simplest notion of a prototype is the storyboard, which is a graphical depiction
of the outward appearance of the intended system
•• Limited functionality simulations
Some parts of the system functionality are provided by the designers, using tools such as HyperCard.
•• High-level programming support
HyperTalk was an example of a special-purpose high-level programming language
which makes it easy for the designer to program certain features of an interactive
system
•• Warning about iterative design
2.2.4 Design Rationale
Design rationale is information that explains why a computer system is the way it is.
Benefits of design rationale:
•• communication throughout life cycle
•• reuse of design knowledge across products
•• enforces design discipline
•• presents arguments for design trade-offs
•• organizes potentially large design space
•• capturing contextual information
Types of Design Rationale:
•• Process-oriented
•• preserves order of deliberation and decision-making
•• Structure-oriented
•• emphasizes post hoc structuring of considered design alternatives
•• Two examples:
•• Issue-based information system (IBIS)
•• Design space analysis
Process Oriented Design Rationale
Much of the work on design rationale is based on the issue-based information system, or IBIS.
In IBIS, a hierarchical structure to a design rationale is created. A root issue is identified
which represents the main problem or question that the argument is addressing. Various
positions are put forward as potential resolutions for the root issue. Each position is supported or refuted
by arguments, which modify the relationship between issue and position.
Figure 2.18 Structure of gIBIS (issues, sub-issues, positions and arguments connected by
links such as responds to, supports, objects to, questions, specializes and generalizes)
A graphical version of IBIS, known as gIBIS, makes the structure of the design rationale
more apparent visually, in the form of a directed graph which can be directly edited by the
creator of the design rationale.
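Such a directed graph of issues, positions and arguments can be held in a very simple data structure. The Python sketch below uses made-up issue and position text; it illustrates the gIBIS structure and is not a gIBIS tool.

# Minimal IBIS-style rationale: typed nodes and typed links between them.
nodes = {
    "I1": ("issue", "How should users close a window?"),
    "P1": ("position", "Provide a menu option"),
    "P2": ("position", "Provide a keyboard shortcut"),
    "A1": ("argument", "Shortcuts are faster for expert users"),
}
links = [
    ("P1", "responds to", "I1"),
    ("P2", "responds to", "I1"),
    ("A1", "supports", "P2"),
]

def positions_for(issue_id):
    """Return the positions that respond to a given issue."""
    return [src for src, rel, dst in links
            if rel == "responds to" and dst == issue_id
            and nodes[src][0] == "position"]

print(positions_for("I1"))  # ['P1', 'P2']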
Design space analysis
•• The design space is initially structured by a set of questions representing the major
issues of the design
•• Design space analysis is structure-oriented
•• The Questions, Options and Criteria (QOC) notation characterizes design space
analysis
•• QOC – hierarchical structure:
questions (and sub-questions)
•• represent major issues of a design
options
•• provide alternative solutions to the question
criteria
•• the means to assess the options in order to make a choice
Figure : 2.19 The QOC notation
Psychological design rationale
•• To support task-artefact cycle in which user tasks are affected by the systems they
use
•• Aims to make explicit consequences of design for users
•• Designers identify tasks system will support
•• Scenarios are suggested to test task
•• Users are observed on system
•• Psychological claims of system made explicit
•• Negative aspects of design can be used to improve next iteration of design
2.3 DESIGN RULES
Design rules are rules a designer can follow in order to increase the usability of the
eventual software product. They can be classified on the basis of the rule's authority and
generality.
•• Authority- indication of whether or not the rule must be followed in design or
whether it is only suggested.
•• Generality- whether the rule can be applied to many design situations or whether it
is focussed on a more limited application situation.
•• Rules also vary in their level of abstraction
Number of different types of design rules.
•• Principles are abstract design rules, with high generality and low authority.
•• Standards are specific design rules, high in authority and limited in application.
•• Guidelines tend to be lower in authority and more general in application.
Designing for maximum usability – the goal of interaction design
•• Principles of usability
•• general understanding
•• Standards and guidelines
•• direction for design
•• Design patterns
•• capture and reuse design knowledge
Figure 2.20 Design rules
2.3.1 Principles
Principles to support usability
•• Learnability
The ease with which new users can begin effective interaction and achieve maximal
performance
•• Flexibility
the multiplicity of ways the user and system exchange information
•• Robustness
the level of support provided to the user in determining successful achievement and
assessment of goal-directed behaviour
Principles of learnability:
Predictability
•• determining effect of future actions based on past interaction history
•• operation visibility
Synthesizability
•• assessing the effect of past actions
•• immediate vs. eventual honesty
Familiarity
•• how prior knowledge applies to new system
•• guessability; affordance
Generalizability
•• extending specific interaction knowledge to new situations
Consistency
•• likeness in input/output behaviour arising from similar situations or task objectives
Principles of flexibility:
Dialogue initiative
•• freedom from system imposed constraints on input dialogue
•• system vs. user pre-emptiveness
Multithreading
•• ability of system to support user interaction for more than one task at a time
•• concurrent vs. interleaving; multimodality
Task migratability
•• passing responsibility for task execution between user and system
Substitutivity
•• allowing equivalent values of input and output to be substituted for each other
•• representation multiplicity; equal opportunity
Customizability
•• modifiability of the user interface by user (adaptability) or system (adaptivity)
Principles of robustness:
Observability
•• ability of user to evaluate the internal state of the system from its perceivable
representation
•• browsability; defaults; reachability; persistence; operation visibility
Recoverability
•• ability of user to take corrective action once an error has been recognized
•• reachability; forward/backward recovery; commensurate effort
Responsiveness
•• how the user perceives the rate of communication with the system
•• Stability
Task conformance
•• degree to which system services support all of the user’s tasks
•• task completeness; task adequacy
2.3.2 Standards
•• Standards are set by national or international bodies to ensure compliance by a large
community of designers; standards require sound underlying theory and slowly
changing technology.
•• Hardware standards are more common than software standards; standards are high
in authority and low in level of detail.
•• ISO 9241 defines usability in terms of the effectiveness, efficiency and satisfaction
with which users accomplish tasks:
Usability: The effectiveness, efficiency and satisfaction with which specified users
achieve specified goals in particular environments.
Effectiveness: The accuracy and completeness with which specified users can achieve
specified goals in particular environments.
Efficiency: The resources expended in relation to the accuracy and completeness of
goals achieved.
Satisfaction: The comfort and acceptability of the work system to its users and other
people affected by its use.
2.3.3 Guidelines
•• more suggestive and general
•• abstract guidelines (principles) applicable during early life cycle activities
•• detailed guidelines (style guides) applicable during later life cycle activities
•• understanding justification for guidelines aids in resolving conflicts
Smith and Mosier guidelines are:
(1) Data Entry
(2) Data Display
(3) Sequence Control
(4) User Guidance
(5) Data Transmission
(6) Data Protection
2.3.4 Golden rules and heuristics
•• “Broad brush” design rules
•• Useful check list for good design
•• Better design using these than using nothing!
•• Different collections e.g.
•• Nielsen’s 10 Heuristics (see Chapter 9)
•• Shneiderman’s 8 Golden Rules
•• Norman’s 7 Principles
Shneiderman’s 8 Golden Rules
(1) Strive for consistency
(2) Enable frequent users to use shortcuts
(3) Offer informative feedback
(4) Design dialogs to yield closure
(5) Offer error prevention and simple error handling
(6) Permit easy reversal of actions
(7) Support internal locus of control
(8) Reduce short-term memory load
Norman’s 7 Principles
(1) Use both knowledge in the world and knowledge in the head.
(2) Simplify the structure of tasks.
(3) Make things visible: bridge the gulfs of Execution and Evaluation.
(4) Get the mappings right.
(5) Exploit the power of constraints, both natural and artificial.
(6) Design for error.
(7) When all else fails, standardize
HCI Design Patterns
•• An approach to reusing knowledge about successful design solutions
•• Originated in architecture: Alexander
•• A pattern is an invariant solution to a recurrent problem within a specific context.
•• Examples
•• Light on Two Sides of Every Room (architecture)
•• Go back to a safe place (HCI)
•• Patterns do not exist in isolation but are linked to other patterns in languages which
enable complete designs to be generated
•• Characteristics of patterns
•• capture design practice not theory
•• capture the essential common properties of good examples of design
•• represent design knowledge at varying levels: social, organisational, conceptual,
detailed
•• embody values and can express what is humane in interface design
•• patterns are intuitive and readable and can therefore be used for communication
between all stakeholders
•• a pattern language should be generative and assist in the development of complete
designs.
2.4 EVALUATION TECHNIQUES
What is Evaluation?
•• tests usability and functionality of system
•• occurs in laboratory, field and/or in collaboration with users
•• evaluates both design and implementation
•• should be considered at all stages in the design life cycle
Goals of Evaluation
•• assess extent of system functionality
•• assess effect of interface on user
•• identify specific problems
2.4.1 Evaluation Through Expert Analysis
Cognitive Walkthrough
•• evaluates design on how well it supports user in learning task
•• usually performed by expert in cognitive psychology
•• expert 'walks through' the design to identify potential problems using psychological
principles
•• forms used to guide analysis
Heuristic Evaluation
•• Proposed by Nielsen and Molich.
•• usability criteria (heuristics) are identified
•• design examined by experts to see if these are violated
•• Example heuristics
•• system behaviour is predictable
•• system behaviour is consistent
•• feedback is provided
•• Heuristic evaluation 'debugs' the design
Model-based evaluation
•• cognitive and design models provide a means of combining design specification and
evaluation into the same framework.
•• Design rationale provides a framework in which design options can be evaluated.
By examining the criteria that are associated with each option in the design, and the
evidence that is provided to support these criteria, informed judgments can be made
in the design
Using previous studies in evaluation
Previous results can be used as evidence to support (or refute) aspects of the design. It is
expensive to repeat experiments continually and an expert review of relevant literature can
avoid the need to do so.
2.4.2 Evaluating Through User Participation
User participation in evaluation tends to occur in the later stages of development when
there is at least a working prototype of the system in place.
Styles of evaluation
Laboratory studies:
Advantages:
•• specialist equipment available
•• uninterrupted environment
Disadvantages:
•• lack of context
•• difficult to observe several users cooperating
Appropriate
•• if the system location is dangerous or impractical, for constrained single-user systems,
or to allow controlled manipulation of use
Field Studies:
Advantages:
•• natural environment
•• context retained (though observation may alter it)
•• longitudinal studies possible
Disadvantages:
•• distractions
•• noise
Appropriate
•• where context is crucial, and for longitudinal studies
Experimental evaluation
•• controlled evaluation of specific aspects of interactive behaviour
•• evaluator chooses hypothesis to be tested
•• a number of experimental conditions are considered which differ only in the value
of some controlled variable.
•• changes in behavioural measure are attributed to different conditions
Experimental factors:
•• Subjects
•• Variables
•• Hypothesis
•• Experimental design
Variables
•• independent variable (IV)
•• characteristic changed to produce different conditions
•• e.g. interface style, number of menu items
•• dependent variable (DV)
•• characteristics measured in the experiment
•• e.g. time taken, number of errors.
Hypothesis
•• prediction of outcome
•• framed in terms of IV and DV
•• e.g. “error rate will increase as font size decreases”
•• null hypothesis:
•• states no difference between conditions
•• aim is to disprove this
•• e.g. null hyp. = “no change with font size”
Experimental design
•• within groups design
•• each subject performs experiment under each condition.
•• transfer of learning possible
•• less costly and less likely to suffer from user variation.
•• between groups design
•• each subject performs under only one condition
•• no transfer of learning
•• more users required
•• variation can bias results.
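As a simple illustration of how a between-groups experiment might be summarized, the sketch below computes the mean and standard deviation of a dependent variable (task time) for two conditions. The numbers and condition labels are invented; in practice a statistical test such as a t-test would then be applied to the two samples.

import statistics

# Invented task-time data (seconds) for a between-groups design:
# each participant used only one interface condition.
condition_a = [41, 37, 45, 39, 43]  # e.g. menu-driven interface
condition_b = [52, 48, 55, 50, 47]  # e.g. command-line interface

for name, data in [("A", condition_a), ("B", condition_b)]:
    print(name, "mean:", statistics.mean(data),
          "stdev:", round(statistics.stdev(data), 2))
# A significance test on the two samples would then show whether the observed
# difference in the dependent variable can be attributed to the conditions.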
2.4.3 Observational Techniques
A popular way to gather information about actual use of a system is to observe users
interacting with it.
Think Aloud
•• user observed performing task
•• user asked to describe what he is doing and why, and what he thinks is happening.
Advantages
•• simplicity - requires little expertise
•• can provide useful insight
•• can show how the system is actually used
Disadvantages
•• subjective
•• selective
•• act of describing may alter task performance
Cooperative evaluation
•• variation on think aloud
•• user collaborates in evaluation
•• both user and evaluator can ask each other questions throughout
Additional advantages:
•• less constrained and easier to use
•• user is encouraged to criticize system
•• clarification possible
Protocol Analysis
•• paper and pencil – cheap, limited to writing speed
•• audio – good for think aloud, difficult to match with other protocols
•• video – accurate and realistic, needs special equipment, obtrusive
•• computer logging – automatic and unobtrusive, large amounts of data difficult to
analyze
•• user notebooks – coarse and subjective, useful insights, good for longitudinal studies
Automated Analysis
•• Post task walkthrough
•• the user reflects on their actions after the event
•• used to fill in intention
Advantages
•• analyst has time to focus on relevant incidents
•• avoid excessive interruption of task
Disadvantages
•• lack of freshness
•• may be post-hoc interpretation of events
Post-task walkthroughs
•• transcript played back to participant for comment
•• immediately → fresh in mind
•• delayed → evaluator has time to identify questions
•• useful to identify reasons for actions and alternatives considered
•• necessary in cases where think aloud is not possible
2.4.4 Query Techniques
Interviews
•• analyst questions the user on a one-to-one basis, usually based on prepared questions
•• informal, subjective and relatively cheap
Advantages
•• can be varied to suit context
•• issues can be explored more fully
•• can elicit user views and identify unanticipated problems
Disadvantages
•• very subjective
•• time consuming
Questionnaires
•• Set of fixed questions given to users
Advantages
•• quick and reaches large user group
•• can be analyzed more rigorously
Disadvantages
•• less flexible
•• less probing
Styles of question
•• general
•• open-ended
•• scalar
•• multi-choice
•• ranked
2.4.5 Physiological Methods
Eye Tracking
•• head or desk mounted equipment tracks the position of the eye
•• eye movement reflects the amount of cognitive processing a display requires
•• measurements include
•• Fixations: eye maintains stable position. Number and duration indicate level of
difficulty with display
•• saccades: rapid eye movement from one point of interest to another
•• scan paths: moving straight to a target with a short fixation at the target is optimal
Physiological Measurements
•• emotional response linked to physical changes
•• these may help determine a user’s reaction to an interface
•• measurements include:
•• heart activity, including blood pressure, volume and pulse.
•• activity of sweat glands: Galvanic Skin Response (GSR)
•• electrical activity in muscle: Electromyogram (EMG)
•• electrical activity in brain: Electroencephalogram (EEG)
•• some difficulty in interpreting these physiological responses - more research needed
2.5 UNIVERSAL DESIGN
•• Designing systems to be used by anyone under any conditions
•• Multi-modal systems use more than one human input channel in the interaction
•• Speech
•• Non-speech sound
•• Touch
•• Handwriting
•• Gestures
•• Universal Design designing for diversity
•• Sensory, physical or cognitive impairment
•• Different ages
•• Different cultures & backgrounds
2.5.1 Universal Design Principles
•• equitable use
•• flexibility in use
•• simple and intuitive to use
•• perceptible information
•• tolerance for error
•• low physical effort
•• size and space for approach and use
Multi-Modal Interaction
•• Providing access to information through more than one mode of interaction is an
important principle of universal design. Such design relies on multi-modal
interaction.
•• There are five senses: sight, sound, touch, taste and smell.
Sound in the interface
•• Sound is an important contributor to usability
•• Sound can convey transient information and does not take up screen space, making it
potentially useful for mobile applications
Speech in the interface
•• Language is rich and complex
•• Human beings have a great and natural mastery of speech
Structure of speech:
The English language is made up of 40 phonemes, which are the atomic elements of
speech. Each phoneme represents a distinct sound, there being 24 consonants and 16 vowel
sounds.
Speech Recognition Problems
•• Different people speak differently:
•• accent, intonation, stress, idiom, volume, etc.
•• The syntax of semantically similar sentences may vary.
Background noises can interfere
Example:
The Phonetic Typewriter
•• Developed for Finnish (a phonetic language, written as it is said)
•• Trained on one speaker, will generalise to others.
•• A neural network is trained to cluster together similar sounds, which are then labelled
with the corresponding character.
When recognising speech, the sounds uttered are allocated to the closest corresponding
output, and the character for that output is printed
Speech Synthesis
Useful
•• natural and familiar way of receiving information
Problems
•• similar to recognition: prosody particularly
Additional problems
•• intrusive - needs headphones, or creates noise in the workplace
•• transient - harder to review and browse
Examples:
•• screen readers
•• read the textual display to the user; utilized by visually impaired people
•• warning signals
•• spoken information sometimes presented to pilots whose visual and haptic skills
are already fully occupied
Non-Speech Sounds:
•• Non-speech sound can be used in a number of ways in interactive systems.
•• It is often used to provide transitory information, such as indications of network or
system changes, or of errors.
•• It can also be used to provide status information on background processes, since we
are able to ignore continuous sounds but still respond to changes in those sounds
Auditory Icons
•• Use natural sounds to represent different types of object or action
Natural sounds have associated semantics which can be mapped onto similar meanings
in the interaction
e.g. throwing something away
Earcons
•• Synthetic sounds used to convey information
•• Structured combinations of notes (motives) represent actions and objects
•• Motives combined to provide rich information
Figure 2.21 Earcons
Touch in the interface
The use of touch in the interface is known as haptic interaction. Haptics is a generic term
relating to touch, but it can be roughly divided into two areas: cutaneous perception, which is
concerned with tactile sensations through the skin; and kinesthetics, which is the perception of
movement and position. Both are useful in interaction but they require different technologies.
2.5.2 Handwriting Recognition
Handwriting is another communication mechanism which we are used to in day-to-day life.
Technology:
•• Handwriting consists of complex strokes and spaces
•• Captured by digitising tablet
•• strokes transformed to sequence of dots
•• large tablets available
•• suitable for digitising maps and technical drawings
•• smaller devices, some incorporating thin screens to display the information
•• PDAs such as Palm Pilot
•• tablet PCs
•• Problems
•• personal differences in letter formation
•• co-articulation effects
Figure 2.22 Handwriting varies considerably
2.5.3 Gesture Recognition
•• applications
•• gestural input - e.g. “put that there”
•• sign language
•• technology
•• data glove
•• position sensing devices
•• benefits
•• natural form of interaction - pointing
•• enhance communication between signing and non-signing users
•• problems
•• user dependent, variable, and issues of co-articulation
2.5.4 Designing For Diversity
Designing for users with disabilities:
•• visual impairment
•• screen readers
•• hearing impairment
•• text communication, gesture, captions
•• physical impairment
•• speech I/O, eyegaze, gesture, predictive systems (e.g. Reactive keyboard)
•• speech impairment
•• speech synthesis, text communication
•• dyslexia
•• speech input, output
•• autism
•• communication, education
•• age groups
•• older people e.g. disability aids, memory aids, communication tools to prevent
social isolation
•• children e.g. appropriate input/output devices, involvement in design process
•• cultural differences
•• influence of nationality, generation, gender, race, sexuality, class, religion,
political persuasion etc. on interpretation of interface features
•• e.g. interpretation and acceptability of language, cultural symbols, gesture and
colour
REVIEW QUESTIONS
WITH ANSWERS
PART – A (2-MARKS)
1. What is design?
Design is achieving goals within constraints.
2. What is a goal?
A well-designed user interface will provide a good match between the user’s task needs,
skill level and learning ability.
• What is the purpose of the design?
• Who is it for?
• Why do they want it?
3. List out design’s constraints.
• What materials must we use?
• What standards must we adopt?
• How much can it cost?
• How much time do we have to develop it?
4. Mention four basic activities of interaction design.
• Identifying needs and establishing requirements.
• Developing alternative designs that meet those requirements.
• Building interactive versions of the designs so that they can be communicated and
assessed.
• Evaluating what is being built throughout the process.
5. Draw the interaction design process in detail.
6. List the importance of prototype.
• A prototype enables the design team to understand how easy or difficult it will be to
implement some of the features of the system.
• It also can give users a chance to comment on the usability and usefulness of the
user interface design.
• Prototyping provides the developer a means to test and refine the user interface and
increase the usability of the system.
7. Describe the uses of scenarios.
• communicate with clients or user
• validate other task models, dialogue models and navigation models
• understand dynamics of individual screen shots and pictures
8. List out basic principles of screen design.
• Ask : What is the user doing?
• Think: What information is required? What comparisons may the user need to
make? In what order are things likely to be needed?
• Design: Form follows function: let the required interactions drive the layout.
9. Write down the Phases of software life cycle.
• Requirement specification
• Architectural Design
• Detailed Design
• Coding and Unit testing
• Integration and testing
• Operation and Maintenance
10. Give the usability specification for the attribute backward recoverability.
Measuring concept : Undo an erroneous programming sequence
Measuring method : Number of explicit user actions to undo current program
Now level : No current product allows such an undo
Worst case : As many actions as it takes to program-in mistake
Planned level : A maximum of two explicit user actions
Best case : One explicit cancel action
11. State the throw-away prototype with a neat diagram.
The prototype is built and tested. The design knowledge gained from this exercise is
used to build the final product, but the actual prototype is discarded.
12. What is Incremental prototype?
The final product is built as separate components, one at a time. There is one overall
design for the final system, but it is partitioned into independent and smaller components. The
final product is then released as a series of products, each subsequent release including one
more component
13. Write down the techniques used for prototyping.
• Storyboards
• The simplest notion of a prototype is the storyboard, which is a graphical depiction
of the outward appearance of the intended system
• Limited functionality simulations
• Some parts of the system functionality are provided by the designers, using tools such as HyperCard.
• High-level programming support
• HyperTalk was an example of a special-purpose high-level programming language
which makes it easy for the designer to program certain features of an interactive
system
• Warning about iterative design
14. What are the Benefits of design rationale?
• communication throughout life cycle
• reuse of design knowledge across products
• enforces design discipline
• presents arguments for design trade-offs
• organizes potentially large design space
• capturing contextual information
15. List out Smith and Mosier guidelines.
• Data Entry
• Data Display
• Sequence Control
• User Guidance
• Data Transmission
• Data Protection
16. Mention Shneiderman’s 8 Golden Rules.
1. Strive for consistency
2. Enable frequent users to use shortcuts
3. Offer informative feedback
4. Design dialogs to yield closure
5. Offer error prevention and simple error handling
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load
17. State Norman's 7 Principles.
1. Use both knowledge in the world and knowledge in the head.
2. Simplify the structure of tasks.
3. Make things visible: bridge the gulfs of Execution and Evaluation.
4. Get the mappings right.
5. Exploit the power of constraints, both natural and artificial.
6. Design for error.
7. When all else fails, standardize
18. Write down the Characteristics of patterns.
• capture design practice not theory
• capture the essential common properties of good examples of design
• represent design knowledge at varying levels: social, organisational, conceptual,
detailed
• embody values and can express what is humane in interface design
• patterns are intuitive and readable and can therefore be used for communication
between all stakeholders
• pattern language should be generative and assist in the development of complete
designs.
19. What is Evaluation?
• tests usability and functionality of system
• occurs in laboratory, field and/or in collaboration with users
• evaluates both design and implementation
• should be considered at all stages in the design life cycle
20. Why is evaluation used in design?
• assess extent of system functionality
• assess effect of interface on user
• identify specific problems in the design life cycle
21. List down the Universal Design Principles.
• equitable use
• flexibility in use
• simple and intuitive to use
• perceptible information
• tolerance for error
• low physical effort
• size and space for approach and use
22. Write short notes on Non-Speech Sounds
Non-speech sound can be used in a number of ways in interactive systems. It is often
used to provide transitory information, such as indications of network or system changes, or of
errors. It can also be used to provide status information on background processes, since we are
able to ignore continuous sounds but still respond to changes in those sounds.
23. What is the formality gap? Explain with a diagram.
(1) Verification: Designing the product right.
(2) Validation: Designing the right product
Figure: The formality gap between the real world and structured design
The formality gap: Validation will always rely to some extent on subjective means of
proof
24. Draw the QOC notation.
25. Write about ISO 9241
ISO 9241 defines usability, effectiveness, efficiency and satisfaction with which users
accomplish tasks
Usability The effectiveness, efficiency and satisfaction with which specified users
achieve specified goals in particular environments.
Effectiveness The accuracy and completeness with which specified users can achieve
specified goals in particular environments.
Efficiency The resources expended in relation to the accuracy and completeness of
goals achieved.
Satisfaction The comfort and acceptability of the work system to its users and other
people affected by its use.
PART – B (16 MARKS)
1. Describe the interaction design process with a neat diagram.
2. Explain navigation in the interaction design process.
3. Discuss the software process in human computer interaction in detail.
4. What is usability engineering? Discuss usability engineering in detail.
5. Discuss iterative design and prototyping in detail.
6. Explain design rules and principles in detail.
7. Explain universal design in detail.
8. Describe and explain evaluation techniques in the interaction design process.
9. Discuss HCI standards and guidelines in detail.
10. Write in detail about observational techniques of evaluation in the design process.
UNIT - 3
MODELS AND THEORIES
Cognitive models –Socio-Organizational issues and stake holder requirements –
Communication and collaboration models-Hypertext, Multimedia and WWW.
3.1. COGNITIVE MODELS
3.1.1. Cognitive Science
•• Cognitive Science is the science of mind and behavior, which is different from
psychology.
•• In cognitive science, understanding the acquisition and use of knowledge is key to
understanding the mind.
•• So, cognitive science is a scientific and interdisciplinary study of the mind, with
special emphasis on the use and acquisition of knowledge and information.
•• It includes Artificial Intelligence, Psychology, Linguistics, Philosophy,
Anthropology, Neuroscience and Education.
•• It implies both an interdisciplinary approach (i.e. many scientific disciplines contribute
to cognitive science) and a computational approach which explains information
processing in terms of neural computations.
•• Cognitive science grew out of the following three developments:
•• The invention of the computer and the design of programs that could do the kinds
of tasks that humans do.
•• The development of information processing, whose main goal is to specify the
internal processing involved in perception, language, memory and thought.
•• The development of the theory of generative grammar and related branches of
linguistics.
•• Cognitive science contains five major areas: knowledge representation, language,
learning, thinking, and perception.
•• Cognitive models grew out of cognitive science.
•• Cognition: derived from the Latin base cognitio, i.e. "know together", which means
the collection of mental processes and activities used in perceiving, learning,
remembering, thinking and understanding, and the act of using those processes.
3.1.2 Why is Cognitive Science interdisciplinary?
To process information, the human information processor has to carry out the following steps:
•• Acquire real-time information about the surrounding environment using perception
or receptors.
•• Make use of language, drawing on information about syntax, semantics and phonology.
•• Combine different sources of information, derive new information and test its
consistency using reasoning.
•• Make use of information in action planning and guidance.
•• Store and retrieve the information using memory.
Hence, cognitive science is an interdisciplinary field.
Fig.3.1. Cognitive Model for Human Information Processor (HIP)
3.1.3 Cognitive Model
•• Cognitive Model: a theory that produces a computational model of how people
perform tasks and solve problems, using psychological principles and empirical
studies.
•• It is both a research tool for theory building and an engineering tool for applying
theory.
•• The aim of cognitive models in Human Computer Interaction (HCI) is to support the
design and evaluation of interface alternatives.
•• Cognitive models are abstract, quantitative, approximate, and estimated from
experiments based on a theory of cognition.
•• There is a need to model some aspects of user’s understanding, knowledge, intentions
or processing.
•• The level of representation differs from technique to technique- from models of high
level goals and the results of problem solving activities, to description of motor level
activity such as keystrokes and mouse click.
Fig.3.2. Cognitive Model
•• The common categorization of cognitive models are:
•• Competence Vs Performance
•• Computational Flavour
•• No clear divide
•• Competence Model
•• Competence models tend to be ones that can predict legal behavior sequences but
generally do this without reference to whether they could actually be executed
by users
•• Competence models, therefore, represent the kinds of behavior expected of
a user, but they provide little help in analyzing that behavior to determine its
demands on the user.
•• Performance Model
•• performance models not only describe what the necessary behavior
sequences are but usually describe both what the user needs to know and how
this is employed in actual task execution.
•• Performance models provide analytical power mainly by focusing on routine
behavior in very limited applications.
•• Cognitive models for HCI are mainly classified into three categories:
•• Hierarchical representation of the user's task and goal structure
•• Linguistic and grammatical models
•• Physical and device-level models
•• Role of Cognitive Model in HCI
•• It attempts to predict user performance based on a model of cognition in HCI.
•• HCI uses that model to predict how a human would complete tasks using a
particular user interface.
•• It limits the design space in HCI
•• Specific design decisions in HCI can be taken based on the answers the cognitive
model provides.
•• It is used to estimate both total task time and training time.
•• Also used to identify complex, error-prone stages of the design.
•• Advantages
•• Don’t need to implement or prototype
•• Don’t need to test with real users
•• Theory has explanatory power, which provides a scientific foundation for design,
as in other engineering fields.
3.1.4. Goal and Task Structure
•• It is based on divide and conquer
•• It models the mental processing in which the user achieves goals by solving sub
goals.
•• Goals are intentions/aims (i.e. what you would like to be true).
•• Tasks are actions (i.e., how to achieve it)
•• Consider the example of producing a report on sales of introductory HCI textbooks.
•• To achieve this goal, we divide it into several sub-goals: gathering the data together,
producing the tables and histograms, and writing the descriptive material.
•• These sub-goals are in turn divided into further sub-goals, until some level of detail
is reached at which we decide to stop.
•• Example: the gathering-the-data-together sub-goal is divided into finding the names
of all introductory HCI books and then searching the book sales database for these books.
gather data
. find book names
. . do keywords search of names database
. . . … further sub-goals
. . sift through names and abstracts by hand
. . . … further sub-goals
. search sales database - further sub-goals
layout tables and histograms - further sub-goals
write description - further sub-goals
•• There are three techniques used to model goal and task hierarchies:
•• Goals, Operators, Methods, and Selection (GOMS)
•• Cognitive Complexity Theory (CCT)
•• Hierarchical Task Analysis (HTA)
•• Issues for Goal Vs Task Structure:
•• Granularity
•• Where do we start and stop?
•• Get down to routine learned behavior, not problem solving
•• Unit task: the level at which we stop; a unit task does not require any
problem-solving skills on the part of the user.
•• Conflict
•• There is more than one way to achieve the goal or solve the problem.
•• Treatment of Error
3.1.4.1. Goal, Operators, Methods and Selection (GOMS)
•• GOMS provides a higher-level language for task analysis and UI modeling
•• It generates a set of quantitative and qualitative measures based on description of
the task and user interface
•• It provides a hierarchy of goals and methods to achieve them
•• The different GOMS variants use different terms, operate at various levels of
abstraction, and make different simplifying assumptions
•• GOMS is a formal representation of routine cognitive skill
•• A description of knowledge required by an expert user to perform a specific task.
•• It provides a description of what the user must learn.
•• GOMS can be classified as a predictive, descriptive and prescriptive model:
•• Predictive
•• Predicts the time it will take user to perform the tasks under analysis
•• Descriptive
•• Represents the way a user performs tasks on a system
•• Prescriptive
•• Guides the development of training programs and help systems
•• GOMS models user’s behavior in terms of:
•• Goals
•• What the user wants to do/achieve; it can be broken down into sub-goals.
•• Operators
•• An action performed in service of a goal; can be perceptual, cognitive or
motor acts
•• It is a specific step a user is able to take and assigned a specific execution
time.
•• Methods
•• Well-learned sequences of sub-goals and operators that can accomplish a
goal.
•• Selection Rules
•• Guidelines for deciding between multiple methods.
•• Consider the following example of a GOMS model for closing a window (a small
selection-rule sketch follows the example):
GOAL: CLOSE-WINDOW
. [select GOAL: USE-MENU-METHOD
. MOVE-MOUSE-TO-FILE-MENU
. PULL-DOWN-FILE-MENU
. CLICK-OVER-CLOSE-OPTION
GOAL: USE-CTRL-W-METHOD
. PRESS-CONTROL-W-KEYS]
For a particular user:
Rule 1: Select USE-MENU-METHOD unless another
rule applies
Rule 2: If the application is GAME,
select CTRL-W-METHOD
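The selection rules above can be read as a small decision procedure. The following Python sketch only illustrates the idea; the method and operator names mirror the example, and this is not a standard GOMS tool.

# The selection rules for CLOSE-WINDOW expressed as a small decision procedure.
def select_close_method(application):
    """Rule 2: in a GAME use CTRL-W; Rule 1: otherwise use the menu method."""
    if application == "GAME":
        return "USE-CTRL-W-METHOD"
    return "USE-MENU-METHOD"

# Each method expands into its sequence of operators.
METHODS = {
    "USE-MENU-METHOD": ["MOVE-MOUSE-TO-FILE-MENU",
                        "PULL-DOWN-FILE-MENU",
                        "CLICK-OVER-CLOSE-OPTION"],
    "USE-CTRL-W-METHOD": ["PRESS-CONTROL-W-KEYS"],
}

print(METHODS[select_close_method("GAME")])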
•• Goals vs Operators
•• The difference between goals and operators is simply the level of detail chosen
by the analyzer
•• Goals are usually important end-user intentions
•• Operators usually represent primitive user actions, that have a fixed execution
time regardless of the context (or that is a constant function of some parameter),
that can be estimated empirically
•• As a result, operators usually stop at the command or keystroke level
•• Variations of GOMS are: Keystroke-Level Model (KLM), Card, Moran, and Newell
(CMN-GOMS), Natural GOMS Language (NGOMSL), Cognitive-Perceptual-
Motor GOMS (CPM-GOMS)
3.1.4.1 (a) GOMS - Keystroke-Level Model (KLM)
•• It is a simplest GOMS technique
•• The basis for all other GOMS techniques
•• It predicts execution time
•• It requires analyst-supplied methods
•• Assumption: the routine cognitive skills can be decomposed into a serial sequence
of basic cognitive operations and motor activities, which are:
•• K: A keystroke (280 msec)
•• M: A single mental operator (1350 msec)
•• P: Pointing to a target on a small display (1100 msec)
•• H: Moving hands from the keyboard to a mouse (400 msec)
•• Consider the following example of a cut and paste operation using menus, and
calculate the execution time (a small calculation sketch follows the figures):
Fig.3.3 KLM example
Fig.3.4 Execution time calculation for cut paste operation using menu in notepad with
KLM Model
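Using the operator times listed above, the predicted execution time of a method is simply the sum of its operator times. A minimal Python sketch follows; the operator sequence is a made-up example rather than the exact sequence of Fig. 3.4.

# KLM operator times in milliseconds, as listed above.
KLM_TIMES = {"K": 280, "M": 1350, "P": 1100, "H": 400}

def klm_time(operators):
    """Predicted execution time (ms) for a serial sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: move hand to mouse, point at menu, click, think, then one keystroke.
sequence = ["H", "P", "K", "M", "K"]
print(klm_time(sequence), "ms")  # 400 + 1100 + 280 + 1350 + 280 = 3410 ms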
3.1.4.1 (b) GOMS - Card, Moran, and Newell (CMN-GOMS)
•• It provides methods to achieve explicit goals and sub goals
•• The selection methods are predicted by the system - based on user’s rational decision-
making
•• Sub goal invocations and method selection are predicted by the model given the task
situation
•• In program form, the analysis is general and executable
•• It predicts operator sequence and execution time
•• It is directly based on the Model Human Processor
•• Building CMN-GOMS Model:
•• Generate task description
•• Pick a high level user goal
•• Write method of accomplishing goal
•• Write methods for sub-goals
•• Stop when operators are reached.
•• Evaluate description of task
•• Apply result to UI
•• Iterate the process
3.1.4.1. (c) Natural GOMS Language (NGOMSL)
•• It is a structured natural language notation for representing GOMS models
•• It can predict learning time based on the number of NGOMSL statements in a method
(plus loading any additional static data in memory)
•• It can be used for developing training materials, help modules, etc.
•• It models the operation of working memory by including Retain and Recall statements
3.1.4.1. (d) Cognitive-Perceptual-Motor GOMS (CPM-GOMS)
•• Cognitive-Perceptual-Motor (or Critical-Path-Method) GOMS works at a lower level of detail
•• The primitive operators in CPM are very simple perceptual, cognitive or motor acts
•• It models parallelism by considering three mental processors and storage systems
that can work in parallel based on Model Human Processor
•• The execution time is predicted using critical path which is the longest path through
the task based on cognitive limitations and information flow dependencies
•• This model allocates less time for "prepare for action" type operations and allows
parallel processes. Hence it predicts a substantially shorter execution time than the
other models.
•• Consider the collect-call example from the Toll and Assistance Operator (TAO) study:
the operator hits a "collect-call" key and says "Thank you" to the customer.
Fig.3.5 Collect Call between Toll and Assistance Operator
•• In order to save time, we can reposition the key for faster access in the sequential
example, but not in the parallel example.
•• The critical path of the above example is determined as follows (see also the sketch
after Fig. 3.6):
•• Critical path is a connected sequence that represents the greatest total time and
therefore determines the overall time for tasks.
•• The critical path below is 400 + 280 + 2000 + 280 = 2960 ms, i.e. about 2.96 seconds
Fig.3.6. Critical path of Collect Call between Toll and Assistance Operator
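The critical path can be computed as the longest path through an acyclic dependency graph of operators. The Python sketch below is illustrative: the task names are invented, but the durations are the four quoted above, so the result reproduces the 2960 ms figure.

# Longest path (critical path) through a small acyclic dependency graph.
# Durations are in ms; deps lists the operators that must finish first.
TASKS = {
    "home-to-key": {"duration": 400, "deps": []},
    "press-collect-key": {"duration": 280, "deps": ["home-to-key"]},
    "say-thank-you": {"duration": 2000, "deps": ["press-collect-key"]},
    "final-keystroke": {"duration": 280, "deps": ["say-thank-you"]},
}

def critical_path_time(tasks):
    """Earliest finish time of the whole task = length of the critical path."""
    finish = {}
    def finish_time(name):
        if name not in finish:
            start = max((finish_time(d) for d in tasks[name]["deps"]), default=0)
            finish[name] = start + tasks[name]["duration"]
        return finish[name]
    return max(finish_time(t) for t in tasks)

print(critical_path_time(TASKS), "ms")  # 2960 ms, i.e. about 2.96 seconds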
3.1.4.1. (e) Comparison between GOMS variants:
•• Architectural basis: KLM uses a simple cognitive architecture; CMN-GOMS the
Model Human Processor; NGOMSL Cognitive Complexity Theory; CPM-GOMS the
Model Human Processor, assuming expertise in use.
•• Goal hierarchy: explicit in CMN-GOMS; implicit in KLM, NGOMSL and CPM-GOMS.
•• Models learning/transfer: only NGOMSL.
•• Models parallel processes: only CPM-GOMS.
•• Assigned mental time: KLM yes (the M operator); CMN-GOMS no; NGOMSL yes;
CPM-GOMS yes, but very short for expert users.
•• Notation used: KLM primitive operators; CMN-GOMS a programming-language-like
notation; NGOMSL natural language; CPM-GOMS a schedule/PERT chart.
3.1.4.1. (f) Advantages of GOMS:
•• It gives qualitative & quantitative measures
•• GOMS model explains the results
•• It requires less work than a user study
•• It is easy to modify when the UI is revised
•• There are research tools to aid the modeling process, since it can still be tedious
3.1.4.1. (g) Disadvantages of GOMS:
•• The GOMS model assumes extreme expert behavior
•• Here the Tasks must be goal-directed
•• It does not model problem-solving, exploration, etc.
•• It is very difficult to model behavior at this level of detail
•• It is still hard to model anything but the simplest tasks
•• It is not as easy to apply as heuristic analysis, guidelines or cognitive walkthrough
•• GOMS is an interesting theoretical model, but is hardly ever used in practice
3.1.4.2. Cognitive Complexity theory (CCT)
•• It was introduced by Kieras and Polson.
•• The basic premise of CCT is goal decomposition
•• It enriches the model to provide more analytical power.
•• There are two parallel descriptions:
•• User's goal
•• It is based on a GOMS-like goal hierarchy
•• It is expressed using production rules, i.e. if condition then action
•• where the condition is a statement about the contents of working memory
•• If the condition is true then the production rule is said to fire.
•• An action may consist of one or more elementary actions, which may be either
changes to the working memory or external actions such as keystrokes.
•• Production rules may be written as a program in a LISP-like language.
•• Device/ Computer goal
•• CCT uses generalized transition networks
•• It is a form of state transition network
•• It is covered under dialogue models.
•• Consider the example of an editing task using the UNIX vi text editor,
written in CCT.
•• Here the production rules are stored in long-term memory; four rules for this task
are given in the text on page 425.
•• The contents of working memory in CCT might be:
(GOAL perform unit task)
(TEXT task is insert space)
(TEXT task is at 5 23)
(CURSOR 8 7)
•• Four rules to model inserting a space is:
•• Some notes on CCT:
•• The rules need not fire in the order in which they were written
•• The rules are all active and at each moment any rule that has its conditions true
may fire. Some rules may never fire; the same rule may fire repeatedly.
•• The rules can fire simultaneously (a parallel model)
•• The rules may represent either novice or expert behaviour
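The match-fire cycle that CCT assumes can be sketched in a few lines of Python. The working-memory items and the single rule below are invented for illustration and are far simpler than the vi rules referred to above.

# A tiny production-rule interpreter: a rule fires when its condition holds over
# working memory, and its action updates working memory or acts externally.
working_memory = {"GOAL": "insert space", "CURSOR-AT-TARGET": False}

def rule_move_cursor(wm):
    # IF the goal is to insert a space AND the cursor is not yet at the target
    if wm["GOAL"] == "insert space" and not wm["CURSOR-AT-TARGET"]:
        print("ACTION: move cursor to target")  # external action
        wm["CURSOR-AT-TARGET"] = True           # change to working memory
        return True
    return False

rules = [rule_move_cursor]

fired = True
while fired:  # keep firing until no rule's condition is true
    fired = any(rule(working_memory) for rule in rules)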
3.1.4.3. Problems and Extensions of goal hierarchies
•• The description can be enormous in goal hierarchies
•• Goal hierarchies are a post hoc technique; the risk is that the hierarchy is defined by
the computer dialog rather than by the user.
•• Both experts and novices can use this model at their own level; hence the representation
of the model for a given problem differs between experts and novices.
3.1.5 Linguistic Models
•• Language is usually the tool with which the user interacts with the computer.
•• The user's behaviour and cognitive difficulty can be understood by analyzing the
language between user and system.
•• There are two types of Linguistic dialogue models
•• Backus–Naur Form (BNF)
•• Task–Action Grammar (TAG)
3.1.5. (a) Backus-Naur Form(BNF)
•• It is a very common notation in computer science
•• BNF rules are used to describe the dialog grammar
•• It is a purely syntactic view of dialogue; it ignores semantics
•• BNF has been used widely to specify the syntax of computer programming languages.
•• There are two types of description used here: Terminals and Non Terminals.
•• Terminals
•• Shown in upper-case letters
•• They represent the lowest level of user behaviour
•• e.g. CLICK-MOUSE, MOVE-MOUSE
•• Nonterminals
•• Shown in lower-case letters
•• They represent higher levels of abstraction
•• e.g. select-menu, position-mouse
•• Non-terminals are defined in terms of other non-terminals and terminals by
a definition of the form
•• Name::= expression
•• Where ::= symbol is read as ‘is defined as’
•• Non-terminals only appear on the left hand side.
•• Consider the following example
•• An expression
•• contains terminals and nonterminals
•• combined in sequence (+) or as alternatives (|)
draw line ::= select line + choose points + last point
select line ::= pos mouse + CLICK MOUSE
choose points ::= choose one | choose one + choose points
choose one ::= pos mouse + CLICK MOUSE
last point ::= pos mouse + DBL CLICK MOUSE
pos mouse ::= NULL | MOVE MOUSE+ pos mouse
•• Measurement with BNF
•• In order to measure and analyze the BNF we could consider the following factors:
•• Count the number of rules: the more rules an interface requires to describe it, the
more complicated it is. This measure is sensitive to how the rules are written.
•• Count the number of '+' and '|' operators; this, in effect, penalizes more complex
single rules.
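•• As a rough illustration (a sketch added here, not part of the original text), both measures
can be computed mechanically; the grammar below is the draw-line example given earlier.
# Two simple BNF complexity measures: the number of rules, and the number of
# '+' (sequence) and '|' (choice) operators across all rule bodies.
grammar = {
    "draw line":     "select line + choose points + last point",
    "select line":   "pos mouse + CLICK MOUSE",
    "choose points": "choose one | choose one + choose points",
    "choose one":    "pos mouse + CLICK MOUSE",
    "last point":    "pos mouse + DBL CLICK MOUSE",
    "pos mouse":     "NULL | MOVE MOUSE + pos mouse",
}

rule_count = len(grammar)
operator_count = sum(body.count("+") + body.count("|") for body in grammar.values())
print(f"rules: {rule_count}, '+' and '|' operators: {operator_count}")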
3.1.5. (b) Task Action Grammar(TAG)
•• BNF ignores the advantages of consistency both in the language’s structure and in
its use of command names and letters.
•• TAG attempts to deal with consistency more explicitly.
•• It uses parameterized grammar rules to emphasize consistency and encoding the
user’s world knowledge.
•• To illustrate consistency, consider three UNIX commands: cp (copy), mv (move)
and ln (link)
•• Each has two possible forms in BNF
•• In BNF, three UNIX commands would be described as:
•• copy ::= cp + filename + filename | cp + filenames + directory
•• move ::= mv + filename + filename | mv + filenames + directory
•• link ::= ln + filename + filename | ln + filenames + directory
•• No BNF measure could distinguish between this and a less consistent grammar in
which
•• link ::= ln + filename + filename | ln + directory + filenames
•• In TAG, this consistency of argument order can be made explicit using a parameter,
or semantic feature for file operations.
•• Rules here are:
•• file-op[Op] ::= command[Op] + filename + filename
•• | command[Op] + filenames + directory
•• command[Op = copy] ::= cp
•• command[Op = move] ::= mv
•• command[Op = link] ::= ln
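•• The following short Python sketch (not from the original text) shows how a single
parameterized file-op rule captures the shared argument structure of all three commands;
the helper names are invented for illustration.
# Expanding the parameterized TAG-style rule file-op[Op] for each value of Op.
# One rule generates the consistent argument pattern for cp, mv and ln.
COMMANDS = {"copy": "cp", "move": "mv", "link": "ln"}   # command[Op] rules

def file_op(op, to_directory=False):
    """Hypothetical expansion of file-op[Op] into a command-line pattern."""
    cmd = COMMANDS[op]
    if to_directory:
        return f"{cmd} <filenames> <directory>"
    return f"{cmd} <filename> <filename>"

for op in COMMANDS:
    print(file_op(op), "|", file_op(op, to_directory=True))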
3.1.6. Physical and Device Models
•• There are two types:
•• The Keystroke Level Model (KLM)
•• Buxton’s 3-state model
•• They are based on empirical knowledge of the human motor system
•• The user's task is divided into acquisition and execution; these models address only
the execution phase
•• These models are complementary to goal hierarchies
3.1.6. (a) Keystroke Level Model (KLM)
•• It is lowest level of GOMS
•• It predicts expert, error-free performance times for routine tasks without requiring a
deep model of the user's cognition
•• It is aimed at unit tasks within interaction
•• The assumption is that more complex tasks would be split into unit subtasks before
the user attempts to map them into physical actions
•• The task is split into two phases:
•• Acquisition of the task – the user builds a mental representation of the task
•• Execution of the task using the system’s facilities.
•• It is related to GOMS model
•• The model decomposes the execution phase into a set of physical motor operators,
a mental operator and a system response operator.
•• Physical motor: K - keystroking
P - pointing
H - homing
D - drawing
•• Mental M - mental preparation
•• System R - response
•• The execution of a task involves interleaved occurrences of the various operators
•• The total execution time is the sum of the empirically determined operator times:
•• Texecute = TK + TP + TH + TD + TM + TR
•• For example, in a mouse-based editor, if we notice a single-character error we point
at the error, delete the character, retype it, and then return to the previous typing
point; the KLM breaks such a task into a sequence of the operators above and sums
their times
•• A related example compares two methods for closing (iconizing) a window, expressed
as a GOMS-style goal hierarchy:
GOAL: ICONISE-WINDOW
[select
GOAL: USE-CLOSE-METHOD
. MOVE-MOUSE-TO- FILE-MENU
. PULL-DOWN-FILE-MENU
. CLICK-OVER-CLOSE-OPTION
GOAL: USE-CTRL-W-METHOD
PRESS-CONTROL-W-KEY]
•• compare alternatives:
•• USE-CTRL-W-METHOD vs.
•• USE-CLOSE-METHOD
•• assume hand starts on mouse
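•• As a minimal sketch (added for illustration, not from the original text), the two methods
can be compared by summing KLM operator times; the times below are commonly quoted
approximations and the operator sequences are assumptions.
# Illustrative KLM comparison of two methods for closing a window.
# Operator times (seconds) are commonly quoted approximate values.
T = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

# Assumed operator sequences, with the hand starting on the mouse:
use_close_method  = ["P", "K", "P", "K"]   # point to File menu, click, point to Close, click
use_ctrl_w_method = ["H", "K"]             # home hand onto keyboard, press Ctrl-W

def execute_time(operators):
    return round(sum(T[op] for op in operators), 2)

print("USE-CLOSE-METHOD :", execute_time(use_close_method), "s")
print("USE-CTRL-W-METHOD:", execute_time(use_ctrl_w_method), "s")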
3.1.7. Cognitive Architecture Model
•• All of these cognitive models make assumptions about the architecture of the human
mind.
•• GOMS assumes a divide-and-conquer approach: a problem is solved by decomposing
it into subgoals
•• CCT assumes the distinction between long-term and short-term memory, with
production rules stored in long-term memory and matched against the contents of
short-term (working) memory to determine which rules fire.
•• The KLM is based on the Model Human Processor architecture.
3.1.7. (a) Problem Space Model
•• Rational behavior is characterized as behavior that is intended to achieve a specific
goal.
•• This element of rationality is often used to distinguish between intelligent and
machine-like behavior.
•• In the field of artificial intelligence (AI), a system exhibiting rational behavior is
referred to as a knowledge-level system.
•• A knowledge-level system contains an agent behaving in an environment.
•• The agent has knowledge about itself and its environment, including its own
goals.
•• It can perform certain actions and sense information about its changing
environment.
•• As the agent behaves in its environment, it changes the environment and its own
knowledge.
•• In this model, the overall behavior of the knowledge-level system can be viewed as
a sequence of environment and agent states as they progress in time.
•• The goal of the agent is characterized as a preference over all possible sequences
of agent/environment states.
•• Contrast this rational behavior with the general computational model of a machine,
which is not rational.
•• For example, it is common to describe a problem as the search through a set of
possible states, from some initial state to a desired state.
•• The search proceeds by moving from one state to another possible state by
means of operations or actions, the ultimate goal of which is to arrive at one of
the desired states.
•• Once a programmer has identified a problem and a means of arriving at the
solution to the problem (the algorithm), the programmer then represents the
problem and algorithm in a programming language, which can be executed on a
machine to reach the desired state.
•• The architecture of the machine only allows the definition of the search or
problem space and the actions that can occur to traverse that space.
•• Termination is also assumed to happen once the desired state is reached.
•• The machine does not have the ability to formulate the problem space and its
solution, mainly because it has no idea of the goal.
•• It is the job of the programmer to understand the goal and so define the machine to
achieve it.
•• In order to realize the architecture of a knowledge-level system, we can adapt the
state-based computational model of a machine.
•• The new computational model is the problem space model.
•• Thus, a problem space consists of a set of states and a set of operations that can be
performed on the states.
•• Behavior in a problem space is a two-step process.
•• First, the current operator is chosen based on the current state and then it is
applied to the current state to achieve the new state
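•• A minimal sketch of this two-step behaviour (the states, operators and selection rule
below are invented purely for illustration):
# Problem space sketch: states are numbers, operators are functions on states,
# and the goal is to reach the state 10.
operators = {
    "increment": lambda s: s + 1,
    "double":    lambda s: s * 2,
}

def choose_operator(state, goal):
    """Very simple selection rule: double if that does not overshoot the goal."""
    return "double" if state * 2 <= goal else "increment"

state, goal = 1, 10
while state != goal:
    op = choose_operator(state, goal)    # step 1: choose the current operator
    state = operators[op](state)         # step 2: apply it to reach a new state
    print(op, "->", state)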
3.2. SOCIO ORGANIZATIONAL ISSUES AND STAKEHOLDER
REQUIREMENTS
•• Requirements capture is an important part of all software engineering methodologies
•• It focuses primarily on the functional requirements of the system – what the system
must be able to do – with less emphasis on non-functional human issues such as
usability and acceptability.
•• It may reflect only the management’s view of the user’s needs rather than gathering
information from the users themselves.
•• Stakeholder requirements modeling restores this balance by identifying the needs of
all stakeholders.
•• It includes the user and anyone else affected by the system, within the context in
which it will be used.
3.2.1. Organizational Issues
•• Organizational factors can make or break a system.
•• Studying the immediate work group is not sufficient: any system is used within a
wider organizational context, and the crucial people affected need not be direct users.
•• Before installing a new system, we must understand who will benefit, who will put
in effort and the balance of power in the organization and how it will be affected.
•• Even when a system is successful, it may be difficult to measure the success of it.
3.2.1. (a) Cooperation or Conflict?
•• The term ‘computer-supported cooperative work’ (CSCW) seems to assume that
groups will be acting in a cooperative manner
•• This is true only to some extent.
•• For example, opposing football teams cooperate to the extent that they keep (largely)
within the rules of the game, but their cooperation only goes so far
•• People in organizations and groups have conflicting goals
•• Systems that ignore this are likely to fail.
•• Before installing a new computer system, whether explicitly ‘cooperative’ or not,
one must identify the stakeholders who will be affected by it.
•• These are not just the immediate users, but anyone whose jobs will be altered,
who supplies or gains information from it, or whose power or influence within the
organization will increase or decrease.
3.2.1. (b) Organizational Structure/ Changing power structure
•• The identification of stakeholders cuts across the formal organizational structure
•• The formal structure does not capture actual information transfer and power
relationships
•• All organizations have informal networks that support both social and functional
contacts
•• Technology can be used to change management style and power structures
•• The impact of technology must be analyzed before it is introduced in an organization.
3.2.1. (c) Invisible Worker
•• Telecommunications improvement allows functional groups to be distributed over
different sites.
•• This can take the form of cross-functional neighborhood centers, where workers
from different departments do their jobs in electronic contact with their functional
colleagues.
•• Examples: home-based teleworker
•• This overcomes many traditional barriers: it reduces car travel and allows more
flexibility around family commitments, giving employees both ecological and
economic benefits
•• However, ‘management by presence’ no longer works, and remote employees may
find it harder to gain promotion
•• Many video-based groupware systems are intended to create a sense of engagement,
of active participation and social presence
3.2.1. (d) Who Benefits?
•• One frequent reason for the failure of information systems is that the people who get
the benefit from the system are not the ones who put in the work
•• Such systems create a disproportionate balance between who puts in the effort and
who gets the benefit
•• For example, in a shared online diary, secretaries and subordinates put in the effort
of entering data while managers get the benefit of easily arranged meetings; such
systems often fall into neglect.
•• Information systems should aim for some level of symmetry.
•• If you have to do work for the system, you should obtain some benefit from it.
•• For the shared calendar, this might involve improving the personal user interface, so
that there are definite advantages in using the online system to plan your time rather
than using paper (it could even print out Filofax pages).
•• In addition, if people use electronic organizers one could consider integrating these
into the system.
3.2.1. (e) Free Rider Problem
•• Even when a system is designed so that no particular group is unfairly advantaged,
problems remain
•• It is still possible for individuals to gain the benefit of the system without doing any
of the work
•• This is known as the free rider problem.
•• For example, an electronic conferencing system, if there is plenty of discussion of
relevant topics then there are obvious advantages to subscribing and reading the
contributions.
•• When considering writing a contribution, the effort of doing so may outweigh any
benefits.
•• The total benefit of the system for each user outweighs the costs, but for any particular
decision the balance is overturned.
•• A few free riders in a conference system are often not a problem, as the danger is
more likely from too much activity.
•• In addition, in electronic conferences the patterns of activity and silence may reflect
other factors such as expertise.
3.2.1. (f) Critical Mass
•• A related issue is the need to develop a critical mass of users.
•• In the early days, when telephones were found only in public places, their use as a
form of pervasive interpersonal communication was limited
•• Nowadays, with a large number of people having telephones in their homes, it
becomes worthwhile paying to have a telephone installed, and people routinely use
the telephone for their communication.
•• In cost/benefit terms, the early subscribers probably have a smaller benefit than the
cost.
•• Only when the number of subscribers increases beyond the critical mass does the
benefit for all dominate the cost.
Fig.3.7. Cost/Benefit of System Use.
•• The same is true for all electronic communication systems.
3.2.1. (g) Evaluating the Benefits
•• There are several problems that can arise from the mismatch between information
systems and organizational and social factors
•• Assuming that we have avoided these pitfalls, how do we measure the success of
the system?
•• Two main factors can be considered: job satisfaction and information flow, which
are very hard to measure, and economic benefit
3.2.2. Capturing Requirements
•• There is a need to identify the requirements within the context of use.
•• This must take account of the complex mix of concerns felt by different stakeholders
and of the structure and processes operating in the work group
•• There are several approaches
•• Socio-technical modelling
•• Soft system modelling
•• Participatory design
•• Contextual inquiry
3.2.2. (a) Who are Stakeholders?
•• Identifying stakeholders is key to many of the approaches to requirements capture
•• In an organizational setting it is not simply the end-user who is affected by the
introduction of new technology
•• A stakeholder can be defined as anyone who is affected by the success or failure of
the system
•• But the system will have many stakeholders with potentially conflicting interests
•• The different categories of stakeholders are:
•• Primary stakeholders are people who actually use the system – the end-users.
•• Secondary stakeholders are people who do not directly use the system, but
receive output from it or provide input to it.
•• Tertiary stakeholders are people who do not fall into either of the first two
categories but who are directly affected by the success or failure of the system
•• Facilitating stakeholders are people who are involved with the design,
development and maintenance of the system.
•• For example, consider the classification of stakeholders in an airline booking system:
•• An international airline is considering introducing a new booking system for use
by associated travel agents to sell flights directly to the public.
•• Primary stakeholders: travel agency staff, airline booking staff
•• Secondary stakeholders: customers, airline management
•• Tertiary stakeholders: competitors, civil aviation authorities, customers’
travelling companions, airline shareholders
•• Facilitating stakeholders: design team, IT department staff
•• The aim of the design team is to meet the needs of as many stakeholders as possible
•• The reality is that usually stakeholder needs are in conflict with each other.
•• As a general rule, the priority of stakeholder needs diminishes as we go down the
categories. So primary stakeholders usually take priority over the others. However,
this is not always the case.
•• All of the approaches considered here are concerned with understanding stakeholders
within their organizational context.
3.2.2. (b) Socio-technical Models
•• Early studies of work focused on how humans needed to adapt to technical
innovations.
•• Technological determinism, the view that social change is primarily dictated by
technology, with human and social factors being secondary concerns, was prevalent.
•• Socio-technical models for interactive systems are concerned with technical, social,
organizational and human aspects of design.
•• They recognize the fact that technology is not developed in isolation but as part of a
wider organizational environment.
•• The key focus of the socio-technical approach is to describe and document the
impact of the introduction of a specific technology into an organization
•• Methods vary, but most attempt to capture the following common elements:
•• There is a need to understand why the technology is being proposed and what
problem it is intended to solve.
•• The stakeholders affected, including primary, secondary, tertiary and facilitating,
together with their objectives, goals and tasks.
•• The workgroups within the organization, both formal and informal.
•• The changes or transformations that will be supported.
•• The proposed technology and how it will work within the organization.
•• External constraints and influences and performance measures.
•• Two common socio-technical approaches are:
•• CUSTOM
•• Open System Task Analysis (OSTA)
CUSTOM
•• CUSTOM is a socio-technical methodology
•• It is suitable to use in small organizations
•• It is based on the User Skills and Task Match (USTM) approach
•• CUSTOM focusses on establishing stakeholder requirements: all stakeholders are
considered, not just the end-users.
•• It is applied at the initial stage of design when a product opportunity has been
identified, so the emphasis is on capturing requirements.
•• It is a forms-based methodology
•• There are six key stages to carry out in a CUSTOM analysis:
•• describe organizational context, including primary goals, physical characteristics,
political and economic background
•• identify and describe stakeholders including personal issues, role in the
organization and job
•• identify and describe work-groups whether formally constituted or not
•• identify and describe task–object pairs i.e. tasks to be performed and objects
used
•• identify stakeholder needs: stages 2–4 described in terms of both current and
proposed system - stakeholder needs are identified from the differences between
the two
•• consolidate and check stakeholder requirements against earlier criteria
Open System Task Analysis (OSTA)
•• It is an alternative socio-technical approach
•• It attempts to describe what happens when a technical system is introduced into an
organizational work environment.
•• Like CUSTOM, OSTA specifies both social and technical aspects of the system.
•• But whereas in CUSTOM these aspects are framed in terms of stakeholder
perspectives, in OSTA they are captured through a focus on tasks.
•• OSTA has eight main stages:
•• The primary task which the technology must support is identified in terms of
users’ goals.
•• Task inputs to the system are identified. These may have different sources and
forms that may constrain the design.
•• The external environment into which the system will be introduced is described,
including physical, economic and political aspects.
•• The transformation processes within the system are described in terms of actions
performed on or with objects.
•• The social system is analyzed, considering existing work-groups and relationships
within and external to the organization.
•• The technical system is described in terms of its configuration and integration
with other systems.
•• Performance satisfaction criteria are established, indicating the social and
technical requirements of the system.
•• The new technical system is specified.
3.2.2. (c) Soft System Methodology
•• Socio-technical models focus on identifying requirements from both human and
technical perspectives.
•• However, they assume that a technological solution is being proposed.
•• Soft systems methodology (SSM) arises from the same tradition but takes a view of
the organization as a system of which technology and people are components.
•• There is no assumption of a particular solution
•• It was developed by Checkland
•• It has seven stages:
•• recognition of problem and initiation of analysis
•• detailed description of problem situation
•• rich picture
•• generate root definitions of system
•• CATWOE
•• conceptual model - identifying transformations
•• compare real world to conceptual model
•• identify necessary changes
•• determine actions to effect changes
Fig.3.8. Seven Stages of Soft Systems Methodology (SSM).
Rich Picture
•• It is a useful tool to aid understanding of situation
•• It is informal and relatively intuitive
•• It captures succinctly the potentially conflicting interests of the various stakeholders
and the other influences on a design situation.
•• It provides an understandable summary of the designer’s understanding
•• It can be easily checked with stakeholders.
•• It can even be developed collaboratively with stakeholders as part of the consultation
process – allowing all parties to contribute to the rich picture sketch.
Fig.3.9. A rich picture of travel agency
CATWOE
•• CATWOE stands for:
•• Clients: those who receive output or benefit from the system
•• Actors: those who perform activities within the system
•• Transformations: the changes that are effected by the system
•• Weltanschauung: (from the German) or World View - how the system is
perceived in a particular root definition
•• Owner: those to whom the system belongs, to whom it is answerable and who
can authorize changes to it
•• Environment: the world in which the system operates and by which it is
influenced
3.2.3. Participatory Design
•• Participatory design is a philosophy
•• It encompasses the whole design cycle.
•• It is design in the workplace, where the user is involved not only as an experimental
subject or as someone to be consulted when necessary but as a member of the design
team.
•• Users are therefore active collaborators in the design process, rather than passive
participants whose involvement is entirely governed by the designer.
•• It has three specific characteristics:
•• Design and evaluation are context or work oriented rather than system oriented.
•• It is characterized by collaboration: the user is included in the design team and
can contribute to every stage of the design.
•• The approach is iterative: the design is subject to evaluation and revision at each
stage.
•• It uses the following methods to convey information between the user and designer:
•• Brainstorming- The session provides a range of ideas from which to work. These
can be filtered using other techniques.
•• Storyboarding- means of describing the user’s day-to-day activities as well as the
potential designs and the impact they will have
•• Workshop - used to fill in the missing knowledge of both user and designer and
provide a more focussed view of the design.
•• Pencil and paper exercises- allow designs to be talked through and evaluated with
very little commitment in terms of resources.
Effective Technical and Human Implementation of Computer-based Systems (ETHICS)
•• It was developed by Enid Mumford
•• It is a participatory socio-technical approach
•• It is distinct in its view of the role of stakeholders in the process
•• Here, stakeholders are included as participants in the decision making process.
•• ETHICS considers the process of system development as one of managing change:
conflicts will occur and must be negotiated to ensure acceptance and satisfaction
with the system.
•• It has three levels of participation:
•• Consultative – the weakest form of participation where participants are asked
for their opinions but are not decision makers.
•• Representative – a representative of the participant group is involved in the
decision making process.
•• Consensus – all stakeholders are included in the decision-making process.
3.2.4. Ethnographic methods
•• It is very influential in CSCW
•• It is a form of anthropological study with special focus on social relationships
•• The ethnographer does not enter actively into the situation being studied
•• It seeks to understand social culture
Contextual Inquiry
•• It is an ethnographic approach developed by Holtzblatt
•• The model is one of the investigator being apprenticed to the user to learn about the work
•• The investigation takes place in workplace - detailed interviews, observation,
analysis of communications, physical workplace, artefacts
•• A number of models are created:
•• sequence, physical, flow, cultural, artefact
•• models consolidated across users
•• The output indicates task sequences, artefacts and communication channels needed
and physical and cultural constraints
3.3. COMMUNICATION AND COLLABORATION MODELS
•• Groupware systems, such as email or conferencing systems, involve more than one
person.
•• The field of computer-supported cooperative work (CSCW) encompasses both
specific groupware systems and the effects of computers on cooperative working in
general.
•• Effective communication underlies much collaborative work and many systems aim
to support communication at a distance.
•• Face-to-face communication is often seen as the ideal to which computer-mediated
communication should aim.
3.3.1. Face to Face Communication
•• It is the most primitive and most subtle form of communication
•• It is often seen as the paradigm for computer mediated communication
•• It involves not just speech and hearing, but also the subtle use of body language and
eyegaze.
3.3.1. (a) Transfer effects and personal space
•• We carry forward our expectations and social norms from face-to-face communication
when we use computer-mediated forms of communication.
•• People are very adaptable and can learn new norms to go with new media (for
example, the use of ‘over’ for turn-taking when using a walkie-talkie).
•• The success with new media is often dependent on whether the participants can use
their existing norms.
•• Furthermore, the rules of face-to-face conversation are not conscious.
•• Participants may interpret a breakdown in these unspoken rules as rudeness on the
part of a colleague
•• e.g. personal space
•• video may destroy the mutual impression of distance
•• happily, the ‘glass wall’ effect helps
3.3.1. (b) Eye Contact and gaze
•• Normal conversation uses eye contact extensively, if not as intently.
•• Our eyes tell us whether our colleague is listening or not; they can convey interest,
confusion or boredom
•• Video may spoil direct eye contact.
•• As well as playing a role in establishing rapport between the participants, eye gaze
is useful in establishing the focus of the conversation.
3.3.1. (c) Gestures and body language
•• Much of our communication is through our bodies: gesture (and eye gaze) is used
to indicate the objects under discussion
•• This is called deictic reference, and a head-and-shoulders video view loses much
of it
•• Even when the participants are in the same room, the presence of electronic equipment
can interfere with the body language used in normal face-to-face communication.
•• The fact that attention is focused on keyboard and screen can reduce the opportunities
for eye contact.
•• Also, large monitors may block participants’ views of one another’s bodies, reducing
their ability to interpret gestures and body position.
•• Most computer-supported meeting rooms recess monitors into the desks to reduce
these problems.
3.3.1. (d) Back channels, confirmation and interruption
•• It is easy to think of conversation as a sequence of utterances:
•• i.e., A says something, then B says something, then back to A.
•• This process is called turn-taking
•• It is one of the fundamental structures of conversation.
•• Consider the following transcript:
Alison: Do you fancy that film . . . er . . . ‘The Green’ um . . . it starts at eight.
Brian: Great!
•• The spoken words are not the whole story
•• The nods, grimaces, shrugs of the shoulder and small noises that accompany them
are called back channels.
•• They feed information back from the listener to the speaker at a level below the turn-
taking of the conversation
•• Text-based communication, in electronic conferencing, usually has no back channels
3.3.1. (e) Turn Taking
•• Turn-taking is the process by which the roles of speaker and listener are exchanged.
•• Back channels are often a crucial part of this process.
•• For example, in a meeting the speaker offers the floor (a fraction-of-a-second gap)
and a listener requests the floor (a facial expression, a small noise)
•• Grunts, ‘um’s and ‘ah’s, can be used by the:
•• listener to claim the floor
•• speaker to hold the floor
•• … but often too quiet for half-duplex channels
•• e.g. Trans-continental conferences – special problem
•• lag can exceed the turn taking gap
… leads to a monologue!
3.3.2. Conversation
•• Most analysis of conversation focuses on two-person conversations, but this can
range from informal social chat over the telephone to formal courtroom cross-
examination.
•• There are three uses for theories of conversation in CSCW.
•• First, they can be used to analyze transcripts, for example from an electronic
conference. This can help us to understand how well the participants are coping
with electronic communication.
•• Secondly, they can be used as a guide for design decisions – an understanding
of normal human–human conversation can help avoid blunders in the design of
electronic media.
•• Thirdly, and most controversially, they can be used to drive design – structuring
the system around the theory.
3.3.2. (a) Basic Conversational Structure
•• Consider a conversation between two people
•• The most basic conversational structure is turn taking.
•• The speech within each turn is called an utterance
•• If there is a gap in the conversation, the same party may pick up the thread, even if
she was the last speaker.
•• However, such gaps are normally of short duration, enough to allow turn-claiming if
required, but short enough to consider the speech a single utterance.
•• The utterances of the conversation can be grouped into pairs: a question and an
answer, a statement and an agreement.
•• The answer or response will normally follow directly after the question or statement
and so these are called adjacency pairs. The earlier conversation between Alison and
Brian can be seen as two adjacency pairs: first, Alison asks Brian whether he knows
about the film and he responds; second, she suggests a time to go and he agrees. We
can codify this structure as:
•• A-x, B-x, A-y, B-y,
•• where the first letter denotes the speaker (Alison or Brian) and the second letter
labels the adjacency pair.
3.3.2. (b) Context
•• Utterances are highly ambiguous
•• We use context to disambiguate:
•• Brian: (points) that post is leaning a bit
•• Alison: that’s the one you put in
•• There are two types of context:
•• external context – reference to the environment
•• e.g., Brian’s ‘that’ – the thing pointed to
•• internal context – reference to previous conversation
•• e.g., Alison’s ‘that’ – the last thing spoken of
•• Often contextual utterances involve indexicals:
•• that, this, he, she, it
•• These may be used for internal or external context
•• Also descriptive phrases may be used:
•• external: ‘the corner post is leaning a bit’
•• internal: ‘the post you mentioned’
3.3.2. (c) Topics, focus and forms of utterance
•• Because conversation is so dependent on context, it is important that the participants
have a shared focus.
•• Consider the example,
Alison: Oh, look at your roses . . .
Brian: mmm, but I’ve had trouble with greenfly.
Alison: they’re the symbol of the English summer.
Brian: greenfly?
Alison: no roses silly!
•• Tracing topics is one way to analyze the above conversation.
•• Alison begins – topic is roses
•• Brian shifts topic to greenfly
•• Alison misses shift in focus … breakdown
3.3.2. (d) Breakdown and repair
•• Breakdown happens at all levels:
topic, indexicals, gesture
•• Breakdowns are frequent, but
•• redundancy makes detection easy
(Brian cannot interpret ‘they’re … summer’)
•• correction after breakdown is called repair.
•• people very good at repair
(Brian and Alison quickly restore shared focus)
•• Electronic media may lose some redundancy
⇒ breakdown more severe
3.3.2. (e) Speech act theory
•• A particular form of conversational analysis, speech act theory, has been both
influential and controversial in CSCW.
•• Not only is it an analytic technique, but it has been used as the guiding force behind
the design of a commercial system, Coordinator.
•• The basic premise of speech act theory is that utterances can be characterized by
what they do.
•• For example, if you say ‘I’m hungry’, this has a certain propositional meaning – that
you are feeling hungry – but it may also act as a request for something to eat.
•• Speech act theory concerns itself with the way utterances interact with the actions
of the participants.
•• In a marriage ceremony, for instance, the act of saying the words changes the state
of the couple. Other acts include promises by the speaker to do something and
requests that the hearer do something.
•• These basic acts are called illocutionary points.
3.3.3. Text based Communication
•• Text is the major form of direct communication in asynchronous groupware
systems.
•• Text-based communication is familiar to most people, in that they will have written
and received letters.
•• There are four types of textual communication in current groupware:
•• discrete – directed messages, as in email, with no explicit structure
•• linear – participants’ messages added in temporal order to a single transcript
•• non-linear – messages linked to one another in hypertext fashion
•• spatial – messages arranged on a two-dimensional surface
3.3.3. (a) Back channels and affective state
•• One of the most profound differences between face-to-face and text-based
communication is the lack of fine-grained channels.
•• Text-based communication loses these back channels completely.
•• There is no facial expression or body language in text based communication
⇒ weak back channels
•• So, difficult to convey: affective state – happy, sad, … , illocutionary force – urgent,
important, …
•• Email users have developed explicit tokens of their affective state by the use of
‘flaming’ and ‘smilies’, using punctuation and acronyms; for example:
•• :-) – smiling face, happy
•• :-( – sad face, upset or angry
•• ;-) – winking face, humorous
•• LOL – laughing out loud.
3.3.3. (b) Grounding Constraints
•• Establishing common ground depends on grounding constraints
cotemporality – instant feedthrough
simultaneity – speaking together
sequence – utterances ordered
•• Often weaker in text based communication
e.g., loss of sequence in linear text
•• Network delays or coarse granularity ⇒ overlap
Bethan: how many should be in the group?
Rowena: maybe this could be one of the 4 strongest reasons
Rowena: please clarify what you mean
Bethan: I agree
Rowena: hang on
Rowena: Bethan what did you mean?
•• Message pairs 1&2 and 3&4 were composed simultaneously, so the participants lack
a common experience of the conversation – each sees the messages in a different
order:
Rowena: 213456
Bethan: 124356
•• N.B. breakdown of turn-taking due to poor back channels
•• Recall that context is essential for disambiguation
•• Text-based communication loses the external context, and hence deictic reference
•• (but linking to shared objects can help)
(1) Alison: Brian’s got some lovely roses
(2) Brian: I’m afraid they’re covered in greenfly
(3) Clarise: I’ve seen them, they’re beautiful
•• Both (2) and (3) respond to (1)
•• … but transcript suggests greenfly are beautiful!
•• Non Linear Conversation
3.3.3. (c) Pace and granularity
•• The term pace is being used in a precise sense
•• The pace of the conversation is the rate of such a sequence of connected messages
and replies.
•• Clearly, as the pace of a conversation reduces, there is a tendency for the granularity
to increase.
•• Pace of conversation – the rate of turn taking
•• face-to-face – every few seconds
•• telephone – half a minute
•• email – hours or days
•• face-to-face conversation is highly interactive
•• initial utterance is vague
•• feedback gives cues for comprehension
•• lower pace ⇒ less feedback
⇒ less interactive
•• People create coping strategies when things are difficult
•• Coping strategies for slow communication attempt to increase granularity:
•• eagerness – looking ahead in the conversation game
•• Brian: Like a cup of tea? Milk or lemon?
•• multiplexing – several topics in one utterance
•• Alison: No thanks. I love your roses.
•• Reviewability is another grounding constraint of communication, but this time one
where text-based communication has the advantage over speech.
3.3.4. Group Working
3.3.4. (a) Group Dynamics
•• Work groups constantly change:
•• in structure and in size
•• Several groupware systems have explicit rôles
•• But rôles depend on context and time
•• e.g., a managing director down a mine is under the authority of the foreman
•• and may not reflect duties
•• e.g., subject of biography, author, but now writer
•• Social structure may change: democratic, autocratic, and group may fragment into
sub-groups
•• Groupware systems rarely achieve this flexibility
•• Groups also change in composition
⇒ new members must be able to `catch up’
3.3.4. (b) Physical environment
•• Face-to-face working radically affected by layout of workplace
•• e.g. meeting rooms:
•• recessed terminals reduce visual impact
•• inward facing to encourage eye contact
•• different power positions
Fig.3.10. Power Position – Traditional Meeting room
•• Distributed cognition
•• Traditional cognitive psychology locates thinking in the head
•• Distributed cognition suggests we look out into the world
•• Thinking takes place in interaction
•• with other people
•• with the physical environment
•• Implications for group work:
•• importance of mediating representations
•• group knowledge greater than sum of parts
•• design focus on external representation
Fig.3.11. Power Position – Augmented Meeting room
3.4 HYPERTEXT, MULTIMEDIA AND THE WORLD WIDE WEB
3.4.1. Understanding Hyper text
•• What makes the text ‘hyper’? It includes rich content: graphics, audio, video,
computation and interaction.
•• The traditional texts share a common linear nature.
•• This linearity is partly because of the nature of the media used
•• Text imposes strict linear progression on reader.
Fig.3.12. Linear Progression Structure of Text
•• There are classes of activities where the reader needs to establish their own path
through a text. For example, in the online documentation
•• The learners may want to follow their own paths through material during some
forms of investigative learning.
•• Hence, the linear form of a traditional text can be limiting.
•• For example, when using manuals, the user may not understand all the terms used
in the text, and will have to keep going back to a different series of pages to look up
the definitions, returning to the original pages.
•• Hypertext attempts to get around these limitations of text by structuring it into a
mesh rather than a line
•• This allows a number of different pages to be accessed from the current one
•• Hypertext systems incorporate diagrams, photographs and other media as well as
text. Such systems are often known as multimedia or hypermedia systems
•• A hypertext system comprises a number of pages and a set of links that are used to
connect pages together.
•• The links can join any page to any other page, and there can be more than one link
per page.
Fig.3.13. Non Linear Progression Structure of Hyper Text
•• Hypermedia goes beyond plain text: hypertext systems also include additional media
such as illustrations, photographs, audio and video
•• Links can exist at the end of pages, with the user choosing which one to follow, or
can be embedded within the document itself.
•• Simply clicking on an unknown word takes the user to the relevant place in the
glossary.
•• The positions of these links are known as hot-spots since they respond to mouse
clicks.
•• Hot-spots can also be embedded within diagrams, pictures or maps, allowing the
user to focus his attention on aspects that interest him.
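•• A minimal sketch of the underlying structure (page names are invented for illustration):
a hypertext is simply a set of pages together with the links leaving each page.
# A hypertext as pages plus links: each page maps to the pages it links to,
# and any page may link to any other page.
hypertext = {
    "home":        ["engine", "fuel system", "glossary"],
    "engine":      ["fuel pump", "glossary"],
    "fuel system": ["fuel pump"],
    "fuel pump":   ["glossary"],
    "glossary":    ["home"],
}

def links_from(page):
    """Pages reachable in one click from the given page (its hot-spots)."""
    return hypertext.get(page, [])

print(links_from("engine"))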
3.4.1.1. Rich Content
•• Hypertext systems may also include more dynamic material such as animation,
video and audio clips, and even full computer applications.
Animation
•• Adding motion to images
•• For things that change in time
•• Digital clock faces – seconds tick past or warp into the next minute
•• Analogue clock faces – hands sweep around the dial
•• Live displays: e.g. current system load
•• For showing status and progress
•• Flashing caret at the text entry location
•• Busy cursors (hour-glass, clock, spinning disc)
•• Progress bars
•• For education and training
•• Let students see things happen … as well as being interesting and entertaining
images in their own right
•• For data visualisation
•• Abrupt and smooth changes in multi-dimensional data visualised using animated,
coloured surfaces
•• Complex molecules and their interactions more easily understood when they are
rotated and viewed on the screen
•• for animated characters
•• Wizards and help
Video and Audio
•• Video or audio material as part of hypertext systems for education, entertainment
•• Audio and video material are expensive and time consuming to produce, although
modern computers now offer video and audio editing as standard
•• Digital video cameras bring the production of audio/video material within the reach
of many.
•• In addition to that the standard formats such as QuickTime allow this material to be
embedded in web pages for easy distribution. But, the production of quality video
requires extensive experience
•• Conceivably the biggest problem with audio and video is still the memory
requirements.
•• Longer video sequences are, of course, more linear than plain text. This may be
acceptable if the hypertext is acting effectively as an index for video material.
•• For example, one might have a collection of silent movies online and access them
through a website.
•• Hence, the techniques required to gain maximum benefit from moving images are
very different from those that are used for static or minimal motion displays, and
designers do not have enough experience to start applying the relevant technology
at the relevant time.
•• It may well be that computer interface designers will have to study the techniques
of the film makers and cartoonists before they start to discover the real benefits that
these techniques can provide.
Computation, Intelligence, and Interaction
•• The good thing about a computer is that not only can it show things that have
previously been prepared, it can also do things.
•• For example, a web search can look through all the chapters and find any words you
want.
•• More interactive hypermedia may contain embedded games or applications. For
example, puzzle from the website
•• Hypermedia running on the user’s own computer may interact closely with other
applications.
•• For example, on a Macintosh HyperCard stacks can control applications using
AppleEvents, or on a Windows platform hypermedia can include ActiveX
components
Delivery Technology
•• On the computer
•• Some hypertexts, in particular help systems, are downloaded or installed
permanently on a computer.
•• This has the advantage of instant access and such applications need not use a
standard viewer but may include their own bespoke browsing software.
•• CD-ROM or DVD based hypermedia
•• On the web
•• It is really ubiquitous!
•• The world wide web offers a rich environment for the presentation of information.
•• Documents can be constructed that are very different from paper versions; basic
text can be augmented through the use of hypertext links to other documents,
while graphics can easily be incorporated as pictures, photographs, icons, page
dividing bars, or backgrounds.
•• Pages can also have hypertext links embedded into different regions, which take
the user to a different page or graphic if they are clicked on; these are known as
active image maps.
•• These features allow web pages to become interactive, acting as the interface to
the information as well as its holder.
•• In many countries there is near-universal internet access
•• The web is not just web pages: e.g., many applications have web-based documentation
•• On the Move
•• Mobile phones, PDAs (personal digital assistants), and notebook computers
have all increased the demand to have hypermedia available on the move.
•• Furthermore across many countries governments have sold franchises for high-
bandwidth mobile services.
•• Notebook computers can use just the same mechanisms as desktop computers,
using CD-ROM or DVD for standalone material, or connecting to the web
through wireless access points or through modems linked to mobile phone
networks.
•• However, the fact that the computer is mobile means that location can be used
as a key into context-aware hypermedia showing different content depending on
location.
•• Delivery
•• CD-ROM or DVD (like desktop)
•• Cached content (e.g. AvantGo)
•• WiFi access points or mobile phone networks
•• WAP – for mobile phone, tiny web-like pages
•• Context – who and where
•• tourist guides, directed advertising
Application Areas
•• Rapid prototyping
•• Create live storyboards
•• Mock-up interaction using links
•• Help and Documentation
•• Allows hierarchical contents, keyword search or browsing
•• Just in time learning
•• What you want when you want it
•• (e.g. technical manual for a photocopier)
•• Technical words linked to their definition in a glossary
•• links between similar photocopiers
•• Education
•• Animation and graphics allow students to see things happen
•• Sound adds atmosphere and means diagrams can be looked at while hearing
explanation
•• Non-linear structure allows students to explore at their own pace
•• E-learning
•• Letting education out of the classroom!!
•• e.g. eClass
3.4.2. Issues/Finding things in Hyper text
3.4.2.1 Lost in Hyperspace
•• Even though the non-linear structure of hypertext is very powerful, it can also be
confusing.
•• It is easy to lose track of where you are, a problem that has been called ‘lost in
hyperspace’.
•• There are two elements to this feeling of ‘lostness’.
•• Cognitive and related to content
•• The reader can browse the text in any order. Each page or node has to be
written virtually independently, but, of course, in reality it cannot be written
entirely without any assumption of prior knowledge.
•• Once the reader encounters fragmentary information, it cannot be properly
integrated, leading to confusion about the topic.
•• Navigation and Structure
•• Although the hypertext may have a hierarchical or other structure, the user may
navigate by hyperlinks that cut across this main structure.
•• It is easy to lose track of where you are and where you have been.
•• The solution to the former issue is to design the information better.
•• The solution to the latter is to give users better ways of understanding where they are
and of navigating in the hypertext.
3.4.2.2. Designing Structure
•• In a paper format one is stuck with a single structure, which can lead to tensions: for
example, the fact that in this book structural design is discussed in several places.
•• If multiple structures are used, you have to consider what to do about the common
material.
•• For example, if we examine a car hypermedia text under ‘engine compartment’
and get to the fuel pump, this would also appear in the functional view under ‘fuel
system’. Such common elements may be replicated.
•• This has the advantage that the material can be presented in ways that make sense
given their context, but it can also lead to inconsistencies.
•• Alternatively we may make links across the hierarchy at some level; for example,
the engine compartment may have a diagram of the engine with a labeled arrow
saying ‘fuel pump (fuel system)’, which takes you to the description of the pump in
the fuel system part of the hypertext.
•• In all cases it is important that the structure and the naming of parts is meaningful
for the user
3.4.2.3 Making Navigation Easier
•• No matter how well designed the site structure is, there will still be problems: because
the user does not understand the structure; or because the user has individual needs
that the designer has not foreseen; or because even a good structure is not perfect.
•• However, there are various things that can make it easier for users.
•• Solution is to provide the following
•• maps
•• give an overview of the structure
•• show current location – you are here!
•• recommended routes
•• guided tour or bus tour metaphor
•• linear path through non-linear structure
•• levels of access
•• summary then progressive depth
•• supporting printing!
•• needs linearised content, links back to source
3.4.2.4 History, Bookmark and External Link
•• Hypertext viewers and web browsers usually have some sort of history mechanism
to allow you to see where you have been, and a more stack-based system using the
‘back’ button that allows you to backtrack through previously visited pages.
•• The back button may be used where a user has followed a hyperlink and then decided
it was the wrong place, or alternatively, when browsing back and forth from a central
page that contains lots of links.
•• The latter is called hub and spoke browsing (a simple sketch of the back stack is
given at the end of this subsection)
•• For longer-term revisiting, browsers typically support some form of bookmarking
of favorite pages.
•• Both this and, on the web, external links from other people’s sites mean that users
may enter your hypertext at locations other than the top level or home page.
•• On the web this is called deep linking.
•• Many websites rely on the user remembering where they have come from to make
sense of a page.
•• If a page does not adequately show where it fits, then a user coming to it from
outside may have no idea what site it is from, or why they are reading the material.
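•• Returning to the ‘back’ button mentioned above, its stack-like behaviour can be sketched
as follows (a simplification that ignores the ‘forward’ button of real browsers; the page
names are invented):
# Simplified back-button behaviour: visited pages are pushed onto a stack and
# 'back' pops the stack to return to the previously visited page.
class History:
    def __init__(self, start_page):
        self.stack = [start_page]

    def visit(self, page):
        self.stack.append(page)

    def back(self):
        if len(self.stack) > 1:
            self.stack.pop()
        return self.stack[-1]       # the page now being viewed

h = History("home page")
h.visit("product list")
h.visit("product details")
print(h.back())                     # -> 'product list'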
3.4.2.5 Indices, Directories and Search
•• Index
•• often found in help, documentation, … even books
•• selective: not an exhaustive list of words used
•• Directories
•• on the web an index would be huge, so directories list hand-chosen sites
•• e.g. open directory project, Yahoo!
•• Web Search Engines
•• ‘Crawl’ the web following links from page to page
•• Build full word index (but ignore common ‘stop’ words)
•• Looks up in index when you enter keywords to find pages
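•• A rough sketch of the ‘build a full word index, then look up keywords’ idea (the page
text below is hypothetical):
# Build a tiny inverted index (word -> pages containing it), ignoring common
# 'stop' words, then look up a keyword the way a search engine would.
STOP_WORDS = {"the", "a", "and", "of"}

pages = {
    "page1": "the history of the world wide web",
    "page2": "designing hypertext and the web",
}

index = {}
for page, text in pages.items():
    for word in text.split():
        if word not in STOP_WORDS:
            index.setdefault(word, set()).add(page)

print(index.get("web"))             # pages matching the keyword 'web'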
3.4.3. Web Technology and Issues
•• The web consists of a set of protocols built on top of the internet that, in theory,
allow multimedia documents to be created and read from any connected computer
in the world.
•• The web supports hypertext, graphics, sound and movies, and, to structure and
describe the information, uses a language called HTML (hypertext markup language)
or in some cases, XML (extensible markup language).
•• HTML is a markup language that allows hypertext links, images, sounds and movies
to be embedded into text, and it provides some facilities for describing how these
components are laid out.
•• HTML documents are interpreted by a viewer, known as a browser ; there are many
browsers.
•• Commercial browsers include Netscape Navigator, Microsoft Internet Explorer and
Opera.
•• These offer a graphical interface to the document, controlled by the mouse.
•• Hypertext links are shown by highlighting the text that acts as the link in an alternative
color, and are activated by clicking on the link.
•• A further color is used to indicate a link that has already been visited.
•• Hypertext links can also be embedded into regions within an image.
•• Challenges in Web - lost in hyperspace, information overload
3.4.3.1. Web Servers and Clients
•• A conventional PC program runs and is displayed on one computer, whereas the web
is distributed.
•• Different parts of it run on different computers, often in different countries of the
world.
•• They are linked, of course, by the internet, an enormous global computer network
•• The pages are stored on web servers that may be on a company’s own premises or
in special data centers.
•• Because they are networked, the webmaster for a site can upload pages to the server
from anywhere.
Fig.3.14. Working of Web Server and Client
3.4.3.2. Network Issues
•• The fact that the web is networked raises a series of issues that can impact on
usability.
•• Network capacity is called bandwidth.
•• This is a measure of the amount of information that can pass down the channel in a
given time.
•• However, bandwidth is not the only important measure.
•• There is also the time it takes for a message to get across the network from your
machine to the web server and back.
•• This delay is called latency.
•• Latency is caused by several factors – the finite speed of electrical or optical signals
(no faster than the speed of light), and delays at routers along the way that take
messages from one computer network and pass them on.
•• This latency may not always be the same, varying with the exact route through
the network traveled by a message, the current load on the different routers, etc.
Variability in the latency is called jitter
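•• As a rough worked illustration (the numbers below are invented, not measurements from
the text), the time to fetch a page combines latency and bandwidth:
# Rough model of fetch time: one round trip of latency plus the transfer time
# dictated by bandwidth; jitter would make the latency term vary per request.
latency_s = 0.05                     # 50 ms round trip (assumed)
bandwidth_bytes_per_s = 1_000_000    # about 1 MB/s (assumed)
page_size_bytes = 200_000            # a 200 kB page (assumed)

fetch_time = latency_s + page_size_bytes / bandwidth_bytes_per_s
print(f"approximate fetch time: {fetch_time:.2f} s")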
Fig.3.15. Bandwidth, Latency and Jitter
3.4.4. Static Web Contents
Message and Medium
•• “content is king”
•• the catch phrase of dot.com era … but widely ignored
•• the message … content should be
•• appropriate to the audience, timely, reliable , ….
•• generally worth reading !
•• the medium … page and site design
•• good design – essential to attract readers
… but won’t hide bad material!
•• bad design – may mean good material never seen
•• printable!
Text
•• text style
•• generic styles universal: serif, sans, fixed, bold, italic
•• specific fonts too, but vary between platforms
•• cascading style sheets (CSS) for fine control
… but beware older browsers and fixed font sizes
•• colour … often abused!
•• positioning
•• easy .. left, right justified or centred
•• precise positioning with DHTML … but beware platforms …
•• screen size
•• mathematics … needs special fonts, layout, …
Graphics
•• use with care …
•• N.B. file size and download time
… this image = 1000 words of text
•• affected by size, number of colours, file format
•• backgrounds … often add little, hard to read text
•• speeding it up
•• caching – reuse same graphics
•• progressive formats:
•• image appears in low res and gets clearer
•• formats
•• JPEG – for photos
•• higher compression but ‘lossy’
•• get ‘artefacts’
•• GIF for sharp edges
•• lossless compression
•• PNG supported by current web browsers
•• and action
•• animated gifs for simple animations
•• image maps for images you can click on
Icons
•• on the web just small images
•• for bullets, decoration
•• or to link to other pages
•• lots available!
•• design … just like any interface
•• need to be understood
•• designed as collection to fit …
•• under construction
•• a sign of the inherent incompleteness of the web
•• or just plain lazy ??
Web Colour
•• how many colours?
•• PC monitors – millions – 24 bits per pixel
… but the ‘same’ colour may look very different
•• N.B. usually only 72–96 dpi
•• older computers, PDAs, phones …
•• perhaps only 16 bits or 8 bits per pixel … 256 colours
•• or even greyscale
•• colour palettes
•• choose useful 256 colours
•• different choices, but Netscape ‘web safe’ 216 are common
•• each GIF image has its own palette – use for fast download
Movies and Sound
•• problems
•• size and download… like graphics but worse!
•• may need special plug-ins
•• audio not so bad, some compact formats (MIDI)
•• streaming video
•• play while downloading
•• can be used for ‘broadcast’ radio or TV
Fig.3.16.Animated GIF or Movie needs to download completely
3.4.5 Dynamic Web Contents
•• In the early days, the web was simply a collection of (largely text) pages linked
together.
•• The material was static or slowly changing and much of it authored and updated by
hand.
•• As HCI researchers and designers, we can neither ignore nor uncritically accept new
technology in the web.
•• The active web is here, our job is to understand it and to learn how to use it
appropriately
What happens and where?
•• The ‘what happens where’ question is the heart of architectural design.
•• It has a major impact on the pace of interaction, both feedback, how fast users see
the effects of their own actions, and feedthrough, how fast they see the effects of
others’ actions.
•• Also, where the computation happens influences where data has to be moved to with
corresponding effects on download times and on the security of the data.
•• User View
•• One set of issues is based on what the end-user sees, the end-user here being the
web viewer.
•• What changes? This may be a media stream (video, audio or animation) which
is changing simply because it is the fundamental nature of the medium. It may
be the presentation or view the user has of the underlying content; for example,
sorting by different categories or choosing text-only views for blind users
•• By whom? Who effects the changes? In the case of a media stream or animation,
the changes are largely automatic – made by the computer. The other principal
sources of change are the site author and the user. However, in complex sites
users may see each other’s changes – feedthrough.
•• How often? Finally, what is the pace of change? Months, days, or while you
watch?
•• Technology and Security
•• The fundamental question here is where ‘computation’ is happening. If pages are
changing, there must be some form of ‘computation’ of those changes. Where
does it happen?
•• Client One answer is in the user’s web-browsing client enabled by Java applets,
various plug-ins such as Flash, scripting using JavaScript or VBScript with
dynamic HTML layers, CSS and the DOM (Document Object Model).
•• Server A second possibility is at the web server using CGI scripts (written in
Perl, C, UNIX shell, Java or whatever you like!), Java Servlets, Active Server
Pages or one of the other server-specific scripting languages such as PHP. In
addition, client-side Java applets are only allowed to connect to networked
resources on the same machine as they came from. This means that databases
accessed from client-side JDBC (Java Database Connectivity) must run on the
web server (see below).
•• Another machine Although the pages are delivered from the web server, they
may be constructed elsewhere. For hand-produced pages, this will usually be on
the page author’s desktop PC. For generated pages, this may be a PC or a central
database server.
•• People Of course, as noted earlier, the process of production and update may
even involve people!
3.3.5.1 Fixed Content- Local Interaction and Changing views
•• fixed content
•• use Java applets, Flash, JavaScript+DHTML
•• pros: rapid feedback
•• cons: only local, no feedthrough
•• after interaction … what does ‘back’ do ??
Fig. 3.17. Java applet or JavaScript running locally
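A minimal sketch of this idea, assuming a page with a simple show/hide control: the content is fixed and delivered with the page, and a small script changes only the local view, so feedback is immediate but there is no feedthrough to other users. The element id and text are illustrative.

    <button onclick="toggle()">Show / hide details</button>
    <div id="details" style="display:none">
      Fixed content delivered with the page; the script only changes how it is viewed.
    </div>
    <script>
      // Runs entirely in the browser: rapid feedback, but a purely local change.
      function toggle() {
        var d = document.getElementById("details");
        d.style.display = (d.style.display === "none") ? "block" : "none";
      }
    </script>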
REVIEW QUESTIONS
WITH ANSWERS
PART A – 2 MARKS
1. Define Cognitive Science
Cognitive science is the scientific and interdisciplinary study of the mind, with special
emphasis on the use and acquisition of knowledge and information.
2. List out the fields included in Cognitive Science
Artificial Intelligence, Psychology, Linguistics, Philosophy, Anthropology, Neuroscience
and Education.
3. What is Cognition?
It is derived from the Latin base cognitio, i.e. “know together”, which means the collection
of mental processes and activities used in perceiving, learning, remembering, thinking and
understanding, and the act of those processes.
4. Why Cognitive Science is interdisciplinary?
In order to process the information, human information processor has to use the following
steps:
•• To acquire real-time information about the surrounding environment using
perception or receptors.
•• To use language, making use of information about syntax, semantics and
phonology.
•• To combine different sources of information, derive new information and test its
consistency using reasoning.
•• To make use of information in action planning and guidance.
•• To store and retrieve the information using memory.
•• Hence, cognitive science is an interdisciplinary one.
5. Define Cognitive Model
A theory that produces a computational model of how people perform tasks and solve
problems, based on psychological principles and empirical studies.
6. What is the aim of Cognitive model in Human Computer Interaction?
The aim of a cognitive model in Human Computer Interaction is to support the design and
evaluation of interface alternatives.
7. List out the Categorization of Cognitive Model
a. Competence Vs Performance
b. Computational Flavour
c. No clear divide
8. List out the role of Cognitive Model in HCI
a. It attempts to predict user performance based on a model of cognition in HCI.
b. HCI uses that model to predict how human would complete tasks using a particular
user interface.
c. It limits the design space in HCI
d. HCI can take any specific design decision based on the answers of cognitive model
e. It is used to estimate both total task time and training time.
f. Also used to identify complex, error-prone stages of the design.
9. List out the Advantages of Cognitive Model
a. Don’t need to implement or prototype
b. Don’t need to test with real users
c. Theory has explanatory power, which provides a scientific foundation for design, as in
other engineering fields.
10. List out some issues in Goal Vs task structure.
a. Granularity
b. Conflict
c. Treatment of Error
11. What is GOMS?
GOMS provides a higher-level language for task analysis and UI modeling. It generates
a set of quantitative and qualitative measures based on description of the task and user interface.
It provides a hierarchy of goals and methods to achieve them.
12. List out the Classification of GOMS model
a. Predictive
b. Descriptive
c. Prescriptive
13. List out various GOMS model Variants
a. GOMS - Keystroke-Level Model (KLM)
b. GOMS - Card, Moran, and Newell (CMN-GOMS)
c. Natural GOMS Language (NGOMSL)
d. Cognitive-Perceptual-Motor GOMS (CPM-GOMS)
14. List out the Various advantages and disadvantages of GOMS model
Advantages:
• It gives qualitative & quantitative measures
• GOMS model explains the results
• It provides less work than user study
• It is easy to modify when UI is revised
• Research tools exist to aid the modeling process, which can still be tedious
Disadvantages:
• The GOMS model assumes extreme expert behavior
• Here the Tasks must be goal-directed
• It does not model problem-solving, exploration, etc.
• It is very difficult to model behavior at this level of detail
• It is still hard to model anything but the simplest tasks
• It is not as easy as heuristic analysis, guidelines, or cognitive walkthrough
• GOMS is an interesting theoretical model, but is hardly ever used in practice
15. Give the two types of linguistic dialogue models
(1) Backus- Naur Form (BNF)
(2) Task Action Grammar (TAG)
16. Define CSCW.
‘Computer-Supported Cooperative Work’ (CSCW) means that groups will be acting
in a cooperative manner. This is true to some extent. For example, opposing football teams
cooperate to the extent that they keep (largely) within the rules of the game, but their cooperation
only goes so far.
17. Who are stakeholders?
Stakeholders are key to many of the approaches to requirements capture. In an
organizational setting, it is not simply the end-user who is affected by the introduction of new
technology. A stakeholder can be defined as anyone who is affected by the success or failure of
the system.
18. List out the different categories of stakeholders.
a. Primary stakeholders are people who actually use the system – the end-users.
b. Secondary stakeholders are people who do not directly use the system, but receive
output from it or provide input to it.
c. Tertiary stakeholders are people who do not fall into either of the first two categories
but who are directly affected by the success or failure of the system
d. Facilitating stakeholders are people who are involved with the design, development
and maintenance of the system.
19. Give the stakeholders list for airline systems.
i. Primary stakeholders: travel agency staff, airline booking staff
ii. Secondary stakeholders: customers, airline management
iii. Tertiary stakeholders: competitors, civil aviation authorities, customers’ travelling
companions, airline shareholders
iv. Facilitating stakeholders: design team, IT department staff
20. List out the two common approaches to illustrate the socio-technical models.
(a) CUSTOM
(b) Open System Task Analysis (OSTA)
21. List the eight main stages of OSTA
a. The primary task which the technology must support is identified in terms of users’
goals.
b. Task inputs to the system are identified. These may have different sources and forms
that may constrain the design.
c. The external environment into which the system will be introduced is described,
including physical, economic and political aspects.
d. The transformation processes within the system are described in terms of actions
performed on or with objects.
e. The social system is analyzed, considering existing work-groups and relationships
within and external to the organization.
f. The technical system is described in terms of its configuration and integration with
other systems.
g. Performance satisfaction criteria are established, indicating the social and technical
requirements of the system.
h. The new technical system is specified.
22. Define Rich Picture
It is a useful tool to aid understanding of a situation. It is informal and relatively intuitive.
It captures succinctly the potentially conflicting interests of the various stakeholders and the
other influences on a design situation. It provides an understandable summary of the designer’s
understanding.
23. Define CATWOE
a. Clients: those who receive output or benefit from the system
b. Actors: those who perform activities within the system
c. Transformations: the changes that are effected by the system
d. Weltanschauung: (from the German) or World View - how the system is perceived
in a particular root definition
e. Owner: those to whom the system belongs, to whom it is answerable and who can
authorize changes to it
f. Environment: the world in which the system operates and by which it is influenced
UNIT - 4
MOBILE HCI
4.1 MOBILE ECOSYSTEM
The Internet is actually a complex ecosystem made up of many parts that must all work
together. When you enter a URL into a web browser, you don’t think about everything that has
to happen to see a web page. When you send an email, you don’t care about all the servers,
switches, and software that separate you from your recipient. Everything you do on the Internet
happens in fractions of a second. And you have the perception that all of this happens for free.
The layers of the mobile ecosystem, from top to bottom:
•• Services
•• Applications
•• Application Frameworks
•• Operating Systems
•• Platforms
•• Devices
•• Aggregators
•• Networks
•• Operators
4.1.1 Operators
The base layer in the mobile ecosystem is the operator. Operators go by many names,
depending on what part of the world you happen to be in or who you are talking to. Operators
can be referred to as Mobile Network Operators (MNOs); mobile service providers, wireless
carriers, or simply carriers; mobile phone operators; or cellular companies. In the mobile
community, we officially refer to them as operators, though in the United States, there is a
tendency to call them carriers.
Operators are what essentially make the entire mobile ecosystem work. They are the
gatekeepers to the kingdom. They install cellular towers, operate the cellular network, make
services (such as the Internet) available for mobile subscribers, and they often maintain
relationships with the subscribers, handling billing and support, and offering subsidized device
sales and a network of retail stores.
The operator’s role in the ecosystem is to create and maintain a specific set of wireless
services over a reliable cellular network. That’s it. However, to grow the mobile market over the
past decade, the operator has been required to take a greater role in the mobile ecosystem, doing
much more than just managing the network. For example, they have had to establish trust with
subscribers to handle the billing relationship and to offer services.
World’s largest mobile operators (subscribers in millions)
1. China Mobile. Markets: China (including Hong Kong) and Pakistan. Technology: GSM, GPRS, EDGE, TD-SCDMA. Subscribers: 436.12
2. Vodafone. Markets: United Kingdom, Germany, Italy, France, Spain, Romania, Greece, Portugal, Netherlands, Czech Republic, Hungary, Ireland, Albania, Malta, Northern Cyprus, Faroe Islands, India, United States, South Africa, Australia, New Zealand, Turkey, Egypt, Ghana, Fiji, Lesotho, and Mozambique. Technology: GSM, GPRS, EDGE, UMTS, HSDPA. Subscribers: 260.5
3. Telefónica. Markets: Spain, Argentina, Brazil, Chile, Colombia, Ecuador, El Salvador, Guatemala, Mexico, Nicaragua, Panama, Peru, Uruguay, Venezuela, Ireland, Germany, United Kingdom, Czech Republic, Morocco, and Slovakia. Technology: CDMA, CDMA2000 1x, EV-DO, GSM, GPRS, EDGE, UMTS, HSDPA. Subscribers: 188.9
4. América Móvil. Markets: United States, Argentina, Chile, Colombia, Paraguay, Uruguay, Mexico, Puerto Rico, Ecuador, Jamaica, Peru, Brazil, Dominican Republic, Guatemala, Honduras, Nicaragua, and El Salvador. Technology: CDMA, CDMA2000 1x, EV-DO, GSM, GPRS, EDGE, UMTS, HSDPA. Subscribers: 172.5
5. Telenor. Markets: Norway, Sweden, Denmark, Hungary, Montenegro, Serbia, Russia, Ukraine, Thailand, Bangladesh, Pakistan, and Malaysia. Technology: GSM, GPRS, EDGE, UMTS, HSDPA. Subscribers: 143.0
6. China Unicom. Markets: China. Technology: GSM, GPRS. Subscribers: 127.6
7. T-Mobile. Markets: Germany, United States, United Kingdom, Poland, Czech Republic, Netherlands, Hungary, … (remainder truncated in the source). Technology: GSM, GPRS, EDGE, UMTS. Subscribers: 126.6
4.1.2 Networks
Operators operate wireless networks. Remember that cellular technology is just a radio
that receives a signal from an antenna. The type of radio and antenna determines the capability
of the network and the services you can enable on it.
You’ll notice that the vast majority of networks around the world use the GSM standard,
using GPRS or GPRS EDGE for 2G data and UMTS or HSDPA for 3G. We also have CDMA
(Code Division Multiple Access) and its 2.5G hybrid CDMA2000, which offers greater coverage
than its more widely adopted rival. So in places like the United States or China, where people
are more spread out, CDMA is a great technology. It uses fewer towers, giving subscribers
fewer options as they roam networks.
GSM mobile network evolutions (theoretical maximum data speeds)
2G – second generation of mobile phone standards and technology:
•• GSM (Global System for Mobile communications): 12.2 KB/sec
•• GPRS (General Packet Radio Service): max 60 KB/sec
•• EDGE (Enhanced Data rates for GSM Evolution): 59.2 KB/sec
•• HSCSD (High-Speed Circuit-Switched Data): 57.6 KB/sec
3G – third generation of mobile phone standards and technology:
•• W-CDMA (Wideband Code Division Multiple Access): 14.4 MB/sec
•• UMTS (Universal Mobile Telecommunications System): 3.6 MB/sec
•• UMTS-TDD (UMTS + Time Division Duplexing): 16 MB/sec
•• TD-CDMA (Time Divided Code Division Multiple Access): 16 MB/sec
•• HSPA (High-Speed Packet Access): 14.4 MB/sec
4.1.3 Devices
What you call phones, the mobile industry calls handsets or terminals. These are terms
that I think are becoming outdated with the emergence of wireless devices that rely on operator
networks, but do not make phone calls. The number of these “other” devices is a small piece of
the overall pie right now, but it’s growing rapidly.
Let’s focus on the biggest slice of the device pie—mobile phones. As of 2008, there
are about 3.6 billion mobile phones currently in use around the world; just more than half the
planet’s population has a mobile phone
[Figure: worldwide distribution of mobile phones in use, by region (Africa, Middle East, China, United States, Eastern Europe, Pacific Rim, India, European Union, Latin America), alongside the split between feature phones (roughly 85% of the market) and smartphones.]
Most of these devices are feature phones, making up the majority of the marketplace.
Smartphones make up a small sliver of worldwide market share and maintain a healthy
percentage in the United States and the European Union; smartphone market share is growing
with the introduction of the iPhone and devices based on the Android platform. As next-
generation devices become a reality, the distinction between feature phones and smartphones
will go away. In the next few years, feature phones will largely be located in emerging and
developing markets.
4.1.4 Platforms
A mobile platform’s primary duty is to provide access to the devices. To run software
and services on each of these devices, you need a platform, or a core programming language
in which all of your software is written. Like all software platforms, these are split into three
categories: licensed, proprietary, and open source.
Licensed
Licensed platforms are sold to device makers for nonexclusive distribution on devices.
The goal is to create a common platform of development Application Programming Interfaces
(APIs) that work similarly across multiple devices with the least possible effort required to adapt
for device differences, although this is hardly reality. Following are the licensed platforms:
Java Micro Edition (Java ME)
Formerly known as J2ME, Java ME is by far the most predominant software platform
of any kind in the mobile ecosystem. It is a licensed subset of the Java platform and
provides a collection of Java APIs for the development of software for resource-constrained
devices such as phones.
Binary Runtime Environment for Wireless (BREW)
BREW is a licensed platform created by Qualcomm for mobile devices, mostly for the
U.S. market. It is an interface-independent platform that runs a variety of application
frameworks, such as C/C++, Java, and Flash Lite.
Windows Mobile
Windows Mobile is a licensable and compact version of the Windows operating system,
combined with a suite of basic applications for mobile devices that is based on the
Microsoft Win32 API.
LiMo
LiMo is a Linux-based mobile platform created by the LiMo Foundation. Although
Linux is open source, LiMo is a licensed mobile platform used for mobile devices.
LiMo includes SDKs for creating Java, native or mobile web applications using the
WebKit browser framework.
Proprietary
Proprietary platforms are designed and developed by device makers for use on their
devices. They are not available for use by competing device makers. These include:
Palm
Palm uses three different proprietary platforms. Their first and most recognizable is
the Palm OS platform based on the C/C++ programming language; this was initially
developed for their Palm Pilot line, but is now used in low-end smartphones such as
the Centro line. As Palm moved into higher-end smartphones, they started using the
Windows Mobile-based platform for devices like the Treo line. The most recent platform
is called webOS, is based on the WebKit browser framework, and is used in the Prē line.
BlackBerry
Research in Motion maintains their own proprietary Java-based platform, used
exclusively by their BlackBerry devices.
iPhone
Apple uses a proprietary version of Mac OS X as a platform for their iPhone and iPod
touch line of devices, which is based on UNIX.
Open Source
Open source platforms are mobile platforms that are freely available for users to down-
load, alter, and edit. Open source mobile platforms are newer and slightly controversial, but they
are increasingly gaining traction with device makers and developers. Android is one of these
platforms. It is developed by the Open Handset Alliance, which is spearheaded by Google.
The Alliance seeks to develop an open source mobile platform based on the Java programming
language.
4.1.5 Operating Systems
It used to be that if a mobile device ran an operating system, it was most likely considered
a smartphone. But as technology gets smaller, a broader set of devices supports operating
systems.
Operating systems often have core services or toolkits that enable applications to talk to
each other and share data or services. Mobile devices without operating systems typically run
“walled” applications that do not talk to anything else.
Although not all phones have operating systems, the following are some of the most
common:
Symbian
Symbian OS is an open source operating system designed for mobile devices, with
associated libraries, user interface frameworks, and reference implementations of
common tools.
Windows Mobile
Windows Mobile is the mobile operating system that runs on top of the Windows Mobile
platform.
Palm OS
Palm OS is the operating system used in Palm’s lower-end Centro line of mobile phones.
Linux
The open source Linux is being increasingly used as an operating system to power
smartphones, including Motorola’s RAZR2.
Mac OS X
A specialized version of Mac OS X is the operating system used in Apple’s iPhone and
iPod touch.
Android
Android runs its own open source operating system, which can be customized by
operators and device manufacturers.
You might notice that many of these operating systems share the same names as the
platforms on which they run. Mobile operating systems are often bundled with the platform
they are designed to run on.
4.1.6 Application Frameworks
The first layer the developer can access is the application framework or API released by
one of the companies mentioned already. The first layer that you have any control over is the
choice of application framework.
Application frameworks often run on top of operating systems, sharing core services
such as communications, messaging, graphics, location, security, authentication, and many
others.
Java
Applications written in the Java ME framework can often be deployed across the
majority of Java based devices, but given the diversity of device screen size and processor
power, cross device deployment can be a challenge.
Most Java applications are purchased and distributed through the operator, but they can
also be downloaded and installed via cable or over the air.
S60
The S60 platform, formerly known as Series 60, is the application platform for devices
that run the Symbian OS. S60 is often associated with Nokia devices, since Nokia owns the
platform, but it also runs on several non-Nokia devices. S60 is an open source framework.
S60 applications can be created in Java, the Symbian C++ framework, or even Flash
Lite.
BREW
Applications written in the BREW application framework can be deployed across
the majority of BREW-based devices, with slightly less cross-device adaptation than other
frameworks.
However, BREW applications must go through a costly and time-consuming certification process
and can be distributed only through an operator.
Flash Lite
Adobe Flash Lite is an application framework that uses the Flash Lite and ActionScript
frameworks to create vector-based applications. Flash Lite applications can be run within the
Flash Lite Player, which is available in a handful of devices around the world.
Flash Lite is a promising and powerful platform, but there has been some difficulty
getting it on devices. A distribution service for applications written in Flash Lite is long overdue.
Windows Mobile
Applications written using the Win32 API can be deployed across the majority of
Windows Mobile based devices. Like Java, Windows Mobile applications can be downloaded
and installed over the air or loaded via a cable connected computer.
Cocoa Touch
Cocoa Touch is the API used to create native applications for the iPhone and iPod touch.
Cocoa Touch applications must be submitted and certified by Apple before being included in
the App Store. Once in the App Store, applications can be purchased, downloaded, and installed
over the air or via a cable connected computer.
Android SDK
The Android SDK allows developers to create native applications for any device that
runs the Android platform. By using the Android SDK, developers can write applications in
C/C++ or use a Java virtual machine included in the OS that allows the creation of applications
with Java, which is more common in the mobile ecosystem.
Web Runtimes (WRTs)
Nokia, Opera, and Yahoo! provide various Web Runtimes, or WRTs. These are meant
to be miniframeworks, based on web standards, to create mobile widgets. Both Opera’s and
Nokia’s WRTs meet the W3C recommended specifications for mobile widgets.
Although WRTs are very interesting and provide access to some device functions using
mobile web principles, they tend to be more complex than just creating a simple mobile web
app, as they force the developer to code within an SDK rather than simply write a web app.
This is reflected in the number of mobile web apps written for the iPhone versus the number
written for other, more full-featured WRTs.
WebKit
With Palm’s introduction of webOS, a mobile platform based on WebKit, given WebKit’s
predominance as a mobile browser included in platforms like the iPhone, Android and S60, and
given that the vast majority of mobile web apps are written specifically for WebKit, we can
now refer to WebKit as a mobile framework in its own right.
WebKit is a browser technology, so applications can be created simply by using
web technologies such as HTML, CSS, and JavaScript. WebKit also supports a number of
recommended standards not yet implemented in many desktop browsers.
The Web
The Web is the only application framework that works across virtually all devices and
all platforms. Although innovation and usage of the Web as an application framework in mobile
has been lacking for many years, increased demand to offer products and services outside of
operator control, together with a desire to support more devices in shorter development cycles,
has made the Web one of the most rapidly growing mobile application platforms to date.
4.1.7 Applications
Application frameworks are used to create applications, such as a game, a web browser,
a camera, or media player. Although the frameworks are well standardized, the devices are
not. The largest challenge of deploying applications is knowing the specific device attributes and
capabilities. For example, if you are creating an application using the Java ME application
framework, you need to know what version of Java ME the device supports, the screen
dimensions, the processor power, the graphics capabilities, the number of buttons it has, and
how the buttons are oriented. Multiply that by just a few additional handsets and you have
hundreds of variables to consider when building an application. Multiply it by the most popular
handsets in a single market and you can easily have a thousand variables, quickly dooming your
application’s design or development.
Although mobile applications can typically provide an excellent user experience, it
almost always comes at a fantastic development cost, making it nearly impossible to create a
scalable product that could potentially create a positive return on investment.
A common alternative these days is creating applications for only one platform, such as
the iPhone or Android. By minimizing the number of platforms the developer has to support and
utilizing modern application frameworks, the time and cost of creation go down significantly.
This strategy may be perfectly acceptable to many, but what about the rest of the market? Surely
people without a more costly smartphone should be able to benefit from mobile applications,
too.
Many see the web browser as the solution to this problem and the savior from the insanity
of deploying multidevice applications. The mobile web browser is an application that renders
content that is device, platform, and operating system independent. The web browser knows
its limitations, enabling content to scale gracefully across multiple screen sizes. However, like
all applications, mobile web browsers suffer from many of the same device fragmentation
problems.
You could consider the Motorola RAZR to be the epitome of the mobile ecosystem
of yesterday. It’s been provisioned to numerous operators around the world. It’s the perfect
example not just of how crazy deploying mobile applications to devices can be, but also of just
how bad mobile web browser fragmentation can be. It is a highly prolific device and one that
people are often advised to support, due to its market penetration. But that is much easier
said than done.
4.1.8 Services
Finally, we come to the last layer in the mobile ecosystem: services. Services include
tasks such as accessing the Internet, sending a text message, or being able to get a location —
basically, anything the user is trying to do.
The state of the current market is evaluated with reference to current market and data,
financial and sales trends, user surveys, and assessing the impressions of news media outlets.
Each of the top hardware and software platforms is reviewed and evaluated to provide a
thorough understanding of the competitive landscape of the smart phone and tablet market.
In addition to updating the research with respect to latest data and market developments,
the sixth edition report adds invaluable insights into HTML5, Compact Coding, Input Controls
and Sensors, Connection Type Issues, Smartphone Market Performance, Augmented Reality,
and more.
This report is an essential read for any organization directly or indirectly involved in the
mobile marketplace.
Report Benefits:
•• Forecasts of many types
•• Identify opportunities for mobile apps and widgets
•• Understand the mobile widget ecosystem
•• Understand the role and importance of HTML5
•• Identify key emerging applications areas including Augmented Reality
Target Audience:
•• Application developers
•• Wireless portal providers
•• Mobile network operators
•• Wireless device manufacturers
•• Mobile virtual network operators
•• Mobile application store providers
•• Content development and management companies
•• Advertising companies (online and mobile marketing)
4.2 TYPES OF MOBILE APPLICATIONS
Mobile application, most commonly referred to as an app, is a type of application
software designed to run on a mobile device, such as a Smartphone or tablet computer. Mobile
applications frequently serve to provide users with similar services to those accessed on PCs.
Apps are generally small, individual software units with limited function. This type of software
has been popularized by Apple Inc. and its App Store, which sells thousands of applications for
the iPhone, iPad and iPod Touch.
A mobile application also may be known as an app, Web app, online app, iPhone app
or smart phone app. A mobile app is a software application developed specifically for use on
small, wireless computing devices, such as smart phones and tablets, rather than desktop or
laptop computers.
Mobile apps are designed with consideration for the demands and constraints of the
devices and also to take advantage of any specialized capabilities they have. A gaming app, for
example, might take advantage of the iPhone’s accelerometer.
Mobile apps are sometimes categorized according to whether they are web-based or
native apps, which are created specifically for a given platform. A third category, hybrid apps,
combines elements of both native and Web apps. As the technologies mature, it’s expected that
mobile application development efforts will focus on the creation of browser-based, device
agnostic Web applications.
4.2.1 WIDGET
The word widget (pronounced wih-jit) is a tech word that has many definitions
depending on the context being used. The dictionary defines a widget as a small mechanical
device; a gadget, or a manufactured item that is unnamed, but in the era of Internet and computers
this definition doesn’t fit when talking about widgets in relation to software and code.
What are Widgets?
In a programming context, widget is a generic term for the part of a GUI that allows
the user to interface with the application and operating system. Widgets display information
and invite the user to act in a number of ways. Typical widgets that you may encounter include
buttons, dialog boxes, pop-up windows, selection boxes, windows, toggle switches and forms.
The term widget also can be used to refer to either the graphic component or its controlling
program or a combination of both.
Today when people use the word widget, in a Web 2.0 world, they are referring to piece
of self-contained code a small application actually, that opens up a doorway to a much larger
application. To this end, you can find widgets that provide stock quotes and news, search boxes
for Google, eBay and other popular search-based Web sites, clocks, counters, games, feeds and
more.
Adding to the confusion is the fact that widgets used on the desktop or Web are also
called gadgets. In fact, in Windows Vista, Microsoft uses the word gadget, but it is still a widget.
Is a Widget an App?
Widgets and applications do not mean the same thing, but they are similar terms. In
mobile computing, for example, we tend to think of widgets and apps as “objects” that enhance
the user experience. Mobile widgets provide a simple interface to display live feeds (e.g.
weather or stock news). Apps are full applications that typically require mobile users to pay and
download - things like games, contact and calendar apps, and so on. Widgets can be thought
of as “miniature applications” that are embedded in other applications on your mobile device.
Example: A live local weather news feed would be a widget that is embedded on your
mobile device home screen (the home screen is the application).
Desktop Widgets
Many widgets are designed to run on your desktop - a small application that provides
specific information to the user, and can be functional or fun. If you’re using the Windows
operating system, you can use a widget engine and then choose widgets to install to your
desktop. Popular desktop widget engines include Dashboard, which was released with Mac OS
X v10.4, Google Desktop, and SpringWidgets.
What is a Web Widget?
Advancing on desktop widget technology, Web widgets are another type of widget that
has gained in popularity, especially with the increased interest in personal publishing. Web
widgets are pieces of code that you can embed right on to your Web page, or personal publishing
space such as Blogger or WordPress.
Web widgets work like a mini-application that you use to provide information to visitors
on websites. They include things like search widgets, eBay trackers, news headlines, Twitter
feeds, Facebook friend (or fan) lists, games, clocks and other miniature “live” apps.
Web widgets are easy to use and require you only to copy and paste a snippet of code
to display the widget, which is hosted on the developer’s server. Widget directories, such as
Widgetbox enable you to search for a specific type of widget, customize it for your own use,
then copy and then paste the code to your own pages.
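As a rough sketch of what such an embed snippet usually looks like (the provider URL and element id below are hypothetical, not a real service):

    <!-- Pasted into your page; the widget itself is served from the provider's server. -->
    <div id="weather-widget">Loading local weather…</div>
    <script src="https://widgets.example.com/weather.js" async></script>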
Widget Development for Beginners
Many widget tools help developers create innovative widgets and are useful for novices
as well. Widgetbox’s free developer services offer analytics for tracking, allow consumers to
use your widgets on their own sites and on popular blogging services, host the widget, and take
care of including options that let users customize your widget.
For the Mac community, Apple Dashboard Widgets are created using a mix of HTML,
JavaScript, and CSS. The Apple developer connection provides tools and resources for Dashcode
that can be used by interested developers. There is also an Android Dev Guide to help you
develop applications for the Android platform. Microsoft also offers a Dev Center to help you
build Metro style apps for Windows 8.
4.2.2 GAMES
A mobile game is a video game played on a feature phone, smartphone, smart watch,
PDA, tablet computer, portable media player or calculator.
The earliest known game on a mobile phone was a Tetris variant on the Hagenuk MT-
2000 device from 1994.
In 1997, Nokia launched the very successful Snake. Snake (and its variants), which was
preinstalled in most mobile devices manufactured by Nokia, has since become one of the most
played video games and is found on more than 350 million devices worldwide. A variant of the
Snake game for the Nokia 6110, using the infrared port, was also the first two-player game for
mobile phones.
Today, mobile games are usually downloaded from app stores as well as from mobile
operator’s portals, but in some cases are also preloaded in the handheld devices by the OEM
or by the mobile operator when purchased, via infrared connection, Bluetooth, memory card or
side loaded onto the handset with a cable.
Downloadable mobile games were first commercialized in Japan circa the launch of
NTT DoCoMo’s I-mode platform in 1999, and by the early 2000s were available through a
variety of platforms throughout Asia, Europe, North America and ultimately most territories
where modern carrier networks and handsets were available by the mid-2000s. However, mobile
games distributed by mobile operators and third party portals (channels initially developed to
monetize downloadable ringtones, wallpapers and other small pieces of content using premium
SMS or direct carrier charges as a billing mechanism) remained a marginal form of gaming until
Apple’s iOS App Store was launched in 2008. As the first mobile content marketplace operated
directly by a mobile platform holder, the App Store significantly changed the consumer behavior
and quickly broadened the market for mobile games, as almost every smart phone owner started
to download mobile apps.
Mobile games are games designed for mobile devices, such as Smartphones, feature
phones, pocket PCs, personal digital assistants (PDA), tablet PCs and portable media players.
Mobile games range from basic (like Snake on older Nokia phones) to sophisticated (3D and
augmented reality games).
Today’s mobile phones - particularly smartphones - have a wide range of connectivity
features, including infrared, Bluetooth, Wi-Fi and 3G. These technologies facilitate wireless
multiplayer games with two or more players.
4.3 HOW IS MOBILE DIFFERENT?
The first thing we need to understand about mobile design is that it’s different – and not
just with regards to size. The physicality and specifications of mobile devices impart different
design affordances and requirements. Because mobile devices are lighter and more portable, we
often find it more convenient to use them. Consequently, through this more regular use, we
feel a unique, emotional connection to them.
Physicality and specifications
Most mobile devices employ touch screens, where users rely on gestures – in addition
to simple interface elements – to interact with them. Because of their smaller dimensions, we
sometimes expect the content structures to be simpler and smaller. Also, because of their limited
bandwidth and connectivity, mobile devices require designs to be optimized for loading time,
with reduced data requirements.
How, where and when?
Because we have constant access to our mobile devices, we tend to use them more
frequently. They come with us on the bus, walking down the street, or watching TV. We often
use them while “doing” something else. This means we may use the device under difficult
viewing conditions, or among a variety of distractions.
How we behave and feel?
Finally, we have different attitudes, behaviors and priorities while using mobile devices.
As part of their Going Mobile 2012 study, User Experience Design agency Foolproof found
that these devices have given us a new sense of freedom and control. In turn, some users feel
a very real affection for their mobile device. Foolproof found that 63% of people felt lost if
their smartphone was not in easy reach. They described their mobile devices as ‘alive’ … an
extension of their own body and personality
Because mobile devices have fundamentally changed user expectations, it’s extremely
important that we, as designers, follow a user-centered design process to arrive at our solutions.
The only problem is that our traditional best practices may not always apply.
How mobile affects designers?
Mobile’s differences directly impact all parts of the user-centered design process: from
user research to the final development and testing of the solution. The biggest parts of the
process it affects are our delivery methods and our information architecture.
Mobile delivery methods
Unlike traditional websites, there are four popular mobile delivery methods. Mobile
users that choose to view content in their browser are best served with either a mobile-specific
site – optimized for mobile devices – or a responsive site – which reorients/arranges itself
for mobile devices. Those who choose to install an application on their phone either receive
a native app(lication) or a hybrid app. Native apps are self-contained: every screen of the
application is defined up front. Hybrid apps offer a bit more flexibility, loading content from the
web (as it’s viewed in a browser) but providing users with an “app-like” interface (or chrome).
Each delivery method has different pros and cons. Choose what’s right for you based on
your project’s design context.
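For the responsive-site option, a minimal sketch of the underlying technique is a CSS media query that reflows the layout below an assumed breakpoint; the class names and the 600px value are illustrative.

    <style>
      /* Wide screens: two columns side by side. */
      .content { float: left;  width: 65%; }
      .sidebar { float: right; width: 30%; }

      /* Narrow (mobile) screens: stack the columns at full width. */
      @media (max-width: 600px) {
        .content, .sidebar { float: none; width: 100%; }
      }
    </style>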
Mobile Information Architecture
Mobile devices have their own set of Information Architecture patterns, too. While the
structure of a responsive site may follow more “standard” patterns, native apps, for example,
often employ navigational structures that are tab-based. Again, there’s no “right” way to architect
a mobile site or application. Instead, let’s take a look at some of the most popular patterns:
Hierarchy, Hub & spoke, Nested doll, Tabbed view, Bento box and Filtered view:
Hierarchy
The hierarchy pattern is a standard site structure with an index page and a series of sub
pages. If you are designing a responsive site you may be restricted to this, however introducing
additional patterns could allow you to tailor the experience for mobile.
Luke Wroblewski’s Mobile First approach helps us focus on the important stuff first:
features and user journeys that will help us create great user experiences.
Good for
Organizing complicated site structures that need to follow a desktop site’s structure.
Watch for
Navigation - Multi-faceted navigation structures can present a problem to people using
small screens.
Hub & spoke
A hub and spoke pattern gives you a central index from which users will navigate out.
It’s the default pattern on Apple’s iPhone. Users can’t navigate between spokes but must return
to the hub, instead. This has historically been used on desktop where a workflow is restricted
(generally due to technical restrictions such as a form or purchasing process); however, this is
becoming more prevalent within the mobile landscape due to users being focused on one task,
as well as the form factor of the device, which makes a global navigation more difficult to use.
Good for
Multi-functional tools, each with a distinct internal navigation and purpose.
Watch for
Users that want to multi-task.
Nested doll
The nested doll pattern leads users in a linear fashion to more detailed content. When
users are in difficult conditions this is a quick and easy method of navigation. It also gives the
user a strong sense of where they are in the structure of the content due to the perception of
moving forward and then back.
Good for
Apps or sites with singular or closely related topics. This can also be used as a sub
section pattern inside other parent patterns, such as the standard hierarchy pattern or hub and
spoke.
Watch for
Users won’t be able to quickly switch between sections so consider whether this will be
suitable, rather than a barrier to exploring content.
Tabbed view
This is a pattern that regular app users will be familiar with. It’s a collection of sections
tied together by a toolbar menu. This allows the user to quickly scan and understand the complete
functionality of the app when it’s first opened.
Good for
Tools based apps with a similar theme - Multitasking.
Watch for
Complexity-This pattern is best suited to very simple content structures.
Bento Box/Dashboard
The bento box or dashboard pattern brings more detailed content directly to the index
screen by using components to display portions of related tools or content. This pattern is more
suited to tablet than mobile due to its complexity. It can be really powerful as it allows the user
to comprehend key information at a glance, but does heavily rely on having a well-designed
interface with information presented clearly.
Good for
Multi-functional tools and content-based tablet apps that have a similar theme.
Watch for
The tablet screen gives you more space to utilize this pattern well; however it becomes
especially important to understand how a user will interact with and between each piece of
content, to ensure that the app is easy, efficient and enjoyable to use.
Filtered view
Finally, a filtered view pattern allows the user to navigate within a set of data by selecting
filter options to create an alternative view. Filtering, as well as using faceted search methods,
can be an excellent way to allow users to explore content in a way that suits them.
Good for
Apps or sites with large quantities of content, such as articles, images and videos. Can
be a good basis for magazine style apps or sites, or as a sub pattern within another navigational
pattern.
Watch for
Mobile. Filters and faceted search can be difficult to display on a smaller screen due to
their complexity.
4.4 MOBILE 2.0
Mobile 2.0 refers to a perceived next generation of mobile internet services that leverage the
social web, or what some call Web 2.0. The social web includes social networking sites and wikis
that emphasize collaboration and sharing amongst users. Mobile Web 2.0, with an emphasis on
Web, refers to bringing Web 2.0 services to the mobile internet, i.e., accessing aspects of Web
2.0 sites from mobile internet browsers.
By contrast, Mobile 2.0 refers to services that integrate the social web with the core
aspects of mobility – personal, localized, always-on and ever-present. These services are
appearing on wireless devices such as Smartphones and multimedia feature phones that are
capable of delivering rich, interactive services as well as being able to provide access to
the full range of mobile consumer touch points including talking, texting, capturing, sending,
listening and viewing.
Enablers of Mobile 2.0
•• Ubiquitous Mobile Broadband Access
•• Affordable, unrestricted access to enabling software platforms, tools and technologies
•• Open access, with frictionless distribution and monetization
Characteristics of Mobile 2.0
•• The social web meets mobility
•• Extensive use of User-Generated Content, so that the site is owned by its contributors
•• Leveraging services on the web via mashups
•• Fully leveraging the mobile device, the mobile context, and delivering a rich mobile
user experience
•• Personal, Local, Always-on, Ever-present
Implementations of Mobile 2.0
Mobile 2.0 is still at the development stage but there are already a range of sites available,
both for so-called “smartphones” and for more ordinary “feature” mobile phones. The best
examples are Micro-blogging services Jaiku, Twitter, Pownce, CellSpin, and open platforms
for creating sms services like Fortumo and Sepomo or providing information and services like
mobeedo.
The largest mobile telecoms body, the GSM Association, representing companies
serving over 2 billion users, is backing a project called Telco 2.0, designed to drive this area.
1. How to Design Android UI/GUIs in Android Studio
If you’ve had enough of using Eclipse for GUI design, you will enjoy this video tutorial.
It explains to Eclipse users how to use Android Studio to create graphical user interfaces
for your app. The video is long (close to an hour), but flows logically and is easy to
follow.
2. Adaptive Layout Tutorial in iOS 9: Getting Started
The introduction of Adaptive Layout in iOS9 for supporting multiple screen sizes
was a game changer. If you are not familiar with the concept, this tutorial is a detailed
introduction to Adaptive Layout for iOS9. The tutorial offers in-depth explanation of all
you need to know about Adaptive Layout, such as universal storyboards, size classes,
layout and font customizations and the preview assistant editor. This is the most useful
tutorial on the topic I could find and it’s a must read for any iOS designer.
3. Design iOS 8 Apps from Scratch: Learn by Designing the Health App in Photoshop
iOS 8 isn’t retired yet and if you still need to support it, this tutorial is essential viewing.
The tutorial is over half an hour and whilst the explanations are good, be prepared to watch
it more than once to fully understand everything. If you are not familiar with Photoshop
itself, the learning curve will be steeper. The tutorial comes with downloadable files, so
you can experiment with the design steps on your own.
4. Introduction to Material Design
Material Design is now fundamental to Android mobile design and if you are designing
for Android, you should be following its principles. Among the numerous resources on
Material Design, the place to start is the Material Design Specification from Google. If
you want to know more, and above all, see practical examples of Material Design, read
these Material Design tutorials on SitePoint as well.
5. Material Design with the Android Design Support Library
Besides the specification and tutorials about Material Design
mentioned above, the Android Design Support Library is
another essential resource. If you are unfamiliar with it, follow
this Android Design Support Library tutorial to learn what’s
possible. The tutorial requires some programming knowledge
but includes a download of the code so you can experiment
with it.
6. Responsive Images in Practice
The responsive design concept isn’t limited to mobile design, but
since responsiveness is a key requirement for mobile design, this
resource covering responsive images in practice is a must read. The
tutorial doesn’t cover everything, but does offer a neat explanation
of the basics and beyond.
7. Android UI Tutorial: Layouts and Animations
One of the best tutorials on Android UI layouts and animations is this one. It teaches
how to use Android Studio to create different layouts (frame layouts, linear layouts,
relative layouts, and grid layouts), Views (TextView, ListView, ImageView,
GridView, RecyclerView) and Motions (Property Animation, drawable Animation).
The tutorial is suitable for beginners and advanced designers and has the project(s)
available on GitHub.
8. An Introduction to Android Accessibility Features
A good designer values usability and accessibility over everything else. This tutorial
provides details on the numerous accessibility features of Android, such as how
to add textual descriptions to UI elements, navigation without the touch screen,
creating your own accessibility service, and how to test accessibility.
9. Building Great Mobile Menus for Your Website
If you are building a site (mobile or desktop), you typically need menus. With the
limited screen estate on a mobile device, good menus are even more important.
One of the best resources about mobile menus is this tutorial. It explains clearly
everything you need to know about building mobile menus, such as how to create
animations with CSS or jQuery
10. How to Build a Simple Mobile Website with CSS3
Often you don’t need advanced tools, but just good old CSS to build a decent mobile
site. This may be a more primitive method, because even the author of this tutorial
on How to Build a Simple Mobile Website with CSS3 warns that the code will not
work on all phones. Still, this is a great resource if you are new to designing mobile
sites rather than apps and you are looking for a simple way to create a fantastic
mobile site.
11. Responsive Data Tables: A Comprehensive List of Solutions
4.5 MOBILE DESIGN ELEMENTS
5 Design Elements
Functional Parity
Instead of removing functionality from a site or application just because the user is on a
smaller device, consider changing the emphasis or location of some features. For example, for
a mobile user, it may be best to minimize the browsing navigation in favor of search and filter
options. However, it is important to ensure the differences introduced to the layout and design
between devices or breakpoints aren’t so drastic that users get lost or have to learn two different
systems.
Mental Modalities
It is best practice to maintain mental modalities wherever possible. For example,
consider transitioning from a tab system to an accordion when the screen becomes too narrow,
because the concept of clicking on a header to reveal content in a display area remains the same
in both widgets
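A minimal sketch of that tab-to-accordion idea, assuming a simple flexbox layout; the markup, class names and 600px breakpoint are illustrative. The interaction (click a header to reveal its content) stays the same in both presentations.

    <style>
      .tabs     { display: flex; flex-wrap: wrap; }
      .tabs h3  { order: 0; margin: 0; padding: .5em; cursor: pointer; background: #eee; }
      .tabs div { order: 1; width: 100%; display: none; padding: .5em; }
      .tabs h3.open + div { display: block; }    /* show the panel after the open header */
      @media (max-width: 600px) {
        .tabs { display: block; }                /* narrow screens: headers and panels interleave as an accordion */
      }
    </style>
    <div class="tabs">
      <h3 class="open">Overview</h3><div>Overview content…</div>
      <h3>Details</h3><div>Details content…</div>
    </div>
    <script>
      // Clicking a header opens its panel; the mental model is unchanged whether
      // the headers are laid out as a tab row or stacked as an accordion.
      document.querySelectorAll(".tabs h3").forEach(function (h) {
        h.addEventListener("click", function () {
          document.querySelectorAll(".tabs h3").forEach(function (x) { x.classList.remove("open"); });
          h.classList.add("open");
        });
      });
    </script>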
Full Desktop Version
If opting to completely remove some functionality for a given device to simplify the
application and cognitive load on the user, consider including a link to the full desktop version.
It allows users to choose to access the full experience.
Zoom
It is best to avoid turning off the zoom ability of touch devices just to allow the use of
“fixed” elements. Headers and their included navigation seem to be the biggest offender of this.
Most of the time it’s because reducing the screen width, and therefore reflow width, causes very
long scrolling pages. A better alternative may be to conceal content and move towards a “drill-
in” method.
If zoom must be turned off, then provide users an alternate mechanism for enlarging
fonts and pictures, especially to accommodate users with poor vision or other hindrances to
viewing content at the size set.
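A minimal sketch of the viewport settings involved; the first line is the commonly recommended form, while the commented-out line shows the zoom-disabling pattern the text advises against.

    <!-- Mobile-friendly viewport that still allows pinch-zoom. -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- Pattern to avoid: disabling zoom locks out users who need to enlarge content.
         <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no"> -->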
Inputs
Users will interact with the site through various inputs, and a robust site should strive
to support many, if not all, of them. Those inputs may include, but are not limited to, mouse,
keyboard, touch, gesture and voice.
While a mouse is precise and works well with small hit areas, a finger lacks the same
specificity and needs a much larger hit area. Keep this in mind when designing buttons, and
remember that the hit area of a button doesn’t always have to match the visual asset.
Keyboard integration often gets overlooked on a site, but is greatly important to users
with accessibility challenges. It is also a nice feature for other users who find navigating via the
keyboard faster or easier.
At the very least, the entire site should be navigable via only a keyboard. Using shortcut
key bindings and specifying tab-index values greatly improves the experience for keyboard
users. Tab-index values are set by a simple tab-index attribute, and they specify the order in
which elements on a page gain focus when a user hits the tab key.
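A minimal sketch of explicit tab-index values, assuming a small search form; the particular controls, order and access key are illustrative.

    <!-- tabindex sets the order in which controls receive keyboard focus;
         accesskey adds a shortcut key for keyboard users. -->
    <form action="search.html">
      <input  type="search" name="q" tabindex="1" accesskey="s" placeholder="Search">
      <button type="submit"          tabindex="2">Go</button>
      <a href="help.html"            tabindex="3">Help</a>
    </form>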
Gesture interactions on responsive sites deliver a more “native” feel on a touch-based
device. While gesture-interactive areas that are targeted, like a carousel, can work well, sites
may become slow or unresponsive if the gesture target area is too broad. Broad hit areas are
those that encompass many different items as opposed to a specific hit area – for example, the
ability to swipe anywhere on a page to navigate to another page.
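A minimal sketch of a targeted gesture area, assuming a carousel element: swipe handling is attached only to that element rather than to the whole page, keeping the hit area specific. The element id and the 50-pixel threshold are illustrative.

    <div id="carousel"> … slides … </div>
    <script>
      // Listen for horizontal swipes on the carousel only, not on the whole page.
      var carousel = document.getElementById("carousel");
      var startX = 0;
      carousel.addEventListener("touchstart", function (e) {
        startX = e.touches[0].clientX;
      });
      carousel.addEventListener("touchend", function (e) {
        var dx = e.changedTouches[0].clientX - startX;
        if (dx < -50)     { /* advance to the next slide */ }
        else if (dx > 50) { /* go back to the previous slide */ }
      });
    </script>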
4.6 MOBILE TOOLS
(1) Framer - Modern prototyping tool
(2) Indigo Studio - Rapid, interactive prototyping
(3) Mockingbird - Wireframes on the fly
(4) Simulify - Interactive, shareable wireframes, mockups and prototypes
(5) Solidify - Create clickable prototypes
(6) Lovely Charts - Diagramming app with desktop and mobile versions
(7) ForeUI - Easy to use UI prototyping tool
(8) Creately - Realtime diagram collaboration
(9) JumpChart - Architecture, layout and content planning
(10) Lumzy - Mockup creation and prototyping tool
(11) Concept.ly - Convert wireframes and designs into interactive apps
(12) Frame Box - Easy, simple wireframing
(13) Realizer - Interactive presentation prototypes
(14) Cacoo - Diagrams with realtime collaboration
(15) Mockup Builder - Super-easy prototyping and mockups
(16) Appery.io - Develop cross platform mobile apps fast
(17) Mockup Designer - Basic wireframing tool hosted on GitHub
(18) ClickDummy - Turn mockups into clickable prototypes
(19) Mockups.me - Create and present interactive UI wireframes
(20) Mockabilly - iPhone mockups with genuine iPhone behavior
(21) RWD Wireframes - Wireframing tool for responsive layouts
(22) Blocks - Create annotated HTML prototypes
(23) UX Toolbox - Create, document and share wireframes and prototypes
REVIEW QUESTIONS
WITH ANSWERS
PART A – 2 MARKS
1. Write short notes on MOBILE ECOSYSTEM
The Internet is actually a complex ecosystem made up of many parts that must all work
together. When you enter a URL into a web browser, you don’t think about everything that has
to happen to see a web page. When you send an email, you don’t care about all the servers,
switches, and software that separate you from your recipient. Everything you do on the Internet
happens in fractions of a second. And you have the perception that all of this happens for free.
2. Draw a neat sketch for Mobile Ecosystem.
The layers of the mobile ecosystem, from top to bottom, are:
•• Services
•• Applications
•• Application Frameworks
•• Operating Systems
•• Platforms
•• Devices
•• Aggregators
•• Networks
•• Operators
3. What is meant by Operators?
The base layer in the mobile ecosystem is the operator. Operators go by many names,
depending on what part of the world you happen to be in or who you are talking to. Operators
can be referred to as Mobile Network Operators (MNOs); mobile service providers, wireless
carriers, or simply carriers; mobile phone operators; or cellular companies. In the mobile
community, we officially refer to them as operators, though in the United States, there is a
tendency to call them carriers.
4. List down the world’s largest mobile operators.
World’s largest mobile operators (subscribers in millions):
1. China Mobile – Markets: China (including Hong Kong) and Pakistan – Technology: GSM, GPRS, EDGE, TD-SCDMA – Subscribers: 436.12
2. Vodafone – Markets: United Kingdom, Germany, Italy, France, Spain, Romania, Greece, Portugal, Netherlands, Czech Republic, Hungary, Ireland, Albania, Malta, Northern Cyprus, Faroe Islands, India, United States, South Africa, Australia, New Zealand, Turkey, Egypt, Ghana, Fiji, Lesotho, and Mozambique – Technology: GSM, GPRS, EDGE, UMTS, HSDPA – Subscribers: 260.5
3. Telefónica – Markets: Spain, Argentina, Brazil, Chile, Colombia, Ecuador, El Salvador, Guatemala, Mexico, Nicaragua, Panama, Peru, Uruguay, Venezuela, Ireland, Germany, and United Kingdom – Technology: CDMA, CDMA2000 1x, EV-DO, GSM, GPRS, EDGE – Subscribers: 188.9
5. What do you mean by wireless Networks?
Operators operate wireless networks. Remember that cellular technology is just a radio
that receives a signal from an antenna. The type of radio and antenna determines the capability
of the network and the services you can enable on it.
You’ll notice that the vast majority of networks around the world use the GSM standard,
using GPRS or GPRS EDGE for 2G data and UMTS or HSDPA for 3G. We also have CDMA
(Code Division Multiple Access) and its 2.5G hybrid CDMA2000, which offers greater coverage
than its more widely adopted rival. So in places like the United States or China, where people
are more spread out, CDMA is a great technology. It uses fewer towers, though it gives subscribers fewer options when roaming onto other networks.
GSM mobile network evolutions
2G – Second generation of mobile phone standards and technology (theoretical max data speed):
•• GSM – Global System for Mobile communications – 12.2 KB/sec
•• GPRS – General Packet Radio Service – max 60 KB/sec
•• EDGE – Enhanced Data rates for GSM Evolution – 59.2 KB/sec
•• HSCSD – High-Speed Circuit-Switched Data – 57.6 KB/sec
3G – Third generation of mobile phone standards and technology (theoretical max data speed):
•• W-CDMA – Wideband Code Division Multiple Access – 14.4 MB/sec
•• UMTS – Universal Mobile Telecommunications System – 3.6 MB/sec
•• UMTS-TDD – UMTS + Time Division Duplexing – 16 MB/sec
•• TD-CDMA – Time Divided Code Division Multiple Access – 16 MB/sec
•• HSPA – High-Speed Packet Access – 14.4 MB/sec
6. What do you mean by Devices?
What you call phones, the mobile industry calls handsets or terminals. These are terms
that I think are becoming outdated with the emergence of wireless devices that rely on operator
networks, but do not make phone calls. The number of these “other” devices is a small piece of
the overall pie right now, but it’s growing rapidly.
Most of these devices are feature phones, making up the majority of the marketplace.
Smartphones make up a small sliver of worldwide market share and maintain a healthy
percentage in the United States and the European Union; smartphone market share is growing
with the introduction of the iPhone and devices based on the Android platform. As next-
generation devices become a reality, the distinction between feature phones and smartphones
will go away. In the next few years, feature phones will largely be located in emerging and
developing markets.
7. State the need for Platforms / programming languages.
A mobile platform’s primary duty is to provide access to the devices. To run software
and services on each of these devices, you need a platform, or a core programming language
in which all of your software is written. Like all software platforms, these are split into three
categories: licensed, proprietary, and open source.
8. Mention the need for Licensed platforms/ softwares.
Licensed platforms are sold to device makers for nonexclusive distribution on devices.
The goal is to create a common platform of development Application Programming Interfaces
(APIs) that work similarly across multiple devices with the least possible effort required to adapt
for device differences, although this is hardly reality. Following are the licensed platforms:
•• Java Micro Edition (Java ME)
•• Binary Runtime Environment for Wireless (BREW)
•• Windows Mobile
•• LiMo
9. What do you mean by Proprietary?
Proprietary platforms are designed and developed by device makers for use on their
devices. They are not available for use by competing device makers. These include:
•• Palm
•• BlackBerry
•• iPhone
10. What do you mean by Open Source?
Open source platforms are mobile platforms that are freely available for users to download, alter, and edit. Open source mobile platforms are newer and slightly controversial, but they
are increasingly gaining traction with device makers and developers. Android is one of these
platforms. It is developed by the Open Handset Alliance, which is spear-headed by Google.
11. State the purpose of Operating Systems.
It used to be that if a mobile device ran an operating system, it was most likely considered
a smartphone. But as technology gets smaller, a broader set of devices supports operating
systems.
Operating systems often have core services or toolkits that enable applications to talk to
each other and share data or services. Mobile devices without operating systems typically run
“walled” applications that do not talk to anything else.
Although not all phones have operating systems, the following are some of the most
common:
•• Symbian
•• Windows Mobile
•• Palm OS
•• Linux
•• Mac OS X
•• Android
12. Specify the importance of Application Frameworks
The application framework, or API, released by one of the companies mentioned already is the first layer the developer can access and the first layer over which you have any control.
Application frameworks often run on top of operating systems, sharing core services
such as communications, messaging, graphics, location, security, authentication, and many
others.
•• Java
•• S60
•• BREW
•• Flash Lite
•• Windows Mobile
•• Cocoa Touch
•• Android SDK
•• Web Runtimes (WRTs)
13. Define Web
The Web is the only application framework that works across virtually all devices and
all platforms. Although innovation and usage of the Web as an application framework in mobile
has been lacking for many years, increased demand to offer products and services outside of
operator control, together with a desire to support more devices in shorter development cycles,
has made the Web one of the most rapidly growing mobile application platforms to date.
14. What do you mean by Applications?
Application frameworks are used to create applications, such as a game, a web browser,
a camera, or media player. Although the frameworks are well standardized, the devices are
not. The largest challenge of deploying applications is knowing the specific device attributes and
capabilities. For example, if you are creating an application using the Java ME application
framework, you need to know what version of Java ME the device supports, the screen
dimensions, the processor power, the graphics capabilities, the number of buttons it has, and
how the buttons are oriented. Multiply that by just a few additional handsets and you have
hundreds of variables to consider when building an application.
15. Define Services
Finally, the last layer in the mobile ecosystem is services. Services include
tasks such as accessing the Internet, sending a text message, or being able to get a location—
basically, anything the user is trying to do.
The state of the current market is evaluated with reference to current market and data,
financial and sales trends, user surveys, and assessing the impressions of news media outlets.
Each of the top hardware and software platforms are reviewed and evaluated to provide a
thorough understanding of the competitive landscape of the smart phone and tablet market.
16. List out the TYPES OF MOBILE APPLICATIONS
•• WIDGET
•• GAMES
17. Write short notes on Mobile Information Architecture
Mobile devices have their own set of Information Architecture patterns, too. While the
structure of a responsive site may follow more “standard” patterns, native apps, for example,
often employ navigational structures that are tab-based. Again, there’s no “right” way to architect
a mobile site or application. Instead, let’s take a look at some of the most popular patterns:
Hierarchy, Hub & spoke, Nested doll, Tabbed view, Bento box and Filtered view.
18. Draw a neat sketch of Mobile Information Architecture
Hierarchy
The hierarchy pattern is a standard site structure with an index page and a series of sub
pages. If you are designing a responsive site you may be restricted to this; however, introducing
additional patterns could allow you to tailor the experience for mobile.
Luke Wroblewski’s Mobile First approach helps us focus on the important stuff first:
features and user journeys that will help us create great user experiences.
19. What is meant by Mobile 2.0?
It refers to a perceived next generation of mobile internet services that leverage the
social web, or what some call Web 2.0. The social web includes social networking sites and wikis
that emphasize collaboration and sharing amongst users. Mobile Web 2.0, with an emphasis on
Web, refers to bringing Web 2.0 services to the mobile internet, i.e., accessing aspects of Web
2.0 sites from mobile internet browsers.
Enablers of Mobile 2.0
•• Ubiquitous Mobile Broadband Access
•• Affordable, unrestricted access to enabling software platforms, tools and technologies
•• Open access, with frictionless distribution and monetization
Characteristics of Mobile 2.0
•• The social web meets mobility
•• Extensive use of User-Generated Content, so that the site is owned by its contributors
•• Leveraging services on the web via mashups
•• Fully leveraging the mobile device, the mobile context, and delivering a rich mobile
user experience
•• Personal, Local, Always-on, Ever-present
Implementations of Mobile 2.0
Mobile 2.0 is still at the development stage but there are already a range of sites available,
both for so-called “smartphones” and for more ordinary “feature” mobile phones. The best
examples are Micro-blogging services Jaiku, Twitter, Pownce, CellSpin, and open platforms
for creating SMS services like Fortumo and Sepomo or providing information and services like
mobeedo.
20. List down the Mobile Design Elements.
5 Design Elements
•• Functional Parity
•• Mental Modalities
•• Full Desktop Version
•• Zoom
•• Inputs
21. List out the Mobile Tools
(1) Framer - Modern prototyping tool
(2) Indigo Studio - Rapid, interactive prototyping
(3) Mockingbird - Wireframes on the fly
(4) Simulify - Interactive, shareable wireframes, mockups and prototypes
(5) Solidify - Create clickable prototypes
(6) Lovely Charts - Diagramming app with desktop and mobile versions
(7) ForeUI - Easy to use UI prototyping tool
(8) Creately - Realtime diagram collaboration
(9) JumpChart - Architecture, layout and content planning
(10) Lumzy - Mockup creation and prototyping tool
(11) Concept.ly - Convert wireframes and designs into interactive apps
(12) Frame Box - Easy, simple wireframing
(13) Realizer - Interactive presentation prototypes
(14) Cacoo - Diagrams with realtime collaboration
(15) Mockup Builder - Super-easy prototyping and mockups
(16) Appery.io - Develop cross platform mobile apps fast
(17) Mockup Designer - Basic wireframing tool hosted on GitHub
(18) ClickDummy - Turn mockups into clickable prototypes
(19) Mockups.me - Create and present interactive UI wireframes
(20) Mockabilly - iPhone mockups with genuine iPhone behavior
(21) RWD Wireframes - Wireframing tool for responsive layouts
(22) Blocks - Create annotated HTML prototypes
(23) UX Toolbox - Create, document and share wireframes and prototypes
PART B
1. Explain in detail about MOBILE ECOSYSTEM
2. Explain in detail about Applications and Services of mobile systems
3. Explain in details about WIDGET
4. Write in detail about GAMES
5. How is mobile different from others?
6. Explain in detail about Mobile Information Architecture
7. Explain in detail about Mobile 2.0 services,
8. Write down the steps to design Mobile applications.
9. Explain in detail about the Mobile Design Elements
10. List out the Mobile Tools and explain in details.
UNIT - 5
DESIGNING WEB INTERFACES
5.1 DESIGNING WEB INTERFACES
5.1.1 Drag and Drop
One of the interaction idioms the Macintosh brought to the world in 1984 was Drag and Drop.
5.1.1.1 The Events
There are at least 15 events available for cueing the user during a drag and drop
interaction (a minimal wiring sketch follows this list):
•• Page Load: Before any interaction occurs, you can pre-signify the availability of drag
and drop. For example, you could display a tip on the page to indicate draggability.
•• Mouse Hover: The mouse pointer hovers over an object that is draggable.
•• Mouse Down: The user holds down the mouse button on the draggable object.
•• Drag Initiated: After the mouse drag starts (usually once the pointer moves past a small threshold, e.g., 3 pixels).
•• Drag Leaves Original Location: After the drag object is pulled from its location or
object that contains it.
•• Drag Re-Enters Original Location: When the object re-enters the original location.
•• Drag Enters Valid Target: Dragging over a valid drop target.
•• Drag Exits Valid Target: Dragging back out of a valid drop target.
•• Drag Enters Specific Invalid Target: Dragging over an invalid drop target.
•• Drag Is Over No Specific Target: Dragging over neither a valid nor an invalid target.
Do you treat all areas outside of valid targets as invalid?
•• Drag Hovers over Valid Target: User pauses over the valid target without dropping
the object. This is usually when a spring-loaded drop target can open up. For example,
if you drag over a folder and pause, the folder opens, revealing a new area to drag into.
•• Drag Hovers over Invalid Target: User pauses over an invalid target without
dropping the object. Do you care? Will you want additional feedback as to why it is
not a valid target?
•• Drop Accepted: Drop occurs over a valid target and drop has been accepted.
•• Drop Rejected: Drop occurs over an invalid target and drop has been rejected. Do
you zoom back the dropped object?
•• Drop on Parent Container: Is the place where the object was dragged from special?
Usually this is not the case, but it may carry special meaning in some contexts.
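As a rough illustration, the TypeScript sketch below wires a few of these moments to the browser’s native drag-and-drop events. The “module” and “drop-zone” ids are hypothetical, and the native API exposes only a subset of the fifteen moments; hover timers, spring-loaded targets, and the like have to be built on top of it.

// Sketch: mapping a few of the moments onto native drag-and-drop events.
// Assumes the "module" and "drop-zone" elements exist on the page.
const dragObject = document.getElementById("module") as HTMLElement;
const dropTarget = document.getElementById("drop-zone") as HTMLElement;

dragObject.draggable = true;

dragObject.addEventListener("dragstart", (e: DragEvent) => {
  // Drag Initiated: style the drag object so users see what is being moved.
  dragObject.classList.add("dragging");
  e.dataTransfer?.setData("text/plain", dragObject.id);
});

dropTarget.addEventListener("dragenter", () => {
  // Drag Enters Valid Target: invite the drop.
  dropTarget.classList.add("valid-target");
});

dropTarget.addEventListener("dragleave", () => {
  // Drag Exits Valid Target: remove the invitation.
  dropTarget.classList.remove("valid-target");
});

dropTarget.addEventListener("dragover", (e: DragEvent) => {
  e.preventDefault();                 // required so the element accepts drops
});

dropTarget.addEventListener("drop", (e: DragEvent) => {
  // Drop Accepted: move the drag object into the target.
  e.preventDefault();
  dropTarget.appendChild(dragObject);
  dropTarget.classList.remove("valid-target");
});

dragObject.addEventListener("dragend", () => {
  // Drag finished (accepted or rejected): clear the drag styling.
  dragObject.classList.remove("dragging");
});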
5.1.1.2 The Actors
During each event you can visually manipulate a number of actors. The page elements
available include:
•• Page (e.g., static messaging on the page)
•• Cursor
•• Tool Tip
•• Drag Object (or some portion of the drag object, e.g., title area of a module)
•• Drag Object’s Parent Container
•• Drop Target
5.1.1.3 Moments Grid - Drag and Drop.
The grid is a handy tool for planning out interesting moments during a drag and drop
interaction. It serves as a checklist to make sure there are no “holes” in the interaction. Just
place the actors along the left-hand side and the moments along the top. In the grid intersections,
place the desired behaviors.
5.1.1.4 Purpose of Drag and Drop
Drag and drop can be a powerful idiom if used correctly. Specifically it is useful for:
•• Drag and Drop Module – Rearranging modules on a page.
•• Drag and Drop List – Rearranging lists.
•• Drag and Drop Object – Changing relationships between objects.
•• Drag and Drop Action – Invoking actions on a dropped object.
•• Drag and Drop Collection – Maintaining collections through drag and drop.
5.1.1.5 Drag and Drop Module
One of the most useful purposes of drag and drop is to allow the user to directly place
objects where she wants them on the page. A typical pattern is Drag and Drop Modules on a
page.
5.1.1.6 Invitation to drag
Moving the mouse to a module’s header changes the cursor to indicate that the item is draggable.
5.1.1.7 Placeholder target-Boundary-based placement.
Most sites that use placeholder targeting drag the module at its original size; targeting
is determined by the boundaries of the dragged object and the boundaries of the dragged-over
object. The mouse position is usually ignored because modules are only draggable in the title
(a small region). Both Netvibes and iGoogle take the boundary-based approach but, interestingly,
they calculate the position of their placeholders differently.
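A small sketch of the midpoint variant of boundary-based targeting follows; it illustrates the general idea only and is not the actual Netvibes or iGoogle code.

// Sketch: midpoint-boundary test for placeholder targeting.
// The placeholder moves before the dragged-over module only once the dragged
// module's vertical midpoint crosses that module's midpoint.
interface Box { top: number; height: number; }

function shouldInsertBefore(dragged: Box, over: Box): boolean {
  const draggedMid = dragged.top + dragged.height / 2;
  const overMid = over.top + over.height / 2;
  return draggedMid < overMid;   // above the midpoint => placeholder goes before
}

// Example: a 100px module whose midpoint sits above the target's midpoint.
console.log(shouldInsertBefore({ top: 40, height: 100 }, { top: 120, height: 100 })); // true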
5.1.1.8 Drag distance
Dragging the thumbnail around does have other issues. Since the object being dragged
is small, it does not intersect a large area. It requires moving the small thumbnail directly to the
place it will be dropped. With iGoogle, the complete module is dragged.
•• Drag rendering –The transparency effect communicates that the object being
dragged is actually a representation of the dragged object.
•• Placeholder targeting – Most explicit way to preview the effect.
•• Midpoint boundary – Requires the least drag effort to move modules around.
•• Full-size module dragging – Coupled with placeholder targeting and midpoint
boundary detection
•• Ghost rendering – Emphasizes the page rather than the dragged object and keeps
the preview clear.
5.1.2 Drag and Drop List
The Drag and Drop List pattern defines interactions for rearranging items in a list.
5.1.2.1 Insertion target
An insertion bar can be used within a list
5.1.2.2 Non–drag and drop alternative
Besides drag and drop, the Netflix queue actually supports two other ways to move
objects around:
•• Edit the row number and then press the “Update DVD Queue” button.
•• Click the “Move to Top” icon to pop a movie to the top.
Modifying the row number is straightforward. It’s a way to rearrange items without drag
and drop. The “Move to Top” button is a little more direct and fairly straightforward (if the user
really understands that this icon means “move to top”). Drag and drop is the least discoverable
of the three, but it is the most direct, visual way to rearrange the list. Since rearranging the queue
is central to the Netflix customer’s satisfaction, it is appropriate to allow multiple ways to do so.
5.1.2.3 Hinting at drag and drop
When the user clicks the “Move to Top” button, Netflix animates the movie as it moves
up. But first, the movie is jerked downward slightly and then spring-loaded to the top
Click “Move to Top”
Clicking the “Move to Top” button starts the movie moving to the top.
Spring loaded
The movie does not immediately start moving up. Instead, it drops down and to the right
slightly. This gives the feeling that the movie is being launched to the top.
Animated move to top
The movie then animates very quickly to show it is moving to the top.
5.1.2.4 Drag lens
Drag and drop works well when a list is short or the items are all visible on the page. But
when the list is long, drag and drop becomes painful. One remedy is to provide a drag lens
while dragging. A good example of this is dragging the insertion bar while editing text on the iPhone.
•• Drag and Drop Object is used to rearrange members of the organization.
•• Normal display state: An organizational chart visually represents relationships.
5.1.2.5 Invitation to drag
When the mouse hovers over a member of the organization, the cursor changes to
show draggability. In addition, the texture in the top-left corner changes to represent a dimpled
surface. This hints at draggability.
•• Dragging: An insertion bar is used to indicate where the member will be inserted
when dropped.
•• Dropped: When the dragged member is dropped, the chart is rearranged to
accommodate the new location.
5.2 DIRECT SELECTION
Direct Selection is the ability to directly select objects and apply actions to them. Treating
elements in the interface as directly selectable is a clear application of the Make It Direct
principle. On the desktop, the most common approach is to initiate a selection by directly
clicking on the object itself; we call this selection pattern Object Selection. The main selection patterns are:
•• Toggle Selection: Checkbox or control-based selection.
•• Collected Selection: Selection that spans multiple pages.
•• Object Selection: Direct object selection.
•• Hybrid Selection: Combination of Toggle Selection and Object Selection.
5.2.1 Toggle Selection
The most common form of selection on the Web is Toggle Selection. Checkboxes and
toggle buttons are the familiar interface for selecting elements on most web pages.
5.2.2 Scrolling versus paging
Mail uses a scrolled list to show all of its mail messages. While not all messages are
visible at a time, the user knows that scrolling through the list retains the currently selected items.
Since the user understands that all the messages not visible are still on the same continuous
pane, there is no confusion about what an action will operate on: it will affect all selected items
in the list.
5.2.3 Making selection explicit
The selection model is visually explicit. The advantage of this method is that it is
always clear how many items have been selected. Visualizing the underlying selection model is
generally a good approach. This direct approach to selection and acting on bookmarks creates
a straightforward interface.
5.2.4 Collected Selection
Toggle Selection is great for showing a list of items on a single page. But what happens
if you want to collect selected items across multiple pages? Collected Selection is a pattern for
keeping track of selection as it spans multiple pages.
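A minimal sketch of the idea is given below: the selection lives in a set that outlives any single page of checkboxes, so items checked on earlier pages stay selected. The renderPage() helper and the “check-” id prefix are hypothetical.

// Sketch: keeping a selection that spans multiple pages.
const collected = new Set<string>();

function toggle(itemId: string, checked: boolean): void {
  if (checked) collected.add(itemId); else collected.delete(itemId);
  // Making the running total visible keeps the selection model explicit.
  console.log(`${collected.size} items selected across all pages`);
}

function renderPage(items: Array<{ id: string }>): void {
  // When a new page of items is drawn, restore each checkbox from the set,
  // so selections made on earlier pages remain visible and actionable.
  for (const item of items) {
    const box = document.querySelector<HTMLInputElement>(`#check-${item.id}`);
    if (box) box.checked = collected.has(item.id);
  }
}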
5.2.5 Object Selection
Toggle Selection is the most common type of selection on the Web. The other type of
selection, Object Selection, is when selection is made directly on objects within the interface.
•• Selected state – When the user clicks on a message, the whole row gets selected.
5.2.6 Hybrid Selection
Mixing Toggle Selection and Object Selection in the same interface can lead to a confusing interface.
Clicking and dragging on the unselected bookmark element initiates a drag.
The drag includes both the selected element and the unselected element. Since only one
is shown as selected, this creates a confusing situation.
This occurs because three things are happening in the same space:
•• Toggle Selection is used for selecting bookmarks for editing, deleting, etc.
•• Object Selection is used for initiating a drag and drop.
•• Mouse click is used to open the bookmark on a separate page.
5.2.7 Blending two models
Yahoo! Mail originally started with the Toggle Selection model. When the new Yahoo!
Mail Beta was released, it used Object Selection exclusively. But since there are advantages
to both approaches, the most recent version of Yahoo! Mail incorporates both in a Hybrid
Selection: Toggle Selection selects the message without loading the message in the viewing pane.
5.2.8 Keep It Lightweight
Digg is a popular news site where the community votes on its favorite stories.
5.3 CONTEXTUAL TOOLS-INTERACTION IN CONTEXT
Most desktop applications separate functionality from data. Menu bars, toolbars, and
palettes form islands of application functionality
Early websites were just the opposite. They were completely content-oriented. Rich tool
sets were not needed for simply viewing and linking to content pages. Even in e-commerce sites
like Amazon or eBay, the most functionality needed was the hyperlink and “Submit” button.
Touch-based interfaces were the stuff of research labs and, more recently, interesting YouTube videos.
5.3.1 Fitts’s Law
Fitts’s Law is an ergonomic principle that ties the size of a target and its contextual
proximity to ease of use.
Bruce Tognazzini restates it simply as “The time to acquire a target is a function of the
distance to and size of the target”. In other words, if a tool is close at hand and large enough to
target, then we can improve the user’s interaction. Putting tools in context makes for lightweight
interaction.
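One widely used statement of the law (the Shannon formulation popularized by MacKenzie, which this text does not spell out) is MT = a + b · log2(D/W + 1), where MT is movement time, D is the distance to the target, W is the target’s width, and a and b are empirically fitted constants. The logarithmic term, the index of difficulty, can be sketched as:

// Sketch: index of difficulty (Shannon formulation) for a target of width W
// at distance D. Larger values mean the target is slower to acquire.
function indexOfDifficulty(distance: number, width: number): number {
  return Math.log2(distance / width + 1);
}

console.log(indexOfDifficulty(400, 20));  // far, small target   -> harder
console.log(indexOfDifficulty(100, 60));  // close, large target -> easier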
5.3.2 List of Contextual Tools
Contextual Tools are the Web’s version of the desktop’s right-click menus.
Instead of having to right-click to reveal a menu, we can reveal tools in context with the
content. We can do this in a number of ways:
•• Always-Visible Tools – Place Contextual Tools directly in the content.
•• Hover-Reveal Tools – Show Contextual Tools on mouse hover.
•• Toggle-Reveal Tools – A master switch toggles Contextual Tools on/off for the page.
•• Multi-Level Tools – Progressively reveal actions based on user interaction.
5.3.3 Secondary Menus
Show a secondary menu (usually by right-clicking on an object).
5.3.4 Relative importance
Is the “digg it” action as important as the “bury it” action? In the case of Digg, the
answer is no. The “digg it” action is represented as a button and placed prominently in the
context of the story. The “bury it” action is represented as a hyperlink along with other “minor”
actions just below the story.
5.3.5 Discoverability
Discoverability is a primary reason to choose Always-Visible Tools. On the flip side, it
can lead to more visual clutter.
5.3.6 Hover-Reveal Tools
Instead of making Contextual Tools always visible, we can show them on demand. One
way to do this is to reveal the tools when the user pauses the mouse over an object. The Hover-
Reveal Tools pattern is clearly illustrated by 37signals’ Backpack: to-do items may be deleted
or edited directly in the interface, and the tools to accomplish this are revealed on mouse hover
(a minimal sketch follows the two states below).
•• Normal state – The edit and delete tools are hidden in the normal state.
•• Invitation – On mouse hover, the tools are revealed. The tools are
“cut” into the gray bar, drawing the eye to the change.
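A minimal sketch of this behaviour is shown below; the “.todo-item” and “.item-tools” class names are hypothetical. Note that a hover-only reveal is invisible to touch and keyboard users, so a focus-based fallback is usually worth adding.

// Sketch: reveal per-item tools only while the pointer is over the item.
document.querySelectorAll<HTMLElement>(".todo-item").forEach((item) => {
  const tools = item.querySelector<HTMLElement>(".item-tools");
  if (!tools) return;
  tools.style.visibility = "hidden";                    // normal state: tools hidden
  item.addEventListener("mouseenter", () => {
    tools.style.visibility = "visible";                 // invitation: reveal on hover
  });
  item.addEventListener("mouseleave", () => {
    tools.style.visibility = "hidden";
  });
});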
5.3.7 Discoverability
A serious design consideration for Hover-Reveal Tools is just how discoverable the
additional functionality will be. Flickr provides a set of tools for contacts. To avoid clutter,
contact profile photos are shown without any tool adornment. When the mouse hovers over the
contact’s photo, a drop-down arrow is revealed. Clicking reveals a menu with a set of actions for
the contact. This works because users often know to click on an image to get more information.
Being drawn to the content is a good way to get the user to move the mouse over the area and
discover the additional functionality.
5.3.8 Contextual Tools in an overlay
Sometimes there are several actions available for a focused object.
Providing an overlay feels heavier. An overlay creates a slight contextual switch for
the user’s attention. The overlay will usually cover other information—information that often
provides context for the tools being offered. Most implementations shift the content slightly
between the normal view and the overlay view, causing the users to take a moment to adjust
to the change. The overlay may get in the way of navigation. Because an overlay hides at least
part of the next item, it becomes harder to move the mouse through the content without stepping
into a “landmine.”
5.3.9 Anti-pattern: Hover and Cover
When these tools were placed in an overlay, it covered the item to the right, making
it hard to see that content and even navigate to it. In addition, since the overlay had some
additional padding (as well as rounded corners), the image shown in the overlay was about
two pixels off from the non-overlay version. This slight jiggle was distracting. To add insult to
injury, the overlay was sluggish to bring into view.
5.3.10 Anti-pattern: Mystery Meat
The only way to identify this mystery meat is to open it. Unidentifiable icons are pretty
much the same as a row of unlabeled cans. The only recourse for the user was to pause over
each icon and wait a second or so to read a tool tip about the purpose of the icon. This does not
create a lightweight interaction.
5.3.11 Activation
Tool overlays should activate immediately. The tools are an extension of the interface,
and any lag between activation and invocation of the action makes the interaction feel sluggish.
5.3.12 Radial menus
Radial menus such as in Songza have been shown to have some advantages over more
traditional menus. First, experienced users can rely on muscle memory rather than having to
look directly at the menu items. Second, the proximity and targeting size make the menu easy
to navigate since the revealed menu items are all equally close at hand.
5.3.13 Activation
Another interesting decision Songza made was to not activate the radial menu on hover.
Instead, the user must click on a song to reveal the menu. Activating on click makes the user
intent more explicit.
5.3.14 Default action
However, this does mean there is no way to start a song playing with just a simple click.
Playing a song requires moving to the top leaf. One possible solution would be to place the
“play” option in the middle of the menu (at the stem) instead of in one of the leaves. Clicking
once would activate the menu. Clicking a second time (without moving the mouse) would start
playing the song. This interaction is very similar to one commonly used in desktop application:
allowing a double-click to activate the first item (default action) in a right-click menu.
5.3.17 Contextual toolbar
Picnik is an online photo-editing tool that integrates with services like Flickr. In all,
there are six sets of tools, each with a wide range of palette choices. Picnik uses Multi-Level
Tools to expose additional functionality. By wrapping the photo with tools in context and
progressively revealing the levels of each tool, Picnik makes editing straightforward.
•• Muttons
Another variation on Multi-Level Tools is the “mutton” (menu + button = mutton).
Muttons are useful when there are multiple actions and we want one of the actions
to be the default. Yahoo! Mail uses a mutton for its “Reply” button (a minimal sketch follows these states).
•• Normal state
Yahoo! Mail displays the “Reply” mutton in its toolbar as a button with a drop-down
arrow control.
•• As a button
On mouse hover, the button gets a 3D treatment and color highlight. The drop-down
arrow gets the same treatment to call out its functionality.
Clicking the “Reply” button at this point will trigger a reply without activating the
menu.
•• As a menu
Clicking on the drop-down arrow reveals two commands: “Reply to Sender” is the
same as the default “Reply” button action; “Reply to All” is an additional action that
was hidden until the menu was revealed.
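A rough sketch of a mutton’s wiring is shown below; the element ids and the logged actions are hypothetical stand-ins for a real mail client’s behaviour.

// Sketch: a "mutton" (button + menu). The main button fires the default action;
// the arrow reveals the full menu of actions.
const replyButton = document.getElementById("reply-button");
const replyArrow = document.getElementById("reply-arrow");
const replyMenu = document.getElementById("reply-menu");

replyButton?.addEventListener("click", () => {
  console.log("Reply to Sender");           // default action, no menu shown
});

replyArrow?.addEventListener("click", (e: Event) => {
  e.stopPropagation();                       // keep the default action from firing
  if (replyMenu) {
    replyMenu.hidden = !replyMenu.hidden;    // toggle the menu with both actions
  }
});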
5.3.18 Anti-pattern: Tiny Targets
At the beginning of this chapter, we discussed Fitts’s Law. Recall that the time it takes
to acquire a target is a function of both distance and size. Even if tools are placed close by in
context, don’t forget to make them large enough to target.
Both Flickr and Yahoo! Mail provide a reasonable-size target for the drop-down arrow.
Compare this with the expand/collapse arrow used in an early version of Yahoo! for Teachers.
5.3.19 Change Blindness
The break can cause visual consequences as well. An exhibit that caught my eye was the
one demonstrating change blindness. A large screen displayed an image of a store front typical
of those seen in most urban areas, complete with awning, windows, doors all of a distinctive
style. Then suddenly a new updated image of the store front replaced the original one. The
new image had a slight change from the original. However, try as I might, I could not detect the change.
•• Overlays: Instead of going to a new page, a mini-page can be displayed in a
lightweight layer over the page.
•• Inlays: Instead of going to a new page, information or actions can be inlaid within
the page.
•• Virtual Pages: By revealing dynamic content and using animation, we can extend
the virtual space of the page.
•• Process Flow: Instead of moving from page to page, sometimes we can create a
flow within a page itself.
5.4 OVERLAYS
Overlays are really just lightweight pop ups. We use the term lightweight to make a clear
distinction between them and the normal idea of a browser pop up (a minimal sketch of a lightweight overlay follows this list):
•• Browser pop ups display a new browser window. As a result these windows often
take time and a sizeable chunk of system resources to create.
•• Browser pop ups often display browser interface controls (e.g., a URL bar). Due to
security concerns, in Internet Explorer 7 the URL bar is a permanent fixture on any
browser pop-up window.
•• Lightweight overlays are just lightweight in-page objects. They are inexpensive to
create and fast to display.
•• The interface for lightweight overlays is controlled by the web application and not
the browser.
•• There is complete control over the visual style for the overlay. This allows the
overlay to be more visually integrated into the application’s interface.
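A minimal sketch of such a lightweight overlay is given below; it also dims the background, anticipating the Lightbox Effect described next. The styling and content here are illustrative assumptions.

// Sketch: a lightweight in-page overlay with a dimmed backdrop.
// No browser window is created; the overlay is just styled page elements.
function openOverlay(contentHtml: string): () => void {
  const backdrop = document.createElement("div");
  backdrop.style.cssText = "position:fixed;inset:0;background:rgba(0,0,0,0.5);";
  const panel = document.createElement("div");
  panel.style.cssText =
    "position:fixed;top:20%;left:50%;transform:translateX(-50%);background:#fff;padding:16px;";
  panel.innerHTML = contentHtml;
  document.body.append(backdrop, panel);

  const close = () => { backdrop.remove(); panel.remove(); };
  backdrop.addEventListener("click", close);   // dismissing is as easy as opening
  return close;                                // caller can also close it programmatically
}

const closeDialog = openOverlay("<p>Rotate this photo?</p>");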
5.4.1 Dialog Overlay
Dialog Overlays replace the old style browser pop ups
5.4.2 Lightbox Effect
In photography a lightbox provides a backlit area to view slides.
On the Web, this technique has come to mean bringing something into view by making
it brighter than the background. In practice, this is done by dimming down the background.
You can see the Lightbox Effect pattern used by Flickr when rotating images.
The Lightbox Effect is useful when the Dialog Overlay contains important information
that the user should not ignore. Both the Netflix Purchase dialog and the Flickr Rotate dialog are
good candidates for the Lightbox Effect. If the overlay contains optional information, then the
Lightbox Effect is overkill and should not be used.
5.4.3 Modality
Overlays can be modal or non-modal. A modal overlay requires the user to interact with
it before she can return to the application.
5.4.4 Detail Overlay
The Detail Overlay allows an overlay to present additional information when the user
clicks or hovers over a link or section of content.
5.4.5 Box shots
In the more recent versions of the Netflix site, large box shots are employed without
synopsis text. Box shots convey a lot of information.
5.4.6 Detail overlay activation
However, often more information is needed to decide whether a movie should be played
or added to a movie queue. By providing a synopsis along with personalized recommendation
information, the user can quickly make a determination. The movie detail information is
displayed after a slight delay.
5.4.7 Detail overlay deactivation
Moving the mouse outside the box shot immediately removes the movie detail information.
5.4.8 Anti-pattern: Mouse Traps
It is important to avoid activating the Detail Overlay too easily. We have seen usability
studies that removed the delay in activation, and users reported that the interface was “too
noisy” and “felt like a trap”. We label this anti-pattern the Mouse Trap.
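A hedged sketch of how such a delay is usually implemented is shown below: activation waits for a short pause, while deactivation is immediate (and symmetrical, avoiding the next anti-pattern as well). The “.box-shot” class and the show/hide helpers are hypothetical.

// Sketch: delayed activation, immediate deactivation for a Detail Overlay.
const ACTIVATION_DELAY_MS = 300;   // a short pause avoids the Mouse Trap

document.querySelectorAll<HTMLElement>(".box-shot").forEach((shot) => {
  let timer: number | undefined;
  shot.addEventListener("mouseenter", () => {
    timer = window.setTimeout(() => showDetail(shot), ACTIVATION_DELAY_MS);
  });
  shot.addEventListener("mouseleave", () => {
    window.clearTimeout(timer);    // cancel if the pointer was only passing through
    hideDetail(shot);              // remove the detail immediately
  });
});

function showDetail(el: HTMLElement): void { console.log("show detail for", el.id); }
function hideDetail(el: HTMLElement): void { console.log("hide detail for", el.id); }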
5.4.9 Anti-pattern: Non-Symmetrical Activation/Deactivation
When the user moves her mouse over the link, the overlay springs up immediately. The
only way she can remove the overlay is by clicking the small close button in the upper right.
Using Non-Symmetrical Activation/Deactivation is also a general anti-pattern that should
be avoided. It should take the same amount of effort to dismiss an overlay as it took to open it.
5.4.10 Anti-pattern: Needless Fanfare
One of the advantages of a lightweight overlay is the ability to pop it up quickly. Needless
Fanfare is an anti-pattern to avoid.
5.4.11 Input Overlay
Input Overlay is a lightweight overlay that brings additional input information for each
field tabbed into. American Express uses this technique in its registration for premium cards
such as its gold card.
5.4.12 Dialog Inlay
A simple technique is to expand a part of the page, revealing a dialog area within the
page. The BBC recently began experimenting with using a Dialog Inlay as a way to reveal
customization controls for its home page.
5.4.13 List Inlay
Lists are a great place to use Inlays. Instead of requiring the user to navigate to a new
page for an item’s detail or popping up the information in an Overlay, the information can be
shown with a List Inlay in context. The List Inlay works as an effective way to hide detail
until needed, while at the same time preserving space on the page for high-level overview information.
5.4.14 Parallel content
It uses an accordion style interaction for search filters that allows more than one pane
to be open at a time. This choice makes sense because the decisions needed for one detail pane
may be affected by the details of another pane.
Accordions can also be horizontally oriented. This is usually best done in nontraditional
interfaces.
5.4.15 Detail Inlay
A common idiom is to provide additional detail about items shown on a page. Hovering
over a movie revealed a Detail Overlay calling out the back-of-the-box information. Roost
allows house photos to be viewed in-context for a real estate listing with a Detail Inlay.
In-context tools: Hovering over a real estate listing brings in a set of in-context tools, including the “View photos” tool.
House photos inlay: Clicking on the “View photos” link expands the real-estate item to include a carousel of house photos.
Detail overlay: The Detail Inlay contains thumbnails of house photos. Clicking on an individual thumbnail pops up a Detail Overlay with a larger photo of the house.
5.4.16 Tabs
Lest we forget, there are some very traditional interface elements that can be used to
inlay details. Tabs, for instance, can be used as a Detail Inlay. Instead of moving the user from
page to page (site navigation), tabs can be used to bring in content within the page, keeping the
user in the page.
5.4.17 Personal assistant tabs
On the right side of the page, Yahoo! provides what it calls a Personal Assistant. Each
tab in this area (Mail, Messenger, etc.) is activated by hovering over the tab. In our example
the mouse is hovered over the Mail tab and it automatically expands open. Clicking on the link
actually takes the user to Yahoo! Mail.
The three types of tabs vary greatly, visually and interactively. However, Yahoo! is able
to pull this off because:
•• Normal users of Yahoo! will discover these interactions over time.
•• Creating the contrast makes a more visually compelling interface, as well as making
the interaction feel deeper (inviting exploration).
•• It is a great improvement over the old Yahoo! home page, which was completely
static. Every link took the user to a different page. Keeping users on the page until
they are ready to leave actually creates a happier user experience.
5.4.18 Inlay Versus Overlay?
•• Use an overlay when there may be more than one place a dialog can be activated
from
•• Use an overlay to interrupt the process.
•• Use an overlay if there is a multi-step process.
•• Use an inlay when you are trying to avoid covering information on the page needed
in the dialog.
•• Use an inlay for contextual information or details about one of many items (as in a
list): a typical example is expanding list items to show detail.
5.4.19 Patterns that support virtual pages include:
•• Virtual Scrolling
•• Inline Paging
•• Scrolled Paging
•• Panning
•• Zoomable User Interface
5.4.20 Virtual Scrolling
The traditional Web is defined by the “page.” In practically every implementation of
websites (for about the first 10 years of the Web’s existence) pagination was the key way to
get to additional content. Of course, websites could preload data and allow the user to scroll
through it. However, this process led to long delays in loading the page. So most sites kept it
simple: go fetch 10 items and display them as a page and let the user request the next page of
content. Each fetch resulted in a page refresh.
The classic example of this is Google Search.
Another approach is to remove the artificial page boundaries created by paginating the
data with Virtual Scrolling. In Yahoo! Mail, mail messages are displayed in a scrolled list that
loads additional messages on demand as the user scrolls.
Scrolling: Messages are loaded on demand. As the user scrolls, the content items are filled in. While loading, the message lines are replaced with the text “Loading…”.
Scroll completes: Messages are displayed based on where the user scrolled to.
Loading status
There are a few downsides to the Yahoo! Mail version of Virtual Scrolling. First, if
the loading is slow, it spoils the illusion that the data is continuous. Second, since the
scrollbar does not give any indication of where users are located in the data, they have to
guess how far down to scroll. A remedy would be to apply a constantly updating status
while the user is scrolling.
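A rough sketch of the load-on-demand mechanics behind this kind of Virtual Scrolling follows; the row height, page size, and the fetchMessages()/renderRows() helpers are all hypothetical.

// Sketch: fetch message rows on demand as the list scrolls.
const ROW_HEIGHT = 24;          // pixels per row
const PAGE_SIZE = 25;           // rows fetched per request
const loadedPages = new Set<number>();
const list = document.getElementById("message-list") as HTMLElement;

list.addEventListener("scroll", () => {
  const firstVisibleRow = Math.floor(list.scrollTop / ROW_HEIGHT);
  const page = Math.floor(firstVisibleRow / PAGE_SIZE);
  if (!loadedPages.has(page)) {
    loadedPages.add(page);
    // Rows in this range would show "Loading…" until the data arrives.
    fetchMessages(page * PAGE_SIZE, PAGE_SIZE).then((rows) => renderRows(page, rows));
  }
});

async function fetchMessages(start: number, count: number): Promise<string[]> {
  // Stand-in for a server call; returns placeholder subjects.
  return Array.from({ length: count }, (_, i) => `Message ${start + i}`);
}

function renderRows(page: number, rows: string[]): void {
  console.log(`page ${page} loaded: ${rows.length} rows`);
}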
5.4.21 Progressive loading
Microsoft has applied Virtual Scrolling to its image search. However, it implements it
in a different manner than Yahoo! Mail. Instead of all content being virtually loaded (and the
scrollbar reflecting this), the scrollbar reflects what has been loaded. Scrolling to the bottom
causes more content to load into the page.
Scrolled list: 12,500,000 image results are represented as a scrolled list. Obviously there is no way to accurately represent that many items in a list with a scrollbar. Notice the scrollbar shows size relative to the amount of data that has been loaded.
Scrolling: By scrolling into the area where results have not been loaded, images are initially represented as gray squares to indicate that they are currently not loaded. As each image is loaded it replaces the gray squares. At the top, the start and end range of the visible images is displayed (“Images 46–70 of 12,500,000”).
Scroll completes: Image results are fully loaded, and the scrollbar is updated to reflect where this page is in relation to the previously loaded content.
One more example illustrates an endless wall of pictures and uses a novel approach to a
scrollbar control for Virtual Scrolling.
These examples of Virtual Scrolling demonstrate three different ways to manage the virtual space.
5.4.22 Inline Paging
By only switching the content in and leaving the rest of the page stable, we can create
an Inline Paging experience.
5.4.23 Scrolled Paging: Carousel
Besides Virtual Scrolling and Virtual Paging, there is another option. You can combine
both scrolling and paging into Scrolled Paging. Paging is performed as normal, but instead the
content is “scrolled” into view.
The Carousel pattern takes this approach. A Carousel provides a way to page in more
data by scrolling it into view. On one hand it is a variation on the Virtual Scrolling pattern. In
other ways it is like Virtual Paging since most carousels have paging controls. The additional
effect is to animate the scrolled content into view.
Yahoo! Underground uses a Carousel to provide a way to page/scroll through articles.
Timeline: The top section provides a navigation control through various Underground articles. “Previously” and “Up Next” indicate where the user can go.
Animation: Animation reinforces the fact that the articles are from the past (the content moves in from the left to the right).
5.4.24 Time-based
Carousels work well for time-based content. Flickr employs a Carousel to let users
navigate back and forth through their photo collection
Carousels are best for featured or recent content. They are also good for small sets of
time-based content.
5.4.25 Virtual Panning
One way to create a virtual canvas is to allow users the freedom to roam in two-dimensional
space. A great place for Virtual Panning is on a map. Google Maps allows you to pan in any
direction by clicking the mouse down and dragging the map around.
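A small sketch of click-and-drag panning is given below; the “#viewport” and “#canvas” elements are hypothetical, and a real map would of course also load tiles as the canvas moves.

// Sketch: drag the mouse to pan a large canvas inside a fixed viewport.
const viewport = document.getElementById("viewport") as HTMLElement;
const canvas = document.getElementById("canvas") as HTMLElement;
let dragging = false;
let lastX = 0, lastY = 0, offsetX = 0, offsetY = 0;

viewport.addEventListener("mousedown", (e: MouseEvent) => {
  dragging = true;
  lastX = e.clientX;
  lastY = e.clientY;
  viewport.style.cursor = "grabbing";   // the "hand" cue that panning has started
});

window.addEventListener("mousemove", (e: MouseEvent) => {
  if (!dragging) return;
  offsetX += e.clientX - lastX;
  offsetY += e.clientY - lastY;
  lastX = e.clientX;
  lastY = e.clientY;
  canvas.style.transform = `translate(${offsetX}px, ${offsetY}px)`;
});

window.addEventListener("mouseup", () => {
  dragging = false;
  viewport.style.cursor = "grab";
});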
Start drag: Clicking and holding down changes the cursor into a hand (signifying panning).
5.4.26 Zoomable User Interface
A Zoomable User Interface (ZUI) is another way to create a virtual canvas. Unlike
panning or flicking through a flat, two-dimensional space, a ZUI allows the user to also zoom
in to elements on the page. This freedom of motion in both 2D and 3D supports the concept of
an infinite interface. This type of interface is starting to emerge and may be commonplace in the
not-too-distant future.
Zoomed-out: At the zoomed-out level the user can see thumbnails of the total collection.
Zooming in: By using the mouse thumb-wheel, the user can zoom in (it is like flying) on any object.
Detail stitched in: As the user gets closer and closer, more detail is stitched in.
Aza Raskin, son of the late Jef Raskin (who pioneered many of the original ZUI concepts),
is continuing to experiment with user interfaces that push the current norms. He demonstrated
some potential ZUI interactions in a concept demo for Firefox on mobile devices.
Zoomed-in to content: This browser page lives on a large canvas. This view is zoomed in to the page.
Slide over: Using a panning technique, the page is pulled to the right, revealing a hidden toolbar on the left.
Zoomed-out: The canvas can contain many “Tabs”, or windows.
Add a Tab: Clicking the plus sign adds a new Tab. The interface zooms out slightly, revealing a new window being created in the bottom left of the canvas.
Zooming in to new Tab
5.4.27 Paging versus Scrolling
Yahoo! Mail chose Virtual Scrolling. Gmail chose Inline Paging.
•• When the data feels “more owned” by the user – in other words, the data is not
transient but something users want to interact with in various ways. If they want to
sort it, filter it, and so on, consider Virtual Scrolling (as in Yahoo! Mail).
•• When the data is more transient (as in search results) and will get less and less
relevant the further users go in the data, Inline Paging works well (as with the
iPhone).
•• For transient data, if you don’t care about jumping around in the data to specific
sections, consider using Virtual Scrolling (as in Live Image Search).
•• If you are concerned about scalability and performance, paging is usually the best
choice. Originally Microsoft’s Live Web Search also provided a scrollbar. However,
the scrollbar increased server-load considerably since users are more likely to scroll
than page.
•• If the content is really continuous, scrolling is more natural than paging.
•• If you get your revenue by page impressions, scrolling may not be an option for your
business model.
•• If paging causes actions for the content to become cumbersome, move to a scrolling
model. This is an issue in Gmail. The user can only operate on the current page.
Changing items across page boundaries is unexpected. Changing items in a
continuous scrolled list is intuitive.
5.5 PROCESS FLOW
In the last chapters we’ve been discussing the principle Stay on the Page. Sometimes
tasks are unfamiliar or complicated and require leading the user step-by-step through a Process
Flow. It has long been common practice on the Web to turn each step into a separate page. While
this may be the simplest way to break down the problem, it may not lead to the best solution. For
some Process Flows it makes sense to keep the user on the same page throughout the process.
5.5.1 Google Blogger
The popular site Google Blogger generally makes it easy to create and publish blogs.
One thing it does not make easy, though, is deleting comments that others may leave on your
blog. This is especially difficult when you are the victim of hundreds of spam comments left by
nefarious companies hoping to increase their search ranking.
Blogger forces you to delete these comments through a three-step process
My (Bill’s) blog site was recently spammed. It turns out that my 100 or so articles all
had 4 or more spam comments. That means that I had to delete more than 400 spam comments.
Given the way Google Blogger implemented comment deleting, I had to follow these steps for
each comment on each blog article:
(1) Scroll to find the offending comment.
(2) Click the trash icon to delete the comment.
(3) After page refreshes, click the “Remove Forever” checkbox.
(4) Click the “Delete Comment” button.
(5) After the page refreshes, click the link to return to my blog article.
(6) Repeat steps 1–5 for each article with spam comments.
It took 1,600 clicks, 1,200 page refreshes, 400 scroll operations, and several hours to
finally rid myself of all of the spam comments. If the delete action could have been completed
in the same page as the comments, that would have eliminated hundreds of clicks and well over
a thousand page refreshes, and scrolling would have been all but eliminated. I would not have
wasted all the mental energy to reorient myself after each page transition. And I would have
been a much happier man.
This is a common interaction flow on the Web. It turns out to be simpler to design and
implement a process as a series of pages rather than a single interactive space.
5.5.3 Process Flow patterns:
•• Interactive Single-Page Process
•• Inline Assistant Process
•• Configurator Process
•• Overlay Process
•• Static Single-Page Process
5.5.4 Interactive Single-Page Process
Consumer products come in a variety of shapes, sizes, textures, colors, etc. Online
shoppers will not only have to decide that they want shoes, but do they want blue suede shoes?
And what size and width do they want them in? In the end the selection is constrained by the
available inventory. As the user makes decisions, the set of choices gets more and more limited.
This type of product selection is typically handled with a multi-page workflow. On one page, the
user selects a shirt and its color and size. After submitting the choice, a new page is displayed.
Only when the user arrives at this second page does he find out that the “true navy” shirt is not
available in the medium size.
The Gap accomplishes this kind of product selection in a single page using Interactive
Single-Page Process. The purple shirt is available in all sizes from XS to XXXL. Hovering
over the dark blue shirt immediately discloses that this color is only available in XS and S sizes.
The Gap uses Interactive Single-Page Process to reflect the sizes for each product color
choice in real time.
The idea is the same: make the experience for selecting a product painless by providing
inventory disclosures as quickly as possible, and doing it all in a single-page interface.
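A minimal sketch of this kind of in-page inventory disclosure is shown below; the inventory data, the “.swatch”/“.size” class names, and the data-color attribute are hypothetical and stand in for whatever a real catalog would supply.

// Sketch: grey out unavailable sizes as the shopper hovers over a color swatch.
const inventory: Record<string, string[]> = {
  purple: ["XS", "S", "M", "L", "XL", "XXL", "XXXL"],
  "dark-blue": ["XS", "S"],                 // only the small sizes are left
};

document.querySelectorAll<HTMLElement>(".swatch").forEach((swatch) => {
  swatch.addEventListener("mouseenter", () => {
    const sizes = inventory[swatch.dataset.color ?? ""] ?? [];
    document.querySelectorAll<HTMLElement>(".size").forEach((btn) => {
      const available = sizes.includes(btn.textContent?.trim() ?? "");
      btn.classList.toggle("unavailable", !available);   // CSS would grey these out
    });
  });
});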
5.5.5 Keeping users engaged
Broadmoor Hotel uses Interactive Single-Page Process for room reservations.
Date selection: In the first column, the calendar reveals available dates. Check-in and check-out dates can be chosen from a calendar. Available room types are shown for the date and number of people.
Room and payment: Each column represents what would normally be presented on a separate page. In the first column, a calendar discloses availability up front. This prevents scheduling errors. Selecting the room from the second column updates both the room picture and the pricing.
5.5.6 Benefits
Adobe calls out the Broadmoor one-page reservation interface in its Adobe Showcase.
It states the benefits of this method:
•• Reduces entire reservation process to a single screen.
•• Reduces the number of screens in the online reservation process from five to one.
Other online reservation applications average 5 to 10 screens.
•• Seventy-five percent of users choose One Screen in favor of the HTML version.
•• Allows users to vary purchase parameters at will and immediately view results.
•• Reduces the time it takes to make a reservation from at least three minutes to less
than one.
•• Additionally, Adobe notes that conversion rates (users who make it through the
reservation process) are much higher with the Interactive Single-Page Process.
•• Inline Assistant Process
Another common place where multiple pages are used to complete a process is when
adding items to a shopping cart.
•• Dialog Overlay Process to encapsulate a multi-step flow inside a Dialog Overlay.
5.5.7 Configurator Process
Sometimes a Process Flow is meant to invoke delight. In these cases, it is the engagement
factor that becomes most important. This is true with various Configurator Process interfaces
on the Web. We can see this especially at play with car configurators
5.5.8 Out of view status
Apple has a Configurator Process for purchasing a Macintosh computer.
5.5.9 Static Single-Page Process
The Apple example illustrates another way to get rid of multiple pages in a Process
Flow. Just put the complete flow on one page in a Static Single-Page Process. The user sees all
the tasks needed to complete the full process.
5.5.10 An Invitation
Invitations are the prompts and cues that lead users through an interaction.
They often include just-in-time tips or visual affordances that hint at what will happen
next in the interface. Providing an invitation to the user is one of the keys to successful interactive
interfaces.
In the Flickr example, the designers chose to reveal the feature during mouse hover.
This Invitation is revealed just in time, when the mouse is paused over the editable area. The
downside is that the feature is not visible when the mouse is not over the area. The other choice
is to make the Invitation visible at all times.
5.6 CASE STUDY- THE MAGIC PRINCIPLE
Alan Cooper discusses a wonderful technique for getting away from a technology-
driven approach and discovering the underlying mental model of the user. He calls it the “magic
principle.” Ask the question, “What if when trying to complete a task the user could invoke
some magic?” For example, let’s look at the problem of taking and sharing photos. The process
for this task breaks down like this:
•• Take pictures with a digital camera.
•• Sometime later, upload the photos to a photo site like Flickr. This involves:
•• Finding the cable.
•• Starting iTunes.
•• Importing all photos.
•• Using a second program, such as Flickr Uploadr, to upload the photos to Flickr.
•• Copying the link for a Flickr set.
•• Sending the link in email to appropriate friends.
If some magic were invoked, here is how it might happen:
•• The camera would be event-aware. It would know that it is your daughter’s eighth
birthday.
•• When finished taking pictures of the event, the camera would upload the pictures to
Flickr.
•• Flickr would notify family and friends that the pictures of the birthday party are
available.
Hiding complexity through ingenious mechanical doors or tiny display screens is an
overt form of deception. If the deceit feels less like malevolence, more like magic, then hidden
complexities become more of a treat than a nuisance.
REVIEW QUESTIONS
WITH ANSWERS
PART A – 2 MARKS
1. Write short notes on Drag and Drop.
One of the interaction idioms the Macintosh brought to the world in 1984 was Drag and Drop.
There are at least 15 events available for cueing the user during a drag and drop interaction.
2. List out the major Events during drag and drop design.
•• Page Load
•• Mouse Hover
•• Mouse Down
•• Drag Initiated
•• Drag Leaves Original Location
•• Drag Re-Enters Original Location
•• Drop Accepted
•• Drop Rejected
•• Drop on Parent Container
3. Define The Actors.
During each event you can visually manipulate a number of actors. The page elements
available include:
•• Page (e.g., static messaging on the page)
•• Cursor
•• Tool Tip
•• Drag Object (or some portion of the drag object, e.g., title area of a module)
•• Drag Object’s Parent Container
•• Drop Target
4. Define the Moments Grid for Drag and Drop.
The grid is a handy tool for planning out interesting moments during a drag and drop
interaction. It serves as a checklist to make sure there are no “holes” in the interaction. Just
place the actors along the left-hand side and the moments along the top. In the grid intersections,
place the desired behaviors.
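A minimal sketch of such a grid as a data structure, assuming only a small subset of actors and moments; the behaviors placed in the cells are illustrative examples, not prescriptions from the source.

// The moments grid: actors down the side, moments across the top,
// desired behaviors in the intersections. Names and behaviors are examples.
type Moment = "Page Load" | "Mouse Hover" | "Drag Initiated" | "Drop Accepted";
type Actor  = "Cursor" | "Drag Object" | "Drop Target";

const momentsGrid: Record<Actor, Partial<Record<Moment, string>>> = {
  "Cursor": {
    "Mouse Hover":    "change to a 'move' cursor",
    "Drag Initiated": "show a grabbing cursor",
  },
  "Drag Object": {
    "Drag Initiated": "render a semi-transparent ghost",
    "Drop Accepted":  "snap into the new position",
  },
  "Drop Target": {
    "Page Load":      "no special treatment",
    "Drag Initiated": "highlight valid targets",
    "Drop Accepted":  "insert a placeholder, then the module",
  },
};

// A quick checklist pass: report which moments still have no behavior defined.
const allMoments: Moment[] = ["Page Load", "Mouse Hover", "Drag Initiated", "Drop Accepted"];
for (const actor of Object.keys(momentsGrid) as Actor[]) {
  const missing = allMoments.filter(m => !(m in momentsGrid[actor]));
  if (missing.length) console.log(`${actor}: no behavior for ${missing.join(", ")}`);
}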
5. State the purpose of Drag and Drop.
Drag and drop can be a powerful idiom if used correctly. Specifically it is useful for:
•• Drag and Drop Module-Rearranging modules on a page.
•• Drag and Drop List-Rearranging lists.
•• Drag and Drop Object-Changing relationships between objects.
•• Drag and Drop Action-Invoking actions on a dropped object.
•• Drag and Drop Collection-Maintaining collections through drag and drop.
6. What do you mean by Drag distance?
Dragging a small thumbnail of the object has its own issues: since the object being dragged is small, it does not intersect a large area, so the user must move the thumbnail directly to the place where it will be dropped. With iGoogle, the complete module is dragged instead.
•• Drag rendering-The transparency effect communicates that the object being
dragged is actually a representation of the dragged object.
•• Placeholder targeting - Most explicit way to preview the effect.
•• Midpoint boundary-Requires the least drag effort to move modules around.
•• Full-size module dragging-Coupled with placeholder targeting and midpoint
boundary detection.
•• Ghost rendering-Emphasizes the page rather than the dragged object and keeps the
preview clear.
7. How will you design a form to drag and drop?
8. What do you mean by Non–drag and drop alternative?
Besides drag and drop, the Netflix queue actually supports two other ways to move
objects around:
•• Edit the row number and then press the “Update DVD Queue” button.
•• Click the “Move to Top” icon to pop a movie to the top.
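A minimal sketch of the "edit the row number" alternative, assuming a simple in-memory queue; the Movie type and the updateQueue function are hypothetical names, not Netflix's own code.

// Reordering a queue by edited row numbers instead of drag and drop.
interface Movie { title: string; position: number; }

let queue: Movie[] = [
  { title: "Movie A", position: 1 },
  { title: "Movie B", position: 2 },
  { title: "Movie C", position: 3 },
];

function updateQueue(edits: { title: string; newPosition: number }[]): Movie[] {
  for (const edit of edits) {
    const movie = queue.find(m => m.title === edit.title);
    if (movie) movie.position = edit.newPosition;
  }
  // Sort by the edited positions, then renumber 1..n.
  queue = [...queue]
    .sort((a, b) => a.position - b.position)
    .map((m, i) => ({ ...m, position: i + 1 }));
  return queue;
}

// "Move to Top" is just a special case of the same edit.
updateQueue([{ title: "Movie C", newPosition: 0 }]);   // pops Movie C to the top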
9. Define Drag lens
Drag and drop works well when a list is short or the items are all visible on the page, but when the list is long, drag and drop becomes painful. A drag lens provides an alternative, condensed or magnified view of the list while dragging, so that items can be moved long distances with little effort. A good example is the magnifying lens shown while dragging the insertion bar when editing text on the iPhone.
•• Drag and Drop Object is used to rearrange members of the organization.
•• Normal display state: An organizational chart visually represents relationships.
10. How will you define Direct Selection?
The ability to directly select objects and apply actions to them. Treating elements in
the interface as directly selectable is a clear application of the Make It Direct principle. On the
desktop, the most common approach is to initiate a selection by directly clicking on the object
itself. We call this selection pattern Object Selection.
•• Toggle Selection: Checkbox or control-based selection.
•• Collected Selection: Selection that spans multiple pages.
•• Object Selection: Direct object selection.
•• Hybrid Selection: Combination of Toggle Selection and Object Selection.
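A minimal sketch of Toggle Selection, assuming checkboxes carrying the class select-item and a delete-selected button; both names are hypothetical.

// Toggle Selection: checkboxes collect a selection, an action is then applied to it.
const selected = new Set<string>();

document.querySelectorAll<HTMLInputElement>("input.select-item").forEach(box => {
  box.addEventListener("change", () => {
    if (box.checked) selected.add(box.value);
    else selected.delete(box.value);
  });
});

document.getElementById("delete-selected")!.addEventListener("click", () => {
  selected.forEach(id => document.getElementById(id)?.remove());   // act on the whole selection
  selected.clear();
});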
11. Write short notes on Hybrid Selection
Mixing Toggle Selection and Object Selection in the same interface can lead to confusion.
Clicking and dragging on the unselected bookmark element initiates a drag.
The drag includes both the selected element and the unselected element. Since only one
is shown as selected, this creates a confusing situation.
This occurs because three things are happening in the same space:
•• Toggle Selection is used for selecting bookmarks for editing, deleting, etc.
•• Object Selection is used for initiating a drag and drop.
•• Mouse click is used to open the bookmark on a separate page.
12. Define Fitts’s Law
Fitts’s Law is an ergonomic principle that ties the size of a target and its contextual
proximity to ease of use.
Bruce Tognazzini restates it simply as “The time to acquire a target is a function of the
distance to and size of the target”. In other words, if a tool is close at hand and large enough to
target, then we can improve the user’s interaction. Putting tools in context makes for lightweight
interaction.
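Although the answer above does not quote the formula, Fitts's Law is commonly written in the Shannon formulation, where MT is the movement time, D the distance to the target, W the target width, and a and b empirically fitted constants:

MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

For example, taking the illustrative values a = 0.1 s, b = 0.1 s/bit, D = 400 px and W = 50 px gives MT ≈ 0.1 + 0.1 log2(9) ≈ 0.42 s; halving the distance or doubling the target width lowers the index of difficulty and therefore the time to acquire the target.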
13. List out some of the Contextual Tools
Contextual Tools are the Web’s version of the desktop’s right-click menus.
Instead of having to right-click to reveal a menu, we can reveal tools in context with the
content. We can do this in a number of ways:
•• Always-Visible Tools - Place Contextual Tools directly in the content.
•• Hover-Reveal Tools - Show Contextual Tools on mouse hover.
•• Toggle-Reveal Tools - A master switch toggles Contextual Tools on or off for the page.
•• Multi-Level Tools - Progressively reveal actions based on user
interaction.
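As a minimal sketch (not from the source) of the Toggle-Reveal variant, a single master switch shows or hides the contextual tools for the whole page; the edit-mode-toggle id and contextual-tools class are hypothetical.

// Toggle-Reveal Tools: one master switch controls all contextual tools on the page.
const masterSwitch = document.getElementById("edit-mode-toggle") as HTMLInputElement;

masterSwitch.addEventListener("change", () => {
  document.querySelectorAll<HTMLElement>(".contextual-tools").forEach(tools => {
    tools.hidden = !masterSwitch.checked;   // tools stay out of the way until asked for
  });
});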
14. Describe Overlays.
Overlays are really just lightweight pop ups. We use the term lightweight to make a clear
distinction between them and the normal idea of a browser pop up.
•• Browser pop ups display a new browser window. As a result these windows often
take time and a sizeable chunk of system resources to create.
•• Browser pop ups often display browser interface controls (e.g., a URL bar). Due to
security concerns, in Internet Explorer 7 the URL bar is a permanent fixture on any
browser pop-up window.
•• Lightweight overlays are just lightweight in-page objects. They are inexpensive to
create and fast to display.
•• The interface for lightweight overlays is controlled by the web application and not
the browser.
•• There is complete control over the visual style for the overlay. This allows the
overlay to be more visually integrated into the application’s interface.
15. Write a note on Lightbox Effect
In photography a lightbox provides a backlit area to view slides.
On the Web, this technique has come to mean bringing something into view by making
it brighter than the background. In practice, this is done by dimming down the background.
You can see the Lightbox Effect pattern used by Flickr when rotating images.
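A minimal sketch that combines a lightweight overlay with the Lightbox Effect: the overlay is an ordinary in-page element styled by the application, and the page behind it is dimmed with a semi-transparent backdrop. The openOverlay name, ids, and inline styles are hypothetical.

// Lightweight overlay with a dimmed (lightbox) background.
function openOverlay(contentHtml: string): void {
  const backdrop = document.createElement("div");
  backdrop.style.cssText =
    "position:fixed;inset:0;background:rgba(0,0,0,0.6);";   // dim the page behind

  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;top:20%;left:50%;transform:translateX(-50%);" +
    "background:#fff;padding:1em;";                         // styled by the app, not the browser
  overlay.innerHTML = contentHtml;
  overlay.addEventListener("click", e => e.stopPropagation());

  backdrop.addEventListener("click", () => backdrop.remove());  // click outside to dismiss
  backdrop.appendChild(overlay);
  document.body.appendChild(backdrop);
}

openOverlay("<p>Rotate this image?</p>");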
16. Compare Inlay & Overlay methods.
•• Use an overlay when there may be more than one place a dialog can be activated
from.
•• Use an overlay to interrupt the process.
•• Use an overlay if there is a multi-step process.
•• Use an inlay when you are trying to avoid covering information on the page needed
in the dialog.
•• Use an inlay for contextual information or details about one of many items (as in a
list): a typical example is expanding list items to show detail.
17. Compare and contrast Paging and Scrolling.
Yahoo! Mail chose Virtual Scrolling. Gmail chose Inline Paging.
•• When the data feels “more owned” by the user—in other words, the data is not
transient but something users want to interact with in various ways. If they want to
sort it, filter it, and so on, consider Virtual Scrolling (as in Yahoo! Mail).
•• When the data is more transient (as in search results) and will get less and less
relevant the further users go in the data, Inline Paging works well (as with the
iPhone).
•• For transient data, if you don’t care about jumping around in the data to specific
sections, consider using Virtual Scrolling (as in Live Image Search).
•• If you are concerned about scalability and performance, paging is usually the best
choice. Originally Microsoft’s Live Web Search also provided a scrollbar. However,
the scrollbar increased server-load considerably since users are more likely to scroll
than page.
•• If the content is really continuous, scrolling is more natural than paging.
•• If you get your revenue by page impressions, scrolling may not be an option for your
business model.
•• If paging causes actions for the content to become cumbersome, move to a scrolling
model. This is an issue in Gmail. The user can only operate on the current page.
Changing items across page boundaries is unexpected. Changing items in a
continuous scrolled list is intuitive.
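A minimal sketch of Virtual Scrolling, assuming a fixed row height and a fixed-height scrollable container with the id viewport; only the rows currently in view are rendered, and the rendered window moves as the user scrolls. The row height, id, and data are hypothetical.

// Virtual Scrolling: render only the visible slice of a long list.
const ROW_HEIGHT = 24;   // px, assumed fixed for simplicity
const allRows: string[] = Array.from({ length: 10000 }, (_, i) => `Message ${i + 1}`);

const viewport = document.getElementById("viewport")!;     // fixed height, overflow: auto
const spacer = document.createElement("div");              // keeps the scrollbar proportional
spacer.style.position = "relative";
spacer.style.height = `${allRows.length * ROW_HEIGHT}px`;
const windowEl = document.createElement("div");            // holds only the visible rows
windowEl.style.position = "absolute";
windowEl.style.left = "0";
windowEl.style.right = "0";
spacer.appendChild(windowEl);
viewport.appendChild(spacer);

function renderWindow(): void {
  const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  const count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
  windowEl.style.top = `${first * ROW_HEIGHT}px`;
  windowEl.innerHTML = allRows
    .slice(first, first + count)
    .map(text => `<div style="height:${ROW_HEIGHT}px">${text}</div>`)
    .join("");
}

viewport.addEventListener("scroll", renderWindow);
renderWindow();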
18. Discuss about Process Flow
In the last chapters we’ve been discussing the principle Stay on the Page. Sometimes
tasks are unfamiliar or complicated and require leading the user step-by-step through a Process
Flow. It has long been common practice on the Web to turn each step into a separate page. While
this may be the simplest way to break down the problem, it may not lead to the best solution. For
some Process Flows it makes sense to keep the user on the same page throughout the process.
19. Write a short note on Configurator Process
Sometimes a Process Flow is meant to invoke delight. In these cases, it is the engagement
factor that becomes most important. This is true with various Configurator Process interfaces
on the Web. We can see this especially at play with car configurators.
Apple has a Configurator Process for purchasing a Macintosh computer.
20. State the uses of The Magic Principle
Alan Cooper discusses a wonderful technique for getting away from a technology-
driven approach and discovering the underlying mental model of the user. He calls it the “magic
principle.” Ask the question, “What if, when trying to complete a task, the user could invoke
some magic?” For example, let’s look at the problem of taking and sharing photos. The process
for this task breaks down like this:
•• Take pictures with a digital camera.
•• Sometime later, upload the photos to a photo site like Flickr. This involves:
•• Finding the cable.
•• Starting iPhoto.
•• Importing all photos.
•• Using a second program, such as Flickr Uploadr, to upload the photos to Flickr.
•• Copying the link for a Flickr set.
•• Sending the link in an email to appropriate friends.
If some magic were invoked, here is how it might happen:
•• The camera would be event-aware. It would know that it is your daughter’s eighth
birthday.
•• When finished taking pictures of the event, the camera would upload the pictures to
Flickr.
•• Flickr would notify family and friends that the pictures of the birthday party are
available.
PART B – 16 MARKS
1. Write down the steps to design Web Interface
2. Explain in detail about Direct Selection methods.
3. Discuss in detail about Contextual Tools-Interaction in Context
4. Explain in detail about Contextual toolbar
5. Discuss in detail about Overlays
6. Explain in detail about Anti-pattern: Mouse Traps
7. Write in detail about the Patterns that support virtual pages
8. Write in detail about Process Flow
9. Explain the Magic Principle in detail.
Question Paper Code : 20379
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2018
Eighth Semester
Computer Science and Engineering
CS 6801 – MULTI-CORE ARCHITECTURES AND PROGRAMMING
(Regulations 2013)
(Common to PTCS 6801 – Multi-Core Architectures and Programming for B.E. (Part-
Time) – Computer Science and Engineering – Regulations 2014)
Time : Three hours Maximum : 100 marks
Answer ALL questions.
PART A – (10 × 2 = 20 marks)
1. State Amdahl’s law
2. What is symmetric shared memory?
3. List down the various synchronization primitives in parallel programming
4. Compare deadlock and livelock in terms of resource reservation
5. State the trapezoidal rule in OpenMP
6. What are loop-carried dependencies?
7. Write a note on distributed memory machines
8. How to compile an MPI program?
9. Name any two OpenMP environment variables
10. List any two data scoping clauses in OpenMP
PART B – (5 × 16 = 80 marks)
11. (a) (i) Outline the distributed shared-memory architecture with a diagram (8)
(ii) Present an outline of parallel program design (8)
Or
(b) Highlight the limitations of single core processors and outline how multicore
architectures overcome these limitations (16)
12. (a) What is deadlock? Explain the four conditions for deadlock and present an
example for deadlock in a parallel computing environment (16)
Or
(b) (i) Outline the critical section problem with an example (6)
(ii) Explain how semaphores can be used to accomplish mutual exclusion of
parallel-process synchronization with an example. (10)
13. (a) (i) Outline the OpenMP execution model (8)
(ii) Discuss OpenMP directives with relevant examples (8)
Or
(b) (i) What is loop-carried dependence? Explain with an example. (8)
(ii) Outline with an example the use of the greatest common divisor test to
determine whether dependences exist in a loop. (8)
14. (a) Explain the structure of an MPI program with an example (16)
Or
(b) (i) Outline collective vs. point-to-point communications in MPI with an example (8)
(ii) What is an MPI derived data type? How do you create an MPI derived data
type? Give any two examples (8)
15. (a) Outline the process of parallelizing depth-first search algorithm using
OpenMP with an example (16)
Question Paper Code : 20346
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2018
Eighth Semester
Computer Science and Engineering
CS 6008 – HUMAN COMPUTER INTERACTION
(Common to Information Technology)
(Regulations 2013)
(Also Common to PTCS 6008 – Human Computer Interaction for B.E. (Part-Time)
Seventh Semester– Computer Science and Engineering – Regulations 2014)
Time : Three hours Maximum : 100 marks
Answer ALL questions.
PART A – (10 × 2 = 20 marks)
1. What is deductive reasoning?
2. List the factors that can limit the speed of an interactive system
3. Identify the steps involved in interaction design process
4. Write down the techniques used for prototyping
5. What is Task-Action Grammar (TAG)?
6. Compare the primary and secondary stakeholders
7. Give some examples of World’s largest mobile operators
8. Define color palette
9. List any four principles of designing rich web interface
10. What do you mean by inlay?
PART B – (5 × 16 = 80 marks)
11. (a) (i) Explain the framework of Human Computer Interaction (10)
(ii) Highlight the features of direct manipulation interface (6)
Or
(b) (i) Discuss the technologies involved in display devices (8)
(ii) Brief about common interface styles used in interactive system (8)
12. (a) (i) Explain the visual tools available for screen design and layout (8)
(ii) Outline the activities involved in waterfall model of software life cycle.
(8)
Or
(b) (i) List and explain the factors that influence the choice of an evaluation
method (8)
(ii) Enumerate Norman’s seven principles for transforming difficult tasks into
simple ones in design (8)
13. (a) (i) Explain the concept of the keystroke-level model (KLM). (8)
(ii) Describe the stages of Open System Task Analysis (OSTA) (8)
Or
(b) (i) What are the four types of textual communication? Explain. (8)
(ii) Write a note on dynamic web content (8)
14. (a) (i) Describe the roles of major mobile operating systems (8)
(ii) Tabulate the various mobile design tools and interface toolkits (8)
Or
(b) Elaborate on mobile application medium types. (16)
15. (a) (i) Write notes on contextual tools (8)
(ii) Brief about the different types of overlays (8)
Or
(b) Explain the Steps involved in designing a web interface (16)