Hci Mid-1

HUMAN COMPUTER INTERACTION | fundamentals of cognitive computing and applications of cognitive computing

Q1. Devise experiments to test the properties of (i) short-term memory, (ii) long-term memory
Human memory plays a vital role in Human–Computer Interaction (HCI) since it directly
influences how users retain commands, learn interfaces, and recall system operations.
Cognitive psychology divides memory into short-term memory (STM) and long-term
memory (LTM), each with distinct properties. To understand these, experiments can be
devised.
Short-Term Memory Experiments:
Short-term memory, also called working memory, holds limited information temporarily. Its
capacity is about 7±2 chunks of information (Miller’s Law), and information decays rapidly,
usually within 20–30 seconds unless rehearsed.
• Digit Span Test: A common experiment involves reading sequences of random digits
(e.g., 5–10 numbers) and asking subjects to recall them in order. Increasing sequence
length helps identify STM capacity. For instance, most individuals recall 7 digits
accurately, while performance drops beyond that.
• Recency Effect Experiment: Present participants with a list of words, then test recall
immediately. They tend to remember the last few items better due to STM storage.
Delaying recall for 30 seconds while preventing rehearsal reduces this effect, showing
STM’s temporal limitations.
• Chunking Demonstration: Present the same twelve digits either as an unstructured
string (198419991947) or grouped into meaningful chunks (1984 1999 1947). Chunked
sequences are remembered more efficiently, showing that STM capacity can be
extended through cognitive strategies.
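The digit span procedure above can be sketched as a small program. This is a minimal illustration; the function names and the simulated "participants" are hypothetical, and a real study would record responses from human subjects:

```python
import random

def digit_span_trial(length, recall):
    """Run one digit-span trial: generate a random digit sequence of the
    given length, ask the recall function to reproduce it, and report
    whether the reproduction was exact (serial recall)."""
    sequence = [random.randint(0, 9) for _ in range(length)]
    return recall(sequence) == sequence

def estimate_span(recall, max_length=12):
    """Increase sequence length until recall fails; the last successful
    length approximates the participant's digit span."""
    span = 0
    for length in range(3, max_length + 1):
        if digit_span_trial(length, recall):
            span = length
        else:
            break
    return span

# A perfect "participant" for demonstration; real data would come from
# a human subject, and spans would cluster around 7 +/- 2.
perfect_recall = lambda seq: list(seq)
print(estimate_span(perfect_recall))  # -> 12 (no capacity limit here)
```

Running the same procedure with chunked presentation (grouping digits before display) would be the natural extension for the chunking demonstration.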
Long-Term Memory Experiments:
Long-term memory stores information over extended periods, potentially lifelong. It has
enormous capacity but slower retrieval compared to STM.
• Recognition vs. Recall: Present participants with a list of words. After some time, ask
them to freely recall the words, then present a recognition test with both old and new
words. Recognition tends to be more accurate, demonstrating differences in retrieval
mechanisms.
• Forgetting Curve (Ebbinghaus): An experiment can involve memorizing nonsense
syllables and testing recall after varying intervals (minutes, hours, days). Results show
rapid forgetting initially, then slower decay, forming the “forgetting curve.”
• Semantic Memory Test: Provide meaningful information (facts or concepts) and
episodic information (personal events). Over time, episodic details fade faster, while
semantic memory remains stronger, highlighting structural differences within LTM.
• Interference Experiment: Subjects learn one word list, then a second list. Recall of
the first declines due to retroactive interference. Similarly, proactive interference can
occur when earlier knowledge disrupts recall of new information.
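Ebbinghaus's forgetting curve is commonly modelled as exponential decay, R = e^(-t/S). A minimal sketch of that model follows; the stability constant S is an illustrative placeholder, not a fitted value:

```python
import math

def retention(t_hours, stability=1.2):
    """Ebbinghaus-style retention curve R = e^(-t/S): the fraction of
    learned material still recallable after t hours. Larger S (e.g.,
    after rehearsal) means slower forgetting."""
    return math.exp(-t_hours / stability)

# Rapid forgetting at first, then slower decay:
for t in (0, 1, 24, 48):
    print(f"after {t:>2} h: {retention(t):.2f}")
```

The shape, not the exact numbers, is the point: most forgetting happens soon after learning, which is why interfaces reinforce knowledge through repetition and feedback.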
Applications in HCI:
Understanding STM and LTM is critical for interface design. STM limitations explain why
menus, icons, and consistent layouts are preferred over memorized command-line inputs.
LTM insights justify the use of consistent metaphors, repeated exposure, and training to
help users build durable knowledge. Systems should minimize cognitive load by offloading
memory requirements onto recognition-based interfaces.
Conclusion:
Experiments with STM reveal its limited capacity and rapid decay, while LTM experiments
highlight its vast capacity but vulnerability to interference. Together, these insights inform
HCI design principles such as reducing reliance on memory, supporting recognition over
recall, and ensuring consistency for long-term learning.

Q2. What are mental models, and why are they important in interface
design?
A mental model is an internal cognitive representation that people form about how systems
work based on prior experience, observation, and interaction. In HCI, mental models help
users predict system behavior and guide their actions when using technology. These models
may not reflect the actual underlying system but influence usability.
Nature of Mental Models:
• Users create models from past interactions with similar devices (e.g., expecting a
“trash can” icon to delete files).
• They can be accurate (when system design aligns with real-world metaphors) or
inaccurate (when design is inconsistent or counterintuitive).
• Mental models simplify complex systems, allowing users to function without full
technical knowledge.
Importance in Interface Design:
1. Predictability: A good mental model enables users to anticipate outcomes. For
instance, dragging a file into a folder is expected to move it, not copy it. Interfaces
that support such models reduce errors.
2. Learnability: When interfaces match users’ pre-existing models, they are easier to
learn. A calculator app resembles a physical calculator, leveraging prior knowledge.
3. Error Reduction: Misaligned models cause errors. If a “Save” button is hidden under
non-obvious menus, users may wrongly assume auto-save. Aligning design with user
expectations minimizes mistakes.
4. Consistency: Consistent metaphors and layouts reinforce models, aiding memory. For
example, the “undo” function is expected in most software, regardless of domain.
5. Adaptability: Mental models evolve. Designers must provide feedback (visual,
auditory, or haptic cues) that helps users refine their models during interaction.
Examples in HCI:
• Desktop Metaphor: Early graphical user interfaces (GUIs) used folders, files, and
trash cans to reflect real office environments. This mental model helped users
transition from paper-based work to computers.
• Web Navigation: Users assume blue underlined text is a hyperlink. Violating this
expectation confuses them.
• Smartphones: The “swipe” gesture is now widely understood as a model for
unlocking screens or deleting items.
• Voice Assistants: Some users assume conversational agents understand natural
human reasoning, but limitations often break these models, leading to frustration.
Design Implications:
Interface designers should perform user studies to understand existing mental models and
ensure system metaphors align with them. Where new models must be introduced, designers
should provide progressive disclosure, training aids, and consistent feedback. Usability
testing helps reveal mismatches between designer and user models, often described as the
gulfs of execution and evaluation (Norman's model).
Conclusion:
Mental models are central to HCI because they shape user expectations and behavior.
Designing interfaces that match user models enhances usability, efficiency, and satisfaction.
When mismatches occur, errors and frustration rise. Hence, designers must carefully consider
users’ cognitive frameworks while building interactive systems.

Q3a. Explain about Reasoning and Problem Solving Methods


Reasoning and problem solving are essential cognitive functions that influence how users
interact with computer systems. In HCI, understanding these processes helps designers create
interfaces that support human thought patterns and reduce cognitive effort.
Reasoning:
Reasoning is the mental process of drawing conclusions, making inferences, and generating
solutions from available information. There are three main types:
1. Deductive Reasoning:
o Based on logic and guarantees correctness if premises are true.
o Example: If all files in a folder are images, and this file is in the folder, then it
must be an image.
o Deduction is useful in structured environments like databases or programming.
2. Inductive Reasoning:
o Generalizes from specific cases to broader rules.
o Example: If a user notices that clicking a floppy disk icon always saves their
work, they infer the general rule that the disk symbol means “save.”
o Induction underpins how users learn systems through repeated exposure.
3. Abductive Reasoning:
o Infers the most likely explanation for an observation.
o Example: If a printer error occurs, a user might guess it is due to paper jams,
based on prior experience.
o This is common in troubleshooting system errors.
Problem Solving Methods:
Problem solving is applying reasoning to achieve goals when faced with obstacles. HCI
borrows models from psychology to explain how users approach problems.
1. Gestalt Theory:
o Claims problem solving is both reproductive (using past knowledge) and
productive (insightful restructuring).
o Example: A user stuck on a form may suddenly realize they must scroll down
to find the missing “Submit” button — a moment of insight.
2. Problem Space Theory (Newell & Simon):
o Defines a problem space containing the initial state, goal state, and legal
operators (actions).
o Problem solving is moving from the initial state to the goal using heuristics
like means–ends analysis.
o Example: Debugging code involves reducing the difference between error
messages (current state) and a correctly running program (goal).
3. Analogical Reasoning:
o Involves mapping solutions from familiar problems to new ones.
o Example: Learning to use a spreadsheet by analogy with paper-based
accounting ledgers.
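Means–ends analysis from problem space theory above can be sketched on a toy numeric problem space; the states and operators here are purely illustrative:

```python
def means_ends_analysis(initial, goal, operators, max_steps=50):
    """Toy means-ends analysis: at each step, apply the legal operator
    that most reduces the difference between the current state and the
    goal state, stopping when no operator makes progress."""
    state, path = initial, []
    for _ in range(max_steps):
        if state == goal:
            return path
        # Choose the operator that best reduces the remaining difference.
        best = min(operators, key=lambda op: abs(op(state) - goal))
        if abs(best(state) - goal) >= abs(state - goal):
            break  # stuck: no operator narrows the gap
        state = best(state)
        path.append(state)
    return path

# "Legal moves" in this toy problem space: add 10, add 1, subtract 1.
ops = [lambda x: x + 10, lambda x: x + 1, lambda x: x - 1]
print(means_ends_analysis(0, 23, ops))  # climbs 0 -> 10 -> 20 -> 21 -> 22 -> 23
```

The same greedy difference-reduction idea is what the debugging example describes: repeatedly pick the action that shrinks the gap between the current state and the goal.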
Applications in HCI:
• Error messages must support abductive reasoning by providing clear causes and
solutions.
• Systems should allow trial-and-error learning and incremental exploration to facilitate
inductive reasoning.
• Tutorials and training should use analogies familiar to users.
• Complex systems (e.g., air traffic control) must provide structured feedback that
supports deductive problem solving.
Conclusion:
Reasoning and problem solving underpin human interaction with technology. Designers who
understand these processes can create interfaces that align with users’ thought strategies,
reduce errors, and improve overall usability.
Q3b. Explain about Processing and Networks
Processing and networks are the computational backbone of interactive systems. Their speed,
reliability, and efficiency directly influence user experience in HCI.
Processing in HCI:
A computer’s processing speed determines how quickly it responds to user actions. If too
slow, feedback delays frustrate users; if too fast, information may flash by before users can
process it. This mismatch creates usability problems.
• Slow Processing Issues:
o Input buffering may cause overshooting (e.g., cursor jumps in text editors).
o Variable response times create “icon wars,” where multiple clicks trigger
delayed responses all at once.
• Fast Processing Issues:
o Users cannot keep up if information updates too quickly, as in fast-scrolling
text or real-time dashboards.
• Moore’s Law:
o Moore observed that transistor counts (and, historically, processor
performance) roughly doubled every 18–24 months. This exponential growth
meant system capabilities rapidly outpaced human cognitive limits, shifting
HCI’s focus to managing information overload rather than scarcity.
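The doubling rule translates into a quick back-of-the-envelope calculation; the function name and default period are illustrative:

```python
def moores_law_factor(years, doubling_period_months=18):
    """Projected capability growth if performance doubles every
    doubling_period_months (the classic 18-month figure)."""
    return 2 ** (years * 12 / doubling_period_months)

print(moores_law_factor(3))   # 4.0  (two doublings in three years)
print(moores_law_factor(15))  # 1024.0 (ten doublings in fifteen years)
```

Three orders of magnitude in fifteen years is exactly the mismatch the text describes: machine capability grows exponentially while human cognitive limits stay fixed.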
Networked Computing:
Modern systems rarely operate in isolation; they rely on networks for communication,
resource sharing, and collaboration.
• Impact on HCI:
o Networks enable distributed systems, cloud services, and real-time
collaboration.
o Latency, bandwidth, and reliability directly affect user satisfaction. For
instance, video conferencing usability depends on low-latency networks.
• Limitations:
o Computation-bound: Tasks slow due to limited processor speed.
o Storage-bound: Delays caused by disk read/write times.
o Graphics-bound: Rendering complex visuals slows down interaction.
o Network-bound: Limited bandwidth hampers performance in remote
applications.
Design Implications:
• Interactive systems must balance processing speed with human capabilities. Delays
longer than a second disrupt user flow, while instant responses may overwhelm
perception.
• Progress indicators, buffering messages, and animations help bridge gaps between
human and machine speeds.
• In networked systems, redundancy, error correction, and feedback mechanisms ensure
smoother interaction.
Conclusion:
Processing and networks are invisible to most users, yet their performance shapes interface
usability. Systems that optimize response time while aligning with human cognitive speed
foster effective and satisfying user experiences.

Q4. Explain the concept of (i) Input–Output channel, (ii) The Human
Memory, (iii) Computer Memory
Human–Computer Interaction (HCI) is fundamentally about communication between the
human and the computer. This communication happens through input–output (I/O)
channels, which are supported by human memory systems and the computer’s memory
systems. Understanding these three areas is essential for designing user-centered interactive
systems.
(i) Input–Output Channel
An input–output channel is the medium through which information is exchanged between
humans and computers. For humans, input is received mainly through senses (sight, hearing,
touch) and output occurs through motor actions (speaking, moving hands, clicking, typing).
For computers, input devices (keyboard, mouse, touchscreen, sensors) and output devices
(monitor, speakers, haptic feedback) serve the same role.
• Vision: The most important input channel, where users interpret text, images, icons,
and colors. Design must consider visual angle, brightness, color contrast, and
legibility.
• Hearing: Supports interaction through alerts, alarms, speech recognition, and
multimedia. The ear’s ability to detect pitch, loudness, and timbre is leveraged in
system sounds.
• Touch: Provides haptic feedback, crucial in VR, mobile devices, and accessibility
tools like Braille displays.
• Movement: Interaction speed and accuracy depend on motor control. Fitts’ Law
explains that movement time depends on target size and distance.
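Fitts' Law is usually written MT = a + b·log2(D/W + 1). A small sketch follows; the constants a and b are device-dependent, and the values below are placeholders rather than measured data:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law (Shannon formulation): MT = a + b * log2(D/W + 1).
    distance is to the target centre, width is the target size along
    the axis of motion; a and b are empirically fitted per device."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A large, nearby target is faster to acquire than a small, distant one:
print(fitts_movement_time(distance=100, width=50))  # low difficulty, fast
print(fitts_movement_time(distance=800, width=10))  # high difficulty, slow
```

This is why design guidelines favour large click targets and place frequent controls close to where the pointer already is.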
(ii) The Human Memory
Human memory enables learning and interaction but has limitations that directly affect
usability.
• Sensory Memory: Very brief storage (milliseconds), capturing raw data from senses
like iconic (visual) and echoic (auditory) memory.
• Short-Term Memory (STM): Acts as working memory, holding 7±2 chunks of
information for a few seconds. It is crucial in recalling commands or menu items.
Designers minimize STM load using recognition-based interfaces.
• Long-Term Memory (LTM): Stores information permanently with huge capacity. It
includes:
o Episodic memory (events/experiences).
o Semantic memory (facts and concepts).
o Procedural memory (skills, e.g., typing).
Experiments such as Ebbinghaus’ forgetting curve show that memory decays over time unless
reinforced. In HCI, interfaces must reinforce learning through repetition, feedback, and
consistent design.
(iii) Computer Memory
Computer memory parallels human memory in function, though the underlying technology differs.
• Short-Term Memory (RAM): Temporary storage that supports active processes.
Volatile, fast access, typically in nanoseconds. Comparable to human working
memory.
• Long-Term Memory (Disks): Permanent storage using magnetic or optical media.
Includes hard disks, SSDs, and CD/DVD. Data retrieval is slower than RAM but has
massive capacity.
• Flash Memory: Non-volatile, bridging RAM and disk memory, widely used in
mobile devices.
• Compression and Standards: Data compression reduces storage space (e.g., text
encoding, video compression). Standards like ASCII and Unicode ensure text
compatibility across systems.
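The difference between ASCII and Unicode text, and the effect of lossless compression, can be shown in a few lines; zlib is used here purely as an illustrative compressor:

```python
import zlib

# ASCII covers only 128 code points; Unicode (here via UTF-8) encodes
# text from virtually any writing system, at a variable byte count.
text = "café"
utf8_bytes = text.encode("utf-8")
print(len(text), len(utf8_bytes))  # 4 characters, 5 bytes ('é' takes 2)

# Plain ASCII text is byte-identical in both encodings:
assert "cafe".encode("ascii") == "cafe".encode("utf-8")

# Lossless compression shrinks redundant data dramatically:
data = b"save " * 1000
print(len(data), len(zlib.compress(data)))  # 5000 bytes -> far fewer
```

Both mechanisms matter for HCI indirectly: standard encodings keep text portable across systems, and compression keeps storage and transfer delays within users' tolerance.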
Conclusion
The input–output channels form the bridge between human senses and computer devices,
while memory systems on both sides influence efficiency and learning. For effective HCI,
designers must consider the limits of human attention and memory alongside the speed and
storage mechanisms of computer memory, ensuring smooth, intuitive interactions.

Q5. Explain about devices used for positioning, pointing, and drawing in
detail
Positioning, pointing, and drawing are essential interaction tasks in HCI, allowing users to
select, manipulate, and create objects within digital environments. The effectiveness of these
tasks depends heavily on the input devices provided by the computer system.
1. Positioning Devices
Positioning refers to moving a pointer or selecting a location on a screen.
• Mouse: The most common device, using relative movement of a physical device to
control cursor position. It is efficient and precise for desktop environments.
• Trackball: A stationary device where users roll a ball to move the pointer. Compact
and suitable for limited space, often used in laptops and specialized equipment.
• Touchpad: A flat surface detecting finger movements. Common in laptops, offering
portability but less precision compared to a mouse.
• Joysticks: Allow multidirectional movement, often used in gaming, simulations, and
CAD. Joysticks provide fine control in continuous positioning tasks.
2. Pointing Devices
Pointing allows users to select objects, activate commands, or navigate.
• Light Pen: A pen-shaped device that detects screen location when touched to a CRT
monitor. Though outdated, it was once popular for design applications.
• Touchscreen: Enables direct pointing and manipulation by touching the display.
Widely used in smartphones, kiosks, and tablets, offering intuitive interaction.
• Eyegaze Systems: Track the direction of eye movement to control pointer position.
Useful for accessibility and hands-free environments, though costly and sensitive to
calibration.
• Data Glove: A wearable device that detects hand and finger movements, translating
them into pointer actions in virtual reality.
3. Drawing Devices
Drawing requires free-form input for creative or technical work.
• Graphics Tablet and Stylus: Used by designers and artists, offering high precision in
drawing tasks. The stylus provides pressure sensitivity, simulating traditional drawing
tools.
• Touchscreen with Pen Input: Modern tablets like iPads use stylus input for drawing,
combining positioning and drawing functions.
• 3D Input Devices: For modeling and virtual reality, devices like 3D styluses or
motion sensors allow drawing in three-dimensional space.
4. Evaluation of Devices
Each device has advantages and limitations:
• Mouse: High precision, but limited portability.
• Touchscreen: Natural interaction but can suffer from “fat-finger” problems.
• Stylus/Tablet: Excellent for professionals but requires training.
• Eyegaze: Supports accessibility but less suited for everyday tasks.
Conclusion
Positioning, pointing, and drawing devices are fundamental to user interaction. Device
selection should align with task demands, user preferences, and context. For instance, a
mouse is best for desktop precision, while touchscreens dominate mobile interfaces. Future
innovations like VR gloves and gesture recognition will further expand these interaction
modalities.

Q6. Write short notes on (i) Psychology and Design of Interactive Systems
(ii) Text Entry Devices (iii) Models of Interaction (B) Ergonomics
(i) Psychology and Design of Interactive Systems
Psychology provides the foundation for understanding how humans perceive, process, and
respond to technology. In HCI, psychological principles guide the design of interactive
systems.
• Cognitive Psychology: Focuses on perception, memory, reasoning, and learning.
Designers must consider STM limitations (7±2 rule) and long-term learning
strategies. Recognition-based interfaces (menus, icons) are easier than recall-based
ones (command lines).
• Perception and Attention: Visual perception guides screen layout, color use, and
contrast. Auditory psychology influences alarms and alerts. Attention theories
highlight the importance of reducing distractions in design.
• Human Error: Psychology explains slips, mistakes, and biases. Error-tolerant
systems, undo functions, and confirmation prompts reduce cognitive stress.
• Motivation and Emotions: User satisfaction and enjoyment influence adoption.
Psychology ensures that systems are not only functional but also engaging.
(ii) Text Entry Devices
Text entry remains a core task in HCI. Several devices enable this input:
• Keyboard: Standard QWERTY keyboards dominate, with variations like ergonomic
split keyboards for comfort.
• Virtual Keyboards: Found on touchscreens; they provide flexibility but may slow
typing compared to physical keyboards.
• Speech Recognition: Converts spoken language into text, reducing manual effort, but
is affected by background noise and accents.
• Handwriting Recognition: Stylus or pen input on tablets, useful for natural writing,
though less accurate than typing.
• Specialized Input Devices: Braille keyboards for visually impaired users and chord
keyboards for compact input.
(iii) Models of Interaction
Models help explain and predict interaction.
• Norman’s Execution–Evaluation Cycle: Defines seven stages (goal formation,
intention, action specification, execution, perception, interpretation, evaluation).
Useful in diagnosing usability gaps (gulf of execution and evaluation).
• Interaction Framework: Describes interaction through four components: user,
system, input, and output. Translations (articulation, performance, presentation,
observation) determine usability.
• Dialogue Models: Treat interaction as structured communication, shaping menus,
forms, and command languages.
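Norman's seven stages listed above can be laid out as a simple cycle. The stage names follow the text; the loop structure itself is only an illustration:

```python
# Norman's execution-evaluation cycle. Stages 1-4 span the gulf of
# execution (goal -> action); stages 5-7 span the gulf of evaluation
# (system state -> judgement).
STAGES = [
    "form the goal",
    "form the intention",
    "specify the action sequence",
    "execute the action",
    "perceive the system state",
    "interpret the system state",
    "evaluate the outcome against the goal",
]

def one_interaction_cycle():
    for i, stage in enumerate(STAGES, start=1):
        gulf = "execution" if i <= 4 else "evaluation"
        print(f"{i}. {stage}  (gulf of {gulf})")

one_interaction_cycle()
```

A usability gap shows up as friction at a particular stage: a hidden control widens the gulf of execution, while ambiguous feedback widens the gulf of evaluation.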
(B) Ergonomics
Ergonomics studies the physical and environmental aspects of interaction.
• Physical Layout: Controls should be arranged functionally, sequentially, or by
frequency of use.
• Health Factors: Proper seating, back support, lighting, and noise reduction improve
comfort and safety.
• Environmental Factors: Temperature, ventilation, and workspace organization
influence efficiency.
• Color and Display Use: Colors should align with conventions (red = danger, green =
safe). Overuse of blue is discouraged due to poor visibility.
Conclusion: Psychology emphasizes cognitive processes, ergonomics ensures physical
comfort, and interaction models provide frameworks. Together, they create user-friendly
systems.

Q7. (i) Process of Design and Golden Rules of Design (ii) Navigation Design
through Levels of Interaction and Screen Design
(i) Process of Design and Golden Rules
The design of interactive systems is not simply about aesthetics; it involves a careful balance
of usability, functionality, and user satisfaction. The process of design in HCI typically
follows an iterative cycle, ensuring that systems meet real user needs.
Design Process Steps:
1. Requirements Analysis: Identify user tasks, goals, and constraints through
interviews, questionnaires, and observations. For example, designing a hospital
management system requires studying doctors, nurses, and administrative workflows.
2. Conceptual Design: Develop mental models and metaphors (e.g., using “shopping
cart” for e-commerce). This stage establishes how users will understand the system.
3. Prototyping: Create low- or high-fidelity prototypes that simulate system
functionality. Prototypes allow for early user feedback, reducing costly errors in later
stages.
4. Evaluation: Conduct usability testing (heuristic evaluations, walkthroughs, and
performance metrics) to identify problems.
5. Implementation: Translate design into working software with attention to interface
consistency, performance, and accessibility.
6. Deployment and Maintenance: Deploy the system, gather user feedback, and update
features to keep pace with user needs.
Golden Rules of Design (Shneiderman’s Eight Rules):
1. Strive for Consistency: Uniform commands, fonts, and layouts reduce confusion.
2. Enable Universal Usability: Ensure accessibility for beginners, experts, and
differently-abled users.
3. Offer Informative Feedback: Immediate feedback after every action helps users feel
in control.
4. Design Dialogs to Yield Closure: Clear beginnings, processes, and endings in
interactions (e.g., confirmation screens).
5. Prevent Errors and Allow Recovery: Use constraints, error messages, and undo
options.
6. Permit Easy Reversal of Actions: Reduce anxiety with undo/redo functionality.
7. Support Internal Locus of Control: Users should feel they direct the system, not
vice versa.
8. Reduce Short-Term Memory Load: Avoid overloading STM; use recognition
instead of recall (e.g., menus instead of remembering commands).
(ii) Navigation Design and Screen Design
Navigation Design ensures smooth movement through an application, reducing the cognitive
effort needed to locate functions or data.
• Levels of Interaction:
o Command Line Interfaces (CLI): Efficient for experts but cognitively
demanding for novices.
o Menus and Forms: Easier for casual users as they depend on recognition
rather than recall.
o Direct Manipulation: Drag-and-drop interfaces are intuitive for beginners.
o Natural Language Interaction: Supports intuitive communication but risks
ambiguity.
• Navigation Aids: Breadcrumbs, search bars, menus, and hyperlinks help reduce
disorientation, especially in complex applications.
Screen Design Principles:
• Clarity: Use legible fonts, consistent colors, and sufficient spacing.
• Grouping: Group related controls together (e.g., toolbar icons).
• Feedback: Highlight active items and provide status indicators.
• Minimalism: Avoid clutter; provide only relevant information.
• Accessibility: Consider font size, contrast ratios, and colorblind-friendly palettes.
Conclusion:
The design process and golden rules form the foundation of user-friendly systems, while
navigation and screen design principles ensure that interaction remains efficient, consistent,
and satisfying across all user levels.

Q8. (i) Waterfall Model of Software Development Life Cycle (ii) Prototyping Model
(i) Waterfall Model
The Waterfall Model is one of the earliest and most structured approaches to software
development. It is sequential and linear, where each phase must be completed before
moving to the next.
Phases of the Waterfall Model:
1. Requirement Analysis: Detailed documentation of user needs. For instance, an
online exam portal would require secure login, timed tests, and automated grading.
2. System Design: Develop architecture, data structures, and UI mockups.
3. Implementation (Coding): Translate designs into actual software modules.
4. Integration and Testing: Verify functionality, check for errors, and ensure system
reliability.
5. Deployment: Install the system in a real environment.
6. Maintenance: Fix bugs, improve performance, and adapt to new requirements.
Advantages:
• Simple, structured, and easy to manage.
• Clear documentation at each stage.
• Suitable for projects with stable requirements.
Disadvantages:
• Rigid and inflexible to changes.
• Poor for projects with evolving needs.
• Late discovery of design flaws due to lack of early user involvement.
(ii) Prototyping Model
Unlike the rigid waterfall, the Prototyping Model emphasizes iterative development with
early user involvement.
Steps in Prototyping:
1. Requirements gathering.
2. Quick design of a rough prototype.
3. User evaluation and feedback.
4. Refinement through multiple iterations.
5. Final system development and deployment.
Types:
• Throwaway Prototype: Built for exploration and discarded after evaluation.
• Evolutionary Prototype: Refined progressively until it becomes the final product.
Advantages:
• Reduces misunderstandings by showing early models to users.
• Encourages user involvement and feedback.
• Helps discover usability problems early.
Disadvantages:
• May increase development time if too many iterations occur.
• Users may mistakenly view prototypes as final systems.
• Risk of scope creep if not carefully managed.
Conclusion:
The waterfall model ensures discipline and order, while prototyping supports user-centered
and flexible design. In modern HCI, prototyping is preferred because it allows iterative
refinement, usability testing, and adaptation to user needs.

Q9. (i) Universal Design Principles (ii) Evaluation Techniques


(i) Universal Design Principles
Universal design ensures that systems are inclusive and accessible to everyone, regardless
of age, ability, or cultural background.
Seven Principles of Universal Design:
1. Equitable Use: The design should be accessible to people with diverse abilities.
Example: ramps and elevators in buildings.
2. Flexibility in Use: Multiple ways to perform tasks (e.g., touch, voice, or keyboard
inputs).
3. Simple and Intuitive: Reduce complexity. Smartphones use icons like a phone
symbol for calling, which is universally understood.
4. Perceptible Information: Communicate effectively regardless of sensory ability
(subtitles for videos, vibration alerts).
5. Tolerance for Error: Anticipate mistakes and provide recovery (e.g., undo function,
trash folder).
6. Low Physical Effort: Minimize fatigue by reducing repetitive tasks (auto-fill forms,
shortcut keys).
7. Size and Space for Approach and Use: Ensure proper spacing for different users,
including wheelchair users or those with large hands.
Benefits in HCI:
• Promotes accessibility for differently-abled users.
• Encourages global usability by accounting for cultural diversity.
• Improves user satisfaction and system adoption.
(ii) Evaluation Techniques
Evaluation ensures systems meet usability goals.
Types of Evaluation:
• Analytical Evaluation: Performed without users.
o Heuristic Evaluation: Experts review system based on usability principles.
o Cognitive Walkthrough: Analysts step through tasks as novice users would.
• Empirical Evaluation: Involves actual users.
o User Testing: Observe users performing tasks.
o Field Studies: Real-world observation.
o Surveys and Interviews: Capture subjective opinions.
• Performance Metrics: Time to complete tasks, error rates, and learning curves.
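The performance metrics above can be computed directly from logged trials; a minimal sketch, where the trial record format is an assumption for illustration:

```python
def performance_metrics(trials):
    """Summarise empirical usability trials. Each trial is a
    (seconds_to_complete, error_count) pair; the format is illustrative."""
    times = [t for t, _ in trials]
    errors = [e for _, e in trials]
    return {
        "mean_time_s": sum(times) / len(times),
        "error_rate": sum(errors) / len(trials),
    }

# Three hypothetical task trials:
trials = [(42.0, 0), (55.5, 2), (38.5, 1)]
print(performance_metrics(trials))  # mean time ~45.3 s, 1.0 errors/trial
```

Tracking these numbers across design iterations (or across novice vs. expert users) is how empirical evaluation turns observation into a learning-curve comparison.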
Conclusion:
Universal design makes systems inclusive, while evaluation techniques ensure those
principles translate into practical usability. Together, they create accessible, reliable, and
satisfying interactive systems.

Q10. Interface Styles Analysis Using Interaction Framework


For this question, let’s analyze menus and natural language interfaces using Abowd and
Beale’s interaction framework, focusing on a database selection task.
Menus:
• Articulation (User → Input): User expresses intent by selecting from visible options.
• Performance (Input → System): System executes chosen command.
• Presentation (System → Output): Results are displayed.
• Observation (Output → User): User checks whether desired results appear.
• Distances: Gulf of execution is small (recognition-based), but gulf of evaluation may
be large if menus are too deep.
Natural Language Interfaces (NLI):
• Articulation: User types or speaks a query in plain language (e.g., “Show me all
students with GPA above 8.0”).
• Performance: System interprets query.
• Presentation: Results shown in tabular form.
• Observation: User verifies correctness.
• Distances: Gulf of execution is small (natural phrasing), but gulf of evaluation may be
large if interpretation fails.
Comparison:
Menus are reliable and structured but limit flexibility. NLIs are intuitive but prone to
ambiguity. For novices, menus reduce cognitive load, while experts may prefer natural
language for efficiency.
Conclusion:
The interaction framework reveals that menus minimize execution errors, while natural
language increases expressiveness at the cost of possible misinterpretation. Designers often
combine both (e.g., search bars with auto-suggest menus).

Q11. Physical Controls and Displays


Physical controls and displays form the tangible interface between humans and machines.
Examples:
• Controls: Buttons, sliders, knobs, keyboards, joysticks, switches.
• Displays: Screens, gauges, meters, LED lights, alarms.
Classification:
• Discrete vs. Continuous Controls: Buttons (on/off) vs. sliders (gradual change).
• Visual vs. Auditory Displays: LED indicators vs. warning alarms.
• Direct vs. Indirect Controls: Touchscreens (direct) vs. mice (indirect).
Suitability:
A control’s suitability depends on context. For example, volume adjustment is best with a
slider (continuous control), while a power switch is best as a discrete button. Similarly,
critical alerts require auditory signals (alarms) since they cannot be missed.
Conclusion:
By classifying controls and aligning them with task requirements, designers ensure that
interfaces are ergonomic, efficient, and intuitive.

Q12. Input Devices (Joystick, Light Pen, Touchscreen, Trackball, Eyegaze, Data Glove)


• Joystick: Ideal for simulations and games; provides continuous directional
control but lacks the precision needed for fine pointing or text entry.
• Light Pen: Accurate for pointing/drawing on CRTs; largely obsolete but historically
important.
• Touchscreen: Highly intuitive, used in smartphones and kiosks. Supports gestures but
suffers from “fat-finger” errors.
• Trackball: Offers precise movement in limited space; common in specialized
systems.
• Eyegaze: Tracks eye movement, excellent for accessibility but expensive.
• Data Glove: Captures hand/finger gestures, key in VR applications.
Comparison: Each device suits specific interactions. Touchscreens dominate consumer
devices, while eyegaze trackers and data gloves address specialized contexts.
Conclusion: Input devices must match user tasks, context, and accessibility requirements.
Their variety reflects the diverse needs of modern interactive systems.

Q13. (i) Socio-Organizational Issues (ii) Cognitive Architecture


(i) Socio-Organizational Issues
Technology adoption is not purely technical—it involves social and organizational factors:
• Training Needs: Inadequate training reduces acceptance.
• Organizational Culture: Resistance arises if new systems threaten existing work
practices.
• Work Distribution: New software can shift responsibilities, sometimes causing
conflict.
• Collaboration: Systems must support teamwork, not isolate individuals.
• Cultural Differences: Global systems must adapt to different languages, norms, and
work ethics.
Example: Introducing ERP software may face resistance from employees accustomed to
manual processes. Successful adoption requires training, incentives, and organizational
change management.
(ii) Cognitive Architecture
Cognitive architecture refers to models that simulate human cognition, guiding system
design.
• ACT-R: Models human memory, attention, and decision-making.
• SOAR: Simulates problem-solving and learning.
• GOMS Model: Breaks tasks into Goals, Operators, Methods, and Selection rules to
predict performance.
Application in HCI: Cognitive architectures help predict user behavior, design training
systems, and evaluate task complexity.
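A minimal sketch of how a GOMS-family model predicts performance is the Keystroke-Level Model (KLM), which sums standard operator times over a task. The operator times below are the commonly cited averages; the task breakdown is an illustrative assumption:

```python
# Standard KLM operator times in seconds (commonly cited averages)
OPERATORS = {
    "K": 0.20,  # keystroke or button press
    "P": 1.10,  # point with mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence):
    """Sum operator times for a task written as a string of operator codes."""
    return sum(OPERATORS[op] for op in sequence)

# Illustrative task -- delete a file via a menu:
# think (M), point to the file (P), click (K), think (M),
# point to the menu (P), click (K), point to 'Delete' (P), click (K)
task = "MPKMPKPK"
print(f"Predicted time: {klm_time(task):.2f} s")  # 6.60 s
```

Comparing predicted times for alternative designs lets a designer evaluate task complexity before any user testing takes place.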
Conclusion: Socio-organizational factors ensure acceptance of technology, while cognitive
architectures provide scientific models for designing interfaces that align with human
thinking.

Q14. (i) Three-State Model (ii) Organizational Issues


(i) Three-State Model
Proposed by Buxton, this model explains how pointing devices operate in three states:
1. Tracking State: Pointer moves freely (mouse movement).
2. Dragging State: Movement occurs while a button is pressed (drag-and-drop).
3. Out-of-Range State: Device is not being sensed by the system (e.g., a stylus
lifted away from the tablet surface).
This model is crucial in GUI systems, as it underlies tasks like selecting, moving, and
resizing objects. It also clarifies how devices like touchscreens and styluses work, aiding the
design of intuitive direct manipulation interfaces.
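The three states and their transitions can be sketched as a small state machine. The event names here (`enter_range`, `press`, etc.) are illustrative assumptions, not part of Buxton's notation:

```python
# Buxton's three states, with transitions driven by device events.
# Event names are illustrative, chosen for a stylus-and-tablet device.
TRANSITIONS = {
    ("out_of_range", "enter_range"): "tracking",   # stylus approaches tablet
    ("tracking", "leave_range"): "out_of_range",   # stylus lifted away
    ("tracking", "press"): "dragging",             # button/tip down: begin drag
    ("dragging", "release"): "tracking",           # button/tip up: end drag
}

def step(state, event):
    """Return the next state; irrelevant events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "out_of_range"
for event in ["enter_range", "press", "release", "leave_range"]:
    state = step(state, event)
    print(event, "->", state)
```

Note that different devices occupy different subsets of the model: a mouse never leaves range (states 1 and 2 only), while a stylus without a button moves between out-of-range and tracking.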
(ii) Organizational Issues
Organizational issues in HCI include:
• Integration with Workflow: Systems must align with real tasks.
• Cost and Resource Allocation: New systems require investment in hardware, training,
and maintenance.
• Management Support: Without leadership buy-in, systems fail.
• User Roles: Redefining tasks may cause role conflict.
• Change Management: Adoption requires managing resistance and ensuring smooth
transition.
Example: A university introducing e-learning platforms must restructure teaching methods,
provide training, and gain support from faculty and students.
Conclusion: The three-state model explains technical interaction at the device level, while
organizational issues address broader social and managerial concerns. Both are crucial for
successful interactive system design.
