Assignment 1 - Merged
Problem Definition: List five technologies from the Knowledge Navigator video that were not
around in 1987, but are in widespread use today.
Learning Objectives: Learn the technologies from the Knowledge Navigator video
Outcomes: After completion of this assignment, students will be able to identify various
technologies from the Knowledge Navigator video.
Theory Concepts:
The Knowledge Navigator is a concept described by former Apple Computer CEO John Sculley
in his 1987 book, Odyssey: Pepsi to Apple. It describes a device that can access a large networked
database of hypertext information, and use software agents to assist searching for information.
Videos
Apple produced several concept videos showcasing the idea. All of them featured a
tablet-style computer with numerous advanced capabilities, including an excellent text-to-speech
system with no hint of "computerese", a gesture-based interface resembling the multi-touch
interface later used on the iPhone, and an equally powerful speech understanding system,
allowing the user to converse with the system via an animated "butler" as the software agent.
In one vignette a university professor returns home and turns on his computer, in the form of a
tablet the size of a large-format book. The agent is a bow-tie wearing butler who appears on the
screen and informs him that he has several calls waiting. He ignores most of these, from his
mother, and instead uses the system to compile data for a talk on deforestation in the Amazon
Rainforest. While he is doing this, the computer informs him that a colleague is calling, and they
then exchange data through their machines while holding a video based conversation.
In another such video, a young student uses a smaller handheld version of the system to prompt
him while he gives a class presentation on volcanoes, eventually sending a video of an exploding
volcano to the video "blackboard". In a final installment a user scans in a newspaper by placing
it on the screen of the full-sized version, and then has it help him learn to read by listening to
him read the scanned results, and prompting when he pauses.
Credits
The videos were funded and sponsored by Bud Colligan, Director of Apple's higher education
marketing group, written and creatively developed by Hugh Dubberly and Doris Mitsch of Apple
Creative Services, with technical and conceptual input from Mike Liebhold of Apple's Advanced
Technologies Group and advice from Alan Kay, then an Apple Fellow. The videos were produced
by The Kenwood Group in San Francisco and directed by Randy Field. The director of
photography was Bill Zarchy. The post-production mix was done by Gary Clayton at Russian Hill
Recording for The Kenwood Group. The product industrial design was created by Gavin Ivester
and Adam Grosser of Apple design.
Samir Arora, a software engineer at Apple was involved in R&D on application navigation and
what was then called hypermedia. He wrote an important white paper entitled “Information
Navigation: The Future of Computing". While working for Apple CEO John Sculley at the time,
Arora built the technology to show fluid access to linked data displayed in a friendly manner, an
emerging area of research at Apple. The Knowledge Navigator video premiered in 1987 at
Educom, the leading higher education conference, in a keynote by John Sculley, with demos of
multimedia, hypertext and interactive learning directed by Bud Colligan. The music featured in
this video is Georg Anton Benda's Harpsichord Concerto in C.
Reception
The astute bow tie wearing software agent in the video has been the center of quite a few heated
discussions in the domain of human–computer interaction. It was criticized as being an
unrealistic portrayal of the capacities of any software agent in the foreseeable future, or even in a
distant future. Some user interface professionals like Ben Shneiderman of the
University of Maryland, College Park have also criticized its use of a human likeness for giving a
misleading idea of the nature of any interaction with a computer, present or future. Some
visions put forth by proponents of the Semantic Web have been likened to that of the Knowledge
Navigator by Marshall and Shipman, who argue that some of these visions "ignore the difficulty
of scaling knowledge-based systems to reason across domains, like Apple's Knowledge
Navigator," and conclude that, as of 2003, "scenarios of the complexity of [a previously quoted]
Knowledge Navigator-like approach to interacting with people and things in the world seem
unlikely."
Siri
The notion of Siri was planted at Apple 25 years earlier through the "Knowledge Navigator,"
although the voice assistant shown there was only a concept prototype. In one of the videos, a man asks
the assistant to search for an article published about five years earlier; the assistant finds it and
notes that the article is dated 2006, from which we can conclude that the video is set to take
place in September 2011. In October 2011, Apple launched Siri, a voice-activated personal
assistant vaguely similar to that aspect of the Knowledge Navigator, just a month after
the video's implied date.
The Making of Knowledge Navigator: Apple made the Knowledge Navigator video for a
keynote speech that John Sculley gave at Educom (the premier college computer tradeshow and
an important event in a large market for Apple). Bud Colligan who was then running
higher-education marketing at Apple asked us to meet with John about the speech. John
explained he would show a couple examples of student projects using commercially available
software simulation packages and a couple university research projects Apple was funding. He
wanted three steps:
1. what students were doing now
2. research that would soon move out of labs, and
3. a picture of the future of computing.
He asked us to suggest some ideas. We suggested a couple of approaches including a short
“science-fiction video.” John chose the video. Working with Mike Liebhold (a researcher in
Apple’s Advanced Technologies Group) and Bud, we came up with a list of key technologies to
illustrate in the video, e.g., networked collaboration and shared simulations, intelligent agents,
integrated multimedia and hypertext. John then highlighted these technologies in his speech.
We had about 6 weeks to write, shoot, and edit the video—and a budget of about $60,000 for
production. We began with as much research as we could do in a few days. We talked
with Aaron Marcus and Paul Saffo. Stewart Brand’s book on the “Media Lab” was also a
source—as well as earlier visits to the Architecture Machine Group. We also read William
Gibson’s “Neuromancer” and Vernor Vinge’s “True Names.” At Apple, Alan Kay, who was then
an Apple Fellow, provided advice. Most of the technical and conceptual input came from Mike
Liebhold. We collaborated with Gavin Ivester in Apple’s Product Design Group, who designed
the “device” and had a wooden model built in little more than a week. Doris Mitsch, who worked
in my group, wrote the script. Randy Field directed the video, and the Kenwood Group handled
production.
The project had three management approval steps:
1. the concept of the science fiction video,
2. the key technology list, and
3. the script.
It moved quickly from script to shooting without a full storyboard—largely because we didn’t
have time to make one. The only roughs were a few Polaroid snapshots of the location, two
sketches showing camera position and movement, and a few sketches of the screen. We
showed up on location very early and shot for more than 12 hours. (Completing the shoot within
one day was necessary to stay within budget.) The computer screens were developed over a few
days on a video paint box. (This was before Photoshop.)
The video form suggested the talking agent as a way to advance the “story” and explain what the
professor was doing. Without the talking agent, the professor would be silent and pointing
mysteriously at a screen. We thought people would immediately understand that the piece
was science fiction because the computer agent converses with the professor—something that
only happened in Star Trek or Star Wars.
What is surprising is that the piece took on a life of its own. It spawned half a dozen or more
sequels within Apple, and several other companies made similar pieces. These pieces were
marketing materials. They supported the sale of computers by suggesting that a company
making them has a plan for the future. They were not inventing new interface ideas. (The
production cycles didn’t allow for that.) Instead, they were about visualizing existing ideas—and
pulling many of them together into a reasonably coherent environment and scenario of use. A
short while into the process of making these videos, Alan Kay said, “The main question here is
not is this technology possible but is this the way we want to use technology?” One effect of
the video was engendering a discussion (both inside Apple and outside) about what computers
should be like. On another level, the videos became a sort of management tool. They suggested
that Apple had a vision of the future, and they prompted a popular internal myth that the
company was “inventing the future.”
Technologies
Apple's Siri
Siri is Apple's personal assistant for iOS, macOS, tvOS and watchOS devices that uses voice
recognition and is powered by artificial intelligence (AI). Siri responds to users' spoken
questions by speaking back to them through the device's speaker and presenting relevant
information on the home screen from certain apps, such as Web Search or Calendar. The service
also lets users dictate emails and text messages, reads received emails and messages and
performs a variety of other tasks.
Voice of Siri
ScanSoft, a software company that merged with Nuance Communications in 2005, hired
voiceover artist Susan Bennett that same year when the scheduled artist was absent. Bennett
recorded four hours of her voice each day for a month in a home recording studio, and the
sentences and phrases were linked together to create Siri's voice. Until a friend emailed her in
2011, Bennett wasn't aware that she had become the voice of Siri. Although Apple never
acknowledged that Bennett was the original Siri voice, audio experts at CNN confirmed it. Karen
Jacobsen, a voiceover artist known for her work on GPS systems, provided the original
Australian female voice. Jon Briggs, a former tech journalist, provided Siri's British male voice.
Apple developed a new female voice for iOS 11 with deep learning technology by recording hours
of speech from hundreds of candidates.
Amazon's Alexa
Amazon Alexa, also known simply as Alexa, is a virtual assistant technology largely based on a
Polish speech synthesizer named Ivona, bought by Amazon in 2013. It was first used in the
Amazon Echo smart speaker and the Echo Dot, Echo Studio and Amazon Tap speakers
developed by Amazon Lab126. It is capable of voice interaction, music playback, making to-do
lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic,
sports, and other real-time information, such as news. Alexa can also control several smart
devices using itself as a home automation system. Users are able to extend the Alexa capabilities
by installing "skills" (additional functionality developed by third-party vendors, in other settings
more commonly called apps) such as weather programs and audio features. It uses automatic
speech recognition, natural language processing, and other forms of weak AI to perform these
tasks. Most devices with Alexa allow users to activate the device using a wake-word (such as
Alexa or Amazon); other devices (such as the Amazon mobile app on iOS or Android and
Amazon Dash Wand) require the user to click a button to activate Alexa's listening mode,
although some phones also allow a user to say a command, such as "Alexa" or "Alexa wake".
Functions
Alexa can perform a number of preset functions out-of-the-box such as set timers, share the
current weather, create lists, access Wikipedia articles, and many more things. Users say a
designated "wake word" (the default is simply "Alexa") to alert an Alexa-enabled device of an
ensuing function command. Alexa listens for the command and performs the appropriate
function, or skill, to answer a question or command. When questions are asked, Alexa converts
sound waves into text which allows it to gather information from various sources. Behind the
scenes, the data gathered is then sometimes passed to a variety of sources including
WolframAlpha, IMDb, AccuWeather, Yelp, Wikipedia, and others to generate suitable and
accurate answers. Alexa-supported devices can stream music from the owner's Amazon Music
accounts and have built-in support for Pandora and Spotify accounts. Alexa can play music from
streaming services such as Apple Music and Google Play Music from a phone or tablet. In
addition to performing pre-set functions, Alexa can also perform additional functions through
third-party skills that users can enable. Some of the most popular Alexa skills in 2018 included
"Question of the Day" and "National Geographic Geo Quiz" for trivia; "TuneIn Live" to listen to
live sporting events and news stations; "Big Sky" for hyper local weather updates; "Sleep and
Relaxation Sounds" for listening to calming sounds; "Sesame Street" for children's
entertainment; and "Fitbit" for Fitbit users who want to check in on their health stats. In 2019,
Apple, Google, Amazon, and Zigbee Alliance announced a partnership to make their smart home
products work together.
Microsoft's Cortana
Cortana is a virtual assistant developed by Microsoft that uses the Bing search engine to perform
tasks such as setting reminders and answering questions for the user. Cortana is currently
available in English, Portuguese, French, German, Italian, Spanish, Chinese, and Japanese
language editions, depending on the software platform and region in which it is used. Microsoft
began reducing the prevalence of Cortana and converting it from an assistant into different
software integrations in 2019. It was split from the Windows 10 search bar in April 2019. In
January 2020, the Cortana mobile app was removed from certain markets, and on March 31,
2021, the Cortana mobile app was shut down globally.
Functionality
Cortana can set reminders, recognize natural voice without the requirement for keyboard input,
and answer questions using information from the Bing search engine (For example, current
weather and traffic conditions, sports scores, biographies). Searches using Windows 10 are made
only with the Microsoft Bing search engine, and all links will open with Microsoft Edge, except
when a screen reader such as Narrator is being used, where the links will open in Internet
Explorer. Windows Phone 8.1's universal Bing SmartSearch features are incorporated into
Cortana, which replaces the previous Bing Search app, which was activated when a user presses
the "Search" button on their device. Cortana includes a music recognition service. Cortana can
simulate rolling dice and flipping a coin. Cortana's "Concert Watch" monitors Bing searches to
determine the bands or musicians that interest the user. It integrates with the Microsoft Band
wearable for Windows Phone devices; when connected via Bluetooth, it can deliver reminders and
phone notifications.
With the Lumia Denim mobile phone series, launched in October 2014, active listening was
added to Cortana, enabling it to be invoked with the phrase "Hey Cortana"; it can then be
controlled as usual. Some devices in the United Kingdom on O2 received the Lumia Denim
update without the feature, but this was later identified as a bug and Microsoft has since fixed it.
Cortana integrates with services such as Foursquare to provide restaurant and local attraction
recommendations and LIFX to control smart light bulbs.
Google's Assistant
Google Assistant is a virtual assistant software application developed by Google that is primarily
available on mobile and home automation devices. Based on artificial intelligence, Google
Assistant can engage in two-way conversations, unlike the company's previous virtual assistant,
Google Now.
Google Assistant debuted in May 2016 as part of Google's messaging app Allo, and its
voice-activated speaker Google Home. After a period of exclusivity on the Pixel and Pixel XL
smartphones, it was deployed on other Android devices starting in February 2017, including
third-party smartphones and Android Wear (now Wear OS), and was released as a standalone
app on the iOS operating system in May 2017. Alongside the announcement of a software
development kit in April 2017, Assistant has been further extended to support a large variety of
devices, including cars and third-party smart home appliances. The functionality of the Assistant
can also be enhanced by third-party developers.
Users primarily interact with the Google Assistant through natural voice, though keyboard input
is also supported. Assistant is able to answer questions, schedule events and alarms, adjust
hardware settings on the user's device, show information from the user's Google account, play
games, and more. Google has also announced that Assistant will be able to identify objects and
gather visual information through the device's camera, and support purchasing products and
sending money. At CES 2018, the first Assistant-powered smart displays (smart speakers with
video screens) were announced, with the first one being released in July 2018. As of 2020, Google
Assistant was available on more than 1 billion devices. Google Assistant is available in more
than 90 countries and in over 30 languages, and is used by more than 500 million users monthly.
Samsung’s Bixby
Bixby is a virtual assistant developed by Samsung Electronics. It represents a major reboot of S
Voice, Samsung's voice assistant app introduced in 2012 with the Galaxy S III; S Voice was later
discontinued on 1 June 2020. In May 2017, Samsung announced that Bixby would be coming to
its line of Family Hub 2.0 refrigerators, making it the first non-mobile product to include the
virtual assistant.
Samsung’s Bixby digital assistant lets you control your smartphone and select connected
accessories. You can open apps, check the weather, play music, toggle Bluetooth, and much
more. You’ll find everything you need to know about the Google rival below, including how to
access it, the features it offers, and which devices it’s available on.
The most interesting and helpful component is of course Bixby Voice, which lets you use voice
commands to get stuff done. It works with all Samsung apps and a few third-party apps,
including Instagram, Gmail, Facebook, and YouTube. With Voice you can send text messages,
check sports scores, turn down screen brightness, check your calendar, launch apps, and more.
The tech can also read out your latest incoming messages, and flip between male and female
versions. Like Google Assistant, Bixby can handle some more complicated two-step commands,
such as creating an album with your vacation photos and sharing it with a friend.
Conclusion: Successfully studied technologies from the Knowledge Navigator video that were
not around in 1987, but are in widespread use today.
Assignment No. 02
Title: Assignment based on GOMS (Goals, Operators, Methods and Selection rules)
modeling technique
Outcomes: After completion of this assignment, students will be able to apply the
GOMS (Goals, Operators, Methods and Selection rules) modeling technique.
Theory:
GOMS is a model of human performance and it can be used to improve
human-computer interaction efficiency by eliminating useless or unnecessary
interactions.
GOMS is an abbreviation from:
G → Goals
O → Operators
M → Methods
S → Selection
Advantages:
❖ Gives qualitative & quantitative measures
❖ Model explains the results
❖ Less work than user study – no users!
❖ Easy to modify when UI is revised
❖ Research: tools to aid modeling process since it can still be tedious
Disadvantages:
❖ Takes lots of time, skill, & effort
❖ Only works for goal-directed tasks
❖ Assumes tasks performed by experts without error
❖ Does not address several UI issues, such as:
❖ Readability, memorability of icons, commands
2. CMN-GOMS:
CMN-GOMS stands for Card, Moran, Newell GOMS. It is a cognitive model that applies when
operators are strictly sequential.
Example:
GOAL: DELETE-FILE
. GOAL: SELECT-FILE
. . [select: GOAL: KEYBOARD-TAB-METHOD
. . GOAL: MOUSE-METHOD]
. . VERIFY-SELECTION
. GOAL: ISSUE-DELETE-COMMAND
. . [select*: GOAL: KEYBOARD-DELETE-METHOD
. . . PRESS-DELETE
. . . GOAL: CONFIRM-DELETE
. . GOAL: DROP-DOWN-MENU-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . CLICK-RIGHT-MOUSE-BUTTON
. . . LOCATE-DELETE-COMMAND
. . . MOVE-MOUSE-TO-DELETE-COMMAND
. . . CLICK-LEFT-MOUSE-BUTTON
. . . GOAL: CONFIRM-DELETE
. . GOAL: DRAG-AND-DROP-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . PRESS-LEFT-MOUSE-BUTTON
. . . LOCATE-RECYCLING-BIN
. . . MOVE-MOUSE-TO-RECYCLING-BIN
. RELEASE-LEFT-MOUSE-BUTTON]
*Selection rule for GOAL: ISSUE-DELETE-COMMAND
If hands are on keyboard, use KEYBOARD-DELETE-METHOD, else if Recycle bin is
visible, use DRAG-AND-DROP-METHOD, else use DROP-DOWN-MENU-METHOD
3.NGOMSL:
NGOMSL “Natural GOMS Language”
• Formal language with restricted English syntax
• The benefit of the formal language is that each statement roughly corresponds to a
primitive mental chunk, so you can estimate the learning time and the total execution time
NGOMSL for move text:
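The original example figure is not reproduced here; the following is a minimal sketch, assuming a cut-and-paste style editor, of what an NGOMSL method for moving text might look like:

Method for goal: move text
Step 1. Accomplish goal: cut text.
Step 2. Accomplish goal: paste text.
Step 3. Return with goal accomplished.

Method for goal: cut text
Step 1. Accomplish goal: highlight text.
Step 2. Retain that the command is CUT, and accomplish goal: issue a command.
Step 3. Return with goal accomplished.

Because each statement corresponds roughly to one mental chunk, learning time can be estimated from the number of NGOMSL statements, and execution time from the number of statements executed.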
4.CPM- GOMS:
• Cognitive Perceptual Motor GOMS, also read as Critical Path Method GOMS, is another variant of GOMS
• Unlike KLM and other models (which assume serial operations), the CPM-GOMS model handles parallel
operations
Ex: Point and shift-click
CPM-GOMS models have:
• A perceptual processor (PP)
• A cognitive processor (CP)
• Multiple motor processors (MP):
one for each major muscle system that can act independently. For GUI interfaces, the
muscles we mainly care about are the two hands and the eyes.
Assignment No. 3
Title : Assignment based on knowledge of Web Design guidelines and general UI
design principles
Problem Definition: Using your observations from your small user study and your
knowledge of Web Design guidelines and general UI design
principles, critique the interfaces of any two educational
institutes and make suggestions for improvement.
Learning Objectives: Learn the Knowledge of Web Design and UI design principles
Outcomes: After completion of this assignment, students will be able to give suggestions
for improving the web design and UI design of any website.
Theory Concepts:
An effective website design should fulfil its intended function by conveying its
particular message whilst simultaneously engaging the visitor. Several factors such as
consistency, colours, typography, imagery, simplicity, and functionality contribute to
good website design.
When designing a website there are many key factors that will contribute to how it is
perceived. A well-designed website can help build trust and guide visitors to take
action. Creating a great user experience involves making sure your website design is
optimised for usability (form and aesthetics) and ease of use (functionality).
Below are some guidelines that will help you when considering your next web project.
1. WEBSITE PURPOSE
Your website needs to accommodate the needs of the user. Having a simple clear
intention on all pages will help the user interact with what you have to offer. What
is the purpose of your website? Are you imparting practical information like a
‘How to guide’? Is it an entertainment website like sports coverage or are you
selling a product to the user? There are many different purposes that websites may
have but there are core purposes common to all websites:
1. Describing Expertise
2. Building Your Reputation
3. Generating Leads
4. Sales and After Care
2. SIMPLICITY
Simplicity is the best way to go when considering the user experience and the
usability of your website. Below are ways to achieve simplicity through design.
Colour
Colour has the power to communicate messages and evoke emotional responses.
Finding a colour palette that fits your brand will allow you to influence your
customer’s behaviour towards your brand. Keep the colour selection limited to less
than 5 colours. Complementary colours work very well. Pleasing colour combinations
increase customer engagement and make the user feel good.
Type
Typography has an important role to play on your website. It commands attention and
works as the visual interpretation of the brand voice. Typefaces should be legible and
only use a maximum of 3 different fonts on the website.
Imagery
Imagery is every visual aspect used within communications. This includes still
photography, illustration, video and all forms of graphics. All imagery should be
expressive and capture the spirit of the company and act as the embodiment of their
brand personality. Most of the initial information we consume on websites is visual
and as a first impression, it is important that high-quality images are used to form an
impression of professionalism and credibility in the visitors’ minds.
3. NAVIGATION
Navigation is the wayfinding system used on websites where visitors interact and find
what they are looking for. Website navigation is key to retaining visitors. If the
website navigation is confusing visitors will give up and find what they need
elsewhere. Keeping navigation simple, intuitive and consistent on every page is key.
5. VISUAL HIERARCHY
Visual hierarchy is the arrangement of elements in order of importance. This is done
either by size, colour, imagery, contrast, typography, whitespace, texture and style.
One of the most important functions of visual hierarchy is to establish a focal point;
this shows visitors where the most important information is.
6. CONTENT
An effective website has both great design and great content. Using compelling
language, great content can attract and influence visitors, converting them into
customers.
8. LOAD TIME
Waiting for a website to load will lose visitors. Nearly half of web visitors expect a
site to load in 2 seconds or less and they will potentially leave a site that isn’t loaded
within 3 seconds. Optimising image sizes will help load your site faster.
9. MOBILE FRIENDLY
More people are using their phones or other devices to browse the web. It is important
to consider building your website with a responsive layout where your website can
adjust to different screens.
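As a minimal sketch (the class names here are hypothetical), a responsive layout can combine fluid widths with CSS media queries:

/* Hypothetical example: a fluid container whose columns stack on small screens */
.container {
  max-width: 960px;
  margin: 0 auto;        /* keep the layout centered */
}
.column {
  width: 50%;
  float: left;           /* two columns side by side on wide screens */
}
@media (max-width: 600px) {
  .column {
    width: 100%;         /* stack the columns on phones */
    float: none;
  }
}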
Ease of use can manifest itself in many different ways in a screen design. It tends
to be closely related to high standards of usability, which can be difficult to live up to,
even for the most experienced among us.
A good example is the navigation design, which is the backbone of any product but
represents a challenging aspect of UI. You want the navigation to feel effortless, to
have users driving down the right roads with no need for signs to help them. The more
content the product holds, the tougher it is to create a system that holds it all together
in a way that makes it easy for users to navigate and discover new areas of the product.
Users that encounter any product for the first time have to explore a bit and discover
the primary features, sometimes waiting a bit to advance onto the secondary ones.
This first encounter is crucial, because it sets the tone for the experience and tells
users what to expect. Their first impression is likely to dictate if they stick around or
if they give up and abandon the product right there on the spot.
One of the most difficult things about UI design is that everything depends. The
nature of the product will dictate what navigation is more appropriate, the users will
affect the way that information is categorized and presented. The right UI pattern will
depend on the function and the people using the product. Unfortunately, there’s never
a one-size-fits-all approach to UI design. Part of the art of UI design is seeing the
context and using that information to create an interface that still lives up to high
standards of usability.
There’s a right balance of power that users want. They want to feel in control, to have
freedom to approach tasks in their own way. With that said, they also don’t want too
much control, which can lead to overwhelmed users that quickly grow tired of having
to make so many decisions. That is called the paradox of choice. When faced with too
much freedom, most users stop enjoying the experience and instead resent the
responsibility. Choosing, after all, requires cognitive effort.
This requires a balance that those in the gaming industry are intimately familiar with.
Gamers enjoy choices, but overdoing it can ruin the game experience. Game
UI design is all about giving users just the right amount of power.
Users want the freedom to do what they want as well as freedom from bad
consequences. In UI design, that means giving them the power to do and undo things,
so users don’t ever come to regret what they chose to do with the product.
That’s why UI designers operate within a certain margin of control that they pass on
to users. They narrow down which parts of the product and the experience can be
customized, identifying areas where users can create their own take on the design. A
color change may sound silly to some, but it makes users happy to have a choice in
the interface.
A solid example of this can be seen with any dashboard design, where complex
information is broken down and made easy to digest. Even if you can customize the
dashboard itself, the soul of the design will remain in order to get the main job done.
But what makes a layout UI work? How do designers know where each component
goes, and how it all fits together?
The answer is a combination of factors that UI designers take into account. First,
there’s the general rule of proximity of elements and visual hierarchy. This is about
making the important things bigger and brighter, letting users know right away that
this is what they should be focusing on. The hierarchy is a way of communicating to
the user where their eye should go, what they should do.
The right hierarchy has the power to make users understand the content immediately,
without using a single word. The proximity between elements plays a similar role,
with components in close proximity being somehow related or closely connected.
Whitespace also plays an important role in the layout design. Most people who are
only beginning to learn about UI design often underestimate the importance of
whitespace or how much of it they’ll need to create a good visual hierarchy. The truly
skilled designers use that empty space to give the user’s eye some relief and let the
component guide their gaze through the screen. This can be taken to an extreme with
the trend of minimalist website design.
Consistency is important because it will significantly help users learn their way
around the product. That first learning curve is unavoidable for a brand new
experience, but UI designers can shorten it. Ultimately, you want users to recognize
the individual components after having seen them once.
Buttons, for example. After using the product just for a little bit, users should
recognize primary and positive buttons from secondary ones. Users are already
making the effort to learn how the product works and what it does – don’t make them
learn what 9 different buttons mean. This means that buttons should not only look the
same, they need to behave the same.
When it comes to the consistency of UI design, you want to be predictable. You want
users to know what that button will do without the need to press it. A good example is
having consistent button states, so your users know exactly how buttons behave
throughout the entire product.
There are many examples of real-life metaphors in elements that users will know and
recognize. The silliest one, perhaps, is the garbage bin icon. It immediately tells us
that anything placed in there will be removed from sight, possibly eliminated forever.
There’s power in that kind of familiarity. It’s the reason why buttons look like real
physical buttons or why toggle switches look like actual switches. Arguably, all icons
are real-life metaphors.
User personas capture the idea of the final user, giving them a face and offering
details of their lives and what they want. Originally created by the marketing industry,
they are just as helpful for UI designers. Despite it being a fictitious profile of a
person that doesn’t exist, the idea and the group of people that it represents are very
much real. It gives the design team clarity on what the users want, what they
experience and their ultimate goals.
On a similar note, mental models are also crucial. Rather than capturing the ideal user,
they capture how those users think. They showcase the users' reasoning, which can be very
helpful in UI design. More often than not, when screens or elements don’t perform
well it’s because they simply don’t respect the user’s mental models – which means
users don’t get it. For them, that just doesn’t make sense.
The same can be said for other materials, such as user flows or user scenarios. All of
these materials add value to the design process, resulting in a tailored product that is
more likely to succeed.
Most UI designers will start their planning of the basic bones and layout with UI
sketching on paper. From there, the project evolves into a digital rendering of the
design in black and white. This gives them a chance to focus only on the efficiency of
the space, prioritizing things like the visual hierarchy of key elements.
Slowly and over time, designers will build on this grey base and add more details. It’s
true that some design teams start testing very early, even when wireframes are nothing
but a bunch of boxes. Regardless, this grey design grows as the designer adds colors,
details and actual content.
This user feedback and context can come in many forms. One of the most commonly
used is microinteractions, which tell the user that things are clickable or that the
system is working behind the screen. A loading icon that holds a brief interaction is
the perfect example.
You want the feedback to be instantaneous, so there’s no room for confusion. Users
don’t like uncertainty and using feedback can be a way to have much more efficient
communication between user and product. With something as simple as a button
moving slightly up when the cursor hovers above it, UI designers can tell users that
button can be clicked or that the element is responsive and dynamic.
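A minimal CSS sketch of that kind of hover feedback (the class name is hypothetical):

/* Lift the button slightly when the cursor hovers over it */
.button {
  transition: transform 0.15s ease;   /* animate the movement smoothly */
}
.button:hover {
  transform: translateY(-2px);        /* move the button up a couple of pixels */
}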
These simple cues are something UI designers have grown to do almost instinctively.
They know that users need this sort of context in order for the product to shine, and so
they look for these opportunities everywhere. These little details matter and make the
entire experience better for users.
A classic example of simple but crucial feedback is the different states of key
components, such as toggle UIs and dropdown UIs, as well as the feedback from the
well-loved card UI design pattern.
Wireframing the ideas and the general product tends to fall to UI designers. As a
general UI design rule, wireframing is unavoidable. It's one of the first steps of the
product, serving as the first tangible representation of the digital solution.
Starting off as a bunch of boxes and tones of grey, UI designers will use the design
materials like user personas to create a wireframe that fits the user. This is about
capturing the general functionality of the product, laying the foundation of the bare
bones. Things like the navigation, main pieces of content and the representation of the
primary features – they all play a part in the wireframing process.
As the team begins to test the general usability and performance of the wireframe, a
cycle emerges. The wireframe is tested and the results will dictate what parts need to
be improved or completely changed. One of the best things about wireframes is that
they can be put together quickly, making it easy to change course if need be.
Truly skilled UI designers are all about wireframing. They understand the process and
what information to use, which factors influence the design. They go out of their way
to validate the wireframe at every turn, before a new layer of detail is added. Slowly,
the wireframe will give way to a high-fidelity prototype, where all the final visuals are
represented.
10. Get familiar with user testing and the world of usability
Usability can mean different things to different design teams. It’s often the case that
most designers will associate user testing with the performance of the design – in terms
of how many users can complete a task under X time. To others, the testing takes a
bigger meaning, representing the very point-of-view of the users, with the data being
the only way to know what users truly want.
Ultimately, user testing is done over an extended period of time, starting in the
wireframing stage and going all the way to the release of the product (sometimes even
further). Designers will invest real time and effort into testing, simply because it pays
off. Any changes that the testing leads to are welcome, because they represent
improvement done for little cost. If these improvements needed to be made much later
in the project, they would have come in the form of delays and absurd costs.
The methods can vary because there are now many alternatives. From
unmoderated tests with hundreds of participants to moderated interviews and
observation sessions – there’s a right path for every team no matter the budget and
time constraints.
Assignment No. 4
Title : Assignment based on knowledge of Document Object Model with JavaScript
and CSS.
Learning Objectives: Learn interactive web page design using HTML, CSS and
JavaScript, and the Document Object Model with JavaScript
and CSS.
Theory Concepts:
Features:
This page consists of a centered container with 3 tabs, one each for showing a text, an
image and a YouTube video. A div containing three buttons is used as a tab bar, and
pressing each button displays the corresponding tab. Only one tab should be displayed
at a time. The button showing the current tab must remain highlighted from the
moment your page is loaded.
The main container should have a minimum width of 300px and should scale with the
window size.
It should remain centered both vertically and horizontally. All tabs should have 10-20
px of padding. Individual tabs should be the same height regardless of the content.
If you need help for centering your elements please check this guide:
Text tab:
Image tab:
Show the embedded image from an external URL. Image should be both vertically
and horizontally centered. It should maintain the aspect ratio and resize to fill the
container horizontally. Use the overflow property to keep the image within the container.
Video tab:
Display a YouTube video using the iframe HTML tag. You can get a pre-written tag using
YouTube's embedding options. The video should fill the tab vertically and horizontally.
You should choose a Google web font for your page and import it either using a
link tag in your HTML or using @import directly in your CSS.
You are free to choose your own color scheme, button style and fonts. Try to make it
beautiful and get creative!
Overall Hints:
- CSS should control the visibility of the different elements via the 'is-visible' class
and the display property.
- Your JavaScript needs to implement onClick listeners for the buttons, and set the
correct membership in the is-visible class. Loop through the example-content
elements to do this.
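A minimal sketch of the hinted JavaScript, assuming each panel carries the class example-content and each tab-bar button has a data-tab attribute naming the panel it controls (these attribute names are assumptions, not part of the assignment):

// Toggle the 'is-visible' class so CSS controls which tab is shown.
// Assumes each button has a data-tab attribute matching the id of one .example-content panel.
const buttons = document.querySelectorAll('.tab-bar button');
const panels = document.querySelectorAll('.example-content');

function showTab(id) {
  // Loop through every panel and keep only the selected one visible
  panels.forEach(panel => {
    panel.classList.toggle('is-visible', panel.id === id);
  });
  // Highlight the button for the current tab
  buttons.forEach(button => {
    button.classList.toggle('active', button.dataset.tab === id);
  });
}

buttons.forEach(button => {
  button.addEventListener('click', () => showTab(button.dataset.tab));
});

// Show the first tab on load so one button starts highlighted
showTab(buttons[0].dataset.tab);

/* Corresponding CSS idea:
   .example-content { display: none; }
   .example-content.is-visible { display: block; } */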
Need of DOM
HTML is used to structure web pages and JavaScript is used to add behavior to
them. When an HTML file is loaded into the browser, JavaScript cannot understand
the HTML document directly. So, a corresponding document is created: the DOM.
The DOM is basically a representation of the same HTML document in a different
format, using objects. JavaScript interprets the DOM easily; i.e., JavaScript cannot
understand the tags (<h1>H</h1>) in the HTML document, but it can understand the
object h1 in the DOM. JavaScript can then access each of the objects (h1, p, etc.) by
using different functions.
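For example, a minimal sketch of such access (the id used here is hypothetical):

// Grab the first <h1> element as a DOM object and change its text
const heading = document.querySelector('h1');
heading.textContent = 'Hello DOM';

// Look up an element by id and change its style
const intro = document.getElementById('intro');   // assumes <p id="intro"> exists
intro.style.color = 'blue';

// Create a new node and attach it to the tree
const note = document.createElement('p');
note.textContent = 'Added through the DOM';
document.body.appendChild(note);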
Structure of DOM: DOM can be thought of as a Tree or Forest(more than one tree).
The term structure model is sometimes used to describe the tree-like representation
of a document. Each branch of the tree ends in a node, and each node contains
objects. Event listeners can be added to nodes and triggered on the occurrence of a
given event. One important property of DOM structure models is structural
isomorphism: if any two DOM implementations are used to create a representation
of the same document, they will create the same structure model, with precisely the
same objects and relationships.
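As a small illustration, the HTML fragment below and the tree it produces show how each element and piece of text becomes a node:

<!-- HTML source -->
<html>
  <body>
    <h1>Title</h1>
    <p>Some <b>bold</b> text</p>
  </body>
</html>

Resulting DOM tree:
html
└── body
    ├── h1
    │   └── "Title"
    └── p
        ├── "Some "
        ├── b
        │   └── "bold"
        └── " text"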
Properties of DOM: Let’s see the properties of the document object that can be
accessed and modified through it.
Window Object: The window object is the browser object that is always at the top of
the hierarchy. It is like an API that is used to set and access all the properties and
methods of the browser. It is created automatically by the browser.
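A few illustrative calls that go through the window object (a minimal sketch):

// The window object sits at the top of the browser object hierarchy
console.log(window.innerWidth, window.innerHeight);  // current viewport size
window.alert('Hello');                               // same as calling alert('Hello')
console.log(window.document === document);           // true: document hangs off window
window.setTimeout(() => console.log('later'), 1000); // browser timer API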
Assignment No. 5
Title : Assignment based on knowledge of user interfaces using Javascript, CSS and
HTML
Problem Definition: Develop interactive user interfaces using Javascript, CSS and
HTML, specifically:
a. implementation of form-based data entry, input groups, and
button elements using the Bootstrap library.
b. use of responsive web design (RWD) principles,
c. implementing JavaScript communication between the input
forms and a custom visualization component
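As an illustration of part (a), here is a minimal sketch of form-based entry with an input group and a button; it assumes the Bootstrap CSS is loaded (the CDN link and field names below are only examples, not part of the assignment):

<!-- Minimal sketch: a Bootstrap form with an input group and a submit button -->
<link rel="stylesheet"
      href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css">

<form class="container mt-4">
  <div class="mb-3">
    <label for="name" class="form-label">Name</label>
    <input type="text" class="form-control" id="name" placeholder="Enter your name">
  </div>
  <div class="input-group mb-3">
    <span class="input-group-text">@</span>
    <input type="text" class="form-control" placeholder="Username">
  </div>
  <button type="submit" class="btn btn-primary">Submit</button>
</form>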
Theory Concepts:
HTML
At the user interface level, the platform provides a rich visual editor that allows web
interfaces to be composed by dragging and dropping. Instead of purely writing
HTML, developers use visual widgets. These widgets are wrapped and are easy to
reuse just by dragging and dropping without everyone needing to understand how
they are built:
The core visual widgets represent very closely what developers are used to
with HTML: a div, an input, a button, a table, and so forth. All of them have a
direct - and well known - HTML representation. For example, dragging a
“Container” generates a div.
Custom HTML widgets can be used to include whatever HTML is needed. An
example is a CMS that loads dynamic, database-stored HTML content in a
page.
Widgets can be composed and wrapped in “web blocks,” which are similar to
user controls, and reusable layouts with “placeholders,” which are similar to
Master Pages with “holes” that will be filled in when instantiated.
All widgets can be customized in the properties box via “Extended
Properties," which will be directly translated to HTML attributes. This
includes HTML tags that are not supported today in the base HTML
definition. For example, if someone wants to use custom “data-” attributes,
they can just add them in.
All widgets have properties such as RuntimeId (HTML attribute ID) or Style
(class), which allow them to be used in/with standard JavaScript or CSS.
All widgets have a well-defined API that is tracked by the platform to ensure
that they are being properly used across all applications that reuse it.
In summary, the visual editor is very similar to a view templating system, such as
.NET ASPX, Java JSP or Ruby ERBs, where users define the HTML and include
dynamic model/presenter bound expressions.
JavaScript
OutSystems includes jQuery by default in all applications. But, developers also have
the option to include their own JavaScript frameworks (prototype, jQuery, jQueryUI,
dojo) and use them throughout applications just as they would in any HTML page.
Many JavaScript-based widgets, such as jQuery plugins, have already been packaged
into easy to reuse web blocks by OutSystems Community members and published
to OutSystems Forge. There are examples for kParallax, Drag and Drop lists, Table
freeze cells, Touch Drag and Drop, Sliders, intro.js, or the well known Google Maps.
Even some of the OutSystems built-in widgets are a mix of JavaScript, JSON and
back-end logic. For example, the OutSystems Charting widget is a wrapper over the
well-known Highcharts library. Developers can use the properties exposed by the
OutSystems widget, or use the full JSON API provided by HighCharts to configure
the widget.
This is an example of the jVectorMap JavaScript library, wrapped and reused in the
visual designer to display website access metrics over a world map:
This is an example of a jQuery slideshow plugin wrapped and reused in the visual
designer:
CSS
OutSystems UIs are purely CSS3-based. A predefined set of “themes,” a mix of CSS
and layout templates, can be used in applications. However, developers can reuse
existing CSS or create their own. A common example is to reuse bootstrap, which is
tweaked so that its grid system is reused by the OutSystems visual editor to drag and
drop page layouts, instead of having to manually input the CSS columns for every
element.
Themes are hierarchical, which means that there is a CSS hierarchy in OutSystems.
Developers can define an application-wide CSS in one theme and redefine only parts
of it for a particular section of an application. Themes can also be reused by
applications when there is a standard style guide.
The built-in CSS text editor supports autocomplete, similar to that of Firebug or
Chrome Inspector, and immediately previews the results in the page without having to
recompile/redeploy applications.
Common CSS styling properties including padding, margin, color, border, and
shadow, can also be adjusted from directly within the IDE using the visual styles
editor panel.
This example shows information overlaid in maps using a user interface external
component:
This is an example of a typical page in the website that provides the visualization of
information in dynamic and static charts:
Wodify, a SaaS solution for Cross Fit Gyms, is built with OutSystems and currently
supports more than 200,000 users around the globe. Although most of the
functionality in Wodify is created with OutSystems built-in user interface widgets, it
is a great example of how OutSystems interfaces can be freely styled using CSS to
achieve a consistent look and feel and support several devices and form factors.
User Interface (UI) defines the way humans interact with the information systems.
In Layman’s term, User Interface (UI) is a series of pages, screens, buttons, forms
and other visual elements that are used to interact with the device. Every app and
every website has a user interface.
The user interface property is used to change any element into one of several
standard user interface elements. In this article we will discuss the following user
interface property:
resize
outline-offset
resize Property: The resize property allows a box to be resized by the user. This
property does not apply to inline elements or to block elements where overflow is
visible. For this property to work, overflow must be set to "scroll", "auto", or "hidden".
Syntax:
resize: horizontal|vertical|both;
horizontal: This value lets the user resize the width of the element.
Syntax:
resize: horizontal;
vertical: This value lets the user resize the height of the element.
Syntax:
resize: vertical;
both: This value lets the user resize both the height and width of the element.
Syntax:
resize: both;
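A minimal CSS sketch of a user-resizable box (the class name is hypothetical):

/* A box the user can drag to resize in both directions */
.resizable-box {
  width: 300px;
  height: 150px;
  border: 1px solid #999;
  overflow: auto;   /* resize requires overflow other than visible */
  resize: both;     /* allow dragging the bottom-right corner */
}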
Supported Browsers: The browsers that support the resize property are listed below:
Apple Safari 4.0
Google Chrome 4.0
Firefox 5.0 (4.0 with the -moz- prefix)
Opera 15.0
Internet Explorer: not supported
outline-offset: The outline-offset property in CSS is used to set the amount of space
between an outline and the edge or border of an element. The space between the
element and its outline is transparent.
Syntax:
outline-offset: length;
Note: Length is the width of the space between the element and its outline.
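A minimal CSS sketch combining an outline with outline-offset (the class name is hypothetical):

/* Draw a dashed outline 10px outside the element's border */
.highlighted {
  border: 1px solid black;
  outline: 2px dashed red;
  outline-offset: 10px;   /* transparent gap between border and outline */
}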
Assignment No. 6
Outcomes: After completion of this assignment, students will be able to make a table
lamp in Blender – a 3D modeling software.
Theory Concepts:
easy to use
sturdy
durable
customizable
Get started by downloading the table lamp kit for the following 3D modeling
programs:
Blender
Step 2:
We make use of a standard component to attach the shade to the base. This standard
component will be inserted and glued to the lamp fitting (blue) by us. It is important
for you, as the designer, to be aware of this and make use of this blue part in your
design.
Since we’re dealing with electricity, you have to make sure to include a spherical
zone around the light bulb of Ø 6 cm / Ø 2.4 inch. This zone needs to remain
completely open (hollow); it cannot contain any material. The design kit contains a Ø
6 cm / Ø 2.4 inch sphere to perform this safety check. Place the sphere inside your
lamp shade and make sure they don’t intersect.
For the price of the table lamp, you’re allowed to use a maximum diameter of 13 cm
/ 5.12 inch and the same for the height of your lamp shade. These also happen to be
the ideal dimensions regarding stability, weight, and aesthetics.
Starting from here, you can proceed with your order. After ordering, our 3D printers
will start building your unique lamp.
Step 6:
Program:
Output: