Assignment 1 - Merged

Assignment No. 01

Title : Assignment based on Technologies from the Knowledge Navigator video

Problem Definition: List five technologies from the Knowledge Navigator video that were not
around in 1987, but are in widespread use today.

Requirements: Knowledge of human computer interface

Learning Objectives: Learn the technologies from the Knowledge Navigator video

Outcomes: After completion of this assignment students will be able to learn various
technologies from the knowledge navigator video.

Theory Concepts:
The Knowledge Navigator is a concept described by former Apple Computer CEO John Sculley
in his 1987 book, Odyssey: Pepsi to Apple. It describes a device that can access a large networked
database of hypertext information and use software agents to assist in searching for information.

Videos
Apple produced several concept videos showcasing the idea. All of them featured a tablet-style
computer with numerous advanced capabilities, including an excellent text-to-speech system
with no hint of "computerese", a gesture-based interface resembling the multi-touch interface
later used on the iPhone, and an equally powerful speech-understanding system, allowing the
user to converse with the system via an animated "butler" as the software agent.
In one vignette a university professor returns home and turns on his computer, in the form of a
tablet the size of a large-format book. The agent is a bow-tie-wearing butler who appears on the
screen and informs him that he has several calls waiting. He ignores most of these, from his
mother, and instead uses the system to compile data for a talk on deforestation in the Amazon
Rainforest. While he is doing this, the computer informs him that a colleague is calling, and they
then exchange data through their machines while holding a video-based conversation.
In another video, a young student uses a smaller handheld version of the system to prompt
him while he gives a class presentation on volcanoes, eventually sending a video of an exploding
volcano to the video "blackboard". In a final installment a user scans in a newspaper by placing
it on the screen of the full-sized version, and then has it help him learn to read by listening to
him read the scanned results and prompting him when he pauses.
Credits
The videos were funded and sponsored by Bud Colligan, Director of Apple's higher education
marketing group, written and creatively developed by Hugh Dubberly and Doris Mitsch of Apple
Creative Services, with technical and conceptual input from Mike Liebhold of Apple's Advanced
Technologies Group and advice from Alan Kay, then an Apple Fellow. The videos were produced
by The Kenwood Group in San Francisco and directed by Randy Field. The director of
photography was Bill Zarchy. The post-production mix was done by Gary Clayton at Russian Hill
Recording for The Kenwood Group. The product industrial design was created by Gavin Ivester
and Adam Grosser of Apple design.
Samir Arora, a software engineer at Apple, was involved in R&D on application navigation and
what was then called hypermedia. He wrote an important white paper entitled "Information
Navigation: The Future of Computing". While working for Apple CEO John Sculley at the time,
Arora built the technology to show fluid access to linked data displayed in a friendly manner, an
emerging area of research at Apple. The Knowledge Navigator video premiered in 1987 at
Educom, the leading higher education conference, in a keynote by John Sculley, with demos of
multimedia, hypertext and interactive learning directed by Bud Colligan. The music featured in
this video is Georg Anton Benda's Harpsichord Concerto in C.

Reception
The astute bow tie wearing software agent in the video has been the center of quite a few heated
discussions in the domain of human–computer interaction. It was criticized as being an
unrealistic portrayal of the capacities of any software agent in the foreseeable future, or even in a
distant future.[citation needed] Some user interface professionals like Ben Shneiderman of the
University of Maryland, College Park have also criticized its use of a human likeness for giving a
misleading idea of the nature of any interaction with a computer, present or future. Some
visions put forth by proponents of the Semantic Web have been likened to that of the Knowledge
Navigator by Marshall and Shipman, who argue that some of these visions "ignore the difficulty
of scaling knowledge-based systems to reason across domains, like Apple's Knowledge
Navigator," and conclude that, as of 2003, "scenarios of the complexity of [a previously quoted]
Knowledge Navigator-like approach to interacting with people and things in the world seem
unlikely."

Siri
The notion of Siri was planted at Apple 25 years earlier, though the "Knowledge Navigator"
assistant, voice and all, was only a concept prototype. In one of the videos, a man asks the
assistant to search for an article published five years before his time; the assistant finds it and
notes that the article is dated 2006, from which we can conclude that the video is set in
September 2011. In October 2011, Apple launched Siri, a voice-activated personal assistant
vaguely similar to that aspect of the Knowledge Navigator, just a month after the video's
implied date.

The Making of Knowledge Navigator: Apple made the Knowledge Navigator video for a
keynote speech that John Sculley gave at Educom (the premier college computer tradeshow and
an important event in a large market for Apple). Bud Colligan, who was then running
higher-education marketing at Apple, asked us to meet with John about the speech. John
explained he would show a couple of examples of student projects using commercially available
software simulation packages and a couple of university research projects Apple was funding. He
wanted three steps:
1. what students were doing now,
2. research that would soon move out of labs, and
3. a picture of the future of computing.
He asked us to suggest some ideas. We suggested a couple of approaches, including a short
"science-fiction video." John chose the video. Working with Mike Liebhold (a researcher in
Apple's Advanced Technologies Group) and Bud, we came up with a list of key technologies to
illustrate in the video, e.g., networked collaboration and shared simulations, intelligent agents,
and integrated multimedia and hypertext. John then highlighted these technologies in his speech.
We had about six weeks to write, shoot, and edit the video, and a budget of about $60,000 for
production. We began with as much research as we could do in a few days. We talked
with Aaron Marcus and Paul Saffo. Stewart Brand's book on the Media Lab was also a
source, as were earlier visits to the Architecture Machine Group. We also read William
Gibson's "Neuromancer" and Vernor Vinge's "True Names." At Apple, Alan Kay, who was then
an Apple Fellow, provided advice. Most of the technical and conceptual input came from Mike
Liebhold. We collaborated with Gavin Ivester in Apple's Product Design Group, who designed
the "device" and had a wooden model built in little more than a week. Doris Mitsch, who worked
in my group, wrote the script. Randy Field directed the video, and the Kenwood Group handled
production.
The project had three management approval steps:
1. the concept of the science fiction video,
2. the key technology list, and
3. the script.

It moved quickly from script to shooting without a full storyboard, largely because we didn't
have time to make one. The only roughs were a few Polaroid snapshots of the location, two
sketches showing camera position and movement, and a few sketches of the screen. We
showed up on location very early and shot for more than 12 hours. (Completing the shoot within
one day was necessary to stay within budget.) The computer screens were developed over a few
days on a video paint box. (This was before Photoshop.)
The video form suggested the talking agent as a way to advance the "story" and explain what the
professor was doing. Without the talking agent, the professor would be silent, pointing
mysteriously at a screen. We thought people would immediately understand that the piece
was science fiction because the computer agent converses with the professor, something that at
the time happened only in Star Trek or Star Wars.
What is surprising is that the piece took on a life of its own. It spawned half a dozen or more
sequels within Apple, and several other companies made similar pieces. These pieces were
marketing materials. They supported the sale of computers by suggesting that a company
making them has a plan for the future. They were not inventing new interface ideas. (The
production cycles didn’t allow for that.) Instead, they were about visualizing existing ideas—and
pulling many of them together into a reasonably coherent environment and scenario of use. A
short while into the process of making these videos, Alan Kay said, “The main question here is
not is this technology possible but is this the way Apple wants to use technology?” One effect of
the video was engendering a discussion (both inside Apple and outside) about what computers
should be like. On another level, the videos became a sort of management tool. They suggested
that Apple had a vision of the future, and they prompted a popular internal myth that the
company was “inventing the future.”

Technologies
Apple's Siri
Siri is Apple's personal assistant for iOS, macOS, tvOS and watchOS devices that uses voice
recognition and is powered by artificial intelligence (AI). Siri responds to users' spoken
questions by speaking back to them through the device's speaker and presenting relevant
information on the home screen from certain apps, such as Web Search or Calendar. The service
also lets users dictate emails and text messages, reads received emails and messages and
performs a variety of other tasks.

Voice of Siri
ScanSoft, a software company that merged with Nuance Communications in 2005, hired
voiceover artist Susan Bennett that same year when the scheduled artist was absent. Bennett
recorded four hours of her voice each day for a month in a home recording studio, and the
sentences and phrases were linked together to create Siri's voice. Until a friend emailed her in
2011, Bennett wasn't aware that she had become the voice of Siri. Although Apple never
acknowledged that Bennett was the original Siri voice, audio experts at CNN confirmed it. Karen
Jacobsen, a voiceover artist known for her work on GPS systems, provided the original
Australian female voice. Jon Briggs, a former tech journalist, provided Siri's British male voice.
Apple developed a new female voice for iOS 11 with deep learning technology by recording hours
of speech from hundreds of candidates.

Important features and popular commands


Siri can perform a variety of tasks, such as:
● Navigate directions, including "What's the traffic like on my way to work?"
● Schedule events and reminders, such as "Text Ben 'happy birthday' at midnight on
Tuesday"
● Search the web, including "Find images of dogs"
● Relay information, such as "How long does it take water to boil?"
● Change settings, such as "Increase the screen brightness" or "Take a photo"
Users can make requests to Siri in natural language.

Amazon's Alexa
Amazon Alexa, also known simply as Alexa, is a virtual assistant technology largely based on a
Polish speech synthesizer named Ivona, bought by Amazon in 2013. It was first used in the
Amazon Echo smart speaker and the Echo Dot, Echo Studio and Amazon Tap speakers
developed by Amazon Lab126. It is capable of voice interaction, music playback, making to-do
lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic,
sports, and other real-time information, such as news. Alexa can also control several smart
devices, acting as a home automation system. Users can extend Alexa's capabilities by
installing "skills" (additional functionality developed by third-party vendors, in other settings
more commonly called apps), such as weather programs and audio features. It uses automatic
speech recognition, natural language processing, and other forms of weak AI to perform these
tasks. Most devices with Alexa allow users to activate the device using a wake word (such as
Alexa or Amazon); other devices (such as the Amazon mobile app on iOS or Android and the
Amazon Dash Wand) require the user to press a button to activate Alexa's listening mode,
although some phones also allow a user to say a command, such as "Alexa" or "Alexa wake".

Functions
Alexa can perform a number of preset functions out of the box, such as setting timers, sharing
the current weather, creating lists, accessing Wikipedia articles, and much more. Users say a
designated "wake word" (the default is simply "Alexa") to alert an Alexa-enabled device of an
ensuing function command. Alexa listens for the command and performs the appropriate
function, or skill, to answer a question or command. When questions are asked, Alexa converts
sound waves into text, which allows it to gather information from various sources. Behind the
scenes, the gathered data is then sometimes passed to a variety of sources, including
WolframAlpha, IMDb, AccuWeather, Yelp, Wikipedia, and others, to generate suitable and
accurate answers. Alexa-supported devices can stream music from the owner's Amazon Music
account and have built-in support for Pandora and Spotify accounts. Alexa can play music from
streaming services such as Apple Music and Google Play Music from a phone or tablet. In
addition to performing preset functions, Alexa can also perform additional functions through
third-party skills that users can enable. Some of the most popular Alexa skills in 2018 included
"Question of the Day" and "National Geographic Geo Quiz" for trivia; "TuneIn Live" to listen to
live sporting events and news stations; "Big Sky" for hyper-local weather updates; "Sleep and
Relaxation Sounds" for listening to calming sounds; "Sesame Street" for children's
entertainment; and "Fitbit" for Fitbit users who want to check in on their health stats. In 2019,
Apple, Google, Amazon, and the Zigbee Alliance announced a partnership to make their smart
home products work together.

Microsoft's Cortana

Cortana is a virtual assistant developed by Microsoft that uses the Bing search engine to perform
tasks such as setting reminders and answering questions for the user. Cortana is currently
available in English, Portuguese, French, German, Italian, Spanish, Chinese, and Japanese
language editions, depending on the software platform and region in which it is used. Microsoft
began reducing the prevalence of Cortana and converting it from an assistant into different
software integrations in 2019. It was split from the Windows 10 search bar in April 2019. In
January 2020, the Cortana mobile app was removed from certain markets, and on March 31,
2021, it was shut down globally.

Functionality
Cortana can set reminders, recognize natural voice without the requirement for keyboard input,
and answer questions using information from the Bing search engine (for example, current
weather and traffic conditions, sports scores, and biographies). Searches using Windows 10 are
made only with the Microsoft Bing search engine, and all links open in Microsoft Edge, except
when a screen reader such as Narrator is being used, where the links open in Internet
Explorer. Windows Phone 8.1's universal Bing SmartSearch features are incorporated into
Cortana, which replaced the previous Bing Search app, activated when a user pressed
the "Search" button on their device. Cortana includes a music recognition service. Cortana can
simulate rolling dice and flipping a coin. Cortana's "Concert Watch" monitors Bing searches to
determine the bands or musicians that interest the user. It integrates with the Microsoft Band
watch band for Windows Phone devices if connected via Bluetooth; it can then surface
reminders and phone notifications.
Since the Lumia Denim mobile phone series, launched in October 2014, active listening has
been available in Cortana, enabling it to be invoked with the phrase "Hey Cortana". It can then
be controlled as usual. Some devices from the United Kingdom on O2 received the Lumia Denim
update without the feature, but this was later clarified as a bug and Microsoft has since fixed it.
Cortana integrates with services such as Foursquare to provide restaurant and local attraction
recommendations and LIFX to control smart light bulbs.

Google's Assistant

Google Assistant is a virtual assistant software application developed by Google that is primarily
available on mobile and home automation devices. Based on artificial intelligence, Google
Assistant can engage in two-way conversations, unlike the company's previous virtual assistant,
Google Now.
Google Assistant debuted in May 2016 as part of Google's messaging app Allo, and its
voice-activated speaker Google Home. After a period of exclusivity on the Pixel and Pixel XL
smartphones, it was deployed on other Android devices starting in February 2017, including
third-party smartphones and Android Wear (now Wear OS), and was released as a standalone
app on the iOS operating system in May 2017. Alongside the announcement of a software
development kit in April 2017, Assistant has been further extended to support a large variety of
devices, including cars and third-party smart home appliances. The functionality of the Assistant
can also be enhanced by third-party developers.
Users primarily interact with the Google Assistant through natural voice, though keyboard input
is also supported. Assistant is able to answer questions, schedule events and alarms, adjust
hardware settings on the user's device, show information from the user's Google account, play
games, and more. Google has also announced that Assistant will be able to identify objects and
gather visual information through the device's camera, and support purchasing products and
sending money. At CES 2018, the first Assistant-powered smart displays (smart speakers with
video screens) were announced, with the first one being released in July 2018. By 2020, Google
Assistant was available on more than 1 billion devices. Google Assistant is available in more
than 90 countries and in over 30 languages, and is used by more than 500 million users monthly.

Samsung’s Bixby

Bixby is a virtual assistant developed by Samsung Electronics. It represents a major reboot of S
Voice, Samsung's voice assistant app introduced in 2012 with the Galaxy S III. S Voice was later
discontinued on 1 June 2020. In May 2017, Samsung announced that Bixby would be coming to
its line of Family Hub 2.0 refrigerators, making it the first non-mobile product to include the
virtual assistant.
Samsung’s Bixby digital assistant lets you control your smartphone and select connected
accessories. You can open apps, check the weather, play music, toggle Bluetooth, and much
more. You’ll find everything you need to know about the Google rival below, including how to
access it, the features it offers, and which devices it’s available on.
The most interesting and helpful component is of course Bixby Voice, which lets you use voice
commands to get stuff done. It works with all Samsung apps and a few third-party apps,
including Instagram, Gmail, Facebook, and YouTube. With Voice you can send text messages,
check sports scores, turn down screen brightness, check your calendar, launch apps, and more.
The tech can also read out your latest incoming messages and switch between male and female
voices. Like Google Assistant, Bixby can handle some more complicated two-step commands,
such as creating an album with your vacation photos and sharing it with a friend.

Conclusion: Successfully studied technologies from the Knowledge Navigator video that were
not around in 1987, but are in widespread use today.
Assignment No. 02

Title: Assignment based on GOMS (Goals, Operators, Methods and Selection rules)
modeling technique

Problem Definition: Implement the GOMS (Goals, Operators, Methods and Selection
rules) modeling technique to model a user's behavior in a given scenario.

Requirements: Knowledge of human computer interface and GOMS.


Learning Objectives: Learn the GOMS (Goals, Operators, Methods and Selection
rules) modeling technique.

Outcomes: After completion of this assignment students will be able to learn the
GOMS (Goals, Operators, Methods and Selection rules) modeling technique.

Theory:
GOMS is a model of human performance and it can be used to improve
human-computer interaction efficiency by eliminating useless or unnecessary
interactions.
GOMS is an abbreviation from:
G → Goals
O → Operators
M → Methods
S → Selection

In more detail, we define:

➢ Goals (G) as the task to do, e.g., "Send email"
➢ Operators (O) as all actions needed to achieve the goal, e.g., the mouse clicks
needed to send an email
➢ Methods (M) as a group of operators, e.g., "move mouse to the Send button, click on
the button"
➢ Selection rules (S) as the user's decision approach, e.g., "move mouse to the Send
button, click on the button" or
"move mouse to the Send button, press ENTER"
➢ GOMS is based on a research phase with end-users and can serve as a strong
analysis benchmark of users' behavior. It helps eliminate the development of unnecessary
actions, so it is time- and cost-saving.

Advantages:
❖ Gives qualitative & quantitative measures
❖ Model explains the results
❖ Less work than user study – no users!
❖ Easy to modify when UI is revised
❖ Research: tools to aid modeling process since it can still be tedious

Disadvantages:
❖ Takes lots of time, skill, & effort
❖ Only works for goal-directed tasks
❖ Assumes tasks performed by experts without error
❖ Does not address several UI issues, e.g., readability and memorability of icons and
commands

Basically, there are five different GOMS models:


The Keystroke-Level Model, CMN-GOMS, NGOMSL, CPM-GOMS, and SGOMS. Each
model has a different complexity and varies in activities.
1. Keystroke-Level Model (KLM):
The Keystroke-Level Model (KLM) is a relatively simple tool that allows a designer,
researcher, or engineer to predict or estimate how long it will take an experienced user
to complete a routine task in their software.
The model is composed of six elements, or operators:
1. K - keystroke or button press.
2. P - pointing with a mouse.
3. H - homing the hands on the keyboard or other device.
4. D - manually drawing. This is used when drawing a straight line with a mouse. It is
not frequently used.
5. M - mental preparation. This is the time needed for thinking or planning an action or
decision making.
6. R - system response time.

Each element has a time associated with it.


For example:
o K for an average typist (40 wpm) is 0.28 seconds, P is 1.1 seconds, and M is 1.35
seconds.
o As an example sequence: the user opens the browser (a button press, followed by
system response time R), moves the mouse cursor to the search box (P), clicks the
search box (K), and then moves a hand from the mouse back to the keyboard (H)
before typing.
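The operator times above can be turned into a simple estimator. The sketch below (a hypothetical Python illustration, not part of KLM itself) sums per-operator times for the browser-search fragment just described; the H value is an assumption, since only K, P, and M are given above.

```python
# KLM operator times in seconds. K, P, and M come from the text; the H
# value here is an assumption for illustration.
KLM_TIMES = {
    "K": 0.28,  # keystroke or button press (average 40 wpm typist)
    "P": 1.10,  # pointing with a mouse
    "H": 0.40,  # homing hands between mouse and keyboard (assumed)
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence, response_time=0.0):
    """Sum operator times for a sequence such as 'MPKH'; R uses response_time."""
    return sum(response_time if op == "R" else KLM_TIMES[op] for op in sequence)

# Browser-search fragment: think (M), point at the search box (P),
# click it (K), then home hands back to the keyboard (H).
print(round(klm_estimate("MPKH"), 2))  # 1.35 + 1.10 + 0.28 + 0.40 = 3.13
```

Adding or removing a single operator letter immediately shows its effect on the predicted task time, which is the usual way KLM is used to compare interface designs.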

2. CMN-GOMS:
CMN-GOMS stands for Card, Moran, and Newell GOMS. It is a cognitive model that
applies when operators are strictly sequential.
Example:
GOAL: DELETE-FILE
. GOAL: SELECT-FILE
. . [select: GOAL: KEYBOARD-TAB-METHOD
. . GOAL: MOUSE-METHOD]
. . VERIFY-SELECTION
. GOAL: ISSUE-DELETE-COMMAND
. . [select*: GOAL: KEYBOARD-DELETE-METHOD
. . . PRESS-DELETE
. . . GOAL: CONFIRM-DELETE
. . GOAL: DROP-DOWN-MENU-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . CLICK-RIGHT-MOUSE-BUTTON
. . . LOCATE-DELETE-COMMAND
. . . MOVE-MOUSE-TO-DELETE-COMMAND
. . . CLICK-LEFT-MOUSE-BUTTON
. . . GOAL: CONFIRM-DELETE
. . GOAL: DRAG-AND-DROP-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . PRESS-LEFT-MOUSE-BUTTON
. . . LOCATE-RECYCLING-BIN
. . . MOVE-MOUSE-TO-RECYCLING-BIN

. RELEASE-LEFT-MOUSE-BUTTON]
*Selection rule for GOAL: ISSUE-DELETE-COMMAND
If hands are on keyboard, use KEYBOARD-DELETE-METHOD, else if Recycle bin is
visible, use DRAG-AND-DROP-METHOD, else use DROP-DOWN-MENU-METHOD
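The selection rule above is just a conditional, which a short Python sketch can make explicit (the method names follow the goal tree; the function itself is a hypothetical illustration, not part of CMN-GOMS notation):

```python
def select_delete_method(hands_on_keyboard, recycle_bin_visible):
    """Selection rule for GOAL: ISSUE-DELETE-COMMAND, as stated above."""
    if hands_on_keyboard:
        return "KEYBOARD-DELETE-METHOD"
    if recycle_bin_visible:
        return "DRAG-AND-DROP-METHOD"
    return "DROP-DOWN-MENU-METHOD"

print(select_delete_method(True, True))    # KEYBOARD-DELETE-METHOD
print(select_delete_method(False, True))   # DRAG-AND-DROP-METHOD
print(select_delete_method(False, False))  # DROP-DOWN-MENU-METHOD
```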

3. NGOMSL:
NGOMSL stands for "Natural GOMS Language".
• A formal language with restricted English syntax.
• The benefit of the formal language is that each statement roughly corresponds to a
primitive mental chunk, so you can estimate learning time as well as total execution time.
NGOMSL for move text:

Method for goal: Move text


• Step 1: Accomplish goal: cut text
• Step 2: Accomplish goal: paste text
• Step 3: Return with goal accomplished

Method for goal: cut text


• Step 1: Accomplish goal: Highlight text
• Step 2: Retain that the command is CUT, and accomplish goal: Issue a command (IC)

Method for goal: paste text


• Step 1: Accomplish goal: Position cursor at insertion point
• Step 2: Retain that the command is PASTE and accomplish goal: IC
• Step 3: Return with goal accomplished
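Since each NGOMSL statement roughly corresponds to a primitive mental chunk, learning time can be estimated by counting statements. The Python sketch below encodes the three methods above; the 17-seconds-per-statement constant is an assumed illustrative value, as published per-statement estimates vary.

```python
# NGOMSL methods from the move-text example, one list of statements each.
METHODS = {
    "Move text": [
        "Accomplish goal: cut text",
        "Accomplish goal: paste text",
        "Return with goal accomplished",
    ],
    "Cut text": [
        "Accomplish goal: highlight text",
        "Retain that the command is CUT, and accomplish goal: issue a command",
    ],
    "Paste text": [
        "Accomplish goal: position cursor at insertion point",
        "Retain that the command is PASTE, and accomplish goal: issue a command",
        "Return with goal accomplished",
    ],
}

# Assumed illustrative constant: treat this number as a placeholder.
SECONDS_PER_STATEMENT = 17

def learning_time(methods):
    """Estimate method learning time from the total NGOMSL statement count."""
    statements = sum(len(steps) for steps in methods.values())
    return statements * SECONDS_PER_STATEMENT

print(learning_time(METHODS))  # 8 statements x 17 s = 136 s
```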

4. CPM-GOMS:
• CPM stands for Cognitive, Perceptual, Motor (and also for Critical Path Method); it is
another variant of GOMS.
• Unlike KLM and the other models, which assume serial operations, the CPM-GOMS
model handles parallel operations.
Ex: point and shift-click.
CPM-GOMS assumes:
• A perceptual processor (PP)
• A cognitive processor (CP)
• Multiple motor processors (MP): one for each major muscle system that can act
independently. For GUI interfaces, the muscles we mainly care about are the two hands
and the eyes.
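Scheduling parallel operators in CPM-GOMS amounts to finding the longest (critical) path through a dependency graph of operators. The Python sketch below illustrates this for a point-and-shift-click task; the operator durations and dependencies are invented for illustration, not measured values.

```python
# CPM-GOMS sketch: total task time is the longest (critical) path through
# a dependency graph of operators running on parallel processors.
durations = {  # milliseconds, illustrative only
    "perceive-target": 100,  # perceptual processor (PP)
    "decide-click": 50,      # cognitive processor (CP)
    "move-mouse": 300,       # motor processor (MP): right hand
    "press-shift": 100,      # motor processor (MP): left hand
    "click-mouse": 100,      # motor processor (MP): right hand
}
deps = {  # operator -> operators that must finish first
    "perceive-target": [],
    "decide-click": ["perceive-target"],
    "move-mouse": ["decide-click"],
    "press-shift": ["decide-click"],  # left hand works in parallel with the mouse move
    "click-mouse": ["move-mouse", "press-shift"],  # click waits for both hands
}

memo = {}

def finish_time(op):
    """Earliest time `op` can finish, given its dependencies."""
    if op not in memo:
        start = max((finish_time(d) for d in deps[op]), default=0)
        memo[op] = start + durations[op]
    return memo[op]

total = max(finish_time(op) for op in durations)
print(total)  # 550: perceive (100) + decide (50) + move-mouse (300) + click (100)
```

Because press-shift (100 ms) overlaps with move-mouse (300 ms), it does not lie on the critical path; a serial model such as KLM would instead sum all five durations.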

5. SGOMS (Sociotechnical GOMS):

• The relationship between micro cognition and macro cognition can be understood by
analogy to the relationship between neuroscience and cognitive psychology.
• Although the cognitive system is a product of the neural system, most of cognitive
psychology is based on information-processing concepts that are, at best, only vaguely
related to the biological functioning of the brain.

Conclusion: Successfully implemented the GOMS (Goals, Operators, Methods and Selection
rules) modeling technique to model a user's behavior in a given scenario.

Assignment No. 3
Title : Assignment based on knowledge of Web Design guidelines and general UI
design principles

Problem Definition: Using your observations from your small user study and your
knowledge of Web Design guidelines and general UI design
principles, critique the interfaces of any two educational
institutes and make suggestions for improvement.

Requirements: Knowledge of Web Design and UI design principles

Learning Objectives: Learn the Knowledge of Web Design and UI design principles

Outcomes: After completion of this assignment students will be able to give suggestions
for improvement of the Web Design and UI design of any website.

Theory Concepts:

PRINCIPLES OF GOOD WEBSITE DESIGN

An effective website design should fulfil its intended function by conveying its
particular message whilst simultaneously engaging the visitor. Several factors such as
consistency, colours, typography, imagery, simplicity, and functionality contribute to
good website design.

When designing a website there are many key factors that will contribute to how it is
perceived. A well-designed website can help build trust and guide visitors to take
action. Creating a great user experience involves making sure your website design is
optimised for usability (form and aesthetics) and ease of use (functionality).

Below are some guidelines that will help you when considering your next web project.

1. WEBSITE PURPOSE
Your website needs to accommodate the needs of the user. Having a simple clear
intention on all pages will help the user interact with what you have to offer. What
is the purpose of your website? Are you imparting practical information like a
‘How to guide’? Is it an entertainment website like sports coverage or are you
selling a product to the user? There are many different purposes that websites may
have, but there are core purposes common to all websites:
1. Describing Expertise
2. Building Your Reputation
3. Generating Leads
4. Sales and After Care

2. SIMPLICITY
Simplicity is the best way to go when considering the user experience and the
usability of your website. Below are ways to achieve simplicity through design.

Colour
Colour has the power to communicate messages and evoke emotional responses.
Finding a colour palette that fits your brand will allow you to influence your
customer’s behaviour towards your brand. Keep the colour selection limited to less
than 5 colours. Complementary colours work very well. Pleasing colour combinations
increase customer engagement and make the user feel good.

Type
Typography has an important role to play on your website. It commands attention and
works as the visual interpretation of the brand voice. Typefaces should be legible and
only use a maximum of 3 different fonts on the website.

Imagery
Imagery is every visual aspect used within communications. This includes still
photography, illustration, video and all forms of graphics. All imagery should be
expressive and capture the spirit of the company and act as the embodiment of their
brand personality. Most of the initial information we consume on websites is visual
and as a first impression, it is important that high-quality images are used to form an
impression of professionalism and credibility in the visitors’ minds.

3. NAVIGATION

Navigation is the wayfinding system used on websites where visitors interact and find
what they are looking for. Website navigation is key to retaining visitors. If the
website navigation is confusing visitors will give up and find what they need
elsewhere. Keeping navigation simple, intuitive and consistent on every page is key.

4. F-SHAPED PATTERN READING


The F-shaped pattern is the most common way visitors scan text on a website. Eye-
tracking studies have found that most of what people see is in the top and left areas of
the screen. The F-shaped layout mimics our natural pattern of reading in the West (left
to right and top to bottom). An effectively designed website will work with a reader's
natural pattern of scanning the page.

5. VISUAL HIERARCHY
Visual hierarchy is the arrangement of elements in order of importance. This is achieved
through size, colour, imagery, contrast, typography, whitespace, texture, and style.
One of the most important functions of visual hierarchy is to establish a focal point;
this shows visitors where the most important information is.

6. CONTENT

An effective website has both great design and great content. Using compelling
language, great content can attract and influence visitors, converting them into
customers.

7. GRID BASED LAYOUT


Grids help to structure your design and keep your content organised. The grid helps to
align elements on the page and keep it clean. A grid-based layout arranges content
into a clean, rigid structure with columns and sections that line up, feel balanced,
and impose order, resulting in an aesthetically pleasing website.

8. LOAD TIME
A slow-loading website will lose visitors. Nearly half of web visitors expect a
site to load in 2 seconds or less, and they will potentially leave a site that hasn't loaded
within 3 seconds. Optimising image sizes will help your site load faster.

9. MOBILE FRIENDLY

More people are using their phones or other devices to browse the web, so it is
important to build your website with a responsive layout that can adjust to
different screen sizes.
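A minimal responsive setup, assuming nothing about any particular site, combines the viewport meta tag with a media query:

```html
<!-- Viewport meta tag: without it, phones render the page at desktop width. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .columns { display: flex; gap: 16px; }   /* side-by-side on wide screens */
  @media (max-width: 600px) {
    .columns { flex-direction: column; }   /* stack vertically on phones */
  }
</style>
```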

UI design principles: key rules and guidelines


While these are often unspoken rules amongst the more experienced designers,
they’re worth mentioning for newbies. When designing any interface, you want to
know which bases need to be covered, no matter what.

If you want to venture into an inspirational type of post, check out our list of
incredibly creative UI design examples.

1. Make it easy to explore and use


This can sound a bit broad and general, but it’s a fundamental principle in UI that
connects to many important concepts. A product that is easy to use is more likely to
offer a high standard of usability, enjoy a short learning curve and be effective in
helping users achieve tasks.

This can manifest itself in many different ways in a screen design. Ease of use tends
to be closely related to high standards of usability, which can be difficult to live up
to, even for the most experienced among us.

A good example is the navigation design, which is the backbone of any product but
represents a challenging aspect of UI. You want the navigation to feel effortless, to
have users driving down the right roads with no need for signs to help them. The more
content the product holds, the tougher it is to create a system that holds it all together
in a way that makes it easy for users to navigate and discover new areas of the product.

Users that encounter any product for the first time have to explore a bit and discover
the primary features, sometimes waiting a bit to advance onto the secondary ones.
This first encounter is crucial, because it sets the tone for the experience and tells
users what to expect. Their first impression is likely to dictate if they stick around or
if they give up and abandon the product right there on the spot.

One of the most difficult things about UI design is that everything depends on
context. The nature of the product will dictate which navigation is most appropriate;
the users will affect the way information is categorized and presented. The right UI pattern will
depend on the function and the people using the product. Unfortunately, there’s never
a one-size-fits-all approach to UI design. Part of the art of UI design is seeing the
context and using that information to create an interface that still lives up to high
standards of usability.

2. Give users control of the interface


People want control over their experience with the product and it’s up to UI designers
to make that happen. This applies to classic things like allowing people to get familiar
with the basics of the product and then giving them power to create shortcuts to the
tasks they do most often. It could also mean giving users the power to customize their
interface, including things like the general color scheme.

There’s a right balance of power that users want. They want to feel in control, to have
freedom to approach tasks in their own way. With that said, they also don’t want too
much control, which can lead to overwhelmed users that quickly grow tired of having
to make so many decisions. That is called the paradox of choice. When faced with too
much freedom, most users stop enjoying the experience and instead resent the
responsibility. Choosing, after all, requires cognitive effort.

This requires a balance that those in the gaming industry are intimately familiar with.
Gamers enjoy choices, but overdoing it can ruin the game experience. Game
UI design is all about giving users just the right amount of power.

Users want the freedom to do what they want as well as freedom from bad
consequences. In UI design, that means giving them the power to do and undo things,
so users don’t ever come to regret what they chose to do with the product.

That’s why UI designers operate within a certain margin of control that they pass on
to users. They narrow down which parts of the product and the experience can be
customized, identifying areas where users can create their own take on the design. A
color change may sound silly to some, but it makes users happy to have a choice in
the interface.

More complex stuff, like changing the general hierarchy of information or
customizing highly technical aspects of the product – those don’t fall on the user to
decide. People are happy to be walked to success, so they don’t need to worry about
the complex or small. They want to focus on the task, on having fun. As UI designers,
it’s our job to help them get there.

A solid example of this can be seen with any dashboard design, where complex
information is broken down and made easy to digest. Even if you can customize the
dashboard itself, the soul of the design will remain in order to get the main job done.

3. Create a layout that works efficiently


The layout is often called the foundation of any screen. This is where UI designers
shine the brightest, some would say. Creating a product with a hot trendy style such
as neumorphism is great, but functionality is the true art here. To create a layout that
works, UI designers will use their visual skills to highlight the important things and
encourage users to take a certain action. Ecommerce websites are well-versed in
making compelling layouts that nudge people to give into temptation and buy those
beautiful sneakers.

But what makes a layout UI work? How do designers know where each component
goes, and how it all fits together?

The answer is a combination of factors that UI designers take into account. First,
there’s the general rule of proximity of elements and visual hierarchy. This is about
making the important things bigger and brighter, letting users know right away that
this is what they should be focusing on. The hierarchy is a way of communicating to
the user where their eye should go, what they should do.

The right hierarchy has the power to make users understand the content immediately,
without using a single word. The proximity between elements plays a similar role,
with components in close proximity being somehow related or closely connected.

Whitespace also plays an important role in the layout design. Most people who are
only beginning to learn about UI design often underestimate the importance of
whitespace or how much of it they’ll need to create a good visual hierarchy. The truly
skilled designers use that empty space to give the user’s eye some relief and let the
component guide their gaze through the screen. This can be taken to an extreme with
the trend of minimalist website design.

4. Offer a consistent interface


UI designers know that maintaining a consistent design is important, for multiple
reasons. When the word “consistent” is thrown around in the world of web design, it
applies to both the visual and the interactions. The product needs to offer the same
icons and elements, no matter its size or how much content it holds. That means that
once the design team has settled on a visual identity, the product can’t stray from it.

The consistency is important because it will significantly help users learn their way
around the product. That first learning curve is unavoidable for a brand new
experience, but UI designers can shorten it. Ultimately, you want users to recognize
the individual components after having seen them once.

Buttons, for example. After using the product just for a little bit, users should
recognize primary and positive buttons from secondary ones. Users are already
making the effort to learn how the product works and what it does – don’t make them
learn what 9 different buttons mean. This means that buttons should not only look the
same, they need to behave the same.

When it comes to the consistency of UI design, you want to be predictable. You want
users to know what that button will do without the need to press it. A good example is
having consistent button states, so your users know exactly how buttons behave
throughout the entire product.
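The idea of consistent, predictable button states can be sketched in CSS; the class names below are illustrative:

```css
/* One visual language for every button state, reused across the whole product. */
.btn         { padding: 8px 16px; border-radius: 4px; border: none; cursor: pointer; }
.btn-primary { background: #0055cc; color: #fff; }
.btn-primary:hover    { background: #0044a3; }  /* same hover darkening everywhere */
.btn-primary:active   { background: #003377; }  /* pressed state */
.btn-primary:disabled { background: #b3c7e6; cursor: not-allowed; }
```

Reusing one set of state rules means a button learned once is understood everywhere in the product.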

5. Use real-world metaphors in your UI


Despite the fact that most of us are now extremely familiar with digital products, it’s
still a good idea to use real-world metaphors. Some designers feel very strongly that
these metaphors improve the general usability of the product, because they are so easy
to understand even at first glance.

There are many examples of these metaphors in elements that users will know and
recognize. The silliest one, perhaps, is the garbage bin icon. It immediately tells us
that anything placed in there will be removed from sight, possibly eliminated forever.
There’s power in that kind of familiarity. It’s the reason why buttons look like real
physical buttons or why toggle switches look like actual switches. Arguably, all icons
are real-life metaphors.

These metaphors can be a handy way to communicate an abstract concept, putting it
into more concrete terms. Without using any words, designers can use their visual
skills to convey ideas and the product becomes much easier to understand.

6. Know your way around design materials


What would UI design be without user personas? It would probably result in broad,
aimless, directionless experiences with an absurdly high failure rate. Not just
user personas, but all design materials are important for both UI and UX designers.

User personas capture the idea of the final user, giving them a face and offering
details of their lives and what they want. Originally created by the marketing industry,
they are just as helpful for UI designers. Although a persona is a fictitious profile of a
person who doesn’t exist, the idea and the group of people it represents are very
much real. It gives the design team clarity on what the users want, what they
experience and their ultimate goals.

On a similar note, mental models are also crucial. Rather than capturing the ideal user
they capture how those users think. It showcases their reasoning, which can be very
helpful in UI design. More often than not, when screens or elements don’t perform
well it’s because they simply don’t respect the user’s mental models – which means
users don’t get it. For them, that just doesn’t make sense.

The same can be said for other materials, such as user flows or user scenarios. All of
these materials add value to the design process, resulting in a tailored product that is
more likely to succeed.

7. Start with black-and-white UI design, then build on it


Most experienced designers will agree that starting off an interface with the visual
details like the color scheme is a bad idea. There’s a good reason why most
wireframes start with nothing more than varying tones of grey – colors and details are
distracting.

Most UI designers will start their planning of the basic bones and layout with UI
sketching on paper. From there, the project evolves into a digital rendering of the
design in black and white. This gives them a chance to focus only on the efficiency of
the space, prioritizing things like the visual hierarchy of key elements.

Slowly and over time, designers will build on this grey base and add more details. It’s
true that some design teams start testing very early, even when wireframes are nothing
but a bunch of boxes. Regardless, this grey design grows as the designer adds colors,
details and actual content.

8. Feedback and context is important


The concept at play for this UI design guideline is that users need feedback for the
experience to be truly good. Unlike people, who send out tiny body language signals,
the device won’t communicate anything unless the UI designer stipulates so. People
need to know not only that their actions were registered, but also a general context
that helps them around the product.

This user feedback and context can come in many forms. One of the most commonly
used is microinteractions, which tell the user that things are clickable or that the
system is working behind the screen. A loading icon that holds a brief interaction is
the perfect example.

You want the feedback to be instantaneous, so there’s no room for confusion. Users
don’t like uncertainty and using feedback can be a way to have much more efficient
communication between user and product. With something as simple as a button
moving slightly up when the cursor hovers above it, UI designers can tell users that
button can be clicked or that the element is responsive and dynamic.
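The hover cue just described can be sketched in a few lines of CSS (the class name is illustrative):

```css
/* A button that lifts slightly on hover, signalling that it is clickable. */
.btn {
  transition: transform 0.15s ease, box-shadow 0.15s ease; /* animate the feedback */
}
.btn:hover {
  transform: translateY(-2px);                 /* move up slightly */
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.15);   /* subtle shadow reinforces the lift */
}
```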

These simple cues are something UI designers have grown to do almost instinctively.
They know that users need this sort of context in order for the product to shine, and so
they look for these opportunities everywhere. These little details matter and make the
entire experience better for users.

A classic example of simple but crucial feedback is the different states of key
components, such as toggle UIs and dropdown UIs, as well as the feedback from the
well-loved card UI design pattern. If you’re interested in specific components, we also
recommend you read our post on the debated choice between radio buttons vs
checkboxes.

9. Fine-tune your wireframing game



Wireframing the ideas and the general product tends to fall to UI designers. As a
general UI design rule, wireframing is unavoidable. It covers the first few steps of
the product, producing the first tangible representation of the digital solution.

Starting off as a bunch of boxes and tones of grey, UI designers will use the design
materials like user personas to create a wireframe that fits the user. This is about
capturing the general functionality of the product, laying the foundation of the bare
bones. Things like the navigation, main pieces of content and the representation of the
primary features – they all play a part in the wireframing process.

As the team begins to test the general usability and performance of the wireframe, a
cycle emerges. The wireframe is tested and the results will dictate what parts need to
be improved or completely changed. One of the best things about wireframes is that
putting them together quickly is possible, bringing the ability to quickly change
course if need be. Feel free to check out our guide to accessibility design for more on
that.

Truly skilled UI designers are all about wireframing. They understand the process and
what information to use, which factors influence the design. They go out of their way
to validate the wireframe at every turn, before a new layer of detail is added. Slowly,
the wireframe will give way to a high-fidelity prototype, where all the final visuals are
represented.

10. Get familiar with user testing and the world of usability
Usability can mean different things to different design teams. Most designers
associate user testing with the performance of the design, in terms of how many
users can complete a task within a given time. To others, testing takes on a bigger
meaning, representing the very point of view of the users, with the data being
the only way to know what users truly want.

Ultimately, user testing is done over an extended period of time, starting in the
wireframing stage and going all the way to the release of the product (sometimes even
further). Designers will invest real time and effort into testing, simply because it pays
off. Any changes that the testing leads to are welcome, because they represent
improvement done for little cost. If these improvements needed to be done much later
on the project, they would have come in the form of delays and absurd costs.

The methods can vary due to how many alternatives there are out there now. From
unmoderated tests that enjoy hundreds of participants to moderated interviews and
observation sessions – there is a right path for every team, no matter the budget and
time constraints.

Conclusion: Successfully studied web design guidelines and general UI design
principles, and gained the ability to suggest web design and UI design
improvements for any website.

Assignment No. 4
Title : Assignment based on knowledge of Document Object Model with JavaScript
and CSS.

Problem Definition: Implement a simple interactive webpage showing a tabbed UI
(implemented not through widgets, but by interacting with and
controlling the Document Object Model with JavaScript and
CSS). This page consists of a centered container with 3 tabs,
one each for showing a text, an image and a YouTube video.
A div containing three buttons is used as a tab bar, and
pressing each button displays the corresponding tab. Only one
tab should be displayed at a time. The button for the current
tab must remain highlighted from the moment the page is
loaded.

Requirements: Knowledge of Document Object Model with JavaScript and CSS.

Learning Objectives: Learn interactive web page design using HTML, CSS and
JavaScript, including control of the Document Object Model.

Outcomes: After completion of this assignment, students will be able to implement a
simple interactive webpage.

Theory Concepts:

Features:

This page consists of a centered container with 3 tabs, one each for showing a text,
an image and a YouTube video. A div containing three buttons is used as a tab bar,
and pressing each button displays the corresponding tab. Only one tab should be
displayed at a time. The button for the current tab must remain highlighted from the
moment the page is loaded.

Centering and size:

The main container should have a minimum width of 300px and should scale with the
window size. It should remain centered both vertically and horizontally. All tabs
should have 10-20 px of padding. Individual tabs should be the same height
regardless of the content.
If you need help centering your elements, please check this guide:

Text tab:

Text should overflow with a vertical scroll as shown in the demo.


(If you want more text in the HTML, in VSCode, typing lorem# and pressing tab
generates a placeholder text with # of words.)

Image tab:

Show the embedded image from an external URL. Image should be both vertically
and horizontally centered. It should maintain the aspect ratio and resize to fill the
container horizontally. Use overflow property to keep the image within the container.

Video tab:

Display a YouTube video using the iframe HTML tag. You can get a pre-written tag
from YouTube’s embedding options. The video should fill the tab vertically and horizontally.

Colors and fonts:

You should choose a Google web font for your page and import it either using a
link tag in your HTML or using @import directly in your CSS.
You are free to choose your own color scheme, button style and fonts. Try to make it
beautiful and get creative!

Overall Hints:

- Start with understanding the above HTML code.

- CSS should control the visibility of the different elements via the 'is-visible' class
and the display property.

- Your JavaScript needs to implement onClick listeners for the buttons, and set the
correct membership in the is-visible class. Loop through the example-
content elements to do this.

- Your CSS should implement an attractive look and feel
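Since the starter HTML referenced above is not reproduced here, the following is a self-contained sketch of the same approach: a tab bar of buttons, an 'is-visible' class controlled by CSS, and JavaScript that moves that class (and an illustrative 'is-active' button highlight) on click.

```html
<div class="tab-bar">
  <button class="tab-btn is-active" data-tab="text">Text</button>
  <button class="tab-btn" data-tab="image">Image</button>
  <button class="tab-btn" data-tab="video">Video</button>
</div>
<div class="example-content is-visible" id="text">Some text...</div>
<div class="example-content" id="image"><img src="example.jpg" alt=""></div>
<div class="example-content" id="video"><!-- YouTube iframe goes here --></div>

<style>
  .example-content { display: none; }             /* tabs hidden by default */
  .example-content.is-visible { display: block; } /* only the current tab shows */
  .tab-btn.is-active { background: #0055cc; color: #fff; } /* highlighted button */
</style>

<script>
  const buttons = document.querySelectorAll('.tab-btn');
  buttons.forEach(function (btn) {
    btn.addEventListener('click', function () {
      // Loop through the example-content elements, showing only the one
      // whose id matches this button's data-tab attribute.
      document.querySelectorAll('.example-content').forEach(function (tab) {
        tab.classList.toggle('is-visible', tab.id === btn.dataset.tab);
      });
      // Move the highlight to the pressed button.
      buttons.forEach(function (b) { b.classList.toggle('is-active', b === btn); });
    });
  });
</script>
```

Because the first button already carries 'is-active' and the first tab 'is-visible' in the markup, the requirement that a tab be highlighted from page load is met without extra script.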

DOM (Document Object Model)

The Document Object Model (DOM) is a programming


interface for HTML(HyperText Markup Language) and XML(Extensible markup
language) documents. It defines the logical structure of documents and the way a
document is accessed and manipulated.
Note: It is called a Logical structure because DOM doesn’t specify any relationship
between objects.
DOM is a way to represent the webpage in a structured hierarchical way so that it will
become easier for programmers and users to glide through the document. With DOM,
we can easily access and manipulate tags, IDs, classes, Attributes, or Elements of
HTML using commands or methods provided by the Document object. Using DOM,
JavaScript gets access to the HTML as well as the CSS of the web page and can also
add behavior to the HTML elements. So, basically, the Document Object Model is an
API that represents and interacts with HTML or XML documents.

Need of DOM
HTML is used to structure web pages and JavaScript is used to add behavior to
them. When an HTML file is loaded into the browser, JavaScript cannot
understand the HTML document directly, so a corresponding document is
created: the DOM. The DOM is a representation of the same HTML
document, but in a different format that uses objects. JavaScript interprets the
DOM easily: it cannot understand a tag (<h1>H</h1>) in the HTML
document, but it can understand the corresponding h1 object in the DOM. JavaScript
can then access each of the objects (h1, p, etc.) by using different functions.

Structure of DOM: DOM can be thought of as a Tree or Forest(more than one tree).
The term structure model is sometimes used to describe the tree-like representation
of a document. Each branch of the tree ends in a node, and each node contains
objects. Event listeners can be added to nodes and triggered on the occurrence of a
given event. One important property of DOM structure models is structural
isomorphism: if any two DOM implementations are used to create a representation
of the same document, they will create the same structure model, with precisely the
same objects and relationships.

Why called an Object Model?


Documents are modeled using objects, and the model includes not only the structure
of a document but also the behavior of a document and the objects of which it is
composed like tag elements with attributes in HTML.

Properties of DOM: Let’s see the properties of the document object that can be
accessed and modified by the document object.

Window Object: The window object is a browser object that is always at the top of
the hierarchy. It is like an API used to set and access all the properties and
methods of the browser. It is created automatically by the browser.

Document object: When an HTML document is loaded into a window, it becomes a


document object. The ‘document’ object has various properties that refer to other
objects which allow access to and modification of the content of the web page. If
there is a need to access any element in an HTML page, we always start with
accessing the ‘document’ object. The document object is a property of the window object.

 Form Object: It is represented by form tags.


 Link Object: It is represented by link tags.
 Anchor Object: It is represented by anchor (<a href>) tags.
 Form Control Elements: A form can have many control elements, such as text
fields, buttons, radio buttons, checkboxes, etc.

Methods of Document Object:


 write(“string”): Writes the given string to the document.
 getElementById(): Returns the element having the given id value.
 getElementsByName(): Returns all the elements having the given name value.
 getElementsByTagName(): Returns all the elements having the given tag
name.
 getElementsByClassName(): Returns all the elements having the given class
name.
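A brief sketch of these document methods in use; the element id, class and content are made up for illustration:

```html
<p id="greeting" class="note">Hello</p>
<script>
  // getElementById returns a single element; the getElementsBy* methods
  // return live collections of matching elements.
  const el = document.getElementById('greeting');
  el.textContent = 'Hello, DOM!';                        // modify content via the object

  const notes = document.getElementsByClassName('note'); // HTMLCollection
  const paras = document.getElementsByTagName('p');      // all <p> elements
  console.log(notes.length, paras.length);               // both collections include #greeting
</script>
```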

Conclusion: Successfully implemented a simple interactive webpage, showing a


tabbed UI which is implemented not through widgets but by interacting with and
controlling the Document Object Model with JavaScript and CSS.
Program:
Output:

Assignment No. 5

Title : Assignment based on knowledge of user interfaces using Javascript, CSS and
HTML

Problem Definition: Develop interactive user interfaces using Javascript, CSS and
HTML, specifically:
a. implementation of form-based data entry, input groups, and
button elements using the Bootstrap library.
b. use of responsive web design (RWD) principles,
c. implementing JavaScript communication between the input
forms and a custom visualization component

Requirements: Knowledge of Javascript, CSS and HTML.

Learning Objectives: Learn to build interactive user interfaces using HTML, CSS
and JavaScript.

Outcomes: After completion of this assignment, students will be able to implement
interactive user interfaces using JavaScript, CSS and HTML.

Theory Concepts:

HTML

At the user interface level, the OutSystems platform provides a rich visual editor that allows web
interfaces to be composed by dragging and dropping. Instead of purely writing
HTML, developers use visual widgets. These widgets are wrapped and are easy to
reuse just by dragging and dropping without everyone needing to understand how
they are built:

 The core visual widgets represent very closely what developers are used to
with HTML: a div, an input, a button, a table, and so forth. All of them have a
direct - and well known - HTML representation. For example, dragging a
“Container” generates a div.
 Custom HTML widgets can be used to include whatever HTML is needed. An
example is a CMS that loads dynamic, database-stored HTML content in a
page.
 Widgets can be composed and wrapped in “web blocks,” which are similar to
user controls, and reusable layouts with “placeholders,” which are similar to
Master Pages with “holes” that will be filled in when instantiated.
 All widgets can be customized in the properties box via “Extended
Properties," which will be directly translated to HTML attributes. This
includes HTML tags that are not supported today in the base HTML
definition. For example, if someone wants to use custom “data-” attributes,
they can just add them in.

 All widgets have properties such as RuntimeId (HTML attribute ID) or Style
(class), which allow them to be used in/with standard JavaScript or CSS.
 All widgets have a well-defined API that is tracked by the platform to ensure
that they are being properly used across all applications that reuse it.

In summary, the visual editor is very similar to a view templating system, such as
.NET ASPX, Java JSP or Ruby ERBs, where users define the HTML and include
dynamic model/presenter bound expressions.

JavaScript

OutSystems provides a very simple to use AJAX mechanism. However, developers


can also use JavaScript extensively to customize how users interact with their
applications, to create client side custom validations and dynamic behaviors, or even
to create custom, very specific, AJAX interactions. For example, each application can
have an application-wide defined JavaScript file or set of files included in resources.
Page-specific JavaScript can also be defined.

OutSystems includes jQuery by default in all applications. But, developers also have
the option to include their own JavaScript frameworks (prototype, jQuery, jQueryUI,
dojo) and use them throughout applications just as they would in any HTML page.

Many JavaScript-based widgets, such as jQuery plugins, have already been packaged
into easy to reuse web blocks by OutSystems Community members and published
to OutSystems Forge. There are examples for kParallax, Drag and Drop lists, Table
freeze cells, Touch Drag and Drop, Sliders, intro.js, or the well known Google Maps.

Even some of the OutSystems built-in widgets are a mix of JavaScript, JSON and
back-end logic. For example, the OutSystems Charting widget is a wrapper over the
well-known Highcharts library. Developers can use the properties exposed by the
OutSystems widget, or use the full JSON API provided by HighCharts to configure
the widget.

This is an example of the jVectorMap JavaScript library, wrapped and reused in the
visual designer to display website access metrics over a world map:

This is an example of a jquery slideshow plugin wrapped and reused in the visual
designer:

CSS

OutSystems UIs are purely CSS3-based. A predefined set of “themes,” a mix of CSS
and layout templates, can be used in applications. However, developers can reuse
existing CSS or create their own. A common example is to reuse bootstrap, which is
tweaked so that its grid system is reused by the OutSystems visual editor to drag and
drop page layouts, instead of having to manually input the CSS columns for every
element.
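As a sketch of the kind of Bootstrap markup being wrapped here, the following uses standard Bootstrap 5 grid and input-group classes; the form field itself is illustrative:

```html
<!-- A Bootstrap input group inside the 12-column grid system. -->
<div class="container">
  <div class="row">
    <div class="col-md-6">
      <div class="input-group mb-3">
        <span class="input-group-text">@</span>
        <input type="text" class="form-control" placeholder="Username">
        <button class="btn btn-primary" type="button">Submit</button>
      </div>
    </div>
  </div>
</div>
```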

Themes are hierarchical, which means that there is a CSS hierarchy in OutSystems.
Developers can define an application-wide CSS in one theme and redefine only parts
of it for a particular section of an application. Themes can also be reused by
applications when there is a standard style guide.

The built-in CSS text editor supports autocomplete, similar to that of Firebug or
Chrome Inspector, and immediately previews the results in the page without having to
recompile/redeploy applications.

Common CSS styling properties including padding, margin, color, border, and
shadow, can also be adjusted from directly within the IDE using the visual styles
editor panel.

Examples of extending the UI



A great example of UI extensibility is Pordata, a major statistics website fully
implemented with OutSystems technology. This website makes a large amount of
statistical information about Europe available. The OutSystems UI extensibility
capabilities were used for the graphical representation of data in charts, dynamic
charts and geographical information visualization.

This example shows information overlaid in maps using a user interface external
component:

This is an example of a typical page in the website that provides the visualization of
information in dynamic and static charts:

Wodify, a SaaS solution for CrossFit gyms, is built with OutSystems and currently
supports more than 200,000 users around the globe. Although most of the
functionality in Wodify is created with OutSystems built-in user interface widgets, it
is a great example of how OutSystems interfaces can be freely styled using CSS to
achieve a consistent look and feel and support several devices and form factors.

Wodify user interfaces for several devices:



This is an example of dashboards for the gyms:

User Interface (UI) defines the way humans interact with information systems.
In layman’s terms, a user interface is a series of pages, screens, buttons, forms
and other visual elements that are used to interact with a device. Every app and
every website has a user interface.
The user interface properties in CSS are used to change an element into one of
several standard user interface elements. Here we will discuss the following
properties:
 resize
 outline-offset

resize Property: The resize property lets the user resize a box. This property does
not apply to inline elements or to block elements where overflow is visible; for it
to work, overflow must be set to “scroll”, “auto”, or “hidden”.
Syntax:
resize: horizontal|vertical|both;
horizontal: This value lets the user resize the width of the element.
Syntax:
resize: horizontal;

To resize: Click and drag the bottom right corner of this div element.

vertical: This value lets the user resize the height of the element.


Syntax:
resize: vertical;
To resize: Click and drag the bottom right corner of this div element.

both: This value lets the user resize both the height and the width of the element.
Syntax:
resize: both;
To resize: Click and drag the bottom right corner of this div element.
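A minimal sketch of the resize property in use, with overflow set as the text requires (the class name is illustrative):

```css
div.resizable {
  resize: both;     /* user can drag the bottom-right corner */
  overflow: auto;   /* required: resize has no effect when overflow is visible */
  border: 1px solid #999;
  width: 300px;
  height: 150px;
}
```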

Supported Browsers: The browsers that support the resize property are listed below:
 Apple Safari 4.0
 Google Chrome 4.0
 Firefox 5.0 (4.0 with the -moz- prefix)
 Opera 15.0
 Internet Explorer: not supported

outline-offset: The outline-offset property in CSS is used to set the amount of space
between an outline and the edge or border of an element. The space between the
element and its outline is transparent.
Syntax:
outline-offset: length;
Note: Length is the width of the space between the element and its outline.
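A minimal sketch of outline-offset in use, leaving a transparent gap between the border and the outline (the class name is illustrative):

```css
div.highlighted {
  border: 2px solid black;
  outline: 2px dashed red;
  outline-offset: 10px;  /* 10px of transparent space between border and outline */
}
```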

Conclusion: Successfully developed interactive user interfaces using JavaScript,
CSS and HTML.
Program:
Output:

Assignment No. 6

Title : Assignment based on knowledge of Blender – A 3D modeling software

Problem Definition: Make a Table Lamp in Blender – A 3D modeling software


Requirements: Knowledge of Blender – A 3D modeling software

Learning Objectives: Learn Blender – A 3D modeling software

Outcomes: After completion of this assignment students will be able to make a Table
Lamp in Blender – A 3D modeling software

Theory Concepts:

What is Blender?


Blender is free, open-source 3D modeling software that allows artists to create 3D
objects and animations. It was originally developed in the 1990s by Ton Roosendaal
at the Dutch animation studio NeoGeo as an in-house alternative to the expensive
graphics workstations of the time, which were used primarily by artists and
designers, and was released as open-source software in 2002.

What are the benefits of modeling in Blender?


The 3D modeling process is often tedious and time-consuming, but with Blender’s
modeling tools you can create complex models in minutes. Blender also runs on
ordinary desktop hardware, making it suitable for both professional and personal
use.

How does modeling in Blender work?


Modeling in Blender starts from primitive shapes such as cubes, cylinders, and
spheres, which are refined by editing their vertices, edges, and faces. Designers,
engineers, and product managers can use it to create concepts that are closer to
the final product.

What are some of the features of good modeling software?


3D modeling tools range from simple and basic to complex and expensive. The best
ones have the following features:

 easy to use
 stable
 reliable
 customizable

Steps to Design Your Own 3D Printed Table Lamp

Step 1: Download Table Lamp Kit

Get started by downloading the table lamp kit for the following 3D modeling
programs:

Blender

Step 2:

We make use of a standard component to attach the shade to the base. This standard
component will be inserted and glued into the lamp fitting (blue). It is important
for you, as the designer, to be aware of this and to make use of this blue part in
your design.

Step 3: Safety Advice

Since we’re dealing with electricity, you have to make sure to include a spherical
zone around the light bulb of Ø 6 cm / Ø 2.4 inch. This zone needs to remain
completely open (hollow); it cannot contain any material. The design kit contains a
Ø 6 cm / Ø 2.4 inch sphere to perform this safety check. Place the sphere inside
your lamp shade and make sure they don’t intersect.
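The clearance test can also be scripted. The sketch below is a Blender-independent
approximation (the function name and vertex representation are assumptions): it
treats the shade as a list of (x, y, z) vertices in metres and checks that none
falls inside the 3 cm radius safety sphere around the bulb centre.

```python
import math

def shade_clears_bulb(vertices, bulb_center, radius=0.03):
    """Return True if no shade vertex lies inside the safety sphere
    (default radius 3 cm, i.e. the required 6 cm diameter zone)."""
    for vertex in vertices:
        if math.dist(vertex, bulb_center) < radius:
            return False  # this vertex intrudes into the safety zone
    return True

# Two vertices 5 cm and 10 cm from the bulb centre: both are safe
print(shade_clears_bulb([(0.05, 0, 0), (0, 0.10, 0)], (0, 0, 0)))  # True
```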

Step 4: Size of the Lamp Shade

For the price of the table lamp, you’re allowed to use a maximum diameter of 13 cm
/ 5.12 inch and the same for the height of your lamp shade. These also happen to be
the ideal dimensions regarding stability, weight, and aesthetics.
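The size limit can be checked the same way with a bounding-box test (again a
hypothetical helper, approximating the round 13 cm envelope by an axis-aligned
box, with z as the vertical axis):

```python
def within_print_limits(vertices, max_diameter=0.13, max_height=0.13):
    """Return True if the shade's bounding box fits within the
    13 cm diameter x 13 cm height limit. Vertices are (x, y, z)
    tuples in metres."""
    xs, ys, zs = zip(*vertices)
    width = max(xs) - min(xs)
    depth = max(ys) - min(ys)
    height = max(zs) - min(zs)
    return width <= max_diameter and depth <= max_diameter and height <= max_height

# A 12 cm cube fits; a 20 cm wide shade does not
print(within_print_limits([(0, 0, 0), (0.12, 0.12, 0.12)]))  # True
print(within_print_limits([(0, 0, 0), (0.20, 0.05, 0.05)]))  # False
```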

Step 5: Upload & Order

Once your design is uploaded on i.materialise, it will appear in the 3D model
workspace of the 3D print lab.

Starting from here, you can proceed with your order. After ordering, our 3D printers
will start building your unique lamp.

Step 6:

Conclusion: Successfully developed a Table Lamp in Blender – A 3D modeling
software.

Program:
Output:
