
UNIT II

Generic UI Development – Multimodal and Multichannel UI – Gesture Based UI – Screen Elements and Layouts – VoiceXML.

Generic UI Development

A good User Interface (UI) focuses on making the user's interactions simple and efficient. Users appreciate a website with an intuitive user interface that leads them towards their task in the most engaging way. UI design focuses on thinking about the user: what they might need to do when they visit the website, and ensuring that the interface has elements that are easy to access and understand. As a UI designer, one needs to understand the goals, skills, preferences and tendencies of the user to make a better interface.

The Generic User Interface (Generic UI, GUI) framework allows you to create UI screens using Java and XML. XML is optional, but it provides a declarative approach to the screen layout and reduces the amount of code required for building the user interface.

The application screens consist of the following parts:

• Descriptors – XML files for declarative definition of the screen layout and data components.
• Controllers – Java classes for handling events generated by the screen and its UI controls, and for programmatic manipulation of the screen components.

The code of application screens interacts with visual component interfaces (VCL interfaces). These interfaces are implemented using the Vaadin framework components.

The main building blocks of the generic UI are:

• Visual Components Library (VCL) – contains a large set of ready-to-use components.


• Data components – provide a unified interface for binding visual components to entities and for working with entities in screen controllers.
• Infrastructure – includes the main application window and other common client mechanisms.

A screen is the main unit of the generic UI. It contains visual components, data containers and non-visual components. A screen can be displayed inside the main application window either in a tab or as a modal dialog.

The main part of the screen is a Java or Groovy class called the controller. The layout of the screen is usually defined in an XML file called the descriptor.

In order to show a screen, the framework creates a new instance of the Window visual component, connects the window with the screen controller and loads the screen layout components as child components of the window. After that, the screen's window is added to the main application window.

A fragment is another UI building block which can be used as part of screens and other fragments. It is very similar to a screen internally, but it has a specific lifecycle and the Fragment visual component instead of Window at the root of the component tree. Fragments also have controllers and XML descriptors.

A screen controller is a Java or Groovy class that contains the screen initialization and event handling logic. Normally, the controller is linked to an XML descriptor which defines the screen layout and data containers, but it can also create all visual and non-visual components programmatically.

All screen controllers implement the FrameOwner marker interface. The name of this interface means that the controller has a reference to a frame, which is a visual component representing the screen when it is shown in the main application window. There are two types of frames:

• Window – a standalone window that can be displayed inside the main application window in a tab or as a modal dialog.
• Fragment – a lightweight component that can be added to windows or other fragments.

Controllers are also divided into two distinct categories according to the frames they use:

• Screen – the base class of window controllers.
• ScreenFragment – the base class of fragment controllers.
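To make the descriptor/controller pairing concrete, below is a minimal sketch of what a screen controller might look like in a CUBA/Jmix-style Generic UI framework. The screen id, descriptor file name and package names are assumptions for illustration; the annotations follow CUBA 7 conventions and may differ in other framework versions (Jmix, for example, uses the io.jmix.ui.screen package).

import com.haulmont.cuba.gui.screen.Screen;
import com.haulmont.cuba.gui.screen.Subscribe;
import com.haulmont.cuba.gui.screen.UiController;
import com.haulmont.cuba.gui.screen.UiDescriptor;

// Hypothetical screen: the id and descriptor file name are illustrative only.
@UiController("demo_HelloScreen")      // unique screen id
@UiDescriptor("hello-screen.xml")      // XML descriptor with the layout and data containers
public class HelloScreen extends Screen {

    // Called by the framework after the descriptor is loaded and the
    // components are created, but before the screen is shown.
    @Subscribe
    public void onInit(InitEvent event) {
        // initialize components, load data, set captions, etc.
    }
}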


Most of these user interactions are touch-based and happen on colorful touch screen displays that are bursting with high-level interactions. Naturally, basic mobile UI design principles differ from those of a traditional desktop UI.

After all, users are, by definition, on the move; control is limited, giving new meaning to the phrase "all thumbs". Actions and information need to be big, bold, clear, and simple.

As mobile adoption continues to rise year by year, it's time to develop a mobile-first strategy, embraced by the likes of Facebook and other social networks, who make sure their iOS and Android apps offer a polished user experience on hand-held devices.

After all, when users have more choice and freedom to find mobile applications that work for them, a poor user experience can easily devalue your brand, hurt your revenue, and disengage your users.

Aside from investing in mobile applications, many ecommerce stores see an increasing share of purchases coming from mobile. If an online store doesn't optimize its checkout experience, usability or mobile app design, it may lose market share or even render itself obsolete.

Multichannel and Multimodal UI

One of the breakthroughs users can benefit from is that automatic speech recognition (ASR) has improved significantly over the last years. ASR now works well for dictation tasks. However, dictation is a highly specific use case which does not require the extraction of semantics from the utterances. Some applications use speech input for form filling, but filling each single slot by speech is often no more efficient than typing.

The question arises: what are the important challenges in using speech as a "mainstream" modality? While ASR has made significant progress within the last years, partly driven by the successful application of deep neural networks, identifying the intended semantics for further processing by the dialog manager is still a rather difficult process. At the same time, ASR capabilities are easy to integrate into new user interfaces by making use of available programming APIs.

On the technical side, one of the next challenges is therefore to realize conversational speech interaction in many applications. This requires simplifying the usage of NLP methods for information extraction, dialog processing and presentation, so that developers can easily deploy speech interfaces.
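As an illustration of how such an ASR API can be used from application code, here is a minimal sketch using Android's built-in RecognizerIntent speech recognition. The request code and prompt text are arbitrary, and a production app would also handle the case where no speech recognizer is installed on the device.

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.widget.Toast;
import java.util.ArrayList;

public class SpeechInputActivity extends Activity {

    private static final int REQ_SPEECH = 100;   // arbitrary request code

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Launch the platform speech recognizer with a free-form language model.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        startActivityForResult(intent, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            // The recognizer returns a ranked list of candidate transcriptions.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                Toast.makeText(this, "You said: " + results.get(0), Toast.LENGTH_LONG).show();
            }
        }
    }
}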


Since the Internet is mobile nowadays and conversational speech is the most convenient interaction mode for complex applications that require more than simple gestures, this will enable even more services at the hands of the users. In that regard it is important to better understand the specific benefits that emerge for individual users. Information about these benefits can be revealed by observing the users' modality choice behavior.

Understanding the factors influencing users' modality choice will enable interface designers to adapt applications to the advantage of the user, and to inform the user about extra possibilities of interaction.

Gesture Based UI

Tapping, swiping, dragging, long-pressing – these are but a few of the gestures that have come to dominate our digital experiences. Touch screen iPhones mainstreamed mobile gestures years ago, and we haven't looked back since.

Gestures affect how we interact with interfaces, including phones, laptops and iPads. But we don't have to look far to find a gestural interface beyond our work and entertainment devices. It's no longer uncommon to use gestures when interacting with car screens or bathroom sinks.

Natural User Interfaces (NUIs) are so natural to users that the interface feels, and sometimes is, invisible, like a touch screen interface. Some NUIs even use gesture control, allowing users to interact with the interface without direct physical contact. BMW recently released a gesture control feature that gives users touchless control over car volume, calls and more.

Gestures are growing more common in user interface design and play increasingly complex roles in our everyday lives.


As technology advances, UX and UI designers and businesses will need to adapt. You don't have to know all the technological intricacies or have in-depth knowledge of computer intelligence. Still, you should have a basic understanding of the capabilities, functions and best design practices for gesture technology.

What Makes a Good Gesture

Gestures are a way of communicating. We've long used hand gestures and head nods to help convey meaning, and now gestures play a role in communicating with user interfaces.

Good gestures provide effective, efficient communication that aligns with our way of thinking. Our thoughts and knowledge influence how we speak, and they influence our use of gestures, especially in UI design. Consider how much easier it is for younger generations who grow up around modern technology to pick up on gestures – or how the act of swiping mimics pushing or wiping something away. It's why understanding your users is essential, even in gesture design.

Gestures cross the barrier between the physical and digital realms, allowing us to interact with digital media with our bodies. In some ways this makes using digital applications more fun, but that isn't enough to make a gesture a good one.

A good motion gesture improves usability by making applications easier to use in all contexts. Well-designed gestures have a shorter learning curve because they feel natural and are easy to pick up on. To quote Bill Gates:

"Until now, we have always had to adapt to the limits of technology and conform the way we work with computers to a set of arbitrary conventions and procedures. With NUI, computing devices will adapt to our needs and preferences for the first time and humans will begin to use technology in whatever way is most comfortable and natural for us."

Benefits of Gesture Technology

The wide use of gestural interfaces is due to the many benefits that come with them. Three of the most significant benefits of gestures are cleaner interfaces, ease of use and improved task completion.


1. Cleaner Interfaces

Humans consume more content than ever before, businesses use more data and technology continues to provide more services. With this increase in content, it's easy for interfaces and displays to appear cluttered. Designers can use gestures to reduce the number of visual elements, like buttons, that take up space.

2. Ease of Use

As discussed above, interactions become more natural with a gesture-based interface. The ease of simple hand gestures allows us to use technology with minimal effort at maximum speed.

3. Improved Task Completion

Task completion rates and conversion rates increase when there's less a user has to do to complete a task. You're more likely to finish a task when it takes less effort. A gesture-based user interface capitalizes on this by making tasks simple and quick. Gestures can even reduce the number of steps it takes to complete a task.

Types of Gestures in UI Design

Design for touch has led to the development of many types of gestures, the most common of which are tapping and swiping. There are three categories of gesture:

1. Navigational gestures (to navigate)
2. Action gestures (to take action)
3. Transform gestures (to manipulate content)

The following are some of the most common gestures across interfaces that all (or almost all) users are familiar with – even if not consciously. We mention screens, but you can substitute the screen for a touchpad or any other gesture interface.

Tap

A tap gesture is when you tap on the screen with one finger to open or select something, like an app or page. Here's a tip: design clickable interface elements so that the entire box or row is clickable – not just the text. Giving users more space increases usability.

Double-Tap

Double-tapping is when you tap the screen twice in close succession. Many applications use this gesture to zoom in, but on Instagram, users can double-tap a photo to like it.


Swipe

Swiping involves moving your finger across the screen in one direction, touching down on one side and lifting your finger on the other. Swipe gestures are often used for scrolling or switching between pages. Tinder uses swiping right to match with a profile and swiping left to pass over one.

Multiple-Finger Swipe

You can also conduct a swipe gesture with two or three fingers. This is a common feature on laptop touchpads, which use two- and three-finger swipes for different actions.

Drag

Dragging uses the same general motion as a swipe, only you move your finger more slowly and don't lift it until you've pulled the object to where you want it to be. You use dragging to move an item to a new location, like when re-organizing your phone apps.

Fling

Like swiping, a fling gesture is when you move your finger across the screen at high speed. Unlike a drag, your finger doesn't remain in contact with an element. Flings are often used to remove something from view.

Long Press

A long press is when you tap the screen but hold your finger down for longer than usual. Long presses open up menu options, like when you hold text to copy it or hold down an app to delete it.

Pinch

One of many two-finger gestures, a pinch is when you hold two fingers apart on the screen and then drag them towards each other in a pinching motion. Pinch gestures are often used to zoom back out after zooming in. Sometimes they present a view of all your open screens for navigation purposes.

Pinch-Open or Spread

A pinch-open or spread gesture is the opposite of a pinch. You hold your two fingers down close together and then spread them apart. Spreading, like double-tapping, is generally used to zoom in.
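On Android, most of the single-finger gestures above do not have to be detected by hand; the platform's GestureDetector reports them through callbacks. Below is a minimal sketch of a custom view that reacts to taps, double-taps, long presses and flings; the view class name and log tag are illustrative only.

import android.content.Context;
import android.util.Log;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Hypothetical view used only to demonstrate gesture callbacks.
public class GestureAwareView extends View {

    private static final String TAG = "GestureDemo";
    private final GestureDetector detector;

    public GestureAwareView(Context context) {
        super(context);
        detector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDown(MotionEvent e) {
                return true;   // must return true so the rest of the gesture is delivered
            }

            @Override
            public boolean onSingleTapConfirmed(MotionEvent e) {
                Log.d(TAG, "tap");            // open or select something
                return true;
            }

            @Override
            public boolean onDoubleTap(MotionEvent e) {
                Log.d(TAG, "double-tap");     // e.g. zoom in or "like"
                return true;
            }

            @Override
            public void onLongPress(MotionEvent e) {
                Log.d(TAG, "long press");     // e.g. show a context menu
            }

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                Log.d(TAG, "fling");          // e.g. dismiss an item from view
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Feed every touch event to the detector so it can classify the gesture.
        return detector.onTouchEvent(event);
    }
}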


Rotation

To do a rotation, press on the screen with two fingers and rotate them in a circular motion. The best example of rotation is when you turn the map on Google Maps to see what's around you.
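Two-finger gestures such as pinch and spread can be handled in a similar way with Android's ScaleGestureDetector, which reports a scale factor greater than 1 for a spread and less than 1 for a pinch. The sketch below only logs the factor; a real view would apply it to its zoom level.

import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;
import android.view.View;

// Hypothetical view that listens for pinch/spread gestures.
public class PinchZoomView extends View {

    private final ScaleGestureDetector scaleDetector;

    public PinchZoomView(Context context) {
        super(context);
        scaleDetector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                    @Override
                    public boolean onScale(ScaleGestureDetector detector) {
                        // > 1.0 means the fingers moved apart (spread / zoom in),
                        // < 1.0 means they moved together (pinch / zoom out).
                        Log.d("PinchDemo", "scale factor = " + detector.getScaleFactor());
                        return true;
                    }
                });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        scaleDetector.onTouchEvent(event);
        return true;   // keep receiving the rest of the gesture
    }
}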

Screen Elements and Layouts

The basic building block for the user interface is a View object, which is created from the View class, occupies a rectangular area on the screen and is responsible for drawing and event handling.

View is the base class for widgets, which are used to create interactive UI components like buttons, text fields, etc.

The ViewGroup is a subclass of View and provides an invisible container that holds other Views or other ViewGroups and defines their layout properties.

At the third level we have the different layouts, which are subclasses of the ViewGroup class. A typical layout defines the visual structure for an Android user interface and can be created either at run time using View/ViewGroup objects or by declaring the layout in a simple XML file, main_layout.xml, located in the res/layout folder of your project.

This tutorial is more about creating your GUI based on layouts defined in an XML file. A layout may contain any type of widgets such as buttons, labels, textboxes, and so on.

Following is a simple example of an XML layout file using LinearLayout:


<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <TextView android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="This is a TextView" />

    <Button android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="This is a Button" />

    <!-- More GUI components go here -->

</LinearLayout>

Android Layout Types

There are a number of layouts provided by Android which you will use in almost all Android applications to provide different views, look and feel.

LinearLayout

LinearLayout is a view group that aligns all children in a single direction, vertically or horizontally.

RelativeLayout

RelativeLayout is a view group that displays child views in relative positions. Android RelativeLayout enables you to specify how child views are positioned relative to each other. The position of each view can be specified as relative to sibling elements or relative to the parent.
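As an illustration of the relative positioning just described, here is a minimal RelativeLayout sketch; the ids and label texts are made up for the example.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <!-- Positioned relative to the parent (aligned to its top edge). -->
    <TextView android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:text="Title" />

    <!-- Positioned relative to a sibling: directly below the title. -->
    <EditText android:id="@+id/name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/title"
        android:hint="Enter name" />

    <!-- Below the text field and aligned to the parent's right edge. -->
    <Button android:id="@+id/ok"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/name"
        android:layout_alignParentRight="true"
        android:text="OK" />

</RelativeLayout>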


Absolute Layout

An AbsoluteLayout lets you specify exact locations (x/y coordinates) of its children. Absolute layouts are less flexible and harder to maintain than other types of layouts that work without absolute positioning.

TableLayout

Android TableLayout arranges groups of views into rows and columns. You use the <TableRow> element to build a row in the table. Each row has zero or more cells, and each cell can hold one View object.

TableLayout containers do not display border lines for their rows, columns, or cells.
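A minimal TableLayout sketch follows, showing two rows of two cells each; the ids and text values are illustrative only.

<?xml version="1.0" encoding="utf-8"?>
<TableLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:stretchColumns="1" >

    <!-- Each TableRow becomes one row; each child view becomes one cell. -->
    <TableRow>
        <TextView android:text="Name:" />
        <EditText android:hint="Enter name" />
    </TableRow>

    <TableRow>
        <TextView android:text="Email:" />
        <EditText android:hint="Enter email" />
    </TableRow>

</TableLayout>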

Frame Layout

FrameLayout is designed to block out an area on the screen to display a single item. Generally, FrameLayout should be used to hold a single child view, because it can be difficult to organize child views in a way that is scalable to different screen sizes without the children overlapping each other.

You can, however, add multiple children to a FrameLayout and control their position within the FrameLayout by assigning a gravity to each child, using the android:layout_gravity attribute.
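Below is a minimal sketch of the multi-child case just described, where android:layout_gravity positions each child inside the FrameLayout; the image resource name is a placeholder.

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <!-- Fills the frame and acts as the background item. -->
    <ImageView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:src="@drawable/photo"
        android:scaleType="centerCrop" />

    <!-- Overlaid caption, pushed to the bottom of the frame by layout_gravity. -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|center_horizontal"
        android:text="Caption over the image" />

</FrameLayout>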


ListView

Android ListView is a view which groups several items and displays them in a vertically scrollable list. The list items are automatically inserted into the list using an Adapter that pulls content from a source such as an array or a database.

An adapter acts as a bridge between the UI component and the data source that fills data into the UI component. The adapter holds the data and sends it to the adapter view; the view takes the data from the adapter view and shows it on different views such as a spinner, list view or grid view.

The ListView and GridView are subclasses of AdapterView and they can be populated by binding them to an Adapter, which retrieves data from an external source and creates a View that represents each data entry.

Android provides several subclasses of Adapter that are useful for retrieving different kinds of data and building views for an AdapterView (i.e. ListView or GridView). The common adapters are ArrayAdapter, BaseAdapter, CursorAdapter, SimpleCursorAdapter, SpinnerAdapter and WrapperListAdapter.

GridView

Android GridView shows items in a two-dimensional scrolling grid (rows and columns). The grid items are not necessarily predetermined, but they are automatically inserted into the layout using a ListAdapter.
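To tie the ListView and Adapter pieces together, here is a minimal sketch of an Activity that binds an ArrayAdapter of strings to a ListView; the layout and id names (activity_main, @+id/list) are assumptions for the example.

import android.app.Activity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import android.widget.Toast;

public class ListDemoActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);   // assumed layout containing the ListView

        String[] items = {"Alpha", "Beta", "Gamma", "Delta"};

        // The adapter pulls content from the array and builds a View per entry.
        ArrayAdapter<String> adapter = new ArrayAdapter<>(
                this, android.R.layout.simple_list_item_1, items);

        ListView listView = (ListView) findViewById(R.id.list);   // assumed id
        listView.setAdapter(adapter);

        // React to taps on individual list items.
        listView.setOnItemClickListener((parent, view, position, id) ->
                Toast.makeText(this, "Clicked: " + items[position], Toast.LENGTH_SHORT).show());
    }
}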


Screen Elements

Input controls are the interactive components in your app's user interface. Android provides a wide variety of controls you can use in your UI, such as buttons, text fields, seek bars, check boxes, zoom buttons, toggle buttons, and many more.
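A short sketch of some of these input controls declared in a layout follows; the ids and label texts are made up for the example.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical" >

    <EditText android:id="@+id/username"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Username" />

    <CheckBox android:id="@+id/remember"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Remember me" />

    <ToggleButton android:id="@+id/notifications"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textOn="Notifications on"
        android:textOff="Notifications off" />

    <SeekBar android:id="@+id/volume"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:max="100" />

    <Button android:id="@+id/submit"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Submit" />

</LinearLayout>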

VoiceXML

What is VoiceXML?

The Voice eXtensible Markup Language (VoiceXML) is an XML-based markup language for creating distributed voice applications that users can access from any telephone.

VoiceXML is an emerging industry standard defined by the VoiceXML Forum, of which IBM is a founding member. It has been accepted for submission by the World Wide Web Consortium (W3C) as a standard for voice markup on the Web.

The VoiceXML language lets you use a familiar markup style and Web server-side logic to deliver voice content to the Internet. The VoiceXML applications you create can interact with your existing back-end business data and logic.

Users interact with these Web-based voice applications by speaking or by pressing telephone keys rather than through a graphical user interface.

VoiceXML supports dialogs that feature:

• Spoken input
• Telephone keypad input
• Recording of spoken input
• Synthesized speech output ("text-to-speech")
• Recorded audio output
• Telephony features such as call transfer and disconnect
• Dialog flow control
• Scoping of input
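To give a feel for the markup, below is a minimal sketch of a VoiceXML 2.0 document with a single form that speaks a prompt, collects a yes/no answer using the built-in boolean field type, and reads the result back; the application itself is hypothetical.

<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">

  <form id="confirmOrder">

    <!-- Collect spoken (or keypad) input into the "answer" variable. -->
    <field name="answer" type="boolean">
      <prompt>Do you want to confirm your order? Please say yes or no.</prompt>

      <!-- Executed once the field has been filled by the caller. -->
      <filled>
        <prompt>You said <value expr="answer"/>. Goodbye.</prompt>
      </filled>

      <!-- Re-prompt if the input was not understood. -->
      <nomatch>
        <prompt>Sorry, I did not understand.</prompt>
        <reprompt/>
      </nomatch>
    </field>

    <!-- End the call after the form completes. -->
    <block>
      <exit/>
    </block>

  </form>

</vxml>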
