
VIRTUAL REALITY
Real-time interactive graphics with three-dimensional models, combined with display technology that gives the user immersion in the model world and direct manipulation.
TYPES OF VR SYSTEM

VR systems can be classified into 3 major categories based on one of the most important features of VR, the degree of immersion, together with the type of interfaces or components utilized in the system. These are:
• Non-immersive
• Immersive
• Semi-immersive
NON-IMMERSIVE VR SYSTEM
 Also called a Desktop VR system, Fish Tank, or Window on World system.
 The least immersive and least expensive of the VR systems, as it requires the least sophisticated components.
 It allows users to interact with a 3D environment through a stereo display monitor and glasses.
IMMERSIVE VR SYSTEM
 Most expensive and gives the highest level of immersion
 Its components include an HMD, tracking devices, data gloves, and others, which surround the user with computer-generated 3D animation that gives the user the feeling of being part of the virtual environment.
SEMI-IMMERSIVE VR SYSTEM
 Provides a high level of immersion, while keeping the simplicity of the desktop VR or utilizing some physical model.
 Examples of such systems include the CAVE (Cave Automatic Virtual Environment); one application is the driving simulator.
DISTRIBUTED VR SYSTEM
 A newer category of VR system, which exists as a result of the rapid development of the internet.
 Its goal is to remove the problem of distance, allowing people from many different locations to participate and interact in the same virtual world through the help of the internet and other networks.
KEY COMPONENTS
SENSORS
• Magnetometer:
 Measures magnetic fields.
 Essentially acts as a compass; by detecting magnetic North, it can always tell which direction it is facing.
• Accelerometer:
 Used to let your device know which way it is facing.
 Measures acceleration across an axis or multiple axes.
 Used to detect position, velocity, and vibration, and to determine orientation.
• Gyroscope:
 Used for measuring and maintaining orientation and angular velocity.
 Senses changes in twist and angle.

LENSES
 The job of the lenses in our eyes is to alter the incoming light so that it gets focused on the receptors at the back of our eyes.
 If you look at something really close, your lenses have to bend a lot to give you a sharp image.
 In virtual reality, head-mounted displays (VR HMDs) are 3 to 7 cm in front of our eyes. That is why we need lenses in VR HMDs that bend the light and make it easier for our eyes to see.

(Figure: Fresnel lenses, used in many VR headsets such as Google Cardboard)
DISPLAY SCREEN
• PC/Console/Smartphones:
Computers are used to process inputs and outputs sequentially. Significant computing power is required to drive content creation and production, making PCs/consoles/smartphones an important part of VR systems.
• Head Mounted Display (HMD):
 It is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one (monocular HMD) or each eye (binocular HMD).
 An HMD has many uses, including in gaming, aviation, engineering, and medicine.
 A typical HMD has one or two small displays, with lenses and semi-transparent mirrors embedded in eyeglasses.
 HMDs have their own screen.
Input Devices:
 Provide users the sense of immersion and determine the way a user communicates with the computer.
 Help users to navigate and interact within a VR environment, making it as intuitive and natural as possible.
 Examples: joysticks, force balls/tracking balls, controller wands, data gloves, trackpads, on-device control buttons, motion trackers, bodysuits, treadmills, and motion platforms.
VR Engine:
 Responsible for calculating and generating graphical models, object rendering, lighting, mapping,
texturing, simulation and display in real-time.
 Handles the interaction with users and serves as an interface with the I/O devices.
Output Devices:
 Used for presenting the VR content or environment to the users
 They receive feedback from the VR engine and pass it on to the users to stimulate the corresponding senses.
SIMULATION & RENDERING
• Simulation:
 Use of a computer for the imitation of a real-world process or system.
 Handles interactions, object behaviors, simulations of physical laws and determines the world status
 A discrete process that is iterated once for each frame
 A simulation requires a model, or a mathematical description of the real system. This is in the form of
computer programs, which encompass the key characteristics or behaviors of the selected system.
 Here, the model is basically a representation of the system and the simulation process is known to
depict the operation of the system in time.

• Rendering:
 Automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model.
 Particular view of a 3D model that has been converted into a realistic image.
 Mostly used in architectural design, video games, animated movies, simulators, TV special effects, and design visualization.

VR COMPONENT SYSTEM ARCHITECTURE

FACTORS IN VIRTUAL REALITY
Field of View (FOV)
 It is defined as the total angular size of the image visible to both eyes.
 FOV describes the angle through which a device can pick up electromagnetic radiation.
 FOV allows for coverage of an area rather than a single focused point.
 A large FOV is essential to getting an immersive, life-like experience.
 A wider FOV also provides better sensor coverage or accessibility for many other optical devices.

FACTORS IN VR SYSTEMS
Frame Rate
 The number of frames or images that are projected or displayed per second.
 Greatly impacts the style and viewing experience of a video.
 Different frame rates yield different viewing experiences, and choosing a frame rate often means choosing between things such as how realistic you want your video to look.
 Frame rates are measured in frames per second (fps).
 To give the impression of a dynamic picture, the system updates the display very frequently with a new image: real time is >25 frames/s, so the environment appears smooth.

FACTORS IN VR SYSTEMS
Latency:
 Latency is the amount of time a message takes to traverse a system.
 Latency, or lag, is the delay induced by the various components of a VR system between the user's inputs and the corresponding response from the system in the form of a change in the display.
 As latency increases, a user's senses become increasingly confused as their actions become more and more delayed.
 Chronic cases can result in simulator sickness, hence latency must be kept to a minimum.
 Latency should be kept below 50 ms.
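To make these budgets concrete, the sketch below adds up per-stage delays and checks them against the <50 ms latency and >25 fps targets. It is illustrative only; the stage names and timings are hypothetical assumptions, not values from the slides.

```python
# Hypothetical motion-to-photon budget check; stage timings are made up.
PIPELINE_MS = {
    "tracking": 2.0,     # sensor sampling and pose estimation
    "simulation": 8.0,   # world update for one frame
    "rendering": 11.0,   # GPU draw time
    "display": 16.7,     # scan-out at 60 Hz
}

total_latency_ms = sum(PIPELINE_MS.values())
frame_time_ms = PIPELINE_MS["simulation"] + PIPELINE_MS["rendering"]
fps = 1000.0 / frame_time_ms

print(f"motion-to-photon latency: {total_latency_ms:.1f} ms (target < 50 ms)")
print(f"frame rate: {fps:.1f} fps (target > 25 fps)")
```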

3D POSITION TRACKERS
IMU (Inertial Measurement Unit)
 An IMU is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the magnetic field surrounding the body, using a combination of accelerometers and gyroscopes, and sometimes also magnetometers.
 Trackers of this kind report motion in 6 degrees of freedom (6-DOF): three of position and three of orientation.
NAVIGATION
 Navigation in a VR system is the user's ability to move around and explore the virtual environment.
 Navigation in VR depends on what functionality you want to give to the users.

Navigation Techniques (a minimal steering sketch follows this list)
 Steering: direction and velocity
   hand-directed
   gaze-directed
   physical devices (steering wheel, flight sticks)
 Target-based
   point at object, list of coordinates
 Route planning
   place markers in world
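As referenced above, here is a minimal gaze-directed steering sketch; the function and variable names are illustrative assumptions, not from the slides. The viewpoint simply advances along the gaze direction at a chosen velocity each frame.

```python
import numpy as np

def gaze_directed_steering(position, gaze_direction, speed, dt):
    """Advance the viewpoint along the (normalized) gaze direction."""
    direction = gaze_direction / np.linalg.norm(gaze_direction)
    return position + speed * dt * direction

# One 60 Hz frame of travel at 2 m/s while looking along +x:
pos = gaze_directed_steering(np.zeros(3), np.array([1.0, 0.0, 0.0]), 2.0, 1 / 60)
print(pos)  # [0.0333... 0. 0.]
```

Hand-directed steering would be the same computation with the hand's pointing vector substituted for the gaze direction.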
MANIPULATION
 Direct manipulation is the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way.
 Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator to intuitively explore the data environment.
 An investigator can, for example, move a "data probe" about in space, watching the results and getting a sense of how the data varies within its spatial volume.

MANIPULATION INTERFACE
When the data probe is taken into account, the user is allowed to move the data probe anywhere in three-dimensional space, and in response to that movement several operations must occur (a per-frame sketch follows this list):
• Collision Detection: the system must identify that the user has "picked up" or "moved" the data probe.
• Data Access: for a given spatial position of the data probe, the system must locate the data (vector data in our example) for that location and access that data, as well as all data involved in the visualization computation.
• Visualization Computation: the geometry of the visualization (the streamline in our example) must be computed.
• Graphical Rendering: the entire graphical environment must be re-rendered from the viewpoint of the user's current head position.
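A hypothetical per-frame loop tying the four operations together might look like the sketch below; every name here (probe, data_field, renderer, compute_streamline) is an illustrative stand-in, not a real API.

```python
# Illustrative per-frame loop for the data-probe interface described above.
# All objects and methods are hypothetical stand-ins, not a real API.
def frame_update(probe, user, data_field, renderer):
    # Collision detection: has the user "picked up" or "moved" the probe?
    if probe.intersects(user.hand_pose):
        probe.position = user.hand_pose.position

    # Data access: locate and fetch the (vector) data at the probe's position.
    local_vectors = data_field.sample(probe.position)

    # Visualization computation: e.g., integrate a streamline from the probe.
    geometry = compute_streamline(data_field, probe.position)

    # Graphical rendering: redraw everything from the user's head position.
    renderer.render(geometry, local_vectors, viewpoint=user.head_pose)
```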


GRAPHICS DISPLAY INTERFACES
 A graphics display is a computer interface designed to present synthetic world images to users interacting with the virtual environment.

Personal Graphics Display
 A graphics display that outputs a virtual scene destined to be viewed by a single user is called a personal graphics display.
 The image produced may be:
- Monocular (for a single eye).
- Binocular (displayed to both eyes).


Head Mounted Display
 An HMD can focus at short distances.
 It is a device worn on the head, or integrated as part of a helmet, with a small display optic in front of the eye.
 It can be either monocular or binocular.
 HMDs are designed to ensure that no matter in what direction a user might look, the monitor stays in front of the eyes.
 The HMD screen can be either LCD or CRT.
◦ LCD HMDs: used in consumer-grade devices.
◦ CRT HMDs: used in more expensive, professional-grade devices for VR interaction.

(Image: Google Glass)

(Figure: Virtual Reality vs. Augmented Reality)
HAND SUPPORTED DISPLAYS
 Here, the user holds the device in one or both hands in order to periodically view a synthetic scene.
 This allows the user to go in and out of the simulation environment as demanded by the application.
 HSDs have an additional feature, namely push buttons, that can be used to interact with the virtual scene.


Floor Supported Displays
 These are alternatives to HMDs and HSDs in which an articulated mechanical arm is available to offload the weight of the graphics display from the user.
 They do not require the use of special glasses and can be viewed with the unaided eye.
 There are 2 types: passive and active.
Desk Supported Displays
 These overcome the problems faced by users due to the excessive display weights of HMDs and HSDs.
 Floor-supported displays also suffer from oscillation due to excessive weight; desk-supported displays overcome these drawbacks.
 They are fixed and designed for viewing with the user in a sitting position.
 Desk-supported displays can be viewed with unaided eyes.
Large Volume Displays
 Large-volume displays are used in VR environments that allow more than one user located in close proximity.
 These allow the users a larger work envelope, thus improving the user's freedom of motion and ability of natural interaction compared to personal displays.
(Figure: categorization of large volume displays based on the type and size of display being used)
Monitor-based Large Volume Display
 This display relies on the use of active glasses coupled with a stereo-ready monitor.
 The user of the system looks at the monitor through a set of shutter glasses.
 Stereo-ready monitors are capable of refreshing the screen at double the normal scan rate.
 The shutter glasses and the monitor are synchronized with each other.
 The active glasses are connected to an IR emitter located on top of the CRT display (the connection is wireless).
 The IR controller signals the liquid crystal shutters to close and occlude the eyes alternately.
 The brain registers this rapid sequence of right- and left-eye images and fuses them to give the feeling of 3D.


PROJECTOR-BASED DISPLAYS
 Projector-based displays have the advantage of allowing a group of closely located users to participate in a VR simulation, contrary to personal graphics displays.

(Image: projector-based display, the Cicret Bracelet)
Cave Automatic Virtual Environment (CAVE)
 A CAVE is a projection-based VR display that uses tracked stereo glasses to view the environment.
 A CAVE is basically a small room or cubicle where at least 3 walls, and sometimes the floor and ceiling, act as giant monitors.
 The display gives the users a wide field of view, something that most HMDs cannot do.
 Users can also move around in the CAVE system.
 Wearing tracked stereo glasses is a must.
 Tracking devices attached to the glasses tell the computer how to adjust the projected images as we walk around the environment.
 CAVE systems are used in the real world as driving simulators and as training simulators for military forces.
 The sense of touch is also incorporated in CAVEs by using sensory gloves and other haptic devices. Some haptic devices also allow users to touch, grasp, or move a virtual object.
 Sound interacts with the brain three-dimensionally, enhancing the realism of the virtual environment experience.


VIRTUAL REALITY TECHNOLOGIES

 UNREAL ENGINE
 Unreal is a suite of tools for developers that can be used to create games and virtual reality environments. This is VIATechnik's preferred tool for highly polished virtual reality experiences.
UNITY
 Translating CAD or BIM models into virtual reality experiences
used to take considerable time and programming know how.
With the advent of the Unity gaming engine, bringing
Revit/3D models into a virtual reality space becomes much
easier. Now, any AEC professional can take their Revit model,
bring it into Unity, and create a VR experience.
FUZOR
 One thing that Fuzor does for AEC industry professionals is instantly transform Revit or SketchUp models into virtual reality experiences. This is a great tool for construction because it greatly speeds up the process of getting designs into VR, allowing users to iterate and improve on their designs.
TWIN MOTION
 Twinmotion is a powerful and simple visualization engine that
takes in various AEC models. Developed for architectural,
construction, urban planning and landscaping professionals,
regardless of the size and complexity of their project, their
equipment, their computer skills or their modeler,
Twinmotion is a simple to use tool powered by Unreal Engine
that generates amazing graphics in a short amount of time.
OCULUS RIFT
 Oculus Rift’s headset is traditionally seen as a tool for
gamers. However, in the AEC space, it makes working in
virtual environments so much easier. Through the 3D
experience combined with motion tracking capabilities, it
becomes a lot easier to move around 3D models and look
around corners than using a mouse and keyboard.
TP CAST
 TPCast is an amazing tool that detethers the VR experience.
Using an HD transmitter, one can experience a wireless VR
experience through the HTC Vive. Imagine being able to run
around a large VR environment without the risk of tripping
over headset wires!
MAGIC LEAP
 Magic Leap is a head-mounted spatial computing device that uses augmented reality to overlay interactive 3D content onto the user's view of the real world.
HOLOLENS
 Microsoft’s Hololens is still in its development stage, but its
conceptual release has already introduced various ways in
which this technology can be applied to different industries.
The Hololens utilizes augmented reality to create three
dimensional objects within a real space (versus virtual reality,
which focuses on a full virtual experience) through the use of
light to create holographic images.
KINECT
 The popular Kinect is a device that is part of the Xbox 360
and Xbox One Gaming consoles, but developers have pushed
this technology onto other applications aside from gaming.
The Kinect uses a RGB camera, depth sensor, and microphone
to capture motion and sound within its depth of view. The
advantage of the Kinect technology is its ability to not only
recognize people and their gestures, but multiple people at
the same time. The only limitation is how many people you
can fit into its field of view without any obstructions.
LEAP MOTION
 The Leap Motion Controller is a motion-sensing device that
allows users to see their hands in virtual reality and
augmented reality. What’s more, the controller can be easily
plugged into the USB ports of any Mac or PC. The technology
converts hand and finger movements into 3D input with sub-
millimeter accuracy, virtually no latency, and without gloves or
other handheld accessories.
MYO
 Myo is a wearable armband device that allows the user to use
gestures to wirelessly control technology. Combining a set of
EMG (electromyographic) sensors able to sense the electrical
activity in the wearer’s forearm muscles, a gyroscope, an
accelerometer, and a magnetometer, this device is able to
read muscle activity, allowing the user to control software
using gestures and motion. For instance, Myo allows you to
easily navigate through slideshows by simply using gesture
controls such as tapping fingers together, opening and
closing the fist, or waving the hand. T
Tracking
 Three categories of tracking may appear in VR systems, based on what is being tracked:
 The user's sense organs: the sense organs, such as eyes and ears, have DOFs that are controlled by the body.


 If a display is attached to a sense organ, and it should be perceived in VR as being attached to the surrounding world, then the position and orientation of the organ needs to be tracked.
 The inverse of the tracked transformation is applied to the stimulus to correctly "undo" these DOFs.
 Most of the focus is on head tracking, which is sufficient for the visual and aural components of VR; however, the visual system may further require eye tracking if the rendering and display technology requires compensating for eye movements.
 The user’s other body parts: If the user would like to see a compelling
representation of his body in the virtual world, then its motion should
be tracked so that it can be reproduced in the matched zone.
 Facial expressions or hand gestures are needed for interaction.
Although perfect matching is ideal for tracking sense organs, it is not
required for tracking other body parts.
 Small movements in the real world could convert into larger virtual
world motions so that the user exerts less energy.
 In the limiting case, the user could simply press a button to change
the body configuration. For example, she might grasp an object in her
virtual hand by a single click.
 The rest of the environment:
 In the real world that surrounds the user, physical objects may be
tracked. For objects that exist in the physical world but not the virtual
world, the system might alert the user to their presence for safety
reasons.
 Imagine that the user is about to hit a wall, or trip over a toddler. In
some VR applications, the tracked physical objects may be matched in
VR so that the user receives touch feedback while interacting with
them.
 In other applications, such as telepresence, a large part of the physical
world could be “brought into” the virtual world through live capture.
Tracking 2D Orientation
 The main application is determining the viewpoint orientation, R_eye, while the user is wearing a VR headset. Another application is estimating the orientation of a hand-held controller.
 Suppose we would like to make a laser pointer that works in the virtual world, based on a direction indicated by the user. The location of a bright red dot in the scene would be determined by the estimated orientation of the controller. More generally, the orientation of any human body part or moving object in the physical world can be determined if it has an attached IMU.
 To estimate orientation, we first consider the 2D case by closely following the merry-go-round model.
 We mount a gyroscope on a spinning merry-go-round. Its job is to measure the angular velocity as the merry-go-round spins. It will be convenient throughout to distinguish a true parameter value from an estimate.
 Calibration: If a better sensor is available, then the two can be closely paired so that the outputs of the worse sensor are transformed to behave as closely to the better sensor as possible.
 Integration: The sensor provides measurements at discrete points in time, resulting in a sampling rate. The orientation is estimated by aggregating or integrating the measurements.
 Registration: The initial orientation must somehow be determined, either by an additional sensor, or a clever default assumption or start-up procedure.
 Drift error: As the error grows over time, other sensors are needed to directly estimate it and compensate for it.
Calibration
 You could buy a sensor and start using it with the assumption that it is already well calibrated. For a cheaper sensor, however, the calibration is often unreliable. Suppose we have one expensive, well-calibrated sensor that reports angular velocities with very little error. Let ω̂′ denote its output, to distinguish it from the forever unknown true value ω. Now suppose that we want to calibrate a bunch of cheap sensors so that they behave as closely as possible to the expensive sensor.
 A common criterion is the sum-of-squares error, which over n paired readings ω̂[i] (cheap sensor) and ω̂′[i] (reference sensor) is given by Σ_{i=1..n} (ω̂[i] − ω̂′[i])².
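As a concrete illustration, a linear correction a·ω̂ + b can be fit by least squares so that the cheap sensor's outputs match the reference, minimizing the sum-of-squares error above. The sample values below are made up for the sketch.

```python
import numpy as np

# Made-up paired readings: cheap gyro vs. well-calibrated reference (rad/s).
w_cheap = np.array([0.02, 0.51, 1.03, 1.49, 2.05])
w_ref   = np.array([0.00, 0.50, 1.00, 1.50, 2.00])

# Fit corrected = a * w_cheap + b by least squares.
a, b = np.polyfit(w_cheap, w_ref, deg=1)
sse = np.sum((a * w_cheap + b - w_ref) ** 2)
print(f"corrected = {a:.4f} * w + {b:+.4f}   (SSE = {sse:.2e})")
```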
Integration
 Sensor outputs usually arrive at a regular sampling rate. For example, the Oculus Rift gyroscope provides a measurement every 1 ms (yielding a 1000 Hz sampling rate). Let ω̂[k] refer to the kth sample, which arrives at time kΔt.
 The orientation θ(t) at time t = kΔt can be estimated by integration as θ̂[k] = θ̂[0] + Δt Σ_{i=1..k} ω̂[i]; equivalently, each new sample adds ω̂[k]Δt to the running estimate.
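A minimal dead-reckoning sketch of this integration, assuming a 1000 Hz sampling rate as in the Rift example (the function name is an illustrative assumption):

```python
import numpy as np

def integrate_orientation(theta0, omega_samples, dt=0.001):
    """Estimate theta[k] = theta(0) + dt * sum(omega[1..k]) for every k."""
    return theta0 + dt * np.cumsum(omega_samples)

# Example: a constant 1 rad/s spin sampled at 1000 Hz for one second.
theta = integrate_orientation(0.0, np.full(1000, 1.0))
print(theta[-1])  # ~1.0 rad after one second
```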
Registration
 The initial orientation θ(0) was assumed to be known. In practice,
this corresponds to a registration problem, which is the initial
alignment between the real and virtual worlds.
 To understand the issue, suppose that θ represents direction for a
VR headset.
 One possibility is to assign θ(0) = 0, which corresponds to
whichever direction the headset is facing when the tracking
system is turned on.
 This might be when the system is booted. If the headset has an
“on head” sensor, then it could start when the user attaches the
headset to his head.
Drift correction
 To make a useful tracking system, the drift error cannot be
allowed to accumulate. Even if the gyroscope were perfectly
calibrated, drift error would nevertheless grow due to other
factors such as quantized output values, sampling rate
limitations, and unmodeled noise. The first problem is to
estimate the drift error, which is usually accomplished with an
additional sensor.
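One common way to fuse such an additional sensor is a complementary filter; the sketch below is an assumed approach for illustration, not the slides' specific method. The drifting gyro integral is blended with an absolute but noisy reference angle, for example one derived from an accelerometer.

```python
# Illustrative 2D drift correction via a complementary filter (an assumed
# approach): trust the gyro on short timescales, the reference long-term.
def complementary_filter(theta_prev, omega, theta_ref, dt, alpha=0.98):
    gyro_estimate = theta_prev + omega * dt                  # fast, but drifts
    return alpha * gyro_estimate + (1 - alpha) * theta_ref   # slow correction

theta = 0.0
for k in range(1000):                  # 1 s of samples at 1000 Hz
    omega = 1.0                        # gyro reading, rad/s
    theta_ref = (k + 1) * 0.001        # reference angle from another sensor
    theta = complementary_filter(theta, omega, theta_ref, dt=0.001)
print(theta)  # tracks the reference instead of drifting away
```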
VR and Sense Organs
TRACKING SYSTEMS
 Tracking systems are the main components of VR systems.
 They interact with the system's processing unit.
 This relays to the system the orientation of the user's point of view.
 In systems which let a user roam around within a physical space, the location of the person can be detected with the help of trackers, along with his direction and speed.
The various types of tracking systems utilized in VR systems:
 Six degrees of freedom (6-DOF) can be detected.
 Orientation consists of an object's yaw, roll, and pitch.
 6-DOF tracking captures not only the position of the object within the x-y-z coordinates of a space, but also its orientation.
Electromagnetic tracking systems
 They calculate magnetic fields generated by passing an electric current simultaneously through 3 coiled wires.
 These wires are set up perpendicular to one another; this turns each coil into an electromagnet.
 The system's sensors calculate how the magnetic field affects the other coils.
 The measurement shows the orientation and direction of the emitter.
 The responsiveness of an efficient electromagnetic tracking system is really good, and the level of latency is quite low.
 The drawback is that anything that can create a magnetic field can come between the signals which are sent to the sensors.
Acoustic tracking systems
 This tracking system senses and produces ultrasonic sound waves to identify the orientation and position of a target.
 They calculate the time taken for the ultrasonic sound to travel to a sensor.
 The sensors are usually kept stable in the environment.
 The user puts on ultrasonic emitters; the system calculates the orientation and position of the target based on the time taken by the sound to hit the sensors.
 Acoustic tracking systems show several faults. Sound travels quite slowly, so the update rate on a target's position is naturally slow.
 The efficiency of the system can be affected by the environment, as the speed of sound through air often changes depending on the humidity, temperature, or barometric pressure of the environment.
Optical tracking devices
 These devices use light to calculate a target's orientation along with its position.
 The signal emitter typically includes a group of infrared LEDs; the sensors are cameras.
 These cameras detect the infrared light that has been emitted.
 The LEDs illuminate in a fashion known as sequential pulsing.
 The pulsed signals are recorded by the cameras and the information is sent to the processing unit of the system, which can extrapolate the data.
Mechanical tracking systems
 This tracking system is dependent on a physical link between a fixed reference point and the target.
 One of the many examples is the mechanical tracking system found in the VR field known as a BOOM display.
 A BOOM display, an HMD, is attached to the rear of a mechanical arm consisting of 2 points of articulation.
 The detection of the orientation and position of the system is done through the arm.
 The rate of update is quite high with mechanical tracking systems, but the demerit is that they limit the user's range of motion.
MOTION TRACKING
 Motion tracking, the process of digitising your movements for use in computer software, is a key component of VR systems.
 Being able to engage and interact with the virtual world the moment you step into a VR CAVE or put on your VR headset, without being reminded of the real world, is crucial to the creation of a truly immersive experience.
HOW DOES MOTION TRACKING WORK?
 To understand how an object is able to move in three-dimensional space, we need to look at the concept of six degrees of freedom (6DoF), which refers to the freedom of movement of a rigid body in 3D space.
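A minimal illustrative representation of a 6DoF pose (the class and field names are assumptions for this sketch, not a standard API):

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    # Three translational degrees of freedom (meters):
    x: float = 0.0      # left/right
    y: float = 0.0      # up/down
    z: float = 0.0      # forward/back
    # Three rotational degrees of freedom (radians):
    roll: float = 0.0   # tilt about the forward axis
    pitch: float = 0.0  # tilt about the side axis
    yaw: float = 0.0    # turn about the vertical axis

head = Pose6DoF(z=0.1, yaw=0.3)  # stepped forward 10 cm and turned left
print(head)
```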
OPTICAL TRACKING
 Optical tracking is where an imaging device is used to track the body motion of an individual. The person being tracked is required to hold handheld controllers or an HMD (Head Mounted Display) that has the trackers on them, or to wear optical markers, which are placed on certain parts of the body.
NON-OPTICAL TRACKING
 Non-optical tracking makes use of microscopic electromechanical sensors that are installed in hardware or attached to the body to measure and track movements. These are typically gyroscopes, magnetometers, and accelerometers.
The hardware components of VR systems are conveniently classified as:
• Displays (output): devices that each stimulate a sense organ.
• Sensors (input): devices that extract information from the real world.
• Computers: devices that process inputs and outputs sequentially.
DATA GLOVE
 A data glove is an interactive device, resembling a glove worn on the hand, which facilitates tactile sensing and fine-motion control in robotics and virtual reality.
 It is one of the electromechanical devices used in haptics applications.
 Tactile sensing involves simulation of the sense of human touch and includes the ability to perceive pressure, linear force, torque, temperature, and surface texture.
 Fine-motion control involves the use of sensors to detect the movements of the user's hand and fingers, and the translation of these motions into signals that can be used by a virtual hand (for example, in gaming) or a robotic hand (for example, in remote-control surgery).
Body Tracking
 Body tracking is the VR system's ability to sense the position and actions of the participants.
 The particular components of movement that are tracked depend on the body part and how the system is implemented.
 Tracking head movement might consist of just 3-DOF location, just 3-DOF orientation, or full 6-DOF position information.
 Tracking finger movement with a glove device might measure multiple joints of finger flexion or might measure just contacts between fingertips.
 Any component of the body can be tracked in one or more degrees of freedom, assuming a suitable tracking mechanism is available in an appropriate size and weight and is attached to the system:
◦ Tracking the head
◦ Tracking the hand and fingers
◦ Tracking the eyes
◦ Tracking the torso
◦ Tracking the feet
◦ Tracking other body parts
◦ Indirect tracking
The Head
 The head is tracked in almost every VR system, although not always in the full 6-DOF.
 VR systems need to know something about the user's head orientation and/or location to properly render and display the world.
 Whether location or orientation information is required depends on the type of display being used.
 Head-based displays require head orientation to be tracked. As users rotate their heads, the scenery must adapt and be appropriately rendered in accordance with the direction of view, or the users will not be physically immersed.
 Location tracking helps provide the sense of motion parallax (the sense that an object has changed position based on being viewed from a different point). This cue is very important for objects that are near the viewer.
The Hand and Fingers
 Tracking the hand, with or without tracking the fingers, is generally done to give the user a method of interacting with the world.
 In multiparticipant spaces, hand gestures provide communication between participants.
 A hand can be tracked by attaching a tracker unit near the wrist or through the use of a tracked handheld device.
 If detailed information about the shape and movement of the hand is needed, a glove input device is used to track the positions of the user's fingers and other flexions of the hand.
 The hand position tracker is generally mounted directly on the glove. While glove input devices provide a great amount of information about a key interactive part of the user's body, they have a few disadvantages.
The Eyes
 Technology for tracking the direction in which the user's eyes are looking relative to their head has only recently become practical for use with virtual reality and, consequently, has not been tried in many applications.
 One use is monitoring the direction of the gaze to allocate computer resources: the scene displays a higher degree of detail in the direction of the tracked eye's gaze.
 Objects might also be selected or moved based on the movement of the eyes.
The Torso
 Very few VR applications actually track the torso of the participant, although when an avatar of the user is displayed, it often includes a torso with certain assumptions made about its position, based on head and hand positions.
 The torso is actually a better indicator of the direction the body is facing than either the head or hands.
 The torso's bearing might be a better element to base navigational direction on than the head or hand.
 The benefit of using the torso's bearing for travel direction correlates with the user's experience level in moving through an immersive virtual world.
The Feet
 Some work has been done to provide a means for tracking the feet of the user.
 Tracking the feet provides an obvious means of determining the speed and direction a user wishes to travel.
 The obvious method of determining feet movement is to track the position of each foot.
 The most common way to do this is to use electromagnetic trackers. The tracker is attached to the foot with a wire connecting that device to the computer or to a body pack (containing a radio transmitter).
 Optical tracking is another method and uses cameras to "watch" the feet. This doesn't require any sensors on the foot or attached wires. Tracking the feet this way, though, is tricky.
 You would most likely want to put a very high-contrast spot or sticker on the foot to give the camera a very specific reference to "look" for.
Indirect Tracking
 Indirect tracking refers to the use of physical objects other than body parts to estimate the position of the participant.
 These physical objects are usually props and platforms.
 The movement of a hand-held device, such as a wand or a steering wheel mounted on a platform, is a good indicator of the position of one of the participant's hands.
Sensors:

 Magnetometers,
 Accelerometers and
 Gyroscopes
A magnetometer
 is, as you probably can tell, a device that measures magnetic fields.
 It acts as a compass: by detecting magnetic North, it can always tell which direction it is facing on the surface of the earth.
 Clever developers have repurposed the magnetometer for use with the Google Cardboard, where a magnetic ring is slid up and down another magnet; the fluctuation in the field is then registered as a button click.
 Metal detectors also use magnetometers, which is why they can only detect ferrous metals.
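As a small illustration of the compass behavior, a level device's heading can be derived from the magnetometer's horizontal field components. Axis conventions vary by device; this sketch assumes x points North and y East, with no tilt compensation.

```python
import math

def heading_degrees(mx, my):
    """Compass heading from the horizontal magnetic field components."""
    return math.degrees(math.atan2(my, mx)) % 360.0

print(heading_degrees(1.0, 0.0))  # 0.0  -> facing magnetic North
print(heading_degrees(0.0, 1.0))  # 90.0 -> facing East
```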


An accelerometer
 is a mechanism that lets your device, such as a smartphone, know which way up it is.
 This is the sensor that tells your phone or tablet whether the screen should be in portrait or landscape mode.
 One accelerometer can tell whether it is in line with the pull of gravity or not, but if you combine three of them (one for each axis) you can tell which way up something is, since each axis is fixed in relation to the device it is in.
 Accelerometers can measure more than orientation; as the name suggests, they can also measure the magnitude of acceleration along an axis.
 For instance, they are used to trigger airbags during a crash when the g-force along the horizontal axis exceeds a certain threshold.
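To illustrate how three axes reveal "which way up", the sketch below estimates tilt from a stationary 3-axis accelerometer reading, where only gravity acts on the sensor. These are the standard tilt formulas; the axis conventions are an assumption of the sketch.

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Pitch and roll (degrees) from a stationary accelerometer reading."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_gravity(0.0, 0.0, 9.81))  # (0.0, 0.0): flat and level
print(tilt_from_gravity(0.0, 9.81, 0.0))  # (0.0, 90.0): rolled onto its side
```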
Gyroscopes
 No matter how the frame (and whatever it's mounted to) changes orientation, the spinning disc in the middle stays true to its original plane.
 This is useful in aircraft, where you may not know if the craft is level due to issues like visibility; in electronic devices, rotation is detected by MEMS (Micro-Electro-Mechanical Systems).
 The types of components used to measure rotation in electronic devices can be quite varied.
 Generally they detect vibration that is translated to rotational measurement with microscopic tuning forks, vibrating wheels, or resonant solid matter.
 These work like the organs of insects (known as halteres) that help to orient them.
 These MEMS gyroscopes are also known as vibrating structure gyroscopes: objects that vibrate tend to continue vibrating in the same plane even when rotated, which means the vibrating mass generates a Coriolis force that can be detected electronically.
Motion capture
 Motion capture (sometimes referred to as mo-cap or mocap, for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics.
 In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation.
 When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.
Optical systems
 Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections.
 Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject.
 Tracking a large number of performers or expanding the capture area is accomplished by the addition of more cameras.
 These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow, and wrist markers providing the angle of the elbow.
Passive markers
 Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring skin and fabric.
 The centroid of the marker is estimated as a position within the two-dimensional image that is captured.
 An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured.
 If two calibrated cameras see a marker, a three-dimensional fix can be obtained (see the triangulation sketch below).
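The three-dimensional fix can be computed by triangulation; below is a minimal direct-linear-transform sketch, assuming known 3x4 projection matrices for the two calibrated cameras. It illustrates the geometry only, not any specific product's algorithm.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel positions in two calibrated cameras."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean

# Two ideal cameras one unit apart along x, both looking down +z:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = np.array([0.2, 0.1, 5.0, 1.0])
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]
print(triangulate(P1, P2, uv1, uv2))  # ~[0.2 0.1 5.0]
```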
Active marker
 Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software to identify them by their relative positions.
 Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light.

Video based input
 Virtual reality (VR) is an exciting new medium with broad applications in entertainment, marketing, design, and more.
 Every bit as flexible and dynamic are the professionals who specialize in creating and adapting content for virtual reality.
 VR content creators use two main methods to create VR content: computer generation, wherein every part of the world is synthetic, designed and integrated into an interactive experience using code; and 360-degree video, where video is taken using an omnidirectional camera and edited to create an immersive experience.
360-Degree Immersive Video
 Video is shot using an omnidirectional camera, spliced together using special video-stitching software, and edited further in order to be optimized for viewing on a head-mounted display (HMD).
 Content creators will often fill more than one role on a single project, but VR content creators who work in this medium can take on a number of jobs, including:
 direction—conceiving and leading the project;
 filming—setting up tripods, guiding the Steadicam, or piloting a drone-mounted camera;
 audio capture—often handled directly beneath the tripod; and
 postproduction.
Interactive 3D Development
 The "virtual" in virtual reality is why most people tend to picture this kind of content when they think of putting on a VR headset.
 If creating video-based VR content is like making a film, then creating interactive VR content is like developing a video game.
 Creators use 3D game-development software, called "engines," to build entire worlds from the ground up. This often means creating original 3D models and animations with software such as 3DS Max or Maya; using applications such as Photoshop or ZBrush to add color and texture; using the engine's built-in tools to add lighting to the scene; and designing original sound effects to bring the visual content to life.
 Interactive VR content creators write code, using the advanced functionality provided by the game engine, to tie everything together.