AVR Unit 1 Part 1

This document provides an introduction and definition of virtual reality (VR). It defines VR as inducing targeted behavior in an organism through artificial sensory stimulation, while the organism is unaware of the interference. Key components are targeted behavior, artificial stimulation of one or more senses, and the organism's unawareness. The document also discusses modern VR experiences including video games, telepresence, and the importance of understanding human physiology for developing comfortable VR.


Introduction to Virtual Reality

• What Is Virtual Reality?


• Virtual reality (VR) technology is evolving rapidly, making it
undesirable to define VR in terms of specific devices that may
fall out of favor in a year or two.
• We are concerned with fundamental principles that are less
sensitive to particular technologies and therefore survive the
test of time.
• Our first challenge is to consider what VR actually means, in a way
that captures the most crucial aspects in spite of rapidly changing
technology.
• The concept must also be general enough to encompass what VR is
considered today and what we envision for its future.
• We start with two thought-provoking examples:
1) A human having an experience of flying over virtual San Francisco by flapping his
own wings. (The user, wearing a VR headset, flaps his wings while flying over
virtual San Francisco; a motion platform and fan provide additional sensory
stimulation. The figure on the right shows the stimulus presented to each eye.)

2) A mouse running on a freely rotating ball while exploring a virtual maze that
appears on a projection screen around the mouse.
• We want our definition of VR to be broad enough to include these examples and
many more.
• Definition of VR: Inducing targeted behavior in an organism by using
artificial sensory stimulation, while the organism has little or no awareness
of the interference.
• Four key components appear in the definition:
1) Targeted behavior: The organism is having an “experience” that was
designed by the creator. Examples include flying, walking, exploring,
watching a movie, and socializing with other organisms.
2) Organism: This could be you, someone else, or even another life form such
as a fruit fly, cockroach, fish, rodent, or monkey (scientists have used VR
technology on all of these!).
3) Artificial sensory stimulation: Through the power of engineering, one or
more senses of the organism become co-opted, at least partly, and their
ordinary inputs are replaced or enhanced by artificial stimulation.
4) Awareness: While having the experience, the organism seems unaware of
the interference, thereby being “fooled” into feeling present in a virtual
world. This unawareness leads to a sense of presence in an altered or
alternative world. It is accepted as being natural.
• Testing the boundaries: How far does our VR definition allow one
to stray from the most common examples?
• Perhaps listening to music through headphones should be
included.
• What about watching a movie at a theater? Clearly, technology
has been used in the form of movie projectors and audio systems
to provide artificial sensory stimulation.
• Continuing further, what about a portrait or painting on the wall?
The technology in this case involves paints and a canvas.
• Finally, we might even want reading a novel to be considered as
VR.
• Who is the fool? When an animal explores its environment, neural
structures composed of place cells are formed that encode spatial
information about its surroundings.
• Each place cell is activated precisely when the organism returns to a
particular location that is covered by it.
• It has been shown that these neural structures may form in an
organism, even when having a VR experience.
• In other words, our brains may form place cells for places that are not
real! This is a clear indication that VR is fooling our brains, at least
partially.
• A novel that meticulously describes an environment that does not
exist may likewise cause place cells to be generated.
• Terminology regarding various “realities”: The term virtual reality
dates back to the German philosopher Immanuel Kant, although its use did not
involve technology.
• Kant introduced the term to refer to the “reality” that exists in someone’s
mind, as differentiated from the external physical world, which is also a
reality.
• The real world refers to the physical world that contains the user at the time
of the experience, and the virtual world refers to the perceived world as part
of the targeted VR experience.
• Augmented reality (AR) refers to systems in which most of the visual stimuli
are propagated directly through glass or cameras to the eyes, and some
additional structures, such as text and graphics, appear to be superimposed
onto the user’s world.
• The term mixed reality (MR) is sometimes used to refer to an entire
spectrum that encompasses VR, AR, and ordinary reality.
• The most important idea of VR is that the user’s perception of
reality has been altered through engineering, rather than
whether the environment they believe they are in seems more
“real” or “virtual”.
• A perceptual illusion has been engineered. Thus, another
reasonable term for this area, especially if considered as an
academic discipline, could be perception engineering.
• When considering a VR system, it is tempting to focus only on the traditional
engineering parts: Hardware and software.
• However, it is equally important, if not more important, to understand and exploit
the characteristics of human physiology and perception.
• Because we did not design ourselves, these fields can be considered as reverse
engineering. All of these parts tightly fit together to form perception engineering.
• Interactivity: Most VR experiences involve another crucial
component, interaction.
• Does the sensory stimulation depend on actions taken by the
organism?
• If the answer is “no”, then the VR system is called open-loop;
otherwise, it is closed-loop.
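The open-loop vs. closed-loop distinction can be sketched in a few lines; the function names below are illustrative, not from any real VR API.

```python
# Illustrative sketch of open-loop vs. closed-loop VR (hypothetical names,
# not a real VR API). In the open-loop case the stimulus ignores the user's
# actions; in the closed-loop case it depends on the user's tracked state.

def open_loop_stimulus(t, head_yaw_deg):
    # Same stimulus sequence regardless of what the user does.
    return f"frame@{t}"

def closed_loop_stimulus(t, head_yaw_deg):
    # The rendered view changes as the user turns their head.
    return f"frame@{t}/yaw={head_yaw_deg:.0f}"

# Turning the head alters the closed-loop stimulus but not the open-loop one.
assert open_loop_stimulus(0, 0.0) == open_loop_stimulus(0, 90.0)
assert closed_loop_stimulus(0, 0.0) != closed_loop_stimulus(0, 90.0)
```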
• In the case of closed-loop VR, the organism has partial control
over the sensory stimulation, which could vary as a result of body
motions, including eyes, head, hands, or legs.
• Other possibilities include voice commands, heart rate, body
temperature, and skin conductance.
• First- vs. third-person: When a scientist designs an experiment for an
organism, the separation is clear: The laboratory subject
(organism) has a first-person experience, while the scientist is a third-
person observer.
• The scientist carefully designs the VR system as part of an experiment
that will help to resolve a scientific hypothesis. For example, how does
turning off a few neurons in a rat’s brain affect its navigation ability?
• On the other hand, when engineers or developers construct a VR
system or experience, they are usually targeting themselves and people
like them.
• They feel perfectly comfortable moving back and forth between being
the “scientist” and the “lab subject” while evaluating and refining their
work.
• Synthetic vs. captured: Two extremes exist when constructing a virtual world as part
of a VR experience. At one end, we may program a synthetic world, which is completely
invented from geometric primitives and simulated physics.
• At the other end, the world may be captured using modern imaging techniques. For
viewing on a screen, the video camera has served this purpose for over a century.
Capturing panoramic images and videos and then seeing them from any viewpoint in a VR
system is a natural extension.
• As humans interact, it becomes important to track their motions, which is itself a
form of capture.
• What are their facial expressions while wearing a VR headset? Do we need to know
their hand gestures? What can we infer about their emotional state? Are their eyes
focused on me?
• Synthetic representations of ourselves called avatars enable us to interact and provide
a level of anonymity. We can also enhance our avatars by tracking the motions and
other attributes of our actual bodies.
• Health and safety: Unlike simpler media such as radio or
television, VR has the power to overwhelm the senses and the
brain, leading to fatigue or sickness.
• This phenomenon has been studied under the heading of simulator
sickness for decades; in this book we will refer to adverse
symptoms from VR usage as VR sickness.
• In many cases, it is caused by a careless developer who
misunderstands or disregards the side effects of the experience
on the user.
• To engineer comfortable VR experiences, one must understand
human physiology and perceptual psychology.
• In many cases, fatigue arises because the brain appears to work
harder to integrate the unusual stimuli being presented to the
senses.
• Another factor that leads to fatigue is an interface that
requires large amounts of muscular effort.
• For example, it might be tempting to move objects around in a
sandbox game by moving your arms around in space. This quickly
leads to fatigue and an avoidable phenomenon called gorilla arms,
in which people feel that the weight of their extended arms is
unbearable.
Modern VR Experiences
• The current generation of VR systems was brought about by advances in display, sensing,
and computing technology from the smartphone industry.
• Video games: People have dreamed of entering their video game worlds for decades. The figure
below shows several video game experiences in VR. Most gamers currently want to explore
large, realistic worlds through an avatar.
• Figure (a) shows Valve’s Portal 2 for the HTC Vive headset, which is a puzzle-solving
experience in a virtual world. Figure (b) shows an omnidirectional treadmill peripheral for
walking through first-person shooter games. These two examples give the user a first-
person perspective of their character. By contrast, Figure (c) shows Lucky’s Tale for the
Oculus Rift, which instead yields a comfortable third-person perspective as the user seems
to float above the character that she controls. Figure (d) shows a game that contrasts with all
the others in that it was designed specifically to exploit the power of VR: the player
appears to have a large elephant trunk. The purpose of the game is to enjoy this unusual
embodiment by knocking things down with a swinging trunk.
• Telepresence: The first step toward feeling like we are somewhere
else is capturing a panoramic view of the remote environment.
• Google’s Street View and Earth apps already rely on the captured
panoramic images from millions of locations around the world.
• Simple VR apps that query the Street View server directly enable the
user to feel as if he is standing in each of these locations.
• Even better is to provide live panoramic video interfaces, through
which people can attend sporting events and concerts.
• An important component for achieving telepresence is to capture
a panoramic view: (a) A car with cameras and depth sensors on
top, used by Google to make Street View. (b) The Insta360 Pro
captures and streams omnidirectional videos.
• People can take video conferencing to the next level by feeling
present at the remote location. By connecting panoramic cameras
to robots, the user is even allowed to move around in the remote
environment.
• Current VR technology allows us to virtually visit far away places
and interact in most of the ways that were previously possible
only while physically present.
• This leads to improved opportunities for telecommuting to work.
This could ultimately help reverse the urbanization trend
sparked by the 19th-century industrial revolution, leading to
deurbanization.
• A panoramic video of Paul McCartney performing provided a
VR experience in which users felt as if they were on stage with the
rock star.
• Virtual societies: Whereas telepresence makes us feel as if we
are in another part of the physical world, VR also allows us to form
entire societies that remind us of the physical world, but are
synthetic worlds that contain avatars connected to real people.
• People interact in a fantasy world through avatars; such
experiences were originally designed to view on a screen but can
now be experienced through VR.
• Groups of people could spend time together in these spaces for a
variety of reasons, including common special interests,
educational goals, or simply an escape from ordinary life.
• Empathy: The first-person perspective provided by VR is a powerful
tool for causing people to feel empathy for someone else’s situation.
• The world continues to struggle with acceptance and equality for
others of different race, religion, age, gender, social status, and
education, while the greatest barrier to progress is that most people
cannot understand what it is like to have a different identity.
• Through virtual societies, many more possibilities can be explored.
• What if you were 10 cm shorter than everyone else?
• What if you taught your course as a different gender?
• What if you were the victim of racial discrimination by the police?
• Using VR, we can imagine many “games of life.”
• Education: The first-person perspective could revolutionize many
areas of education.
• In engineering, mathematics, and the sciences, VR offers the chance
to visualize geometric relationships in difficult concepts or data that
are hard to interpret.
• Furthermore, VR is naturally suited for practical training because skills
developed in a realistic virtual environment may transfer naturally to
the real environment.
• The motivation is particularly high if the real environment is costly to
provide or poses health risks.
• One of the earliest and most common examples of training in VR is
flight simulation.
• A flight simulator in use by the US Air Force. The user sits in a
physical cockpit while being surrounded by displays that show
the environment.
• Other examples include firefighting, nuclear power plant safety,
search-and-rescue, military operations, and medical procedures.
• Beyond these common uses of VR, perhaps the greatest opportunities
for VR education lie in the humanities, including history, anthropology,
and foreign language acquisition.
• Consider the difference between reading a book on the Victorian era in
England and being able to roam the streets of 19th-century London, in
a simulation that has been painstakingly constructed by historians.
• We could even visit an ancient city that has been reconstructed from
ruins.
• Fascinating possibilities exist for either touring physical museums
through a VR interface or scanning and exhibiting artifacts directly
in virtual museums.
• Virtual prototyping: In the real world, we build prototypes to
understand how a proposed design feels or functions.
• Virtual prototyping enables designers to inhabit a virtual world
that contains their prototype. They can quickly interact with it
and make modifications.
• They also have opportunities to bring clients into their virtual
world so that they can communicate their ideas.
• Imagine you want to remodel your kitchen. You could construct a
model in VR and then explain to a contractor exactly how it should
look.
• Virtual prototyping in VR has important uses in many businesses,
including real estate, architecture, and the design of aircraft,
spacecraft, cars, furniture, clothing, and medical instruments.
• Health care: Although health and safety are challenging VR issues,
the technology can also help to improve our health.
• There is an increasing trend toward distributed medicine, in which
doctors train people to perform routine medical procedures in remote
communities around the world.
• Doctors can provide guidance through telepresence, and also use VR
technology for training.
• In another use of VR, doctors can immerse themselves in 3D organ
models that were generated from medical scan data.
• This enables them to better plan and prepare for a medical procedure
by studying the patient’s body shortly before an operation.
• In yet another use, VR can directly provide therapy to help
patients.
• Examples include overcoming phobias and stress disorders
through repeated exposure, improving or maintaining cognitive
skills in spite of aging, and improving motor skills to overcome
balance, muscular, or nervous system disorders.
• VR systems could also one day improve longevity by enabling aging
people to virtually travel, engage in fun physical therapy, and
overcome loneliness by connecting with family and friends through
an interface that makes them feel present and included in remote
activities.
• Augmented and mixed reality: In many applications, it is
advantageous for users to see the live, real world with some
additional graphics superimposed to enhance its appearance.
• This has been referred to as augmented reality or mixed
reality.
• By placing text, icons, and other graphics into the real world,
the user could leverage the power of the Internet to help
with many operations such as navigation, social interaction,
and mechanical maintenance.
• The Microsoft HoloLens (2016) uses advanced see-through display
technology to superimpose graphical images onto the ordinary
physical world, as perceived by looking through the glasses.
• Pokemon Go is a geolocation-based game from 2016 that
allows users to imagine a virtual world that is superimposed onto
the real world. They can see Pokemon characters only by looking
“through” their smartphone screen.
Hardware

• A third-person perspective of a VR system: It is wrong to assume
that the engineered hardware and software are the complete VR
system. The organism and its interaction with the hardware are
equally important. Furthermore, interactions with the surrounding
physical world continue to occur during a VR experience.
• The first step in understanding how VR works is to consider what
constitutes the entire VR system.
• A VR system includes hardware components, such as computers, headsets,
and controllers, but it is equally important to account for the user.
• The hardware produces stimuli that override the senses of the user.
• Tracking is needed to adjust the stimulus based on human motions. The VR
hardware accomplishes this by using its own sensors, thereby tracking
motions of the user.
• Finally, it is also important to consider the surrounding physical world as
part of the VR system.
• In spite of stimulation provided by the VR hardware, the user will always
have other senses that respond to stimuli from the real world.
• The VR hardware might also track objects other than the user, especially
if interaction with them is part of the VR experience.
• Sensors and sense organs: How is information extracted from the
physical world? Clearly this is crucial to a VR system.
• In engineering, a transducer refers to a device that converts energy
from one form to another.
• A sensor is a special transducer that converts the energy it receives
into a signal for an electrical circuit.
• This may be an analog or digital signal, depending on the circuit type. A
sensor typically has a receptor that collects the energy for
conversion.
• Organisms work in a similar way. The “sensor” is called a sense organ,
with common examples being eyes and ears.
• Because our “circuits” are formed from interconnected neurons, the
sense organs convert energy into neural impulses.
• Configuration space of sense organs: As the user moves
through the physical world, his sense organs move along with him.
• Furthermore, some sense organs move relative to the body
skeleton, such as our eyes rotating within their sockets.
• Each sense organ has a configuration space, which corresponds to
all possible ways it can be transformed or configured.
• The most important aspect of this is the number of degrees of
freedom or DOFs of the sense organ.
• A rigid object that moves through ordinary space has six DOFs.
Three DOFs correspond to its changing position in space: 1) side-
to-side motion, 2) vertical motion, and 3) closer-further motion.
• The other three DOFs correspond to possible ways the object
could be rotated; in other words, exactly three independent
parameters are needed to specify how the object is oriented.
These are called yaw, pitch, and roll.
• As an example, consider your left ear. As you rotate your head or
move your body through space, the position of the ear changes, as
well as its orientation. This yields six DOFs.
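The six DOFs can be made concrete with a small sketch: a pose is three position parameters plus a rotation built from yaw, pitch, and roll. This is a minimal plain-Python illustration, not code from any VR library, and the ear position values are made up for the example.

```python
import math

# A rigid body's configuration: 3 position DOFs plus 3 orientation DOFs
# (yaw about z, pitch about y, roll about x), six parameters in total.

def rotation_matrix(yaw, pitch, roll):
    """3x3 rotation composed as Rz(yaw) * Ry(pitch) * Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    Rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(Rz, matmul(Ry, Rx))

# Pose of the "left ear" example: position (x, y, z) in meters (illustrative
# values) and orientation from three angles -- six numbers in total.
left_ear_pose = {
    "position": (0.07, 0.0, 1.65),
    "orientation": rotation_matrix(math.radians(30), 0.0, 0.0),
}

# Sanity check: a 90-degree yaw maps the x-axis onto the y-axis.
R = rotation_matrix(math.radians(90), 0.0, 0.0)
assert abs(R[0][0]) < 1e-9 and abs(R[1][0] - 1.0) < 1e-9
```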
• Keep in mind that our bodies have many more degrees of freedom,
which affect the configuration of our sense organs.
• A tracking system may be necessary to determine the position
and orientation of each sense organ that receives artificial
stimuli.
• An abstract view: The figure below illustrates the normal
operation of one of our sense organs without interference
from VR hardware. The brain controls its configuration, while the
sense organ converts natural stimulation from the environment into
neural impulses that are sent to the brain.
• In comparison to the figure above, a VR system “hijacks” each sense
by replacing the natural stimulation with artificial stimulation
that is provided by hardware called a display.
• Using a computer, a virtual world generator maintains a coherent,
virtual world. Appropriate “views” of this virtual world are
rendered to the display.
• Aural: world-fixed vs. user-fixed
• Figure shows the speaker setup and listener location for a Dolby 7.1 Surround Sound
theater system, which could be installed at a theater or a home family room. Seven
speakers distributed around the room periphery generate most of the sound, while a
subwoofer (the “1” of the “7.1”) delivers the lowest frequency components. The aural
displays are therefore world-fixed.
• Compare this to a listener wearing headphones. In this case, the aural
displays are user-fixed.
• What are the key differences? In addition to the obvious portability of
headphones, the following quickly come to mind:
• In the surround-sound system, the generated sound (or stimulus) is far away
from the ears, whereas it is quite close for the headphones.
• One implication of the difference in distance is that much less power is
needed for the headphones to generate an equivalent perceived loudness
level compared with distant speakers.
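The size of the power difference can be estimated with the free-field inverse-square law, an idealization that ignores room acoustics and how headphones couple to the ear canal; the distances below are illustrative, not measurements of any particular system.

```python
import math

# Idealized free-field estimate: sound intensity at the ear falls off with
# source distance r as 1/r^2. Matching the intensity of a speaker 2 m away
# with a headphone driver 2 cm away therefore needs far less power.

def relative_power_db(r_near, r_far):
    """Power advantage (in dB) of a source at r_near over one at r_far."""
    return 10 * math.log10((r_far / r_near) ** 2)

advantage = relative_power_db(0.02, 2.0)   # headphone vs. room speaker
assert abs(advantage - 40.0) < 1e-9        # roughly 40 dB less power needed
```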
• Wearing electronics on your head could be uncomfortable over long periods
of time, causing a preference for surround sound over headphones.
• If you want to preserve your perception of where sounds are coming from,
then headphones would need to take into account the configurations of your
ears in space to adjust the output accordingly.
• Visual: world-fixed vs. user-fixed
• Our vision sense is much more powerful and complex than our sense of
hearing.
• Figure (a) shows a CAVE system, which parallels the surround-sound system
in many ways. The user again sits in the center while displays around the
periphery present visual stimuli to his eyes.
• Figure (b) shows a user wearing a VR headset, which parallels the
headphones.
• If you would like to perceive the image as part of a fixed world around you,
then the image inside the headset must change to compensate as you rotate
your head.
• Once we agree that such transformations are necessary, it becomes a
significant engineering challenge to estimate the amount of head and eye
movement that has occurred and apply the appropriate transformation in a
timely and accurate manner.
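The core of this compensation can be sketched in one dimension (yaw only); real systems use full 3D rotations and must also account for position, eye movement, and latency. The function name is illustrative.

```python
import math

# Sketch of world-fixed compensation in 1 DOF: to make a virtual object
# appear fixed in the world, the renderer applies the inverse of the
# measured head rotation to the scene.

def view_azimuth(world_azimuth, head_yaw):
    """Azimuth of a world-fixed object in the head's frame (radians)."""
    return world_azimuth - head_yaw   # inverse rotation, 1 DOF

obj = math.radians(45)                # object sits 45 degrees to the left
# When the head turns to face the object, it lands in the center of view:
assert abs(view_azimuth(obj, math.radians(45))) < 1e-12
```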
• The hardware components of VR systems are
conveniently classified as:
1. Displays (output): Devices that each stimulate a sense
organ.
2. Sensors (input): Devices that extract information from
the real world.
3. Computers: Devices that process inputs and outputs
sequentially.
• Displays: A display generates stimuli for a targeted sense organ. Vision
is our dominant sense, and any display constructed for the eye must
cause the desired image to be formed on the retina.
• For CAVE systems, some combination of digital projectors and mirrors
is used. An array of large-panel displays may alternatively be employed.
• For headsets, a smartphone display can be placed close to the eyes and
brought into focus using one magnifying lens for each eye.
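The role of this lens can be sketched with the thin-lens equation, an idealization of real headset optics; the 40 mm focal length below is an illustrative value, not the spec of any particular headset.

```python
# Thin-lens sketch (idealized): 1/f = 1/d_obj + 1/d_img. Placing the screen
# at the focal distance (d_obj = f) sends the image to infinity, so a relaxed
# eye can focus on a screen only a few centimeters away.

def image_distance(f, d_obj):
    """Image distance (m) for focal length f and object distance d_obj."""
    inv = 1.0 / f - 1.0 / d_obj
    return float("inf") if inv == 0 else 1.0 / inv

f = 0.04                                          # 40 mm, illustrative
assert image_distance(f, f) == float("inf")       # screen exactly at focus
assert image_distance(f, 0.039) < 0               # inside focus: virtual image
```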
• Now imagine displays for other sense organs. Sound is displayed to the
ears using classic speaker technology. Bone conduction methods may
also be used, which vibrate the skull and propagate the waves to the
inner ear; this method appeared in Google Glass.
• For the sense of touch, there are haptic displays. Two examples
are pictured in Figure below. Haptic feedback can be given in the
form of vibration, pressure, or temperature.

(a) The Touch X system by 3D Systems allows the user to feel strong
resistance when poking into a virtual object with a real stylus. A robot arm
provides the appropriate forces. (b) Some game controllers occasionally
vibrate.
• Sensors: For visual and auditory body-mounted displays, the position and orientation
of the sense organ must be tracked by sensors to appropriately adapt the stimulus.
• The orientation part is usually accomplished by an inertial measurement unit or
IMU.
• The main component is a gyroscope, which measures its own rate of rotation; the
rate is referred to as angular velocity.
• To reduce drift error, which results from accumulating small measurement errors in
orientation over time, IMUs also contain an accelerometer and possibly a magnetometer.
• Over the years, IMUs have gone from existing only as large mechanical systems in
aircraft and missiles to being tiny devices inside of smartphones.
• Due to their small size, weight, and cost, IMUs can be easily embedded in wearable
devices.
• They are one of the most important enabling technologies for the current
generation of VR headsets and are mainly used for tracking the user’s head
orientation.
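Why drift arises can be seen from a one-DOF sketch of gyroscope dead reckoning: orientation is the integral of measured angular velocity, so even a tiny constant measurement bias accumulates without bound. The bias and sampling values below are illustrative.

```python
# Sketch of gyroscope dead reckoning (1 DOF for clarity). Orientation is
# estimated by integrating angular-velocity samples, so a constant bias in
# the measurements grows linearly into drift error over time -- the reason
# IMUs add an accelerometer and magnetometer for correction.

def integrate_yaw(omega_samples, dt):
    """Estimate yaw (rad) by summing angular-velocity samples over time."""
    yaw = 0.0
    for omega in omega_samples:
        yaw += omega * dt
    return yaw

dt = 0.001                             # 1 kHz sampling (illustrative)
true_rate = 0.0                        # the head is actually still
bias = 0.01                            # 0.01 rad/s gyro bias (illustrative)
samples = [true_rate + bias] * 60000   # one minute of samples
drift = integrate_yaw(samples, dt)
assert abs(drift - 0.6) < 1e-6         # ~0.6 rad of error after 60 s
```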
• Digital cameras provide another critical source of information for tracking
systems.
• Like IMUs, they have become increasingly cheap and portable due to the
smartphone industry, while at the same time improving in image quality.
• The idea is to identify features or markers in the image that serve as
reference points for a moving object or a stationary background.
• Cameras are commonly used to track eyes, heads, hands, entire human bodies,
and any other objects in the physical world.
• One of the main challenges at present is to obtain reliable and accurate
performance without placing special markers on the user or objects around
the scene.
• As opposed to standard cameras, depth cameras work actively by projecting
light into the scene and then observing its reflection in the image.
• This is typically done in the infrared (IR) spectrum. In addition to these
sensors, we rely heavily on good-old mechanical switches and
potentiometers to create keyboards and game controllers.

• (a) The Microsoft Kinect sensor gathers both an ordinary RGB image
and a depth map (the distance away from the sensor for each pixel).
(b) The depth is determined by observing the locations of projected
IR dots in an image obtained from an IR camera.
• Computers: As we have noticed, most of the needed sensors exist on a
smartphone.
• Therefore, a smartphone can be dropped into a case with lenses to
provide a VR experience at little added cost.
• In the near future, we expect to see wireless, all-in-one headsets that
contain all of the essential parts of smartphones for delivering VR
experiences.
• These will eliminate unnecessary components of smartphones and will
instead have customized optics, microchips, and sensors for VR.
• Graphics processing units (GPUs) have been optimized for quickly
rendering graphics to a screen, and they are currently being adapted to
handle the specific performance demands of VR.
• Two headsets that create a VR experience by dropping a
smartphone into a case. (a) Google Cardboard works with a wide
variety of smartphones. (b) Samsung Gear VR is optimized for one
particular smartphone (in this case, the Samsung S6).
• Figure shows the hardware components for the Oculus Rift DK2, which became available in late 2014.

• In the lower left corner, you can see a smartphone screen that serves as the display. Above that is a circuit
board that contains the IMU, display interface chip, a USB driver chip, a set of chips for driving LEDs on the
headset for tracking, and a programmable microcontroller.

• The lenses, shown in the lower right, are placed so that the smartphone screen appears to be “infinitely far”
away, but nevertheless fills most of the field of view of the user.

• The upper right shows flexible circuits that deliver power to IR LEDs embedded in the headset (they are
hidden behind IR-transparent plastic). A camera is used for tracking, and its parts are shown in the center.
Software
• The virtual world generator (VWG) receives inputs from low-level systems that
indicate what the user is doing in the real world.
• A head tracker provides timely estimates of the user’s head
position and orientation. Keyboard, mouse, and game-controller
events arrive in a queue, ready to be processed.
• The key role of the VWG is to maintain enough of an internal
“reality” so that renderers can extract the information they
need to calculate outputs for their displays.
• The Virtual World Generator (VWG) maintains another world, which
could be synthetic, real, or some combination. From a computational
perspective, the inputs are received from the user and his
surroundings, and appropriate views of the world are rendered to
displays.
• Matched motion: The most basic operation of the VWG is to maintain a correspondence
between user motions in the real world and the virtual world.
• In the real world, the user’s motions are confined to a safe region, which we will call
the matched zone.
• One of the greatest challenges is the mismatch of obstacles: What if the user is
blocked in the virtual world but not in the real world? The reverse is also possible.
• In a seated experience, the user sits in a chair while wearing a headset. The matched
zone in this case is a small region, such as one cubic meter, in which users can move
their heads.
• If the user is not constrained to a seat, then the matched zone could be an entire room
or an outdoor field.
• Larger matched zones tend to lead to greater safety issues. Users must make sure that
the matched zone is cleared of dangers in the real world, or the developer should make
them visible in the virtual world.
• A matched zone is maintained between the user in the real
world and his or her representation in the virtual world.
• The matched zone could be moved in the virtual world by using an
interface, such as a game controller, while the user does not
correspondingly move in the real world.
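The matched-zone bookkeeping can be sketched as follows; the class and method names are hypothetical, not from any VR SDK.

```python
# Hypothetical sketch (not a real VR SDK): the user's virtual position is
# the matched zone's origin in the virtual world plus the user's tracked
# offset inside the real-world zone. An interface action such as a
# game-controller "teleport" moves the origin without any real motion.

class MatchedZone:
    def __init__(self):
        self.origin = [0.0, 0.0]         # zone placement in the virtual world

    def teleport(self, dx, dy):          # e.g. triggered by a controller
        self.origin[0] += dx
        self.origin[1] += dy

    def virtual_position(self, tracked_offset):
        ox, oy = tracked_offset          # user's real position in the zone
        return (self.origin[0] + ox, self.origin[1] + oy)

zone = MatchedZone()
assert zone.virtual_position((1.0, 0.5)) == (1.0, 0.5)
zone.teleport(100.0, 0.0)                # stride across the virtual city
assert zone.virtual_position((1.0, 0.5)) == (101.0, 0.5)
```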
• User locomotion: In many VR experiences, users want to move well
outside of the matched zone.
• Imagine you want to explore a virtual city while remaining seated in the
real world.
• A popular option is to move oneself in the virtual world by operating a
game controller, mouse, or keyboard.
• By pressing buttons or moving knobs, your avatar in the virtual world could
be walking, running, jumping, swimming, flying, and so on.
• You could also climb aboard a vehicle in the virtual world and operate
its controls to move yourself.
• These operations are certainly convenient, but often lead to sickness
because of a mismatch between your balance and visual senses.
• Networked experiences In the case of a networked VR
experience, a shared virtual world is maintained by a server.
• Each user has a distinct matched zone. Their matched zones
might overlap in the real world, but care must then be taken to
avoid unwanted physical collisions.
• Within the virtual world, user interactions, including collisions,
must be managed by the VWG.
• If multiple users are interacting in a social setting, then the
burdens of matched motions may increase.
• As users meet each other, they could expect to see eye motions,
facial expressions, and body language.
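One way a server or VWG could guard against the unwanted real-world collisions mentioned above is a simple overlap test on users' matched zones, here modeled as axis-aligned boxes. This is a deliberately simplified sketch; a real system would track richer room geometry:

```python
# Sketch: detect whether two users' real-world matched zones overlap,
# modeling each zone as an axis-aligned box given by min/max corners.
# A simplification for illustration only.

def zones_overlap(a_min, a_max, b_min, b_max):
    # Boxes overlap iff their intervals overlap on every axis.
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Two 1 m^3 seated zones, 0.5 m apart on the x-axis: no overlap.
print(zones_overlap([0, 0, 0], [1, 1, 1], [1.5, 0, 0], [2.5, 1, 1]))  # False
```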
Human Physiology and Perception
• Our bodies were not designed for VR. By applying artificial stimulation to the senses, we
are disrupting the operation of biological mechanisms that have taken hundreds of
millions of years to evolve in a natural environment.
• We are also providing input to the brain that is not exactly consistent with all of our
other life experiences.
• In some instances, our bodies may adapt to the new stimuli. This could cause us to
become unaware of flaws in the VR system.
• In other cases, we might develop heightened awareness or the ability to interpret 3D
scenes that were once difficult or ambiguous.
• Unfortunately, there are also many cases where our bodies react by increased fatigue or
headaches, partly because the brain is working harder than usual to interpret the
stimuli.
• Finally, the worst case is the onset of VR sickness, which typically involves symptoms of
dizziness and nausea.
• Perceptual psychology is the science of understanding how the brain
converts sensory stimulation into perceived phenomena.
• Here are some typical questions that arise in VR
1. How far away does that object appear to be?
2. How much video resolution is needed to avoid seeing pixels?
3. How many frames per second are enough to perceive motion as continuous?
4. Is the user’s head appearing at the proper height in the virtual world?
5. Where is that virtual sound coming from?
6. Why am I feeling nauseated?
7. Why is one experience more tiring than another?
8. What is presence?
• To answer these questions and more, we must understand several things:
1. Basic physiology of the human body, including sense organs and neural
pathways,
2. The key theories and insights of experimental perceptual psychology,
3. The interference of the engineered VR system with our common perceptual
processes and the resulting implications or side effects.
• The perceptual side of VR often attracts far too little attention among
developers.
• In the real world, perceptual processes are mostly invisible to us.
Recognizing a family member feels effortless, yet it requires an enormous
amount of hidden processing.
• Optical illusions One of the most popular ways to appreciate the complexity
of our perceptual processing is to view optical illusions.
• Each one is designed to reveal some shortcoming of our visual system by
providing a stimulus that is not quite consistent with ordinary stimuli in our
everyday lives.
• These should motivate you to appreciate the amount of work that our sense
organs and neural structures are doing to fill in missing details and make
interpretations based on the context of our life experiences and existing
biological structures.
• Classification of senses Perception and illusions are not limited to our
eyes.
• In each eye, over 100 million photoreceptors target electromagnetic energy precisely in the
frequency range of visible light.
• The auditory, touch, and balance senses involve motion, vibration, or gravitational force;
these are sensed by mechanoreceptors.
• The sense of touch additionally involves thermoreceptors to detect change in temperature.
• Our balance sense helps us to know which way our head is oriented.
• Finally, our sense of taste and smell is grouped into one category, called the chemical
senses, that relies on chemoreceptors; these provide signals based on chemical composition
of matter appearing on our tongue or in our nasal passages.
• Perception happens after the sense organs convert the stimuli into neural impulses.
• According to the latest estimates, human bodies contain around 86 billion neurons. Around 20 billion are devoted to the
part of the brain called the cerebral cortex, which handles perception and many other high-level functions such as
attention, memory, language, and consciousness.
• Another important factor in perception and overall cognitive ability is the interconnection between neurons. The
nucleus or cell body of each neuron is a node that does some kind of “processing”.
• The dendrites are essentially input edges to the neuron, whereas the axons are output edges.
• Through a network of dendrites, the neuron can aggregate information from numerous other neurons, which
themselves may have aggregated information from others.
• The result is sent to one or more neurons through the axon.
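The aggregation just described, in which a neuron combines inputs arriving on its dendrites and sends a result down its axon, is often modeled computationally as a weighted sum passed through a firing threshold; this is also the basis of artificial neural networks. A toy sketch, where the weights and threshold are arbitrary illustrative values, not biological data:

```python
# Toy model of a neuron aggregating dendrite inputs: a weighted sum of
# incoming signals, which "fires" down the axon only if it exceeds a
# threshold. Values are illustrative, not biological measurements.

def neuron_fires(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return activation > threshold

print(neuron_fires([0.9, 0.8, 0.1], [1.0, 0.5, 2.0]))  # 0.9+0.4+0.2 = 1.5 -> True
```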
• Hierarchical processing Upon leaving the sense-organ receptors, signals
propagate among the neurons to eventually reach the cerebral cortex.
• Along the way, hierarchical processing is performed. After passing through
several neurons, signals from numerous receptors are simultaneously taken
into account. This allows increasingly complex patterns to be detected in the
stimulus.
• In the case of vision, feature detectors appear in the early hierarchical
stages, enabling us to detect features such as edges, corners, and motion.
• Once in the cerebral cortex, the signals from sensors are combined with
anything else from our life experiences that may become relevant for making
an interpretation of the stimuli.
• Information or concepts that appear in the cerebral cortex tend to
represent a global picture of the world around us.
• Proprioception In addition to information from senses and memory, we also use
proprioception, which is the ability to sense the relative positions of parts of our
bodies and the amount of muscular effort being involved in moving them.
• Close your eyes and move your arms around in an open area. You should have an idea
of where your arms are located.
• The motor cortex, which controls body motion, sends signals called efference copies to
other parts of the brain to communicate what motions have been executed.
• Proprioception is effectively another kind of sense.
• Continuing our comparison with robots, it corresponds to having encoders on joints
or wheels, to indicate how far they have moved.
• One interesting implication of proprioception is that you cannot tickle yourself
because you know where your fingers are moving.
• However, if someone else tickles you, then you do not have access to their
efference copies. The lack of this information is crucial to the tickling sensation.
• Fusion of senses Signals from multiple senses and proprioception
are being processed and combined with our experiences by our neural
structures throughout our lives.
• Any attempt to interfere with these operations is likely to cause a
mismatch among the data from our senses.
• The brain might react by making us so consciously aware of the conflict
that we immediately understand that the experience is artificial.
• This would correspond to a case in which the VR experience is failing to
convince people that they are present in a virtual world.
• To make an effective and comfortable VR experience, trials with human
subjects are essential to understand how the brain reacts. It is
practically impossible to predict what would happen in an untested scenario.
• One of the most important examples of bad sensory conflict in
the context of VR is vection, which is the illusion of self-motion.
• The conflict arises when your vision sense reports to your brain
that you are accelerating, but your balance sense reports that
you are motionless.
• If you are stuck in traffic or stopped at a train station, you
might have felt as if you were moving backwards while seeing a
vehicle in your periphery that was moving forward.
• For example, if you accelerate yourself forward using a
controller, rather than moving forward in the real world, then you
perceive acceleration with your eyes, but not your vestibular
organ.
• Adaptation A universal feature of our sensory systems is adaptation,
which means that the perceived effect of stimuli changes over time.
• For example, the perceived loudness of motor noise in an aircraft or
car decreases within minutes.
• In the case of vision, the optical system of our eyes and the
photoreceptor sensitivities adapt to change perceived brightness.
• In military training simulations, sickness experienced by soldiers
appears to be less than expected, perhaps due to regular exposure.
• Anecdotally, the same seems to be true of experienced video game
players.
• Those who have spent many hours and days in front of large screens
playing first-person shooter games apparently experience less vection
when locomoting themselves in VR.
• Through repeated exposure, developers may become comfortable
with an experience that is nauseating to a newcomer.
• This gives them a terrible bias while developing an experience.
• On the other hand, developers may be able to improve their
debugging skills by noticing flaws in the system that an “untrained
eye” would easily miss.
• Common examples include:
• A large amount of tracking latency has appeared, which interferes with
the perception of stationarity.
• The left and right eye views are swapped.
• Objects appear to one eye but not the other.
• One eye view has significantly more latency than the other.
• Stevens’ power law One of the best-known results from psychophysics (the scientific
study of perceptual phenomena produced by physical stimuli) is Stevens’ power law, which
characterizes the relationship between the magnitude of a physical stimulus and its perceived magnitude.
• The hypothesis is that a power relationship holds over a wide range of sensory systems and stimuli:
• p = c·m^x
• in which
• m is the magnitude or intensity of the stimulus,
• p is the perceived magnitude,
• x is the exponent relating the actual magnitude to the perceived magnitude, and is the most important part of the equation,
and c is a constant that depends on the units.
• Note that x = 1 gives a linear relationship, p = c·m.
• An example of this is our perception of the length of an isolated line segment directly in front of our eyes.
The length we perceive is proportional to its actual length.
• For the case of perceiving the brightness of a target in the dark, x = 0.33, which implies that a large
increase in brightness is perceived as a smaller increase.
• In the other direction, our perception of electric shock as current through the fingers yields x = 3.5. A little
more shock is a lot more uncomfortable.
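The exponents above can be plugged directly into p = c·m^x to see how differently the senses scale. A quick numerical check, taking c = 1 for simplicity:

```python
# Stevens' power law: perceived magnitude p = c * m**x.
# With c = 1, doubling the stimulus m scales perception by 2**x.

def perceived(m, x, c=1.0):
    return c * m ** x

# Doubling brightness (x = 0.33) barely changes perception;
# doubling shock current (x = 3.5) feels far more than twice as strong.
print(round(perceived(2.0, 0.33), 3))  # ~1.257
print(round(perceived(2.0, 1.0), 3))   # 2.0 (linear, e.g., line length)
print(round(perceived(2.0, 3.5), 3))   # ~11.314
```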