
Virtual Reality

Dr. Eman Abdellatif


Lecture 4-5
Agenda
• Visual Perception
• Visual Rendering
Visual Perception

This part addresses visual rendering, which specifies what the visual display should show through an interface to the virtual world generator (VWG). We previously provided the mathematical parts (knowledge only), which express where the objects in the virtual world should appear on the screen. This was based on geometric models, rigid body transformations, and viewpoint transformations. We next need to determine how these objects should appear, based on knowledge about light propagation, visual physiology, and visual perception. We will cover the basic concepts; these are considered the core of computer graphics, but VR-specific issues also arise. They mainly address the case of rendering for virtual worlds that are formed synthetically. We will explain how to determine the light that should appear at a pixel based on light sources and the reflectance properties of materials that exist purely in the virtual world.

Another part explains rasterization methods, which efficiently solve the rendering problem and are widely used in specialized graphics hardware, called GPUs. A further part addresses VR-specific problems that arise from imperfections in the optical system.

This part also focuses on latency reduction, which is critical to VR, so that virtual objects appear in the right place at the right time. Otherwise, many side effects could arise, such as VR sickness, fatigue, adaptation to the flaws, or simply an unconvincing experience. Finally, we will explain rendering for captured, rather than synthetic, virtual worlds, covering VR experiences that are formed from panoramic photos and videos.
Ray Tracing and Shading Models
• Object-order versus image-order rendering
• Ray tracing, ray casting
• Lambertian shading
• Blinn-Phong shading
• Ambient shading
• BRDFs
• Global illumination
• VR-specific issues

Rasterization
• Depth buffer
• Barycentric coordinates
• Mapping the surface
• Aliasing
• Culling
• VR-specific rasterization problems

Correcting Optical Distortions

Improving Latency and Frame Rates
• A simple example
• The perfect system
• Historical problems
• Overview of latency reduction methods
• Simplifying the virtual world
• Improving rendering performance
• From rendered image to switching pixels
• The power of prediction
• Post-rendering image warp
• Flaws in the warped image
• Increasing the frame rate

Immersive Photos and Videos
• Texture mapping onto a virtual screen
• Capturing a wider field of view
• Mapping onto a sphere
• Perceptual issues
• Panoramic light fields
Rasterization

• Depth buffer

The ray casting operation quickly becomes a bottleneck. For a 1080p image at 90 Hz, it would need to be performed over 180 million times per second, and the ray-triangle intersection test would be performed for every triangle (although data structures such as a BSP tree would quickly eliminate many from consideration). In most common cases, it is much more efficient to switch from such image-order rendering to object-order rendering, in which the triangles themselves are processed one at a time.

Due to the possibility of depth cycles, objects cannot be sorted in three dimensions with respect to distance from the observer: each object may be partially in front of one and partially behind another. A simple and efficient method to resolve this problem is to manage the depth problem on a pixel-by-pixel basis by maintaining a depth buffer (also called z-buffer), which for every pixel records the distance of the triangle from the focal point to the intersection point of the ray that intersects the triangle at that pixel. In other words, if this were the ray casting approach, it would be the distance along the ray from the focal point to the intersection point. Using this method, the triangles can be rendered in arbitrary order. The method is also commonly applied to compute the effect of shadows by determining depth order from a light source, rather than the viewpoint: objects that are closer to the light cast a shadow on further objects.
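To make the pixel-by-pixel bookkeeping concrete, the following is a minimal sketch (not from the lecture) of a depth-buffered rasterizer loop; the enumeration of pixels covered by a triangle is abstracted away, and `draw_triangle` simply receives (x, y, z) fragments:

```python
import numpy as np

W, H = 640, 480
depth = np.full((H, W), np.inf)   # z-buffer: nearest hit distance per pixel
color = np.zeros((H, W, 3))       # frame buffer: RGB per pixel

def draw_triangle(fragments, tri_color):
    """fragments: iterable of (x, y, z) pixel samples covered by a triangle.

    Triangles may arrive in arbitrary order; the depth test keeps only
    the surface closest to the focal point at each pixel."""
    for x, y, z in fragments:
        if z < depth[y, x]:       # new fragment is closer than the stored one
            depth[y, x] = z
            color[y, x] = tri_color
```

The same structure, run with distances measured from a light source instead of the viewpoint, gives the shadow-computation idea mentioned above.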

• Barycentric coordinates

As each triangle is rendered, information from it is mapped from the virtual world onto the screen. This is usually accomplished using barycentric coordinates (α1, α2, α3), which express a point p inside the triangle with vertices p1, p2, and p3 as

p = α1 p1 + α2 p2 + α3 p3,

for which 0 ≤ α1, α2, α3 ≤ 1 and α1 + α2 + α3 = 1. The closer p is to a vertex pi, the larger the weight αi. If p is at the centroid of the triangle, then α1 = α2 = α3 = 1/3. If p lies on an edge, then the opposing vertex weight is zero; for example, if p lies on the edge between p1 and p2, then α3 = 0. If p lies on a vertex pi, then αi = 1, and the other two barycentric coordinates are zero.

The coordinates are calculated using Cramer's rule to solve the resulting linear system of equations. In particular, let dij = ei · ej for all combinations of i and j, in which e1, e2, and e3 are vectors determined by p and the triangle vertices (defined in a figure not reproduced here). Furthermore, let

s = 1/(d11 d22 − d12 d12).
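The slide stops before the closed-form weights, so the following is a minimal sketch of the standard Cramer's-rule computation; the edge-vector convention e1 = p2 − p1, e2 = p3 − p1, e3 = p − p1 is an assumption and may differ from the labeling in the original figure:

```python
import numpy as np

def barycentric(p, p1, p2, p3):
    """Return (a1, a2, a3) with p = a1*p1 + a2*p2 + a3*p3."""
    e1, e2, e3 = p2 - p1, p3 - p1, p - p1   # assumed edge-vector convention
    d11, d12, d22 = e1 @ e1, e1 @ e2, e2 @ e2
    d31, d32 = e3 @ e1, e3 @ e2
    s = 1.0 / (d11 * d22 - d12 * d12)       # Cramer's rule denominator
    a2 = s * (d22 * d31 - d12 * d32)
    a3 = s * (d11 * d32 - d12 * d31)
    a1 = 1.0 - a2 - a3                      # weights sum to one
    return a1, a2, a3

# Sanity check: the centroid has weights (1/3, 1/3, 1/3).
p1, p2, p3 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
print(barycentric((p1 + p2 + p3) / 3, p1, p2, p3))
```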
• Mapping the surface

Barycentric coordinates provide a simple and efficient method for linearly interpolating values across a triangle. The simplest case is the propagation of RGB values. Suppose RGB values are calculated at the three triangle vertices using the shading methods, yielding values (Ri, Gi, Bi) for each i from 1 to 3. For a point p in the triangle with barycentric coordinates (α1, α2, α3), the RGB values for the interior points are calculated as

R = α1 R1 + α2 R2 + α3 R3
G = α1 G1 + α2 G2 + α3 G3
B = α1 B1 + α2 B2 + α3 B3.

The object need not maintain the same properties over an entire triangular patch. With texture mapping, a repeating pattern, such as tiles or stripes, or even an entire image, can be mapped across the triangles and then rendered in the image to provide much more detail than is provided by the triangles in the model.
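As a tiny illustrative helper (the names are hypothetical, not the lecture's), the interpolation formulas above translate directly to code:

```python
def interpolate_rgb(alphas, vert_colors):
    """Linearly interpolate vertex RGB values using barycentric weights."""
    a1, a2, a3 = alphas
    (R1, G1, B1), (R2, G2, B2), (R3, G3, B3) = vert_colors
    return (a1 * R1 + a2 * R2 + a3 * R3,
            a1 * G1 + a2 * G2 + a3 * G3,
            a1 * B1 + a2 * B2 + a3 * B3)

# The centroid blends the three vertex colors equally:
print(interpolate_rgb((1/3, 1/3, 1/3),
                      [(255, 0, 0), (0, 255, 0), (0, 0, 255)]))
```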

• Aliasing

Several artifacts arise due to discretization. Aliasing problems, mentioned in earlier lectures, result in perceptible staircases in place of straight lines, due to insufficient pixel density. Pixels are selected for inclusion based on whether a sample point p, usually the center of the pixel, lies inside of the triangle. Note that the point may be inside of the triangle while the entire pixel is not; likewise, part of the pixel might be inside of the triangle while the center is not. Furthermore, representing the pixel as a square is not entirely accurate due to the subpixel mosaics used in displays; to be more precise, aliasing analysis should take this into account as well.
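To illustrate, here is a minimal sketch contrasting the center-point inclusion rule with 2×2 supersampling, which estimates how much of the pixel the triangle actually covers (the point-in-triangle test is a standard sign-of-cross-product check; the function names are mine, not the lecture's):

```python
def inside(x, y, tri):
    """Point-in-triangle test via the signs of edge cross-products."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    d1 = (x - x2) * (y1 - y2) - (x1 - x2) * (y - y2)
    d2 = (x - x3) * (y2 - y3) - (x2 - x3) * (y - y3)
    d3 = (x - x1) * (y3 - y1) - (x3 - x1) * (y - y1)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or \
           (d1 <= 0 and d2 <= 0 and d3 <= 0)

def coverage(px, py, tri, n=2):
    """Fraction of n*n subpixel samples inside the triangle (0.0 to 1.0)."""
    hits = sum(inside(px + (i + 0.5) / n, py + (j + 0.5) / n, tri)
               for i in range(n) for j in range(n))
    return hits / (n * n)
```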

Bump mapping: by artificially altering the surface normals, the shading algorithms produce an effect that looks like a rough surface.

(a) The rasterization stage results in aliasing: straight edges appear to be staircases. (b) Pixels are selected for inclusion based on whether their center point p lies inside of the triangle.

A mipmap stores the texture at multiple resolutions so that it can be appropriately scaled without causing significant aliasing. The overhead for storing the extra images is typically only 1/3 the size of the original (largest) image. (The image in the original slide is from NASA; the mipmap was created by Mike Hicks.)
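A minimal sketch of mipmap construction by repeated 2×2 averaging (assuming a square, power-of-two grayscale image as a NumPy array); the geometric series 1/4 + 1/16 + ... explains the roughly 1/3 storage overhead:

```python
import numpy as np

def build_mipmap(img):
    """Return [img, img at 1/2 res, 1/4 res, ...] down to a single pixel."""
    levels = [img]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        # Average each 2x2 block to halve the resolution.
        a = (a[0::2, 0::2] + a[1::2, 0::2] +
             a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
        levels.append(a)
    return levels

levels = build_mipmap(np.random.rand(256, 256))
extra = sum(l.size for l in levels[1:]) / levels[0].size
print(f"storage overhead: {extra:.3f}")   # approaches 1/3
```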

• Culling

In practice, many triangles can be quickly eliminated before attempting to render them. This results in a preprocessing phase of the rendering approach called culling, which dramatically improves performance and enables faster frame rates; a back-face culling sketch follows below.

(Right) Due to the perspective transformation, the tiled texture suffers from spatial aliasing as the depth increases. (Left) The problem can be fixed by performing supersampling.
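Culling can take several forms; as one hedged illustration (not detailed in the slide), a back-face test discards triangles whose outward normal points away from the viewer:

```python
import numpy as np

def backface(p1, p2, p3, eye):
    """True if the triangle faces away from the eye, assuming
    counterclockwise vertex winding for front faces."""
    n = np.cross(p2 - p1, p3 - p1)       # outward normal (CCW winding)
    return np.dot(n, p1 - eye) >= 0      # facing away: safe to cull
```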
• VR-specific rasterization problems

A Fresnel lens simulates a simple lens by making a corrugated surface. The convex surface on the top lens is implemented in the Fresnel lens shown on the bottom.
The staircasing problem due to aliasing is expected to be worse for VR because current resolutions are well below the required retina display density. The problem is made significantly worse by the continuously changing viewpoint due to head motion. Even as the user attempts to stare at an edge, the "stairs" appear to be more like an "escalator" because the exact choice of pixels to include in a triangle depends on subtle variations in the viewpoint. As part of our normal perceptual processes, our eyes are drawn toward this distracting motion. With stereo viewpoints, the situation is worse: the "escalators" from the right and left images will usually not match. As the brain attempts to fuse the two images into one coherent view, the aliasing artifacts provide a strong, moving mismatch. Reducing contrast at edges and using anti-aliasing techniques help alleviate the problem, but aliasing is likely to remain a significant problem until displays reach the required retina display density for VR.

A more serious difficulty is caused by the enhanced depth perception afforded by a VR system. Both head motions and stereo views enable users to perceive small differences in depth across surfaces. This should be a positive outcome; however, many tricks developed in computer graphics over the decades rely on the fact that people cannot perceive these differences when a virtual world is rendered onto a fixed screen that is viewed from a significant distance. The result for VR is that texture maps may look fake.
For example, texture mapping a carpet onto the floor might inadvertently cause the floor to look as if it were simply painted. In the real world we would certainly be able to distinguish painted carpet from real carpet. The same problem occurs with normal mapping: a surface that might look rough in a single static image due to bump mapping could look completely flat in VR as both eyes converge onto the surface. Thus, as the quality of VR systems improves, we should expect the rendering quality requirements to increase, causing many old tricks to be modified or abandoned.
Correcting Optical Distortions

The rendered image appears to have a barrel distortion. Note that the resolution is effectively dropped near the periphery.

Recall that barrel and pincushion distortions are common for an optical system with a high field of view. When looking through the lens of a VR headset, a pincushion distortion typically results. If the images are drawn on the screen without any correction, then the virtual world appears to be incorrectly warped. If the user yaws his head back and forth, then fixed lines in the world, such as walls, appear to dynamically change their curvature, because the distortion in the periphery is much stronger than in the center. If it is not corrected, then the perception of stationarity will fail because static objects should not appear to be warping dynamically. Furthermore, contributions may be made to VR sickness because incorrect accelerations are being visually perceived near the periphery. How can this problem be solved? Significant research is being done in this area, and the possible solutions involve different optical systems and display technologies.
For example, digital light processing (DLP) technology directly projects light into the eye without lenses. A more general corrective mapping could also be used; however, in practice this is often considered unnecessary. Correcting the distortion involves two phases:

1. Determine the radial distortion function f for a particular headset, which involves a particular lens placed at a fixed distance from the screen. This is a regression or curve-fitting problem that involves an experimental setup that measures the distortion of many points and selects the coefficients c1, c2, and so on, that provide the best fit.

2. Determine the inverse of f so that it can be applied to the rendered image before the lens causes its distortion. The composition of the inverse with f should cancel out the distortion function. Unfortunately, polynomial functions generally do not have inverses that can be determined or even expressed in closed form. Therefore, approximations are used.
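As a concrete illustration of the two phases (a sketch, not the lecture's method), assume a polynomial radial model f(r) = r(1 + c1 r² + c2 r⁴); since no closed-form inverse exists, a fixed-point iteration gives an approximate inverse. The coefficient values below are illustrative placeholders:

```python
def distort(r, c1, c2):
    """Radial distortion model: scaled radius after the lens."""
    return r * (1 + c1 * r**2 + c2 * r**4)

def undistort(r_target, c1, c2, iters=20):
    """Numerically invert the distortion with fixed-point iteration,
    so that distort(undistort(r)) ~= r."""
    r = r_target
    for _ in range(iters):
        r = r_target / (1 + c1 * r**2 + c2 * r**4)
    return r

# Illustrative coefficients for a pincushion-like lens:
c1, c2 = 0.22, 0.24
r = 0.8
print(distort(undistort(r, c1, c2), c1, c2))  # ~0.8: distortion cancelled
```

Applying `undistort` to every pixel's radial coordinate pre-warps the rendered image so that the lens's pincushion distortion cancels it out.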
Improving Latency and Frame Rates
• A simple example
• The perfect system
• Historical problems
• Overview of latency reduction methods
• Simplifying the virtual world
• Improving rendering performance
• From rendered image to switching pixels
• The power of prediction
• Post-rendering image warp
• Flaws in the warped image
• Increasing the frame rate

The motion-to-photons latency in a VR headset is the amount of time it takes to update the display in response to a change in head orientation and position. For example, suppose the user is fixating on a stationary feature in the virtual world. As the head yaws to the right, the image of the feature on the display must immediately shift to the left. Otherwise, the feature will appear to move if the eyes remain fixated on it. This breaks the perception of stationarity.

• A simple example

Consider the following example to get a feeling for the latency problem. Let d be the density of the display in pixels per degree, ω the angular velocity of the head in degrees per second, and ℓ the latency in seconds. Due to latency ℓ and angular velocity ω, the image is shifted by dωℓ pixels. For example, if d = 40 pixels per degree, ω = 50 degrees per second, and ℓ = 0.02 seconds, then the image is incorrectly displaced by dωℓ = 40 pixels. An extremely fast head turn might be at 300 degrees per second, which would result in a 240-pixel error.
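The arithmetic, as a quick check:

```python
def pixel_error(d, omega, latency):
    """Image displacement in pixels caused by motion-to-photons latency."""
    return d * omega * latency

print(pixel_error(40, 50, 0.02))   # 40 pixels for a typical head turn
print(pixel_error(40, 300, 0.02))  # 240 pixels for an extremely fast turn
```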

• The perfect system

As a thought experiment, imagine the perfect VR system. As the head moves, the viewpoint must accordingly change for visual rendering. A magic oracle perfectly indicates the head position and orientation at any time. The VWG continuously maintains the positions and orientations of all objects in the virtual world. The visual rendering system maintains all perspective and viewport transformations, and the entire rasterization process continuously sets the RGB values on the display according to the shading models. Progressing with this fantasy, the display itself continuously updates, taking no time to switch the pixels. The display has retina-level resolution, as described in Section 5.4, and a dynamic range of light output over seven orders of magnitude to match human perception. In this case, visual stimulation provided by the virtual world should match what would occur in a similar physical world in terms of the geometry. There would be no errors in time and space (although the physics might not match anyway due to assumptions about lighting, shading, material properties, color spaces, and so on).
• Historical problems

In practice, the perfect system is not realizable. All of these operations require time to
propagate information and perform computations.
In early VR systems, the total motion-to-photons latency was often over 100ms.
In the 1990s, 60ms was considered an acceptable amount. Latency has been stated
as one of the greatest causes of VR sickness, and therefore one of the main obstructions
to widespread adoption over the past decades. People generally adapt to a
fixed latency, which somewhat mitigates the problem, but then causes problems
when they have to readjust to the real world. Variable latencies are even worse due
to the inability to adapt [69]. Fortunately, latency is no longer the main problem
in most VR systems because of the latest-generation tracking, GPU, and display
technology. The latency may be around 15 to 25ms, which is even compensated
for by predictive methods in the tracking system. The result is that the effective
latency is very close to zero. Thus, other factors are now contributing more
strongly to VR sickness and fatigue, such as vection and optical aberrations.
• Overview of latency reduction methods

The following strategies are used together to both reduce the latency and to minimize the side effects of any remaining latency:

1. Lower the complexity of the virtual world.
2. Improve rendering pipeline performance.
3. Remove delays along the path from the rendered image to switching pixels.
4. Use prediction to estimate future viewpoints and world states.
5. Shift or distort the rendered image to compensate for last-moment viewpoint errors and missing frames.

Each of these will be described in succession.

A variety of mesh simplification algorithms can be used to reduce the model complexity while retaining the most important structures. Shown here is a simplification of a hand model made by the open-source library CGAL.
• Simplifying the virtual world

The virtual world is composed of geometric primitives, which are usually 3D triangles arranged in a mesh. The chain of transformations and the rasterization process must be applied to each triangle, resulting in a computational cost that is directly proportional to the number of triangles. Thus, a model that contains tens of millions of triangles will take orders of magnitude longer to render than one made of a few thousand. In many cases, we obtain models that are much larger than necessary. They can often be made much smaller (fewer triangles) with no perceptible difference, much in the same way that image, video, and audio compression works. Why are they too big in the first place? If the model was captured from a 3D scan of the real world, then it is likely to contain highly dense data. Capture systems such as the FARO Focus3D X Series capture large worlds while facing outward. Others, such as the Matter and Form MFSV1, capture a small object by rotating it on a turntable. As with cameras, systems that construct 3D models automatically are focused on producing highly accurate and dense representations, which maximize the model size. Even in the case of purely synthetic worlds, a modeling tool such as Maya or Blender will automatically construct a highly accurate mesh of triangles over a curved surface. Without taking specific care of later rendering burdens, the model could quickly become unwieldy. Fortunately, it is possible to reduce the model size by using mesh simplification algorithms, as in the sketch below.
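As a hedged illustration, here is what decimation to a target triangle budget looks like with the open-source Open3D library (the figure above mentions CGAL; Open3D is used here only because its Python API is compact, and the file path is hypothetical):

```python
import open3d as o3d

# Load a scanned or modeled mesh (hypothetical file path).
mesh = o3d.io.read_triangle_mesh("hand.ply")
print(f"before: {len(mesh.triangles)} triangles")

# Quadric-error decimation keeps the most important structures
# while drastically reducing the triangle count.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
print(f"after: {len(simplified.triangles)} triangles")

o3d.io.write_triangle_mesh("hand_simplified.ply", simplified)
```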
In this case, one must be careful to make sure that the simplified model will have sufficient quality from all viewpoints that might arise in the targeted VR system. In some systems, such as Unity 3D, reducing the number of different material properties across the model will also improve performance.

In addition to reducing the rendering time, a simplified model will also lower computational demands on the Virtual World Generator (VWG). For a static world, the VWG does not need to perform any updates after initialization; the user simply views the fixed world. For dynamic worlds, the VWG maintains a simulation of the virtual world that moves all geometric bodies while satisfying physical laws that mimic the real world. It must handle the motions of any avatars, falling objects, moving vehicles, swaying trees, and so on. Collision detection methods are needed to make bodies react appropriately when in contact. Differential equations that model motion laws may be integrated to place bodies correctly over time.

For now, it is sufficient to understand that the VWG must maintain a coherent snapshot of the virtual world each time a rendering request is made. Thus, the VWG has a frame rate in the same way as a display or visual rendering system.

Each VWG frame corresponds to the placement of all geometric bodies for a common time instant. How
many times per second can the VWG be updated? Can a high, constant rate of VWG frames be maintained?
What happens when a rendering request is made while the VWG is in the middle of updating the world? If
the rendering module does not wait for the VWG update to be completed, then some objects could be
incorrectly placed because some are updated while others are not. Thus, the system should ideally wait until
a complete VWG frame is finished before rendering. This suggests that the VWG update should be at least
as fast as the rendering process, and the two should be carefully synchronized so that a complete, fresh
VWG frame is always ready for rendering.
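A minimal sketch of that synchronization, using a double-buffered snapshot so the renderer never reads a half-updated world (all names are hypothetical, and the world state is assumed to provide a copy() method):

```python
import threading

class VWG:
    """Double-buffered world snapshots: the renderer always reads a
    complete, fresh VWG frame, never a half-updated world."""

    def __init__(self, world):
        self._snapshot = world.copy()
        self._lock = threading.Lock()

    def update(self, world):
        # Move avatars, integrate motion laws, resolve collisions ...
        # then publish the finished frame atomically.
        with self._lock:
            self._snapshot = world.copy()

    def latest_frame(self):
        # Called by the renderer once per display frame.
        with self._lock:
            return self._snapshot
```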
Thank you
