Lecture 11
Visual Rendering
This part addresses visual rendering, which specifies what the visual display should show through an interface to the virtual world generator (VWG). Earlier, we provided the mathematical foundations, which express where the objects in the virtual world should appear on the screen. This was based on geometric models, rigid body transformations, and viewpoint transformations. We next need to determine how these objects should appear, based on knowledge about light propagation, visual physiology, and visual perception. We will cover the basic concepts; these are considered the core of computer graphics, but VR-specific issues also arise. They mainly address the case of rendering for virtual worlds that are formed synthetically. We will explain how to determine the light that should appear at a pixel based on light sources and the reflectance properties of materials that exist purely in the virtual world.
Another part explains rasterization methods, which efficiently solve the rendering problem and are widely used in specialized graphics hardware, called GPUs. A later part addresses VR-specific problems that arise from imperfections in the optical system.
This part also focuses on latency reduction, which is critical to VR, so that virtual objects appear in the right place at the right time. Otherwise, many side effects could arise, such as VR sickness, fatigue, adaptation to the flaws, or simply an unconvincing experience. Finally, we explain rendering for captured, rather than synthetic, virtual worlds. This covers VR experiences that are formed from panoramic photos and videos.
Topics: Ray Tracing and Shading Models; Rasterization; Improving Latency and Frame Rates; Immersive Photos and Videos
• Depth buffer
The ray casting operation quickly becomes a bottleneck. For a 1080p image at 90Hz, it would need to be performed over 180 million times per second, and the ray-triangle intersection test would be performed for every triangle (although data structures such as a BSP would quickly eliminate many from consideration). In most common cases, it is much more efficient to switch from such image-order rendering to object-order rendering, in which the triangles are processed one at a time. Due to the possibility of depth cycles, objects cannot be sorted in three dimensions with respect to distance from the observer: each object may be partially in front of one and partially behind another. A simple and efficient method to resolve this problem is to manage the depth problem on a pixel-by-pixel basis by maintaining a depth buffer (also called z-buffer), which for every pixel records the distance of the triangle from the focal point to the intersection point of the ray that intersects the triangle at that pixel. In other words, if this were the ray casting approach, it would be the distance along the ray from the focal point to the intersection point. Using this method, the triangles can be rendered in arbitrary order. The method is also commonly applied to compute the effect of shadows by determining depth order from a light source, rather than the viewpoint: objects that are closer to the light cast a shadow on further objects.
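To make the depth-buffer idea concrete, here is a minimal Python sketch; the step that produces per-pixel fragments from a triangle is assumed (a hypothetical rasterize() helper), and real GPUs implement all of this in hardware:

```python
import math

WIDTH, HEIGHT = 1920, 1080

# One depth value per pixel, initialized to "infinitely far away".
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def draw_triangle(fragments):
    """Merge one triangle's fragments into the image. 'fragments' is an
    iterable of (x, y, depth, color) tuples, e.g. produced by a
    hypothetical rasterize(triangle) helper."""
    for x, y, depth, color in fragments:
        if depth < depth_buffer[y][x]:   # closer than anything drawn so far?
            depth_buffer[y][x] = depth   # record the new nearest depth
            frame_buffer[y][x] = color   # overwrite the pixel color

# Because only the nearest fragment survives at each pixel,
# triangles may be submitted in any order.
```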
Rasterization: Depth buffer; Barycentric coordinates; Mapping the surface; Aliasing; Culling; VR-specific rasterization problems
• Barycentric coordinates
As each triangle is rendered, information from it is mapped from the virtual world onto the screen. This is usually accomplished using barycentric coordinates.
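As a sketch of how this works, the following Python function computes barycentric coordinates for a 2D screen-space point and uses them to blend per-vertex attributes; the vertex and attribute names are illustrative, not from the lecture:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (alpha, beta, gamma) of point p in triangle abc,
    where each point is an (x, y) pair in screen space."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    alpha = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    beta = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return alpha, beta, 1.0 - alpha - beta

def interpolate(p, tri, attrs):
    """Blend a scalar per-vertex attribute (depth, a color channel,
    a texture coordinate) at point p inside triangle tri."""
    alpha, beta, gamma = barycentric(p, *tri)
    return alpha * attrs[0] + beta * attrs[1] + gamma * attrs[2]
```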
• Culling
In practice, many triangles can be quickly eliminated before attempting to render them. This results in a preprocessing phase of the rendering approach called culling, which dramatically improves performance and enables faster frame rates.
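One simple culling test is back-face culling, which discards triangles that face away from the viewer. Below is a minimal Python sketch under assumed conventions (counterclockwise winding for front faces; view_dir pointing from the eye into the scene); real pipelines also apply view-frustum and occlusion culling:

```python
def backface_cull(triangles, view_dir):
    """Keep only triangles whose outward normal faces the viewer."""
    visible = []
    for a, b, c in triangles:  # each vertex is an (x, y, z) tuple
        # Outward normal via the cross product of two edge vectors.
        e1 = [b[i] - a[i] for i in range(3)]
        e2 = [c[i] - a[i] for i in range(3)]
        n = [e1[1] * e2[2] - e1[2] * e2[1],
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        # A triangle faces the viewer if its normal points against view_dir.
        if sum(n[i] * view_dir[i] for i in range(3)) < 0:
            visible.append((a, b, c))
    return visible
```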
• VR-specific rasterization problems
Texture mapping a carpet onto the floor, for example, might inadvertently cause the floor to look as if it were simply painted. In the real world we would certainly be able to distinguish painted carpet from real carpet. The same problem occurs with normal mapping. A surface that might look rough in a single static image due to bump mapping could look completely flat in VR as both eyes converge onto the surface. Thus, as the quality of VR systems improves, we should expect the rendering quality requirements to increase, causing many old tricks to be modified or abandoned.
Correcting Optical Distortions
Recall that barrel and pincushion distortions are common for an optical system with a high field of view (next image). When looking through the lens of a VR headset, a pincushion distortion typically results. If the images are drawn on the screen without any correction, then the virtual world appears incorrectly warped. If the user yaws his head back and forth, then fixed lines in the world, such as walls, appear to dynamically change their curvature, because the distortion in the periphery is much stronger than in the center. If it is not corrected, then the perception of stationarity will fail, because static objects should not appear to warp dynamically. Furthermore, it may contribute to VR sickness, because incorrect accelerations are being visually perceived near the periphery. How can this problem be solved? Significant research is being done in this area, and the possible solutions involve different optical systems and display technologies.
For example, digital light processing (DLP) technology directly projects light into the eye without lenses. A more elaborate optical design could also be used, as shown on the right above; however, in practice this is often considered unnecessary. Correcting the distortion in software involves two phases:
1. Determine the radial distortion function f for a particular headset, which involves a particular lens
placed at a fixed distance from the screen. This is a regression or curve-fitting problem that involves
an experimental setup that measures the distortion of many points and selects the coefficients c1,
c2, and so on, that provide the best fit.
2. Determine the inverse of f so that it can be applied to the rendered image before the lens causes its
distortion. The composition of the inverse with f should cancel out the distortion function.
Unfortunately, polynomial functions generally do not have inverses that can be determined or even
expressed in a closed form. Therefore, approximations are used.
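As an illustration of both phases, here is a minimal Python sketch assuming the common radial model f(r) = r(1 + c1·r² + c2·r⁴) with fitted coefficients c1 and c2 (the lecture's exact polynomial form may differ); since f has no closed-form inverse, the inverse is approximated numerically, here with Newton's method:

```python
def distortion(r, c1, c2):
    """Radial distortion model f(r) = r * (1 + c1*r**2 + c2*r**4);
    c1 and c2 come from the curve-fitting phase (phase 1)."""
    return r * (1.0 + c1 * r**2 + c2 * r**4)

def inverse_distortion(r_target, c1, c2, iterations=5):
    """Approximate f^{-1}(r_target) by Newton's method (phase 2),
    since the polynomial has no closed-form inverse."""
    r = r_target  # initial guess: undistorted radius equals target radius
    for _ in range(iterations):
        err = distortion(r, c1, c2) - r_target
        deriv = 1.0 + 3.0 * c1 * r**2 + 5.0 * c2 * r**4  # f'(r)
        r -= err / deriv
    return r
```

Applying inverse_distortion to each pixel's radial distance pre-warps the rendered image so that the lens's distortion cancels it.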
• A simple example
Consider the following example to get a feeling for the latency problem. Let d be the density of the display in pixels per degree. Let ω be the angular velocity of the head in degrees per second. Let ℓ be the latency in seconds. Due to latency ℓ and angular velocity ω, the image is shifted by dωℓ pixels. For example, if d = 40 pixels per degree, ω = 50 degrees per second, and ℓ = 0.02 seconds, then the image is incorrectly displaced by dωℓ = 40 pixels. An extremely fast head turn might be at 300 degrees per second, which would result in a 240-pixel error.
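The calculation can be checked directly; this tiny Python snippet just evaluates dωℓ for the two cases above:

```python
def pixel_error(d, omega, latency):
    """Displacement in pixels: density (px/deg) * angular velocity (deg/s) * latency (s)."""
    return d * omega * latency

print(pixel_error(40, 50, 0.02))   # ordinary head motion: 40.0 pixels
print(pixel_error(40, 300, 0.02))  # extremely fast head turn: 240.0 pixels
```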
Improving Latency and Frame Rates: A simple example; The perfect system; Historical problems; Overview of latency reduction methods; Simplifying the virtual world; Improving rendering performance; From rendered image to switching pixels; The power of prediction; Post-rendering image warp; Flaws in the warped image; Increasing the frame rate
• The perfect system
In practice, the perfect system is not realizable. All of these operations require time to propagate information and perform computations.
• Historical problems
In early VR systems, the total motion-to-photons latency was often over 100ms. In the 1990s, 60ms was considered an acceptable amount. Latency has been cited as one of the greatest causes of VR sickness, and therefore one of the main obstructions to widespread adoption over the past decades. People generally adapt to a fixed latency, which somewhat mitigates the problem, but then causes problems when they have to readjust to the real world. Variable latencies are even worse, due to the inability to adapt [69]. Fortunately, latency is no longer the main problem in most VR systems because of the latest-generation tracking, GPU, and display technology. The remaining latency may be around 15 to 25ms, and it is further compensated for by predictive methods in the tracking system. The result is that the effective latency is very close to zero. Thus, other factors, such as vection and optical aberrations, are now contributing more strongly to VR sickness and fatigue.
• Overview of latency reduction methods
The following strategies are used together, both to reduce the latency and to minimize the side effects of any remaining latency: (1) simplify the virtual world, (2) improve rendering pipeline performance, (3) remove delays along the path from the rendered image to the switching pixels, (4) use prediction to estimate future viewpoints, and (5) warp the rendered image at the last moment (post-rendering image warp).
• Simplifying the virtual world
In addition to reducing the rendering time, a simplified model will also lower computational demands on the Virtual World Generator (VWG). For a static world, the VWG does not need to perform any updates after initialization. The user simply views the fixed world. For dynamic worlds, the VWG maintains a simulation of the virtual world that moves all geometric bodies while satisfying physical laws that mimic the real world. It must handle the motions of any avatars, falling objects, moving vehicles, swaying trees, and so on. Collision detection methods are needed to make bodies react appropriately when in contact. Differential equations that model motion laws may be integrated to place bodies correctly over time, as sketched below.
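As a sketch of such integration, the following Python snippet advances a falling body by one VWG frame using semi-implicit Euler; the timestep and units are assumptions for illustration:

```python
GRAVITY = -9.81   # m/s^2 along the z axis (assumed units)
DT = 1.0 / 90.0   # assumed VWG timestep, matching a 90Hz rendering rate

def step_body(position, velocity):
    """Advance one falling body by one VWG frame (semi-implicit Euler)."""
    x, y, z = position
    vx, vy, vz = velocity
    vz += GRAVITY * DT  # integrate acceleration into velocity first
    # Then integrate the updated velocity into position.
    return (x + vx * DT, y + vy * DT, z + vz * DT), (vx, vy, vz)
```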
For now, it is sufficient to understand that the VWG must maintain a coherent snapshot of the virtual world each time a rendering request is made. Thus, the VWG has a frame rate in the same way as a display or visual rendering system.
Each VWG frame corresponds to the placement of all geometric bodies for a common time instant. How
many times per second can the VWG be updated? Can a high, constant rate of VWG frames be maintained?
What happens when a rendering request is made while the VWG is in the middle of updating the world? If
the rendering module does not wait for the VWG update to be completed, then some objects could be
incorrectly placed because some are updated while others are not. Thus, the system should ideally wait until
a complete VWG frame is finished before rendering. This suggests that the VWG update should be at least
as fast as the rendering process, and the two should be carefully synchronized so that a complete, fresh
VWG frame is always ready for rendering.
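One way to achieve this synchronization, sketched here under the assumption of a multithreaded VWG and renderer, is to double-buffer the world state so that the renderer always reads a complete snapshot; this is an illustrative design, not a prescribed implementation:

```python
import threading

class WorldSnapshot:
    """Double-buffered world state: the VWG publishes only complete frames,
    so the renderer never sees a half-updated world."""
    def __init__(self, initial_state):
        self._lock = threading.Lock()
        self._front = initial_state  # last complete VWG frame

    def publish(self, new_state):
        # Called by the VWG only after ALL bodies are updated for this frame.
        with self._lock:
            self._front = new_state

    def latest(self):
        # Called by the renderer; always returns a coherent snapshot.
        with self._lock:
            return self._front
```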
Thank you