Data Visualization
(Lecture 2)
Modeling vs. Rendering
Modeling:
Create models
Apply materials to models
Place models around the scene
Place lights in the scene
Place the camera

Rendering:
Take a "picture" with the camera

Both can be done with commercial software: Autodesk Maya™, 3D Studio Max™, Blender™, etc.
[Figure: light types in a scene (point light, spot light, directional light, ambient light)]
Upcoming Topics
We manipulated primitive shapes with geometric transformations (translation, rotation, scale). These transformations are essential for model organization, the process of composing complex objects from simpler components.
Hierarchical models and geometric transformations are also essential for animation, where we create and edit scene graphs.
Once an object's geometry is established, it must be viewed on screen: we map from 3D geometry to 2D projections for viewing, and from 2D back to 3D for 2D input devices (e.g., the mouse, pen/stylus, or touch).
While mapping from 3D to 2D, object (surface) material properties and lighting effects are used in rendering one's constructions.
How Do Computers Draw Images?
The computer program stores the image as a two-dimensional array of pixel values in RAM (called a frame buffer).
The display hardware produces the image line by line (these lines are called raster lines).
A hardware device called a video controller constantly reads the frame buffer and produces the image on the display.
A program modifies the display by writing into the frame buffer, thus instantly altering the image that is displayed.
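A minimal sketch of the idea in Python: the frame buffer is nothing more than a 2D array of pixel values that the program writes into. The 640x480 resolution and the set_pixel helper are illustrative assumptions, not part of the lecture.

```python
# A frame buffer: a two-dimensional array of RGB pixel values held in RAM.
WIDTH, HEIGHT = 640, 480                       # assumed display resolution

# Initialize every pixel to black (0, 0, 0).
frame_buffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Write one pixel; the video controller would pick this up on its next scan."""
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        frame_buffer[y][x] = color

# Drawing a single red pixel simply means writing into the array.
set_pixel(100, 50, (255, 0, 0))
```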
Raster Display Systems
[Figure: a raster display system, with the frame buffer driving a cathode-ray tube (CRT)]
Computer Graphics
3D Graphics = geometry + transformations + materials +
textures + lighting + viewing
geometry (geometric modeling): Objects are reduced to basic shapes such as
line segments and triangles.
transformations: Objects are scaled, rotated, and translated to place them into
the scene.
materials: properties such as color, shininess, transparency.
textures: images and other effects applied to a surface/object.
lighting: the effects of light sources in the scene, illuminating the objects and
making them visible.
viewing (projection): simulates a "camera", projecting a 3D scene to a 2D
image.
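To make the formula concrete, here is a hedged sketch of how those six ingredients might be bundled into a scene description. All field names and values are illustrative assumptions, not taken from any particular API.

```python
# One object in a scene, described by the ingredients above.
scene_object = {
    "geometry": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],   # a single triangle
    "transform": {"translate": (2.0, 0.0, -5.0), "rotate_y_deg": 45.0, "scale": 1.5},
    "material": {"color": (0.8, 0.1, 0.1), "shininess": 32, "transparency": 0.0},
    "texture": "brick.png",                            # image applied to the surface
}

scene = {
    "objects": [scene_object],
    "lights": [{"type": "point", "position": (0, 5, 0), "intensity": 1.0}],
    "camera": {"position": (0, 1, 3), "look_at": (0, 0, 0), "fov_deg": 60},  # viewing
}
```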
3D Graphics Pipeline
Bus interface/Front End
Interface to the system to send and receive data and commands.
Vertex Processing
Converts each vertex into a 2D screen position, and lighting may be applied to
determine its color. A programmable vertex shader enables the application to
perform custom transformations for effects such as warping or deformations of
a shape.
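As a rough sketch of what vertex processing amounts to, the following Python converts a 3D vertex to a 2D screen position. The 4x4 model-view-projection matrix, the 800x600 viewport, and the function names are illustrative assumptions, not the GPU's actual code.

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major list of lists) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_to_screen(vertex, mvp, width=800, height=600):
    """Transform a 3D vertex into a 2D screen position (vertex-processing sketch)."""
    x, y, z = vertex
    clip = mat_vec4(mvp, [x, y, z, 1.0])          # model-view-projection transform
    ndc = [clip[i] / clip[3] for i in range(3)]   # perspective divide -> [-1, 1] cube
    sx = (ndc[0] + 1.0) * 0.5 * width             # viewport mapping to pixels
    sy = (1.0 - ndc[1]) * 0.5 * height            # flip y: screen origin is top-left
    return sx, sy, ndc[2]                         # keep depth for later stages

# Identity MVP: the vertex is assumed to already be in clip space.
identity = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
print(vertex_to_screen((0.5, -0.25, 0.0), identity))   # -> (600.0, 375.0, 0.0)
```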
Clipping
This removes the parts of the image that are not visible in the 2D screen view
such as the backsides of objects or areas that the application or window system
covers.
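A rough sketch of the test behind clipping: after the perspective divide, the visible volume is a cube in normalized device coordinates, and geometry outside it can be discarded. The function name and sample points are illustrative assumptions, and real clipping also splits triangles that straddle the boundary.

```python
def inside_view_volume(ndc):
    """True if a point in normalized device coordinates falls inside the visible cube."""
    x, y, z = ndc
    return -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 and -1.0 <= z <= 1.0

print(inside_view_volume((0.2, -0.5, 0.9)))   # True: kept
print(inside_view_volume((1.4, 0.0, 0.0)))    # False: clipped away
```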
Primitive Assembly, Triangle Setup
Vertices are collected and converted into triangles. Information is generated that
will allow later stages to accurately generate the attributes of every pixel
associated with the triangle.
Rasterization
The triangles are filled with pixels known as "fragments," which may or may not wind up in the frame buffer, for example if a fragment causes no change to that pixel or if it winds up being hidden.
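A simplified rasterizer sketch in Python, using edge-function tests over the triangle's bounding box to find the covered pixels (fragments). The function names and the sample triangle are illustrative assumptions.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge (a -> b) the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2):
    """Yield the integer pixel positions covered by a 2D triangle (its fragments)."""
    min_x, max_x = int(min(v0[0], v1[0], v2[0])), int(max(v0[0], v1[0], v2[0]))
    min_y, max_y = int(min(v0[1], v1[1], v2[1])), int(max(v0[1], v1[1], v2[1]))
    area = edge(*v0, *v1, *v2)                  # twice the signed triangle area
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            w0 = edge(*v1, *v2, x, y)
            w1 = edge(*v2, *v0, x, y)
            w2 = edge(*v0, *v1, x, y)
            # Inside when all edge functions share the sign of the triangle area.
            if area != 0 and all((w * area) >= 0 for w in (w0, w1, w2)):
                yield x, y

fragments = list(rasterize_triangle((10, 10), (30, 12), (20, 25)))
```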
Occlusion Culling
Removes pixels that are hidden (occluded) by other objects in the scene.
Parameter Interpolation
The values for each pixel that were rasterized are computed, based on color, fog,
texture, etc.
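One way to picture parameter interpolation: each fragment's attribute value is a weighted blend of the three vertex values, using the barycentric weights produced during rasterization. This is a hedged sketch; the function name and sample colors are illustrative.

```python
def interpolate_color(c0, c1, c2, w0, w1, w2):
    """Blend three per-vertex RGB colors with barycentric weights (w0 + w1 + w2 > 0)."""
    total = w0 + w1 + w2
    return tuple((w0 * a + w1 * b + w2 * c) / total for a, b, c in zip(c0, c1, c2))

# A fragment exactly in the middle of a red, green, and blue vertex:
print(interpolate_color((255, 0, 0), (0, 255, 0), (0, 0, 255), 1, 1, 1))
# -> (85.0, 85.0, 85.0)
```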
Pixel Shader
This stage adds textures and final colors to the fragments. Also called a "fragment shader,"
a programmable pixel shader enables the application to combine a pixel's attributes, such
as color, depth and position on screen, with textures in a user-defined way to generate
custom shading effects.
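A toy sketch of what a pixel (fragment) shader does with its inputs: combine the interpolated color with a texture sample in a user-defined way. The nearest-neighbour lookup, the checkerboard texture, and the function name are illustrative assumptions, not a real shading language.

```python
def pixel_shader(frag_color, uv, texture):
    """Toy 'fragment shader': modulate the interpolated color by a texture sample."""
    th, tw = len(texture), len(texture[0])
    tx = min(int(uv[0] * tw), tw - 1)          # nearest-neighbour texture lookup
    ty = min(int(uv[1] * th), th - 1)
    texel = texture[ty][tx]
    return tuple(fc * tc // 255 for fc, tc in zip(frag_color, texel))

# A 2x2 checkerboard texture; sample its bright corner with a red fragment color.
checker = [[(255, 255, 255), (0, 0, 0)],
           [(0, 0, 0), (255, 255, 255)]]
print(pixel_shader((255, 0, 0), (0.1, 0.1), checker))   # -> (255, 0, 0)
```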
Pixel Engines
Mathematically combine the final fragment color, its coverage, and its degree of transparency with the existing data stored at the associated 2D location in the frame buffer to produce the final color for the pixel to be stored at that location. A depth (Z) value is also output for the pixel.
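The combining step can be sketched as a simple "over" blend of the incoming fragment with the pixel already stored in the frame buffer, where alpha stands for the fragment's degree of transparency. The names and sample values are illustrative assumptions.

```python
def blend_over(src_color, src_alpha, dst_color):
    """Combine a fragment with the existing frame-buffer pixel (source-over blend)."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_color, dst_color))

# A half-transparent red fragment over a white background pixel:
print(blend_over((255, 0, 0), 0.5, (255, 255, 255)))   # -> (255.0, 127.5, 127.5)
```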
Frame Buffer Controller
The frame buffer controller interfaces to the physical memory used to hold the actual pixel values displayed on screen. The frame buffer memory is also often used to store graphics commands, textures, and other attributes associated with each pixel.
How are graphical images represented?
There are four basic types that make up virtually all computer-generated pictures:
1. Polylines,
2. Filled Regions,
3. Text,
4. Raster Images.
1) Polylines
A polyline (or, more properly, a polygonal curve) is a finite sequence of line segments joined end to end.
These line segments are called edges, and the endpoints of the line segments are called vertices.
A single line segment is a special case.
A polyline is closed if it ends where it starts.
1) Polylines
A polyline in the plane can be represented simply as a sequence of the (x, y) coordinates of its vertices (see the sketch after this list).
The way in which the polyline is rendered is determined by a set of properties called graphical attributes:
Color
Line width
Line style (solid, dotted, dashed)
Joint style (rounded, mitered, or beveled).
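A minimal sketch of this representation in Python: the polyline is the ordered list of its vertex coordinates plus its graphical attributes. The attribute names and values are illustrative assumptions.

```python
# A polyline: the ordered (x, y) vertices plus the attributes that control rendering.
polyline = {
    "vertices": [(0, 0), (40, 10), (60, 50), (20, 60)],
    "closed": False,               # True if it ends where it starts
    "color": (0, 0, 255),
    "line_width": 2,
    "line_style": "dashed",        # solid, dotted, or dashed
    "joint_style": "mitered",      # rounded, mitered, or beveled
}

# The edges are the consecutive vertex pairs (plus the closing edge if closed).
edges = list(zip(polyline["vertices"], polyline["vertices"][1:]))
```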
1) Polylines and joint styles
[Figure: polyline joint styles (rounded, mitered, and beveled)]
2) Filled regions
Any simple, closed polyline in the plane defines a region consisting of an
inside and outside.
We can fill any such region with a color or repeating pattern.
In some instances the bounding polyline itself is also drawn; in others it is not.
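A hedged sketch of the inside/outside test that region filling relies on, using the even-odd (ray-crossing) rule. The function name, the sample square, and the test points are illustrative assumptions.

```python
def point_in_polygon(px, py, vertices):
    """Even-odd rule: inside if a ray from the point crosses the boundary an odd number of times."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]          # wrap around: the boundary is closed
        crosses = (y1 > py) != (y2 > py)
        if crosses and px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square), point_in_polygon(15, 5, square))  # True False
```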
3) Text
Text can be thought of as a sequence of characters in some font.
As with polylines there are numerous attributes which affect how the
text appears.
Font attributes:
Face (times-roman, helvetica, courier, for example),
Weight (normal, bold, light),
Style or slant (normal, italic, oblique, for example),
Size, which is usually measured in points (a printer's unit of measure equal to 1/72 inch),
Color.