Unit-3 CAD Completed

This document discusses techniques for visual realism in computer-aided design (CAD). It describes hidden line, surface, and solid removal algorithms which remove obscured areas to create more realistic renderings. These algorithms prioritize surfaces based on depth and remove any hidden parts. The document also discusses shading models which add color and shadows to create photorealistic images. Coherence principles and sorting methods are explained which improve the efficiency of removal algorithms by leveraging relationships between design elements.


PEC / DoME / III Year- Mechanical Engineering / V Sem / ME 6501: COMPUTER AIDED DESIGN

UNIT III
VISUAL REALISM
Hidden line, surface and solid removal algorithms; shading; colouring; computer animation.

VISUAL REALISM - INTRODUCTION


What is Visual Realism?
Visual realism is a method for interpreting picture data fed into a computer
and for creating pictures from complex multidimensional data sets.
Visualization can be classified as :
o Visualization in geometric modeling
o Visualization in scientific computing.
Visualization in geometric modeling is helpful in finding connections in
design applications.
By shading parts with various shadows, colors and transparency, the
designer can recognize undesired, unknown interferences.
In the design of complex surfaces, shading with different texture
characteristics can be used to detect undesired abrupt changes in the
surface.
Visualization in scientific computing converts data in numerical form into
picture displays, allowing users to view their simulations and computations.
Visualization offers a process of seeing the hidden.
Visualization in scientific computing is therefore of great interest to
engineers during the design process.
Existing visualization methods are:
o Parallel projection
o Perspective projection
o Hidden line removal
o Hidden surface removal
o Hidden solid removal
o Shaded models
M. Puviyarasan | CAD | Unit - III


Hidden line and surface removal methods remove the ambiguity of the
displays of 3D models and are accepted as the first step towards visual realism.
Shaded images can only be created for surface and solid models.
In the multiple-step shading process, the first step removes the hidden
surfaces/solids and the second step shades only the visible areas.
Shaded images provide the maximum level of visualization.
The process of hidden removal requires large amounts of computing time
as well as high-end hardware.
The creation and maintenance of such models becomes complex.
Hence, creating real-time images requires high-end computers with the
shading algorithms embedded into the hardware.

Visibility Techniques
In general these techniques attempt to establish relationships among
polygons and edges in the viewing plane. The techniques normally check for
overlapping of polygons in the viewing plane. If overlapping occurs, depth
comparisons are performed.

o Minimax test
o Edge intersection
o Containment test
o Segment (scan-line) comparison
o Surface test (back-face/depth test)
o Homogeneity test
o Computing silhouettes
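As an illustration of the first of these checks, the minimax (bounding-box) test can be sketched as below: if the bounding rectangles of two polygons in the viewing plane do not overlap, no further (more expensive) intersection or depth test is needed. The polygon representation and function names are illustrative, not from the source.

```python
def bbox(poly):
    """Axis-aligned bounding rectangle of a 2D polygon [(x, y), ...]."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def minimax_overlap(poly_a, poly_b):
    """Minimax test: True if the bounding rectangles overlap.

    A False result means the polygons cannot overlap, so the pair
    can be rejected without any exact intersection test.
    """
    ax0, ay0, ax1, ay1 = bbox(poly_a)
    bx0, by0, bx1, by1 = bbox(poly_b)
    return not (ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0)

# Two triangles whose boxes overlap, and one far away:
t1 = [(0, 0), (2, 0), (1, 2)]
t2 = [(1, 1), (3, 1), (2, 3)]
t3 = [(10, 10), (12, 10), (11, 12)]
print(minimax_overlap(t1, t2))  # True
print(minimax_overlap(t1, t3))  # False
```

Only pairs that pass this quick test proceed to the exact edge-intersection and depth comparisons.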
Object-space algorithms are more accurate than image-space algorithms.
The former perform geometric calculations using the floating-point
precision of the computer hardware, while the latter perform calculations
with accuracy equal to the resolution of the display screen used to render
the image.
The enlargement of an object-space image does not degrade its quality of
display as does the enlargement of an image-space image.
As the complexity of the scene increases (large number of objects in the
scene), the computation time grows more quickly for object-space
algorithms than for image-space algorithms.

Sorting
Sorting is an operation that orders a given set of records according to a
selected criterion.
The time required to perform the sort depends on the number of records to
be processed, the algorithm that performs the sort, and the initial ordering
of the records (whether random or semi-ordered).
Many visibility algorithms (hidden-line, hidden-surface and hidden-solid
algorithms) make extensive use of sorting operations.
Sorting and searching operate on the records of the scene database. These
records typically contain geometrical, topological and viewing information
about the polygons and faces that make up the scene.
Coherence
Naturally, the elements of a scene or its image have some interrelationships,
known as coherence.
Hidden-line algorithms that utilize coherence in their sorting techniques are
more effective than other algorithms that do not.
Coherence is a measure of how rapidly a scene or its image changes.
It describes the extent to which a scene or its image is locally constant.
The coherence of a set of data can improve the speed of its sorting
significantly.
The gradual changes in the appearance of a scene or its image from one
place to another can reduce the number of sorting operations greatly.
Several types of coherence can be identified in both the object space and the
image space.


Edge coherence: The visibility of an edge changes only when it crosses
another edge.
Face coherence: If a part of a face is visible, the entire face is probably
visible. Moreover, the penetration of faces is a relatively rare occurrence,
and therefore it is not usually checked by hidden-removal algorithms.
Geometric coherence: Edges that share the same vertex or faces that share
the same edges have similar visibilities in most cases.
For example, if three edges share the same vertex (the case of a box), they
may be all visible, all invisible, or two visible and one invisible. The proper
combination depends on the angle between any two edges (less or greater
than 180°) and on the location of any of the edges relative to the plane
defined by the other two edges.
Frame coherence: A picture does not change very much from frame to
frame.
Scanline coherence: Segments of a scene visible on one scan line are most
probably visible on the next line.
Area coherence: A particular element (area) of an image and its neighbors
are all likely to have the same visibility and to be influenced by the same
face.
Depth coherence: The different surfaces at a given screen location are
generally well separated in depth relative to the depth range of each.

The first three types of coherence are object space based, while the last
four are image space based.
If an image exhibits a particular predominant coherence, the coherence
would form the basis of the related hidden-line removal algorithm.


HIDDEN-LINE REMOVAL
For a given 3D scene, a given viewing point, and a given direction, eliminate
from a 2D projection of the scene all parts of edges and faces which the
observer cannot see.
For orthographic projections, the location of the viewing point is not
needed.

The Priority Algorithm


This algorithm is also known as the depth or Z algorithm.
The algorithm is based on sorting all the faces (polygons) in the scene
according to the largest z coordinate value of each. This step is sometimes
known as assignment of priorities.
If a face intersects more than one face, other visibility tests besides the z
depth are needed to resolve any ambiguities.
This step constitutes determination of coverings.
Example
Consider a scene of two boxes as shown in Figure. It shows the scene in the
standard VCS, where the viewing eye is located at an infinite distance on the
positive Zv direction. The following steps provide guidance for implementing
the algorithm:
Step 1: Utilize the proper orthographic projection to obtain the desired view
(whose hidden lines are to be removed) of the scene.
This results in a set of vertices with coordinates (Xv, Yv, Zv).


Step 2: Utilize the surface test to remove back faces, to improve the efficiency of
the priority algorithm.
To enable one to perform the depth test, the plane equation of any face
(polygon) in the image can be obtained. Given three points P1, P2 and P3 that
lie in one face, the plane equation can be written as ax + by + cz + d = 0,
where the face normal n = (a, b, c) is the cross product of two edge vectors,
n = (P2 − P1) × (P3 − P1), and d = −n · P1.
Any two edges of a given face can be used to calculate the face normal. Steps
1 and 2 result in a face list which will be sorted to assign priorities.
For this example, six faces F1 to F6 form such a list.
The order of the faces in the list is immaterial.
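The back-face (surface) test of Step 2 can be sketched as below for a viewing eye at infinity on the +Zv axis; the vertex values and the counterclockwise ordering convention are assumptions for illustration only.

```python
def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def face_normal(p0, p1, p2):
    """Face normal from two edge vectors (p1 - p0) and (p2 - p0)."""
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    return cross(e1, e2)

def is_back_face(p0, p1, p2):
    """With the eye at +Zv infinity, a face whose outward normal has a
    non-positive z component faces away from the viewer and is removed."""
    return face_normal(p0, p1, p2)[2] <= 0

# A counterclockwise-ordered face in the XvYv plane faces the viewer:
print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # False
# Reversing the vertex order flips the normal, giving a back face:
print(is_back_face((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # True
```

Faces for which the test returns True are dropped from the face list before priorities are assigned.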

Step 3: Assign priorities to the faces in the face list.


The priority assignment is determined by comparing two faces at a time.
The priority list is continuously changed, and the final list is obtained after
a few iterations. Here is how priorities can be assigned.
The first face in the face list (F1) is assigned the highest priority, 1.
F1 is intersected with the other faces in the list, that is, F2 to F6.
The intersection between F1 and another face may be an area A, as in the
case of F1 and F4 shown in Figure; an edge, as for faces F1 and F2; or an
empty set (no intersection), as for faces F1 and F6.
In the case of an area of intersection, the (xv, yv) coordinates of a point c
inside A can be computed utilizing the plane equation above.
For both faces F1 and F4, the two corresponding Zv values of point c can be
calculated and compared.
The face with the larger Zv value is assigned the higher priority.
In the case of an edge of intersection, both faces are assigned the same
priority.
They obviously do not obscure each other, especially after the removal of the
back faces.
In the case of no face intersection, no priority is assigned.


In the above example F1 intersects F2 and F3 in edges.
Therefore both faces are assigned priority 1. F1 and F4 intersect in an area.
Using the depth test, and assuming the depth of F4 is less than that of F1,
F4 is assigned priority 2. When we intersect faces F1 and F5, we obtain an
empty set; that is, no priority assignment is possible.
In this case, the face F1 is moved to the end of the face list, and the sorting
process to determine priority starts all over again.
In each iteration, the first face in the face list is assigned priority 1.
The end of each iteration is detected by no intersection.
Four iterations yield the final priority list. In iteration 4, faces F4 to F6
are assigned priority 1 first.
When F4 is intersected with F1, the depth test shows that F1 has higher
priority. Thus, F1 is assigned priority 1 and the priority of F4 to F6 is
dropped to 2.
Step 4: Reorder the face and priority lists so that the highest priority is on top
of the list.
In this case, the face and priority lists are [F1, F2, F3, F4, F5, F6] and [1, 1, 1,
2, 2, 2], respectively.
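The depth comparison at the heart of the priority assignment can be sketched as follows. The plane-coefficient tuples and the face labels are hypothetical values chosen for illustration; Zv at a screen point (xv, yv) is recovered from each face's plane equation.

```python
def z_at(plane, x, y):
    """Depth Zv of the plane a*x + b*y + c*z + d = 0 at screen point (x, y)."""
    a, b, c, d = plane
    return -(a*x + b*y + d) / c

def higher_priority(plane_a, plane_b, c):
    """Depth test of the priority algorithm: at a point c = (xv, yv)
    inside the area of intersection, the face with the larger Zv is
    closer to the viewing eye and receives the higher priority."""
    return "a" if z_at(plane_a, *c) > z_at(plane_b, *c) else "b"

# Say F1 lies in the plane z = 5 and F4 in z = 2 (hypothetical faces):
f1 = (0.0, 0.0, 1.0, -5.0)
f4 = (0.0, 0.0, 1.0, -2.0)
print(higher_priority(f1, f4, (0.5, 0.5)))  # a  (F1 is closer: priority 1)
```

Repeating this comparison pairwise, and rotating the face list whenever no intersection is found, produces the final priority list of Step 4.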
Step 5: In the case of a raster display, hidden-line removal is done by the
hardware.
We simply display the faces in reverse order of their priority.
Any face that would be hidden by the others is thus displayed first, but is
covered later, either partially or entirely, by faces of higher priority.
Step 6: In the case of a vector display the hidden line removal must be done by
the software by determining coverings.
For this purpose, edges of a face are compared with all other edges of higher
priority.

An edge list can be created which maintains a list of all line segments that
will have to be drawn as visible.
Visibility techniques such as the containment test and edge intersection are
useful in this case.
In some scenes, ambiguities may result after applying the priority test. The
figure shows a case in which the order of faces is cyclic.
Face F1 covers F2, F2 covers F3, and F3 covers F1. It is very difficult to find
a priority list that produces this cyclic ordering and coverage.
To rectify this ambiguity, additional criteria to determine coverage must be
added to the priority algorithm.

Area Oriented Algorithm


The area-oriented algorithm described here subdivides the data set of a
given scene in a stepwise fashion until all visible areas in the scene are
determined and displayed.
In this algorithm, as in the priority algorithm, no penetration of faces
is allowed.
Step 1. Identify Silhouette polygons:
Silhouette polygons are polygons whose edges are silhouette edges.

First, silhouette edges in the scene are recognized.
Second, the connection of silhouette edges to form closed silhouette
polygons can be achieved by sorting all the edges on their end points.
For the scene shown in the figure, two closed silhouette polygons, S1 and S2,
are identified.
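Step 1 can be sketched as follows, assuming a mesh given as vertex-index faces with precomputed outward normals (both hypothetical representations): an edge shared by a face toward the viewer and a face away from it is a silhouette edge.

```python
def silhouette_edges(faces, normals):
    """Collect silhouette edges of a mesh viewed along the -Zv direction.

    faces   : list of vertex-index tuples, e.g. [(0, 1, 2), ...]
    normals : outward normal (nx, ny, nz) of each face
    An edge is a silhouette edge when the two faces sharing it
    disagree on whether they face the viewer (nz > 0).
    """
    edge_faces = {}
    for fi, face in enumerate(faces):
        for k in range(len(face)):
            e = tuple(sorted((face[k], face[(k + 1) % len(face)])))
            edge_faces.setdefault(e, []).append(fi)
    result = []
    for e, fs in edge_faces.items():
        if len(fs) == 2:
            front = [normals[f][2] > 0 for f in fs]
            if front[0] != front[1]:
                result.append(e)
    return result

# Two triangles sharing edge (0, 1): one faces the viewer, one faces away,
# so the shared edge is a silhouette edge:
faces = [(0, 1, 2), (1, 0, 3)]
normals = [(0, 0, 1), (0, 0, -1)]
print(silhouette_edges(faces, normals))  # [(0, 1)]
```

Chaining the collected edges through their shared end points then yields the closed silhouette polygons S1 and S2 of the text.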

Step 2. Assign quantitative hiding (QH) values to edges of silhouette polygons:
This is achieved by intersecting the polygons (the containment test can be
utilized first as a quick test).
The intersection points define the points where the value of QH may change.
Applying the depth test to the points of intersection (P1 and P2 in Figure), we
determine the segments of the silhouette edges that are hidden.
For example, if the depth test at P1 shows that Zv at P1 of S1 is smaller than
that of S2, edge C1C2 is partially visible.
Similarly, the depth test at P2 shows that edge C2C3 is also partially visible.
To determine which segment of an edge is visible, the visibility test can be
used.
Determination of the values of QH at the various edges or edge segments of
silhouette polygons is based on the depth test.
A value of 0 indicates that the edge or segment is visible, and a value of 1
indicates that the edge or segment is invisible.

Step 3. Determine the visible silhouette segments:
From the values of QH, the visibility of silhouette segments can be
determined with the following rules in mind.
If a closed silhouette polygon is completely invisible, it need not be
considered any further.
Otherwise, its segments with the lowest QH values are visible.

Step 4. Intersect the visible silhouette segments with partially visible faces:
This step determines whether the silhouette segments hide or partially
hide non-silhouette edges in partially visible faces.
Edges E1 to E6 of S2 are intersected with the internal edges (edges of the
square in the face) of F1, and the visible segments of the internal edges are
determined.
By accessing only the silhouette edges of the covering silhouette polygon
and the partially visible face, the algorithm avoids unnecessary
calculations.
Step 5: Display the interior of the visible or partially visible polygons:
This step can be achieved using a stack and simply enumerates all faces lying
inside a silhouette polygon.
The stack is initialized with a visible face which has a silhouette edge.

We know this face belongs to a visible area.
A loop begins with popping a face (F2) from the stack.
We examine all the edges of the face. If an edge (E7) is not fully invisible, the
neighboring face (F3) also has visible edges and, therefore, is pushed into
the stack if it has not already been pushed.
The edge itself or its visible segments are displayed.
The loop is repeated and the algorithm stops when the stack is empty.
The area-oriented algorithm is more efficient than the priority algorithm
because it hardly involves any unnecessary edge/face intersections.

Hidden-Line Removal for Curved Surfaces


The above algorithms described for flat faces are extendable to curved
polyhedra by approximating them with planar polygons.
The u-v grid offered by parametric surface representation offers such an
approximation.
This grid can be utilized to create a grid surface consisting of straight-edged
regions, as shown in the figure, by approximating the u-v grid curves with
line segments.
The overlay hidden-line algorithm is suitable for curved surfaces.
The algorithm begins by calculating the u-v grid using the surface equation.
It then creates the grid surface with linear edges.
Various criteria can be utilized to determine the visibility of the grid surface.
There is no best hidden-line algorithm. Many algorithms exist, and some are
more efficient and faster in rendering images than others for certain
applications.
Firmware and parallel-processing implementations of hidden-line algorithms
are making it possible to render images in real time.
This adds to the difficulty of deciding on a best algorithm.


Hidden-Surface Removal
Hidden-surface removal and hidden-line removal are one problem.
Most of the line algorithms are applicable here and vice versa.
The following image-space algorithms are for hidden-surface removal only.
A wide variety of these algorithms exist.
They include the z-buffer algorithm, Watkin's algorithm, Warnock's
algorithm, and Painter's algorithm.
The Watkin's algorithm is based on scanline coherence, while the Warnock's
algorithm is an area-coherence algorithm.
The Painter's algorithm is a priority algorithm for raster displays.

The z-Buffer Algorithm


This is also known as the depth-buffer algorithm.
In addition to the frame buffer, this algorithm requires a z-buffer in which z
values can be stored for each pixel.
The z-buffer is initialized to the smallest z value, while the frame buffer is
initialized to the background pixel value. Both the frame buffer and the
z-buffer are indexed by pixel coordinates (x, y).

These coordinates are actually screen coordinates.
The z-buffer algorithm works as follows. For each polygon in the scene, find
all the pixels (x, y) that lie inside or on the boundaries of the polygon when
projected onto the screen.
For each of these pixels, calculate the depth z of the polygon at (x, y).
If z > depth (x, y), the polygon is closer to the viewing eye than others
already stored in the pixel.
In this case, the z-buffer is updated by setting the depth at (x, y) to z.
Similarly, the intensity of the frame-buffer location corresponding to the
pixel is updated to the intensity of the polygon at (x, y).
After all the polygons have been processed, the frame buffer contains the
solution.
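The loop above can be sketched over a toy scene. To keep the sketch short, the "polygons" are axis-aligned screen rectangles of constant depth; a real implementation rasterizes arbitrary projected polygons and interpolates z at every pixel. All names and values are illustrative.

```python
import numpy as np

def zbuffer_render(width, height, polygons):
    """Minimal z-buffer sketch.

    Each polygon is (x0, y0, x1, y1, depth, intensity) with constant
    depth over the rectangle.  Larger z means closer to the eye.
    """
    depth_buf = np.full((height, width), -np.inf)  # smallest z value
    frame_buf = np.zeros((height, width))          # background intensity
    for x0, y0, x1, y1, z, intensity in polygons:
        sub_d = depth_buf[y0:y1, x0:x1]  # views into both buffers
        sub_f = frame_buf[y0:y1, x0:x1]
        closer = sub_d < z               # pixels where this face is nearer
        sub_d[closer] = z                # update the z-buffer ...
        sub_f[closer] = intensity        # ... and the frame buffer together
    return frame_buf

# A far square partly covered by a nearer one:
polys = [(0, 0, 4, 4, 1.0, 50),
         (2, 2, 6, 6, 3.0, 200)]
img = zbuffer_render(8, 8, polys)
print(img[1, 1], img[3, 3])  # 50.0 200.0
```

Note that the polygons may be processed in any order; the per-pixel depth comparison alone resolves visibility, which is what makes the algorithm so simple.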

Warnock's Algorithm
This is one of the first area-coherence algorithms.
Essentially, this algorithm solves the hidden-surface problem by recursively
subdividing the image into subimages.
It first attempts to solve the problem for a window that covers the entire
image.
Simple cases, such as one polygon in the window or none at all, are easily solved.
If polygons overlap, the algorithm tries to analyze the relationship between
the polygons and generates the display for the window.
If the algorithm cannot decide easily, it subdivides the window into four
smaller windows and applies the same solution technique to every window.
If one of the four windows is still complex, it is further subdivided into four
smaller windows.
The recursion terminates if the hidden-surface problem can be solved for all
the windows or if the window becomes as small as a single pixel on the
screen.

In this case, the intensity of the pixel is chosen equal to that of the polygon
visible in the pixel.
The subdivision process results in a window tree.
One could devise a rule that any window is recursively subdivided unless it
contains two polygons.
In such a case, comparing the z depths of the polygons determines which one
hides the other.
While the subdivision of the original window is governed by the complexity
of the scene, the subdivision of any window into four equal windows makes
the algorithm inefficient.
A better way would be to subdivide a window according to the complexity of
the scene in the window.
This is equivalent to subdividing a window into four unequal sub windows.
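The recursive subdivision can be sketched as follows, with the "polygons" reduced to bounding boxes of constant depth and colour (all names and values are hypothetical). The two simple cases and the pixel-sized stopping rule follow the description above; equal subdivision is used here for brevity.

```python
def overlaps(b, w):
    """True if box b = (x0, y0, x1, y1) overlaps window w."""
    return not (b[2] <= w[0] or w[2] <= b[0] or b[3] <= w[1] or w[3] <= b[1])

def warnock(window, polygons, min_size=1):
    """Warnock-style recursion: returns a list of (window, colour)
    decisions.  polygons: (bbox, depth, colour), larger depth = closer."""
    x0, y0, x1, y1 = window
    inside = [p for p in polygons if overlaps(p[0], window)]
    if not inside:                      # simple case: empty window
        return [(window, "background")]
    if len(inside) == 1:                # simple case: one polygon
        return [(window, inside[0][2])]
    if (x1 - x0) <= min_size and (y1 - y0) <= min_size:
        # pixel-sized window: the closest polygon wins
        return [(window, max(inside, key=lambda p: p[1])[2])]
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2   # subdivide into four
    out = []
    for sub in [(x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)]:
        out += warnock(sub, inside, min_size)
    return out

cells = warnock((0, 0, 4, 4),
                [((0, 0, 4, 4), 1.0, "red"),
                 ((2, 2, 4, 4), 2.0, "blue")])
print(cells[0])  # ((0, 0, 2, 2), 'red')
```

Three quarters of the window are decided after one subdivision; only the quarter where the two boxes overlap is recursively refined down to pixel-sized windows.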

Hidden-Solid Removal
The hidden-solid removal problem involves the display of solid models with
hidden lines or surfaces removed.
Due to the completeness and unambiguity of solid models, the hidden-solid
removal is fully automatic.
CAD systems provide users with menu choices to display models including
shaded, hidden lines removed, or wireframe (no hidden lines removed).
For displaying CSG models, both the visibility problem and the problem of
combining the primitive solids into one composite model have to be solved.
There are three approaches to displaying CSG models.
The first approach converts the CSG model into a boundary model that can
be rendered with the standard hidden-surface algorithms.
The second approach utilizes a spatial subdivision strategy.
The third approach uses a CSG hidden-surface algorithm, which combines
the CSG evaluation with the hidden-surface removal on the basis of ray
classification.

Ray-Tracing Algorithm
The virtue of ray tracing is its simplicity and reliability.
The most complicated numerical problem of the algorithm is finding the
points at which lines (rays) intersect surfaces.
Therefore a wide variety of surfaces and primitives can be covered.
Ray tracing has been used to enhance the visual realism of solids by
generating line drawings with hidden solids removed, animating solids, and
shading pictures.
It has also been utilized in solid analysis, mainly in calculating mass properties.
The idea of using ray tracing to generate shaded images of solids is to
emulate the photographic process in reverse.


For each pixel in the screen, a light ray is cast through it into the scene to
identify the visible surface.
The first surface intersected by the ray, found by "tracing" along it, is the
visible one.
At the ray/surface intersection point, the surface normal is computed and
knowing the position of the light source, the brightness of the pixel can be
calculated.
Ray tracing is considered a brute force method for solving problems. The
basic ray-tracing algorithm is simple, but slow.
The CPU usage of the algorithm increases with the complexity of the scene
under consideration.
Various alterations and refinements have been added to the algorithm to
improve its efficiency.
Moreover, the algorithm has been implemented into hardware (ray-tracing
firmware) to speed its execution.
The basics of ray tracing stem from light rays and camera models.
The geometry of a simple camera model is analogous to that of projection of

M. Puviyarasan | CAD | Unit - III

17 | P a g e

PEC / DoME / III Year- Mechanical Engineering / V Sem / ME 6501: COMPUTER AIDED DESIGN

geometric models.
Referring to Figure, the center of projection, projectors, and the projection
plane represent the focal point, light rays, and the screen of the camera
model, respectively.
We assume that the camera model uses the VCS. For each pixel of the screen,
a straight light ray passes through it and connects the focal point with the
scene.
When the focal length, the distance between focal point and screen, is
infinite, parallel views result, and all light rays become parallel to the Zv axis
and perpendicular to the screen (the Xv Yv plane).
A ray is a straight line which is best defined in parametric form by a point
(X0, Y0, Z0) and a direction vector (Δx, Δy, Δz).
Thus, a ray is defined as [(X0, Y0, Z0), (Δx, Δy, Δz)].
For a parameter t, any point (x, y, z) on the ray is given by:
x = X0 + t Δx,  y = Y0 + t Δy,  z = Z0 + t Δz

A ray-tracing algorithm takes the ray definition as input and outputs
information about how the ray intersects the scene.
Knowing the camera model and the solid in the scene, the algorithm can find
where the given ray enters and exits the solid, as shown in Figure for a
parallel view.
The output information is an ordered list of ray parameters, ti, which
denotes the entry/exit points, and a list of pointers, Si, to the surfaces (faces)
through which the ray passes.
The ray enters the solid at point t1, exits at t2, enters again at t3, and finally
exits at t4.
Point t1 is closest to the screen and point t4 is farthest.
The lists of ray parameters and surface pointers suffice for various
applications.

While the basics of ray tracing are simple, their implementation into a solid
modeler is more involved and depends largely on the representation scheme
of the modeler.

When boundary representation is used in the object definition, the ray-tracing algorithm is simple.
For a given pixel, the first face of the object intersected by the ray is the
visible face at that pixel.
When the object is defined as a CSG model, the algorithm is more
complicated because CSG models are compositions of primitive solids.
Intersecting the primitive solids with a ray yields a number of intersection
points, which requires additional calculations to determine which of these
points are intersection points of the ray with the composite solid.
A ray-tracing algorithm for CSG models consists of three main modules:
ray/primitive intersection, ray/primitive classification, and ray/solid
classification.
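The ray/primitive intersection module can be sketched for a sphere primitive, returning the ordered ray parameters, that is, the t values at which the ray enters and exits the primitive. The scene values below are illustrative only.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Ray/primitive intersection for a sphere.

    Ray point: P(t) = origin + t * direction.
    Returns the ordered parameters (t_enter, t_exit), or None if the
    ray misses the sphere.  Solves the quadratic |P(t) - center|^2 = r^2.
    """
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                      # ray misses the primitive
    s = math.sqrt(disc)
    return (-b - s) / (2*a), (-b + s) / (2*a)

# A ray along +z through a unit sphere at the origin:
print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # (4.0, 6.0)
```

For a CSG model, the entry/exit intervals returned for each primitive are then combined (the ray/solid classification module) according to the Boolean operations in the CSG tree.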
The ray tracing algorithm to generate line drawings of hidden solids has
advantages.
It eliminates finding, parameterizing, classifying, and storing the curved
edges formed by the intersection of surfaces.
The silhouettes of curved surfaces are by-products, and they can be found
whenever the view changes.
The main drawbacks of the algorithm are speed and aliasing.
Aliasing causes edges to be jagged and surface "slivers" may be overlooked.
Speed is particularly important to display hidden-solid line drawings in an
interactive environment.
If the user creates a balanced tree of the solid in the scene, the efficiency of
ray tracing improves.
The coherence of visible surfaces (surfaces visible at two neighboring pixels
are more likely to be the same than different) can also speed up the
algorithm.


Shading
Line drawings, still the most common means of communicating the
geometry of parts, are limited in their ability to portray intricate shapes.
Shaded images can convey complex shape information.
The visible faces of an object are found by hidden surface algorithms, but
further information is required to display the colour of the object.
Shading refers to the process of altering the color of an object's surface
polygons in the 3D scene, based on their angles to lights and their distances
from lights, to create a photorealistic effect.
Shaded images can also convey features other than shape, such as surface
finish or material type (plastic or metallic look).
Shaded-image rendering algorithms filter information by displaying only the
visible surface.
Many spatial relationships that are unresolved in simple wireframe displays
become clear with shaded displays.
Shaded Images are easier to interpret because they resemble the real
objects.
Shaded images have viewing problems not present in wireframe displays.
Solids of interest may be hidden or partially obstructed from view, in which
case various shaded images may be obtained from various viewing points.
Critical geometry such as lines, arcs, and vertices are not explicitly shown.
Well-known techniques such as shaded-image/wireframe overlay,
transparency, and sectioning can be used to resolve these problems.


In shading a scene (rendering an image), a pinhole camera model is almost
universally used.
Rendering begins by solving the hidden-surface removal problem to
determine which objects and/or portions of objects are visible in the scene.


As the visible surfaces are found, they must be broken down into pixels and
shaded correctly.
This process must take into account the position and color of the light
sources and the position, orientation, and surface properties of the visible
objects.
Shading models simulate the way visible surfaces of objects reflect light.
They determine the shade of a point on an object in terms of light sources,
surface characteristics, and the positions and orientations of the surfaces
and sources.
Two types of light sources can be identified: point light source and ambient
light.
Objects illuminated with only point light source look harsh because they are
illuminated from one direction only.
This produces a flashlight-like effect in a black room. Ambient light is a light
of uniform brightness and is caused by the multiple reflections of light from
the many surfaces present in real environments.
Shading models are simple. The inputs to a shading model include intensity
and color of light source (S), surface characteristics at the point to be
shaded, and the positions and orientations of surfaces and light sources.
The output from a shading model is an intensity value at the point.
Shading models are applicable to points only. To shade an object, a shading
model is applied many times to many points on the object.
These points are the pixels of the display. To compute a shade for each point
on a 1024 x 1024 raster display, the shading model must be calculated over
one million times.
These calculations can be reduced by taking advantage of shading
coherence, that is, the intensity of adjacent pixels is either identical or very
close.

Particularly, we consider point light sources shining on the surfaces of
objects. (Ambient light adds a constant intensity value to the shade at every
point.)
The light reflected off a surface can be divided into two components: diffuse
and specular.
When light hits an ideal diffuse surface, it is reradiated equally in all
directions, so that the surface appears to have the same brightness from all
viewing angles. Dull surfaces exhibit diffuse reflection.
Examples of real surfaces that radiate mostly diffuse light are chalk, paper,
and flat paints.
Ideal specular surfaces reradiate light in one direction only, the reflected
light direction.
Examples of specular surfaces are mirrors and shiny surfaces. Physically, the
difference between these two components is that diffuse light penetrates the
surface of an object and is scattered internally before emerging again, while
specular light bounces off the surface.
The absence of diffuse light makes a surface look shiny.
The light reflected from real objects contains both diffuse and specular
components, and both must be modeled to create realistic images.
A basic shading model that incorporates both a point light source and
ambient light can be described as follows:
Ip = Id + Is + Ib; where Ip, Id, Is, and Ib are respectively the resulting
intensity (the amount of shade) at point p, the intensity due to the diffuse
reflection component of the point light source, the intensity due to the
specular reflection component, and the intensity due to ambient light.
The above Equation is written in a vector form to permit the modeling of
colored surfaces.
For the common red, green and blue color system, Eq. represents three
scalar equations, one for each color.
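As a sketch of how the model Ip = Id + Is + Ib might be evaluated per colour channel, the snippet below uses a Lambert term for the diffuse component and a simple reflected-ray term for the specular component. The function names, the coefficients kd, ks, ka and the exponent n are illustrative assumptions, not part of the notes:

```python
# Sketch of the basic shading model Ip = Id + Is + Ib, evaluated per
# colour channel (R, G, B).  Coefficients kd, ks, ka and exponent n are
# illustrative assumptions.

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_point(normal, to_light, to_eye, light_rgb, ambient_rgb,
                kd=0.7, ks=0.3, ka=0.1, n=20):
    """Return (R, G, B) intensity at one surface point."""
    N, L, V = normalize(normal), normalize(to_light), normalize(to_eye)
    # Diffuse (Lambert): proportional to the cosine of the incidence
    # angle, clamped at zero for surfaces facing away from the light.
    diff = max(0.0, dot(N, L))
    # Specular: reflect L about N, fall off as cos^n of the view angle.
    R = tuple(2 * diff * Nc - Lc for Nc, Lc in zip(N, L))
    spec = max(0.0, dot(R, V)) ** n if diff > 0 else 0.0
    return tuple(min(1.0, ka * a + (kd * diff + ks * spec) * s)
                 for s, a in zip(light_rgb, ambient_rgb))

# A point lit head-on comes out brighter than one lit at a grazing angle.
head_on = shade_point((0, 0, 1), (0, 0, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1))
grazing = shade_point((0, 0, 1), (1, 0, 0.01), (0, 0, 1), (1, 1, 1), (1, 1, 1))
```

Applied once per pixel, this is exactly the per-point evaluation the notes describe; shading coherence would let adjacent pixels reuse most of this work.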

Specular reflection is a characteristic of shiny surfaces. Highlights visible
on shiny surfaces are due to specular reflection, while other light reflected
from these surfaces is caused by diffuse reflection.
The location of a highlight on a shiny surface depends on the directions of
the light source and the viewing eye. If you illuminate an apple with a bright
light, you can observe the effects of specular reflection. Note that at the
highlight the apple appears to be White (not red), which is the color of the
incident light.
The specular component is not as easy to compute as the diffuse component.
Real objects are nonideal specular reflectors, and, some light is also reflected
slightly off axis from the ideal light direction (defined by vector r in Figure).
This is because the surface is never perfectly flat but contains microscopic
deformations.
For ideal (perfect) shiny surfaces (such as mirrors), the angles of reflection
and incidence are equal.
This means that the viewer can only see specularly reflected light when the
angle θ between the viewing direction and the reflection direction is zero.
For non-ideal (imperfect) reflectors, such as an apple, the intensity of the
reflected light drops sharply as θ increases.
One reasonable approximation to the specular component is an empirical one
in which the specular intensity falls off as cosⁿ θ, where the exponent n
characterizes the shininess of the surface.
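The effect of the shininess exponent can be seen numerically. In this sketch the particular exponent values are illustrative assumptions:

```python
# The empirical specular term is commonly written Is ∝ (cos θ)^n, where
# θ is the angle between the ideal reflection direction and the viewing
# direction, and n models surface shininess (large n = small, sharp
# highlight).  Exponent values below are illustrative.
import math

def specular_falloff(theta_deg, n):
    """Fraction of specular light seen at angle theta off the mirror direction."""
    return max(0.0, math.cos(math.radians(theta_deg))) ** n

# A near-mirror surface (n = 200) loses its highlight within a few
# degrees, while a duller surface (n = 5) spreads it over a wide angle.
sharp = specular_falloff(10, 200)
broad = specular_falloff(10, 5)
```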
Most surfaces, including those that are curved, are described by polygonal
meshes when the visible-surface calculations are to be performed by the
majority of rendering algorithms.
The majority of shading techniques are therefore applicable to objects
modeled as polyhedra.
Shading Methods:
o Flat Shading – each face is rendered with a single intensity
o Smooth Shading – intensity is interpolated across the face

Flat Shading
Single intensity is calculated for
each polygon and hence looks less
realistic.
Uses the same colour for every pixel
in a face.
Edges appear more pronounced
than they would on a real object.
Individual faces are visualized.
Same colour for any point of the
face.
Not suitable for smooth objects.
Less computationally expensive.
Used for high speed rendering.

Smooth Shading
Intensity at each point of a surface can be
obtained using an interpolation scheme.
Smooth shading uses linear interpolation
of colours between vertices.
The edges disappear with this technique.
Underlying surface is visualized.
Each point of the face has its own colour.
Suitable for any objects.
More computationally expensive.
Used for more realistic rendering.

Types of Smooth shading


Gouraud Shading or First derivative shading
Gouraud shading computes an intensity for each vertex and then
interpolates the computed intensities across the polygons.
Gouraud shading performs a bi-linear interpolation of the intensities down
and then across scan lines.
It thus eliminates the sharp changes at polygon boundaries
The algorithm is as follows:
o Compute a normal N for each vertex of the polygon.
o From N compute an intensity I for each vertex of the polygon.
o By bi-linear interpolation compute an intensity Ii for each pixel.
o Paint pixel to shade corresponding to Ii.
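The algorithm above can be sketched for a single scan line. The triangle geometry and the edge intensities here are illustrative assumptions:

```python
# Minimal sketch of the Gouraud idea: intensities are computed only at
# the polygon's vertices, then linearly interpolated down the edges and
# across each scan line.  Edge intensities below are illustrative.

def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_scanline(i_left, i_right, width):
    """Interpolate the two edge intensities across one scan line of `width` pixels."""
    if width == 1:
        return [i_left]
    return [lerp(i_left, i_right, x / (width - 1)) for x in range(width)]

# Edge intensities 0.2 and 0.8 blend smoothly across a 5-pixel span,
# which is what removes the sharp jumps at polygon boundaries.
row = gouraud_scanline(0.2, 0.8, 5)
```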
Advantages of Gouraud shading:
Gouraud shading gives a much better image than faceted shading
It is not too computationally expensive
Disadvantages of Gouraud shading:
It eliminates creases that you may want to preserve, e.g. in a cube.

Phong Shading or Second derivative shading


Phong shading is similar to Gouraud shading except that the normals,
rather than the intensities, are interpolated.
It interpolates normal vectors themselves across polygons and then applies
the shading model at each pixel in the image.
Thus, the specular highlights are computed much more precisely than in the
Gouraud shading model.
The algorithm is as follows:
o Compute a normal N for each vertex of the polygon.
o By bi-linear interpolation compute a normal Ni for each pixel
(Ni must be renormalized each time).
o From Ni compute an intensity Ii for each pixel of the polygon.
o Paint pixel to shade corresponding to Ii.
Note that this method is much more computationally intensive than Gouraud
shading:
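The key difference from Gouraud shading, interpolating the normal and renormalizing it before shading, can be sketched as follows. The vertex normals are illustrative assumptions:

```python
# Sketch of the Phong-shading step order: the normal is interpolated per
# pixel and renormalized, and only then is the shading model applied.

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def phong_pixel_normal(n0, n1, t):
    """Interpolate between two vertex normals, renormalizing the result."""
    blended = tuple(a + (b - a) * t for a, b in zip(n0, n1))
    return normalize(blended)   # must be redone at every pixel

# Halfway between two unit normals the raw blend is shorter than unit
# length, which is why the renormalization step matters.
n_mid = phong_pixel_normal((1, 0, 0), (0, 1, 0), 0.5)
```

Because a full lighting model runs at every pixel on the freshly normalized vector, highlights land where the interpolated normal says they should, rather than being smeared by intensity interpolation, at a clear cost in per-pixel work.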


Shading Enhancements
The visual realism of images can be enhanced by including shading effects such as:

Transparency, Shadows, Surface details, Texture


Transparency :
It is used to shade translucent materials, which allow some of the
background pixels to show through, producing a screen-door effect.
The light emission from a transparent material is generally a combination
of reflected and transmitted light.
Shadows :
Important elements of visual realism. They give important cues about light
and object positions.
Two types of shadow algorithm are available: shadow volumes and shadow
mapping.
Surface details :
These refer to the patterns carried by some surfaces, such as the logo on an
object, paintings on a vessel, or dividing lines on a highway, that must be
taken into account in the rendering process.
Texture :
It is another important element that enhances the realism of an image.
It is an approach where the texture pattern may either be defined in a
rectangular array or as a procedure that modifies surface intensity values.
It can also contain other surface properties such as wrinkled surfaces.

COLOURING
The two main ingredients of shaded images are colours and textures. The
display of realistic scenes is mostly in colour.
Colours are used for realism, aesthetics and to distinguish the different areas
in the geometry of an object.


They help the designer to classify components in an assembly or highlight
the sectional views and dimensions of the model.
Neutral colours such as black, white or grey are called achromatic colours,
provided by black and white raster displays.
The only attribute of achromatic light is its intensity or amount which is
assigned a value between 0 and 1.
Colour is created by taking advantage of the fundamental trichromacy of the
human eye.
A typical colour CRT uses three electron beams and a triad of colour dots on
the phosphor screen to provide each of the three colours, red , green and
blue.
On the other hand, colour raster displays can generate superior colour
graphics and are usually available with 1024 x 1024 resolution, but they
require a large memory for the refresh buffer.
Three colour parameters are

Hue
Saturation or Purity
Brightness
The combination of frequencies present in the reflected light from an object
is perceived as the colour of the object.
The Hue or simply the colour is the dominant wavelength or dominant
frequency.
Saturation is the purity of a colour. It describes how washed out or how
pure the colour appears.
It defines a range from pure colour (100%) to gray(0%) at a constant
lightness level.
Brightness represents the perceived intensity of light. It refers to the
lightness or darkness of a colour.

Dark values with black added are called 'shades', and light values with
white added are called 'tints'; by adding both black and white pigments,
'tones' of the colour are produced.
Colour Models
A colour model is an abstract mathematical model describing the way
colours can be presented as tuples of numbers, typically as three or four
colour components.
It is used to describe colour as accurately as possible.
The range of colours that can be described by a combination of other colours
is called a colour gamut.
Colour models fall into two groups:
o Additive colour models
o Subtractive colour models
Additive Color models
Additive colour models are based on the principle of transmitted light.
Colour is created by mixing a number of different colours, normally the
primaries red, green and blue.
This system includes monitors, liquid crystal displays, digital projectors and
televisions.
Each pixel on a monitor screen starts out as black but when red, green and
blue phosphors of the pixel are illuminated simultaneously, that pixel
becomes white.

Subtractive Color models


These models perceive colour as a result of reflected light.

The colour that a surface displays depends on which parts of the visible
spectrum are not absorbed and therefore remain visible.
If an object absorbs (subtracts) all the illuminating light, i.e., no light
is reflected back to the viewer, it appears black.
Subtractive colour models filter the red, green and blue components of the
image from white light.
Colour paintings, photography and printing processes use the subtractive
process to reproduce colour.
In printing, black is added to improve the contrast.
Printing processes use colour inks that act as filters.

These inks are transparent; it is the paper that reflects the unabsorbed
light back to the viewer.
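The complement relationship between the additive and subtractive primaries can be written out directly: each subtractive primary removes its additive complement from white light.

```python
# Standard complement between additive RGB and subtractive CMY,
# with all components in the 0-1 range.

def rgb_to_cmy(r, g, b):
    """Convert additive RGB to subtractive CMY: C = 1-R, M = 1-G, Y = 1-B."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """Inverse conversion, subtractive CMY back to additive RGB."""
    return (1 - c, 1 - m, 1 - y)

# Pure red in RGB absorbs green and blue light, i.e. it corresponds to
# full magenta + full yellow ink and no cyan.
red_in_cmy = rgb_to_cmy(1, 0, 0)   # (0, 1, 1)
```

This also explains the opposite origins noted below: black sits at (0,0,0) in RGB (no light emitted), while white sits at (0,0,0) in CMY (no ink on the page).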

Different Colour Models

RGB Colour model; Additive colour model; black at origin (0,0,0)
CMY (or) CMYK colour model; Subtractive; white at origin (0,0,0)
YIQ Colour model; Adopted by the National Television System
Committee (NTSC):

o The Y component contains the brightness information
o I stands for In-phase
o Q stands for Quadrature; I and Q together represent the hue and purity
HSV Colour model; Hue, Saturation and Value
HSL Colour model; Hue, Saturation and Lightness
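The relationship between RGB and the hue/saturation-style models can be checked with Python's standard `colorsys` module, which implements the HSV, HLS and YIQ conversions:

```python
# Checking the colour-parameter definitions with the stdlib colorsys
# module (components are in the 0-1 range).
import colorsys

# Pure red: hue 0 (the dominant wavelength), fully saturated, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# Mid grey: an achromatic colour, so there is no dominant wavelength and
# the saturation is 0; only the value (brightness) remains.
h2, s2, v2 = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)
```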

COMPUTER ANIMATION
To 'animate' literally means 'to give life to'.
The process of moving something which can't move by itself is called
'animation'.
It is a technique for creating the illusion of motion with a series of static
images. Animation adds to graphics the dimension of time which vastly
increases the amount of information which can be transmitted.
Conventional animation is defined as a technique in which the illusion of
movement is created by photographing a series of individual drawings on
successive frames of film.
Computer animation is the use of computer to create animation.
Virtual entities may contain and be controlled by attributes such as location,
orientation and scale.
Animation is the change of an attribute over time. It generally refers to any
time sequence of visual changes in a scene.
To create the illusion of movement, an image is displayed on the computer
screen and then quickly replaced by a new image that is similar to the
previous image but shifted slightly.
To trick the eye and brain into thinking they are seeing a smoothly moving
object, the images should be drawn at around 12 frames per second or
faster.
A typical animation sequence for animated cartoons is:

o Storyboard layout
o Object definitions
o Key-frame specifications
o Generation of in-between frames
o Recording animation sequence
Computer animation demands higher frame rates as it produces more
realistic images.
o Computer assisted and Computer generated
Computer assisted animation is mostly 2-dimensional that computerize the
traditional animation process.
Computer generated or modeled animation or 3D animation utilizes
available computer graphics and CAD techniques to create images, scenes
and movements.
Applications of Computer animation
o Engineering, Educational
o Entertainment, Advertising, Art
o Architecture, Forensics, Medicine
o Military and Space exploration.
o In CAD :
Kinematic Simulation
Analysis of Linkage mechanism
Planning of a robotic work cycle
Types of Animation
o Frame buffer Animation
It provides the illusion of animation for a variety of applications.
Limited real-time animation


Uses static pictures which are stored in the image memory of
a digital frame buffer.
The image once created and stored in the memory of the
frame buffer remains completely unchanged.
Dynamics are added by modifying the pattern in which pixels
are read from memory.
Techniques: Colour table animation, Zoom-pan-scroll
animation, Cross-bar animation
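Colour-table animation illustrates the frame-buffer idea well: the pixel values in memory never change, and motion comes from rotating the colour lookup table between frames. The palette and pixel contents below are illustrative assumptions:

```python
# Colour-table animation sketch: the frame buffer keeps the same palette
# indices, and only the lookup table is rotated each frame.

def cycle_palette(palette, steps=1):
    """Rotate the colour lookup table; stored pixel indices stay untouched."""
    steps %= len(palette)
    return palette[steps:] + palette[:steps]

def render(pixels, palette):
    """Map stored palette indices to displayed colours."""
    return [palette[i] for i in pixels]

pixels = [0, 1, 2, 1, 0]                # static frame-buffer contents
palette = ["red", "green", "blue"]
frame1 = render(pixels, palette)        # ['red','green','blue','green','red']
frame2 = render(pixels, cycle_palette(palette))   # colours shift, pixels don't
```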


o Real-Time Playback
Frames are generated in advance at non-real time rates and
stored on a file.
The frames are then later displayed to create the animation.
Frame rate determines how smoothly the animation plays
back.
The frames are recorded and then played back at the rate
required for real time presentation.
o Real Time Animation
Creating animations in real time is called real-time or live animation.
This animation is limited by the capabilities of the computer
and data transfer rates.
Very complex animation is possible in a short time with the
development in parallel processing and multiprocessors.
Computer Animation Techniques
o Keyframing
A keyframe is a detailed drawing of the scene at a certain time
in the animation sequence.
Animator specifies the critical key points.


Then the computer automatically generates the intermediate
frames, called in-betweens.
Using interpolation techniques, a huge amount of time can be
saved.
The animator has direct control over the position, shapes and
motions of models at any moment in the animation.
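The in-betweening step described above can be sketched with plain linear interpolation. The keyframe values (here, an object's x-position) and the frame count are illustrative assumptions:

```python
# Generating in-betweens from two keyframes by linear interpolation.

def inbetweens(key_a, key_b, count):
    """Generate `count` intermediate frame values between two keyframes."""
    frames = []
    for i in range(1, count + 1):
        t = i / (count + 1)               # fraction of the way from a to b
        frames.append(key_a + (key_b - key_a) * t)
    return frames

# Three in-betweens carry the object smoothly from x=0 to x=100.
mids = inbetweens(0.0, 100.0, 3)          # [25.0, 50.0, 75.0]
```

Production systems typically replace the linear ramp with spline or ease-in/ease-out curves, but the division of labour is the same: the animator sets the keys, the computer fills the gaps.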
o Simulation/Procedural
Also called algorithmic animation or simulation.
The computer procedurally follows a set of rules to generate
the motion.
The animator specifies the parameters and initial conditions
and runs simulation.
The effect of changing a parameter is often unpredictable and
the animator has to run a simulation to see the result.
It is easy to generate a family of similar motions.
It can be used for complex systems.
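A minimal example of procedural animation is integrating a physical rule to produce each frame. The time step, initial conditions and Euler integration scheme here are illustrative assumptions:

```python
# Procedural animation sketch: the motion is not drawn, it is produced
# by a rule -- here, simple Euler integration of gravity.  Each step of
# the loop yields one frame of the animation.

def simulate_fall(y0, v0, g=-9.81, dt=0.1, steps=10):
    """Advance position and velocity by a fixed rule; return one value per frame."""
    y, v, frames = y0, v0, []
    for _ in range(steps):
        v += g * dt          # update velocity from the rule's parameters
        y += v * dt          # update position from the new velocity
        frames.append(y)
    return frames

# The animator sets the parameters and runs the simulation to see the
# result: here the object simply moves downward frame after frame.
path = simulate_fall(y0=10.0, v0=0.0)
```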
o Motion Capture
Also called mocap or performance animation.
Special sensors called trackers, record the motions of a human
or animal in three dimensions.
This data is then used by the computer to generate the motion
for an animation.
Special puppets with joint angle sensors can also be used in
the place of human performers.
It is a popular technique with animators.
Motion capture enables famous athletes to supply the
actions for characters in sports video games.
o Combinations

Computer animation hardware and software


o The quality of the computer animation produced also depends on
the hardware and software used.
o Commonly used hardware are Silicon Graphics Inc (SGI), PCs,
Macintosh and Amiga.
o Popular softwares are 3D Studio Max, Light Wave 3D, Adobe
Premiere, Alias wavefront, Animator Studio, Soft Image etc.
Animation problems
o Frame to frame flicker
The blank period between erasing and generating the contents
of the pixels of a graphics display caused a blinking effect
called flicker.
o Frame to frame discontinuity
If the time sampling rate is not adequate, it results in
discontinuous and jerky motion.
o Spatial aliasing
This results in jagged edges.
Anti-aliasing techniques can be used to solve the problem.
o Object interactions
The problem of detecting and controlling object interactions is
encountered when several objects are animated at once in a
scene.
In some systems, animator visually inspects the scene for
object interaction.
This is a time consuming and difficult process.
Collision detection and collision response algorithms are
available.
