UNIT-5: Illumination and Colour Models, Animation
Computer Graphics
Dr. Himadri B.G.S. Bhuyan
1 Light sources
• In the world of computer graphics, light sources play an essential role in bringing realism and
visual appeal to the computer-generated environment.
• A light source is an object that emits radiant energy, such as a light bulb, a lamp, a fluorescent tube, or the sun.
• In general, a light source is any light-emitting source. There are three basic types of light source:
– Point source: A source that emits rays in all directions. Example: a bulb in a room.
– Parallel source: When a point source is at an infinite distance, the light rays become parallel and the source acts as a parallel source. Example: sunlight. It is also called a directional light source.
– Distributed light source: Represents a light source with a finite (not small) area, such as a window or a fluorescent tube.
• Color: Wavelength composition of the light, which determines its perceived color (red, green,
blue, etc.).
• Direction: Determines the angle at which light hits a surface, affecting shadow formation and
diffuse reflection.
– The amount of incident light reflected by a surface depends on the type of material. A shiny material reflects more of the incident light, while a dull surface absorbs more of it. For transparent surfaces, some of the incident light is reflected and some is transmitted through the material.
– The ratio of the light reflected from the surface to the total incoming light (falling on the surface) is called the coefficient of reflection, or reflectivity (K). K = R/I′, where I′ = light from the source (I) + light reflected from other surrounding objects.
– Surfaces that are rough or grainy tend to scatter the reflected light in all directions. When the reflected light is of equal intensity in every direction (up, down, left, right), that is, when the reflection is constant over each surface of the object, it is called diffuse reflection. Diffuse reflections are independent of the viewing direction.
– In addition to diffuse reflection, light sources create highlights, or bright spots, called
specular reflection. This highlighting effect is more pronounced on shiny surfaces than
on dull surfaces.
• Ambient light: A surface that is not exposed directly to a light source will still be visible if nearby objects are illuminated. The combination of light reflections from various surfaces that produces a uniform illumination is called ambient light or background light. That is, the amount of ambient light incident on each object is a constant for all surfaces and over all directions.
• The lighting effect depends on the following factors: the light source, the surface structure (which decides the amount of light reflected and absorbed), the observer (the observer’s position and sensor spectral sensitivities), and the background lighting conditions.
• All light sources are considered to be point sources, specified with a co-ordinate position and
an intensity value (color).
– Ambient Reflection
– Diffuse Reflection (Lambert’s Law)
– Specular Reflection (Phong model)
• Ambient light means the light that is already present in a scene, before any additional lighting
is added. It usually refers to natural light, either outdoors or coming through windows etc.
It can also mean artificial lights such as normal room lights.
• Ambient light produces a form of diffuse reflection that is independent of the viewing direction and of the spatial orientation of the surface.
• Ambient light has no spatial or directional characteristics, and the amount of ambient light incident on each object is a constant for all surfaces and over all directions.
• If we assume that ambient light falls equally on all surfaces from all directions, then
I = Ia ∗ Ka
where Ia = intensity of the ambient light, and Ka = the ambient-reflection coefficient, 0 ≤ Ka ≤ 1. The amount of ambient light reflected from an object’s surface is determined by Ka.
• If Kd = Diffuse reflectivity, and Il = intensity of the point light source, then the diffuse
reflection equation for a point on the surface can be written as
I = Kd ∗ Il ∗ cosθ
• In this equation, I varies inversely with θ: a decrease in θ leads to more reflection, and vice versa.
– If θ = 0° then the intensity of the reflected light is maximum, i.e., I = Kd ∗ Il (since cos 0° = 1).
– If θ = 90° then the intensity of the reflected light is I = 0 (since cos 90° = 0).
• A surface is illuminated by a point source only if the angle of incidence is in the range 0° to 90° (cos θ is in the interval from 0 to 1).
• If N is the unit normal vector to a surface and L is the unit direction vector to the point light source from a position on the surface, then cos θ = N · L (dot product of N and L), so
I = Kd ∗ Il ∗ (N · L)
Combining the ambient and diffuse terms,
I(diff) = Ka ∗ Ia + Kd ∗ Il ∗ cos θ
=⇒ I(diff) = Ka ∗ Ia + Kd ∗ Il ∗ (N · L)
Exercise problem: Consider a shiny surface with a diffuse reflection coefficient of 0.8 and an ambient coefficient of 0.7. The surface has its normal in the direction 2i + 3j + 4k. Light is incident on this surface from the direction i + j + k, and the ambient and light-source intensities are 2 and 3 units respectively. Determine the intensity of the reflected light.
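A quick numeric check of this exercise, as a minimal Python sketch (variable names are illustrative): normalize N and L, take their dot product, and apply I = Ka ∗ Ia + Kd ∗ Il ∗ (N · L).

```python
import math

Ka, Kd = 0.7, 0.8          # ambient and diffuse coefficients (from the problem)
Ia, Il = 2.0, 3.0          # ambient and point-source intensities

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

N = normalize((2, 3, 4))   # unit surface normal
L = normalize((1, 1, 1))   # unit vector toward the light source

cos_theta = sum(n * l for n, l in zip(N, L))   # N . L = 9 / sqrt(87) ~ 0.965
I = Ka * Ia + Kd * Il * cos_theta              # 0.7*2 + 0.8*3*0.965
print(round(I, 3))                             # ~3.716 units
```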
• The specular-reflection angle equals the angle of incidence. Refer to the figure below.
• In this figure, we use R to represent the unit vector in the direction of ideal specular reflection;
L to represent the unit vector directed toward the point light source; and V as the unit vector
pointing to the viewer from the surface position.
• For an ideal reflector (perfect mirror), incident light is reflected only in the specular-reflection
direction. In this case, we would only see reflected light when vectors V and R coincide (θ =
0).
– Angle θ can be assigned values in the range 0° to 90°, so that cos θ varies from 0 to 1.
– The value assigned to the specular-reflection parameter ns is determined by the type of surface that we want to display.
∗ A very shiny surface is modeled with a large value for ns (say, 100 or more), and smaller values (down to 1) are used for duller surfaces.
∗ For a perfect reflector, ns is infinite.
– We can approximately model monochromatic specular intensity variations using a specular-reflection coefficient W(θ) for each surface.
– In general, W(θ) tends to increase as the angle of incidence θ increases.
– Using the specular-reflection function W(θ), we can write the Phong specular-reflection model as
Ispec = W(θ) ∗ Il ∗ (cos φ)^ns
where φ is the angle between the viewing vector V and the specular-reflection vector R.
– Since V and R are unit vectors in the viewing and specular-reflection directions, we can calculate cos φ as V · R (dot product of V and R). Setting W(θ) to a constant specular-reflection coefficient Ks gives
Ispec = Ks ∗ Il ∗ (V · R)^ns
– A simplified Phong model is obtained by using the halfway vector H between L and V to calculate the range of specular reflections.
– Here, H = (L + V)/|L + V|
– If we replace V · R in the Phong model with the dot product N · H, this simply replaces the empirical cos φ calculation with the empirical cos α calculation, where α is the angle between N and H. That is, cos α = N · H. Now,
Ispec = Ks ∗ Il ∗ (N · H)^ns
• For a single point light source, we can model the combined diffuse and specular reflections from a point on an illuminated surface as
I = Ka ∗ Ia + Kd ∗ Il ∗ (N · L) + Ks ∗ Il ∗ (N · H)^ns
• For M point light sources, the above equation can be modified as
I = Ka ∗ Ia + Σ_{i=1}^{M} Il,i ∗ [Kd ∗ (N · Li) + Ks ∗ (N · Hi)^ns]
• Radiant energy from a point source is attenuated with the distance d it travels, which we can model with an inverse quadratic attenuation function f(d). This means that a surface close to the light source (small d) receives a higher incident intensity from the source than a distant surface (large d).
– A user can fiddle with the coefficients a0, a1, and a2 to obtain a variety of lighting effects for a scene.
– The value of the constant term a0 can be adjusted to prevent f(d) from becoming too large when d is very small.
• With a given set of attenuation coefficients, we can limit the magnitude of the attenuation function to 1 as
f(d) = min(1, 1/(a0 + a1 d + a2 d²))
• In the presence of attenuation, our basic illumination model can be reformulated as
I = Ka ∗ Ia + Σ_{i=1}^{M} fi(d) ∗ Il,i ∗ [Kd ∗ (N · Li) + Ks ∗ (N · Hi)^ns]
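As a hedged sketch of the model above (the helper names and the toy scene values are assumptions, not from the text), the single-source version with attenuation and the halfway vector can be coded as:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attenuation(d, a0=1.0, a1=0.05, a2=0.01):
    # f(d) = min(1, 1 / (a0 + a1*d + a2*d^2))
    return min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))

def illuminate(N, L, V, d, Ka=0.2, Kd=0.6, Ks=0.4, ns=50, Ia=1.0, Il=1.0):
    # I = Ka*Ia + f(d) * Il * [Kd*(N.L) + Ks*(N.H)^ns]
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize(tuple(l + v for l, v in zip(L, V)))   # halfway vector
    diff = max(dot(N, L), 0.0)                          # clamp back-facing light
    spec = max(dot(N, H), 0.0) ** ns
    return Ka * Ia + attenuation(d) * Il * (Kd * diff + Ks * spec)

print(illuminate(N=(0, 0, 1), L=(1, 1, 1), V=(0, 0, 1), d=5.0))  # ~0.43
```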
• Surfaces typically are illuminated with white light sources, and in general we can set surface
color so that the reflected light has nonzero values for all three RGB components.
• Calculated intensity levels for each color component can be used to adjust the corresponding
electron gun in an RGB monitor.
• In his original specular-reflection model, Phong set the parameter Ks to a constant value independent of the surface color. This produces specular reflections that are the same color as the incident light (usually white).
2.5 Transparency
• A transparent surface, in general, produces both reflected and transmitted light.
• The relative contribution of the transmitted light depends on the degree of transparency of
the surface and whether any light sources or illuminated surfaces are behind the transparent
surface.
• We can combine the transmitted intensity Itrans through a surface from a background object with the reflected intensity Irefl from the transparent surface using a transparency coefficient Kt:
I = (1 − Kt) ∗ Irefl + Kt ∗ Itrans
• For a highly transparent object, Kt ≈ 1, which implies opacity ≈ 0. For an opaque object, Kt ≈ 0, that is, opacity ≈ 1.
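A minimal sketch of this blend applied per RGB channel (the sample intensities are illustrative):

```python
# Transparency blend I = (1 - Kt)*I_refl + Kt*I_trans, per RGB channel.
def blend(i_refl, i_trans, kt):
    return tuple((1 - kt) * r + kt * t for r, t in zip(i_refl, i_trans))

# A mostly transparent surface (Kt = 0.7): the transmitted color dominates.
print(blend((0.9, 0.2, 0.2), (0.1, 0.1, 0.8), kt=0.7))
```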
2.6 Shadows
• By applying a hidden-surface method with a light source at the view position, we can deter-
mine which surface sections cannot be ”seen” from the light source.
• Once we have determined the shadow areas for all light sources, the shadows could be treated
as surface patterns and stored in pattern arrays.
• Chromaticity Diagram: A horseshoe-shaped diagram showing the range of colors that can be
created by mixing standard primaries (R, G, B) in various proportions.
– Chromaticity comprises two parameters, hue and saturation. When we put hue and saturation together, the combination is known as chrominance.
– The chromaticity diagram represents visible colours using x and y as the horizontal and vertical axes.
– The saturated pure spectral colours are represented along the perimeter of the horseshoe-shaped curve.
– The point C marked in the chromaticity diagram represents a particular white light formed by combining colours with wavelengths RED: 700 nm, GREEN: 546.1 nm, and BLUE: 435.8 nm.
– In the chromaticity diagram, colours on the boundary are completely saturated. The corners of the diagram correspond to the three primary colours (red, green, and blue).
• These colour concepts often don’t translate directly into the specific colour models used in computers, but they provide a foundation for understanding those models.
• Mixing Colors: We can create new colors by mixing existing ones. For example, mixing red
and yellow gives orange.
• Shades, Tints, and Tones: We can adjust a color’s brightness and saturation. A shade is a
color made darker by adding black, a tint is made lighter by adding white, and a tone is made
less saturated by adding gray.
• Complementary Colors: Colors opposite each other on a color wheel tend to contrast strongly,
creating a visually pleasing effect.
4 Color Model
• The purpose of a color model is to facilitate the specification of colors in some standard
generally accepted way.
• A color model is a mathematical system or standard that describes how colors can be repre-
sented using numerical values.
• These numerical values typically consist of a combination of two, three, or four components, which often correspond to specific colours such as red, green, blue, cyan, magenta, yellow, or black (depending on the model).
• The values of these components can be integers (e.g., 0-255) or floating-point numbers (e.g., 0.0-1.0), representing the intensity or concentration of each colour component. Commonly used colour models include:
– RGB
– CMYK
– YIQ
– HSV
– HLS
• The choice of color model depends on the specific application.
• The RGB model is called additive, and the colors red, green, and blue are called primary colors.
• The primary colors can be added to produce the secondary colors of light: magenta (red +
blue), cyan (green + blue), and yellow (red + green). The combination of red, green, and
blue at full intensities makes white.
• RGB is an additive model, represented by a unit cube defined on the R, G, and B axes. The origin represents black, and the vertex with coordinates (1, 1, 1) represents white.
• The color subspace of interest is the cube shown in Figure “RGB and CMY Color Models” (RGB values are normalized to 0..1), in which red, green, and blue are at three corners; cyan, magenta, and yellow are at three other corners; black is at the origin; and white is at the corner farthest from the origin.
• The gray scale extends from black to white along the diagonal joining these two points. The
colors are the points on or inside the cube, defined by vectors extending from the origin.
• Thus, images in the RGB color model consist of three independent image planes, one for each
primary color.
• Advantages
– The importance of the RGB color model is that it relates very closely to the way that
the human eye perceives color.
– RGB is a basic color model for computer graphics because color displays use red, green,
and blue to create the desired color. Therefore, the choice of the RGB color space
simplifies the architecture and design of the system.
– Besides, a system that is designed using the RGB color space can take advantage of a
large number of existing software routines, because this color space has been around for
a number of years.
• Disadvantages
• CMYK is an acronym for cyan, magenta, and yellow, along with black (noted as K).
• The CMYK color model is subtractive: cyan, magenta, yellow, and black pigments or inks are applied to a white surface to subtract some color from the white surface and thereby create the final color.
– For example (refer to the figure “Primary (RGB) and Secondary Colors CMYK” below), cyan is white minus red, magenta is white minus green, and yellow is white minus blue. Subtracting all colors by combining CMY at full saturation should, in theory, render black.
• However, impurities in the existing CMY inks make full and equal saturation impossible, and
some RGB light does filter through, rendering a muddy brown color.
• The CMY cube is shown in the figure above, in which cyan, magenta, and yellow are at three corners; red, green, and blue are at the three other corners; white is at the origin; and black is at the corner farthest from the origin.
• The YIQ model is a rotation of the RGB colour space such that the Y axis contains the luminance information, allowing backwards compatibility with black-and-white TVs, which display only this axis of the colour space.
• Y (Luminance): This carries the black and white (brightness) information of the image. It’s
essentially the same as the original black and white signal.
• I (In-phase): This component encodes part of the chrominance (color) information, roughly along an orange-blue axis.
• Q (Quadrature): This component carries the remaining chrominance information, roughly along a purple-green axis.
• Benefits of YIQ
– Compatibility: Since Y contains the luminance information, even black and white
televisions could display a somewhat distorted version of the image (lacking color but
showing brightness variations). This ensured backward compatibility with existing black
and white sets during the transition to color television.
– Bandwidth Efficiency: YIQ encoding compresses the color information more efficiently
compared to transmitting a full RGB signal. This was crucial for limited bandwidth
available in analog television broadcasts.
• Hue-based models such as HSV and HLS are useful for image-editing software because they allow artists to adjust colors based on intuitive properties (hue, saturation, brightness/lightness).
• The HSV color wheel may be depicted as a cone or cylinder. Instead of Value, the color model
may use Brightness, making it HSB (Photoshop uses HSB). See the figure below
• Hue is expressed as a number from 0 to 360 degrees representing hues of red (starts at 0),
yellow (starts at 60), green (starts at 120), cyan (starts at 180), blue (starts at 240), and
magenta (starts at 300). The above figure shows HSV color model and RGB to HSV mapping
in a tabular form.
• Value (or Brightness) works in conjunction with saturation and describes the brightness or
intensity of the color from 0% to 100%.
• Hue is represented as an angle about the vertical axis ranging from 0° to 360°. S varies from 0 to 1, and V varies from 0 to 1.
• As in the HSV model, in HLS hue indicates the color sensation of the light, in other words whether the color is red, yellow, green, cyan, blue, or magenta. This representation looks almost the same as the visible spectrum of light, except that on the right is now the color magenta (the combination of red and blue) instead of violet (light with a frequency higher than blue).
• Also as in the HSV model, saturation indicates the degree to which the hue differs from a neutral gray. The values run from 0%, which is no color, to 100%, which is the fullest saturation of a given hue at a given percentage of illumination. The more the spectrum of the light is concentrated around one wavelength, the more saturated the color will be.
• Lightness indicates the illumination of the color: at 0% the color is completely black, at 50% the color is pure, and at 100% it becomes white. In HLS, a color with maximum lightness is always white, no matter what the hue or saturation components are.

RGB to HSV Conversion
First, normalize the RGB values to the range [0, 1]:
R′ = R/255, G′ = G/255, B′ = B/255
where R′, G′, B′ are the normalized values. Then compute
Cmax = max(R′, G′, B′)
Cmin = min(R′, G′, B′)
∆ = Cmax − Cmin
where ∆ represents the chroma (color intensity).
• If Cmax = R′ , then
G′ − B ′
H = 60 × mod 6
∆
• If Cmax = G′ , then
B ′ − R′
H = 60 × +2
∆
• If Cmax = B ′ , then
R′ − G′
H = 60 × +4
∆
If H < 0, add 360◦ to ensure it remains within the range [0◦ , 360◦ ].
S = S × 100
V = Cmax
To express V as a percentage:
V = V × 100
Example: Convert RGB (100, 50, 150) to HSV.
R′ = 100/255 ≈ 0.392, G′ = 50/255 ≈ 0.196, B′ = 150/255 ≈ 0.588
Cmax = 0.588 (= B′), Cmin = 0.196, ∆ = 0.392
Since Cmax = B′,
H = 60° × ((R′ − G′)/∆ + 4) = 60° × ((0.392 − 0.196)/0.392 + 4) = 60° × (0.5 + 4) = 60° × 4.5 = 270°
S = ∆/Cmax = 0.392/0.588 ≈ 0.667, i.e., 66.7%
V = Cmax = 0.588, i.e., 58.8%
So RGB (100, 50, 150) ≈ HSV (270°, 66.7%, 58.8%).
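The conversion is easy to code directly from these formulas. A minimal Python sketch follows (the function name and rounding are illustrative; the standard-library colorsys module is used only as a sanity check):

```python
import colorsys  # standard library, used here only as a sanity check

def rgb_to_hsv(r, g, b):
    rp, gp, bp = r / 255, g / 255, b / 255            # normalize to [0, 1]
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin                               # chroma
    if delta == 0:
        h = 0.0
    elif cmax == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta + 2)
    else:                                             # cmax == bp
        h = 60 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax
    return h, s * 100, cmax * 100                     # H in degrees, S and V in %

print(rgb_to_hsv(100, 50, 150))        # ~(270.0, 66.7, 58.8), as in the example
print(colorsys.rgb_to_hsv(100/255, 50/255, 150/255))  # H is 0.75 here, i.e. 270/360
```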
HSV to RGB Conversion
• H (Hue) is in the range [0°, 360°]. S (Saturation) is in the range [0, 1] and represents the intensity of the color; V (Value) is also in [0, 1].
If S and V are given as percentages (0% − 100%), convert them to the range [0, 1]:
S = S/100, V = V/100
Then compute
C = V × S
X = C × (1 − |(H/60) mod 2 − 1|)
m = V − C
and pick (R′, G′, B′) according to the 60° sector of H: (C, X, 0) for [0°, 60°), (X, C, 0) for [60°, 120°), (0, C, X) for [120°, 180°), (0, X, C) for [180°, 240°), (X, 0, C) for [240°, 300°), and (C, 0, X) for [300°, 360°). Finally,
R = (R′ + m) × 255
G = (G′ + m) × 255
B = (B′ + m) × 255
Cmax = max(R′ , G′ , B ′ )
Cmin = min(R′ , G′ , B ′ )
∆ = Cmax − Cmin
21
• If Cmax = R′ , then
G′ − B ′
H = 60 × mod 6
∆
• If Cmax = G′ , then
B ′ − R′
H = 60 × +2
∆
• If Cmax = B ′ , then
R′ − G′
H = 60 × +4
∆
If H < 0, add 360◦ to ensure it remains within the range [0◦ , 360◦ ].
H ◦, L%, S%
HSL to RGB Conversion
If S and L are given as percentages (0% − 100%), convert them to the range [0, 1]:
S = S/100, L = L/100
Then compute
C = (1 − |2L − 1|) × S
X = C × (1 − |(H/60) mod 2 − 1|)
m = L − C/2
and pick (R′, G′, B′) according to the 60° sector of H, exactly as in the HSV-to-RGB conversion. Finally,
R = (R′ + m) × 255
G = (G′ + m) × 255
B = (B′ + m) × 255
6 Halftoning
• Halftoning approximates a continuous-tone image with patterns of dots. In traditional newspaper and magazine production, this process is carried out photographically by projection of a transparency through a ’halftone screen’ onto film.
• Different screens can be used to control the size and shape of the dots in the halftoned image.
• A fine grid, with a ’screen frequency’ of 200-300 lines per inch, gives the image quality necessary
for magazine production.
• A screen frequency of 85 lines per inch is deemed acceptable for newspapers.
• A simple digital halftoning technique known as patterning involves replacing each pixel by a
pattern taken from a ’binary font’.
• Below Figure shows such a font, made up of ten 3 x 3 matrices of pixels. This font can be
used to print an image consisting of ten grey levels.
• A pixel with a grey level of 0 is replaced by a matrix containing no white pixels; a pixel with
a grey level of 1 is replaced by a matrix containing a single white pixel; and so on.
• Note that, since we are replacing each pixel by a 3 x 3 block of pixels, both the width and the
height of the image increase by a factor of 3.
• Below Figure shows an example of halftoning using the binary font.
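A minimal sketch of patterning, assuming NumPy is available; the fill order of the 3 × 3 font below is an illustrative choice, not the font from the figure:

```python
import numpy as np  # assumption: NumPy is available

# Order in which the 9 cells of the 3x3 block turn white as the grey level rises.
ORDER = [(1, 1), (0, 0), (2, 2), (0, 2), (2, 0), (0, 1), (1, 0), (1, 2), (2, 1)]

def binary_font(level):
    # A pixel of grey level g (0-9) becomes a 3x3 block with g white pixels.
    block = np.zeros((3, 3), dtype=np.uint8)
    for r, c in ORDER[:level]:
        block[r, c] = 1                      # 1 = white
    return block

def pattern_halftone(grey):                  # grey: 2-D array of levels 0..9
    h, w = grey.shape
    out = np.zeros((3 * h, 3 * w), dtype=np.uint8)   # image grows 3x each way
    for i in range(h):
        for j in range(w):
            out[3*i:3*i+3, 3*j:3*j+3] = binary_font(int(grey[i, j]))
    return out

print(pattern_halftone(np.array([[0, 5], [9, 3]])))
```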
6.1 Dithering
Another technique for digital halftoning is dithering.
• Dithering can be accomplished by thresholding the image against a dither matrix.
• The elements of a dither matrix are thresholds. Standard dither matrices (for example, the Bayer matrices) are re-scaled for application to 8-bit images.
• The matrix is laid like a tile over the entire image and each pixel value is compared with the
corresponding threshold from the matrix.
• The pixel becomes white if its value exceeds the threshold or black otherwise.
• This approach produces an output image with the same dimensions as the input image, but
with less detail visible.
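A minimal sketch of ordered dithering with a 2 × 2 Bayer matrix, assuming NumPy; the threshold re-scaling is one common choice, not necessarily the matrices referred to above:

```python
import numpy as np  # assumption: NumPy is available

BAYER2 = np.array([[0, 2], [3, 1]])          # 2x2 Bayer dither matrix
THRESHOLDS = (BAYER2 + 0.5) * 256 / 4        # 8-bit scaling: [[32,160],[224,96]]

def dither(grey):                            # grey: 2-D uint8 image
    h, w = grey.shape
    # Tile the matrix over the image and compare each pixel to its threshold.
    tiled = np.tile(THRESHOLDS, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return (grey > tiled).astype(np.uint8) * 255   # white where value exceeds

gradient = (np.arange(64, dtype=np.uint8).reshape(8, 8)) * 4
print(dither(gradient))                      # same size as input, two levels
```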
7 Animation
• Animation in computer graphics is the process of creating the illusion of movement by dis-
playing a sequence of images, one after the other, very rapidly. Each image, or frame, differs
slightly from the one before it, and when played back quickly, our eyes perceive them as
continuous motion.
• Animation includes all the visual changes on the screen of display devices. These are:
– Change of shape
– Change in size
– Change in color
– Change in structure
– Change in angle
7.1 Applications
There are several applications of the animation. Some of them are given below
• Education and Training: Animation is used in schools, colleges, and training centres for educational purposes. Flight simulators for aircraft are also animation based.
• Entertainment: Animation methods are now commonly used in making motion pictures,
music videos and television shows, etc.
• Computer Aided Design (CAD): One of the best applications of computer animation is Computer Aided Design, generally referred to as CAD. One of the earlier applications of CAD was automobile design, but now almost all types of design are done using CAD applications, and much of this work would not be possible without animation.
• Advertising: This is one of the significant applications of computer animation. The most important advantage of an animated advertisement is that it takes very little space and captures people’s attention.
An animation sequence is generally designed with the following steps:
– Storyboard layout
– Object definitions
– Key-frame specifications
– Generation of in-between frames
• Storyboard layout: It is an outline of the action. It defines the motion sequence as a set
of basic events that are to take place. Depending on the type of animation to be produced,
the storyboard could consist of a set of rough sketches, along with a brief description of the
movements, or it could just be a list of the basic ideas for the action. Originally, the set of
motion sketches was attached to a large board that was used to present an overall view of the
animation project. Hence, the name ”storyboard.”
• Object definitions: It is given for each participant in the action. Objects can be defined in
terms of basic shapes, such as polygons or spline surfaces. In addition, a description is often
given of the movements that are to be performed by each character or object in the story.
• Key-frame specifications: It is a detailed drawing of the scene at a certain time in the ani-
mation sequence. Within each key frame, each object (or character) is positioned according to
the time for that frame. Some key frames are chosen at extreme positions in the action; others
are spaced so that the time interval between key frames is not too great. More key frames
are specified for intricate motions than for simple, slowly varying motions. Development of
the key frames is generally the responsibility of the senior animators, and often a separate
animator is assigned to each character in the animation.
• Generation of in-between frames: In-betweens are the intermediate frames between the
key frames. The total number of frames, and hence the total number of in-betweens, needed
for an animation is determined by the display media that is to be used. Film requires 24
frames per second, and graphics terminals are refreshed at the rate of 60 or more frames per
second. Typically, time intervals for the motion are set up so that there are from three to five
in-betweens for each pair of key frames. Depending on the speed specified for the motion, some
key frames could be duplicated. As an example, a 1-minute film sequence with no duplication
requires a total of 1,440 frames.
• If we want to generate an animation in real time, however, we need to produce the motion
frames quickly enough so that a continuous motion sequence is displayed.
• For a complex scene, one frame of the animation could take most of the refresh cycle time
to construct. In that case, objects generated first would be displayed for most of the frame
refresh time, but objects generated toward the end of the refresh cycle would disappear almost
as soon as they were displayed.
• For very complex animations, the frame construction time could be greater than the time to refresh the screen, which can lead to erratic motion and fractured frame displays. Because the screen display is generated from successively modified pixel values in the refresh buffer, we can take advantage of some characteristics of the raster screen-refresh process to produce motion sequences quickly. For smooth animation, the frame construction time should therefore be less than the screen refresh time.
• There are two methods by which we can generate real-time raster animations (a sketch of double buffering follows below):
– Double Buffering
– Raster Operations
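As a hedged, framework-agnostic sketch of double buffering (the display helper is a stand-in, not a real graphics API): each frame is drawn into an off-screen buffer and then swapped with the displayed buffer, so a viewer never sees a half-constructed frame.

```python
import time

WIDTH, HEIGHT = 8, 4
front = [[0] * WIDTH for _ in range(HEIGHT)]   # buffer currently shown
back = [[0] * WIDTH for _ in range(HEIGHT)]    # buffer being drawn into

def display(buf):                              # stand-in for a real present call
    print('\n'.join(''.join('#' if p else '.' for p in row) for row in buf))

for frame in range(3):
    for row in back:                           # 1. construct the frame off screen
        row[:] = [0] * WIDTH
    back[frame % HEIGHT][frame % WIDTH] = 1    # a moving dot
    front, back = back, front                  # 2. swap the two buffers
    display(front)                             # 3. show only the completed frame
    time.sleep(1 / 60)                         # pace roughly to the refresh rate
```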
• A simple method for translating an object from one location to another in the xy plane is to
transfer the group of pixel values that define the shape of the object to the new location.
• Two-dimensional rotations in multiples of 90° are also simple to perform, although we can rotate rectangular blocks of pixels through other angles using anti-aliasing procedures. For a rotation that is not a multiple of 90°, we need to estimate the percentage of area coverage for those pixels that overlap the rotated block.
• Sequences of raster operations can be executed to produce realtime animation for either two-
dimensional or three-dimensional objects, so long as we restrict the animation to motions in
the projection plane. Then no viewing or visible-surface algorithms need be invoked.
• For complex scenes, we can separate the frames into individual components or objects called
cels (celluloid transparencies). This term developed from cartoon animation techniques where
the background and each character in a scene were placed on a separate transparency. Then,
with the transparencies stacked in the order from background to foreground, they were pho-
tographed to obtain the completed frame. The specified animation paths are then used to
obtain the next cel for each character, where the positions are interpolated from the key-frame
times.
• With complex object transformations, the shapes of objects may change over time. Examples
are clothes, facial features, magnified detail, evolving shapes, and exploding or disintegrating
objects. For surfaces described with polygon meshes, these changes can result in significant
changes in polygon shape such that the number of edges in a polygon could be different from
one frame to the next. These changes are incorporated into the development of the in-between
frames by adding or subtracting polygon edges according to the requirements of the defining
key frames.
7.5.1 Morphing
Morphing is an animation function. Transformation of object shapes from one form to another is
termed morphing, which is a shortened form of “metamorphosing.” It is one of the most complicated
transformations. This function is commonly used in movies, cartoons, advertisement, and computer
games.
Morphing Process: The process of Morphing involves three steps:
• In the first step, an initial image and a final image are added to the morphing application, as shown in the figure: the 1st and 4th objects are considered key frames.
• The second step involves the selection of key points on both images for a smooth transition between the two images, as shown by the 2nd object.
• In the third step, the key points of the first image transform to the corresponding key points of the second image, as shown by the 3rd object of the figure.
7.5.2 Warping
The warping function (sometimes written as wrapping) is similar to the morphing function. It distorts only the initial image so that it matches the final image, and no fade occurs in this function.
7.5.3 Tweening
Tweening is the short form of ’inbetweening.’ Tweening is the process of generating intermediate frames between the initial and final key frames, as sketched below. This function is popular in the film industry.
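A minimal sketch of tweening by linear interpolation between two key-frame positions (values illustrative):

```python
def lerp(p0, p1, t):
    # Linear interpolation between two key positions at parameter t in [0, 1].
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

key_start, key_end = (0.0, 0.0), (100.0, 50.0)
n_inbetweens = 3                      # three to five is typical, per the text
for i in range(1, n_inbetweens + 1):
    t = i / (n_inbetweens + 1)        # fraction of the way between the keys
    print(f"in-between {i}: {lerp(key_start, key_end, t)}")
```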
7.5.4 Panning
Usually, panning refers to rotation of the camera in a horizontal plane. In computer graphics, panning relates to the movement of a fixed-size window across an object in a scene: whichever direction the fixed-size window moves, the object appears to move in the opposite direction, as shown in the figure. If the window moves in a backward direction, the object appears to move in the forward direction; if the window moves in a forward direction, the object appears to move in a backward direction.
7.5.5 Zooming
In zooming, we fix the window about an object and change the window’s size; the object then also appears to change in size. When the window is made smaller about a fixed center, the object inside the window appears enlarged. This feature is known as zooming in.
When we increase the size of the window about the fixed center, the object inside the window appears smaller. This feature is known as zooming out.
7.5.6 Fractals
The fractal function is used to generate a complex picture using iteration. Iteration means the repetition of a single formula again and again, each time with a slightly different value based on the previous iteration’s result. The results are displayed on the screen in the form of a picture, as in the sketch below.
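A tiny sketch of this idea using the well-known Mandelbrot iteration z → z² + c, rendered as ASCII (resolution and bounds are arbitrary choices):

```python
def escapes(c, max_iter=20):
    # Iterate z -> z*z + c; points that stay bounded belong to the set.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

for y in range(11):
    row = ''.join('.' if escapes(complex(-2 + x * 0.075, -1.1 + y * 0.22))
                  else '#' for x in range(40))
    print(row)
```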
The motions of objects in an animation can be specified kinematically by giving a velocity
vector for each object. For example, if a velocity is specified as (3, 0, -4) km per sec, then this
vector gives the direction for the straight-line motion path and the speed (magnitude of velocity)
vector gives the direction for the straight-line motion path and the speed (magnitude of velocity)
is calculated as 5 km per sec. If we also specify accelerations (rate of change of velocity), we can
generate speedups, slowdowns, and curved motion paths. Kinematic specification of a motion can
also be given by simply describing the motion path. This is often accomplished using spline curves.
An alternate approach is to use inverse kinematics. Here, we specify the initial and final po-
sitions of objects at specified times and the motion parameters are computed by the system. For
example, assuming zero acceleration, we can determine the constant velocity that will accomplish
the movement of an object from the initial position to the final position. This method is often used
with complex objects by giving the positions and orientations of an end node of an object, such as
a hand or a foot. The system then determines the motion parameters of other nodes to accomplish
the desired motion.
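A minimal sketch of this inverse-kinematic case (names illustrative): given initial and final positions and times, compute the constant velocity, here reproducing the (3, 0, -4) km/s example:

```python
def constant_velocity(p0, p1, t0, t1):
    # Zero acceleration assumed: v = (p1 - p0) / (t1 - t0).
    dt = t1 - t0
    return tuple((b - a) / dt for a, b in zip(p0, p1))

v = constant_velocity(p0=(0, 0, 0), p1=(9, 0, -12), t0=0.0, t1=3.0)
speed = sum(c * c for c in v) ** 0.5
print(v, speed)   # (3.0, 0.0, -4.0) km/s and speed 5.0 km/s, matching the text
```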
Dynamic descriptions, on the other hand, require the specification of the forces that produce the
velocities and accelerations. The description of object behavior in terms of the influence of forces
is generally referred to as physically based modeling. Examples of forces affecting object motion
include electromagnetic, gravitational, frictional, and other mechanical forces.