
Computer Graphics
UNIT-5: Illumination and Colour Models, Animation
Dr. Himadri B.G.S. Bhuyan

1 Light sources
• In computer graphics, light sources play an essential role in bringing realism and visual appeal to a computer-generated environment.

• A light source is an object that emits radiant energy, such as a light bulb, lamp, fluorescent tube, or the sun.

• Light sources that illuminate an object are of two types:

– Light-emitting sources: a bulb or the sun
– Light-reflecting sources: the walls of a room

• In general, "light source" refers to a light-emitting source. There are three basic light sources:

– Point source: a source that emits rays in all directions. Example: a bulb in a room.
– Parallel source: when a point source is at an infinite distance, the light rays become parallel and act as a parallel source. Example: sunlight. It is also called a directional light source.
– Distributed light source: represents a light source with a finite (non-negligible) area, such as a window, a fluorescent light, or a tube light.


1.1 Properties of light


• Intensity: Brightness of the light source, measured in watts or lumens.

• Color: Wavelength composition of the light, which determines its perceived color (red, green,
blue, etc.).

• Direction: Determines the angle at which light hits a surface, affecting shadow formation and
diffuse reflection.

1.2 Terminologies Related to light


• Reflection of light: When light is incident (I) on an opaque surface, part of it is reflected (R) and part of it is absorbed (A). So I = R + A.

– The amount of incident light reflected by a surface depends on the type of material. Shiny materials reflect more of the incident light, while dull surfaces absorb more of it. For transparent surfaces, some of the incident light will be reflected and some will be transmitted through the material.
– The ratio of the light reflected from the surface to the total incoming light (falling on the surface) is called the coefficient of reflection, or reflectance (K). K = R/I′, where I′ = light from the source (I) + light reflected from other surrounding objects.
– Surfaces that are rough or grainy tend to scatter the reflected light in all directions. When the same amount is reflected in every direction (up, down, left, right), i.e., the reflection is constant over the surface of the object, it is called diffuse reflection. Diffuse reflection is independent of the viewing direction.
– In addition to diffuse reflection, light sources create highlights, or bright spots, called specular reflection. This highlighting effect is more pronounced on shiny surfaces than on dull surfaces.

• Ambient light: A surface that is not exposed directly to a light source will still be visible if nearby objects are illuminated. The combination of light reflections from various surfaces that produces a uniform illumination is called ambient light, or background light. That is, the amount of ambient light incident on each object is a constant for all surfaces and over all directions.


2 Basic illumination models


• The illumination model, also known as the shading model or lighting model, is used to calculate the intensity of light reflected at a given point on a surface.

• The lighting effect depends on: the light source, the surface structure (which decides the amount of reflection and absorption of light), the observer (the observer's position and sensor spectrum sensitivities also affect the lighting effect), and the background lighting conditions.

• All light sources are considered to be point sources, specified with a co-ordinate position and
an intensity value (color).

• There are three basic illumination models

– Ambient Reflection
– Diffuse Reflection (Lambert’s Law)
– Specular Reflection (Phong model)

2.1 Ambient Reflection


• This is the simplest illumination model.

• Ambient light means the light that is already present in a scene, before any additional lighting
is added. It usually refers to natural light, either outdoors or coming through windows etc.
It can also mean artificial lights such as normal room lights.

• Multiple reflection of nearby (light-reflecting) objects yields a uniform illumination

• A form of diffuse reflection independent of the viewing direction and the spatial orientation of a surface.


• Ambient light has no spatial or directional characteristics, and the amount of ambient light incident on each object is a constant for all surfaces and over all directions.
• If we assume that ambient light influences all surfaces equally from all directions, then
I = Ia ∗ Ka
where Ia = intensity of the ambient light, and Ka = ambient-reflection coefficient, 0 ≤ Ka ≤ 1. The amount of light reflected from an object's surface is determined by Ka.

2.2 Diffuse Reflection (Lambert’s Law)


• We assume that the object is illuminated by light that does not come from any particular source but arrives from all directions (i.e., ambient light).
• If such illumination is uniform from all directions, it is called diffuse illumination.
• In practice, diffuse illumination is background light reflected from walls, the floor, and the ceiling.
• If a surface is exposed only to ambient light, we can express the intensity of the diffuse reflection at any point on the surface as
Iamb,diff = Kd ∗ Ia
where Kd = diffuse-reflection coefficient (diffuse reflectivity), 0 ≤ Kd ≤ 1, and Ia = intensity of the ambient light.
– If there is a highly reflective surface, then Kd ≈ 1
– To simulate a surface that absorbs most of the incident light, Kd ≈ 0
• The diffuse reflections from the surface are scattered with equal intensity in all directions, independent of the viewing direction. Such surfaces are called ideal diffuse reflectors. They are also called Lambertian reflectors, since the radiated light energy from any point on the surface is governed by Lambert's cosine law.
• If N is the unit normal vector to a surface and L is the unit direction vector to the point light source from a position on the surface, then by Lambert's law the intensity of the reflected light I ∝ cosθ, where θ = angle between N and L (the angle of incidence). Refer to the figure below.


• If Kd = Diffuse reflectivity, and Il = intensity of the point light source, then the diffuse
reflection equation for a point on the surface can be written as

I = Kd ∗ Il ∗ cosθ

• In this equation, I ∝ cosθ: a decrease in θ leads to more reflection, and vice versa.

– If θ = 0 then Intensity of the reflected light ’I’ will be maximum i.e I = Kd ∗ Il (Since
cos0 = 1)
– If θ = 90 then Intensity of the reflected light I = 0 (Since cos90 = 0)

• A surface is illuminated by a point source only if the angle of incidence is in the range 0◦ to 90◦ (cosθ is in the interval from 0 to 1).

• When cosθ is negative, the light source is ”behind” the surface.

• If N is the unit normal vector to a surface and L is the unit direction vector to the point light
source from a position on the surface, then cosθ = N . L ((Dot product of N and L))

• So, the diffuse reflection equation for single point-source illumination is

I = Kd ∗ Il ∗ N.L
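This single-source diffuse term can be written directly from the equation above. A minimal sketch in Python, assuming vectors given as 3-tuples; the function names and vector helpers are illustrative, not from the notes:

```python
import math

def normalize(v):
    # Scale a 3D vector to unit length.
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def diffuse_intensity(kd, il, n, l):
    # Lambertian diffuse term: I = Kd * Il * (N . L),
    # clamped to 0 when the light is behind the surface (cos(theta) < 0).
    n, l = normalize(n), normalize(l)
    cos_theta = sum(a * b for a, b in zip(n, l))
    return kd * il * max(0.0, cos_theta)
```

The clamp to zero implements the "light source behind the surface" case described above, where cosθ is negative.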

2.2.1 Combined Effect of Ambient and Diffused Reflection


• We can combine the ambient and point source intensity calculations to obtain an expression
for the total diffuse reflection.

• In addition, many graphics packages introduce an ambient-reflection coefficient Ka to modify the ambient light intensity Ia for each surface. This simply provides us with an additional parameter to adjust the light conditions in a scene.

• Using parameter Ka we can write the total diffuse reflection equation as

Idiff = Ka ∗ Ia + Kd ∗ Il ∗ cosθ

=⇒ Idiff = Ka ∗ Ia + Kd ∗ Il ∗ (N.L)

Exercise problem: Consider a shiny surface with a diffuse reflection coefficient of 0.8 and an ambient coefficient of 0.7. The surface normal is in the direction 2i + 3j + 4k. Light is incident on this surface from the direction i + j + k, and the ambient and light-source intensities are 2 and 3 units respectively. Determine the intensity of the reflected light.
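A sketch of the computation for this exercise, assuming the given directions are normalized to unit vectors before taking the dot product (the helper function is illustrative, not part of the notes):

```python
import math

def combined_diffuse(ka, ia, kd, il, n, l):
    # Total diffuse reflection: I = Ka*Ia + Kd*Il*(N . L),
    # with N and L normalized to unit vectors first.
    nm = math.sqrt(sum(c * c for c in n))
    lm = math.sqrt(sum(c * c for c in l))
    cos_theta = sum(a * b for a, b in zip(n, l)) / (nm * lm)
    return ka * ia + kd * il * cos_theta

# Exercise data: Ka = 0.7, Ia = 2, Kd = 0.8, Il = 3,
# N = 2i + 3j + 4k, L = i + j + k.
i = combined_diffuse(0.7, 2, 0.8, 3, (2, 3, 4), (1, 1, 1))
print(round(i, 3))  # 3.716
```

Here cosθ = (N.L)/(|N||L|) = 9/√87 ≈ 0.965, so I = 0.7 × 2 + 0.8 × 3 × 0.965 ≈ 3.716.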


2.3 Specular Reflection (Phong Model)


• On shiny surfaces we see a highlight, or bright spot, at certain viewing directions. This phenomenon, called specular reflection, is the result of total, or near-total, reflection of the incident light in a concentrated region around the specular-reflection angle.

• The specular-reflection angle equals the angle of the incident light. Refer below figure.

• In this figure, we use R to represent the unit vector in the direction of ideal specular reflection;
L to represent the unit vector directed toward the point light source; and V as the unit vector
pointing to the viewer from the surface position.

• Angle ϕ is the viewing angle relative to the specular-reflection direction R (θ denotes the angle of incidence).

• For an ideal reflector (perfect mirror), incident light is reflected only in the specular-reflection direction. In this case, we would only see reflected light when vectors V and R coincide (ϕ = 0).

• The Phong model sets the intensity of specular reflection proportional to cos^ns ϕ.

– Angle ϕ can be assigned values in the range 0◦ to 90◦, so that cosϕ varies from 0 to 1.
– The value assigned to the specular-reflection parameter ns is determined by the type of surface that we want to display.
∗ A very shiny surface is modeled with a large value for ns (say, 100 or more), and smaller values (down to 1) are used for duller surfaces.
∗ For a perfect reflector, ns is infinite.
– We can approximately model monochromatic specular intensity variations using a specular-reflection coefficient, W(θ), for each surface, where θ is the angle of incidence.
– In general, W(θ) tends to increase as the angle of incidence increases.
– Using the spectral-reflection function W(θ), we can write the Phong specular-reflection model as
Ispec = W(θ) ∗ Il ∗ cos^ns ϕ
– Since V and R are unit vectors in the viewing and specular-reflection directions, we can calculate cosϕ as V.R (the dot product of V and R).


– Assuming the specular-reflection coefficient is a constant, we can determine the intensity


of the specular reflection at a surface point with the calculation

Ispec = Ks ∗ Il ∗ (V.R)^ns

– A simplified Phong model is obtained by using the halfway vector H between L and V to calculate the range of specular reflections.
– Here, H = (L + V) / |L + V|
– If we replace V.R in the Phong model with the dot product N.H, this simply replaces the empirical cosϕ calculation with the empirical cosα calculation, where α is the angle between N and H. Now,
Ispec = Ks ∗ Il ∗ (N.H)^ns

• For a single point light source we can model the combined diffuse and specular reflections
from a point on an illuminated surface as

I = Idiff + Ispec

=⇒ I = Ka ∗ Ia + Kd ∗ Il ∗ (N.L) + Ks ∗ Il ∗ (N.H)^ns

• For multiple point light sources, the above equation can be modified as

I = Ka ∗ Ia + Σ_{i=1..M} Ili [ Kd ∗ (N.Li) + Ks ∗ (N.Hi)^ns ]

Here, M = number of point light sources
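The multi-source model above can be sketched as a short function. This is an illustrative implementation, assuming unit-length inputs are not guaranteed (so everything is normalized) and using the halfway-vector form of the specular term; the function and helper names are not from the notes:

```python
import math

def _norm(v):
    # Normalize a 3D vector to unit length.
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def illuminate(ka, ia, kd, ks, ns, lights, n, v):
    # I = Ka*Ia + sum_i Ili * [ Kd*(N.Li) + Ks*(N.Hi)^ns ]
    # lights: list of (Il, L) pairs, L pointing toward the source.
    # Hi is the halfway vector (Li + V) / |Li + V|.
    n, v = _norm(n), _norm(v)
    total = ka * ia
    for il, l in lights:
        l = _norm(l)
        h = _norm(tuple(a + b for a, b in zip(l, v)))
        total += il * (kd * max(0.0, _dot(n, l))
                       + ks * max(0.0, _dot(n, h)) ** ns)
    return total
```

The `max(0.0, ...)` clamps drop contributions when the light is behind the surface, matching the cosθ < 0 case discussed earlier.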

2.3.1 Intensity Attenuation and its effect on Illumination Model


• As radiant energy from a point light source travels through space, its amplitude is attenuated by the factor 1/d², where d is the distance that the light has traveled.

• This means that a surface close to the light source (small d) receives a higher incident intensity
from the source than a distant surface (large d).

• A general inverse quadratic attenuation function can be set up as

f(d) = 1 / (a0 + a1·d + a2·d²)


– A user can fiddle with the coefficients a0 , a1 , and a2 , to obtain a variety of lighting
effects for a scene.
– The value of the constant term a0 can be adjusted to prevent f(d) from becoming too
large when d is very small.
• With a given set of attenuation coefficients, we can limit the magnitude of the attenuation function to 1 as

f(d) = min( 1, 1 / (a0 + a1·d + a2·d²) )
• With the presence of attenuation, our basic illumination model can be reformulated as

I = Ka ∗ Ia + Σ_{i=1..M} fi(d) ∗ Ili [ Kd ∗ (N.Li) + Ks ∗ (N.Hi)^ns ]
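The clamped attenuation function is a one-liner; a sketch (the function name is illustrative):

```python
def attenuation(d, a0, a1, a2):
    # f(d) = min(1, 1 / (a0 + a1*d + a2*d^2)),
    # clamped so a small d never amplifies the light.
    return min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))
```

For example, with a0 = 1, a1 = 0, a2 = 1, a distance of 2 gives f(d) = 1/5 = 0.2, while d = 0 clamps to 1.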

2.4 Color Consideration in Illumination Model


• Most graphics displays of realistic scenes are in colour. But the illumination model discussed
so far considers only monochromatic lighting effects.
• To incorporate colour, we need to write the intensity equation as a function of the colour
properties of the light sources and object surfaces.
• One way to set surface colors is by specifying the reflectivity coefficients as three-element vectors.
• The diffuse reflection coefficient vector, for example, would then have RGB components ( KdR
, KdG , KdB )
– If we want an object to have a blue surface, we select a nonzero value in the range from
0 to 1 for the blue (e.g KdB = 0.3) , while the red and green reflectivity components are
set to zero (i.e, KdR =0, KdG =0)
– That is, the red and green components in the incident light are absorbed, and only the blue component is reflected. The intensity calculation for this example reduces to the single expression

IB = KaB ∗ IaB + Σ_{i=1..M} fi(d) ∗ IliB [ KdB ∗ (N.Li) + KsB ∗ (N.Hi)^ns ]

• Surfaces typically are illuminated with white light sources, and in general we can set surface
color so that the reflected light has nonzero values for all three RGB components.
• Calculated intensity levels for each color component can be used to adjust the corresponding
electron gun in an RGB monitor.
• In his original specular-reflection model, Phong set the parameter Ks to a constant value independent of the surface color. This produces specular reflections that are the same color as the incident light (usually white).


2.5 Transparency
• A transparent surface, in general, produces both reflected and transmitted light.

• The relative contribution of the transmitted light depends on the degree of transparency of
the surface and whether any light sources or illuminated surfaces are behind the transparent
surface

• We can combine the transmitted intensity Itrans through a surface from a background object with the reflected intensity Irefl from the transparent surface using a transparency coefficient Kt.

• 0 ≤ Kt ≤ 1. Kt specifies how much of the background light is to be transmitted.

• The total surface intensity is then calculated as

I = (1 − Kt) ∗ Irefl + Kt ∗ Itrans

Here, (1 − Kt) is the opacity factor.

• For a highly transparent object, Kt ≈ 1, which implies opacity ≈ 0. For an opaque object, Kt ≈ 0, i.e., opacity ≈ 1.
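The blend above is a simple linear interpolation; a sketch (the function name is illustrative):

```python
def combine_transparency(kt, i_refl, i_trans):
    # I = (1 - Kt) * I_refl + Kt * I_trans,
    # where (1 - Kt) is the opacity factor.
    return (1.0 - kt) * i_refl + kt * i_trans
```

With Kt = 0 (opaque), only the reflected intensity survives; with Kt = 1 (fully transparent), only the transmitted intensity does.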

2.6 Shadows
• By applying a hidden-surface method with a light source at the view position, we can determine which surface sections cannot be "seen" from the light source.

• These are the shadow areas.

• Once we have determined the shadow areas for all light sources, the shadows can be treated as surface patterns and stored in pattern arrays.

2.7 Standard Primaries and Chromaticity Diagram


• Standard Primaries: These are three specific red, green, and blue (RGB) colors chosen by a
standard body (e.g., NTSC, Rec. 709). They define the color gamut (range of colors) that can
be displayed on a particular device (monitor, TV). Different standards have slightly different
primaries.

• Chromaticity Diagram: A horseshoe-shaped diagram showing the range of colors that can be
created by mixing standard primaries (R, G, B) in various proportions.

– Chromaticity comprises two parameters, hue and saturation. Hue and saturation together are known as chrominance.
– The chromaticity diagram represents visible colours using x and y as the horizontal and vertical axes.


– The saturated pure spectral colours are represented along the perimeter of the curve.
– Point C marked in the chromaticity diagram represents a particular white light, formed by combining colours of wavelengths RED: 700 nm, GREEN: 546.1 nm, and BLUE: 435.8 nm.
– In the chromaticity diagram, colours on the boundary are completely saturated, and the corners correspond to the three primary colours (red, green, and blue).

3 Intuitive colour concepts


• These are concepts that relate to how we naturally perceive and manipulate colors.

• They often don’t directly translate to specific color models used in computers, but they provide
a foundation for understanding those models.

• Mixing Colors: We can create new colors by mixing existing ones. For example, mixing red
and yellow gives orange.

• Shades, Tints, and Tones: We can adjust a color’s brightness and saturation. A shade is a
color made darker by adding black, a tint is made lighter by adding white, and a tone is made
less saturated by adding gray.

• Complementary Colors: Colors opposite each other on a color wheel tend to contrast strongly,
creating a visually pleasing effect.

4 Color Model
• The purpose of a color model is to facilitate the specification of colors in some standard
generally accepted way.


• In essence, a color model is a specification of a 3D coordinate system and a subspace within that system where each color is represented by a single point.

• A color model is a mathematical system or standard that describes how colors can be represented using numerical values.

• These numerical values typically consist of two, three, or four components, which often correspond to specific colors such as red, green, blue, cyan, magenta, yellow, or black (depending on the model).

• The values of these components can be integers (e.g., 0-255) or floating-point numbers (e.g., 0.0-1.0), representing the intensity or concentration of each color component.

• Widely used color models are

– RGB
– CMYK
– YIQ
– HSV
– HLS

• The choice of color model depends on the specific application. Here are some general guidelines:

– RGB: Ideal for displaying colors on electronic devices.


– CMYK: Best suited for printing processes using inks.
– YIQ: The YIQ color model is primarily used in analog television broadcasting systems,
specifically the NTSC (National Television System Committee) standard used in North
America and some parts of South America and Japan.
– HSV and HLS: Useful for intuitive color manipulation in image editing.

4.1 RGB colour model


• In the RGB model, each color appears as a combination of red, green, and blue.

• This model is called additive, and the colors red, green, and blue are called primary colors.

• The primary colors can be added to produce the secondary colors of light: magenta (red +
blue), cyan (green + blue), and yellow (red + green). The combination of red, green, and
blue at full intensities makes white.

• The below figure shows the RGB color model


• It is an additive model, represented by a unit cube defined on the R, G, and B axes. The origin represents black, and the vertex with coordinates (1,1,1) represents white.

• Any color C can be represented as RGB components as C = R + G + B

• The color subspace of interest is the cube shown in the figure "RGB and CMY Color Models" (RGB values normalized to 0..1), in which red, green, and blue are at three corners; cyan, magenta, and yellow are at the three opposite corners; black is at the origin; and white is at the corner farthest from the origin.

• The gray scale extends from black to white along the diagonal joining these two points. The
colors are the points on or inside the cube, defined by vectors extending from the origin.

• Thus, images in the RGB color model consist of three independent image planes, one for each
primary color.

• Advantages

– The importance of the RGB color model is that it relates very closely to the way that
the human eye perceives color.
– RGB is a basic color model for computer graphics because color displays use red, green,
and blue to create the desired color. Therefore, the choice of the RGB color space
simplifies the architecture and design of the system.
– Besides, a system that is designed using the RGB color space can take advantage of a
large number of existing software routines, because this color space has been around for
a number of years.

• Disadvantages

– RGB is not very efficient when dealing with real-world images.


– To generate any color within the RGB color cube, all three RGB components need to be
of equal pixel depth and display resolution.
– Any modification of the image requires modification of all three planes.


4.2 CMYK colour model


• The CMYK color model is a subset of the RGB model and is primarily used in color print
production.

• CMYK is an acronym for cyan, magenta, and yellow along with black (noted as K).

• The CMYK color model is subtractive, meaning that cyan, magenta, yellow, and black pigments or inks are applied to a white surface, subtracting some color from the white surface to create the final color.

– For example (Refer below Figure ”Primary (RGB) and Secondary Colors CMYK”), cyan
is white minus red, magenta is white minus green, and yellow is white minus blue.
Subtracting all colors by combining the CMY at full saturation should, in theory, render
black.

• However, impurities in the existing CMY inks make full and equal saturation impossible, and
some RGB light does filter through, rendering a muddy brown color.

• Therefore, the black ink is added to CMY.

• The CMY cube is shown in the figure above, in which cyan, magenta, and yellow values are at three corners; red, green, and blue are at the three opposite corners; white is at the origin; and black is at the corner farthest from the origin.

4.3 YIQ colour model


• The YIQ colour space model is used in U.S. commercial colour television broadcasting (NTSC).

• It is a rotation of the RGB colour space such that the Y axis contains the luminance information, allowing backwards compatibility with black-and-white TVs, which display only this axis of the colour space.

• Y (Luminance): This carries the black and white (brightness) information of the image. It’s
essentially the same as the original black and white signal.


• I (In-phase): This component encodes the chrominance (color) information, focusing on the
blue color information.

• Q (Quadrature): This component also carries chrominance information, focusing on the red
color information.

• These Y, I, and Q signals are combined and transmitted together.

• Benefits of YIQ

– Compatibility: Since Y contains the luminance information, even black and white
televisions could display a somewhat distorted version of the image (lacking color but
showing brightness variations). This ensured backward compatibility with existing black
and white sets during the transition to color television.
– Bandwidth Efficiency: YIQ encoding compresses the color information more efficiently
compared to transmitting a full RGB signal. This was crucial for limited bandwidth
available in analog television broadcasts.

4.4 Human-Centric Color Models


• HSV and HLS are both based on how humans perceive color. They take RGB as input and
convert it to a representation using:

– Hue: The actual color itself (red, green, blue, etc.).


– Saturation: How vibrant or pure the color is.
– Value (V in HSV) or Lightness (L in HLS): How light or dark the color is.

• These models are useful for image editing software because they allow artists to adjust colors
based on these intuitive properties.

4.4.1 HSV colour Model


• Hue, Saturation, Value or HSV is a color model that describes colors (hue or tint) in terms
of their shade (saturation or amount of gray) and their brightness (value or luminance).

• The HSV color wheel may be depicted as a cone or cylinder. Instead of Value, the color model
may use Brightness, making it HSB (Photoshop uses HSB). See the figure below


• Hue is expressed as a number from 0 to 360 degrees representing hues of red (starts at 0),
yellow (starts at 60), green (starts at 120), cyan (starts at 180), blue (starts at 240), and
magenta (starts at 300). The above figure shows HSV color model and RGB to HSV mapping
in a tabular form.

• Saturation is the amount of gray (0% to 100%) in the color.

• Value (or Brightness) works in conjunction with saturation and describes the brightness or
intensity of the color from 0% to 100%.

• Adding white decreases S while V remains constant

• Hue is represented as an angle about the vertical axis, ranging from 0◦ to 360◦. S varies from 0 to 1, and V varies from 0 to 1.

4.4.2 HLS or HSL colour Model


This model is very much similar to the HSV model.

• As in the HSV model, Hue indicates the color sensation of the light; in other words, whether the color is red, yellow, green, cyan, blue, magenta, etc. This representation looks almost the same as the visible spectrum of light, except that on the right is now the color magenta (the combination of red and blue), instead of violet (light with a frequency higher than blue).

• As in the HSV model, Saturation indicates the degree to which the hue differs from a neutral gray. The values run from 0%, which is no color, to 100%, which is the fullest saturation of a given hue at a given percentage of illumination. The more the spectrum of the light is concentrated around one wavelength, the more saturated the color will be.

• Lightness indicates the illumination of the color: at 0% the color is completely black, at 50% the color is pure, and at 100% it becomes white. In HSL color, a color with maximum lightness (L = 255) is always white, no matter what the hue or saturation components are. Lightness is defined as (maxColor + minColor)/2, where maxColor is the R, G, or B component with the maximum value, and minColor the one with the minimum value among the RGB components.

• Below figure shows the HSL model


5 Conversions Between Color Models


5.1 RGB ↔ CMY and RGB ↔ YIQ
We can convert one color model to another as given below.
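The conversion tables referred to here are not reproduced in this text. As a sketch of the standard relations (assuming RGB components normalized to [0, 1]): CMY is the subtractive complement of RGB, and YIQ is a fixed linear transform of RGB, one common variant of which is implemented by Python's standard-library colorsys module. The function names below are illustrative:

```python
import colorsys

def rgb_to_cmy(r, g, b):
    # Subtractive complement: C = 1 - R, M = 1 - G, Y = 1 - B.
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    # The inverse relation: R = 1 - C, G = 1 - M, B = 1 - Y.
    return (1.0 - c, 1.0 - m, 1.0 - y)

# YIQ via colorsys, which uses Y = 0.30R + 0.59G + 0.11B
# as its luminance row.
y, i, q = colorsys.rgb_to_yiq(1.0, 0.0, 0.0)
print(round(y, 2))  # 0.3
```

Pure red carries only 30% of the luminance weight, which is why it looks fairly dark on a black-and-white display.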

5.2 RGB to HSV


The RGB color model is converted to the HSV (Hue, Saturation, Value) model using the following
steps.

i. Normalize the RGB Values


If the RGB values are in the range [0, 255], normalize them to [0, 1]:

R′ = R/255,  G′ = G/255,  B′ = B/255
where R′ , G′ , B ′ are the normalized values.

ii. Compute Maximum, Minimum, and Difference


Find the maximum and minimum values among R′ , G′ , B ′ :

Cmax = max(R′ , G′ , B ′ )

Cmin = min(R′ , G′ , B ′ )
∆ = Cmax − Cmin
where ∆ represents the chroma (color intensity).


iii. Compute Hue (H)


The hue H is measured in degrees within the range [0◦ , 360◦ ]:

• If ∆ = 0, then H = 0◦ (gray, no hue).

• If Cmax = R′, then H = 60◦ × ( ((G′ − B′)/∆) mod 6 )

• If Cmax = G′, then H = 60◦ × ( (B′ − R′)/∆ + 2 )

• If Cmax = B′, then H = 60◦ × ( (R′ − G′)/∆ + 4 )

If H < 0, add 360◦ to ensure it remains within the range [0◦, 360◦].

iv. Compute Saturation (S)


Saturation S represents the intensity or purity of the color:

S = 0 if Cmax = 0;  S = ∆ / Cmax otherwise

To express S as a percentage: S = S × 100

v. Compute Value (V)


The value V (also called brightness) is given by:

V = Cmax
To express V as a percentage:

V = V × 100

Example:
Convert RGB (100, 50, 150) to HSV


i. Normalize RGB Values


Given RGB values:
R = 100, G = 50, B = 150
Convert them to the range [0,1]:
R′ = 100/255 = 0.392,  G′ = 50/255 = 0.196,  B′ = 150/255 = 0.588

ii. Compute Maximum, Minimum, and Difference


Cmax = max(R′ , G′ , B ′ ) = 0.588
Cmin = min(R′ , G′ , B ′ ) = 0.196
∆ = Cmax − Cmin = 0.588 − 0.196 = 0.392

iii. Compute Hue (H)


Since Cmax = B′, we use the formula H = 60 × ( (R′ − G′)/∆ + 4 ):

H = 60 × ( (0.392 − 0.196)/0.392 + 4 ) = 60 × (0.5 + 4) = 60 × 4.5 = 270◦

iv. Compute Saturation (S)


S = ∆ / Cmax = 0.392 / 0.588 = 0.667

v. Compute Value (V)


V = Cmax = 0.588

Final HSV Values


H = 270◦ , S = 0.667, V = 0.588
or in percentage form:
H = 270◦ , S = 66.7%, V = 58.8%

Note: Check the correctness of the computed values
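One way to check the worked example is against Python's standard-library colorsys module, which expects RGB components in [0, 1] and returns hue as a fraction of a full turn (multiply by 360 for degrees):

```python
import colorsys

# Cross-check RGB (100, 50, 150) -> HSV from the worked example.
h, s, v = colorsys.rgb_to_hsv(100 / 255, 50 / 255, 150 / 255)
print(round(h * 360), round(s, 3), round(v, 3))  # 270 0.667 0.588
```

This matches the hand computation: H = 270◦, S ≈ 0.667, V ≈ 0.588.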


5.3 HSV to RGB


The RGB (Red, Green, Blue) color model can be derived from the HSV (Hue, Saturation,
Value) model using the following steps:

i. Given HSV Values


• H (Hue) is in the range [0◦ − 360◦ ] and represents the color type.

• S (Saturation) is in the range [0, 1] and represents the intensity of the color.

• V (Value) is in the range [0, 1] and represents brightness.

If S and V are given as percentages (0% − 100%), convert them to the range [0, 1]:

S = S/100,  V = V/100

ii. Compute Chroma (C)


The chroma C represents the difference between the maximum and minimum RGB components:

C =V ×S

iii. Compute Intermediate Value X

X = C × ( 1 − | (H/60) mod 2 − 1 | )

iv. Compute Offset Value m


The value m shifts the RGB components to adjust brightness:

m=V −C

v. Compute RGB’ Based on Hue Range


Depending on the value of H, the temporary values (R′, G′, B′) are calculated as:

(R′, G′, B′) =
  (C, X, 0),  0◦ ≤ H < 60◦
  (X, C, 0),  60◦ ≤ H < 120◦
  (0, C, X),  120◦ ≤ H < 180◦
  (0, X, C),  180◦ ≤ H < 240◦
  (X, 0, C),  240◦ ≤ H < 300◦
  (C, 0, X),  300◦ ≤ H < 360◦


vi. Compute Final RGB Values


Convert (R′ , G′ , B ′ ) back to standard RGB values in the range [0, 255]:
R = (R′ + m) × 255

G = (G′ + m) × 255

B = (B ′ + m) × 255
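The steps above can be sketched as one function. This is an illustrative implementation (the function name is not from the notes), assuming H in degrees and S, V already in [0, 1]:

```python
def hsv_to_rgb(h, s, v):
    # h in degrees [0, 360); s and v in [0, 1]. Returns RGB in [0, 255].
    c = v * s                                # chroma
    x = c * (1 - abs((h / 60) % 2 - 1))      # intermediate value
    m = v - c                                # brightness offset
    sector = int(h // 60) % 6                # which 60-degree sector H is in
    rp, gp, bp = [(c, x, 0), (x, c, 0), (0, c, x),
                  (0, x, c), (x, 0, c), (c, 0, x)][sector]
    return tuple(round((t + m) * 255) for t in (rp, gp, bp))

# Round-trip of the earlier worked example.
print(hsv_to_rgb(270, 0.667, 0.588))  # (100, 50, 150)
```

Feeding back the HSV values from the earlier example recovers the original RGB (100, 50, 150), up to rounding.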

5.4 RGB to HLS Conversion


The HLS (Hue, Lightness, Saturation) model represents colors differently from the RGB
(Red, Green, Blue) model. The conversion follows these steps:

i. Normalize RGB Values


If the RGB values are in the range [0, 255], normalize them to [0, 1]:

R′ = R/255,  G′ = G/255,  B′ = B/255
where R′ , G′ , B ′ are the normalized values.

ii. Compute Maximum, Minimum, and Difference


Find the maximum and minimum values among R′ , G′ , B ′ :

Cmax = max(R′ , G′ , B ′ )

Cmin = min(R′ , G′ , B ′ )

∆ = Cmax − Cmin

iii. Compute Lightness (L)


Lightness represents the average intensity of the color:
L = (Cmax + Cmin)/2

iv. Compute Saturation (S)


Saturation in HLS is different from HSV:
S = 0, if ∆ = 0
S = ∆/(1 − |2L − 1|), otherwise


v. Compute Hue (H)


Hue represents the color type, measured in degrees within the range [0◦ , 360◦ ]:

• If ∆ = 0, then H = 0◦ (gray, no hue).

• If Cmax = R′ , then
H = 60 × (((G′ − B ′ )/∆) mod 6)

• If Cmax = G′ , then
H = 60 × ((B ′ − R′ )/∆ + 2)

• If Cmax = B ′ , then
H = 60 × ((R′ − G′ )/∆ + 4)

If H < 0, add 360◦ to ensure it remains within the range [0◦ , 360◦ ].

Final HLS Representation


After calculations, the HLS color is represented as:

(H ◦ , L%, S%)
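The RGB-to-HLS steps above can be sketched as one function. This is a minimal sketch; it returns H in degrees and L, S as fractions in [0, 1], and the `% 360` wrap plays the role of the negative-hue adjustment described above.

```python
def rgb_to_hls(r, g, b):
    """Convert 8-bit RGB to (H in degrees, L, S), following steps i-v."""
    rp, gp, bp = r / 255, g / 255, b / 255          # i. normalize
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)   # ii. extremes
    delta = cmax - cmin
    l = (cmax + cmin) / 2                           # iii. lightness
    s = 0 if delta == 0 else delta / (1 - abs(2 * l - 1))   # iv. saturation
    if delta == 0:                                  # v. hue
        h = 0
    elif cmax == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta + 2)
    else:
        h = 60 * ((rp - gp) / delta + 4)
    return h % 360, l, s

print(rgb_to_hls(100, 50, 150))   # same pixel as the HSV example
```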

5.5 HLS to RGB Conversion


The HLS (Hue, Lightness, Saturation) model can be converted back to the RGB (Red,
Green, Blue) model using the following steps:

i. Given HLS Values


• H (Hue) is in the range [0◦ , 360◦ ] and represents the color type.

• L (Lightness) is in the range [0, 1] and represents brightness.

• S (Saturation) is in the range [0, 1] and represents intensity.

If S and L are given as percentages (0% − 100%), convert them to the range [0, 1]:
S = S/100, L = L/100

ii. Compute Chroma (C)


Chroma determines the intensity of the color:

C = (1 − |2L − 1|) × S


iii. Compute Intermediate Value X


X = C × (1 − |(H/60) mod 2 − 1|)

iv. Compute Offset Value m


The value m adjusts the brightness:
m = L − C/2

v. Compute RGB’ Based on Hue Range


Depending on the value of H, the temporary values (R′ , G′ , B ′ ) are calculated as:

(R′ , G′ , B ′ ) =
(C, X, 0), 0◦ ≤ H < 60◦
(X, C, 0), 60◦ ≤ H < 120◦
(0, C, X), 120◦ ≤ H < 180◦
(0, X, C), 180◦ ≤ H < 240◦
(X, 0, C), 240◦ ≤ H < 300◦
(C, 0, X), 300◦ ≤ H < 360◦

vi. Compute Final RGB Values


Convert (R′ , G′ , B ′ ) back to standard RGB values in the range [0, 255]:

R = (R′ + m) × 255

G = (G′ + m) × 255

B = (B ′ + m) × 255

6 Halftone patterns and Dithering techniques


• The process of generating a binary pattern of black and white dots from an image is termed
halftoning.

• In traditional newspaper and magazine production, this process is carried out photographically
by projection of a transparency through a ’halftone screen’ onto film.

• The screen is a glass plate with a grid etched into it.

• Different screens can be used to control the size and shape of the dots in the halftoned image.


• A fine grid, with a ’screen frequency’ of 200-300 lines per inch, gives the image quality necessary
for magazine production.
• A screen frequency of 85 lines per inch is deemed acceptable for newspapers.
• A simple digital halftoning technique known as patterning involves replacing each pixel by a
pattern taken from a ’binary font’.
• Below Figure shows such a font, made up of ten 3 x 3 matrices of pixels. This font can be
used to print an image consisting of ten grey levels.

• A pixel with a grey level of 0 is replaced by a matrix containing no white pixels; a pixel with
a grey level of 1 is replaced by a matrix containing a single white pixel; and so on.
• Note that, since we are replacing each pixel by a 3 x 3 block of pixels, both the width and the
height of the image increase by a factor of 3.
• Below Figure shows an example of halftoning using the binary font.
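The patterning steps above can be sketched in a few lines. Since the binary-font figure is not reproduced here, the order in which white pixels are lit is an assumed example; the actual font may light them differently.

```python
# Patterning: replace each grey-level pixel with a 3x3 cell from a binary font.
# The pixel-lighting order below is an assumed example; the font in the figure
# may light the white pixels in a different order.
order = [(1, 1), (0, 0), (2, 2), (0, 2), (2, 0), (0, 1), (2, 1), (1, 0), (1, 2)]
FONT = []
for level in range(10):                       # grey levels 0..9
    cell = [[0] * 3 for _ in range(3)]
    for r, c in order[:level]:                # level n lights n white pixels
        cell[r][c] = 1
    FONT.append(cell)

def pattern(image):
    """Halftone a 2-D list of grey levels (0-9); output is 3x wider and taller."""
    out = []
    for row in image:
        for sub in range(3):                  # emit the three sub-rows of each cell
            out.append([FONT[p][sub][c] for p in row for c in range(3)])
    return out

halftoned = pattern([[0, 9], [5, 2]])         # a 2x2 image becomes 6x6
```

Note how the output dimensions grow by a factor of 3, exactly as described above.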

6.1 Dithering
Another technique for digital halftoning is dithering.
• Dithering can be accomplished by thresholding the image against a dither matrix.
• The first two dither matrices, re-scaled for application to 8-bit images, are shown in the below figure.
• The elements of a dither matrix are thresholds.
• The matrix is laid like a tile over the entire image and each pixel value is compared with the
corresponding threshold from the matrix.
• The pixel becomes white if its value exceeds the threshold or black otherwise.
• This approach produces an output image with the same dimensions as the input image, but
with less detail visible.
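The tile-and-threshold procedure above can be sketched directly. Because the dither matrices themselves appear only in the figure, the 2×2 Bayer-style matrix below (re-scaled to 8-bit thresholds) is an assumed example.

```python
# Ordered dithering: threshold each pixel against a dither matrix tiled over
# the image. The 2x2 matrix below (a Bayer-style pattern re-scaled to 8-bit
# thresholds) is an assumed example; the figure gives the actual matrices.
DITHER = [[51, 153],
          [204, 102]]

def dither(image):
    """Binarize a 2-D 8-bit greyscale image: white where the pixel value
    exceeds the corresponding threshold, black otherwise."""
    n = len(DITHER)
    return [[255 if image[y][x] > DITHER[y % n][x % n] else 0
             for x in range(len(image[y]))]
            for y in range(len(image))]

result = dither([[60, 60], [60, 60]])   # uniform grey -> [[255, 0], [0, 0]]
```

As the text notes, the output keeps the input's dimensions: only the thresholds vary across the tiled matrix.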


7 Animation
• Animation in computer graphics is the process of creating the illusion of movement by dis-
playing a sequence of images, one after the other, very rapidly. Each image, or frame, differs
slightly from the one before it, and when played back quickly, our eyes perceive them as
continuous motion.

• Animation includes all the visual changes on the screen of display devices. These are:

– Change of shape
– Change in size
– Change in color
– Change in structure
– Change in angle

7.1 Applications
There are several applications of the animation. Some of them are given below

• Education and Training: Animation is used in school, colleges and training centers for
education purpose. Flight simulators for aircraft are also animation based.

• Entertainment: Animation methods are now commonly used in making motion pictures,
music videos and television shows, etc.

• Computer Aided Design (CAD): One of the best applications of computer animation is
Computer Aided Design and is generally referred to as CAD. One of the earlier applications
of CAD was automobile designing. But now almost all types of designing are done by using
CAD application, and without animation, all these work can’t be possible.

• Advertising: This is one of the significant applications of computer animation. The most important advantage of an animated advertisement is that it takes very little space and captures people's attention.

• Presentation: An animated presentation is the most effective way to represent an idea. It is used to describe financial, statistical, mathematical, scientific & economic data.

7.2 Stages of the Animation


• It is also called as Design of Animation Sequences.

• Constructing an animation sequence can be a complicated task, particularly when it involves a story line and multiple objects, each of which can move in a different way. A basic approach is to design such animation sequences using the following development stages:

– Storyboard layout


– Object definitions
– Key-frame specifications
– Generation of in-between frames

• Storyboard layout: It is an outline of the action. It defines the motion sequence as a set
of basic events that are to take place. Depending on the type of animation to be produced,
the storyboard could consist of a set of rough sketches, along with a brief description of the
movements, or it could just be a list of the basic ideas for the action. Originally, the set of
motion sketches was attached to a large board that was used to present an overall view of the
animation project. Hence, the name ”storyboard.”

• Object definitions: It is given for each participant in the action. Objects can be defined in
terms of basic shapes, such as polygons or spline surfaces. In addition, a description is often
given of the movements that are to be performed by each character or object in the story.

• Key-frame specifications: It is a detailed drawing of the scene at a certain time in the ani-
mation sequence. Within each key frame, each object (or character) is positioned according to
the time for that frame. Some key frames are chosen at extreme positions in the action; others
are spaced so that the time interval between key frames is not too great. More key frames
are specified for intricate motions than for simple, slowly varying motions. Development of
the key frames is generally the responsibility of the senior animators, and often a separate
animator is assigned to each character in the animation

• Generation of in-between frames: In-betweens are the intermediate frames between the
key frames. The total number of frames, and hence the total number of in-betweens, needed
for an animation is determined by the display media that is to be used. Film requires 24
frames per second, and graphics terminals are refreshed at the rate of 60 or more frames per
second. Typically, time intervals for the motion are set up so that there are from three to five
in-betweens for each pair of key frames. Depending on the speed specified for the motion, some
key frames could be duplicated. As an example, a 1-minute film sequence with no duplication
requires a total of 1,440 frames.

7.3 Raster Animation


• In general, though, we can produce an animation sequence on a raster-scan system one frame at
a time, so that each completed frame could be saved in a file for later viewing. The animation
can then be viewed by cycling through the completed frame sequence, or the frames could be
transferred to film.

• If we want to generate an animation in real time, however, we need to produce the motion
frames quickly enough so that a continuous motion sequence is displayed.

• For a complex scene, one frame of the animation could take most of the refresh cycle time
to construct. In that case, objects generated first would be displayed for most of the frame
refresh time, but objects generated toward the end of the refresh cycle would disappear almost
as soon as they were displayed.


• For very complex animations, the frame construction time could be greater than the time to refresh the screen, which can lead to erratic motion and fractured frame displays. Because the screen display is generated from successively modified pixel values in the refresh buffer, we can take advantage of some of the characteristics of the raster screen-refresh process to produce motion sequences quickly. Hence, in animation, the frame construction time should be kept less than the time to refresh the screen.
• There are two methods using which we can generate real-time raster animations. These are
– Double Buffering
– Raster Operations

7.3.1 Double Buffering


• One method for producing a real-time animation with a raster system is to employ two refresh buffers.
• Initially, we create a frame for the animation in one of the buffers. Then, while the screen is
being refreshed from that buffer, we construct the next frame in the other buffer. When that
frame is complete, we switch the roles of the two buffers so that the refresh routines use the
second buffer during the process of creating the next frame in the first buffer. This alternating
buffer process continues throughout the animation.
• If a program can complete the construction of a frame within the time of a refresh cycle, say
1/60 of a second, each motion sequence is displayed in synchronization with the screen refresh
rate.
• However, if the time to construct a frame is longer than the refresh time, the current frame
is displayed for two or more refresh cycles while the next animation frame is being generated.
– For example, if the screen refresh rate is 60 frames per second and it takes 1/50 of a
second to construct an animation frame, each frame is displayed on the screen twice and
the animation rate is only 30 frames each second. Similarly, if the frame construction
time is 1/25 of a second, the animation frame rate is reduced to 20 frames per second
because each frame is displayed three times.
• Irregular animation frame rates can occur with double buffering when the frame construction
time is very nearly equal to an integer multiple of the screen refresh time.
– Example: if the screen refresh rate is 60 frames per second, then an erratic animation
frame rate is possible when the frame construction time is very close to 1/60 of a second,
or 2/60 of a second, or 3/60 of a second, and so forth. Because of slight variations in the
implementation time for the routines that generate the primitives and their attributes,
some frames could take a little more time to construct and some a little less time. Thus,
the animation frame rate can change abruptly and erratically.
– To compensate for this effect
∗ One way is to add a small time delay to the program.
∗ Another possibility is to alter the motion or scene description to shorten the frame
construction time.


7.3.2 Raster Operations


• We can also generate real-time raster animations for limited applications using block transfers
of a rectangular array of pixel values.

• This is used in game playing applications.

• A simple method for translating an object from one location to another in the xy plane is to
transfer the group of pixel values that define the shape of the object to the new location.

• Two-dimensional rotations in multiples of 90◦ are also simple to perform, although we can rotate rectangular blocks of pixels through other angles using anti-aliasing procedures. For a rotation that is not a multiple of 90◦ , we need to estimate the percentage of area coverage for those pixels that overlap the rotated block.

• Sequences of raster operations can be executed to produce real-time animation for either two-dimensional or three-dimensional objects, so long as we restrict the animation to motions in the projection plane. Then no viewing or visible-surface algorithms need be invoked.

7.4 Key-Frame Systems


• A set of in-betweens can be generated from the specification of two (or more)key frames using
a key-frame system. Motion paths can be given with a kinematic description as a set of spline
curves, or the motions can be physically based by specifying the forces acting on the objects
to be animated.

• For complex scenes, we can separate the frames into individual components or objects called
cels (celluloid transparencies). This term developed from cartoon animation techniques where
the background and each character in a scene were placed on a separate transparency. Then,
with the transparencies stacked in the order from background to foreground, they were pho-
tographed to obtain the completed frame. The specified animation paths are then used to
obtain the next cel for each character, where the positions are interpolated from the key-frame
times.

• With complex object transformations, the shapes of objects may change over time. Examples
are clothes, facial features, magnified detail, evolving shapes, and exploding or disintegrating
objects. For surfaces described with polygon meshes, these changes can result in significant
changes in polygon shape such that the number of edges in a polygon could be different from
one frame to the next. These changes are incorporated into the development of the in-between
frames by adding or subtracting polygon edges according to the requirements of the defining
key frames.

7.5 Animation Functions


There exist several animation functions used.


7.5.1 Morphing
Morphing is an animation function. Transformation of object shapes from one form to another is termed morphing, which is a shortened form of "metamorphosing." It is one of the most complicated transformations. This function is commonly used in movies, cartoons, advertisements, and computer games.
Morphing Process: The process of Morphing involves three steps:

• In the first step, one initial image and one final image are loaded into the morphing application; as shown in the figure, the 1st and 4th objects are considered as key frames.

• The second step involves the selection of key points on both images for a smooth transition between the two images, as shown in the 2nd object.

• In the third step, each key point of the first image is transformed to the corresponding key point of the second image, as shown in the 3rd object of the figure.

7.5.2 Warping
The warping function (sometimes written as "wrapping") is similar to the morphing function. It distorts only the initial image so that it matches the final image, and no fade occurs in this function.

7.5.3 Tweening
Tweening is the short form of 'inbetweening.' Tweening is the process of generating intermediate frames between the initial and final key frames. This function is popular in the film industry.
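Tweening can be sketched as linear interpolation between two key-frame positions; the straight-line path and the evenly spaced parameter are simplifying assumptions (real systems often use spline paths, as noted later for motion specification).

```python
def tween(p0, p1, n):
    """Generate n in-between positions on the straight line between
    key-frame positions p0 and p1 (given as coordinate tuples)."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)                       # parameter strictly between 0 and 1
        frames.append(tuple(a + t * (b - a) for a, b in zip(p0, p1)))
    return frames

# Three in-betweens for a point moving from (0, 0) to (40, 80):
print(tween((0, 0), (40, 80), 3))   # [(10.0, 20.0), (20.0, 40.0), (30.0, 60.0)]
```

With three to five in-betweens per pair of key frames, as suggested in the key-frame discussion above, this step is repeated for every pair along the motion path.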


7.5.4 Panning
Usually, panning refers to rotation of the camera in a horizontal plane. In computer graphics, panning relates to the movement of a fixed-size window across an object in a scene. Whichever direction the fixed-size window moves, the object appears to move in the opposite direction, as shown in the figure.

If the window moves in a backward direction, then the object appears to move in the forward direction; if the window moves in a forward direction, then the object appears to move in a backward direction.

7.5.5 Zooming
In zooming, the window is fixed about an object and its size is changed; the object then also appears to change in size. When the window is made smaller about a fixed center, the object inside the window appears enlarged. This feature is known as zooming in.

When we increase the size of the window about the fixed center, the object inside the window appears smaller. This feature is known as zooming out.

7.5.6 Fractals
The fractal function is used to generate a complex picture using iteration. Iteration means the repetition of a single formula again and again with slightly different values based on the previous iteration's result. These results are displayed on the screen in the form of the display picture.

7.6 Motion Specifications


General methods for describing an animation sequence range from an explicit specification of the
motion paths to a description of the interactions that produce the motions. Thus, we could define
how an animation is to take place by giving the transformation parameters, the motion path pa-
rameters, the forces that are to act on objects, or the details of how objects interact to produce
motion.


7.6.1 Direct Motion Specification


The most straightforward method for defining an animation is direct motion specification of the
geometric-transformation parameters. Here, we explicitly set the values for the rotation angles and
translation vectors. Then the geometric transformation matrices are applied to transform coordinate
positions. Alternatively, we could use an approximating equation involving these parameters to
specify certain kinds of motions. We can approximate the path of a bouncing ball (Figure shown
below), for instance, with a damped, rectified, sine curve (Equation of bouncing ball is given below).

y(x) = A|sin(ωx + θ0 )|e−kx


Here, A is the initial amplitude (height of the ball above the ground), ω is the angular frequency, θ0 is the phase angle, and k is the damping constant. This method of motion specification is particularly useful for simple user-programmed animation sequences.

7.6.2 Goal-Directed Systems


At the opposite extreme, we can specify the motions that are to take place in general terms that
abstractly describe the actions in terms of the final results. In other words, an animation is specified
in terms of the final state of the movements. These systems are referred to as goal-directed, since
values for the motion parameters are determined from the goals of the animation. For example, we
could specify that we want an object to “walk” or to “run” to a particular destination; or we could
state that we want an object to “pick up” some other specified object. The input directives are
then interpreted in terms of component motions that will accomplish the described task. Human
motions, for instance, can be defined as a hierarchical structure of submotions for the torso, limbs,
and so forth. Thus, when a goal, such as “walk to the door” is given, the movements required of
the torso and limbs to accomplish this action are calculated.

7.6.3 Kinematics and Dynamics


We can also construct animation sequences using kinematic or dynamic descriptions. With a kine-
matic description, we specify the animation by giving motion parameters (position, velocity, and
acceleration) without reference to causes or goals of the motion. For constant velocity (zero acceler-
ation), we designate the motions of rigid bodies in a scene by giving an initial position and velocity


vector for each object. For example, if a velocity is specified as (3, 0, -4) km per sec, then this
vector gives the direction for the straight-line motion path and the speed (magnitude of velocity)
is calculated as 5 km per sec. If we also specify accelerations (rate of change of velocity), we can
generate speedups, slowdowns, and curved motion paths. Kinematic specification of a motion can
also be given by simply describing the motion path. This is often accomplished using spline curves.
An alternate approach is to use inverse kinematics. Here, we specify the initial and final po-
sitions of objects at specified times and the motion parameters are computed by the system. For
example, assuming zero acceleration, we can determine the constant velocity that will accomplish
the movement of an object from the initial position to the final position. This method is often used
with complex objects by giving the positions and orientations of an end node of an object, such as
a hand or a foot. The system then determines the motion parameters of other nodes to accomplish
the desired motion.
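Both directions of the kinematic description above reduce to simple vector arithmetic: forward kinematics takes a velocity and yields a speed, while inverse kinematics (with zero acceleration) takes endpoints and yields the constant velocity. A minimal sketch:

```python
from math import sqrt

def speed(v):
    """Magnitude of a velocity vector, e.g. in km per sec."""
    return sqrt(sum(c * c for c in v))

def constant_velocity(p0, p1, t):
    """Inverse kinematics with zero acceleration: the constant velocity
    that moves an object from position p0 to p1 in time t."""
    return tuple((b - a) / t for a, b in zip(p0, p1))

print(speed((3, 0, -4)))                             # the text's example: speed 5.0
print(constant_velocity((0, 0, 0), (6, 0, -8), 2))   # (3.0, 0.0, -4.0)
```

For articulated objects such as a hand or foot, the same idea is applied hierarchically: the system solves for the motion parameters of the intermediate nodes once the end node's positions are specified.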
Dynamic descriptions, on the other hand, require the specification of the forces that produce the
velocities and accelerations. The description of object behavior in terms of the influence of forces
is generally referred to as physically based modeling. Examples of forces affecting object motion
include electromagnetic, gravitational, frictional, and other mechanical forces.
