Chapter Six
State Management and Drawing Geometric Objects
Basic State Management
OpenGL maintains many states and state variables.
An object may be rendered with lighting, texturing, hidden surface
removal, fog, and other states affecting its appearance.
Most of these states are initially inactive.
These states may be costly to activate; for example, turning on texture
mapping will almost certainly slow down the rendering of a primitive.
However, the quality of the image improves and the result looks more realistic,
due to the enhanced graphics capabilities.
Cont’d
To turn on and off many of these states, use these two simple commands:
• void glEnable(GLenum cap);
• void glDisable(GLenum cap);
glEnable() turns on a capability, and glDisable() turns it off.
The glEnable() function enables a specific OpenGL capability.
It takes a single argument, cap, an enumerated constant (GLenum)
identifying the capability to be enabled.
Examples
// Enable depth testing for accurate rendering of 3D scenes
• glEnable(GL_DEPTH_TEST);
// Enable blending for transparency
• glEnable(GL_BLEND);
// Enable face culling for efficient rendering of closed surfaces
• glEnable(GL_CULL_FACE);
• Face culling is a technique used to improve rendering performance by omitting
the drawing of polygons that are facing away from the viewer.
// Enable Fog
• glEnable(GL_FOG);
• By enabling fog, you simulate effects like foggy weather or mist, and distant
objects appear less distinct.
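A minimal sketch of how these calls might be grouped in a one-time setup routine (the function name init() is only an illustration, not part of OpenGL):

void init(void)
{
    glEnable(GL_DEPTH_TEST);   /* hidden-surface removal via the depth buffer */
    glEnable(GL_BLEND);        /* blending for transparency                   */
    glEnable(GL_CULL_FACE);    /* skip polygons facing away from the viewer   */
    glEnable(GL_FOG);          /* atmospheric fog                             */
}
/* Any of these can later be turned off again, e.g. glDisable(GL_FOG); */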
Displaying Points, Lines, and Polygons
By default, a point is drawn as a single pixel on the screen,
a line is drawn solid and one pixel wide, and
polygons are drawn solidly filled in.
Points Details
To control the size of a rendered point, use glPointSize() and supply the desired size
in pixels as the argument.
void glPointSize(GLfloat size);
• Sets the width in pixels for rendered points; size must be greater than 0.0 and
by default is 1.0.
• Without antialiasing, a point is rendered as a square: if the width is 1.0, the square is 1 pixel by 1 pixel;
• if the width is 2.0, the square is 2 pixels by 2 pixels, and so on.
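For example (a small sketch; the coordinates are arbitrary), 4-pixel-wide points can be drawn like this:

glPointSize(4.0);                 /* each point becomes a 4 x 4 pixel square */
glBegin(GL_POINTS);
    glVertex2f(10.0, 10.0);
    glVertex2f(50.0, 25.0);
glEnd();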
Lines Details
With OpenGL, you can specify lines with different widths and lines that are stippled
in various ways:
• dotted,
• dashed,
• drawn with alternating dots and dashes, and so on.
Wide Lines
void glLineWidth(GLfloat width);
Sets the width in pixels for rendered lines; width must be greater than 0.0 and by
default is 1.0.
Cont’d
Stippled Lines
To make stippled (dotted or dashed) lines,
You use the command glLineStipple() to define the stipple pattern, and then you
enable line stippling with glEnable().
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x3F07);
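A small sketch combining line width and stippling (the endpoints are arbitrary illustrations):

glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x3F07);         /* repeat factor 1, pattern 0x3F07 */
glLineWidth(2.0);                 /* 2-pixel-wide line               */
glBegin(GL_LINES);
    glVertex2f(50.0, 125.0);
    glVertex2f(150.0, 125.0);
glEnd();
glDisable(GL_LINE_STIPPLE);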
Cont’d
Figure 6.1 Stippled Lines
Polygons Details
Polygons are typically drawn by filling in all the pixels enclosed within the boundary,
but you can also draw them as outlined polygons or simply as points at the vertices.
A filled polygon might be solidly filled or stippled with a certain pattern.
Polygons as Points, Outlines, or Solids
A polygon has two faces:
• Front and
• Back
and might be rendered differently depending on which side is
facing the viewer.
Cont’d
By default, both front and back faces are drawn in the same way.
To change this, or to draw only outlines or vertices, use glPolygonMode().
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_LINE);
Stippling Polygons
To fill polygons with a stipple pattern, first define the 32 x 32 bit pattern with
glPolygonStipple() and then enable polygon stippling:
glEnable(GL_POLYGON_STIPPLE);
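A minimal sketch using a simple halftone mask built in a loop (the mask values are only an illustration; glPolygonStipple() expects 128 bytes, 4 bytes per row):

GLubyte halftone[128];            /* 32 x 32 bits, 4 bytes per row */
int i;
for (i = 0; i < 128; i++)
    halftone[i] = (i % 8 < 4) ? 0xAA : 0x55;   /* alternate rows of 10101010 / 01010101 */

glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(halftone);
/* ... filled polygons drawn here are filled with the pattern ... */
glDisable(GL_POLYGON_STIPPLE);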
Normal Vectors
A normal vector (or normal, for short) is a vector that points in a direction that’s
perpendicular to a surface.
An object’s normal vectors define the orientation of its surface in space; in
particular, its orientation relative to light sources.
Cont’d
Normal vectors are essential for simulating realistic lighting and shading effects in
computer graphics.
For example, the fragment below draws two edge vectors from the origin, one of length
3 along the z-axis and one of length 2 along the x-axis; a normal for the plane they
span is perpendicular to both:
glBegin(GL_LINES);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 0.0, 3.0); // length 3, along the z-axis
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(2.0, 0.0, 0.0); // length 2, along the x-axis
glEnd();
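In OpenGL itself, normals are supplied with glNormal*(): it sets the current normal, which is then associated with the vertices that follow. A short sketch (the coordinates are illustrative only):

glBegin(GL_TRIANGLES);
    glNormal3f(0.0, 0.0, 1.0);    /* normal pointing along +z                  */
    glVertex3f(0.0, 0.0, 0.0);    /* all three vertices lie in the z = 0 plane */
    glVertex3f(2.0, 0.0, 0.0);
    glVertex3f(0.0, 3.0, 0.0);    /* so they share the same normal             */
glEnd();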
Vertex Arrays
You may have noticed that OpenGL requires many function calls to render
geometric primitives.
Drawing a 100-sided polygon requires 102 function calls:
• one call to glBegin(), one call for each of the 100 vertices, and a final call to glEnd().
• In general, drawing an n-sided polygon takes n + 2 calls.
In other code, additional information (such as polygon boundary edge flags or surface
normals) adds one or more function calls for each vertex.
This can quickly double or triple the number of function calls required for one
geometric object.
For some systems, function calls have a great deal of overhead and can hinder
performance.
Cont’d
OpenGL has vertex array routines that allow you to specify a lot of vertex-related
data with just a few arrays and to access that data with equally few function calls.
Using vertex array routines, all 100 vertices in a 100-sided polygon could be put into
one array and drawn with a single function call.
Arranging data in vertex arrays may increase the performance of your application.
Also, using vertex arrays may allow non-redundant processing of shared
vertices (vertex sharing is not supported on all implementations of OpenGL).
Cont’d
There are three steps to use vertex arrays to render geometry.
i. Activate (enable) up to six arrays, each to store a different type of data:
vertex coordinates, RGBA colors, color indices, surface normals, texture
coordinates, or polygon edge flags.
ii. Put data into the array or arrays. The arrays are accessed by the addresses
of (that is, pointers to) their memory locations.
iii. Draw geometry with the data. OpenGL obtains the data from all activated
arrays by dereferencing the pointers.
Cont’d
Step 1: Enabling Arrays
• The first step is to call glEnableClientState() with an enumerated
parameter, which activates the chosen array.
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
Step 2: Specifying Data for the Arrays
Specify Vertex Array and Color Array
static GLint vertices[] = {25, 25,
                           100, 325,
                           175, 25};
static GLfloat colors[] = {1.0, 0.2, 0.2,
                           0.2, 0.2, 1.0,
                           0.8, 1.0, 0.2};
Cont’d
Step 3: Setting up the pointer
glColorPointer(3, GL_FLOAT, 0, colors);
// 3 components per color (RGB), data type GL_FLOAT, stride 0 (tightly packed), pointer to the color array
glVertexPointer(2, GL_INT, 0, vertices);
// 2 coordinates per vertex (2D), data type GL_INT, stride 0 (tightly packed), pointer to the vertex array
Cont’d
Step 4: Draw the polygon
glDrawArrays(GL_POLYGON, 0, 3);
// primitive type GL_POLYGON, starting index 0, number of vertices to render (3)
Step 5: Disable client states
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
Step 6: Swap Buffers
glutSwapBuffers(); // Swap the front and back buffers
// (rendering operations were performed in the back buffer before the swap)
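Putting the six steps together, a display callback might look like the sketch below (the function name display() and the double-buffered GLUT setup are assumptions, not requirements):

static GLint   vertices[] = { 25,  25,
                             100, 325,
                             175,  25};
static GLfloat colors[]   = {1.0, 0.2, 0.2,
                             0.2, 0.2, 1.0,
                             0.8, 1.0, 0.2};

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glEnableClientState(GL_COLOR_ARRAY);      /* Step 1: enable arrays      */
    glEnableClientState(GL_VERTEX_ARRAY);

    glColorPointer(3, GL_FLOAT, 0, colors);   /* Steps 2-3: supply the data */
    glVertexPointer(2, GL_INT, 0, vertices);

    glDrawArrays(GL_POLYGON, 0, 3);           /* Step 4: draw 3 vertices    */

    glDisableClientState(GL_COLOR_ARRAY);     /* Step 5: disable arrays     */
    glDisableClientState(GL_VERTEX_ARRAY);

    glutSwapBuffers();                        /* Step 6: swap buffers       */
}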
Chapter Seven
Representing Objects
Introduction
A 3D object, or three-dimensional object, is a physical or digital entity that
exists in three-dimensional space.
In computer graphics and geometry, a 3D object is typically described using
coordinates in a three-dimensional Cartesian coordinate system.
These objects have length, width, and height, providing a more realistic
representation compared to 2D objects.
Modeling Using Polygon
3D modeling using polygons is a common and widely used approach in computer
graphics.
In this method, 3D objects are represented as surfaces made up of interconnected
polygons, typically triangles or quads.
Creating Polygon Meshes
Creating representational polygon meshes involves techniques that enable the
modeling of 3D objects with polygons in a way that accurately represents the
intended shapes.
Cont’d
Polygonal meshes can be represented using tables of data:
• Geometric tables:
o These store information about the geometry of the polygonal mesh,
o i.e. what are the shapes/positions of the polygons?
• Attribute tables:
o These store information about the appearance of the polygonal mesh,
o i.e. what colour is it, is it opaque or transparent, etc.
o This information can be specified for each polygon individually or for the
mesh as a whole.
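As a sketch (the structure names are illustrative, not from any particular library), a geometric table can be a vertex list plus a face list of indices into it, with a parallel attribute table holding per-face appearance data:

typedef struct { float x, y, z; } Vertex;               /* geometric table: vertex positions        */
typedef struct { int v[3]; } Triangle;                  /* geometric table: faces as vertex indices */
typedef struct { float r, g, b, opacity; } FaceAttrib;  /* attribute table: per-face appearance     */

/* A tetrahedron: 4 vertices, 4 triangular faces, 4 attribute entries. */
Vertex     verts[] = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};
Triangle   faces[] = {{{0,1,2}}, {{0,1,3}}, {{0,2,3}}, {{1,2,3}}};
FaceAttrib attrs[] = {{1,0,0,1}, {0,1,0,1}, {0,0,1,1}, {1,1,0,0.5f}};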
Cont’d
Non Polygon Representations
In computer graphics, non-polygonal representations are alternative methods
for representing 3D objects that don't rely on polygonal meshes.
These representations often provide specific advantages in certain
applications.
Here are some common non-polygonal representations:
• Voxel Representation
• Point Clouds
• Skeleton or Wireframe Models
• Blobby Models
Voxel Representation
Voxel (volume element) grids divide space into small 3D cubes.
Each voxel stores information about the object's presence or properties.
A voxel is a unit of graphic information that defines a value at a point on a regular
grid in three-dimensional space.
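A minimal sketch of a voxel grid as a dense 3D array of occupancy values (the grid size and helper name are illustrative):

#define NX 64
#define NY 64
#define NZ 64

unsigned char voxels[NX][NY][NZ];   /* 1 = object occupies the cell, 0 = empty */

/* Mark one cell as occupied (bounds checking omitted for brevity). */
void set_voxel(int x, int y, int z) { voxels[x][y][z] = 1; }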
Point Cloud Representation
A collection of individual points in 3D space, where each point represents a
specific position.
There's no connectivity information between points.
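A point cloud can be stored as a flat array of positions, optionally with per-point attributes such as color; the structure below is an illustrative sketch:

typedef struct {
    float x, y, z;    /* position in 3D space     */
    float r, g, b;    /* optional per-point color */
} CloudPoint;

CloudPoint cloud[100000];   /* no connectivity: just a list of points */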
Skeleton or Wireframe Model
Represent the 3D object using a skeletal structure composed of lines or
curves.
The structure defines the overall shape.
Chapter Eight
Color and Images
Color Models
PROPERTIES OF LIGHT
What we perceive as "light", or different colors, is a narrow frequency band
within the electromagnetic spectrum.
A few of the other frequency bands within this spectrum are called radio
waves, microwaves, infrared waves, and X-rays.
Each frequency value within the visible band corresponds to a distinct color.
At the low-frequency end is red (about 4.3 × 10^14 hertz), and the highest
frequency we can see is violet (about 7.5 × 10^14 hertz).
Cont’d
We perceive EM radiation within the 400-700 nm wavelength range, the tiny piece of
the spectrum between infrared and ultraviolet.
Cont’d
The purpose of a color model is to facilitate the specification of colors in
some standard generally accepted way.
Each industry that uses color employs the most suitable color model.
RGB Color Models
In the RGB model, each color appears as a combination of red, green, and blue.
This model is called additive, and the colors are called primary colors.
The primary colors can be added to produce the secondary colors of light.
Magenta= Red + Blue
Cyan = Green + Blue
Yellow = Red + Green
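As a sketch, additive mixing is a component-wise sum clamped to 1.0; the helper below is only an illustration:

typedef struct { float r, g, b; } RGB;

RGB add(RGB a, RGB b)                       /* additive color mixing */
{
    RGB c = { a.r + b.r, a.g + b.g, a.b + b.b };
    if (c.r > 1.0f) c.r = 1.0f;
    if (c.g > 1.0f) c.g = 1.0f;
    if (c.b > 1.0f) c.b = 1.0f;
    return c;
}
/* add(red, blue) yields (1, 0, 1), i.e. magenta. */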
Cont’d
The color subspace of interest is the unit cube:
RGB values are normalized to the range 0 to 1, with red, green, and blue at three corners;
cyan, magenta, and yellow are at three other corners;
black is at the origin; and white is at the corner farthest from the origin.
Cont’d
White = W (r, g, b) = (1, 1, 1)
Black = K (r, g, b) = (0, 0, 0)
Red = R (r, g, b) = (1, 0, 0)
Green = G (r, g, b) = (0, 1, 0)
Blue = B (r, g, b) = (0, 0, 1)
Cyan = C (r, g, b) = (0, 1, 1)
Magenta = M (r, g, b) = (1, 0, 1)
Yellow = Y (r, g, b) = (1, 1, 0)
CIE color Space
Color is a human perception (a percept).
Color is not a physical property.
But it is related to the light spectrum of a stimulus.
The CIE XYZ color space is a fundamental color space defined by the International
Commission on Illumination (CIE).
It is designed to be a linear model that encompasses all perceivable colors.
The CIE XYZ color space is based on the concept of tristimulus values, which
represent the amounts of three imaginary primaries: X, Y, and Z.
These values are derived from the spectral power distributions of light.
Cont’d
Tristimulus Values:
1) Components:
•X (Red-Green Axis): Represents the amount of energy in the red-green axis.
•Y (Luminance): Represents the brightness or luminance.
•Z (Blue-Yellow Axis): Represents the amount of energy in the blue-yellow axis.
2) Normalization:
• Tristimulus values are often normalized to chromaticity coordinates, x = X / (X + Y + Z) and y = Y / (X + Y + Z), which describe the color as perceived by the human eye independently of its brightness.
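A short sketch of the normalization step, reducing tristimulus values to chromaticity coordinates (the function name is illustrative):

/* x = X / (X + Y + Z), y = Y / (X + Y + Z); luminance is carried by Y. */
void xyz_to_chromaticity(float X, float Y, float Z, float *x, float *y)
{
    float sum = X + Y + Z;
    *x = X / sum;
    *y = Y / sum;
}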
Cont’d
Image Formats and Their Applications
An image format is a standardized way of representing and storing
digital images.
It defines the structure and encoding of the data that makes up an
image,
specifying how the visual information is organized and stored in a file.
Different image formats have distinct characteristics, compression
methods, color representations, and features that make them suitable
for specific use cases.
JPEG (Joint Photographic Experts Group)
JPEG is a lossy compression format, meaning it achieves high
compression ratios by discarding some image data.
This can result in a reduction in image quality, especially at higher
compression levels.
JPEG supports 24-bit color, allowing it to represent millions of colors.
Applications:
Photography
Web Images
PNG (Portable Network Graphics)
PNG uses lossless compression, preserving all image data without loss
of quality. It is suitable for images that require high fidelity.
PNG supports an alpha channel, allowing for transparency.
This makes PNG ideal for images with a need for a transparent
background.
PNG supports 24-bit color as well as 8-bit color with alpha channel,
providing a wide range of color options.
Applications:
Graphics with Transparency
Images with Text
GIF (Graphics Interchange Format):
GIF uses lossless compression but is less effective than PNG in terms of
compression ratios.
GIF supports up to 8 bits per pixel, limiting the number of colors to 256.
This makes it less suitable for complex photographic images but
sufficient for simple graphics.
GIF supports animation by combining multiple frames into a single file.
GIF allows a single color to be fully transparent, which can be useful
for creating simple images with transparent regions.
Applications:
Simple Graphics: Suitable for icons, logos, and other simple graphics.
Basic Animations: Used for creating simple animated images.