CST 304 CGIP Model Question Paper - Solution

APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY

SIXTH SEMESTER B.TECH DEGREE EXAMINATION

Model Question Paper - SOLUTION

Course Code: CST 304

Course Name: Computer Graphics and Image Processing

Max. Marks : 100 Duration: 3 Hours

PART A

Answer All Questions. Each Question Carries 3 Marks


1. Justify the approach of using integer arithmetic in Bresenham’s line
drawing algorithm.
Answer:
It is an efficient method because it uses only incremental integer operations (additions, subtractions, and a multiplication by 2, which can be done with a bit shift), avoiding the floating-point arithmetic and rounding required by direct use of the line equation. These operations can be performed very rapidly, so lines can be generated quickly.
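A minimal Python sketch of the integer-only inner loop for the case 0 < slope < 1 (the endpoints in the example call are illustrative, not from the original solution):

```python
def bresenham_line(x0, y0, x1, y1):
    """Rasterize a line with 0 < slope < 1 using only integer arithmetic."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx            # initial decision parameter
    x, y = x0, y0
    points = [(x, y)]
    while x < x1:
        x += 1
        if p < 0:              # next pixel stays on the same row
            p += 2 * dy
        else:                  # next pixel moves up one row
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points

print(bresenham_line(20, 10, 30, 18))   # example call with hypothetical endpoints
```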


2. Consider a raster system with a resolution of 1024*1024. What is the size of the
raster needed to store 4 bits per pixel? How much storage is needed if 8 bits per
pixel are to be stored?
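The worked answer does not appear above; a short calculation, assuming the frame buffer stores every pixel:

```python
pixels = 1024 * 1024                 # resolution 1024 x 1024

bits_4 = pixels * 4                  # 4 bits per pixel
print(bits_4 / 8 / 1024, "KB")       # 4,194,304 bits = 524,288 bytes = 512 KB

bits_8 = pixels * 8                  # 8 bits per pixel
print(bits_8 / 8 / 1024 ** 2, "MB")  # 8,388,608 bits = 1,048,576 bytes = 1 MB
```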


3. Show that two successive reflections about either of the coordinate axes are equivalent to a single rotation about the coordinate origin.
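The worked proof does not appear above; a quick numerical check that a reflection about the x-axis followed by a reflection about the y-axis equals a rotation of 180° about the origin:

```python
import numpy as np

Rx = np.array([[1, 0], [0, -1]])        # reflection about the x-axis
Ry = np.array([[-1, 0], [0, 1]])        # reflection about the y-axis

theta = np.pi                           # 180 degrees
R180 = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])

print(Ry @ Rx)                          # [[-1  0], [ 0 -1]]
print(np.allclose(Ry @ Rx, R180))       # True: the composition is a 180-degree rotation
```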

4. Determine a sequence of basic transformations that are equivalent to the x-direction shearing matrix.


5. Find the window to viewport normalization transformation with window lower left corner at (1,1) and upper right corner at (2,6).
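The worked answer does not appear above; a sketch under the assumption that the viewport is the unit square with corners (0,0) and (1,1), using the standard scale-and-translate form:

```python
# Window: (xw_min, yw_min) = (1, 1), (xw_max, yw_max) = (2, 6)
# Assumed viewport (normalized device coordinates): (0, 0) to (1, 1)
xw_min, yw_min, xw_max, yw_max = 1, 1, 2, 6
xv_min, yv_min, xv_max, yv_max = 0, 0, 1, 1

sx = (xv_max - xv_min) / (xw_max - xw_min)   # = 1
sy = (yv_max - yv_min) / (yw_max - yw_min)   # = 0.2

def window_to_viewport(xw, yw):
    xv = xv_min + (xw - xw_min) * sx
    yv = yv_min + (yw - yw_min) * sy
    return xv, yv

print(window_to_viewport(1, 1))   # (0.0, 0.0): window corner maps to viewport corner
print(window_to_viewport(2, 6))   # (1.0, 1.0)
```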


6. Find the orthographic projection of a unit cube onto the x=0, y=0 and z=0 plane.
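The worked answer does not appear above; a sketch of the idea (an orthographic projection onto a coordinate plane simply zeroes the corresponding coordinate of each vertex):

```python
import numpy as np

# Vertices of the unit cube
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])

P_z0 = np.diag([1, 1, 0])   # projection onto z = 0 (top view): z set to 0
P_y0 = np.diag([1, 0, 1])   # projection onto y = 0 (front view): y set to 0
P_x0 = np.diag([0, 1, 1])   # projection onto x = 0 (side view): x set to 0

print(cube @ P_z0)          # each vertex (x, y, z) maps to (x, y, 0)
```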


7. Define Sampling and Quantization of an image.

In order to become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude.

A frame grabber or digitizer captures the analogue video signal; to create a digital image, this continuous data must be converted into digital form.

This is done in two steps:

➢ Sampling
➢ Quantization

Digitizing the coordinate values (x, y) is called sampling; digitizing the amplitude values is called quantization.

• The sampling rate determines the spatial resolution of the digitized image.

• The quantization level determines the number of grey levels in the digitized image.

The magnitude of the sampled image is expressed as a digital value in image processing.

The transition between continuous values of the image function and its digital equivalent is called quantization.
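A minimal numerical sketch of the two steps on a 1-D signal (the signal, sample count and bit depth are illustrative):

```python
import numpy as np

# Continuous signal f(t), standing in for the image function f(x, y)
t_fine = np.linspace(0, 1, 1000)
f = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t_fine)

# Sampling: keep only N equally spaced samples (controls spatial resolution)
N = 16
idx = np.linspace(0, len(t_fine) - 1, N).astype(int)
samples = f[idx]

# Quantization: map each sample amplitude to one of L = 2**k grey levels
k = 3                                   # 3 bits per sample
L = 2 ** k
quantized = np.round(samples * (L - 1)).astype(int)

print(samples[:4])     # continuous amplitudes
print(quantized[:4])   # integer grey levels in the range 0 .. L-1
```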


8. Give any three applications of digital image processing

1) Image sharpening and restoration


It refers to the process by which we modify the look and feel of an image to achieve a desired output. It includes conversion, sharpening, blurring, edge detection, retrieval, and recognition of images.

2) Medical Field
There are several applications in the medical field that depend on digital image processing.
➢ Gamma-ray imaging
➢ PET scan
➢ X-Ray Imaging
➢ Medical CT scan
➢ UV imaging

3) Robot vision
Several robotic machines work using digital image processing. Through image processing techniques a robot finds its way, for example, hurdle-detection and line-follower robots.

4) Pattern recognition
It combines image processing with artificial intelligence so that computer-aided diagnosis, handwriting recognition and image recognition can be implemented easily. Nowadays, image processing is widely used for pattern recognition.

5) Video processing
It is also one of the applications of digital image processing. A video is a collection of frames (pictures) displayed fast enough to give the impression of motion. Video processing involves frame rate conversion, motion detection, noise reduction, colour space conversion, etc.

9. A captured image appears very dark because of wrong lens aperture setting. Describe an
enhancement technique which is appropriate to enhance such an image.

➢ Histogram equalization (HE) is a simple and effective contrast enhancement technique for enhancing such an image.
➢ HE spreads out the intensities of the image pixels based on the whole-image histogram.
➢ The shape of the histogram of an image gives useful information about the possibility for contrast enhancement.


• The aim is to transform the first three histogram types (dark, bright and low-contrast) into the fourth (high-contrast) type, i.e. to increase the dynamic range of the image. This is called histogram processing.
• In a dark image the components of the histogram are concentrated on the low (dark) side of the gray scale.
• The components of the histogram of a bright image are biased toward the high side of the gray scale.
• A low-contrast image has a narrow histogram centered towards the middle of the gray scale.
• Components of the histogram of a high-contrast image cover a broad range of the gray scale, and the distribution of pixels is not far from uniform, with very few bins much higher than the others.
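A minimal histogram-equalization sketch in pure NumPy (the 8-bit gray-level assumption and the random dark test image are illustrative):

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # histogram h(r_k)
    cdf = hist.cumsum() / img.size                   # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)       # s_k = (L - 1) * CDF(r_k)
    return lut[img]                                  # map every pixel

# Simulate a dark, low-contrast image: values crowded into 0..59
dark = np.random.randint(0, 60, size=(64, 64)).astype(np.uint8)
bright = equalize(dark)
print(dark.min(), dark.max())      # narrow range, e.g. 0 59
print(bright.min(), bright.max())  # spread over (nearly) the full 0..255 range
```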

10. Suggest an approach of thresholding that should be used in case of uniform illumination.

Global Thresholding
• The simplest of all thresholding techniques is to partition the image histogram by using a
single global threshold, T.
• Segmentation is then accomplished by scanning the image pixel by pixel, labeling each pixel as object or background depending on whether the gray level of that pixel is greater or less than the value of T.

• Binary Thresholding


Input Image

Threshold = 4

Output image matrix =

0 0 0 7
7 7 0 0
0 7 0 7
0 0 7 7
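A minimal sketch of this kind of global (binary) thresholding; the original input matrix is not shown above, so the input below is hypothetical, chosen only so that the same output pattern is produced (gray levels above the threshold are set to the maximum level 7, the rest to 0):

```python
import numpy as np

# Hypothetical 4x4 input (the original input matrix is not reproduced above)
img = np.array([[1, 2, 3, 7],
                [6, 5, 0, 4],
                [2, 7, 1, 6],
                [3, 4, 5, 7]])

T = 4
out = np.where(img > T, 7, 0)   # gray levels above T -> 7 (object), else 0 (background)
print(out)
# [[0 0 0 7]
#  [7 7 0 0]
#  [0 7 0 7]
#  [0 0 7 7]]
```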

Part B
(Answer any one question from each module. Each question carries 14
Marks)

11 (a) Write the midpoint circle drawing algorithm and use it to plot a circle with radius = 20 and center (50, 30).
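The worked solution does not appear above; a minimal Python sketch of the midpoint circle algorithm for these values (variable names are illustrative):

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: compute one octant, mirror to the other seven."""
    points = set()
    x, y = 0, r
    p = 1 - r                          # initial decision parameter
    while x <= y:
        # plot the eight symmetric points around the centre (xc, yc)
        for dx, dy in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((xc + dx, yc + dy))
        x += 1
        if p < 0:
            p += 2 * x + 1
        else:
            y -= 1
            p += 2 * (x - y) + 1
    return sorted(points)

pts = midpoint_circle(50, 30, 20)
print(pts[:5], "...", len(pts), "pixels")   # first few plotted pixels
```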


b) Draw the architecture of raster scan display systems and explain its working principle. (4)


OR

12 (a) Derive the initial decision parameter of Bresenham’s line drawing algorithm and use the algorithm to rasterize a line with endpoints (2,2) and (10,10). (10)
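The worked derivation does not appear above; a short worked note, assuming the standard form of the algorithm. For endpoints (2,2) and (10,10), Δx = Δy = 8, so the initial decision parameter is p0 = 2Δy - Δx = 8. Because Δy = Δx, every update gives p(k+1) = p(k) + 2Δy - 2Δx = 8 > 0, so x and y are both incremented at each step and the rasterized pixels are (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10). A tiny check:

```python
dx = dy = 8                       # endpoints (2, 2) and (10, 10)
p = 2 * dy - dx                   # initial decision parameter p0 = 8
x, y = 2, 2
pixels = [(x, y)]
while x < 10:
    x += 1
    if p < 0:
        p += 2 * dy
    else:
        y += 1
        p += 2 * (dy - dx)        # = 0 here, so p stays at 8
    pixels.append((x, y))
print(pixels)                     # [(2, 2), (3, 3), ..., (10, 10)]
```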




b) Explain the working principle of color CRT monitors with suitable illustrations.


Color CRT Monitors:
A color CRT monitor displays color images by using a combination of phosphors that emit different-colored light. There are two popular approaches for producing color displays with a CRT:


1. Beam Penetration Method


2. Shadow-Mask Method

1. Beam Penetration Method:


The beam-penetration method has been used with random-scan monitors. In this method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed color depends on how far the electron beam penetrates the phosphor layers. This method produces only four colors: red, green, orange and yellow. A beam of slow electrons excites only the outer red layer, so the screen shows red. A beam of high-speed electrons penetrates to the inner green layer, so the screen shows green. Intermediate beam speeds excite both layers, producing the combination colors orange and yellow.

2. Shadow-Mask Method:
o The shadow-mask method is commonly used in raster-scan systems because it produces a much wider range of colors than the beam-penetration method.

Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.

o One phosphor dot emits: red light


o Another emits: green light
o Third emits: blue light

This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid just behind the
phosphor coated screen.

The shadow mask grid is pierced with small round holes in a triangular pattern.
The figure shows the delta-delta shadow-mask method commonly used in color CRT systems.


➢ Working: Triad arrangement of red, green, and blue guns.


➢ The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3 electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot patterns.
➢ When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen.
➢ The phosphor dots in the triangles are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.
➢ Inline arrangement: Another configuration for the 3 electron guns is an inline arrangement, in which the 3 electron guns and the corresponding red-green-blue color dots on the screen are aligned along one scan line rather than in a triangular pattern.
➢ This inline arrangement of electron guns is easier to keep in alignment and is commonly used in high-resolution color CRTs.

13 (a) Compare boundary fill algorithm and flood fill algorithm. (5)
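The written comparison does not appear above; a minimal sketch of the two fills that illustrates the key difference (boundary fill keeps filling until it meets a given boundary color, while flood fill replaces a given interior color); the 4-connected recursive form and the test grid are illustrative:

```python
def boundary_fill(img, x, y, fill, boundary):
    """Fill outward from (x, y) until pixels of the boundary color are met."""
    if 0 <= y < len(img) and 0 <= x < len(img[0]) \
            and img[y][x] != boundary and img[y][x] != fill:
        img[y][x] = fill
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected
            boundary_fill(img, x + dx, y + dy, fill, boundary)

def flood_fill(img, x, y, fill, old):
    """Replace every 4-connected pixel of the old interior color with the fill color."""
    if 0 <= y < len(img) and 0 <= x < len(img[0]) and img[y][x] == old:
        img[y][x] = fill
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            flood_fill(img, x + dx, y + dy, fill, old)

# Tiny test: 'B' is the boundary color, '.' the interior, 'F' the fill color
grid = [list("BBBBB"), list("B...B"), list("B...B"), list("BBBBB")]
boundary_fill(grid, 2, 2, "F", "B")
print("\n".join("".join(row) for row in grid))
```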


b) Reflect a triangle ABC about the line 3x - 4y + 8 = 0. The position vectors of the vertices of ABC are given as A(4,1), B(5,2) and C(4,3). (9)
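The worked matrix solution (translate, rotate, reflect, and transform back) does not appear above; a numerical check using the direct point-to-line reflection formula P' = P - 2(a·Px + b·Py + c)/(a² + b²) · (a, b) for the line ax + by + c = 0:

```python
import numpy as np

a, b, c = 3, -4, 8                       # line 3x - 4y + 8 = 0

def reflect(p):
    p = np.asarray(p, dtype=float)
    d = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return p - 2 * d * np.array([a, b])

for name, p in [("A", (4, 1)), ("B", (5, 2)), ("C", (4, 3))]:
    print(name, "->", reflect(p))
# A -> [0.16 6.12], B -> [1.4 6.8], C -> [2.08 5.56]
```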


OR

14 (a) Explain the need of using vanishing points in projections. (4)

A vanishing point is a point on the image plane of a perspective drawing where the two-
dimensional perspective projections of mutually parallel lines in three-dimensional space appear to
converge.

The vanishing point may also be referred to as the “direction point”.


Vanishing-point perspective is used in graphic editing and in 3D video games. It can be used to render 3D shapes (buildings and other objects), add perspective to a background scene (a road, a train track) or add shadow effects.

(Figure: a scene demonstrating the use of vanishing points in computer graphics.)

(b) Explain the Cohen-Sutherland line clipping algorithm. Use the algorithm to clip the line P1(70, 20), P2(100, 10) against a window with lower left hand corner (50, 10) and upper right hand corner (80, 40). (10)
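The worked solution does not appear above; a minimal Python sketch of the algorithm applied to these values (the region-code bit values and helper names are illustrative). For this data P1 is inside the window (code 0000) and P2 lies to its right (code 0010), so the segment is clipped against the edge x = 80, giving an accepted segment from (70, 20) to approximately (80, 16.67):

```python
XMIN, YMIN, XMAX, YMAX = 50, 10, 80, 40        # clipping window
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8          # region-code bits

def code(x, y):
    c = 0
    if x < XMIN: c |= LEFT
    if x > XMAX: c |= RIGHT
    if y < YMIN: c |= BOTTOM
    if y > YMAX: c |= TOP
    return c

def cohen_sutherland(x1, y1, x2, y2):
    c1, c2 = code(x1, y1), code(x2, y2)
    while True:
        if c1 == 0 and c2 == 0:                # trivially accepted
            return (x1, y1), (x2, y2)
        if c1 & c2:                            # trivially rejected
            return None
        c = c1 or c2                           # pick an endpoint outside the window
        if c & TOP:      x, y = x1 + (x2 - x1) * (YMAX - y1) / (y2 - y1), YMAX
        elif c & BOTTOM: x, y = x1 + (x2 - x1) * (YMIN - y1) / (y2 - y1), YMIN
        elif c & RIGHT:  x, y = XMAX, y1 + (y2 - y1) * (XMAX - x1) / (x2 - x1)
        else:            x, y = XMIN, y1 + (y2 - y1) * (XMIN - x1) / (x2 - x1)
        if c == c1: x1, y1, c1 = x, y, code(x, y)
        else:       x2, y2, c2 = x, y, code(x, y)

print(cohen_sutherland(70, 20, 100, 10))       # ((70, 20), (80.0, 16.666...))
```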


(a) Describe the Sutherland-Hodgman polygon clipping algorithm. What are its limitations? (7)
Sutherland-Hodgman Polygon Clipping Algorithm
1. Begin
2. Read coordinates of all vertices of the Polygon.
3. Read coordinates of the clipping window
4. Consider the left edge of the window
5. Compare the vertices of each edge of the polygon, individually with the clipping plane.
6. Save the resulting intersections and vertices in the new list of vertices according to four possible
relationships between the edge and the clipping boundary.
7. Repeat steps 5 and 6 for the remaining edges of the clipping window. Each time, the resultant list of vertices is passed on to be processed against the next edge of the clipping window.
8. End

Limitations

➢ Convex polygons are clipped correctly by the Sutherland-Hodgman algorithm, but concave polygons may not be clipped correctly.

➢ A clipped concave polygon may be displayed with extraneous lines, as shown in the following figure.
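A minimal Python sketch of the algorithm for an axis-aligned rectangular clip window (the window, the example polygon and the helper names are illustrative, not from the original solution):

```python
def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman polygon clipping against an axis-aligned window."""

    def clip_edge(poly, axis, value, keep_greater):
        """Clip poly against one boundary: keep points with coordinate >= value
        (keep_greater=True) or <= value (keep_greater=False)."""
        def inside(p):
            return p[axis] >= value if keep_greater else p[axis] <= value

        def intersect(p, q):
            t = (value - p[axis]) / (q[axis] - p[axis])
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))   # entering: add intersection
                out.append(cur)                        # inside vertex is kept
            elif inside(prev):
                out.append(intersect(prev, cur))       # leaving: add intersection only
        return out

    poly = list(polygon)
    for axis, value, keep_greater in [(0, xmin, True), (0, xmax, False),
                                      (1, ymin, True), (1, ymax, False)]:
        if not poly:
            break
        poly = clip_edge(poly, axis, value, keep_greater)
    return poly

# Hypothetical example: a triangle partly outside a 10x10 window
print(clip_polygon([(2, 2), (14, 2), (2, 14)], 0, 0, 10, 10))
# [(2.0, 10.0), (2, 2), (10.0, 2.0), (10.0, 6.0), (6.0, 10.0)]
```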


(b) Explain how visible surfaces can be detected using the depth buffer algorithm. (7)

Depth Buffer (Z-Buffer) Method
➢ This method was developed by Catmull. It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest visible surface.
➢ In this method each surface is processed separately, one pixel position at a time across the surface.
➢ The depth values for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.

➢ It is applied very efficiently to polygon surfaces. Surfaces can be processed in any order.
➢ To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.

➢ The depth buffer is used to store depth values for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).

➢ The frame buffer is used to store the intensity (color) value at each (x, y) position.
➢ The z-coordinates are usually normalized to the range [0, 1].
➢ The z value 0 indicates the back clipping plane and the z value 1 indicates the front clipping plane.

➢ The algorithm:
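The algorithm listing does not appear above; a minimal sketch of the per-pixel depth test, assuming the convention stated above that depths are normalized to [0, 1] with 1 at the front clipping plane (so a larger stored depth means a closer surface). The surfaces are represented here as pre-computed lists of (x, y, depth, color) pixel samples purely for illustration:

```python
WIDTH, HEIGHT = 4, 3

# Initialise: depth buffer to the back clipping plane (0), frame buffer to background
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [["background"] * WIDTH for _ in range(HEIGHT)]

# Each "surface" is given as pre-computed pixel samples (x, y, depth, color)
surfaces = [
    [(1, 1, 0.40, "red"), (2, 1, 0.40, "red")],    # farther polygon
    [(2, 1, 0.75, "blue"), (3, 1, 0.75, "blue")],  # closer polygon, overlaps at (2, 1)
]

for surface in surfaces:                 # surfaces may be processed in any order
    for x, y, z, color in surface:
        if z > depth_buffer[y][x]:       # closer than what is stored so far
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color

print(frame_buffer[1])                   # ['background', 'red', 'blue', 'blue']
```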


17. (a) Explain the components of an image processing system with a suitable diagram. (9)
➢ An image processing system is the combination of the different elements involved in digital image processing.

➢ Digital image processing is the processing of an image by means of a digital computer.

➢ Digital image processing uses different computer algorithms to perform image processing on digital images.

It consists of the following components:

➢ Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and other features of the image and pass the result to the image processing hardware. This component includes the problem domain.
➢ Image Processing Hardware:
Image processing hardware is the dedicated hardware that is used to process the
instructions obtained from the image sensors. It passes the result to general purpose
computer.
➢ Computer:
The computer used in the image processing system is a general-purpose computer of the kind we use in our daily life.
➢ Image Processing Software:
Image processing software is the software that includes all the mechanisms and
algorithms that are used in image processing system.
➢ Mass Storage:
Mass storage stores the pixels of the images during the processing.
➢ Hard Copy Device:
Once the image is processed, it is stored in a hard copy device. This can be a pen drive or any other external storage device.
➢ Image Display:
It includes the monitor or display screen that displays the processed images.
➢ Network:
Network is the connection of all the above elements of the image processing system.


(b) Define the resolution of an image. Explain the spatial and gray level resolution of an image with an example. (5)
Resolution
Image resolution is typically described in PPI, which refers to how many pixels are displayed per inch of an image.
Higher resolutions mean that there are more pixels per inch (PPI), resulting in more pixel information and creating a high-quality, crisp image.
Images with lower resolutions have fewer pixels, and if those few pixels are too large (usually when an image is stretched), they can become visible.
Spatial resolution
• Spatial resolution is the smallest distinguishable detail in an image.
• It depends on sampling.

Gray Level Resolution


• Refers to the smallest distinguishable change in the gray level.
• Gray level resolution is highly subjective and it depends on the hardware utilized
to capture the image
• In short, gray level resolution refers to the number of gray levels, which is determined by the number of bits per pixel.
• The number of different colors in an image depends on the depth of color, i.e. the bits per pixel.
The mathematical relation between gray level resolution and bits per pixel can be given as

L = 2^k

In this equation L refers to the number of gray levels and k refers to bpp, the bits per pixel.
For example, consider an image with 8 bits per pixel (8 bpp).
Calculating its gray level resolution:

L = 2^8 = 256

This means its gray level resolution is 256; in other words, this image has 256 different shades of gray.
The more bits per pixel an image has, the higher its gray level resolution.


OR
18. (a) Define 4-adjacency, 8-adjacency and m-adjacency. Consider the image segment shown. (7)

4 2 3 2   (q)
3 3 1 3
2 3 2 2
(p) 2 1 2 3

Let V = {1,2} and compute the length of the shortest 4-, 8- and m-path between p and q.
If a particular path does not exist between these two points, explain why.

Three types of adjacency:

a) 4-adjacency: Two pixels p and q with values from V are 4-adjacent if

q is in the set N4(p).

b) 8-adjacency: Two pixels p and q with values from V are 8-adjacent if

q is in the set N8(p).

c) m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if
1. q is in N4(p), or
2. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
Example:
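The worked example does not appear above; a sketch that computes the three shortest path lengths for the image segment in the question by breadth-first search, with m-adjacency implemented exactly as defined (the (row, column) coordinate convention and the helper names are illustrative). For this data no 4-path exists, because p and q fall in different 4-connected components of the pixels with values in V; the shortest 8-path has length 4 and the shortest m-path has length 5:

```python
from collections import deque

V = {1, 2}
img = [[4, 2, 3, 2],
       [3, 3, 1, 3],
       [2, 3, 2, 2],
       [2, 1, 2, 3]]
p, q = (3, 0), (0, 3)                    # (row, col): p bottom-left, q top-right

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ND = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def in_img(r, c):
    return 0 <= r < len(img) and 0 <= c < len(img[0])

def neighbours(r, c, kind):
    """Pixels adjacent to (r, c) under 4-, 8- or m-adjacency, restricted to V."""
    result = []
    for dr, dc in (N4 if kind == "4" else N4 + ND):
        rr, cc = r + dr, c + dc
        if not (in_img(rr, cc) and img[rr][cc] in V):
            continue
        if kind == "m" and (dr, dc) in ND:
            # diagonal m-step allowed only if N4(p) and N4(q) share no pixel in V
            shared = [(r, c + dc), (r + dr, c)]
            if any(in_img(sr, sc) and img[sr][sc] in V for sr, sc in shared):
                continue
        result.append((rr, cc))
    return result

def shortest_path(kind):
    dist = {p: 0}
    queue = deque([p])
    while queue:
        cur = queue.popleft()
        if cur == q:
            return dist[cur]
        for nb in neighbours(*cur, kind):
            if nb not in dist:
                dist[nb] = dist[cur] + 1
                queue.append(nb)
    return None                          # no such path exists

for kind in ("4", "8", "m"):
    print(kind + "-path:", shortest_path(kind))   # 4-path: None, 8-path: 4, m-path: 5
```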


(b) Using any one application, explain the steps involved in image processing. (7)

Image processing is the process of transforming an image into a digital form and performing certain operations on it to get some useful information from it. The image processing system usually treats all images as 2-D signals.

Fundamental Image Processing Steps

➢ Image Acquisition
➢ Image acquisition is the first step in image processing. This step is
also known as preprocessing in image processing. It involves
retrieving the image from a source, usually a hardware-based source.
➢ Image Enhancement
➢ Image enhancement is the process of bringing out and highlighting
certain features of interest in an image that has been obscured. This
can involve changing the brightness, contrast, etc.
➢ Image Restoration
➢ Image restoration is the process of improving the appearance of an
image. However, unlike image enhancement, image restoration is
done using certain mathematical or probabilistic models.
➢ Color Image Processing
➢ Color image processing includes a number of color modeling
techniques in a digital domain. This step has gained prominence due


to the significant use of digital images over the internet.
➢ Wavelets and Multiresolution Processing
➢ Wavelets are used to represent images in various degrees of
resolution. The images are subdivided into wavelets or smaller regions
for data compression and for pyramidal representation.
➢ Compression
➢ Compression is a process used to reduce the storage required to save
an image or the bandwidth required to transmit it. This is done
particularly when the image is for use on the Internet.
➢ Morphological Processing
➢ Morphological processing is a set of operations that process images based on their shapes, typically extracting image components that are useful for representing and describing shape.
➢ Segmentation
➢ Segmentation is one of the most difficult steps of image processing. It
involves partitioning an image into its constituent parts or objects.
➢ Representation and Description
➢ After an image is segmented into regions in the segmentation
process, each region is represented and described in a form suitable
for further computer processing. Representation deals with the
image’s characteristics and regional properties. Description deals with
extracting quantitative information that helps differentiate one class
of objects from the other.
➢ Recognition
➢ Recognition assigns a label to an object based on its description.
➢ Knowledge Base:
Knowledge may be as simple as detailing regions of an image where the
information of interest is known to be located, thus limiting the search that
has to be conducted in seeking that information.
The knowledge base also can be quite complex, such as an interrelated list of
all major possible defects in a materials inspection problem or an image
database containing high-resolution satellite images of a region in connection
with change-detection applications.


19. (a) A 5x5 image patch is shown below. Compute the value of the marked pixel if it is smoothed by a 3x3 average filter and by a median filter. (4)
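The 5x5 patch does not appear above, so the required numerical answer cannot be reproduced here; a sketch of the two computations on a hypothetical patch, taking the marked pixel to be the centre:

```python
import numpy as np

# Hypothetical 5x5 patch (the original patch is not reproduced above)
patch = np.array([[1, 2, 3, 2, 1],
                  [4, 5, 6, 5, 4],
                  [7, 8, 9, 8, 7],
                  [4, 5, 6, 5, 4],
                  [1, 2, 3, 2, 1]])

window = patch[1:4, 1:4]                # 3x3 neighbourhood of the centre pixel

average = window.mean()                 # 3x3 average filter output
median = np.median(window)              # 3x3 median filter output
print(average, median)                  # 6.333... and 6.0
```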

(b) Define image segmentation and describe in detail the edge-based and region-based segmentation techniques. (10)
• A method of extracting and representing information from an image is to group pixels together into regions of similarity.
• This process is commonly called segmentation.


Region Splitting & Merging


The basic idea of region splitting is to break the image into a set of
disjoint regions which are coherent within themselves:
1. Initially take the image as a whole to be the area of interest.
2. Look at the area of interest and decide if all pixels contained in
the region satisfy some similarity constraint.
a) If TRUE then the area of interest corresponds to a region in the
image.
b) If FALSE split the area of interest (usually into four equal sub-
areas) and consider each of the sub-areas as the area of interest
in turn.
c) This process continues until no further splitting occurs.
In the worst case this happens when the areas are just one pixel in
size.
This is a divide and conquer or top down method.

• If only a splitting schedule is used then the final segmentation


would probably contain many neighboring regions that have
identical or similar properties.
• Thus, a merging process is used after each split which
compares adjacent regions and merges them if necessary.
• Algorithms of this nature are called split and merge algorithms.
Example:
• Let I denote the whole image shown in Fig(a).
• Not all the pixels in I are similar so the region is split as in Fig
(b).
• Assume that all pixels within regions I1, I2 & I3 are similar but
those in I4 are not.
• Therefore I4 is split next as in Fig(c).
• Now assume that all pixels within each region are similar with
respect to that region, and that after comparing the split
regions, regions I43 & I44 are found to be identical.
• These are thus merged together as in Fig (d).


Region growing
• The region-growing approach is the opposite of the split-and-merge approach.
• An initial set of small areas is iteratively merged according to similarity constraints.
• Start by choosing an arbitrary seed pixel and compare it with neighbouring pixels.
• The region is grown from the seed pixel by adding in neighbouring pixels that are similar, increasing the size of the region.
• When the growth of one region stops, we simply choose another seed pixel which does not yet belong to any region and start again.
• This whole process is continued until all pixels belong to some region.
• This is a bottom-up method.
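A minimal region-growing sketch (4-connected, with a simple intensity-difference similarity test; the image, the seed and the threshold are illustrative):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, threshold):
    """Grow a region from the seed, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most the threshold."""
    h, w = img.shape
    seed_value = img[seed]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= rr < h and 0 <= cc < w and (rr, cc) not in region
                    and abs(int(img[rr, cc]) - int(seed_value)) <= threshold):
                region.add((rr, cc))
                queue.append((rr, cc))
    return region

img = np.array([[10, 12, 50, 52],
                [11, 13, 51, 53],
                [10, 12, 50, 52]])
print(sorted(region_grow(img, (0, 0), 5)))   # the left (dark) block of pixels
```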

Edge-Based Method

Steps:
1. Smoothing:
• Suppress as much noise as possible, without destroying the true edges.
2. Enhancement:
• Apply a filter to enhance the quality of the edges in the image (sharpening).
3. Detection:
• Determine which edge pixels should be discarded as noise and which should be retained (usually,
thresholding provides the criterion used for detection).
4. Localization:
• Determine the exact location of an edge
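A minimal edge-detection sketch following these steps, using Gaussian smoothing, a Sobel gradient for enhancement, and a threshold for detection (the test image, filter parameters and threshold are illustrative):

```python
import numpy as np
from scipy import ndimage

# Illustrative image: a dark square on a bright background, with some noise
img = np.full((64, 64), 200.0)
img[20:44, 20:44] = 50.0
img += np.random.normal(0, 5, img.shape)

smoothed = ndimage.gaussian_filter(img, sigma=1)  # 1. smoothing: suppress noise
gx = ndimage.sobel(smoothed, axis=1)              # 2. enhancement: gradient in x
gy = ndimage.sobel(smoothed, axis=0)              #    and in y
magnitude = np.hypot(gx, gy)

edges = magnitude > 0.5 * magnitude.max()         # 3. detection: threshold the gradient
rows, cols = np.nonzero(edges)                    # 4. localization: edge pixel coordinates
print(len(rows), "edge pixels found")
```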


OR
20. (a) Distinguish between smoothing and sharpening filters in terms of functionality, types, applications and mask coefficients. (10)

1. Functionality
   Smoothing filter:
   ➢ Low-pass filter.
   ➢ Useful for reducing noise and removing small details.
   ➢ The elements of the mask must be positive.
   ➢ Mask elements sum to 1, assuming normalized weights (i.e., divide each weight by the sum of the weights).
   Sharpening filter:
   ➢ High-pass filter.
   ➢ Useful for emphasizing fine details.
   ➢ The elements of the mask contain both positive and negative weights.
   ➢ Mask elements sum to 0.

2. Types
   Smoothing filter:
   • Averaging (linear)
   • Gaussian (linear)
   • Median filtering (non-linear)
   Sharpening filter:
   • Unsharp masking
   • High-boost filtering
   • Gradient (1st derivative)
   • Laplacian (2nd derivative)

3. Applications
   Smoothing filter:
   ➢ These filters can effectively reduce noise.
   Sharpening filter:
   ➢ Sharpening filters are used to enhance the edges of objects and to adjust contrast and shade characteristics.
   ➢ In combination with a threshold they can be used as edge detectors.
   ➢ Sharpening (high-pass) filters let high frequencies pass and reduce the lower frequencies; they are extremely sensitive to shot noise.

4. Mask Coefficients
   Laplacian of Gaussian filters: the Laplacian of Gaussian (LoG) filter is a combination of a Laplacian and a Gaussian filter, whose characteristic is determined by the parameter σ and the kernel size, as shown in the mathematical expression of the kernel.
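A small numerical sketch contrasting typical mask coefficients: a 3x3 averaging mask whose elements sum to 1 versus a 3x3 Laplacian mask whose elements sum to 0 (the test image is illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

smoothing_mask = np.ones((3, 3)) / 9.0            # coefficients sum to 1
sharpening_mask = np.array([[ 0, -1,  0],
                            [-1,  4, -1],
                            [ 0, -1,  0]])        # Laplacian: coefficients sum to 0

img = np.zeros((7, 7))
img[2:5, 2:5] = 100.0                             # a bright square

smoothed = convolve2d(img, smoothing_mask, mode="same")
laplacian = convolve2d(img, sharpening_mask, mode="same")
sharpened = img + laplacian                       # add the detail back to sharpen

print(smoothing_mask.sum(), sharpening_mask.sum())   # approximately 1.0, and 0
print(smoothed.max(), sharpened.max())
```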


(b) Describe how an image is segmented using the split and merge technique in association with the region adjacency graph.
The basic idea of region splitting is to break the image into a set of disjoint regions which are coherent within themselves:
1. Initially take the image as a whole to be the area of interest.
2. Look at the area of interest and decide if all pixels contained in the region satisfy some similarity constraint.
a) If TRUE then the area of interest corresponds to a region in the image.
b) If FALSE split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn.
c) This process continues until no further splitting occurs.

In the worst case this happens when the areas are just one pixel in size.

This is a divide and conquer or top down method.

• If only a splitting schedule is used then the final segmentation would probably
contain many neighboring regions that have identical or similar properties.
• Thus, a merging process is used after each split which compares adjacent regions
and merges them if necessary.
• Algorithms of this nature are called split and merge algorithms.
Example:
• Let I denote the whole image shown in Fig(a).
• Not all the pixels in I are similar so the region is split as in Fig (b).
• Assume that all pixels within regions I1, I2 & I3 are similar but those in I4 are not.
• Therefore I4 is split next as in Fig(c).
• Now assume that all pixels within each region are similar with respect to that region,
and that after comparing the split regions, regions I43 & I44 are found to be
identical.
• These are thus merged together as in Fig (d).

**********
