DIP - Module 4

Module 4 of the Digital Image Processing course focuses on image segmentation techniques, including edge detection, line detection, and region-based segmentation methods. It discusses the importance of segmentation accuracy for effective image analysis and outlines various algorithms based on intensity discontinuities and similarity. Key techniques covered include point detection, edge linking, and the use of gradient operators such as Roberts, Prewitt, and Sobel for edge detection.

DIGITAL IMAGE PROCESSING

18CS741
[As per Choice Based Credit System (CBCS) scheme]
(Effective from the academic year 2018 -2019)
SEMESTER – VII
MODULE 4
Notes

Prepared By
Athmaranjan K
Associate Professor
Dept. of Information Science & Eng.
Srinivas Institute of Technology, Mangaluru

Athmaranjan K Dept of ISE


DIGITAL IMAGE PROCESSING 18CS741 Module 4

MODULE 4
SYLLABUS
Image Segmentation: Introduction, Detection of isolated points, line detection, Edge detection, Edge
linking, Region based segmentation- Region growing, split and merge technique, local processing, regional
processing, Hough transform, Segmentation using Threshold.
Textbook 1: Rafael C G., Woods R E. and Eddins S L, Digital Image Processing, Prentice Hall, 2nd edition,
2008
Textbook 1: Ch.10.1 to 10.3


IMAGE SEGMENTATION
INTRODUCTION:
So far we have seen image processing methods whose inputs and outputs are images; now we turn to
methods whose inputs are images but whose outputs are attributes extracted from those images.
Segmentation subdivides an image into its constituent regions or objects.
The level to which the subdivision is carried depends on the problem being solved: segmentation should stop
when the objects of interest have been isolated. Segmentation accuracy determines the eventual success or
failure of computerized analysis procedures. Hence, considerable care should be taken to improve the
probability of accurate segmentation.
In industrial inspection applications, at least some measure of control over the environment is possible at
times. The experienced image processing system designer invariably pays considerable attention to such
opportunities. In other applications, such as autonomous target acquisition, the system designer has no
control of the environment. Then the usual approach is to focus on selecting the types of sensors which could
enhance the objects of interest while diminishing the contribution of irrelevant image detail.
E.g.: the use of infrared imaging by the military to detect objects with strong heat signatures, such as
equipment and troops in motion.
• After successfully segmenting the image, the contours of objects can be extracted using edge detection and/or border-following techniques.
• The shape of objects can be described.
• Objects can be identified based on shape, texture, and colour.
• Image segmentation techniques are extensively used in similarity searches.
IMAGE SEGMENTATION
What is image segmentation?
Image segmentation is a commonly used technique in digital image processing and analysis to partition an
image into multiple parts or regions, based on the characteristics of the pixels in the image.
The goal of Image segmentation is to simplify and/or change the representation of an image into something
that is more meaningful and easier to analyze.
CLASSIFICATION OF IMAGE SEGMENTATION TECHNIQUES
Image Segmentation algorithms are based on one of two basic properties of intensity values:
1. Discontinuity: It is an approach to partition an image based on abrupt changes in intensity, such as
edges in an image

Here the assumption is that boundaries of regions are sufficiently different from each other and from
the background to allow boundary detection based on local intensity discontinuity.
The three basic types of gray level discontinuities in a digital image:
a) Points b) Lines and c) Edges
2. Similarity: It is an approach to partition an image into regions that are similar according to a set of
predefined criteria. Example; Thresholding, Region growing and Region splitting and merging
DETECTION OF DISCONTINUITIES
********Explain the techniques for detecting three basic types of gray level discontinuities in a digital
image.
The three basic types of gray level discontinuities in a digital image: Points, Lines and Edges. The most
common way to look for discontinuities is to run a mask through the image.
Let us consider a 3 x 3 mask whose coefficients w1, w2, …, w9 are as shown below:

The 3 x 3 mask is superimposed on a 3 x 3 region of an image (the z's are gray-level values), shown below:

Here the procedure (convolution) involves computing the sum of products of the mask coefficients (wi)
with the gray levels (zi) contained in the region encompassed by the mask. Applying this process at each
location gives the response of the mask at any point in the image:

R = w1z1 + w2z2 + … + w9z9 = Σ (i = 1 to 9) wi zi

Where zi is the gray level of the pixel associated with mask coefficient wi. As usual the response of the mask
is defined with respect to its center location.
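As a quick illustration (the function name and the averaging test mask are ours, not from the text), this sum-of-products response can be computed at every interior pixel as follows:

```python
import numpy as np

def mask_response(f, w):
    """Slide the 3x3 mask w over image f and return R = sum(w_i * z_i)
    at every interior pixel; border pixels are left at 0 for simplicity."""
    rows, cols = f.shape
    R = np.zeros((rows, cols))
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            # the z's are the gray levels in the region encompassed by the mask
            R[x, y] = np.sum(w * f[x-1:x+2, y-1:y+2])
    return R

f = np.arange(9, dtype=float).reshape(3, 3)
w = np.ones((3, 3)) / 9.0      # averaging mask, just to check the machinery
R = mask_response(f, w)
print(R[1, 1])                 # centre response is the mean of 0..8, i.e. 4.0
```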
POINT DETECTION:
*******Explain detection of isolated points
An isolated point is a point whose gray level is significantly different from its background in a homogeneous
area.
Example: Any dark spot in white background or white spot in dark background, in an image can be observed
as an isolated point.
To detect isolated points in an image we must select a threshold value, denoted T. We say that a point has
been detected at the location (x, y) on which the mask is centered if the absolute value of the response of
the mask at that point exceeds the threshold, that is, if |R| > T. Such points are labelled 1 in the output
image and all others are labelled 0, thus producing a binary image.

A sample mask used for point detection (its coefficients sum to zero, so the response is zero in areas of constant gray level) is:

-1 -1 -1
-1  8 -1
-1 -1 -1
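A minimal sketch of point detection with the standard 3 x 3 mask (the threshold and test image below are illustrative choices, not from the text):

```python
import numpy as np

# Point-detection mask: coefficients sum to zero, so the response is
# zero in areas of constant gray level.
MASK = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

def detect_points(f, T):
    """Return a binary image: 1 where |mask response| > T, else 0."""
    rows, cols = f.shape
    g = np.zeros((rows, cols), dtype=np.uint8)
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            R = np.sum(MASK * f[x-1:x+2, y-1:y+2])   # sum of products
            if abs(R) > T:
                g[x, y] = 1
    return g

img = np.full((5, 5), 10, dtype=int)
img[2, 2] = 200                 # isolated bright point in a homogeneous area
g = detect_points(img, T=500)   # only the isolated point is labelled 1
```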

LINE DETECTION:
****Explain how line can be detected in a digital image. Give masks for detection of horizontal line,
vertical line, left diagonal line and right diagonal line
A line is another kind of discontinuity in an image. Four masks are used, giving responses R1, R2, R3 and
R4 for lines in the horizontal, vertical, +45° and -45° directions respectively.

Each of these masks is applied to the given image (refer to the input region whose zi's are gray-level
values): the mask is superimposed, the convolution process is applied, and the response of the mask is
calculated as before:

Ri = Σ wi zi

Here R1 is the response of the horizontal line mask, R2 that of the vertical line mask, R3 that of the +45°
mask, and R4 that of the -45° mask.
Suppose that an image is filtered (individually) with the four masks. If, at a given point in the image,

|Rk| > |Rj| for all j ≠ k,

that point is said to be more likely associated with a line in the direction of mask k.
Example: if at a point in the image |R1| > |Rj| for j = 2, 3, 4,
that particular point is said to be more likely associated with a horizontal line.
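This decision rule can be sketched as follows, using the four standard line masks (the function name and test patch are illustrative):

```python
import numpy as np

# Line-detection masks for the four principal directions.
LINE_MASKS = {
    "horizontal": np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]]),
    "+45":        np.array([[-1, -1,  2],
                            [-1,  2, -1],
                            [ 2, -1, -1]]),
    "-45":        np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]]),
}

def line_direction(region):
    """Return the direction k whose mask gives the largest |R_k|."""
    responses = {k: abs(np.sum(m * region)) for k, m in LINE_MASKS.items()}
    return max(responses, key=responses.get)

# Bright horizontal line, one pixel thick, on a dark background.
patch = np.array([[0, 0, 0],
                  [9, 9, 9],
                  [0, 0, 0]])
print(line_direction(patch))        # the horizontal mask wins (|R| = 54)
```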
EDGE DETECTION:
Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of
feature detection and feature extraction, which aim at identifying points in a digital image at which the image
brightness changes sharply or more formally has discontinuities.

An edge is a set of connected pixels that lies on the boundary between two regions which differ in gray value.
Pixels on edge are known as edge points. Edges provide an outline of the object. An edge can be extracted by
computing the derivative of the image function. Magnitude of the derivative indicates the contrast of edge
and direction of the derivative vector indicates the edge orientation.
Edge models are classified according to their intensity profiles:
Step edge: involves an abrupt change in intensity, with the transition between the two intensity levels
occurring ideally over the distance of 1 pixel.

Ramp edge: involves a slow and gradual change in intensity, typical of blurred or noisy edges.

Roof edge: a model of a line through a region; the intensity rises and falls again, and the transition is not
instantaneous but occurs over a short distance determined by the thickness of the line.

Edge detection stages:


It can be summarized as:
1. For the given image filtering is applied in order to get a better input image for edge detection by
performing smoothing and noise reduction.
2. Detection of edge points: this is a local operation that extracts from an image all points that are
potential candidates to become edge points. This can be done by using first order derivative or second
order derivative. The magnitude of the first derivative can be used to detect the presence of an edge at
a point in an image. Similarly, the sign of the second derivative can be used to determine whether an
edge pixel lies on the dark or light side of an edge. Second derivatives produce two values for every
edge in an image. An imaginary straight line joining the extreme positive and negative values of the
second derivative would cross zero near the midpoint of the edge. This zero crossing property of the
second derivative is quite useful for locating the center of thick edges.
In this method we take the first derivative of the intensity across the image; at the points where the
derivative is maximum, the edge can be located.
The image gradient is used to find edge strength and direction at location (x, y) of the image, and is
defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude M(x, y) = √(Gx² + Gy²) gives the edge strength, and the direction of the gradient vector at
the point is α(x, y) = tan⁻¹(Gy / Gx).

3. Edge localization: this step determines the exact location of edges in the image; it also involves edge
thinning and edge linking, so that edges can be viewed in a sharp and connected manner.

FIRST ORDER EDGE DETECTION OPERATORS
Obtaining the gradient of an image requires computing the partial derivatives ∂f/∂x and ∂f/∂y at every pixel
location in the image. We know that

∂f/∂x = f(x + 1, y) - f(x, y)
∂f/∂y = f(x, y + 1) - f(x, y)

These two equations can be implemented for all pertinent values of x and y by filtering f(x, y) with the
one-dimensional masks [-1 1]ᵀ and [-1 1] respectively.


Local transitions among different image intensities constitute an edge; therefore the objective is to measure
the intensity gradient. First-order derivatives in image processing are implemented using the magnitude of
the gradient. For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional
column vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

Magnitude:

|∇f| = √(Gx² + Gy²) ≈ |Gx| + |Gy|

By using the above formula we calculate magnitude and direction of gradient. Thus the gradient of an image
measures the change in image function f(x, y) in X and Y directions.
• An edge can be extracted by computing the derivative of the image function, where the magnitude of the
derivative indicates the strength or contrast of the edge.
• The direction of the derivative vector indicates the edge orientation.
A 3 x 3 region of an image (the z’s are gray-level values) and masks used to compute the gradient at point labeled
z5 is shown below:

Three different types of first-order derivative operators are: the Roberts, Prewitt, and Sobel operators.


******With the help of Mask discuss Robert, Prewitt and Sobel edge detection methods.
ROBERT OPERATOR
Computation of the gradient of an image is based on obtaining partial derivatives δf/δx and δf/δy at every
pixel location. One of the simplest ways to implement a first order partial derivative at point z5 is by using
Roberts cross gradient operators:
Gx = (z9 - z5)
Gy = (z8 - z6)

|∇f| ≈ |Gx| + |Gy| = |z9 - z5| + |z8 - z6|


These derivatives can be implemented for the entire image by using the 2-D masks with diagonal
preference shown below:


Roberts operator masks:

Gx:        Gy:
-1  0       0 -1
 0  1       1  0
PREWITT OPERATOR
The above masks of size 2 x 2 are difficult to implement as they do not have a clear center. So an approach
using Prewitt operator masks of size 3 x 3 is taken instead; for the given image region:
Gx = (z7 +z8 + z9) – (z1 +z2 + z3)

Gy = (z3 +z6 + z9) – (z1 +z4 + z7)

• Here the difference between the first and third rows of the 3 x 3 image region approximates the
derivative in the x direction, and
• the difference between the third and first columns approximates the derivative in the y direction.
Thus the prewitt operator provides us two masks one for detecting edges in the horizontal direction and
another for detecting edges in a vertical direction.
The masks shown below are called the Prewitt operator:

Gx:              Gy:
-1 -1 -1         -1  0  1
 0  0  0         -1  0  1
 1  1  1         -1  0  1
SOBEL OPERATOR
It is a first-order derivative estimator in which we can specify whether the edge detector is sensitive to
horizontal edges, vertical edges, or both. It provides both a differentiating and a smoothing effect, and it
also relies on central differences.
A weight value of 2 is used to achieve some smoothing by giving more importance to the centre point.

Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)

Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)


The Sobel operator 3 x 3 masks can be written as:

Gx:              Gy:
-1 -2 -1         -1  0  1
 0  0  0         -2  0  2
 1  2  1         -1  0  1

By using these masks the gradient of image intensity at each pixel within the image is calculated. The
operator finds the direction of the largest increase from light to dark and the rate of change in that direction.
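A small sketch of the Sobel gradient, loop-based for clarity and using the |Gx| + |Gy| magnitude approximation (the function name and the step-edge test image are illustrative):

```python
import numpy as np

# Sobel masks (the weight 2 on the centre row/column adds smoothing).
SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])   # Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
SOBEL_Y = SOBEL_X.T                  # Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)

def sobel_gradient(f):
    """Gradient magnitude using the |Gx| + |Gy| approximation."""
    rows, cols = f.shape
    mag = np.zeros((rows, cols))
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            region = f[x-1:x+2, y-1:y+2]
            gx = np.sum(SOBEL_X * region)
            gy = np.sum(SOBEL_Y * region)
            mag[x, y] = abs(gx) + abs(gy)
    return mag

# Vertical step edge: left half dark (0), right half light (100).
img = np.zeros((5, 6))
img[:, 3:] = 100
mag = sobel_gradient(img)   # responds strongly along the column boundary
```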
PREWITT AND SOBEL MASK FOR DETECTING DIAGONAL EDGES
Explain gradient operators with respect to prewitt and sobel mask for detecting diagonal edges.
The two additional Prewitt and Sobel masks for detecting discontinuities in the diagonal directions are shown
below:

Prewitt +45°:        Prewitt -45°:
 0  1  1             -1 -1  0
-1  0  1             -1  0  1
-1 -1  0              0  1  1

Sobel +45°:          Sobel -45°:
 0  1  2             -2 -1  0
-1  0  1             -1  0  1
-2 -1  0              0  1  2

EDGE LINKING AND BOUNDARY DETECTION


The edge detection techniques discussed so far yield pixels lying only on edges. In practice, the set of pixels
produced by edge detection algorithms seldom defines a boundary completely, because of noise, breaks in
the boundary, etc. Therefore, edge detection algorithms are typically followed by linking and other boundary
detection procedures, designed to assemble edge pixels into meaningful boundaries.
Edge linking is used to group the edge points that are detected using edge detection algorithms such as first
order and second order derivative operators. There are two methods used for edge linking.
1. Local Processing (Local Edge Linker)
2. Global Edge Linker (Using Hough Transform)
LOCAL PROCESSING
**********Explain local processing in edge linking.
In local processing method edge points are grouped based on some similarity criteria. The steps to be
followed in local processing are:
Steps:
1. Detect the edges (edge points) using edge detection algorithm.
2. Analyze the characteristics of edge points with neighborhood (3 x 3 or 5 x 5) pixels and group into an
edge if the pre-defined criteria for similarity are met.
The two principal properties used for establishing pre-defined criteria for similarity are:
a) The strength (Magnitude) of the response of the gradient operator used to produce the edge pixel.
b) The direction of the gradient vector.
Let us consider an edge point at (x, y) and another edge point at (x0, y0) in its neighborhood; they can be
grouped to form an edge (the two edge points can be linked) only if

|M(x, y) - M(x0, y0)| ≤ E, where E is a non-negative magnitude threshold,

and |α(x, y) - α(x0, y0)| ≤ A, where A is a non-negative angle threshold.

The direction of the edge at (x, y) is perpendicular to the direction of the gradient vector at that point.
The magnitude of the gradient is M(x, y) = √(Gx² + Gy²).
An edge point in the predefined neighborhood of (x, y) is linked to the pixel at (x, y) if both the magnitude
and direction criteria are satisfied. This process is repeated at every location in the image. A record must be
kept of linked points as the centre of the neighbourhood is moved from pixel to pixel, and a different
intensity value is assigned to each set of linked edge pixels.

GLOBAL PROCESSING VIA THE HOUGH TRANSFORM
Edge points are linked by first determining whether they lie on a curve of specified shape. Given n points in
an image, suppose that we want to find subsets of these points that lie on straight lines. One possible solution
is first to find all lines determined by every pair of points, and then find all subsets of points that are close to
particular lines. This approach involves finding n(n - 1)/2 ≈ n² lines and then performing n · n(n - 1)/2 ≈ n³
comparisons of every point to all lines. This is a computationally prohibitive task in all but the most trivial
applications.

HOUGH TRANSFORM
***************Explain Hough Transform in edge linking OR
****Describe the procedure for detecting lines using Hough Transform

The Hough transform is a feature extraction method used to detect simple shapes such as lines, circles and
objects in an image using the concept of parameter space. Hough transform is used to connect the disjoined
edges in image.
A simple shape is one that can be represented by only few parameters. For example;
 A line can be represented by two parameters such as slope and intercept and
 A circle has three parameters such as coordinates of the center (x, y) and radius (r)
Hough transform does an excellent job in finding such shapes in an image.

Working Principle:
Consider any point (xi, yi) in the xy-plane; the general equation of a straight line in slope-intercept form is

yi = a·xi + b

Infinitely many lines pass through (xi, yi), but they all satisfy this equation for varying values of a and b.
Writing the equation as b = -xi·a + yi, each point of the xy-plane maps to a single line in the ab (parameter)
plane. In fact, all the points on one line in the xy-plane have lines in parameter (Hough) space that intersect
at a common point (a', b').
Let us consider a line in xy plane which contains two points (xi, yi) and (xj, yj) as shown below:


When we convert this line to parameter (Hough) space we get two lines intersecting at point (a’, b’) unless
these two lines are parallel where a’ is the slope and b’ is the intercept of the line containing both (xi, yi) and
(xj, yj) in the xy plane.

Thus edge points (x1, y1), (x2, y2), (x3, y3), … lie on the same line only if their corresponding lines in
parameter space intersect at one common point. This is what the Hough transform does: for each edge point
we draw the line in parameter space and then find the points of intersection (if any). The intersection point
gives us the parameters (slope and intercept) of the line.
Drawback:
A practical difficulty with this approach, however, is that a (the slope of the line) approaches infinity as the
line approaches the vertical direction. One way around this difficulty is to use the normal representation of a
line:

x cos θ + y sin θ = ρ

where θ is the angle that the normal to the line makes with the x-axis and ρ is the perpendicular distance
from the origin to the line. Each point (xi, yi) now maps to a sinusoidal curve in the ρθ-plane, and collinear
points yield curves that intersect at a common (ρ, θ).
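A minimal sketch of the ρ-θ voting scheme (the accumulator resolution, image size, and the diagonal test line are illustrative choices): each point votes along its sinusoid, and the brightest accumulator cell gives the dominant line.

```python
import numpy as np

def hough_lines(points, rho_res=1.0, theta_steps=180, size=64):
    """Vote in (rho, theta) space; return (rho, theta) of the brightest cell.
    Each point votes along its sinusoid rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.deg2rad(np.arange(theta_steps) - 90)   # -90 .. +89 degrees
    diag = size * np.sqrt(2)                           # largest possible |rho|
    n_rho = int(2 * diag / rho_res) + 1
    acc = np.zeros((n_rho, theta_steps), dtype=int)    # the accumulator cells
    for x, y in points:
        for t, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            acc[int(round((rho + diag) / rho_res)), t] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r * rho_res - diag, np.rad2deg(thetas[t])

# Collinear points on the diagonal y = x all vote at theta = -45, rho = 0.
pts = [(i, i) for i in range(64)]
rho, theta = hough_lines(pts)
```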

THRESHOLDING
Thresholding is the simplest method of image segmentation, where we change the pixels of an image to make
the image easier to analyze. In thresholding, we convert an image from colour or grayscale into a binary
image.
Thresholding is carried out with the assumption that the range of intensity levels covered by objects of
interest is different from the background.
Steps involved in thresholding:
1. A threshold T is selected.
2. Any point (x, y) in the image at which f(x, y) > T is called an object point; otherwise it is a background
point.
3. The segmented image is denoted by g(x, y), where

g(x, y) = 1 if f(x, y) > T
g(x, y) = 0 if f(x, y) ≤ T
Suppose that the gray-level histogram corresponds to an image f(x, y) composed of light objects on a dark
background, in such a way that object and background pixels have gray levels grouped into two dominant
modes.

For example, suppose an image contains two types of light objects on a dark background; the histogram
corresponding to this image is characterized by three dominant modes. Using two thresholds T1 < T2,
multilevel thresholding classifies a point (x, y):

• as belonging to one object class if T1 < f(x, y) ≤ T2,

• to another object class if f(x, y) > T2,

• and to the background class if f(x, y) ≤ T1.

Thus thresholding is a very important technique for image segmentation. It provides uniform regions based
on the threshold value T; the key parameter of the thresholding process is the choice of the threshold value.
Discuss image segmentation using Thresholding in detail.
Thresholding is carried out with the assumption that the range of intensity levels covered by objects of
interest is different from the background.
Steps involved in thresholding:
1. A threshold T is selected.
2. Any point (x, y) in the image at which f(x, y) > T is called an object point; otherwise it is a background
point.
3. The segmented image is denoted by g(x, y), where g(x, y) = 1 if f(x, y) > T and g(x, y) = 0 otherwise.
Types of Thresholding
1. Global Thresholding: the threshold operation depends only on the gray-scale value f(x, y), and T is a
constant.

2. Local Thresholding: the threshold operation depends on both the gray-scale value f(x, y) and a local
neighbourhood property p(x, y).
3. Dynamic or Adaptive Thresholding: the threshold additionally depends on the spatial coordinates x and y.

Global Thresholding:
The simplest of all thresholding techniques is to partition the image histogram by using a single global
threshold T, as shown below.

Here the gray-level histogram corresponds to an image f(x, y) composed of light objects on a dark
background, such that object and background pixels have gray levels grouped into two dominant modes.
The following algorithm can be used to obtain the global threshold value T automatically:
1. Select an initial estimate for T. (This value should be greater than the minimum and less than the
maximum intensity level in the image; the average intensity of the image is a good initial choice.)
2. Segment the image using T. This will produce two groups of pixels: G1, consisting of all pixels with
gray-level values > T, and G2, consisting of pixels with values ≤ T.
3. Compute the average gray-level values μ1 and μ2 for the pixels in regions G1 and G2.
4. Compute a new threshold value: T = (μ1 + μ2) / 2.
5. Repeat steps 2 through 4 until the difference in T in successive iterations is smaller than a predefined
parameter T0.
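The five steps above can be sketched directly (the bimodal test image and function name are illustrative):

```python
import numpy as np

def basic_global_threshold(f, T0=0.5):
    """Iterative global threshold: start at the mean, split into two groups,
    reset T to the midpoint of the group means, repeat until T stabilises."""
    T = f.mean()                       # step 1: initial estimate
    while True:
        g1 = f[f > T]                  # step 2: candidate object pixels
        g2 = f[f <= T]                 #         candidate background pixels
        T_new = 0.5 * (g1.mean() + g2.mean())   # steps 3-4
        if abs(T_new - T) < T0:        # step 5: convergence test
            return T_new
        T = T_new

# Bimodal image: dark background around 20, light object around 200.
img = np.concatenate([np.full(900, 20.0), np.full(100, 200.0)])
T = basic_global_threshold(img)
print(T)    # converges midway between the two mode means, at 110.0
```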
Dynamic or Adaptive Thresholding
One approach to reduce the effect of non-uniform illumination in image segmentation is to divide the original
image into sub-images and then utilize a different threshold to segment each sub image. Key issues in this
approach are how to sub divide the image and how to estimate the threshold for each resulting sub image.

Because the threshold used for each pixel depends on the location of the pixel in terms of the sub-images,
this type of thresholding is called adaptive. In the adaptive thresholding method a threshold value is
calculated separately for each pixel using statistics such as the mean, median, or variance of its
neighbourhood. This way we get different thresholds for different image regions and thus overcome the
problem of varying illumination.
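A minimal sketch of per-pixel adaptive thresholding using the neighbourhood mean (the window size, offset parameter, and test image are illustrative choices):

```python
import numpy as np

def adaptive_threshold(f, n=3, c=0.0):
    """Per-pixel threshold: the mean of the n x n neighbourhood minus an
    offset c. Pixels brighter than their local threshold are labelled 1."""
    rows, cols = f.shape
    pad = n // 2
    fp = np.pad(f.astype(float), pad, mode="edge")   # replicate the borders
    g = np.zeros((rows, cols), dtype=np.uint8)
    for x in range(rows):
        for y in range(cols):
            local_mean = fp[x:x+n, y:y+n].mean()
            g[x, y] = 1 if f[x, y] > local_mean - c else 0
    return g

img = np.full((5, 5), 10.0)
img[2, 2] = 100.0                 # one pixel much brighter than its surround
g = adaptive_threshold(img)       # only that pixel exceeds its local mean
```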
REGION BASED SEGMENTATION
These are segmentation techniques that are based on finding the regions directly.
Basic Formulation:
Let R represent the entire spatial region occupied by an image. We may view image segmentation as a
process that partitions R into n sub-regions R1, R2, R3, …, Rn such that:

(a) ∪ (i = 1 to n) Ri = R
(b) Ri is a connected set, for i = 1, 2, …, n
(c) Ri ∩ Rj = Ø for all i and j, i ≠ j
(d) Q(Ri) = TRUE for i = 1, 2, …, n
(e) Q(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj

Here Q(Rk) is a logical predicate defined over the points in set Rk, and Ø is the null set. The symbols ∪ and
∩ represent set union and intersection, respectively. Two regions Ri and Rj are said to be adjacent if their
union forms a connected set.
• Condition (a) indicates that the segmentation must be complete; that is, every pixel must be in a
region.
• Condition (b) requires that points in a region be connected in some predefined sense (e.g., the points
must be 4- or 8-connected).
• Condition (c) indicates that the regions must be disjoint.
• Condition (d) deals with the properties that must be satisfied by the pixels in a segmented region, for
example, Q(Ri) = TRUE if all pixels in Ri have the same intensity level.
• Finally, condition (e) indicates that two adjacent regions Ri and Rj must be different in the sense of
predicate Q.
Thus Region-based segmentation approaches are based on partitioning an image into regions that are similar
according to a set of predefined criteria.

REGION GROWING
**********Discuss region growing image segmentation technique with example
Region growing is a procedure that groups pixels or sub-regions into larger regions based on predefined
criteria. The basic approach is to start with a set of "seed" points and from these grow regions by appending
to each seed those neighboring pixels that have properties similar to the seed (such as specific ranges of gray
level or color). Selecting a set of one or more starting points can often be based on the nature of the problem.

Procedure for Region Growing:


1. Choose one or more seed (starting) pixels based on the nature of the problem.
2. Check the neighboring pixels and add them to the region, if they are similar to the seed (satisfies the
predefined similarity condition).
3. Repeat step 2 for each newly added pixel. Thus the regions are grown from the seed points to
adjacent points depending on the threshold or criteria that we use. A threshold such as the absolute
gray-level difference between a pixel and the seed being less than some value can be combined with
a connectivity criterion such as 4-connectivity or 8-connectivity.
4. Region growth should stop when no more pixels satisfy the criteria for inclusion in the region.
The homogeneity predicate (Threshold value for similarity criteria) can be based on any characteristic of the
regions in the image such as: Average intensity, Variance, Color, Texture, Motion, Shape, Size etc.
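The procedure above can be sketched as a breadth-first growth from each seed, using the predicate |f(p) - f(seed)| ≤ T (the function name is illustrative). With T = 0 and 4-connectivity, each region collects exactly the connected pixels equal to its seed value:

```python
from collections import deque
import numpy as np

def region_grow(f, seeds, T, conn4=True):
    """Breadth-first region growing: a 4- (or 8-) connected neighbour joins a
    region if its absolute gray-level difference from the seed is <= T."""
    rows, cols = f.shape
    labels = np.zeros((rows, cols), dtype=int)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if not conn4:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for label, (sx, sy) in enumerate(seeds, start=1):
        if labels[sx, sy]:
            continue                     # seed already swallowed by a region
        labels[sx, sy] = label
        q = deque([(sx, sy)])
        while q:
            x, y = q.popleft()
            for dx, dy in offsets:
                nx, ny = x + dx, y + dy
                if (0 <= nx < rows and 0 <= ny < cols and not labels[nx, ny]
                        and abs(int(f[nx, ny]) - int(f[sx, sy])) <= T):
                    labels[nx, ny] = label
                    q.append((nx, ny))
    return labels

# A 5 x 5 image with seeds of values 1, 3, 5 and 9, T = 0, 4-connectivity.
f = np.array([[1, 1, 9, 9, 9],
              [1, 1, 9, 9, 9],
              [5, 1, 1, 9, 9],
              [5, 5, 5, 3, 9],
              [3, 3, 3, 3, 3]])
labels = region_grow(f, seeds=[(0, 0), (4, 0), (2, 0), (0, 2)], T=0)
```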

Advantages:
1. Region growing methods can correctly separate regions that have the properties we define.
2. Region growing methods can provide good segmentation results for original images that have clear
edges.
3. The concept is simple: we only need a small number of seed points to represent the property we
want, and then grow the region.
4. We can choose multiple criteria at the same time.
5. It performs well with respect to noise, which means its result has a good shape match.

Disadvantages:
1. The computation is time consuming.
2. This method may not distinguish the shading of real images.
In conclusion, the region growing method performs well, with good shape matching and connectivity. Its
most serious problem is that it is time consuming.

Example:
Apply region growing technique to segment a 5 x 5 image shown below: Use 4 way connectivity and
consider the seed points as 1, 3, 5 and 9.

1 1 9 9 9
1 1 9 9 9
5 1 1 9 9
5 5 5 3 9
3 3 3 3 3

Answer:
It is given that seed points are: 1, 3, 5 and 9 and we have to consider the 4 way connectivity criteria for
region growing.
Let us consider the initial seed points as:
1 1 9 9 9
1 1 9 9 9
5 1 1 9 9
5 5 5 3 9
3 3 3 3 3
Consider the 4-way adjacent (similar pixel) points with respect to these seed points
1 1 9 9 9
1 1 9 9 9
5 1 1 9 9
5 5 5 3 9
3 3 3 3 3
Repeat the above step until no more pixel is part of the region

Final segmented image is:
1 1 9 9 9
1 1 9 9 9
5 1 1 9 9
5 5 5 3 9
3 3 3 3 3

REGION SPLITTING AND MERGING


***********Explain region splitting and merging algorithm with example image
Region splitting and merging is an image segmentation technique that subdivides an image initially into a
set of arbitrary, disjoint regions and then splits and/or merges the regions in an attempt to satisfy some
criteria (threshold conditions).
Procedure for Region splitting:
Let R represents the entire image region and select a predicate (threshold value for similarity criteria) P.
1. Start with entire image region R; if P(R) = FALSE then divide the given image into four disjoint
quadrants.
2. If the predicate P is FALSE for any quadrants then we sub-divide that quadrant into sub-quadrants
and so on.
3. If predicate P is TRUE for any region (quadrant) Ri then splitting is not required for that region Ri.
4. Stop the process when further splitting is not possible.
This splitting technique is represented in the form of quad tree in which the root node of the tree corresponds
to the entire image region R and each node corresponds to a sub-division.
Let us consider the region R, which is divided into 4 sub regions as: R1, R2, R3 and R4 and the sub region R4
is only further sub-divided into R41, R42, R43 and R44.

Partitioned image (left) and corresponding quadtree (right)


Region Merging:
The input image which is already split into different regions using region splitting techniques contains
adjacent regions with identical properties. So we need to merge only adjacent regions whose combined pixels
satisfy the predicate P (similarity threshold condition)
Procedure for Region merging
1. Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE.
2. Stop when no further merging is possible.
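The splitting half of the procedure can be sketched as a recursive quadtree subdivision, using the same predicate as the worked example below (max gray - min gray ≤ T); the merge step is omitted for brevity, and the function name and test image are illustrative:

```python
import numpy as np

def split(f, x, y, size, T, out):
    """Recursively quarter the square block at (x, y) until the predicate
    P: (max gray - min gray) <= T holds, then record (x, y, size)."""
    block = f[x:x+size, y:y+size]
    if block.max() - block.min() <= T or size == 1:
        out.append((x, y, size))           # P is TRUE: stop splitting here
    else:
        h = size // 2
        for dx, dy in ((0, 0), (0, h), (h, 0), (h, h)):
            split(f, x + dx, y + dy, h, T, out)
    return out

# One quadrant is mixed, so it alone is split down to single pixels.
f = np.array([[7, 7, 0, 0],
              [7, 7, 0, 0],
              [7, 7, 0, 7],
              [7, 7, 0, 0]])
blocks = split(f, 0, 0, 4, T=3, out=[])
print(len(blocks))     # 3 homogeneous quadrants + 4 single pixels = 7 blocks
```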
Example:
Apply region split and merge technique for the image given below; consider the threshold value as 3

Splitting:
Consider the entire image region and identify the maximum and minimum gray level values.
Maximum gray level value = 7
Minimum gray level value = 0
Consider the predicate P is the absolute difference of Max and Min gray level value ≤ Threshold value
|Max gray value –Min gray value| ≤ 3
|7 – 0| = 7 > 3
So the predicate P is FALSE; we need to split the image into 4 parts:

In region R1:
Max gray value = 7 and Min gray value = 4
P = |7- 4| ≤ 3 = TRUE ; So splitting of R1 is not required
In region R2:
Max gray value = 7 and Min gray value = 2
P = |7- 2| ≤ 3 = FALSE ; So we need to split the region R2 into 4 parts:

In region R3:
Max gray value = 3 and Min gray value =0
P = |3- 0| ≤ 3 = TRUE; so splitting of R3 is not required
In region R4:
Max gray value = 7 and Min gray value = 0
P = |7- 0| ≤ 3 = FALSE; so we need to split the region R4 into 4 parts:

Finally we check all remaining sub-regions; since all of them satisfy the predicate (max - min ≤ 3), no
further splitting is required.
Merging:
Check adjacent regions if they are falling within the threshold then merge.
Consider regions R1 and R21:
Max gray value = 7 and Min gray value = 4
P = |7 - 4| ≤ 3 = TRUE; so merge R1 with R21


Similarly, regions R1R21 and R22 can be merged into the R1R21R22 region;
R1R21R22 and R23 are merged into the R1R21R22R23 region;
R1R21R22R23 and R42 are merged into the R1R21R22R23R42 region;
R1R21R22R23R42 and R43 are merged into the R1R21R22R23R42R43 region.

Also, the regions R3 and R41 are merged into R3R41.


R3R41 and R24 are merged into the R3R41R24 region;
R3R41R24 and R44 are merged into the R3R41R24R44 region.
The final segmented image:

Apply region growing technique for the image shown below: consider the threshold value = 3 and the
seed point is 6, use 8 way connectivity.
5 6 6 7 6 7 6 6
6 7 6 7 5 5 4 7
6 6 4 4 3 2 5 6
5 4 5 4 2 3 4 6
0 3 2 3 3 2 4 7
0 0 0 0 2 2 5 6
1 1 0 1 0 3 4 4
1 0 1 0 2 3 5 4

Answer:
Let us start by considering the seed point 6 at the top left. Check the neighbouring pixels with respect to this
seed point using 8-way connectivity, and add those neighbouring pixels whose absolute gray-level
difference with respect to the seed point is ≤ 3.
Repeat the process for the newly added pixels. Thus the region is grown from the seed point to adjacent
points depending on the threshold criterion we use.
Region growth stops when no more pixels satisfy the criterion for inclusion in the region.

(Figures: the initial seed point; the 8-way connected points added with respect to the seed; and the newly
added pixels containing gray level 6 for which the threshold condition is satisfied.)

Final segmented image, partitioned into two regions.


QUESTION BANK
MODULE 4
1. What is image segmentation? Explain its applications. (5 marks)
2. Explain the techniques for detecting the three basic types of gray-level discontinuities in a digital image. (10 marks)
3. Explain detection of isolated points. (5 marks)
4. Explain how a line can be detected in a digital image; give masks for detection of horizontal, vertical, left-diagonal and right-diagonal lines. (8 marks)
5. With the help of masks, discuss the Roberts, Prewitt and Sobel edge detection methods. (8 marks)
6. Explain local processing in edge linking. (8 marks)
7. Explain the Hough transform in edge linking. (10 marks)
8. Describe the procedure for detecting lines using the Hough transform. (8 marks)
9. Discuss image segmentation using thresholding in detail. (10 marks)
10. Discuss the region growing image segmentation technique with an example. (10 marks)
11. Explain the region splitting and merging algorithm with an example image. (10 marks)