M5 CST304 Ktunotes - in

Module 5 covers image enhancement techniques in the spatial domain, including gray level transformations, histogram equalization, and spatial filtering methods. It also discusses image segmentation approaches, such as similarity and discontinuity methods, to identify objects and boundaries within images. Key concepts include intensity transformations, smoothing and sharpening filters, and the use of derivatives for edge detection.

Module 5
Module - 5 (Image Enhancement in Spatial Domain and Image Segmentation)

Basic gray level transformation functions - Log transformations, Power-Law transformations, Contrast stretching. Histogram equalization. Basics of spatial filtering - Smoothing spatial filters - Linear and nonlinear filters, and Sharpening spatial filters - Gradient and Laplacian. Fundamentals of Image Segmentation. Thresholding - Basics of Intensity Thresholding and Global Thresholding. Region based Approach - Region Growing, Region Splitting and Merging. Edge Detection - Edge Operators - Sobel and Prewitt.
Spatial Domain vs. Transform Domain

• Spatial domain
  The image plane itself; techniques operate directly on the intensity values of the pixels.

• Transform domain
  Techniques operate on transform coefficients (e.g., Fourier coefficients) rather than directly on the intensity values of the image plane.

Spatial Domain Process

g(x, y) = T[f(x, y)]

f(x, y): input image
g(x, y): output image
T: an operator on f defined over a neighborhood of the point (x, y)

Spatial Domain Process
Intensity transformation function
s  T (r )


Contrast Stretching
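Contrast stretching expands a narrow range of input intensities to cover the full available range. Below is a minimal NumPy sketch of simple min-max stretching; the piecewise-linear form shown on the slide's figure would instead use user-chosen breakpoints (r1, s1) and (r2, s2). The 8-bit output range and the function name are assumptions, not part of the original notes.

```python
import numpy as np

def contrast_stretch(img, L=256):
    """Min-max contrast stretching: map [min, max] of img onto [0, L-1]."""
    img = img.astype(np.float64)
    r_min, r_max = img.min(), img.max()
    if r_max == r_min:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    s = (img - r_min) * (L - 1) / (r_max - r_min)
    return s.astype(np.uint8)

# Example: a low-contrast image occupying only levels 100..150
low_contrast = np.random.randint(100, 151, size=(64, 64))
stretched = contrast_stretch(low_contrast)
print(low_contrast.min(), low_contrast.max())   # roughly 100 150
print(stretched.min(), stretched.max())         # 0 255
```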
Some Basic Intensity Transformation Functions

Image Negatives
s = (L - 1) - r
Example: Image Negatives

(Figure: image negative example; the negative makes a small lesion easier to see.)
Log Transformations
s = c·log(1 + r)
Example: Log Transformations

Power-Law (Gamma) Transformations


s  cr

7/28/2022
13
Example: Gamma Transformations

Example: Gamma Transformations
Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with exponents varying from approximately 1.8 to 2.5. Gamma correction pre-distorts the input with the inverse exponent, e.g.

s = r^(1/2.5)
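The basic point transformations above (negative, log, and power-law) are applied pixel by pixel. A minimal NumPy sketch follows, assuming an 8-bit input image; the particular choices of the constant c are illustrative, not prescribed by the notes.

```python
import numpy as np

L = 256  # number of gray levels for an 8-bit image

def negative(img):
    """s = (L - 1) - r"""
    return (L - 1) - img

def log_transform(img):
    """s = c * log(1 + r); c chosen here so the output also spans [0, L-1]."""
    c = (L - 1) / np.log(L)                          # illustrative choice of c
    return (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_transform(img, gamma):
    """s = c * r^gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    return ((L - 1) * np.power(r, gamma)).astype(np.uint8)

img = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)   # toy gradient image
crt_corrected = gamma_transform(img, 1 / 2.5)              # CRT-style gamma correction
```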
Example: Gamma Transformations

Histogram Processing

• Histogram Equalization

• Histogram Matching

• Local Histogram Processing

• Using Histogram Statistics for Image Enhancement

Histogram Processing

Histogram: h(rk) = nk
  rk is the k-th intensity value
  nk is the number of pixels in the image with intensity rk

Normalized histogram: p(rk) = nk / (MN)
  nk: the number of pixels with intensity rk in an image of size M × N
Histogram Processing
• Histogram
  The histogram of an image is a plot of the number of occurrences of each gray level in the image against the gray level values.

• Histogram Equalization
  Histogram equalization is a process that attempts to improve contrast by spreading out the gray levels in an image, using its probability density function, so that they are approximately evenly distributed across the available range.
Histogram Equalization – continuous
The intensity levels in an image may be viewed as random variables in the interval [0, L-1].
Let pr(r) and ps(s) denote the probability density functions (PDFs) of the random variables r and s.
The equalization transformation in the continuous case is

s = T(r) = (L - 1) ∫₀ʳ pr(w) dw
Histogram Equalization – discrete
For an image of size M × N with L gray levels, the discrete equalization transformation is

sk = T(rk) = (L - 1) Σ (j = 0..k) pr(rj) = ((L - 1)/(MN)) Σ (j = 0..k) nj
Histogram Equalization - conditions

s = T(r), 0 ≤ r ≤ L - 1

a. T(r) is a strictly monotonically increasing function in the interval 0 ≤ r ≤ L - 1;
b. 0 ≤ T(r) ≤ L - 1 for 0 ≤ r ≤ L - 1.
Histogram Equalization - Problem
• Perform histogram equalization for the following 5×5 image (L = 8 gray levels, MN = 25):

  4 4 4 4 4
  3 4 5 4 3
  3 5 5 5 3
  3 4 5 4 3
  4 4 4 4 4

Original histogram: level 3 → 6 pixels, level 4 → 14 pixels, level 5 → 5 pixels (all other levels 0).

Grey Level | nk | Cumulative (C) | sk = (C/(M×N))×(L-1) | Rounded sk
0          | 0  | 0              | (0/25) × 7 = 0        | 0
1          | 0  | 0              | (0/25) × 7 = 0        | 0
2          | 0  | 0              | (0/25) × 7 = 0        | 0
3          | 6  | 6              | (6/25) × 7 = 1.68     | 2
4          | 14 | 20             | (20/25) × 7 = 5.6     | 6
5          | 5  | 25             | (25/25) × 7 = 7       | 7
6          | 0  | 25             | (25/25) × 7 = 7       | 7
7          | 0  | 25             | (25/25) × 7 = 7       | 7

Equalized image (map 3 → 2, 4 → 6, 5 → 7):

  6 6 6 6 6
  2 6 7 6 2
  2 7 7 7 2
  2 6 7 6 2
  6 6 6 6 6

Equalized histogram: level 2 → 6 pixels, level 6 → 14 pixels, level 7 → 5 pixels.
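The worked example above can be reproduced directly in code. A minimal NumPy sketch of discrete equalization, sk = round((L-1) · cumulative(nk)/(MN)); the function name is an assumption and the test image is the 5×5 array from the slide.

```python
import numpy as np

def histogram_equalize(img, L=8):
    """Discrete histogram equalization: s_k = round((L-1) * CDF(r_k))."""
    hist = np.bincount(img.ravel(), minlength=L)      # n_k for each gray level
    cdf = np.cumsum(hist) / img.size                   # cumulative count / (M*N)
    mapping = np.round((L - 1) * cdf).astype(img.dtype)
    return mapping[img]                                # apply as a lookup table

img = np.array([[4, 4, 4, 4, 4],
                [3, 4, 5, 4, 3],
                [3, 5, 5, 5, 3],
                [3, 4, 5, 4, 3],
                [4, 4, 4, 4, 4]])
print(histogram_equalize(img, L=8))
# [[6 6 6 6 6]
#  [2 6 7 6 2]
#  [2 7 7 7 2]
#  [2 6 7 6 2]
#  [6 6 6 6 6]]
```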
Histogram Equalization - Problem
• Equalize the histogram of the following image:
Spatial Filtering / Mask Processing
Spatial Processing - Point processing (discussed already) and Mask processing

• A spatial filter consists of a neighborhood and a predefined operation.

• Linear spatial filtering of an image of size M×N with a filter of size m×n is given by the following expression (normally we deal with filters of odd size, the smallest being 3×3):

g(x, y) = Σ (s = -a..a) Σ (t = -b..b) w(s, t) · f(x + s, y + t)
Spatial Filtering

g(x, y) = Σ (s = -a..a) Σ (t = -b..b) w(s, t) · f(x + s, y + t)
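The double summation above is just a sliding-window weighted sum (correlation). A minimal pure-NumPy sketch for an odd-sized mask follows; zero padding at the borders and the function name are assumptions (replicate padding is equally common).

```python
import numpy as np

def spatial_filter(f, w):
    """Linear spatial filtering: g(x,y) = sum_s sum_t w(s,t) * f(x+s, y+t)."""
    m, n = w.shape                        # mask size, assumed odd (m = 2a+1, n = 2b+1)
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode='constant')  # zero padding
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])   # weighted sum over the neighborhood
    return g

f = np.random.randint(0, 256, size=(8, 8))
w = np.ones((3, 3)) / 9.0                 # 3x3 averaging mask as a quick test
print(spatial_filter(f, w).shape)         # (8, 8)
```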
Smoothing Spatial Filters

• Smoothing filters are used for blurring and for noise reduction.

• Blurring is used for removal of small details and bridging of small gaps in lines or curves.

• Smoothing spatial filters include linear filters and nonlinear filters.
Spatial Smoothing Linear Filters

The general implementation for filtering an M  N image


with a weighted averaging filter of size m  n is given
a b

  w(s, t ) f ( x  s, y  t )
g ( x, y )  s  a t  b
a b

  w(s, t )
s  a t  b

where m  2a  1, n  2b  1.

7/28/2022 39
Two Smoothing Averaging Filter Masks

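A common pair of 3×3 smoothing masks is the box filter (all coefficients equal) and a weighted average that emphasizes the center pixel; the specific weighted mask below is the standard textbook example and is stated here as an assumption about what the slide's figure shows.

```python
import numpy as np
from scipy.ndimage import correlate

box_mask = np.ones((3, 3)) / 9.0                     # simple average of the 3x3 neighborhood
weighted_mask = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]]) / 16.0         # center pixel weighted most heavily

# Both masks sum to 1, so overall image brightness is preserved.
img = np.random.randint(0, 256, size=(16, 16)).astype(np.float64)
smoothed_box = correlate(img, box_mask, mode='reflect')
smoothed_weighted = correlate(img, weighted_mask, mode='reflect')
```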
Example: Gross Representation of Objects

Order-statistic (Nonlinear) Filters

— Nonlinear

— Based on ordering (ranking) the pixels contained in the filter mask

— Replacing the value of the center pixel with the value determined by the ranking result

— E.g., median filter, max filter, min filter
Example: Use of Median Filtering for Noise Reduction

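The median filter is the classic order-statistic filter and is particularly effective against salt-and-pepper noise. A minimal NumPy sketch for a 3×3 window follows; replicate padding at the borders is an assumption.

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighborhood (order-statistic filter)."""
    padded = np.pad(img, 1, mode='edge')               # replicate border pixels
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            window = padded[x:x + 3, y:y + 3]           # 3x3 neighborhood
            out[x, y] = np.median(window)               # ranking: pick the middle value
    return out

# Salt-and-pepper example: a constant image with two corrupted pixels
img = np.full((5, 5), 100, dtype=np.uint8)
img[1, 1], img[3, 3] = 255, 0
print(median_filter_3x3(img))   # the impulses are removed; output is 100 everywhere
```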
Sharpening Spatial Filters

► Foundation

► Laplacian Operator

Sharpening Spatial Filters: Foundation

► The first-order derivative of a one-dimensional function f(x) is defined as the difference

∂f/∂x = f(x + 1) - f(x)

► The second-order derivative of f(x) is defined as the difference between successive first-order derivatives:

∂²f/∂x² = f(x + 1) + f(x - 1) - 2f(x)
Sharpening Spatial Filters: Laplace Operator

• The second-order isotropic derivative operator is the Laplacian; for a function (image) f(x, y):

∇²f = ∂²f/∂x² + ∂²f/∂y²

∂²f/∂x² = f(x + 1, y) + f(x - 1, y) - 2f(x, y)

∂²f/∂y² = f(x, y + 1) + f(x, y - 1) - 2f(x, y)

∇²f = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)
Sharpening Spatial Filters: Laplace Operator

∇²f = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)

(Figure: the corresponding Laplacian filter masks.)
Sharpening Spatial Filters: Laplace Operator
Image sharpening using the Laplacian filter:

g(x, y) = f(x, y) + c·[∇²f(x, y)]

where
f(x, y) is the input image,
g(x, y) is the sharpened image,
c = -1 if the Laplacian filter of Fig. 3.37(a) or (b) (negative center coefficient) is used,
c = 1 if either of the other two filters is used.

Note: Sharpening is obtained by adding the Laplacian image to the original image. If the definition has a negative center coefficient, it is a subtraction; otherwise it is an addition.
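A minimal NumPy sketch of Laplacian sharpening, g = f + c·∇²f, using the 4-neighbor Laplacian mask with a negative center coefficient (so c = -1); clipping the result back to [0, 255] for an 8-bit image is an assumption.

```python
import numpy as np
from scipy.ndimage import correlate

def laplacian_sharpen(img):
    """Sharpen with g(x,y) = f(x,y) + c * laplacian(x,y); c = -1 for this mask."""
    lap_mask = np.array([[0,  1, 0],
                         [1, -4, 1],
                         [0,  1, 0]], dtype=np.float64)    # negative center coefficient
    f = img.astype(np.float64)
    lap = correlate(f, lap_mask, mode='reflect')             # the Laplacian image
    g = f - lap                                               # c = -1: subtract the Laplacian
    return np.clip(g, 0, 255).astype(np.uint8)                # assumption: 8-bit output range

img = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8)
sharpened = laplacian_sharpen(img)
```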
Example: Combining Spatial Enhancement Methods

Goal: Enhance the image by sharpening it and by bringing out more of the skeletal detail.
Image Segmentation
• Image Segmentation is the process of dividing an image
into different regions based on the characteristics of
pixels to identify objects or boundaries to simplify an
image and more efficiently analyze it. Segmentation
impacts a number of domains, from the filmmaking
industry to the field of medicine.

Image Segmentation - Fundamentals
• Let R represent the entire spatial region occupied by an image. Image segmentation
is a process that partitions R into n sub-regions, R1, R2, …, Rn.

Approaches in Image Segmentation
• Similarity approach: This approach is based on
detecting similarity between image pixels to form a
segment, based on a threshold. ML algorithms like
clustering are based on this type of approach to
segment an image.
• Discontinuity approach: This approach relies on the
discontinuity of pixel intensity values of the image.
Line, Point, and Edge Detection techniques use this
type of approach for obtaining intermediate
segmentation results which can be later processed to
obtain the final segmented image.
Background

• First-order derivative

∂f/∂x = f'(x) = f(x + 1) - f(x)

• Second-order derivative

∂²f/∂x² = f(x + 1) + f(x - 1) - 2f(x)
Characteristics of First and Second Order Derivatives

• First-order derivatives generally produce thicker edges in an image.

• Second-order derivatives have a stronger response to fine detail, such as thin lines, isolated points, and noise.

• Second-order derivatives produce a double-edge response at ramp and step transitions in intensity.

• The sign of the second derivative can be used to determine whether a transition into an edge is from light to dark or dark to light.
Detection of Isolated Points
• The Laplacian

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
          = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)

• The response of a Laplacian point-detection mask at (x, y):

R = Σ (k = 1..9) wk·zk

g(x, y) = 1 if |R(x, y)| ≥ T
          0 otherwise
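A minimal sketch of isolated-point detection: correlate the image with the 8-neighbor Laplacian mask and threshold the absolute response. Choosing T as a fraction of the maximum response is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import correlate

def detect_points(img, frac=0.9):
    """g(x,y) = 1 where |R(x,y)| >= T, with R the response to a Laplacian point-detection mask."""
    mask = np.array([[1,  1, 1],
                     [1, -8, 1],
                     [1,  1, 1]], dtype=np.float64)        # 8-neighbor Laplacian mask
    R = correlate(img.astype(np.float64), mask, mode='reflect')
    T = frac * np.abs(R).max()                             # assumption: T = 90% of max |R|
    return (np.abs(R) >= T).astype(np.uint8)

# A single bright pixel on a flat background is flagged as an isolated point
img = np.zeros((9, 9)); img[4, 4] = 200
print(detect_points(img)[4, 4])   # 1
```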
Line Detection

• Second derivatives are used because they give a stronger response and produce thinner lines than first derivatives.

• The double-line effect of the second derivative must be handled properly.
Detecting Line in Specified Directions

• Let R1, R2, R3, and R4 denote the responses of the masks in Fig. 10.6. If, at a given
point in the image, |Rk|>|Rj|, for all j≠k, that point is said to be more likely
associated with a line in the direction of mask k.
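A minimal sketch of directional line detection as described above: apply the four line-detection masks and, at each pixel, keep the direction whose response magnitude is largest. The specific 3×3 masks below are the standard textbook ones and are stated as an assumption about Fig. 10.6.

```python
import numpy as np
from scipy.ndimage import correlate

# Standard 3x3 line-detection masks (assumed to match Fig. 10.6):
line_masks = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], dtype=np.float64),
    "+45 deg":    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], dtype=np.float64),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], dtype=np.float64),
    "-45 deg":    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], dtype=np.float64),
}

def dominant_line_direction(img):
    """For each pixel, return the index of the mask with the largest |response| (R1..R4)."""
    responses = np.stack([np.abs(correlate(img.astype(np.float64), m, mode='reflect'))
                          for m in line_masks.values()])
    return np.argmax(responses, axis=0)       # 0: horizontal, 1: +45, 2: vertical, 3: -45

img = np.zeros((9, 9)); img[4, :] = 100       # a one-pixel-wide horizontal line
print(dominant_line_direction(img)[4, 4])     # 0 (the horizontal mask responds most strongly)
```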
Edge Detection
• Edges are pixels where the brightness function changes
abruptly
• Edge models

Fundamental steps in edge detection
The four steps of edge detection

• (1) Smoothing: suppress as much noise as possible, without destroying the true
edges.

• (2) Enhancement: apply a filter to enhance the quality of the edges in the image
(sharpening).

• (3) Detection: determine which edge pixels should be discarded as noise and which
should be retained (usually, thresholding provides the criterion used for detection).

• (4) Localization: determine the exact location of an edge (sub-pixel resolution might
be required for some applications, that is, estimate the location of an edge to better
than the spacing between pixels). Edge thinning and linking are usually required in
this step.
Basic Edge Detection by Using First-Order Derivative

∇f = grad(f) = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude of ∇f:
M(x, y) = mag(∇f) = √(gx² + gy²)

The direction of ∇f:
α(x, y) = tan⁻¹(gy / gx)

The direction of the edge:
θ = α - 90°
Basic Edge Detection by Using First-Order Derivative

Edge normal: ∇f = grad(f) = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

Edge unit normal: ∇f / mag(∇f)

In practice, the magnitude is sometimes approximated by

mag(∇f) ≈ |∂f/∂x| + |∂f/∂y|   or   mag(∇f) ≈ max( |∂f/∂x|, |∂f/∂y| )
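A minimal sketch of first-order edge detection with the Prewitt and Sobel operators named in the syllabus: compute gx and gy by correlating with the corresponding 3×3 masks, form the gradient magnitude (or its |gx| + |gy| approximation), and threshold it. The threshold choice is an assumption.

```python
import numpy as np
from scipy.ndimage import correlate

SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)   # df/dx (row direction)
SOBEL_Y = SOBEL_X.T                                                           # df/dy (column direction)
PREWITT_X = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float64)
PREWITT_Y = PREWITT_X.T

def gradient_edges(img, kx=SOBEL_X, ky=SOBEL_Y, thresh_frac=0.25):
    """Edge map from gradient magnitude M = sqrt(gx^2 + gy^2), thresholded at a fraction of its max."""
    f = img.astype(np.float64)
    gx = correlate(f, kx, mode='reflect')
    gy = correlate(f, ky, mode='reflect')
    magnitude = np.sqrt(gx**2 + gy**2)           # or np.abs(gx) + np.abs(gy) as an approximation
    return (magnitude >= thresh_frac * magnitude.max()).astype(np.uint8)

img = np.zeros((16, 16)); img[:, 8:] = 255       # vertical step edge
edges_sobel = gradient_edges(img)                 # Sobel masks (default)
edges_prewitt = gradient_edges(img, PREWITT_X, PREWITT_Y)
```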
Thresholding for Segmentation
• Thresholding is a very popular segmentation
technique, used for separating an object from its
background.
• The process of thresholding involves comparing each pixel value of the image (pixel intensity) to a specified threshold. This divides the pixels of the input image into two groups:
• Pixels having intensity values lower than the threshold.
• Pixels having intensity values greater than the threshold.
• Specific values are assigned to these groups depending on the type of segmentation used.
Thresholding

1 if f ( x, y )  T (object point)
g ( x, y )  
0 if f ( x, y )  T (background point)
T : global thresholding

Multiple thresholding
a if f ( x, y)  T2

g ( x, y )   b if T1  f ( x, y )  T2
c if f ( x, y)  T1

7/28/2022 86
7/28/2022 87
Basic Global Thresholding

1. Select an initial estimate for the global threshold, T.

2. Segment the image using T. This produces two groups of pixels: G1, consisting of all pixels with intensity values > T, and G2, consisting of pixels with values ≤ T.

3. Compute the average intensity values m1 and m2 for the pixels in G1 and G2, respectively.

4. Compute a new threshold value:

   T = (1/2)(m1 + m2)

5. Repeat Steps 2 through 4 until the difference between values of T in successive iterations is smaller than a predefined parameter ΔT.
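A minimal NumPy sketch of the basic global thresholding iteration described above; using the image mean as the initial estimate and the particular ΔT value are assumptions.

```python
import numpy as np

def basic_global_threshold(img, delta_T=0.5):
    """Iterative global threshold: T <- (m1 + m2) / 2 until T changes by less than delta_T."""
    f = img.astype(np.float64)
    T = f.mean()                                 # step 1: initial estimate (assumption: image mean)
    while True:
        G1 = f[f > T]                            # step 2: pixels brighter than T
        G2 = f[f <= T]                           #         pixels at or below T
        m1 = G1.mean() if G1.size else T         # step 3: group means
        m2 = G2.mean() if G2.size else T
        T_new = 0.5 * (m1 + m2)                  # step 4: updated threshold
        if abs(T_new - T) < delta_T:             # step 5: convergence test
            return T_new
        T = T_new

# Bimodal toy image: dark background (30) with a bright square (200)
img = np.full((20, 20), 30.0); img[5:15, 5:15] = 200.0
T = basic_global_threshold(img)
segmented = (img > T).astype(np.uint8)
print(round(T, 1))    # about 115
```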
Using Image Smoothing to Improve Global Thresholding

Multiple Thresholds

Variable Thresholding: Image Partitioning

• Subdivide an image into nonoverlapping rectangles

• The rectangles are chosen small enough so that the illumination of each is
approximately uniform.

Region-Based Segmentation
Region Growing

• Region growing is a procedure that groups pixels or subregions into larger regions.

• The simplest of these approaches is pixel aggregation, which starts with a set of "seed" points and from these grows regions by appending to each seed point those neighboring pixels that have similar properties (such as gray level, texture, color, or shape).

• Region-growing-based techniques are better than edge-based techniques in noisy images where edges are difficult to detect.
Region-Based Segmentation

Example: Region Growing based on 8-connectivity

f(x, y): input image array
S(x, y): seed array containing 1s (seeds) and 0s
Q(x, y): predicate
Region Growing based on 8-connectivity
1. Find all connected components in S(x, y) and erode each connected component to one pixel; label all such pixels found as 1. All other pixels in S are labeled 0.
2. Form an image fQ such that, at each pair of coordinates (x, y), fQ(x, y) = 1 if the predicate Q is satisfied and fQ(x, y) = 0 otherwise.
3. Let g be an image formed by appending to each seed point in S all the 1-valued points in fQ that are 8-connected to that seed point.
4. Label each connected component in g with a different region label. This is the segmented image obtained by region growing.

Q = TRUE  if the absolute difference of the intensities between the seed and the pixel at (x, y) is ≤ T
    FALSE otherwise
Region Growing

4-connectivity

8-connectivity

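A minimal sketch of region growing from a single seed with the predicate |f(x, y) - f(seed)| ≤ T, using 8-connectivity and a simple breadth-first flood fill; the seed location, T, and the function name are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, T):
    """Grow a region from `seed`, adding 8-connected pixels whose intensity differs
    from the seed intensity by at most T (the predicate Q)."""
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=np.uint8)
    seed_val = float(img[seed])
    queue = deque([seed])
    region[seed] = 1
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):                        # visit the 8-connected neighbors
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < rows and 0 <= ny < cols and not region[nx, ny]:
                    if abs(float(img[nx, ny]) - seed_val) <= T:   # predicate Q
                        region[nx, ny] = 1
                        queue.append((nx, ny))
    return region

img = np.full((10, 10), 20, dtype=np.uint8); img[2:7, 2:7] = 200   # bright object on dark background
print(region_grow(img, seed=(4, 4), T=30).sum())                    # 25 pixels grown
```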
Region Splitting and Merging

R: entire image region    Ri: a sub-region of R    Q: predicate

1. For any region Ri, if Q(Ri) = FALSE, divide the region Ri into quadrants.
2. When no further splitting is possible, merge any adjacent regions Rj and Rk for which Q(Rj ∪ Rk) = TRUE.
3. Stop when no further merging is possible.
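A minimal sketch of the split step of region splitting and merging: recursively split a square, power-of-two-sized image into quadrants until the predicate Q holds in every region. Here Q is "intensity standard deviation below a limit", and the merge step is omitted for brevity; both choices are assumptions for illustration.

```python
import numpy as np

def Q(region, sigma_max=10.0):
    """Predicate: TRUE if the region is (nearly) uniform in intensity."""
    return region.std() <= sigma_max

def split(img, x, y, size, regions, min_size=2):
    """Quadtree splitting: subdivide any region where Q is FALSE."""
    block = img[x:x + size, y:y + size]
    if Q(block) or size <= min_size:
        regions.append((x, y, size))            # accept this region as-is
        return
    half = size // 2                             # otherwise split into four quadrants
    for dx in (0, half):
        for dy in (0, half):
            split(img, x + dx, y + dy, half, regions, min_size)

img = np.zeros((16, 16)); img[:8, :8] = 255      # one bright quadrant, the rest dark
regions = []
split(img, 0, 0, 16, regions)
print(len(regions))   # 4: the image splits once, and each quadrant then satisfies Q
```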
Image Segmentation – NPTEL Videos
Image Segmentation Fundamentals
https://www.youtube.com/watch?v=3qJej6wgezA&t=5s
Thresholding and Region growing
https://www.youtube.com/watch?v=vaS6rS8ZpkU&t=328s
https://www.youtube.com/watch?v=CD4KyEHfVx4
