IMAGE ENHANCEMENT IN
SPATIAL DOMAIN
2.1 INTRODUCTION
“The objective of enhancement is to process an image so that the result is more suitable for a human observer than the original image.”
Image enhancement approaches fall into two categories:
(a) Spatial Domain : Spatial means the plane of the image itself, so spatial domain methods are based on direct manipulation of the pixels of an image. The term spatial domain refers to the aggregate of pixels composing an image.
(b) Frequency Domain : Frequency domain processing techniques are based on manipulating the Fourier transform of an image.
Enhancement is a subjective matter. An image can be ‘good’ for one person and ‘bad’ for another. Thus, before applying any enhancement technique, we first use a trial and error method.
2.2 MATHEMATICAL ANALYSIS OF ENHANCEMENT IN SPATIAL DOMAIN
1. Spatial domain analysis is applied directly to the pixels of an image, so it can be written as
g(x, y) = T[f(x, y)] ...(2.1)
where f(x, y) = input image,
g(x, y) = output image,
T = operator on f(x, y) whose value depends on the neighbourhood of (x, y).
2. Gray-level transformation (Intensity Mapping) : This is the simplest form of spatial domain processing, arising when the neighbourhood is of size 1 × 1 (a single pixel). In this case, g(x, y) depends upon f(x, y) only, and the technique is called a gray-level (intensity) transformation, written as
s = T(r) ...(2.2)
where r = the gray level of f(x, y) and s = the gray level of g(x, y).
3. Contrast Stretching : By choosing a suitable transformation T in equation (2.2), the image we get after applying equation (2.2) will be of high contrast.
Fig. 2.1 Contrast stretching (output level s = T(r) versus input level r).
4. Thresholding : The result of equation (2.2) can also be a thresholding of the image: if a pixel has a value less than ‘m’, we get a dark pixel, and if it has a value higher than ‘m’, we get a light pixel.
Because enhancement at any point in an image depends only on the gray level at that point, this technique is also referred to as point processing. This is always done with the help of a mask, which will be discussed later.
2.3 BASIC GRAY LEVEL TRANSFORMATION
Basically, we have the following types of gray level transformations:
1. Point transformation.
2. Linear (negative and identity) transformation.
3. Logarithmic (log and inverse log) transformation.
4. Power-law transformation.
5. Piece-wise linear transformation.
2.3.1 Point Transformation
By equation (2.2),
s = T(r)
where T = the transform operator. When this is done using a 1 × 1 window, i.e. only a single pixel, that type of transformation is called a point transformation. Basically, the use of point transformation is the masking of a particular pixel.
2.3.2 Linear Transformation
The positive ‘or’ identity transformation does not play an important role in image processing, because the transformed image is the image itself.
The negative transformation plays a vital role in image processing. Suppose we have a total number of gray levels = L. Then the range of gray levels = [0, L − 1], so negative imaging can be presented by the equation
s = (L − 1) − r ...(2.3)
The application of negative imaging is to create a negative of an image. If a black area is the dominant area in an image, it is converted into a white area, and a very small but important white detail in the real picture is converted into a black spot. So now,
Black area with a small white spot → White area with a small black spot
The highlighted detail can then be analysed easily.
Fig. 2.2 Linear transformation (output gray level s versus input gray level r for the identity and negative transforms).
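The negative transformation of equation (2.3) can be sketched as follows (a minimal illustration assuming NumPy arrays; the function name is ours):

```python
import numpy as np

def negative(image, L=256):
    """Image negative s = (L - 1) - r  (eq. 2.3)."""
    return (L - 1) - image

# A black region (0) becomes white (255) and vice versa:
print(negative(np.array([0, 128, 255])))
```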
2.3.3 Log Transformation
A general equation for the log transformation is given by
s = c log(1 + r) ...(2.4)
where c is a constant and r ≥ 0.
Without any transform, gray levels map to themselves; with the log transform, a narrow range of low input gray levels is converted to a wide range of output levels, while a wide range of high input levels is converted to a narrow range of output levels. The transfer characteristic has been drawn below:
Fig. 2.3 Log transformation.
This shows that by the log transformation we get spreading/compression of gray levels (which can also be done by the power-law transformation). But the actual advantage of the log transformation is dynamic-range compression and expansion of an image.
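Equation (2.4) can be sketched as below. The scaling c = (L − 1)/log L is an assumption we add so that the maximum level maps back to the maximum level; the book leaves c as a free constant:

```python
import numpy as np

def log_transform(image, L=256):
    """Log transformation s = c * log(1 + r)  (eq. 2.4).
    c = (L-1)/log(L) (our choice) keeps the output in [0, L-1]."""
    c = (L - 1) / np.log(L)
    return c * np.log1p(image.astype(np.float64))
```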
2.3.4 Power-Law Transformation (n-th power and n-th root transformation)
The basic equation for the power-law transformation is
s = c r^γ ...(2.5)
The transfer function can be drawn as given below:
(i) for γ > 1 it behaves as an n-th power curve;
(ii) for γ < 1 it behaves as an n-th root curve.
Fig. 2.4 Power-law transformation (output gray level s versus input gray level r for various γ).
It is also called gamma correction, due to the use of gamma as the constant. Basically, γ correction plays the same role as the log transformation, but now a small variation in the value of γ covers a long (dynamic) range. The most popular application of gamma correction is in CRT displays. On the internet, a single picture on a website is viewed by different people having different CRTs. Many CRTs have automatic gamma correction and some have manual correction, which lets us adjust the image according to our choice and our monitor.
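A sketch of equation (2.5), assuming intensities are first normalised to [0, 1] and c = L − 1 (a common convention, not stated in the text):

```python
import numpy as np

def gamma_correct(image, gamma, L=256):
    """Power-law transformation s = c * r**gamma  (eq. 2.5),
    applied to intensities normalised to [0, 1], with c = L - 1."""
    r = image.astype(np.float64) / (L - 1)
    return (L - 1) * r ** gamma
```

With γ < 1 (the n-th root case) mid-range intensities are brightened; γ > 1 darkens them.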
2.3.5 Piece-wise Linear Transformation
The transformations discussed till now have one disadvantage: “the transformation is applied to the whole image. On the other hand, if we require to enhance a particular part of an image, we generally prefer a piece-wise linear transformation”.
Now we will discuss different types of piece-wise transformation functions and their applications.
(a) Contrast Stretching : Basically, due to different reasons such as:
(i) Poor illumination.
(ii) Lack of dynamic range in the imaging sensor.
(iii) Wrong setting of the lens aperture during image acquisition.
we get poor contrast; so, to get good contrast, we apply contrast stretching. The transfer function has been drawn below:
Fig. 2.5 Contrast stretching (output gray level s = T(r) versus input gray level r, with breakpoints (r1, s1) and (r2, s2); the line T(r) = r is the identity).
This figure clearly shows that a range of input gray levels (r1 – r2) has been stretched to (s1 – s2) by the piece-wise linear transform. If we don’t apply the piece-wise transform, we get a normal unstretched output image.
If (r1, s1) = (r_min, 0)
and (r2, s2) = (r_max, L − 1) ...(2.6)
the transform linearly stretches the full dynamic range; in the limiting case r1 = r2 it is also called threshold mapping (which gives us a digital binary image).
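The three-segment transfer function of Fig. 2.5 can be sketched as below (an illustrative implementation assuming 0 < r1 < r2 < L − 1; the breakpoint values in the test are our own):

```python
import numpy as np

def contrast_stretch(image, r1, s1, r2, s2, L=256):
    """Piece-wise linear contrast stretching through the segments
    (0,0)-(r1,s1), (r1,s1)-(r2,s2), (r2,s2)-(L-1,L-1)."""
    r = image.astype(np.float64)
    out = np.empty_like(r)
    lo = r <= r1
    hi = r >= r2
    mid = ~(lo | hi)
    out[lo] = s1 / r1 * r[lo]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2)
    return out
```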
(b) Gray Level Slicing : Sometimes it is desirable to enhance only a specific range of gray levels of the image rather than the whole image. Gray level slicing provides this facility in two possible ways:
(i) One approach is that we boost the desired range of gray levels and suppress the other gray levels.
Fig. 2.6(a) Gray level slicing (range of input boosted, remaining levels suppressed).
(ii) Another approach is that we boost the desired range of input, but also preserve the quality of the remaining part of the input.
Fig. 2.6(b) Gray level slicing (range A–B boosted, remaining input preserved).
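Both approaches can be captured in one sketch (an illustrative function of ours; `preserve` selects between the two approaches above):

```python
import numpy as np

def gray_level_slice(image, lo, hi, boost=255, preserve=True):
    """Gray level slicing: boost pixels whose level lies in [lo, hi].
    preserve=True keeps the rest unchanged (approach ii);
    preserve=False suppresses the rest to 0 (approach i)."""
    in_range = (image >= lo) & (image <= hi)
    background = image if preserve else np.zeros_like(image)
    return np.where(in_range, boost, background)
```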
(c) Bit-Plane Slicing : Sometimes it is also desired to know the contribution of each bit to an image. Suppose that we have 8 bits in each pixel. The image can then be divided into 8 bit planes: plane 0 will contain all the LSBs and plane ‘7’ will contain all the MSBs.
Fig. 2.7 Bit plane slicing (one 8-bit byte split into bit plane 7 (MSB) down to bit plane 0 (LSB)).
It should be clear that plane ‘0’ (all LSBs) plays a much less significant role, while plane ‘7’ plays a very significant role in the image.
We may follow these steps for bit slicing:
1. We have a total of 2^8 = 256 levels. So, by an appropriate threshold, divide the levels into 0–127 and 128–255.
2. Thus we get two segments; again apply a threshold and subdivide the levels.
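Extracting a single bit plane from an 8-bit image is a one-line shift-and-mask (a minimal sketch with NumPy):

```python
import numpy as np

def bit_plane(image, plane):
    """Extract bit plane `plane` (0 = LSB, 7 = MSB) of an 8-bit image."""
    return (image >> plane) & 1
```

For example, plane 7 of the value 255 is 1, while plane 7 of 127 is 0.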
2.4 HISTOGRAM PROCESSING
1. The histogram of a digital image with gray levels in the range [0, L − 1] is a discrete function given by
h(r_k) = n_k ...(2.8)
where r_k = the k-th gray level, and
n_k = number of pixels having gray level r_k.
2. Practically, we generally represent the normalized histogram, which is given by
Normalized value = (that value) / (total number of values)
Thus, we have
p(r_k) = n_k / n ...(2.9)
Loosely speaking,
p(r_k) = probability of occurrence of the k-th level.
Simply, the sum of all terms of the normalised histogram is always = 1.
3. Applications of the histogram are:
(a) In image enhancement (b) In image compression
(c) In image segmentation (d) In real-time image processing.
4. Now let us discuss the image enhancement using histogram.
Fig. 2.8 Histograms (p(r_k) plotted against r) of: (a) a dark image, (b) a light image, (c) a low contrast image, (d) a high contrast image.
As seen in the diagram,
(a) A dark image will have its histogram very near the origin.
(b) A light image will have its histogram far from the origin, near the maximum levels on the r scale.
(c) A low contrast image will have a very narrow histogram located at the middle.
(d) A high contrast image will contain a wide dynamic range of pixels; that is why its histogram will be uniformly distributed across the r scale.
So, in the general case, we have to apply software that will automatically convert each image histogram into the form of image (d).
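Computing the normalized histogram of equation (2.9) can be sketched as:

```python
import numpy as np

def normalized_histogram(image, L=256):
    """Normalized histogram p(r_k) = n_k / n  (eq. 2.9)."""
    counts = np.bincount(image.ravel().astype(np.int64), minlength=L)
    return counts / image.size
```

The returned array sums to 1, as stated above.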
2.5 HISTOGRAM EQUALIZATION
“In histogram equalization, we get the image that has equalized gray levels.”
By the equation,
s = T(r)
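A minimal sketch of histogram equalization, using the standard choice of T(r) as the scaled cumulative histogram (this mapping is the textbook-standard formulation, taken as an assumption here rather than from the derivation above):

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization: map level r_k to
    s_k = (L - 1) * sum_{j <= k} p(r_j)  (scaled cumulative histogram)."""
    counts = np.bincount(image.ravel().astype(np.int64), minlength=L)
    cdf = np.cumsum(counts) / image.size
    lut = np.round((L - 1) * cdf).astype(np.int64)
    return lut[image]
```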
2. In image processing, these operations are called functionally complete, because they can be combined to generate new functions.
3. Logical operations are always applied to binary images, whereas arithmetic operations apply to multivalued pixels.
4. Logic operators are also applied to the whole image pixel-by-pixel, like the arithmetic operations.
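The pixel-by-pixel logical operations on binary images can be sketched with NumPy boolean arrays (the sample images A and B are our own):

```python
import numpy as np

A = np.array([[1, 0], [1, 1]], dtype=bool)
B = np.array([[1, 1], [0, 1]], dtype=bool)

not_a   = ~A          # NOT(A)
a_and_b = A & B       # (A) AND (B)
a_or_b  = A | B       # (A) OR (B)
a_xor_b = A ^ B       # (A) XOR (B)
masked  = ~A & B      # [NOT(A)] AND (B)
```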
Fig. 2.10 Logical operations: NOT(A), (A) AND (B), (A) OR (B), (A) XOR (B), and [NOT(A)] AND (B).
2.8 TRANSFORMATIONS
(a) Translation : Suppose that we translate a point (X, Y, Z) to a new location by using a displacement (X0, Y0, Z0). Then the translation can be done using
X* = X + X0
Y* = Y + Y0 ...(2.19)
Z* = Z + Z0
where (X*, Y*, Z*) = the new coordinates after translation.
In matrix form,
| X* |   | 1  0  0  X0 | | X |
| Y* | = | 0  1  0  Y0 | | Y |   ...(2.20)
| Z* |   | 0  0  1  Z0 | | Z |
| 1  |   | 0  0  0  1  | | 1 |
So the matrix used for the translation of a point is
    | 1  0  0  X0 |
T = | 0  1  0  Y0 |   ...(2.21)
    | 0  0  1  Z0 |
    | 0  0  0  1  |
(b) Scaling : For scaling by factors Sx, Sy, Sz along the X, Y and Z axes, the transformation matrix is
    | Sx  0   0   0 |
S = | 0   Sy  0   0 |   ...(2.22)
    | 0   0   Sz  0 |
    | 0   0   0   1 |
(c) Rotation : To rotate any point, just follow these steps:
(i) Shift the point to the origin.
(ii) Apply the rotation.
(iii) Shift the point back to its original position.
For rotation about the Z-axis by angle ‘θ’, the transformation matrix is given by
     |  cos θ   sin θ   0   0 |
Rθ = | −sin θ   cos θ   0   0 |   ...(2.23)
     |  0       0       1   0 |
     |  0       0       0   1 |
For rotation about the X-axis by angle ‘α’, the transformation matrix is given by
     | 1   0       0       0 |
Rα = | 0   cos α   sin α   0 |   ...(2.24)
     | 0  −sin α   cos α   0 |
     | 0   0       0       1 |
For rotation about the Y-axis by angle ‘β’, the transformation matrix is given by
     | cos β   0  −sin β   0 |
Rβ = | 0       1   0       0 |   ...(2.25)
     | sin β   0   cos β   0 |
     | 0       0   0       1 |
(d) Perspective Transformations :
(i) A perspective transformation projects a 3-D point onto a plane.
(ii) This is a totally non-linear transformation because it involves division by a coordinate value.
The perspective transformation matrix is given by
    | 1  0   0     0 |
P = | 0  1   0     0 |   ...(2.26)
    | 0  0   1     0 |
    | 0  0  −1/λ   1 |
where λ is the focal length of the imaging system.
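The homogeneous-coordinate matrices above can be sketched as follows (illustrative helper names; a point is a 4-vector [X, Y, Z, 1]):

```python
import numpy as np

def translation(x0, y0, z0):
    """Homogeneous translation matrix (eq. 2.21)."""
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]
    return T

def rotation_z(theta):
    """Rotation about the Z axis by angle theta (eq. 2.23)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  s, 0, 0],
                     [-s, c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

p = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous point (X, Y, Z, 1)
print(translation(4, 5, 6) @ p)      # translated point
```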
2.9 BASICS OF SPATIAL FILTERING
The basics of spatial filtering are shown in Fig. 2.11. The process is simple: we move the filter mask from point to point over the image.
Fig. 2.11 Basics of spatial filtering: a 3 × 3 mask with coefficients w(−1,−1), …, w(0,0), …, w(1,1) positioned over the image section f(x−1, y−1), …, f(x, y), …, f(x+1, y+1) of the image f(x, y).
At each point of the image, the response of the filter is calculated using a predefined relationship. For “linear filtering, the response is given by the sum of products of the filter coefficients and the corresponding image pixels in the area directly under the mask”.
For a 3 × 3 filter mask, the response R of linear filtering at any point (x, y) is given by
R = w(−1,−1) f(x−1, y−1) + w(−1,0) f(x−1, y) + … + w(0,0) f(x, y) + … + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1) ...(2.27)
It is very easy to see that we multiply each mask coefficient by the respective image pixel and then add them all; simply put, a 3 × 3 mask will have a total of ‘9’ product terms.
In general, linear filtering of an image f of size M × N with a filter mask of size m × n is given by
R = g(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) f(x+s, y+t) ...(2.28)
where a = (m − 1)/2 and b = (n − 1)/2 are integers. It is better always to opt for an odd-size filter, because it has a clear center point that is easy to locate on the image.
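Equation (2.28) can be sketched directly as a double loop over the mask (a minimal illustration; zero padding at the borders is our choice, since the text does not specify border handling):

```python
import numpy as np

def linear_filter(f, w):
    """Linear filtering by eq. (2.28): sum over the odd-size mask w
    of w(s, t) * f(x+s, y+t), with zero padding at the borders."""
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))
    g = np.zeros(f.shape, dtype=np.float64)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            g += w[s + a, t + b] * fp[a + s : a + s + f.shape[0],
                                      b + t : b + t + f.shape[1]]
    return g
```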
2.9.1 Smoothing Spatial Filtering
“The basic use of the smoothing filter method is for blurring and for noise reduction.”
“Blurring is a process to fill up the small gaps in lines ‘or’ curves. It is also used as preprocessing for the removal of small objects before extracting large-object information from an image. In practical terms, blurring means to reduce the sharpness.”
2.9.1.1 Mean Smoothing Spatial Filtering
There may be a number of smoothing filtering methods (which will be discussed as per requirement). But at this stage, we will discuss mean smoothing spatial filters.
“In mean filtering, we replace the value of every pixel in an image by the average/arithmetic mean of the gray levels in the neighbourhood defined by the filter mask.”
The basic advantage of mean filtering is that very small irrelevant areas of the image are removed (as small areas have very small values, and after further averaging these areas the result will be even smaller, hence negligible). The only disadvantage is that by mean filtering we may lose the edges (sharp information) of an image.
The figure given below shows the basic 3 × 3 filter used for averaging:
        | 1  1  1 |
(1/9) × | 1  1  1 |
        | 1  1  1 |
Fig. 2.12 A 3 × 3 filter for averaging.
If we simply apply the masking formula, we get the response
R = (1/9) Σ (i = 1 to 9) z_i ...(2.29)
As we see, all the filter coefficients are equal, so it is also called a box filter.
“Sometimes it is necessary to protect some information of our choice during averaging; for that purpose, we use weighted averaging.” In this type of averaging/mean, we multiply the pixels of an image by different coefficients. The pixels having high importance are multiplied by coefficients of higher value than the others. A practical approach is to use a high-valued coefficient at the center and decrease the coefficient values as we go away from the center. For example, take the 3 × 3 mask given below:
         | 1  2  1 |
(1/16) × | 2  4  2 |
         | 1  2  1 |
Fig. 2.13 Another mask for (weighted) averaging.
Here the sum of all the coefficients of the mask is ‘16’, so for averaging we divide the total by 16. So for averaging, we may conclude with the general expression for filtering:
g(x, y) = [Σ_s Σ_t w(s, t) f(x+s, y+t)] / [Σ_s Σ_t w(s, t)] ...(2.30)
2.9.1.2 Non-Linear Smoothing Spatial Filtering
The method discussed above is linear smoothing spatial filtering. But in some cases we require the response of the filter to be based on the ordering ‘or’ ranking of the pixels. Such filters have real advantages in the presence of impulse noise. Here, at a glance, we will discuss some non-linear smoothing filters.
(a) Median Filter : Let S_xy represent the set of coordinates in a rectangular mask of size m × n centered at point (x, y). The given image is g(x, y). Then the median filter response is given by
R = median {g(s, t)}, (s, t) ∈ S_xy ...(2.31)
Thus each pixel value of g(x, y) is replaced by the median of its neighbourhood pixels under the mask.
(b) Maximum and Minimum Filtering : The median filter represents the 50th percentile of a ranked set of pixels. For some purposes, we may use a 100th percentile ‘or’ 0th percentile filter. The filter using the 100th percentile of the ranked set of pixels is called the max filter and is given by
R = max {g(s, t)}, (s, t) ∈ S_xy ...(2.32)
The filter using the 0th percentile of the ranked set of pixels is called the min filter and is given by
R = min {g(s, t)}, (s, t) ∈ S_xy ...(2.33)
The max filter is useful for finding the brightest points of an image and the min filter is useful for finding the darkest points.
(c) Mid-point Filtering : The mid-point filter simply computes the mid-point between the maximum and minimum values in the area covered by the filter/mask:
R = (1/2) [max {g(s, t)} + min {g(s, t)}], (s, t) ∈ S_xy ...(2.34)
This filter combines order statistics and averaging. It is very useful for enhancement if the image is corrupted with Gaussian ‘or’ uniform noise.
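The median filter of equation (2.31) can be sketched as below (edge-replicated padding is our own border choice; max/min filters follow by swapping `np.median` for `np.max`/`np.min`):

```python
import numpy as np

def median_filter(g, m=3, n=3):
    """Median filter (eq. 2.31): each pixel is replaced by the median
    of its m x n neighbourhood (edge-replicated padding)."""
    a, b = m // 2, n // 2
    gp = np.pad(g, ((a, a), (b, b)), mode="edge")
    windows = [gp[i:i + g.shape[0], j:j + g.shape[1]]
               for i in range(m) for j in range(n)]
    return np.median(np.stack(windows), axis=0)
```

A single impulse-noise pixel is removed entirely, illustrating the advantage claimed above.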
2.9.2 Sharpening Spatial Filters
Just as it is sometimes required to suppress irrelevant information by blurring, it is sometimes essential for image enhancement to sharpen small but relevant information. The process opposite to blurring is sharpening.
“Sharpening is a process to highlight fine detail in an image ‘or’ enhance detail that has been blurred, either in error ‘or’ as a natural effect of a particular method of image acquisition.”
One main difference between blurring and sharpening is that in blurring we take the average of neighbouring pixels, and averaging of pixels is equivalent to integration.
So Blurring = Average of neighbouring pixels = Integration of pixels.
So, logically, sharpening should be the derivative of pixels:
Sharpening = Derivative of pixels.
As we know, mathematically the derivative can be:
1. The first derivative, which can be represented by
∂f/∂x = f(x+1) − f(x)
∂f/∂y = f(y+1) − f(y) ...(2.35)
For an image f(x, y), we take the partial derivative along one axis at a time.
2. The second derivative, which can be represented by
∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
∂²f/∂y² = f(y+1) + f(y−1) − 2f(y) ...(2.36)
Now we will discuss the first derivative and second derivative in detail for image enhancement.
2.9.2.1 Use of First Derivative for Enhancement (The Gradient)
The basic properties that should be satisfied by the first derivative are:
1. The first derivative should be zero in areas of constant gray level (also called flat segments of the image).
2. It must be non-zero at a gray-level step ‘or’ ramp.
Both properties are easily understood mathematically. Practically, first derivatives in image processing are implemented using the magnitude of the gradient. For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector
∇f = [G_x, G_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ ...(2.37)
The magnitude of this vector is given by
mag(∇f) = [G_x² + G_y²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2) ...(2.38)
It should be clear that:
1. The gradient vector itself is a linear operator, but its magnitude, involving squaring and a square root, is a non-linear operator.
2. The magnitude of the gradient vector is commonly referred to simply as “the gradient”.
Here, for easy calculation, we approximate the gradient by
∇f ≈ |G_x| + |G_y| ...(2.39)
which serves our purpose sufficiently.
Now let us discuss the calculation of G_x and G_y using different filters. Here we write the filter in terms of z₁, …, z₉ instead of f(−1,−1), …, f(1,1), only for simplicity; both have the same meaning.
One of the simplest ways to implement a first-order derivative is to use the Roberts cross-gradient operators:
G_x = (z₉ − z₅)
and G_y = (z₈ − z₆) ...(2.40)
But it is not practical to use a 2 × 2 mask, as it does not have a center.
An approach using masks of size 3 × 3 is given by
G_x = (z₇ + z₈ + z₉) − (z₁ + z₂ + z₃)
and G_y = (z₃ + z₆ + z₉) − (z₁ + z₄ + z₇) ...(2.41)
In this formulation, the difference between the first and third rows of the 3 × 3 image region approximates the derivative in the x-direction, and the difference between the third and first columns approximates the derivative in the y-direction. These masks are called the Prewitt operators. A slight variation of these two equations uses a weight of 2 in the center coefficients:
G_x = (z₇ + 2z₈ + z₉) − (z₁ + 2z₂ + z₃)
and G_y = (z₃ + 2z₆ + z₉) − (z₁ + 2z₄ + z₇)
A weight value of ‘2’ is used to achieve some smoothing by giving more importance to the center point. To implement these derivatives we use the Sobel operators, as shown in Fig. 2.14.
One thing should be clear to the students: the sum of all the coefficients of each mask is zero.
Roberts:
G_x:  -1  0     G_y:   0 -1
       0  1            1  0
Prewitt:
G_x:  -1 -1 -1   G_y:  -1  0  1
       0  0  0         -1  0  1
       1  1  1         -1  0  1
Sobel:
G_x:  -1 -2 -1   G_y:  -1  0  1
       0  0  0         -2  0  2
       1  1  1         -1  0  1
Fig. 2.14 Masks used for the gradient.
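The Sobel masks of Fig. 2.14 combined with the gradient approximation of equation (2.39) can be sketched as follows (zero padding at the borders is our choice, so border pixels show an artificial response):

```python
import numpy as np

# Sobel masks (Fig. 2.14); gradient approximated by |Gx| + |Gy| (eq. 2.39).
SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
SOBEL_Y = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def gradient_magnitude(f):
    """Approximate gradient magnitude |Gx| + |Gy| using Sobel masks."""
    fp = np.pad(f.astype(np.float64), 1)
    gx = np.zeros(f.shape, dtype=np.float64)
    gy = np.zeros(f.shape, dtype=np.float64)
    M, N = f.shape
    for s in range(3):
        for t in range(3):
            win = fp[s:s + M, t:t + N]
            gx += SOBEL_X[s, t] * win
            gy += SOBEL_Y[s, t] * win
    return np.abs(gx) + np.abs(gy)
```

On a constant (flat) region the response is zero, and on a step edge it is non-zero, matching properties 1 and 2 above.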
2.9.2.2 Use of Second Derivative for Enhancement (the Laplacian)
The second derivative must satisfy the following properties:
1. It must be zero in constant gray-level areas (called flat segments of the image).
2. It must be non-zero at a gray-level step ‘or’ ramp.
3. It must be zero along a gray-level ramp of constant slope.
The Laplacian of a 2-D function f(x, y) is a second-order derivative defined as
∇²f = ∂²f/∂x² + ∂²f/∂y² ...(2.42)
For a 3 × 3 region, one of the two forms encountered most frequently in practice is
∇²f = 4z₅ − (z₂ + z₄ + z₆ + z₈) ...(2.43)
Another digital approximation, including the diagonal neighbours, is given by
∇²f = 8z₅ − (z₁ + z₂ + z₃ + z₄ + z₆ + z₇ + z₈ + z₉) ...(2.44)
Both equations are isotropic for rotation increments of 90° and 45° respectively.
Because the Laplacian is a derivative operator, its use highlights gray-level discontinuities in an image and de-emphasizes regions with slowly varying gray levels.
So by the Laplacian we get a response with strongly highlighted edges and totally suppressed slowly-varying areas. If we add the original image to the Laplacian of the image, the resulting image will have highlighted edges along with all the other information. Thus we use the Laplacian for image enhancement through the following expression:
g(x, y) = f(x, y) − ∇²f(x, y) if the center coefficient of the Laplacian mask is negative
g(x, y) = f(x, y) + ∇²f(x, y) if the center coefficient of the Laplacian mask is positive ...(2.45)
| 0  1  0 |      | 1  1  1 |
| 1 -4  1 |      | 1 -8  1 |
| 0  1  0 |      | 1  1  1 |

| 0 -1  0 |      | -1 -1 -1 |
| -1  4 -1 |     | -1  8 -1 |
| 0 -1  0 |      | -1 -1 -1 |
Fig. 2.15 Masks used to implement the digital Laplacian.
(Remember that the gradient and the Laplacian will be discussed again under the topic ‘edge detection’. The reader need not be confused: by sharpening we mean highlighting fine details of an image, which are also called its edges, so sharpening and edge detection are essentially the same operation.)
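Sharpening by equation (2.45), using the center-positive form of the Laplacian (eq. 2.43), can be sketched as (edge-replicated borders are our own assumption):

```python
import numpy as np

def laplacian_sharpen(f):
    """Sharpening by eq. (2.45): g = f + lap, where
    lap = 4*z5 - (z2 + z4 + z6 + z8)  (eq. 2.43, center positive)."""
    fp = np.pad(f.astype(np.float64), 1, mode="edge")
    M, N = f.shape
    lap = (4 * fp[1:M+1, 1:N+1]
           - fp[0:M, 1:N+1] - fp[2:M+2, 1:N+1]
           - fp[1:M+1, 0:N] - fp[1:M+1, 2:N+2])
    return f + lap
```

In flat regions the Laplacian is zero, so the image is returned unchanged there; near edges the response is added, highlighting them.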
SUMMARY
1. The objective of image enhancement is to process an image so that the result is more suitable for a human observer.
2. Image enhancement is categorized in two categories: spatial domain and frequency domain.
3. Spatial domain processing is based on direct manipulation of the pixels of an image; the spatial domain is the aggregate of pixels composing the image.
4. Spatial domain enhancement is a subjective analysis.
5. For enhancement purposes, we use different gray level transformation functions such as point, linear, log, power-law and piece-wise linear transformations.
6. For image enhancement, image segmentation and image compression, it is often better to represent an image by its histogram.
7. In a histogram, we plot the different possible gray level values against the number of pixels having those values.
8. In histogram equalization, we get an image that has equalized gray levels.
9. By histogram matching, we get an image with a desired histogram.
10. Enhancement of an image may also be achieved by different arithmetic and logical operations.
11. The basic purpose of spatial filtering is smoothing and sharpening of an image. Spatial filtering is done with the help of masks.
12. Smoothing is a process that removes noise and blurs the image. Smoothing is done with the help of mean filters.
13. Sharpening is a process that removes blurring of an image and sharpens its edges. Sharpening is done with the help of the Roberts cross-gradient, Prewitt and Sobel operators, which provide the first derivative of an image. For the second derivative, we use the Laplacian.
REVIEW QUESTIONS
1. What do you mean by image enhancement? What are the different methods of image enhancement?
2. What are the basic gray level transformations?
3. What do you mean by histogram processing?
4. What do you understand by:
(a) Histogram equalization
(b) Histogram matching
How do these processes enhance the image?
5. Write short notes on:
(a) Smoothing spatial filtering
(b) Sharpening spatial filtering.
IMAGE ENHANCEMENT IN FREQUENCY DOMAIN
One method of image enhancement was discussed in the previous chapter. Now we will discuss another method, called frequency domain image enhancement. In this method, enhancement is applied to the Fourier transform of the whole image.
So, before going into detail on image enhancement using the frequency domain, we first revise the Fourier transform of a function, because we will use the Fourier transform to convert an image from the spatial domain to the frequency domain.
3.1 ONE DIMENSIONAL FOURIER TRANSFORM AND ITS INVERSE
The Fourier transform of a single-variable continuous function f(x) is defined by the equation
F(u) = ∫ (−∞ to ∞) f(x) e^(−j2πux) dx ...(3.1)
The inverse Fourier transform for a given F(u) is given by
f(x) = ∫ (−∞ to ∞) F(u) e^(j2πux) du ...(3.2)
If the given function f(x) is a discrete function of one variable, x = 0, 1, 2, …, M−1, then the Fourier transform is called the discrete Fourier transform (DFT) and is given by
F(u) = (1/M) Σ (x = 0 to M−1) f(x) e^(−j2πux/M) ...(3.3)
for u = 0, 1, 2, …, M−1.
Similarly, the inverse discrete Fourier transform is given by
f(x) = Σ (u = 0 to M−1) F(u) e^(j2πux/M) ...(3.4)
for x = 0, 1, 2, …, M−1.
3.2 TWO DIMENSIONAL FOURIER TRANSFORM AND ITS INVERSE
If we have two variables u and v, then the Fourier transform is given by
F(u, v) = ∫∫ f(x, y) e^(−j2π(ux+vy)) dx dy ...(3.5)
and similarly its inverse Fourier transform is given by
f(x, y) = ∫∫ F(u, v) e^(j2π(ux+vy)) du dv ...(3.6)
The discrete Fourier transform (DFT) of f(x, y) of size M × N (two dimensional) is given by the equation
F(u, v) = (1/MN) Σ (x = 0 to M−1) Σ (y = 0 to N−1) f(x, y) e^(−j2π(ux/M + vy/N)) ...(3.7)
where u = 0, 1, 2, …, (M−1)
and v = 0, 1, 2, …, (N−1).
Similarly, the inverse Fourier transform is given by
f(x, y) = Σ (u = 0 to M−1) Σ (v = 0 to N−1) F(u, v) e^(j2π(ux/M + vy/N)) ...(3.8)
again for x = 0, 1, …, (M−1)
and y = 0, 1, …, (N−1).
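Equations (3.7) and (3.8) can be sketched with NumPy's FFT routines. Note the normalisation: NumPy puts the 1/(MN) factor on the inverse transform, whereas equation (3.7) puts it on the forward transform, hence the explicit rescaling:

```python
import numpy as np

def dft2(f):
    """2-D DFT with the 1/(MN) factor on the forward transform (eq. 3.7)."""
    M, N = f.shape
    return np.fft.fft2(f) / (M * N)

def idft2(F):
    """Inverse 2-D DFT (eq. 3.8), undoing the scaling of dft2."""
    M, N = F.shape
    return np.fft.ifft2(F) * (M * N)
```

With this convention, F(0, 0) equals the mean gray level of the image.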
3.3 BASICS OF FILTERING IN FREQUENCY DOMAIN
Filtering in the frequency domain is easy and straightforward. It consists of the following steps:
1. Multiply the input image by (−1)^(x+y) to center the transform. It should be clear to the students that, in the equation
F(u, v) = (1/MN) Σ (x = 0 to M−1) Σ (y = 0 to N−1) f(x, y) (−1)^(x+y) e^(−j2π(ux/M + vy/N)) ...(3.9)
f(x, y) has been multiplied by an exponential-type factor that shifts the transform, by the shifting property of the Fourier transform. (This is called preprocessing of the image.)
2. Now compute F(u, v), the DFT of the image from step 1.
3. Now, for enhancement of the image, we multiply the transform by a filter H(u, v), chosen according to our choice and requirement:
G(u, v) = H(u, v) · F(u, v) ...(3.10)
G(u, v) is then the transform of the enhanced image. Practically, we prefer a real function H(u, v), because then it will not introduce any phase shift in the image (if a function has an imaginary part and is multiplied by another function, there will be a phase shift in the second function). Such a filter is called a zero-phase-shift filter.
4. Now calculate the IDFT (inverse DFT) of the result of step 3:
filtered image = F⁻¹{G(u, v)} = g(x, y)
5. Now, to compensate for step 1, again multiply the image by (−1)^(x+y); this is called post-processing of the image.
Fig. 3.1 Basic steps for frequency domain enhancement: input image f(x, y) → preprocessing → Fourier transform → filter function H(u, v) → inverse Fourier transform → post-processing → enhanced image g(x, y).
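Steps 1–5 can be sketched end to end as follows (a minimal illustration; H is assumed to be a centered, real M × N transfer function):

```python
import numpy as np

def frequency_filter(f, H):
    """Steps 1-5 of Sec. 3.3: center with (-1)^(x+y), DFT,
    multiply by filter H(u, v), inverse DFT, un-center."""
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    center = (-1.0) ** (x + y)          # preprocessing (step 1)
    F = np.fft.fft2(f * center)         # step 2
    g = np.fft.ifft2(F * H).real        # steps 3-4
    return g * center                   # post-processing (step 5)
```

With H(u, v) = 1 everywhere (an all-pass filter), the image is returned unchanged, confirming the pipeline.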
3.4 BASIC FREQUENCY DOMAIN FILTERS
Now we will study the basic filters used for enhancement. Just like spatial filtering, frequency domain filtering also has two types:
(a) Smoothing frequency domain filtering, and
(b) Sharpening frequency domain filtering.
Now we will study them in detail.
3.4.1 Smoothing Frequency Domain Filtering
As discussed for the spatial domain, smoothing is a process that blurs the sharp edges ‘or’ transitions of images. In the frequency domain, smoothing reflects the same process: sharp edges ‘or’ transitions, after taking the Fourier transform, produce high frequency components. So “in the frequency domain, smoothing means suppressing the high frequency components of the image”. As we know, low-pass filters are used for removing high frequency components.
(a) Ideal Low Pass Filtering : A 2-D ideal low-pass filter (ILPF) is one whose transfer function satisfies the relation
H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0 ...(3.11)
where D0 is a specified non-negative quantity, and D(u, v) is the distance from the point (u, v) to the origin of the frequency plane; that is,
D(u, v) = (u² + v²)^(1/2) ...(3.12)
Figure 3.2 shows the plot of H(u, v) of an ideal low-pass filter.
Fig. 3.2 Ideal low-pass filter.
Thus, this low-pass filter will pass all frequency components inside the circle of radius D0, and all frequency components outside the circle are suppressed. D0 is called the cut-off frequency of the filter.
Practically, such a sharp cut-off is not possible. Due to its sharp cut-off, the ideal low-pass filter has a ringing problem. The solution to this problem is the Butterworth filter.
Butterworth low-pass filter : The transfer function of the Butterworth low-pass filter (BLPF) of order ‘n’ and with cut-off frequency locus at a distance D0 from the origin is defined by the relation
H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)] ...(3.13)
where D(u, v) = (u² + v²)^(1/2) (from equation 3.12).
The plots for the BLPF are shown in Fig. 3.3. A BLPF of order ‘1’ has no ringing problem, but higher-order BLPFs do show ringing.
Fig. 3.3 Butterworth low pass filter.
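The BLPF transfer function of equation (3.13) can be sketched on a centered frequency grid (the grid construction, with the origin at index (M//2, N//2), is our own convention to match the centered transform of Sec. 3.3):

```python
import numpy as np

def butterworth_lowpass(M, N, D0, n):
    """Centered BLPF H(u,v) = 1 / (1 + (D/D0)^(2n))  (eq. 3.13)."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    D = np.sqrt(U**2 + V**2)            # D(u, v), eq. 3.12
    return 1.0 / (1.0 + (D / D0) ** (2 * n))
```

At the origin H = 1, and at D(u, v) = D0 the response is exactly 0.5, for any order n.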
(b) Gaussian low-pass filter : The transfer function of the GLPF is given by
H(u, v) = e^(−D²(u,v)/2σ²) ...(3.14)
where D(u, v) = (u² + v²)^(1/2) (from equation 3.12)
and σ is the measure of the spread of the Gaussian curve.
If σ = D0, then
H(u, v) = e^(−D²(u,v)/2D0²) ...(3.15)
The basic property of the GLPF is that it has no ringing problem. The plots for the GLPF are drawn in Fig. 3.4; at D(u, v) = D0 the response has fallen to e^(−1/2) ≈ 0.607.
Fig. 3.4 Gaussian low pass filter.
3.4.2 Sharpening Frequency Domain Filters
As discussed previously, sharpening an image means enhancing its blurred information; it is the opposite of smoothing filtering. So, obviously, we can say that “sharpening means we allow the high frequency components of our interest to pass and suppress the low frequency components”.
So now, instead of using a low-pass filter, we use a high-pass filter for sharpening.
(a) Ideal high pass filter (IHPF) : A 2-D ideal high-pass filter is one whose transfer function satisfies the relation
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0 ...(3.16)
where D(u, v) = (u² + v²)^(1/2) (from equation 3.12)
and D0 = cut-off distance measured from the origin of the frequency plane.
Thus, the IHPF attenuates all frequency components inside D0 and passes all frequency components outside D0. As in the case of the ILPF, the IHPF is also practically not realizable. The plots are drawn in Fig. 3.5.
Fig. 3.5 Ideal high pass filter.
(b) Butterworth high pass filter (BHPF) : The transfer function of the BHPF of order ‘n’ and with cut-off frequency locus at a distance D0 from the origin is defined by the relation
H(u, v) = 1 / [1 + (D0/D(u, v))^(2n)] ...(3.17)
The plots are drawn in Fig. 3.6.
Fig. 3.6 Butterworth high pass filter.
(c) Gaussian High Pass Filtering : The transfer function of the GHPF with cut-off frequency locus at a distance D0 from the origin is given by
H(u, v) = 1 − e^(−D²(u,v)/2D0²) ...(3.18)
The plot is given in Fig. 3.7.
Fig. 3.7 Gaussian high pass filter.
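The GHPF of equation (3.18), the complement of the GLPF, can be sketched on the same kind of centered grid used for the low-pass case (the grid convention is our own):

```python
import numpy as np

def gaussian_highpass(M, N, D0):
    """Centered GHPF H(u,v) = 1 - exp(-D^2 / (2 D0^2))  (eq. 3.18)."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    D2 = U**2 + V**2                    # D^2(u, v)
    return 1.0 - np.exp(-D2 / (2.0 * D0**2))
```

The response is 0 at the origin (low frequencies suppressed) and approaches 1 far from it (high frequencies passed).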
3.5 HOMOMORPHIC FILTERING
Now we will discuss “a special method of enhancement in which we deal with the illumination and reflectance components specifically”. Let us start with the basic model of an image, according to which
f(x, y) = i(x, y) · r(x, y) ...(3.19)
Taking the Fourier transform of this product directly is not separable; in other words,
F{f(x, y)} ≠ F{i(x, y)} · F{r(x, y)} ...(3.20)
So a better mathematical way is to define
Z(x, y) = ln f(x, y) ...(3.21)
= ln [i(x, y) · r(x, y)]
Z(x, y) = ln i(x, y) + ln r(x, y) ...(3.22)
Now F{Z(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}
Z(u, v) = F_i(u, v) + F_r(u, v) ...(3.23)
where F_i(u, v) = F{ln i(x, y)}
and F_r(u, v) = F{ln r(x, y)}.
The second step is to multiply this Fourier transform of the image by a filter function H(u, v):
S(u, v) = Z(u, v) · H(u, v)
S(u, v) = H(u, v) · F_i(u, v) + H(u, v) · F_r(u, v)
The third step is to take the inverse Fourier transform:
s(x, y) = F⁻¹{S(u, v)}
= F⁻¹{H(u, v) F_i(u, v)} + F⁻¹{H(u, v) F_r(u, v)}
s(x, y) = i′(x, y) + r′(x, y) ...(3.24)
where i′(x, y) = F⁻¹{H(u, v) F_i(u, v)}
and r′(x, y) = F⁻¹{H(u, v) F_r(u, v)}.
The final step is to take the inverse of the logarithm applied at the beginning. So the enhanced image is
g(x, y) = e^(s(x, y)) = e^(i′(x, y)) · e^(r′(x, y))
g(x, y) = i0(x, y) · r0(x, y) ...(3.25)
where i0(x, y) = e^(i′(x, y))
and r0(x, y) = e^(r′(x, y)).
These are the illumination and reflectance components of the output image. This method is based on a special case of a class of systems known as homomorphic systems. All the steps are drawn below in a block diagram.
f(x, y) → ln → DFT → H(u, v) → (DFT)⁻¹ → exp → g(x, y)
Fig. 3.8 Homomorphic filtering for image enhancement.
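The block diagram of Fig. 3.8 can be sketched directly (a minimal illustration; H is assumed to be an uncentered M × N transfer function, and f must be strictly positive for the logarithm):

```python
import numpy as np

def homomorphic(f, H):
    """Homomorphic filtering (Fig. 3.8):
    ln -> DFT -> multiply by H(u, v) -> inverse DFT -> exp."""
    Z = np.fft.fft2(np.log(f))          # eqs. 3.21-3.23
    s = np.fft.ifft2(Z * H).real        # eq. 3.24
    return np.exp(s)                    # eq. 3.25
```

With an all-pass H(u, v) = 1 the pipeline returns the input image, since exp undoes ln exactly; in practice H is chosen to attenuate low frequencies (illumination) and boost high frequencies (reflectance).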
SUMMARY
1. The second method of image enhancement is frequency domain enhancement. In frequency domain enhancement, we apply the enhancement to the Fourier transform of the image.
2. The steps for image enhancement using the frequency domain can be defined as: pre-processing, Fourier transform of the image, applying the desired enhancement function, then taking the inverse Fourier transform of the result; if required, post-processing is applied to the output of the process.
3. As with spatial domain filtering, frequency domain filtering is also of two types: smoothing filtering and sharpening filtering.
4. As in the case of spatial filtering, in the frequency domain a smoothing filter also blurs the image. By blurring here we mean removing the high frequency components of an image, so smoothing in the frequency domain is achieved by low-pass filters.
5. By sharpening we again mean enhancing the blurred image. In the frequency domain it means allowing high frequency components to pass, so we use high-pass filters.
6. Homomorphic filtering is a special method of enhancement in the frequency domain. In homomorphic filtering, we deal with the illumination and reflectance components of an image.
REVIEW QUESTIONS
1. What are the fundamental steps of frequency domain enhancement?
2. Differentiate between spatial domain enhancement and frequency domain enhancement.
3. What are the basic smoothing frequency domain filters? Co-relate frequency domain filtering to spatial domain filtering.
4. What do you understand by blurring? How can it be removed?
5. What is homomorphic filtering?