
UNIT 2

Image enhancement

Image enhancement refers to the process of highlighting certain information in an image, as well as weakening or removing unnecessary information, according to specific needs. Examples include eliminating noise, revealing blurred details, and adjusting levels to highlight features of an image. Image enhancement techniques can be divided into two broad categories:

 Spatial domain — enhancement performed on the image plane itself, which divides an image into pixels according to spatial coordinates at a particular resolution. Spatial domain methods operate on the pixels directly.

 Frequency domain — enhancement obtained by applying the Fourier Transform to the spatial domain image and operating on its frequency components. In the frequency domain, pixels are modified in groups, and only indirectly.

This chapter discusses the image enhancement techniques implemented in the spatial domain.

Types of Spatial Domain Technique

Types of spatial domain operator:

Point operation (intensity transformation) - Point operations apply the same transformation to every pixel in a grayscale image. The new value depends only on the original pixel value and is independent of the pixel's location and its neighbors.

Point Operation

Point operations are often used to change the grayscale range and distribution. The concept of point operation is to map
every pixel onto a new image with a predefined transformation function.

g(x, y) = T(f(x, y))

 g (x, y) is the output image

 T is an operator of intensity transformation

 f (x, y) is the input image

Grey Level Transformation

The simplest image enhancement method is to use a 1 x 1 neighborhood size. It is a point operation. In this case, the
output pixel (‘s’) only depends on the input pixel (‘r’), and the point operation function can be simplified as follows:
s = T(r)

Where T is the point operator of a certain gray-level mapping relationship between the original image and the output
image.

 s, r: denote the gray levels of the output pixel and the input pixel, respectively.

Different transformation functions work for different scenarios.
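The mapping s = T(r) can be sketched in a few lines of Python. This is a minimal illustration, not from the original text: the function name `apply_point_op` is made up here, and the "image" is a plain nested list rather than a NumPy array.

```python
def apply_point_op(image, T):
    """Apply a point operation s = T(r) to every pixel independently."""
    return [[T(r) for r in row] for row in image]

img = [[0, 100, 200]]
print(apply_point_op(img, lambda r: r))        # identity: s = r -> [[0, 100, 200]]
print(apply_point_op(img, lambda r: 255 - r))  # negative: s = 255 - r -> [[255, 155, 55]]
```

Any of the transformations below (negative, logarithmic, power-law) can be passed in as `T`.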

1. Linear

Linear transformations include identity transformation and negative transformation. In identity transformation, the
input image is the same as the output image.

s=r

The negative transformation is:

s = L - 1 - r = 256 - 1 - r = 255 - r

L = number of gray levels in the image (256 for an 8-bit image), so L - 1 = 255 is the largest gray level. The negative transformation is suited to enhancing white or gray detail embedded in dark areas of an image, for example, analyzing the breast tissue in a digital mammogram.

The inverse logarithmic (exponential) transformation is:

s = 10^(c · r) - 1

Note:

 s, r: denote the gray levels of the output pixel and the input pixel.
 'c' is a constant; to map [0, 255] to [0, 255], c = log10(256) / 255.
 The base of the common logarithm is 10.

[Figure: gray-level transformation curves, showing the linear identity and negative transformations.]

Logarithmic Transformations

Logarithmic transformation is a point processing technique used in the spatial domain to enhance low intensity values in
an image while compressing high intensity values.

Logarithmic transformations can be categorized into two main types:

1. Logarithmic transformation

2. Inverse logarithmic transformation

The transformation follows this formula:

s = c · log(1 + r)
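A sketch of the logarithmic transformation in Python follows. The choice of c here, (L - 1) / ln(L), is one common convention (it maps the full input range [0, L-1] back onto [0, L-1]); the function name and nested-list image are illustrative assumptions.

```python
import math

def log_transform(image, L=256):
    """s = c * log(1 + r); c chosen so that r = L - 1 maps to s = L - 1."""
    c = (L - 1) / math.log(L)  # natural log; any fixed base only changes c
    return [[round(c * math.log(1 + r)) for r in row] for row in image]

# Low intensities are stretched apart, high intensities are compressed together.
print(log_transform([[0, 10, 100, 255]]))
```

Note how an input of 10 is pushed far up the output range while 100 and 255 end up relatively close together, which is exactly the "enhance low, compress high" behavior described above.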

Power-Law Transformations

Power-Law transformation (also called Gamma Correction) is a non-linear intensity transformation used to enhance
images by adjusting brightness and contrast. It uses a power function to map input intensity values to output.

s = c · r^γ

Where:

 r = input pixel intensity (normalized, 0 ≤ r ≤ 1)


 s= output pixel intensity
 c= scaling constant
 γ= gamma value (power factor)
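A minimal gamma-correction sketch, assuming 8-bit intensities that are normalized to [0, 1], transformed, then rescaled; the function name and clipping convention are assumptions made here for illustration.

```python
def gamma_correct(image, gamma, c=1.0, L=256):
    """s = c * r**gamma on intensities normalized to [0, 1], rescaled to [0, L-1]."""
    out = []
    for row in image:
        out.append([round(min(c * (r / (L - 1)) ** gamma, 1.0) * (L - 1))
                    for r in row])
    return out

img = [[0, 64, 128, 255]]
print(gamma_correct(img, 0.5))  # gamma < 1 brightens mid-tones
print(gamma_correct(img, 2.2))  # gamma > 1 darkens mid-tones
```

The endpoints 0 and 255 are fixed points of the transformation for any gamma; only the mid-tones move.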
Histograms
 A histogram is a graph that shows the frequency of something. A histogram usually has bars that represent how often each value occurs in the whole data set.
 A histogram has two axes: the x axis and the y axis.
 The x axis contains the event whose frequency you have to count.
 The y axis contains the frequency.
 The different heights of the bars show the different frequencies of occurrence of the data.

Example

Consider a class of programming students to whom you are teaching Python.

At the end of the semester you get the result shown in the table below. It is messy and does not show the overall performance of the class, so you make a histogram of the result, showing the overall frequency of occurrence of each grade. Here is how to do it.

Result sheet

Name    Grade
John    A
Jack    D
Carter  B
Tommy   A
Lisa    C+
Derek   A-
Tom     B+

Histogram of result sheet

Now you have to decide what goes on the x axis and what goes on the y axis.

One thing is certain: the y axis contains the frequency. So what goes on the x axis? The x axis contains the event whose frequency has to be counted; in this case, the grades.
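The grade histogram above can be built with a few lines of Python; `collections.Counter` does the frequency counting, and the ASCII bars stand in for the bar chart.

```python
from collections import Counter

# Grades from the result sheet above.
grades = ["A", "D", "B", "A", "C+", "A-", "B+"]

hist = Counter(grades)          # x axis: grade, y axis: frequency
for grade in sorted(hist):
    print(f"{grade:3} {'#' * hist[grade]}")  # bar length = frequency
```

The tallest bar (grade A, frequency 2) is exactly the kind of at-a-glance summary the messy table could not provide.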

Histogram of an Image

Histogram of an image, like other histograms also shows frequency. But an image histogram, shows frequency of pixels
intensity values. In an image histogram, the x axis shows the gray level intensities and the y axis shows the frequency of
these intensities.

Example

The histogram of a dark portrait such as the well-known Einstein test image would look something like this:
 The x axis of the histogram shows the range of pixel values. Since it is an 8 bpp image, it has 256 levels (shades) of gray.
 That is why the range of the x axis starts at 0 and ends at 255, with tick marks every 50. The y axis shows the count of these intensities.

As you can see from the graph, most of the high-frequency bars lie in the first half of the range, which is the darker portion. That means the image is dark overall, and this can be verified from the image itself.
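Computing an image histogram is a single pass over the pixels. This sketch uses a tiny nested-list "image" with only 4 gray levels so the result is easy to check by hand; a real 8 bpp image would use levels=256.

```python
def image_histogram(image, levels=256):
    """Count how many pixels take each gray level.

    x axis: gray level index, y axis: hist[level] = frequency."""
    hist = [0] * levels
    for row in image:
        for r in row:
            hist[r] += 1
    return hist

img = [[0, 0, 1],
       [2, 1, 0]]
print(image_histogram(img, levels=4))  # [3, 2, 1, 0]
```

A dark image like the portrait described above would show large counts concentrated in the low-index entries of `hist`.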

Applications of Histograms

 Histograms have many uses in image processing. The first, as discussed above, is image analysis: we can predict properties of an image just by looking at its histogram, much like reading an x-ray of a bone.
 The second use is brightness adjustment. Histograms have wide application in image brightness, and they are also used in adjusting the contrast of an image.
 Another important use of the histogram is to equalize an image.
 Last but not least, histograms are widely used in thresholding, mostly in computer vision.

SPATIAL FILTERING

 Spatial Filtering is an image processing technique where an output pixel value is computed using a neighborhood
of input pixels around it.
 The operation is performed directly in the spatial domain (on the image plane, not frequency domain).
 It uses a filter mask (kernel) that is moved (slid) across the image.

1. Fundamentals of Spatial Filtering

 Spatial filtering refers to operations performed directly on pixels of an image using a neighborhood
(mask/kernel).
 The filter mask is moved across the image, and each pixel is replaced with a new value computed from its
neighbors.

2. Filter Mask (Kernel)

 A small matrix (e.g., 3×3, 5×5).

 Each element of the mask acts as a weight.

 Placed at a pixel → multiply with neighborhood values → sum → replace center pixel.

3. Mathematical Expression

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) · f(x + s, y + t)

Where:

 f(x, y) = input image

 g(x, y) = output image

 w(s, t) = filter mask coefficients

 a, b = mask half-dimensions (the mask is (2a + 1) × (2b + 1))
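The double sum above can be implemented directly. This sketch makes one simplifying assumption of its own: border pixels, where the mask would hang off the image, are left unchanged (padding and mirroring are other common conventions). Function and variable names are illustrative.

```python
def spatial_filter(image, mask):
    """Slide a (2a+1) x (2b+1) mask over the image; each interior pixel becomes
    the weighted sum of its neighborhood (the double sum in the formula above)."""
    a, b = len(mask) // 2, len(mask[0]) // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # borders stay as in the input
    for x in range(a, h - a):
        for y in range(b, w - b):
            acc = 0.0
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    acc += mask[s + a][t + b] * image[x + s][y + t]
            out[x][y] = acc
    return out

mean3 = [[1 / 9] * 3 for _ in range(3)]      # 3x3 averaging (low-pass) mask
img = [[10] * 5 for _ in range(5)]
print(spatial_filter(img, mean3))            # a constant image stays constant
```

Swapping in a different mask (Gaussian, Laplacian, Sobel) changes the filter from smoothing to sharpening without touching this sliding-window machinery.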

4. Categories

1. Smoothing (Low-Pass Filtering)

o Reduces noise, blurs images.

o Examples: Mean filter, Gaussian filter.

2. Sharpening (High-Pass Filtering)

o Enhances edges, fine details.

o Examples: Laplacian filter, Sobel/Prewitt operators.

5. Basic Concepts

 Operates in spatial domain, not frequency domain.

 Uses local neighborhood around each pixel.

 The filter response depends on both pixel values and mask coefficients.

 Common masks: 3×3, 5×5, etc.

6. Applications

 Image noise removal.

 Blurring for background detection.

 Edge detection and feature extraction.

 Sharpening medical/satellite images.

Image smoothing is a digital image processing technique that reduces and suppresses image noise. In the spatial domain, neighborhood averaging can generally be used to achieve smoothing. Commonly seen smoothing filters include average smoothing, Gaussian smoothing, and adaptive smoothing.

Average Smoothing

 Average Smoothing replaces each pixel with the average of its neighborhood pixels.
 It is a low-pass filter that removes noise and blurs small details.
 Implemented using a filter mask (kernel) with equal weights.

Effect of Average Smoothing

 Removes random noise.

 Reduces sharp transitions.

 Blurs fine details and edges.

Applications

 Noise reduction (especially salt-and-pepper noise).


 Preprocessing before edge detection.

 Blurring background in images.
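A minimal average-smoothing sketch, assuming a 3×3 neighborhood, integer 8-bit intensities, and borders left unchanged; the function name is made up here.

```python
def average_smooth_3x3(image):
    """Replace each interior pixel with the mean of its 3x3 neighborhood."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            s = sum(image[x + dx][y + dy]
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            out[x][y] = s // 9   # integer mean keeps the 8-bit range
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a single bright noise spike
         [10, 10, 10]]
print(average_smooth_3x3(noisy))  # spike drops to (8*10 + 255) // 9 = 37
```

The spike is heavily suppressed, but note it also leaks into the average: this is precisely the blurring of detail the text describes as the cost of mean filtering.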

Gaussian Smoothing

Average smoothing treats all the pixels in the neighborhood identically. To reduce the blur in the smoothing process and obtain a more natural smoothing effect, it is natural to increase the weight of the template's center point and reduce the weight of distant points, so that the new center intensity is closer to that of its nearest neighbors. The Gaussian template is based on exactly this consideration.
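The weighting idea above can be made concrete by building a Gaussian mask: weights fall off with distance from the center and are normalized to sum to 1. The function name and the choice sigma = 1.0 are assumptions for illustration.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size x size Gaussian mask: the centre carries the
    largest weight, and weights decay with squared distance from the centre."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for y in range(size)] for x in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

for row in gaussian_kernel(3, 1.0):
    print(["%.3f" % v for v in row])  # centre weight is the largest
```

Using this mask with a sliding-window filter gives Gaussian smoothing; a smaller sigma concentrates weight at the center (less blur), a larger sigma approaches the plain averaging mask.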

Adaptive Smoothing

The averaging template blurs the image while eliminating the noise. The Gaussian template does a better job, but some blurring is still inevitable, as it is inherent in the mechanism. A more desirable approach is selective smoothing: smoothing only in noisy areas and not smoothing in noise-free areas, which minimizes the influence of the blur. This is called adaptive filtering.

Some applications of where sharpening filters are used include:

 Medical image visualization

 Photo enhancement

 Industrial defect detection

 Autonomous guidance in military systems
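As a concrete example of the sharpening filters named earlier, here is a sketch of Laplacian sharpening using the common 4-neighbor mask [[0,1,0],[1,-4,1],[0,1,0]] and the rule s = r - Laplacian(r). Borders are left unchanged and output is clipped to [0, 255]; these conventions, and the function name, are assumptions made here.

```python
def laplacian_sharpen(image):
    """Sharpen with s = r - Laplacian(r), where the Laplacian at a pixel is the
    sum of its 4 neighbours minus 4 times the pixel itself."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            lap = (image[x - 1][y] + image[x + 1][y]
                   + image[x][y - 1] + image[x][y + 1]
                   - 4 * image[x][y])
            out[x][y] = max(0, min(255, image[x][y] - lap))
    return out

# A step edge: sharpening overshoots on both sides, making the edge look crisper.
edge = [[10, 10, 200, 200] for _ in range(3)]
print(laplacian_sharpen(edge))
```

Flat regions (where the Laplacian is zero) pass through untouched; only intensity transitions are exaggerated, which is why this is the high-pass counterpart of the smoothing filters above.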
