DIP UNIT 1 E-Notes

The document provides an overview of Digital Image Processing (DIP), detailing its fundamental concepts, techniques, and applications. It discusses the basic steps involved in DIP, including image acquisition, enhancement, restoration, segmentation, and analysis, as well as the types of images and their representations. Additionally, it outlines the advantages and disadvantages of DIP, components of an image processing system, and the relationship between visual perception and image processing.


MARUDHAR KESARI JAIN COLLEGE FOR WOMEN

(AUTONOMOUS)
VANIYAMBADI
PG and Research Department of Computer Science
II M.Sc. Computer Science – Semester - II
E-Notes (Study Material)

Core Course - 1: DIGITAL IMAGE PROCESSING    Code:

Unit 1 – Introduction: What is digital image processing – the origins of DIP – examples of
fields that use DIP – fundamental steps in DIP – components of an image processing
system. Digital Image Fundamentals: elements of visual perception – light and the
electromagnetic spectrum – image sensing and acquisition – image sampling and
quantization – some basic relationships between pixels – linear and nonlinear operations.
(17 Hours)
Learning Objectives: Learn basic image processing techniques for solving real problems.
Course Outcome: Understand the fundamentals of Digital Image Processing.
Overview:
Digital Image Processing Basics
Digital Image Processing means processing digital images by means of a digital computer. It
can also be described as the use of computer algorithms either to enhance an image or to
extract useful information from it.

Digital image processing is the use of algorithms and mathematical models to process and
analyze digital images. The goal of digital image processing is to enhance the quality of
images, extract meaningful information from images, and automate image-based tasks.

The basic steps involved in digital image processing (the fundamental steps in DIP) are:

1.​ Image acquisition: This involves capturing an image using a digital camera or
scanner, or importing an existing image into a computer.
2.​ Image enhancement: This involves improving the visual quality of an image, such
as increasing contrast, reducing noise, and removing artifacts.
3.​ Image restoration: This involves removing degradation from an image, such as
blurring, noise, and distortion.
4.​ Image segmentation: This involves dividing an image into regions or segments,
each of which corresponds to a specific object or feature in the image.
5.​ Image representation and description: This involves representing an image in a
way that can be analyzed and manipulated by a computer, and describing the
features of an image in a compact and meaningful way.
6.​ Image analysis: This involves using algorithms and mathematical models to
extract information from an image, such as recognizing objects, detecting patterns,
and quantifying features.
7.​ Image synthesis and compression: This involves generating new images or
compressing existing images to reduce storage and transmission requirements.
Digital image processing is widely used in a variety of applications, including medical
imaging, remote sensing, computer vision, and multimedia.
Image processing mainly includes the following steps (a minimal MATLAB sketch follows
the list):
1. Importing the image via image acquisition tools;
2. Analysing and manipulating the image;
3. Output, in which the result is an altered image or a report based on the analysis of that
image.
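
The three-step flow can be illustrated with a short MATLAB sketch. This is only an
illustration, assuming a grayscale input file named input.png (a hypothetical file name);
imread, imadjust, and imwrite are standard MATLAB/Image Processing Toolbox functions.

    f = imread('input.png');     % 1. acquisition: import the image (hypothetical file name)
    g = imadjust(f);             % 2. manipulation: stretch the grayscale contrast
    imwrite(g, 'output.png');    % 3. output: save the altered image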

What is an image?
An image is defined as a two-dimensional function F(x, y), where x and y are spatial
coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of
the image at that point. When x, y, and the amplitude values of F are all finite, discrete
quantities, we call it a digital image.
In other words, an image can be defined by a two-dimensional array arranged in rows and
columns.
A digital image is composed of a finite number of elements, each of which has a particular
value at a particular location. These elements are referred to as picture elements, image
elements, and pixels. Pixel is the term most widely used to denote the elements of a digital
image.
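
As a minimal sketch, the matrix below is a complete (if tiny) 2-by-3 digital image in
MATLAB; the values are arbitrary 8-bit intensities chosen for illustration.

    f = uint8([  0  64 128;       % a 2-by-3 digital image: 2 rows, 3 columns
               192 255  32]);     % each entry is one pixel's intensity
    p = f(2, 3);                  % the pixel in row 2, column 3 has intensity 32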
Types of an image
1. BINARY IMAGE – A binary image, as its name suggests, contains only two pixel
values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also
known as a monochrome image.
2. BLACK AND WHITE IMAGE – An image which consists of only black and white
pixels is called a black-and-white image.
3. 8-BIT COLOR FORMAT – This is the most common image format. It has 256
different shades and is commonly known as a grayscale image. In this format, 0
stands for black, 255 stands for white, and 127 stands for gray.
4. 16-BIT COLOR FORMAT – This is a color image format with 65,536 different
colors, also known as the high color format. In this format the distribution of
color is not the same as in a grayscale image.
A 16-bit format is typically divided into three channels, Red, Green, and Blue: the familiar
RGB format (see the conversion sketch below).
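
A hedged MATLAB sketch of conversions between these image types; peppers.png is a
demo image bundled with MATLAB, and imbinarize requires a recent release (older
releases use im2bw instead).

    rgb  = imread('peppers.png');   % RGB demo image bundled with MATLAB
    gray = rgb2gray(rgb);           % 8-bit grayscale: 256 shades, 0 = black, 255 = white
    bw   = imbinarize(gray);        % binary image: pixels are only 0 (black) or 1 (white)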
Image as a Matrix

As we know, images are represented in rows and columns, so an M x N digital image can be
written in the following matrix form:

    f(x, y) = [ f(0,0)      f(0,1)      ...  f(0,N-1)
                f(1,0)      f(1,1)      ...  f(1,N-1)
                ...         ...         ...  ...
                f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Every element of this matrix
is called an image element, picture element, or pixel.

DIGITAL IMAGE REPRESENTATION IN MATLAB:

In MATLAB the start index is 1 instead of 0. Therefore, MATLAB's f(1,1) corresponds to
f(0,0) in the notation above, and the two representations of the image are identical except
for the shift in origin.
In MATLAB, matrices are stored in variables such as X, x, input_image, and so on. Variable
names must begin with a letter, as in other programming languages.
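
A short sketch of this representation; cameraman.tif is a grayscale demo image that ships
with the Image Processing Toolbox, so the snippet should run on a standard installation.

    f = imread('cameraman.tif');   % read a grayscale image into an M-by-N matrix
    [M, N] = size(f);              % number of rows and columns
    topleft = f(1, 1);             % MATLAB's f(1,1) is the mathematical f(0,0)
    imshow(f);                     % display the matrix as an image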

PHASES OF IMAGE PROCESSING:

1. ACQUISITION – It could be as simple as being given an image which is already in
digital form. The main work involves:
a) Scaling
b) Color conversion (RGB to gray or vice versa)
2. IMAGE ENHANCEMENT – This is among the simplest and most appealing areas of
image processing. It is used to bring out detail that is hidden in an image, and it is
subjective.
3. IMAGE RESTORATION – This also deals with improving the appearance of an image,
but it is objective: restoration is based on mathematical or probabilistic models of image
degradation.
4. COLOR IMAGE PROCESSING – This deals with pseudocolor and full-color image
processing; the color models applicable to digital image processing are covered here.
5. WAVELETS AND MULTI-RESOLUTION PROCESSING – This is the foundation for
representing images at various degrees of resolution.
6. IMAGE COMPRESSION – This involves developing functions to reduce the storage
required for an image or the bandwidth required to transmit it.
7. MORPHOLOGICAL PROCESSING – This deals with tools for extracting image
components that are useful in the representation and description of shape.
8. SEGMENTATION PROCEDURE – This partitions an image into its constituent parts or
objects. Autonomous segmentation is one of the most difficult tasks in image processing.
9. REPRESENTATION & DESCRIPTION – This follows the output of the segmentation
stage; choosing a representation is only part of the solution for transforming raw data into
a form suitable for subsequent computer processing.
10. OBJECT DETECTION AND RECOGNITION – This is the process that assigns a label
to an object based on its descriptors. (A short pipeline sketch follows this list.)
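
Several of these phases can be strung together in a few lines of MATLAB. The sketch below
is an illustration, not the notes' own example; coins.png is a demo image bundled with the
Image Processing Toolbox.

    f  = imread('coins.png');     % acquisition: grayscale demo image
    g  = medfilt2(f);             % enhancement/restoration: 3x3 median filter suppresses noise
    t  = graythresh(g);           % Otsu's method picks a global threshold in [0,1]
    bw = imbinarize(g, t);        % segmentation: separate the coins from the background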
OVERLAPPING FIELDS WITH IMAGE PROCESSING

According to block 1, if the input is an image and the output is also an image, the process is
termed Digital Image Processing.
According to block 2, if the input is an image and the output is some kind of information or
description, the process is termed Computer Vision.
According to block 3, if the input is a description or code and the output is an image, the
process is termed Computer Graphics.
According to block 4, if the input is a description, keywords, or code and the output is also a
description or keywords, the process is termed Artificial Intelligence.
Advantages of Digital Image Processing:

1.​ Improved image quality: Digital image processing algorithms can improve the
visual quality of images, making them clearer, sharper, and more informative.
2.​ Automated image-based tasks: Digital image processing can automate many
image-based tasks, such as object recognition, pattern detection, and
measurement.
3.​ Increased efficiency: Digital image processing algorithms can process images
much faster than humans, making it possible to analyze large amounts of data in a
short amount of time.
4.​ Increased accuracy: Digital image processing algorithms can provide more
accurate results than humans, especially for tasks that require precise
measurements or quantitative analysis.

Disadvantages of Digital Image Processing:

1.​ High computational cost: Some digital image processing algorithms are
computationally intensive and require significant computational resources.
2.​ Limited interpretability: Some digital image processing algorithms may produce
results that are difficult for humans to interpret, especially for complex or
sophisticated algorithms.
3.​ Dependence on quality of input: The quality of the output of digital image
processing algorithms is highly dependent on the quality of the input images. Poor
quality input images can result in poor quality output.
4.​ Limitations of algorithms: Digital image processing algorithms have limitations,
such as the difficulty of recognizing objects in cluttered or poorly lit scenes, or the
inability to recognize objects with significant deformations or occlusions.
5.​ Dependence on good training data: The performance of many digital image
processing algorithms is dependent on the quality of the training data used to
develop the algorithms. Poor quality training data can result in poor performance
of the algorithms.

Components of Image Processing System


An image processing system is the combination of the different elements involved in digital
image processing. Digital image processing is the processing of an image by means of a
digital computer, using different computer algorithms to perform image processing on
digital images.
It consists of the following components:

● Image Sensors:
Image sensors sense the intensity, amplitude, co-ordinates, and other features of the
images and pass the result to the image processing hardware. This component also covers
the problem domain.
● Image Processing Hardware:
Image processing hardware is dedicated hardware used to process the data obtained from
the image sensors. It passes the result to the general-purpose computer.
● Computer:
The computer used in an image processing system is a general-purpose computer of the
kind used in daily life.
● Image Processing Software:
Image processing software includes all the mechanisms and algorithms used in the image
processing system.
● Mass Storage:
Mass storage stores the pixels of the images during processing.
● Hard Copy Device:
Once the image is processed, it is stored in a hard copy device, which can be a pen drive
or any external storage device.
●​ Image Display:​
It includes the monitor or display screen that displays the processed images.
●​ Network:​
Network is the connection of all the above elements of the image processing system.
Elements of Visual Perception
The field of digital image processing is built on a foundation of mathematical and
probabilistic formulations, but human intuition and analysis play the main role in the choice
between techniques, and that choice is usually made on the basis of subjective, visual
judgements.
In human visual perception, the eyes act as the sensor or camera, the neurons act as the
connecting cable, and the brain acts as the processor.
The basic elements of visual perception are:

1. Structure of the Eye
2. Image Formation in the Eye
3. Brightness Adaptation and Discrimination
Structure of Eye:

The human eye is a slightly asymmetrical sphere with an average diameter of 20 mm to
25 mm and a volume of about 6.5 cc. The eye functions much like a camera: an external
object is seen the way a camera takes a picture of an object. Light enters the eye through a
small hole called the pupil, a black-looking aperture that contracts when the eye is exposed
to bright light, and is focused on the retina, which acts like camera film.
The lens, iris, and cornea are nourished by a clear fluid (the aqueous humour) held in the
anterior chamber. The fluid flows from the ciliary body to the pupil and is absorbed through
the channels in the angle of the anterior chamber. The delicate balance of aqueous
production and absorption controls the pressure within the eye.
The cones in the eye number between 6 and 7 million and are highly sensitive to color.
Humans can perceive a colored image in daylight because of these cones. Cone vision is
also called photopic or bright-light vision.
The rods are far more numerous, between 75 and 150 million, and are distributed over the
retinal surface. Rods are not involved in color vision and are sensitive to low levels of
illumination.
Image Formation in the Eye:
The image is formed when the lens of the eye focuses an image of the outside world onto a
light-sensitive membrane at the back of the eye, called the retina. The lens focuses light on
the photoreceptive cells of the retina, which detect photons of light and respond by
producing neural impulses.

The distance between the lens and the retina is about 17 mm, and the focal length varies
from approximately 14 mm to 17 mm.
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye's ability to discriminate
between different intensity levels is an important consideration when presenting image
processing results.

The range of light intensity levels to which the human visual system can adapt is enormous,
on the order of 10^10, from the scotopic threshold to the glare limit. In photopic vision
alone, the range is about 10^6.
Image Sensing and Acquisition

The types of images in which we are interested are generated by the combination of an
“illumination” source and the reflection or absorption of energy from that source by the
elements of the “scene” being imaged.

We enclose illumination and scene in quotes to emphasize the fact that they are considerably
more general than the familiar situation in which a visible light source illuminates a common
everyday 3-D (three-dimensional) scene.

For example, the illumination may originate from a source of electromagnetic energy such as
radar, infrared, or X-ray energy.

But, as noted earlier, it could originate from less traditional sources, such as ultrasound or
even a computer-generated illumination pattern. Similarly, the scene elements could be
familiar objects, but they can just as easily be molecules, buried rock formations, or a human
brain.

We could even image a source, such as acquiring images of the sun. Depending on the nature
of the source, illumination energy is reflected from, or transmitted through, objects. An
example in the first category is light reflected from a planar surface. An example in the
second category is when X-rays pass through a patient's body for the purpose of generating a
diagnostic X-ray film.

In some applications, the reflected or transmitted energy is focused onto a photo converter
(e.g., a phosphor screen), which converts the energy into visible light. Electron microscopy
and some applications of gamma imaging use this approach.

The idea is simple: Incoming energy is transformed into a voltage by the combination of
input electrical power and sensor material that is responsive to the particular type of energy
being detected.

The output voltage waveform is the response of the sensor(s), and a digital quantity is
obtained from each sensor by digitizing its response. In this section, we look at the principal
modalities for image sensing and generation.

Fig: Single Image sensor


Fig: Line Sensor

Fig: Array sensor

Sampling and Quantization in Digital Image Processing


This section covers the concepts of image sampling and quantization, exploring their
purposes, techniques, and differences. Image sampling captures discrete samples of an
image's continuous spatial domain, while quantization reduces the number of colors or
intensity levels in an image. Both are central to digital image processing, and both affect
image quality and file size.
To create a digital image, we need to convert the continuous sensed data into digital form.
This conversion involves two processes:
1. Sampling: digitizing the co-ordinate values is called sampling.
2. Quantization: digitizing the amplitude values is called quantization.
To convert a continuous image f(x, y) into digital form, we have to sample the function in
both co-ordinates and amplitude.
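
As a hedged illustration (assuming the bundled cameraman.tif demo image), the MATLAB
sketch below subsamples the spatial coordinates and then requantizes the amplitudes to 8
grey levels.

    f  = imread('cameraman.tif');            % 256-level grayscale image
    fs = f(1:4:end, 1:4:end);                % sampling: keep every 4th pixel in each direction
    levels = 8;                              % quantization: target number of grey levels
    fq = uint8(round(double(f)/255*(levels-1)) * (255/(levels-1)));  % map to 8 levels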
Difference between Image Sampling and Quantization:

Sampling:
- Digitization of the co-ordinate values.
- The x-axis (time) is discretized; the y-axis (amplitude) remains continuous.
- Sampling is done prior to the quantization process.
- It determines the spatial resolution of the digitized image.
- It reduces the continuous curve to a series of tent poles over time.
- A single amplitude value is selected from the different values of the time interval to
represent it.

Quantization:
- Digitization of the amplitude values.
- The x-axis (time) remains continuous; the y-axis (amplitude) is discretized.
- Quantization is done after the sampling process.
- It determines the number of grey levels in the digitized image.
- It reduces the continuous curve to a continuous series of stair steps.
- Values representing the time intervals are rounded off to create a defined set of possible
amplitude values.

Basic Relationships Between Pixels


●​ Neighborhood
●​ Adjacency
●​ Paths
●​ Connectivity
●​ Regions
●​ Boundaries
Neighbors of a pixel – N4(p)
● Any pixel p at (x, y) has two vertical and two horizontal neighbors, given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1).
● This set of pixels is called the 4-neighbors of p and is denoted by N4(p):

                (x, y+1)
    (x-1, y)    (x, y)     (x+1, y)
                (x, y-1)

Neighbors of a pixel – ND(p)

● Any pixel p at (x, y) has four diagonal neighbors, given by
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1).
● This set is denoted by ND(p):

    (x-1, y+1)              (x+1, y+1)
                 (x, y)
    (x-1, y-1)              (x+1, y-1)

Neighbors of a pixel – N8(p)

● ND(p) and N4(p) together are known as the 8-neighbors of p and are denoted by N8(p).
● N4(p) U ND(p) = N8(p)
● What about when p(x, y) is a border pixel of the image? Some of its neighbors then fall
outside the image; a sketch handling this case follows the diagram.

    (x-1, y+1)   (x, y+1)   (x+1, y+1)
    (x-1, y)     (x, y)     (x+1, y)
    (x-1, y-1)   (x, y-1)   (x+1, y-1)
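
A minimal MATLAB sketch (a hypothetical helper, not from the notes) that returns the
4-neighbors of a pixel and handles the border case by discarding coordinates outside an
M-by-N image:

    function nbrs = n4(x, y, M, N)
    % N4(p): 4-neighbors of pixel p = (x, y) in an M-by-N image,
    % dropping any candidate that lies outside the image (border pixels).
        cand = [x+1 y; x-1 y; x y+1; x y-1];          % the four candidates
        keep = cand(:,1) >= 1 & cand(:,1) <= M & ...
               cand(:,2) >= 1 & cand(:,2) <= N;       % in-bounds test
        nbrs = cand(keep, :);                         % one neighbor per row
    end

For example, n4(1, 1, 256, 256) returns [2 1; 1 2]: a corner pixel has only two 4-neighbors.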

Adjacency
● Let V be the set of intensity values used to define adjacency.
● For binary images, V = {1}.
● For a particular grayscale image, V might be, e.g., {1, 3, 5, …, 251, 253, 255}.
● 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the
set N4(p).
● 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the
set N8(p).
● m-adjacency: two pixels p and q with values from V are m-adjacent if

q is in N4(p)

OR

q is in ND(p) AND N4(p) ∩ N4(q) has no pixels whose values are from V.
(See the adjacency-test sketch below.)
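
As a sketch (hypothetical helper name), 4-adjacency can be tested directly, since q is in
N4(p) exactly when the city-block distance between p and q is 1:

    function tf = is4adjacent(f, p, q, V)
    % True if pixels p = [x y] and q = [s t] of image f are 4-adjacent:
    % both intensities must lie in the value set V, and q must be in N4(p).
        bothInV = ismember(f(p(1), p(2)), V) && ismember(f(q(1), q(2)), V);
        tf = bothInV && sum(abs(p - q)) == 1;   % city-block distance of 1
    end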

Path
● A path is a sequence of pixels linked under some adjacency definition:
4-adjacency gives a 4-path, 8-adjacency an 8-path, and m-adjacency an m-path.
● The path length is the number of pixels involved.
Connectivity
● Let S be a subset of pixels in an image.
● Two pixels p and q are said to be connected in S if there exists a path between them
consisting entirely of pixels in S.
● For any pixel p in S, the set of pixels that are connected to it in S is called a connected
component of S.
● If S has only one connected component, then S is called a connected set. (See the
sketch below.)
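
Connected components can be computed with the Image Processing Toolbox function
bwconncomp, whose second argument selects the adjacency. A small sketch with a made-up
binary image:

    bw  = logical([1 0 0; 0 1 0; 0 0 1]);   % three pixels along the main diagonal
    cc4 = bwconncomp(bw, 4);   % 3 components: diagonal pixels are not 4-adjacent
    cc8 = bwconncomp(bw, 8);   % 1 component: under 8-adjacency they connect
    n4c = cc4.NumObjects;      % number of connected components under each rule
    n8c = cc8.NumObjects;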
Region
● A connected set is also called a region.
● Two regions Ri and Rj are said to be adjacent if their union forms a connected set;
such regions are called adjacent or joint regions.
● Regions that are not adjacent are said to be disjoint regions.
● Only 4- and 8-adjacency are considered when referring to regions (per the textbook
author).
● When discussing a particular region, the type of adjacency must be specified.
● In Fig. 2.25(d) of the textbook, the two regions are adjacent only if 8-adjacency is
considered.
Foreground and Background
● Suppose an image contains K disjoint regions Rk, k = 1, 2, …, K, none of which
touches the image border.
● Let Ru denote the union of all K regions, and let (Ru)^c denote its complement.
● We call all the points in Ru the foreground and all the points in (Ru)^c the background.
Boundary
● The boundary (border or contour) of a region R is the set of points that are adjacent to
points in the complement of R.
● Equivalently, it is the set of pixels in the region that have at least one background
neighbor, i.e. one or more neighbors that are not in R.
● Inner border: the border of the foreground.
● Outer border: the border of the background.
● If R happens to be the entire image, its boundary is conventionally defined as the set of
pixels in the first and last rows and columns of the image.
● There is a difference between a boundary and an edge in the digital image paradigm;
the author defers this discussion to Chapter 10.
Distance Measures
For pixels p = (x, y) and q = (s, t) (a numeric sketch follows the list):
● Euclidean distance: De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
● City-block distance: D4(p, q) = |x - s| + |y - t|
● Chessboard distance: D8(p, q) = max(|x - s|, |y - t|)
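
A quick MATLAB sketch evaluating the three measures for an arbitrary pair of pixel
coordinates:

    p = [2 3];  q = [5 7];            % pixel coordinates (x, y) and (s, t)
    d_e = sqrt(sum((p - q).^2));      % Euclidean distance: 5
    d_4 = sum(abs(p - q));            % city-block distance: 7
    d_8 = max(abs(p - q));            % chessboard distance: 4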
REFERENCES
1) Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing.
2) https://cuitutorial.com/basic-relationships-between-pixels/
3) https://benchpartner.com/image-sensing-and-acquisition-in-image-processing

Reference books:
1.​ “Digital Image Processing” by Rafael C. Gonzalez and Richard E. Woods.​
2.​ “Computer Vision: Algorithms and Applications” by Richard Szeliski.​
3.​ “Digital Image Processing Using MATLAB” by Rafael C. Gonzalez, Richard E.
Woods, and Steven L. Eddins.
