SVCE TIRUPATI
COURSE MATERIAL
S.NO CONTENTS
1 SYLLABUS
2 LECTURE NOTES
1.1 INTRODUCTION
1.2 A SIMPLE IMAGE MODEL
1.3 COMPONENTS OF IMAGE PROCESSING SYSTEM
1.4 FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
1.5 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
1.6 IMAGE SENSING & ACQUISITION
1.7 APPLICATIONS OF DIGITAL IMAGE PROCESSING
3 PRACTICE QUIZ
1. SYLLABUS
UNIT I
Fundamentals of Image Processing – I:
Introduction, A simple image model, Components of image processing system,
Fundamental Steps in digital image processing, image sensing and acquisition,
Applications of image processing.
2. LECTURE NOTES
1.1 INTRODUCTION
Digital image processing deals with the manipulation of digital images by a digital
computer. It is a subfield of signals and systems but focuses particularly on images. DIP
focuses on developing computer systems that can perform processing on an image: the
input to such a system is a digital image, the system processes that image using
efficient algorithms, and it gives an image as output. The most common example is
Adobe Photoshop, one of the most widely used applications for processing digital
images.
A signal is a function of one or more variables that describes a physical phenomenon.
Based on the number of independent variables, signals are classified as one-dimensional
or multidimensional.
Examples:
1. A speech signal with 'time' as the variable is a 1-D signal
2. An image with spatial coordinates (x, y) is a 2-D signal
3. A video with spatial coordinates (x, y) and time is a 3-D signal
4. A CT or MRI sequence with coordinates (x, y, z, t) is a 4-D signal
The main characteristics of a 1-D signal (speech) are amplitude (A), frequency (f), and
phase (φ), where:
Amplitude represents loudness (or voltage in the case of electrical signals).
Frequency represents the rate of oscillation of the signal, i.e., the number of cycles per
unit time.
Phase represents the relative shift of the waveform in time.
In the same way, a 2-D signal (an image) exhibits characteristics analogous to those of a
speech signal. Here, amplitude represents the intensity (the value of the function), while
frequency and phase carry information about the variation of light (wavelength) and about
the edges. In viewing a still image, most of the visual information is contained in the
edges and in regions of high contrast. In general, the regions of maximum and minimum
intensity can be considered as places at which complex exponentials at different
frequencies are in phase.
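A minimal sketch (not from the source text; it assumes NumPy and made-up parameter
values) of a 1-D sinusoidal signal with explicit amplitude, frequency, and phase:

import numpy as np

A = 2.0          # amplitude
f = 5.0          # frequency in Hz (cycles per second)
phi = np.pi / 4  # phase in radians

t = np.linspace(0.0, 1.0, 1000)            # one second, sampled at 1 kHz
x = A * np.sin(2 * np.pi * f * t + phi)    # the 1-D signal x(t)

print(round(float(x.max()), 2))            # close to the amplitude A = 2.0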
1.1.1 DIGITAL IMAGE
An Image maybe defined as a two-dimensional function, ƒ(x,y), where x and y are
spatial (Plane) Coordinates, and the amplitude of ƒ at any pair of coordinates (x,y) is
called the Intensity or gray level of the image at that point. When x,y and intensity
values of ƒ are finite, discrete quantities, we call the image as Digital Image.
A Digital Image is a representation of a two dimensional image as a finite set of digital
values, called picture elements or pixels Pixel values typically represent gray levels,
colours , heights etc.. Digital image is an approximation of a real scene.
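A minimal sketch (not from the source text; it assumes NumPy and made-up values) of a
digital image as a 2-D array of finite, discrete intensities:

import numpy as np

# A 4 x 4, 8-bit digital image: each entry is a pixel's gray level.
img = np.array([[  0,  64, 128, 255],
                [ 32,  96, 160, 224],
                [ 16,  80, 144, 208],
                [  8,  72, 136, 200]], dtype=np.uint8)

print(img[1, 2])   # intensity f(x, y) at coordinates (1, 2): 160
print(img.shape)   # spatial dimensions: (4, 4)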
1.1.2 TYPES OF IMAGES
1. Binary Images / B&W Images:
It is the simplest type of image, taking only two values, i.e., black and white (0 and 1).
A binary image is a 1-bit image: only one binary digit is needed to represent each pixel.
Binary images are generated using a threshold operation: when a pixel is above the
threshold value it is turned white ('1'), and when it is below the threshold value it is
turned black ('0'), as illustrated in the sketch after the figure below.
2. Gray Scale Images:
Grayscale images are monochrome images, meaning they have only one colour channel and
carry no colour information. Each pixel takes one of the available grey levels. A normal
grayscale image contains 8 bits/pixel, which gives 256 different grey levels. In medical
imaging and astronomy, 12- or 16-bits/pixel images are used.
3. Colour Images:
Colour images are three-band monochrome images in which each band contains a different
colour; the actual information stored in the digital image is the gray-level information
of each spectral band. The bands are typically red, green, and blue (RGB images), and
each colour image has 24 bits/pixel, i.e., 8 bits for each of the three colour bands
(R, G, B).
Figure: Types of Images
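As referenced above, a minimal sketch (not from the source text; it assumes NumPy and
made-up pixel values) of the threshold operation that produces a binary image from an
8-bit grayscale image:

import numpy as np

gray = np.array([[ 12, 200,  90],
                 [240,  15, 130],
                 [ 60, 180,  30]], dtype=np.uint8)

threshold = 128
binary = (gray >= threshold).astype(np.uint8)  # 1 = white, 0 = black
print(binary)
# [[0 1 0]
#  [1 0 1]
#  [0 1 0]]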
1.2 A SIMPLE IMAGE MODEL
We can denote an image as function f(x, y). The value or amplitude of f at spatial
coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by
the source of the image.
When an image is generated from a physical process, its values are proportional to
energy radiated by a physical source (e.g., electromagnetic waves). Hence, the
amplitude of the image is nonzero and finite.
i.e., 0 < f(x, y) < ∞
The function f(x, y) may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed, and
2. The amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance components and are
denoted by i(x, y) and r(x, y), respectively.
The two functions combine as a product to form f(x, y):
f(x, y) = i(x, y) r(x, y)
where 0 ≤ i(x, y) < ∞ and 0 ≤ r(x, y) ≤ 1
Thus, reflectance is bounded by 0 (total absorption) and 1 (total reflectance). The nature
of i(x, y) is determined by the illumination source, and r(x, y) is determined by the
characteristics of the imaged objects.
These expressions also apply to images formed via transmission of the illumination
through a medium, such as a chest X-ray.
In this case we deal with a transmissivity instead of a reflectivity function, but the
limits are the same (0 ≤ r(x, y) ≤ 1), and the image function is still modeled as the
product f(x, y) = i(x, y) r(x, y).
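A minimal sketch (not from the source text; it assumes NumPy, with made-up illumination
and reflectance ranges) of the product model:

import numpy as np

rng = np.random.default_rng(0)

i = rng.uniform(100.0, 1000.0, size=(4, 4))  # illumination: 0 <= i(x, y) < inf
r = rng.uniform(0.05, 1.0, size=(4, 4))      # reflectance: 0 (absorption) to 1

f = i * r                                    # the image function f(x, y)
print(f.min() > 0 and np.isfinite(f).all())  # True: 0 < f(x, y) < inf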
1.3 COMPONENTS OF IMAGE PROCESSING SYSTEM
The basic components comprising a typical general-purpose system used for digital
image processing are shown in the figure below. The function of each component is
discussed as follows.
Image sensors: With reference to sensing, two elements are required to acquire a digital
image. The first is a physical device that is sensitive to the energy radiated by the
object we wish to image; the second, a digitizer, converts the output of the sensing
device into digital form.
Specialized image processing hardware: It consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit, which
performs arithmetic (e.g., addition and subtraction) and logical operations in parallel on
entire images.
Computer: It is a general-purpose computer and can range from a PC to a supercomputer,
depending on the application. In dedicated applications, specially designed computers are
sometimes used to achieve a required level of performance.
Software: It consists of specialized modules that perform specific tasks. A well-designed
package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the
integration of those modules.
Mass storage: This capability is a must in image processing applications. An image of size
1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires
one megabyte of storage space if the image is not compressed (see the sketch after this
list). Image processing applications fall into three principal categories of storage:
i) Short-term storage for use during processing
ii) Online storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and disks
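As referenced above, a minimal sketch (not from the source text) of the storage
arithmetic:

width, height = 1024, 1024
bits_per_pixel = 8                       # 8-bit intensity per pixel

total_bytes = width * height * bits_per_pixel // 8
print(total_bytes)                       # 1048576 bytes = 1 megabyte (2**20)

# The same arithmetic answers quiz question 10 later in this material:
# a 128 x 128 image with 64 gray levels needs log2(64) = 6 bits per pixel.
print(128 * 128 * 6)                     # 98304 bits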
Image displays: Image displays in use today are mainly color TV monitors. These monitors
are driven by the outputs of image and graphics display cards that are an integral part
of the computer system.
Hardcopy devices: The devices for recording images include laser printers, film cameras,
heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
Film provides the highest possible resolution, but paper is the obvious medium of choice
for written applications.
Networking: It is almost a default function in any computer system in use today. Because
of the large amount of data inherent in image processing applications, the key
consideration in image transmission is bandwidth.
1.4 FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
Digital image processing is a method of performing operations on a digital image in order
to obtain an enhanced digital image or to extract useful information from it; in other
words, it is the processing of images that are digital in nature by means of a digital
computer.
Digital image processing focuses on two major tasks:
1. Improvement of pictorial information for human interpretation
2. Processing of image data for storage, transmission, and representation for autonomous
machine perception
A digital image can be processed at three levels:
1. Low-level processing
2. Mid-level processing
3. High-level processing
In low-level processing, simple operations such as noise reduction, contrast enhancement,
and image sharpening are performed; here the input to the process is an image and the
output is also an image.
In mid-level processing, tasks such as segmentation are performed; here the input is an
image, but the output is an attribute extracted from that image.
High-level processing combines low-level and mid-level operations; the input is still an
image, but the output varies according to the requirement.
Examples of image processing applications (a minimal low-level sketch follows this list):
1. Removing noise
2. Improving the contrast of the image
3. Removing blurring caused by movement of the camera during image acquisition
4. Correcting geometrical distortions caused by the lens
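As referenced above, a minimal sketch (not from the source text; it assumes NumPy and a
made-up 5 x 5 image) of a low-level operation, noise reduction with a 3 x 3 averaging
filter:

import numpy as np

def mean_filter_3x3(img):
    """Average each interior pixel with its 8 neighbours."""
    img = img.astype(np.float64)
    out = img.copy()                       # border pixels are left unchanged
    acc = np.zeros_like(img[1:-1, 1:-1])
    # Sum the nine shifted copies of the image over the interior region.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
    out[1:-1, 1:-1] = acc / 9.0
    return out

noisy = np.random.default_rng(1).integers(0, 256, size=(5, 5))
print(mean_filter_3x3(noisy))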
1.5 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
There are two categories of steps involved in image processing:
1. Methods whose outputs and input are images.
2. Methods whose outputs are attributes extracted from those images.
Figure: Fundamental steps in DIP
Image acquisition:
The first process is to acquire a digital image. To do this we need image sensing
equipment with the ability to digitize the signal produced by the sensor. The sensor
could be a TV camera, a line-scan camera, etc.
Image Enhancement:
It is a subjective area of image processing used to bring out detail that is obscured or
to highlight certain features of interest in an image. Example: increasing the contrast
of an image for better viewing (a minimal sketch follows).
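A minimal sketch (not from the source text; it assumes NumPy and made-up pixel values) of
one simple enhancement, linear contrast stretching of an 8-bit image to the full
[0, 255] range:

import numpy as np

def stretch_contrast(img):
    """Linearly map [img.min(), img.max()] onto [0, 255]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

dull = np.array([[100, 110], [120, 130]], dtype=np.uint8)
print(stretch_contrast(dull))   # [[  0  85] [170 255]]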
Image Restoration:
It also deals with improving the appearance of an image, but unlike enhancement it is
done using mathematical or probabilistic models of image degradation.
Color Image Processing:
As we know, to restore the natural characteristics of an image it is necessary to
preserve the color information associated with it. For this purpose we use color image
processing.
Wavelets and Multi-resolution:
This is the foundation for representing images at various degrees of resolution. In
particular, it is employed for image data compression and for pyramidal representation,
in which images are successively subdivided into smaller regions.
Compression:
This technique is used to reduce the storage required to save an image, or the bandwidth
required to transmit it, which is most important in Internet applications (a minimal
sketch follows).
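A minimal sketch (not from the source text) of one elementary compression idea,
run-length encoding of a row of pixel values; real image compression standards are far
more sophisticated:

def rle_encode(row):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
print(rle_encode(row))   # [(0, 3), (255, 2), (0, 4)]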
Morphological Processing:
It deals with tools for extracting image components that are useful in the representation
and description of shape (a minimal sketch follows).
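A minimal sketch (not from the source text; it assumes SciPy's ndimage module and a
made-up binary image) of two basic morphological operations, erosion and dilation:

import numpy as np
from scipy import ndimage

img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=bool)

eroded = ndimage.binary_erosion(img)    # shrinks the foreground square
dilated = ndimage.binary_dilation(img)  # grows it
print(eroded.astype(int))
print(dilated.astype(int))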
Segmentation:
It may be defined as partitioning an input image into its constituent parts or objects.
It is very important for distinguishing between different objects in an image, as in
systems employed for traffic control or crowd control. In character recognition, the key
role of segmentation is to extract individual characters and words from the background.
Representation and Description:
It is a process that transforms raw data into a form suitable for subsequent computer
processing. The first decision is whether to use boundary representation or regional
representation. Boundary representation is used when the details of external shape
characteristics are important, whereas regional representation is used when internal
properties are important.
Object Recognition:
It is a process that assigns a label to an object based on its descriptors, i.e., the
information they provide; the recognized object is then interpreted by assigning a
meaning to it.
Knowledge Base:
The function of the knowledge base is to guide the operation of each processing module
and to control the interaction between the modules. A feedback request through the
knowledge base to the segmentation stage for another 'look' is an example of knowledge
utilization in performing image processing tasks.
1.6 IMAGE SENSING & ACQUISITION
Images are generated by the combination of an “illumination” source and the reflection
or absorption of energy from that source by the elements of the “scene” being imaged.
There are three sensing arrangements used to acquire images:
1. Image sensing using a single sensor
2. Image sensing using a sensor strip
3. Image sensing using an array of sensors
The following figures show these three sensing devices. The main principle involved in
image sensing is the same as the working principle of a photodiode, which accepts light
as input and produces a voltage as output. The same principle is used to sense real-world
images.
Figure: Single sensor cell (photodiode)
Figure: Sensor array
Figure: Sensor strip
Image Sensing with Single Sensor
In order to generate a 2-D image using a single sensor, there have to be relative
displacements in both the x- and y-directions between the sensor and the area to be
imaged.
A film negative is mounted onto a drum whose mechanical rotation provides
displacement in one dimension. The single sensor is mounted on a lead screw that
provides motion in the perpendicular direction. Since mechanical motion can be
controlled with high precision, this method is an inexpensive (but slow) way to obtain
high-resolution images.
Figure: Imaging with single sensor
Imaging with Sensor Strip
The strip provides imaging elements in one direction. Motion perpendicular to the strip
provides imaging in the other direction. This is the type of arrangement used in most
flatbed scanners.
Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used
routinely in airborne imaging applications, in which the imaging system is mounted on an
aircraft that flies at a constant altitude and speed over the geographical area to be
imaged.
Sensor strips mounted in a ring configuration are used in medical and industrial imaging
to obtain cross-sectional (“slice”) images of 3-D objects. A rotating X-ray source
provides illumination, and the portion of the sensors opposite the source collects the
X-ray energy that passes through the object (the sensors obviously have to be sensitive
to X-ray energy).
Figure: Imaging with Sensor Strip
Imaging with Array Sensors
The individual sensors are arranged in the form of a 2-D array. Numerous electromagnetic
and some ultrasonic sensing devices are frequently arranged in an array format. This is
also the predominant arrangement found in digital cameras.
A typical sensor for these cameras is a CCD array, which can be manufactured with a
broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000
elements or more. CCD sensors are used widely in digital cameras and other light-sensing
instruments. The response of each sensor is proportional to the integral of the light
energy projected onto the surface of the sensor, a property that is used in astronomical
and other applications requiring low-noise images.
Noise reduction is achieved by letting the sensor integrate the input light signal over
minutes or even hours. Since the sensor array is two-dimensional, its key advantage is
that a complete image can be obtained by focusing the energy pattern onto the surface of
the array (a minimal sketch of the integration idea follows the figure below).
Motion obviously is not necessary, as it is with the other sensor arrangements. The
figure shows the energy from an illumination source being reflected from a scene element
but, as mentioned at the beginning of this section, the energy could also be transmitted
through the scene elements. The first function performed by the imaging system is to
collect the incoming energy and focus it onto an image plane. If the illumination is
light, the front end of the imaging system is a lens, which projects the viewed scene
onto the lens focal plane. The sensor array, which is coincident with the focal plane,
produces outputs proportional to the integral of the light received at each sensor.
Digital and analog circuitry sweeps these outputs and converts them to a video signal,
which is then digitized by another section of the imaging system.
Figure: Imaging with array sensors
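As referenced above, a minimal sketch (not from the source text; it assumes NumPy, with a
made-up constant scene and Gaussian noise) of why integrating the incoming light over
time reduces noise: averaging N independent noisy readings shrinks the noise standard
deviation by roughly a factor of sqrt(N).

import numpy as np

rng = np.random.default_rng(42)
true_image = np.full((64, 64), 100.0)      # constant "scene" intensity

def expose(n_frames):
    frames = true_image + rng.normal(0.0, 10.0, size=(n_frames, 64, 64))
    return frames.mean(axis=0)             # sensor integrates over n frames

print(np.std(expose(1) - true_image))      # about 10.0
print(np.std(expose(100) - true_image))    # about 1.0 (10 / sqrt(100))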
1.7 APPLICATIONS OF DIGITAL IMAGE PROCESSING
Since digital image processing has very wide applications and almost all technical fields
are impacted by DIP, we will discuss only some of its major applications. Digital image
processing has a broad spectrum of applications, such as:
Remote sensing via satellites and other spacecraft
Image transmission and storage for business applications
Medical processing
RADAR (Radio Detection and Ranging)
SONAR (Sound Navigation and Ranging)
Acoustic image processing (the study of underwater sound is known as underwater
acoustics or hydroacoustics)
Robotics and automated inspection of industrial parts
Images acquired by satellites are useful in the tracking of:
Earth resources
Geographical mapping
Prediction of agricultural crops
Urban growth and weather monitoring
Flood and fire control, and many other environmental applications
Space image applications include:
Recognition and analysis of objects contained in images obtained from deep space-probe
missions
Image transmission and storage applications occur in:
Broadcast television
Teleconferencing
Transmission of facsimile images (printed documents and graphics) for office automation
Communication over computer networks
Closed-circuit television-based security monitoring systems
Military communications
Medical applications:
Processing of chest X-rays
Cineangiograms
Projection images of transaxial tomography
Medical images that occur in radiology, nuclear magnetic resonance (NMR), and
ultrasonic scanning
3. PRACTICE QUIZ
1 The first step in digital image processing is?
A) Image segmentation
B) Image sensing
C) Morphological processing
D) None
2 Each element of the matrix is called
A) dots
B) coordinate
C) pixels
D) value
3 The smallest element of an image is called
A) pixel
B) dot
C) coordinate
D) digits
4 DPI stands for
A) dots per image
B) dots per inch
C) dots per intensity
D) diameter per inch
5 A binary image is also called a
A) bilevel image
B) RGB image
C) graylevel image
D) monochrome image
6 A technique used for reducing the storage required to save an image
A) segmentation
B) compression
C) acquisition
D) enhancement
7 Digitizing the coordinate values is called
A) radiance
B) illuminance
C) sampling
D) quantization
8 An imaging system produces
A) high resolution image
B) voltage signal
C) digitized image
D) analog signal
9 The elements of a digital image are referred to as
A) pixels
B) picture elements
C) pels
D) All the above
10 A 128 x 128 image with 64 gray levels requires ____ bits of storage
A) 4096
B) 8192
C) 12288
D) 98304
10. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
Text Books:
1. R.C. Gonzalez & R.E. Woods, “Digital Image Processing”, Addison-Wesley/Pearson
Education, 3rd Edition, 2010.
2. A.K. Jain, “Fundamentals of Digital Image Processing”, PHI.
References:
1. S. Jayaraman, S. Esakkirajan, T. Veerakumar, “Digital Image Processing”, Tata
McGraw Hill.
2. William K. Pratt, “Digital Image Processing”, John Wiley, 3rd Edition, 2004.
3. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, “Digital Image
Processing Using MATLAB”, Tata McGraw Hill, 2010.