
CHAPTER-1

DIGITAL IMAGE FUNDAMENTALS

In the past, the cost of image processing was very high because imaging sensors and computational equipment were very expensive and had only limited functions. As optics, imaging sensors and computational technology advanced, image processing became more commonly used in different areas.

1.1 What Is Digital Image Processing


An image can be defined as the variation of intensity in space; in an image, intensity is a function of the spatial coordinates.
An image can be formally defined as a two-dimensional function f(x, y), where x and y are spatial coordinates. The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. For digital images, x, y and the amplitude values of f are finite and discrete quantities. A digital image is composed of a finite number of elements called pixels. Each pixel has a particular location and value.
Image processing is a discipline in which both the input and output of a process are images.
Computer vision uses computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. Computer vision uses Artificial Intelligence (AI). In image analysis the inputs are images, but the outputs are attributes extracted from these images (e.g., edges, contours and the identity of individual objects).
Applications of Image Processing
Image processing has many applications, as the eye is one of the most important sense organs, used for analysis and decision making. The following are a few image processing applications.
Medical Imaging
In medical science, image processing plays an important role in detecting abnormalities (diseases) in the human body. Medical imaging helps in diagnosing diseases and also in the analysis of body organs and tissues. Medical imaging incorporates:

Radiology : Radiology refers to examinations of the inner structure of opaque objects using X-rays or other penetrating radiation.
Radiology includes images from X-rays, ultrasound, computed tomography (CT), nuclear medicine, Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI).

X-ray Images
A beam of X-rays is projected towards the part of the body which has to be examined. According to the density and composition of the different areas of the body part, a portion of the X-rays is absorbed by the body. The X-rays that pass through the body are detected to give a 2D representation in terms of images.
Ultrasound Scanned Images
Ultrasound (frequencies above 20,000 Hz) is projected onto the organ; the waves travel through it and are reflected back where the density differs. These reflections are detected and imaged, which reveals details of the inner structure of the body organ to be examined.
Sonography is an ultrasound-based diagnostic medical imaging technique used to visualise muscles, tendons and many internal organs, and to capture their size and structure.
Obstetric sonography is used during pregnancy to visualise the fetus.
Computed Tomography (CT)
A 3D image of the inside of an organ can be generated using a series of 2D X-ray images taken around a single axis of rotation. CT scans can be used to diagnose complex fractures, especially ones around joints, ligamentous injuries and dislocations. CT scans are also used to examine the head, the lungs (pulmonary angiogram) and the cardiac, abdominal and pelvic regions.

PET and SPECT (Single Photon Emission Computed Tomography)


A gamma-emitting radioisotope is injected into the bloodstream of the patient, and a gamma camera is used to acquire 2D images of the blood vessels. SPECT and PET can be used to analyse the functioning of the heart or brain.
Magnetic Resonance Imaging (MRI)
MRI does not use any ionizing radiation. MRI uses a powerful magnetic field, which makes the nuclei of the body produce a rotating magnetic field detectable by the scanners. MRI is useful for imaging the brain, muscles, heart and cancers.
Digital infrared thermal imaging (DITI)
DITI cameras are used to capture images called thermograms. DITI cameras detect infrared radiation emitted by objects. The amount of radiation emitted by an object increases with its temperature, so warm objects are easily visible against a cooler background. The thermograms are analysed by thermologists (medical doctors trained in thermology) to detect breast cancer, for fever screening (e.g., H1N1) and for monitoring the healing process.
Electro encephalography (EEG)
EEG is the recording of electrical activity within the brain. The diagnostic applications of EEG are in cases of epilepsy, coma, encephalopathies, brain death, tumors and stroke.
Electro Cardiography (ECG)
ECG is the measure of the electrical activity of the heart captured over time. It is used to detect heart attacks and heart-related diseases.

Remote Sensing
Remote sensing is the gathering of information about an object, area or phenomenon without being in physical contact with it.
Images acquired by satellite are used in remote sensing, i.e., tracking of earth resources, prediction of agricultural crops, urban growth, weather forecasting, flood control and fire control.
Astronomy
Image processing is used in astronomy to analyse the solar system and celestial bodies like the moon, stars and other planets.

Business
Digital image transmission helps in journalism. People from different countries can work together using teleconferencing, through which people can communicate while seeing each other on displays. Industries can be automated using digital image processing.

Entertainment
Digital videos can be broadcast and received by television. Videos can be transmitted through the internet, for example on YouTube. Video games are possible because of image processing.
Security and Surveillance
Small target detection and tracking, missile guidance, vehicle navigation, wide area surveillance and automated aided target recognition can be done using image processing, as can biometric image processing for personal authentication and identification.

Robotics
A robot is an electromechanical machine which is guided by computer and electronic programming to emulate human behaviour. Cameras and the related network work as eyes for the robots.

Night Vision (Infrared Images)
All objects emit an amount of black-body radiation as a function of their temperature. The higher an object's temperature, the more infrared radiation it emits as black-body radiation. An infrared camera can detect this radiation and form an image from it. The intensity of these images depends on the temperature of the objects in the scene rather than on the visible light reflected by the objects, so even at night warm objects like warm-blooded animals will be visible in the image.

1.2 Fundamental steps in image processing


The fundamental steps in image processing are shown in Fig. 1.1.

Fig. 1.1 : Fundamental steps in digital image processing (block diagram: from the problem domain, image acquisition feeds image enhancement, image restoration, color image processing, wavelets and multiresolution processing, compression and morphological processing, whose outputs generally are images; segmentation, representation & description and object recognition follow, whose outputs generally are image attributes; a knowledge base interacts with all modules)


Image Acquisition : It gives information about how to acquire an image, i.e., about the image origin. The image acquisition stage involves preprocessing such as scaling. Scaling is reducing or increasing the physical size of the image by changing the number of pixels. Image acquisition gives the image in digital form.
Image Enhancement : Enhancement techniques are used to bring out detail that is obscured (unclear) or simply to highlight certain features of interest in an image. Enhancement is a subjective process. Mathematical tools are used for enhancing the image.

Image Restoration : Restoration means getting something back. Image restoration is an objective process. Image restoration is the removal of noise in the image. Restoration techniques are based on mathematical or probabilistic models of image degradation.
Color Image Processing : This includes the fundamental concepts of color models and basic color processing in the digital domain.
Wavelets : Using wavelets, images can be represented in various degrees of resolution (multiresolution). Wavelets are used in image data compression and for pyramidal representation, in which an image is subdivided into smaller regions.
Compression : Compression is a technique used to reduce the storage required to save an image or the bandwidth required to transmit it. Compression is useful on the internet, which has to carry significant pictorial content. JPEG image files are compressed images.
Morphological Processing : Morphological image processing deals with tools for extracting image components that are useful in the representation and description of shapes. Morphological processing begins a transition from processes that output images to processes that output image attributes.
Segmentation : Segmentation procedures partition an image into its constituent parts or objects. Autonomous, rugged segmentation leads to successful object identification.
Representation and Description : Segmentation usually gives raw pixel data constituting either the boundary of a region or all the points in the region itself. Boundary representation is suitable when the focus is on external shape. Regional representation is appropriate when the focus is on internal characteristics such as texture. Choosing a representation is only a part of the solution for transforming raw data into a form suitable for subsequent computer processing. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest to identify objects.
Recognition : Recognition is a process that assigns a label to an object based on its description.
Knowledge Base : A knowledge base is a special kind of database for knowledge management. The knowledge base gives knowledge about the problem domain in an image processing system. It also guides the operation of each processing module and controls the interaction between modules.

1.3 Components of the Image Processing System


Large scale image processing systems are sold for massive imaging applications such as satellite image processing and medical image processing. The components of an image processing system are shown in Fig. 1.2.
Image Sensors : An image sensor is a physical device that is sensitive to the energy radiated by the object that we wish to image. In a digital video camera, the sensors produce an electrical output proportional to light intensity, for example CCDs (Charge Coupled Devices), photodiodes, etc.
Specialized Image Processing Hardware : Usually consists of the digitizer, a device for converting the output of the physical sensing device into digital form, plus hardware that performs primitive operations such as an ALU. One example of how an ALU is used is in averaging images as quickly as
Fig. 1.2 : Components of a general purpose image processing system (network, image displays, computer, mass storage, hardcopy, specialized image processing hardware, image processing software, image sensors, problem domain)
they are digitized, for the purpose of noise reduction. This type of hardware is called a front-end subsystem. This unit performs functions that require fast data throughputs that the typical main computer cannot handle.
Computer : Image processing requires intensive processing capability as it has to handle large data, so anything from a general-purpose computer to a supercomputer may be required.
Software : Consists of specialized modules that perform specific tasks, such as enhancing the image or filtering the image for restoration. More sophisticated software packages allow the integration of these specialized modules into user-friendly, general purpose software driven by commands from at least one computer language.
Mass Storage : This capability is a must in image processing applications. Usually an image processing system deals with thousands or even millions of images, and each uncompressed image may need a megabyte or more of storage. Digital storage for image processing falls into three main categories:
1. Short-term storage for use during processing. Computer memory can serve as short-term storage.

2. On-line storage for relatively fast recall. The key factor characterizing on-line storage is frequent access to the stored images.
3. Archival storage, characterized by infrequent access; magnetic tapes and optical disks are typical media.
Image Displays : Displays are part of the computer system. In some cases it is necessary to have a stereo (3D) display.
Hard Copy : Laser printers, and optical and CD-ROM disks.
Networking : The key factor in image transmission is bandwidth.
1.4 Elements of Visual Perception
Human intuition and analysis play a central role in the choice of one technique versus another in image processing, and this choice often is based on subjective visual judgements.
1.4.1 Structure of the Human Eye
The eye is nearly a sphere with an average diameter of approximately 20 mm. Three membranes enclose the eye, as shown in Fig. 1.3:
1. The cornea and sclera outer cover
2. The choroid
3. The retina
The cornea is a tough transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe. The choroid lies directly below the sclera, with a network of blood vessels that serves as the major source of nutrition to the eye. At its anterior extreme the choroid is divided into the ciliary body and the iris diaphragm. The iris contracts and expands to control the amount of light that enters the eye. The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment. The lens is made up of concentric layers of fibrous cells and is suspended by ciliary fibers. The lens contains 60 to 70% water, about 6% fat and more protein than any other tissue in the eye. The lens absorbs approximately 8% of the visible light spectrum. The innermost membrane of the eye is the retina.
When the eye is properly focused, the reflected light from an object outside the eye is imaged on the retina. The light receptors are distributed over the surface of the retina. There are two classes of receptors: cones and rods. The cones in each eye number between 6 and 7 million. The cones are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color. Muscles controlling the eye rotate the eyeball until the image of the object of interest falls on the fovea. Cone vision is called photopic or bright-light vision. The rods, numbering 75 to 150 million, are distributed over the retinal surface. Humans can resolve fine details with cones largely because each one is connected to its own nerve end. Rods give a general overall picture of the field of view, as several rods are connected to a single nerve end. Rods are not involved in colour vision and are sensitive to low levels of illumination; this phenomenon is known as scotopic or dim-light vision. The region with no receptors is called the blind spot.

Fig. 1.3 : Simplified diagram of a cross section of the human eye (labels: cornea, iris, ciliary body, anterior chamber, ciliary muscle, lens, ciliary fibers, visual axis, vitreous humor, retina, blind spot, fovea, sclera, choroid, nerve and sheath)

1.4.2 Image Formation in the Eye


The lens of the eye is flexible. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened. Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (the focal length) varies from approximately 17 mm to about 14 mm. For example, suppose the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry of Fig. 1.4 yields

15/100 = h/17
h = 2.55 mm

Fig. 1.4 : Graphical representation of the eye looking at a tree (a 15 m object at a distance of 100 m; lens-to-retina distance 17 mm). Point C is the optical center of the lens.
1.4.3 Brightness Adaptation and Discrimination
Since a digital image is displayed as a discrete set of intensities, the eye's ability to discriminate between different intensity levels is an important consideration in presenting image processing results. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye. Fig. 1.5 is a plot of light intensity versus subjective brightness. The long solid curve represents the range of intensities to which the visual system can adapt. The visual system cannot operate over such a range simultaneously; it accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as brightness adaptation. The total range of distinct intensity levels the eye can discriminate simultaneously is small. The short intersecting curve represents the range of subjective brightness that the eye can perceive when adapted to the level B_a. This range is restricted, having a level B_b at and below which all stimuli are perceived as indistinguishable black.
Fig. 1.5 : Log of the intensity (in millilamberts, mL) versus subjective brightness, shown for a particular adaptation level (features: glare limit, adaptation range around B_a, lower limit B_b, photopic and scotopic branches, scotopic threshold)
A classic experiment used to determine the capability of the human visual system for brightness discrimination consists of having a subject look at a flat, uniformly illuminated area large enough to occupy the entire field of view. The area is typically a flat opaque glass that is illuminated from behind by a light source whose intensity (I) can be varied, as shown in Fig. 1.6. To this field

Fig. 1.6 : Experimental setup for the determination of brightness discrimination in the human eye

is added an increment of illumination ΔI, in the form of a short-duration flash that appears as a circle in the center of the uniformly illuminated field. If ΔI is not bright enough, the subject says "no", indicating no perceivable change. As ΔI gets stronger, the subject may give a positive response of "yes", indicating a perceived change. The quantity ΔI_c/I, where ΔI_c is the increment of illumination discriminable against background illumination I, is called the Weber ratio. A small value of ΔI_c/I means that a small percentage change in intensity is discriminable.
Fig. 1.7 : Typical Weber ratio (log ΔI_c/I) as a function of log intensity


Fig. 1.7 is a plot of log ΔI_c/I versus log I. This curve shows that brightness discrimination is poor at low levels of illumination. The two branches in the curve reflect the fact that at low levels of illumination vision is carried out by the rods, whereas at high levels it is a function of the cones.
Two phenomena demonstrate that perceived brightness is not a simple function of intensity. The first is called Mach bands, shown in Fig. 1.8: the visual system tends to undershoot or overshoot around the boundary of regions of different intensities, perceiving less brightness or more brightness near such boundaries.

Fig. 1.8 : Mach bands (perceived brightness overshoots and undershoots the actual illumination near intensity boundaries)

Fig. 1.9 : Simultaneous contrast


Whenever there is a sudden change in intensity from low to high, for example at the edge of the intensity transition between the first two bands, the border of the low intensity (black) region appears darker and the border of the higher intensity region appears lighter.
The second phenomenon is called simultaneous contrast. Simultaneous contrast is related to the fact that a region's perceived brightness does not depend simply on its intensity but also on its background. Fig. 1.9 shows the illustration: the centre square has equal brightness in all three images, but the first one appears lighter than the other two.

Fig. 1.10 : Some well-known optical illusions (a), (b) and (c)
In optical illusions the eye fills in non-existing information, as shown in Fig. 1.10. In Fig. 1.10(a) a square seems to exist between the circles, but there is no outline of the square. Similarly, a circle seems to exist at the centre of the lines in Fig. 1.10(b). In Fig. 1.10(c) the two horizontal lines are of the same length, but one seems to be shorter than the other.
1.5 Image Sensing and Acquisition
Images are generated by the combination of an "illumination" source and the reflection or
absorption of energy from that source by the elements of the "scene" being imaged.

1.5.1 Image Acquisition using a single sensor


The sensor may be a photodiode, which is constructed of silicon materials and whose output voltage is proportional to light. The use of a filter in front of a sensor improves selectivity (selection of the color essence in the image, e.g., a greenish image), as shown in Fig. 1.11(b). In order to generate a 2-D image using a single sensor, as shown in Fig. 1.11(a), there has to be relative displacement in both the x and y directions between the sensor and the area to be imaged. A film is mounted onto a drum whose mechanical rotation provides displacement in the vertical direction. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction. The mechanical motion can be controlled with high precision, so this method is an inexpensive way to obtain high resolution images, but its disadvantage is that image acquisition takes more time and the
Fig. 1.11 : Use of a single sensor to generate a 2-D image. (a) Setup to image using a single sensor: film on a rotating drum, with the sensor in linear motion; one image line out per increment of rotation and full linear displacement of the sensor from left to right. (b) A single image sensor: housing, sensing material, filter, power in, voltage waveform out.
scene to be imaged has to be constant for such a long time. Flat bed arrangements, with a similar mechanism but with the sensor moving in two linear directions, are also used. These types of mechanical digitizers sometimes are referred to as "microdensitometers".
1.5.2 Image Acquisition using sensor strips
The sensor strip provides imaging elements in one direction, as shown in Fig. 1.13(a). Motion perpendicular to the strip provides imaging in the other direction, as shown in Fig. 1.12. Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional images of 3-D objects. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object. The output of the sensors must be processed by a reconstruction algorithm whose objective is to transform the sensed data into meaningful cross-sectional images.

Fig. 1.12 : Image acquisition using a linear sensor strip (one image line out per increment of linear motion of the strip over the imaged area)

Fig. 1.13 : (a) A linear array of sensors (b) Sensors arranged in matrix form

1.5.3 Image Acquisition using sensor arrays


Numerous electromagnetic and some ultrasonic sensing devices frequently are arranged in a 2-D array format, as shown in Fig. 1.13(b). This is the predominant arrangement found in digital cameras with a CCD (Charge Coupled Device) array. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor. Noise reduction is achieved by letting the sensor integrate the input light signal over minutes or even hours. Since the sensor array is two-dimensional, its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array. Fig. 1.14 shows the energy from an illumination source being reflected from a scene element; the energy could also be transmitted through the scene elements, as in X-rays. The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor. Digital and analog circuitry sweep these outputs and convert them to a voltage signal, which is then digitized by another section of the imaging system. The output is a digital image.

Fig. 1.14 : An example of the digital image acquisition process (illumination source; scene element; imaging system with internal image plane; output digitized image)

1.6 A Simple Image Formation Model


Images are denoted by two-dimensional functions of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image. Monochromatic images are said to span the gray scale. The value of f(x, y) must be nonzero and finite:

0 < f(x, y) < ∞ ---- (1.1)

The function f(x, y) may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed.
2. The amount of illumination reflected by the objects in the scene.
These are the illumination and reflectance components, denoted by i(x, y) and r(x, y) respectively. The two functions combine as a product to form f(x, y):

f(x, y) = i(x, y) r(x, y) ---- (1.2)
where
0 < i(x, y) < ∞ ---- (1.3)
0 < r(x, y) < 1 ---- (1.4)

The reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
On a clear day, the sun may produce 90,000 lm/m² of illumination on the surface of the earth; on a cloudy day, the illumination may be 10,000 lm/m². On a clear evening, the moon yields 0.1 lm/m², and in an office the illumination will be about 1,000 lm/m². Similarly, typical values of r(x, y) are:
0.01 for black velvet.
0.65 for stainless steel.
0.80 for flat-white wall paint.
0.90 for silver-plated metal.
0.93 for snow.
The intensity of a monochrome image at any coordinates (x, y) is called the gray level (l) of the image at that point:

l = f(x, y) ---- (1.5)

It is evident that l lies in the range

Lmin ≤ l ≤ Lmax ---- (1.6)

In theory the only requirement on Lmin is that it be positive, and on Lmax that it be finite. In practice

Lmin = imin rmin ---- (1.7)
Lmax = imax rmax ---- (1.8)

The interval [Lmin, Lmax] is called the gray scale. Common practice is to shift this interval numerically to the interval [0, L - 1], where l = 0 is considered black and l = L - 1 is considered white. All intermediate values are shades of gray varying from black to white.

1.7 Image Sampling and Quantization

Our objective is to generate digital images from sensed data. The output of most sensors is a continuous voltage waveform whose amplitude and spatial behaviour are related to the number of photons sensed. Creating a digital image involves two processes:
1. Sampling and
2. Quantization
1.7.1 Basic Concepts in Sampling and Quantization
An image is continuous with respect to the x and y coordinates and also in amplitude. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
The one-dimensional function shown in Fig. 1.15(b) is a plot of amplitude values corresponding to the intensity of the continuous image along the line segment AB in Fig. 1.15(a).
To sample this function, we take equally spaced samples along the line AB. The location of each sample is given by a vertical tick mark in the bottom part of Fig. 1.15(c). The set of these discrete locations gives the sampled function. In order to form a digital function, the gray-level value of these samples must be converted (quantized) into discrete quantities. The gray scale is divided into eight levels ranging from black to white, and the vertical tick marks indicate the specific value assigned to each of the samples from the eight gray levels. Now, the continuous gray levels are

Fig. 1.15 : Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization. (c) Sampling and quantization. (d) Digital scan line.

Fig. 1.16 : Digital image obtained after sampling and quantization

quantized simply by assigning one of the eight discrete gray levels to each sample, as shown in Fig. 1.15(d). Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image, as shown in Fig. 1.16. In practice, the method of sampling is determined by the sensor arrangement used to generate the image. Clearly, the quality of a digital image is determined by the number of samples and discrete gray levels used in sampling and quantization.
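A minimal sketch of these two steps on a synthetic scan line; the intensity profile and the eight-level gray scale are assumptions chosen to mirror Fig. 1.15:

```python
import numpy as np

# A continuous intensity profile along the scan line AB, modeled by a smooth
# function with values in [0, 1] (an assumed stand-in for Fig. 1.15(b)).
def scan_line(t):
    return 0.5 + 0.5 * np.sin(2 * np.pi * t)

# Sampling: take N equally spaced samples along the line.
N = 16
t = np.linspace(0.0, 1.0, N)
samples = scan_line(t)

# Quantization: map each sampled amplitude to one of 8 discrete gray levels (0..7).
levels = 8
digital_scan_line = np.round(samples * (levels - 1)).astype(int)

print(digital_scan_line)   # the digital scan line, as in Fig. 1.15(d)
```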
1.7.2 Representing Digital Images
The result of sampling and quantization is a matrix of real numbers. Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns. The complete M x N digital image is represented in the following matrix form:

f(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
            f(1, 0)      f(1, 1)      ...   f(1, N-1)
            ...          ...                ...
            f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]   ---- (1.9)

Each element of this matrix array is called an image element or pixel. The sampling process may be viewed as partitioning the xy plane into a grid; f(x, y) is a digital image if (x, y) are integers and f is a function that assigns a gray-level value to each distinct pair of coordinates (x, y). This digitization process requires decisions about values for M, N and for the number L of discrete gray levels allowed for each pixel. Due to processing, storage and sampling hardware considerations, the number of gray levels typically is an integer power of 2:

L = 2^K ---- (1.10)

We assume that the discrete levels are equally spaced and that they are integers in the interval [0, L - 1].
The number of bits b required to store a digitized image is b = M x N x K. When M = N this equation becomes

b = N²K ---- (1.11)
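A quick sanity check of Eq. (1.11) in Python; the image sizes below are arbitrary examples:

```python
def bits_to_store(N, K):
    # Eq. (1.11): b = N^2 * K bits for a square N x N image with 2^K gray levels.
    return N * N * K

for N in (32, 256, 1024):
    b = bits_to_store(N, K=8)   # 8-bit image: L = 2^8 = 256 gray levels
    print(f"{N} x {N}, 8 bits/pixel: {b} bits = {b // 8} bytes")
# The 1024 x 1024 case gives 1,048,576 bytes, i.e., one megabyte.
```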
1.7.3 Spatial and Gray-level Resolution

Fig. 1.17 : A 1024 x 1024 image subsampled down to 32 x 32 (1024, 512, 256, 128, 64, 32), with the number of gray levels kept at 256

Fig. 1.18 : Resized images of Fig. 1.17. (a) 1024 x 1024, 8-bit image. (b) 512 x 512 image resampled into 1024 x 1024 pixels, 8-bit image. (c) 256 x 256 image resampled into 1024 x 1024 pixels, 8-bit image. (d) 128 x 128 image resampled into 1024 x 1024 pixels, 8-bit image. (e) 64 x 64 image resampled into 1024 x 1024 pixels, 8-bit image. (f) 32 x 32 image resampled into 1024 x 1024 pixels, 8-bit image.

Sampling is the principal factor determining the spatial resolution of an image. Spatial resolution is the smallest discernible detail in an image. Consider a chart with vertical lines of width W, with the space between the lines also having width W. Thus the width of a line pair is 2W, and there are 1/2W line pairs per unit distance.
Definition of Resolution : The number of smallest discernible line pairs per unit distance. Gray-level resolution is the smallest discernible change in gray level. The number of gray levels is usually an integer power of 2; the most common number is 8 bits. Consider an L-level digital image of size M x N. This image has a spatial resolution of M x N pixels and a gray-level resolution of L levels.
Spatial resolution is explained using the images in Fig. 1.17. The original image is of size 1024 x 1024, whose gray levels are represented by 8 bits (256 gray levels). The 512 x 512 image shown in Fig. 1.17 is obtained from the 1024 x 1024 image by deleting every other row and column. Similarly, the 256 x 256 image is generated by deleting every other row and column from the 512 x 512 image, and in the same way the 128 x 128, 64 x 64 and 32 x 32 images are created. In Fig. 1.17 it is difficult to see the effect of the reduction in the number of pixels, because the displayed dimensions shrink in proportion to the pixel counts.
In Fig. 1.18 all the images with different pixel densities are shown at the same displayed size, so the effect of reducing the number of pixels (reducing spatial resolution) can be seen. Between 1024 x 1024 and 512 x 512 not much difference is seen, but in the 256 x 256 image a checkerboard pattern is seen at borders in the image, and it becomes pronounced in the 64 x 64 and 32 x 32 images.
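A minimal sketch of this subsampling in Python; the random array stands in for the 1024 x 1024, 8-bit image:

```python
import numpy as np

# Stand-in for the 1024 x 1024, 8-bit image of Fig. 1.17.
img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)

# Subsample by deleting every other row and column: 1024 -> 512 -> ... -> 32.
pyramid = [img]
while pyramid[-1].shape[0] > 32:
    pyramid.append(pyramid[-1][::2, ::2])

print([level.shape for level in pyramid])
# [(1024, 1024), (512, 512), (256, 256), (128, 128), (64, 64), (32, 32)]
```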
Here we keep the spatial resolution constant (the number of pixels in the image) and reduce the gray-level resolution by reducing the number of gray levels from 256 to 2 (2^K, where K = 8 down to 1) in integer powers of 2.
Fig. 1.19(a) is a 128 x 128 image with 256 gray levels (2^8). Fig. 1.19(b) has the same spatial resolution (128 x 128) with 128 gray levels, and Fig. 1.19(c) has 64 gray levels. The 256-, 128- and 64-level images are visually identical. The 32-level image shown in Fig. 1.19(d) has an almost unnoticeable set of very fine false edges in the areas of smooth gray levels (in the hexagonal patches of the ball). This effect is pronounced in the 16- and 8-level images, and it is called false contouring.
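A minimal sketch of this gray-level reduction; the random array is a stand-in image, and displaying the low-K versions side by side would reveal the false contours:

```python
import numpy as np

def reduce_gray_levels(img, k):
    # Requantize an 8-bit image to 2^k gray levels, keeping the 0..255 range
    # so the versions can be displayed side by side.
    step = 256 // (2 ** k)
    return (img // step) * step

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
versions = {k: reduce_gray_levels(img, k) for k in range(8, 0, -1)}
print(len(np.unique(versions[1])))   # K = 1: only 2 gray levels remain
```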

Fig. 1.19 : Images illustrating gray-level resolution, from 256 gray levels in (a) down to 2 gray levels in (h)


Isopreference Curves
Isopreference curves are drawn in the N-K plane, as shown in Fig. 1.21, where N is the number of pixels in the image (with M = N) and K is the number of gray-level bits. Each point in the N-K plane represents an image having values of N and K equal to the coordinates of that point. Points lying on an isopreference curve correspond to images of equal subjective quality. For example, the baby face image with N = 230 and K = 4 will have the same quality as the same image with N = 128 and K = 5. The baby's face is representative of an image with a relatively low level of detail, as shown in Fig. 1.20(a). The picture of the flowers contains an intermediate amount of detail, as shown in Fig. 1.20(b). The picture of the group of people contains a relatively large amount of detail, as shown in Fig. 1.20(c).
The isopreference curves tend to become more vertical as the detail in the image increases.

Fig. 1.20 : Example images for isopreference curves. (a) Image with a low level of detail. (b) Image with a medium level of detail. (c) Image with a relatively large amount of detail.

Fig. 1.21 : Isopreference curves in the N-K plane for the face, flower and crowd images (N from 32 to 256 on the horizontal axis, K on the vertical axis)


1.7.4 Zooming and Shrinking Digital Images
Zooming may be viewed as oversampling. Shrinking may be viewed as undersampling.
Application of zooming images
If the resolution at which an image is normally shown is not high enough to grasp the details in the image, one can zoom in. For example, given the image of a model wearing a designer costume, if one wants to view the design and material of the costume in detail, one can zoom and get the details. Zooming is very critical in medical imaging to analyse the infected areas of an organ.
Application of shrinking images
Shrinking of images can be used where many images are required to be displayed on the same screen and when fine details are not required. A micro photocopy of a book is the result of shrinking scanned text images.
Zooming requires two steps:
1. The creation of new pixel locations.
2. Assignment of gray levels to these new locations.
There are three methods for gray-level assignment:
1. Nearest neighbor interpolation.
2. Pixel replication.
3. Bilinear interpolation.
Nearest neighbor interpolation : Suppose that we have an image of size 500 x 500 pixels and we want to enlarge it 1.5 times, to 750 x 750 pixels. Lay an imaginary 750 x 750 grid over the original image; obviously, the spacing in the grid would be less than one pixel. The gray level of the closest original pixel is assigned to each new pixel, and the grid is then expanded to the specified size to obtain the zoomed image.
Pixel replication : Pixel replication is applicable when increasing the size of an image an integer number of times. For instance, to double the size of the image, we can duplicate each column and each row.
Bilinear interpolation : This uses the four nearest neighbours of a point. Let (x', y') denote the coordinates of a point in the zoomed image and let V(x', y') denote the gray level assigned to it. For bilinear interpolation, the assigned gray level is given by V(x', y') = ax' + by' + cx'y' + d, where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of the point (x', y').

Fig. 1.22 : Image zooming using the nearest neighbor interpolation and bilinear interpolation methods. (a)-(c) Images zoomed from 128 x 128, 64 x 64 and 32 x 32 to 1024 x 1024 using nearest neighbor gray-level interpolation. (d)-(f) Images zoomed from 128 x 128, 64 x 64 and 32 x 32 to 1024 x 1024 using bilinear gray-level interpolation.
In Fig. 1.22, (a), (b) and (c) are the images zoomed from 128 x 128, 64 x 64 and 32 x 32 to 1024 x 1024 using the nearest neighbor interpolation method. The equivalent results using bilinear interpolation are shown in Fig. 1.22 (d), (e) and (f). When zooming from 128 x 128 to 1024 x 1024 the overall appearance is almost clear in both methods, but when zooming from 32 x 32 to 1024 x 1024 a checkerboard effect is seen with nearest neighbor interpolation, and with bilinear interpolation there is a severe blurring effect.
Shrinking images follows the same methodology but with the opposite operations.
1.8 Some Basic Relationships Between Pixels
1.8.1 Neighbors of a pixel
4-Neighbors [N4(P)]
A pixel P at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1).
D-Neighbors [ND(P)]
The four diagonal neighbors of P have coordinates (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1).
8-Neighbors [N8(P)]
The N4 neighbors and ND neighbors together are called the 8-neighbors of P; a small sketch of these definitions follows.
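A minimal Python sketch of these neighborhood definitions; the optional bounds check against the image size is an added assumption:

```python
def n4(x, y):
    # Four horizontal and vertical neighbors of pixel P at (x, y).
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # Four diagonal neighbors of P.
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y, rows=None, cols=None):
    # N4(P) and ND(P) together form the 8-neighborhood; coordinates falling
    # outside an image of size rows x cols are optionally dropped.
    nbrs = n4(x, y) + nd(x, y)
    if rows is not None and cols is not None:
        nbrs = [(i, j) for (i, j) in nbrs if 0 <= i < rows and 0 <= j < cols]
    return nbrs

print(n8(0, 0, rows=4, cols=4))   # a corner pixel has only 3 valid neighbors
```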
Adjacency, Connectivity, Regions and Boundaries
To establish that two pixels are connected, they should satisfy 4-adjacency, 8-adjacency or m-adjacency.
