Dip Unit 1
In the past the cost of image processing was very high because imaging sensors and computational equipment were very expensive and had only limited functions. As optics, imaging sensors and computational technology advanced, image processing came to be commonly used in many different areas.
Radiology : Radiology refers to examinations of the inner structure of opaque objects using X-rays or other penetrating radiation. Radiology includes images from X-rays, ultrasound, computed tomography (CT), nuclear medicine, Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI).
X-ray Images
A beam of X-rays is projected towards the part of the body which has to be examined. According to the density and composition of the different areas of the body part, X-rays are absorbed by the body. The X-rays that pass through the body are detected to give a 2D representation in terms of images.
Ultrasound Scanned Images
Ultrasound (frequency around 20,000 Hz) is projected on the organ; the waves travel through it and are reflected back where the density differs. These reflections are detected and imaged, which reveals details of the inner structure of the body organ to be examined.
Sonography is an ultrasound-based diagnostic medical imaging technique used to visualise muscles, tendons and many internal organs, and to capture their size and structure. Obstetric sonography is used during pregnancy to visualise the fetus.
Computed Tomography (CT)
A 3D image of the inside of an organ can be generated using a series of 2D X-ray images taken around a single axis of rotation. CT scans can be used to diagnose complex fractures, especially ones around joints, ligamentous injuries and dislocations. CT scans are used to diagnose conditions of the head, lungs (pulmonary angiogram), and the cardiac, abdominal and pelvic regions.
Thermography
Thermal images are analysed by thermologists (medical doctors trained in thermology) to detect breast cancer, for fever screening (e.g., H1N1) and for monitoring the healing process.
Electro Encephalography (EEG)
EEG is the recording of the electrical activity of the brain. The diagnostic applications of EEG are in cases of epilepsy, coma, encephalopathies, brain death, tumors and stroke.
Electro Cardiography (ECG)
ECG is the measure of the electrical activity of the heart captured over time. It is used to detect heart attack and heart-related diseases.
Remote Sensing
Remote sensing is the gathering of information about an object, area or phenomenon without being in physical contact with it. Images acquired by satellite are used in remote sensing, i.e., tracking of earth resources, prediction of agricultural crops, urban growth, weather forecasting, flood control and fire control.
Astronomy
Image processing is used in astronomy to analyse the solar system and celestial bodies like the moon, stars and other planets.
Business
Digital image transmission helps in journalism. People from different countries can work together using teleconferencing, through which people can communicate while seeing each other on displays. Industries can be automated using digital image processing.
Entertainment
Digital videos can be broadcast and received by television. Videos can be transmitted through the internet on YouTube. Video games exist because of image processing.
Security and Surveillance
Small target detection and tracking, missile guidance, vehicle navigation, wide area surveillance and automated aided target recognition can be done using image processing, as can biometric image processing for personal authentication and identification.
Robotics
A robot is an electromechanical machine which is guided by computer and electronic programming to emulate human behaviour. The camera and related network work as eyes for the robot.
[Fig. : Fundamental image processing techniques — wavelets and multiresolution processing, morphological image processing, color image processing, compression, image restoration and image segmentation — are generally treated as the core processes.]
Image Restoration : Restoration means getting something back. Image restoration is an objective process; it is the removal of noise in the image. Restoration techniques are based on mathematical or probabilistic models of image degradation.
Color Image Processing : This includes the fundamental concepts of color models and basic color processing in the digital domain.
Wavelets : Using wavelets, images can be represented in various degrees of resolution (multi-resolution). Wavelets are used in image data compression and for pyramidal representation, in which an image is subdivided into smaller regions.
Compression : Compression is a technique used to reduce the storage required to save an image or the bandwidth required to transmit it. Compression is useful on the internet, which has to carry significant pictorial content. JPEG image files are compressed images.
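As a toy illustration of the idea (not the JPEG scheme itself, which is far more elaborate), a run-length encoder replaces repeated pixel values with (value, count) pairs; `rle_encode` is a hypothetical helper written for this sketch:

```python
# Minimal run-length encoding of a row of pixel values.
# Long runs of identical values (common in flat image regions)
# compress to a single (value, count) pair.
def rle_encode(pixels):
    runs, i = [], 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1
        runs.append((pixels[i], j - i))  # (value, run length)
        i = j
    return runs

row = [0, 0, 0, 255, 255, 0]
print(rle_encode(row))  # [(0, 3), (255, 2), (0, 1)]
```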
Morphological Processing : Morphological image processing deals with tools for extracting image components that are useful in the representation and description of shapes. Morphological processing begins a transition from processes that output images to processes that output image attributes.
Segmentation : Segmentation procedures partition an image into its constituent parts or objects.
Autonomous segmentation or rugged segmentation leads to object identification.
Representation and Description : Segmentation usually gives raw pixel data constituting either the boundary of a region or all the points in the region itself. Boundary representation is suitable when the focus is on external shape. Regional representation is appropriate when the focus is on internal characteristics such as texture. Choosing a representation is only a part of the solution for transforming raw data into a form suitable for subsequent computer processing. Description, also called feature selection, deals with extracting attributes that result in some quantitative information to identify objects.
Recognition : Recognition is a process that assigns a label to an object based on its description.
Knowledge Base : A knowledge base is a special kind of database for knowledge management. The knowledge base gives knowledge about the problem domain to the image processing system. It also guides the operation of each processing module and controls the interaction between modules.
[Fig. 1.2 : Components of a general purpose image processing system — image sensors, the problem domain, the network and the components described below.]
Specialized Image Processing Hardware : This usually consists of the digitizer plus hardware that performs primitive operations, such as averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware is called a front-end subsystem. This unit performs functions that require fast data throughputs that the typical main computer cannot handle.
Computer : Image processing requires intensive processing capability as it has to handle large data, so anything from a general-purpose computer to a supercomputer may be required.
Software : It consists of specialized modules that perform specific tasks, such as enhancing the image or filtering the image for restoration. More sophisticated software packages allow the integration of these specialized modules and of general purpose software commands from at least one computer language, for user-friendly operation.
Mass Storage : This capability is a must in image processing applications. Usually an image processing system deals with thousands or even millions of images, and each uncompressed image may require megabytes of storage (about 1 MB for an uncompressed 1024 × 1024, 8-bit image). Digital storage for image processing falls into three main categories:
1. Short term storage for use during processing. Computer memory can be used as short term storage.
Digital Image Fundamentals

[Fig. 1.3 : Simplified diagram of a cross section of the human eye, with labels: cornea, iris, ciliary body, lens, ciliary fibers, visual axis, vitreous humor, retina, blind spot, fovea, sclera, choroid, nerve and sheath.]
[Fig. 1.4 : Graphical representation of the eye looking at a tree 15 m high at a distance of 100 m. Point C is the optical center of the lens.]
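The geometry of Fig. 1.4 can be worked through numerically with similar triangles. The 17 mm lens-center-to-retina distance used below is an assumed, commonly quoted anatomical figure, not a value stated in the caption above; `retinal_image_height_mm` is a helper name chosen for this sketch:

```python
# Similar triangles: h / d_retina = H / D, so the image of a 15 m tree
# viewed from 100 m, formed ~17 mm behind the lens center, has height
# h = (15 / 100) * 17 mm. The 17 mm figure is an assumption.
def retinal_image_height_mm(object_height_m, object_distance_m, focal_gap_mm=17.0):
    """Height of the image formed on the retina, in millimetres."""
    return object_height_m / object_distance_m * focal_gap_mm

h = retinal_image_height_mm(15.0, 100.0)
print(f"retinal image height = {h:.2f} mm")  # 2.55 mm
```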
1.4.3 Brightness Adaptation and Discrimination
Since a digital image is displayed as a discrete set of intensities, the eye's ability to discriminate between different intensity levels is an important consideration in presenting image processing results. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye. Fig. 1.5 is a plot of light intensity versus subjective brightness. The long solid curve represents the range of intensities to which the visual system can adapt. The visual system cannot operate over such a range simultaneously; it accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as brightness adaptation. The total range of distinct intensity levels it can discriminate simultaneously is small. The short intersecting curve represents the range of subjective brightness that the eye can perceive when adapted to the level B_a. This range is restricted, having a level B_b at and below which all stimuli are perceived as indistinguishable black.
Image Processing

[Fig. 1.5 : Plot of subjective brightness versus log of light intensity, showing the glare limit, the adaptation range around an adaptation level B_a, and the scotopic and photopic ranges with the scotopic threshold.]
To illustrate brightness discrimination, a subject looks at a flat, uniformly illuminated area with intensity I. To this field is added an increment of illumination ΔI, in the form of a short-duration flash that appears as a circle at the center of the uniformly illuminated field. If ΔI is not bright enough, the subject says "no", indicating no perceivable change. As ΔI gets stronger the subject may give a positive response of "yes", indicating a perceived change. The quantity ΔI_c/I, where ΔI_c is the increment of illumination discriminable against background illumination I, is called the Weber ratio. A small value of ΔI_c/I means that a small percentage change in intensity is discriminable.
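The Weber ratio described above is a simple quotient; the following sketch uses illustrative numbers (not measured experimental data) to show how the ratio is read:

```python
# Weber ratio: just-noticeable illumination increment dI_c divided by
# the background intensity I. Smaller ratio => finer discrimination.
def weber_ratio(delta_I_c, I):
    return delta_I_c / I

# Illustrative values only:
print(weber_ratio(0.02, 1.0))  # 0.02 -> a 2% change is discriminable (good)
print(weber_ratio(0.5, 1.0))   # 0.5  -> only a 50% change is seen (poor)
```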
[Plot of log ΔI_c/I as a function of log I — the Weber ratio as a function of background intensity.]

[Illustration comparing perceived brightness with actual illumination.]

Fig. 1.10 : Some well known optical illusions (a), (b) and (c)
In optical illusions the eye fills in non-existent information, as shown in Fig. 1.10. In Fig. 1.10 (a) a square seems to exist between the circles, but there is no outline of the square. Similarly, a circle seems to exist at the centre of the lines in Fig. 1.10 (b). In Fig. 1.10 (c) the two horizontal lines are of the same length, but one seems to be shorter than the other.
1.5 Image Sensing and Acquisition
Images are generated by the combination of an "illumination" source and the reflection or
absorption of energy from that source by the elements of the "scene" being imaged.
[Fig. 1.11 : Use of a single sensor to generate a 2-D image. (a) Setup to image using a single sensor: rotation of the sensing material combined with linear motion; one image line is output per increment of rotation and full linear displacement of the sensor from left to right. (b) Image sensor, with housing, power in and voltage waveform out.]
The scene to be imaged has to remain constant for such a long time. Flat bed devices with a similar mechanical arrangement, in which the sensor moves in two linear directions, are also used. These types of mechanical digitizers are sometimes referred to as microdensitometers.
1.5.2 Image Acquisition using sensor strips
The sensor strip provides imaging elements in one direction, as shown in Fig. 1.13 (a). Motion perpendicular to the strip provides imaging in the other direction, as shown in Fig. 1.12. Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional images of 3-D objects. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object. The output of the sensors must be processed by a reconstruction algorithm whose objective is to transform the sensed data into meaningful cross-sectional images.
[Fig. 1.13 : (a) Linear array sensor strip with linear motion (b) Sensors in matrix form.]
[Fig. 1.15 : Generating a digital image. (a) Continuous image produced by an imaging system from an illumination (energy) source; the output is a digitized image. (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization. (c) Sampling and quantization. (d) Digital scan line.]
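The sampling and quantization of a scan line, as in Fig. 1.15 (b)-(d), can be sketched in a few lines of NumPy. This is a minimal illustration, with an assumed sine test signal standing in for the continuous scan line:

```python
import numpy as np

# Sample a "continuous" scan line f(x) at N equally spaced points,
# then quantize each sample to one of L integer gray levels.
def sample_and_quantize(f, x_max, num_samples, levels):
    xs = np.linspace(0.0, x_max, num_samples)                # sampling
    samples = f(xs)
    lo, hi = samples.min(), samples.max()
    q = np.round((samples - lo) / (hi - lo) * (levels - 1))  # quantization
    return q.astype(int)

line = sample_and_quantize(lambda x: np.sin(x), 2 * np.pi, 16, 8)
print(line)  # 16 integer gray levels, each in the range 0..7
```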
[Fig. 1.17 : A 1024 × 1024 image subsampled down to 32 × 32 (1024, 512, 256, 128, 64, 32), with the number of gray levels kept at 256.]
(c) 256 × 256 image resampled into 1024 × 1024 pixels, 8-bit image. (d) 128 × 128 image resampled into 1024 × 1024 pixels, 8-bit image.
Sampling is the principal factor determining the spatial resolution of an image. Spatial resolution is the smallest detail in an image. Consider a chart with vertical lines of width W, with the space between the lines also having width W. Thus the width of a line pair is 2W, and there are 1/2W line pairs per unit distance.
Definition of Resolution : The number of smallest discernible line pairs per unit distance. Gray level resolution is the smallest discernible change in gray level. The number of gray levels is usually an integer power of 2; the most common number is 8 bits. Consider an L-level digital image of size M × N. This image has a spatial resolution of M × N pixels and a gray level resolution of L levels.
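The storage implied by these resolutions follows directly: an M × N image with L = 2^k gray levels needs b = M × N × k bits. A quick sketch:

```python
# Bits needed to store an M x N image with 2**k gray levels:
# b = M * N * k (the standard back-of-the-envelope formula).
def image_storage_bits(M, N, k):
    return M * N * k

bits = image_storage_bits(1024, 1024, 8)     # 8 bits -> 256 gray levels
print(bits, "bits =", bits // 8, "bytes")    # 1048576 bytes, i.e. 1 MB
```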
Spatial resolution is explained using the images in Fig. 1.17. This image is of size 1024 × 1024, with gray levels represented by 8 bits (256 gray levels). The other image shown in Fig. 1.17, of size 512 × 512, is obtained from the 1024 × 1024 image by deleting every other row and column. Similarly, a 256 × 256 image is generated by deleting every other row and column from the 512 × 512 image, and in the same way 128 × 128, 64 × 64 and 32 × 32 images are created. It is difficult to see the effect of the reduction in the number of pixels in an image because of the dimensional proportions between the various image pixel densities.
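The subsampling described above — deleting every other row and column — is exactly NumPy slicing with a step of 2. A minimal sketch with a synthetic array standing in for the image:

```python
import numpy as np

# Delete every other row and column repeatedly, as in Fig. 1.17.
img_1024 = np.arange(1024 * 1024).reshape(1024, 1024)  # synthetic "image"
img_512 = img_1024[::2, ::2]   # 1024 x 1024 -> 512 x 512
img_256 = img_512[::2, ::2]    # 512 x 512  -> 256 x 256
print(img_512.shape, img_256.shape)  # (512, 512) (256, 256)
```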
In Fig. 1.18 all images with different pixel densities are shown with the same dimensions, so the effect of the reduction in the number of pixels in an image (reducing spatial resolution) can be seen. Between the 1024 × 1024 and 512 × 512 images not much difference is seen, but in the 256 × 256 image a checkerboard pattern is seen at the borders, and it becomes pronounced in the 64 × 64 and 32 × 32 images.
Here we keep the spatial resolution constant (the number of pixels in the image) and reduce the gray level resolution by reducing the number of gray levels from 256 to 2 (2^k, where k = 8 down to 1), in integer powers of 2.
Fig. 1.19 (a) is a 128 × 128 image with 256 gray levels (2^8). Fig. 1.19 (b) has the same spatial resolution (128 × 128) with 128 gray levels, and Fig. 1.19 (c) has 64 gray levels. The 256-, 128- and 64-level images are visually identical. The 32-level image shown in Fig. 1.19 (d) has an almost imperceptible set of very fine false edges in the areas of smooth gray levels (in the hexagonal patches of the ball). This effect is pronounced in the 16- and 8-level images. This effect is called false contouring.
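Reducing the gray level resolution of an 8-bit image to 2^k levels, as described above, can be sketched with integer division; applied to smooth regions, this quantization is what produces the false contours:

```python
import numpy as np

# Keep spatial resolution fixed but map 256 gray levels onto 2**k levels.
def reduce_gray_levels(img, k):
    """Quantize an 8-bit image to 2**k levels, keeping the 0-255 range."""
    step = 256 // (2 ** k)
    return (img // step) * step

ramp = np.arange(256, dtype=np.uint8)   # a smooth 8-bit gray ramp
coarse = reduce_gray_levels(ramp, 4)    # 16 levels -> visible banding
print(len(np.unique(coarse)))           # 16 distinct levels remain
```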
[Plot of typical isopreference curves for the Face, Flower and Crowd images in the N-k plane (N = 32, 64, 128, 256).]
(d), (e) and (f) : Images zoomed from 128 × 128, 64 × 64 and 32 × 32 to 1024 × 1024 using bilinear gray level interpolation.
Fig. 1.22 : Image zooming using nearest neighbor interpolation and bilinear interpolation methods
Fig. 1.22 (a), (b) and (c) are the images zoomed from 128 × 128, 64 × 64 and 32 × 32 to 1024 × 1024 using the nearest neighbor interpolation method. The equivalent results using bilinear interpolation are shown in Fig. 1.22 (d), (e) and (f). In zooming from 128 × 128 to 1024 × 1024 the overall appearance is almost clear in both methods, but in zooming from 32 × 32 to 1024 × 1024 a checkerboard effect is seen with nearest neighbor interpolation, while with bilinear interpolation there is a severe blurring effect. Shrinking images follows the same methodology but with the opposite operations.
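Nearest neighbor zooming, the method that produces the checkerboard effect above, can be sketched as follows; `zoom_nearest` is a name chosen for this illustration, and real libraries offer more refined resampling:

```python
import numpy as np

# Nearest-neighbor zoom: every output pixel copies the closest input
# pixel, so each source pixel becomes a factor x factor block.
def zoom_nearest(img, factor):
    rows = np.arange(img.shape[0] * factor) // factor  # source row index
    cols = np.arange(img.shape[1] * factor) // factor  # source col index
    return img[np.ix_(rows, cols)]

small = np.array([[0, 255], [255, 0]], dtype=np.uint8)
big = zoom_nearest(small, 4)   # 2 x 2 -> 8 x 8 blocky image
print(big.shape)               # (8, 8)
```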
1.8 Some Basic Relationships Between Pixels
1.8.1 Neighbors of a Pixel
4-Neighbors [N4(P)]
A pixel P at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1).
Diagonal Neighbors [ND(P)]
The four diagonal neighbors of P have coordinates (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1).
8-Neighbors [N8(P)]
The N4 neighbors and ND neighbors together are called the 8-neighbors of P.
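These definitions translate directly into small set-valued helpers. The names `n4`, `nd` and `n8` are chosen for this sketch, and clipping for pixels at the image border is omitted:

```python
# Neighborhoods of a pixel P at (x, y), following the definitions above.
def n4(x, y):
    """Horizontal and vertical neighbors."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: union of the 4-neighbors and diagonal neighbors."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))  # the eight pixels surrounding (1, 1)
```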
Adjacency, Connectivity, Regions and Boundaries
To establish that two pixels are connected, they should satisfy 4-adjacency, 8-adjacency or m-adjacency (mixed adjacency).
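The geometric part of the first two adjacency tests can be sketched as below. This omits the value-set condition V (pixel values must belong to an agreed set for adjacency to hold) and m-adjacency's tie-breaking rule; `adjacent4` and `adjacent8` are illustrative names:

```python
# Geometric adjacency tests between pixel coordinates p and q.
def adjacent4(p, q):
    """q is one of the 4-neighbors of p (Manhattan distance 1)."""
    (px, py), (qx, qy) = p, q
    return abs(px - qx) + abs(py - qy) == 1

def adjacent8(p, q):
    """q is one of the 8-neighbors of p (Chebyshev distance 1)."""
    (px, py), (qx, qy) = p, q
    return max(abs(px - qx), abs(py - qy)) == 1

print(adjacent4((0, 0), (0, 1)), adjacent8((0, 0), (1, 1)))  # True True
```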