Why Image Processing?
❖ Improvement of pictorial information for human perception.
That is, whatever image we acquire, we want to enhance its
quality so that it looks better to the viewer.
❖ Processing image data for storage, transmission, &
representation for autonomous machine perception.
What is an image?
Projection of a 3D scene onto a 2D plane.
A two-dimensional function, f(x, y), with x and y as spatial
coordinates, and the amplitude of f at any pair of
coordinates (x, y) as the intensity or gray level of the image
at that point.
When this mathematical representation has a continuous
range of values for position and intensity, the image is
called an analog image.
When x, y, and the intensity values of f are all finite,
discrete quantities, the image is a digital image.
Analog Image → Sampling → Quantization → Digital Image
A digital image contains a finite number of elements, each
of which has a particular location and value.
These elements are called picture elements, image
elements, pels, or pixels.
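To make this concrete, here is a minimal sketch (in Python with NumPy; the array values are illustrative, not from the slides) of a digital image as a finite 2-D array of pixels:

```python
# A digital image as a finite 2-D array: f[x, y] holds the gray level at (x, y).
import numpy as np

# A hypothetical 4x4 8-bit grayscale image; each entry is one pixel in [0, 255].
f = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 64, 128, 192, 255],
    [  0,   0, 128, 128],
], dtype=np.uint8)

print(f.shape)   # (4, 4): M rows and N columns -> a finite number of elements
print(f[1, 2])   # 160: the value of one picture element (pixel)
```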
Digital Image Processing
Processing of digital images by means of a digital
computer.
In other words, digital image processing means the analysis
and manipulation of a digital image in order to improve its
quality.
Advantages of Digital Image Processing
❖ Humans are limited to the visual band of the
electromagnetic (EM) spectrum.
❖ Imaging machines, however, cover almost the entire EM
spectrum, ranging from gamma rays to radio waves.
❖ Thus, DIP can operate on images generated by sources
that humans are not capable of sensing.
A better categorization is by the type of input and output:
Low-level processes: both input and output are images (see
the sketch after this list)
❖ Noise reduction
❖ Contrast enhancement
❖ Image sharpening
Mid-level processes: inputs are images, outputs are
image attributes
❖ Segmentation
❖ Classification of objects
High-level processes: “making sense” of the ensemble
of recognized objects, for performing cognitive functions.
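As a concrete illustration of a low-level process, here is a minimal contrast-stretching sketch in Python with NumPy; the function name and the test image are illustrative assumptions, not from the slides. Note that both its input and its output are images:

```python
# Low-level process: contrast enhancement by linear contrast stretching.
import numpy as np

def stretch_contrast(f: np.ndarray) -> np.ndarray:
    """Linearly map the image's intensity range onto the full [0, 255] range."""
    f = f.astype(np.float64)
    lo, hi = f.min(), f.max()
    if hi == lo:                       # flat image: nothing to stretch
        return f.astype(np.uint8)
    g = (f - lo) / (hi - lo) * 255.0   # rescale intensities to [0, 255]
    return g.astype(np.uint8)

# A low-contrast test image: intensities only span roughly [100, 150].
f = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
g = stretch_contrast(f)
print(f.min(), f.max())   # about 100 150: narrow input range
print(g.min(), g.max())   # 0 255: full output range after stretching
```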
DIP Applications
Imaging modalities span nearly the entire EM spectrum:
❖ Gamma-ray imaging
❖ X-ray imaging
❖ Ultraviolet imaging
❖ Visible-band imaging
❖ Microwave imaging
❖ Infrared imaging
❖ Radio-wave imaging
(Figure: the same object imaged in the Gamma, X-ray, Optical, Infrared, and Radio bands.)
Components of a DIP System
(Figure: block diagram of the components of a general-purpose image processing system, from the problem domain through the image sensor.)
Two subsystems are required to acquire digital images:
❖ The first is a physical sensor that responds to the energy
radiated by the object we wish to image.
❖ The second, called a digitizer, is a device for converting the
output of the physical sensing device into digital form.
Specialized image processing hardware usually consists of:
❖ the digitizer, and
❖ hardware that performs other primitive operations, such as an
arithmetic logic unit (ALU) that performs arithmetic and
logic operations in parallel on entire images.
❖ This unit performs functions that require fast data
throughput (e.g., digitizing and averaging video images at
30 frames/s) that the typical main computer cannot handle.
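For intuition, here is a minimal software sketch of the frame-averaging operation mentioned above, assuming frames arrive as NumPy arrays (in a real front end this runs in dedicated hardware at video rates):

```python
# Noise reduction by averaging K video frames of a static scene:
# the noise variance drops roughly by a factor of K.
import numpy as np

def average_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Pixel-wise average of a burst of frames."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame in frames:
        acc += frame                   # running pixel-wise sum
    return (acc / len(frames)).astype(np.uint8)

# Simulate 30 noisy frames (as at 30 frames/s) of a constant mid-gray scene.
rng = np.random.default_rng(0)
clean = np.full((48, 64), 128.0)
frames = [np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255)
          for _ in range(30)]
avg = average_frames(frames)
print(np.abs(avg - clean).mean())      # far below a single frame's error (~16)
```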
The computer in an image processing system is a general-
purpose computer and can range from a PC to a supercomputer.
Image processing software consists of specialized modules
that perform specific tasks.
Mass storage is a must in image processing applications.
❖ An image of size 1024 × 1024 pixels, in which the intensity
of each pixel is an 8-bit quantity, requires one megabyte of
storage space if the image is not compressed (see the quick
check below).
❖ When dealing with image databases that contain thousands,
or even millions, of images, providing adequate storage in an
image processing system can be a challenge.
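A quick back-of-the-envelope check of the figure above (a minimal sketch; the million-image extrapolation is illustrative):

```python
# Uncompressed storage for a 1024 x 1024 image at 8 bits (1 byte) per pixel.
width, height, bytes_per_pixel = 1024, 1024, 1
image_bytes = width * height * bytes_per_pixel
print(image_bytes)                        # 1048576 = 2**20 bytes = 1 Mbyte
print(1_000_000 * image_bytes / 10**12)   # ~1.05: a million images is ~1 Tbyte
```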
Digital storage for image processing applications falls into three
principal categories:
❖ short-term storage for use during processing;
❖ on-line storage for relatively fast recall; and
❖ archival storage, characterized by infrequent access.
Storage is measured in
▪ bytes (eight bits),
▪ Kbytes (10³ bytes),
▪ Mbytes (10⁶ bytes),
▪ Gbytes (10⁹ bytes), and
▪ Tbytes (10¹² bytes).
Image displays
❖ Monitors are driven by the outputs of image and graphics
display cards that are an integral part of the computer
system.
Hardcopy
❖ Devices for recording images include laser printers, film
cameras, heat-sensitive devices, ink-jet units, and digital
units, such as optical and CD-ROM disks.
❖ Film provides the highest possible resolution, but paper is
the obvious medium of choice for written material.
❖ For presentations, images are displayed on film
transparencies or in a digital medium if image projection
equipment is used.
Light and the Electromagnetic Spectrum
The colors perceived in an object are determined by the nature of the light
reflected by it.
Monochromatic (achromatic) light is light that is void of color; it is
represented only by its intensity (gray level), ranging from black to white.
Chromatic light spans the electromagnetic energy spectrum from 0.43 to
0.79 μm.
Radiance: total amount of energy that flows from the light source,
measured in watts (W).
Luminance: amount of energy an observer perceives from a light source,
measured in lumens (lm).
Brightness: a subjective descriptor of light perception, impossible to
measure, representing the achromatic notion of intensity.
Image Sensing and Acquisition
Images are generated by the
combination of an “illumination”
source and the reflection or
absorption of energy from the
source by the elements of the
“scene” being imaged.
The illumination may originate from
a source of electromagnetic energy,
such as a radar, infrared, or X-ray
system. But it could also originate
from less traditional sources, such as
ultrasound or even a computer-generated
illumination pattern.
Image Formation Model
❑ An image is denoted by a function f(x, y), where the value of f at spatial
coordinates (x, y) is a scalar quantity proportional to the energy radiated
by a physical source.
❑ The values of f are non-negative and finite: 0 ≤ f(x, y) < ∞.
Function f(x, y) is characterized by two components:
▪ Illumination i(x, y): the amount of source illumination incident on the
scene being viewed.
▪ Reflectance r(x, y): the amount of illumination reflected by the objects
in the scene.
f(x, y) = i(x, y) · r(x, y)
0 ≤ i(x, y) < ∞
0 ≤ r(x, y) ≤ 1
In some cases, for example X-ray imaging, we have transmissivity instead
of reflectance.
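Here is a minimal numeric sketch of this model, assuming simple synthetic illumination and reflectance arrays (the names and values are illustrative):

```python
# Image formation: f(x, y) = i(x, y) * r(x, y).
import numpy as np

M, N = 64, 64
# Illumination i(x, y): non-negative, here a smooth left-to-right falloff.
i = np.tile(np.linspace(200.0, 50.0, N), (M, 1))
# Reflectance r(x, y): in [0, 1], here a dark scene with one bright square.
r = np.full((M, N), 0.2)
r[16:48, 16:48] = 0.9

f = i * r                       # the recorded image
assert (f >= 0).all()           # 0 <= f(x, y) < infinity
assert (r >= 0).all() and (r <= 1).all()
print(f.min(), f.max())
```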
Image Sampling and Quantization
To create a digital image, we need to convert the continuous sensed data
into a digital format.
Two processes are required:
➢ Sampling: digitization in the spatial domain (the x and y coordinates)
➢ Quantization: digitization of the amplitude (intensity) values
Assuming f(s, t) is a continuous image
function, sampling and quantization
produce the digital image f(x, y),
containing M rows and N columns.
The spatial coordinates take integer
values: x = 0, 1, 2, …, M−1 and
y = 0, 1, 2, …, N−1.
For image f(x, y), the number of
intensity levels L is expressed as a
power of 2: L = 2^k, where k is the
number of bits. For example, an 8-bit
image has L = 2^8 = 256 intensity levels.
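A minimal sketch of the quantization step, assuming a continuous-valued intensity ramp in [0, 1] (the helper name is illustrative): k bits give L = 2^k discrete gray levels.

```python
# Quantization: map continuous intensities onto L = 2**k discrete levels.
import numpy as np

def quantize(f: np.ndarray, k: int) -> np.ndarray:
    """Map values in [0, 1] onto L = 2**k evenly spaced integer levels."""
    L = 2 ** k
    return np.clip((f * L).astype(int), 0, L - 1)

f = np.linspace(0.0, 1.0, 10)   # a continuous-valued intensity ramp
print(quantize(f, 1))           # 2 levels: [0 0 0 0 0 1 1 1 1 1]
print(quantize(f, 3))           # 8 levels: [0 0 1 2 3 4 5 6 7 7]
```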
Spatial and Intensity Resolution
Spatial resolution: the size of the
smallest perceptible detail in an
image.
It may be measured by the number
of pixels per unit distance.
Spatial resolution depends on the
sampling rate.
Intensity resolution: the smallest discernible change in intensity level.
It is measured by the number of bits used for quantization.
Both are digitization-dependent:
➢ Spatial resolution depends on the number of samples (N)
➢ Intensity resolution depends on the number of bits (k)
Different artifacts:
➢ Spatial resolution that is too low results in jagged lines
➢ Intensity resolution that is too low results in false contouring
Sensitivity:
➢ Spatial resolution is more sensitive to shape variations
➢ Intensity resolution is more sensitive to lighting variations
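To see the two digitization axes side by side, here is a minimal sketch, assuming a NumPy grayscale ramp with values in [0, 255] (the helper names are illustrative): coarser sampling blocks up shapes, while fewer bits collapse gray levels into visible bands.

```python
# Spatial vs. intensity resolution: two different degradations.
import numpy as np

def reduce_spatial(f: np.ndarray, factor: int) -> np.ndarray:
    """Subsample, then replicate pixels: coarse sampling -> jagged, blocky edges."""
    small = f[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def reduce_intensity(f: np.ndarray, k: int) -> np.ndarray:
    """Keep only 2**k gray levels: too few levels -> false contouring."""
    step = 256 // (2 ** k)
    return (f // step) * step

# A smooth horizontal ramp, 64 rows by 256 columns.
f = np.linspace(0, 255, 256, dtype=np.uint8)[None, :].repeat(64, axis=0)
print(len(np.unique(reduce_intensity(f, 3))))   # 8 distinct gray levels (banding)
print(reduce_spatial(f, 8).shape)               # (64, 256): same size, blockier
```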