
Q. Discuss color models used in Digital Image Processing.

### **Color Models in Digital Image Processing**

Color models are mathematical systems used to represent colors in a way that both humans and computers can understand. They are essential for **image processing, storage, and display**.

#### **1. RGB Color Model**

- **Definition**: Represents colors using three components – **Red (R), Green (G), and Blue (B)**.
- **Range**: Each component ranges from 0 to 255 in digital images (8-bit).
- **Usage**: Used in monitors, cameras, scanners.
- **Advantage**: Simple and directly related to human vision.

#### **2. CMY/CMYK Color Model**

- **Definition**: Based on **Cyan (C), Magenta (M), Yellow (Y), and Key/Black (K)**.
- **Principle**: Subtractive color model (used in printing).
- **Usage**: Printers, publishing, and graphic design.
- **Advantage**: Produces high-quality printed colors.

#### **3. HSV / HSI Color Model**

- **HSV (Hue, Saturation, Value)**
  - **Hue** → Type of color (e.g., red, blue, green).
  - **Saturation** → Purity or intensity of the color.
  - **Value** → Brightness of the color.
- **HSI (Hue, Saturation, Intensity)**
  - Similar to HSV but emphasizes **intensity** instead of brightness.
- **Usage**: Image enhancement, segmentation, and computer vision.
- **Advantage**: Matches human perception better than RGB.

#### **4. YCbCr Color Model**

- **Definition**: Separates the image into **luma (Y)** and **chroma (Cb, Cr)**.
  - **Y → Luminance** (brightness), **Cb/Cr → Chrominance** (color information).
- **Usage**: JPEG compression, television broadcasting, video coding.
- **Advantage**: Efficient for compression and transmission.

#### **5. YIQ Color Model (Used in TV Broadcasting)**

- **Y**: Luminance (brightness).
- **I, Q**: Chrominance (color information).

### **Conclusion**

- **RGB** → Used in displays and sensors.
- **CMYK** → Used in printing.
- **HSV/HSI** → Useful for segmentation and human perception tasks.
- **YCbCr** → Ideal for image compression and video.

Q. What is Discrete Cosine Transform (DCT) and how does it differ from the Discrete Fourier Transform (DFT)?

Discrete Cosine Transform (DCT) and Its Difference from Discrete Fourier Transform (DFT)

1. Definition of DCT

- The Discrete Cosine Transform (DCT) is a mathematical technique used to transform a signal or image from the spatial domain (pixels) into the frequency domain (coefficients).
- It uses only cosine functions to represent the signal.
- Formula for 1D DCT:

  C(u) = α(u) · Σ_{x=0}^{N−1} f(x) · cos[ (2x + 1)uπ / 2N ], for u = 0, 1, …, N−1

  where α(u) = √(1/N) for u = 0 and √(2/N) otherwise.

- In image processing, the 2D DCT is widely used for image compression (JPEG).

2. Properties of DCT

- Energy compaction: most of the signal energy is concentrated in a few low-frequency components.
- Uses real values only (no complex numbers).
- Reduces redundancy and is efficient for compression.

3. Definition of DFT

- The Discrete Fourier Transform (DFT) represents a sequence in terms of sinusoidal functions (sine + cosine).
- Formula:

  F(u) = Σ_{x=0}^{N−1} f(x) · e^{−j2πux/N}, for u = 0, 1, …, N−1

- Produces complex output (real + imaginary parts).
- Used in spectrum analysis, filtering, and signal processing.

4. Differences between DCT and DFT

| Feature | DCT | DFT |
| --- | --- | --- |
| Basis functions | Cosine only | Sine + cosine (complex exponentials) |
| Output | Real values only | Complex values (real + imaginary) |
| Energy compaction | Higher (good for compression) | Lower compared to DCT |
| Applications | JPEG, MPEG, image/video compression | Signal analysis, filtering, audio processing |
| Computation | More efficient, fewer coefficients needed | More storage due to complex numbers |

5. Conclusion

- DCT is mainly used in image and video compression because it represents signals with fewer coefficients (energy compaction).
- DFT is more general and used in signal/spectrum analysis, but it is less efficient for compression due to its complex values.
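A minimal sketch of the contrast, assuming NumPy and SciPy are available (the sample signal is illustrative): the DCT output is purely real while the DFT output is complex, and for a smooth signal the DCT tends to pack more energy into its first few coefficients.

```python
import numpy as np
from scipy.fft import dct, fft

x = np.array([10.0, 11.0, 12.0, 11.0, 10.0, 9.0, 8.0, 9.0])  # smooth sample signal

X_dct = dct(x, type=2, norm="ortho")   # real coefficients
X_dft = fft(x)                         # complex coefficients
print(X_dct.dtype, X_dft.dtype)        # float64 vs. complex128

# Fraction of total energy captured by the first two coefficients:
def energy_fraction(c, k=2):
    e = np.abs(c) ** 2
    return e[:k].sum() / e.sum()

print("DCT:", energy_fraction(X_dct), "DFT:", energy_fraction(X_dft))
```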

Q. Compute the 2D DFT of the 4×4 grayscale image:

f(x, y) = [1 1 1 1]
          [1 1 1 1]
          [1 1 1 1]
          [1 1 1 1]

Solution (using the standard unnormalized forward DFT):

F(u, v) = Σ_{x=0}^{3} Σ_{y=0}^{3} f(x, y) · e^{−j2π(ux + vy)/4}

Since f(x, y) = 1 for every pixel, the complex exponentials sum to zero over a full period for any (u, v) ≠ (0, 0). Therefore:

- F(0, 0) = 4 × 4 = 16 (the DC component, i.e., the sum of all pixel values).
- F(u, v) = 0 for all other (u, v).

Note on normalization (exam remark): if your convention uses a 1/(MN) factor in the forward DFT, the DC becomes 16/16 = 1. With the standard unnormalized forward DFT (as above), the DC is 16. Always state your convention.
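A quick NumPy check of this result (numpy.fft uses the unnormalized forward convention by default, matching the remark above):

```python
import numpy as np

f = np.ones((4, 4))
F = np.fft.fft2(f)           # forward 2D DFT, no 1/(MN) factor
print(np.round(F.real, 6))   # 16 at (0, 0), zeros everywhere else
```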

Q. What is Image Compression and why is it important in digital imaging?

Image Compression and Its Importance

1. Definition of Image Compression

- Image compression is the process of reducing the size of an image file without significantly affecting its visual quality.
- It works by removing redundancies in image data (spatial, spectral, or psycho-visual redundancies).
- Compression can be:
  - Lossless → The original image can be exactly reconstructed.
  - Lossy → Some data is lost, but human eyes may not notice the difference.

2. Types of Image Compression

1. Lossless Compression
   - No data is lost.
   - Examples: PNG, GIF, TIFF.
   - Useful for medical, legal, or scientific images.
2. Lossy Compression
   - Irreversible data loss, but achieves higher compression ratios.
   - Examples: JPEG, WebP.
   - Suitable for natural images (photographs).

3. Techniques Used in Compression

- Transform coding (e.g., DCT, Wavelet transform).
- Entropy coding (e.g., Huffman coding, Arithmetic coding).
- Predictive coding (e.g., Differential Pulse Code Modulation).

4. Importance of Image Compression

1. Storage Efficiency → Reduces memory requirements, allowing more images to be stored.
2. Transmission Speed → Smaller image files upload/download faster (important in internet and mobile communication).
3. Cost Reduction → Saves storage costs and reduces bandwidth usage.
4. Real-time Applications → Essential for video conferencing, streaming, and telemedicine, where speed is crucial.
5. Standardization → Formats like JPEG and MPEG use compression to ensure interoperability.

5. Conclusion

- Image compression is crucial in digital imaging because it makes the storage, processing, and transmission of images more efficient.
- Depending on the application, either lossless or lossy compression is chosen to balance quality and size.
Arithmetic coding).
Q. What are the basic components of an image compression model?

Basic Components of an Image Compression Model

An image compression model consists of several functional blocks that work together to reduce the size of image data while retaining acceptable quality:

1. Source Encoder
2. Quantizer
3. Entropy Encoder
4. Channel/Storage
5. Entropy Decoder
6. Dequantizer
7. Source Decoder

1. Source Encoder (or Transformer Stage)
- Removes redundancies from image data.
- Common methods: Discrete Cosine Transform (DCT), Wavelet Transform, or Predictive Coding.
- Converts the spatial-domain image into the frequency domain for better energy compaction.

2. Quantizer
- Reduces the precision of the transformed coefficients.
- Introduces loss in lossy compression, but achieves a high compression ratio.
- Example: In JPEG, high-frequency DCT coefficients are quantized to near zero.

3. Entropy Encoder (Lossless Compression Stage)
- Encodes the quantized values efficiently to reduce bit rate.
- Removes coding redundancy.
- Methods: Huffman Coding, Run-Length Encoding (RLE), Arithmetic Coding.

4. Channel (Transmission/Storage Medium)
- Represents the medium where compressed data is stored or transmitted.
- Example: hard disk, network, cloud storage, etc.

5. Entropy Decoder
- Reverse process of the entropy encoder.
- Recovers the quantized transform coefficients from the compressed bitstream.

6. Dequantizer
- Reconstructs approximate values of the original transform coefficients.
- Recovery is exact in lossless compression, approximate in lossy compression.

7. Source Decoder (Inverse Transform)
- Applies the inverse transform (e.g., IDCT, Inverse Wavelet Transform).
- Reconstructs the image back into the spatial domain for viewing.

Block Diagram (conceptual)

Original Image → Source Encoder → Quantizer → Entropy Encoder → Channel/Storage
Reconstructed Image ← Source Decoder ← Dequantizer ← Entropy Decoder ← Channel/Storage

Q. Explain the concept of Lossy Predictive Coding in Image Compression.

Lossy Predictive Coding in Image Compression

1. Concept

- Predictive coding works on the idea that neighboring pixels in an image are highly correlated.
- Instead of storing each pixel value directly, the encoder predicts a pixel from its neighbors and encodes the difference (error) between the actual and predicted value.
- In lossy predictive coding, this prediction error is quantized, which introduces some distortion but achieves higher compression.

2. Working Steps

1. Prediction
   - A predictor estimates the current pixel value using previously encoded neighboring pixels (e.g., left, top, diagonal).
2. Error Calculation
   - Prediction error = Actual pixel − Predicted pixel.
3. Quantization (Lossy Step)
   - The prediction error is quantized to reduce the number of bits.
   - This step causes loss of exact information.
4. Encoding
   - Quantized errors are encoded using entropy coding (e.g., Huffman or Arithmetic coding).
5. Decoding
   - At the receiver, the same predictor is used.
   - The quantized prediction error is added to the predicted value to reconstruct the pixel.

3. Example

- Suppose predicted pixel = 125, actual pixel = 130.
- Error = 130 − 125 = 5.
- After quantization, error ≈ 4.
- Reconstructed pixel = 125 + 4 = 129 (close to the original 130).

4. Advantages

- High compression ratio compared to lossless predictive coding.
- Exploits the correlation between pixels effectively.
- Suitable for images where slight loss is acceptable (photographs, videos).

5. Disadvantages

- Introduces distortion due to quantization.
- Not suitable for applications needing exact image reconstruction (e.g., medical images).

6. Applications

- JPEG-LS (predictive mode).
- Video coding standards like H.264 and MPEG use lossy predictive coding for inter-frame compression.

Conclusion

Lossy predictive coding is a technique that compresses images by predicting pixel values, encoding only the quantized error, and thus reducing redundancy. While it introduces minor distortion, it provides efficient compression for natural images and videos.
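A minimal NumPy sketch of the working steps above, assuming a "previous pixel" predictor and a uniform quantizer with step 4 (both illustrative choices, not any particular codec):

```python
import numpy as np

row = np.array([125, 130, 133, 131, 128], dtype=np.int32)
step = 4

recon = np.empty_like(row)
recon[0] = row[0]                         # first pixel transmitted as-is
for i in range(1, len(row)):
    pred = recon[i - 1]                   # predict from the *reconstructed* neighbor
    err = row[i] - pred                   # prediction error
    q = int(np.round(err / step)) * step  # quantized error (the lossy step)
    recon[i] = pred + q                   # decoder-side reconstruction

print(row)    # [125 130 133 131 128]
print(recon)  # close to the original, with small quantization distortion
```

Predicting from the reconstructed neighbor (rather than the original) keeps the encoder and decoder in sync, so the quantization error does not accumulate.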
Q. Explain the concept of inverse filtering and its applications in image restoration.

Inverse Filtering in Image Restoration

1. Concept

- Inverse filtering is a basic restoration technique that mathematically reverses a known degradation.
- With the degradation model G(u, v) = H(u, v) · F(u, v) + N(u, v), where H(u, v) is the degradation function, the original image is estimated in the frequency domain.

2. Formula

  F̂(u, v) = G(u, v) / H(u, v)

3. Limitations

- If H(u, v) has zeros or very small values → the division amplifies noise drastically.
- Sensitive to noise and practical inaccuracies.
- Not suitable for severe degradations.

4. Applications of Inverse Filtering in Image Restoration

1. Deblurring Images
   - Corrects images blurred by motion or defocus.
2. Satellite and Astronomical Imaging
   - Removes atmospheric distortion from space images.
3. Medical Imaging
   - Enhances blurred X-ray, MRI, or CT images.
4. Optical System Correction
   - Compensates for lens distortion and optical blur in microscopy.

5. Example (Simple Case)

- If the blur is modeled as an averaging filter H(u, v), dividing the degraded spectrum by this filter restores edges and sharpness.

6. Conclusion

Inverse filtering is a basic image restoration technique that mathematically reverses the effect of degradation. While effective when noise is absent, its practical use is limited due to noise amplification. For real-world cases, improved methods like Wiener filtering are preferred.

Q. Discuss the advantages and limitations of Wiener filtering in image restoration.

Advantages and Limitations of Wiener Filtering in Image Restoration

1. Concept Recap

- Wiener filtering is an optimal filtering technique used for image restoration.
- It restores a degraded image by considering both the degradation function H(u, v) and the statistical characteristics of the noise.
- Formula:

  F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + Sₙ(u, v)/S_f(u, v) ) ] · G(u, v)

  where H* is the complex conjugate of H and Sₙ/S_f is the noise-to-signal power spectrum ratio (often approximated by a constant K).

2. Advantages of Wiener Filtering

1. Noise Handling
   - Unlike inverse filtering, it takes into account both blur and noise, making it more robust.
2. Optimal Solution
   - Minimizes the Mean Square Error (MSE) between the restored and original image.
3. Better Restoration
   - Produces smoother and more natural images compared to inverse filtering.
4. Flexibility
   - Can adapt depending on the noise-to-signal ratio, providing a balance between noise removal and detail preservation.
5. Practical Use
   - Widely applied in astronomy, medical imaging, and satellite imaging, where both blur and noise are present.

3. Limitations of Wiener Filtering

1. Requirement of Prior Knowledge
   - Needs information about the power spectra of the noise and the original image, which is often difficult to obtain in practice.
2. Computational Complexity
   - More complex than inverse filtering due to statistical modeling.
3. Not Effective for Non-Linear Distortions
   - Works well for linear degradations (blur + additive noise), but not for non-linear distortions.
4. Over-Smoothing
   - May blur fine details when the noise power is high.
5. Degradation Function Sensitivity
   - Requires accurate knowledge of the degradation function H(u, v). Errors in H(u, v) reduce performance.

4. Conclusion

- Wiener filtering is a powerful image restoration technique that outperforms inverse filtering in noisy conditions by minimizing error statistically.
- However, its dependence on prior knowledge and its computational cost limit its practical use.
- In real-world applications, it is often used with approximations or adaptive methods.
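A frequency-domain Wiener deconvolution sketch in NumPy, assuming a known blur PSF and a constant noise-to-signal ratio K (the test image, PSF, and K value are all illustrative):

```python
import numpy as np

def wiener_restore(g, h, K=0.01):
    """Restore degraded image g given a blur PSF h (zero-padded to g's shape)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener transfer function
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur a synthetic image with a 3x3 box PSF, then restore it.
f = np.zeros((64, 64)); f[24:40, 24:40] = 1.0       # simple square "image"
h = np.zeros_like(f); h[:3, :3] = 1.0 / 9.0         # 3x3 averaging PSF
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))  # degraded image
restored = wiener_restore(g, h, K=0.001)
```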
o Industrial inspection (defect detection). Definition Linear filter that replaces each Filter that adapts its behavior
Q. Describe the concept of Thresholding in image segmentation and its role in separating objects from the background.

Thresholding in Image Segmentation

1. Concept of Thresholding

- Thresholding is a simple and widely used technique in image segmentation.
- It separates an image into object (foreground) and background based on the intensity values of its pixels.
- A threshold value T is chosen, and segmentation is done as:

  g(x, y) = 1 (object) if f(x, y) ≥ T; g(x, y) = 0 (background) if f(x, y) < T

2. Types of Thresholding

1. Global Thresholding
   - A single threshold value is used for the entire image.
   - Works well when the background and objects have distinct intensity ranges.
2. Local (Adaptive) Thresholding
   - Different thresholds are chosen for different regions of the image.
   - Useful when illumination is uneven.
3. Multi-level Thresholding
   - More than one threshold is used to segment an image into multiple regions (e.g., separating background, object, and shadow).

3. Role in Separating Objects from Background

- Enhances Object Detection → Clearly separates regions of interest (objects) from the irrelevant background.
- Simplifies Processing → Converts a grayscale image into binary form, reducing complexity for further analysis.
- Used in Pre-processing → Essential for feature extraction, shape analysis, and recognition tasks.
- Application Examples:
  - Document image processing (text vs. background).
  - Medical imaging (tumor vs. normal tissue).
  - Industrial inspection (defect detection).

4. Advantages

- Simple and fast to implement.
- Effective when the object and background intensity distributions are distinct.

5. Limitations

- Fails when there is poor contrast or overlapping intensities.
- Sensitive to noise and illumination variations.

6. Conclusion

Thresholding is a fundamental segmentation method that classifies pixels based on intensity to separate objects from the background. While efficient and widely used, it works best in images with clear contrast and may need adaptive methods for complex scenes.
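A minimal NumPy sketch of the global-thresholding rule above (T = 128 is an illustrative choice):

```python
import numpy as np

def global_threshold(f, T=128):
    """Return a binary image: 1 where f >= T (object), 0 elsewhere (background)."""
    return (f >= T).astype(np.uint8)

img = np.array([[ 30,  40, 200],
                [ 35, 210, 220],
                [ 25,  45, 205]], dtype=np.uint8)
print(global_threshold(img))  # bright pixels map to 1, dark pixels to 0
```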
 Transform coding is frequency-based (JPEG), while wavelet coding is multi-
Q. State the difference between:

(a) Transform Coding vs. Wavelet Coding

| Aspect | Transform Coding | Wavelet Coding |
| --- | --- | --- |
| Basic idea | Uses mathematical transforms (e.g., DCT, FFT) to represent the image in the frequency domain. | Uses the wavelet transform to represent the image in both the frequency and spatial domains. |
| Energy compaction | Good energy compaction (e.g., DCT concentrates energy in low-frequency coefficients). | Better energy compaction across multiple scales. |
| Localization | Poor spatial localization (frequency only). | Provides both spatial and frequency localization. |
| Blocking artifacts | May cause blocking artifacts (e.g., in JPEG at high compression). | Less prone to blocking artifacts (used in JPEG2000). |
| Applications | Used in JPEG image compression. | Used in JPEG2000, medical imaging, scalable coding. |

(b) Mean Filters vs. Adaptive Filters

| Aspect | Mean Filters | Adaptive Filters |
| --- | --- | --- |
| Definition | Linear filter that replaces each pixel with the average of its neighborhood. | Filter that adapts its behavior based on image characteristics (e.g., local variance, edges). |
| Noise handling | Effective for reducing random noise, but also blurs edges and fine details. | Reduces noise while preserving edges and details. |
| Adaptability | Fixed – the same operation for all pixels. | Variable – changes filtering strength depending on local statistics. |
| Complexity | Simple and computationally efficient. | More complex, requires additional calculations. |
| Applications | Smoothing, removing Gaussian noise. | Medical imaging, satellite images, where edge preservation is important. |

Conclusion

- Transform coding is frequency-based (JPEG), while wavelet coding is multi-resolution and more advanced (JPEG2000).
- Mean filters are simple but blur details, whereas adaptive filters are more advanced and preserve important structures.
 Main Methods of Histogram Processing:
Q. Write short notes on any two:

(a) Histogram Processing

Histogram processing is an important technique in digital image enhancement, as it provides a graphical representation of the intensity distribution of an image. The x-axis represents intensity levels (0–255 for an 8-bit image), while the y-axis represents the frequency of occurrence of these levels. This information helps in understanding the contrast, brightness, and overall quality of the image.

Main Methods of Histogram Processing:

1. Histogram Equalization → Improves the global contrast of the image by redistributing pixel intensities evenly. It is particularly useful for images with poor lighting, medical X-rays, and remote sensing.
2. Histogram Matching (Specification) → Adjusts the histogram of an image to match a predefined histogram, giving more control over brightness and contrast.
3. Local Histogram Processing → Enhances small regions of an image instead of the whole image; useful for non-uniform illumination.

Applications: Enhancing low-contrast photographs, medical imaging, satellite image analysis, and preprocessing in computer vision tasks.

(b) Optimum Notch Filtering

Optimum notch filtering is a frequency-domain restoration technique used to remove periodic or structured noise from digital images. Periodic noise, such as stripes or repeated patterns, appears as bright spots at specific frequencies in the Fourier transform of an image.

Working Process:

1. Perform the Fourier transform of the noisy image.
2. Identify the noise frequencies in the spectrum.
3. Design notch filters to block these specific frequencies while retaining the others.
4. Apply the inverse Fourier transform to reconstruct the restored image.

Advantages:

- Selectively removes unwanted periodic noise without affecting overall image detail.
- Preserves important frequency information.

Applications: Useful in satellite imagery correction, removing interference from scanned documents, industrial imaging, and medical imaging where periodic artifacts may distort diagnostic results.

(c) Butterworth and Gaussian Filters

Filters are used in image processing to suppress unwanted frequency components such as noise, or to smooth an image. Two common types are Butterworth and Gaussian filters, both of which are better than the Ideal filter as they avoid sharp cutoffs.

Butterworth Filter:

- Has a frequency response controlled by its order (n), which defines the sharpness of the cutoff.
- Provides a smoother transition compared to Ideal filters and reduces artifacts like ringing.

Gaussian Filter:

- Based on the Gaussian function, it has a bell-shaped response in the frequency domain.
- Provides very smooth transitions with no sharp edges, ensuring minimal distortion.

Applications:

- Noise removal and image smoothing.
- Preprocessing in computer vision tasks such as object detection.
- Used in medical imaging, photography, and pattern recognition for better image quality.
- (A short sketch of both transfer functions appears after these notes.)

(d) Lossy Compression

Lossy compression is a method of reducing image size where some data is permanently discarded. Unlike lossless compression, it does not allow exact reconstruction, but instead aims to maintain acceptable visual quality while saving storage.

Techniques Used:

1. Transform Coding (DCT, Wavelet) → Converts the image into the frequency domain. High-frequency details that are less important to the human eye are discarded.
2. Quantization → Rounds off or removes small frequency coefficients.
3. Entropy Coding → Further compresses the remaining data efficiently.

Advantages:

- Achieves very high compression ratios (10:1 to 50:1).
- Saves memory and reduces bandwidth needs for transmission.

Limitations:

- Image quality degrades at very high compression (blurring, blocking artifacts).
- Cannot be used where exact reproduction is required (e.g., medical images).

Applications: JPEG compression for photos, MPEG/MP4 for videos, multimedia streaming, and online image sharing where storage and speed are critical.
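The sketch referenced in note (c), assuming a cutoff D0 = 30 and order n = 2 (illustrative values), built from the standard Butterworth and Gaussian low-pass transfer functions:

```python
import numpy as np

def distance_grid(M, N):
    """Distance of each frequency sample from the center of the spectrum."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    V, U = np.meshgrid(v, u)
    return np.sqrt(U**2 + V**2)

D = distance_grid(256, 256)
D0, n = 30.0, 2

H_butterworth = 1.0 / (1.0 + (D / D0) ** (2 * n))   # smooth, order-controlled cutoff
H_gaussian = np.exp(-(D**2) / (2 * D0**2))          # bell-shaped, no ringing

# To apply to an image: multiply with its centered spectrum, e.g.
# G = np.fft.ifft2(np.fft.ifftshift(H_gaussian * np.fft.fftshift(np.fft.fft2(img))))
```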
Q. Describe briefly the fundamental steps in digital image processing.

Fundamental Steps in Digital Image Processing

Digital Image Processing (DIP) involves a sequence of operations performed on images to improve their quality, extract information, or prepare them for further analysis. The steps form a pipeline starting from image acquisition and ending at interpretation.

Steps (with explanation):

1. Image Acquisition
   - The first step is capturing the image using sensors like cameras or scanners.
   - It may include preprocessing such as resizing, noise removal, or enhancing brightness for better quality.
2. Image Preprocessing
   - Enhances the image for further processing.
   - Includes operations like noise reduction, contrast enhancement, sharpening, and image resizing.
   - Goal: improve the visual appearance and prepare the image for analysis.
3. Image Enhancement
   - Improves the visual quality of the image.
   - Techniques: histogram equalization, smoothing, sharpening, and contrast adjustment.
   - Useful in medical imaging, satellite images, etc.
4. Image Restoration
   - Removes degradations caused by blurring, noise, or distortion.
   - Unlike enhancement, it is based on mathematical and probabilistic models to recover the original image.
5. Color Image Processing
   - Deals with processing colored images in different color models (RGB, HSV, CMY).
   - Includes color transformations, enhancement, and segmentation.
6. Wavelets and Multiresolution Processing
   - Used to represent images at multiple levels of resolution.
   - Important in image compression and image analysis (e.g., the JPEG2000 standard).
7. Image Compression
   - Reduces the size of image data for storage and transmission.
   - Techniques: Lossless (PNG) and Lossy (JPEG).
   - Essential for multimedia, internet, and medical data storage.
8. Morphological Processing
   - Focuses on the shape or structure of objects in an image.
   - Operations: erosion, dilation, opening, and closing.
   - Mostly used for binary images.
9. Image Segmentation
   - Divides an image into meaningful regions or objects.
   - Techniques: thresholding, edge detection, region growing.
   - Critical for object recognition and computer vision tasks.
10. Representation and Description
    - After segmentation, objects are represented in suitable formats (boundaries, regions).
    - Description (features like shape, texture, color) is extracted to make them useful for recognition.
11. Object Recognition
    - Assigns a label to objects based on their features.
    - Example: detecting vehicles, faces, or medical abnormalities.
12. Knowledge-Based Image Analysis (Interpretation)
    - The final step, where high-level reasoning and prior knowledge are applied.
    - Helps in decision-making, e.g., medical diagnosis, satellite image analysis.

Image Sampling and Quantization

Digital images are obtained by converting continuous (analog) images into a digital form that a computer can process. This conversion requires two basic steps: Sampling and Quantization.

1. Image Sampling

Definition (detailed):
Image sampling is the process of converting a continuous image into a discrete image by measuring its intensity values at regularly spaced intervals in both spatial coordinates (x and y). In simple terms, it determines how many pixels will represent the image. The sampling process selects points on the image grid, and each selected point becomes a pixel.

- High sampling rate → more pixels → better quality (fine details visible).
- Low sampling rate → fewer pixels → poor quality (the image looks blocky or pixelated).

Example:
If you sample an image at 512 × 512 pixels, you get more detail than at 64 × 64 pixels.

2. Image Quantization

Definition (detailed):
Image quantization is the process of mapping the continuous range of intensity (gray levels or colors) into a finite number of discrete levels. In other words, it determines how many different shades of gray or colors a pixel can take.

- High quantization levels (e.g., 256 gray levels) → smooth images with better quality.
- Low quantization levels (e.g., 2 gray levels: black & white) → poor quality, loss of detail, visible banding.

Example:

- A 1-bit image can represent only 2 levels (black or white).
- An 8-bit image can represent 256 levels of gray.
- A 24-bit color image can represent 16.7 million colors.

3. Relation between Sampling and Quantization

- Sampling controls the spatial resolution of the image (number of pixels).
- Quantization controls the intensity resolution of the image (number of shades/colors).
- Together, they convert an analog image into a digital image.

4. Difference between Sampling and Quantization

| Aspect | Image Sampling | Image Quantization |
| --- | --- | --- |
| Definition | Process of dividing the continuous image into a finite number of pixels by sampling at regular spatial intervals. | Process of mapping the continuous range of intensity values into discrete gray levels or colors. |
| What it affects | Spatial resolution (number of pixels in the image). | Intensity resolution (number of shades/levels per pixel). |
| Unit of measurement | Measured in pixels per inch (PPI) or image dimensions (e.g., 512×512). | Measured in bits per pixel (bpp) or number of gray/color levels (e.g., 8-bit = 256 levels). |
| High vs. low | High sampling → sharper, detailed image; low sampling → pixelated. | High quantization → smooth tones; low quantization → posterization/banding. |
| Example | 128×128 vs. 512×512 resolution of the same image. | 2-level (black & white) vs. 256-level grayscale of the same image. |
| Role in digitization | First step: selects where pixels are located. | Second step: assigns an intensity value to each pixel. |
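A small NumPy sketch of both operations (the sampling factor and number of levels are illustrative, and the random array stands in for a real image):

```python
import numpy as np

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in image

# Sampling: keep every 8th pixel -> 64x64 spatial resolution.
sampled = img[::8, ::8]

# Quantization: requantize 256 levels down to 4 levels (2 bits per pixel).
levels = 4
step = 256 // levels
quantized = (img // step) * step + step // 2  # map each bin to its midpoint

print(sampled.shape, np.unique(quantized))   # (64, 64) and 4 distinct levels
```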
Image Transform

Definition:
An image transform is a mathematical operation that converts an image from one representation to another, often to make processing, analysis, or enhancement easier. In digital image processing, transforms are used to change the domain of representation (for example, from the spatial domain to the frequency domain).

- In the spatial domain, operations are performed directly on pixel intensities.
- In the frequency domain, operations are performed on transformed coefficients (e.g., of the Fourier Transform).

Uses of image transforms:

- Image enhancement
- Image compression
- Feature extraction
- Filtering and restoration
- Image sharpening or smoothing

Common transforms: Fourier Transform, Discrete Cosine Transform (DCT), Walsh Transform, Hadamard Transform, Wavelet Transform.

Image Enhancement by Contrast Stretching (Intensity Transformation)

1. Intensity Transformation

Intensity transformation functions directly map the input pixel values (gray levels) to new output values to improve visibility.

- Formula: s = T(r)
  - where r = input intensity, s = output intensity, T = transformation function.

2. Contrast Stretching

Contrast stretching is one of the most widely used intensity transformations for image enhancement.

Definition:
Contrast stretching expands the range of intensity values in an image so that the darker areas become darker and the brighter areas become brighter. This improves the visual quality of images with low contrast (where details are hidden in narrow intensity ranges).

3. Formula (linear stretching)

  s = (r − r_min) × (L − 1) / (r_max − r_min)

  where r_min and r_max are the minimum and maximum intensities in the input image and L is the number of gray levels (256 for an 8-bit image).

4. Example

- Suppose an image has intensity values only between 50 and 180 (instead of 0–255).
- After contrast stretching, these values are mapped to the full range 0–255.
- Result → better visibility of dark and bright regions.
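A NumPy sketch of the linear stretch above, applied to the 50–180 example:

```python
import numpy as np

def stretch(img):
    """Map [r_min, r_max] linearly onto the full 0-255 range."""
    r_min, r_max = img.min(), img.max()
    s = (img.astype(np.float64) - r_min) * 255.0 / (r_max - r_min)
    return s.astype(np.uint8)

img = np.array([[50, 100], [150, 180]], dtype=np.uint8)
print(stretch(img))  # 50 -> 0, 180 -> 255, values in between scaled linearly
```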

Q. What is an image histogram? Discuss the histogram equalization method for image enhancement.

Image Histogram

Definition:
An image histogram is a graphical representation of the frequency distribution of pixel intensity values in a digital image.

- The x-axis represents the possible intensity levels (e.g., 0–255 for an 8-bit image).
- The y-axis represents the number of pixels that have each intensity value.

Interpretation:

- If the histogram is concentrated in the middle → medium-contrast image.
- If the histogram is narrow (clustered) → low-contrast image.
- If the histogram covers the full range (0–255) → high-contrast, good-quality image.

Example:

- A dark image → histogram clustered on the left.
- A bright image → histogram shifted to the right.
- A low-contrast image → histogram concentrated in a small region.

Histogram Equalization (Image Enhancement Method)

Definition:
Histogram equalization is an image enhancement technique that redistributes the intensity values of an image so that the histogram becomes approximately uniform (i.e., the pixel values spread across the entire range).

- The goal is to improve contrast and make hidden details visible.
- Method: each gray level r_k is mapped through the cumulative distribution of the histogram, s_k = (L − 1) · Σ_{j=0}^{k} n_j / n, where n_j is the count of pixels at level j and n is the total number of pixels.

Result

- The histogram after equalization is more spread out, covering the full intensity range.
- Image contrast is significantly improved.

Example (Simple)

- Suppose an image's intensity values lie mostly between 100 and 150 (a narrow range).
- Histogram equalization redistributes them into the 0–255 range.
- The resulting image looks sharper, with better contrast.
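A NumPy sketch of the CDF mapping above for an 8-bit image (the narrow-range random image is a stand-in for real data):

```python
import numpy as np

def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size            # cumulative distribution, 0..1
    lut = np.round(255 * cdf).astype(np.uint8)  # mapping s_k for each gray level
    return lut[img]

img = np.random.randint(100, 151, (64, 64), dtype=np.uint8)  # narrow range
eq = equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())  # spread toward 0-255
```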

Q. What is image filtering? Compare and contrast filtering in the spatial domain and the frequency domain.

Image Filtering

Definition:
Image filtering is the process of modifying or enhancing an image by selectively emphasizing or suppressing certain features such as edges, noise, or fine details.

Filtering is mainly used for:

- Noise reduction
- Edge detection

Filtering can be done in two domains:

1. Spatial domain (directly on pixels).
2. Frequency domain (on transformed coefficients, e.g., of the Fourier Transform).

1. Spatial Domain Filtering

- In the spatial domain, operations are performed directly on the pixel values of the image.
- Filtering is done using a mask/kernel (a small matrix) that is moved across the image.

Types:

- Smoothing filters (remove noise, blur): e.g., average filter, Gaussian filter.
- Sharpening filters (highlight edges): e.g., Laplacian, Sobel operator.

2. Frequency Domain Filtering

- The image is first transformed (e.g., with the Fourier Transform), the filter is applied by multiplication in the frequency domain, and the inverse transform returns the filtered image.

3. Comparison: Spatial vs. Frequency Domain Filtering

| Aspect | Spatial Domain Filtering | Frequency Domain Filtering |
| --- | --- | --- |
| Definition | Operations directly on pixel values using masks/kernels. | Operations on frequency components after the Fourier Transform. |
| Basis | Uses convolution in image space. | Uses multiplication in frequency space. |
| Computation | Simple, faster for small masks. | More complex, requires transforms (FT & IFT). |
| Filter type | Linear filters (average, Gaussian), non-linear (median). | Low-pass, high-pass, band-pass filters. |
| Applications | Noise removal, smoothing, edge detection (small scale). | Large-scale filtering, compression, enhancement. |
| Efficiency | Efficient for small neighborhood operations. | Efficient for large, complex filters. |
| Examples | Mean filter, Sobel, Laplacian, median filter. | Ideal LPF, HPF, Gaussian filter in the frequency domain. |

Image Sharpening

Definition

Image sharpening is a digital image processing technique used to highlight fine details, edges, and boundaries in an image. It works by enhancing high-frequency components (sudden changes in intensity) and suppressing low-frequency components (smooth areas).

- While smoothing (blurring) removes details and noise, sharpening does the opposite → it emphasizes transitions (edges) between regions.
- Sharpening is very useful in medical imaging, satellite images, text recognition, and industrial inspection, where edge details are important.

Why Sharpening is Needed?

- Images often appear blurred due to camera motion, focus issues, or noise.
- Sharpening enhances the edges, lines, and textures that are crucial for human interpretation or computer vision.
- It makes objects more defined and clearer for analysis.
- Useful for pattern recognition, OCR (Optical Character Recognition), and medical analysis.

Methods of Image Sharpening

1. Spatial Domain Methods

Sharpening in the spatial domain is done by applying filters (masks/kernels) that highlight edges.

- Laplacian Operator
  - Second-order derivative filter.
  - Detects edges by finding regions of rapid intensity change.
  - Example mask:

     0  −1   0
    −1   4  −1
     0  −1   0

  - Sharpened image = Original image + Laplacian response of the image.
- Gradient-based Operators (First Derivative)
  - Sobel, Prewitt, and Roberts operators compute the gradient (rate of change) in intensity.
  - They highlight horizontal and vertical edges.

2. Frequency Domain Methods

- Convert the image into the frequency domain using the Fourier Transform.
- Apply high-pass filters (HPF) to retain high-frequency details (edges) and suppress the low-frequency background.
- Example filters: Ideal HPF, Gaussian HPF, Butterworth HPF.
- After filtering, apply the inverse transform to get the sharpened image.

General Process of Sharpening

1. Take the original image.
2. Apply a sharpening filter (spatial or frequency domain).
3. Add or combine the filtered output with the original image.
4. Result: enhanced edges and details.

Advantages of Image Sharpening

- Improves edge visibility.
- Makes objects more distinguishable.

Disadvantages

- Can amplify noise if not applied carefully.
- Over-sharpening may cause an unnatural appearance.
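A SciPy sketch of Laplacian sharpening using the mask shown above (clipping to the 8-bit range is a practical detail, not part of the definition):

```python
import numpy as np
from scipy.ndimage import convolve

lap = np.array([[ 0, -1,  0],
                [-1,  4, -1],
                [ 0, -1,  0]], dtype=np.float64)

def sharpen(img):
    f = img.astype(np.float64)
    g = f + convolve(f, lap, mode="nearest")  # add the edge response back
    return np.clip(g, 0, 255).astype(np.uint8)
```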
Q. What is image smoothing? Explain different linear and non-linear smoothing spatial filters.

Image Smoothing

Definition

Image smoothing is a digital image processing technique used to reduce noise, variations, and small details in an image. It works by averaging or modifying pixel values to create a smoother appearance. The main goal is to blur unwanted details or random noise while still retaining the important overall shapes and objects.

In simple words, smoothing filters try to "soften" images, reducing sharp transitions and making them visually more pleasant.

Why Image Smoothing is Needed?

- To remove noise introduced during image acquisition or transmission.
- To blur irrelevant details before further processing (e.g., edge detection, segmentation).
- To improve the visual quality of an image.
- To prepare an image for higher-level tasks like object recognition.

Smoothing Spatial Filters

Spatial filters work directly on the pixel values of the image by moving a mask (kernel) across the image. Smoothing spatial filters replace each pixel's value with a new value based on its neighborhood. They are divided into two main types:

1. Linear Smoothing Filters

Linear filters compute the new pixel value as a weighted average of the neighboring pixel values. Since the operation is linear (convolution), they are mathematically simple.

a) Averaging Filter (Mean Filter)

- Each output pixel is the average of its neighboring pixels.
- Removes noise effectively but also blurs edges.
- Example: if a pixel's neighborhood is [100, 102, 98], the mean filter smooths it to ~100.

b) Weighted Averaging (Gaussian Filter)

- Assigns more weight to central pixels and less to farther neighbors.
- Provides smoother results without blurring as much as simple averaging.
- Useful for natural-looking blur.

Summary of Linear Smoothing:

- Simple and effective for random noise removal.
- But they blur edges and fine details.

2. Non-linear Smoothing Filters

Non-linear filters use non-linear functions of the pixel values in the neighborhood, rather than just averages. They are especially effective against impulse noise (salt-and-pepper noise).

a) Median Filter

- Replaces each pixel with the median value of its neighborhood.
- Very effective at removing salt-and-pepper noise while preserving edges.
- Example: neighborhood values [12, 15, 200, 14, 16] → median = 15 (replaces the noisy pixel 200).

b) Min and Max Filters

- Min filter: Replaces the pixel with the minimum intensity in the neighborhood → reduces salt noise (white dots).
- Max filter: Replaces the pixel with the maximum intensity → reduces pepper noise (black dots).

c) Mode Filter

- Replaces each pixel with the most frequently occurring value in the neighborhood.
- Useful for categorical images or images with repeating noise patterns.

Summary of Non-linear Smoothing:

- Better at preserving edges compared to linear filters.
- Best for impulse noise removal.

Applications of Image Smoothing

- Noise reduction in scanned documents, photographs, and satellite images.
- Preprocessing step before edge detection and segmentation.
- Medical imaging (to remove small variations before diagnosis).
- Object recognition and computer vision.
Q. Define segmentation. Discuss the watershed algorithm of image segmentation.

Image Segmentation

Definition

Image segmentation is the process of dividing a digital image into multiple meaningful regions or objects so that important features can be analyzed separately. It assigns a label to every pixel in the image such that pixels with the same label share certain visual characteristics like intensity, color, or texture.

In simple terms, segmentation helps in isolating objects of interest from the background.

Need for Image Segmentation

- To simplify image representation.
- To make image analysis (like object detection, recognition, measurement) easier.
- To identify boundaries, edges, and regions in complex images.
- Applications: medical imaging (tumor detection), satellite images, OCR, traffic monitoring.

Watershed Algorithm of Image Segmentation

The watershed algorithm is a region-based segmentation method inspired by topography (the study of landscapes). Imagine an image as a 3D surface where:

- Pixel intensities = elevation.
- High intensity = peaks, low intensity = valleys.

The idea is to flood the valleys with water and build dams where water from different valleys would meet. These dams represent the segmented boundaries.

Steps in Watershed Segmentation

1. Gradient Image Computation
   - Compute the gradient (rate of change of intensity) of the image.
   - High gradient = edges (object boundaries), low gradient = flat regions.
2. Topographic Representation
   - Interpret the gradient image as a topographic surface with valleys and peaks.
3. Flooding Process
   - Imagine water filling the valleys from the minima points (low-intensity regions).
   - As the water rises, dams are built where two watersheds are about to merge.
4. Formation of Watershed Lines
   - These dams or boundaries correspond to the edges of objects in the image.
5. Final Segmented Image
   - Each catchment basin corresponds to one object/region in the segmented image.

Advantages of Watershed Algorithm

- Produces closed boundaries for regions.
- Very useful for separating touching or overlapping objects.
- Works well on gradient images.
- Provides precise segmentation in many applications (medical imaging, biological studies).

Disadvantages

- Very sensitive to noise and small intensity variations → can cause over-segmentation.
- Requires preprocessing (smoothing, filtering, or marker-based techniques) to give good results.

Applications

- Medical imaging (tumor, cell, or organ boundary detection).
- Document image analysis (character separation).
- Industrial inspection (detecting defects in manufactured items).
- Remote sensing (separating land, water, vegetation regions).
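A minimal marker-based watershed sketch using OpenCV, following the standard marker-based variant that the "Disadvantages" point above mentions (the input filename is a placeholder and the morphology/threshold settings are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("coins.png")                       # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize and clean up to obtain reliable foreground seeds.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

# Distance transform: its peaks act as the "valley minima" to flood from.
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
sure_bg = cv2.dilate(opened, kernel, iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the markers and flood; watershed marks dam pixels with -1.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1            # background label becomes 1, not 0
markers[unknown == 255] = 0      # unknown region is left for flooding
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)  # draw the watershed lines in red
```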
Q. Discuss Thresholding-Based Image Segmentation.

Concept of Thresholding

- Works best when the object and background have distinct intensity levels.
- Requires less computation compared to complex segmentation methods.
- Example: In a document image, text (dark) and paper (light) can be separated by thresholding.

Steps in Thresholding-Based Segmentation

1. Input Image – Take a grayscale image.
2. Choose Threshold Value (T):
   - Manually fixed, or
   - Automatically selected (e.g., Otsu's method).
3. Classification:
   - Pixels ≥ T → Foreground (object).
   - Pixels < T → Background.
4. Generate Binary Image – Object = white (1), Background = black (0).
5. Post-processing (optional): Apply morphological operations to remove small noise or fill gaps.

Types of Thresholding

1. Global Thresholding
   - A single threshold value (T) is chosen for the entire image.
   - Simple and fast, but fails when lighting or contrast varies across the image.
   - Example: Otsu's method (automatically finds the best global threshold).
2. Local (Adaptive) Thresholding
   - Different threshold values are computed for different regions of the image.
   - Useful when illumination is non-uniform.
   - Example: Adaptive Gaussian or mean thresholding in OpenCV (see the sketch after this answer).
3. Multilevel Thresholding
   - Instead of dividing the image into just 2 classes (object & background), multiple thresholds are used to separate it into more than two regions.
   - Example: Segmenting a satellite image into water, vegetation, and land regions.

Advantages

- Simple and fast.
- Easy to implement.
- Effective when there is a clear intensity difference between object and background.

Limitations

- Not effective for images with poor contrast or varying illumination.
- Sensitive to noise.
- Fails when object and background intensities overlap significantly.
- Global thresholding cannot handle non-uniform lighting.

Applications

- Document image processing (text extraction, OCR).
- Medical imaging (separating tumors or organs).
- Industrial inspection (defect detection).
- Remote sensing (classifying land, water, vegetation).
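The adaptive-thresholding sketch referenced above, using OpenCV (the filename, block size, and constant C are illustrative):

```python
import cv2

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Global Otsu threshold, for comparison:
_, global_bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local threshold: each pixel is compared against a Gaussian-weighted
# neighborhood mean, which copes with uneven illumination.
local_bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, blockSize=31, C=5)
```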

Q. Principal Component Analysis (PCA)

Definition

Principal Component Analysis (PCA) is a statistical and mathematical technique used for dimensionality reduction, data compression, and feature extraction. It transforms a large set of correlated variables into a smaller set of uncorrelated variables called Principal Components (PCs), while still retaining most of the important information present in the original dataset.

In simple words, PCA identifies the directions (axes) of maximum variance in the data and projects the data onto these new axes, thereby reducing complexity while preserving the essential patterns.

Key Concepts in PCA

1. Principal Components (PCs):
   - New variables obtained after the transformation.
   - They are linear combinations of the original features.
   - Each successive component captures the maximum possible variance left in the data.
2. Variance Maximization:
   - PCA works on the idea that directions with higher variance contain more useful information.
3. Orthogonality:
   - Principal components are orthogonal (independent) of each other, avoiding redundancy in the representation.
4. Dimensionality Reduction:
   - From n features, PCA selects k components (k < n) that capture most of the variation.
   - Helps in reducing computation, aiding visualization, and avoiding the curse of dimensionality.

Steps in PCA

1. Standardize the Data – Scale features so that large-valued features don't dominate.
2. Compute the Covariance Matrix – Measure how the variables are related.
3. Calculate Eigenvalues & Eigenvectors – Identify the directions of maximum variance.
4. Select Principal Components – Choose the top k eigenvectors based on their eigenvalues.
5. Transform the Data – Project the original data onto the new component axes.

Applications of PCA

- Image Processing: Face recognition, image compression.
- Data Visualization: Reducing high-dimensional data (like 100 features) into 2D/3D plots.
- Machine Learning: Preprocessing step to reduce noise and redundancy.
- Finance: Stock market pattern analysis.
- Bioinformatics: Gene expression data analysis.
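A NumPy sketch following the five steps above (the random matrix stands in for a real dataset, and k = 2 is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 samples, 5 features (stand-in data)

# 1. Standardize.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
# 2. Covariance matrix of the features.
C = np.cov(Xs, rowvar=False)
# 3. Eigen-decomposition (eigh, since C is symmetric).
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]           # sort by descending variance
# 4. Keep the top k eigenvectors.
k = 2
W = vecs[:, order[:k]]
# 5. Project the data onto the principal axes.
Z = Xs @ W
print(Z.shape, vals[order][:k] / vals.sum())  # explained-variance fractions
```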
Image Compression Model

Definition

Image compression is a process of reducing the amount of data required to represent an image while preserving its essential visual quality. It eliminates redundancies present in the image data (spatial, spectral, or perceptual redundancies), thereby decreasing file size and storage requirements.

An image compression model refers to the theoretical and practical framework that describes how an image can be encoded (compressed) and decoded (decompressed) efficiently. The goal is to achieve a balance between a high compression ratio (small file size) and acceptable image quality.

Basic Components of an Image Compression Model

An image compression model generally consists of the following stages:

1. Source Encoder (Mapper):
   - Transforms the image into a format that reduces redundancy.
   - Example: Transform coding using the Fourier Transform, DCT (Discrete Cosine Transform), or Wavelet Transform.
2. Quantizer:
   - Approximates values by reducing precision.
   - Introduces a small amount of distortion but greatly reduces data size.
3. Entropy Encoder (Symbol Coder):
   - Encodes the quantized values into a binary stream.
   - Uses techniques like Huffman Coding or Arithmetic Coding to further reduce redundancy.
4. Channel (Storage/Transmission Medium):
   - The compressed bitstream is stored or transmitted.
5. Decoder:
   - Reconstructs the image by reversing the above process.

Types of Image Compression Models

1. Lossless Compression Model
   - Definition: No information is lost during compression. The reconstructed image is identical to the original.
   - Techniques used:
     - Run-Length Encoding (RLE)
     - Huffman Coding
     - LZW (Lempel–Ziv–Welch)
   - Applications: Medical imaging, satellite images, documents, legal or archival data.
2. Lossy Compression Model
   - Definition: Some amount of data is lost permanently, but the reconstructed image looks visually similar to the original.
   - Techniques used:
     - Transform coding (DCT, Wavelets)
     - Quantization
     - JPEG, JPEG2000, MPEG image/video standards.
   - Applications: Web images, streaming, social media, multimedia storage.

Redundancies Exploited in Compression Models

- Spatial Redundancy: Neighboring pixels are often similar (smooth regions).
- Spectral Redundancy: Correlation among color planes or frequency bands.
- Psycho-visual Redundancy: The human eye is less sensitive to small details and high frequencies.
Bit-Plane Coding

Definition

Bit-plane coding is an image compression technique that represents an image by decomposing each pixel's intensity value into its binary bits (planes). Instead of encoding the entire pixel value at once, the image is split into multiple bit planes, from the Most Significant Bit (MSB) to the Least Significant Bit (LSB).

This method takes advantage of the fact that higher-order bits (MSB planes) contribute more to image quality, while lower-order bits (LSB planes) mostly represent fine details or noise. By encoding the important planes with higher priority, compression becomes more efficient.

Explanation with Example

- Suppose we have an 8-bit grayscale image.
- Each pixel intensity ranges from 0 to 255, represented in 8-bit binary form.
- These 8 bits are divided into 8 separate images (bit planes):
  - Plane 7 → MSB (most significant bit) – carries the coarse image structure.
  - Plane 6 → Next significant bit – adds more detail.
  - …
  - Plane 0 → LSB (least significant bit) – carries fine details or noise.

Example:
If a pixel has intensity 200, its binary representation is:

200 = (11001000)₂

- Bit-plane 7 → 1
- Bit-plane 6 → 1
- Bit-plane 5 → 0
- Bit-plane 4 → 0
- Bit-plane 3 → 1
- Bit-plane 2 → 0
- Bit-plane 1 → 0
- Bit-plane 0 → 0

So, this pixel contributes the bits above to the corresponding planes.

How Compression Works in Bit-Plane Coding

1. Decompose the image into separate bit planes.
2. Encode each plane separately using compression algorithms (Run-Length Encoding, Huffman Coding, etc.).
3. Give higher priority to the MSB planes, as they are visually more important.
4. Reconstruct the image during decoding by combining the planes.

Advantages

- Exploits redundancy between bit planes.
- Provides progressive transmission (first send the MSB planes for a coarse image, then add details).
- Simple and effective for grayscale image compression.

Disadvantages

- LSB planes often contain noise, so encoding them wastes space.
- Works best for images with smooth intensity variations; less efficient for complex images.

Applications

- Progressive image transmission (like old fax machines, progressive JPEG).
- Used in multiresolution image processing.
- Suitable for lossless and lossy compression, depending on how many planes are encoded.
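A NumPy sketch of the bit-plane decomposition for the intensity-200 example: plane k holds bit k of every pixel, and summing bit · 2^k recombines the planes.

```python
import numpy as np

img = np.array([[200]], dtype=np.uint8)          # single-pixel demo image
planes = [(img >> k) & 1 for k in range(8)]      # plane 0 (LSB) .. plane 7 (MSB)
print([int(p[0, 0]) for p in planes][::-1])      # MSB..LSB -> [1,1,0,0,1,0,0,0]

# Reconstruction from the planes:
recon = sum((p.astype(np.uint8) << k) for k, p in enumerate(planes))
print(recon[0, 0])                               # 200
```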
Notch Filter in Image Processing (and Signal Processing)

A notch filter is a special type of filter that blocks (rejects) a narrow band of frequencies while allowing all other frequencies to pass. It is the opposite of a band-pass filter.

Think of it like a "frequency eraser":

- It cuts out (removes) a very specific unwanted frequency (or narrow range of frequencies).
- Everything else in the signal or image remains mostly unchanged.

In Image Processing

- Images can be represented in the frequency domain using the Fourier Transform.
- Some patterns (like periodic noise, stripes, or interference) appear as bright spots in the frequency spectrum.
- A notch filter is applied to "notch out" (remove) those frequencies while keeping the rest.

👉 This is especially useful for removing periodic noise in images.

Types of Notch Filters

1. Ideal Notch Filter
   - Completely eliminates frequencies in a narrow band.
   - Sharp cut-off, but may cause ringing effects in the image (due to the sudden truncation).
2. Butterworth Notch Filter
   - Smooth transition between passband and stopband.
   - Controlled by the order of the filter.
   - Reduces artifacts compared to the ideal filter.
3. Gaussian Notch Filter
   - Uses a Gaussian curve for smooth rejection.
   - Very effective, with minimal distortion.

Example in Images

- Suppose you have an image corrupted with horizontal stripes (periodic noise).
- In the Fourier transform, this noise shows up as two bright symmetric spots away from the center.
- A notch filter is applied at those spots → the noise is removed → a clean image is recovered after the inverse Fourier transform (see the sketch below).
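An ideal-notch sketch in NumPy for the striped-image example: zero out the two symmetric spectral spots, then invert the FFT (the spot coordinates and notch radius are illustrative, and a synthetic striped image stands in for real data):

```python
import numpy as np

def notch_filter(img, spots, radius=3):
    F = np.fft.fftshift(np.fft.fft2(img))
    M, N = img.shape
    V, U = np.meshgrid(np.arange(N) - N // 2, np.arange(M) - M // 2)
    for (du, dv) in spots:
        # Reject a small disk around the spot and its symmetric twin.
        F[np.hypot(U - du, V - dv) <= radius] = 0
        F[np.hypot(U + du, V + dv) <= radius] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Horizontal stripes = a single vertical-frequency component at (±8, 0) here.
x = np.ones((64, 64)) * 128
x += 40 * np.sin(2 * np.pi * 8 * np.arange(64) / 64)[:, None]  # stripes
clean = notch_filter(x, spots=[(8, 0)])
```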

Lossless Predictive Coding

Lossless predictive coding is a compression technique used in image and signal processing. It works on the idea that neighboring pixels (or samples) are often similar, so instead of storing/transmitting the actual pixel value, we store/transmit the difference (prediction error) between the actual value and a predicted value.

Since the differences are usually small and have less variation, they can be represented with fewer bits, making compression possible without losing information.

How It Works

1. Prediction:
   - The current pixel value is predicted using already known neighboring pixel values.
   - Example predictors:
     - Previous pixel in the same row
     - Average of nearby pixels
     - Linear combination of neighbors
2. Error Calculation:
   - Prediction error = Actual pixel − Predicted pixel.
3. Encoding and Decoding:
   - The errors (residuals) are entropy coded without quantization; the decoder repeats the same prediction and adds each error back, recovering every pixel exactly.

Example

Suppose we have pixel values in a row: 120, 121, 123, 124

- Predictor: "previous pixel"
- Predictions: 120, 120, 121, 123
- Errors (residuals): 0, 1, 2, 1

Now, instead of storing/transmitting the large pixel values (120–124), we only need to store small numbers (0, 1, 2, 1) → easier to compress.

Advantages

- No information is lost (perfect reconstruction).
- Works well for images with high correlation between neighboring pixels (like medical images, scanned documents).

Applications

- PNG image compression
- JPEG-LS (Lossless JPEG standard)
- Lossless audio coding
- Medical imaging (X-rays, MRI, CT scans)
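A compact NumPy check of the worked example above, showing that the "previous pixel" residuals reconstruct the row exactly (unlike the lossy variant, no quantization step is involved):

```python
import numpy as np

row = np.array([120, 121, 123, 124], dtype=np.int32)
residuals = np.diff(row, prepend=row[0])       # [0, 1, 2, 1]
recon = row[0] + np.cumsum(residuals)          # decoder adds errors back

print(residuals[1:], np.array_equal(recon, row))  # [1 2 1] True
```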

(c) Basic Relationships Between Pixels

1. Neighbors of a Pixel:
   - 4-neighbors: Adjacent pixels (top, bottom, left, right).
   - 8-neighbors: Includes the diagonal pixels as well.
2. Connectivity:
   - 4-connectivity: Pixels are connected if they are 4-neighbors.
   - 8-connectivity: Pixels are connected if they are 8-neighbors.
3. Distance Measures:
   - Euclidean Distance: Straight-line distance between pixels.
   - City-Block (D4): Sum of horizontal + vertical steps.
   - Chessboard (D8): Maximum of the horizontal/vertical steps.
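A tiny sketch computing all three distance measures for two pixels p = (x1, y1) and q = (x2, y2):

```python
import math

def distances(p, q):
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return {
        "euclidean": math.hypot(dx, dy),  # straight-line distance
        "city_block_D4": dx + dy,         # horizontal + vertical steps
        "chessboard_D8": max(dx, dy),     # king-move steps
    }

print(distances((0, 0), (3, 4)))
# {'euclidean': 5.0, 'city_block_D4': 7, 'chessboard_D8': 4}
```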
