
Image Restoration

What do you mean by image restoration? Explain in detail. Explain the image degradation model
with mathematical prelims

Image Restoration is a process in digital image processing that aims to recover an original, clean image
from a degraded version using knowledge about the degradation phenomenon and noise models. It
attempts to reverse the effects of distortions and improve the quality of an image, often used in
applications like medical imaging, satellite imagery, and forensic analysis.

🔍 What is Image Restoration?


Image Restoration involves removing or minimizing known degradations such as:

Blur due to motion


Defocus

Sensor noise

Atmospheric turbulence

It is different from Image Enhancement, which improves visual appearance without relying on any
mathematical model of degradation.

📐 Image Degradation Model (Mathematical Preliminaries)


The image degradation model mathematically represents how a clean image gets degraded. It is
generally represented as:

g(x, y) = H[f (x, y)] + η(x, y)

Where:

f (x, y): Original (ideal) image


g(x, y): Observed (degraded) image
H : Degradation function (typically a linear, space-invariant system, like blurring)
η(x, y): Additive noise

In the frequency domain, this model becomes:

G(u, v) = H(u, v) ⋅ F (u, v) + N (u, v)

Where:

Printed using ChatGPT to PDF, powered by PDFCrowd HTML to PDF API. 1/34
G(u, v): Fourier transform of the degraded image
F (u, v): Fourier transform of the original image
H(u, v): Frequency response of the degradation function
N (u, v): Fourier transform of the noise

🌀 Types of Degradations
1. Blur (Linear Degradation)
Caused by motion or defocus, usually modeled with a Point Spread Function (PSF).
2. Noise (Additive/Multiplicative)
Caused by sensors, transmission channels, etc. Common noise types:

Gaussian noise
Salt-and-pepper noise
Poisson noise

🛠️ Restoration Techniques
1. Inverse Filtering

Assumes no noise and tries to invert the degradation:

F̂(u, v) = G(u, v) / H(u, v)

Works poorly when H(u, v) is small or noise is present.
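This failure mode can be seen with a toy single-frequency-bin calculation; the numbers (the H values and the noise level) are invented for illustration and assume the model G = H·F + N at one bin:

```python
# Toy illustration of inverse-filter noise amplification at a single
# frequency bin. All numbers are invented; this is not a full pipeline.

def inverse_filter_bin(G, H):
    """Ideal inverse filter estimate F_hat = G / H for one bin."""
    return G / H

F = 10.0                 # true spectrum value at this bin
N = 0.1                  # small additive noise

# Well-conditioned bin: H is large, the noise barely matters.
H_big = 1.0
err_big = abs(inverse_filter_bin(H_big * F + N, H_big) - F)        # ≈ 0.1

# Ill-conditioned bin: H is tiny, the same noise is amplified 1000x.
H_small = 0.001
err_small = abs(inverse_filter_bin(H_small * F + N, H_small) - F)  # ≈ 100

print(err_big, err_small)
```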

2. Wiener Filtering

Minimizes the mean square error between original and estimated image:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + S_N(u, v)/S_F(u, v) ) ] · G(u, v)

Where:

H*(u, v): Complex conjugate of H(u, v)

S_N(u, v): Power spectral density of the noise

S_F(u, v): Power spectral density of the original image


3. Constrained Least Squares Filtering

Solves:

f̂ = arg min ‖Hf − g‖² + γ‖∇f‖²

Where:

∇f : Laplacian (smoothness constraint)


γ : Regularization parameter

4. Blind Deconvolution

Used when the degradation function H is unknown.

It estimates both the original image and the degradation model iteratively.

📊 Applications of Image Restoration


Medical image improvement (e.g., MRI, X-ray)

Astronomical image correction


Removing motion blur in photography

Old photo or film restoration

Satellite image correction

Summary

| Concept | Description |
| --- | --- |
| Image Restoration | Process to recover the original image from a degraded one |
| Degradation Model | g(x, y) = H[f(x, y)] + η(x, y) |
| Key Techniques | Inverse Filter, Wiener Filter, CLS Filter, Blind Deconvolution |
| Applications | Medical, satellite, forensic, archival image restoration |

Let me know if you'd like diagrams or visual examples to support these concepts!

What is noise? Discuss various types of noise with their mathematical prelims

📢 What is Noise in Image Processing?


Noise in image processing refers to unwanted random variations of brightness or color information in
images. It corrupts the original image and degrades visual quality. Noise is introduced due to various
factors like sensor limitations, transmission errors, or environmental conditions.
Mathematically, the noisy image is modeled as:

g(x, y) = f (x, y) + η(x, y)

Where:

f (x, y): Original image


g(x, y): Noisy (observed) image
η(x, y): Noise component (random variable)

🔍 Types of Noise with Mathematical Models


1. Gaussian Noise (Additive White Gaussian Noise – AWGN)

Caused by electronic circuit noise or sensor imperfections.

Follows a normal (Gaussian) distribution.

Probability Density Function (PDF):

P(η) = ( 1 / √(2πσ²) ) · exp( −(η − μ)² / (2σ²) )

Where:

μ: Mean (usually 0)
σ : Standard deviation

Characteristics:

Noise values spread across the image.

Most common in natural and scientific images.
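A minimal sketch of sampling this noise model and adding it to an image, assuming NumPy is available; the image size, mean and sigma below are arbitrary illustration values:

```python
import numpy as np

# Sketch: sample zero-mean AWGN and add it to an image, following
# g(x, y) = f(x, y) + eta(x, y). Image size, mu and sigma are
# arbitrary illustration values.

rng = np.random.default_rng(0)

f = np.full((64, 64), 128.0)           # flat gray test image
mu, sigma = 0.0, 10.0
eta = rng.normal(mu, sigma, f.shape)   # Gaussian noise field
g = f + eta

# With 64*64 samples the sample statistics land close to (mu, sigma).
print(round(eta.mean(), 2), round(eta.std(), 2))
```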

2. Salt-and-Pepper Noise (Impulse Noise)

Caused by sharp and sudden disturbances (e.g., transmission errors).


Appears as random black and white pixels.

Mathematical Model:

P(η) = p_s, if η = a (salt – white pixel)
P(η) = p_p, if η = b (pepper – black pixel)
P(η) = 0, otherwise

Where p_s + p_p ≤ 1, and typically a = 255 (white) and b = 0 (black) for an 8-bit image.


Characteristics:

Sudden pixel value jumps to extremes.

Best removed using median filtering.
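The claim about median filtering can be checked on a single 3×3 window; the pixel values below are invented for illustration:

```python
# Sketch of why a median filter suits impulse noise: on a 3x3 window
# containing one pepper (0) and one salt (255) pixel, the median
# ignores both outliers while the mean is pulled badly off.

window = [10, 12, 11,
          13, 0, 12,     # 0 = pepper impulse
          11, 10, 255]   # 255 = salt impulse

mean_value = sum(window) / len(window)           # ≈ 37.1, far from ~11
median_value = sorted(window)[len(window) // 2]  # 11, a typical value

print(mean_value, median_value)
```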

3. Poisson Noise (Shot Noise)

Arises due to the quantum nature of light.


Especially prominent in low-light photography.

Mathematical Model:

Poisson noise follows the Poisson distribution:

P(η; λ) = ( λ^η · e^(−λ) ) / η!
Where:

λ: Expected value (equal to the variance)


η : Observed pixel value

Characteristics:

Signal-dependent (i.e., noise increases with signal intensity).

4. Speckle Noise

Multiplicative noise usually found in coherent imaging systems (e.g., radar, ultrasound).
Results in granular appearance.

Mathematical Model:

g(x, y) = f (x, y) + f (x, y) ⋅ η(x, y) = f (x, y)(1 + η(x, y))

Where η(x, y) is typically a zero-mean Gaussian random variable.

Characteristics:

Reduces contrast and detail in images.

Requires adaptive filtering.

5. Quantization Noise

Occurs during analog-to-digital conversion.

Introduced when continuous pixel values are rounded to discrete levels.

Mathematical Model:

η = round(f (x, y)) − f (x, y)


Characteristics:

Depends on quantization step size.

Generally uniform if signal is well-distributed.
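A tiny sketch of the model above on a few continuous sample values (the values are invented for illustration):

```python
# Sketch of quantization noise eta = round(f) - f on a few continuous
# sample values. Note Python's round() uses round-half-to-even.

samples = [12.3, 12.5, 100.49, 0.999, 200.01]
errors = [round(v) - v for v in samples]

# Each rounding error lies within half a quantization step.
assert all(abs(e) <= 0.5 for e in errors)
print(errors)
```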

📊 Summary Table

| Noise Type | Source | PDF / Model | Appearance | Best Filtering Method |
| --- | --- | --- | --- | --- |
| Gaussian | Sensor/circuit error | Normal distribution | Smooth grainy texture | Gaussian/mean filter |
| Salt-and-Pepper | Bit errors/transmission | Impulse at black & white values | Random black & white pixels | Median filter |
| Poisson | Photon shot noise | Poisson distribution | Brightness-dependent noise | Variance-stabilizing transform |
| Speckle | Coherent imaging | Multiplicative Gaussian | Granular texture | Lee or Frost filter |
| Quantization | Digitization process | Uniform distribution | Blocky or flat patches | Dithering, higher resolution |

If you need diagrams or want implementation examples using MATLAB or Python, let me know!

Describe briefly the different mean filters used in image restoration when the degradation present in
an image is noise. Explain with the mathematical prelims with examples

📷 Mean Filters in Image Restoration (When Degradation is Noise)


Mean filters are a class of linear or nonlinear filters used for noise reduction in images. They work by
replacing each pixel's value with a function of the values in its neighborhood (typically a square
kernel). These filters are especially useful when the degradation is random noise like Gaussian or salt-
and-pepper noise.

🧮 General Mathematical Expression


Given a pixel g(x, y), the filtered pixel f^(x, y) using a mean filter is:

f̂(x, y) = (1/N) · ∑_{(s,t)∈S} g(s, t)

Where:

S : Set of coordinates in the filter window (e.g., 3×3)


N : Total number of pixels in the window

📂 Types of Mean Filters (with Use-Cases and Math)
1. Arithmetic Mean Filter

Formula:

f̂(x, y) = (1/(mn)) · ∑_{i=1}^{m} ∑_{j=1}^{n} g(x + i, y + j), summed over the m×n neighborhood

Simple average of pixel values in the neighborhood.

Best for: Gaussian noise


Weakness: Blurs edges and fine details.

Example:

For a 3×3 kernel:

f̂(x, y) = (1/9) · ∑_{i=−1}^{1} ∑_{j=−1}^{1} g(x + i, y + j)

2. Geometric Mean Filter

Formula:

f̂(x, y) = [ ∏_{(s,t)∈S} g(s, t) ]^(1/N)

Multiplies the pixel values and takes the N-th root.

Best for: Gaussian noise with low intensity


Preserves detail better than arithmetic mean.

Limitation: Only works with positive pixel values.

3. Harmonic Mean Filter

Formula:

f̂(x, y) = N / ∑_{(s,t)∈S} ( 1 / g(s, t) )

Gives more weight to smaller values.


Best for: Salt noise (white dots)

Limitation: Fails with 0-valued pixels (i.e., pepper noise).

4. Contraharmonic Mean Filter

Formula:

f̂(x, y) = ∑_{(s,t)∈S} g(s, t)^(Q+1) / ∑_{(s,t)∈S} g(s, t)^Q


Q: Order of the filter

Behavior:

Q > 0: Removes pepper noise (dark impulses)

Q < 0: Removes salt noise (bright impulses)

Q = 0: Reduces to the arithmetic mean filter

Q = −1: Reduces to the harmonic mean filter

Example:

To remove pepper noise, choose Q = 1:

f̂(x, y) = ∑ g(s, t)² / ∑ g(s, t)

🧪 Example with 3×3 Neighborhood (Arithmetic Mean)


Let’s assume the neighborhood pixels of g(x, y) are:


[ 80, 82, 83 ]
[ 79, 255, 81 ] ← (255 is a salt noise pixel)
[ 80, 82, 84 ]

Arithmetic Mean:

f̂(x, y) = (80 + 82 + 83 + 79 + 255 + 81 + 80 + 82 + 84) / 9 = 906 / 9 ≈ 100.67

This result still gets influenced by the salt noise (255).

Harmonic Mean would reduce this influence.
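This comparison can be made concrete on the same window; the choice Q = −1.5 is an illustrative value (negative Q, per the contraharmonic filter, targets salt noise):

```python
# Concrete comparison on the same 3x3 window (one salt pixel at 255).
# Q = -1.5 is an illustrative choice; negative Q targets salt (bright)
# impulses in the contraharmonic mean filter.

window = [80, 82, 83, 79, 255, 81, 80, 82, 84]
N = len(window)

arithmetic = sum(window) / N                    # 906/9 ≈ 100.67
harmonic = N / sum(1 / v for v in window)       # ≈ 88.0

Q = -1.5
contraharmonic = (sum(v ** (Q + 1) for v in window)
                  / sum(v ** Q for v in window))  # ≈ 85

# Both alternatives stay much closer to the ~81 background level.
print(round(arithmetic, 2), round(harmonic, 2), round(contraharmonic, 2))
```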

🧾 Summary Table

| Filter Type | Formula (Simplified) | Best For | Notes |
| --- | --- | --- | --- |
| Arithmetic Mean | (1/N) ∑ g(s, t) | Gaussian noise | Simple but blurs image |
| Geometric Mean | ( ∏ g(s, t) )^(1/N) | Gaussian, speckle | Retains detail, only for positive pixels |
| Harmonic Mean | N / ∑ (1/g(s, t)) | Salt noise | Cannot handle zero pixels |
| Contraharmonic Mean | ∑ g^(Q+1) / ∑ g^Q | Pepper (Q > 0), Salt (Q < 0) | Versatile, parameter sensitive |

Let me know if you want to see visual examples or Python/Matlab code to implement these filters!

Describe briefly the different filters used for image restoration in the frequency domain. Explain with the
mathematical prelims with examples

📷 Filters in Image Restoration – Frequency Domain Approach


In image restoration, when degradation (like blur or noise) affects an image, frequency domain
filtering can be a powerful tool. This approach uses the Fourier Transform to modify specific frequency
components of the degraded image and recover the original image.

🧮 Mathematical Preliminaries
The degradation/restoration model in the frequency domain is:

G(u, v) = H(u, v) ⋅ F (u, v) + N (u, v)

Where:

G(u, v): Fourier Transform of degraded image


H(u, v): Degradation function (e.g., blur filter)
F (u, v): Fourier Transform of original image
N (u, v): Additive noise
Our goal is to estimate F̂(u, v), then apply the Inverse Fourier Transform:

f̂(x, y) = F⁻¹{F̂(u, v)}


🛠️ Types of Frequency Domain Filters


1. Inverse Filtering

Formula:

F̂(u, v) = G(u, v) / H(u, v)

Use:

Removes blur assuming no noise.

Direct inversion of degradation function.

Limitations:

Very sensitive to noise.


Not reliable when H(u, v) is close to zero.

Example:

For a blurred image with motion blur H(u, v), apply inverse filter:

F̂(u, v) = G(u, v) / H(u, v)  ⇒  f̂(x, y) = F⁻¹{F̂(u, v)}

2. Wiener Filtering (Minimum Mean Square Error Filter)

Formula:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + S_N(u, v)/S_F(u, v) ) ] · G(u, v)

Where:

H*(u, v): Complex conjugate of H(u, v)

S_N(u, v): Power spectrum of the noise

S_F(u, v): Power spectrum of the original image


Use:

Handles both blur and noise.


Balances between inverse filtering and noise suppression.

Simplified Version:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + K ) ] · G(u, v)

Where K = S_N(u, v) / S_F(u, v), treated as a constant when the spectra are not known.

3. Homomorphic Filtering

Used to enhance contrast and suppress multiplicative noise (e.g., illumination changes).

Steps:

1. Take log of image to convert multiplicative model to additive:

ln[f (x, y)] = ln[i(x, y)] + ln[r(x, y)]

2. Apply Fourier Transform


3. Apply high-pass filter to emphasize reflectance

4. Apply Inverse Fourier and exponential

Applications:

Uneven illumination correction

Medical imaging

4. Butterworth & Gaussian Filters (Low/High/Notch Pass)

Used in frequency domain to pass/block certain frequencies.

(a) Gaussian Low-Pass Filter:

H(u, v) = exp( −D²(u, v) / (2D₀²) )

Removes high-frequency noise

D(u, v): Distance from origin


D0 : Cut-off frequency

(b) Butterworth Low-Pass Filter:

H(u, v) = 1 / [ 1 + ( D(u, v) / D₀ )^(2n) ]

Smoother transition compared to Ideal LPF

(c) High-Pass:

H_HP(u, v) = 1 − H_LP(u, v)

Used for sharpening and edge enhancement.

🧪 Example Workflow: Image Restoration with Wiener Filter


1. Convert image to frequency domain: G(u, v) = F{g(x, y)}
2. Assume known H(u, v), estimate K
3. Compute F̂(u, v) using the Wiener formula
4. Restore the image using the inverse FFT: f̂(x, y) = F⁻¹{F̂(u, v)}
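The workflow above can be sketched end-to-end with NumPy on a synthetic example; the image size, blur width, and K are illustrative assumptions, and the blur is applied in the frequency domain so it matches the degradation model exactly:

```python
import numpy as np

# End-to-end sketch of the Wiener workflow: blur a test image with a
# known Gaussian H(u, v) (circular blur, matching the model exactly),
# then restore with the constant-K Wiener formula.

rng = np.random.default_rng(1)
f = rng.random((64, 64))                    # stand-in "original" image

# Steps 1-2: build H(u, v) and the degraded spectrum G = H * F.
u = np.fft.fftfreq(64)[:, None]
v = np.fft.fftfreq(64)[None, :]
H = np.exp(-(u ** 2 + v ** 2) / (2 * 0.05 ** 2))   # Gaussian blur
F = np.fft.fft2(f)
G = H * F
g = np.real(np.fft.ifft2(G))                # blurred spatial image

# Step 3: Wiener estimate with a small constant K.
K = 1e-3
F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G

# Step 4: back to the spatial domain.
f_hat = np.real(np.fft.ifft2(F_hat))

mse_blurred = np.mean((g - f) ** 2)
mse_restored = np.mean((f_hat - f) ** 2)
assert mse_restored < mse_blurred           # restoration reduces error
```

With noise added to G, K should be raised accordingly; it trades noise amplification against sharpness.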

📋 Summary Table

| Filter Type | Handles Blur | Handles Noise | Formula | Remarks |
| --- | --- | --- | --- | --- |
| Inverse Filter | ✅ | ❌ | G/H | Simple but noise-sensitive |
| Wiener Filter | ✅ | ✅ | See above | Balances noise and blur |
| Homomorphic Filter | ❌ | ✅ (illumination) | Log, HPF | Improves contrast |
| Gaussian Filter | ❌ | ✅ | e^(−D²/(2D₀²)) | Smooth filtering |
| Butterworth Filter | ❌ | ✅ | 1/(1 + (D/D₀)^(2n)) | Smooth transition |

Let me know if you'd like a code example (Python/Matlab) or a visual explanation using Fourier
transform!

What are band reject filters, band pass filters and notch filters? Explain briefly with mathematical
prelims

🎛️ Band Reject Filters, Band Pass Filters, and Notch Filters in Image Processing
These filters are designed to suppress or enhance specific ranges of frequencies in the frequency
domain of an image. They are particularly useful in removing periodic noise and enhancing or
attenuating certain features.

🧮 Mathematical Preliminaries
In frequency domain filtering, an image f (x, y) is transformed using the Fourier Transform:

F (u, v) = F{f (x, y)}

Filtering is then applied:

G(u, v) = H(u, v) ⋅ F (u, v)

Finally, the result is brought back to the spatial domain:

g(x, y) = F −1 {G(u, v)}

Where:

H(u, v): Filter transfer function


F (u, v): Fourier transform of the original image
G(u, v): Filtered image in frequency domain

1. 🟤 Band Reject Filter (BRF)
➤ Purpose:

Removes a range (band) of frequencies, typically to eliminate periodic noise.

➤ Formula (Ideal Band Reject Filter):

H(u, v) = 0, if D₁ ≤ D(u, v) ≤ D₂
H(u, v) = 1, otherwise

Where:

D(u, v) = √( (u − u₀)² + (v − v₀)² ): distance from the center (u₀, v₀) of the frequency plane

D₁ and D₂: Inner and outer radii of the rejected band

➤ Smoothed Versions:

Butterworth BRF:

H(u, v) = 1 / [ 1 + ( D(u, v)·W / (D²(u, v) − D₀²) )^(2n) ]

Gaussian BRF:

H(u, v) = 1 − exp[ −( (D²(u, v) − D₀²) / (D(u, v)·W) )² / 2 ]

Where W is the width of the rejected band and D₀ its center radius.


2. 🟢 Band Pass Filter (BPF)


➤ Purpose:

Allows only a specific band of frequencies to pass through; the opposite of a band reject filter.

➤ Formula:

H_BPF(u, v) = 1 − H_BRF(u, v)

➤ Applications:

Enhancing features within a frequency range (e.g., texture enhancement, ridge detection)

3. 🔹 Notch Filters
➤ Purpose:

Reject specific frequency points or patterns (not entire bands). Often used to remove discrete
periodic noise (e.g., electrical interference).

➤ Formula (Ideal Notch Reject Filter for one point (u_k, v_k)):

H(u, v) = 0, if √( (u − u_k)² + (v − v_k)² ) ≤ r
H(u, v) = 1, otherwise

Where:

r: Radius around the noisy frequency point


Multiple notch filters can be combined to remove several frequencies.
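The construction above can be sketched with NumPy; the spectrum size, notch locations, and radius below are invented for illustration:

```python
import numpy as np

# Sketch: build an ideal notch-reject mask H(u, v) that zeroes small
# disks around each noisy frequency (u_k, v_k) and its conjugate
# partner (-u_k, -v_k), in a centred-spectrum coordinate system.

M = N = 64
u = np.arange(M)[:, None] - M // 2        # centred coordinates
v = np.arange(N)[None, :] - N // 2

def notch_reject(points, radius):
    """Ideal notch-reject filter for symmetric point pairs."""
    H = np.ones((M, N))
    for (uk, vk) in points:
        for (cu, cv) in [(uk, vk), (-uk, -vk)]:
            D = np.sqrt((u - cu) ** 2 + (v - cv) ** 2)
            H[D <= radius] = 0.0          # reject a small disk
    return H

H = notch_reject([(10, 0), (0, 15)], radius=3)

# DC (spectrum centre) is untouched; notch centres are zeroed.
assert H[M // 2, N // 2] == 1.0
assert H[M // 2 + 10, N // 2] == 0.0
```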

➤ Butterworth Notch Reject Filter:

H(u, v) = ∏_{k=1}^{K} 1 / [ 1 + ( D₀² / ( D_k(u, v) · D_{−k}(u, v) ) )^n ]

Where:

D_k(u, v): distance from (u_k, v_k)

D_{−k}(u, v): distance from (−u_k, −v_k)

📋 Summary Table

| Filter Type | Purpose | Formula/Approach | Application Example |
| --- | --- | --- | --- |
| Band Reject | Removes a frequency band | 0 for D₁ ≤ D ≤ D₂ | Removing periodic ring noise |
| Band Pass | Passes a frequency band | 1 − H_BRF | Texture or detail enhancement |
| Notch Filter | Removes specific frequencies | 0 near specific (u_k, v_k) | Removing narrowband interference |

📷 Visual Example (Conceptual)


Imagine a frequency spectrum:

Center = low frequencies


Outer = high frequencies

Bright spots at certain points = periodic noise

A notch filter places “holes” at these points to suppress them.

Let me know if you'd like a graphical explanation or code (Python/Matlab) to design and apply these
filters!

Discuss briefly about inverse filtering with its types and their mathematical prelims

🔄 Inverse Filtering in Image Restoration


Inverse filtering is a frequency domain technique used in image restoration to reverse the effect of
image degradation, assuming the degradation function is known.

📸 1. Image Degradation Model


The general model of image degradation in the frequency domain is:

G(u, v) = H(u, v) ⋅ F (u, v) + N (u, v)

Where:

G(u, v): Fourier Transform of the degraded image


H(u, v): Degradation function (e.g., blur function)
F (u, v): Fourier Transform of the original image
N (u, v): Additive noise (optional)

🔁 2. Basic Inverse Filtering


Assuming no noise (N (u, v) = 0), we try to recover the original image:
F̂(u, v) = G(u, v) / H(u, v)

Then apply the inverse Fourier Transform:

f̂(x, y) = F⁻¹{F̂(u, v)}


⚠️ Limitations of Basic Inverse Filtering:


Very sensitive to noise: If H(u, v) is close to zero, division causes amplification of noise.

Unstable: Even small noise leads to large errors.

🔢 3. Types of Inverse Filtering


🟢 A. Ideal Inverse Filter
Assumes perfect knowledge of H(u, v)

Formula:

F̂(u, v) = G(u, v) / H(u, v)

Drawback: Not practical with noise or zeros in H(u, v).

🟡 B. Truncated Inverse Filtering (Constrained Inverse Filter)


To reduce noise amplification, we truncate the inverse filter to a specific frequency region:

F̂(u, v) = G(u, v) / H(u, v), if D(u, v) ≤ D₀
F̂(u, v) = G(u, v), otherwise


D(u, v): Distance from the origin in frequency space


D0 : Cutoff frequency

This avoids division where H(u, v) is small.

🔵 C. Pseudo Inverse Filter


When noise is present, a modified inverse filter is used to suppress the noise effect.

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + ϵ ) ] · G(u, v)

ϵ: Small constant to prevent division by zero

Note: This begins to resemble Wiener filtering.
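The role of ϵ can be shown with a single-bin toy calculation (all numbers invented for illustration):

```python
# Single-bin sketch of the pseudo-inverse filter: the constant eps
# prevents the blow-up where H is near zero. Numbers are invented.

def pseudo_inverse_bin(G, H, eps):
    # F_hat = H* / (|H|^2 + eps) * G, with H real-valued here
    return (H / (H * H + eps)) * G

F = 10.0              # true value at this bin
H = 0.001             # near-zero degradation response
G = H * F + 0.1       # observed bin (0.1 is additive noise)

naive = G / H                               # ideal inverse: 110.0
damped = pseudo_inverse_bin(G, H, eps=0.01)

# The pseudo-inverse no longer recovers F at this bin, but it stays
# bounded instead of amplifying the noise by orders of magnitude.
print(naive, damped)
```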

🧠 4. Summary Table

| Type of Inverse Filter | Handles Noise | Formula | Remarks |
| --- | --- | --- | --- |
| Ideal Inverse Filter | ❌ | F̂(u, v) = G(u, v)/H(u, v) | Sensitive to zero/low values of H |
| Truncated Inverse Filter | ⚠️ Partially | Inverse only up to cutoff frequency D₀ | Reduces high-frequency noise |
| Pseudo Inverse Filter | ✅ | F̂(u, v) = H*(u, v)·G(u, v)/(|H(u, v)|² + ϵ) | Approaches Wiener filtering |

📌 Example Use Case


Given a motion-blurred image with known blur kernel H(u, v):

Apply Fourier Transform to the image → G(u, v)

Use inverse filtering method (basic, truncated, or pseudo)

Apply Inverse Fourier Transform → restored image

Would you like a graphical example or Python/Matlab code to see inverse filtering in action?

Discuss briefly minimum mean square error filtering. Explain the mathematical prelims

📉 Minimum Mean Square Error (MMSE) Filtering in Image Restoration


Minimum Mean Square Error (MMSE) filtering is an advanced technique used in image restoration to
minimize the mean square error (MSE) between the original image and the restored image. It is more
effective than inverse filtering because it takes both degradation and noise into account.

🎯 Objective
To estimate the original image F (u, v) from a degraded and noisy observation G(u, v), such that the
expected mean square error is minimized:

Minimize  E[ |F(u, v) − F̂(u, v)|² ]

🧮 1. Image Degradation Model


The frequency-domain degradation model is:

G(u, v) = H(u, v) ⋅ F (u, v) + N (u, v)

Where:

G(u, v): Fourier transform of degraded image


H(u, v): Degradation function (e.g., blur)
F (u, v): Fourier transform of original image
N (u, v): Additive noise

📐 2. MMSE Filtering Formula


The MMSE estimate of F (u, v) is given by:

F̂(u, v) = [ H*(u, v) · S_f(u, v) / ( |H(u, v)|² · S_f(u, v) + S_n(u, v) ) ] · G(u, v)

Where:

H ∗ (u, v): Complex conjugate of H(u, v)


Sf (u, v): Power spectral density (PSD) of the original image

Sn (u, v): Power spectral density (PSD) of the noise


F^ (u, v): Estimated original image in frequency domain

⚖️ 3. Special Case: Wiener Filter
If both S_f(u, v) and S_n(u, v) are constant (or their ratio is assumed known), the MMSE filter becomes the Wiener filter:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + S_n(u, v)/S_f(u, v) ) ] · G(u, v)

This is often simplified as:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + K ) ] · G(u, v)

Where K = noise variance / signal variance.

🧠 4. Key Points of MMSE Filter


| Feature | Description |
| --- | --- |
| Robust to noise | Yes – explicitly considers noise |
| Requires | Knowledge or estimate of S_f and S_n |
| Improves over | Inverse and pseudo-inverse filters |
| Type | Linear estimator minimizing MSE |

📷 Example Application
Given a motion-blurred image with Gaussian noise:

Estimate or assume S_f(u, v) and S_n(u, v)

Compute the MMSE filter H_MMSE(u, v)

Multiply with G(u, v), then inverse Fourier transform to get the restored image.

✅ Advantages of MMSE Filter


Balances noise suppression and image fidelity
Better performance under noisy and degraded conditions

More general and adaptive than inverse or Wiener filtering

Let me know if you'd like a visual or code example in Python or MATLAB!

Discuss briefly about Constrained and Unconstrained restoration process in image processing

🛠️ Constrained vs. Unconstrained Image Restoration in Image Processing
Image restoration involves recovering the original image from a degraded version using mathematical
models. Restoration techniques can be broadly classified into constrained and unconstrained methods
depending on whether prior knowledge or constraints are applied during restoration.

🔓 1. Unconstrained Restoration
➤ Definition

In unconstrained restoration, the objective is to recover the original image by minimizing a specific
error metric (usually Mean Square Error) without applying any additional constraints on the solution.

📐 Mathematical Formulation
Given:

G(u, v) = H(u, v)F (u, v) + N (u, v)

Unconstrained restoration aims to minimize:


min_{F̂(u,v)} ‖ G(u, v) − H(u, v) F̂(u, v) ‖²

🧰 Examples:
Inverse filtering
Wiener filtering

Minimum Mean Square Error (MMSE) filtering

✅ Pros:
Simpler to compute
Useful when degradation model is known and noise is minimal

❌ Cons:
Sensitive to noise
May result in unphysical or unstable solutions (e.g., negative intensities, artifacts)

🔐 2. Constrained Restoration
➤ Definition

In constrained restoration, the restoration process minimizes an error metric while satisfying
certain constraints (e.g., smoothness, non-negativity, energy limits, or prior knowledge of the image).

📐 Mathematical Formulation
min_{F̂} ‖ G − H F̂ ‖²   subject to:  C(F̂) ≤ δ

Where:

C(F^ ): constraint function (e.g., total variation, edge energy, prior image structure)
δ : permissible bound for the constraint

🧰 Common Constraints:
Smoothness constraint: Avoid abrupt changes (using Laplacian or gradient terms)
Non-negativity: Image intensities must be ≥ 0
Bounded energy: Total image energy must be limited

📌 Methods Used:
Constrained Least Squares Filtering

Regularization techniques (e.g., Tikhonov regularization)


Maximum a Posteriori (MAP) estimation

🔍 Constrained Least Squares Filter Example


Objective:

min_{F̂} ‖ G − H F̂ ‖² + γ ‖ C F̂ ‖²

C : High-pass operator (like Laplacian)


γ : Regularization parameter

Frequency domain solution:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ |C(u, v)|² ) ] · G(u, v)
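The frequency-domain CLS solution can be sketched with NumPy, using the Laplacian kernel for C; the image size, blur width, and γ are illustrative assumptions:

```python
import numpy as np

# Sketch of building the CLS restoration filter in the frequency
# domain, with the 3x3 Laplacian kernel as the smoothness operator C.

M = N = 64

# C(u, v): FFT of the Laplacian kernel zero-padded to image size.
lap = np.zeros((M, N))
lap[:3, :3] = [[0, 1, 0],
               [1, -4, 1],
               [0, 1, 0]]
C = np.fft.fft2(lap)

# H(u, v): a simple Gaussian blur as the degradation function.
u = np.fft.fftfreq(M)[:, None]
v = np.fft.fftfreq(N)[None, :]
H = np.exp(-(u ** 2 + v ** 2) / (2 * 0.1 ** 2))

gamma = 0.01
W_cls = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
W_inv = np.conj(H) / np.abs(H) ** 2          # gamma = 0: plain inverse

# The regularizer only ever damps the gain, never boosts it.
assert np.all(np.abs(W_cls) <= np.abs(W_inv) + 1e-12)
```

Multiplying W_cls by G(u, v) and applying the inverse FFT yields the restored image f̂.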

🧠 Comparison Table

| Feature | Unconstrained Restoration | Constrained Restoration |
| --- | --- | --- |
| Constraints applied | ❌ No constraints | ✅ Yes, constraints on solution |
| Noise handling | Poor if noise is high | Better at suppressing noise |
| Complexity | Lower | Higher (requires optimization techniques) |
| Stability of solution | Can be unstable | More stable due to constraints |
| Examples | Inverse, Wiener, MMSE | CLS filtering, MAP estimation |

✅ Summary
Unconstrained restoration is simpler but less robust.
Constrained restoration introduces prior knowledge or physical constraints for better results,
especially in noisy environments.
The choice between the two depends on the available information and the nature of the
degradation.

Let me know if you'd like a practical example or implementation details!

Discuss in detail about homomorphic filtering in the context of Image restoration

💡 Homomorphic Filtering in Image Restoration – Detailed Explanation


🧠 1. Introduction
Homomorphic filtering is a powerful technique in image processing used to:

Simultaneously normalize brightness (illumination)


Enhance contrast (reflectance)
And restore images degraded by poor lighting or multiplicative noise.

It is non-linear in the spatial domain, but becomes linear in the log-frequency domain, allowing
restoration techniques to be applied more effectively.

🌄 2. The Image Formation Model


In many real-world scenarios, an image f (x, y) is formed as a multiplicative combination of:

Illumination i(x, y): low-frequency component (slowly varying)

Reflectance r(x, y): high-frequency component (edges, details)

f (x, y) = i(x, y) ⋅ r(x, y)

🔁 3. Transforming to Additive Form (Log Domain)


Since direct filtering on multiplicative components is difficult, we take the logarithm to convert the
multiplicative model to an additive one:

ln[f (x, y)] = ln[i(x, y)] + ln[r(x, y)]

Let:

z(x, y) = ln[f (x, y)] = l(x, y) + s(x, y)

Where:

l(x, y) = ln[i(x, y)]


s(x, y) = ln[r(x, y)]

🔄 4. Frequency Domain Filtering


➤ Step-by-step process:

1. Take logarithm of the image:

z(x, y) = ln[f (x, y)]

2. Apply Fourier Transform:

Z(u, v) = F{z(x, y)}

3. Apply High-Pass Filter (HPF) in frequency domain to suppress low frequencies (illumination) and
boost high frequencies (reflectance):

S(u, v) = H(u, v) ⋅ Z(u, v)

4. Apply Inverse Fourier Transform:

s′ (x, y) = F −1 {S(u, v)}

5. Exponentiate the result to reverse the log operation:

f^(x, y) = exp[s′ (x, y)]
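The five steps can be sketched with NumPy on a synthetic image f = i · r; the filter shape and the parameters (γ_L, γ_H, D₀) are illustrative assumptions:

```python
import numpy as np

# Sketch of the five homomorphic steps on a synthetic image: a smooth
# illumination ramp i times a high-frequency reflectance pattern r.

M = N = 64
x = np.linspace(0, 1, M)[:, None]
y = np.linspace(0, 1, N)[None, :]
i = 0.2 + 0.8 * x                      # slowly varying illumination
r = 1.0 + 0.5 * np.sin(40 * y)        # high-frequency reflectance
f = i * r

# Steps 1-2: log transform, then FFT.
Z = np.fft.fft2(np.log(f))

# Step 3: high-frequency emphasis filter H(u, v).
u = np.fft.fftfreq(M)[:, None]
v = np.fft.fftfreq(N)[None, :]
D2 = u ** 2 + v ** 2
gamma_L, gamma_H, D0 = 0.5, 1.5, 0.05
Hf = (gamma_H - gamma_L) * (1 - np.exp(-D2 / (2 * D0 ** 2))) + gamma_L

# Steps 4-5: inverse FFT, then exponentiate.
s = np.real(np.fft.ifft2(Hf * Z))
f_hat = np.exp(s)

# The illumination ramp across rows should be flattened.
before = np.log(f).mean(axis=1).std()
after = np.log(f_hat).mean(axis=1).std()
assert after < before
```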


⚙️ 5. Homomorphic Filter Design


A typical filter used in homomorphic filtering is a high-pass filter:

H(u, v) = (γ_H − γ_L) · [ 1 − e^( −c · D²(u, v) / D₀² ) ] + γ_L

Where:

γH > 1: Gain for high frequencies (reflectance)


γL < 1: Gain for low frequencies (illumination)


D(u, v): Distance from center of frequency domain


D0 : Cutoff frequency

c: Sharpness control

🎯 6. Applications
Non-uniform illumination correction
Medical imaging (e.g., X-rays)
Satellite imagery
Fingerprint/image preprocessing

📸 7. Example
Original Image:

Dark image with uneven lighting.

Steps:

1. Apply log transform


2. Use high-pass filter in frequency domain
3. Apply inverse Fourier transform
4. Use exponential transform

Result:

Illumination normalized
Edges and details enhanced

✅ 8. Advantages
Feature Benefit

Handles multiplicative noise Converts to additive model in log domain


Simultaneous enhancement Corrects lighting + enhances contrast
Frequency-based control Precisely targets desired components

❌ 9. Limitations
Requires careful parameter tuning (γ_H, γ_L, D₀, c)

Not suitable for purely additive noise


May amplify noise if over-boosted

📌 Summary

| Component | Role |
| --- | --- |
| Illumination | Low-frequency, undesired variations |
| Reflectance | High-frequency, desired details |
| Goal | Suppress illumination, enhance reflectance |
| Technique | Log + frequency filter + inverse transform |

Would you like a Python or MATLAB implementation of homomorphic filtering?

Discuss different types of homomorphic filters

💡 Different Types of Homomorphic Filters in Image Processing


Homomorphic filtering is a frequency domain technique used to correct non-uniform lighting and
enhance contrast by separating the illumination and reflectance components of an image. The
technique operates in the logarithmic domain, treating the image as a product of illumination (low
frequency) and reflectance (high frequency), converting it into an additive form suitable for linear
filtering.

🧠 What Is a Homomorphic Filter?


A homomorphic filter is typically a high-pass filter designed to:

Suppress low-frequency components (illumination),


Amplify high-frequency components (reflectance).

In general, these filters are designed in the frequency domain and applied to the log-transformed
image.

📦 Types of Homomorphic Filters


Homomorphic filters are based on the shape of their frequency response. Common types include:

1. Ideal High-Pass Homomorphic Filter

📐 Frequency Domain Equation:


H(u, v) = γ_H, if D(u, v) > D₀
H(u, v) = γ_L, if D(u, v) ≤ D₀

γ_H > 1: Amplification for high frequencies

γ_L < 1: Attenuation for low frequencies

D₀: Cutoff frequency

D(u, v): Distance from center of frequency domain

✅ Pros:
Simple to design and implement

❌ Cons:
Sharp transitions → Ringing artifacts

2. Butterworth High-Pass Homomorphic Filter

📐 Frequency Domain Equation:


H(u, v) = (γ_H − γ_L) · [ 1 / ( 1 + ( D₀ / D(u, v) )^(2n) ) ] + γ_L
n: Filter order (controls sharpness of transition)

✅ Pros:
Smooth transition between low and high frequencies
More control over sharpness with parameter n

❌ Cons:
Slightly complex compared to ideal filter

3. Gaussian High-Pass Homomorphic Filter

📐 Frequency Domain Equation:


H(u, v) = (γ_H − γ_L) · [ 1 − e^( −D²(u, v) / (2D₀²) ) ] + γ_L

Provides a very smooth transition from low to high frequency

✅ Pros:
No ringing artifacts
Smooth and natural enhancement
Best for photographic and medical images

❌ Cons:
Slightly less control than Butterworth (no order parameter)

📊 Comparison Table

| Filter Type | Sharpness | Transition Smoothness | Artifact Risk | Parameters |
|---|---|---|---|---|
| Ideal HPF | Very Sharp | Abrupt | High (ringing) | D0 |
| Butterworth HPF | Moderate | Moderate | Moderate | D0, n |
| Gaussian HPF | Smooth | Very Smooth | Low | D0 |

⚙️ How to Use Homomorphic Filters


1. Convert image to grayscale (if needed)
2. Take logarithm: log(f(x, y))
3. Apply Fourier Transform
4. Multiply by homomorphic filter
5. Apply inverse Fourier Transform
6. Take exponential to reverse the log
7. Normalize the output
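The steps above can be sketched with NumPy alone, using the Gaussian variant for the mask; the function name and default parameters (γL = 0.5, γH = 2.0, D0 = 30) are illustrative assumptions:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    """Steps 2-7 above with a Gaussian high-pass homomorphic mask."""
    f = img.astype(np.float64)
    z = np.log1p(f)                               # step 2: log (log1p handles 0)
    F = np.fft.fftshift(np.fft.fft2(z))           # step 3: centered FFT
    rows, cols = f.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D2 = U.astype(float) ** 2 + V.astype(float) ** 2
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0**2))) + gamma_l
    G = H * F                                     # step 4: apply the mask
    g = np.real(np.fft.ifft2(np.fft.ifftshift(G)))  # step 5: inverse FFT
    out = np.expm1(g)                             # step 6: undo the log
    out = out - out.min()                         # step 7: normalize to [0, 255]
    if out.max() > 0:
        out = out / out.max()
    return (out * 255).astype(np.uint8)
```

A production version would typically use OpenCV or pad the image before the FFT; this sketch keeps the pipeline visible step by step.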

📌 Applications of Homomorphic Filtering


Image enhancement under poor lighting
Satellite image processing
Medical imaging (X-rays, MRIs)
Fingerprint enhancement

Document image binarization

✅ Conclusion
Homomorphic filtering is a powerful tool for enhancing images with uneven lighting and poor
contrast. Choosing the right type of filter (Ideal, Butterworth, or Gaussian) depends on the application,
desired smoothness, and tolerance for artifacts.

Let me know if you'd like a Python or MATLAB implementation example of any of these filters!

Explain in detail about geometric transformation with mathematical prelims

📐 Geometric Transformation in Image Processing — Detailed Explanation with
Mathematical Prelims

🧠 1. Introduction
Geometric Transformation refers to the process of modifying the spatial relationship between pixels
in an image. It alters the geometry or structure of an image, such as its position, orientation, scale, or
shape, without modifying the image's pixel intensities.
It is widely used in:

Image registration
Image rectification
Scaling and rotation

Morphing and warping

🔢 2. Mathematical Representation of Geometric Transformation


Any geometric transformation maps an input image f (x, y) to an output image g(x′ , y ′ ) using a
transformation function:

(x′ , y ′ ) = T (x, y)

Where:

(x, y): coordinates in the original image


(x′ , y ′ ): coordinates in the transformed image
T : transformation function (could be linear or nonlinear)

To apply a transformation, we often use a two-step approach:

1. Transformation: Change coordinate positions.


2. Interpolation: Estimate intensity values at new coordinates.

🧩 3. Basic Geometric Transformations


A. Translation

Shifts the image position in the coordinate space.

x′ = x + tx, y′ = y + ty

Where tx and ty are translation distances.

B. Scaling
Resizes the image (enlarges or shrinks).

x′ = sx ⋅ x, y′ = sy ⋅ y

Where sx, sy are scale factors along x and y.

C. Rotation

Rotates the image about the origin (or any pivot point).

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

Where θ is the angle of rotation (in radians).

D. Shearing

Slants the shape of the image.

Horizontal shear:

x′ = x + a ⋅ y, y′ = y

Vertical shear:

x′ = x, y′ = y + b ⋅ x

🧮 4. Affine Transformation
An affine transformation is a linear mapping method that preserves points, straight lines, and planes.
It combines translation, scaling, rotation, and shearing.

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The matrix is called the Affine Transformation Matrix.
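A minimal sketch of composing an affine matrix from its elementary parts in homogeneous coordinates, assuming NumPy (function names and argument order are illustrative):

```python
import numpy as np

def affine(theta=0.0, sx=1.0, sy=1.0, tx=0.0, ty=0.0):
    """Scale, then rotate, then translate, combined into one 3x3 matrix."""
    S = np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]], dtype=float)
    T = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)
    return T @ R @ S   # matrices apply right-to-left: scale -> rotate -> translate

def apply_affine(M, x, y):
    """Map a single point (x, y) through the homogeneous matrix M."""
    xp, yp, _ = M @ np.array([x, y, 1.0])
    return xp, yp
```

Because each elementary transform is a matrix, any sequence of them collapses into a single matrix multiply, which is why the affine form is so convenient in practice.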

🔄 5. Inverse Mapping (Backward Mapping)


To generate the output image, we apply inverse mapping from output to input:

(x, y) = T⁻¹(x′, y′)

Why?

Forward mapping may miss pixels (leaves gaps).


Inverse mapping ensures each output pixel gets a value (using interpolation if necessary).
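A minimal backward-mapping loop, assuming an invertible 3×3 homogeneous transform and nearest-neighbor sampling; the function name is illustrative, and a real implementation would vectorize this rather than loop per pixel:

```python
import numpy as np

def warp_backward(img, M):
    """Fill each output pixel by mapping it through M^-1 into the input."""
    Minv = np.linalg.inv(M)
    rows, cols = img.shape
    out = np.zeros_like(img)
    for yp in range(rows):            # row index = y, column index = x
        for xp in range(cols):
            x, y, _ = Minv @ np.array([xp, yp, 1.0])
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < cols and 0 <= yi < rows:   # sample only inside the input
                out[yp, xp] = img[yi, xi]
    return out
```

Every output pixel gets exactly one lookup, so no gaps appear, which is the advantage over forward mapping described above.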

🎯 6. Interpolation Techniques
Since transformed coordinates may not map to exact pixel locations:

Nearest Neighbor: Pick closest pixel value


Bilinear Interpolation: Weighted average of 4 neighboring pixels
Bicubic Interpolation: Weighted average of 16 neighboring pixels for smooth results

🔧 7. Higher-Order Transformations
A. Projective (Perspective) Transformation

Allows non-parallel lines to converge (used in 3D-to-2D projections):

$$\begin{bmatrix} x'' \\ y'' \\ w \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad (x', y') = \left( \frac{x''}{w}, \frac{y''}{w} \right)$$

B. Non-linear (Elastic/Morphing)

Used for warping or morphing applications, often based on splines or mesh-based deformation.

📷 8. Applications of Geometric Transformations


| Application | Purpose |
|---|---|
| Image Registration | Aligning two or more images |
| Medical Imaging | Align MRI/CT scans |
| Object Recognition | Normalize object orientation/size |
| Augmented Reality | Project graphics correctly onto real-world scenes |
| Satellite Imaging | Correct perspective and distortions |

✅ Conclusion
Geometric transformations are fundamental in image processing and computer vision, allowing us to
manipulate, align, or enhance images by changing the spatial relationship of pixels while preserving or
estimating intensity information using interpolation.

Would you like a diagram or code (Python/OpenCV) example to visualize these transformations?

Discuss briefly about spatial transformations

📍 Spatial Transformations in Image Processing – A Brief Overview


🔎 What Are Spatial Transformations?
Spatial transformations refer to operations that modify the spatial position of pixels in an image.
Unlike transformations in the frequency domain (which deal with pixel values in terms of sine and cosine
components), spatial transformations work directly on pixel coordinates and values.
In short, they answer the question:

“Where should this pixel be placed in the output image?”

🧠 Key Characteristics of Spatial Transformations


Operate in the spatial domain (image space)
Modify location and arrangement of pixels

Preserve or interpolate intensity values


Essential for alignment, correction, and enhancement

🔢 Mathematical Preliminaries
Spatial transformation is a mapping from one 2D coordinate system to another:

(x′ , y ′ ) = T (x, y)

Where:

(x, y): input pixel coordinates


(x′ , y ′ ): output pixel coordinates
T : transformation function (can be linear or nonlinear)

Inverse mapping is commonly used to determine the input position for each output pixel:

(x, y) = T⁻¹(x′, y′)

Interpolation is applied when (x, y) does not fall exactly on the pixel grid.

📦 Types of Spatial Transformations


1. Translation

Shifts the entire image by a certain amount.

x′ = x + tx, y′ = y + ty

2. Scaling

Resizes the image by scaling the coordinates.

x′ = sx ⋅ x, y′ = sy ⋅ y

3. Rotation

Rotates the image around the origin or a point.

x′ = x cos θ − y sin θ, y ′ = x sin θ + y cos θ

4. Shearing

Slants the image either horizontally or vertically.

x′ = x + a ⋅ y, y′ = y + b ⋅ x

5. Affine Transformation

Combines rotation, translation, scaling, and shearing.

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

6. Projective (Perspective) Transformation

Handles 3D-to-2D mapping with depth distortion.

$$(x', y') = \left( \frac{a_1 x + a_2 y + a_3}{c_1 x + c_2 y + 1}, \ \frac{b_1 x + b_2 y + b_3}{c_1 x + c_2 y + 1} \right)$$

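The perspective divide behind this mapping can be sketched in a few lines, assuming a 3×3 homography matrix H (the function name is illustrative):

```python
import numpy as np

def project(H, x, y):
    """Map (x, y) through a 3x3 homography, then divide by w."""
    xh, yh, w = H @ np.array([x, y, 1.0])
    return xh / w, yh / w
```

When the bottom row of H is (0, 0, 1), w stays 1 and the mapping reduces to an affine transform; nonzero h31 or h32 introduces the depth-dependent foreshortening.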
🧮 Interpolation Methods (Post-transformation)


Since output pixel coordinates may not map exactly:

Nearest-neighbor interpolation
Bilinear interpolation
Bicubic interpolation

📸 Applications of Spatial Transformations


| Application | Purpose |
|---|---|
| Image alignment | Register images taken from different times/views |
| Object recognition | Normalize object orientation/scale |
| Medical image analysis | Align MRI, CT scans |
| Augmented reality | Place digital objects accurately |
| Image enhancement | Correct geometric distortions |

✅ Conclusion
Spatial transformations are essential tools in image processing for modifying the geometric structure
of images. They enable image registration, resizing, rotation, and correction of distortions.
Understanding spatial transformations is fundamental for tasks in computer vision, robotics, medical
imaging, and more.
Would you like an example implementation using Python and OpenCV?

What is gray level interpolation . Explain in detail

🖤 Gray Level Interpolation – Detailed Explanation


📌 What is Gray Level Interpolation?
Gray Level Interpolation refers to the estimation of pixel intensity (gray level) values at non-integer
(non-grid) coordinates in an image.
This typically arises during:

Geometric transformations (like rotation, scaling, translation)


Resampling

Image registration

Since transformed pixel coordinates often don't fall exactly on the original image grid, gray level
interpolation is used to determine the intensity values at these fractional coordinates.

🎯 Why Is It Needed?
Consider applying a transformation like rotation to an image. After transforming the coordinates, you
might get a point like:

(x, y) = (14.2, 19.6)

But pixel intensities are only defined at integer coordinates like (14, 19), (15, 20), etc.
Hence, we interpolate the gray level at (14.2, 19.6) from the nearby known pixel values.

🔢 Common Interpolation Methods


1. Nearest Neighbor Interpolation

Takes the value of the closest pixel.


Fastest but produces blocky or jagged edges.

I(x, y) = I(round(x), round(y))

2. Bilinear Interpolation

Uses 4 neighboring pixels.


Assumes the pixel value changes linearly in both directions.
Provides smoother results than nearest neighbor.

🔣 Formula:
Let (x, y) lie between 4 pixels:

Q11 = I(i, j)

Q12 = I(i, j + 1)

Q21 = I(i + 1, j)

Q22 = I(i + 1, j + 1)

Let:

x = i + α, y = j + β, where 0 ≤ α, β < 1

I(x, y) = (1 − α)(1 − β)Q11 + α(1 − β)Q21 + (1 − α)βQ12 + αβQ22

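The bilinear formula can be implemented directly, assuming a NumPy array indexed as I[i, j] with x along rows and y along columns to match the notation above (the function name is illustrative, and it is valid only for interior points):

```python
import numpy as np

def bilinear(I, x, y):
    """Bilinear interpolation of the gray level at fractional (x, y)."""
    i, j = int(np.floor(x)), int(np.floor(y))
    a, b = x - i, y - j                       # alpha, beta in [0, 1)
    Q11, Q12 = I[i, j],     I[i, j + 1]       # the four surrounding samples
    Q21, Q22 = I[i + 1, j], I[i + 1, j + 1]
    return ((1 - a) * (1 - b) * Q11 + a * (1 - b) * Q21
            + (1 - a) * b * Q12 + a * b * Q22)
```

At grid points (α = β = 0) the result reduces to the original pixel value, and at the cell center it is the plain average of the four neighbors.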
3. Bicubic Interpolation

Uses 16 neighboring pixels.


Uses cubic polynomials for interpolation.
Slower but provides very smooth results.

$$I(x, y) = \sum_{m=-1}^{2} \sum_{n=-1}^{2} a_{mn} \cdot P(x + m, y + n)$$

Where amn are interpolation coefficients derived from the cubic interpolation kernel.


📊 Comparison of Methods

| Method | Accuracy | Smoothness | Speed |
|---|---|---|---|
| Nearest Neighbor | Low | Poor | Fastest |
| Bilinear | Moderate | Good | Moderate |
| Bicubic | High | Excellent | Slowest |

📸 Applications of Gray Level Interpolation


Geometric transformations (rotate/scale/translate)

Image warping
Resizing images
Image mosaicking
Object tracking and recognition

✅ Conclusion
Gray level interpolation plays a vital role in ensuring visual continuity and accuracy when performing
spatial transformations on images. Choosing the right method involves a trade-off between quality and
computational cost.
Would you like a code example demonstrating interpolation methods in Python (OpenCV or NumPy)?
