
MODULE - II

Linear Filters: Linear Filters and Convolution, Shift Invariant
Linear Systems, Spatial Frequency and Fourier Transforms,
Sampling and Aliasing, Filters as Templates
Edge Detection: Noise, Estimating Derivatives, Detecting
Edges
Texture: Representing Texture, Analysis (and Synthesis)
Using Oriented Pyramids, Application: Synthesis by
Sampling Local Models, Shape from Texture.

Linear Filters and Convolution: An Overview


Linear filters and convolution are fundamental concepts in
signal processing and computer vision. They are used to
manipulate and analyze signals, images, and data to extract
useful information or perform transformations.
Linear Filters
A linear filter is a system or operation that satisfies the
properties of linearity, which include:
1. Additivity: The response to a sum of inputs is the sum of
the responses to each input.
2. Homogeneity: Scaling the input scales the output by the
same factor.
Linear filters are often used to:
• Remove noise.
• Enhance edges or features in an image.
• Smooth data.
Examples of Linear Filters
1. Low-Pass Filter: Removes high-frequency noise, often
used for smoothing.
2. High-Pass Filter: Removes low-frequency components,
emphasizing edges or details.
3. Band-Pass Filter: Allows a specific frequency range to
pass while suppressing others.
In the context of images, linear filters are applied using
convolution.

Linear Filters in Computer Vision


Linear filters are fundamental image processing techniques
used to enhance images, detect features, or reduce noise.
These filters work by applying a convolution operation
between an image and a small filter kernel (also called a
convolution mask).
Linear Filters with Convolution in Computer Vision
In computer vision, linear filters are widely used to process
images for tasks like edge detection, smoothing, and
sharpening.
A convolution operation is commonly used to apply these
filters to an image.

Convolution Operation
Convolution is a mathematical operation that involves sliding
a small matrix (called a kernel or filter) over an image and
computing a weighted sum of pixel values.
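
As a rough illustration, the Python sketch below (assuming
NumPy and SciPy are installed) applies two small kernels to a
stand-in image: a 3x3 averaging kernel acting as a low-pass
filter and a Laplacian-style kernel acting as a high-pass
filter. The image and kernel values are illustrative only.

import numpy as np
from scipy.ndimage import convolve

# Stand-in grayscale image; in practice it would be loaded from a file.
image = np.random.randint(0, 256, size=(64, 64)).astype(np.float32)

# 3x3 averaging (box) kernel: a simple low-pass filter.
box_kernel = np.ones((3, 3), dtype=np.float32) / 9.0

# Convolution slides the kernel over the image and computes a
# weighted sum of the pixels under it at every position.
smoothed = convolve(image, box_kernel, mode='reflect')

# Laplacian-style kernel: emphasizes rapid intensity changes (high-pass).
laplacian_kernel = np.array([[ 0, -1,  0],
                             [-1,  4, -1],
                             [ 0, -1,  0]], dtype=np.float32)
edges = convolve(image, laplacian_kernel, mode='reflect')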
Shift Invariant Linear Systems in Image Processing
A Shift-Invariant Linear System (SILS) is a linear system
whose behaviour does not depend on position: shifting the
input signal (such as an image) simply shifts the output by
the same amount.
These systems are fundamental in image processing,
particularly in convolution-based filtering.
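
The defining property is easy to check numerically. The minimal
sketch below (assumptions: NumPy/SciPy, periodic 'wrap' borders,
and an arbitrary 3x3 averaging kernel) confirms that filtering a
shifted image gives the same result as shifting the filtered
image.

import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(32, 32)
kernel = np.ones((3, 3)) / 9.0          # any fixed linear filter

# Filter first, then shift the result by (2, 3) pixels ...
a = np.roll(convolve(image, kernel, mode='wrap'), (2, 3), axis=(0, 1))
# ... versus shift the image first, then filter.
b = np.roll(image, (2, 3), axis=(0, 1))
b = convolve(b, kernel, mode='wrap')

# A shift-invariant linear system gives the same answer either way.
print(np.allclose(a, b))                # True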
Spatial Frequency and Fourier Transforms in Image
Processing
In image processing, spatial frequency and the Fourier
Transform are essential concepts used to analyze and
manipulate image details in the frequency domain.

1. Spatial Frequency
Spatial frequency refers to how image intensity changes over
space (x, y coordinates). It describes the rate of intensity
variation:
• Low spatial frequencies: Represent smooth, gradual
changes (large objects, background).
• High spatial frequencies: Represent rapid changes
(edges, fine details, noise).
Example:
• A uniform gray region has low spatial frequency (few
intensity changes).
• A checkerboard pattern has high spatial frequency
(many intensity changes).

2. Fourier Transform in Image Processing


The Fourier Transform (FT) converts an image from the
spatial domain (x, y pixels) to the frequency domain, where
it is represented by sinusoidal frequency components.
This helps in analyzing image details, filtering noise, and
enhancing features.
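
A minimal NumPy sketch of this idea follows. The stand-in image,
the size of the retained frequency block, and the use of a hard
square mask are illustrative choices rather than a recommended
filter design.

import numpy as np

# Stand-in grayscale image; in practice load one from disk.
image = np.random.rand(128, 128)

# 2-D Fourier Transform: spatial domain -> frequency domain.
F = np.fft.fft2(image)
F_shifted = np.fft.fftshift(F)          # zero frequency moved to the centre

# Log-magnitude spectrum, often used to visualize frequency content:
# the centre holds low spatial frequencies, the edges high ones.
magnitude = np.log1p(np.abs(F_shifted))

# Crude low-pass filtering: keep only a small central block of
# frequencies, then transform back to the spatial domain.
mask = np.zeros_like(F_shifted)
cy, cx = F_shifted.shape[0] // 2, F_shifted.shape[1] // 2
mask[cy - 16:cy + 16, cx - 16:cx + 16] = 1
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F_shifted * mask)))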
Sampling and Aliasing in Image Processing
Sampling and aliasing are fundamental concepts in digital
image processing that affect how images are represented and
processed. Understanding these concepts is crucial for
avoiding artifacts and ensuring accurate image reproduction.

1. Sampling in Image Processing


What is Sampling?
Sampling is the process of converting a continuous signal
(such as an analog image) into a discrete signal by measuring
its intensity at specific points (pixels).
In spatial sampling, an image is represented as a grid of
pixels, with each pixel storing intensity or color information.
Sampling Rate (Resolution)
• High sampling rate (more pixels) → More details,
better image quality.
• Low sampling rate (fewer pixels) → Loss of detail,
blocky appearance.
Example of Sampling:
A high-resolution image with fine details can become
pixelated when downsampled to a lower resolution.
2. Aliasing in Image Processing
What is Aliasing?
Aliasing occurs when a signal is undersampled (sampled at
too low a rate), causing distortion or artifacts. This results in
false patterns or jagged edges in images.
Effects of Aliasing in Images
1. Moiré Patterns – Unwanted wavy or repeating patterns.
2. Jagged Edges (Staircase effect) – Rough edges on
curves and diagonal lines.
3. Color Artifacts – Incorrect colors appearing in high-
detail areas.
3. Preventing Aliasing
(a) Increase Sampling Rate
By increasing the resolution (more pixels per unit area), we
capture more details, reducing aliasing effects.
(b) Use an Anti-Aliasing Filter (Low-Pass Filtering)
• Before sampling an image, apply a low-pass filter
(Gaussian blur) to remove high-frequency components.
• This ensures that high-frequency details do not get
misinterpreted as lower frequencies (see the sketch after
this list).
(c) Super-Sampling and Down-Sampling
• Render an image at a higher resolution and then
downsample (resize) it smoothly.
• Commonly used in computer graphics for smoother
textures.
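
The sketch below (assuming SciPy; the downsampling factor and
blur sigma are arbitrary illustrative values) contrasts naive
downsampling with Gaussian pre-filtering followed by
downsampling.

import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(512, 512)   # stand-in for a detailed high-resolution image
factor = 4                         # downsampling factor

# Naive downsampling: keep every 4th pixel; high frequencies alias.
aliased = image[::factor, ::factor]

# Anti-aliased downsampling: blur away the frequencies the lower
# resolution cannot represent, then keep every 4th pixel.
blurred = gaussian_filter(image, sigma=factor / 2.0)
antialiased = blurred[::factor, ::factor]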

Filters as Templates in Image Processing


In image processing, filters can act as templates that highlight
specific features in an image by matching patterns of interest.
A filter (or kernel) is a small matrix that is convolved with an
image to detect structures such as edges, textures, or specific
shapes.
When applying a filter to an image, we compare local regions
of the image to the filter template. The stronger the match, the
higher the response, making this method useful for feature
detection.
How Filters Work as Templates
1. A filter (kernel) is designed to match a specific pattern
(e.g., an edge, corner, or texture).
2. The filter slides (convolves) across the image.
3. At each position, it computes a weighted sum of pixel
values.
4. The output highlights areas where the image matches the
template.
Example: Object Detection Using a Template Filter
If we want to find a specific object (like a face or letter), we
can use a small patch (template) and match it with the image
using convolution.
• If we have an image of the letter "A", we can create a
filter (template) shaped like "A".
• The convolution operation will produce high values
where "A" appears in the image, as in the sketch below.

Edge Detection
Edge detection is highly sensitive to noise because noise
introduces small intensity variations that can be mistaken for
edges. Here’s how noise affects edge detection and some
common solutions:
Effects of Noise on Edge Detection:
1. False Edges – Random variations in pixel intensity may
be detected as edges.
2. Edge Fragmentation – Actual edges may be broken due
to noise-induced intensity fluctuations.
3. Blurred or Weak Edges – Noise can reduce the contrast
between objects and their background, making edges less
distinguishable.

Techniques to Reduce Noise Before Edge Detection:


1. Gaussian Smoothing – Applying a Gaussian filter
before edge detection helps suppress noise while
preserving edge information.
2. Median Filtering – A median filter is effective for
removing salt-and-pepper noise while maintaining sharp
edges.
3. Bilateral Filtering – Reduces noise while preserving
edges better than Gaussian filtering.
4. Non-Local Means (NLM) Denoising – An advanced
method that reduces noise by averaging similar patches
of the image.
5. Wavelet Denoising – Uses wavelet transforms to
separate noise from significant edge information.
Robust Edge Detection Methods for Noisy Images:
• Canny Edge Detector (with proper Gaussian
smoothing; see the sketch after this list)
• Laplacian of Gaussian (LoG) (detects edges after
Gaussian smoothing)
• Sobel or Prewitt Filters with Smoothing (less sensitive
to noise)
• Edge Detection using Deep Learning (CNN-based
methods)
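
For illustration, the sketch below (assuming OpenCV is installed;
the noise level, blur size, and Canny thresholds are arbitrary
choices) compares Canny edge detection on a noisy synthetic image
with and without prior Gaussian smoothing.

import numpy as np
import cv2

# Synthetic 8-bit image: a bright square, then additive Gaussian noise.
clean = np.zeros((200, 200), np.uint8)
cv2.rectangle(clean, (50, 50), (150, 150), 255, -1)
noisy = np.clip(clean + np.random.normal(0, 25, clean.shape),
                0, 255).astype(np.uint8)

# Canny on the raw noisy image: many spurious edge fragments.
edges_raw = cv2.Canny(noisy, 100, 200)

# Gaussian smoothing first suppresses the noise, so Canny then
# finds much cleaner edges.
smoothed = cv2.GaussianBlur(noisy, (5, 5), 1.5)
edges_smoothed = cv2.Canny(smoothed, 100, 200)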

Estimating Derivatives for Edge Detection


Edges in an image correspond to significant changes in
intensity, which can be detected by computing derivatives.
1. First-Order Derivatives (Gradient-Based Methods)
The first derivative measures the rate of intensity change.
Large gradient values indicate edges.
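
A minimal sketch of gradient-based edge estimation with Sobel
kernels follows (NumPy/SciPy assumed; the stand-in image and the
threshold rule are arbitrary illustrations).

import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(64, 64)          # stand-in grayscale image

# Sobel kernels approximate the first derivatives along x and y.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

gx = convolve(image, sobel_x)           # intensity change along x
gy = convolve(image, sobel_y)           # intensity change along y

# Gradient magnitude: large values mark candidate edge pixels.
magnitude = np.hypot(gx, gy)
edges = magnitude > magnitude.mean() + 2 * magnitude.std()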
Texture
Representing Texture in Images
Texture describes the spatial arrangement of intensity
variations in an image and is crucial in pattern recognition,
object detection, and image segmentation.
1. Statistical Methods (Capture texture patterns using
pixel intensity distributions)
• Gray-Level Co-occurrence Matrix (GLCM) –
Computes how often pixel intensity pairs occur at a
given offset (see the sketch after this list). Features include:
• Contrast
• Homogeneity
• Energy
• Correlation
• Local Binary Patterns (LBP) – Describes textures by
thresholding a pixel’s neighborhood and converting it
into a binary pattern.
• Histogram-Based Methods – Use intensity histograms
to represent texture characteristics.
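
A minimal GLCM sketch using scikit-image follows; the offset,
angle, and random stand-in patch are illustrative, and note that
older scikit-image versions spell the functions greycomatrix /
greycoprops.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in 8-bit texture patch; in practice a grayscale image region.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# Co-occurrence of intensity pairs at an offset of 1 pixel, horizontally.
glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

# The four standard GLCM features listed above.
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
print(features)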

2. Structural Methods (Analyze texture as repeating


patterns)
• Textons – Fundamental texture elements, like edges,
blobs, or shapes, that form a texture.
• Wavelet Transforms – Decompose images into
different frequency bands for multi-scale texture
analysis.
• Gabor Filters – Extract texture features by analyzing
frequency and orientation components.
3. Model-Based Methods (Use mathematical models to
describe textures)
• Markov Random Fields (MRF) – Models texture as a
probabilistic distribution of neighboring pixel relationships.
• Fractal Analysis – Describes self-similar textures using
fractal dimensions.
4. Transform-Based Methods (Convert images to a
different domain for analysis)
• Fourier Transform – Represents texture by analyzing
frequency components.
• Wavelet Transform – Captures multi-scale texture details.
Texture Analysis and Synthesis Using Oriented Pyramids
Oriented pyramids are multi-scale, multi-orientation image
representations used for analyzing and synthesizing textures
effectively. They help capture texture details at different
scales and orientations.

1. Oriented Pyramids for Texture Analysis


Oriented pyramids decompose an image into multiple
frequency bands and orientations, allowing detailed texture
analysis.
Key Steps in Texture Analysis Using Oriented Pyramids
1. Multi-Scale Decomposition
Use a Gaussian Pyramid or Laplacian Pyramid to represent
different levels of detail.
2. Multi-Orientation Decomposition
Apply oriented filters like Gabor filters or Steerable Filters
to extract orientation-based texture features.
3. Feature Extraction
Compute energy, entropy, or statistical properties from
pyramid coefficients (a minimal sketch follows this list).
4. Texture Classification & Segmentation
Use machine learning or deep learning models to classify
texture features.
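
A true steerable pyramid uses a specific filter bank; the sketch
below only approximates the idea (assuming SciPy and scikit-image)
with a Gaussian pyramid plus Gabor filters at four orientations,
collecting one energy feature per scale-orientation band. The
number of scales, the orientations, and the Gabor frequency are
arbitrary.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import gabor

image = np.random.rand(128, 128)        # stand-in texture image

features = []
level = image
for scale in range(3):                              # multi-scale decomposition
    for theta in (0, np.pi/4, np.pi/2, 3*np.pi/4):  # multi-orientation
        real, imag = gabor(level, frequency=0.25, theta=theta)
        # One energy feature per (scale, orientation) band.
        features.append(np.mean(real ** 2 + imag ** 2))
    # Blur and subsample to form the next (coarser) pyramid level.
    level = gaussian_filter(level, sigma=1.0)[::2, ::2]

print(len(features))    # 3 scales x 4 orientations = 12 features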

Texture Synthesis by Sampling Local Models


Synthesis by sampling local models is a technique used to
generate realistic textures by learning and replicating local
pixel or patch relationships from a sample texture. This
method ensures that the synthesized texture maintains the
statistical and structural properties of the original texture.

1. Concept of Local Models in Texture Synthesis


Instead of using global statistics, local models focus on
preserving:
• Neighborhood relationships between pixels.
• Patch-based structures that represent texture patterns.
• Statistical properties within local regions.
The synthesis process involves sampling from these local
models to generate new texture regions while maintaining
consistency with the original sample.
2. Methods for Texture Synthesis Using Local Models
A. Pixel-Based Synthesis (Non-Parametric Sampling)
• Example: Efros & Leung's Method (1999)
• Grows a texture pixel by pixel based on a
probability distribution of similar neighborhoods.
• Uses a similarity measure (e.g., SSD, L2 norm) to
match local patches.
• Algorithm:
• Start with a seed pixel or a small region.
• Search for similar neighborhoods in the original
texture.
• Randomly select a candidate pixel based on
similarity.
• Repeat until the entire texture is synthesized (a
simplified sketch follows below).
• Pros:
• Works well for highly stochastic textures.
• Can synthesize fine details.
• Cons:
• Slow and computationally expensive.
• Sensitive to noise and errors.
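
The sketch below is a much-simplified, hedged take on this idea:
it scans the output row by row and samples each pixel from the
sample locations whose causal (already-filled) neighbourhood
matches best, whereas the actual Efros & Leung algorithm grows
the texture outward from a seed and handles arbitrarily shaped
partial neighbourhoods. The window size, candidate count, and
random border seeding are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def synthesize(sample, out_shape, win=7, n_candidates=5):
    # Simplified non-parametric synthesis: fill the output in scan
    # order, sampling each pixel from sample locations whose causal
    # neighbourhood (pixels above, plus those to the left on the
    # current row) best matches what has been synthesized so far.
    half = win // 2
    h, w = out_shape
    out = np.zeros((h + 2 * half, w + 2 * half))
    # Seed the top and side borders with random sample pixels so the
    # very first pixels have something to match against.
    out[:half, :] = rng.choice(sample.ravel(), size=(half, out.shape[1]))
    out[:, :half] = rng.choice(sample.ravel(), size=(out.shape[0], half))
    out[:, -half:] = rng.choice(sample.ravel(), size=(out.shape[0], half))

    # Every full win x win neighbourhood of the sample, reduced to its
    # causal part, plus the centre pixel it predicts.
    sh, sw = sample.shape
    patches = np.array([sample[i:i + win, j:j + win]
                        for i in range(sh - win + 1)
                        for j in range(sw - win + 1)])
    mask = np.zeros((win, win), bool)
    mask[:half, :] = True
    mask[half, :half] = True
    flat = patches[:, mask]
    centres = patches[:, half, half]

    for i in range(h):
        for j in range(w):
            yi, xj = i + half, j + half          # position in padded output
            neigh = out[yi - half:yi + half + 1,
                        xj - half:xj + half + 1][mask]
            ssd = np.sum((flat - neigh) ** 2, axis=1)   # match quality (SSD)
            best = np.argsort(ssd)[:n_candidates]       # a few closest matches
            out[yi, xj] = centres[rng.choice(best)]     # sample one of them
    return out[half:half + h, half:half + w]

sample = np.random.rand(32, 32)          # stand-in for a real texture swatch
texture = synthesize(sample, (48, 48))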

B. Patch-Based Synthesis (Patch Sampling & Blending)


• Example: Efros & Freeman’s Image Quilting (2001)
• Copies entire patches instead of individual pixels.
• Uses overlapping patches with optimal seam
blending to avoid artifacts.
• Algorithm:
• Select an initial patch from the texture sample.
• Find the best-matching patch from the source
texture.
• Align and blend overlapping regions using
minimum error boundary cut.
• Continue until the entire texture is filled.
• Pros:
• Faster than pixel-based synthesis.
• Preserves texture structure well.
• Cons:
• May produce seams or visible artifacts if blending is
not optimal.
C. Multi-Resolution Synthesis (Pyramid-Based)
• Uses Gaussian/Laplacian pyramids or wavelet
decomposition to synthesize textures at different scales.
• Starts synthesis at a coarse level and refines details
progressively.
• Helps maintain long-range consistency in structured
textures.
3. Applications of Local Model-Based Synthesis
• Graphics & Gaming – Creating seamless textures for
3D models and game environments.
• Medical Imaging – Synthesizing realistic textures for
medical simulations.
• Art & Design – Generating new artistic patterns based
on existing textures.
• Super-Resolution & Inpainting – Filling missing
regions in images by sampling local models.
Shape from Texture: Understanding 3D Shape Using
Texture Cues
• Shape from texture is a computer vision technique that
estimates the 3D shape of a surface from the way
textures appear in an image.
• It leverages perspective distortion, texture gradients, and
foreshortening to infer depth and surface orientation.
1. Principles of Shape from Texture
Shape from texture relies on the following texture-based depth
cues:
A. Texture Gradient
• Textures become denser (higher frequency) as they
recede into the distance.
• Larger, more spaced-out texture elements indicate closer
regions, while smaller, compressed ones suggest farther
regions (a density-measurement sketch follows this list).
B. Foreshortening
• Textured patterns (e.g., grids, stripes) appear stretched
or compressed depending on their surface orientation.
• The squares of a checkerboard may appear as trapezoids
due to perspective distortion.
C. Perspective Distortion
• Parallel texture elements appear to converge as they
recede, forming vanishing points.
• Linear structures (e.g., road markings) help infer depth.
D. Repetition and Regularity
• Uniform textures help estimate shape since changes in
spacing and size suggest surface orientation.
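
One crude way to quantify the texture-gradient cue is sketched
below (assuming SciPy): edge strength is used as a stand-in for
texture density and averaged over horizontal bands. For a real
image of a receding textured surface, the band averages would
typically increase towards the horizon, and the rate of increase
constrains the surface slant.

import numpy as np
from scipy.ndimage import sobel

# Stand-in image; in practice a photo of a textured ground plane.
image = np.random.rand(240, 320)

# Edge strength as a crude proxy for texture element density.
strength = np.hypot(sobel(image, axis=0), sobel(image, axis=1))

# Average texture density in horizontal bands, top to bottom.
bands = np.array_split(strength, 8, axis=0)
density = [band.mean() for band in bands]
print(density)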

2. Methods for Shape from Texture


Several computational methods analyze texture cues to
recover surface shape.
A. Regular Texture Models
• Assume that textures have a known, repeating pattern.
• Changes in texture density, orientation, or shape
indicate depth variations.
B. Statistical Texture Analysis
• Fourier Transform – Analyzes frequency components
of textures to infer surface slant.
• Wavelet Transforms – Decompose images into multi-
scale features for depth estimation.
C. Geometric Methods
• Projective Geometry Models – Use transformations to
relate 3D surfaces to 2D projections.
• Vanishing Point Estimation – Detects parallel texture
lines converging in perspective.
D. Machine Learning & Deep Learning
• CNN-based Approaches – Train deep networks to learn
texture-depth relationships.
• Self-Supervised Learning – Uses large datasets to infer
3D shapes from single images.
3. Applications of Shape from Texture
• 3D Scene Reconstruction – Recovering surface depth
from a single image.
• Autonomous Navigation – Estimating terrain shape for
robots and self-driving cars.
• Medical Imaging – Understanding surface structures in
microscopy and MRI scans.
• Augmented Reality (AR) & Virtual Reality (VR) –
Enhancing depth perception in virtual environments.
