UNIT 1
INTRODUCTION: Computer Vision – Image representation and image analysis tasks – Image representation: digitization, properties, color images – Data structures for Image Analysis: levels of image data representation, traditional and hierarchical image data structures
I. Introduction to Computer Vision
A. Definition and Scope of Computer Vision
B. Importance of Image Analysis in Computer Vision
C. Overview of Image Representation
II. Image Representation
A. Basics of Image Digitization
B. Properties of Images
C. Color Images
III. Data Structures for Image Analysis
A. Levels of Image Data Representation
1. Pixel-level
2. Region-level
3. Object-level
B. Traditional Image Data Structures
1. Bitmaps and Raster Scan
2. Run-Length Encoding
3. Chain Codes
C. Hierarchical Image Data Structures
1. Quad Trees
2. Oct Trees
3. Region Quad Trees
Image digitization
•Sampling
•Quantization
Digital image properties
1. Metric and topological properties of digital images
2. Histograms
3. Entropy
4. Visual perception of the image
5. Image quality
6. Noise in image
Color images
1. Physics of color
2. Color perceived by human
3. Color spaces
4. Palette images
5. Color constancy
Data structures for image analysis
Levels of image data representation
Traditional image data structures
•Matrices
•Chains
•Topological data structures
•Relational structures
Hierarchical data structures
•Pyramids
•Quadtrees
•Other pyramidal structures
COMPUTER VISION
Definition of Computer Vision
Computer vision aims to replicate human vision by enabling computers to perceive, interpret, and
understand visual data. It involves processing images or videos to extract meaningful
information, often using artificial intelligence, pattern recognition, and other computational
techniques.
Core Components of Computer Vision:
1. Image Acquisition: Capturing visual data using cameras or sensors.
2. Preprocessing: Enhancing image quality and reducing noise for better analysis.
3. Feature Extraction: Identifying relevant details like edges, textures, or colors.
4. Object Recognition: Detecting and categorizing objects within an image.
5. Analysis and Decision-Making: Drawing conclusions or actions based on processed data.
Real-World Example
1. Medical Imaging: Detecting anomalies in X-rays or MRI scans.
2. Autonomous Vehicles: Identifying road signs, pedestrians, and obstacles.
3. Surveillance: Monitoring activities for security purposes.
4. Retail: Automated checkout and inventory tracking using visual systems.
SCOPE OF CV
The scope includes tasks such as image recognition, object detection, scene understanding,
motion analysis, and 3D reconstruction. It spans various domains, including
autonomous vehicles, healthcare, agriculture, surveillance, and robotics.
1. Healthcare: Detect diseases, monitor patients.
2. Automotive: Enable autonomous driving, collision prevention.
3. Agriculture: Monitor crops, automate sorting.
4. Entertainment: Power AR/VR, create CGI effects.
5. Retail: Visual search, customer behavior analysis.
6. Manufacturing: Detect defects, monitor equipment.
Importance of Image Analysis in Computer Vision
Image analysis is crucial for interpreting the visual data captured by sensors. It includes steps like
noise reduction, feature extraction, and object segmentation. This process ensures meaningful
insights are derived, enabling applications such as medical diagnosis, security systems, and
navigation in robotics.
Benefits of Image Analysis
1. Enhanced Efficiency: Automates tasks like quality control and speeds up medical diagnoses.
2. Improved Accuracy: Detects subtle details humans may miss, such as anomalies in scans.
3. Data-Driven Insights: Identifies patterns or trends, e.g., crowded areas in surveillance.
4. Supports Machine Learning: Provides datasets for training models, improving accuracy.
Overview of Image Representation
Image representation involves organizing and encoding visual data in a form suitable for
processing.
It includes multiple levels:
1. Pixel Level: Raw image data as brightness or color values in a matrix format.
2. Feature Level: Attributes such as edges, textures, or regions extracted from the image.
3. Object Level: Higher-level abstractions like shapes or objects identified in the image.
4. Scene Level: Complete understanding of the image or video content.
Image Representation
Image representation refers to the process of encoding and organizing visual information so it can
be analyzed and processed by computer systems. It serves as the foundation for various computer
vision tasks such as image recognition, object detection, and scene understanding.
A. Basics of Image Digitization
Image digitization is the process of converting a physical or analog image into a digital format.
This involves two main steps: sampling and quantization.
Sampling
Sampling refers to the division of an image into a grid of discrete pixels. Each pixel represents a
small, uniform area of the image. The resolution of an image is determined by the number of
pixels it contains, which directly impacts the image's clarity.
Process
•The image is overlaid with a grid, and each cell in the grid becomes a pixel.
•The number of pixels (width × height) determines the resolution; a higher number of pixels yields better detail and image clarity.
For example, a higher resolution (e.g., 1920x1080 pixels) provides sharper, more detailed images than a lower resolution (e.g., 640x480 pixels).
Quantization
1. After sampling, each pixel's intensity (brightness) or color is assigned a numerical value.
This process is known as quantization.
2. In grayscale images, pixel intensity values typically range from 0 (black) to 255 (white) for 8-bit images. For color images, each color channel (red, green, blue) is quantized separately.
3. Quantization determines the number of levels available to represent pixel values, with more levels providing greater precision.
4. The digitization process enables computers to handle images as matrices of numerical values, making it possible to apply mathematical algorithms for processing and analysis.
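To make the two steps concrete, here is a minimal Python/NumPy sketch (the gradient test image, the every-fourth-pixel sampling, and the choice of 4 quantization levels are illustrative assumptions, not values from the text):

import numpy as np

# Synthetic 8-bit grayscale image: a smooth horizontal gradient.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Sampling: keep every 4th pixel in each direction (coarser grid, lower resolution).
sampled = img[::4, ::4]           # shape drops from 256x256 to 64x64

# Quantization: map 256 intensity levels down to 4 levels.
levels = 4
step = 256 // levels              # width of each quantization bin
quantized = (img // step) * step  # each pixel snaps to the bottom of its bin

print(sampled.shape)              # (64, 64)
print(np.unique(quantized))       # [  0  64 128 192]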
B. Properties of Images
Understanding the properties of images is essential for designing effective computer vision
systems.
Key properties include
Metric and Topological Properties
Images have spatial dimensions characterized by width and height (in pixels).
The geometric relationship between pixels, such as adjacency and connectivity, defines the
image's topology.
For example:
•Adjacency: determines how pixels are connected (e.g., horizontally, vertically, or diagonally).
•Connectivity: 4-connected considers horizontal and vertical neighbors only; 8-connected also includes diagonal neighbors.
Histograms
•Histograms are graphical representations of the distribution of pixel intensities. They help in
understanding image contrast and brightness.
•A well-spread histogram indicates high contrast, while a narrow histogram suggests
low contrast.
Entropy
Entropy measures the amount of information or randomness in an image. Higher entropy
indicates greater detail and complexity.
•High entropy: indicates complex and detailed images.
•Low entropy: indicates simple or uniform images.
2D PROJECTIVE SPACE
•Since the imaging apparatus usually behaves like a pinhole camera, many of the transformations that can occur can be described as projective transformations.
•This offers a general and powerful way to work with points, lines, and conics.
•The 2D projective space is defined as the set of all nonzero triples (x, y, w), where two triples that differ only by a nonzero scale factor represent the same point.
POINT OPERATORS
Consider two images f and g defined on the same domain; their pixel-wise addition is denoted f + g. Or consider a positive-valued image f and the image log(1 + f) obtained by taking the logarithm of every pixel value. These two operations are examples of point operators. Point operations apply the same conversion operation to each pixel of a grayscale image.
DIFFERENT POINT PROCESSING TECHNIQUES
•Thresholding: select pixels with given values to produce binary images.
•Adaptive thresholding: like thresholding, except the threshold values are chosen locally.
•Contrast stretching: spreading out the gray-level distribution.
•Histogram equalization: a general method of modifying the intensity distribution.
•Logarithm operator: reducing the contrast of brighter regions.
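As a minimal Python/NumPy sketch of these point operations (the 3x3 pixel values and the threshold of 100 are arbitrary demonstration choices, not from the text):

import numpy as np

# A small example grayscale image (values are illustrative).
f = np.array([[ 10,  50, 120],
              [200, 230,  90],
              [ 60, 180,  15]], dtype=np.float64)

# Thresholding: a global threshold produces a binary image.
binary = (f > 100).astype(np.uint8)

# Contrast stretching: map the observed [min, max] range onto [0, 255].
stretched = (f - f.min()) / (f.max() - f.min()) * 255

# Logarithm operator: compress the dynamic range of bright regions.
log_img = 255 * np.log1p(f) / np.log1p(f.max())

print(binary)   # 1 where f > 100, else 0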
Noise
Images often contain noise due to imperfections in the imaging sensor or environmental
factors. Noise reduces image clarity and affects processing accuracy.
Types:
•Gaussian Noise: Random variation in pixel intensities resembling a normal
distribution.
•Salt-and-Pepper Noise: Appears as random black and white pixels.
•Speckle Noise: Typically affects coherent imaging systems like ultrasound.
Visual Perception
Human visual perception affects how images are interpreted.
For example, edge and color detection mechanisms in human vision guide how algorithms
simulate these tasks.
Image Quality
Factors such as sharpness, contrast, and resolution determine image quality. Blurring
and compression artifacts can degrade quality.
C. Color Images
Color images provide additional layers of information compared to grayscale images by encoding
data for multiple color channels. Key concepts include:
Color Models
A color model represents how colors are organized and encoded. Common color models include:
1. RGB (Red, Green, Blue): The most widely used model, where colors are
formed by combining the three primary color intensities.
2. CMY/CMYK (Cyan, Magenta, Yellow, Black): Commonly used in printing, this model represents colors as combinations of subtractive primaries.
3. HSV (Hue, Saturation, Value): Designed to align with human perception of colors, this model
separates chromatic content (hue and saturation) from brightness (value).
Color Spaces
Color spaces define the range of colors that can be represented in a model. Examples include
sRGB (standard RGB) and Adobe RGB, which differ in their color gamuts.
•sRGB: Standard color space for web and digital displays, suitable for most applications.
•Adobe RGB: Has a wider gamut, preferred for professional photography and printing.
Color Depth
Color depth refers to the number of bits used to represent each color channel.
For example:
•24-bit Color: Uses 8 bits per channel for RGB, offering 16.7 million possible colors.
•32-bit Color: Adds an alpha channel for transparency, commonly used in graphic design.
Perception of Color by Humans
Human color perception changes based on things like lighting, surrounding colors, and how our
eyes and brain work. To make computer vision more accurate, algorithms are designed to
consider these factors.
Applications of Color Images
1. Image Enhancement: Making images look better by adjusting colors and contrast.
2. Segmentation: Separating different objects in an image based on their colors.
3. Object Recognition: Finding and identifying objects using their unique colors, like spotting ripe fruits.
Pixel-Level Representation
At the pixel level, the image is viewed as an array of individual points, or pixels, where each
pixel
corresponds to a specific intensity or color value. This is the most basic level of image
representation and contains raw, unprocessed data.
Characteristics:
•Images are represented as grids or matrices, with each cell holding a numerical value that
represents the pixel intensity (gray-scale) or color (RGB channels).
•It is a direct output from image-capturing devices such as cameras or scanners.
•Pixel values are used in basic image processing tasks like noise filtering, brightness
adjustment, and edge detection.
Applications:
•Suitable for low-level processing techniques such as histogram analysis or Fourier transforms.
•Serves as the foundation for generating higher-level abstractions.
Limitations:
•High data density with minimal abstraction, making it computationally expensive for large
images.
•Difficult to derive meaningful context or structures without further processing.
Region-Level Representation
This intermediate representation focuses on groups of pixels with shared
characteristics,
forming regions. It involves segmenting the image into meaningful parts or regions based on
criteria like color, texture, or intensity.
Characteristics:
•Pixels within the same region are similar in properties, such as brightness or color.
•Regions can be contiguous or disjointed and are often delineated through methods like
thresholding or clustering.
•This representation reduces the data size compared to pixel-level while retaining
meaningful patterns.
Applications:
•Used in image segmentation tasks, where regions may correspond to objects or areas
of interest in the image.
•Facilitates texture analysis, object detection, and image classification tasks.
Advantages:
•Provides a balance between computational efficiency and abstraction.
•Regions can incorporate context through properties like adjacency or topology.
Object-Level Representation
The highest level of image representation involves recognizing and delineating objects within the
image. This abstraction identifies distinct entities and represents them using symbolic or
structured data.
Characteristics:
•Objects are recognized based on complex patterns, shapes, or semantic features.
•Relies on models or prior knowledge to identify and classify objects.
•Representations may include bounding boxes, contours, or semantic labels.
Applications:
•Found in applications like face recognition, object tracking, and autonomous navigation
systems.
•Enables tasks like scene understanding or behavior analysis.
Advantages:
•Provides a human-comprehensible interpretation of the image content.
•High abstraction allows for complex decision-making and interaction with other data domains.
Traditional Image Data Structures
Digital images are stored and processed using various data structures designed to optimize
their representation, compression, and manipulation. Below are three widely
recognized traditional image data structures: Bitmaps and Raster Scan, Run-Length Encoding,
and Chain Codes. Each serves a unique purpose and has its own advantages and limitations.
1. Bitmaps and Raster Scan
A bitmap (or raster image) is a representation of an image as a grid of pixels, where
each pixel corresponds to a specific color or intensity value. This is one of the
simplest forms of image representation.
Bitmap Representation: In this structure, the image is divided into a 2D matrix of pixels,
with each pixel having an associated value based on the color model used (e.g., grayscale or
RGB). For instance, in a grayscale image, each pixel might be represented by an 8-bit value
ranging from 0 (black) to 255 (white).
Raster Scan: The raster scan technique is used for reading or writing bitmap images. It
processes the image row by row (or column by column), scanning pixel data sequentially.
This mimics the operation of traditional CRT displays, where the electron beam moves line
by line to display the image.
Advantages
•Simple and straightforward representation.
•Directly compatible with most image display systems.
Disadvantages
•High memory requirements for large or high-resolution images.
•Inefficiency in representing images with large areas of uniform color.
2. Run-Length Encoding (RLE)
Run-Length Encoding is a compression technique commonly applied to bitmap images. It
is particularly effective for images with large areas of uniform intensity or color.
How It Works
Instead of storing each pixel value individually, RLE encodes consecutive identical
values as a single pair: the value and the number of occurrences (run length).
For example, a sequence of pixel values [1, 1, 1, 0, 0, 1] would be stored as [(1, 3), (0,
2), (1, 1)].
Applications
Useful for compressing binary images (black-and-white) and images with significant
uniform regions, such as cartoons or logos.
Advantages
•Reduces storage requirements for suitable images.
•Simple to implement and decode.
Disadvantages
•Inefficient for complex or noisy images, where runs are short.
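A minimal Python sketch of RLE that reproduces the example above (the function names are illustrative):

def rle_encode(pixels):
    # Encode a 1-D pixel sequence as (value, run length) pairs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    # Expand (value, run length) pairs back into the pixel sequence.
    return [value for value, length in runs for _ in range(length)]

row = [1, 1, 1, 0, 0, 1]
encoded = rle_encode(row)
print(encoded)                        # [(1, 3), (0, 2), (1, 1)]
assert rle_decode(encoded) == row     # lossless round trip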
3. Chain Codes
Chain codes are a compact representation of image boundaries. They encode the
outline of a shape by describing the direction of movement along the boundary.
Boundary Representation
•The boundary of an object is traced, and each movement from one pixel to the next is
recorded using a predefined directional code.
•For example, a 4-connected system uses four directions: North (0), East (1), South
(2), and West (3). An 8-connected system includes diagonals, adding NE (4), SE (5),
SW (6), and NW (7).
Example
Tracing a square in a 4-connected system might yield the chain code 0, 1, 2, 3 (North, East,
South, West).
Applications
•Used in shape analysis and object recognition tasks.
•Enables efficient storage and manipulation of geometric features.
Advantages
•Compact representation of boundaries.
•Preserves shape information effectively.
Disadvantages:
•Sensitive to noise and boundary irregularities.
•May require preprocessing to smooth noisy boundaries.
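A short Python sketch of 4-connected chain coding under the direction convention above, North (0), East (1), South (2), West (3); the traced square and helper names are illustrative:

# Coordinates are (row, col); moving north decreases the row index.
STEP_TO_CODE = {(-1, 0): 0, (0, 1): 1, (1, 0): 2, (0, -1): 3}

def chain_code(boundary):
    # Encode a traced boundary (a list of (row, col) pixels) as a chain code.
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        codes.append(STEP_TO_CODE[(r1 - r0, c1 - c0)])
    return codes

# A tiny square traced pixel by pixel: up, right, down, left, back to the start.
square = [(1, 0), (0, 0), (0, 1), (1, 1), (1, 0)]
print(chain_code(square))   # [0, 1, 2, 3] -> North, East, South, West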
Hierarchical Image Data Structures
Hierarchical image data structures play a crucial role in efficiently representing, storing, and
processing image data. These structures help in reducing computational complexity, saving
memory, and facilitating operations like image compression, analysis, and rendering. The
primary hierarchical structures include Quad Trees, Oct Trees, and Region Quad
Trees.
Below is an elaboration of each.
1. Quad Trees
A Quad Tree is a tree data structure used to represent two-dimensional spatial information. It
is particularly effective for image processing tasks like compression, segmentation,
and hierarchical rendering. A Quad Tree divides an image into four quadrants
(subregions) recursively until a certain uniformity criterion is met.
Structure and Characteristics
•Each node in a Quad Tree has four children, corresponding to the four quadrants of the parent node.
•The root node represents the entire image.
•The division stops when all pixels in a region have uniform properties (e.g., the same color or intensity).
Advantages
•Efficiently handles sparse image data by focusing only on regions with non-uniform
details.
•Reduces memory usage for images with large homogeneous areas.
•Enables fast access to specific regions in the image for operations like zooming and
panning.
Applications
•Image compression: Quad Trees represent images in a hierarchical manner, where
only the required details are stored.
•Collision detection: Used in computer graphics and gaming to manage spatial data.
•Geographic information systems (GIS): For efficient representation and querying of
spatial information.
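A minimal Python/NumPy sketch of quad-tree construction on a tiny binary image (the 4x4 image and the nested-list tree encoding are illustrative assumptions):

import numpy as np

def build_quadtree(img):
    # Recursively split a square binary image until each block is uniform.
    # Uniform blocks become leaves (0 or 1); otherwise return [NW, NE, SW, SE].
    if img.min() == img.max():
        return int(img[0, 0])
    h = img.shape[0] // 2
    return [build_quadtree(img[:h, :h]),   # NW quadrant
            build_quadtree(img[:h, h:]),   # NE quadrant
            build_quadtree(img[h:, :h]),   # SW quadrant
            build_quadtree(img[h:, h:])]   # SE quadrant

# 4x4 binary image: uniform left half, one odd pixel in the lower right.
img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [1, 1, 0, 1],
                [1, 1, 0, 0]], dtype=np.uint8)
print(build_quadtree(img))   # [1, 0, 1, [0, 1, 0, 0]]

Note how the three uniform quadrants collapse to single leaves, while only the non-uniform quadrant is subdivided further.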
2. Oct Trees
Oct Trees are the three-dimensional extension of Quad Trees. They are used to represent
volumetric (3D) image data, such as in medical imaging or 3D modeling. An Oct Tree
divides a 3D space into eight octants recursively.
Structure and Characteristics
• Each node in an Oct Tree has eight children, representing the eight octants of the
parent node.
• The root node represents the entire 3D volume.
• Subdivision continues until a uniformity criterion is satisfied or a certain depth is
reached.
Advantages
•Handles large 3D datasets efficiently by focusing only on non-uniform regions.
•Facilitates operations like 3D rendering, simulation, and visualization.
•Simplifies spatial queries in 3D environments.
Applications
•Medical imaging: Efficient storage and visualization of 3D scans, such as CT or MRI.
•3D graphics: Used in rendering and collision detection.
•Robotics: For spatial mapping and navigation in 3D environments.
3. Region Quad Trees
Region Quad Trees are a variation of standard Quad Trees, optimized for applications where
regions of interest in an image are irregular and not necessarily aligned with grid-based
quadrants. Instead of fixed subdivisions, these trees adaptively partition the image based on
connected regions.
Structure and Characteristics
•Each node corresponds to a specific region in the image.
•Subdivision occurs based on region boundaries rather than fixed quadrants.
•Regions are identified through connected components or other segmentation methods.
Advantages
•Adapts to the irregularity of image data, leading to more compact representations.
•Provides better accuracy for representing region-based information.
•Reduces redundant subdivisions, focusing only on meaningful regions.
Applications
•Image segmentation: Identifying and representing distinct objects or areas in an
image.
•Pattern recognition: For analyzing and classifying regions based on their shapes or
properties.
•Computer vision: Efficiently processing and analyzing images for tasks like object
detection and tracking.
Image Digitization
Sampling
Sampling refers to the process of converting a continuous image function into a discrete form by
measuring its intensity at specific points. These points are organized in a grid pattern, typically
square or hexagonal. The density of these points determines the resolution of the image.
Higher sampling density captures more detail, while sparse sampling may result in loss
of information. For instance, television images typically use resolutions like 512x512 or 1920x1080
for HDTV. The placement and frequency of sampling points are crucial, as explained by Shannon's
theorem, which ensures that the sampled image retains sufficient detail for reconstruction.
Example: A 4K resolution image (3840x2160 pixels) captures more detail than an HD image
(1280x720 pixels).
Quantization
Quantization is the process of mapping continuous brightness values to discrete levels. It assigns each
sampled value to a fixed number of brightness levels, determined by the number of bits used per pixel.
For example, in an 8-bit grayscale image, pixel intensities range from 0 (black) to 255 (white).
Digital Image Properties
Metric and Topological Properties
Digital images consist of pixels arranged in a grid. Metric properties define measurable aspects like
distance. Common distance metrics include:
•Euclidean Distance: Measures straight-line distance.
•City Block Distance (D4): Allows only horizontal and vertical moves.
•Chessboard Distance (D8): Permits diagonal moves as well.
Topological properties, on the other hand, are invariant to continuous transformations, such as stretching or bending. These include the number of regions, holes, and connectivity between pixels.
For example, a region's convex hull represents the smallest convex shape enclosing it.
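The three distance metrics above take only a few lines of Python (the sample points are illustrative):

import math

def euclidean(p, q):
    # Straight-line distance D_E.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    # D4 distance: horizontal and vertical moves only.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    # D8 distance: diagonal moves allowed.
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))    # 5.0
print(city_block(p, q))   # 7
print(chessboard(p, q))   # 4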
Histograms
Histograms represent the frequency distribution of pixel brightness values in an image. They are used
for analyzing contrast, adjusting illumination, and segmenting objects. A histogram can be visualized
as a bar graph, showing the number of pixels at each brightness level.
Example: Histogram equalization enhances low-contrast images.
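As a sketch of histogram equalization in Python/NumPy (the synthetic low-contrast image is an illustrative assumption; the mapping is the standard cumulative-histogram formula):

import numpy as np

def equalize(img):
    # Histogram-equalize an 8-bit grayscale image via its cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)   # pixels per brightness level
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                        # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast image confined to the range [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max())   # 100 120
print(out.min(), out.max())   # spread out toward the full 0..255 range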
Entropy
Entropy measures the amount of information or uncertainty in an image. Defined by Shannon, it
quantifies randomness and is calculated using probabilities derived from the image’s histogram. High
entropy indicates a more complex image with diverse brightness levels.
Example: A cluttered scene has higher entropy compared to a uniform sky.
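Entropy is computed from the normalized histogram as H = -sum(p_k * log2(p_k)). A Python/NumPy sketch with two illustrative test images:

import numpy as np

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()            # probability of each brightness level
    p = p[p > 0]                     # skip empty bins (0 * log 0 is taken as 0)
    return -np.sum(p * np.log2(p))

flat = np.full((64, 64), 128, dtype=np.uint8)                 # uniform image
busy = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(entropy(flat))   # 0.0 -- a single brightness level carries no information
print(entropy(busy))   # close to 8 bits, the maximum for 8-bit data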
Visual Perception
Human perception of images depends on contrast, borders, texture, and color. These psycho-physical
parameters determine how easily objects can be distinguished from the background. Illusions and
visual effects provide insights into human perception mechanisms and aid in designing algorithms for
image enhancement.
Example: Edge-detection algorithms mimic how humans recognize object boundaries.
Image Quality
Image quality assessment involves subjective and objective methods. Subjective methods rely on
human evaluation, while objective metrics compare the image to a reference. Common
metrics include mean absolute difference and correlation. Image quality can degrade due
to noise, transmission errors, or processing.
Example: A blurred image degrades quality, affecting object recognition tasks.
Noise
Noise introduces random errors into images, originating from capture, transmission, or processing.
Types include
•White Noise: Uniform across all frequencies.
•Gaussian Noise: Follows a normal distribution.
•Salt-and-Pepper Noise: Manifests as random black and white pixels.
Noise reduction techniques, such as filtering, are used to restore image quality.
Example: Noise-removal filters like Gaussian blur reduce artifacts in medical imaging.
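A Python/NumPy sketch that synthesizes the two most common noise types and applies a simple 3x3 mean filter (noise parameters and image size are illustrative; a true Gaussian filter would weight neighbors unequally rather than averaging them):

import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)

# Gaussian noise: add zero-mean normally distributed deviations to every pixel.
gaussian = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255)

# Salt-and-pepper noise: flip a random 5% of pixels to pure black or white.
salt_pepper = clean.copy()
mask = rng.random(clean.shape) < 0.05
salt_pepper[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

def mean_filter(img):
    # Replace each interior pixel by the average of its 3x3 neighborhood.
    out = img.copy()
    out[1:-1, 1:-1] = sum(img[1 + dr:img.shape[0] - 1 + dr,
                              1 + dc:img.shape[1] - 1 + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9
    return out

print(gaussian.std(), mean_filter(gaussian).std())   # smoothing reduces the spread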
Color Images
Physics of Color
Color arises from electromagnetic radiation within the visible spectrum (380-740 nm).
Colors correspond to specific wavelengths, with primary colors (red, green, and blue) combining to
form others. Newton’s experiments with prisms demonstrated the decomposition of white
light into spectral colors.
Example: Red objects reflect longer wavelengths, while blue objects reflect shorter ones.
Color Perception
Human vision perceives color through cones in the retina, sensitive to red, green, and
blue wavelengths. Perception is subjective and influenced by surrounding colors and lighting
conditions.
Color constancy allows humans to recognize colors under varying illumination.
Example: The same color can appear different under varying light conditions.
Color Spaces
Color spaces define numerical representations of colors. Common models include:
•RGB: For displays and digital devices.
•HSV (Hue, Saturation, Value): Useful for image processing as it separates intensity from
color information.
•CMYK: Used in printing for subtractive color mixing.
Example: RGB is used for digital screens, while CMY/CMYK is preferred in printing.
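As a small illustration of moving between these models in Python/NumPy: CMY is the subtractive complement of normalized RGB, (C, M, Y) = (1, 1, 1) - (R, G, B), and the HSV value channel is simply max(R, G, B). The pixel values below are illustrative:

import numpy as np

# One pixel per row: pure red, pure green, and a mid gray (8-bit RGB).
rgb = np.array([[255,   0,   0],
                [  0, 255,   0],
                [128, 128, 128]], dtype=np.float64) / 255.0

cmy = 1.0 - rgb               # subtractive complement
value = rgb.max(axis=1)       # the V channel of HSV

print(cmy[0])    # [0. 1. 1.] -> red contains no cyan, full magenta and yellow
print(value)     # [1. 1. 0.502...]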
Palette Images
Palette images reduce file size by mapping a limited set of colors (palette) to the
image. This approach is effective for images with limited color variations, like graphics and icons.
Example: GIF images with 256-color palettes are commonly used for animations.
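A minimal Python/NumPy sketch of an indexed (palette) image; the three-color palette is an illustrative assumption:

import numpy as np

# Palette: index -> RGB color.
palette = np.array([[  0,   0,   0],    # 0: black
                    [255,   0,   0],    # 1: red
                    [  0,   0, 255]],   # 2: blue
                   dtype=np.uint8)

# The image stores only one-byte palette indices, not full RGB triples.
indexed = np.array([[0, 1],
                    [2, 1]], dtype=np.uint8)

# Expanding to a true-color image is a single lookup.
truecolor = palette[indexed]
print(truecolor.shape)   # (2, 2, 3)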
Color Constancy
Color constancy enables the perception of consistent colors despite changes in lighting. For instance,
a red object appears red under sunlight and shadow. Achieving this in computer vision involves
compensating for the illumination spectrum, a challenging but essential task.
Example: Algorithms like White Balancing adjust colors in photographs taken at sunset.
1. Levels of Image Data Representation
1.1 Iconic Images
Description: This is the most basic representation where an image is stored as a matrix of pixel
intensity values. Each pixel represents a small, discrete part of the image, such as brightness for
grayscale or RGB values for color images.
Application: Used as input for image processing techniques like filtering or edge detection.
Example: A grayscale image where a pixel value of 0 indicates black, 255 indicates white, and
intermediate values represent shades of gray.
1.2 Segmented Images
Description: Segmentation groups pixels into meaningful regions that likely belong to the same
object. This can be based on shared properties like intensity or texture.
Application: Common in object recognition tasks where different parts of an image, like roads or
buildings, are identified separately.
Example: Segmenting a medical image into regions corresponding to different tissues or organs.
1.3 Geometric Representations
Description: Geometric models describe 2D and 3D shapes within an image. These representations
are essential for simulations involving light and motion.
Application: Utilized in computer-aided design (CAD) and simulations.
Example: Using a 3D geometric model to estimate the volume of an object in a scene.
1.4 Relational Models
Description: These incorporate semantic information by connecting elements in an image using
predefined relationships, such as proximity or hierarchy.
Application: Enables advanced reasoning, like understanding an airport scene from
satellite imagery.
Example: A relational model might link "plane" objects to "runway" regions, indicating
their functional relationship.
2. Traditional Image Data Structures
2.1 Matrices
Description: The fundamental data structure for image representation. Each matrix
element corresponds to a pixel, and its value represents intensity or color.
Applications:
•Storing images as 2D arrays.
•Creating histograms and co-occurrence matrices.
Example:
•A binary image represented as a matrix of 0s and 1s.
•A multispectral image represented by multiple matrices, each corresponding to a spectral
band.
2.2 Chains
Description: Used to represent boundaries or edges in images as sequences of symbols
or directions. Chains follow a specific path around the object.
Applications: Useful in border-following algorithms and syntactic pattern recognition.
Example: Chain codes use directional symbols (e.g., 0 for right, 1 for up-right) to describe an
object’s boundary.
2.3 Topological Data Structures
Description: Represent images using graphs where nodes correspond to regions and edges to their
adjacency or connectivity.
Applications: Simplify region merging and analysis of relationships like "inside" or "touching."
Example: A region adjacency graph for a segmented image shows connections between regions
(e.g., a road connected to a parking lot).
2.4 Relational Structures
Description: Store relationships between image components in table form. They represent semantic
relationships, such as one object being inside another.
Applications: Common in higher-level image understanding tasks.
Example: A relational table might indicate that a “tree” is inside a “park.”
3. Hierarchical Data Structures
3.1 Pyramids
Matrix Pyramids:
Description: A sequence of images with progressively lower resolutions.
Applications: Used in tasks requiring multi-resolution analysis, such as image compression and
object detection.
Example: Detecting faces in a pyramid structure enables recognizing objects at various scales.
Tree Pyramids:
Description: A hierarchical structure where each node represents a section of the image.
Applications: Enables quick access to specific parts of an image.
Example: A T-pyramid where each node has four children, representing the next finer resolution.
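A minimal Python/NumPy sketch of a matrix pyramid built by repeated 2x2 block averaging (the base image and the stopping size are illustrative):

import numpy as np

def matrix_pyramid(img, min_size=32):
    # Each level halves the resolution by averaging non-overlapping 2x2 blocks.
    levels = [img]
    while img.shape[0] > min_size:
        h, w = img.shape[0] // 2, img.shape[1] // 2
        img = img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels

base = np.random.default_rng(0).random((256, 256))
for level in matrix_pyramid(base):
    print(level.shape)   # (256, 256), (128, 128), (64, 64), (32, 32)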
3.2 Quadtrees
Description: Divide an image into four quadrants recursively, storing large homogeneous regions compactly and subdividing further only where detail requires it.
Applications: Efficient storage and processing for images with large uniform areas.
Example: A quadtree representation of a geographic map highlights regions with roads or water
bodies at higher resolutions.
3.3 Other Pyramidal Structures
Description: Advanced pyramidal structures like Laplacian pyramids store differences between
successive levels, improving data compression.
Applications: Used for compact image representation and progressive transmission.
Example: Irregular pyramids derived from graphs retain essential structures while reducing data
complexity.