UNIT 1
One Mark Questions:
1. What are classical filtering operations primarily used for in image processing?
A) Noise reduction
B) Image enhancement
C) Image segmentation
D) Feature extraction
✅ Answer: A) Noise reduction
2. Which thresholding technique is based on finding the optimal threshold that minimizes intra-class variance?
A) Global thresholding
B) Otsu's thresholding
C) Adaptive thresholding
D) Edge-based thresholding
✅ Answer: B) Otsu's thresholding
3. Which edge detection technique focuses on finding abrupt changes in intensity?
A) Sobel operator
B) Laplacian operator
C) Canny edge detector
D) Prewitt operator
✅ Answer: B) Laplacian operator
4. What is the primary purpose of corner and interest point detection in image processing?
A) Noise reduction
B) Feature extraction
C) Image segmentation
D) Image compression
✅ Answer: B) Feature extraction
5. Which mathematical morphology operation is used for eroding the boundaries of foreground objects?
A) Dilation
B) Erosion
C) Opening
D) Closing
✅ Answer: B) Erosion
6. Texture analysis in image processing deals with:
A) Identification of spatial patterns
B) Detection of edges
C) Removal of noise
D) Image segmentation
✅ Answer: A) Identification of spatial patterns
7. Which classical filtering operation is effective in removing salt-and-pepper noise?
A) Median filtering
B) Mean filtering
C) Gaussian filtering
D) Laplacian filtering
✅ Answer: A) Median filtering
8. Which thresholding technique separates an image into two classes based on pixel intensity?
A) Global thresholding
B) Adaptive thresholding
C) Otsu's thresholding
D) Edge-based thresholding
✅ Answer: A) Global thresholding
9. Which edge detection technique emphasizes edges that have significant changes in gradient magnitude?
A) Canny edge detector
B) Sobel operator
C) Laplacian of Gaussian (LoG)
D) Prewitt operator
✅ Answer: B) Sobel operator
10. Which operation in mathematical morphology combines erosion and dilation?
A) Closing
B) Opening
C) Top-hat transformation
D) Bottom-hat transformation
✅ Answer: B) Opening
11. What is the main purpose of using a Gaussian filter?
A) Edge detection
B) Noise reduction
C) Contrast enhancement
D) Color correction
✅ Answer: B) Noise reduction
12. Which filter is best for removing salt-and-pepper noise?
A) Mean filter
B) Gaussian filter
C) Median filter
D) Laplacian filter
✅ Answer: C) Median filter
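To see why the median filter removes salt-and-pepper noise while a mean filter only smears it, here is a minimal pure-Python sketch of a 3x3 median filter on a toy image (illustrative only; real code would use a library routine such as OpenCV's medianBlur):

```python
# Illustrative sketch: a 3x3 median filter on a tiny grayscale image.
# Extreme "salt" or "pepper" values never survive taking the median
# of a neighbourhood, which is why this filter suits impulse noise.

def median_filter_3x3(img):
    """Apply a 3x3 median filter; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out

# A flat region of intensity 100 with one "salt" pixel (255):
noisy = [[100] * 5 for _ in range(5)]
noisy[2][2] = 255
clean = median_filter_3x3(noisy)
print(clean[2][2])  # 100 -- the outlier is replaced by the local median
```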
13. Which filter uses averaging over a neighborhood?
A) Sobel
B) Mean
C) Laplacian
D) Bilateral
✅ Answer: B) Mean
14. What is the output of a basic thresholding operation?
A) RGB image
B) Binary image
C) Grayscale image
D) Edge-detected image
✅ Answer: B) Binary image
15. Which thresholding method adapts to local variations in intensity?
A) Global thresholding
B) Otsu’s method
C) Adaptive thresholding
D) Fixed thresholding
✅ Answer: C) Adaptive thresholding
16. Otsu’s method is used to:
A) Blur an image
B) Detect corners
C) Find optimal threshold
D) Perform dilation
✅ Answer: C) Find optimal threshold
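Otsu's method can be sketched in a few lines: try every threshold and keep the one that maximizes the between-class variance, which is equivalent to minimizing the weighted intra-class variance. The histogram-based toy implementation below is illustrative, not optimized:

```python
# Minimal sketch of Otsu's method: exhaustively test every threshold t
# and keep the one maximising between-class variance w0*w1*(mu0-mu1)^2.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # pixels below the threshold
        w1 = total - w0             # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal data: a dark cluster around 50 and a bright cluster around 200.
pixels = [48, 49, 50, 51, 52] * 10 + [198, 199, 200, 201, 202] * 10
t = otsu_threshold(pixels)
print(50 < t <= 198)  # True -- the threshold falls between the two modes
```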
17. Thresholding is mainly used for:
A) Compression
B) Segmentation
C) Smoothing
D) Color balancing
✅ Answer: B) Segmentation
18. An edge in an image represents:
A) Smooth area
B) Intensity change
C) Background
D) Texture
✅ Answer: B) Intensity change
19. Which of the following is an edge detection operator?
A) GLCM
B) Median
C) Sobel
D) Morphological
✅ Answer: C) Sobel
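The Sobel operator referenced above combines two 3x3 convolution kernels into a gradient magnitude. A minimal pure-Python sketch on a toy vertical-edge image:

```python
# Hedged sketch of the Sobel operator: convolve with the two 3x3
# kernels and combine the horizontal and vertical responses into a
# gradient magnitude. A vertical step edge gives a strong Gx response.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(img, y, x):
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            p = img[y + dy][x + dx]
            gx += GX[dy + 1][dx + 1] * p
            gy += GY[dy + 1][dx + 1] * p
    return (gx * gx + gy * gy) ** 0.5

# A vertical edge: dark on the left, bright on the right.
img = [[0, 0, 0, 10, 10] for _ in range(4)]
print(sobel_magnitude(img, 1, 2))  # 40.0 -- strong response at the edge
print(sobel_magnitude(img, 1, 1))  # 0.0  -- no response in the flat area
```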
20. Which step in Canny edge detection removes weak edges?
A) Smoothing
B) Thresholding
C) Non-maximum suppression
D) Gaussian blur
✅ Answer: B) Thresholding (hysteresis thresholding removes weak edges; non-maximum suppression only thins edges to single-pixel width)
21. Which operator uses the second derivative for edge detection?
A) Sobel
B) Laplacian
C) Prewitt
D) Canny
✅ Answer: B) Laplacian
22. A corner is a point where:
A) Two edges meet
B) Image is smooth
C) Color is uniform
D) Thresholding fails
✅ Answer: A) Two edges meet
23. Which is a popular corner detection algorithm?
A) Gabor
B) Harris
C) Bilateral
D) Gaussian
✅ Answer: B) Harris
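The Harris detector scores each pixel with R = det(M) - k·trace(M)², where M accumulates products of the image gradients over a local window. The toy sketch below assumes NumPy is available; the 3x3 window and k = 0.04 are conventional illustrative choices, not fixed requirements:

```python
# Illustrative Harris response: corners (gradients in two directions)
# give large positive R; edges give negative R; flat regions give R ~ 0.
import numpy as np

def harris_response(img, k=0.04, win=1):
    iy, ix = np.gradient(img.astype(float))   # image derivatives
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            sxx = ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            syy = iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            sxy = ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            det = sxx * syy - sxy * sxy
            R[y, x] = det - k * (sxx + syy) ** 2
    return R

# A bright square in a dark image: its corner should score highest.
img = np.zeros((12, 12))
img[4:9, 4:9] = 1.0
R = harris_response(img)
print(R[4, 4] > R[1, 1])   # True -- corner beats flat background
print(R[4, 4] > R[4, 6])   # True -- corner beats a point on a straight edge
```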
24. Which feature descriptor is scale and rotation invariant?
A) FAST
B) Harris
C) SIFT
D) Sobel
✅ Answer: C) SIFT
25. Which corner detection algorithm is the fastest?
A) SIFT
B) Harris
C) FAST
D) SURF
✅ Answer: C) FAST
26. Which operation shrinks white regions in a binary image?
A) Dilation
B) Erosion
C) Opening
D) Closing
✅ Answer: B) Erosion
27. Dilation operation:
A) Shrinks objects
B) Blurs edges
C) Expands white regions
D) Removes noise
✅ Answer: C) Expands white regions
28. Opening is a combination of:
A) Dilation followed by erosion
B) Erosion followed by dilation
C) Two erosions
D) Two dilations
✅ Answer: B) Erosion followed by dilation
29. Closing operation is used to:
A) Detect corners
B) Fill small holes
C) Blur edges
D) Sharpen image
✅ Answer: B) Fill small holes
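The four morphological operations above can be sketched directly from their definitions with a 3x3 square structuring element (toy pure-Python code, binary images as nested lists):

```python
# Sketch of binary erosion and dilation with a 3x3 square structuring
# element; opening and closing are then just the two compositions.

def erode(img):
    h, w = len(img), len(img[0])
    return [[1 if 0 < y < h - 1 and 0 < x < w - 1 and
             all(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             else 0 for x in range(w)] for y in range(h)]

def dilate(img):
    h, w = len(img), len(img[0])
    return [[1 if any(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0 for x in range(w)] for y in range(h)]

def opening(img):  # erosion followed by dilation: removes small specks
    return dilate(erode(img))

def closing(img):  # dilation followed by erosion: fills small holes
    return erode(dilate(img))

# A 7x7 solid square with a one-pixel hole, plus one isolated noise pixel.
img = [[0] * 11 for _ in range(9)]
for y in range(1, 8):
    for x in range(1, 8):
        img[y][x] = 1
img[4][4] = 0        # hole inside the object
img[4][9] = 1        # isolated noise pixel
print(opening(img)[4][9])  # 0 -- opening removed the isolated pixel
print(closing(img)[4][4])  # 1 -- closing filled the hole
```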
30. GLCM stands for:
A) Grayscale Line Color Mapping
B) Gray-Level Co-occurrence Matrix
C) Gaussian Level Contrast Matrix
D) Generalized Linear Control Method
✅ Answer: B) Gray-Level Co-occurrence Matrix
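A GLCM for a single offset is simple to compute: for the offset "one pixel to the right", entry (i, j) counts how often gray level i is immediately followed by gray level j. Haralick features such as contrast, energy, and homogeneity are then statistics of this matrix. A toy sketch:

```python
# Sketch of a gray-level co-occurrence matrix for the horizontal
# offset (0, 1): glcm[i][j] counts how often level i is immediately
# followed (to the right) by level j.

def glcm_right(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
m = glcm_right(img, 4)
print(m[0][0])  # 2 -- the pair (0, 0) occurs twice
print(m[2][2])  # 3 -- the pair (2, 2) occurs three times
```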
31. Local Binary Pattern (LBP) is used for:
A) Corner detection
B) Texture analysis
C) Noise removal
D) Image resizing
✅ Answer: B) Texture analysis
32. Which transform is used for multi-scale texture analysis?
A) Laplace
B) Gabor
C) Wavelet
D) Fourier
✅ Answer: C) Wavelet
33. Texture represents the ______ arrangement of pixel intensities.
A) Random
B) Uniform
C) Spatial
D) Logical
✅ Answer: C) Spatial
8 Marks:
1. Explain the concept of image processing in detail. Discuss its importance in various fields and
provide examples of its applications
2. Describe the basic steps involved in performing edge detection in image processing. Discuss
the significance of edge detection and provide examples of its applications.
3. What is mathematical morphology? Discuss the fundamental operations involved in
mathematical morphology and provide examples to demonstrate their effects on images
4. Explain the concept of texture analysis in image processing. Discuss how texture features are
extracted and utilized in various applications.
5. Explain the differences between linear and non-linear filters with examples. Discuss how
Gaussian and Median filters work and where they are used.
6. Describe global, adaptive, and Otsu’s thresholding techniques. Compare their effectiveness
in varying lighting conditions.
7. Explain the Canny edge detection process in detail. Highlight how it improves upon other
edge detectors like Sobel and Prewitt.
8. What is the significance of corner detection in image processing? Compare Harris and FAST
corner detectors in terms of method and speed.
9. Describe the basic morphological operations: erosion, dilation, opening, and closing. Explain
their impact on binary images with suitable examples.
10. What is texture in image processing? Describe any two texture analysis methods (e.g.,
GLCM, LBP, or wavelet-based analysis) and their applications.
11. How do filtering, edge detection, and morphological operations work together in image
preprocessing? Explain with a practical example, such as license plate detection or medical
image segmentation.
16 Marks:
12. Describe various classical filtering techniques in image processing. Explain linear (mean,
Gaussian) and non-linear (median, bilateral) filters with examples and use-cases.
13. Explain in detail the different thresholding techniques. Include global thresholding, Otsu’s
method, and adaptive thresholding. Compare their performance.
14. Discuss edge detection techniques. Explain and compare Sobel, Prewitt, Laplacian, and
Canny edge detectors in terms of method, accuracy, and noise sensitivity.
15. Explain the concept of corner and interest point detection. Describe Harris, FAST, and SIFT
algorithms with their applications in real-world scenarios.
16. What is mathematical morphology? Describe basic operations (erosion, dilation, opening,
closing) with examples and their importance in binary image processing.
17. Discuss texture analysis techniques in image processing. Explain statistical, structural, and
transform-based methods with real-world examples.
18. Write a comprehensive note on feature extraction in image processing using filtering, edge
detection, and corner detection techniques.
19. Explain how different image processing techniques work together in a pipeline for object
detection or image segmentation.
UNIT 2
1. Which technique is primarily used for identifying and labeling connected components
in a binary image?
A) Skeletonization
B) Object labeling and counting
C) Fourier descriptors
D) Active contours
Answer: B) Object labeling and counting
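Object labeling and counting can be sketched with a flood fill over 8-connected neighbors: each unvisited foreground pixel seeds a new label that spreads to everything connected to it (toy pure-Python code; production code would typically use a two-pass union-find algorithm or a library routine):

```python
# Sketch of connected-component labeling via flood fill (8-connectivity).

def label_components(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                next_label += 1                 # new object found
                stack = [(y, x)]
                labels[y][x] = next_label
                while stack:                    # spread the label
                    cy, cx = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w and
                                    img[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
    return labels, next_label

img = [[1, 1, 0, 0, 1],
       [1, 0, 0, 0, 1],
       [0, 0, 1, 0, 0],
       [0, 0, 1, 1, 0]]
labels, count = label_components(img)
print(count)  # 3 -- three separate 8-connected objects
```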
2. Which method is used to reduce a shape to its minimal representation while
preserving its topology?
A) Boundary tracking
B) Thinning (Skeletonization)
C) Chain codes
D) Shape recognition
Answer: B) Thinning (Skeletonization)
3. Which of the following descriptors is used to represent a shape boundary using a
sequence of directional codes?
A) Fourier descriptors
B) Chain codes
C) Centroidal profiles
D) Moment descriptors
Answer: B) Chain codes
4. Active contours are also known as:
A) Snakes
B) Skeletons
C) Centroids
D) Moments
Answer: A) Snakes
5. Which of the following is commonly used to measure shape size in binary shape
analysis?
A) Boundary length measures
B) Size filtering
C) Boundary descriptors
D) Fourier descriptors
Answer: B) Size filtering
6. Which function helps to calculate the shortest distance between points inside a shape?
A) Distance functions
B) Object labeling
C) Boundary tracking
D) Moment descriptors
Answer: A) Distance functions
7. What does connectedness in binary shape analysis help determine?
A) The shape’s color
B) Whether pixels belong to the same object
C) The boundary length
D) The Fourier transform
Answer: B) Whether pixels belong to the same object
8. Which process assigns unique labels to different objects in a binary image?
A) Skeletonization
B) Object labeling and counting
C) Size filtering
D) Deformable shape analysis
Answer: B) Object labeling and counting
9. What technique is used to remove small noise or objects below a certain area
threshold?
A) Size filtering
B) Boundary tracking
C) Moment calculation
D) Chain codes
Answer: A) Size filtering
10. Which descriptor analyzes the shape by representing its boundary as a series of
directional moves?
A) Chain codes
B) Region descriptors
C) Fourier descriptors
D) Centroidal profiles
Answer: A) Chain codes
11. What does deformable shape analysis primarily focus on?
A) Fixed, rigid shapes
B) Shapes that can change or adapt form
C) Counting objects in an image
D) Computing boundary lengths
Answer: B) Shapes that can change or adapt form
12. Active contours are used for:
A) Tracking boundaries dynamically in an image
B) Labeling connected components
C) Calculating region moments
D) Filtering small shapes
Answer: A) Tracking boundaries dynamically in an image
13. Which method is useful for handling occlusion in shape recognition?
A) Boundary length measures
B) Chain codes
C) Shape models and recognition
D) Size filtering
Answer: C) Shape models and recognition
14. Which of the following is a boundary descriptor that uses frequency components to
describe shape?
A) Moment descriptors
B) Fourier descriptors
C) Region descriptors
D) Centroidal profiles
Answer: B) Fourier descriptors
15. What does a skeleton of a shape represent?
A) The shape’s color histogram
B) A thin version of the shape preserving its topology
C) The perimeter of the shape
D) The number of connected components
Answer: B) A thin version of the shape preserving its topology
16. Which boundary tracking procedure helps in extracting the outline of an object?
A) Distance functions
B) Boundary tracking
C) Size filtering
D) Moment calculation
Answer: B) Boundary tracking
17. Centroidal profiles are used to:
A) Analyze shape based on distances from the centroid to boundary points
B) Label connected components
C) Filter small objects
D) Compute Fourier transforms
Answer: A) Analyze shape based on distances from the centroid to boundary points
18. Moment descriptors are primarily used for:
A) Shape recognition and characterization
B) Counting objects
C) Boundary tracking
D) Size filtering
Answer: A) Shape recognition and characterization
19. Which distance function calculates the shortest distance from every pixel to the
nearest boundary pixel?
A) Euclidean distance transform
B) Chain code
C) Fourier descriptor
D) Moment descriptor
Answer: A) Euclidean distance transform
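A distance transform can be computed in two raster passes over the image. The sketch below uses the city-block (Manhattan) metric for simplicity; the same two-pass idea underlies chamfer approximations of the Euclidean distance transform, whose local maxima are used in skeleton extraction and shape matching:

```python
# Sketch of a two-pass distance transform (city-block metric): each
# foreground pixel ends up holding its distance to the nearest
# background pixel.

def city_block_distance(img):
    h, w = len(img), len(img[0])
    INF = h + w
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate distances from the top-left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # Backward pass: propagate distances from the bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

# A 5x5 foreground square surrounded by background.
img = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
d = city_block_distance(img)
print(d[3][3])  # 3 -- the centre is deepest inside the shape
```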
20. Handling occlusion in shape recognition means:
A) Ignoring parts of the shape
B) Recognizing shapes even if partially hidden
C) Counting the number of pixels
D) Applying size filtering
Answer: B) Recognizing shapes even if partially hidden
8 MARKS QUESTIONS
1. Discuss the concept of connectedness in binary images and explain how object
labeling and counting is performed. What are the common challenges
encountered during these processes?
2. Explain the role of size filtering in binary shape analysis. How does it affect the overall shape
recognition process? Provide examples of when size filtering is necessary.
3. Describe skeletonization and thinning in the context of shape analysis. How do
these techniques help in simplifying shapes and what are their applications?
4. What are boundary tracking procedures? Describe how boundary descriptors
such as chain codes and Fourier descriptors are used for shape representation
and recognition.
5. Explain the principles of deformable shape analysis and active contours (snakes). How are
they used to handle complex shapes and occlusions in images?
6. Describe the importance of centroidal profiles and moment descriptors in shape
recognition. How do these descriptors contribute to distinguishing between
different shapes?
7. Discuss the challenges posed by occlusion in shape recognition. What techniques
or models can be used to effectively recognize partially occluded shapes?
8. Explain distance functions in binary shape analysis. How are distance
transforms used in skeleton extraction and shape matching?
15 MARKS
1. Explain in detail the process of binary shape analysis starting from connectedness, object labeling, and counting, to size filtering. Discuss how these steps are crucial for accurate shape extraction and recognition. Include examples where appropriate.
2. Describe skeletonization and thinning techniques in binary shape analysis. Explain the algorithms used, their advantages, and limitations. How do these methods aid in further shape processing tasks such as recognition and classification?
3. Discuss boundary tracking procedures and their role in shape analysis. Explain how boundary descriptors like chain codes and Fourier descriptors are computed and used for shape representation. Compare their strengths and weaknesses.
4. Define deformable shape analysis and active contours (snakes). Describe the mathematical formulation of active contours, how they evolve, and their application in boundary detection. Discuss how they handle occlusion and shape variability.
5. Explain the concepts of shape models and shape recognition. Describe how centroidal profiles, moment descriptors, and region descriptors are used to characterize and recognize shapes. Discuss challenges such as occlusion and noise and how they are addressed.
6. Detail the use of distance functions in binary shape analysis. Explain how distance transforms are computed and used in skeleton extraction and shape matching. Illustrate with examples how these functions contribute to deformable shape models.
UNIT 3
1. What is the main use of the Hough Transform?
A) Color filtering
B) Line and shape detection
C) Image compression
D) Noise removal
Answer: B) Line and shape detection
2. Which method does the Hough Transform use for line parameterization?
A) Polar coordinates
B) Foot-of-normal method
C) Cartesian slope-intercept
D) Radial basis function
Answer: B) Foot-of-normal method
3. What does RANSAC stand for?
A) Random Sample Consensus
B) Recursive Analysis and Sampling
C) Rapid Algorithm for Noise Suppression
D) Range and Sample Calculation
Answer: A) Random Sample Consensus
4. Besides lines, which shape can the standard Hough Transform detect?
A) Triangles
B) Squares
C) Circles
D) Ellipses
Answer: C) Circles
5. What is a common problem with the Hough Transform?
A) Low accuracy
B) Speed and computational cost
C) Poor edge detection
D) Requires color images
Answer: B) Speed and computational cost
6. In circle detection using Hough Transform, what is "accurate center location"?
A) Estimating radius only
B) Locating the circle’s center precisely
C) Measuring circle perimeter
D) Detecting circle color
Answer: B) Locating the circle’s center precisely
7. What technique is used to detect elliptical shapes with the Hough Transform?
A) Standard Hough Transform
B) Generalized Hough Transform (GHT)
C) Fourier Transform
D) Wavelet Transform
Answer: B) Generalized Hough Transform (GHT)
8. What is the role of spatial matched filtering?
A) Image compression
B) Enhancing detection by matching spatial features
C) Noise removal
D) Color segmentation
Answer: B) Enhancing detection by matching spatial features
9. What does the accumulator space represent in the Hough Transform?
A) Color histogram
B) Parameter space collecting votes for shapes
C) Image pixels
D) Noise distribution
Answer: B) Parameter space collecting votes for shapes
10. RANSAC is primarily used for:
A) Detecting occlusion
B) Robust line fitting in the presence of outliers
C) Image sharpening
D) Edge detection
Answer: B) Robust line fitting in the presence of outliers
11. Which algorithm is commonly used to handle outliers in line fitting?
A) K-means
B) RANSAC
C) FFT
D) PCA
Answer: B) RANSAC
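RANSAC for line fitting can be sketched in a dozen lines: hypothesize a line from two random points, count how many points fall within a tolerance of it, and keep the best-supported hypothesis. In this toy sketch vertical hypotheses are simply skipped for brevity:

```python
# Hedged sketch of RANSAC for straight-line fitting in the presence
# of gross outliers.
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    best_line, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                   # skip vertical lines in this sketch
        m = (y2 - y1) / (x2 - x1)      # hypothesised slope
        c = y1 - m * x1                # hypothesised intercept
        inliers = sum(1 for x, y in points if abs(y - (m * x + c)) < tol)
        if inliers > best_inliers:
            best_inliers, best_line = inliers, (m, c)
    return best_line, best_inliers

# 12 points on the line y = 2x + 1 plus 4 gross outliers.
points = [(x, 2 * x + 1) for x in range(12)] + \
         [(3, 40), (5, -20), (8, 60), (1, 15)]
(m, c), n = ransac_line(points)
print(m, c, n)  # 2.0 1.0 12 -- the outliers do not corrupt the fit
```

A least-squares fit over the final inlier set is usually run afterwards to refine the two-point hypothesis.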
12. The foot-of-normal method in the Hough Transform represents a line by:
A) Its slope and intercept
B) The distance from origin and angle of normal
C) The endpoints of the line
D) The pixel intensity values
Answer: B) The distance from origin and angle of normal
13. Hole detection in shape analysis helps in:
A) Identifying objects inside other objects
B) Color segmentation
C) Edge sharpening
D) Noise filtering
Answer: A) Identifying objects inside other objects
14. Speed problems in the Hough Transform arise mainly because:
A) Too many parameters in accumulator space
B) Poor edge detection
C) Color image complexity
D) Lack of sufficient data
Answer: A) Too many parameters in accumulator space
15. Which transform is generalized to detect arbitrary shapes beyond lines and
circles?
A) Fourier Transform
B) Generalized Hough Transform (GHT)
C) Wavelet Transform
D) Radon Transform
Answer: B) Generalized Hough Transform (GHT)
16. In ellipse detection, which feature is crucial to improve accuracy?
A) Center location and axis lengths
B) Pixel brightness
C) Image texture
D) Edge thickness
Answer: A) Center location and axis lengths
17. Line localization is important because:
A) It helps in removing noise
B) It identifies exact pixel positions of lines
C) It filters small shapes
D) It converts image to binary
Answer: B) It identifies exact pixel positions of lines
18. Spatial matched filtering works by:
A) Matching a template to an image region to detect features
B) Converting image to frequency domain
C) Segmenting colors
D) Removing noise via thresholding
Answer: A) Matching a template to an image region to detect features
19. The RANSAC algorithm iteratively:
A) Removes noise pixels
B) Fits models using random subsets and selects the best fit
C) Converts images to grayscale
D) Detects edges using gradients
Answer: B) Fits models using random subsets and selects the best fit
20. In Hough Transform, the accumulator array is:
A) A 2D array counting votes for possible line parameters
B) A pixel intensity map
C) A filter kernel
D) A noise mask
Answer: A) A 2D array counting votes for possible line parameters
8-Mark Questions:
1. Explain the working of the Hough Transform for line detection using the foot-of-
normal method. How does the accumulator space help in detecting lines?
2. Describe the RANSAC algorithm and explain how it improves the robustness of
straight line detection in the presence of noise and outliers.
3. Discuss the process of circular object detection using the Hough Transform. How
is the accurate center of the circle determined?
4. What is the Generalized Hough Transform (GHT)? Explain how it is used for
detecting shapes like ellipses and other arbitrary objects.
5. Explain the speed problem encountered in Hough Transform-based methods.
What techniques can be applied to reduce computational time without losing
accuracy?
6. Describe spatial matched filtering and its role in object location and feature
collation in shape detection.
7. Explain the foot-of-normal method for line parameterization and its advantages
over slope-intercept form.
8. Outline the steps involved in line localization and line fitting after line detection
using the Hough Transform.
15-Mark Questions:
1. Explain in detail the Hough Transform technique for line detection, focusing on
the foot-of-normal parameterization, accumulator space voting, line localization,
and line fitting. How does this method handle noisy data, and how does
RANSAC complement this approach?
2. Discuss the Generalized Hough Transform (GHT) and its application in ellipse
detection. Explain how spatial matched filtering, feature collation, and object
location work within the GHT framework. Include challenges and solutions
related to computational efficiency and accuracy.
3. Case Study: Describe the complete process of human iris location using Hough
Transform-based methods. Include circular and elliptical detection techniques,
accurate center location, hole detection, and handling of speed issues.
4. Compare and contrast the Hough Transform and RANSAC methods for shape
detection in images. Discuss their mathematical foundations, robustness to noise,
computational complexity, and practical applications.
5. Explain the challenges of speed and accuracy in the Hough Transform for shape
detection. Discuss various optimization strategies such as probabilistic Hough
Transform, multi-resolution approaches, and parallel processing.
UNIT 4
1. What is the main purpose of projection schemes in 3D vision?
A) To map 3D points onto a 2D plane
B) To enhance image colors
C) To increase image resolution
D) To remove noise
Answer: A) To map 3D points onto a 2D plane
2. Which method estimates surface shape using brightness variations in a single
image?
A) Shape from texture
B) Shape from shading
C) Active range finding
D) Optical flow
Answer: B) Shape from shading
3. Photometric stereo recovers surface normals using:
A) A single image under one lighting
B) Multiple images under varying lighting
C) Texture variations
D) Depth sensors
Answer: B) Multiple images under varying lighting
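Photometric stereo reduces, per pixel, to a small linear system: with three images under known, linearly independent light directions L, the Lambertian intensities satisfy I = L(ρn), so the albedo-scaled normal ρn is recovered by solving a 3x3 system. A toy NumPy sketch with synthetic data (the light directions and normal below are made-up illustrative values):

```python
# Hedged sketch of photometric stereo for one pixel, assuming
# Lambertian shading and three known light directions.
import numpy as np

# Three known light directions (rows), normalised to unit vectors.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Synthetic ground truth for one pixel: albedo 0.8, tilted normal.
n_true = np.array([0.3, 0.2, 0.9])
n_true = n_true / np.linalg.norm(n_true)
albedo = 0.8

I = L @ (albedo * n_true)        # the three observed intensities
g = np.linalg.solve(L, I)        # recover g = albedo * normal
print(round(float(np.linalg.norm(g)), 3))          # 0.8 -- the albedo
print(bool(np.allclose(g / np.linalg.norm(g), n_true)))  # True -- the normal
```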
4. Shape from texture exploits:
A) Lighting intensity variations
B) Texture pattern distortions due to surface shape
C) Motion between frames
D) Depth sensor data
Answer: B) Texture pattern distortions due to surface shape
5. Active range finding uses:
A) Only passive image data
B) Sensors that actively measure distances
C) Color information
D) Texture analysis
Answer: B) Sensors that actively measure distances
6. A common volumetric representation in 3D modeling is:
A) Point cloud
B) Voxel grid
C) Edge map
D) Contour plot
Answer: B) Voxel grid
7. Triangulation in 3D vision is used to:
A) Align images
B) Calculate 3D points from multiple views
C) Estimate motion parameters
D) Extract texture features
Answer: B) Calculate 3D points from multiple views
8. Optical flow represents:
A) The depth of an object
B) The apparent motion of pixels between images
C) The surface normals
D) Color changes
Answer: B) The apparent motion of pixels between images
9. Bundle adjustment is primarily used for:
A) Filtering noise from images
B) Refining 3D reconstructions by minimizing re-projection errors
C) Segmenting objects
D) Detecting edges
Answer: B) Refining 3D reconstructions by minimizing re-projection errors
10. Spline-based motion modeling is used to:
A) Detect object edges
B) Represent smooth, continuous motion trajectories
C) Convert 3D models to 2D
D) Enhance image contrast
Answer: B) Represent smooth, continuous motion trajectories
11. Shape from focus technique relies on:
A) Variation in lighting intensity
B) Sharpness of image regions at different focal depths
C) Motion between frames
D) Texture gradients
Answer: B) Sharpness of image regions at different focal depths
12. Which of the following is NOT a surface representation technique?
A) Point-based representation
B) Volumetric representation
C) Histogram equalization
D) Mesh representation
Answer: C) Histogram equalization
13. In 3D object recognition, features are typically extracted from:
A) Image color only
B) 3D surface or shape descriptors
C) Noise patterns
D) Image brightness only
Answer: B) 3D surface or shape descriptors
14. Translational alignment in motion analysis refers to:
A) Rotational motion of objects
B) Linear displacement of the camera or object
C) Scaling of object size
D) Change in illumination
Answer: B) Linear displacement of the camera or object
15. Layered motion techniques are useful for:
A) Segmenting moving objects into separate layers
B) Enhancing image contrast
C) Detecting edges
D) Performing color correction
Answer: A) Segmenting moving objects into separate layers
16. A point-based representation of surfaces typically uses:
A) Voxels
B) A set of discrete points sampled from the surface
C) Edge maps
D) Texture gradients
Answer: B) A set of discrete points sampled from the surface
17. Bundle adjustment minimizes which of the following errors?
A) Color variance
B) Re-projection error between observed and predicted image points
C) Motion blur
D) Texture inconsistency
Answer: B) Re-projection error between observed and predicted image points
18. Which method uses multiple images taken at different focus distances to recover
depth?
A) Shape from shading
B) Shape from focus
C) Photometric stereo
D) Active range finding
Answer: B) Shape from focus
19. Parametric motion models typically describe motion using:
A) Arbitrary pixel displacements
B) Mathematical functions with a fixed number of parameters
C) Color changes
D) Texture features
Answer: B) Mathematical functions with a fixed number of parameters
20. Volumetric representations in 3D vision are most useful for:
A) Representing surfaces with discrete points
B) Representing solid objects including interior volume
C) Enhancing image sharpness
D) Segmenting textures
Answer: B) Representing solid objects including interior volume
8 MARKS
1. Explain the principle of shape from shading. How can variations in image
brightness be used to recover the 3D shape of an object?
2. Describe photometric stereo. How does it differ from shape from shading, and
what are its advantages in surface normal estimation?
3. Discuss the concept of shape from texture. How do texture variations provide
cues about surface orientation and depth?
4. What is active range finding? Explain common sensors or techniques used in
active range finding and how they differ from passive methods.
5. Compare point-based and volumetric surface representations. What are the pros
and cons of each in 3D modeling and reconstruction?
6. Describe the process of triangulation in 3D reconstruction. Why is triangulation
essential when working with multiple camera views?
7. Explain the concept of optical flow and its role in motion estimation. How can
optical flow be used to analyze dynamic scenes?
8. Outline the steps involved in 3D object recognition. What challenges arise in
recognizing objects from different viewpoints or under varying lighting?
15 MARKS
1. Discuss various methods for 3D vision including shape from shading,
photometric stereo, shape from texture, shape from focus, and active range
finding. Compare their principles, strengths, and limitations.
2. Explain the full pipeline of 3D reconstruction starting from image acquisition,
triangulation, bundle adjustment, to final model generation. Include
explanations of translational alignment and parametric motion estimation.
3. Explain motion analysis techniques such as optical flow, spline-based motion,
and layered motion. Discuss how these methods contribute to understanding
dynamic scenes in 3D vision.
4. Describe surface representations used in 3D vision, focusing on point-based
and volumetric models. Discuss their impact on reconstruction accuracy and
computational complexity.
5. Elaborate on bundle adjustment in 3D vision: its role, mathematical
formulation, and importance in refining 3D reconstructions.
UNIT 5
MCQ
1. Which technique is commonly used for face recognition by representing
faces as a set of principal components?
A) Chamfer matching
B) Particle filters
C) Eigenfaces
D) Optical flow
Answer: C) Eigenfaces
2. Foreground-background separation in surveillance is primarily used for:
A) Detecting road signs
B) Identifying moving objects
C) Face detection
D) Locating pedestrians
Answer: B) Identifying moving objects
3. What does Chamfer matching help with in surveillance applications?
A) Face recognition
B) Tracking shapes and contours
C) Roadway detection
D) Gait analysis
Answer: B) Tracking shapes and contours
4. In in-vehicle vision systems, which feature is used to locate the roadway?
A) Particle filters
B) Road markings
C) Foreground-background separation
D) Eigenfaces
Answer: B) Road markings
5. Human gait analysis is primarily used in:
A) Surveillance
B) Photo albums
C) Roadway detection
D) Face detection
Answer: A) Surveillance
6. Active appearance models in face recognition combine:
A) Texture and shape information
B) Motion and color information
C) Sound and image data
D) Depth and speed data
Answer: A) Texture and shape information
7. In surveillance, particle filters are mainly used for:
A) Image enhancement
B) Robust object tracking over time
C) Face recognition
D) Road sign detection
Answer: B) Robust object tracking over time
8. Which technique helps in detecting occlusion during object tracking?
A) Eigenfaces
B) Chamfer matching
C) Optical flow
D) Histogram equalization
Answer: B) Chamfer matching
9. Road sign identification in in-vehicle vision systems primarily uses:
A) Particle filters
B) Shape and color features
C) Face detection algorithms
D) Background subtraction
Answer: B) Shape and color features
10. Foreground-background separation can be achieved by:
A) Edge detection
B) Background modeling and subtraction
C) Shape matching
D) Histogram equalization
Answer: B) Background modeling and subtraction
11. Human gait analysis is considered a:
A) Biometric recognition method
B) Image compression technique
C) Color enhancement method
D) Edge detection method
Answer: A) Biometric recognition method
12. Eigenfaces technique uses which mathematical method to reduce
dimensionality?
A) Fourier Transform
B) Principal Component Analysis (PCA)
C) Wavelet Transform
D) Histogram Equalization
Answer: B) Principal Component Analysis (PCA)
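The PCA step behind Eigenfaces can be sketched on synthetic data: treat each image as a vector, subtract the mean face, take the top right-singular vectors of the centered data as the "eigenfaces", and represent each face by a few projection coefficients. The NumPy sketch below uses randomly generated vectors in place of real face images:

```python
# Hedged sketch of the Eigenface idea: PCA via SVD reduces each
# 64-pixel "face" to 3 coefficients with almost no reconstruction loss,
# because the synthetic data really varies along only 3 directions.
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 64))          # 3 hidden modes of variation
coeffs = rng.normal(size=(20, 3))         # 20 samples
faces = coeffs @ basis + 0.01 * rng.normal(size=(20, 64))

mean_face = faces.mean(axis=0)
X = faces - mean_face                     # centre the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)
eigenfaces = Vt[:3]                       # top 3 principal components

# Represent the first face by 3 numbers and reconstruct it:
w = eigenfaces @ X[0]                     # projection coefficients
recon = mean_face + w @ eigenfaces
err = np.linalg.norm(recon - faces[0])
print(bool(err < 0.2))  # True -- 3 components capture almost everything
```

In recognition, a probe face is projected the same way and compared to stored coefficient vectors, e.g. by nearest-neighbour distance.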
13. Particle filters are also known as:
A) Kalman filters
B) Sequential Monte Carlo methods
C) Fourier filters
D) Gaussian filters
Answer: B) Sequential Monte Carlo methods
14. Combining views from multiple cameras improves surveillance by:
A) Increasing color accuracy
B) Providing multiple perspectives to resolve occlusion
C) Reducing image noise
D) Increasing frame rate
Answer: B) Providing multiple perspectives to resolve occlusion
15. Chamfer matching is mainly used for:
A) Matching edge-based shape templates
B) Enhancing image contrast
C) Detecting facial landmarks
D) Removing background noise
Answer: A) Matching edge-based shape templates
8-Mark Questions:
1. Explain the Eigenface method for face recognition. How does it reduce
dimensionality and what are its limitations?
2. Describe how foreground-background separation is used in surveillance systems
to detect moving objects. What are some common techniques?
3. What is Chamfer matching? Discuss its application in object tracking and
occlusion handling in surveillance.
4. Explain the role of particle filters in multi-camera tracking systems. How do they
improve tracking accuracy?
5. Discuss how roadways and road markings are detected in in-vehicle vision
systems. What challenges are typically encountered?
6. Outline the steps involved in human gait analysis and its importance in
surveillance.
15-Mark Questions:
1. Discuss the process of face detection and recognition in photo album
applications. Explain Eigenfaces, active appearance models, and 3D shape
models, highlighting their strengths and weaknesses.
2. Explain the key components of a surveillance system, including foreground-
background separation, Chamfer matching, particle filters, and handling
occlusion. Discuss how combining views from multiple cameras enhances system
performance.
3. Describe the design and functioning of an in-vehicle vision system focused on
locating the roadway, road markings, road signs, and pedestrians. Discuss
challenges such as varying lighting and weather conditions.
4. Analyze human gait analysis as a biometric technique in surveillance
applications. Discuss the data acquisition methods, feature extraction, and
challenges in real-world scenarios.
5. Compare and contrast different tracking methods such as Chamfer matching
and particle filters. Explain their application in surveillance systems and their
effectiveness in occlusion and multi-object tracking.