Computer Vision and Image Processing
Short Questions
Unit 1
1. Q1. Define Computer Vision and Image Processing.
Computer Vision is the field that enables machines to interpret and understand visual
data. Image Processing involves techniques to enhance and manipulate images.
2. Q2. Mention two real-world applications of computer vision.
Examples: Autonomous vehicles (lane detection), Healthcare (tumor detection).
3. Q3. Differentiate between sampling and quantization.
Sampling digitizes the spatial coordinates of a continuous image by selecting a discrete
grid of pixels, while quantization maps each sample's amplitude to one of a finite set of
intensity levels.
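The two steps above can be sketched in a few lines of numpy. This is an illustrative sketch only: the image here is synthetic random data, and the sampling step and number of quantization levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# A synthetic 8-bit grayscale "image" standing in for a continuous scene.
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# Sampling: keep every 4th pixel in each direction (a coarser spatial grid).
sampled = image[::4, ::4]           # 16x16 result

# Quantization: map 256 intensity levels down to 8 discrete levels.
levels = 8
step = 256 // levels                # 32 input levels collapse to one output level
quantized = (image // step) * step  # each pixel snapped to one of 8 values
```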
4. Q4. List any two image acquisition methods.
Examples: CCD/CMOS camera sensors, Medical imaging (MRI, CT).
5. Q5. What are RGB and HSV color models used for?
RGB is used in display systems, while HSV is used in color analysis and segmentation.
6. Q6. State the purpose of histogram equalization.
To enhance contrast in an image by redistributing intensity values.
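As a minimal numpy sketch of the idea (assuming an 8-bit grayscale image with at least two distinct gray levels), the redistribution is done by mapping each level through the normalized cumulative histogram:

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value of the darkest level present
    # Map each input level through the normalized CDF so that intensities
    # spread over the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

After equalization, the darkest level present maps to 0 and the brightest to 255, stretching a low-contrast image over the full range.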
7. Q7. Give two examples of spatial filtering techniques.
Examples: Mean filter, Median filter.
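Both filters replace each pixel with a statistic of its neighborhood; a compact numpy sketch (using edge padding so the output keeps the input size, a common but not universal choice):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mean_filter(img, k=3):
    """Replace each pixel by the mean of its k*k neighborhood (smoothing)."""
    win = sliding_window_view(np.pad(img, k // 2, mode='edge'), (k, k))
    return win.mean(axis=(-2, -1))

def median_filter(img, k=3):
    """Replace each pixel by the median of its k*k neighborhood
    (good at removing salt-and-pepper noise while keeping edges)."""
    win = sliding_window_view(np.pad(img, k // 2, mode='edge'), (k, k))
    return np.median(win, axis=(-2, -1))
```

A single outlier pixel ("salt" noise) is completely removed by the median filter, while the mean filter only spreads it out.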
8. Q8. What does the Discrete Fourier Transform (DFT) do?
It represents an image in terms of frequency components.
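A quick numpy illustration (with an arbitrary small test array): the DC coefficient `F[0, 0]` equals the sum of all pixel values, and the inverse transform recovers the image exactly.

```python
import numpy as np

img = np.arange(16.0).reshape(4, 4)
F = np.fft.fft2(img)            # 2-D DFT: the image as frequency components

# Shifting the zero-frequency (DC) term to the center is the usual
# convention when displaying the magnitude spectrum.
F_shifted = np.fft.fftshift(F)
magnitude = np.abs(F_shifted)
```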
9. Q9. Differentiate between low-pass and high-pass filters.
Low-pass filters attenuate high frequencies to smooth images; high-pass filters
attenuate low frequencies to enhance edges and fine detail.
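One way to see the difference is an ideal frequency-domain filter, sketched here in numpy (the sharp circular cutoff is the simplest choice; practical filters such as Gaussian or Butterworth use smoother transitions to avoid ringing):

```python
import numpy as np

def ideal_filter(img, cutoff, highpass=False):
    """Ideal frequency-domain filter: keep (low-pass) or discard (high-pass)
    frequencies within `cutoff` of the spectrum center."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    mask = dist > cutoff if highpass else dist <= cutoff
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

A constant image has only a DC component, so the low-pass filter passes it unchanged and the high-pass filter removes it entirely.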
10. Q10. What is image restoration? Give one example.
Restoration recovers an image degraded by a known or modeled process such as blur or
noise. Example: removing blur using Wiener filtering.
Unit 2
11. Q1. Differentiate between Sobel and Prewitt edge operators.
Both compute first-order intensity gradients to detect edges; Sobel weights the central
row or column by 2, giving slight smoothing and better noise resistance, while Prewitt
uses uniform weights and is simpler.
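The kernel weights make the difference concrete. A small numpy sketch (the step-edge test image and the valid-mode correlation helper are illustrative choices):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # center row weighted 2
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # uniform weights

def correlate(img, kernel):
    """Valid-mode cross-correlation of img with a small kernel."""
    win = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', win, kernel)

# Vertical step edge: both operators respond most strongly at the edge,
# but Sobel's peak is larger because of the extra central weight.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
gx_sobel   = correlate(img, SOBEL_X)
gx_prewitt = correlate(img, PREWITT_X)
```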
12. Q2. List the four steps of the Canny edge detection algorithm.
Noise reduction, Gradient computation, Non-maximum suppression, Hysteresis
thresholding.
13. Q3. What is meant by corner detection? Give one application.
Corner detection identifies points where intensity varies strongly in all directions.
Example: panorama stitching.
14. Q4. Define thresholding in image segmentation.
Thresholding separates objects from background based on intensity values.
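A common way to pick the threshold automatically is Otsu's method, which maximizes the between-class variance of foreground and background. A numpy sketch for 8-bit images:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0      # ignore degenerate splits
    return int(np.argmax(sigma_b))
```

For a bimodal image the returned threshold falls between the two modes, so `img > t` cleanly separates object from background.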
15. Q5. Mention the difference between region growing and region splitting
methods.
Region growing starts from seed pixels and merges similar neighbors into a region;
region splitting recursively divides the image into smaller regions until each is
homogeneous.
16. Q6. Write two differences between K-means and Mean-Shift clustering.
K-means requires specifying k in advance; Mean-Shift does not. K-means is
centroid-based; Mean-Shift is density-based, seeking modes of the feature distribution.
17. Q7. Define erosion and dilation in morphology.
Erosion shrinks foreground objects; dilation expands them.
18. Q8. What are opening and closing operations used for?
Opening (erosion followed by dilation) removes small bright noise; closing (dilation
followed by erosion) fills small holes and gaps.
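All four operations from Q7 and Q8 fit in a short numpy sketch for binary images (a square structuring element and zero padding are simplifying assumptions; real toolkits support arbitrary structuring elements):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(img, k=3):
    """Binary erosion with a k*k square structuring element:
    a pixel survives only if its whole neighborhood is foreground."""
    win = sliding_window_view(np.pad(img, k // 2, constant_values=0), (k, k))
    return win.min(axis=(-2, -1))

def dilate(img, k=3):
    """Binary dilation: a pixel turns on if any neighbor is foreground."""
    win = sliding_window_view(np.pad(img, k // 2, constant_values=0), (k, k))
    return win.max(axis=(-2, -1))

def opening(img, k=3):   # erosion then dilation: removes small bright noise
    return dilate(erode(img, k), k)

def closing(img, k=3):   # dilation then erosion: fills small holes
    return erode(dilate(img, k), k)
```

Opening deletes an isolated speck while restoring a large block to its original size; closing fills a one-pixel hole inside a block.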
19. Q9. Name two statistical texture analysis methods.
Examples: Gray-Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP).
20. Q10. What is the role of Gabor filters in texture analysis?
They capture frequency and orientation information for texture classification.
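A real-valued Gabor kernel is a Gaussian envelope multiplied by an oriented sinusoid; a numpy sketch (the size, sigma, wavelength, and aspect-ratio values below are arbitrary illustrative defaults):

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real Gabor filter: Gaussian envelope times a sinusoid at
    orientation `theta` and wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the texture features used for classification.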
Unit 3
21. Q1. What is epipolar geometry?
It describes the geometric relationship between two camera views of the same scene.
22. Q2. Define disparity mapping and its role.
Disparity mapping measures pixel shifts between stereo images to compute depth.
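For a rectified stereo pair the search is purely horizontal, so disparity can be found by simple block matching, sketched here in numpy on a synthetic pair (the random texture, patch size, and search range are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((20, 40))
true_d = 4
# Simulate a rectified stereo pair: each scene point appears `true_d`
# columns further left in the right view.
right = np.roll(left, -true_d, axis=1)

def block_match(left, right, row, col, half=3, max_d=8):
    """Slide a patch from the left image along the same row of the right
    image and return the disparity minimizing sum of absolute differences."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    sads = []
    for d in range(max_d + 1):
        c = col - d
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        sads.append(np.abs(patch - cand).sum())
    return int(np.argmin(sads))
```

With the camera baseline and focal length known, depth follows from `Z = f * B / disparity`.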
23. Q3. Mention any two depth estimation techniques.
Examples: Stereo triangulation, Structured light projection.
24. Q4. What is meant by Structure from Motion (SfM)?
SfM reconstructs 3D structure from a sequence of 2D images with camera motion.
25. Q5. State the purpose of feature tracking in SfM.
To follow keypoints across frames for 3D reconstruction.
26. Q6. Write one difference between Lucas-Kanade and Horn-Schunck optical flow
methods.
Lucas-Kanade is a local method that assumes constant flow within a small neighborhood;
Horn-Schunck is a global method that imposes a smoothness constraint over the entire
flow field.
27. Q7. What is motion segmentation? Give one application.
Separating moving objects from background. Example: Video surveillance.
28. Q8. Differentiate between intrinsic and extrinsic camera parameters.
Intrinsic parameters describe internal camera properties such as focal length and
principal point; extrinsic parameters define the camera's position and orientation in the
world.
29. Q9. Name any two camera calibration techniques.
Examples: Zhang’s method, Tsai’s calibration method.
30. Q10. What is a 3D point cloud and where is it used?
A 3D point cloud is a collection of spatial points representing surfaces; used in 3D
modeling and robotics.
Unit 4
31. Q1. What is the main advantage of SIFT over traditional methods?
SIFT is scale and rotation invariant.
32. Q2. Differentiate between SIFT and SURF.
SIFT is more accurate but slower; SURF is faster but less robust.
33. Q3. Mention any two commonly used feature matching algorithms.
Examples: Brute-force matching, FLANN (Fast Library for Approximate Nearest
Neighbors).
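Brute-force matching compares every descriptor pair; in practice it is combined with Lowe's ratio test to reject ambiguous matches. A numpy sketch (the descriptor arrays and the 0.75 ratio are illustrative; FLANN replaces the exhaustive search with approximate nearest neighbors):

```python
import numpy as np

def match_ratio(desc1, desc2, ratio=0.75):
    """Brute-force matching with Lowe's ratio test: accept a match only
    if the best neighbor is clearly closer than the second best."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```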
34. Q4. Define template matching in object detection.
Template matching finds areas in an image that match a reference template.
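Normalized cross-correlation is the usual similarity score, since it is insensitive to brightness offsets. A numpy sketch (returning the top-left corner of the best-matching window):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def match_template(img, tmpl):
    """Normalized cross-correlation: score every window of `img` against
    `tmpl` and return the top-left corner of the best match."""
    win = sliding_window_view(img, tmpl.shape)
    w = win - win.mean(axis=(-2, -1), keepdims=True)   # zero-mean windows
    t = tmpl - tmpl.mean()                             # zero-mean template
    num = np.einsum('ijkl,kl->ij', w, t)
    den = np.sqrt((w**2).sum(axis=(-2, -1)) * (t**2).sum())
    ncc = num / np.maximum(den, 1e-12)                 # guard against 0/0
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```

When the template is cut directly from the image, the NCC score is exactly 1 at the true location.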
35. Q5. What is the basic idea behind deformable part models?
They represent an object as parts with spatial constraints, improving detection of variable
shapes.
36. Q6. State one key role of CNNs in vision tasks.
CNNs automatically learn hierarchical image features for recognition tasks.
37. Q7. Differentiate between supervised and unsupervised learning.
Supervised learning trains on labeled data; unsupervised learning finds patterns in
unlabeled data.
38. Q8. Write one application of SVMs in image recognition.
SVMs are used for face recognition and handwriting classification.
39. Q9. What is the role of an autoencoder?
Autoencoders learn compressed representations for dimensionality reduction and
denoising.
40. Q10. Mention one difference between RNNs and GANs.
RNNs handle sequential data; GANs generate realistic synthetic data.
Unit 5
41. Q1. Differentiate between lossy and lossless compression.
Lossy reduces file size by discarding data; lossless preserves all data.
42. Q2. Give one example each of lossy and lossless compression.
Lossy: JPEG; Lossless: PNG.
43. Q3. What is the main principle behind JPEG compression?
It applies the Discrete Cosine Transform (DCT) to 8×8 blocks and quantizes the
coefficients to discard perceptually less important information.
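The DCT-and-quantize core can be sketched with an orthonormal DCT matrix in numpy. This is a simplification: real JPEG uses a full 8×8 quantization table, zig-zag ordering, and entropy coding, while the single step size `q` below is illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on 8x8 JPEG blocks."""
    k, m = np.mgrid[:n, :n]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)   # DC row scaled so that C @ C.T = I
    return C

C = dct_matrix()
rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # level-shifted block

coeffs = C @ block @ C.T           # forward 2-D DCT
q = 16.0                           # single quantization step (real JPEG uses a table)
quantized = np.round(coeffs / q)   # the lossy step: small coefficients collapse to 0
restored = C.T @ (quantized * q) @ C
```

Without quantization the transform is perfectly invertible; the rounding step is what makes JPEG lossy.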
44. Q4. State the purpose of the PNG standard.
PNG provides lossless compression with transparency support.
45. Q5. Define dilation in morphological processing.
Dilation expands object boundaries by adding pixels.
46. Q6. What is the effect of erosion on an image?
Erosion shrinks objects and removes small noise.
47. Q7. Differentiate between opening and closing operations.
Opening (erosion then dilation) removes small bright noise; closing (dilation then
erosion) fills small dark holes.
48. Q8. Mention one application of morphology in shape analysis.
Object counting or skeletonization.
49. Q9. Name two real-world applications of face recognition systems.
Examples: Biometric authentication, security surveillance.
50. Q10. Write one example of medical image analysis.
Detecting tumors in MRI scans.