INDEX

 Exp. No.   Date                                 Title                                 Page No.   Faculty Sign
    1              Perform the image transformation that includes geometric and morphological transformations
    2              Perform image enhancement by applying Contrast Limited Adaptive Histogram Equalization
    3              Perform contour and region based segmentation in images
    4              Perform wavelet transforms on an image using PyWavelets
    5              Perform K-Means clustering for image segmentation using the cv2 library
    6              Perform basic motion detection and tracking using Python and OpenCV
    7              Perform face detection using the OpenCV library
    8              Perform foreground extraction in an image
    9              Perform pedestrian detection using OpenCV and Python
Ex. No: 1          Perform the Image Transformation That Includes Geometric and Morphological Transformations
Date:
Aim:
        To perform the image transformation that includes geometric and
        morphological transformations.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     imagetransformation.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     the image transformation.
     Step 3: To run the program, click Run Module in the Python IDLE window.
     Step 4: After running, enter choice 1 or 2 to perform a geometric or
     morphological transformation.
     Step 5: If you choose 1 for geometric, enter choice 1/2/3 for the particular
     geometric transformation technique (translation / rotation / scaling).
     Repeat Steps 4 and 5 for the morphological operations.
     Step 6: The transformed image is now displayed along with the original image.
Geometric Transformation:
Geometric transformations are used to modify the spatial arrangement of pixels in an
image. These operations are typically defined by transformation matrices which
specify how each pixel's position is modified.
Morphological Transformation:
Morphological transformations are primarily used for preprocessing tasks such as
noise removal, image enhancement, and segmentation.
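Beyond the dilation and erosion used in the program below, opening (erosion followed by dilation) and closing (dilation followed by erosion) are common compound morphological operations for noise removal. The following is only a minimal sketch using cv2.morphologyEx; the file name noise.jpg is a placeholder and is not part of this experiment.

    import cv2
    import numpy as np

    # Read any grayscale image; 'noise.jpg' is a placeholder file name
    img = cv2.imread('noise.jpg', cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((5, 5), np.uint8)

    # Opening removes small bright specks; closing fills small dark holes
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

    cv2.imshow("Opening", opened)
    cv2.imshow("Closing", closed)
    cv2.waitKey(0)
    cv2.destroyAllWindows()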
Program Code:
     import cv2
     import numpy as np
    def geometric_operations(image):
      print("Geometric transformations menu:")
      print("1. Translate")
      print("2. Rotate")
      print("3. Scale")
      choice = int(input("Enter your choice (1/2/3): "))
      if choice == 1:
         # Translation
         rows, cols = image.shape[:2]
         M = np.float32([[1, 0, 50], [0, 1, 25]]) # Translate by (50, 25)
         translated_img = cv2.warpAffine(image, M, (cols, rows))
         return translated_img
      elif choice == 2:
         # Rotation
         rows, cols = image.shape[:2]
         M = cv2.getRotationMatrix2D((cols/2, rows/2), 45, 1) # Rotate by 45 degrees
         rotated_img = cv2.warpAffine(image, M, (cols, rows))
         return rotated_img
      elif choice == 3:
         # Scaling
         scaled_img = cv2.resize(image, None, fx=0.5, fy=0.5) # Scale to half size
         return scaled_img
      else:
         print("Invalid choice")
         return image
    def morphological_operations(image):
      print("Morphological operations menu:")
      print("1. Dilation")
      print("2. Erosion")
      choice = int(input("Enter your choice (1/2): "))
      kernel = np.ones((5,5), np.uint8)
      if choice == 1:
         # Dilation
         dilated_img = cv2.dilate(image, kernel, iterations=1)
         return dilated_img
      elif choice == 2:
         # Erosion
         eroded_img = cv2.erode(image, kernel, iterations=1)
         return eroded_img
       else:
          print("Invalid choice")
          return image
def main():
  image_path = 'cat.jpg' #'path/to/your/image.jpg'
  image = cv2.imread(image_path)
  if image is None:
     print("Error: Could not read the image.")
     return
  print("Select operation type:")
  print("1. Geometric transformation")
  print("2. Morphological operation")
  operation_type = int(input("Enter your choice (1/2): "))
  if operation_type == 1:
     transformed_image = geometric_operations(image)
  elif operation_type == 2:
     transformed_image = morphological_operations(image)
   else:
      print("Invalid choice")
      return
  # Display original and transformed images
  cv2.imshow("Original Image", image)
  cv2.imshow("Transformed Image", transformed_image)
  cv2.waitKey(0)
  cv2.destroyAllWindows()
if __name__ == "__main__":
   main()
 Output:
TRANSLATE OUTPUT:
ROTATE OUTPUT:
SCALE OUTPUT:
MORPHOLOGICAL (DILATION) OUTPUT:
 EROSION OUTPUT:
Result:
          Thus the above program performed the image transformation that includes
          geometric and morphological transformations. Hence the output is verified.
Ex. No: 2          Perform the Image Enhancement by Applying Contrast Limited Adaptive Histogram Equalization
Date:
Aim:
      To perform image enhancement by applying Contrast Limited Adaptive
Histogram Equalization (CLAHE).
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     imagecontrast.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     the image enhancement.
     Step 3: To run the program, click Run Module in the Python IDLE window.
     Step 4: The contrast-enhanced image is now displayed along with the original
     image.
  CLAHE: CLAHE is a variant of Adaptive histogram equalization (AHE)
which takes care of over-amplification of the contrast. This algorithm can be
applied to improve the contrast of images. We can also apply CLAHE to color
images, where usually it is applied on the luminance channel and the results after
equalizing only the luminance channel of an HSV image are much better than
equalizing all the channels of the BGR image.
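As a rough illustration of applying CLAHE to a color image as described above, the sketch below equalizes only the V (brightness) channel of the HSV representation and merges the channels back; it reuses the file name ima1.jpg from the program below and is only an assumed variant, not part of the prescribed experiment.

    import cv2

    image = cv2.imread("ima1.jpg")
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Equalize only the brightness channel, then merge back to BGR
    clahe = cv2.createCLAHE(clipLimit=5)
    v_eq = clahe.apply(v)
    result = cv2.cvtColor(cv2.merge((h, s, v_eq)), cv2.COLOR_HSV2BGR)

    cv2.imshow("CLAHE on V channel", result)
    cv2.waitKey(0)
    cv2.destroyAllWindows()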
Program Code:
  import cv2
  import numpy as np
  # Reading the image from the present directory
  image = cv2.imread("ima1.jpg")
  # Resizing the image for compatibility
  image = cv2.resize(image, (500, 600))
  # The initial processing of the image
  # image = cv2.medianBlur(image, 3)
  image_bw = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
  # The declaration of CLAHE
  # clipLimit -> Threshold for contrast limiting
  clahe = cv2.createCLAHE(clipLimit=5)
   final_img = cv2.add(clahe.apply(image_bw), 30) # cv2.add saturates at 255 instead of wrapping around
  # Ordinary thresholding the same image
  _, ordinary_img = cv2.threshold(image_bw, 155, 255, cv2.THRESH_BINARY)
   # Showing the images
   cv2.imshow("original image", image)
   cv2.imshow("ordinary threshold", ordinary_img)
   cv2.imshow("CLAHE image", final_img)
   cv2.waitKey(0)
   cv2.destroyAllWindows()
Output:
Result:
       Thus the above program performed image enhancement by applying Contrast
Limited Adaptive Histogram Equalization. Hence the output is verified.
Ex. No: 3               Perform the Contours and Region based segmentation in
                                                 images
Date:
Aim:
       To perform contour and region based segmentation in images.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     contours.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     the segmentation.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install matplotlib scikit-image opencv-python).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: The contour and region segmented images are now displayed along with
     the original image.
CONTOURS BASED SEGMENTATION:
 Contours in OpenCV refer to the boundaries of an object or shape in an image.
 They are represented as a list of points that define the shape's perimeter. OpenCV
 provides several functions for finding and manipulating contours, including
 cv2.findContours() and cv2.drawContours().
  OpenCV contours can be used for various image processing tasks, such as object
  detection, shape analysis, and boundary extraction. They are often used in
  conjunction with other OpenCV functions, such as edge detection and
  thresholding, to perform more advanced image processing operations.
REGION BASED SEGMENTATION:
 Label the region we are sure of being the foreground or object with one color
 (or intensity), label the region we are sure of being the background or non-object
 with another color, and finally label the region we are not sure about with 0.
 That is our marker image. Then the watershed algorithm is applied, as sketched below.
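A minimal sketch of the marker-based watershed described above is given here; the file name coins.jpg and the 0.7 distance-transform threshold are assumptions for illustration only, and the experiment's own program follows after it.

    import cv2
    import numpy as np

    img = cv2.imread('coins.jpg')            # placeholder image with touching objects
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Sure background by dilation, sure foreground by distance transform
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(thresh, kernel, iterations=3)
    dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)
    unknown = cv2.subtract(sure_bg, sure_fg)

    # Markers: sure regions get positive labels, the unknown region gets 0
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0

    markers = cv2.watershed(img, markers)
    img[markers == -1] = [0, 0, 255]          # watershed boundaries drawn in red
    cv2.imshow('Watershed', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()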
Program Code:
     import cv2
     import numpy as np
     from skimage import io, color, measure
     import matplotlib.pyplot as plt
     # Read image using OpenCV
     image = cv2.imread('land.jpg')
     original_image = image.copy() # Make a copy for displaying contours later
     # Convert image to grayscale for contour detection
     gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
     # Apply GaussianBlur to reduce noise
     blurred = cv2.GaussianBlur(gray, (5, 5), 0)
     # Use Canny edge detection
     edges = cv2.Canny(blurred, 50, 150)
     # Find contours in the edged image
     contours, _ = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL,
     cv2.CHAIN_APPROX_SIMPLE)
      # Create a binary image for region-based segmentation
      # (convert BGR to RGB first so skimage's rgb2gray weights the channels correctly)
      gray_image = color.rgb2gray(cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB))
      thresh = 0.7
      binary = gray_image > thresh
     # Label regions
     label_image = measure.label(binary)
     # Regionprops to extract properties of labeled regions
     regions = measure.regionprops(label_image)
     # Draw contours on the original image
     cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
      # Display the image with contours using OpenCV
      cv2.imshow('Contours', image)
      cv2.waitKey(1) # let the OpenCV window render before matplotlib blocks
     # Display the segmented regions using matplotlib
      fig, ax = plt.subplots()
      # Convert BGR to RGB so matplotlib displays the colors correctly
      ax.imshow(cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB))
     for region in regions:
        # Draw rectangle around segmented regions
        minr, minc, maxr, maxc = region.bbox
        rect = plt.Rectangle((minc, minr), maxc - minc, maxr - minr,
                      fill=False, edgecolor='red', linewidth=2)
        ax.add_patch(rect)
      plt.title('Segmented Regions')
      plt.show()
      # Keep the OpenCV window open until a key is pressed
      cv2.waitKey(0)
      cv2.destroyAllWindows()
Output:
Result:
      Thus the above program performed contour and region based segmentation in
      images. Hence the output is verified.
Ex. No: 4
                    Perform the Wavelet Transforms on image using PyWavelets
Date:
Aim:
       To perform wavelet transforms on an image using PyWavelets.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     Wavelet.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     the wavelet transform.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install matplotlib opencv-python PyWavelets numpy).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: The transformed image is now displayed along with the original image.
PYWAVELET:
    Wavelet Transform provides a multi-resolution analysis of an image. It
    decomposes the image into approximation and detail coefficients, allowing for
    efficient compression.
    PyWavelets is open source wavelet transform software for Python. It combines a
    simple high level interface with low level C and Cython performance.
    Using pywavelets for wavelet transform allows you to decompose and analyze
    images in terms of various frequency components. This can be useful for tasks
    such as image compression, denoising, and feature extraction. By visualizing and
    manipulating the wavelet coefficients, you gain insights into the structure and
    content of the image at different scales.
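As a rough illustration of the denoising use mentioned above, the sketch below performs a two-level decomposition with pywt.wavedec2, soft-thresholds the detail coefficients, and reconstructs the image with pywt.waverec2. The threshold value 0.05 and the decomposition level are assumptions to be tuned per image; land.jpg reuses the file name from the program below.

    import cv2
    import numpy as np
    import pywt

    image = cv2.imread('land.jpg', cv2.IMREAD_GRAYSCALE)
    image = np.float32(image) / 255

    # Two-level decomposition: [approximation, (LH2, HL2, HH2), (LH1, HL1, HH1)]
    coeffs = pywt.wavedec2(image, 'bior1.3', level=2)

    # Shrink every detail sub-band towards zero (soft thresholding)
    threshold = 0.05
    new_coeffs = [coeffs[0]]
    for detail_level in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, threshold, mode='soft')
                                for d in detail_level))

    denoised = pywt.waverec2(new_coeffs, 'bior1.3')
    cv2.imshow('Denoised', np.uint8(np.clip(denoised, 0, 1) * 255))
    cv2.waitKey(0)
    cv2.destroyAllWindows()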
Program Code:
   import cv2
   import numpy as np
   import pywt
   import matplotlib.pyplot as plt
   image = cv2.imread('land.jpg')
   image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
   # Convert to float for more resolution for use with pywt
   image = np.float32(image)
   image /= 255
   # Wavelet transform of image, and plot approximation and details
   titles = ['Approximation', ' Horizontal detail',
           'Vertical detail', 'Diagonal detail']
   coeffs2 = pywt.dwt2(image, 'bior1.3')
   LL, (LH, HL, HH) = coeffs2
   fig = plt.figure(figsize=(12, 3))
   for i, a in enumerate([LL, LH, HL, HH]):
       ax = fig.add_subplot(1, 4, i + 1)
       ax.imshow(a, interpolation="nearest", cmap=plt.cm.gray)
       ax.set_title(titles[i], fontsize=10)
       ax.set_xticks([])
       ax.set_yticks([])
   fig.tight_layout()
   plt.show()
    # Convert back to uint8 OpenCV format
    image *= 255
    image = np.uint8(image)
    cv2.imshow('image', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
Output:
Result:
     Thus the above program performed wavelet transforms on an image using
     PyWavelets. Hence the output is verified.
Ex. No: 5           Perform the K-Means clustering for Image segmentation using
                                           CV2 library.
Date:
Aim:
       To perform K-Means clustering for image segmentation using the cv2
       library.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     kmeans.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     the segmentation.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install matplotlib opencv-python numpy).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: The segmented image is now displayed along with the original image.
K-Means Clustering for Image Segmentation:
  Image Segmentation: In computer vision, image segmentation is the process of
  partitioning an image into multiple segments. The goal of segmenting an image is
  to change the representation of an image into something that is more meaningful
  and easier to analyze. It is usually used for locating objects and creating
  boundaries.
  K Means is a clustering algorithm. Clustering algorithms are unsupervised
  algorithms which means that there is no labelled data available. It is used to
  identify different classes or clusters in the given data based on how similar the
  data is. Data points in the same group are more similar to other data points in that
  same group than those in other groups.
  K-means clustering is one of the most commonly used clustering algorithms.
  Here, k represents the number of clusters.
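Before clustering pixel colors, it may help to see the cv2.kmeans call on a small set of random 2-D points; the sketch below is only an illustration of the function's inputs and outputs and is not part of the experiment itself.

    import cv2
    import numpy as np

    # Fifty random 2-D points, converted to float32 as cv2.kmeans requires
    points = np.float32(np.random.randint(0, 100, size=(50, 2)))

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    compactness, labels, centers = cv2.kmeans(points, 3, None, criteria,
                                              10, cv2.KMEANS_RANDOM_CENTERS)

    print("Cluster centers:\n", centers)
    print("First ten labels:", labels[:10].ravel())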
Program:
 import cv2
 import numpy as np
 import matplotlib.pyplot as plt
 # Load the image
 image = cv2.imread('land.jpg')
 # Convert BGR to RGB for displaying with matplotlib
 image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
 # Reshape the image to a 2D array of pixels
 pixels = image_rgb.reshape(-1, 3)
 pixels = np.float32(pixels) # Convert to float32 for k-means
 # Define the criteria for the k-means algorithm
 criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
 k = 4 # Number of clusters
 # Apply k-means clustering
 _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
 # Convert the centers to uint8
 centers = np.uint8(centers)
 # Map the labels to center colors
 segmented_image = centers[labels.flatten()]
 segmented_image = segmented_image.reshape(image_rgb.shape)
 # Display the results
 plt.figure(figsize=(12, 6))
 # Original image
 plt.subplot(1, 2, 1)
 plt.title('Original Image')
 plt.imshow(image_rgb)
 plt.axis('off')
 # Segmented image
 plt.subplot(1, 2, 2)
 plt.title('Segmented Image')
 plt.imshow(segmented_image)
 plt.axis('off')
 plt.show()
Output:
Output for K = 4:
Output for K = 6:
 Result:
    Thus the above program performed K-Means clustering for image segmentation
    using the cv2 library. Hence the output is verified.
Ex. No: 6          Perform basic motion detection and tracking using Python and OpenCV
Date:
Aim:
       To perform basic motion detection and tracking using Python and OpenCV.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     motion.py, save it in the created folder, and type the code.
     Step 2: Add or save a video file (e.g., tracking.mp4) on the device or in the
     created folder to perform motion detection.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install opencv-python).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: The motion-detected frames and the foreground mask are now displayed
     successfully.
Motion and Tracking using Python:
    1. Motion Detection
    Motion detection is performed using background subtraction, which helps to
    distinguish moving objects from the static background.
    2. Tracking Moving Objects
    We can use contour detection to track moving objects based on the detected
    motion areas.
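A simpler alternative to the MOG2 background subtractor used in the program below is frame differencing, where consecutive frames are subtracted and thresholded; the sketch below is only an assumed variant (the threshold 25 is illustrative) and reuses the tracking.mp4 file name from the program.

    import cv2

    cap = cv2.VideoCapture('tracking.mp4')
    ret, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(prev_gray, gray)               # pixel-wise difference
        _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        cv2.imshow('Motion mask', motion_mask)
        prev_gray = gray
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()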
Program:
   import cv2
   # Path to your video file
   video_path = 'tracking.mp4'
   # Initialize the video capture object with the video file
   cap = cv2.VideoCapture(video_path)
   # Check if the video file was opened successfully
   if not cap.isOpened():
      print("Error: Could not open video file.")
      exit()
   # Create a background subtractor object
   fgbg = cv2.createBackgroundSubtractorMOG2()
   # Define the desired window size (width, height)
   window_size = (800, 600)
   while True:
     # Read a frame from the video capture object
     ret, frame = cap.read()
     if not ret:
        break
     # Apply the background subtractor to the frame
     fgmask = fgbg.apply(frame)
     # Find contours in the mask
     contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL,
   cv2.CHAIN_APPROX_SIMPLE)
     # Draw contours on the original frame
     for contour in contours:
        if cv2.contourArea(contour) > 500: # Filter out small contours
           x, y, w, h = cv2.boundingRect(contour)
           cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
     # Resize the images to the desired window size
     resized_frame = cv2.resize(frame, window_size)
     resized_fgmask = cv2.resize(fgmask, window_size)
     # Display the resized frame and the foreground mask
     cv2.imshow('Frame', resized_frame)
     cv2.imshow('FG Mask', resized_fgmask)
     # Exit the loop if the 'q' key is pressed
     if cv2.waitKey(30) & 0xFF == ord('q'):
        break
   # Release the video capture object and close the windows
   cap.release()
   cv2.destroyAllWindows()
Output:
 Output frame for motion detection:
 Output frame for the foreground mask from the background subtractor:
Result:
      Thus the above program performed basic motion detection and tracking using
      Python and OpenCV. Hence the output is verified.
Ex. No: 7
                   Perform Face detection using OpenCV library
Date:
Aim:
       To perform face detection using the OpenCV library.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     facedetect.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     face detection.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install matplotlib opencv-python).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: Faces in the provided input image are now detected successfully.
Face Detection using OpenCV in Python
    Face detection involves identifying a person’s face in an image or video. This is
done by analyzing the visual input to determine whether a person’s facial features are
present.
Intro to Haar Cascade Classifiers
    This method was first introduced in the paper Rapid Object Detection Using a
Boosted Cascade of Simple Features, written by Paul Viola and Michael Jones. The
idea behind this technique involves using a cascade of classifiers to detect different
features in an image. These classifiers are then combined into one strong classifier that
can accurately distinguish between samples that contain a human face from those that
don’t.
Program Code:
         import cv2
         import matplotlib.pyplot as plt
         # Set the path to your image
         imagePath = 'pic.jpg'
         # Attempt to load the image
         img = cv2.imread(imagePath)
         # Check if the image was loaded successfully
         if img is None:
             raise FileNotFoundError(f"Image not found at {imagePath}")
         # Print the shape of the loaded image
         print("Image shape: ",img.shape)
         # Convert the image to grayscale
         gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
         print("Grayscale image shape: ",gray_image.shape)
         # Load the face classifier
         face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades +
         "haarcascade_frontalface_default.xml")
         # Detect faces in the grayscale image
         faces = face_classifier.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5,
         minSize=(40, 40))
         # Draw rectangles around detected faces
         for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 4)
         # Convert image from BGR to RGB for display
         img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
         # Display the image with Matplotlib
         plt.figure(figsize=(20,10))
         plt.imshow(img_rgb)
         plt.axis('off')
         plt.show()
 Output:
Result:
    Thus the above program performed face detection using the OpenCV library
    in Python. Hence the output is verified.
Ex. No: 8                     Perform Foreground Extraction in an image
Date:
Aim:
       To perform foreground extraction in an image using OpenCV in Python.
Procedure:
     Step 1: Create a new folder, open Python IDLE, create a file named
     foreground.py, save it in the created folder, and type the code.
     Step 2: Add or save an image on the device or in the created folder to perform
     foreground extraction.
     Step 3: Install the required Python libraries using the command prompt
     (e.g., pip install numpy opencv-python).
     Step 4: To run the program, click Run Module in the Python IDLE window.
     Step 5: Foreground extraction is now performed on the provided input image
     successfully.
Foreground Extraction in an image:
       Foreground extract is a part of image segmentation, where the goal is to
precisely delineate and separate the main objects or subjects (foreground) from the
rest of the image (background).
       Image segmentation techniques, including semantic segmentation or instance
segmentation, contribute to achieving accurate and detailed delineation of the
foreground within an image.
GrabCut Algorithm for Image Segmentation
       GrabCut is an interactive image segmentation algorithm that was introduced
by Carsten Rother, Vladimir Kolmogorov, and Andrew Blake in 2004. It is a
graph-cut-based algorithm designed to segment an image into foreground and
background regions, making it particularly useful for applications like image
editing and object recognition.
       The algorithm requires user interaction to initialize the segmentation
process. Typically, a rectangle is drawn around the object of interest in the image.
The algorithm then iteratively refines this initial segmentation based on color and
texture information within and outside the rectangle.
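When the rectangle-based result needs correction, GrabCut can be re-run with GC_INIT_WITH_MASK after marking pixels as sure foreground or background. The sketch below is only an illustration of that refinement step; the marked coordinate ranges are placeholders, and picture.jpg and the rectangle follow the program below.

    import cv2
    import numpy as np

    img = cv2.imread('picture.jpg')
    mask = np.zeros(img.shape[:2], np.uint8)
    bgdModel = np.zeros((1, 65), np.float64)
    fgdModel = np.zeros((1, 65), np.float64)

    # First pass: rectangle initialisation (same rectangle as the program below)
    rect = (150, 50, 500, 470)
    cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)

    # Manual corrections with placeholder coordinates, then refine with the mask
    mask[100:150, 200:300] = cv2.GC_FGD     # mark a patch as sure foreground
    mask[0:30, :] = cv2.GC_BGD              # mark the top strip as sure background
    cv2.grabCut(img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)

    mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
    cv2.imshow('Refined foreground', img * mask2[:, :, np.newaxis])
    cv2.waitKey(0)
    cv2.destroyAllWindows()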
Program:
    # import required libraries
    import numpy as np
    import cv2
    # from matplotlib import pyplot as plt
    # read input image
    img = cv2.imread('picture.jpg')
    # define mask
    mask = np.zeros(img.shape[:2],np.uint8)
    bgdModel = np.zeros((1,65),np.float64)
    fgdModel = np.zeros((1,65),np.float64)
    # define rectangle
    rect = (150,50,500,470)
    # apply grabCut method to extract the foreground
    cv2.grabCut(img,mask,rect,bgdModel,fgdModel,20,cv2.GC_INIT_WITH_RECT)
    mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
    foreground_img = img*mask2[:,:,np.newaxis]
    # display the extracted foreground image
    # plt.imshow(img),plt.colorbar(),plt.show()
     cv2.imshow('Original Image', img)
     cv2.imshow('Foreground Image', foreground_img)
     cv2.waitKey(0)
     cv2.destroyAllWindows()
Output:
Result:
   Thus the above program performed foreground extraction in an image using
   OpenCV in Python. Hence the output is verified.
Ex. No: 9                Perform Pedestrian Detection using OpenCV and Python
Date:
Aim:
       To perform pedestrian detection using OpenCV and Python.
Procedure:
        Step 1: Create a new folder, open Python IDLE, create a file named
        pedestrian.py, save it in the created folder, and type the code.
        Step 2: Add or save an image on the device or in the created folder to
        perform pedestrian detection.
        Step 3: Install the required Python libraries using the command prompt
        (e.g., pip install opencv-python imutils).
        Step 4: To run the program, click Run Module in the Python IDLE window.
        Step 5: Pedestrian detection is now performed on the provided input image
        successfully.
Pedestrian Detection using OpenCV:
      Pedestrian detection is a very important area of research because it can
enhance the functionality of a pedestrian protection system in Self Driving Cars. We
can extract features like head, two arms, two legs, etc, from an image of a human
body and pass them to train a machine learning model. After training, the model can
be used to detect and track humans in images and video streams. However, OpenCV
has a built-in method to detect pedestrians. It has a pre-trained HOG(Histogram of
Oriented Gradients) + Linear SVM model to detect pedestrians in images and video
streams.
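The HOG detector often returns several overlapping boxes per person; a common follow-up is non-maximum suppression using imutils.object_detection.non_max_suppression, sketched below with the same ped1.jpg image used in the program (the 0.65 overlap threshold is an assumption).

    import cv2
    import numpy as np
    import imutils
    from imutils.object_detection import non_max_suppression

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    image = cv2.imread('ped1.jpg')
    image = imutils.resize(image, width=min(400, image.shape[1]))
    (rects, _) = hog.detectMultiScale(image, winStride=(4, 4),
                                      padding=(8, 8), scale=1.05)

    # Convert (x, y, w, h) boxes to corner form and suppress heavy overlaps
    boxes = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    picked = non_max_suppression(boxes, probs=None, overlapThresh=0.65)

    for (x1, y1, x2, y2) in picked:
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow('After NMS', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()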
Program Code:
     import cv2
     import imutils
     # Initializing the HOG person
     # detector
     hog = cv2.HOGDescriptor()
     hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
     # Reading the Image
     image = cv2.imread('ped1.jpg')
     # Resizing the Image
     image = imutils.resize(image,width=min(400, image.shape[1]))
      # Detecting all the regions in the
      # image that have pedestrians inside them
     (regions, _) = hog.detectMultiScale(image,winStride=(4, 4),padding=(4, 4),scale=1.05)
     # Drawing the regions in the Image
     for (x, y, w, h) in regions:
         cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
     # Showing the output Image
     cv2.imshow("Image", image)
     cv2.waitKey(0)
     cv2.destroyAllWindows()
Output:
Result:
      Thus the above program performed pedestrian detection using OpenCV
      and Python. Hence the output is verified.