
NOIDA INSTITUTE OF ENGINEERING AND TECHNOLOGY

GREATER NOIDA
(NAAC ACCREDITED)
Approved by AICTE and Affiliated to Dr. A.P.J. Abdul
Kalam Technical University Uttar Pradesh, Lucknow

LABORATORY MANUAL

COURSE: B.TECH. SEMESTER: V

Department of Electronics and Communication Engineering


(NBA ACCREDITED)
Vision & Mission of the Institute

Vision:
To be an institute of academic excellence in the field of education, with the future plan of becoming a
deemed university, thereby earning a name and winning the faith of society.

Mission:
To impart to its students a high-quality education, develop their skills, broaden their mental horizon
and nurture them into competent and talented professionals to meet the challenges of the new
millennium.

Vision & Mission of the Department

Vision:
To be a renowned Center of Excellence in Electronics and Communication Engineering,
developing globally competent ethical resources to serve society.

Mission:

M1: To impart a robust teaching and learning process thriving on qualified, trained resources
and state-of-the-art infrastructure.
M2: To promote innovation and research culture by providing students with hands-on
experience for solving real-time problems and developing sustainable products and
solutions.

M3: To imbibe ethical values, entrepreneurial zeal, and lifelong learning ability to develop
future-ready professionals.
Program Educational Objectives (PEOs)

PEO 1: To excel as professionals in the workplace through continuous learning and to perform
ethically with an entrepreneurial mindset.

PEO 2: To demonstrate a high degree of analytical and design ability.

PEO 3: To serve society effectively by solving real-time problems and through mentoring.

Program Specific Outcomes (PSOs)

PSO 1: To design and implement products using cutting-edge software and hardware tools in
Electronics and Communication Engineering to satisfy industrial needs.

PSO 2: To analyse and develop solutions for real-time problems and to apply the knowledge to
innovative ideas and solutions in Telecommunications, Wireless Networking, Embedded Systems, and
VLSI.

PSO 3: To demonstrate technical skills, professional competence, and an entrepreneurial attitude to
become competent professionals for society.
Program Outcomes (POs)

PO1 Engineering Knowledge: Apply the knowledge of mathematics, science, engineering


fundamentals and an engineering specialization to the solution of complex engineering
problems.

PO2 Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.

PO3 Design / Development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.

PO4 Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.

PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.

PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant
to the professional engineering practice.

PO7 Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.

PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9 Individual and team work: Function effectively as an individual and as a member or leader
in diverse teams, and in multidisciplinary settings.

PO10 Communication: Communicate effectively on complex engineering activities with the


engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give
and receive clear instructions.

PO11 Project management and finance: Demonstrate knowledge and understanding of the
engineering management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

PO12 Life-long learning: Recognize the need for and have the preparation and ability to engage in
independent and lifelong learning in the broadest context of technological change.
Course Objective and Course Outcomes (CO’s)

Course Objective:
The student will learn about:
1. Basic skills for image sharpening and image enhancement.
2. Basic concept of image restoration and compression techniques.
3. Basic concept of image segmentation for image analysis.
4. Analyze the spatial/texture features of an image.
5. The use of various enhancement and segmentation techniques for developing computer
vision applications.
Course Outcomes:

After successful completion of this course, students will be able to:

AEC-0513P.1: Implement image sharpening and image enhancement algorithms.


AEC-0513P.2: Analyze the power of various image restoration and compression techniques.
AEC-0513P.3: Learn basic skills for image segmentation and image analysis.
AEC-0513P.4: Analyze the spatial/texture features of an image.
AEC-0513P.5: Implement and evaluate different enhancement and segmentation techniques
for developing computer vision applications.
Mapping of COs and POs
Enter correlation levels 1, 2 or 3 as defined below:
1: Slight (Low) 2: Moderate (Medium) 3: Substantial (High)
If there is no correlation, put “-”

COs            PO1   PO2   PO3   PO4   PO5   PO6   PO7   PO8   PO9   PO10   PO11   PO12
AEC-0513P.1     3     3     3     3     3     3     1     3     3     3      2      3
AEC-0513P.2     3     3     3     3     3     3     1     3     3     3      0      3
AEC-0513P.3     3     3     3     3     3     3     2     3     3     3      1      3
AEC-0513P.4     3     2     3     3     3     3     3     3     3     3      1      3
AEC-0513P.5     3    1.5    3     3     3     3     3     3     3     3      1      3
Average       3.00  2.50  3.00  3.00  3.00  3.00  2.00  3.00  3.00  3.00   1.00   3.00

Mapping of COs and PSOs

CO             PSO 1   PSO 2   PSO 3
AEC-0513P.1      3       3       3
AEC-0513P.2      3       3       3
AEC-0513P.3      3       3       3
AEC-0513P.4      3       3       3
AEC-0513P.5      3       3       3
Average          3       3       3
General Safety Guidelines

Following safety guidelines must be followed while performing lab experiments:

1. Students must enter the lab strictly as per the allocated time slots, or after seeking prior
permission from the lab faculty or instructor.
2. Students are expected to conduct themselves in a responsible manner while working in the
laboratory.
3. They should keep their bags on the shelf provided outside the lab and carry only essential
items such as the lab record, manual, pen/pencil, notebook and calculator inside the lab.
4. Students are not allowed to carry food items (not even chewing gum), beverages and water
bottles while working in the laboratory.
5. They are expected to observe good housekeeping practices: handle equipment, stools and
components carefully and return them to their proper place after finishing the work, so that
the lab stays clean and tidy.
6. While working in the lab
− Avoid stretching electrical cables and connectors while using the equipment.
− Rig the circuit and get it verified from the lab instructor before connecting it to power
source.
− Pay proper attention towards earthing of electrical equipment. Ensure proper
ventilation in the lab while working.
− Ensure the use of wire clippers, insulating tape and plug-pins to prevent any electrical
shock hazards.
− In case of a short circuit, a burning smell or any smoke, switch off the power supply
and immediately report to the faculty/lab instructor available in the lab.
7. In case of any minor injury please contact the lab instructor or lab faculty. The first aid Box
is available in the department.
8. In case of any fire emergency, contact the faculty or lab instructor. For your information, the
fire safety equipment is available on each floor near notice board.
Instructions to Students for Writing the Record

In the record, the index page should be filled properly by writing the corresponding experiment
number, experiment name, date on which it was done and the page number.

On the right side page of the record following has to be written:


1. Title: The title of the experiment should be written on the page in capital letters.
2. In the left top margin, experiment number and date should be written.
3. Aim: The purpose of the experiment should be written clearly.
4. Apparatus/Tools/Equipment/Components used: A list of the Apparatus/Tools/Equipment/
Components used for doing the experiment should be entered.
5. Theory: Simple working of the circuit/experimental set up/algorithm should be written.
6. Procedure: Steps for doing the experiment and recording the readings should be briefly
described (flow chart/ Circuit Diagrams / programs in the case of computer/processor related
experiments)
7. Results: The results of the experiment must be summarized in writing and should fulfil
the aim.

On the Left side page of the record following has to be recorded:


1. Circuit/Program: Neatly drawn circuit diagrams for the experimental set up.
2. Design: The design of the circuit components for the experimental set-up and the selection of
components should be clearly shown, if necessary.
3. Observations:
− Data should be clearly recorded using Tabular Columns.
− Unit of the observed data should be clearly mentioned
− Relevant calculations should be shown. If repetitive calculations are needed, only show a
sample calculation and summarize the others in a table.
Evaluation Scheme

B.Tech. Electronics & Communication Engineering


YEAR 3rd/ SEMESTER V
AEC-0513P IMAGE PROCESSING AND PATTERN RECOGNITION LAB

List of Experiments

1. Write a program using MATLAB/Python to display grey scale/colour images.


2. Write a program using MATLAB/Python to extract different attributes (i.e., Geometrical and
texture) of an Image.
3. Write a program using MATLAB/Python for Image Negation.
4. Write a program using MATLAB/Python for Power Law Transformation.
5. Write a program using MATLAB/Python for Histogram Mapping and Equalization.
6. Write a program using MATLAB/Python for Image Smoothening and Sharpening.
7. Write a program using MATLAB/Python for Edge Detection using Sobel, Prewitt and
Roberts Operators.
8. Write a program using MATLAB/Python for Morphological Operations on Binary Images.
9. Write a program using MATLAB/Python for Pseudo Coloring.
10. Write a program using MATLAB/Python for the segmentation using watershed transform.
11. Write a program to eliminate the high frequency components of an image. Write a MATLAB
program for DCT based image compression.
12. Write a program using MATLAB/Python to extract the image features for image
segmentation using DWT Computation.
Index

S. No. | Name of Experiment | Type of Experiment | Category | CO Mapping | PO Mapping | PSO Mapping
1 | Write a program using MATLAB/Python to display grey scale/colour images. | Software | Core | CO1 | PO5, PO12 | PSO2
2 | Write a program using MATLAB/Python to extract different attributes (i.e., Geometrical and texture) of an Image. | Software | Core | CO1 | PO1, PO5, PO12 | PSO1, PSO2
3 | Write a program using MATLAB/Python for Image Negation. | Software | Core | CO3 | PO1, PO3, PO5, PO12 | PSO1, PSO2
4 | Write a program using MATLAB/Python for Power Law Transformation. | Software | Core | CO2 | PO1, PO3, PO5, PO12 | PSO1, PSO2
5 | Write a program using MATLAB/Python for Histogram Mapping and Equalization. | Software | Core | CO2 | PO1, PO3, PO5, PO12 | PSO1, PSO2
6 | Write a program using MATLAB/Python for Image Smoothening and Sharpening. | Software | Core | CO2 | PO1, PO3, PO5, PO12 | PSO1, PSO2
7 | Write a program using MATLAB/Python for Edge Detection using Sobel, Prewitt and Roberts Operators. | Software | Core | CO2 | PO1, PO3, PO5, PO12 | PSO1, PSO2
8 | Write a program using MATLAB/Python for Morphological Operations on Binary Images. | Software | Core | CO2 | PO1, PO3, PO5, PO12 | PSO1, PSO2
9 | Write a program using MATLAB/Python for Pseudo Coloring. | Software | Core | CO5 | PO1, PO3, PO5, PO12 | PSO1, PSO2
10 | Write a program using MATLAB/Python for the segmentation using watershed transform. | Software | Core | CO5 | PO1, PO3, PO5, PO12 | PSO1, PSO2
11 | Write a program to eliminate the high frequency components of an image. Write a MATLAB program for DCT based image compression. | Software | Core | CO5 | PO1, PO3, PO5, PO12 | PSO1, PSO2
12 | Write a program using MATLAB/Python to extract the image features for image segmentation using DWT Computation. | Software | Core | CO4 | PO1, PO3, PO5, PO12 | PSO1, PSO2
EXPERIMENT NO. 1

OBJECTIVE: Write a program using MATLAB/Python to display grey scale/colour images.

TOOL REQUIRED: MATLAB/PYTHON

THEORY:
1. Reading Images: Images are read into the MATLAB environment using the imread() function,
which takes the filename (with applicable extension) as its argument.
I=imread('nature.jpg');
This will read JPEG image ‘nature’ into the image array.
Note: The semicolon (;) at the end of command line is used to suppress the output in MATLAB. If
‘;’ is not used at the end, it will show the output of the specified operation.
2. Displaying Images: imshow() function is used to display images in MATLAB. The basic
syntax of imshow() is imshow(f, G);
Here f is the image matrix and G is the number of intensity levels used to display the image. The second
argument in the above syntax is optional. If G is omitted, its value defaults to 256 levels.
When we use the syntax imshow(f, [Low, High]);
it displays all values less than or equal to 'Low' as black and all values greater than or equal to
'High' as white. The values between 'Low' and 'High' are displayed as intermediate intensity
values using the default number of levels.
Examples:
Showing Grayscale Images
>> imshow(f);
This will display the grayscale image f.
Also, we can write
>> imshow(f, [90, 180]);
It will display all values less than or equal to 90 as black and all values greater than or equal to 180
as white. The values between 90 and 180 are displayed as the intermediate intensity value using the
default number of levels.
Showing Binary images
>> imshow(BW);
It displays the binary image BW. It displays pixels with the value 0 (zero) as black and pixels with
the value 1 as white.
Showing RGB images
>> imshow(f);
It displays the RGB image f.
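
In Python, a comparable effect to imshow(f, [Low, High]) can be obtained with matplotlib's vmin/vmax arguments. The following is only an illustrative sketch; it assumes matplotlib and OpenCV are installed and that a file named 'nature.jpg' is available:

import cv2
from matplotlib import pyplot as plt

f = cv2.imread('nature.jpg', cv2.IMREAD_GRAYSCALE)   # read the image as a grayscale matrix
plt.imshow(f, cmap='gray', vmin=90, vmax=180)         # values <= 90 appear black, values >= 180 appear white
plt.title('Grayscale display with a clipped intensity range')
plt.axis('off')
plt.show()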

PROGRAM:
import cv2
image = cv2.imread('lion.png')
cv2.imshow('original', image)
cv2.waitKey(0)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('Grayscale Lion', gray_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

RESULT: Grey scale/colour images have been displayed successfully.


EXPERIMENT NO. 2

OBJECTIVE: Write a program using MATLAB/Python to extract different attributes (i.e.,


Geometrical and texture) of an Image.

TOOL REQUIRED: MATLAB/PYTHON


THEORY: Geometric attributes of an image include area, position (centroid), orientation (the axis of
least second moment), and the distance between a point and a line.
In general, image texture analysis consists of four types of problems: (1) texture segmentation, (2)
texture classification, (3) texture synthesis, and (4) shape from texture.

PROGRAM:

from PIL import Image


im = Image.open(r"C:\Users\admin\Desktop\download.jpg")
w = im.width
h = im.height
print(w)
print(h)
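
The width/height example above only touches the geometric side. A slightly fuller sketch is given below; it is illustrative only and assumes OpenCV and NumPy are installed and that 'download.jpg' contains a single bright object on a dark background. It derives area, centroid and orientation from image moments and uses histogram entropy as a simple texture attribute:

import cv2
import numpy as np

img = cv2.imread(r"C:\Users\admin\Desktop\download.jpg", cv2.IMREAD_GRAYSCALE)

# Geometric attributes from the moments of a thresholded (binary) version of the image
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
m = cv2.moments(bw, binaryImage=True)
area = m['m00']
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']                # centroid (position)
theta = 0.5 * np.arctan2(2 * m['mu11'], m['mu20'] - m['mu02'])   # orientation of the least-second-moment axis
print('area:', area, 'centroid:', (cx, cy), 'orientation (rad):', theta)

# A simple texture attribute: entropy of the grey-level histogram
hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print('histogram entropy:', entropy)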

RESULT: Different attributes of an image have been extracted successfully.


EXPERIMENT NO. 3

OBJECTIVE: Write a program using MATLAB/Python for Image Negation.

TOOL REQUIRED: MATLAB/PYTHON

THEORY: The negative of an image with intensity levels in the range [0,L − 1] is obtained by
using the negative transformation function which has the form:

s = L − 1 − r

Reversing the intensity levels of a digital image in this manner produces the equivalent of a
photographic negative. This type of processing is used, for example, in enhancing white or gray
detail embedded in dark regions of an image, especially when the black areas are dominant in size.
PROGRAM:

from PIL import Image

img = Image.open(r"C:\Users\admin\Desktop\fishh.jpg")
img.show()
# Negative transform s = 255 - r applied to each channel of every pixel
for i in range(0, img.size[0]):
    for j in range(0, img.size[1]):
        pixelColorVals = img.getpixel((i, j))
        redPixel = 255 - pixelColorVals[0]
        greenPixel = 255 - pixelColorVals[1]
        bluePixel = 255 - pixelColorVals[2]
        img.putpixel((i, j), (redPixel, greenPixel, bluePixel))
img.show()
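
A vectorized alternative (a sketch assuming OpenCV and NumPy; per-pixel loops are slow in Python) that applies the same s = 255 − r mapping in one step:

import cv2

img = cv2.imread(r"C:\Users\admin\Desktop\fishh.jpg")
negative = 255 - img              # applies s = L - 1 - r to every channel at once
cv2.imwrite('negative.jpg', negative)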

RESULT: Negative of an image has been generated successfully.


EXPERIMENT NO. 4

OBJECTIVE: Write a program using MATLAB/Python for Power Law Transformation.

TOOL REQUIRED: MATLAB/PYTHON

THEORY: Power-law transformations have the form

s = c·r^γ

where c and γ are positive constants.
The response of many devices used for image capture, printing, and display obeys a power law. By
convention, the exponent in a power-law equation is referred to as gamma (hence the use of this
symbol in the above equation). The process used to correct these power-law response phenomena is called
gamma correction or gamma encoding. For example, cathode ray tube (CRT) devices have an
intensity-to-voltage response that is a power function, with exponents varying from approximately
1.8 to 2.5.
Processing an image with γ = 3.0, 4.0, and 5.0 typically shows that suitable results are obtained with
gamma values of 3.0 and 4.0; the latter has a slightly more appealing appearance because of its higher contrast.

PROGRAM:
import cv2
import numpy as np

img = cv2.imread(r"C:\Users\admin\Desktop\fishh.jpg")
# Apply s = c * r^gamma (with c = 1) for two sample gamma values
for gamma in [0.1, 0.5]:
    gamma_corrected = np.array(255 * (img / 255) ** gamma, dtype='uint8')
    cv2.imwrite('gamma_transformed' + str(gamma) + '.jpg', gamma_corrected)

RESULT: Power Law Transformation of an image for different γ has been performed successfully.
EXPERIMENT NO. 5

OBJECTIVE: Write a program using MATLAB/Python for Histogram Mapping and Equalization.

TOOL REQUIRED: MATLAB/PYTHON

THEORY:

HISTOGRAM EQUALIZATION:

Histogram equalization is a method in image processing of contrast adjustment using the image's
histogram. This method usually increases the global contrast of many images, especially when the
image is represented by a narrow range of intensity values. Through this adjustment, the intensities
can be better distributed on the histogram utilizing the full range of intensities evenly. This allows
for areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this
by effectively spreading out the most frequent (highly populated) intensity values, which otherwise
degrade image contrast.
Assuming initially continuous intensity values, let the variable r denote the intensities of an image
to be processed. As usual, we assume that r is in the range [0, L − 1], with r = 0 representing black
and r = L − 1 representing white. For r satisfying these conditions, we focus attention on
transformations (intensity mappings) of the form
s = T(r),   0 ≤ r ≤ L − 1
that produce an output intensity value, s, for a given intensity value r in the input image. We
assume that
(a) T(r) is a monotonic increasing function in the interval 0 ≤ r ≤ L − 1; and
(b) 0 ≤ T(r) ≤ L − 1 for 0 ≤ r ≤ L − 1.
In some formulations we use the inverse transformation
r = T⁻¹(s),   0 ≤ s ≤ L − 1
in which case we change condition (a) to: (a’) T(r) is a strictly monotonic increasing function in the
interval 0 ≤ r ≤ L − 1.
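For digital images the equalization transformation is applied in its discrete form, s_k = (L − 1) Σ_{j=0}^{k} n_j / (M·N), i.e., a scaled cumulative histogram, where n_j is the number of pixels with intensity j and M·N is the total number of pixels. A minimal NumPy sketch of this mapping is given below (illustrative only; cv.equalizeHist used in the program that follows performs the same mapping internally for 8-bit images):

import numpy as np

def equalize(gray):
    # gray: 2-D uint8 array; returns the histogram-equalized image
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # map each input level r to s = round(255 * cdf(r) / (M*N))
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]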
PROGRAM:

import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

# Read the image as a single-channel grayscale image (equalizeHist expects 8-bit, 1-channel input)
img = cv.imread('flower.jpeg', cv.IMREAD_GRAYSCALE)
cv.imshow('image', img)
cv.waitKey(0)
cv.destroyAllWindows()

# Histogram and normalized CDF of the original image
hist, bins = np.histogram(img.flatten(), 256, [0, 256])
cdf = hist.cumsum()
cdf_normalized = cdf * float(hist.max()) / cdf.max()
plt.plot(cdf_normalized, color='b')
plt.hist(img.flatten(), 256, [0, 256], color='r')
plt.xlim([0, 256])
plt.legend(('cdf', 'histogram'), loc='upper left')
plt.show()

# Equalize and repeat the plots for the equalized image
equ = cv.equalizeHist(img)
cv.imshow('equ.png', equ)
cv.waitKey(0)
cv.destroyAllWindows()
hist, bins = np.histogram(equ.flatten(), 256, [0, 256])
cdf = hist.cumsum()
cdf_normalized = cdf * float(hist.max()) / cdf.max()
plt.plot(cdf_normalized, color='b')
plt.hist(equ.flatten(), 256, [0, 256], color='r')
plt.xlim([0, 256])
plt.legend(('cdf', 'histogram'), loc='upper left')
plt.show()

RESULT: Histogram equalization has been done successfully.


EXPERIMENT NO. 6

OBJECTIVE: Write a program using MATLAB/Python for Image Smoothening and Sharpening.

TOOL REQUIRED: MATLAB/PYTHON

THEORY: Smoothing (also called averaging) spatial filters are used to reduce sharp transitions in
intensity. Because random noise typically consists of sharp transitions in intensity, an obvious
application of smoothing is noise reduction. Smoothing is used to reduce irrelevant detail in an
image, where “irrelevant” refers to pixel regions that are small with respect to the size of the filter
kernel. Another application is for smoothing the false contours that result from using an insufficient
number of intensity levels in an image. Smoothing filters are used in combination with other
techniques for image enhancement, such as the histogram processing techniques and unsharp
masking.
Sharpening highlights transitions in intensity. Uses of image sharpening range from electronic
printing and medical imaging to industrial inspection and autonomous guidance in military
systems. Sharpening can be accomplished by spatial differentiation. Image differentiation enhances
edges and other discontinuities (such as noise) and de-emphasizes areas with slowly varying
intensities. Sharpening is often referred to as high pass filtering. In this case, high frequencies
(which are responsible for fine details) are passed, while low frequencies are attenuated or rejected.

PROGRAM:
import cv2 as cv
import numpy as np

# 5x5 Gaussian, 3x3 averaging (low-pass) and 3x3 high-pass kernels
src = np.array((1, 4, 6, 4, 1))
GAUSSIAN_KERNEL = np.outer(src.T, src) / 256
LOW_PASS_KERNEL = np.array([1/9] * 9).reshape((3, 3))
HIGH_PASS_KERNEL = np.array([0, -0.25, 0, -0.25, 2, -0.25, 0, -0.25, 0]).reshape((3, 3))

def convolve(img1, kernel):
    # Naive spatial filtering: read from the original image, write to a copy
    img = img1.astype(np.float64)
    out = img.copy()
    u, v = kernel.shape
    m, n = img.shape
    for i in range(m - u):
        for j in range(n - v):
            out[i][j] = np.sum(img[i: i + u, j: j + v] * kernel)
    return np.clip(out, 0, 255).astype(np.uint8)

def smoothing(img, kernel=GAUSSIAN_KERNEL):
    return convolve(img, kernel)

def sharpening(img, kernel=HIGH_PASS_KERNEL):
    # Convolving with a high-pass kernel boosts intensity transitions (edges)
    return convolve(img, kernel)

if __name__ == '__main__':
    img1 = cv.imread('img1.jpg', 0)
    img2 = cv.imread('img2.png', 0)
    op1 = smoothing(img1, kernel=LOW_PASS_KERNEL)
    op2 = sharpening(img2, kernel=HIGH_PASS_KERNEL)
    cv.imwrite('original image1.jpg', img1)
    cv.imwrite('smoothened image1.jpg', op1)
    cv.imwrite('original image2.png', img2)
    cv.imwrite('sharpened image2.png', op2)
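
For comparison, the same two operations can be done with OpenCV built-ins. This is a sketch only, assuming an input file 'img1.jpg' as above: cv.GaussianBlur for smoothing and an unsharp mask (2·original − smoothed) for sharpening.

import cv2 as cv

img = cv.imread('img1.jpg', 0)
smooth = cv.GaussianBlur(img, (5, 5), 0)              # smoothing with a 5x5 Gaussian kernel
sharp = cv.addWeighted(img, 2.0, smooth, -1.0, 0)     # unsharp mask: 2*original - smoothed
cv.imwrite('smooth_builtin.jpg', smooth)
cv.imwrite('sharp_builtin.jpg', sharp)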

RESULT: Image smoothening and sharpening have been performed successfully.


EXPERIMENT NO. 7

OBJECTIVE: Write a program using MATLAB/Python for Edge Detection using Sobel, Prewitt
and Roberts Operators.

TOOL REQUIRED: MATLAB/PYTHON

THEORY: Edge detection is an approach used frequently for segmenting images based on abrupt
(local) changes in intensity. A step edge is characterized by a transition between two intensity
levels occurring ideally over the distance of one pixel. Step edges occur, for example, in images
generated by a computer for use in areas such as solid modeling and animation. These clean, ideal
edges can occur over the distance of one pixel, provided that no additional processing (such as
smoothing) is used to make them look “real.” Digital step edges are used frequently as edge models
in algorithm development.
The three steps performed typically for edge detection are:
1. Image smoothing for noise reduction.
2. Detection of edge points. As mentioned earlier, this is a local operation that extracts from an
image all points that are potential edge-point candidates.
3. Edge localization. The objective of this step is to select from the candidate points only the points
that are members of the set of points comprising an edge.
When diagonal edge direction is of interest, we need 2-D kernels. The Roberts cross-gradient
operators are one of the earliest attempts to use 2-D kernels with a diagonal preference. Consider
the 3 × 3 region in Fig. (a). The Roberts operators are based on implementing the diagonal
differences
g_x = ∂f/∂x = (z9 − z5)
g_y = ∂f/∂y = (z8 − z6)

These derivatives can be implemented by filtering an image with the kernels shown in Figs.(b)
and (c).
Kernels of size 2 × 2 are simple conceptually, but they are not as useful for computing edge
direction as kernels that are symmetric about their centers, the smallest of which are of size 3 × 3.
The simplest digital approximations to the partial derivatives using kernels of size 3 × 3 are given
by
g_x = ∂f/∂x = (z7 + z8 + z9) − (z1 + z2 + z3)
g_y = ∂f/∂y = (z3 + z6 + z9) − (z1 + z4 + z7)
These derivatives can be implemented by filtering an image with the kernels shown in Fig (d) and
(e). These kernels are called the Prewitt operators

A slight variation of the preceding two equations uses a weight of 2 in the center coefficient:
g_x = ∂f/∂x = (z7 + 2z8 + z9) − (z1 + 2z2 + z3)
g_y = ∂f/∂y = (z3 + 2z6 + z9) − (z1 + 2z4 + z7)
It can be demonstrated that using a 2 in the center location provides image smoothing. Figures (f)
and (g) show the kernels used to implement above equations. These kernels are called the Sobel
operators.
PROGRAM:
"""
edges.py: Canny, Prewitt and Sobel Edge detection using opencv
"""
__author__ = "K.M. Tahsin Hassan Rahit"
__email__ = "tahsin.rahit@gmail.com"
import cv2
import numpy as np
img = cv2.imread('messi5.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_gaussian = cv2.GaussianBlur(gray,(3,3),0)
#canny
img_canny = cv2.Canny(img,100,200)
#sobel
img_sobelx = cv2.Sobel(img_gaussian,cv2.CV_8U,1,0,ksize=5)
img_sobely = cv2.Sobel(img_gaussian,cv2.CV_8U,0,1,ksize=5)
img_sobel = img_sobelx + img_sobely
#prewitt
kernelx = np.array([[1,1,1],[0,0,0],[-1,-1,-1]])
kernely = np.array([[-1,0,1],[-1,0,1],[-1,0,1]])
img_prewittx = cv2.filter2D(img_gaussian, -1, kernelx)
img_prewitty = cv2.filter2D(img_gaussian, -1, kernely)
cv2.imshow("Original Image", img)
cv2.imshow("Canny", img_canny)
cv2.imshow("Sobel X", img_sobelx)
cv2.imshow("Sobel Y", img_sobely)
cv2.imshow("Sobel", img_sobel)
cv2.imshow("Prewitt X", img_prewittx)
cv2.imshow("Prewitt Y", img_prewitty)
cv2.imshow("Prewitt", img_prewittx + img_prewitty)
cv2.waitKey(0)
cv2.destroyAllWindows()
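
The program above covers Canny, Sobel and Prewitt; the Roberts cross-gradient operators named in the objective can be added with the same filter2D call. The following is a sketch only, reusing the input file 'messi5.jpg' from the program above; the 2 × 2 kernels implement the diagonal differences defined in the theory:

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread('messi5.jpg'), cv2.COLOR_BGR2GRAY)
img_gaussian = cv2.GaussianBlur(gray, (3, 3), 0).astype(np.float32)

# Roberts cross-gradient kernels (diagonal differences)
roberts_x = np.array([[1, 0], [0, -1]], dtype=np.float32)
roberts_y = np.array([[0, 1], [-1, 0]], dtype=np.float32)
gx = cv2.filter2D(img_gaussian, -1, roberts_x)
gy = cv2.filter2D(img_gaussian, -1, roberts_y)
img_roberts = cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # gradient magnitude
cv2.imshow('Roberts', img_roberts)
cv2.waitKey(0)
cv2.destroyAllWindows()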

RESULT: Edge detection has been done successfully using the Sobel, Prewitt and Roberts operators.
EXPERIMENT NO. 8

OBJECTIVE: Write a program using MATLAB/Python for Morphological Operations on Binary


Images.
TOOL REQUIRED: MATLAB/PYTHON

THEORY: Morphology is a broad set of image processing operations that process images based
on shapes. In a morphological operation, each pixel in the image is adjusted based on the value of
other pixels in its neighborhood. By choosing the size and shape of the neighborhood, you can
construct a morphological operation that is sensitive to specific shapes in the input image.
The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The number
of pixels added or removed from the objects in an image depends on the size and shape of the
structuring element used to process the image. In the morphological dilation and erosion
operations, the state of any given pixel in the output image is determined by applying a rule to the
corresponding pixel and its neighbors in the input image. The rule used to process the pixels
defines the operation as a dilation or an erosion.
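A minimal sketch of dilation and erosion on a static binary image is shown below (illustrative only; it assumes OpenCV and NumPy are installed and an input file 'binary.png' containing a thresholded image). The main program that follows applies an opening (erosion followed by dilation) to a live webcam mask.

import cv2
import numpy as np

bw = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)
kernel = np.ones((5, 5), np.uint8)                 # 5x5 square structuring element
dilated = cv2.dilate(bw, kernel, iterations=1)     # adds pixels to object boundaries
eroded = cv2.erode(bw, kernel, iterations=1)       # removes pixels from object boundaries
cv2.imwrite('dilated.png', dilated)
cv2.imwrite('eroded.png', eroded)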
PROGRAM:
import cv2
import numpy as np

# Capture video from the first webcam on the computer
screenRead = cv2.VideoCapture(0)

# Loop runs while capturing is initialized
while(1):
    # Read a frame from the camera
    _, image = screenRead.read()

    # Convert to HSV colour space (OpenCV reads colours as BGR)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Define the range of blue used for masking
    blue1 = np.array([110, 50, 50])
    blue2 = np.array([130, 255, 255])

    # Initialize the mask to be applied over the input image
    mask = cv2.inRange(hsv, blue1, blue2)

    # Apply bitwise_and to keep only the masked pixels
    res = cv2.bitwise_and(image, image, mask=mask)

    # Define the kernel, i.e. the structuring element
    kernel = np.ones((5, 5), np.uint8)

    # Apply the opening operation to the mask using the structuring element
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Show the mask and the result of the opening operation
    cv2.imshow('Mask', mask)
    cv2.imshow('Opening', opening)

    # Wait for the 'a' key to stop the program
    if cv2.waitKey(1) & 0xFF == ord('a'):
        break

# De-allocate any associated memory usage and close the windows
cv2.destroyAllWindows()

# Release the webcam
screenRead.release()

RESULT: The morphological operations have been performed successfully.


EXPERIMENT NO: 09
OBJECTIVE:- Write a program using MATLAB/python for Pseudo Coloring.

SOFTWARE REQUIREMENT: MATLAB 2015(a)

Brief Theory:
Pseudo coloring is one of the attractive categories in image processing. It is used to make old
black and white images or videos colourful. Pseudo coloring techniques analyse the histogram
characteristics of the sample black and white image and, based on this analysis, assign different
values to the R, G and B layers of the output colour image; selecting suitable values for these
layers is the key step of the technique. Pseudo coloring is also a well-known topic in digital image
processing.
In this technique, a grey level of type unsigned 8-bit integer (a number between 0 and 255) is taken
as input, and three outputs are produced, one for each layer of the colour image; each of these
layers is also of type unsigned 8-bit integer (a number between 0 and 255). There are many
techniques for this conversion, which differ based on the requirements.

Grayscale image: It is a black and white image. The pixel values are shades of grey, i.e.,
combinations of black and white. The image is represented as a single 2-dimensional matrix. Each
value represents the intensity or brightness of the corresponding pixel at that coordinate in the
image. A total of 256 shades are possible for grayscale images: 0 means black and 255 means white.
As the value increases from 0 to 255, the white component and the brightness increase.
RGB colour image: It is a coloured image. It consists of three 2-dimensional matrices, which are
called channels. The Red, Green and Blue channels contain the corresponding colour values for
each pixel in the image. In integer format, the pixel intensity ranges from 0 to 255: 0 means black
and 255 represents the highest intensity of the primary colour. There exist 256 shades of each
colour.

Algorithm Steps:
1. Read the grayscale image.
2. If its bit-depth is 24, then make it 8.
3. Create an empty image of the same size.
4. Assign some random weight to RGB channels.
5. Copy weighted product of grayscale image to each channel of Red, Green, and Blue.
6. Display the images after creation.
Main functions used in algorithm:
imread( ) inbuilt function is used to read the image.
imtool( ) inbuilt function is used to display the image.
rgb2gray( ) inbuilt function is used to convert RGB to gray image.
uint8( ) inbuilt function is used to convert double into integer format.
pause( ) inbuilt function is used to stop execution for specified seconds.

MATLAB Program
% MATLAB code for pseudo colouring
% of grayscale images.
% UTILITY CODE
k = imread("gfglogo.png");
gray2rgb(k);

function gray2rgb(img)
% Convert into grayscale if the input is not already grayscale.
[x, y, z] = size(img);
if (z == 3)
    grayscale = rgb2gray(img);
else
    grayscale = img;
end
gray = double(grayscale) / 255;        % normalize to [0, 1] in double format
rgb(:,:,1) = gray(:,:) * 0.5;          % red channel
rgb(:,:,2) = gray(:,:) * 0.6;          % green channel
rgb(:,:,3) = gray(:,:) * 0.4;          % blue channel
imtool(rgb, []);

c(x, y, 3) = 0;                        % empty (black) image of the same size
colour = uint8(c);
colour(:,:,1) = grayscale(:,:) * 0.5;  % red channel
colour(:,:,2) = grayscale(:,:) * 0.7;  % green channel
colour(:,:,3) = grayscale(:,:) * 0.4;  % blue channel
imtool(colour, []);

pause(10);
imtool close all;
end
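
An equivalent Python sketch is given below (illustrative only; it assumes OpenCV is installed and that the same grayscale file 'gfglogo.png' is available). Instead of hand-picked channel weights, it uses a built-in colormap:

import cv2

gray = cv2.imread('gfglogo.png', cv2.IMREAD_GRAYSCALE)
pseudo = cv2.applyColorMap(gray, cv2.COLORMAP_JET)   # map each grey level to an RGB colour
cv2.imwrite('pseudo_coloured.png', pseudo)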
Results: In this experiment, we have written a MATLAB code for Pseudo Coloring. The
detailed explanation of code along with its output is given as follows:
Code Explanation:
[x,y,z]=size(img); This line gets the size of input image.
gray=double(grayscale./255); This line converts input image into double format.
rgb(:,:,1)=gray(:,:)*0.5; This line builds red channel.
rgb(:,:,2)=gray(:,:)*0.6; This line builds green channel.
rgb(:,:,3)=gray(:,:)*0.4; This line builds blue channel.
imtool(rgb,[]); This line displays the plain RGB image that was built.
c(x,y,z)=0; This line creates empty image with black pixels.
colour=uint8(c); This line converts image into integer format.
colour(:,:,1)=grayscale(:,:)*0.5; This line populates the red channel.
colour(:,:,2)=grayscale(:,:)*0.7; This line populates the green channel.
colour(:,:,3)=grayscale(:,:)*0.4; This line populates the blue channel.
imtool(colour,[]); This line displays the coloured image formed.
pause(10); This line halts the execution for 10 seconds.
k=imread("gfglogo.png"); This line reads the input image.
gray2rgb(k); This line calls the utility function by passing input image as parameter.
Output:
EXPERIMENT NO.-10

OBJECTIVE:

Write a program using matlab/python for the segmentation using watershed transform.

EQUIPMENT REQUIRED:

Hardware Required: PC
Software Required: MATLAB R2017a

THEORY:

• Segmentation using watershed transform: The watershed transform is a technique for
segmenting digital images that uses a type of region-growing method based on an image
gradient. The concept of the watershed transform is based on visualizing an image in three
dimensions: two spatial coordinates versus grey levels.

MATLAB CODE:

% Read an input colour image (any RGB image will do; 'pears.png' ships with MATLAB)
RGB = imread('pears.png');
I = rgb2gray(RGB);
I2 = imtophat(I, strel('disk', 10));

level = graythresh(I2);
BW = im2bw(I2, level);
D = -bwdist(~BW);
D(~BW) = -Inf;
L = watershed(D);
imshow(label2rgb(L, 'jet', 'w'))
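
A Python sketch of the same marker-based watershed pipeline is given below (illustrative only; it assumes OpenCV and NumPy are installed and a hypothetical input file 'coins.png' with bright objects on a dark background; the 0.5 threshold on the distance transform is a heuristic):

import cv2
import numpy as np

img = cv2.imread('coins.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Distance transform peaks mark the interior of each object (sure foreground)
dist = cv2.distanceTransform(bw, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(bw, sure_fg)

# Label the markers; the watershed treats label 0 as "unknown"
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)      # watershed boundary pixels drawn in red
cv2.imwrite('watershed_result.png', img)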

RESULT:

The study for the segmentation using watershed transform has been done successfully.
EXPERIMENT NO.-11

OBJECTIVE: Write a program to eliminate the high frequency components of an image.

EQUIPMENT REQUIRED:

Hardware Required: PC
Software Required: MATLAB R2017a

THEORY: Frequency in images is the rate of change of intensity values. A high-frequency image is
one in which the intensity values change quickly from one pixel to the next, whereas a low-frequency
image is one in which the intensity varies slowly.
In general, frequency is the number of occurrences of a repeating event per unit of time; for images,
the relevant quantity is spatial frequency, i.e., occurrences per unit of distance. Sharp transitions
such as edges and noise contribute mainly to the high-frequency content of an image, while smooth,
slowly varying regions contribute to the low-frequency content. Eliminating the high-frequency
components therefore blurs the image: the image is transformed to the frequency domain (e.g., with
the 2-D FFT or DCT), the coefficients far from the origin are suppressed with a low-pass filter, and
the result is transformed back to the spatial domain.

MATLAB CODE:

clc % to clear command window


close all % to close the figure window
clear all % to clear previous variable
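
The lines above only clear the workspace. A minimal Python sketch of the actual filtering step is given below; it is an illustrative low-pass filter in the FFT domain, assuming OpenCV and NumPy are installed, a grayscale input file 'lion.png' (as used in Experiment 1), and an arbitrary cutoff radius of 30 pixels:

import cv2
import numpy as np

img = cv2.imread('lion.png', cv2.IMREAD_GRAYSCALE)
rows, cols = img.shape

# Transform to the frequency domain and shift the zero frequency to the centre
F = np.fft.fftshift(np.fft.fft2(img))

# Ideal low-pass mask: keep only frequencies within a radius of 30 pixels of the centre
crow, ccol = rows // 2, cols // 2
y, x = np.ogrid[:rows, :cols]
mask = (x - ccol) ** 2 + (y - crow) ** 2 <= 30 ** 2

# Suppress the high-frequency components, transform back and save the blurred result
filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))
low_pass = np.clip(np.abs(filtered), 0, 255).astype(np.uint8)
cv2.imwrite('low_pass.png', low_pass)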
OUTPUT:

RESULT:

The elimination of the high-frequency components of an image has been done successfully.
EXPERIMENT NO.-12

OBJECTIVE: Write a program using MATLAB/Python to extract the image features for image
segmentation using DWT computation.

EQUIPMENT REQUIRED:

Hardware Required: PC
Software Required: MATLAB R2017a

THEORY: In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is
any wavelet transform for which the wavelets are discretely sampled. This experiment covers the
fundamentals of the DWT and its implementation in MATLAB. The image is filtered by a low-pass
filter (capturing the smooth variation between grey-level pixels) and a high-pass filter (capturing
the rapid variation between grey-level pixels).

Syntax:

[cA,cH,cV,cD] = dwt2(X,wname)

[cA,cH,cV,cD] = dwt2(X,LoD,HiD)

[cA,cH,cV,cD] = dwt2(___,'mode',extmode)

MATLAB CODE:

load woman              % loads the built-in test image X and its colormap map
imagesc(X)
colormap(map)

% Obtain the single-level 2-D discrete wavelet transform of the image
% using the order 4 symlet and periodic extension.
[cA,cH,cV,cD] = dwt2(X,'sym4','mode','per');

% Display the vertical detail coefficients and the approximation coefficients.
figure
imagesc(cV)
title('Vertical Detail Coefficients')
figure
imagesc(cA)
title('Approximation Coefficients')
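
A comparable Python sketch is given below (illustrative only; it assumes the PyWavelets and OpenCV packages are installed and a grayscale input file 'lion.png'). The four subband energies are simple features that can feed a subsequent segmentation stage:

import cv2
import numpy as np
import pywt

img = cv2.imread('lion.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Single-level 2-D DWT: approximation plus horizontal, vertical and diagonal details
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# Subband energies as simple texture features for segmentation
for name, band in [('cA', cA), ('cH', cH), ('cV', cV), ('cD', cD)]:
    print(name, 'energy:', np.sum(band ** 2))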

RESULT:
The extraction of image features for image segmentation using DWT computation has been done
successfully.
