Like the mean filter, the median filter considers each pixel in the image in turn and looks at its
nearby neighbors to decide whether or not it is representative of its surroundings. Instead of simply
replacing the pixel value with the mean of neighboring pixel values, it replaces it with the median of
those values. The median is calculated by first sorting all the pixel values from the surrounding
neighborhood into numerical order and then replacing the pixel being considered with the middle
pixel value. (If the neighborhood under consideration contains an even number of pixels, the
average of the two middle pixel values is used.) Figure 1 illustrates an example calculation.
Figure 1 Calculating the median value of a pixel neighborhood. As can be seen, the central pixel
value of 150 is rather unrepresentative of the surrounding pixels and is replaced with the median
value: 124. A 3×3 square neighborhood is used here --- larger neighborhoods will produce more
severe smoothing.
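To make the calculation concrete, here is a minimal NumPy check (the neighborhood values are illustrative, chosen to match the 150 center and 124 median named in the caption):
import numpy as np
# A 3x3 neighborhood whose center pixel (150) is an outlier
neighborhood = np.array([[124, 126, 127],
                         [120, 150, 125],
                         [115, 119, 123]])
# np.median sorts the flattened values and returns the middle one
print(np.median(neighborhood))  # 124.0, the value that replaces the center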
By calculating the median value of a neighborhood rather than the mean, the median filter has
two main advantages over the mean filter:
    - The median is a more robust average than the mean, and so a single very unrepresentative
      pixel in a neighborhood will not affect the median value significantly.
    - Since the median value must actually be the value of one of the pixels in the neighborhood,
      the median filter does not create new unrealistic pixel values when the filter straddles an
      edge. For this reason the median filter is much better at preserving sharp edges than the
      mean filter.
To illustrate, consider an image that has been corrupted by Gaussian noise with mean 0 and
standard deviation (σ) 8. After 3×3 median filtering, note how the noise has been reduced at the
expense of a slight degradation in image quality. A second image, corrupted by even more noise
(Gaussian noise with mean 0 and σ = 13), shows after 3×3 median filtering that the median filter
is sometimes not as subjectively good at dealing with large amounts of Gaussian noise as the
mean filter.
Where median filtering really comes into its own is when the noise produces extreme `outlier' pixel
values, as for instance in an image corrupted with `salt and pepper' noise, i.e. bits have been
flipped with probability 1%. Median filtering this with a 3×3 neighborhood produces a result in
which the noise has been entirely eliminated with almost no degradation to the underlying image.
Compare this with the similar test on the mean filter.
If the image is corrupted with higher levels of salt and pepper noise (i.e. p = 5% that a bit is
flipped), then after smoothing with a 3×3 filter most of the noise has been eliminated. If we
smooth the noisy image with a larger median filter, e.g. 7×7, all the noisy pixels disappear.
Note, however, that the image begins to look a bit `blotchy', as graylevel regions are mapped
together. Alternatively, we can pass a 3×3 median filter over the image three times in order to
remove all the noise with less loss of detail.
In general, the median filter allows a great deal of high spatial frequency detail to pass while
remaining very effective at removing noise on images where less than half of the pixels in a
smoothing neighborhood have been affected. (As a consequence of this, median filtering can be less
effective at removing noise from images corrupted with Gaussian noise.)
One of the major problems with the median filter is that it is relatively expensive and complex to
compute. To find the median it is necessary to sort all the values in the neighborhood into numerical
order and this is relatively slow, even with fast sorting algorithms such as quicksort. The basic
algorithm can, however, be enhanced somewhat for speed. A common technique is to notice that
when the neighborhood window is slid across the image, many of the pixels in the window are the
same from one step to the next, and the relative ordering of these with each other will obviously not
have changed. Clever algorithms make use of this to improve performance.
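In practice, library routines implement such optimizations internally; a minimal OpenCV sketch of median filtering (the filename is a placeholder):
import cv2
# Load a grayscale image (the filename is a placeholder)
img = cv2.imread('input_image.jpg', 0)
# Median filter with a 3x3 neighborhood (the kernel size must be odd)
filtered = cv2.medianBlur(img, 3)
cv2.imwrite('median_out.jpg', filtered)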
Exercises
    2. Compare the relative speed of mean and median filters using the same sized neighborhood
       and image. How does the performance of each scale with size of image and size of
       neighborhood?
    3. Unlike the mean filter, the median filter is non-linear. This means that for two
       images A(x) and B(x):

           median{A(x) + B(x)} ≠ median{A(x)} + median{B(x)}

       Illustrate this to yourself by performing smoothing and pixel addition (in the order indicated
       on each side of the above equation!) to a set of test images. Carry out this experiment on
       some simple images.
Gaussian Smoothing
The 1-D Gaussian distribution has the form

    G(x) = (1 / (√(2π) σ)) exp( -x² / (2σ²) )

where σ is the standard deviation of the distribution. We have also assumed that the distribution
has a mean of zero (i.e. it is centered on the line x = 0). The distribution is illustrated in Figure 1.
In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form

    G(x, y) = (1 / (2πσ²)) exp( -(x² + y²) / (2σ²) )
The idea of Gaussian smoothing is to use this 2-D distribution as a `point-spread' function, and this is
achieved by convolution. Since the image is stored as a collection of discrete pixels we need to
produce a discrete approximation to the Gaussian function before we can perform the convolution.
In theory, the Gaussian distribution is non-zero everywhere, which would require an infinitely large
convolution kernel, but in practice it is effectively zero more than about three standard deviations
from the mean, and so we can truncate the kernel at this point. Figure 3 shows a suitable integer-
valued convolution kernel that approximates a Gaussian with a σ of 1.0. It is not obvious how to pick
the values of the mask to approximate a Gaussian. One could use the value of the Gaussian at the
centre of a pixel in the mask, but this is not accurate because the value of the Gaussian varies non-
linearly across the pixel. We integrated the value of the Gaussian over the whole pixel (by summing
the Gaussian at 0.001 increments). The integrals are not integers: we rescaled the array so that the
corners had the value 1. Finally, the 273 is the sum of all the values in the mask.
Figure 4 One of the pair of 1-D convolution kernels used to calculate the full kernel shown in Figure 3
more quickly.
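One way to construct such a kernel is to take the outer product of two 1-D Gaussian kernels, as Figure 4 suggests. A minimal sketch, assuming a 5×5 mask with σ = 1.0; note that cv2.getGaussianKernel samples the Gaussian rather than integrating it over each pixel, so the rescaled values will differ slightly from the integer mask in Figure 3:
import cv2
import numpy as np
# 1-D Gaussian kernel with 5 taps and sigma = 1.0
k1d = cv2.getGaussianKernel(5, 1.0)
# The full 2-D kernel is the outer product of the 1-D kernel with itself
k2d = np.outer(k1d, k1d)
# Rescale so that the corner value is 1, as described in the text
print(np.round(k2d / k2d[0, 0]))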
A further way to compute a Gaussian smoothing with a large standard deviation is to convolve an
image several times with a smaller Gaussian. While this is computationally complex, it can have
applicability if the processing is carried out using a hardware pipeline.
The Gaussian filter not only has utility in engineering applications. It is also attracting attention from
computational biologists because it has been attributed with some amount of biological
plausibility, e.g. some cells in the visual pathways of the brain often have an approximately Gaussian
response.
The effect of Gaussian smoothing is to blur an image, in a similar fashion to the mean filter. The
degree of smoothing is determined by the standard deviation of the Gaussian. (Larger standard
deviation Gaussians, of course, require larger convolution kernels in order to be accurately
represented.)
The Gaussian outputs a `weighted average' of each pixel's neighborhood, with the average weighted
more towards the value of the central pixels. This is in contrast to the mean filter's uniformly
weighted average. Because of this, a Gaussian provides gentler smoothing and preserves edges
better than a similarly sized mean filter.
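As a concrete illustration, here is a minimal OpenCV sketch of Gaussian smoothing (the filename, the 5×5 kernel size, and σ = 1.0 are illustrative choices):
import cv2
# Load an image (the filename is a placeholder)
img = cv2.imread('input_image.jpg')
# Smooth with a 5x5 Gaussian kernel, sigma = 1.0 in both directions
smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)
cv2.imwrite('gaussian_out.jpg', smoothed)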
One of the principal justifications for using the Gaussian as a smoothing filter is due to its frequency
response. Most convolution-based smoothing filters act as lowpass frequency filters. This means
that their effect is to remove high spatial frequency components from an image. The frequency
response of a convolution filter, i.e. its effect on different spatial frequencies, can be seen by taking
the Fourier transform of the filter. Figure 5 shows the frequency responses of a 1-D mean filter with
width 5 and also of a Gaussian filter with σ = 3.
Figure 5 Frequency responses of Box (i.e. mean) filter (width 5 pixels) and Gaussian filter (σ = 3
pixels). The spatial frequency axis is marked in cycles per pixel, and hence no value above 0.5 has a
real meaning.
Both filters attenuate high frequencies more than low frequencies, but the mean filter exhibits
oscillations in its frequency response. The Gaussian on the other hand shows no oscillations. In fact,
the shape of the frequency response curve is itself (half a) Gaussian. So by choosing an appropriately
sized Gaussian filter we can be fairly confident about what range of spatial frequencies are still
present in the image after filtering, which is not the case with the mean filter. This has consequences
for some edge detection techniques, as mentioned in the section on zero crossings. (The Gaussian
filter also turns out to be very similar to the optimal smoothing filter for edge detection under the
criteria used to derive the Canny edge detector.)
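As a rough numerical illustration of Figure 5, the following sketch (the signal length of 256 samples is an arbitrary choice) compares the two magnitude responses; the box filter's response oscillates, while the Gaussian's does not:
import numpy as np
n = 256
# 1-D box (mean) filter of width 5, normalized to unit sum
box = np.zeros(n)
box[:5] = 1.0 / 5.0
# 1-D Gaussian filter with sigma = 3, normalized to unit sum
x = np.arange(n) - n // 2
gauss = np.exp(-x**2 / (2 * 3.0**2))
gauss /= gauss.sum()
# Magnitudes of the frequency responses (a shift does not affect magnitude)
freqs = np.fft.rfftfreq(n)               # cycles per pixel, from 0 to 0.5
box_resp = np.abs(np.fft.rfft(box))      # oscillating (sinc-like) response
gauss_resp = np.abs(np.fft.rfft(gauss))  # smoothly decaying response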
Bilateral Filtering
The bold font for f and h emphasizes the fact that both input and output images may be multi-band.
In order to preserve the DC component, it must be

    k_d(x) = ∫∫ c(ξ, x) dξ

Range filtering is similarly defined:

    h(x) = k_r^(-1)(x) ∫∫ f(ξ) s(f(ξ), f(x)) dξ

In this case, the kernel measures the photometric similarity between pixels. The normalization
constant in this case is

    k_r(x) = ∫∫ s(f(ξ), f(x)) dξ
The spatial distribution of image intensities plays no role in range filtering taken by itself. Combining
intensities from the entire image, however, makes little sense, since the distribution of image values
far away from x ought not to affect the final value at x. In addition, one can show that range filtering
without domain filtering merely changes the color map of an image, and is therefore of little use.
The appropriate solution is to combine domain and range filtering, thereby enforcing both geometric
and photometric locality. Combined filtering can be described as follows:

    h(x) = k^(-1)(x) ∫∫ f(ξ) c(ξ, x) s(f(ξ), f(x)) dξ

with the normalization

    k(x) = ∫∫ c(ξ, x) s(f(ξ), f(x)) dξ
Combined domain and range filtering will be denoted as bilateral filtering. It replaces the pixel value
at x with an average of similar and nearby pixel values. In smooth regions, pixel values in a small
neighborhood are similar to each other, and the bilateral filter acts essentially as a standard domain
filter, averaging away the small, weakly correlated differences between pixel values caused by noise.
Consider now a sharp boundary between a dark and a bright region, as in figure 1(a).
When the bilateral filter is centered, say, on a pixel on the bright side of the boundary, the similarity
function s assumes values close to one for pixels on the same side, and values close to zero for pixels
on the dark side. The similarity function is shown in figure 1(b) for a 23x23 filter support centered
two pixels to the right of the step in figure 1(a). The normalization term k(x) ensures that the weights
for all the pixels add up to one. As a result, the filter replaces the bright pixel at the center by an
average of the bright pixels in its vicinity, and essentially ignores the dark pixels. Conversely, when
the filter is centered on a dark pixel, the bright pixels are ignored instead. Thus, as shown in figure
1(c), good filtering behavior is achieved at the boundaries, thanks to the domain component of the
filter, and crisp edges are preserved at the same time, thanks to the range component.
A simple and important case of bilateral filtering is shift-invariant Gaussian filtering, in which both
the closeness function c and the similarity function s are Gaussian functions of the Euclidean
distance between their arguments. More specifically, c is radially symmetric:

    c(ξ, x) = exp( -(1/2) (d(ξ, x) / σ_d)² )

where

    d(ξ, x) = d(ξ - x) = ‖ξ - x‖

is the Euclidean distance between ξ and x. The similarity function s is perfectly analogous to c:

    s(ξ, x) = exp( -(1/2) (δ(f(ξ), f(x)) / σ_r)² )

where

    δ(φ, f) = δ(φ - f) = ‖φ - f‖

is a suitable measure of distance in intensity space. In the scalar case, this may be simply the
absolute difference of the pixel values or, since noise increases with image intensity, an
intensity-dependent version of it. Just as this form of domain filtering is shift-invariant, the Gaussian
range filter introduced above is insensitive to overall additive changes of image intensity. Of course,
the range filter is shift-invariant as well.
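To make these definitions concrete, here is a small NumPy sketch that evaluates the shift-invariant Gaussian bilateral filter at a single pixel of a grayscale image (the window half-width and the values of σ_d and σ_r are illustrative):
import numpy as np

def bilateral_at(img, y, x, half=5, sigma_d=3.0, sigma_r=50.0):
    # Neighborhood window around (y, x); assumes (y, x) is at least
    # `half` pixels away from the image border
    win = img[y-half:y+half+1, x-half:x+half+1].astype(float)
    yy, xx = np.mgrid[-half:half+1, -half:half+1]
    # Closeness c: Gaussian of the Euclidean distance in the spatial domain
    c = np.exp(-(xx**2 + yy**2) / (2 * sigma_d**2))
    # Similarity s: Gaussian of the intensity difference in the range
    s = np.exp(-(win - float(img[y, x]))**2 / (2 * sigma_r**2))
    w = c * s
    # The normalization k(x) makes the weights sum to one
    return (w * win).sum() / w.sum()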
Figure 2 (a) and (b) show the potential of bilateral filtering for the removal of texture. The picture
"simplification" illustrated by figure 2 (b) can be useful for data reduction without loss of overall
shape features in applications such as image transmission, picture editing and manipulation, image
description for retrieval.
Figure 2. (a) Original image; (b) image after bilateral filtering.
Bilateral filtering with parameters σ_d = 3 pixels and σ_r = 50 intensity values is applied to the image in
figure 3 (a) to yield the image in figure 3 (b). Notice that most of the fine texture has been filtered
away, and yet all contours are as crisp as in the original image. Figure 3 (c) shows a detail of figure 3
(a), and figure 3 (d) shows the corresponding filtered version. The two onions have assumed a
graphics-like appearance, and the fine texture has gone. However, the overall shading is preserved,
because it is well within the band of the domain filter and is almost unaffected by the range filter.
Also, the boundaries of the onions are preserved.
Figure 3. (a) Original image; (b) filtered image; (c) detail of (a); (d) filtered version of the detail.
For black-and-white images, intensities between any two gray levels are still gray levels. As a
consequence, when smoothing black-and-white images with a standard low-pass filter, intermediate
levels of gray are produced across edges, thereby producing blurred images. With color images, an
additional complication arises from the fact that between any two colors there are other, often
rather different colors. For instance, between blue and red there are various shades of pink and
purple. Thus, disturbing color bands may be produced when smoothing across color edges. The
smoothed image does not just look blurred, it also exhibits odd-looking, colored auras around
objects.
Figure 4. (a) Original detail; (b) standard smoothing; (c) per-band edge-preserving smoothing; (d) bilateral smoothing.
Figure 4 (a) shows a detail from a picture with a red jacket against a blue sky. Even in this unblurred
picture, a thin pink-purple line is visible, and is caused by a combination of lens blurring and pixel
averaging. In fact, pixels along the boundary, when projected back into the scene, intersect both red
jacket and blue sky, and the resulting color is the pink average of red and blue. When smoothing,
this effect is emphasized, as the broad, blurred pink-purple area in figure 4 (b) shows.
To address this difficulty, edge-preserving smoothing could be applied to the red, green, and blue
components of the image separately. However, the intensity profiles across the edge in the three
color bands are in general different. Smoothing the three color bands separately results in an even
more pronounced pink and purple band than in the original, as shown in figure 4 (c). The pink-purple
band, however, is not widened as in the standard-blurred version of figure 4 (b).
A much better result can be obtained with bilateral filtering. In fact, a bilateral filter allows
combining the three color bands appropriately, and measuring photometric distances between
pixels in the combined space. Moreover, this combined distance can be made to correspond closely
to perceived dissimilarity by using Euclidean distance in the CIE-Lab color space. This color space is
based on a large body of psychophysical data concerning color-matching experiments performed by
human observers. In this space, small Euclidean distances are designed to correlate strongly with the
perception of color discrepancy as experienced by an "average" color-normal human observer. Thus,
in a sense, bilateral filtering performed in the CIE-Lab color space is the most natural type of filtering
for color images: only perceptually similar colors are averaged together, and only perceptually
important edges are preserved. Figure 4 (d) shows the image resulting from bilateral smoothing of
the image in figure 4 (a). The pink band has shrunk considerably, and no extraneous colors appear.
Figure 5. (a) Original image; (b) one iteration of bilateral filtering; (c) five iterations of bilateral filtering.
Figure 5 (c) shows the result of five iterations of bilateral filtering of the image in figure 5 (a). While a
single iteration produces a much cleaner image (figure 5 (b)) than the original, and is probably
sufficient for most image processing needs, multiple iterations have the effect of flattening the
colors in an image considerably, but without blurring edges. The resulting image has a much smaller
color map, and the effects of bilateral filtering are easier to see when displayed on a printed page.
Notice the cartoon-like appearance of figure 5 (c). All shadows and edges are preserved, but most of
the shading is gone, and no "new" colors are introduced by filtering.
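A minimal OpenCV sketch of bilateral filtering along these lines (the parameter values are illustrative):
import cv2
# Load an image (the filename matches the note below)
img = cv2.imread('input_image.jpg')
# d = 9: neighborhood diameter; sigmaColor: range sigma; sigmaSpace: domain sigma
filtered = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imwrite('bilateral_out.jpg', filtered)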
Remember to replace 'input_image.jpg' with the actual filename or path of your input image.
Feel free to adjust the parameters (kernel size, standard deviation, etc.) according to your specific
requirements and the characteristics of your images. Experimenting with different parameters will
help you achieve the desired results.
    4.6 Changing the Shape of Images:
    Changing the shape of an image typically involves resizing, cropping, or transforming it in some way.
    Here are some common operations you might perform using OpenCV in Python:
    Image Transformation involves the transformation of image data in order to retrieve information
    from the image or preprocess the image for further usage. In this tutorial we are going to
    implement the following image transformation:
        - Image Translation
        - Reflection
        - Rotation
        - Scaling
        - Cropping
        - Shearing in x-axis
        - Shearing in y-axis
    What is OpenCV?
    OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine
    learning software library. OpenCV was built to provide a common infrastructure for computer
    vision applications and to accelerate the use of machine perception in commercial products. By
    using it, one can process images and videos to identify objects, faces, or even the handwriting of a
    human. When it is integrated with various libraries, such as NumPy, Python is capable of
    processing the OpenCV array structure for analysis.
    Image Translation
    In computer vision and image processing, image translation is the rectilinear shift of an image
    from one location to another; in other words, translation is the shifting of an object's location.
 Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
rows, cols = img.shape
# Translation matrix: shift 100 px right and 50 px down
M = np.float32([[1, 0, 100], [0, 1, 50]])
dst = cv.warpAffine(img, M, (cols, rows))
cv.imshow('img', dst)
cv.waitKey(0)
cv.destroyAllWindows()
    In the above code, we imported the NumPy and OpenCV modules and read the image with the
    imread() function; the translation is then performed with the warpAffine() method:
    dst = cv.warpAffine(img, M, (cols, rows))
    In the first argument we passed the image. The second argument is the transformation matrix:
    there we give x = 100, which tells the function to shift the image 100 units to the right, and
    y = 50, which tells the function to shift the image 50 units downwards. In the third argument,
    where we mentioned the cols and rows, we told the function not to crop the image on either the
    x or y side.
    Output:
    Image Reflection
    Image reflection is used to flip the image vertically or horizontally. For reflection along the x-axis,
    we set the value of Sy to -1, Sx to 1, and vice-versa for the y-axis reflection.
 Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
rows, cols = img.shape
# Transformation matrix for reflection along the x-axis
M = np.float32([[1, 0, 0],
                [0, -1, rows],
                [0, 0, 1]])
reflected_img = cv.warpPerspective(img, M, (int(cols), int(rows)))
cv.imshow('img', reflected_img)
cv.imwrite('reflection_out.jpg', reflected_img)
cv.waitKey(0)
cv.destroyAllWindows()
Image Rotation
Image rotation is a common image processing routine with applications in matching, alignment,
and other image-based algorithms; in image rotation, the image is rotated by a definite angle. It is
used extensively in data augmentation, especially for image classification.
   Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
rows, cols = img.shape
img_rotation = cv.warpAffine(img,
                             cv.getRotationMatrix2D((cols/2, rows/2), 30, 0.6),
                             (cols, rows))
cv.imshow('img', img_rotation)
cv.imwrite('rotation_out.jpg', img_rotation)
cv.waitKey(0)
cv.destroyAllWindows()
    We have used the getRotationMatrix2D() function to build the matrix required by the warpAffine
    function: it produces a rotation by the required angle (here 30 degrees) combined with a
    shrinkage of the image by 40%.
    img_rotation = cv.warpAffine(img,
                     cv.getRotationMatrix2D((cols/2, rows/2), 30, 0.6),
                     (cols, rows))
    Output:
    Image Scaling
    Image scaling is a process used to resize a digital image. In image scaling we do one of two
    things: we either enlarge the image or we shrink it. OpenCV has a built-in function,
    cv2.resize(), for image scaling.
    Shrinking an image:
    img_shrinked = cv2.resize(image, (350, 300),
                  interpolation = cv2.INTER_AREA)
    Note: Here 350 and 300 are the width and height of the shrunk image respectively, since cv2.resize() takes the target size as (width, height)
    Enlarging Image:
    img_enlarged = cv2.resize(img_shrinked, None,
                  fx=1.5, fy=1.5,
                  interpolation=cv2.INTER_CUBIC)
 Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
# Shrink to 350x300 (width x height)
img_shrinked = cv.resize(img, (350, 300),
                         interpolation=cv.INTER_AREA)
cv.imshow('img', img_shrinked)
# Enlarge the shrunk image by a factor of 1.5 in each direction
img_enlarged = cv.resize(img_shrinked, None, fx=1.5, fy=1.5,
                         interpolation=cv.INTER_CUBIC)
cv.imshow('img', img_enlarged)
cv.waitKey(0)
cv.destroyAllWindows()
Output:
    Image Cropping
    Cropping is the removal of unwanted outer areas from an image.
    cropped_img = img[100:300, 100:300]
    OpenCV loads the image as a NumPy array, we can crop the image simply by indexing the array, in
    our case, we choose to get 200 pixels from 100 to 300 on both axes.
   Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
# Crop 200x200 pixels: rows 100 to 300 and columns 100 to 300
cropped_img = img[100:300, 100:300]
cv.imwrite('cropped_out.jpg', cropped_img)
cv.waitKey(0)
cv.destroyAllWindows()
Output:
    Shearing in x-axis
 Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
rows, cols = img.shape
# Shearing matrix for the x-axis (the 0.5 shear factor is illustrative)
M = np.float32([[1, 0.5, 0], [0, 1, 0], [0, 0, 1]])
sheared_img = cv.warpPerspective(img, M, (int(cols*1.5), int(rows*1.5)))
cv.imshow('img', sheared_img)
cv.waitKey(0)
cv.destroyAllWindows()
Output:
    Shearing in y-axis
 Python3
import numpy as np
import cv2 as cv
img = cv.imread('girlImage.jpg', 0)
rows, cols = img.shape
# Shearing matrix for the y-axis (the 0.5 shear factor is illustrative)
M = np.float32([[1, 0, 0], [0.5, 1, 0], [0, 0, 1]])
sheared_img = cv.warpPerspective(img, M, (int(cols*1.5), int(rows*1.5)))
cv.imshow('sheared_y-axis_out.jpg', sheared_img)
cv.waitKey(0)
cv.destroyAllWindows()
    Output:
    4.6.1. Resizing Images:
    Resizing is a straightforward way to change the shape of an image. PIL is the Python Imaging Library
    which provides the python interpreter with image editing capabilities. The Image module provides
    a class with the same name which is used to represent a PIL image. The module also provides a
    number of factory functions, including functions to load images from files, and to create new
    images.
    Image.resize() Returns a resized copy of this image.
Image Used:
 Python3
from PIL import Image
im = Image.open(r"C:\Users\System-Pc\Desktop\ybear.jpg")
width, height = im.size
# Crop a region of the image, then resize the cropped part
left = 4
top = height / 5
right = 154
bottom = 3 * height / 5
im1 = im.crop((left, top, right, bottom))
newsize = (300, 300)  # target size; this value is illustrative
im1 = im1.resize(newsize)
im1.show()
Output:
    Another example: here we use a different newsize value.
 Python3
from PIL import Image
im = Image.open(r"C:\Users\System-Pc\Desktop\ybear.jpg")
width, height = im.size
left = 6
top = height / 4
right = 174
bottom = 3 * height / 4
im1 = im.crop((left, top, right, bottom))
newsize = (400, 400)  # a different target size; this value is illustrative
im1 = im1.resize(newsize)
im1.show()
im1.show()
Output:
    Stepwise Implementation
    For this, we will take the image shown below.
   Python3
 import cv2
img = cv2.imread("test.jpeg")
print(type(img))
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
 Python3
import cv2
img = cv2.imread("test.jpeg")
print(type(img))
# Print the image shape: (rows, columns, channels)
print(img.shape)
Output:
image shape
   Python3
import cv2
img = cv2.imread("test.jpeg")
print(type(img))
# Crop by slicing the NumPy array: [rows, columns] (indices are illustrative)
crop = img[100:300, 100:300]
cv2.imshow('original', img)
cv2.imshow('cropped', crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
    The Pillow image processing library uses an inverse transformation for rotation. If the number of
    degrees specified for the rotation is not an integer multiple of 90 degrees, some pixel values will
    lie outside the image boundaries; such values will not be displayed in the output image.
    Method 1: Using the image processing library Pillow
 Python3
# Importing the Python image processing library
from PIL import Image
# Open the image to be rotated
Original_Image = Image.open("./gfgrotate.jpg")
# Rotate the image by 180 degrees
rotated_image1 = Original_Image.rotate(180)
# Rotate the image by 90 degrees using transpose
rotated_image2 = Original_Image.transpose(Image.ROTATE_90)
# Rotate the image by 60 degrees
rotated_image3 = Original_Image.rotate(60)
rotated_image1.show()
rotated_image2.show()
rotated_image3.show()
Output:
This image is rotated by 180 degrees.
    The rotate() method of the Python image processing library Pillow takes the number of degrees
    as a parameter and rotates the image counterclockwise by that number of degrees.
    Method 2: Using Open-CV to rotate an image by an angle in Python
    Python OpenCV is a well-known module for handling real-time applications related to computer
    vision. OpenCV works with the image processing library imutils, which deals with images.
    The imutils.rotate() function is used to rotate an image by an angle in Python.
 Python3
import cv2
import imutils
image = cv2.imread(r".\gfgrotate.jpg")
# Rotate by angle 45
Rotated_image = imutils.rotate(image, angle=45)
cv2.imshow("Rotated", Rotated_image)
# Rotate by angle 90
Rotated1_image = imutils.rotate(image, angle=90)
cv2.imshow("Rotated", Rotated1_image)
cv2.waitKey(0)
Output:
Syntax:
transpose(method)
    Keywords FLIP_TOP_BOTTOM and FLIP_LEFT_RIGHT are passed to the transpose method to flip the image.
    - FLIP_TOP_BOTTOM: returns the original image flipped vertically
    - FLIP_LEFT_RIGHT: returns the original image flipped horizontally
Approach
    - Import module
    - Open original image
    - Transform the image as required
    - Save the new transformed image
    Image in use:
 Python3
from PIL import Image
original_img = Image.open("original.png")
vertical_img = original_img.transpose(method=Image.FLIP_TOP_BOTTOM)
vertical_img.save("vertical.png")
original_img.close()
vertical_img.close()
Output:
 Python3
from PIL import Image
original_img = Image.open("original.png")
horz_img = original_img.transpose(method=Image.FLIP_LEFT_RIGHT)
horz_img.save("horizontal.png")
original_img.close()
horz_img.close()
    Output:
    Image Thresholding is an intensity transformation function in which the values of pixels
    below a particular threshold are reduced, and the values above that threshold are boosted. This
    generally results in a bilevel image at the end, where the image is composed of black and white
    pixels. Thresholding belongs to the family of point-processing techniques. In this article, you will
    learn how to perform Image Thresholding in OpenCV.
    There are various ways of performing thresholding (adaptive, inverse, etc.), but the primary
    focus of this article will be on binary thresholding; other thresholding methods are touched
    upon at the end. For binary thresholding, we use the cv2.THRESH_BINARY flag in
    the cv2.threshold function.
    For demonstration purposes, we would be using an image named test.jpg. Let’s see the code to
    show the image.
 Python3
import cv2
# Loading the image
img = cv2.imread(r"ex2.jpg")
cv2.imshow('Image', img)
cv2.waitKey(0)
Output:
Binary Thresholding
    The cv2.threshold function takes as arguments a source image, the threshold at which the cutoff
    takes place, the maximum intensity value represented by the color space, and the mode of
    thresholding; it returns an integer value (denoting the result of the operation) and an image
    object containing the resultant image after the processing.
 Python3
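# A minimal sketch of binary thresholding, consistent with the description
# above (the threshold 127 and maximum value 255 are illustrative choices)
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imshow('Binary Threshold', thresh)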
    Output:
    Binary – Inverse Thresholding
In this mode, the output is the inverse of the above output, i.e. white pixels become black and vice-versa.
 Python3
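# A minimal sketch of inverse binary thresholding (same illustrative
# parameters as above, with the inverse flag)
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('Inverse Binary Threshold', thresh)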
Output:
Truncate Thresholding
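In truncate thresholding, pixel values above the threshold are clipped to the threshold value, while values below it are left unchanged. A minimal sketch, with the same illustrative parameters as above:
 Python3
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_TRUNC)
cv2.imshow('Truncated Threshold', thresh)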
Output:
Zero Thresholding
   Python3
     ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO)
Output:
 Python3
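# This block presumably showed the inverted variant of zero thresholding;
# the flag below is an assumption based on the usual ordering of these examples
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO_INV)
cv2.imshow('Zero Inverted Threshold', thresh)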
Output:
The gradient of a function simply means the rate of change of a function. We will
use numdifftools to find the gradient of a function.
Examples:
Input : x^4+x+1
Output : Gradient of x^4+x+1 at x=1 is 4.99
Input : (1-x)^2+(y-x^2)^2
Output : Gradient of (1-x)^2+(y-x^2)^2 at (1, 2) is [-4. 2.]
Approach:
 - For a single-variable function, we can define it directly using a lambda, as stated below:
       g = lambda x: (x**4) + x + 1
 - For a multi-variable function, we define a function using def and pass an array x; it returns
   the multivariate function, as described below:
       def rosen(x):
           return (1-x[0])**2 + (x[1]-x[0]**2)**2
   where rosen is the name of the function and x is passed as an array. x[0] and x[1] are the
   array elements in the same order as defined in the array, i.e. the function defined above
   is (1-x)^2+(y-x^2)^2.
Similarly, we can define functions of more than two variables in the same manner as stated above.
Method used: Gradient()
Syntax:
nd.Gradient(func_name)
Example:
import numdifftools as nd
g = lambda x: (x**4) + x + 1
grad1 = nd.Gradient(g)([1])
print("Gradient of x^4+x+1 at x=1 is", grad1)
def rosen(x):
    return (1-x[0])**2 + (x[1]-x[0]**2)**2
grad2 = nd.Gradient(rosen)([1, 2])
print("Gradient of (1-x)^2+(y-x^2)^2 at (1, 2) is", grad2)
Output:
Gradient of x^4+x+1 at x=1 is 4.999999999999998
Gradient of (1-x)^2+(y-x^2)^2 at (1, 2) is [-4. 2.]
Python3
import cv2
import numpy as np
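# A minimal sketch of the gradient computation described below
# (the filename 'input_image.jpg' is a placeholder)
img = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)
# Sobel gradients in the x and y directions; ksize sets the kernel size
gradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
# Magnitude (Euclidean norm) and direction (in degrees) of the gradient
magnitude = np.sqrt(gradient_x**2 + gradient_y**2)
angle = np.arctan2(gradient_y, gradient_x) * (180 / np.pi)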
cv2.Sobel: Applies the Sobel operator to calculate the gradient in the x and y directions. The ksize
parameter specifies the size of the Sobel kernel.
np.sqrt(gradient_x**2 + gradient_y**2): Calculates the magnitude of the gradient using the
Euclidean norm.
np.arctan2(gradient_y, gradient_x) * (180 / np.pi): Calculates the angle of the gradient in degrees.
It's important to note that the gradient images (gradient_x and gradient_y) can have positive and
negative values. The magnitude image represents the overall strength of the gradient, and the angle
image represents the direction.
Experiment with different gradient operators (Sobel, Scharr, Prewitt) and parameters to achieve the
desired results. Gradients are often used in various applications, such as edge detection, feature
extraction, and image segmentation.
4.9 Performing Histogram Equalization:
Histogram equalization is a technique used in image processing to enhance the contrast of an image
by adjusting the intensity values. It redistributes the intensity values to cover the entire range,
making the histogram more uniform. Here's an example of how to perform histogram equalization
using OpenCV in Python:
Histogram equalization is a method in image processing of contrast adjustment using the image’s
histogram.
This method usually increases the global contrast of many images, especially when the usable data
of the image is represented by close contrast values. Through this adjustment, the intensities can
be better distributed on the histogram. This allows for areas of lower local contrast to gain a
higher contrast. Histogram equalization accomplishes this by effectively spreading out the most
frequent intensity values. The method is useful in images with backgrounds and foregrounds that
are both bright or both dark.
OpenCV has a function to do this, cv2.equalizeHist(). Its input is just a grayscale image and its
output is our histogram-equalized image.
Input Image :
# import OpenCV
import cv2
# import NumPy
import numpy as np
img = cv2.imread('F:\\do_nawab.png', 0)
# Apply histogram equalization
equ = cv2.equalizeHist(img)
# Stack the input and equalized images side by side
res = np.hstack((img, equ))
cv2.imshow('image', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output :
Python3
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load the image in grayscale (the filename is a placeholder)
image = cv2.imread('input_image.jpg', 0)
# Equalize the histogram
equalized_image = cv2.equalizeHist(image)
plt.subplot(1, 2, 1)
plt.title('Original Image')
plt.imshow(image, cmap='gray')
plt.subplot(1, 2, 2)
plt.title('Equalized Image')
plt.imshow(equalized_image, cmap='gray')
plt.show()
In this example:
cv2.equalizeHist: performs the histogram equalization on the grayscale image.
plt.imshow: displays the original and equalized images side by side using Matplotlib.
Note: Histogram equalization is often more effective when applied to grayscale images. If you're
working with color images, you may want to convert them to grayscale before equalization.
Histogram equalization is particularly useful for enhancing the visibility of details in images with poor
contrast. Experiment with this technique to see how it affects the appearance of different images.