
UNIT IV

Feature selection

Presented by
Mrs. S. Maria Seraphin Sujitha, M.E.,
Asst. Prof., ECE Dept.,
St. Xavier’s Catholic College of Engineering,
Chunkankadai.
UNIT IV REGISTRATION AND IMAGE FUSION

Registration - Preprocessing. Feature selection - points, lines, regions and templates. Feature correspondence - Point pattern matching, Line matching, Region matching, Template matching. Transformation functions - Similarity transformation and Affine transformation. Resampling - Nearest Neighbour and Cubic Splines. Image Fusion - Overview of image fusion, pixel fusion, wavelet based fusion, region based fusion.
STEPS IN IMAGE REGISTRATION
1. Preprocessing
2. Feature Selection
3. Feature Correspondence
4. Determination of a Transformation Function
5. Resampling
Feature Selection
To register two images, a number of features are selected from the images and correspondence is established between them.
 Features used in image registration are 1) corners, 2) lines, 3) curves, 4) templates, 5) regions, and 6) patches.
 The type of features selected in an image depends on the type of image provided.
 An image of a man-made scene often contains line segments, while a satellite image often contains contours and regions.
 In a 3-D image, surface patches and regions are often present.
 Templates are abundant in both 2-D and 3-D images and can be used as features to register images.
Feature Selection
Image features are unique image
properties that can be used to
establish correspondence between
two images.
FEATURE SELECTION - POINTS
Points are the most desired features.
Point features are also known as interest points, point landmarks, corner points, and control points.
Control points represent centers of
unique neighborhoods that contain
considerable image information.
Cont..
Neighborhoods with a large number of
high-gradient pixels are highly informative.
If the high-gradient pixels form a unique
pattern, such as a corner, the
neighborhood becomes unique.
Various methods for detecting corners in
an image have been developed.
 Corners in an image are determined by
locating points that have locally maximum
cornerness measures.
Cont..
 Various cornerness measures have been proposed.
 Assume Ix(x, y) and Iy(x, y) are the gradients of image I(x, y) with respect to x and y at (x, y), averaged over a small window centered at (x, y).
 Then det(C) can be used as a cornerness measure, where C is the inertia matrix built from the gradient products averaged over the window:
C = [ ⟨Ix²⟩   ⟨Ix Iy⟩
      ⟨Ix Iy⟩  ⟨Iy²⟩ ],
with ⟨·⟩ denoting the average over the window.
Cont..
det() and tr() denote the determinant
and the trace of a matrix, respectively.
The eigenvalues of matrix C are
indicators of the strength of the
gradients in directions normal to each
other in a window.
If both eigenvalues are large, it can be
concluded that a strong corner exists
within the window.
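A minimal numpy sketch of this cornerness measure, assuming a grayscale image stored as a 2-D array; the square averaging window (the slides use a circular one) and the names cornerness_map and box_avg are illustrative choices, not from the slides.

```python
import numpy as np

def cornerness_map(image, win=3):
    """det(C) at every pixel, where C is the 2x2 inertia matrix built
    from gradient products averaged over a (2*win+1)x(2*win+1) window."""
    image = image.astype(float)
    Iy, Ix = np.gradient(image)               # finite-difference gradients

    def box_avg(a):
        # average of `a` over the square window centred at each pixel
        k = 2 * win + 1
        padded = np.pad(a, win, mode='edge')
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    Sxx = box_avg(Ix * Ix)
    Syy = box_avg(Iy * Iy)
    Sxy = box_avg(Ix * Iy)
    return Sxx * Syy - Sxy * Sxy              # det(C) per pixel
```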
Algorithm
INPUT is a list used to save all detected
corners, OUTPUT is a list used to save the
strong and widely dispersed corners.
n shows the number of required corners, m
shows the number of detected corners.
L is an image where the value at an edge
point represents the smaller of the two
eigenvalues of matrix C at the point.
σ is the standard deviation of the Gaussian
smoother used to detect the edges.
1: Find the image edges and compute gradients of the image with respect to x and y. Also, create empty INPUT and OUTPUT lists, and let m = 0.
2: At each edge point:
2.1: Compute the inertia matrix C using image gradients in the circular area of radius 3σ. If a portion of the window falls outside the image, use only the portion that falls inside the image.
2.2: Determine the eigenvalues of the matrix. Suppose they are λ1 and λ2, and λ1 > λ2. Find the eigenvalues for windows centered at all image edges and save their λ2s in image L at the corresponding edge positions.
3: Find edge points that have locally maximum λ2s by examining 3 × 3 neighborhoods in image L. Consider such edge points as the corners and save them in INPUT.
Cont..
4: Sort INPUT according to λ2 from the largest to the smallest.
5: Starting from the top of INPUT, move corners from INPUT to OUTPUT one at a time. After moving a corner, increment m by 1 and remove all corners in INPUT that are within distance 7σ of it. Repeat the process until either all n required corners are found or no more corners remain in INPUT.
6: Return the OUTPUT list and m; m is the number of detected corners. (A sketch of steps 4-6 is given below.)
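A short Python sketch of steps 4-6, assuming the corner positions and their λ2 values have already been computed; select_corners and its argument names are illustrative.

```python
import numpy as np

def select_corners(corners, lambda2, n, sigma):
    """corners: list of (x, y); lambda2: the matching smaller eigenvalues.
    Returns at most n corners, each at least 7*sigma from those already
    selected, together with m, the number of corners returned."""
    order = np.argsort(lambda2)[::-1]                 # strongest first
    remaining = [tuple(corners[i]) for i in order]    # sorted INPUT list
    output, min_dist = [], 7 * sigma
    while remaining and len(output) < n:
        best = remaining.pop(0)                       # top of INPUT
        output.append(best)                           # move to OUTPUT
        remaining = [c for c in remaining             # drop nearby corners
                     if np.hypot(c[0] - best[0], c[1] - best[1]) >= min_dist]
    return output, len(output)
```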
Cont..
The correlation coefficient can be used as a means to determine the similarity between two windows.
The cross-correlation coefficient between windows W1 and W2 is computed from
CC(W1, W2) = (W1 · W2) / (||W1|| ||W2||),
where · denotes the vector inner product and ||W|| denotes the magnitude of vector W.
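A minimal numpy version of this coefficient; the two windows are flattened into vectors, and the name cross_correlation is an illustrative choice.

```python
import numpy as np

def cross_correlation(W1, W2):
    """CC(W1, W2) = (W1 . W2) / (||W1|| ||W2||) for equal-size windows."""
    v1 = W1.ravel().astype(float)
    v2 = W2.ravel().astype(float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```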
CONT..
Denoting the window centered at (x, y) by W(x, y), the uniqueness measure U(x, y) is defined by
U(x, y) = 1 − MAX {CC[W(x, y), W(x + x′, y + y′)]},
where x′ = −1, 0, 1; y′ = −1, 0, 1; and x′ and y′ are not both 0.
The following algorithm first determines the image
edges and then locates those edges that are locally
unique and are centered at image windows with
considerable image detail.
1: Determine the image edges.
2: From among the detected edges, select those that are centered at highly informative circular windows of radius R and save in INPUT.
3: From among the edges in INPUT, keep only those that are centered at unique circular windows of radius R and are also widely dispersed, as follows.
3.1: Order the edges in INPUT according to their uniqueness from the highest to the lowest.
3.2: Initially, clear OUTPUT to an empty list and let m = 0.
3.3: If m = n or if INPUT is empty, stop; otherwise, continue.
3.4: Remove the edge from the top of INPUT, append it to OUTPUT, and increment m by 1.
3.5: Multiply the uniqueness of the edges in INPUT by H = 1 − exp(−di²/D²) and sort INPUT again, where di is the distance of the ith edge in INPUT to the edge just moved from INPUT to OUTPUT, and D is a parameter controlling the spacing of the selected control points. Go back to step 3.3. (A sketch of steps 3.2-3.5 is given below.)
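A short numpy sketch of steps 3.2-3.5, assuming the candidate edge positions and their uniqueness values are already available; disperse_points and its parameters are illustrative names.

```python
import numpy as np

def disperse_points(points, uniqueness, n, D):
    """Repeatedly take the most unique remaining point and damp the
    uniqueness of the others by H = 1 - exp(-d^2 / D^2)."""
    pts = np.asarray(points, dtype=float)
    u = np.asarray(uniqueness, dtype=float).copy()
    alive = np.ones(len(pts), dtype=bool)               # still in INPUT
    output = []
    while alive.any() and len(output) < n:
        i = np.flatnonzero(alive)[np.argmax(u[alive])]  # top of sorted INPUT
        output.append(tuple(pts[i]))                    # move to OUTPUT
        alive[i] = False
        d2 = np.sum((pts[alive] - pts[i]) ** 2, axis=1)
        u[alive] *= 1.0 - np.exp(-d2 / D ** 2)          # the H factor
    return output
```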
FEATURE SELECTION - LINES
Images of man-made scenes contain abundant line features, which can be used in image registration.
1. Line detection using the Hough transform
2. Least-squares line fitting
3. Line detection using image gradients
1. Line detection using the Hough transform
 It can locate regular curves like straight lines, circles, parabolas, ellipses, etc.
 A line in the xy space, y = mx + b, is represented by the point (b, m) in the parameter space.
 Parameter m is the slope and parameter b is the y-intercept of the line.
Edge Linking and Boundary Detection: Local Processing

 Two properties of edge points are useful for edge linking:
◦ the strength (or magnitude) of the detected edge points
◦ their directions (determined from gradient directions)
 This is usually done in local neighborhoods.
 Adjacent edge points with similar magnitude and direction are linked.
 For example, an edge pixel with coordinates (x0, y0) in a predefined neighborhood of (x, y) is similar to the pixel at (x, y) if
|∇f(x, y) − ∇f(x0, y0)| ≤ E, E: a nonnegative magnitude threshold, and
|α(x, y) − α(x0, y0)| ≤ A, A: a nonnegative angle threshold.
(A linking sketch is given after the example below.)


Edge Linking and Boundary Detection: Local Processing - Example

In this example, we can find the license plate candidate after the edge-linking process.
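A minimal sketch of local edge linking under the magnitude/direction criteria above, assuming an edge mask plus gradient magnitude and direction arrays are given; the flood-fill grouping and the names link_edges and similar are illustrative, not the slides' exact procedure.

```python
import numpy as np

def similar(mag, ang, p, q, E, A):
    """Pixels p and q are linkable if their gradient magnitudes differ by
    at most E and their directions (radians) by at most A."""
    da = abs(ang[p] - ang[q])
    da = min(da, 2 * np.pi - da)              # wrap the angle difference
    return abs(mag[p] - mag[q]) <= E and da <= A

def link_edges(edge_mask, mag, ang, E, A):
    """Label 8-connected groups of edge pixels with similar magnitude/direction."""
    labels = -np.ones(edge_mask.shape, dtype=int)
    next_label = 0
    H, W = edge_mask.shape
    for y in range(H):
        for x in range(W):
            if not edge_mask[y, x] or labels[y, x] >= 0:
                continue
            stack, labels[y, x] = [(y, x)], next_label
            while stack:                      # flood fill over similar neighbours
                cy, cx = stack.pop()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < H and 0 <= nx < W and edge_mask[ny, nx]
                                and labels[ny, nx] < 0
                                and similar(mag, ang, (cy, cx), (ny, nx), E, A)):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
            next_label += 1
    return labels
```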
Edge Linking and Boundary Detection: Global Processing via the Hough Transform

 Hough transform: a way of finding edge points in an image that lie along a straight line.
 Example: xy-plane vs. ab-plane (parameter space), where yi = a·xi + b.
Edge Linking and Boundary Detection: Global Processing via the Hough Transform (cont.)

 The Hough transform consists of finding all pairs of values of ρ and θ which satisfy the equation x cos θ + y sin θ = ρ for lines that pass through (x, y).
 These are accumulated in what is basically a 2-dimensional histogram.
 When plotted, these pairs of ρ and θ will look like a sine wave. The process is repeated for all appropriate (x, y) locations.
• Hough transform: handling vertical lines
– Through the normal representation
– Instead of straight lines, there are sinusoidal curves in the parameter space
– The number of intersecting sinusoids is accumulated; the value Q in accumulator cell A(i, j) gives the number of collinear points lying on the line x cos θ + y sin θ = ρ
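A compact numpy sketch of this accumulation step for x cos θ + y sin θ = ρ, assuming edge points are given as (x, y) pairs; hough_lines and the resolution parameters n_theta and n_rho are illustrative choices.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180, n_rho=400):
    """Fill the accumulator A(rho, theta); each edge point votes along one sinusoid."""
    h, w = shape
    diag = np.hypot(h, w)                                # largest possible |rho|
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    A = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                      # one sinusoid per point
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        A[idx, np.arange(n_theta)] += 1
    return A, rhos, thetas       # local maxima of A give the detected lines
```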
Edge Linking and Boundary Detection: Hough Transform Example

[Figure: the intersections of the sinusoidal curves correspond to the collinear point sets {1, 3, 5}, {2, 3, 4}, and {1, 4}.]
• Summary of Hough transform for edge
linking
– Compute the gradient
– Specify subdivisions in the parametric plane
– Examine the counts of the accumulator cells
– Examine the continuity relationship between
pixels in a chosen cell
Edge Linking and Boundary Detection: Hough Transform Example (cont.)
Thresholding
 Assumption: the range of intensity levels covered by objects of interest is different from the background.
g(x, y) = 1 if f(x, y) > T
g(x, y) = 0 if f(x, y) ≤ T
Single threshold vs. multiple thresholds (a small example is given below).
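A minimal numpy illustration of single and multiple thresholding; the function names and the two-threshold labelling are illustrative.

```python
import numpy as np

def threshold(f, T):
    """Single threshold: g(x, y) = 1 where f(x, y) > T, else 0."""
    return (f > T).astype(np.uint8)

def threshold_multi(f, T1, T2):
    """Multiple thresholds: label pixels 0 (f < T1), 1 (T1 <= f < T2), 2 (f >= T2)."""
    return np.digitize(f, bins=[T1, T2]).astype(np.uint8)
```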


Cont..
Lines in the image space may be described in polar form:
ρ = (x − xc) cos(θ) + (y − yc) sin(θ),
where ρ is the distance of the line to the image center, (xc, yc) are the coordinates of the image center, and θ is the angle of the normal to the line with the x axis.
Questions
PART A
1. What are the steps followed in image registration?
2. Define image registration.
3. Define the inverse filter. What are its uses?
4. Define cornerness measure.
PART B
1. Explain the steps in image registration. (8)
2. Explain the preprocessing techniques in image registration.
3. Explain the algorithm to find the correspondence between the lines of the sensed and reference images.
Least-squares line fitting
The polar equation of a line,
ρ = (x − xc) cos(θ) + (y − yc) sin(θ),    (1)
can equivalently be written as
Ax + By + C = 0,
where A = cos(θ), B = sin(θ), and C = −(xc cos θ + yc sin θ + ρ).
Cont…
Assume the given points are {(xi, yi) : i = 1, . . . , n} and the line to be estimated is Ax + By + C = 0. The distance of point (xi, yi) to the line is given by
di = |Axi + Byi + C|,
with the condition that A² + B² = 1.
We seek the line that minimizes the sum of squared distances from the points to the line,
E² = Σi di² = Σi (Axi + Byi + C)².
To find the A, B, and C that minimize E², we determine the partial derivatives of E² with respect to A, B, and C, set them to zero, and solve the equations for A, B, and C.
Cont..
 Two solutions are obtained: one minimizing the sum of squared distances and one maximizing it. Among the two solutions, the one producing the smaller E² is selected to represent the line. (A sketch of this fit is given below.)
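A short numpy sketch of this constrained fit: setting the derivatives of E² to zero under A² + B² = 1 reduces to an eigenvalue problem on the 2 × 2 scatter matrix of the centered points, whose two eigenvectors give the minimizing and maximizing solutions. The name fit_line_tls is illustrative.

```python
import numpy as np

def fit_line_tls(points):
    """Fit Ax + By + C = 0 with A^2 + B^2 = 1 by minimizing the sum of
    squared perpendicular distances E^2."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    d = pts - mean
    S = d.T @ d                          # 2x2 scatter matrix of centred points
    vals, vecs = np.linalg.eigh(S)       # eigenvalues in ascending order
    A, B = vecs[:, 0]                    # eigenvector of the smaller eigenvalue
    C = -(A * mean[0] + B * mean[1])     # gives the minimum-E^2 solution
    return A, B, C

# E2 check: sum((A * x + B * y + C) ** 2 for x, y in points)
```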
Line detection using image gradients
 If the lines to be detected separate regions of different properties from each other, the lines can be found from the high-gradient pixels.
 Image gradients have magnitude and direction.
 An algorithm that detects lines in an image by partitioning the image into regions of different gradient directions and fitting a line to each region by the weighted least-squares method is described below.
Algorithm
nt is the minimum region size used in line fitting and α is the directional accuracy of the lines being determined.
1: Determine the gradient magnitudes and directions of pixels in the image.
2: Partition the image into regions with gradient directions in the ranges [−α, α], [α, 3α], . . . , [π/2 − α, π/2 + α], . . .
3: Remove regions containing fewer than nt pixels and fit a line to each remaining region by the weighted least-squares method, where the weight wi of pixel (xi, yi) is its gradient magnitude in the region under consideration.
4: For each line, draw the portion of the line falling inside the region from which it was obtained. (A sketch of steps 2-3 is given below.)
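A simplified numpy sketch of steps 2-3, assuming gradient magnitude and direction arrays are given. It groups pixels only by direction bin rather than forming connected regions, and the names, together with the mag_thresh parameter, are illustrative.

```python
import numpy as np

def weighted_line_fit(xs, ys, w):
    """Weighted total least squares; w_i is the gradient magnitude of pixel i."""
    w = np.asarray(w, dtype=float)
    mx, my = np.average(xs, weights=w), np.average(ys, weights=w)
    dx, dy = xs - mx, ys - my
    S = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                  [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    A, B = np.linalg.eigh(S)[1][:, 0]            # normal of the best-fit line
    return A, B, -(A * mx + B * my)

def lines_from_gradients(mag, ang, alpha, nt, mag_thresh=0.0):
    """Bin pixels by gradient direction (bin width 2*alpha) and fit one
    weighted line to each bin with at least nt strong pixels."""
    bins = np.floor((ang + alpha) / (2 * alpha)).astype(int)
    lines = []
    for b in np.unique(bins):
        ys, xs = np.nonzero((bins == b) & (mag > mag_thresh))
        if len(xs) >= nt:
            lines.append(weighted_line_fit(xs.astype(float),
                                           ys.astype(float), mag[ys, xs]))
    return lines
```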
REGIONS
Elaborate image segmentation methods may be needed to extract nonhomogeneous regions, such as:
 Thresholding
 Region-based segmentation
Templates
Templates represent locally unique
and highly informative image
neighborhoods.
Neighborhoods contained in
rectangular or circular windows can be
used as templates.
THANK YOU
