BACHELOR OF TECHNOLOGY
in
Electronics & Communication Engineering
ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY AND SCIENCES
(UGC AUTONOMOUS)
(Permanently Affiliated to AU, Approved by AICTE and Accredited by NBA & NAAC with ‘A’
Grade)
Sangivalasa, Bheemili Mandal, Visakhapatnam dist. (A.P)
CERTIFICATE
This is to certify that the industrial training report entitled "Detection of Leg Fracture in X-Ray Images using Hough Transform" submitted by B. Deva Harshitha (316126512004), D. V. Guru Saran (316126512013), Y. Ravi Teja (316126512060) and E. Manohar (316126512017) in partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in Electronics & Communication Engineering of Andhra University, Visakhapatnam, is a record of bonafide work carried out under my supervision.
ACKNOWLEDGEMENT
We are grateful to Dr. V. Rajya Lakshmi, Head of the Department, Electronics and Communication Engineering, for granting us permission and providing the required facilities for the completion of the industrial training work.
We are very much thankful to the Principal and Management, ANITS, Sangivalasa, for their encouragement and cooperation in carrying out this work.
We would like to express our deep gratitude to Mr. A. Siva Kumar, Assistant Professor, Department of ECE, ANITS, for his guidance. We express our thanks to all the teaching faculty of the Department of ECE, whose encouragement helped us in the accomplishment of our industrial training.
We would like to thank our parents, friends and classmates for their encouragement throughout our industrial training period. Last but not least, we thank everyone who supported us directly or indirectly in completing this industrial training successfully.
CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
CHAPTER 1 Introduction
1.1 Digital Image Processing System
1.1.1 Image processing system
1.1.2 Image processing fundamentals
1.1.2.1 Fundamental steps in Image processing
1.1.2.2 Image Types
1.1.2.3 Image processing Goals
1.1.2.4 Applications of Image processing
1.2 Objective
2.3.5 LoG edge detection
2.3.6 Result
2.4 Hough transform
2.4.1 Introduction
2.4.2 Hough transform – the input
2.4.3 Input to Hough – thresholded edge image
2.4.4 Hough transform – basic idea
2.4.4.1 Hough transform – algorithm
2.4.4.2 Hough transform – polar representation of lines
2.4.4.3 Hough transform – algorithm using polar representation of lines
2.4.5 Advantages and Disadvantages
2.5 Methodologies
2.5.1 Module Names
CHAPTER 3 Software Specifications
3.1 Introduction
3.2 Features in MATLAB
3.2.1 Interfacing with other Languages
3.2.2 Analyzing and Accessing data
3.2.3 Performing Numeric Computation
CHAPTER 4 Results
4.1 Implementation
4.2 Snapshots
4.2.1 Original X-ray images
4.2.2 Filtered Image
4.2.3 Edge detection image
4.2.4 Hough-thresholded image
4.2.5 Final Result
CHAPTER 5 Applications
CHAPTER 6 Conclusion
REFERENCES
ABSTRACT
Bone fracture is a common problem in human beings; it occurs when high pressure is applied to the bone, in simple accidents, and also due to osteoporosis and bone cancer. Image processing techniques are useful in many applications such as biology, security, satellite imagery, personal photography and medicine. Procedures such as image enhancement, image segmentation and feature extraction are used in the fracture detection system. In this project we use the Canny edge detection method for segmentation, as the Canny method extracts accurate edge information from the bone image. The main aim of this project is to detect human lower-leg bone fractures from X-ray images. The proposed system has three steps, namely preprocessing, segmentation and fracture detection. In the feature extraction step, the Hough transform technique is used for line detection in the image. Results from various simulations show that the proposed system is accurate and efficient.
LIST OF FIGURES

LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
Digital image processing is the use of computer algorithms to perform image processing on digital images. The 2-D continuous image is divided into N rows and M columns; the intersection of a row and a column is called a pixel. The image can also be a function of other variables, including depth, color and time. An image given in the form of a transparency, slide, photograph or an X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.
DIGITIZER
Digitizing or digitization is the representation of an object, image, sound, document or signal (usually an analog signal) by a discrete set of its points or samples. Digital information exists as one of two digits, either 0 or 1; these are known as bits.
An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization can be done by a scanner, or by a video camera connected to a frame-grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations. Common digitizers include:
Microdensitometer
Flying-spot scanner
Image dissector
Vidicon camera
Photosensitive solid-state arrays
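The sampling-and-quantization step behind digitization can be sketched in a few lines. This is an illustrative Python sketch with names of our own choosing (the report's own implementation is in MATLAB): it maps a continuous intensity to one of a fixed number of discrete codes.

```python
def quantize(sample, levels=256, vmin=0.0, vmax=1.0):
    """Map a continuous intensity in [vmin, vmax] to one of `levels` integer codes."""
    sample = min(max(sample, vmin), vmax)   # clip out-of-range values
    step = (vmax - vmin) / levels           # width of one quantization cell
    return min(int((sample - vmin) / step), levels - 1)
```

With the default 256 levels, each quantized value fits in the eight bits of one byte, which is exactly the grayscale representation discussed later in this chapter.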
DIGITAL COMPUTER
A computer is an electronic device that accepts raw data and processes it according to a set of instructions to produce the desired result. Mathematical processing of the digitized image, such as convolution, averaging, addition and subtraction, is done by the computer.
MASS STORAGE
Mass storage devices used in desktop and most server computers typically have their data organized in a file system. The secondary storage devices normally used are floppy disks, CD-ROMs, etc.
OPERATOR CONSOLE
The operator console consists of equipment and arrangements for verification of intermediate results and for alterations in the software as and when required. The operator is also able to check for any resulting errors and to enter the requisite data.
DISPLAY
Popular display devices produce spots (display elements) for each pixel. Spots may be binary (e.g., monochrome LCD), achromatic (e.g., so-called black-and-white, actually grayscale for intensity), pseudo-color or false-color (e.g., for intensity or hyperspectral data), or true color (color data displayed as such).
Digital image processing refers to processing of the image in digital form. Modern cameras may capture the image directly in digital form, but generally images originate in optical form. They are captured by video cameras and digitized; the digitization process includes sampling and quantization. These images are then processed by at least one, though not necessarily all, of the following fundamental processes:
1. Image acquisition
2. Image preprocessing
3. Image segmentation
4. Image representation
5. Image description
6. Image recognition
7. Image interpretation
IMAGE ACQUISITION
First, we need to produce a digital image from a paper document, for example an envelope. This can be done using either a CCD camera or a scanner.
IMAGE PREPROCESSING
This is the step taken before the major image processing task. The problem here is to
perform some basic tasks in order to render the resulting image more suitable for the job to
follow. In this case it may involve enhancing the contrast, removing noise, or identifying regions
likely to contain the postcode.
IMAGE SEGMENTATION
Segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the
representation of an image into something that is more meaningful and easier to analyze. Image
segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.
More precisely, image segmentation is the process of assigning a label to every pixel in an image
such that pixels with the same label share certain visual characteristics.
IMAGE REPRESENTATION
Image representation is the process of converting the input data to a form suitable for computer processing.
IMAGE DESCRIPTION
Image description is the process of extracting features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
IMAGE RECOGNITION
Image recognition is the process of assigning a label to an object based on the information provided by its descriptors.
IMAGE INTERPRETATION
Image interpretation is the process of assigning meaning to an ensemble of recognized objects.
IMAGE TYPES
Images are commonly classified into four types:
1. Binary image
2. Grayscale image
3. Indexed image
4. True color or RGB image
BINARY IMAGE
Each pixel is just black or white. Since there are only two possible values for each pixel (0 or 1), we need only one bit per pixel.
GRAYSCALE IMAGE
Each pixel is a shade of gray, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte. Other grayscale ranges are used, but generally they are a power of 2.
INDEXED IMAGE
An indexed image consists of an array and a color map matrix. The pixel values in the
array are direct indices into a color map. By convention, this documentation uses the variable
name X to refer to the array and map to refer to the color map.
TRUE COLOR OR RGB IMAGE
Each pixel has a particular color, described by the amounts of red, green and blue in it. If each of these components has a range 0–255, this gives a total of 256³ = 16,777,216 different possible colors. Such an image is a "stack" of three matrices, representing the red, green and blue values for each pixel; this means that for every pixel there correspond three values.
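The relationship between these image types can be illustrated with a small Python sketch. The helper names are ours, and the luminance weights 0.299/0.587/0.114 are the commonly used ITU-R BT.601 values, assumed here for illustration:

```python
def rgb_to_gray(pixel):
    """True-color pixel (r, g, b), each 0..255 -> one grayscale intensity 0..255."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def gray_to_binary(gray, threshold=128):
    """Grayscale intensity -> 1-bit binary value via a fixed threshold."""
    return 1 if gray >= threshold else 0
```

A white RGB pixel (255, 255, 255) maps to grayscale 255 and then to binary 1, tracing the three-values-per-pixel, one-byte-per-pixel and one-bit-per-pixel representations described above.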
In virtually all image processing applications, however, the goal is to extract information
from the image data. Obtaining the information desired may require filtering, transforming,
coloring, interactive analysis, or any number of other methods.
To be somewhat more specific, one can generalize most image processing tasks to be
characterized by one of the following categories:
1. Image enhancement
2. Image restoration
3. Image analysis
4. Feature extraction
5. Image registration
6. Image compression
7. Image synthesis
IMAGE ENHANCEMENT
This simply means improving the image being viewed so that it is better suited to the (machine or human) interpreter's visual system. Image enhancement operations include contrast adjustment, noise-suppression filtering, application of pseudo-color, edge enhancement, and many others.
IMAGE RESTORATION
The purpose of image restoration is to "compensate for" or "undo" defects which degrade
an image. Degradation comes in many forms such as motion blur, noise, and camera misfocus. In
cases like motion blur, it is possible to come up with a very good estimate of the actual blurring
function and "undo" the blur to restore the original image. In cases where the image is corrupted
by noise, the best we may hope to do is to compensate for the degradation it caused.
IMAGE ANALYSIS
Image analysis is the extraction of meaningful information from images. Image analysis operations produce numerical or graphical information based on characteristics of the original image: they break the image into objects and then classify them, depending on the image statistics. Common operations are extraction and description of scene and image features, automated measurements, and object classification. Image analysis is mainly used in machine vision applications.
FEATURE EXTRACTION
Feature extraction involves simplifying the amount of resources required to describe a large set of data accurately. When performing analysis of complex data, one of the major problems stems from the number of variables involved. Analysis with a large number of variables generally requires a large amount of memory and computation power, or a classification algorithm which overfits the training sample and generalizes poorly to new samples. Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy.
IMAGE REGISTRATION
Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. It geometrically aligns two images: the reference and sensed images. The differences between the images are introduced by the different imaging conditions. Image registration is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources, as in image fusion, change detection, and multichannel image restoration.
Typically, registration is required in remote sensing (multispectral classification,
environmental monitoring, change detection, image mosaicking, weather forecasting, creating
super-resolution images, integrating information into geographic information systems (GIS)), in
medicine (combining computer tomography (CT) and NMR data to obtain more complete
information about the patient, monitoring tumor growth, treatment verification, comparison of
the patient’s data with anatomical atlases), in cartography (map updating), and in computer
vision (target localization, automatic quality control), to name a few.
IMAGE COMPRESSION
The objective of image compression is to reduce irrelevance and redundancy of the image
data in order to be able to store or transmit data in an efficient form. Image compression may be
lossy or lossless. Lossless compression is preferred for archival purposes and often for medical
imaging, technical drawings, clip art, or comics. This is because lossy compression methods,
especially when used at low bit rates, introduce compression artifacts. Lossy methods are
especially suitable for natural images such as photographs in applications where minor
(sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit
rate. The lossy compression that produces imperceptible differences may be called visually
lossless.
IMAGE SYNTHESIS
Image synthesis operations create images from other images or non-image data. Image
synthesis operations generally create images that are either physically impossible or impractical
to acquire.
1.1.2.4 APPLICATIONS OF IMAGE PROCESSING
Image processing has an enormous range of applications; almost every area of science
and technology can make use of image processing methods. Here is a short list just to give some
indication of the range of image processing applications.
MEDICINE
Inspection and interpretation of images obtained from X-rays, MRI or CAT scans, and analysis of cell images and chromosome karyotypes. In medical applications, one is concerned with processing of chest X-rays, cineangiograms, projection images of transaxial tomography and other medical images that occur in radiology, nuclear magnetic resonance (NMR) and ultrasonic scanning. These images may be used for patient screening and monitoring or for detection of tumors or other disease in patients.
AGRICULTURE
Satellite/aerial views of land are used, for example, to determine how much land is being used for different purposes, or to investigate the suitability of different regions for different crops; inspection of fruit and vegetables distinguishes good, fresh produce from old.
DOCUMENT PROCESSING
It is used in scanning and transmission for converting paper documents to a digital image form, compressing the image, and storing it on magnetic tape. It is also used in document reading for automatically detecting and recognizing printed characters.
RADAR IMAGING SYSTEM
Radar and sonar images are used for detection and recognition of various types of targets
or in guidance and maneuvering of aircraft or missile systems.
DEFENSE/INTELLIGENCE
It is used in reconnaissance photo-interpretation for automatic interpretation of earth
satellite imagery to look for sensitive targets or military threats and target acquisition and
guidance for recognizing and tracking targets in real-time smart-bomb and missile-guidance
systems.
1.2 OBJECTIVE
The motivations of this system are: (i) to save time for patients, (ii) to lower the workload of doctors by screening out the easy cases, and (iii) to reduce human errors.
Different types of medical imaging tools are available for detecting different types of abnormalities, such as X-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Ultrasound.
X-rays and CT are most frequently used in fracture diagnosis because they are the fastest and easiest way for doctors to study the injuries of bones and joints. Doctors usually use X-ray images to determine whether a fracture exists and where it is located. The database consists of DICOM images.
In modern hospitals, medical images are stored in the standard DICOM (Digital Imaging and Communications in Medicine) format, which embeds text into the images. Any attempt to retrieve and display these images must go through PACS (Picture Archiving and Communication System) hardware.
Depending on human experts alone for such a critical matter can cause intolerable errors.
[2] Mahmoud Al-Ayyoub, Ismail Hmeidi, Haya Rababah, Detecting Hand Bone Fractures in X-Ray Images, Jordan University of Science and Technology, Irbid, Jordan, Vol. 4, No. 3, September 2013.
In this paper, the aim is to propose an efficient system for quick and accurate diagnosis of hand bone fractures based on the information gained from X-ray images. The general framework of the proposed system is as follows. It starts by taking a set of labeled X-ray hand images that contain normal as well as fractured hands and enhances them by applying filtering algorithms to remove noise. Then, it detects the edges in each image using edge detection methods. After that, it converts each image into a set of features using tools such as the Wavelet and Curvelet transforms. The next step is to build classification algorithms based on the extracted features. Finally, in the testing phase, the performance and accuracy of the proposed system are evaluated.
[4] S. K. Mahendran, S. Santhosh Baboo, An Ensemble System for Automatic Fracture Detection, IACSIT International Journal of Engineering and Technology, Vol. 4, No. 1, February 2012.
The main focus of the present research work is to automatically detect fractures in long bones from plain diagnostic X-rays using a series of sequential steps. Three classifiers, namely Back-Propagation Neural Networks, Support Vector Machines and Naïve Bayes, were considered. Two feature categories, texture and shape, were collected from the X-ray image. In total, 11 features were extracted from the image and used to detect fractured bones through training and testing of the classifiers. From these three base classifiers, four fusion classifiers were proposed. Experimental results proved that the fusion of classifiers is efficient for fracture detection and achieved maximum accuracy. The time complexity of the algorithms was also on par with industry requirements. One difficulty encountered with fusion classification is identifying the classifier that produces the best result; this process could be automated in future, and the computer-aided diagnosis program could intelligently identify the best combination of classifier and features to produce the highest performance. The present research work considers only simple fractures, and experimental results showed that performance degrades for fractures parallel to the bone edge, which are not detected as well as those perpendicular to the bone edge. Future research can consider these challenges.
[5] Rashmi, Mukesh Kumar, and Rohini Saxena, Algorithm and Technique on Various Edge Detection: A Survey, Department of Electronics and Communication Engineering, SHIATS, Allahabad, UP, India, Vol. 4, No. 3, June 2013.
In this paper the authors studied and evaluated different edge detection techniques. They found that the Canny edge detector gives better results than the others, with several positive points: it is less sensitive to noise, adaptive in nature, resolves the problem of streaking, provides good localization and detects sharper edges. It is considered the optimal edge detection technique, hence a great deal of work and improvement on this algorithm has been done, and further improvements are possible: an improved Canny algorithm can detect edges in color images without converting them to gray images, and improved Canny algorithms support automatic extraction of moving objects in image guidance. It finds practical application in runway detection and tracking for unmanned aerial vehicles, brain MRI images, cable insulation layer measurement, real-time facial expression recognition, edge detection of river regimes, and automatic multiple-face tracking and detection. The Canny edge detection technique is used in license plate recognition systems, an important part of intelligent traffic systems (ITS), and finds practical application in traffic management, public safety and military departments. It also finds application in the medical field, as in ultrasound, X-rays, etc.
[6] Mahmoud Al-Ayyoub, Duha Al-Zghool, Determining the Type of Long Bone Fractures in X-Ray Images, Jordan University of Science & Technology, Irbid 22110, Jordan, E-ISSN: 2224-3402, Issue 8, Volume 10, August 2013.
In this paper, the proposed system uses image processing and machine learning techniques to accurately diagnose the existence and type of fracture in long bones. Specifically, it uses supervised learning, in which the system classifies new instances based on a model built from a set of labeled examples (in this work, these are simply the X-ray images, each with a normal/abnormal label) along with their distinguishing features (computed via image processing techniques). To be more specific, in the first step, a set of filtering algorithms is used to smooth the images and remove different types of noise such as blurring, darkness, brightness, Poisson and Gaussian noise. It then uses various tools to extract useful and distinguishing features based on edge detection, corner detection, parallel and fracture lines, texture features, peak detection, etc. Due to the plethora of tools available for smoothing and noise removal and their high adaptability, significant effort is invested in testing and tweaking them to find the ones most suitable for the problem at hand. The next step is to build classification algorithms based on the extracted features to predict/classify fracture types. Finally, a testing phase is used to evaluate the performance and accuracy of the proposed process.
[7] Yuancheng "Mike" Luo and Ramani Duraiswami, Canny Edge Detection on NVIDIA CUDA, Computer Science & UMIACS, University of Maryland, College Park.
In this paper, the authors demonstrated a version of the complete Canny edge detector under CUDA, including all stages of the algorithm. A significant speedup against straightforward CPU functions was seen, but only a moderate improvement against multi-core, multi-threaded CPU functions taking advantage of special instructions. The implementation speed is dominated by the hysteresis step (which was not implemented in previous GPU versions); if this post-processing step is not needed, the algorithm can be much faster (by a factor of four). The authors emphasize that the algorithms used could be made more efficient, and further speedups should be possible using more sophisticated data-parallel algorithms. Their experience shows that, using CUDA, one can move complex image processing algorithms to the GPU.
1.5 PROPOSED SYSTEM
The X-ray/CT images are obtained from the hospital and contain normal as well as fractured bone images. The first step applies preprocessing techniques, such as RGB-to-grayscale conversion, and enhances the images with a filtering algorithm to remove noise. The system then detects the edges in the images using edge detection methods and segments the image. After segmentation, each image is converted into a set of features using a feature extraction technique. A classification algorithm is then built based on the extracted features. Finally, the performance and accuracy of the proposed system are evaluated.
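The three stages can be sketched end-to-end in Python. This is a toy stand-in for the report's MATLAB pipeline, under loudly stated assumptions: the function names are ours, a plain grayscale conversion stands in for the full preprocessing, and a simple horizontal-difference test stands in for the actual Canny/Hough stages.

```python
def rgb_to_gray_image(rgb_img):
    """Stage 1 (pre-processing): RGB image -> grayscale; noise filtering omitted."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_img]

def segment_edges(gray, thresh=64):
    """Stage 2 (segmentation): flag pixels where the horizontal jump exceeds thresh."""
    return [[1 if x + 1 < len(row) and abs(row[x + 1] - row[x]) > thresh else 0
             for x in range(len(row))] for row in gray]

def pipeline(rgb_img):
    gray = rgb_to_gray_image(rgb_img)
    edges = segment_edges(gray)
    # Stage 3 (fracture detection) would run a Hough-based line analysis on `edges`.
    return gray, edges
```

On a one-row image that is black on the left and white on the right, the pipeline marks the single boundary pixel, which is the kind of edge map the fracture detection stage consumes.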
1.5.1 PROPOSED SYSTEM BLOCK DIAGRAM
[Block diagram: Pre-processing → Image Segmentation → Fracture Detection → Final Result]
CHAPTER 2
PROJECT DESCRIPTION
2.1 INTRODUCTION
Bone fracture is a common problem even in the most developed countries, and the number of fractures is increasing rapidly. A bone fracture can occur due to a simple accident or different types of diseases, so a quick and accurate diagnosis can be crucial to the success of any prescribed treatment. Depending on human experts alone for such a critical matter can cause intolerable errors; hence, the idea of an automatic diagnosis procedure has always been an appealing one. The main goal of this project is to detect lower-leg bone fractures from X-ray images using MATLAB software. The lower leg contains the second largest bone of the body; it is made up of two bones, the tibia and the fibula. The fibula is smaller and thinner than the tibia; however, tibia fractures occur most commonly because the tibia carries a significant portion of the body weight. Among the four modalities (X-ray, CT, MRI, Ultrasound), X-ray diagnosis is commonly used for bone fracture detection due to its low cost, high speed and wide availability. Although CT and MRI give better-quality images of body organs than X-ray, the latter is faster, cheaper, more widely available and easier to use, with few limitations. Moreover, the quality of X-ray images is sufficient for the purpose of bone fracture detection.
Figure 2.1. Structure of Lower Leg Bone
2.2 GENERAL
There are different types of noise, such as Poisson, Gaussian, and salt & pepper. Gaussian noise is the most common type of noise found in X-ray images; it is generally caused by the sensor and circuitry of a scanner or digital camera. So, the system chooses a Gaussian filter to reduce the noise while preserving the edges and smoothing the image.
2.2.1 PREPROCESSING:
This stage consists of the procedures that enhance the features of an input X-ray image so that the resulting image improves the performance of the subsequent stages of the proposed system. In this work, the main procedures for image enhancement are noise removal, adjusting image brightness, and color adjustment. Noise can be defined as unwanted pixels that affect the quality of the image. The Gaussian smoothing filter is a very good filter for removing noise drawn from a normal distribution. A Gaussian filter is parameterized by σ, and the relationship between σ and the degree of smoothing is very simple: a large σ implies a wider Gaussian filter and greater smoothing. After filtering, the system adjusts image brightness and color to distinguish the desired object or bone shape from the rest of the image. Then, the adjusted image is converted into a grayscale image to speed up processing and reduce computation.
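The σ-dependence described above can be sketched in Python. This is a 1-D version for brevity (the 2-D Gaussian filter is separable into two such passes), with helper names of our own; the kernel is truncated at 3σ, a common but assumed choice:

```python
import math

def gaussian_kernel(sigma):
    """Sampled 1-D Gaussian, truncated at 3*sigma and normalized to sum to 1."""
    radius = max(1, int(3 * sigma))   # larger sigma -> wider kernel
    vals = [math.exp(-x * x / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def smooth(signal, sigma):
    """Convolve a 1-D list of pixel values with the Gaussian (edges replicated)."""
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out
```

Smoothing a step edge keeps it monotone but spreads the jump over roughly ±3σ pixels, which is exactly the "larger σ, greater smoothing" behaviour the paragraph describes.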
Figure 2.2. Results of image preprocessing
2.3 EDGE DETECTION
These features are used by advanced computer vision algorithms. Edge detection is used for object detection, which serves various applications like medical image processing, biometrics, etc. Edge detection is an active area of research as it facilitates higher-level image analysis. There are three different types of discontinuities in the grey level: point, line and edge. Spatial masks can be used to detect all three types of discontinuities in an image. There are many edge detection techniques in the literature for image segmentation; the most commonly used discontinuity-based edge detection techniques are reviewed in this section.
2.3.2 Sobel Edge Detection
The Sobel edge detection method was introduced by Sobel in 1970. The Sobel method of edge detection for image segmentation finds edges using the Sobel approximation to the derivative. It finds the edges at those points where the gradient is highest. The Sobel technique performs a 2-D spatial gradient measurement on an image and so highlights regions of high spatial frequency that correspond to edges. In general, it is used to find the estimated absolute gradient magnitude at each point in an input grayscale image. In theory at least, the operator consists of a pair of 3×3 convolution kernels, one of which is simply the other rotated by 90°. This is very similar to the Roberts Cross operator.
2.3.3 Prewitt Edge Detection
Prewitt detection is slightly simpler to implement computationally than Sobel detection, but it tends to produce somewhat noisier results.
2.3.5 LoG Edge Detection
The Laplacian of Gaussian (LoG) operator has two effects: it smooths the image and it computes the Laplacian, which yields a double-edge image. Locating edges then consists of finding the zero crossings between the double edges. The digital implementation of the Laplacian function is usually made through a small convolution mask. The Laplacian is generally used to find whether a pixel is on the dark or light side of an edge.
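Since the mask itself did not survive into this copy of the report, here is the common 4-neighbour Laplacian mask (an assumed but standard choice) with a short Python check of the double-edge behaviour:

```python
# A common digital Laplacian mask (4-neighbour form).
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian_at(img, y, x):
    """Apply the Laplacian mask at interior pixel (y, x) of a grayscale image."""
    return sum(LAPLACIAN[dy + 1][dx + 1] * img[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
```

Across a step edge the response flips sign, positive on the dark side and negative on the bright side, so the edge lies at the zero crossing between the two, exactly as the paragraph states.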
2.3.6 RESULT
This section presents the relative performance of various edge detection techniques: the Roberts, Sobel, Prewitt, Kirsch, Robinson, Marr-Hildreth, LoG and Canny edge detectors. The edge detection techniques were implemented using MATLAB R2013a. The objective is to produce a clean edge map by extracting the principal edge features of the image. The original image and the images obtained by using different edge detection techniques are given in the figure.
Fig. 2.3: (a) Original X-ray input image and corresponding resultant edge-detected images using (b) Roberts, (c) Sobel, (d) Prewitt, (e) Canny, and (f) Laplacian second-order difference operators.
The Roberts, Sobel and Prewitt results deviated noticeably from the others, while LoG and Canny produce almost the same edge map. It is observed from the figure that the Canny result is superior by far to the other results.
2.4 HOUGH TRANSFORM
2.4.1 Introduction
The Hough transform (HT) can be used to detect lines, circles or other parametric curves. It was introduced in 1962 (Hough 1962) and first used to find lines in images a decade later (Duda 1972). The goal is to find the location of lines in images. This problem could be solved by, e.g., morphology with a linear structuring element, or by correlation, but then we would need to handle rotation, zoom, distortions, etc. The Hough transform can detect lines, circles and other structures if their parametric equation is known, and it can give robust detection under noise and partial occlusion.
• These lines separate regions with different grey levels.
Fig 2.4. Linear structured image before and after performing edge detection
• The magnitude results computed by the Sobel operator can be thresholded and used as input.
Edge magnitude
• The gradient is a measure of how the function f(x, y) changes as a function of changes in the arguments x and y; its magnitude is |∇f| = √(gx² + gy²).
• Horizontal edges give a gradient in the vertical direction.
Edge direction
• The gradient direction is α(x, y) = tan⁻¹(gy/gx). Remember that α(x, y) is the angle with respect to the x-axis.
• Remember also that the direction of an edge is perpendicular to the gradient at any given point.
• As always with edge detection, simple lowpass filtering can be applied first.
Hough-transform
• Assume that we have performed some edge detection, and a thresholding of the edge
magnitude image.
• Thus, we have n pixels that may partially describe the boundary of some objects. Consider one
such point (xi, yi):
– There are many lines passing through the point (xi, yi).
– Common to them is that they satisfy the equation yi = a·xi + b for some set of parameters (a, b).
• This equation can obviously be rewritten as b = −xi·a + yi, which defines a single straight line
in (a, b) space.
• Another point (x,y) will give rise to another line in (a, b) space.
• Two points (x, y) and (z, k) define a line in the (x, y) plane.
• These two points give rise to two different lines in (a, b) space.
• In (a, b) space these lines will intersect in a point (a1, b1), where a1 is the slope and b1 the
intercept of the line defined by (x, y) and (z, k) in (x, y) space.
• The fact is that all points on the line defined by (x, y) and (z, k) in (x, y) space will
parameterize lines that intersect in (a1, b1) in (a, b) space.
• Points that lie on a line will form a “cluster of crossings” in the (a, b) space.
2.4.4.1 Hough transform – algorithm
• Quantize the parameter space (a, b), that is, divide it into cells.
– For each point (x, y) with value 1 in the binary image, find the values of (a, b) in the range
[[amin, amax], [bmin, bmax]] defining the lines through this point.
– Increase the value of the accumulator for each such (a, b) cell.
• Cells receiving a minimum number of “votes” are assumed to correspond to lines in (x, y)
space.
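As a concrete illustration of this quantize-and-vote procedure, the following Python/NumPy sketch (with assumed parameter ranges and cell counts, not the report's MATLAB implementation) accumulates votes in a discretized (a, b) space:

```python
import numpy as np

def hough_lines_ab(points, a_range=(-2.0, 2.0), b_range=(-20.0, 20.0),
                   n_a=81, n_b=81):
    """Vote in a quantized (a, b) space: each point (x, y) votes for
    every cell on the line b = -x*a + y."""
    a_vals = np.linspace(a_range[0], a_range[1], n_a)
    b_vals = np.linspace(b_range[0], b_range[1], n_b)
    acc = np.zeros((n_a, n_b), dtype=int)
    db = b_vals[1] - b_vals[0]
    for x, y in points:
        b = -x * a_vals + y                      # one (a, b)-space line per point
        j = np.round((b - b_vals[0]) / db).astype(int)
        ok = (j >= 0) & (j < n_b)                # drop votes outside the range
        acc[np.arange(n_a)[ok], j[ok]] += 1
    return acc, a_vals, b_vals

# Points on the line y = 0.5*x + 3: their (a, b)-space lines should all
# cross in one cell, forming the "cluster of crossings".
pts = [(x, 0.5 * x + 3) for x in range(10)]
acc, a_vals, b_vals = hough_lines_ab(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
# the strongest cell recovers a = 0.5, b = 3
```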
• The polar (also called normal) representation of straight lines is
x cosθ + y sinθ = ρ
• Each point (xi, yi) in the xy-plane gives a sinusoid in the ρθ-plane.
• M colinear points lying on the line x cos θj + y sin θj = ρi
will give M curves that intersect at (ρi, θj) in the parameter plane.
• Each curve in the figure represents the family of lines that pass through a particular point (xi, yi)
in the xy -plane.
• The intersection point (ρ1, θ1) corresponds to the line that passes through the two points
(xi, yi) and (xj, yj).
• A horizontal line will have θ = 90° and ρ equal to the intercept with the y-axis.
• A vertical line will have θ = 0° and ρ equal to the intercept with the x-axis.
• Partition the ρθ-plane into accumulator cells A [ρ, θ], ρ∈ [ρmin, ρmax]; θ∈ [θmin, θmax]
• The discretization of θ and ρ must be done with steps δθ and δρ that give acceptable precision
and an acceptable size of the parameter space.
• The cell (i, j) corresponds to the square associated with the parameter values (θj, ρi).
• For each foreground point (xk, yk) in the thresholded edge image, and for each θj, compute
ρ = xk cos θj + yk sin θj, round it to the nearest cell value ρi, and increment A(i, j).
• After this procedure, A(i, j) = P means that P points in the xy-plane lie on the line
x cos θj + y sin θj = ρi.
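The (ρ, θ) voting procedure can be sketched as follows (a Python/NumPy illustration under the stated discretization, not the report's code):

```python
import numpy as np

def hough_polar(edge_img, n_theta=180, n_rho=None):
    """Accumulate A[rho, theta] for the normal form x*cos(t) + y*sin(t) = rho.
    Rows of A index the rho cells, columns index the theta cells."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # |rho| can never exceed this
    n_rho = n_rho or 2 * diag + 1
    thetas = np.deg2rad(np.arange(n_theta))      # 0 .. 179 degrees
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)                # foreground (edge) points
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        i = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[i, np.arange(n_theta)] += 1          # one vote per theta column
    return acc, thetas, rhos

# A vertical line x = 7 should collect all its votes at theta = 0, rho = 7.
img = np.zeros((20, 20)); img[:, 7] = 1
acc, thetas, rhos = hough_polar(img)
# all 20 line pixels vote for the cell (rho = 7, theta = 0)
```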
• Example:
Natural scene and result of Sobel edge detection followed by thresholding:
2.4.5 Hough transform
Advantages
– Conceptually simple.
– Easy implementation.
– Handles missing and occluded data very gracefully.
– Can be adapted to many types of forms, not just lines.
Disadvantages
– Computationally complex for objects with many parameters.
– Looks for only a single type of object.
– Can be “fooled” by “apparent lines”.
– The length and the position of a line segment cannot be determined.
– Co-linear line segments cannot be separated.
2.7 METHODOLOGIES
MODULE 1
The input image is read and its size is obtained as N = N1 × N2. Since it was in RGB (color)
format, it is converted into grayscale using an RGB-to-gray conversion process. Image resizing
is also done if needed.
MODULE 2
PREPROCESSING:
The original image is smoothed with a Gaussian filter. The result is an image with less
noise, which makes it easier to obtain the real edges of the image.
MODULE 3
SEGMENTATION:
The algorithmic steps are as follows:
• Convolve the image f(r, c) with a Gaussian function to get the smoothed image f^(r, c):
f^(r, c) = f(r, c) * G(r, c, σ)
• Apply first difference gradient operator to compute edge strength then edge magnitude
and direction are obtained as before.
• Apply non-maximal suppression to the gradient magnitude.
• Apply threshold to the non-maximal suppression image.
Unlike Roberts and Sobel, the Canny operator is not very susceptible to noise. When the Canny
detector works well, it is superior to the other operators.
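The four algorithmic steps above can be sketched in NumPy as a simplified Canny-style detector (with a single global threshold instead of Canny's hysteresis thresholding; an illustration, not the report's MATLAB code):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Step 1: convolve with a (separable) Gaussian."""
    k = gaussian_kernel1d(sigma)
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, k, mode="same")

def gradients(img):
    """Step 2: first-difference gradients, then magnitude and direction."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def non_max_suppression(mag, angle):
    """Step 3: keep a pixel only if it is a local maximum along the
    gradient direction (quantized to 0/45/90/135 degrees; ties kept)."""
    out = np.zeros_like(mag)
    q = (np.round(angle / (np.pi / 4)) % 4).astype(int)
    offs = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            dr, dc = offs[q[r, c]]
            if mag[r, c] >= mag[r + dr, c + dc] and \
               mag[r, c] >= mag[r - dr, c - dc]:
                out[r, c] = mag[r, c]
    return out

def detect_edges(img, sigma=1.0, thresh=0.2):
    """Step 4: threshold the suppressed magnitude (Canny proper uses
    hysteresis with two thresholds instead of this single one)."""
    mag, ang = gradients(smooth(img.astype(float), sigma))
    nms = non_max_suppression(mag, ang)
    return nms > thresh * nms.max()

img = np.zeros((16, 16)); img[:, 8:] = 1.0   # vertical step edge
edges = detect_edges(img)
```

The detected edge hugs the step between columns 7 and 8 and stays off the flat regions, which is the thin-edge behaviour that makes Canny preferable here.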
MODULE 4
FRACTURE DETECTION:
The last stage of this system is fracture detection, which is performed as follows. First,
useful features such as straight lines are extracted from the image. Then, these features
are used to classify the image as fractured or non-fractured. After enhancing and segmenting the
input image, features are extracted from the binary image using the Hough transform. The
Hough transform is a feature extraction technique concerned with the identification of
straight lines, shapes, and curves in a given image. It takes a binary image as its input.
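One plausible decision rule on top of the extracted Hough lines (an assumption for illustration, not the report's exact criterion) is that a healthy long bone yields nearly parallel dominant lines, while a fracture produces dominant lines at noticeably different angles:

```python
import numpy as np

def dominant_thetas(acc, thetas, n_peaks=2):
    """Return the theta values of the n strongest accumulator peaks,
    zeroing a small neighbourhood around each peak before the next search."""
    acc = acc.copy()
    found = []
    for _ in range(n_peaks):
        i, j = np.unravel_index(acc.argmax(), acc.shape)
        found.append(thetas[j])
        acc[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3] = 0
    return found

def looks_fractured(acc, thetas, tol_deg=10.0):
    """Hypothetical rule: flag a possible fracture when the two strongest
    line orientations differ by more than tol_deg degrees."""
    t1, t2 = dominant_thetas(acc, thetas)
    diff = abs(np.rad2deg(t1 - t2)) % 180
    return min(diff, 180 - diff) > tol_deg

# Toy accumulator with two strong, misaligned lines (5 and 90 degrees).
thetas = np.deg2rad(np.arange(180))
acc = np.zeros((100, 180)); acc[30, 5] = 50; acc[70, 90] = 40
suspect = looks_fractured(acc, thetas)   # misaligned -> flagged
```

The angle tolerance `tol_deg` and the toy accumulator values are assumptions; a real system would tune them against labelled X-ray images.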
CHAPTER 3
SOFTWARE SPECIFICATION
In 2004, MATLAB had around one million users across industry and
academia. MATLAB users come from various backgrounds of engineering, science,
and economics. MATLAB is widely used in academic and research institutions as well as
industrial enterprises.
Sequences of commands can be saved in a text file, typically using the MATLAB Editor, either as
a script or encapsulated into a function, extending the commands available.
MATLAB provides a number of features for documenting and sharing your work. You
can integrate your MATLAB code with other languages and applications, and distribute your
MATLAB algorithms and applications.
MATLAB is used in a wide range of areas, including signal and image processing,
communications, control design, test and measurement, and financial modeling and analysis.
Add-on toolboxes (collections of special-purpose MATLAB functions) extend the MATLAB
environment to solve particular classes of problems in these application areas.
MATLAB can be used on personal computers and powerful server systems, including
the Cheaha compute cluster. With the addition of the Parallel Computing Toolbox, the language
can be extended with parallel implementations for common computational functions, including
for-loop unrolling. Additionally, this toolbox supports offloading computationally intensive
workloads to Cheaha, the campus compute cluster. MATLAB is one of a few languages in which
each variable is a matrix (broadly construed) and "knows" how big it is. Moreover, the
fundamental operators (e.g. addition, multiplication) are programmed to deal with matrices when
required. And the MATLAB environment handles much of the bothersome housekeeping that
makes all this possible. Since so many of the procedures required for Macro-Investment Analysis
involve matrices, MATLAB proves to be an extremely efficient language for both
communication and implementation.
Libraries written in Java, ActiveX or .NET can be directly called from MATLAB and
many MATLAB libraries (for example XML or SQL support) are implemented as wrappers
around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be
done with a MATLAB extension, which is sold separately by MathWorks, or using an
undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be confused
with the unrelated Java Metadata Interface that is also called JMI.
As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks,
MATLAB can be connected to Maple or Mathematica.
Development Environment
MATLAB provides a high-level language and development tools that let you quickly
develop and analyze your algorithms and applications.
The MATLAB Language
The MATLAB language supports the vector and matrix operations that are fundamental
to engineering and scientific problems. It enables fast development and execution. With the
MATLAB language, you can program and develop algorithms faster than with traditional
languages because you do not need to perform low-level administrative tasks, such as declaring
variables, specifying data types, and allocating memory. In many cases, MATLAB eliminates the
need for ‘for’ loops. As a result, one line of MATLAB code can often replace several lines of C
or C++ code.
At the same time, MATLAB provides all the features of a traditional programming
language, including arithmetic operators, flow control, data structures, data types, object-oriented
programming (OOP), and debugging features.
MATLAB lets you execute commands or groups of commands one at a time, without
compiling and linking, enabling you to quickly iterate to the optimal solution. For fast execution
of heavy matrix and vector computations, MATLAB uses processor-optimized libraries. For
general-purpose scalar computations, MATLAB generates machine-code instructions using its
JIT (Just-In-Time) compilation technology.
This technology, which is available on most platforms, provides execution speeds that
rival those of traditional programming languages.
MATLAB Editor
Provides standard editing and debugging features, such as setting breakpoints and single
stepping
Code Analyzer
Checks your code for problems and recommends modifications to maximize performance
and maintainability
MATLAB Profiler
Records the time spent executing each line of code.
Directory Reports
Scan all the files in a directory and report on code efficiency, file differences, file
dependencies, and code coverage.
Data Analysis
MATLAB supports the entire data analysis process, from acquiring data from external
devices and databases, through preprocessing, visualization, and numerical analysis, to
producing presentation-quality output. It provides interactive tools and command-line
functions for a wide range of data analysis operations.
Data Access
MATLAB is an efficient platform for accessing data from files, other applications,
databases, and external devices. You can read data from popular file formats, such as Microsoft
Excel; ASCII text or binary files; image, sound, and video files; and scientific files, such as HDF
and HDF5. Low-level binary file I/O functions let you work with data files in any format.
Additional functions let you read data from Web pages and XML.
Visualizing Data
All the graphics features that are required to visualize engineering and scientific data are
available in MATLAB. These include 2-D and 3-D plotting functions, 3-D volume visualization
functions, tools for interactively creating plots, and the ability to export results to all popular
graphics formats. You can customize plots by adding multiple axes; changing line colors and
markers; adding annotations, LaTeX equations, and legends; and drawing shapes.
2-D and 3-D Plotting
MATLAB provides functions for visualizing 2-D matrices, 3-D scalar data, and 3-D vector
data. You can use these functions to visualize and understand large, often complex,
multidimensional data, specifying plot characteristics such as camera viewing angle,
perspective, lighting effects, light-source locations, and transparency.
CHAPTER 4
RESULTS
4.1 IMPLEMENTATION
4.2 SNAPSHOTS
SNAPSHOT TO SELECT AN X-RAY IMAGE
4.2.1. ORIGINAL X-RAY IMAGE
4.2.2. FILTERED IMAGE
4.2.3. EDGE DETECTION IMAGE
4.2.4. HOUGH-THRESHOLDED IMAGE
4.2.5. FINAL IMAGE
CHAPTER 5
APPLICATIONS
Used for effective analysis of leg fractures and provides a crystal-clear view of the fracture.
Provides accurate results.
Saves the analysis time that doctors take to identify the fracture.
The burden on doctors can also be reduced.
The chance of human error can be reduced using this method.
CHAPTER 6
CONCLUSION AND REFERENCES
6.1 CONCLUSION
This project presented an image processing technique to detect bone fractures.
The fully automatic detection of fractures in leg bones is an important but difficult
problem. According to the test results, the system is able to detect bone
fractures. It can be concluded that the performance of the detection method is
affected by the quality of the image: the better the image quality, the better the
result. In future work, detection of fractures in smaller bones, ankle fractures, etc.
may be considered.
6.2 REFERENCES
[2] Mahmoud Al-Ayyoub, Ismail Hmeidi, Haya Rababah, Detecting Hand Bone Fractures
in X-Ray Images, Jordan University of Science and Technology, Irbid, Jordan, Vol. 4,
No. 3, September 2013.
[3] S.K. Mahendran, S. Santhosh Baboo, An Enhanced Tibia Fracture Detection Tool Using
Image Processing and Classification Fusion Techniques in X-Ray Images, Sankara College
of Science and Commerce, Coimbatore, Tamil Nadu, India, Online ISSN: 0975-4172,
Print ISSN: 0975-4350, Vol. 11, Issue 14, Version 1.0, August 2011.
[4] S.K. Mahendran, S. Santhosh Baboo, An Ensemble System for Automatic Fracture
Detection, IACSIT International Journal of Engineering and Technology, Vol. 4, No. 1,
February 2012.
[5] Rashmi, Mukesh Kumar, and Rohini Saxena, Algorithm and Technique on Various
Edge Detection: A Survey, Department of Electronics and Communication Engineering,
SHIATS, Allahabad, UP, India, Vol. 4, No. 3, June 2013.
[6] Mahmoud Al-Ayyoub, Duha Al-Zghool, Determining the Type of Long Bone Fractures
in X-Ray Images, Jordan University of Science & Technology, Irbid 22110, Jordan,
E-ISSN: 2224-3402, Issue 8, Vol. 10, August 2013.
[7] Yuancheng “Mike” Luo and Ramani Duraiswami, Canny Edge Detection on
NVIDIA CUDA, Computer Science & UMIACS, University of Maryland, College Park.