
BONE CANCER DETECTION USING ARTIFICIAL

NEURAL NETWORK

A PROJECT REPORT

Submitted by

PARVEEN BANU. A [Reg No: 1171310045]


BHARATH KUMAR. S [Reg No: 1171310046]
AINTHAVIARASI. K [Reg No: 1171310056]

Under the guidance of


Mrs. A. ASUNTHA, M.E
(Assistant Professor, Department of Electronics & Instrumentation Engineering)

in partial fulfillment for the award of the degree


of

BACHELOR OF TECHNOLOGY
in

ELECTRONICS AND INSTRUMENTATION


ENGINEERING
of

FACULTY OF ENGINEERING AND TECHNOLOGY

S.R.M. Nagar, Kattankulathur, Kancheepuram District


May 2017
SRM UNIVERSITY
(Under Section 3 of UGC Act, 1956)

BONAFIDE CERTIFICATE

Certified that this project report titled “BONE CANCER DETECTION USING ARTIFICIAL NEURAL NETWORK” is the bonafide work of “PARVEEN BANU. A [Reg No: 1171310045], BHARATH KUMAR. S [Reg No: 1171310046], AINTHAVIARASI. K [Reg No: 1171310056]”, who carried out the project work under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

SIGNATURE SIGNATURE

Mrs. A. ASUNTHA, M.E Dr. A. VIMALA JULIET, M.E, Ph.D


GUIDE HEAD OF THE DEPARTMENT
Assistant Professor                          Dept. of Electronics and Instrumentation Engineering
Dept. of Electronics & Instrumentation Engineering

Signature of the Internal Examiner Signature of the External Examiner


ABSTRACT

Medical image processing is one of the most challenging research areas. Early detection of the cancer-prone region in an MRI scan is of great importance for the successful diagnosis and treatment of bone cancer. This project proposes an approach to detect bone cancer in MR images using medical image processing techniques. The proposed approach begins with preprocessing, in which a Gabor filter is used to smooth the image and remove noise; this increases the quality of the image so that it is suitable for segmentation and morphological operations. In the second stage, superpixel and multilevel segmentation are performed and some of the important features are extracted from the images. The extracted image features are used to identify bone cancer and are classified using an Artificial Neural Network (ANN).
ACKNOWLEDGEMENTS

We would like to express our deepest gratitude to our guide, Mrs. A. ASUNTHA, for her valuable guidance, consistent encouragement, personal care, timely help and for providing us with an excellent atmosphere for doing research. Throughout the work, in spite of her busy schedule, she extended cheerful and cordial support to us in completing this research work.

PARVEEN BANU. A

BHARATH KUMAR. S

AINTHAVIARASI. K

iv
TABLE OF CONTENTS

ABSTRACT iii

ACKNOWLEDGEMENTS iv

LIST OF TABLES vii

LIST OF FIGURES viii

ABBREVIATIONS ix

LIST OF SYMBOLS x

1 INTRODUCTION 1

2 LITERATURE SURVEY 3

3 BONE CANCER DETECTION 5


3.1 Types of Bone Cancer . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1.1 Primary bone cancer . . . . . . . . . . . . . . . . . . . . . 5
3.1.2 Secondary bone cancer . . . . . . . . . . . . . . . . . . . . 6
3.2 Detection of bone cancer . . . . . . . . . . . . . . . . . . . . . . . 7
3.2.1 X-rays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2.2 Computer Tomography (CT) Scan . . . . . . . . . . . . . . 7
3.2.3 Magnetic Resonance Imaging (MRI) Scan . . . . . . . . . . 7
3.2.4 PET Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Cancer Diagnosis Using Neural Networks . . . . . . . . . . . . . . 9
3.3.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.3.2 Artificial Neural Network (ANN) Classification . . . . . . . 9
3.3.3 Block Diagram Representation . . . . . . . . . . . . . . . . 10

v
4 STAGES OF BONE CANCER DETECTION USING IMAGE PROCESSING 11
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2 Acquisition of Image . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.3 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.3.1 Gabor filter . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.4 Gray Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.5 Edge detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.6 Morphological operation . . . . . . . . . . . . . . . . . . . . . . . 15
4.7 Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.7.1 Superpixel segmentation . . . . . . . . . . . . . . . . . . . 16

5 FEATURE EXTRACTION 19
5.1 Extracted features . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.2 Testing results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

6 CLASSIFICATION USING ARTIFICIAL NEURAL NETWORK 30


6.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.2 Artificial Neural Network . . . . . . . . . . . . . . . . . . . . . . . 31
6.3 Feedforward Neural Network . . . . . . . . . . . . . . . . . . . . . 33
6.3.1 Multilayer Feed-forward Network . . . . . . . . . . . . . . 34
6.4 Training the model . . . . . . . . . . . . . . . . . . . . . . . . . . 35

7 CONCLUSION AND FUTURE RESEARCH 40


7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
7.2 Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
LIST OF TABLES

5.1 Feature Extraction for 5 bone cancer patients . . . . . . . . . . . 28

6.1 Performance analysis using confusion matrix . . . . . . . . . . . 37

vii
LIST OF FIGURES

3.1 Block diagram of bone cancer detection using Artificial Neural Network(ANN) 10

4.1 Input image of bone cancer . . . . . . . . . . . . . . . . . . . . . 12


4.2 Denoised image using Gabor filter . . . . . . . . . . . . . . . . . 13
4.3 RGB to Gray conversion . . . . . . . . . . . . . . . . . . . . . . 14
4.4 Edge detection using Canny method . . . . . . . . . . . . . . . . 15
4.5 Morphological operation for bone cancer . . . . . . . . . . . . . 16
4.6 Superpixel segmented image . . . . . . . . . . . . . . . . . . . . 18
4.7 Multilevel segmented image . . . . . . . . . . . . . . . . . . . . . 18

5.1 Bone cancer detection using Image Processing techniques . . . . 27


5.2 Feature extraction values for various features . . . . . . . . . . . 28
5.3 Performance analysis of feature extraction . . . . . . . . . . . . 29

6.1 Architecture of Feed forward Neural Network . . . . . . . . . . 34


6.2 Feed forward Neural Network . . . . . . . . . . . . . . . . . . . 34
6.3 Neural Network training using simulink toolbox . . . . . . . . . 36
6.4 Confusion Matrix for 5 bone cancer patients . . . . . . . . . . . 38
6.5 Receiver Operating Characteristic (ROC) curve for 5 bone cancer
patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

viii
ABBREVIATIONS

NN Neural Network

ANN Artificial Neural Network

BNN Biological Neural Network

MRI Magnetic Resonance Imaging

CT Computer Tomography

PET Positron Emission Tomography

RGB Red Green Blue

RMS Root Mean Square

IDM Inverse Difference Moment

ASM Angular Second Moment

GLCM Gray Level Co-occurrence Matrix

GUI Graphical User Interface

DN Digital Number

ROC Receiver Operating Characteristic

ix
LIST OF SYMBOLS

µ Mean
σ standard deviation

x
CHAPTER 1

INTRODUCTION

Cancer is one of the most serious health problems in the world, and its mortality rate is among the highest of all diseases. Bone cancer is one of the most serious cancers, with one of the lowest survival rates after diagnosis and a gradual increase in the number of deaths every year. Survival from bone cancer is directly related to the extent of its growth at the time of detection.
A tumor is an abnormal growth of new tissue that can form in any organ of the body. There are many different kinds of cancer, such as lung cancer, brain cancer, breast cancer and bone cancer. Cancer that arises directly in the bone is called a sarcoma. Bone cancer is mostly classified as primary or secondary: cancer that originates in the bone is primary, whereas cancer that starts elsewhere in the body and then affects the bone is secondary. Bone tumors can be noncancerous (benign) or cancerous (malignant). Most bone tumors are benign, which means they are noncancerous and cannot spread. Malignant bone cancer destroys normal bone tissue, and its spread to other parts of the body is called metastasis. Like other cancers, bone cancer occurs in four stages:
Stage 1: The cancer is not aggressive and has not spread out of the bone.
Stage 2: The cancer is aggressive.
Stage 3: Cancerous cells exist in multiple places of the same bone.
Stage 4: The cancer has spread to other parts of the body.
Since the exact cause of bone cancer is still not known, this uncommon cancer cannot be prevented. It occurs most often in the long bones of the arms and legs.
Obtaining an accurate result in bone cancer detection is very important in many imaging applications. It mainly helps in planning treatment at an early stage and in evaluating therapy. Early detection of bone cancer will decrease the mortality rate. To obtain more accurate results, we divided the whole process into three stages: image preprocessing, image segmentation, and feature extraction with classification. The main objective of our proposed method is to provide a fast and robust system for detecting bone cancer at an early stage and to obtain more accurate results than many other existing techniques.

2
CHAPTER 2

LITERATURE SURVEY

Sinthia P and K. Sujatha [2] proposed a novel approach to detect bone cancer using the K-means algorithm and an edge detection method. This methodology used Sobel edge detection to detect edges; the Sobel edge detector detects only the border pixels. The K-means clustering algorithm is used to detect the tumor area. Defining the number of clusters is the difficult step in the K-means clustering algorithm.

Kishor Kumar Reddy [3] proposed a novel approach for detecting the tumor size and bone cancer stage using a region growing algorithm. This methodology segmented the region of interest using region growing. The tumor size is calculated from the number of pixels in the extracted tumor part, and the cancer stage is identified from the total pixel count. The selection of the seed point depends on the image, and it is difficult to select accurately.

Maduri Avula [4] proposed a method to detect bone cancer from MR images using mean pixel intensity. The input MR image is denoised and the K-means clustering algorithm is applied to extract the tumor part. From the extracted tumor part, the total number of pixels is computed and the sum of pixel intensities is calculated to obtain the mean pixel intensity, which is used to identify the cancer. If the mean pixel intensity is above a threshold value, the region is considered cancerous.

Abdulmuhssin Binhssan [5] proposed a method to detect the enchondroma tumor. The input image is denoised using a bilateral filter and an average filter. The bilateral filter has the disadvantage of taking more time to denoise the image; the average filter provides better results in comparison. Thresholding segmentation is carried out to segment the image, and morphological operations are applied to enhance the tumor area.

Ezhil E. Nithila and S. S. Kumar [1] proposed automatic detection of solitary pulmonary nodules using swarm-intelligence-optimized neural networks on CT images. This methodology used a Gaussian filter to remove noise and a contour model to segment the image; a leakage problem arises due to weak boundaries. The nodule is detected from the segmented image, and the borders of the nodule are corrected to recover the lung nodule. Various features are extracted to find the tumor accurately, and the extracted features are applied to a back-propagation neural network to train the data and classify the tumor.

Mokhled S. Al-Tarawneh [6] proposed a method for lung cancer detection using image processing techniques. This methodology used a Gabor filter to denoise the image, which gave the best results. Two segmentation methods are used: a thresholding approach and marker-controlled watershed segmentation, with the marker-controlled technique providing better results than thresholding. The image features are extracted using binarization and a masking approach to identify the cancer.

4
CHAPTER 3

BONE CANCER DETECTION

This chapter provides a detailed description of background information on bone cancer detection. It starts with a description of the different types of bone cancer and then covers the classification method used to detect the cancer-prone area with high accuracy.

3.1 Types of Bone Cancer

There are two types of bone cancer, primary bone cancer and secondary bone cancer.

3.1.1 Primary bone cancer

A primary bone cancer is a cancer that starts in the cells of the bones; the cancer cells are bone cells that have become cancerous. It affects males more frequently than females. There are different types of primary bone cancer. They are classified by the type of cell in which the cancer occurs, and their names end with "sarcoma". A sarcoma is a cancer that originates from cells which occur in and make up the supporting tissues of the body, for example bone, muscle, cartilage and ligaments.

Osteosarcoma

This is the most common type of primary bone cancer. It arises from bone forming
cells. Most cases occur in young people between the ages of 10 to 25. It can however
occur at any age. It typically develops in the growing ends of the bone in young people,
most commonly in bones next to knee and the upper arms. However, any bone can be
affected.
Ewing’s sarcoma

The cells of this cancer look different to the more common osteosarcoma. Most cases
occur in young people between the ages of 10 to 20. It most commonly affects the hips
and long bones in the leg.

Chondrosarcoma

This type of cancer arises from cartilage forming cells. Most cases occur in people
between the ages of 40 and 75. It most commonly affects the pelvis, shoulder blade,
ribs, and the bones of the upper parts of the arms and legs.

Other

Other types of primary bone cancer include Fibrosarcoma, Leiomyosarcoma, Malignant Fibrous Histiocytoma, Angiosarcoma and Chondroma.

3.1.2 Secondary bone cancer

Secondary bone cancer occurs when cancer cells spread to the bones from another part of the body.

6
3.2 Detection of bone cancer

The prime method for cancer detection is radiological imaging. Many technologies are used in cancer diagnosis, such as X-rays, CT scans, FDG-PET scans, bronchoscopy, fluorescent bronchoscopy and sputum cytology. Some of the most commonly used technologies for bone cancer are X-rays, CT, MRI and Positron Emission Tomography (PET) scans.

3.2.1 X-rays

Chest X-ray diagnosis of cancer is one of the oldest and most effective ways of diagnosing asymptomatic cancers. X-rays are high-energy radiation with waves shorter than those of visible light. Images obtained using low-dose X-rays help diagnose disease, and high-dose X-rays help to treat cancer. Although Computed Tomography (CT) is generally considered the most effective imaging modality for the detection of cancer, X-ray images are used because of their low cost, simplicity and low radiation dose.

3.2.2 CT Scan

CT, sometimes called a CAT scan, uses special X-ray equipment to obtain image data from different angles around the body and then uses computer processing of the information to show a cross section of the body tissues and organs. CT imaging can show several types of tissue (lung, bone, soft tissue and blood vessels) with great clarity. Using specialized equipment and expertise to create and interpret CT scans of the body, radiologists can more easily diagnose cancer problems.

3.2.3 MRI Scan

MRI is an advanced medical scanning technology used by physicians to obtain images of the internal structures of the body. MRI uses two safe and natural forces, a magnetic field and radio waves, to produce vivid images of the internal body parts. Computer technology creates detailed images of the soft tissues, muscles, nerves and bones in the human body. MRI provides doctors with a high degree of accuracy to aid them in reaching accurate results. MRI does not use ionizing radiation, unlike X-ray and CT scanning techniques. The procedure is non-invasive and without any side effects.

3.2.4 PET Scan

The PET scan creates computerized images of chemical changes that take place in tissue. The substance injected into the patient consists of a combination of a sugar and a small amount of radioactive material. The radioactive sugar can help in locating a tumor, because cancer cells take up or absorb sugar faster than other tissues in the body. A PET scanner detects the radiation, and a computer translates this information into images that a radiologist can interpret. PET scans can determine whether a mass is cancerous, but they are more accurate in detecting larger and more aggressive tumors than in locating tumors that are smaller than 8 mm and/or less aggressive. PET scans may be helpful in evaluating and staging recurrent cancers.

8
3.3 Cancer Diagnosis Using Neural Networks

Improving the ability to identify early-stage tumors is an important goal for physicians, because early detection of bone cancer is the key factor in producing successful treatments. The neural network approach has become one of the most widely used techniques for the diagnosis of cancer.

3.3.1 Classification

Classification is the process of categorizing cancerous images by extracting the features of the given image and comparing them with the features of known sample images. The sample images are provided for classification, their features are compared with those of the given image, and bone cancer is thereby detected.

3.3.2 ANN Classification

A Neural Network (NN), a mimic of the Biological Neural Network (BNN), is a massively parallel distributed processing system made up of highly interconnected neural computing elements that have the ability to learn and thereby acquire knowledge and make it available for use. An artificial neural network is a mathematical model that tries to simulate the structure and functionality of biological neural networks. A neural network is employed here for bone cancer detection; a multilayer feed-forward neural network with a supervised learning method is reliable and efficient for this task. Artificial neural networks offer a completely different approach to problem solving and are sometimes called the sixth generation of computing, so the ANN is preferred to classify the cancer. The extracted features are given as input to the neural network. The ANN is trained by exposing it to a set of existing data (based on the follow-up history of cancer patients) where the outcome is known. Multilayer networks use a variety of learning techniques; in a feed-forward network, information flows from the input layer towards the output layer. The ANN provides a robust system for detecting bone cancer at an early stage and obtains more accurate results than many other existing techniques.

3.3.3 Block Diagram Representation

Figure 3.1: Block diagram of bone cancer detection using Artificial Neural
Network(ANN)

The block diagram representation of the various stages involved in the diagnosis procedure used in this thesis is shown in Figure 3.1. It basically consists of three stages. The first stage acquires an image and applies preprocessing techniques such as filtering and gray conversion, mainly to remove noise and improve the quality of the image; morphological operations are then performed to eliminate unwanted small objects and to expand the region of interest. The second stage is segmentation, which partitions the image into multiple segments so that it is suitable for detecting the cancer-prone area. Feature extraction and classification form the final stage: some of the important features are extracted from the image, and it is classified using the artificial neural network.

10
CHAPTER 4

STAGES OF BONE CANCER DETECTION USING


IMAGE PROCESSING

This chapter provides a detailed description of the preprocessing stage of the system
presented in this thesis. The chapter starts with a brief introduction to preprocessing
techniques like filtering and gray conversion and then moves on to discuss the methods
used in our system for edge detection and morphological operations.

4.1 Introduction

In recent years, image processing techniques have been used widely in several medical areas to improve early detection and treatment, where the time factor is critical for discovering the disease in the patient as fast as possible, especially for various cancers such as lung cancer and bone cancer. Image processing includes preprocessing methods that ease the next processing stage, which identifies the cancerous cells. Preprocessing is a primary step to improve the quality of an image: these methods make images look better and emphasize and sharpen image features for display and analysis. Images typically have both large-scale and small-scale variations in intensity, representing features of varying sizes. With ordinary X-rays it is difficult to differentiate between adjacent soft tissues and organs, or to distinguish diseased tissue, such as a tumor, from the surrounding healthy tissue from which the cancer may have arisen. Preprocessing steps are therefore necessary in order to differentiate the tissues, by either increasing or decreasing the contrast of one tissue relative to another.
4.2 Acquisition of Image

The first stage starts with acquiring a collection of MRI scan images. Bone MRI images have low noise compared to X-ray and CT images, so MRI images are taken for detecting bone tumors. The main advantages of MRI images are better clarity, low noise and low distortion. Among the different image modalities, such as CT scans, MRI and X-rays, MR images are considered the best because of their higher resolution. MRI is used to show 2D images of the body.

Figure 4.1: Input image of bone cancer

4.3 Filtering

Usually, an image contains noise such as occlusions, variations in illumination and so on, which should be eliminated. Image denoising is one of the most widely used operations in image processing. An unfiltered image also contains noise such as white noise, salt-and-pepper noise, etc.; white noise is one of the most common problems in image processing. This noise can be removed by applying a filter to the extracted bone image. Filtering is a way to improve the quality of the image, so that the resultant image is better than the original one. Its purpose is to improve the visual appearance of an image or to provide a better representation for subsequent automated image processing.

12
4.3.1 Gabor filter

Gabor filtering is a common image enhancement technique for removing salt-and-pepper noise without significantly reducing the sharpness of the image. The Gabor filter is used here to remove noise and smooth the images. The main advantage of this filter is that it produces excellent noise reduction with less blurring than linear smoothing filters of similar size. It allows a great deal of high spatial frequency detail to pass while remaining very effective at removing noise, affecting less than half of the image pixels in a smoothing neighborhood.

Figure 4.2: Denoised image using Gabor filter
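A minimal MATLAB sketch of this filtering step is given below. The file name bone_mri.jpg, the wavelength of 4 pixels and the orientation of 90 degrees are illustrative assumptions, not values taken from this project.

% Read the bone MRI image and convert it to grayscale intensities
I = imread('bone_mri.jpg');            % assumed file name
if size(I, 3) == 3
    I = rgb2gray(I);
end

% Apply a Gabor filter (Image Processing Toolbox); the wavelength and
% orientation below are illustrative choices, not tuned project values
mag = imgaborfilt(I, 4, 90);

% Rescale the filter response to [0, 1] and compare with the original
denoised = mat2gray(mag);
imshowpair(I, denoised, 'montage');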

4.4 Gray Conversion

This is the process of converting pixels having Red Green Blue (RGB) levels into gray levels. A gray-level image can be processed more easily than a colored image, because the pixels of a color image have three different RGB values that would otherwise need to be processed separately. The gray information is therefore obtained by retaining the luminance.

13
Figure 4.3: RGB to Gray conversion
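As a sketch, this conversion can be done in MATLAB with rgb2gray, which retains luminance as a weighted sum of the three color channels; the file name is an assumption.

% Convert an RGB image to a gray-level image by retaining luminance.
% rgb2gray uses the weighting Gray = 0.2989*R + 0.5870*G + 0.1140*B.
rgbImage  = imread('bone_mri.jpg');    % assumed file name
grayImage = rgb2gray(rgbImage);
imshow(grayImage);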

4.5 Edge detection

An edge detector is used to obtain a boundary between two regions with relatively distinct gray-level properties. Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The result of applying an edge detector to an image is a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. Applying an edge detection algorithm to an image may therefore significantly reduce the amount of data to be processed and filter out information that may be regarded as less relevant, while preserving the important structural properties of the image. Edge detection is used here to extract useful features for pattern recognition in cancer images.
The Canny edge detector is an edge detection operator that uses a multistage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986 and is used here for detecting the edges of an image. It first blurs the image and then applies an algorithm that effectively thins the edges to one pixel. Canny edge detection is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed, so an edge detection solution built on it can be implemented in a wide range of situations.

14
The general criteria for edge detection include:
detection of edges with a low error rate, which means that the detection should accurately catch as many edges as possible;
a given edge in the image should only be marked once, and, where possible, image noise should not create false edges;
the edge point detected by the operator should accurately localize on the center of the edge.
Among the edge detection methods developed so far, the Canny algorithm is one of the most strictly defined methods and provides good and reliable detection. The advantages of the Canny detector include good localization and minimal response.

Figure 4.4: Edge detection using Canny method
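A short MATLAB sketch of this step is shown below; the threshold pair and the Gaussian sigma are illustrative assumptions rather than the values used in the project.

% Canny edge detection: edge() smooths the image with a Gaussian,
% finds local gradient maxima, and applies hysteresis thresholding
% so that only strong, connected edges are kept.
grayImage = rgb2gray(imread('bone_mri.jpg'));        % assumed file name
edges = edge(grayImage, 'canny', [0.05 0.20], 1.5);  % [low high] thresholds, sigma
imshow(edges);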

4.6 Morphological operation

Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image. Morphological operations rely on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images. These operations can also be applied to grayscale images whose light transfer functions are unknown and whose absolute pixel values are therefore of minor interest.
Morphological operations are used to identify shape, size and connectivity. The two basic operations of the morphological technique are dilation and erosion: dilation is used to expand a region, while erosion is used to erode away, or eliminate, small objects.

15
Figure 4.5: Morphological operation for bone cancer

Erosion and dilation

The erosion of a binary image by a structuring element produces a new binary image with ones in all locations where the structuring element fits the input image (1 where it fits and 0 otherwise). Erosion with a small square structuring element shrinks an image by stripping away a layer of pixels from both the inner and outer boundaries of regions. The holes and gaps between different regions become larger, and small details are eliminated. Erosion removes small-scale details from a binary image but simultaneously reduces the size of the regions of interest too. By subtracting the eroded image from the original image, the boundaries of each region can be found.
Dilation has the opposite effect to erosion: it adds a layer of pixels to both the inner and outer boundaries of regions. The holes enclosed by a single region and the gaps between different regions become smaller, and small intrusions into region boundaries are filled. The results of dilation and erosion are influenced both by the size and by the shape of the structuring element. Dilation and erosion are dual operations in that they have opposite effects.
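The MATLAB sketch below illustrates erosion and dilation with a disk-shaped structuring element, and the region boundaries obtained by subtracting the eroded image from the original; the structuring-element radius and the binary input are assumptions for illustration.

% Erosion and dilation on a binary image with a disk-shaped structuring
% element (radius chosen for illustration only).
bw = edge(rgb2gray(imread('bone_mri.jpg')), 'canny');   % assumed binary input
se = strel('disk', 3);

eroded  = imerode(bw, se);    % strips a layer of pixels, removes small objects
dilated = imdilate(bw, se);   % adds a layer of pixels, fills small gaps

% Region boundaries: original minus eroded image
boundaries = bw & ~eroded;
imshowpair(dilated, boundaries, 'montage');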

4.7 Image Segmentation

4.7.1 Superpixel segmentation

In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) by assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity or texture. All image processing operations generally aim at a better recognition of objects of interest, i.e., at finding suitable local features that can be distinguished from other objects and from the background. The next step is to check each individual pixel to see whether it belongs to an object of interest or not. This operation is called segmentation and produces a binary image: a pixel has the value one if it belongs to the object and zero otherwise. After segmentation, it is known which pixel belongs to which object.
Segmentation is thus the process of partitioning the image into multiple segments. This methodology uses superpixel segmentation and multilevel segmentation. Superpixel segmentation groups the image into larger units than other segmentation techniques do.
With improvements in imaging technology, sensor resolution keeps getting better. In the method used in this project, pixels are first grouped based on color and location, followed by detection of cancerous cells at the superpixel level, which reduces the computational cost and makes the detection of the cancer-prone area faster.
As a restricted form of region segmentation, superpixels can balance the conflicting goals of reducing image complexity through pixel grouping while avoiding under-segmentation. However, many superpixel methods suffer from high computational cost, poor segmentation quality, inconsistent size and shape, or multiple difficult-to-tune parameters. A cancer-prone region may be irregular or spiculated in a medical image and appear as disjoint segments in further stages. To enforce connectivity, most methods relabel such segments with the label of the larger neighbouring cluster, which may be a normal region; it then becomes hard to detect cancerous tissue from that superpixel, since most of its pixels are noncancerous. The solution to this problem is to assign a new label to each disjoint segment, so that a small cancerous region can show up as a single superpixel.

17
Figure 4.6: Superpixel segmented image

Figure 4.7: Multilevel segmented image
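A possible MATLAB sketch of the two segmentation steps is shown below; the number of superpixels and the number of threshold levels are illustrative assumptions.

% Superpixel segmentation: group pixels by intensity and location
grayImage = rgb2gray(imread('bone_mri.jpg'));        % assumed file name
[labels, numLabels] = superpixels(grayImage, 200);   % about 200 superpixels (assumed)
mask = boundarymask(labels);
imshow(imoverlay(grayImage, mask, 'cyan'));          % show superpixel boundaries

% Multilevel segmentation: Otsu-style multilevel thresholding
thresh = multithresh(grayImage, 3);                  % 3 thresholds give 4 levels (assumed)
segmented = imquantize(grayImage, thresh);
imshow(label2rgb(segmented));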

18
CHAPTER 5

FEATURE EXTRACTION

The image feature extraction stage is very important in image processing: it uses algorithms and techniques to detect and isolate various desired portions or shapes (features) of an image, and it plays a major role in the detection of cancer. After segmentation is performed on the bone region, features can be obtained from it and diagnosis rules can be designed to detect the cancer cells in the bone exactly. These diagnosis rules can eliminate false detections of cancer cells resulting from segmentation and provide a better diagnosis.
Feature extraction is an essential stage whose results are used to predict whether an image is cancerous or non-cancerous. Feature extraction reduces the number of resources required to describe a large set of data. It is the process by which certain features of interest within an image are detected and represented for further processing. A feature is described as a function of one or more measurements; each feature specifies some quantifiable property of an object and is computed such that it quantifies some significant characteristic of the object. A feature is a significant piece of information extracted from an image which provides a more detailed understanding of the image. Geometric and intensity-based statistical features are extracted here. Shape measurements are physical dimensional measures that characterize the appearance of an object.

5.1 Extracted features

In this project, various features such as mean, standard deviation, contrast, correlation, energy, homogeneity, entropy, Root Mean Square (RMS), variance, smoothness, kurtosis, skewness and Inverse Difference Moment (IDM) are extracted.
The primary task of pattern recognition is to take an input pattern and correctly assign it to one of the possible output classes. This process can be divided into two general stages: feature selection and classification. Feature selection is critical to the whole process, since the classifier will not be able to recognize classes from poorly selected features. The criteria for choosing features given by Lippmann are: features should contain the information required to distinguish between classes, be insensitive to irrelevant variability in the input, and be limited in number, to permit efficient computation of discriminant functions and to limit the amount of training data required.
Feature extraction is an important step in the construction of any pattern classification system and aims at extracting the relevant information that characterizes each class. A good feature set contains discriminating information that can distinguish one object from other objects. It must be as robust as possible in order to prevent generating different feature codes for objects of the same class. The selected set of features should be a small set whose values efficiently discriminate among patterns of different classes but are similar for patterns within the same class. Features can be classified into two categories: local features and global features.

Mean

Mean is the measure of the average intensity value of the pixels present in the region.
The average brightness of a region is defined as the sample mean of the pixel bright-
nesses within that region.

Mean (µ) = (1/n) Σ_i X_i

X_i - intensity value of pixel i
n - total number of pixels

20
Standard deviation

Standard deviation is the measure of how much the gray levels differ from the mean. It is the estimate of the mean square deviation of the gray pixel values p(i, j) from their mean value, and it describes the dispersion within a local region.

Standard deviation (σ) = sqrt( (1/n) Σ_i (X_i - X̄)² )

X_i - intensity value of pixel i
X̄ - mean value of the pixels
n - total number of pixels

Contrast

Contrast is the measure of the difference in brightness between objects or regions and other objects within the same field of view. Contrast generally refers to the difference in luminance or gray-level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image. The contrast ratio has a strong bearing on the resolving power and detectability of an image: the larger this ratio, the easier it is to interpret the image, and images lacking adequate contrast require contrast improvement.

Contrast = Σ_i Σ_j (i - j)² p(i,j)

p(i,j) - element (i,j) of the gray-level co-occurrence matrix

21
Correlation

Correlation is the measure of the degree and type of relationship between adjacent pixels. Correlation translates the mask directly over the image without flipping it and is often used in applications where it is necessary to measure the similarity between images or parts of images. If the mask is symmetric (i.e., the flipped mask is the same as the original one), then the results of convolution and correlation are the same. This feature measures how correlated a pixel is to its neighborhood and is the measure of gray-tone linear dependencies in the image. Feature values range from -1 to 1, these extremes indicating perfect negative and positive correlation, respectively.

Correlation = Σ_i Σ_j (i - µ_i)(j - µ_j) p(i,j) / (σ_i σ_j)

p(i,j) - element (i,j) of the gray-level co-occurrence matrix
µ_i, µ_j - mean values of p_i and p_j
σ_i, σ_j - standard deviation values of p_i and p_j

Energy

Energy is the sum of squared elements in the Gray Level Co-occurrence Matrix and is also known as uniformity or the Angular Second Moment (ASM). The range of energy is [0, 1]. The more homogeneous the image is, the larger the value; when the energy equals 1, the image is believed to be a constant image.

Energy = Σ_i Σ_j p(i,j)²

p(i,j) - element (i,j) of the gray-level co-occurrence matrix

22
Homogeneity

Homogeneity is the closeness of the distribution of elements in the Gray Level Co-occurrence Matrix (GLCM) to its diagonal; it measures the similarity of pixels. A diagonal gray-level co-occurrence matrix gives a homogeneity of 1. It becomes large if the local textures have only minimal changes.

Homogeneity = Σ_i Σ_j p(i,j) / (1 + |i - j|)

p(i,j) - element (i,j) of the gray-level co-occurrence matrix

Entropy

Entropy is a statistical measure of randomness that can be used to characterize the tex-
ture of the input image. Entropy can also be used to describe the distribution variation
in a region.

RMS

RMS is the root mean square value of the image intensities. The root mean square (abbreviated RMS or rms) is defined as the square root of the mean square (the arithmetic mean of the squares of a set of numbers). The RMS is also known as the quadratic mean and is a particular case of the generalized mean with exponent 2. RMS can also be defined for a continuously varying function in terms of an integral of the squares of the instantaneous values during a cycle.

RMS = sqrt( (1/N) Σ_n X_n² )

X_n - intensity value of pixel n
N - total number of pixels

Variance

Variance measures the dispersion of the pixel values around the mean. It is a measure of gray-level contrast that can be used to establish descriptors of relative smoothness.

Variance (σ²) = (1/n) Σ_i (X_i - X̄)²

X_i - intensity value of pixel i
X̄ - mean value of the pixels
n - total number of pixels

Smoothness

Smoothness is a measure of relative smoothness of intensity in a region. In image


processing, to smooth a data set is to create an approximating function that attempts
to capture important patterns in the data, while leaving out noise or other fine-scale
structures/rapid phenomena. In smoothing, the data points of a signal are modified so that individual points higher than the adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased, leading to a smoother signal. Smoothing may
be used in two important ways that can aid in data analysis by being able to extract
more information from the data as long as the assumption of smoothing is reasonable
and by being able to provide analyses that are both flexible and robust. Many different
algorithms are used in smoothing.

Kurtosis

Kurtosis is a measure of the peakedness of the intensity distribution relative to the normal distribution.


Kurtosis = E[(x - µ)⁴] / σ⁴

x - intensity value of a pixel
µ - mean value of the pixels
σ - standard deviation of the pixel values

Skewness

Skewness is a measure of the asymmetry of a statistical distribution; for a real-valued random variable it measures the asymmetry of the probability distribution about its mean. The skewness value can be positive or negative, or even undefined.

Skewness = E[(x - µ)³] / σ³

x - intensity value of a pixel
µ - mean value of the pixels
σ - standard deviation of the pixel values

IDM

Inverse Difference Moment is a measure of image texture usually called homogeneity.


IDM features obtain the measure of the closeness of the distribution of GLCM elements
to the GLCM diagonal. The smoothness of the image is explained by this feature. The
IDM is expected to be high if the gray levels of the pixel are similar. This measure
relates inversely to the contrast measure.

IDM = Σ_i Σ_j p(i,j) / (1 + (i - j)²)

p(i,j) - element (i,j) of the gray-level co-occurrence matrix
After calculating the physical dimensional measures, texture features are also extracted from the quantized image using the GLCM method, one of the best-known texture analysis methods. The gray-level co-occurrence matrix is a second-order statistical measure introduced by Haralick and is also known as the gray-level spatial dependence matrix; it is computed from a gray-scale image. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in the image, creating a GLCM, and then extracting statistical measures from this matrix.
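A compact MATLAB sketch of extracting the GLCM texture features and the first-order statistical features listed above is given below; it illustrates the general procedure with assumed settings rather than reproducing the exact code of the project.

% Gray-level co-occurrence matrix (GLCM) texture features
grayImage = rgb2gray(imread('bone_mri.jpg'));            % assumed file name
glcm  = graycomatrix(grayImage, 'Offset', [0 1]);        % horizontal pixel pairs
stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});

% First-order (intensity) statistics on the normalized image
x = double(grayImage(:)) / 255;
featureVector = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity, ...
                 mean(x), std(x), entropy(grayImage), sqrt(mean(x.^2)), var(x), ...
                 kurtosis(x), skewness(x)];
disp(featureVector);   % one feature vector per image, used later for classification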

26
5.2 Testing results

The work was tested on MRI images of 5 bone cancer patients. A MATLAB application with a Graphical User Interface (GUI) was developed to enable users to perform the tasks interactively.

Figure 5.1: Bone cancer detection using Image Processing techniques

27
Figure 5.2: Feature extraction values for various features

Various features were extracted from the images of the 5 bone cancer patients, and the extracted feature values are tabulated in Table 5.1.

Table 5.1: Feature Extraction for 5 bone cancer patients

Features Patient1 Patient2 Patient3 Patient4 Patient5


Contrast 0.1328 0.1951 0.2651 0.23 0.2129
Correlation 0.1618 0.1522 0.1305 0.1618 0.1628
Energy 0.8927 0.92 0.8369 0.8711 0.8234
Homogeneity 0.9709 0.9763 0.9532 0.9634 0.9509
Mean 0.0022 0.0026 0.0032 0.0029 0.0024
Standard deviation 0.064 0.0634 0.0821 0.0753 0.0811
Entropy 2.6426 1.3506 3.1958 2.3544 2.7827
RMS 0.064 0.0635 0.0822 0.0754 0.9589
Variance 0.0041 0.004 0.0068 0.0057 0.0066
Smoothness 0.956 0.9822 0.9721 0.9749 0.9589
Kurtosis 21.6003 47.2263 17.2783 26.5633 14.3405
Skewness 1.2933 3.287 1.5316 1.9942 1.0993

28
The performance analysis of the extracted features is represented as a bar graph in Figure 5.3. The classification into benign and malignant cancer was done based on the extracted feature values.

Figure 5.3: Performance analysis of feature extraction

29
CHAPTER 6

CLASSIFICATION USING ARTIFICIAL NEURAL


NETWORK

6.1 Classification

Classification is the final and most important stage of our proposed system: the classifier differentiates benign tumors from malignant tumors. In general, the objective of image classification is to automatically categorize all pixels in an image into classes or themes. Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization; that is, different feature types manifest different combinations of Digital Numbers (DNs) based on their inherent spectral reflectance and emittance properties.
The traditional methods of classification mainly follow two approaches: unsupervised and supervised. The unsupervised approach attempts a spectral grouping that may have an unclear meaning from the user's point of view; having established the groups, the analyst then tries to associate an information class with each one. The unsupervised approach is often referred to as clustering and results in statistics that describe spectral, statistical clusters. In the supervised approach, the image analyst supervises the pixel categorization process by supplying representative training samples to the computer algorithm.
Unsupervised classifiers do not utilize training data as the basis for classification. Rather, this family of classifiers involves algorithms that examine the unknown pixels in an image and aggregate them into a number of classes based on the natural groupings or clusters present in the image values. It performs very well in cases where the values within a given class are close together in the measurement space and data in different classes are comparatively well separated.
Supervised classification can be defined as the process of using samples of known identity to classify pixels of unknown identity. Samples of known identity are those pixels located within training areas; pixels located within these areas are termed the training samples and are used to guide the classification algorithm in assigning specific spectral values to the appropriate informational class.
Various texture characteristics such as mean, standard deviation, contrast, correlation, energy, homogeneity, entropy, RMS, variance, smoothness, kurtosis, skewness and IDM are extracted and applied to the Artificial Neural Network to train the model.

6.2 Artificial Neural Network

An Artificial Neural Network (ANN), a mimic of the Biological Neural Network, is a massively parallel distributed processing system made up of highly interconnected neural computing elements that have the ability to learn and thereby acquire knowledge and make it available for use. Artificial neural networks are simplified imitations of the central nervous system and are motivated by the kind of computing performed by the human brain. Neurons are the structural entities of the human brain and perform computations such as cognition, logical inference and pattern recognition; ANNs are simplified models of such biological nervous systems. Hence the technology built on this simplified imitation of neurons is termed artificial neural system technology, or simply ANNs or neural networks. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process, where learning involves adjustments to the synaptic connections that exist between the neurons.

Characteristics of Neural Networks (NNs)

Neural networks are modeled on human cognition through biological neurons and perform similarly to the human brain. Their characteristics therefore include the ability to store knowledge and make it available for use whenever necessary, a propensity to identify patterns even in the presence of noise, and an aptitude for taking past experience into consideration to make inferences and judgments about new situations.
1. NNs can map input patterns to their associated output patterns, thereby exhibiting mapping capabilities.
2. NNs are trained with known examples of a problem and can thus identify new objects on which they were not previously trained.
3. NNs possess the capability to generalize, thereby predicting new outcomes from past trends.
4. NNs are robust systems and are fault tolerant, as they can recall a full pattern from incomplete, partial or noisy patterns.
5. NNs can process information in parallel, at high speed and in a distributed manner.
Artificial Neural Networks are relatively crude electronic models based on the neural
structure of the brain. The brain basically learns from experience. It is natural proof that
some problems that are beyond the scope of current computers are indeed solvable by
small energy efficient packages. This brain modeling also promises a less technical way
to develop machine solutions. This new approach to computing also provides a more
graceful degradation during system overload than its more traditional counterparts.
These biologically inspired methods of computing are thought to be the next major ad-
vancement in the computing industry. Even simple animal brains are capable of func-
tions that are currently impossible for computers. Computers do rote things well, like
keeping ledgers or performing complex math. But computers have trouble recognizing
even simple patterns much less generalizing those patterns of the past into actions of
the future.
Now, advances in biological research promise an initial understanding of the natural
thinking mechanism. This research shows that brains store information as patterns.
Some of these patterns are very complicated and allow us the ability to recognize indi-
vidual faces from many different angles. This process of storing information as patterns,
utilizing those patterns, and then solving problems encompasses a new field in com-
puting. This field, as mentioned before, does not utilize traditional programming but
involves the creation of massively parallel networks and the training of those networks
to solve specific problems. This field also utilizes words very different from traditional
computing, words like behave, react, self-organize, learn, generalize, and forget.
The fundamental processing element of a neural network is a neuron. This building
block of human awareness encompasses a few general capabilities. Basically, a biolog-
ical neuron receives inputs from other sources, combines them in some way, performs
a generally nonlinear operation on the result, and then outputs the final result.

32
6.3 Feedforward Neural Network

Feedforward ANNs allow signals to travel in only one direction, from input to output; there is no feedback (no loops), i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs and are extensively used in pattern recognition.
This type of organization is also referred to as bottom-up or top-down. The input layer neurons receive the input signals and the output layer neurons produce the output signals; in its simplest form, with no hidden layer, it is known as a single-layer feed-forward network and is acyclic in nature.
A feedforward neural network is an artificial neural network wherein connections be-
tween the units do not form a cycle. As such, it is different from recurrent neural
networks.
The feedforward neural network was the first and simplest type of artificial neural net-
work devised. In this network, the information moves in only one direction, forward,
from the input nodes, through the hidden nodes (if any) and to the output nodes. There
are no cycles or loops in the network.
Artificial Neural Networks contain the three normal types of layers - input, hidden, and
output. The layer of input neurons receives the data either from input files or directly from electronic sensors in real-time applications. The output layer sends information
directly to the outside world, to a secondary computer process, or to other devices such
as a mechanical control system. Between these two layers can be many hidden layers.
These internal layers contain many of the neurons in various interconnected structures.
The inputs and outputs of each of these hidden neurons simply go to other neurons. In
most networks each neuron in a hidden layer receives the signals from all of the neu-
rons in a layer above it, typically an input layer. After a neuron performs its function
it passes its output to all of the neurons in the layer below it, providing a feed forward
path to the output.

33
These lines of communication from one neuron to another are important aspects of
neural networks. They are the glue to the system. They are the connections which pro-
vide a variable strength to an input.

6.3.1 Multilayer Feed-forward Network

This network is made up of multiple layers: in addition to the input and output layers, one or more intermediary layers, called hidden layers (i.e., hidden neurons), are present. These hidden layers perform intermediary computations before directing the input towards the output layer.

Figure 6.1: Architecture of Feed forward Neural Network

Figure 6.2: Feed forward Neural Network

34
6.4 Training the model

Once a network has been structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly; then the training, or learning, begins. The ANN is trained by exposing it to a set of existing data (based on the follow-up history of cancer patients) where the outcome is known. Multilayer networks use a variety of learning techniques; in the popular feed-forward arrangement used here, information flows from the input layer towards the output layer, and this is one of the most effective approaches to machine learning.
There are two approaches to training: supervised and unsupervised. Supervised training involves providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs along with the inputs. In unsupervised training, the network has to make sense of the inputs without outside help.
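A minimal sketch of supervised training with MATLAB's neural network tools follows; the network size, the placeholder feature matrix and the label vector are assumptions made for illustration, not the project's actual data.

% features: one column of extracted feature values per image (13 features assumed)
% targets : 1 for malignant, 0 for benign (labels assumed from patient follow-up)
features = rand(13, 10);            % placeholder data for illustration only
targets  = [1 0 1 1 0 1 0 0 1 1];

net = patternnet(10);               % feed-forward network with 10 hidden neurons (assumed)
net.divideParam.trainRatio = 0.7;   % split the data for training/validation/testing
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, features, targets);   % supervised training
outputs = net(features);
plotconfusion(targets, outputs);             % confusion matrix, as in Figure 6.4
plotroc(targets, outputs);                   % ROC curve, as in Figure 6.5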

35
Figure 6.3: Neural Network training using simulink toolbox

36
After creating and training the neural network from the training file, the training results are shown in Figure 6.3. The receiver operating characteristic curve, which plots the true positive rate against the false positive rate, is shown in Figure 6.5. The test feature data set consists of ten images, and the identification results obtained using the neural network approach show that it can be used efficiently in a cancer detection system.
For testing, a set of bone MRI images of unknown category is passed through the ANN classification system. The hidden layers and the log-sigmoid transfer function are appropriate for improving the correct classification of the disease stages. Finally, the performance of the system is evaluated with the confusion matrix.
A confusion matrix is a table that is often used to describe the performance of a classi-
fication model (or classifier) on a set of test data for which the true values are known.
The following equations are used to evaluate the correct and incorrect classifications of this system. The representation of the confusion matrix is given in Table 6.1.

Table 6.1: Performance analysis using confusion matrix

              Predicted No          Predicted Yes         Total

Actual No     True Negative (TN)    False Positive (FP)   TN+FP
Actual Yes    False Negative (FN)   True Positive (TP)    FN+TP
Total         TN+FN                 FP+TP                 TN+FP+FN+TP

Accuracy = (TN + TP) / (TP + FN + FP + TN)

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)
True positives (TP): These are cases in which we predicted yes (they have the disease),
and they do have the disease.
True negatives (TN): We predicted no, and they don’t have the disease.
False positives (FP): We predicted yes, but they don’t actually have the disease. (known
as a Type I error.).
False negatives (FN): We predicted no, but they actually do have the disease. (known
as a Type II error.)
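Given the counts from the confusion matrix, the three measures can be computed directly; the MATLAB sketch below uses assumed counts for illustration, not the project's results.

% Illustrative confusion-matrix counts (assumed values)
TP = 4; TN = 3; FP = 1; FN = 2;

accuracy    = (TP + TN) / (TP + TN + FP + FN);
sensitivity = TP / (TP + FN);    % true positive rate
specificity = TN / (TN + FP);    % true negative rate
fprintf('Accuracy %.2f, Sensitivity %.2f, Specificity %.2f\n', ...
        accuracy, sensitivity, specificity);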

37
Figure 6.4: Confusion Matrix for 5 bone cancer patients

38
Figure 6.5: ROC curve for 5 bone cancer patients

39
CHAPTER 7

CONCLUSION AND FUTURE RESEARCH

This chapter gives a brief summary of the system developed in this project to detect and
to classify the cancer and the results obtained. At the end, we provide suggestions for
future work.

7.1 Summary

We have developed a system to classify bone cancer using MR images. The system comprises three stages: pre-processing, segmentation, and feature extraction with classification. The pre-processing stage consists of various techniques such as filtering, gray conversion, edge detection and morphological operations to improve the quality of the image, and segmentation is implemented to obtain the diagnosis result. Some of the important features are then extracted and calculated for the classification of the cancer. A feed-forward neural network is used for the classification. This system can identify the condition at an early stage, so it can play an important role in avoiding serious stages and in reducing the spread of bone cancer in the population. Thus, a fast and robust system for detecting bone cancer with high accuracy was developed.

7.2 Future Research

With the process used, complexity is reduced and diagnostic confidence is improved. This project used a Gabor filter for noise reduction in the images, the Canny detector for edge detection, and superpixel segmentation to segment the image. In future work, the classification can also be carried out with Pearson and Spearman correlation based methods to detect cancer-prone regions in MR images, and a comparison of the results with PET images will be done.
REFERENCES

[1] Ezhil E.Nithila, S.S.Kumar, “Automatic detection of solitary pulmonary nodules


using swarm intelligence optimized neural networks on CT images”, Engineering
Science and Technology, an international journal, 2016

[2] Sinthia P and K. Sujatha, “A novel approach to detect the bone cancer using K-
means algorithm and edge detection method”, ARPN Journal of Engineering and
applied science,11(13), July 2016.

[3] Kishor Kumar Reddy, Anisha P R, Raju G V S, “A novel approach for detecting the
tumor size and bone cancer stage using region growing algorithm”, International
Conference on Computational Intelligence and Communication Networks, 2015.

[4] Maduri Avula, Narasimha Prasad Lakkakula, Murali Prasad raja,“Bone cancer de-
tection from MRI scan imagery using Mean Pixel Intensity”, Asia modeling sym-
posium,2014

[5] Abdulmuhssin Binhssan, “Enchondroma tumor Detection”, International journal of


advanced research in computer and communication Engineering, 4(6), june 2015.

[6] Mokhled S. Al-tarawneh, “Lung cancer detection using image processing tech-
niques”, Leonardo electronic journal of practices and technologies, 20, 147-158,
2012.

[7] Fatma Taher and Naoufel Werghi. “Lung cancer detection by using Artificial Neural
Network and Fuzzy Clustering Methods”, American Journal of Biomedical Engi-
neering, 2(3), 136-142, 2012.

[8] Anita chaudhary, Sonit sukhraj singh, “Lung cancer detection on CT images by
using image processing”, International conference on computing sciences, 2012.

[9] Md. Badrul Alam Miah and Mohammad Abu Yousuf, “Detection of lung cancer
from CT image using Image Processing and Neural Network”, International con-

41
ference on Electrical engineering and Information & communication Technology,
2015.

[10] Nooshin Hadavi, Md.Jan Nordin, Ali Shojaeipour, “Lung cancer diagnosis using
CT-scan images based on cellular learning automata”, IEEE, 2014.

[11] K. Jalal Deen and Dr. R. Ganesan, “An automated lung cancer detection from
CT images based on using artificial neural network and fuzzy clustering methods”,
International journal of applied engineering research, 9(22), 17327-17343,2014.

[12] Krupali D. Mistry, Bijal J. Talati, “An approach to detect bone tumor using com-
parative analysis of segmentation technique”, International journal of innovative
research in computer and communication engineering, 4(5), 2016.

[13] Bhagyashri G. Patil, “Cancer Cells Detection Using Digital Image Processing
Methods”, International Journal of Latest Trends in Engineering and Technology,
3(4), 2014.

[14] Amit Verma and Gayatri Khanna, “A Survey on Digital Image Processing Tech-
niques for Tumor Detection”, Indian journal of science and technology, 9(14), 2016.

[15] Glenn w. Milligan, S. C. Soon, Lisa m. sokol. “The effect of cluster size, dimen-
sionality, and the number of clusters on recovery of true cluster structure”, IEEE
transactions on pattern analysis and machine intelligence, 5(1).

42
