
BRAIN TUMOR IMAGE SEGMENTATION

USING DEEP NETWORKS

A DISSERTATION SUBMITTED TO
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, KAKINADA
A Project Report submitted in partial fulfillment of the requirements for the award of the
Degree of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
Submitted by

Y. SRUJANA 18NG1A0559

Under the Esteemed Guidance of


Dr K P N V SATYA SREE
Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS)
(Affiliated to JNTUK, Kakinada; Approved by A.I.C.T.E., New Delhi)
TELAPROLU, UNGUTURU MANDAL, KRISHNA DISTRICT - 521109
2018-2022
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY
(Affiliated to JNTUK, Kakinada; Approved by A.I.C.T.E., New Delhi)
TELAPROLU, UNGUTURU MANDAL, KRISHNA DISTRICT - 521109
2018-2022

CERTIFICATE

This is to certify that this project entitled “BRAIN TUMOR IMAGE SEGMENTATION
USING DEEP NETWORKS” is the bonafide work of Y. Srujana (18NG1A0559), submitted
in partial fulfillment of the requirements for the award of the Degree of Bachelor of
Technology in Computer Science & Engineering during the academic year 2018-22.

Project Guide: Dr. K P N V SATYA SREE

Head of the Department: Dr. S M ROY CHOUDRI

Signature of External Examiner

https://usharama.edu.in/home Tel: 0866 252755, +91 9949712255


DECLARATION

This is to certify that the project report entitled “BRAIN TUMOR IMAGE
SEGMENTATION USING DEEP NETWORKS” is the work done by me during the
academic year 2018-2022 and is submitted in partial fulfillment of the requirements for the
award of the degree of Bachelor of Technology in COMPUTER SCIENCE AND
ENGINEERING from JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY,
KAKINADA.

BY

Y. Srujana (18NG1A0559)
ACKNOWLEDGEMENT

We are pleased to acknowledge our sincere thanks to our Honorable Chairman SRI.
S. RAMABRAHMAM for his guidance and advice and for providing sufficient resources.

We are extremely thankful to Dr. K RAJASEKHARA RAO, Director of USHA
RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY, TELAPROLU, for giving a
golden opportunity for our education and project work.

We wish to avail this opportunity to express our thanks to Dr. G V K S V PRASAD,
Principal, URCE, for his continuous support and valuable suggestions during the entire
period of the project work.

We take this opportunity to express our gratitude to Dr. S M ROY CHOUDRI, Head
of the Department, and to our guide Dr. K P N V SATYA SREE, Professor in Computer
Science and Engineering, for her valuable support and motivation at every stage of the
successful completion of the project.

We also express our gratitude to all other teaching staff and lab technicians for
their constant support and advice throughout the project.

Project Associate

Y. Srujana (18NG1A0559)
BRAIN TUMOR IMAGE SEGMENTATION

USING
DEEP NETWORKS
ABSTRACT

Medical imaging is gaining importance with the increasing demand for automated,
reliable, fast, and efficient diagnosis that can provide insight into an image beyond what the
human eye can see. Brain tumors are the second leading cause of cancer-related deaths in
men aged 20 to 39, and the leading cause of cancer deaths among women in the same age
group. Brain tumors are painful and can lead to various complications if not treated
properly. The diagnosis of the tumor is a very important part of its treatment, and
identification plays an important role in distinguishing benign from malignant tumors. A
prime reason behind the rise in the number of cancer patients worldwide is ignorance of the
need to treat a tumor in its early stages. This paper discusses a machine learning approach
that can inform the user about the details of the tumor using brain MRI. The methods
include noise removal and sharpening of the image, along with the basic morphological
operations of erosion and dilation to obtain the background. Subtracting the background and
its negative from different sets of images yields the extracted image. Plotting the contour
and c-label of the tumor and its boundary provides information that supports better
visualization when diagnosing cases. This process helps in identifying the size, shape, and
position of the tumor. It helps both the medical staff and the patient understand the
seriousness of the tumor with the help of different color labels for different levels of
elevation. A GUI for the contour of the tumor and its boundary can provide information to
the medical staff at the click of a button.

Keywords: classification, convolutional neural network, feature extraction, machine
learning, magnetic resonance imaging, segmentation, texture features.
TABLE OF CONTENTS

TOPIC PAGE NO
1. INTRODUCTION 01
1.1. Literature Survey 03
1.1.1. Machine Learning 04
1.1.2. Features of Machine Learning 10
1.1.3. Existing System 11
1.1.4. Proposed System 12
2. AIM & SCOPE 13
2.1. Requirement Analysis 14
2.1.1. Functional Requirement Analysis 14
2.1.2. User Requirement Analysis 14
2.1.3. Non-Functional Requirement Analysis 15
2.2. Module Description 15
2.3. Feasibility Study 16
2.3.1. Technical Feasibility 17
2.3.2. Operational Feasibility 17
2.3.3. Behavioural Feasibility 17

2.4 Process Model Used 17

2.5 Software and Hardware Requirements 19


2.5.1. Software Requirements 19
2.5.2. Hardware Requirements 19
2.6 SRS Specification 19
3. DESIGN PHASE 20
3.1. Design phase Purpose 21
3.2. Design Concepts 22
3.3. Design Constraints 23
3.4. Conceptual Design 25
3.5. System Analysis Methods 26
3.5.1. Use case Diagram 26
3.5.2. Activity Diagram 27

3.6. System design 28


3.6.1. System Structure 28
3.6.2. Class Diagram 29
3.6.3. Sequence Diagram 30
4. IMPLEMENTATION 31
4.1. Tools used 32
4.2. Pseudo code 36
4.3. Component Diagram 38
4.4. Deployment Diagram 39
5. SCREEN SHOTS 40
6. TESTING 46
7. SUMMARY & CONCLUSION 51
8. FUTURE ENHANCEMENTS 53
9. BIBLIOGRAPHY 56
LIST OF FIGURES

FIGURE NO FIGURE NAME

1.1.1 Flow chart of Supervised Learning Algorithms

1.1.2 Traditional Programming Vs Machine Learning

1.1.3 Machine Learning Model

1.1.4 Proposed Work

3.5.1 Use Case Diagram

3.5.2 Activity Diagram

3.5.3 System Architecture

3.5.4 Class Diagram

3.5.5 Sequence Diagram

4.3.1 Component Diagram

4.4.1 Deployment Diagram

5.1 Project Bar

5.2 Sample Folder

5.3 Open Folder Location in CMD

5.4 Run Flask

5.5 Main Output Stream

5.6 Sample Folder

5.7 Selecting 0th Image from the Folder

5.8 Output for Selected Image

6.1 Testing Process


BRAIN TUMOR IMAGE SEGMENTATION USING DEEP NETWORKS

CHAPTER - 1

INTRODUCTION

USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY DEPT OF CSE 1



INTRODUCTION

The cells in the body grow and divide in an orderly manner and form new cells.
These new cells help to keep the human body healthy and working properly. When some
cells lose their capability to regulate their growth, they grow without any order. The extra
cells form a mass of tissue that is called a tumor. Tumors can be benign or malignant.
Malignant tumors lead to cancer, while benign tumors are not cancerous. An important
factor in diagnosis is the medical image data obtained from various biomedical devices that
use different imaging techniques, such as X-ray, CT scan, and MRI. Magnetic resonance
imaging (MRI) is a technique that depends on the measurement of magnetic flux vectors
generated after an appropriate excitation of strong magnetic fields and radio-frequency
pulses in the nuclei of hydrogen atoms present in the water molecules of a patient's body.
An MRI scan is much better than a CT scan for diagnosis, as it does not use any radiation.
Radiologists can evaluate the brain using MRI, and the technique can determine the
presence of tumors within the brain. However, MRI also contains noise caused by operator
intervention, which may lead to inaccurate classification. A large volume of MRI data has to
be analyzed; thus, automated systems are needed because they are less expensive.
Automated detection of tumors in MR images is important, as high accuracy is required
when dealing with human life. Supervised and unsupervised machine learning techniques
can be employed to classify a brain MR image as either normal or abnormal. In this paper,
an efficient automated classification technique for brain MRI is proposed using machine
learning algorithms. A supervised machine learning algorithm is used for classification of
the brain MR image.


1.1 LITERATURE SURVEY

Krizhevsky et al. (2012) achieved state-of-the-art results in image classification by
training a large, deep convolutional neural network to classify the 1.2 million high-resolution
images of the ImageNet LSVRC-2010 contest into 1000 different classes. On the test data,
they achieved top-1 and top-5 error rates of 37.5% and 17.0%, which was considerably better
than the previous state of the art. They also entered a variant of this model in the
ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared
to 26.2% for the second-best entry. The neural network, which had 60 million parameters and
650,000 neurons, consisted of five convolutional layers, some of which were followed by
max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make
training faster, they used non-saturating neurons and a very efficient GPU implementation of
the convolution operation. To reduce overfitting in the fully-connected layers, they employed
a recently developed regularization method called "dropout" that proved to be very effective.

Simonyan & Zisserman (2014) investigated the effect of convolutional network
depth on accuracy in the large-scale image recognition setting. These findings were the
basis of their ImageNet Challenge 2014 submission, where their team secured first and
second places in the localisation and classification tracks respectively. Their main
contribution was a thorough evaluation of networks of increasing depth using an architecture
with very small (3×3) convolution filters, which showed that a significant improvement over
the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers,
after first training smaller versions of the network with fewer weight layers.

Pan & Yang (2010) presented a survey focused on categorizing and reviewing the
current progress on transfer learning for classification, regression, and clustering problems.
In this survey, they discussed the relationship between transfer learning and other related
machine learning techniques such as domain adaptation, multitask learning, and sample
selection bias, as well as covariate shift, and they also explored some potential future issues
in transfer learning research.


1.1.1. MACHINE LEARNING

Tom Mitchell defines machine learning as follows: "A computer program is said to
learn from experience E with respect to some class of tasks T and performance measure P, if
its performance at tasks in T, as measured by P, improves with experience E." Machine
learning is built on correlations and relationships; most machine learning algorithms in
existence are concerned with finding and/or exploiting relationships within datasets. Once a
machine learning algorithm can pinpoint certain correlations, the model can either use these
relationships to predict future observations or generalize over the data to reveal interesting
patterns. Machine learning includes various algorithms such as Linear Regression, Logistic
Regression, the Naive Bayes Classifier, Bayes' theorem, KNN (K-Nearest Neighbour
classifier), Decision Trees, Entropy, ID3, SVM (Support Vector Machines), the K-means
algorithm, Random Forest, etc.

The name machine learning was coined in 1959 by Arthur Samuel. Machine
learning explores the study and construction of algorithms that can learn from and make
predictions on data. Machine learning is closely related to (and often overlaps with)
computational statistics, which also focuses on prediction-making through the use of
computers. It has strong ties to mathematical optimization, which delivers methods, theory,
and application domains to the field. Machine learning is sometimes conflated with data
mining, where the latter subfield focuses more on exploratory data analysis and is known as
unsupervised learning.

Within the field of data analytics, machine learning is a method used to devise
complex models and algorithms that lend themselves to prediction; in commercial use, this is
known as predictive analytics. These analytical models allow researchers, data scientists,
engineers, and analysts to "produce reliable, repeatable decisions and results" and uncover
"hidden insights" through learning from historical relationships and trends in the data.

Machine learning implementations are classified into several major categories,
depending on the nature of the learning "signal" or "response" available to the learning
system, which are as follows:

Supervised learning: When an algorithm learns from example data and associated
target responses, which can consist of numeric values or string labels such as classes or tags,
in order to later predict the correct response when posed with new examples, it comes under
the category of supervised learning. This approach is indeed similar to human learning under
the supervision of a teacher. The teacher provides good examples for the student to memorize,

and the student then derives general rules from these specific examples.

Unsupervised learning: When an algorithm learns from plain examples without any
associated response, it is left to the algorithm to determine the data patterns on its own. This
type of algorithm tends to restructure the data into something else, such as new features that
may represent a class or a new series of uncorrelated values. Unsupervised methods are quite
useful in providing humans with insights into the meaning of data, and they supply new
useful inputs to supervised machine learning algorithms. As a kind of learning, it resembles
the methods humans use to figure out that certain objects or events belong to the same class,
such as by observing the degree of similarity between objects. Some recommendation
systems that you find on the web in the form of marketing automation are based on this type
of learning.
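The grouping-by-similarity idea can be sketched with a toy k-means run. All data points below are invented for illustration; no labels are supplied, and the algorithm still recovers the two groups:

```python
# Minimal k-means sketch (hypothetical 1-D data) showing how an
# unsupervised algorithm groups points purely by similarity.
def kmeans_1d(points, k=2, iters=20):
    # Initialise centroids with the first k points.
    centroids = points[:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups around 1 and 10; the algorithm finds them unaided.
centroids, clusters = kmeans_1d([0.9, 1.1, 1.0, 9.8, 10.2, 10.0])
```

The centroids converge near 1.0 and 10.0, i.e. the two underlying groups, even though the algorithm was never told which point belongs where.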

Reinforcement learning: Here you present the algorithm with examples that lack
labels, as in unsupervised learning; however, you can accompany an example with positive
or negative feedback according to the solution the algorithm proposes. Reinforcement
learning is connected to applications for which the algorithm must make decisions (so the
product is prescriptive, not just descriptive, as in unsupervised learning), and the decisions
bear consequences. In the human world, it is just like learning by trial and error. Errors help
you learn because they carry a penalty (cost, loss of time, regret, pain, and so on), teaching
you that a certain course of action is less likely to succeed than others.

In this case, an application presents the algorithm with examples of specific
situations, such as having the gamer stuck in a maze while avoiding an enemy. The
application lets the algorithm know the outcome of the actions it takes, and learning occurs
while it tries to avoid what it discovers to be dangerous and to pursue survival. You can have
a look at how the company Google DeepMind has created a reinforcement learning program
that plays old Atari video games. When watching the video, notice how the program is
initially clumsy and unskilled but steadily improves with training until it becomes a
champion.
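The trial-and-error loop described above can be sketched with a minimal Q-learning example. This is not the DeepMind Atari setup, just a hypothetical four-state corridor with a step penalty and a goal reward, all numbers invented:

```python
import random

# Minimal Q-learning sketch: a 4-state corridor where the agent learns
# by trial and error, with a -1 penalty per step (the "cost" mentioned
# above) and +10 for reaching the goal state.
random.seed(0)

N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]                       # 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):                     # episodes of trial and error
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = (random.randrange(2) if random.random() < eps
             else max(range(2), key=lambda i: Q[s][i]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 10.0 if s2 == GOAL else -1.0  # the penalty teaches the agent
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: which action each state prefers.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
```

Like the initially clumsy Atari player, the agent starts with all-zero estimates, but after enough penalized wanderings the learned policy moves right (toward the goal) from every state.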

Semi-supervised learning: Here an incomplete training signal is given: a training
set with some (often many) of the target outputs missing. There is a special case of this
principle known as transduction, where the entire set of problem instances is known at
learning time, except that part of the targets is missing.

Supervised learning: The majority of

practical machine learning uses supervised learning. Supervised learning is where you have
input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping
function from the input to the output.
Y = f(X)

The goal is to approximate the mapping function so well that when you have new
input data (x), you can predict the output variables (Y) for that data. It is called
supervised learning because the process of an algorithm learning from the training dataset can
be thought of as a teacher supervising the learning process. We know the correct answers; the
algorithm iteratively makes predictions on the training data and is corrected by the teacher.
Learning stops when the algorithm achieves an acceptable level of performance.
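The teacher-correction loop above can be sketched with a tiny perceptron. The dataset and learning rule are a minimal illustration (toy data invented here), not the method used later in this project:

```python
# "Teacher-corrected" supervised learning in miniature: a perceptron
# repeatedly predicts on the training data and is nudged by the known
# correct answers (here, y = 1 only when both inputs are 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0

for _ in range(20):                       # learning passes over the data
    errors = 0
    for (x1, x2), y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        if pred != y:                     # the "teacher" flags the mistake
            w[0] += (y - pred) * x1
            w[1] += (y - pred) * x2
            b += (y - pred)
            errors += 1
    if errors == 0:                       # acceptable performance reached
        break

predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in data]
```

The loop stops exactly as the text describes: once a full pass produces no corrections, the learned mapping reproduces every target label.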

Types of Supervised Learning:

Classification: A supervised learning task where the output has defined labels
(discrete values). For example, in Figure A above, the output Purchased has defined labels,
0 or 1: 1 means the customer will purchase and 0 means the customer won't purchase. The
goal here is to predict discrete values belonging to a particular class and to evaluate on the
basis of accuracy. Classification can be either binary or multi-class. In binary classification,
the model predicts either 0 or 1 (yes or no), while in multi-class classification the model
predicts one of more than two classes. Example: Gmail classifies mail into more than one
class, such as social, promotions, updates, and forum.
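A minimal multi-class sketch in the spirit of the Gmail example: a nearest-centroid classifier on invented 2-D points, where each new point receives exactly one of three labels:

```python
# Multi-class classification sketch: nearest-centroid on toy 2-D points
# (data and label names invented for illustration). Every new point
# gets exactly one discrete label, as in the Gmail example above.
train = {
    "social":     [(0, 0), (0, 1), (1, 0)],
    "promotions": [(5, 5), (5, 6), (6, 5)],
    "updates":    [(0, 9), (1, 9), (0, 10)],
}

# Training: compute one centroid (mean point) per class.
centroids = {label: (sum(x for x, _ in pts) / len(pts),
                     sum(y for _, y in pts) / len(pts))
             for label, pts in train.items()}

def classify(point):
    # Prediction: assign the label of the closest centroid.
    px, py = point
    return min(centroids,
               key=lambda c: (px - centroids[c][0]) ** 2
                           + (py - centroids[c][1]) ** 2)
```

Restricting `train` to two classes would turn the same code into a binary classifier; nothing else changes, which is why the binary/multi-class distinction is about the label set, not the algorithm.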

Regression: A supervised learning task where the output has a continuous value.
For example, in Figure B above, the output Wind Speed does not have a discrete value but is
continuous over a particular range. The goal here is to predict a value as close to the actual
output value as our model can, and evaluation is then done by calculating the error value.
The smaller the error, the greater the accuracy of our regression model. Imagine you're car
shopping and have decided that gas mileage is a deciding factor in your decision to buy. If
you wanted to predict the miles per gallon of some promising rides, how would you do it?
Since you know the different features of the car (weight, horsepower, displacement,
etc.), one possible method is regression. By plotting the average MPG of each car against its
features, you can then use regression techniques to find the relationship between the MPG
and the input features. The regression function here could be represented as $Y = f(X)$,
where Y would be the MPG and X the input features such as weight, displacement, and
horsepower. The target function is $f$, and this curve helps us predict whether it's
beneficial to buy or not.


This mechanism is called regression.
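The MPG example can be sketched with a closed-form least-squares fit of $Y = f(X)$; the car data below are invented for illustration:

```python
# Regression sketch: fit Y = f(X) by ordinary least squares, where X is
# car weight (in 1000 lb) and Y is MPG. Numbers invented for illustration.
weights = [2.0, 2.5, 3.0, 3.5, 4.0]
mpg     = [34.0, 30.0, 26.0, 22.0, 18.0]     # heavier car -> fewer MPG

n = len(weights)
mean_x = sum(weights) / n
mean_y = sum(mpg) / n

# Closed-form least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weights, mpg))
         / sum((x - mean_x) ** 2 for x in weights))
intercept = mean_y - slope * mean_x

def predict_mpg(weight):
    # The learned continuous target function f.
    return intercept + slope * weight
```

The fitted line predicts a continuous value for any weight, and the fit would be judged by its error against the actual MPG values, exactly the evaluation described above.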

Fig 1.1.1 Flow Chart of Supervised Learning Algorithm

Classification:

Data mining is the process of extracting useful knowledge from huge amounts of
data. It is an integration of multiple disciplines such as statistics, machine learning, neural
networks, and pattern recognition. Data mining extracts biomedical and health-care
knowledge for clinical decision making and generates scientific hypotheses from large
medical datasets.

The Classification algorithm is a supervised learning technique used to identify the
category of new observations on the basis of training data. In classification, a program
learns from the given dataset or observations and then classifies new observations into a
number of classes or groups, such as Yes or No, 0 or 1, Spam or Not Spam, cat or dog, etc.
Classes can be called targets, labels, or categories.

Unlike regression, the output variable of classification is a category, not a value, such
as "Green or Blue" or "fruit or animal". Since the classification algorithm is a supervised
learning technique, it takes labeled input data, which means the input comes with the
corresponding output. In a classification algorithm, a discrete output function (y) is mapped
to the input variable (x). The main goal of the classification algorithm is to identify the
category of a given dataset; these algorithms are mainly used to predict the output for
categorical data.


Association rule mining and classification are two major techniques of data mining.
Association rule mining is an unsupervised learning method for discovering interesting
patterns and their associations in large databases.

Classification is a supervised learning method used to find class labels for unknown
samples. Classification is the task of assigning an object to one of several predefined
categories. It is a pervasive problem that encompasses many applications.

Classification is defined as the task of learning a target function F that maps each
attribute set A to one of the predefined class labels C. The target function is also known as
the classification model.

A classification model is mainly useful for two purposes:

1) Descriptive modeling.
2) Predictive modeling.

Classification is the process of recognizing, understanding, and grouping ideas and


objects into pre-set categories or “sub-populations.” Using pre-categorized training datasets,
machine learning programs use a variety of algorithms to classify future datasets into
categories.

Classification algorithms in machine learning use input training data to predict the
likelihood that subsequent data will fall into one of the predetermined categories. One of the
most common uses of classification is filtering emails into “spam” or “non-spam.”

In short, classification is a form of “pattern recognition,” with classification


algorithms applied to the training data to find the same pattern (similar words or sentiments,
number sequences, etc.) in future sets of data.

Lazy Learners: A lazy learner first stores the training dataset and waits until it receives the
test dataset. In the lazy learner's case, classification is done on the basis of the most closely related data
stored in the training dataset. It takes less time in training but more time for predictions.
Example: K-NN algorithm, Case-based reasoning

Eager Learners: Eager learners develop a classification model from a training dataset
before receiving a test dataset. Opposite to lazy learners, an eager learner takes more time in
learning and less time in prediction. Examples: Decision Trees, Naïve Bayes, ANN.
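The lazy-versus-eager contrast can be sketched with a 1-nearest-neighbour classifier (lazy: all work at prediction time) and a one-rule threshold "stump" (eager: fitted once up front), on invented 1-D data:

```python
# Lazy vs eager learners on toy 1-D data (invented for illustration).
train_x = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
train_y = [0,   0,   0,   1,   1,   1]

# Lazy (1-NN): "training" is just remembering the dataset; every
# prediction scans all stored examples, so predicting is the slow part.
def knn_predict(x):
    i = min(range(len(train_x)), key=lambda j: abs(x - train_x[j]))
    return train_y[i]

# Eager (decision stump): spend time up front finding the best
# threshold, after which each prediction is a single comparison.
def fit_stump():
    best = None
    for t in train_x:
        acc = sum((x > t) == bool(y) for x, y in zip(train_x, train_y))
        if best is None or acc > best[1]:
            best = (t, acc)
    return best[0]

threshold = fit_stump()

def stump_predict(x):
    return int(x > threshold)
```

The stump does its work in `fit_stump` (eager), while the k-NN defers everything to `knn_predict` (lazy), mirroring the training-time versus prediction-time trade-off described above.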


Classification can be performed on structured or unstructured data. Classification is a
technique where we categorize data into a given number of classes. The main goal of a
classification problem is to identify the category/class under which new data will fall.

Few of the terminologies encountered in machine learning – classification:

Classifier: An algorithm that maps the input data to a specific category.

Classification model: A classification model tries to draw some conclusion from the input
values given for training. It will predict the class labels/categories for the new data.

Feature: A feature is an individual measurable property of a phenomenon being observed.


Binary classification: Classification task with two possible outcomes. E.g., gender
classification (Male / Female).

Multi-class classification: Classification with more than two classes. In multi-class
classification, each sample is assigned to one and only one target label. E.g., an animal can be
a cat or a dog but not both at the same time.

Multi-label classification: Classification task where each sample is mapped to a set of target
labels (more than one class). E.g., A news article can be about sports, a person, and location
at the same time.
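A minimal multi-label sketch: a keyword-based tagger in which one headline can carry several labels at once (keywords and labels invented for illustration):

```python
# Multi-label classification sketch: unlike multi-class, one sample may
# map to SEVERAL labels at once, as in the news-article example above.
LABEL_KEYWORDS = {
    "sports":   {"match", "tournament", "goal"},
    "person":   {"messi", "minister", "ceo"},
    "location": {"paris", "india", "stadium"},
}

def multilabel_tag(headline):
    words = set(headline.lower().split())
    # The result is a SET of labels, possibly empty, never forced to one.
    return {label for label, keys in LABEL_KEYWORDS.items()
            if words & keys}
```

A single headline about a person scoring a goal in a city legitimately receives all three labels, which a multi-class classifier could never express.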

Applications of Classification Algorithms:

 Email spam classification
 Bank customers' loan repayment willingness prediction
 Cancer tumor cell identification
 Sentiment analysis
 Drug classification
 Facial key-point detection
 Pedestrian detection in automotive car driving


1.1.2. FEATURES OF MACHINE LEARNING

• It is nothing but automating automation.

• Getting computers to program themselves.

• Writing software is the bottleneck.

• Machine learning models involve machines learning from data without any kind of
human intervention.

• Machine learning is the science of making computers learn and act like humans by
feeding them data and information without being explicitly programmed.

• Machine learning is totally different from traditional programming: here, data and
output are given to the computer, and in return it gives us the program which
provides solutions to various problems, as shown in the figure below.

Fig 1.1.2 Traditional Programming vs Machine Learning

• Machine learning is a combination of algorithms, datasets, and programs.

• There are many algorithms in machine learning that can provide exact solutions
for tasks such as predicting a patient's disease.

• How does machine learning work? Machine learning works by taking in data,
finding relationships within that data, and then giving the output.

There are various applications in which machine learning is implemented, such as
web search, computational biology, finance, e-commerce, space exploration, robotics, social
networks, debugging, and much more.


Fig 1.1.3 Machine Learning Model

Applications of Machine Learning

 Traffic Alerts
 Social Media
 Transportation and Commuting
 Products Recommendations
 Virtual Personal Assistants
 Self Driving Cars
 Dynamic Pricing
 Google Translate
 Online Video Streaming

1.1.3. EXISTING SYSTEM

Joshi proposed brain tumor detection and classification systems for MR images that
first extract the tumor portion from the brain image, then extract the texture features of the
detected tumor using a gray-level co-occurrence matrix (GLCM), and finally classify with a
neuro-fuzzy classifier. Shasidhar proposed a modified fuzzy c-means (FCM) algorithm for
MRI brain tumor detection: texture features are extracted from the brain MR image, and a
modified FCM algorithm is then used for tumor detection. Average speed-ups of as much as
80 times over the traditional FCM algorithm are obtained with the modified version, making
it a fast alternative to the traditional FCM technique.
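The standard (unmodified) fuzzy c-means updates can be sketched on 1-D intensities; the speed-ups of the modified algorithm cited above are not reproduced here, and the data are invented:

```python
# Fuzzy c-means (FCM) sketch on 1-D intensities with fuzzifier m = 2.
# Unlike hard clustering, every point holds a fractional membership
# u_ij in every cluster j.
def fcm_1d(xs, c=2, m=2.0, iters=30):
    centers = [min(xs), max(xs)]          # simple initialisation
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = []
        for x in xs:
            d = [abs(x - cj) + 1e-12 for cj in centers]
            U.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                for k in range(c))
                      for j in range(c)])
        # Centroid update: membership-weighted mean with weights u_ij^m.
        centers = [sum(U[i][j] ** m * xs[i] for i in range(len(xs)))
                   / sum(U[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return centers

# Two intensity groups around 0.15 and 0.85, e.g. background vs lesion.
centers = sorted(fcm_1d([0.1, 0.2, 0.15, 0.8, 0.9, 0.85]))
```

On an MR slice the same updates would run over pixel intensities, with the tumor region emerging as the cluster of high membership values.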


1.1.4. PROPOSED SYSTEM

As per the literature survey, it was found that automation of brain tumor detection
is very essential, as high accuracy is needed when human life is involved. Automated
detection of tumors in MR images involves feature extraction and classification using a
machine learning algorithm. In this paper, a system to automatically detect a tumor in MR
images is proposed, as shown in the figure.

Fig 1.1.4 Proposed Work

ADVANTAGES OF PROPOSED SYSTEM:

1. Easy detection of diseases.

2. Accurate results.

3. Cheaper than existing system’s method.

4. Less time consuming.


CHAPTER-2
AIM & SCOPE


2 AIM & SCOPE

Analysis is defined as detailed examination of the elements or structure of something.

2.1 REQUIREMENT ANALYSIS

The process of gathering software requirements from clients, then analyzing and
documenting them, is known as requirements engineering or requirements analysis. The goal
of requirements engineering is to develop and maintain a sophisticated and descriptive
‘System/Software Requirements Specification’ document. It is generally a four-step
process, which includes:

• Feasibility Study

• Requirements Gathering

• Software Requirements Specification

• Software Requirements Validation

The basic requirements of our project are:

• Python installed

• Research Papers

• Datasets

• Accuracy calculation

2.1.1 FUNCTIONAL REQUIREMENT ANALYSIS

Functional requirements explain what has to be done by identifying the necessary
task, action, or activity that must be accomplished. Functional requirements analysis will be
used as the top-level functions for functional analysis.

2.1.2 USER REQUIREMENTS ANALYSIS

User Requirements Analysis is the process of determining user expectations for a new
or modified product. These features must be quantifiable, relevant and detailed. The main
user requirements of our project are as follows:

• Internet Facility/ LAN Connection


• CPU i5+
• Visual Studio


• RAM 8 or 16 GB
• Memory 1GB

2.1.3 NONFUNCTIONAL REQUIREMENTS ANALYSIS

Non-functional requirements describe the general characteristics of a system. They


are also known as quality attributes. Some typical non-functional requirements are
Performance, Response Time, Throughput, Utilization, and Scalability.

Performance:

The performance of a device is essentially estimated in terms of efficiency,


effectiveness and speed.

• Short response time for a given piece of work.

• High throughput (rate of processing work)

• Short data transmission time.

Response Time: Response time is the time a system or functional unit takes to react
to a given input.

2.2 MODULE DESCRIPTION

The following modules are required for effective purposes. They are,

• Physical Data Acquisition: Acquiring the physical image of any device means
extracting an exact bit-by-bit copy of the original device's flash memory. In contrast
to logical acquisition, physically acquired images hold unallocated space, files, and
the volume stack, in addition to the extraction of data remnants present in the
memory.

• Data Preprocessing: Data preprocessing is an important step in the data mining
process. The phrase "garbage in, garbage out" is particularly applicable to data
mining and machine learning projects. Data-gathering methods are often loosely
controlled, resulting in out-of-range values, impossible data combinations, missing
values, etc.

• Segmentation: Image segmentation divides an image into multiple regions or
segments, each a set of pixels sharing characteristics such as intensity or texture.
This allows the tumor region to be separated from the surrounding tissue and its
boundary to be delineated for further analysis.

• Feature Extraction: Feature extraction is a process of dimensionality reduction by
which an initial set of raw data is reduced to more manageable groups for
processing. A characteristic of these large data sets is a large number of variables
that require a lot of computing resources to process.

• Classification: Classification means grouping things together on the basis of certain
common features; it is the method of putting similar things into one group. It makes
analysis easier and more systematic.

• Data Post Processing: Post processing procedures usually include various pruning
routines, rule quality processing, rule filtering, rule combination, model
combination, or even knowledge integration. All these procedures provide a kind of
symbolic filter for noisy, imprecise, or non-user-friendly knowledge derived by an
inductive algorithm.

• Decision Making: Decision making is the process of making choices by identifying
a decision, gathering information, and assessing alternative resolutions. Using a step-
by-step decision-making process can help you make more deliberate, thoughtful
decisions by organizing relevant information and defining alternatives.
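To make the flow through these modules concrete, here is a minimal, self-contained sketch of the pipeline on a synthetic 8x8 "scan"; the array, threshold, and decision rule are illustrative stand-ins, not the project's actual VGG-based model:

```python
import numpy as np

# Acquisition: a synthetic 8x8 grayscale scan with a bright 3x3 region
scan = np.zeros((8, 8))
scan[2:5, 3:6] = 200.0
scan += np.random.default_rng(0).normal(0, 5, scan.shape)  # acquisition noise

# Preprocessing: clip negative intensities and scale to [0, 1]
pre = np.clip(scan, 0, None)
pre = pre / pre.max()

# Segmentation: an intensity threshold separates the region of interest
mask = pre > 0.5

# Feature extraction: reduce the segmented region to summary numbers
features = {
    "area": int(mask.sum()),
    "mean_intensity": float(pre[mask].mean()) if mask.any() else 0.0,
}

# Classification / decision making: a trivial rule stands in for the network
tumor_detected = features["area"] > 4

print(features, tumor_detected)
```

In the real system, the thresholding step is replaced by the deep network's learned segmentation and the decision rule by the trained classifier; the shape of the pipeline, however, is the same.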

2.3 FEASIBILITY STUDY:

A feasibility study is a high-level capsule version of the entire process, intended to
answer a number of questions: What is the problem? Is there any feasible solution to the
given problem? Is the problem even worth solving? The feasibility study is conducted once
the problem is clearly understood, and it is necessary to determine that the proposed
system is feasible by considering the technical, operational, and economical factors. With
a detailed feasibility study, the management will have a clear-cut view of the proposed
system. A well-designed feasibility study should provide a historical background of the
business or project, the operations and management, marketing research and policies,
financial data, legal requirements, and tax obligations. The following feasibilities are
considered for the project in order to ensure that the project is viable and does not face
any major obstructions. The feasibility study encompasses the following:


2.3.1 Technical feasibility:

In this step, we verify whether the proposed system is technically feasible, i.e.,
whether all the technologies required to develop the system are readily available.
Technical feasibility determines whether the organization has the technology and skills
necessary to carry out the project, and how these should be obtained. The system is
feasible on the following grounds:
• All necessary technology exists to develop the system.
• The system is flexible and can be expanded further.
• The system can guarantee accuracy, ease of use, and reliability.
Our project is technically feasible because all the technology it needs is readily
available.

2.3.2 Operational feasibility:


In this step, we verify the different operational factors of the proposed system, such
as manpower and time; whichever solution uses fewer operational resources is the best
operationally feasible solution. The solution should also be operationally possible to
implement. Operational feasibility determines whether the proposed system satisfies
user objectives and can be fitted into the current system operation. The proposed system
can be justified as operationally feasible on the following grounds:
• The methods of processing and presentation are completely accepted by the clients,
as they meet all user requirements.
• The clients have been involved in the planning and development of the system.
• The proposed system will not cause any problem under any circumstances.

2.3.3 Behavioural feasibility:


This system will help people save time. As there will be no wastage of time, the
user will be satisfied. It will also help hospitals handle cases in an efficient manner.

2.4 Process model used:


The model being followed is the WATERFALL MODEL, which states that the phases
are organized in a linear order. First of all, the feasibility study is done.
Once that part is over, the requirement analysis and project planning begin.


If a system already exists and the modification or addition of a new module is needed,
analysis of the present system can be used as a basic model. The design starts after the
requirement analysis is complete, and the coding begins after the design is complete. Once
the programming is completed, the testing is done.

In this model, the sequence of activities performed in a software development project
is: requirement analysis, project planning, system design, detail design, coding, unit
testing, and system integration and testing. The linear ordering of these activities is
critical: each phase must end before the next begins, and the output of one phase is the
input to the next.

The output of each phase is to be consistent with the overall requirements of the
system. Some qualities of the spiral model are also incorporated, such as a review by the
people concerned with the project at the completion of each phase of the work. The
WATERFALL MODEL was chosen because all requirements were known beforehand and the
objective of our software development is the computerization/automation of an already
existing manual working system.

SCALABILITY:

The system is capable of handling an increase in total throughput when resources
(typically hardware) are added. It can work normally under conditions such as low
bandwidth and a large number of users.

PORTABILITY:

Portability is one of the key concepts of high-level programming. It is the feature of
a software code base that allows existing code to be reused, instead of new code being
created, when moving software from one environment to another. The project can be
executed under different operating conditions provided it meets its minimum
configuration; only system files and dependent assemblies would have to be reconfigured
in such a case.

VALIDATION:

It is the process of checking that a software system meets specifications and that it
fulfills its intended purpose. It may also be referred to as software quality control, and
is normally the responsibility of software testers as part of the software development
lifecycle. Software validation checks that the software product satisfies or fits the
intended use (high-level checking), i.e., that the software meets the user requirements,
not merely as specification artefacts or as the needs of those who will operate the
software, but as the needs of all the stakeholders.

2.5 Software and hardware requirements:

2.5.1 Software requirements:

Windows 7 and above
Visual Studio

2.5.2 Hardware requirements:


Processor - Dual Core
Hard Disk - 50 GB
Memory - 1 GB RAM

2.6 SRS Specification:

A Software Requirements Specification (SRS) - a requirements specification for a
software system - is a complete description of the behavior of a system to be developed.
It includes a set of use cases that describe all the interactions users will have with
the software. In addition to use cases, the SRS also contains non-functional requirements.

Non-functional requirements are requirements which impose constraints on the
design or implementation (such as performance engineering requirements, quality standards,
or design constraints).

A System Requirements Specification is a collection of information that embodies
the requirements of a system. A business analyst, sometimes titled system analyst, is
responsible for analyzing the business needs of their clients and stakeholders to help
identify business problems and propose solutions. Projects are subject to three sorts of
required elements. Business requirements describe in business terms what must be
delivered or accomplished to provide value. Product requirements describe properties of a
system or product (which could be one of several ways to accomplish a set of business
requirements). Process requirements describe activities performed by the developing
organization; for instance, process requirements could specify the methodologies that
must be followed and the constraints that the organization must obey.


CHAPTER - 3

DESIGN PHASE


3 Design phase
Design is a multi-step process that focuses on data structures, software architecture,
procedural details, and the interfaces between modules. The design process also translates
the requirements into a representation of the software that can be assessed for quality
before coding begins.

Computer software design changes continuously as new methods, better analysis, and
broader understanding evolve. Software design is at a relatively early stage in its
evolution; therefore, software design methodology lacks the depth, flexibility, and
quantitative nature that are normally associated with the more classical engineering
disciplines. However, techniques for software design do exist, criteria for design
quality are available, and design notation can be applied.

The purpose of the design phase is to plan a solution to the problem specified by the
requirements document. The design of a system is perhaps the most critical factor
affecting the quality of the software; it has a major impact on the project during the
later phases, particularly during testing and maintenance.

3.1 Design phase purpose:

Software design sits at the technical kernel of the software engineering process and
is applied regardless of the development paradigm and area of application. Design is the
first step in the development phase for any engineered product or system. The designer’s
goal is to produce a model or representation of an entity that will later be built. Once
the system requirements have been specified and analyzed, system design is the first of
the three technical activities (design, coding, and testing) required to build and verify
the software.

The importance can be stated with a single word: “Quality”. Design is the place
where quality is fostered in software development. Design provides a representation of
software that can be assessed for quality, and it is the only way we can accurately
translate a customer’s view into a finished software product or system. Software design
serves as a foundation for all the software engineering steps that follow.


Without a strong design, we risk building an unstable system that will be difficult to
test, and whose quality cannot be assessed until the last stage.

During design, progressive refinements of data structure, program structure, and
procedural detail are developed, reviewed, and documented. System design can be viewed
from either a technical or a project management perspective. From the technical point of
view, design is comprised of four activities: architectural design, data structure
design, interface design, and procedural design.

The design model is an abstraction of the implementation of the system. It is used to
conceive as well as document the design of the software system. It is a comprehensive,
composite artifact encompassing all design classes, subsystems, packages, collaborations,
and the relationships between them.

3.2 Design Concepts:

The set of fundamental software design concepts are as follows:

Abstraction:
At the highest level of abstraction, a solution is stated in broad terms; the lower
levels of abstraction provide a more detailed description of the solution. A sequence of
instructions that carries out a specific and limited function is a procedural abstraction,
while a collection of data that describes a data object is a data abstraction.
Architecture:
The complete structure of the software is known as software architecture. Structure
provides conceptual integrity for a system in a number of ways. The architecture is the
structure of program modules where they interact with each other in a specialized way. The
aim of the software design is to obtain an architectural framework of a system.
Patterns:
In software engineering, a design pattern is a general, repeatable solution to a
commonly occurring problem in software design. A design pattern is not a finished design
that can be transformed directly into code; it is a description or template for how to
solve a problem that can be used in many different situations. A design pattern
describes a design structure, and that structure solves a particular design problem in a
specified context.


Modularity:

Modularity is the single attribute of software that permits a program to be managed
easily.

Information hiding:
Modules must be specified and designed so that information such as the algorithms and
data present in a module is not accessible to other modules that do not require that
information.
Functional independence:

Functional independence is the concept of separation, related to the concepts of
modularity, abstraction, and information hiding. Functional independence is assessed
using two criteria: cohesion and coupling. Cohesion is an extension of the information
hiding concept; a cohesive module performs a single task and requires little interaction
with components in other parts of the program. Coupling is an indication of the
interconnection between modules in a software structure.
Refinement:

Refinement is a top-down design approach and a process of elaboration. A program is
developed by successively refining levels of procedural detail.

Refactoring:

Refactoring is the process of changing the software system in a way that it does not
change the external behavior of the code and still improves its internal structure.
Design classes:

The model of software is defined as a set of design classes. Every class describes the
elements of the problem domain and that focus on features of the problem which are
user visible.

3.3 Design Constraints:

Design constraints are generally the limitations on a design. They include imposed
limitations that you do not control and limitations that are self-imposed as a way to
improve a design. The following are nine common types of design constraints:


Commercial Constraints:
Basic constraints such as time and budget are commercial constraints.

Requirements:
Requirements specify the basic needs of a project. Ex: Functional requirements.

Non-Functional Requirements:
Non-Functional requirements are the requirements that specify intangible elements
of a design.
Compliance:
Compliance refers to applicable laws, regulations and standards.

Style:

A style guide, or multiple style guides, related to an organization, brand, product,
service, environment, or project. For example, a product development team may follow a
style guide for a brand family that constrains the colors and layout of package designs.
Sensory Design:
Beyond visual design, constraints may apply to taste, touch, sound and smell.
For example, a brand identity that calls for products to smell fruity.

Usability:
Usability principles imply frameworks and standards. Ex: The principle of least
astonishment.

Principles:

Principles include the design principles of an organization, team, or individual. For
example, a designer who uses "form follows function" to constrain designs.

Integration:
A design that needs to work with other things such as products, services,
systems, processes, controls, partners and information.


3.4 Conceptual Design:

Conceptual Design is an early phase of the design process, in which the broad
outlines of function and form of something are articulated. It includes the design of
interactions, experiences, processes and strategies. It involves an understanding of people's
needs - and how to meet them with products, services, & processes. Common artifacts of
conceptual design are concept sketches and models.

The unified modeling language allows the software engineer to express an analysis
model using the modeling notation that is governed by a set of syntactic, semantic and
pragmatic rules.

A UML system is represented using five different views that describe the system
from a distinctly different perspective. Each view can be defined by a set of diagrams. UML
is specifically constructed through two different domains. They are:

UML analysis modeling, which focuses on the user model and structural model views
of the system.
UML design modeling, which focuses on the behavioral modeling, implementation
modeling, and environment model views.
Use case diagram at its simplest is a representation of a user's interaction with the
system that shows the relationship between the user and the different use cases in which the
user is involved. A use case diagram can identify the different types of users of a system and
the different use cases and will often be accompanied by other types of diagrams as well.
Actors are the external entities that interact with the system. The use cases are represented
by either circles or ellipses.



3.5 SYSTEM ANALYSIS METHODS

3.5.1 USE CASE DIAGRAM

The use case diagram of this project, which is based on machine learning, covers all the
aspects a standard use case diagram requires. It shows how the model flows from one step
to another: the user enters the system, supplies the general information along with the
symptoms, and the system compares this input with the prediction model; if the comparison
succeeds, it predicts the appropriate result, otherwise it shows where the user went wrong
while entering the information, together with the appropriate precautionary measures to
follow. In the diagram, all the entities are linked to each other, starting from the user
entering the system.

Fig: 3.5.1. Use Case Diagram


3.5.2 ACTIVITY DIAGRAM

The activity diagram is another important UML diagram for describing the dynamic aspects
of a system. It is basically a flowchart representing the flow from one activity to
another, where an activity can be described as an operation of the system and the control
flow is drawn from one operation to the next. In this diagram, the activity starts from
the user, who registers into the system and then logs in with credentials; if the
credentials match, the user proceeds to the prediction phase. Finally, after the data from
the datasets is processed, the analysis takes place and the correct result, i.e. the
output, is displayed.

Fig 3.5.2 Activity Diagram


3.6 SYSTEM DESIGN

3.6.1 SYSTEM STRUCTURE

Architectural design is a concept that focuses on the components or elements of a
structure. Any changes the client wants to make to the design should be communicated to
the architect during this phase. A flow diagram is a collective term for a diagram
representing a flow or set of dynamic relationships in a system.
A data flow diagram (DFD) is a way of representing the flow of data of a process or
a system, usually an information system. The DFD also provides information about the
outputs and inputs of each entity and of the process itself. A data flow diagram is a
graphical representation of the “flow” of data through an information system, and DFDs
can also be used for the visualization of data processing. On a DFD, data items flow from
an external data source or an internal data store to an internal data store or an
external data sink, via an internal process. A DFD provides no information about the
timing of processes or about whether processes will operate in sequence or in parallel.
It is therefore quite different from a flowchart, which shows the flow of control through
an algorithm, allowing a reader to determine which operations will be performed, in what
order, and under what circumstances, but not what kinds of data will be input to and
output from the system, where the data will come from and go to, or where the data will
be stored.

Fig 3.5.3. System Architecture


3.6.2 CLASS DIAGRAM

The class diagram of this deep learning based project, like that of any other
application, consists of the basic entities required to carry the project forward. It
contains information about all the classes used, the related datasets, and all other
necessary attributes and their relationships with other entities. This information is
necessary for the prediction concept: the user enters the required details such as
username, email, phone number, and other attributes needed to log into the system; using
the file concept, we store the information of users who register into the system and
retrieve that information later when they log in.

Fig 3.5.4 Class Diagram


3.6.3 SEQUENCE DIAGRAM

The sequence diagram of this deep learning based project covers all the aspects a
standard sequence diagram requires. It shows how the model flows from one step to
another: the user enters the system, supplies the general information along with the
symptoms, and the system compares this input with the prediction model; if the comparison
succeeds, it predicts the appropriate result, otherwise it shows where the user went
wrong while entering the information, together with the appropriate precautionary
measures to follow. The sequence links all the entities to each other, starting from the
user entering the system.

Fig 3.5.5 Sequence Diagram


CHAPTER – 4

IMPLEMENTATION


4. IMPLEMENTATION

4.1. TOOLS USED

4.1.1. ANACONDA

Anaconda Individual Edition contains conda and Anaconda Navigator, as well as Python
and hundreds of scientific packages. When you installed Anaconda, you installed all these
too.

Conda works on your command line interface such as Anaconda Prompt on Windows and
terminal on macOS and Linux.

Navigator is a desktop graphical user interface that allows you to launch applications
and easily manage conda packages, environments, and channels without using
command-line commands.

You can try both conda and Navigator to see which is right for you to manage your
packages and environments. You can even switch between them, and the work you do with
one can be viewed in the other.

ANACONDA NAVIGATOR

Anaconda Navigator is a desktop graphical user interface (GUI) included in Anaconda®


distribution that allows you to launch applications and easily manage conda packages,
environments, and channels without using command-line commands. Navigator can search
for packages on Anaconda.org or in a local Anaconda Repository. It is available for
Windows, macOS, and Linux.

To get Navigator, get the Navigator Cheat Sheet and install Anaconda. The Getting started
with Navigator section shows how to start Navigator from the shortcuts or from a terminal
window.


Use of Anaconda Navigator:

In order to run, many scientific packages depend on specific versions of other packages. Data
scientists often use multiple versions of many packages and use multiple environments to
separate these different versions.

The command-line program conda is both a package manager and an environment manager.
This helps data scientists ensure that each version of each package has all the dependencies it
requires and works correctly.

Navigator is an easy, point-and-click way to work with packages and environments without
needing to type conda commands in a terminal window. You can use it to find the packages
you want, install them in an environment, run the packages, and update them – all inside
Navigator.

Applications of Anaconda Navigator:

The following applications are available by default in Navigator:

 JupyterLab
 Jupyter Notebook
 Spyder
 PyCharm
 VSCode
 Glueviz


 Orange 3 App
 RStudio
 Anaconda Prompt (Windows only)
 Anaconda PowerShell (Windows only)

Advanced conda users can also build their own Navigator applications.

ANACONDA PROMPT

Anaconda Prompt is a command line shell (a program where you type in commands instead
of using a mouse). The black screen and text that makes up the Anaconda Prompt doesn't
look like much, but it is really helpful for problem solvers using Python.

If you prefer using a command line interface (CLI), you can use conda to verify the
installation using Anaconda Prompt on Windows or terminal on Linux and macOS.

To open Anaconda Prompt:

o Windows: Click Start, search, or select Anaconda Prompt from the menu.


4.1.2. SUBLIME TEXT 3

Sublime Text is a shareware cross-platform source code editor with a Python application
programming interface (API).

It natively supports many programming languages and markup languages, and functions can
be added by users with plugins, typically community-built and maintained under free-
software licenses.

FEATURES OF SUBLIME TEXT

The following is a list of features of Sublime Text:

 "Go to Anything": quick navigation to files, symbols, or lines.
 "Command palette": uses adaptive matching for quick keyboard invocation of
arbitrary commands.
 Simultaneous editing: simultaneously make the same interactive changes to multiple
selected areas.
 Python-based plugin API.
 Project-specific preferences.
 Extensive customizability via JSON settings files, including project-specific and
platform-specific settings.
 Cross-platform (Windows, macOS, and Linux) and Supportive Plugins for cross-
platform.
 Compatible with many language grammars from TextMate.
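As an illustration of the JSON settings files mentioned above, a project-specific settings fragment might look like the following (the particular values are only an example):

```json
{
    "tab_size": 4,
    "translate_tabs_to_spaces": true,
    "rulers": [79]
}
```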

4.1.3. HEROKU CLOUD SERVICE

Heroku is a cloud platform as a service (PaaS) supporting several programming languages.


One of the first cloud platforms, Heroku has been in development since June 2007, when it
supported only the Ruby programming language; it now also supports Java, Node.js, Scala,
Clojure, Python, PHP, and Go. For this reason, Heroku is said to be a polyglot platform,
as it has features that let a developer build, run, and scale applications in a similar
manner across most languages.
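Deploying a Flask application such as this one to Heroku typically requires a Procfile declaring the web process; a minimal sketch (the module name app here is an assumption) might be:

```
web: gunicorn app:app
```

Heroku installs dependencies from requirements.txt during the build, so that file would need to list flask, gunicorn, and the deep learning libraries the model requires.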


4.2. PSEUDO CODE

CODE FOR BRAIN TUMOR IMAGE SEGMENTATION:

import os
from flask import Flask, render_template, request
from predictor import check

app = Flask(__name__, static_folder="images")

# Directory this script lives in; uploaded images are stored beneath it
APP_ROOT = os.path.dirname(os.path.abspath(__file__))

@app.route('/')
@app.route('/index')
def index():
    return render_template('upload.html')

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    target = os.path.join(APP_ROOT, 'images/')
    if not os.path.isdir(target):
        os.mkdir(target)

    # Save each uploaded file, then run the tumor check on it
    # (assumes at least one file was uploaded)
    for file in request.files.getlist('file'):
        filename = file.filename
        dest = '/'.join([target, filename])
        file.save(dest)
        status = check(filename)

    return render_template('complete.html', image_name=filename, predvalue=status)

if __name__ == "__main__":
    app.run(port=4555, debug=True)

import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model

# Load the trained VGG-based classifier once at import time
saved_model = load_model("model/VGG_model.h5")

def check(input_img):
    # Load the uploaded image and resize it to the VGG input size
    img = image.load_img("images/" + input_img, target_size=(224, 224))
    img = np.asarray(img)
    img = np.expand_dims(img, axis=0)   # add the batch dimension

    output = saved_model.predict(img)

    # The first output unit decides the status; comparing with 1
    # assumes the network emits one-hot style values
    status = bool(output[0][0] == 1)
    return status
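The decision step at the end of check() reduces to comparing the first output unit with 1. That mapping can be isolated and exercised without the trained model; the stubbed arrays below are hypothetical predictions, not real model output:

```python
import numpy as np

def decide(output):
    # output mimics model.predict(): shape (1, num_classes).
    # Comparing against 1 assumes one-hot style network outputs.
    return bool(output[0][0] == 1)

print(decide(np.array([[1.0, 0.0]])))  # True
print(decide(np.array([[0.0, 1.0]])))  # False
```

With a sigmoid or softmax output, this exact-equality comparison is fragile; thresholding (e.g. output[0][0] > 0.5) would be more robust.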


4.3. COMPONENT DIAGRAM

A component diagram, also known as a UML component diagram, describes the
organization and wiring of the physical components in a system. Component diagrams are
often drawn to help model implementation details and to double-check that every aspect of
the system's required function is covered by planned development. Here, the component
diagram consists of all the major components used to build the system, so the design, the
algorithm, the file system, and the datasets are all linked to one another. The datasets
are used to compare the results, the algorithm is used to process those results and give
the correct accuracy, the UI design is used to show the result in an appropriate way, and
the file system is used to store the user data. In this way, all the components are
interlinked.

Fig 4.3.1 Component Diagram


4.4. DEPLOYMENT DIAGRAM

A deployment diagram shows the configuration of run-time processing nodes and the
components that live on them. Deployment diagrams are a kind of structure diagram used in
modeling the physical aspects of an object-oriented system. Here, the deployment diagram
shows the final stage of the project and how the model looks after all the processing is
done and the system is deployed on the machine: starting from the system processing the
user-entered information, then comparing that information with the help of the datasets,
then training and testing those data using the deep network, and finally, after processing
all the data and information, displaying the desired result in the interface.

Fig 4.4.1 Deployment Diagram
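The training and testing step described above can be sketched with scikit-learn. The synthetic feature matrix from `make_classification` is an assumption standing in for features extracted from the MRI dataset, not the project's actual data.

```python
# Minimal sketch of the train/test step with the three classifiers
# named above; the toy data is an assumption, not real MRI features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for name, clf in [("decision_tree", DecisionTreeClassifier(random_state=0)),
                  ("naive_bayes", GaussianNB()),
                  ("random_forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_train, y_train)                  # training phase
    scores[name] = clf.score(X_test, y_test)   # testing phase (accuracy)
print(scores)
```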


CHAPTER -5

SCREEN SHOTS


5. OUTPUTS AND SCREENSHOTS


Now we will see how the implementation works in detail through the following
screenshots.

Fig: 5.1. Project bar

In the above screenshot, select the address bar and type cmd in it.

Fig: 5.2. Sample Folder


On typing cmd and pressing the “Enter” key, it takes us to the command prompt.

Fig: 5.3. Open folder location in cmd

Fig: 5.4. Run Flask


Fig: 5.5. Main Output Stream

Later on, in the above screen, select “Browse files” to give an input image of the user's
choice and check the tumor status.

Fig: 5.6. Sample Folder


The above screenshot shows the set of sample images. Choose any one of them.

Fig: 5.7. Selecting 0th image from the folder

Now, after selecting the 0th image as the filename, click on “check tumor status”. This
redirects you to the brain tumor result page shown below.

Fig: 5.8. Output for selected image


In the above screen we can see that the system reports that the input image does not contain
a brain tumor.

Similarly, check another input image.

Fig: 5.9. Selecting 5th image from the folder

Fig: 5.10. Output for selected image

In the above screen we can see that a brain tumor is detected in the selected input
image.


CHAPTER -6

TESTING


6. TESTING

Testing is the process of executing a program with the intent of finding errors. A good test
case is one that has a high probability of finding an as-yet-undiscovered error. System
testing is the stage of implementation aimed at ensuring that the system works
accurately and efficiently as expected before live operation commences. It verifies that the
whole set of programs hangs together. System testing consists of several key activities and
steps covering program, string, and system testing, and is important in adopting a successful
new system.

TYPES OF TESTING

6.1. UNIT TESTING

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit, before integration. This
is structural testing that relies on knowledge of the unit's construction and is invasive. Unit
tests perform basic tests at component level and exercise a specific business process,
application, and/or system configuration.
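As a sketch of how the tumor-prediction logic could be unit tested, the snippet below checks the thresholding step against a fake model. `FakeModel` and `tumor_status` are illustrative names assumed here, not the project's actual code.

```python
import unittest

import numpy as np


class FakeModel:
    """Stands in for the saved Keras model so the decision logic can be
    tested without loading real weights (an assumption)."""

    def __init__(self, value):
        self.value = value

    def predict(self, batch):
        return np.array([[self.value]])


def tumor_status(model, img):
    # Mirrors the prediction snippet: add a batch axis, predict, threshold
    batch = np.expand_dims(np.asarray(img), axis=0)
    return float(model.predict(batch)[0][0]) >= 0.5


class TumorStatusTest(unittest.TestCase):
    def test_tumor_detected(self):
        img = np.zeros((224, 224, 3))
        self.assertTrue(tumor_status(FakeModel(1.0), img))

    def test_no_tumor(self):
        img = np.zeros((224, 224, 3))
        self.assertFalse(tumor_status(FakeModel(0.0), img))
```

The test case can be run with `python -m unittest` against the module containing it.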

6.2. INTEGRATION TESTING

Integration tests are designed to test integrated software components to determine whether
they actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.

6.3. VALIDATION TESTING

An engineering validation test (EVT) is performed on first engineering prototypes to ensure
that the basic unit performs to design goals and specifications. It is important in identifying
design problems, and solving them as early in the design cycle as possible is the key to
keeping projects on time and within budget. Too often, product design and performance
problems are not detected until late in the product development cycle, when the product is
ready to be shipped. The old adage holds true: it costs a penny to make a change in
engineering, a dime in production, and a dollar after a product is in the field.

Verification is a quality control process that is used to evaluate whether or not a product,
service, or system complies with regulations, specifications, or conditions imposed at the start
of a development phase. Verification can take place in development, scale-up, or production,
and is often an internal process.

Validation is a quality assurance process of establishing evidence that provides a high
degree of assurance that a product, service, or system accomplishes its intended
requirements. This often involves acceptance of fitness for purpose with end users and other
product stakeholders.

The testing process overview is as follows:

Fig 6.1 Testing Process

6.4. SYSTEM TESTING

System testing of software or hardware is testing conducted on a complete, integrated system
to evaluate the system's compliance with its specified requirements. System testing falls
within the scope of black box testing, and as such should require no knowledge of the inner
design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components that
have successfully passed integration testing and also the software system itself integrated
with any applicable hardware system. System testing is a more limited type of testing; it seeks to
detect defects both within the "inter-assemblages" and also within the system as a whole.
System testing is performed on the entire system in the context of a Functional Requirement
Specification (FRS) or System Requirement Specification (SRS).

6.5. TESTING OF INITIALIZATION AND UICOMPONENTS

Serial Number of Test Case : TC 01
Module Under Test          : Brain tumor detection
Description                : Checks whether the input image is detected with a tumor or not
Input                      : Upload an image without any tumor
Output                     : Tumor not detected
Remarks                    : Test successful

Table 6.5.1 Test Case 1


Serial Number of Test Case : TC 02
Module Under Test          : Brain tumor detection
Description                : Checks whether the input image is detected with a tumor or not
Input                      : Upload an image with a tumor
Output                     : Tumor detected
Remarks                    : Test successful

Table 6.5.2 Test Case 2


CHAPTER -7

SUMMARY & CONCLUSION


7. SUMMARY & CONCLUSION

To conclude, medical imaging is gaining importance with the increasing demand for
automated, reliable, fast, and efficient diagnosis that can provide insight into an image
better than the human eye. The brain tumor is the second leading cause of cancer-related
deaths in men aged 20 to 39 and the leading cause of cancer deaths among women in the same
age group. Brain tumors are painful and can lead to various complications if not treated
properly. Diagnosis of the tumor is a very important part of its treatment, and
identification plays an important role in distinguishing benign from malignant tumors. A
prime reason behind the rise in the number of cancer patients worldwide is ignorance of the
treatment of a tumor in its early stages. This project discusses a machine learning
algorithm that can inform the user about the details of a tumor from a brain MRI. The
methods include noise removal and sharpening of the image along with the basic
morphological functions, erosion and dilation, to obtain the background. Subtracting the
background and its negative from different sets of images results in the extracted image.
Plotting the contour and c-label of the tumor and its boundary provides information related
to the tumor that can help in better visualization for diagnosing cases. This process helps
in identifying the size, shape, and position of the tumor. It helps the medical staff as well
as the patient to understand the seriousness of the tumor with the help of different color
labeling for different levels of elevation. A GUI for the contour of the tumor and its
boundary can provide information to the medical staff at the click of user-choice buttons.

Keywords: classification, convolutional neural network, feature extraction, machine
learning, magnetic resonance imaging, segmentation, texture features.
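The morphological steps described above (thresholding, erosion, dilation, and extraction of the connected region) can be sketched with SciPy on a synthetic slice. The bright 20x20 square below is an assumption standing in for a tumor region, not real MRI data.

```python
# Sketch of the morphological pipeline on a synthetic slice; the
# square bright region is a stand-in for a tumor (an assumption).
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0                   # synthetic "tumor" region

mask = img > 0.5                          # threshold the slice
eroded = ndimage.binary_erosion(mask)     # erosion removes small noise
opened = ndimage.binary_dilation(eroded)  # dilation restores the shape

labels, n = ndimage.label(opened)         # connected components
sizes = ndimage.sum(opened, labels, range(1, n + 1))
print(n, int(sizes[0]))                   # component count and area
```

The component count and area give the size and position information the text mentions; in a real pipeline the contour of `opened` would be plotted over the MRI slice.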


CHAPTER -8

FUTURE ENHANCEMENT


8. FUTURE ENHANCEMENT

In this proposed work, different medical images, such as MRI brain cancer images,
are taken for detecting tumors. The proposed approach for brain tumor detection is
based on a convolutional neural network combined with a multi-layer perceptron.
The approach consists of several steps, including training the system,
pre-processing, implementation with TensorFlow, and classification. In the future,
we will use a larger database and try to achieve higher accuracy so that the system
works on any type of MRI brain tumor.
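As a rough illustration of the convolution-plus-MLP pipeline described above, the NumPy sketch below runs one convolution, flattens the feature map, and applies a small perceptron head. All shapes and random weights are toy assumptions; a real implementation would use TensorFlow/Keras as the text states.

```python
import numpy as np

def conv2d(img, kern):
    """Valid 2-D convolution (really cross-correlation, as in CNNs)."""
    h, w = img.shape
    kh, kw = kern.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def relu(x):
    return np.maximum(x, 0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))                       # stand-in for a preprocessed MRI patch
feat = relu(conv2d(img, rng.random((3, 3))))   # convolution + activation
flat = feat.ravel()                            # flatten for the MLP head
hidden = relu(rng.random((4, flat.size)) @ flat)
score = 1.0 / (1.0 + np.exp(-(rng.random(4) @ hidden)))  # sigmoid output
print(float(score))
```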

