Brain Tumour
Abstract: Medical imaging is gaining importance with the increasing demand for automated, reliable, fast and efficient diagnosis that can provide insight into an image better than the human eye. Brain tumors are the second leading cause of cancer-related death in men aged 20 to 39 and the leading cause of cancer-related death among women in the same age group. Brain tumors are painful and can lead to serious complications if not treated properly, so diagnosis of the tumor is a very important part of its treatment, and identification plays an important part in distinguishing benign from malignant tumors. A prime reason behind the rise in the number of cancer patients worldwide is the neglect of tumor treatment in its early stages. This paper discusses a machine learning approach that can inform the user about the details of a tumor using brain MRI. The methods include noise removal and sharpening of the image, along with basic morphological operations, erosion and dilation, to obtain the background. Subtracting the background and its negative from different sets of images yields the extracted image. Plotting the contour and color-label of the tumor and its boundary provides information that can help in better visualization when diagnosing cases. This process helps in identifying the size, shape, and position of the tumor, and it helps the medical staff as well as the patient to understand the seriousness of the tumor with the help of different color labels for different levels of elevation. A GUI for the contour of the tumor and its boundary can provide information to the medical staff at the click of user-choice buttons.
Keywords: classification, convolutional neural network, feature extraction, machine learning, magnetic resonance imaging, segmentation, texture features.
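As a rough illustration of the processing chain summarised above, the following is a minimal sketch, assuming OpenCV, NumPy and Matplotlib and a hypothetical grayscale MRI slice file "mri_slice.png"; it is not the exact implementation used in this work.

```python
# Minimal sketch of the preprocessing and contour visualisation described above.
# "mri_slice.png" is a hypothetical grayscale brain MRI slice.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)

# Noise removal and sharpening
denoised = cv2.medianBlur(img, 5)
kernel_sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(denoised, -1, kernel_sharpen)

# Basic morphological operations (erosion and dilation) to estimate the background
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(sharpened, kernel, iterations=2)
background = cv2.dilate(eroded, kernel, iterations=2)

# Subtract the background to highlight the candidate tumor region
foreground = cv2.subtract(sharpened, background)
_, mask = cv2.threshold(foreground, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Contour plot with colour levels for visualisation of the extracted region
plt.contourf(foreground[::-1], levels=5, cmap="jet")
plt.colorbar(label="elevation level")
plt.contour(mask[::-1], levels=[127], colors="white")  # tumor boundary
plt.title("Tumor region contour")
plt.show()
```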
INTRODUCTION:
The human body is made up of many types of cells, each of which features a specific function. The cells in the body grow and divide in an orderly manner and form new cells. These new cells help to keep the human body healthy and working properly. When some cells lose the capability to regulate their growth, they grow without any order. The extra cells form a mass of tissue that is called a tumor. Tumors can be benign or malignant; malignant tumors lead to cancer, while benign tumors are not cancerous. An important factor in diagnosis is the medical image data obtained from various biomedical devices that use different imaging techniques such as X-ray, CT scan, and MRI. Magnetic resonance imaging (MRI) is a technique that depends on the measurement of magnetic flux vectors generated after an appropriate excitation with strong magnetic fields and radio-frequency pulses in the nuclei of hydrogen atoms present in the water molecules of a patient's body. The MRI scan is preferable to the CT scan for diagnosis as it does not use ionizing radiation. Radiologists can evaluate the brain using MRI, and the MRI technique can determine the presence of tumors within the brain. MRI images also contain noise caused by operator intervention, which may lead to inaccurate classification. A large volume of MRI data has to be analyzed; thus, automated systems are needed because they are less expensive. Automated detection of tumors in MR images is important as high accuracy is required when handling human life. Supervised and unsupervised machine learning techniques can be employed to classify a brain MR image as either normal or abnormal. In this paper, an efficient automated classification technique for brain MRI is proposed using machine learning algorithms. A supervised machine learning algorithm is used for the classification of brain MR images.
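As a minimal sketch of such supervised classification, the snippet below trains a scikit-learn SVM to label feature vectors as normal or abnormal; the synthetic placeholder data and the choice of an SVM are illustrative assumptions, not the final model of this paper.

```python
# Minimal sketch: supervised classification of brain MR images as normal (0)
# or abnormal (1) from pre-extracted feature vectors, using scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder feature matrix and labels; in practice these would be the
# texture/intensity features extracted from the MR images.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(kernel="rbf", C=1.0)        # illustrative classifier choice
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```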
LITERATURE SURVEY
Krizhevsky et al. (2012) achieved state-of-the-art results in image classification by training a large, deep convolutional neural network to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes. On the test data, they achieved top-1 and top-5 error rates of 37.5% and 17.0%, which was considerably better than the previous state of the art. They also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. The neural network, which had 60 million parameters and 650,000 neurons, consisted of five convolutional layers, some of which were followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, they used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers, they employed a recently developed regularization method called "dropout" that proved to be very effective.
Simonyan & Zisserman (2014) investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting. These findings were the basis of their ImageNet Challenge 2014 submission, where their team secured first and second place in the localisation and classification tracks respectively. Their main contribution was a thorough evaluation of networks of increasing depth using an architecture with very small (3×3) convolution filters, which showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers after training smaller versions of VGG with fewer weight layers.
Pan & Yang's (2010) survey focused on categorizing and reviewing the progress on transfer learning for classification, regression and clustering problems. They discussed the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, sample selection bias and covariate shift, reviewed several current trends in transfer learning, and explored potential future issues in transfer learning research.
EXISTING SYSTEM:
Joshi proposed a brain tumor detection and classification system for MR images that first extracts the tumor portion from the brain image, then extracts texture features of the detected tumor using the gray-level co-occurrence matrix (GLCM), and finally classifies the tumor using a neuro-fuzzy classifier. Shasidhar proposed a modified fuzzy c-means (FCM) algorithm for MRI brain tumor detection, in which texture features are extracted from the brain MR image and the modified FCM algorithm is then used for tumor detection. Average speed-ups of as much as 80 times over the traditional FCM algorithm are obtained, making the modified FCM algorithm a fast alternative to the traditional FCM technique.
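A minimal sketch of the GLCM texture-feature extraction mentioned above is given below, assuming scikit-image; the tumor patch here is a synthetic placeholder that only illustrates the API, not data from this work.

```python
# Minimal sketch of GLCM (gray-level co-occurrence matrix) texture features,
# assuming scikit-image and an 8-bit grayscale tumor region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

tumor_patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder data

# Co-occurrence matrix for a one-pixel offset in four directions
glcm = graycomatrix(tumor_patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Common Haralick-style texture features, averaged over the four directions
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```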
PROPOSED SYSTEM:
As per the literature survey, it was found that automation of brain tumor detection is essential because high accuracy is needed when human life is involved. Automated detection of tumors in MR images involves feature extraction and classification using a machine learning algorithm. In this paper, a system to automatically detect a tumor in MR images is proposed, as shown in the figure, with the aim of producing accurate results.
ANALYSIS
Analysis is defined as detailed examination of the elements or structure of something.
The process to gather the software requirements from clients, analyze and document them is known as
requirements engineering or requirements analysis. The goal of requirement engineering is to develop
and maintain sophisticated and descriptive ‘System/Software Requirements Specification’ documents.
It is generally a multi-step process, which for this project includes -
• Feasibility Study
• Requirements Gathering
• Python installed
• Research Papers
• Datasets
• Accuracy calculation
Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.
1.1.2 USER REQUIREMENTS ANALYSIS
User Requirements Analysis is the process of determining user expectations for a new or modified
product. These features must be quantifiable, relevant and detailed. The main user requirements of our
project are as follows:
• Internet Facility/ LAN Connection
• CPU i5+
• Visual Studio
• RAM 8 or 16 GB
• Memory 1GB
Non-functional requirements describe the general characteristics of a system. They are also known as
quality attributes. Some typical non-functional requirements are Performance, Response Time,
Throughput, Utilization, and Scalability.
Performance:
The performance of a device is essentially estimated in terms of efficiency, effectiveness and speed.
Response Time: Response time is the time a system or functional unit takes to react to a given input.
• Physical Data Acquisition: Acquiring the physical image of a device means extracting an exact bit-by-bit copy of the original device's flash memory. In contrast to logical acquisition, physically acquired images hold unallocated space, files, and volume slack, in addition to data remnants present in the memory.
• Data Preprocessing: Data preprocessing is an important step in the data mining process. The phrase "garbage in, garbage out" is particularly applicable to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, missing values, and so on.
• Segmentation: Segmentation partitions an image into multiple regions or segments so that the region of interest, here the suspected tumor, can be separated from the surrounding brain tissue. This lets the subsequent feature extraction and classification steps operate only on the relevant pixels (a minimal thresholding sketch follows this list).
• Feature Extraction: Feature extraction is a process of dimensionality reduction by which an initial set of raw data is reduced to more manageable groups for processing. A characteristic of these large data sets is a large number of variables that require a lot of computing resources to process.
• Classification: Classification means grouping things together on the basis of certain common features. It is the method of putting similar things into one group, which makes study easier and more systematic.
• Data Post Processing: Post-processing procedures usually include various pruning routines, rule quality processing, rule filtering, rule combination, model combination, or even knowledge integration. All these procedures provide a kind of symbolic filter for noisy, imprecise, or non-user-friendly knowledge derived by an inductive algorithm.
• Decision Making: Decision making is the process of making choices by identifying a decision, gathering information, and assessing alternative resolutions. Using a step-by-step decision-making process can help make more deliberate, thoughtful decisions by organizing relevant information and defining alternatives.
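As referenced in the segmentation item above, a minimal thresholding sketch is shown below, assuming OpenCV and a hypothetical input file "mri_slice.png"; real MRI segmentation would typically be more elaborate.

```python
# Minimal segmentation sketch: Otsu thresholding of a grayscale MR slice to
# isolate a bright candidate tumor region. "mri_slice.png" is a hypothetical file.
import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Global Otsu threshold produces a binary mask of bright tissue
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep the largest connected component as the segmented region of interest
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()   # index 0 is the background
segment = (labels == largest).astype("uint8") * 255
cv2.imwrite("segmented_region.png", segment)
```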
If a system already exists and modification or addition of a new module is needed, analysis of the present system can be used as a basic model. The design starts after the requirement analysis is complete, and the coding begins after the design is complete. Once the programming is completed, the testing is done.
In this model the sequence of activities performed in a software development project is: Requirement Analysis, Project Planning, System design, Detail design, Coding, Unit testing, and System integration & testing. The linear ordering of these activities is critical: the end of one phase marks the start of the next, and the output of one phase is the input to the next.
The output of each phase has to be consistent with the overall requirements of the system. Some qualities of the spiral model are also incorporated, such as a review by the people concerned with the project at the completion of each phase of the work. The WATERFALL MODEL was chosen because all requirements were known beforehand and the objective of our software development is the computerization/automation of an already existing manual working system.
Product and process requirements are closely linked. Process requirements often specify the activities that will be performed to satisfy a product requirement. For example, a maximum development cost requirement (a process requirement) may be imposed to help achieve a maximum sales price requirement (a product requirement), and a requirement that the product be maintainable (a product requirement) is often addressed by imposing requirements to follow particular development styles. A
system engineering requirement can be a description of what a system must do, referred to as
Functional Requirement. This type of requirement specifies something that the delivered system must
be able to do. Another type of requirement specifies something about the system itself, and how well it
performs its functions. Such requirements are often called Nonfunctional requirements, or
‘Performance requirements’ or ‘Quality of service requirements’. Examples of such requirements
include usability, availability, reliability, supportability, testability and maintainability.
A collection of requirements define the characteristics or features of the desired system. A ‘good’ list
of requirements as far as possible avoids saying how the system should implement the requirements,
leaving such decisions to the system designer. Specifying how the system should be implemented is
called “implementation bias” or “solution engineering”. However, implementation constraints on the
solution may validly be expressed by the future owner, for example for required interfaces to external
systems; for interoperability with other systems; and for commonality with other owned products.
Functional requirements:
The Functional Requirements Specification gives the operations and activities that a system must be
able to perform. Functional requirements should include functions performed by specific screens,
outlines of work-flows performed by the system, and other business or compliance requirements the
system must meet. It also depends upon the type of software, expected users and the type of system
where the software is used.
Non-functional requirements:
In systems engineering, a non-functional requirement is a requirement that specifies criteria that can be
used to judge the operation of a system, rather than specific behaviors. They are contrasted with
functional requirements that define specific behavior or functions. The nonfunctional requirements can
be considered as quality attributes of a system.
Reliability: The system should be about 90% reliable. Since it may need maintenance or preparation on particular days, it does not need to be available at all times, so this level of reliability is considered sufficient.
Efficiency: The system should make efficient use of computational resources when classifying MRI images.
Availability: The system should be available to users whenever a classification needs to be run.
Maintainability: The system should be optimized for supportability, or ease of maintenance as far as
possible.
Chapter 3
Design Phase
Design phase:
Design is a multi-step process that focuses on data structure, software architecture, procedural details and the interfaces between modules. The design process also translates the requirements into a representation of the software that can be assessed for quality before coding begins.
Computer software design changes continuously as new methods, better analysis and a broader understanding evolve. Software design is at a relatively early stage in its evolution; therefore, software design methodology lacks the depth, flexibility and quantitative nature that are normally associated with more classical engineering disciplines. However, techniques for software design do exist, criteria for design quality are available and design notation can be applied.
The purpose of the design phase is to plan a solution of the problem specified by the requirements
document. The design of a system is perhaps the most critical factor affecting the quality of the
software. It has a major impact on the project during later phases, particularly during testing and
maintenance.
Software design sits at the technical kernel of the software engineering process and is applied
regardless of the development paradigm and area of application. Design is the first step in the
development phase for any engineered product or system. The designer’s goal is to produce a model or
representation of an entity that will later be built. Beginning, once the system requirements have been
specified and analyzed, system design is the first of the three technical activities design, code and test
that is required to build and verify software.
The importance of design can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides a representation of software that can be assessed for quality, and it is the only way we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design, we risk building an unstable system that will be difficult to test and whose quality cannot be assessed until the last stage.
During design, progressive refinements of data structure, program structure, and procedural detail are developed, reviewed and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design is comprised of four activities: architectural design, data structure design, interface design and procedural design.
The design model is an abstraction of the implementation of the system. It is used to conceive as well
as document the design of the software system. It is a comprehensive, composite artifact encompassing
all design classes, subsystems, packages, collaborations, and the relationships between them.
1. Abstraction:
The lower levels of abstraction provide a more detailed description of the solution. A sequence of instructions that performs a specific and limited function is referred to as a procedural abstraction, and a collection of data that describes a data object is a data abstraction.
2. Architecture:
The complete structure of the software is known as software architecture. Structure provides
conceptual integrity for a system in a number of ways. The architecture is the structure of program
modules where they interact with each other in a specialized way. The aim of the software design is to
obtain an architectural framework of a system.
3. Patterns:
A design pattern describes a design structure that solves a particular design problem in a specified context.
4. Modularity:
Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding:
Modules must be specified and designed so that information such as algorithms and data contained in a module is not accessible to other modules that do not require that information.
6. Functional independence:
Functional independence is the concept of separation and is related to the concepts of modularity, abstraction and information hiding. Functional independence is assessed using two criteria: cohesion and coupling. Cohesion is an extension of the information-hiding concept; a cohesive module performs a single task and requires little interaction with components in other parts of the program. Coupling is an indication of the interconnection between modules in a software structure.
7. Refinement and refactoring:
Refactoring is the process of changing a software system in such a way that it does not change the external behavior of the code yet improves its internal structure.
8. Design classes:
The software model is defined as a set of design classes. Every class describes the elements of the problem domain, focusing on features of the problem that are user visible.
Commercial Constraints:
Basic commercial constraints such as time and budget come under commercial constraints
Requirements:
Non-Functional Requirements:
Non-Functional requirements are the requirements that specify intangible elements of a design.
Compliance:
Style:
A style guide or multiple style guides related to an organization, brand, product, service, environment
or project. For example, a product development team may follow a style guide for a brand family that
constrains the colors and layout of package designs.
Sensory Design:
Beyond visual design, constraints may apply to taste, touch, sound and smell. For example, a brand
identity that calls for products to smell fruity.
Usability:
Usability principles imply frameworks and standards. Ex: The principle of least astonishment.
Principles:
Principles include the design principles of an organization, team or individual. For example, a designer
who uses form follows function to constrain designs.
Integration:
A design that needs to work with other things such as products, services, systems, processes, controls,
partners and information.
Conceptual Design is an early phase of the design process, in which the broad outlines of function and
form of something are articulated. It includes the design of interactions, experiences, processes and
strategies. It involves an understanding of people's needs - and how to meet them with products,
services, & processes. Common artifacts of conceptual design are concept sketches and models.
The unified modeling language allows the software engineer to express an analysis model using the
modeling notation that is governed by a set of syntactic, semantic and pragmatic rules.
A UML system is represented using five different views that describe the system from a distinctly
different perspective. Each view can be defined by a set of diagrams.
UML is specifically constructed through two different domains. They are:
• UML analysis modeling, which focuses on the user model and structural model views of the system.
• UML design modeling, which focuses on the behavioral modeling, implementation modeling and environment model views.
Use case diagram:
Use case diagram at its simplest is a representation of a user's interaction with the system that shows
the relationship between the user and the different use cases in which the user is involved. A use case
diagram can identify the different types of users of a system and the different use cases and will often
be accompanied by other types of diagrams as well. Actors are the external entities that interact with
the system. The use cases are represented by either circles or ellipses.
Class diagram:
Class diagrams give an overview of a system by showing its classes and the relationships among them.
Class diagrams are static - they display what interacts but not what happens when they do interact. In
general a class diagram consists of some set of attributes and operations. Operations will be performed
on the data values of attributes.
The logical design of a system pertains to an abstract representation of the data flows, inputs and
outputs of the system. This is often conducted via modeling, using an over-abstract and sometimes
graphical model of the actual system.
Sequence diagram:
A Sequence diagram shows object interactions arranged in time sequence. It depicts the objects and
classes involved in the scenario and the sequence of messages exchanged between the objects needed
to carry out the functionality of the scenario. Sequence diagrams are typically associated with use case
realizations in the Logical View of the system under development.
Activity diagram:
Activity diagram is essentially a fancy flowchart. Activity and state diagrams are related: a state chart diagram focuses on an object undergoing a process, while an activity diagram focuses on the flow of activities involved in a single process. The activity diagram shows how the activities depend on one another.
An activity represents the performance of the task or duty in a workflow. It may also represent the
execution of a statement in a procedure. You can share activities between state machines. However,
transitions cannot be shared.
Activity diagrams provide a way to model the workflow of a business process, code specific
information such as a class operation. The transitions are implicitly triggered by completion of the
actions in the source activities.
The main difference between activity and state chart diagrams is that activity diagrams are activity-centric, while state chart diagrams are state-centric.
State Diagram:
Objects have behaviors and states. The state of an object depends on its current activity or condition. A state chart diagram shows the possible states of an object and the transitions that cause a change in state. A state diagram, also called a state machine diagram, is an illustration of the states an object can attain as well as the transitions between those states in the Unified Modeling Language.
A state chart diagram resembles an activity diagram in that the initial state is represented by a large filled dot and subsequent states are portrayed as boxes with rounded corners. A box may be divided into stacked sections by one or more horizontal lines; in that case, the name of the state is written in the upper section. Straight lines ending with an arrow at one end connect various pairs of boxes; these lines define the transitions between states.
The final state is portrayed as a large filled dot surrounded by an unfilled circle. Historical states are denoted by a circle with the letter H inside.
A state chart diagram is a type of diagram used in computer science and related fields to describe the
behavior of systems. State diagrams require that the system described is composed of a finite number
of states; sometimes, this is indeed the case, while at other times this is a reasonable abstraction.
Architectural design is a concept that focuses on components or elements of a structure. Any changes
the client wants to make to the design should be communicated to the architect during this phase.
Flow diagram is a collective term for a diagram representing a flow or set of dynamic relationships in
a system.
A data flow diagram (DFD) is a way of representing the flow of data of a process or a system, usually
an information system. The DFD also provides information about the outputs and inputs of each
entity and the process itself.
A data flow diagram is a graphical representation of the “flow” of the data through an information
system. DFD’s can also be used for the visualization of data processing. On a DFD, data items flow
from an external data source or an internal data store to an internal data store or an external data sink,
via an internal process.
A DFD provides no information about the timing of processes or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flow chart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order and under what circumstances, but not what kinds of data will be input to and output from the system, where the data will come from and go to, or where the data will be stored.
1.7 Algorithmic Design:
Step-1: Start
Step-2: Open the interface
Step-3: Import libraries
Step-4: Load the dataset
Step-5: Split the dataset
Step-6: Build the model
Step-7: Train the model
Step-8: Evaluate the model
Step-9: Test the model
Step-10: Reuse the model
Step-11: Stop
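The steps above can be sketched with TensorFlow/Keras roughly as follows; the directory layout "dataset/", image size and network architecture are assumptions for illustration rather than the exact configuration used in this work.

```python
# Sketch of the algorithmic steps: load, split, build, train, evaluate, save.
import tensorflow as tf

IMG_SIZE, NUM_CLASSES = (128, 128), 2   # assumed values

# Step-4/5: load the dataset and split it into training and validation parts
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32)

# Step-6: build a small CNN model
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Step-7/8: train and evaluate the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.evaluate(val_ds)

# Step-10: save the model so it can be reused later
model.save("brain_tumor_cnn.h5")
```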
Chapter 4
Testing
1.1 Introduction to testing:
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the Software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests. Each test
type addresses a specific testing requirement.
Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Integration Testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional Test
Functional tests provide systematic demonstrations that functions tested are available as specified by
the business and technical requirements, system documentation, and user manuals.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
White Box Testing is a testing in which the software tester has knowledge of the inner workings,
structure and language of the software, or at least its purpose. It is purposeful. It is used to test areas
that cannot be reached from a black box level.
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit testing
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct
phases.
Field testing will be performed manually and functional tests will be written in detail.
Integration testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or - one step up - software applications at the company level -
interact without error.
Test Results:
All the test cases mentioned above passed successfully. No defects encountered.
Acceptance testing:
User Acceptance Testing is a critical phase of any project and requires significant participation by
the end user. It also ensures that the system meets the functional requirement.
Test Results:
All the test cases mentioned above passed successfully. No defects encountered.
1. Implementation
Artificial neural networks (ANNs) are intended to simulate the behavior of biological systems composed of "neurons". ANNs are computational models inspired by an animal's central nervous system. They are capable of machine learning as well as pattern recognition, and are presented as systems of interconnected "neurons" which can compute values from inputs.
A neural network is an oriented graph. It consists of nodes, which in the biological analogy represent neurons, connected by arcs that correspond to dendrites and synapses. Each arc is associated with a weight. At each node, an activation function is applied to the values received along the incoming arcs, adjusted by the weights of those arcs.
A neural network is a machine learning algorithm based on the model of a human neuron. The human brain consists of millions of neurons that send and process signals in the form of electrical and chemical signals. These neurons are connected through special structures known as synapses, which allow neurons to pass signals. Neural networks are formed from large numbers of simulated neurons.
An Artificial Neural Network is an information processing technique. It works like the way human
brain processes information. ANN includes a large number of connected processing units that work
together to process information. They also generate meaningful results from it.
Neural networks can be applied not only to classification but also to regression of continuous target attributes.
Neural networks find great application in data mining across sectors such as economics and forensics, and in pattern recognition. They can also be used for data classification over large amounts of data after careful training.
A neural network may contain the following 3 layers:
Input layer - The activity of the input units represents the raw information that can feed into
the network.
Hidden layer - The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units. There may be one or more hidden layers.
Output layer - The behavior of the output units depends on the activity of the hidden units and
the weights between the hidden and output units.
An artificial neural network is typically organized in layers, which are made up of many interconnected 'nodes', each containing an 'activation function'. A neural network may contain the following 3 layers:
a. Input layer
The purpose of the input layer is to receive as input the values of the explanatory attributes for each
observation. Usually, the number of input nodes in an input layer is equal to the number of
explanatory variables. ‘input layer’ presents the patterns to the network, which communicates to one
or more ‘hidden layers’.
The nodes of the input layer are passive, meaning they do not change the data. They receive a single value on their input and duplicate the value to their many outputs. From the input layer, each value is duplicated and sent to all the hidden nodes.
b. Hidden layer
The hidden layers apply given transformations to the input values inside the network. Each hidden node is connected by incoming arcs that come from other hidden nodes or from input nodes, and by outgoing arcs to output nodes or to other hidden nodes. In the hidden layer, the actual processing is done via a system of weighted 'connections'. There may be one or more hidden layers. The values entering a hidden node are multiplied by weights, a set of predetermined numbers stored in the program, and the weighted inputs are then added to produce a single number.
c. Output layer
The hidden layers then link to an ‘output layer‘. Output layer receives connections from hidden layers
or from input layer. It returns an output value that corresponds to the prediction of the response
variable. In classification problems, there is usually only one output node. The active nodes of the
output layer combine and change the data to produce the output values.
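The following toy forward pass illustrates this three-layer structure (a passive input layer, weighted sums plus an activation function in the hidden layer, and a softmax output layer); all sizes and weights are arbitrary assumptions.

```python
# Toy forward pass through a 3-3-layer network: input -> hidden -> output.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.7, 0.1])               # input layer: raw feature values

W_hidden = np.random.randn(4, 3) * 0.1       # weights from 3 inputs to 4 hidden units
b_hidden = np.zeros(4)
h = sigmoid(W_hidden @ x + b_hidden)          # hidden layer: weighted sum + activation

W_out = np.random.randn(2, 4) * 0.1           # weights from hidden to 2 output units
b_out = np.zeros(2)
scores = W_out @ h + b_out
probs = np.exp(scores) / np.exp(scores).sum() # softmax over the output units

print("class probabilities:", probs)
```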
The ability of the neural network to provide useful data manipulation lies in the proper selection of the
weights. This is different from conventional information processing.
The structure of a neural network is also referred to as its 'architecture' or 'topology'. It consists of the number of layers and elementary units, together with their interconnections and weight-adjustment mechanism. The choice of structure determines the results that will be obtained and is the most critical part of the implementation of a neural network.
The simplest structure is one in which the units are distributed in two layers: an input layer and an output layer. Each unit in the input layer has a single input and a single output which is equal to the input. The output unit has all the units of the input layer connected to its input, with a combination function and a transfer function. There may be more than one output unit. In this case, the resulting model is a linear or logistic regression, depending on whether the transfer function is linear or logistic, and the weights of the network are the regression coefficients.
Adding one or more hidden layers between the input and output layers, and units within these layers, increases the predictive power of the neural network. However, the number of hidden layers should be kept as small as possible; this ensures that the neural network does not simply store all the information from the learning set but can generalize from it, avoiding overfitting. Overfitting occurs when the weights make the system learn details of the learning set instead of discovering its underlying structure, and it happens when the size of the learning set is too small in relation to the complexity of the model.
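A minimal sketch of two common ways to limit such overfitting, a Dropout layer and early stopping on the validation loss, is given below using Keras; the data shapes and values are assumed placeholders, not the configuration of this project.

```python
# Minimal sketch: Dropout plus early stopping to reduce overfitting.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),           # assumed feature-vector size
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),                  # randomly drops half the units during training
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stop training when the validation loss stops improving
stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)

x_train = np.random.rand(200, 128).astype("float32")   # placeholder data
y_train = np.random.randint(0, 2, 200)
model.fit(x_train, y_train, validation_split=0.2, epochs=50,
          callbacks=[stop_early], verbose=0)
```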
Whether or not a hidden layer is present, the output layer of the network can have many units when there are many classes to predict.
In feedforward ANNs, the flow of information takes place in only one direction: from the input layer to the hidden layer and finally to the output. There are no feedback loops present in this type of neural network. These networks are mostly used in supervised learning for tasks such as classification and image recognition; we use them in cases where the data is not sequential in nature.
In feedback ANNs, feedback loops are part of the network. Such neural networks are used mainly where memory retention is required, as in the case of recurrent neural networks. These types of networks are most suited to areas where the data is sequential or time-dependent.
1.3 Implementation procedure:
Here, first we open the interface. The dataset should be downloaded before implementation. After downloading the dataset, we import the libraries that are used to build the classification models. The next step is to load the dataset and initialize the few parameters required for preprocessing the image dataset. The data is then split into two parts, one for training and the other for testing. An image data generator is used to augment the data by performing various operations on the training images. We then define the hyperparameters of the brain tumor classification model; this step is known as building the model. After this, for training, we initialize the Adam optimizer with learning-rate and decay parameters, choose the type of loss and the metrics for the model, and compile it for training. We then evaluate the model by comparing accuracy and loss, plotting the graphs for training and validation. Finally, the trained model is saved.
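A sketch of these augmentation, compilation and training steps using Keras' ImageDataGenerator and the Adam optimizer is given below; the directory "data/train", image size and hyperparameter values are illustrative assumptions rather than the exact settings of this project.

```python
# Sketch: data augmentation, Adam optimizer with a decaying learning rate,
# compilation, training and saving of the classification model.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

EPOCHS, INIT_LR, BATCH = 25, 1e-3, 32

# Augment the training images with random rotations, shifts, zooms and flips
aug = ImageDataGenerator(rescale=1.0 / 255, rotation_range=25,
                         width_shift_range=0.1, height_shift_range=0.1,
                         zoom_range=0.2, horizontal_flip=True,
                         validation_split=0.2)
train_gen = aug.flow_from_directory("data/train", target_size=(128, 128),
                                    batch_size=BATCH, subset="training")
val_gen = aug.flow_from_directory("data/train", target_size=(128, 128),
                                  batch_size=BATCH, subset="validation")

# A small CNN standing in for the build-model step
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(train_gen.num_classes, activation="softmax"),
])

# Adam optimizer with an initial learning rate and an exponential decay schedule
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    INIT_LR, decay_steps=1000, decay_rate=0.96)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Train, then save the model so it can be reused later
history = model.fit(train_gen, validation_data=val_gen, epochs=EPOCHS)
model.save("brain_tumor_model.h5")
```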
In the test-model step we write a prediction routine to predict the class of a brain MR image. We need to provide the complete path of the image, and the routine displays the image with its predicted class. Here we choose random images for testing. We can then reuse the model that was trained earlier, together with the label transform saved in Google Drive.
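A possible shape of that prediction routine is sketched below; the function name predict_tumor, the class names and the sample image path are hypothetical, and the saved model file is assumed to come from the previous sketch.

```python
# Sketch of the prediction step: load the saved model and classify one image.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["no_tumor", "tumor"]                 # assumed label order
model = tf.keras.models.load_model("brain_tumor_model.h5")

def predict_tumor(image_path):
    """Load one MR image, preprocess it and print the predicted class."""
    img = tf.keras.utils.load_img(image_path, target_size=(128, 128))
    arr = tf.keras.utils.img_to_array(img) / 255.0
    probs = model.predict(np.expand_dims(arr, axis=0))[0]
    print(image_path, "->", CLASS_NAMES[int(np.argmax(probs))])

predict_tumor("samples/example_mri.jpg")            # hypothetical test image
```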
Conclusion:
In this proposed work, different medical images, such as MRI brain cancer images, are taken for detecting tumors. The proposed approach for brain tumor detection is based on a convolutional neural network together with a multi-layer perceptron neural network. The approach utilizes a combination of these neural network techniques and consists of several steps, including pre-processing, training the system, implementation in TensorFlow, and classification. In the future, we will take a larger database and try to achieve higher accuracy, so that the approach can work on any type of MRI brain tumor.
Future Enhancement:
A future enhancement of the project is to automatically present multimedia (audio/video) information about the detected condition and its treatment options once a tumor is detected.