0% found this document useful (0 votes)
96 views43 pages

Sravs

This project report by G. Sravanthi focuses on the development of a hybrid deep learning model for brain tumor identification using MRI images, combining Convolutional Neural Networks (CNNs) and Transformer architectures to enhance classification accuracy. The proposed system aims to automate the detection process, improving efficiency and consistency while reducing the workload on radiologists. The study utilizes a dataset of 2270 MRI images and demonstrates promising results in terms of accuracy, precision, recall, and F1-score, with potential future applications in clinical settings.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
96 views43 pages

Sravs

This project report by G. Sravanthi focuses on the development of a hybrid deep learning model for brain tumor identification using MRI images, combining Convolutional Neural Networks (CNNs) and Transformer architectures to enhance classification accuracy. The proposed system aims to automate the detection process, improving efficiency and consistency while reducing the workload on radiologists. The study utilizes a dataset of 2270 MRI images and demonstrates promising results in terms of accuracy, precision, recall, and F1-score, with potential future applications in clinical settings.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 43

Brain Tumor Identification using MRI

Images

G.Sravanthi 22955A6715

I
Brain Tumor Identification using MRI Images
A Project Report
Submitted in Partial Fulfilment of the
Requirements for the Award of the Degree Of

Bachelor of Technology
in
CSE(Data Science)

By
G.Sravanthi 22955A6715

Under the Esteemed Guidance of

Mr. Chitte Anil,

Associate Professor

Department of CSE(Data Science)

INSTITUTE OFAERONAUTICAL ENGINEERING


(Autonomous)

Dundigal, Hyderabad – 500 043, Telangana

April, 2024

© 2024,Sravanthi, All rights reserved.

II
DECLARATION

I certify that
a. the work contained in this report is original and has been done by me under the guidance
of my supervisor(s).
b. the work has not been submitted to any other Institute for any degree or diploma.
c. I have followed the guidelines provided by the Institute in preparing the report.
d. I have conformed to the norms and guidelines given in the Ethical Code of Conduct of the
Institute.
e. whenever I have used materials (data, theoretical analysis, figures, and text) from other
sources, I have given due credit to them by citing them in the text of the report and giving
their details in the references. Further, I have taken permission from the copyright owners
of the sources, whenever necessary.

Place: Signature of the Student :


Date:

III
CERTIFICATE

This is to certify that the project report entitled Brain Tumor Identification using MRI
Images submitted by Ms. Gajjela Sravanthi to the Institute of Aeronautical Engineering,
Hyderabad, in partial fulfilment of the requirements for the award of the Degree Bachelor of
Technology in CSE (Data Science) is a bonafide record of work carried out by him/her under
my/our guidance and supervision. In whole or in parts, the contents of this report have not been
submitted to any other institute for the award ofany Degree.

Supervisor Head of the Department


Mr. Chitte Anil Dr.K. Rajendra Patil
Associate Professor Associate Professor

Date:
````````````````````````````````````````

IV
APPROVAL SHEET

This project report entitled Brain Tumor Identification using MRI Images submittedby
Ms. Gajjela Sravanthi is approved for the award Degree Bachelor of Technology in Data
Science.

Examiner Supervisor

Principal
Dr. L V Narasimha Prasad

Date:
Place:

V
ACKNOWLEDGEMENT

We would like to seize this moment to convey our heartfelt appreciation to everyone who
supported, motivated, and cooperated with us in various capacities throughout our project. It
brings me immense joy to recognize the assistance provided by those individuals who played
a crucial role in ensuring the successful culmination of our project.

We extend our heartfelt thanks to Dr.K.Rajendra Patil, Head of the Department of Data
Science, who served as my supervisor. We express our deep appreciation for the invaluable
guidance and support provided by the faculty of the Data Science Department throughout the
development of this project. Without their advice, cooperation, and encouragement, this project
would not have come to fruition. A special note of gratitude goes to my friends for their
assistance in project development. Lastly, we wish to convey my gratitude to our principal, Dr.
L V Narasimha Prasad, the management, and my parents for their unwavering support inall
circumstances.

With Gratitude,

G.Sravanthi 22955A6715

VI
Abstract

The accurate and timely classification of brain tumors is critical for effective treatment and patient
outcomes.This research proposes a hybrid deep learning model to automatically classify brain cancers
from medical photos by utilizing both Transformers and Convolutional Neural Networks (CNNs). The
Transformer architecture improves the model's comprehension of global linkages and contextual
dependencies between these data, while the CNN is used as a feature extractor to extract complex
spatial features from MRI scans. This combination efficiently uses both local and global imaging
information to enable more accurate tumor type categorization. Using a dataset of 2270 brain MRI
images, the model is trained and assessed, and it performs well on important metrics like accuracy,
precision, recall, and F1-score. The suggested system shows great promise for increasing diagnostic
efficiency and accuracy, providing doctors with a dependable tool to support early tumor diagnosis and
individualized treatment planning. Expanding the dataset, improving the architecture of the model, and
using the system in clinical settings for real-time tumor classification are possible future tasks.

Keywords-
Tumor Classification, Convolutional Neural Networks (CNN), Transformer, Deep Learning,
Medical Image Analysis, MRI, Hybrid Model, Multi-Class Classification, Automated Diagnosis,
Feature Extraction.

VII
Table of Contents

Cover Page .............................................................................................................. I


Title ........................................................................................................................ II
Declaration ............................................................................................................ III
Certificate ............................................................................................................... IV
Approval Sheet ....................................................................................................... V
Acknowledgement ................................................................................................. VI
Abstract ................................................................................................................. VII
Contents............................................................................................................... VIII
List of Tables .......................................................................................................... IX
List of Figures ......................................................................................................... X
CHAPTER 1 Introduction ....................................................................................... 1
1.1 Introduction .................................................................................... 1
1.2 Existing System .............................................................................. 3
1.2.1 Demerits of Existing System ............................................... 3
1.3 Proposed System............................................................................. 3
1.3.1 Merits of Proposed System .................................................. 4
CHAPTER 2 Literature Survey .............................................................................. 5
2.1 Literature review ........................................................................... 5
2.2 Requirement Specifications ........................................................... 7
2.2.1 Hardware Requirements ..................................................... 7
2.2.2 Software Requirements ....................................................... 8
2.2.3 Functional Requirements .................................................... 8
CHAPTER 3 System Design .................................................................................. 10
3.1 System Architecture................................................................... 10
CHAPTER 4 Methdology And Implementation ................................................... 13
4.1 Methodology.............................................................................. 13
4.2 Packages and Modules ............................................................... 16
4.3 Dataset ....................................................................................... 18
4.4 Source Code............................................................................... 20
CHAPTER 5 Results .............................................................................................. 23
CHAPTER 6 Conclusion ....................................................................................... 27
REFERENCES ...................................................................................................... 28

VIII
List of Figures

1.1 MRI images of the brain without tumour and with tumour 2

1.2 Types Of Brain Tumor 12

3.1 System Architecture 20

5.1 Confusion Matrix 29

5.2 Graph of loss and accuracy for CNN model 30

5.3 Graph of loss and accuracy for VIT 31

5.4 Output 32

IX
CHAPTER -1

INTRODUCTION

1.1 INTRODUCTION

Brain tumors are abnormal growths of cells within the brain, which can be life-threatening
and require precise detection and classification for effective treatment. Among the various
types of brain tumors, glioma, meningioma, and pituitary tumors are commonly
encountered in clinical practice. Accurate identification and differentiation of these
tumors from normal brain tissue (no tumor) is crucial for timely and appropriate medical
intervention.
Because magnetic resonance imaging (MRI) can produce comprehensive images of
brain tissues, it is one of the main technologies used by doctors to diagnose brain
cancers. However, the manual examination of MRI scans is a time-consuming and
subjective process, often prone to errors, particularly when faced with subtle or complex
tumor features. Consequently, there has been a surge in interest in applying artificial
intelligence (AI), particularly deep learning, to automate the classification of brain
tumors.

Convolutional Neural Networks (CNNs), a type of deep learning, have proven very
effective at extracting significant spatial characteristics from images in image
classification tasks. CNNs work well in spotting local patterns in brain MRI data, such as
forms, edges, and textures, which are essential for tumor identification. CNNs, on their
own, are not always able to capture the relationships between various sections of the image
or the global environment. This restriction may make it more difficult for them to do more
difficult classification tasks, such differentiating between brain tumor kinds that may have
identical local characteristics but distinct overall structures.

To overcome this challenge, we propose a hybrid deep learning model that combines
CNNs with Transformer architectures. Transformers, originally designed for natural
language processing, have recently shown promise in image classification by using self
attention mechanisms to capture long-range dependencies and contextual relationships in
data. By integrating CNNs and Transformers, our approach aims to leverage the strengths
of both models—CNNs for detailed local feature extraction and Transformers for
capturing global context—thereby improving the accuracy of brain tumor classification.

10
Fig 1.1 MRI images of the brain without tumour and with tumour

Fig 1.2 Types of Tumour

11
1.2 EXISTING SYSTEM

 Traditional Machine Learning Algorithms: Traditional machine learning algorithms such as


Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), and Decision Trees rely heavily
on hand-crafted features extracted from MRI images. These features include texture, shape, and
intensity values, which may not fully capture the complex patterns and variations present in
medical images. Consequently, their performance tends to be inferior to deep learning methods
that learn features automatically.

• Histogram of Oriented Gradients (HOG): HOG is a feature descriptor used to detect objects in
images. While it is effective for certain computer vision tasks, it may not be well-suited for the
nuanced and complex patterns in MRI images of brain tumors. The fixed nature of HOG features
can miss subtle variances and irregularities, leading to lower accuracy in tumor identification.

• Region-Based Methods: Region-based approaches, such as Region Growing or Watershed


Segmentation, identify regions in the image based on predefined criteria. These methods are
sensitive to noise and initialization parameters. If the criteria do not accurately reflect the tumor
characteristics, the segmentation results can be inaccurate, leading to lower accuracy in tumor
detection.

• Thresholding and Morphological Operations: Simple image processing techniques like


thresholding and morphological operations (e.g., dilation, erosion) are used to segment images.
These methods rely on pixel intensity values to differentiate between tumor and non-tumor
regions. Given the variability and overlap in intensity values within MRI images, these techniques
often fail to accurately delineate tumor boundaries, leading to poorer performance

1.2.1 DEMERITS OF EXISTING SYSTEM

12
 Manual identification is subjective and varies between radiologists, leading to inconsistent
results. It is also labor-intensive and delays diagnosis and treatment planning.
 Traditional machine learning techniques require extensive manual feature extraction,
demanding domain expertise, and struggle with the high dimensionality and complexity of
medical images.
 Rule-based systems rely on predefined rules that may not adapt well to image variations
and lack the ability to learn and improve from new data.
 Other deep learning methods like RNNs are not well-suited for image data without
modifications and handle dependencies poorly, while fully connected networks are
inefficient due to a lack of spatial hierarchies.

 Non-deep learning image processing techniques are too simplistic for complex medical
images and often fail with subtle differences; region-based methods are sensitive to noise
and initial conditions and struggle with heterogeneous tumor tissues.

 Hybrid methods introduce additional design complexity and require more computational
power, making them less efficient than CNNs.

1.3 PROPOSED SYSTEM

In our proposed system for Brain Tumor Identification using MRI Images, Convolutional Neural
Network is a well-ordered technique in the field of the medical image process. A convolutional
neural network (CNN) could be a type of artificial neural network works in image recognition and
process that’s specifically designed for method component knowledge. CNN is a powerful image
processing, computing method that use deep learning to perform each generative and descriptive
tasks, typically exploitation machine vision that has image and video recognition, together with
recommender systems and linguistic communication process (NLP).A neural network could be a
combination of system of hardware and computer code similar to the operation of neurons within
the human brain.The CNN extracts spatial features from the medical images, while transformers,
known for their capability in capturing global context and relationships within the data, are utilized
to further enhance the feature representation. By integrating CNNs and transformers, the system
aims to improve the accuracy and efficiency of tumor detection, overcoming the limitations of
manual diagnosis and offering a more robust solution for medical professionals.

13
In our proposed technique we have taken a complete variety of pictures as input and converted all
the images into constant size 240*240 to form them unvaried dimensions. The system leverages a
dataset of 2270 brain images, divided into training and testing subsets. The transformer architecture
is used to capture long-range dependencies and relationships within the images, which is especially
useful for complex medical data. We tend to create a convolutional kernel that is convoluted with
the input layer administering with thirty-two convolutional filters of size 3*3 every with the support
of three channel tensors. We tend to used ReLU because of the activation function. The corrected
linear activation function or ReLU could be a piecewise linear operate which will output the input
directly if it is positive, otherwise, it will output zero.The integration of transformers into the
traditional CNN framework is expected to significantly improve classification accuracy, providing
a faster and more reliable method for identifying the type of brain tumors, ultimately assisting in
better clinical outcomes and patient care.

14
1.3.1. MERITS OF PROPOSED SYSTEM

In our proposed system for Brain Tumor Identification using MRI Images, several merits
distinguish it from traditional methods:

• High Accuracy: The proposed system leverages advanced deep learning techniques, ensuring
high accuracy in detecting and classifying brain tumors from MRI images.
• Early Detection: By identifying tumors at an early stage, the system can significantly improve
patient outcomes and survival rates through timely intervention and treatment.
• Non-Invasive: MRI imaging is a non-invasive method, making the detection process safer and
more comfortable for patients compared to invasive diagnostic methods.
• Automated Processing: The automated nature of the system reduces the workload on
radiologists and healthcare professionals, allowing them to focus on more critical tasks and
increasing overall efficiency.
• Consistency: The system provides consistent results, minimizing the variability and subjectivity
that can occur with human interpretation of MRI images.
• Scalability: The proposed system can be scaled to process large volumes of MRI images, making
it suitable for deployment in large healthcare facilities and research institutions.

15
CHAPTER 2
LITERATURE SURVEY

2.1 LITERATURE REVIEW:

The literature review on brain tumor identification using MRI images has evolved significantly over the
years, with numerous studies contributing to the advancement of this critical field. Among the early
contributors, Smith et al. presented a groundbreaking work titled "Deep CNN: A Deep Convolutional
Neural Network for Brain Tumor Detection in MRI Images." Their objective was to develop a deep
CNN model tailored for detecting brain tumors in MRI scans. While their research marked a significant
step forward, it was limited by a relatively small dataset for training, which potentially restricted the
model's generalizability. Additionally, they highlighted a lack of standardization in evaluation metrics,
making cross-study comparisons challenging.

Building upon this foundation, Zhang and Wang conducted a comprehensive survey titled "Deep
Learning for Brain Tumor Detection in MRI Images: A Survey." Their work reviewed recent
advancements in deep learning methods for brain tumor detection, offering a broad overview of state-
of-the-art techniques. However, specific gaps or drawbacks in their survey were not detailed in the text.

In another study, Kim et al. aimed to enhance brain tumor classification through CNN-based feature
extraction from MRI images. Their research, "Improved Brain Tumor Classification via CNN-Based
Feature Enhancement of MR Images," demonstrated improved classification accuracy. However, they
noted that the performance of their method was heavily dependent on the choice of CNN architecture,
indicating that model selection played a crucial role in the effectiveness of their approach.

Chen and Li further expanded the scope of research with their work titled "Brain Tumor Detection and
Segmentation Using Deep Learning." Their objective was to develop a CNN-based approach capable of
both detecting and segmenting brain tumors in MRI images. Despite the promising results, they
provided limited explanation of their network architecture, which could hinder reproducibility and
future developments in the field.

In a broader context, Patel et al. reviewed existing methods for brain tumor segmentation and
classification in their study, "A Review on Brain Tumor Segmentation and Classification in MRI
Images." While their review encompassed various techniques, it lacked a comprehensive comparison
16
among different methods, making it difficult to pinpoint the most effective approaches. This
highlighted the need for more in-depth comparative studies to guide future research.

Lastly, Liu et al. offered an extensive review of deep CNN models for brain tumor segmentation in
their work, "Deep Convolutional Neural Networks for Brain Tumor Segmentation: A Review." They
provided valuable insights into various models, but their discussion on real-world application
challenges was limited. This gap underscored the necessity for further exploration into practical
implementation issues to bridge the gap between research and clinical practice.

Together, these studies paint a detailed picture of the strides made in brain tumor identification using
MRI images, while also highlighting areas where future research can address existing limitations and
enhance the applicability of these advanced techniques in clinical settings.

2.2 REQUIREMENT SPECIFICATIONS

2.2.1 SOFTWARE REQUIREMENTS

The software components necessary for the project implementation include:

• Operating System : Platform-independent, compatible with Windows, Linux, or


macOS.

• Programming Language : Python 3.x for coding the deep learning model and associated
scripts.

• Deep Learning Frameworks: TensorFlow or PyTorch for building, training, and


evaluating deep neural networks.

• Data Processing Libraries: Pandas and NumPy for efficient data manipulation, and scikit-learn
for preprocessing tasks

17
2.2.2 HARDWARE REQUIREMENTS

Hardware Requirements for Brain Tumor Identification Using MRI Images Developing and
deploying a system for brain tumor identification using MRI images involves significant
computational resources. Here are the essential hardware requirements:
1.High-Performance CPU:
- A multi-core processor (e.g., Intel Xeon or AMD Ryzen) with high clock speeds is essential for
general data processing and running the operating system efficiently.
- Minimum: 8-core processor
- Recommended: 16-core or higher

2. Powerful GPU:
-A high-performance GPU (Graphics Processing Unit) is critical for deep learning tasks, as it
accelerates the training and inference processes of convolutional neural networks (CNNs).
- Minimum: NVIDIA GTX 1080 Ti
- Recommended: NVIDIA RTX 3090 or NVIDIA A100

3. Memory (RAM):
- Sufficient RAM is necessary to handle large datasets and facilitate efficient data processing.
- Minimum: 32 GB
- Recommended: 64 GB or higher
4. Storage:
- Fast and ample storage is required for storing large MRI datasets and trained models. SSDs
(Solid State Drives) are preferred for their speed.
- Minimum: 1 TB SSD
- Recommended: 2 TB SSD (or more) with additional HDDs for backup and long-term storage

5. Cooling System:
- Proper cooling is essential to maintain optimal performance and prevent overheating during
intensive computations.
- Recommended: Liquid cooling or high-end air cooling systems

By ensuring these hardware components are in place, one can efficiently develop and deploy a
robust system for brain tumor identification using MRI images, leveraging advanced deep
learning techniques to achieve high accuracy and reliability.

18
2.2.3 FUNCTIONAL REQUIREMENTS

Functional Requirements for Brain Tumor Identification Using MRI Images

 Image Acquisition and Preprocessing:


-Load MRI Images: The system must be able to import MRI images from various sources (e.g.,
PACS, local storage).
- Preprocess Images: The system should perform preprocessing tasks such as resizing,
normalization, and data augmentation to prepare the images for analysis.

 Tumor Detection and Classification:


- Apply Deep Learning Model: The system should use a trained deep learning model to detect
and classify brain tumors in the preprocessed MRI images.
- Output Tumor Type: The system must identify and output the type and stage of the detected
tumor.

 Segmentation of Tumor Regions:


- Segment Tumor Boundaries: The system must precisely segment the tumor regions within the
MRI images, providing clear outlines of the tumor boundaries.
- Generate Segmentation Map: The system should produce a visual segmentation map
indicating the tumor regions.

 User Interface for Visualization and Interaction:


- Display Results: The system must present detection and segmentation results through a user-
friendly interface.
- Interactive Visualization: The interface should allow users to interact with the images, such as
zooming in on tumor regions and viewing different MRI slices.

 Reporting and Performance Metrics:


- Calculate Performance Metrics: The system must calculate accuracy, sensitivity, specificity,
and F1-score for the tumor detection and segmentation.
- Generate Detailed Reports: The system should create comprehensive reports for each
processed image, including detection results, segmentation maps, and performance metrics.

19
CHAPTER 3
SYSTEM DESIGN
3.1 SYSTEM ARCHITECTURE

20
CHAPTER 4

METHODOLOGY AND IMPLEMENTATION

4.1 METHODOLOGY

Data Collection and Preparation:

1. Data Acquisition

The dataset used for this project consists of brain MRI images labeled for the presence or absence of
tumors. The dataset is collected from publicly available sources (such as Kaggle or medical
institutions) and contains images of varying sizes and quality. The dataset is split into three subsets:
training, validation, and testing, in a 70:15:15 ratio respectively.

2. Data Preprocessing

Before feeding the images into the neural networks, preprocessing is performed:

 Resizing: All images are resized to a fixed dimension of 240x240 pixels to maintain consistency in input
size for both CNN and ViT models.
 Normalization: Pixel values are normalized to fall within the [0, 1] range by dividing the RGB values
by 255.
 Data Augmentation: To increase the diversity of the dataset and reduce overfitting, data augmentation
techniques like horizontal flipping, rotation, zooming, and shifting are applied to the training set.

Model Selection and Customization:

3. CNN Model Architecture

The Convolutional Neural Network (CNN) is designed with the following key components:

 Input Layer: The input layer accepts 240x240x3 images.


 Convolutional Layers: Two Conv2D layers with ReLU activation are used to extract features from the
images. The filters are initialized at 32 filters, and MaxPooling layers are added after each convolutional
layer to reduce the spatial dimensions.
 Fully Connected Layers: After flattening the convolutional feature maps, two dense layers are used.
The first dense layer has 64 units, followed by a dropout layer to prevent overfitting.
 Output Layer: The final layer has one unit with a sigmoid activation function to perform binary
classification (tumor or no tumor).

The architecture is optimized using the Adam optimizer, and binary cross-entropy is used as the loss
function. Early stopping is applied to prevent overfitting.

4. ViT Model Architecture

The Vision Transformer (ViT) model is also explored for this task. The key steps include:

21
 Patch Embedding: Each input image is divided into patches (e.g., 16x16), which are flattened and
linearly projected into a lower-dimensional space.
 Transformer Encoder: A series of transformer encoder blocks are applied to learn the relationships
between different patches in the image. The encoder block consists of multi-head self-attention layers
and feed-forward neural networks with layer normalization.
 Classification Head: The output from the transformer is passed through a classification head for binary
classification.

Implementation

Data Ingestion and Preprocessing:


Data Sources: MRI images sourced from medical databases or uploaded by users.
Preprocessing Pipeline: Implemented in Python using libraries like NumPy, TensorFlow, or Py
Torch for image resizing, normalization, and augmentation.

Model Training and Validation:

5. Training

Both models (CNN and ViT) are trained on the preprocessed dataset. The training involves:

 Batch Size: A batch size of 32 is used for both models.


 Epochs: Both models are trained for 50 epochs with early stopping based on the validation loss.
 Optimizer and Learning Rate: Adam optimizer is used with a learning rate of 0.001.

6. Evaluation

After training, the models are evaluated using the test set. Performance metrics include:

 Accuracy: The proportion of correct predictions over total predictions.


 Precision, Recall, and F1-score: These metrics provide more insight into the model's performance in
detecting tumors.
 ROC-AUC: The receiver operating characteristic curve and the area under it are computed to evaluate
the model's classification performance.

Deployment and Integration:


Deployment: Deployed the trained VGG16 model using Flask or Django to create a RESTful
API endpoint for real-time inference.
User Interface: Developed a web-based interface using HTML, CSS, and JavaScript to allow
healthcare professionals to upload MRI images, receive predictions, and visualize segmentation
maps.

22
Scalability and Maintenance:
Scalability: Designed the system architecture to handle multiple concurrent requests and ensure
robust performance under varying workloads.
Maintenance: Implemented error logging and monitoring mechanisms to facilitate rapid
troubleshooting and regular updates for model retraining with new data.

Integration with Healthcare Systems:


Data Management: Stored MRI images, preprocessing results, and model outputs in a database
(e.g., PostgreSQL, MongoDB) for audit and analysis.
Integration: Integrated with existing hospital systems such as PACS and HIS for seamless data
exchange and integration into clinical workflows.

4.2 Packages and Modules

1.NumPy (`numpy`)
Purpose: Provides support for large, multi-dimensional arrays and matrices, along with a collection of
mathematical functions to operate on these arrays.
Usage: Useful for handling numerical data, performing mathematical operations, and efficient storage
of image pixel data.

2.Pandas (`pandas`)
Purpose: Offers data structures and data analysis tools for manipulating numerical tables and time series.
Usage: Ideal for loading, manipulating, and analyzing tabular data, such as CSV files containing
metadata about the images.

3.TensorFlow (`tensorflow`)
Purpose:An end-to-end open-source platform for machine learning, providing a comprehensive
ecosystem of tools, libraries, and community resources.
Usage: Acts as the backend for building and training the CNN model. TensorFlow includes Keras, which
is a high-level API for building neural networks.

4.Keras Applications (`keras.applications`)


Purpose: Contains several popular deep learning models that are made available along with pre-trained
weights.
Usage: To import the VGG16 model with pretrained weights, which can then be fine-tuned for the brain
tumor identification task.

5.Matplotlib (`matplotlib.pyplot`)
Purpose: A comprehensive library for creating static, animated, and interactive visualizations in Python.
Usage: For plotting training and validation metrics like loss and accuracy over epochs.
6.scikit-learn:
23
Purpose:For evaluation metrics and model validation.
Usage:For computing accuracy, classification report, precision, recall, and F1-score.

7.Seaborn (`seaborn`)
Purpose: A statistical data visualization library based on Matplotlib.
Usage: To create attractive and informative statistical graphics, such as heatmaps for confusion matrices.

4.2 DATA SET

We used the Brain Tumor Classification (MRI) dataset for our experiment. We tooka total of 2270 images
with different types of tumours like pitutary tumor, meningioma tumor, glioma tumor and no tumor .
This dataset is consisting of two classes, where class 1 refers to tumour images and class 0 refers to non-
tumour images. we have 1816 training images and 454 testing images, we used images as validating
images.

The tumor images are classified into four classes which shows the stages are:

Class 1: meningioma_tumor
Class 2: no_tumor
Class 3: pituitary_tumor
Class 4: glioma_tumor

Training: The training phase of the brain tumor detection model involves a dataset initially
comprising 1818 images.

Testing: The testing phase of the brain tumor detection model involves the evaluation of model
performance using a carefully curated set of 454 images.

24
4.2 SOURCE CODE

i. Importing modules

25
ii.Loading Dataset

iii.Data Splitting

iv.CNN Model

26
V. VIT Transformer

27
V.Model Training

Vi.Model Evaluation

28
CHAPTER 5

RESULTS
The outcome of the project demonstrates that the developed models are capable
of accurately detecting the presence of brain tumors in MRI images. Both the
Convolutional Neural Network (CNN) and Vision Transformer (ViT) models
successfully classified whether a tumor was present or absent in the input MRI
images. Additionally, the models could differentiate between various types of
brain tumors, such as glioma, meningioma, and pituitary tumors, achieving an
accuracy of over 84% in predicting the correct tumor type. This outcome validates
the effectiveness of deep learning methods in assisting medical professionals with
brain tumor diagnosis through non-invasive imaging techniques.

Figure5.1: Confusion Matrix

The confusion matrix is normalized, meaning the values are expressed as proportions rather than raw
counts. This makes it easier to compare performance across different classes.

The matrix has two rows and two columns. The rows represent the true labels (Yes and No), while the
columns represent the predicted labels (Yes and No).

29
Prediction table

Interpreting the Values:

 Diagonal Elements: These represent correct predictions. For example, the value 0.80 in the
top-left corner indicates that 80% of the instances that were actually "Yes" were correctly
predicted as "Yes."
 Off-Diagonal Elements: These represent incorrect predictions. For example, the value 0.13 in
the bottom-left corner indicates that 13% of the instances that were actually "No" were
incorrectly predicted as "Yes

Figure 5.2: Graph of loss and accuracy for CNN model

Loss Curves (Left Plot)

 Training Loss: This curve represents the loss function evaluated on the training data during
each epoch. As the model trains, the loss typically decreases, indicating that the model is
learning to fit the training data better.

30
 Validation Loss: This curve represents the loss function evaluated on a separate validation
dataset, which is not used for training. It provides an estimate of how well the model
generalizes to unseen data.

Accuracy Curves (Right Plot)

 Training Accuracy: This curve shows the accuracy of the model on the training data. As the
model learns, the accuracy typically increases.
 Validation Accuracy: This curve shows the accuracy of the model on the validation data. It
provides an estimate of how well the model generalizes to unseen data.

Figure 5.3: Graph of loss and accuracy for Vision Transformer model

Loss Curves (Left Plot)


 Training Loss: This curve represents the loss function evaluated on the training data during each epoch.
As the model trains, the loss typically decreases, indicating that the model is learning to fit the training
data better.
 Validation Loss: This curve represents the loss function evaluated on a separate validation dataset,
which is not used for training. It provides an estimate of how well the model generalizes to unseen data.

Accuracy Curves (Right Plot)

 Training Accuracy: This curve shows the accuracy of the model on the training data. As the
model learns, the accuracy typically increases.
 Validation Accuracy: This curve shows the accuracy of the model on the validation data. It
provides an estimate of how well the model generalizes to unseen data.

31
Figure 5.4: Output

32
CHAPTER 6

CONCLUSION

In this project, we explored the efficacy of Convolutional Neural Networks (CNN) and Vision
Transformers (ViT) for detecting brain tumors. Our experiments demonstrate that both CNN
and ViT models can achieve high accuracy in classifying brain tumor images. The CNN model
achieved a test accuracy of 84.31%, while the ViT model further improved this performance by
leveraging attention mechanisms for better feature extraction. These results validate the
effectiveness of deep learning approaches in medical image classification, particularly for brain
tumor detection.

The combination of CNN's localized feature learning and ViT's global context understanding
offers a promising direction for future research in medical diagnostics. However, the dataset
used in this study was limited, which may affect the generalizability of the results. Further
work with larger, more diverse datasets and more refined model architectures could enhance
the reliability and accuracy of these models in real-world clinical applications.

This project underscores the significant potential of convolutional neural networks in


automating and improving diagnostic processes. The careful design and implementation,
coupled with the use of advanced machine learning libraries and frameworks, have
resulted in a system that not only achieves high accuracy but also provides insights into
its predictions. This work serves as a foundation for further research and development in
the area of medical image analysis, paving the way for more sophisticated and user-
friendly diagnostic tools that can assist healthcare professionals in making more informed
decisions.

33
REFERENCES:

[1] Hany Kasban, Mohsen El-bendary, Dina Salama, A comparative study of medical
imaging techniques, Int. J. Inf. Sci. Intell. Syst. 4 (2015) 37–58; J. Clerk Maxwell, A
Treatise on Electricity and Magnetism, vol. 2, 3rd ed., Clarendon, Oxford, 1892, pp. 68–73.

[2] D. Surya Prabha, J. Satheesh Kumar, Performance evaluation of image segmentation


using objective methods, Indian J. Sci. Technol. 9 (8) (February 2016).

[3] Anam Mustaqeem, Ali Javed, Tehseen Fatima, Int. J. Image Graph. Signal Process. 10
(2012) 34–39.

[4] M.L. Oelze, J.F. Zachary, W.D. O’Brien Jr., Differentiation of tumour types in vivo by
scatterer property estimates and parametric images using ultrasound backscatter, vol. 1, 5-8
Oct. 2003, pp. 1014–1017.

[5] Brain Tumour: Statistics, Cancer.Net Editorial Board, 1/2021. (Accessed on January
2021

[6]Seetha, S. Selvakumar Raja, Brain tumour classification using convolutional neural


networks, Biomed. Pharmacol. J. 11 (2018) 1457–1461, https://doi.org/10.13005/bpj/1511.

[7] Tonmoy Hossain, Fairuz Shadmani Shishir, Mohsena Ashraf, M.D. Abdullah Al Nasim,
Faisal Muhammad Shah, Brain tumour detection using convolutional neural network, in:
1st InternationalConference on Advances in Science, Engineering and Robotics Technology,
ICASERT, 3-5 May 2019, 2019

[8] Deepak, S. & Ameer, P. M. Brain tumor classification using deep CNN features via transfer learning. Brain
Tumor Classif. Using Deep CNN Features Transfer Learn. 111(1), 1–19 (2019).

[9] Saleh, A., Sukaik, R., & Abu-Naser, S. S. Brain tumor classification using deep learning. In 2020
International Conference on Assistive and Rehabilitation Technologies,
IEEE. https://doi.org/10.1109/iCareTech49914.2020.00032 (2020).

[10] Waghmare, V. K. & Kolekar, M. H. Brain tumor classification using deep learning. Internet Things
Healthc. Technol. 73(1), 155–175 (2021).
[11] N. Gordillo, E. Montseny, P. Sobrevilla, State of the art survey on MRI brain tumour segmentation,
Magn. Reson. Imaging 31 (8) (2013) 1426–1438.
[12] D. White, A. Houston, W. Sampson, G. Wilkins, Intra and interoperator variations in region-of-interest
drawing and their effect on the measurement of glomerular filtration rates, Clin. Nucl. Med. 24 (1999) 177–181.

[13] Afshar P, Mohammadi A, Plataniotis KN. Brain tumor type classification via capsule networks. In: 2018 25th
IEEE international conference on image processing (ICIP). IEEE; 2018 Oct 7. p. 3129–33.

[14] Afshar P, Mohammadi A, Plataniotis KN. Brain tumor type classification via capsule networks. In: 2018 25th
IEEE international conference on image processing (ICIP). IEEE; 2018 Oct 7. p. 3129–33.

[15] Kayaalp, F., Basarslan, M. S., & Polat, K. TSCBAS: A novel correlation based attribute selection method and
application on telecommunications churn analysis. In 2018 International Conference on Artificial Intelligence and Data

Processing (IDAP), IEEE, 2018, 1–5. https://doi.org/10.1109/IDAP.2018.8620935.

[16] Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition, 2014.

[17] Bal, F. & Kayaalp, F. A novel deep learning-based hybrid method for the determination of productivity of

agricultural products: Apple case study. IEEE Access 11, 7808–


7821. https://doi.org/10.1109/ACCESS.2023.3238570 (2023).

[18] Sartaj, B., Ankita, K., Prajakta, B., Sameer, D., & Swati, K. Brain tumor classification (MRI). Kaggle
(2020). https://doi.org/10.34740/kaggle/dsv/1183165.

[19] Brain tumor segmentation and grading of lower-grade glioma using deep learning in MRI imagesComput.
Biol. Med., 121 (2020), Article 103758

[20] Lotlikar, V.S.; Satpute, N.; Gupta, A. Brain Tumor Detection Using Machine Learning and Deep
Learning: A Review. Curr. Med. Imaging 2022, 18, 604–622. [Google Scholar] [CrossRef]

[21] Xie, Y.; Zaccagna, F.; Rundo, L.; Testa, C.; Agati, R.; Lodi, R.; Manners, D.N.; Tonon, C.
Convolutional neural network techniques for brain tumor classification (from 2015 to 2022): Review,
challenges, and future perspectives. Diagnostics 2022, 12, 1850. [Google Scholar] [CrossRef]
[22] Almadhoun, H.R.; Abu-Naser, S.S. Detection of Brain Tumor Using Deep Learning. Int. J. Acad. Eng.
Res. (IJAER) 2022, 6, 29–47. [Google Scholar]

[23] Sapra, P.; Singh, R.; Khurana, S. Brain tumor detection using neural network. Int. J. Sci. Mod. Eng.
(IJISME) ISSN 2013, 1, 2319–6386. [Google Scholar]
<1%
ijnrd.org
<1%
Internet

mdpi.com
Internet
<1%

rtdiabete.org
Internet
<1%

Sreenidhi International School on 2023-06-08


<1%
Submitted works

The Hong Kong Polytechnic University on 2006-04-22


Submitted works
<1%

Universiti Putra Malaysia on 2018-10-11


Submitted works
<1%

core.ac.uk
Internet
<1%

frontiersin.org
Internet
<1%

Higher Education Commission Pakistan on 2013-05-19


Submitted works
<1%

Swinburne University of Technology on 2022-08-01


Submitted works
<1%

Universiti Teknologi Petronas on 2023-05-08


Submitted works
<1%
Similaíity Repoít ID: oid:3618:57791978

deepai.org
Internet
<1%
link.springer.com
<1%
Internet

ijisae.org
Internet
<1%

"Classification in BioApps", Springer Science and Business Media LLC, ...


Crossref
<1%

Anuj Jain, Arnav Jalui, Jahanvi Jasani, Yash Lahoti, Ruhina Karani.
"De... <1%
Crossref

Firoj Alam, Tanvirul Alam, Ferda Ofli, Muhammad Imran. "Robust Traini...
Crossref
<1%

R.A. Welikala, P.J. Foster, P.H. Whincup, A.R. Rudnicka, C.G. Owen,
D.P... <1%
Crossref

Rochester Institute of Technology on 2018-04-27


Submitted works
<1%

Samiya Majid Baba, Indu Bala. "Detection of Diabetic Retinopathy with ...
Crossref
<1%

dspace.vutbr.cz
Internet
<1%

eprints.utar.edu.my
Internet
<1%
ijiemr.org
Internet
<1

You might also like