
A DEEP LEARNING BASED APPROACH FOR

BLOOD CANCER DETECTION USING


MOBILENET ALGORITHM

A PROJECT REPORT

Submitted by
JESWIN A A (612821104024)

SUBASH R (612821104049)

VENKAT S K (612821104054)

THAMARAISELVAN R (612821104052)

in partial fulfillment for the award of the degree


of
BACHELOR OF ENGINEERING
in

COMPUTER SCIENCE AND ENGINEERING

VARUVAN VADIVELAN INSTITUTE OF TECHNOLOGY,


DHARMAPURI 636 701
ANNA UNIVERSITY :: CHENNAI 600 025
MAY 2025

BONAFIDE CERTIFICATE

Certified that this project report "DEEP LEARNING BASED APPROACH FOR BLOOD CANCER DETECTION USING MOBILENET ALGORITHM" is the bonafide work of “JESWIN A A (612821104024), SUBASH R (612821104049), VENKAT S K (612821104054), THAMARAISELVAN R (612821104052)” who carried out the project work under my supervision.

SIGNATURE
Mrs. M. GEETHARANI, M.E.,
HEAD OF THE DEPARTMENT
Department of Computer Science and Engineering
Varuvan Vadivelan Institute of Technology, Dharmapuri - 636701.

SIGNATURE
Mrs. A. NANDHINI, M.E.,
SUPERVISOR
Assistant Professor
Department of Computer Science and Engineering
Varuvan Vadivelan Institute of Technology, Dharmapuri - 636701.

Submitted for Anna University Project Viva-voce held on ……………….....


INTERNAL EXAMINER EXTERNAL EXAMINER
ACKNOWLEDGEMENT

We take this opportunity to express our sincere gratitude to our Chairman, Thiru. M. Vadivelan, B.A., whose support and encouragement have been instrumental in the successful completion of our project. Without his motivation, this endeavor would not have been possible.

We are honored to extend our heartfelt thanks to our Principal, Dr. A. Sivakumar, M.E., M.S., DBA, DMM, Ph.D., LLB, for being a constant source of inspiration and for his guidance throughout our academic journey.

Our sincere appreciation to Mrs. M. Geetharani, M.E., Head of the Department of Computer Science and Engineering, for providing us with the necessary facilities and environment to carry out our project.

We are also grateful to Mrs. M. Kiruthika Devi, M.E., Assistant Professor and Project Coordinator, Department of Computer Science and Engineering, whose support and encouragement from the beginning have been a great motivation.

We express our gratitude to our guide, Mrs. A. Nandhini, M.E., Assistant Professor, Department of Computer Science and Engineering, for her valuable guidance, feedback, and continuous support throughout this project.

Above all, we thank Almighty God for His blessings and grace that guided us through this journey. We also express our sincere appreciation to all the faculty members of our department for their support.

Finally, we dedicate this work to our beloved parents, whose love and encouragement have been our greatest strength during this endeavor.


ABSTRACT

Blood cancer is a life-threatening condition that impairs the normal production and function of blood cells. Early and accurate detection is essential for timely intervention and improved survival. Traditional diagnostic methods, like microscopic analysis of blood smears, are time-consuming, prone to error, and require expert interpretation. This project proposes a deep learning–based approach using the MobileNetV2 algorithm, a lightweight convolutional neural network (CNN), to classify microscopic blood smear images as cancerous or non-cancerous. Preprocessing steps such as enhancement, noise reduction, and normalization improve feature extraction and classification accuracy. MobileNetV2's depthwise separable convolutions ensure efficient computation, enabling real-time deployment on low-resource devices. The model demonstrates high accuracy and fast inference, making it suitable for automated blood cancer screening. Its lightweight architecture also allows deployment in edge and cloud platforms, enhancing accessibility in limited-resource settings. This approach shows the potential of deep learning to provide scalable, cost-effective, and accurate diagnostics for blood cancer detection.

Keywords: MobileNetV2, Deep learning, Convolutional Neural Network (CNN), Depthwise separable convolutions, Image preprocessing, Transfer learning, Medical diagnostics.

LIST OF TABLES

TABLE NO TITLE PAGE NO

3.1.1 Hardware Requirements 22

3.1.2 Software Requirements 23

3.1.5 Model Configuration 25

3.3.1 Manual system Vs. TML Vs. DL 29

6.3.1 Test Cases 70

LIST OF FIGURES

FIGURE NO TITLE PAGE NO

1.3.1 TML Vs. DL 6

1.3.2 Leukemia Classification 6

1.7.1 Leukocytes Types 10

1.7.2 Leukemia Stages 10

1.7.3 Project Flow 12

1.8.1 Binary Masked and Segmented Image 14

1.8.2 Feature Maps 14

4.1.1 PBS Images 31

4.1.2 Cancerous & Non-Cancerous PBS Image 31

4.1.3 Blood Cancer Types 32

4.1.4 Preprocessing 34

4.2.1 General CNN Architecture 36

4.2.2 MobileNetV2 Architecture 37

4.2.3 Transfer Learning 37

4.3.1 Web based Interface 40

4.4.1 Model Training Accuracy & Loss 43

4.4.2 Performance Evaluation 44

4.4.3 Confusion Matrix 44

5.1.1 System Overview 48

5.2.1 System Architecture 53

5.3.1 Use Case Diagram 55

5.4.1 Class Diagram 57

5.5.1 Activity Diagram 59

5.6.1 Sequence Diagram 63

6.1.1 Functional Test 67

10.2.1 Index Page 87

10.2.2 About Us Page 87

10.2.3 Login Page 88

10.2.4 Services Page 88

10.2.5 Features Page 89

10.2.6 Prediction Page 89

10.2.7 Image Upload 90

10.2.8 Image Upload (2) 90

10.2.9 Result Report Page 91

10.2.10 Stages Description Page 91

LIST OF ABBREVIATIONS

ABBREVIATION DEFINITION

AI Artificial Intelligence

ML Machine Learning

TML Traditional Machine Learning

CNN Convolutional Neural Network

SVM Support Vector Machine

MobileNetV2 MobileNet Version 2

KNN K-Nearest Neighbor

CPU Central Processing Unit

RAM Random Access Memory

SSD Solid State Drive

WBC White Blood Cells

PBS Peripheral Blood Smear

EHR Electronic Health Record

MRI Magnetic Resonance Imaging

CT Computed Tomography

ALL Acute Lymphoblastic Leukemia

AML Acute Myeloid Leukemia


CLL Chronic Lymphocytic Leukemia

CML Chronic Myeloid Leukemia

CLAHE Contrast Limited Adaptive Histogram Equalization

HIPAA Health Insurance Portability and Accountability Act

ALL-IDB Acute Lymphoblastic Leukemia Image Database

TCIA The Cancer Imaging Archive

FDA Food and Drug Administration

RGB Red, Green, and Blue

JPEG Joint Photographic Experts Group

PNG Portable Network Graphics

PDF Portable Document Format

IEEE Institute of Electrical and Electronics Engineers

HTML Hypertext Markup Language

CSS Cascading Style Sheets

UI User Interface
UX User Experience

SQL Structured Query Language

API Application Programming Interface

UAT User Acceptance Test

IDE Integrated Development Environment

AWS Amazon Web Services

HTTPS HyperText Transfer Protocol Secure

SSL Secure Sockets Layer

TLS Transport Layer Security

JSON JavaScript Object Notation

JWT JSON Web Token

TABLE OF CONTENTS

CHAPTER NO    TITLE    PAGE NO

ABSTRACT v

LIST OF TABLES vi

LIST OF FIGURES vii

LIST OF ABBREVIATIONS ix

1 INTRODUCTION

1.1 INTRODUCTION 1

1.2 BACKGROUND AND MOTIVATION 3

1.3 PURPOSE OF THE PROJECT 5

1.4 PROBLEM STATEMENT 7

1.5 OBJECTIVES OF THE PROJECT 8

1.6 SCOPE OF THE PROJECT 9

1.7 METHODOLOGY OVERVIEW 10

1.8 SIGNIFICANCE OF THE PROJECT 13

1.9 CHALLENGES FACED 15

2 LITERATURE REVIEW

2.1 IMAGE SEGMENTATION METHODS 16

2.1.1 CLUSTERING BASED SEGMENTATION 17

2.2 WBC's CLASSIFICATION 18

2.3 DEEP ENSEMBLE LEARNING TECHNIQUE 19

2.4 MACHINE LEARNING BASED STUDIES 20

3 SYSTEM ANALYSIS

3.1 SYSTEM SPECIFICATION 21

3.1.1 HARDWARE REQUIREMENTS 22

3.1.2 SOFTWARE REQUIREMENTS 23

3.1.3 DATASET DETAILS 23

3.1.4 IMAGE PREPROCESSING TECHNIQUES 24

3.1.5 MODEL CONFIGURATION & TRAINING SETUP 25

3.1.6 DEPLOYMENT ENVIRONMENT 25

3.2 EXISTING SYSTEM 26

3.2.1 DISADVANTAGES OF EXISTING SYSTEM 26

3.3 PROPOSED SYSTEM 27

3.3.1 ADVANTAGES OF PROPOSED SYSTEM 28

4 MODULES DESCRIPTION

4.1 DATA COLLECTION AND PREPROCESSING 30

4.2 MODEL TRAINING 35

4.3 WEB BASED INTERFACE DEVELOPMENT AND DEPLOYMENT 38

4.4 PERFORMANCE EVALUATION AND IMPROVEMENT 41

5 SYSTEM DESIGN

5.1 SYSTEM OVERVIEW 45

5.2 SYSTEM ARCHITECTURE 49

5.3 USE CASE DIAGRAM 54
5.4 CLASS DIAGRAM 56

5.5 ACTIVITY DIAGRAM 58

5.6 SEQUENCE DIAGRAM 61

6 SYSTEM TESTING

6.1 FUNCTIONAL TESTING 65

6.2 NON-FUNCTIONAL TESTING 68

6.3 TEST CASES AND STRATEGIES 69

6.4 BUG REPORTING, RESOLUTION AND VALIDATION RESULTS 71

7 SYSTEM IMPLEMENTATION

7.1 FRONTEND DEVELOPMENT 73

7.2 BACKEND DEVELOPMENT 75

8 CONCLUSION 77

9 FUTURE ENHANCEMENTS 93

10 APPENDICES

10.1 CODE SNIPPET 82

10.2 VISUALS OF THE WEBSITE 87


CHAPTER 1

INTRODUCTION

1.1 Introduction

Blood cancer, of which leukemia is the most common form, is a life-threatening disease that disrupts the normal function and production of blood cells, leading to severe health complications. It impairs the immune system and blood clotting, causing symptoms such as fatigue, excessive bleeding, and an increased risk of infections.

Early and accurate detection is essential for improving treatment outcomes and
survival rates. However, traditional diagnostic methods, such as manual microscopic
examination of blood smears, are often time-consuming, prone to human error, and rely
heavily on the expertise of pathologists. These limitations delay diagnosis and hinder
timely medical intervention.

Recent advancements in artificial intelligence (AI), particularly in deep learning, have revolutionized the way diseases are diagnosed from medical images. Deep learning techniques, especially convolutional neural networks (CNNs), are capable of automatically detecting patterns in complex datasets, offering a faster and more reliable alternative to traditional methods.

MobileNetV2, a lightweight CNN architecture, is designed for real-time applications and is well-suited for environments with limited computational resources. Its efficiency makes it an ideal candidate for blood cancer detection through analysis of microscopic blood smear images.

A deep learning model built using MobileNetV2 can classify blood smear images
into cancerous and non-cancerous categories, offering an automated solution that
minimizes human error and reduces diagnostic time.

The model utilizes image preprocessing techniques such as contrast enhancement,
noise reduction, and normalization to improve classification accuracy. Additionally,
transfer learning is employed to fine-tune a pre-trained MobileNetV2 model, optimizing it
for the specific task of blood cancer detection.

The integration of AI into diagnostic workflows not only enhances accuracy but
also addresses the growing demand for faster, more consistent medical analysis. In regions
with limited access to trained medical professionals, automated tools can serve as crucial
support systems, enabling frontline healthcare workers to make informed decisions with
greater confidence.

Moreover, the adaptability of deep learning models allows continuous improvement through retraining with new data, ensuring that performance evolves alongside advances in medical research and diagnostic standards. As healthcare increasingly turns toward digital transformation, such intelligent systems represent a significant step forward in making precision medicine more inclusive, efficient, and widely available.

With the potential to deploy on edge devices and cloud-based platforms, this deep
learning approach offers scalable and accessible solutions for healthcare settings,
especially in resource-constrained environments.

The model provides a promising tool for early detection and rapid diagnosis of
blood cancer, enhancing the potential for effective treatment and improving patient
outcomes.

The application of deep learning in medical imaging marks a significant advancement in the early detection of blood-related diseases. Leveraging MobileNetV2 offers an efficient and scalable solution suitable for real-time diagnostics. Overall, this approach supports faster diagnosis and enhances the potential for timely treatment of blood cancer.

1.2 Background and Motivation

Background of the project:


Blood cancer, particularly leukemia, is among the most aggressive and fast-spreading forms of cancer, originating in the bone marrow and significantly impairing the production and function of blood cells. The disease interferes with the body’s immune defense, oxygen transport, and clotting capabilities, making early and accurate detection critical for improving survival rates and treatment outcomes.

Traditional diagnostic methods such as microscopic analysis of peripheral blood smears are labor-intensive, require significant expertise, and are subject to interpretation-based errors. These methods also face limitations in settings with high patient volumes or a shortage of qualified professionals, often leading to delays in diagnosis.

In recent years, artificial intelligence (AI) and deep learning have emerged as
transformative technologies in medical image analysis. Convolutional Neural Networks
(CNNs), in particular, have demonstrated exceptional performance in tasks involving
visual pattern recognition, such as tumor detection and cell classification.

While traditional CNN architectures are accurate, they often demand significant computational resources, restricting their use in real-time systems and mobile healthcare solutions. Lightweight models such as MobileNetV2 address this limitation through efficient architectural design, making them suitable for deployment in resource-constrained environments without compromising diagnostic performance. These advancements present new opportunities to automate and enhance the reliability of blood cancer detection.

The increasing availability of digital microscopic imaging equipment has further accelerated the feasibility of AI-based diagnostics. With large annotated datasets now available, deep learning models can be trained to match or exceed the accuracy of manual assessments.

Motivation of the project:

The motivation for developing an automated blood cancer detection model is rooted
in the need for fast, consistent, and accessible diagnostic solutions, especially in areas with
limited healthcare infrastructure.

Many rural and underserved regions lack experienced hematologists and diagnostic
labs, resulting in delayed or missed diagnoses that can critically impact patient outcomes.
A system capable of rapid, high-accuracy classification of blood smear images could play
a vital role in addressing this diagnostic gap.

Manual examination of blood cells is often subjective, with significant variability between observations, even among trained experts. Incorporating deep learning into diagnostic workflows helps minimize human error, standardize diagnostic outcomes, and support clinical decision-making through objective analysis.

The MobileNetV2 architecture, due to its reduced computational footprint and high
inference speed, offers a practical path toward real-time implementation on mobile and
embedded devices.

This makes it possible to extend advanced diagnostic capabilities beyond traditional laboratories, potentially enabling applications in remote clinics, field hospitals, or even via smartphone-based platforms.

Incorporating such systems into clinical practice supports not only timely diagnosis
but also equitable healthcare delivery. Patients in low-resource settings stand to benefit
most from the scalability and affordability of AI-based tools.

Moreover, as these models continue to improve through continuous learning and data augmentation, their reliability and diagnostic precision are expected to evolve, further solidifying their role in modern medical diagnostics.

1.3 Purpose of the project

The purpose of this work is to create an accurate, efficient, and accessible diagnostic system for detecting blood cancer through automated analysis of microscopic blood smear images using deep learning techniques. The primary goal is to overcome the limitations of traditional diagnostic approaches, which often involve manual microscopic examination, an error-prone, time-consuming, and expertise-dependent process. Such methods are not only inconsistent across different observers but also impractical for large-scale screening or deployment in areas with limited medical infrastructure.

By leveraging the MobileNetV2 architecture, the system is designed to perform effective classification of blood smear images into cancerous and non-cancerous categories. MobileNetV2, known for its depthwise separable convolutions and low computational footprint, provides a powerful solution for real-time analysis, especially in hardware-constrained environments such as mobile phones, tablets, or embedded systems. This makes the system suitable for integration into point-of-care tools, rural health centers, and cloud-based diagnostic platforms.

To enhance the model’s performance, a range of image preprocessing techniques are applied, including contrast enhancement, brightness adjustment, noise reduction, resizing, normalization, and augmentation. These steps help in optimizing the quality of input images and improving the model’s ability to extract meaningful features, even from challenging or noisy samples.

The project also emphasizes the practical application of the model by deploying it
through a user-friendly web interface using Python Flask, hosted within a Jupyter
Notebook environment via Anaconda Navigator. This interface allows healthcare
professionals to upload blood smear images and receive quick predictions along with
visual interpretability features.
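A minimal sketch of such an upload-and-predict endpoint is given below. The saved-model filename, route name, and helper functions are illustrative assumptions, not the project's actual code.

```python
import numpy as np
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
_model = None  # loaded lazily so the app can start without the weights file


def get_model():
    """Load the trained classifier once (hypothetical saved-model path)."""
    global _model
    if _model is None:
        from tensorflow.keras.models import load_model
        _model = load_model("blood_cancer_mobilenetv2.h5")
    return _model


def classify(prob, threshold=0.5):
    """Map a sigmoid output to the report's two classes."""
    return "cancerous" if prob >= threshold else "non-cancerous"


@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded smear image and apply the same resizing and
    # [0, 1] normalization used during training
    file = request.files["image"]
    img = Image.open(file.stream).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32)[np.newaxis] / 255.0
    prob = float(get_model().predict(x)[0][0])
    return jsonify({"label": classify(prob), "probability": prob})
```

A browser form or script would POST the image to `/predict` and render the returned JSON for the clinician.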

Ultimately, the aim is to provide a scalable, fast, and low-cost solution that supports early diagnosis, reduces clinical workload, and improves the reliability of cancer screening, thereby contributing to better treatment outcomes and more equitable healthcare delivery.

Figure 1.3.1 TML Vs. DL

Figure 1.3.2 Leukemia Classification

1.4 Problem Statement

Despite notable progress in medical technology, the early and accurate detection of blood cancer remains a significant challenge, particularly in underserved and resource-limited regions.

Several factors contribute to this issue:

 Limited availability of trained pathologists, especially in rural and remote areas, delays diagnosis and treatment initiation.

 Manual examination of blood smear slides is time-intensive and prone to subjective interpretation, leading to inconsistent results.

 High error rates associated with human diagnosis increase the risk of misclassification and delayed treatment.

 Lack of affordable, AI-powered diagnostic tools that can be deployed on mobile or embedded platforms restricts the reach of advanced diagnostics to high-resource settings.

In light of these challenges, a critical problem emerges:

How can a lightweight, accurate, and efficient deep learning-based system be developed for blood cancer detection that supports real-time performance and is suitable for deployment in mobile and low-resource clinical environments?

Addressing this question is essential for improving diagnostic accessibility, reducing healthcare disparities, and enabling early intervention through scalable and intelligent solutions.

1.5 Objectives of the project

The objective is to develop a reliable, lightweight, and accessible deep learning solution for the early detection of blood cancer using microscopic blood smear images. The main objectives are as follows:

1. To build an efficient deep learning model using MobileNetV2 architecture
MobileNetV2 is selected for its lightweight structure and suitability for real-time applications. Transfer learning is applied to fine-tune the pre-trained model for classifying cancerous and non-cancerous blood smear images with high accuracy and reduced computational demands.

2. To enhance the dataset through preprocessing and augmentation techniques
The image dataset of 3,700 labeled samples from ALL-IDB and Kaggle is processed using contrast enhancement, brightness adjustment, noise reduction, resizing, normalization, and data augmentation to improve feature extraction and model generalization.

3. To ensure real-time performance with minimal resource consumption
The model is optimized to run efficiently on devices with limited hardware, including mobile and embedded platforms, making it suitable for deployment in low-resource or remote environments.

4. To evaluate the system using standard performance metrics
Model performance is assessed using accuracy, precision, recall, F1-score, and confusion matrix to ensure the system’s reliability and effectiveness for medical diagnostics. The system achieved 98.6% accuracy during testing.

5. To deploy the model through a web-based interface for practical use
A user-friendly interface is developed using Python Flask in Jupyter Notebook (via Anaconda Navigator) to allow healthcare professionals to upload images and receive diagnostic predictions. The system is designed for use in clinics, hospitals, mobile setups, and cloud-based platforms.
1.6 Scope of the project

The work is centered around the design, development, and evaluation of a deep
learning-based system for detecting blood cancer from microscopic blood smear images,
using the MobileNetV2 architecture.

The scope includes:

 Binary classification of blood smear images, distinguishing between cancerous and non-cancerous samples. While this project focuses on binary classification, the methodology is adaptable for future expansion into multi-class classification (e.g., different types or stages of leukemia).

 Utilization of publicly available datasets, such as ALL-IDB and Kaggle blood smear image repositories, consisting of 3,700 labeled images. These datasets are used to train and validate the model.

 Implementation within a Python environment, using TensorFlow and Keras frameworks, allowing flexibility in model design and experimentation.

 Training and optimizing the model for high accuracy and efficient performance, aiming to support real-time inference and deployment in resource-constrained environments. Special emphasis is placed on achieving low computational overhead without compromising prediction reliability.

However, the current scope does not extend to:

 Integration with comprehensive patient data, such as medical history or laboratory results, which could further improve diagnostic precision in future developments.

 Deployment on mobile devices in real-time; although the system is optimized for such environments, actual mobile integration is reserved for future implementation phases.

 Regulatory certification or clinical trials, such as FDA approval. The system is intended for research, academic, and prototyping purposes at this stage.

1.7 Methodology Overview

The methodology adopted for detecting blood cancer using deep learning follows a
well-defined pipeline that ensures both high accuracy and computational efficiency. Each
step is designed to contribute toward building a lightweight, robust, and deployable
classification system using MobileNetV2.

1. Data Collection:

Blood smear images are collected from two publicly available datasets: ALL-IDB
and Kaggle. The combined dataset includes 3,700 labeled images, categorized into
cancerous and non-cancerous classes.

These datasets are chosen due to their quality, accessibility, and relevance to the
medical imaging domain.

Figure 1.7.1 Leukocytes types (a) Lymphocyte, (b) Monocyte, (c) Neutrophil,
(d) Eosinophil and (e) Basophil.

Figure 1.7.2 (A) Benign (B) Early Pre-b (C) Pre-b (D) Pro-b

2. Data Preprocessing:

Preprocessing improves image quality, enhances feature extraction, and prepares the
data for training. Key steps include:
 Resizing: All images are resized to fit the input dimension required by MobileNetV2
(e.g., 224×224 pixels).
 Normalization: Pixel values are scaled to the [0,1] range to improve model
convergence.
 Noise Reduction: Filters such as Gaussian blur are applied to remove unwanted noise.
 Contrast Enhancement: Histogram equalization and other methods are used to
improve cell boundary visibility.
 Brightness Adjustment: Ensures consistent lighting conditions across samples.
 Augmentation: Techniques like rotation, flipping, and zooming increase data
variability and prevent overfitting.

3. Model Selection:

MobileNetV2, a lightweight Convolutional Neural Network (CNN), is selected for its ability to deliver high accuracy with low computational cost. It uses depthwise separable convolutions, which drastically reduce the number of parameters, making it ideal for real-time and mobile applications.
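The parameter savings behind this claim can be checked with simple arithmetic; the 3×3, 32→64-channel layer below is only an illustrative shape, not a specific MobileNetV2 layer.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 32, 64)        # 3*3*32*64 = 18,432
separable = separable_params(3, 32, 64)  # 288 + 2,048 = 2,336
print(standard, separable, standard / separable)  # roughly 7.9x fewer weights
```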

4. Model Training:

The model is trained using transfer learning, where a pre-trained MobileNetV2 model
is fine-tuned on the blood smear dataset. Key points include:

 Binary classification (cancerous vs. non-cancerous)

 Use of TensorFlow and Keras as development frameworks

 Training on an 80/20 split between training and testing data

 Optimization techniques such as Adam optimizer, dropout, and batch normalization to improve accuracy and reduce overfitting
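The setup described above can be sketched in Keras as follows. The head layers, dropout rate, and learning rate are illustrative assumptions, and `train_ds`/`val_ds` stand for the prepared 80/20 datasets.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained MobileNetV2 backbone without its ImageNet classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.BatchNormalization(),
    layers.Dropout(0.3),                    # regularization against overfitting
    layers.Dense(1, activation="sigmoid"),  # binary: cancerous vs non-cancerous
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Fine-tuning can follow by unfreezing the top backbone layers with a lower learning rate once the new head has converged.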
5. Model Evaluation:
The trained model is evaluated on a hold-out test set using the following metrics:
 Accuracy: Proportion of correctly classified images
 Precision: Correct positive predictions relative to total predicted positives
 Recall: Correct positive predictions relative to actual positives
 F1-Score: Harmonic mean of precision and recall
 Confusion Matrix: Visualizes true vs. predicted labels
The final model achieves an accuracy of 98.6%, indicating strong performance.
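These metrics can be computed directly with scikit-learn; the labels below are a small made-up example, not the project's test set.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Toy labels: 1 = cancerous, 0 = non-cancerous
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))   # 6/8 = 0.75
print("Precision:", precision_score(y_true, y_pred))  # 3/4 = 0.75
print("Recall   :", recall_score(y_true, y_pred))     # 3/4 = 0.75
print("F1-score :", f1_score(y_true, y_pred))         # 0.75
print(confusion_matrix(y_true, y_pred))               # [[3 1], [1 3]]
```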

6. Result Visualization:

 Training and validation accuracy/loss curves

 Confusion matrix to highlight true positives, false positives, etc.

 Sample predictions to demonstrate correct and incorrect classifications

 Activation maps and layer-wise visualizations (optional) to explain model decision-making.

Figure 1.7.3 Project Flow

1.8 Significance of the project


The development of an AI-powered system for blood cancer detection offers
substantial value to both clinical practice and medical research. By leveraging a deep
learning model like MobileNetV2, which is known for its lightweight and efficient
architecture, the approach demonstrates how intelligent diagnostic solutions can be
designed for real-time use even in low-resource environments.

This work contributes to:

 Reducing diagnostic time and improving accuracy, addressing the limitations of manual microscopy and minimizing the risk of human error.

 Empowering healthcare professionals with reliable, AI-assisted tools that can support clinical decisions and serve as a second opinion.

 Making cancer screening more accessible by providing a scalable, cost-effective solution that can be deployed in rural or underdeveloped regions with limited medical infrastructure.

 Promoting the use of efficient AI models like MobileNetV2, which are suitable for edge computing and mobile applications, enabling on-the-go diagnostics in clinics and remote health camps.

From a research perspective, this effort also acts as a proof-of-concept for transitioning
AI models from theoretical and academic environments into practical, real-world medical
settings, thus narrowing the gap between laboratory prototypes and accessible healthcare
technologies.

Figure 1.8.1 Binary masked image and segmented image.

Figure 1.8.2 (A) Grayscale feature maps (B) Colored feature maps

1.9 Challenges Faced

Several challenges emerged during development and implementation:

 Availability and quality of datasets:

Public datasets for blood cancer, such as ALL-IDB and those from Kaggle,
were limited in size and varied in resolution, labeling accuracy, and image
consistency. This made it difficult to ensure comprehensive and diverse training data.

 Preventing overfitting with MobileNetV2:

MobileNetV2, although lightweight and efficient, can easily overfit small datasets. Balancing model complexity with generalization was essential, particularly when working with limited and sensitive medical data.

 Class imbalance issues:

The datasets contained significantly more non-cancerous images than cancerous ones. This imbalance affected the model’s ability to learn minority class features, which are critical for accurate cancer detection.
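One common mitigation, not necessarily the one adopted here, is inverse-frequency class weighting, so that errors on the rarer cancerous class are penalized more heavily during training. A sketch with a hypothetical 3,000/700 split:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights (same formula as scikit-learn's
    class_weight='balanced'): n_samples / (n_classes * class_count)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n = len(labels)
    return {int(c): n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}

# Hypothetical imbalance: 3,000 non-cancerous (0) vs 700 cancerous (1)
weights = balanced_class_weights([0] * 3000 + [1] * 700)
print(weights)  # {0: ~0.62, 1: ~2.64}
# In Keras, these would be passed as model.fit(..., class_weight=weights)
```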

 Choosing the right preprocessing and augmentation techniques:

Enhancing image features without distorting medically relevant details was crucial. Finding the optimal combination of contrast adjustment, normalization, noise reduction, and augmentation required several iterations.

 Model interpretability:

Since medical professionals require transparency in diagnostic systems, ensuring that the model’s decisions could be explained visually and analytically was a key requirement for trust and adoption.

These challenges were mitigated through the use of transfer learning, systematic hyperparameter tuning, data augmentation, and by incorporating visualization tools to support model interpretability and trust.

CHAPTER 2

LITERATURE REVIEW

Many scholars have already focused on computer-aided systems to diagnose pathological conditions such as malaria, allergies, lymphoma, and leukemia. Automated methods in the literature initially focused on the identification and classification of leukocytes; systems for the detection of other diseases, including leukemia, followed later. This chapter discusses related previous studies by grouping the works under three sections: image segmentation methods, machine learning-based studies, and deep learning-based studies.

O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015, pp. 234–241.

2.1 Image Segmentation Methods

Image segmentation is a critical process in computer vision and medical image analysis, aiming to partition an image into meaningful regions that correspond to different structures or objects. In the context of medical diagnostics, particularly in tasks like tumor detection or blood cell classification, segmentation helps isolate regions of interest (ROIs) for further analysis.

Traditional segmentation techniques include thresholding, region growing, edge detection, and clustering methods such as K-means. While these approaches are computationally efficient, they often struggle with images that exhibit complex textures, varying intensities, or noise, all of which are common in medical imagery.

With the advancement of deep learning, segmentation has evolved to include more robust models such as Fully Convolutional Networks (FCNs), U-Net, and its variants. These architectures can capture hierarchical spatial features and are particularly effective in pixel-wise classification tasks. U-Net, for instance, has become a popular choice in biomedical segmentation due to its encoder-decoder structure, which preserves both low-level and high-level spatial information.

Recent studies have shown that integrating pre-trained backbones like ResNet or
MobileNetV2 into segmentation frameworks significantly improves accuracy while
reducing computational overhead. Such hybrid models offer promising performance in
real-time medical applications.

2.1.1 Clustering-based Segmentation

Clustering-based image segmentation is a widely used unsupervised technique that groups pixels into clusters based on similarity in features such as color, intensity, or texture. These methods do not require labeled data and are especially useful in preliminary segmentation where prior information about the image is limited.

Among the most common clustering algorithms used in segmentation are K-means, Fuzzy C-means (FCM), and Mean Shift. K-means is popular for its simplicity and efficiency, classifying pixels by minimizing the within-cluster variance. However, it is sensitive to the initial choice of centroids and may converge to local minima. In contrast, Fuzzy C-means allows partial membership of pixels in multiple clusters, making it more robust for segmenting images with overlapping regions or gradual transitions.

In medical imaging, clustering has been used to segment tissues, cells, and abnormalities in modalities such as MRI, CT, and microscopic imaging. However, these methods may struggle with noise and intensity inhomogeneity, leading researchers to integrate clustering with spatial constraints or to hybridize them with deep learning models for improved accuracy.
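The K-means idea described above can be sketched in a few lines with scikit-learn: treat each pixel as a feature vector and cluster by color similarity. The function name and the synthetic "cell" image below are illustrative, not part of any cited implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3, seed=0):
    """Cluster pixels by RGB similarity and return a per-pixel label map."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(pixels)
    return labels.reshape(h, w)

# Synthetic image: a bright "cell" disc on a dark background.
img = np.zeros((64, 64, 3), dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = [200, 50, 150]

mask = kmeans_segment(img, n_clusters=2)
```

On real blood smear images the result is far less clean, which is exactly the noise and intensity-inhomogeneity weakness noted above.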

A. Rehman, S. Naz, M. Rauf, T. Saba, and A. A. Rehman, "Classification of
acute lymphoblastic leukemia using deep learning," Microscopy Research and
Technique, vol. 81, no. 11, pp. 1310–1317, 2018.

2.2 WBC's Classification

White Blood Cell (WBC) classification plays a crucial role in hematological analysis and diagnosis, particularly in identifying conditions such as infections, leukemia, and other blood disorders. Accurate classification of WBCs - including lymphocytes, monocytes, neutrophils, eosinophils, and basophils - is essential for clinical decision-making.

Traditionally, WBC classification was performed manually through microscopic examination by pathologists. However, this approach is time-consuming and susceptible to inter-observer variability. With advancements in computer vision, automated classification systems using machine learning and deep learning have emerged as powerful alternatives.

Earlier methods used handcrafted features such as cell shape, size, texture,
and color combined with classifiers like Support Vector Machines (SVM),
Decision Trees, and K-Nearest Neighbors (KNN). Although these techniques
showed promising results, they lacked robustness when dealing with variations in
staining, lighting, and cell morphology.

Recent research has shifted toward deep learning approaches, especially Convolutional Neural Networks (CNNs), which automatically learn hierarchical features from raw images. Models such as ResNet, Inception, and MobileNetV2 have been successfully employed for WBC classification, achieving high accuracy and generalization. Furthermore, transfer learning has allowed the use of pre-trained models on limited medical datasets, significantly reducing the training time and improving performance.

T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense
object detection," IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 42, no. 2, pp. 318–327, 2020.

2.3 Deep Ensemble Learning Technique

Deep ensemble learning combines the predictive power of multiple deep learning models to improve classification performance, generalization, and robustness. In medical imaging, where data variability and class imbalance are common, ensemble methods are particularly valuable for boosting diagnostic accuracy and reducing false predictions.

Ensemble techniques typically involve training multiple neural networks, either of the same architecture (homogeneous ensembles) or of different architectures (heterogeneous ensembles), and combining their predictions using methods such as majority voting, weighted averaging, or stacking. This approach helps to mitigate the weaknesses of individual models by leveraging their diverse learning patterns.

In the context of medical diagnostics, ensemble deep learning has been applied to tasks such as tumor detection, skin lesion classification, pneumonia diagnosis, and blood cell identification. For example, combining architectures such as ResNet and DenseNet has been shown to outperform single-model systems, especially on noisy or imbalanced datasets.

Moreover, ensemble models often exhibit improved confidence calibration, which is critical in clinical settings where false positives or negatives can lead to serious consequences. Although ensemble learning increases computational complexity, it offers a strong trade-off in terms of reliability and accuracy.
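The three combination rules mentioned above (soft voting, weighted averaging, majority voting) reduce to a few lines of NumPy. The probability arrays below are toy values standing in for the outputs of three hypothetical models, not results from any cited system.

```python
import numpy as np

# Toy per-sample "cancerous" probabilities from three hypothetical models.
preds = np.array([
    [0.92, 0.15, 0.60],   # model A
    [0.88, 0.40, 0.45],   # model B
    [0.70, 0.10, 0.55],   # model C
])

# Soft voting: average the predicted probabilities, then threshold at 0.5.
soft_vote = (preds.mean(axis=0) >= 0.5).astype(int)

# Weighted averaging: trust better-validated models more.
weights = np.array([0.5, 0.3, 0.2])
weighted_vote = (weights @ preds >= 0.5).astype(int)

# Majority (hard) voting: threshold each model first, then take the majority.
hard = (preds >= 0.5).astype(int)
majority_vote = (hard.sum(axis=0) >= 2).astype(int)
```

Soft voting preserves each model's confidence, while hard voting discards it; stacking would instead learn the combination weights from a held-out set.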

R. R. Sarraf, R. Golmohammadi, and N. Goel, "Machine Learning-Based Classification of Leukemia Subtypes Using Microscopic Blood Images," International Journal of Advanced Computer Science and Applications, vol. 12, no. 3, pp. 62–69, 202

2.4 Machine Learning based studies

Machine learning (ML) has been widely adopted in medical diagnostics due
to its ability to identify complex patterns within large datasets, especially in areas
like disease detection, prognosis, and treatment planning. Traditional machine
learning approaches in medical image analysis involve extracting handcrafted
features (e.g., texture, color, shape) followed by classification using models like
Support Vector Machines (SVM), Random Forests, and K-Nearest Neighbors
(KNN).

In blood cancer detection, ML-based techniques have been applied to analyze features extracted from blood smear images, for example to classify white blood cells or to distinguish between normal and abnormal cells. These models often rely on image preprocessing, segmentation, and feature engineering, making performance heavily dependent on the quality of the extracted features.

While these techniques provide decent results with limited data, they may
fall short when handling complex image variations, such as changes in
illumination or morphological inconsistencies among different patients.
Nonetheless, machine learning remains a cost-effective and computationally
efficient option for quick prototyping, real-time applications, and settings where
deep learning infrastructure is not feasible.

CHAPTER 3

SYSTEM ANALYSIS

3.1 System Specification

The blood cancer detection system is designed to be lightweight, scalable, and efficient, making it suitable for deployment in clinical, mobile, and cloud environments. At its core, the system uses MobileNetV2, a compact convolutional neural network architecture known for its computational efficiency and high classification accuracy. The model is fine-tuned using transfer learning on a curated dataset of 3,700 labeled microscopic blood smear images sourced from ALL-IDB and Kaggle, consisting of cancerous and non-cancerous samples.

Functionally, the system offers end-to-end support from data acquisition to diagnosis. Users upload blood smear images through an intuitive web-based interface built using Python Flask, developed in Jupyter Notebook under the Anaconda Navigator environment. Upon upload, each image passes through a preprocessing pipeline involving resizing, brightness adjustment, contrast enhancement, noise reduction, and normalization to ensure optimal feature extraction. This step is critical for achieving consistent performance across varied imaging conditions and sample quality.

The processed images are then fed into the trained MobileNetV2 model, which
performs binary classification and outputs the diagnostic result. To aid clinical
interpretation and transparency, the system includes visualizations such as classification
probability scores, confusion matrices, and example predictions, along with plots of
training and validation metrics (loss and accuracy curves).

In terms of non-functional requirements, the system is built for responsiveness and compatibility across devices. The lightweight nature of MobileNetV2 allows real-time inference even on devices with limited resources, such as smartphones or embedded systems. This ensures that the solution remains accessible in remote or resource-constrained settings.

Data security is addressed through secure handling of image files and optional
encryption modules to ensure patient confidentiality and compliance with medical data
standards.

The system is modular and future-ready, enabling integration with electronic health
records (EHR), mobile diagnostic applications, and cloud storage systems. Scalability is
a key feature, with provisions for handling large volumes of data and concurrent user
sessions. Additionally, the architecture supports future extensions to multi-class
classification, integration with clinical metadata, or expansion to other hematological
conditions.

This AI-powered diagnostic tool not only improves diagnostic turnaround time and
accuracy but also serves as a second-opinion system for clinicians, empowering them to
make timely and informed decisions. The combination of deep learning, real-time
accessibility, and clinical usability marks a significant advancement in digital pathology
and remote diagnostics.

3.1.1 Hardware Requirements

Component            Specification
Processor (CPU)      Intel Core i5/i7 (8th Gen or above) / AMD Ryzen 5 or higher
RAM                  Minimum 8 GB (16 GB or more)
Storage              Minimum 100 GB of free SSD space
Display              14-inch or larger HD display (minimum 1366×768 resolution)
Internet             Stable broadband connection (for downloading datasets, libraries)
Peripheral Devices   Keyboard, Mouse, Webcam (optional for further extensions)

Table 3.1.1 Hardware Requirements

3.1.2 Software Requirement

Software Component         Specification / Version
Operating System           Windows 10 / 11
Programming Language       Python 3.8 or above
Development Environment    Jupyter Notebook or Google Colab
Deep Learning Frameworks   TensorFlow, Keras
Libraries/Packages         NumPy, Pandas, Matplotlib, OpenCV, Flask
Web Deployment Tool        Flask (for building a web interface for prediction)
Visualization Tools        Matplotlib, Seaborn (for plotting results)
IDE (Optional)             Visual Studio Code / PyCharm

Table 3.1.2 Software Requirements

3.1.3 Dataset Details

Dataset Sources:

 ALL-IDB - Publicly available dataset of microscopic images for Acute Lymphoblastic Leukemia.
 Kaggle - Open-access blood smear images labeled as cancerous or non-cancerous.
 GitHub - Repositories with blood smear images labeled as cancerous or non-cancerous.
 TCIA - Blood smear images labeled as cancerous or non-cancerous.

Total Number of Images:

 3,700 high-resolution microscopic blood smear images.

Class Distribution:

 Balanced binary classification (Cancerous vs. Non-Cancerous).

Image Type:

 RGB color images (converted to a standardized input format during preprocessing).

3.1.4 Image Preprocessing Techniques

To ensure the input data is clean and consistent, the following preprocessing
operations are performed:

 Noise Reduction – To suppress unwanted noise artifacts.
 Contrast Enhancement – To improve the visibility of cell structures.
 Brightness Adjustment – To normalize variations in lighting conditions.
 Image Resizing – All images are resized to 224×224 pixels to match MobileNetV2 input requirements.
 Normalization – Pixel values are scaled to the range 0–1 for uniform model input.
 Augmentation – Rotation, flipping, zooming, and shifting to improve model generalization.

3.1.5 Model Configuration and Training Setup

Parameter            Configuration
Model Used           MobileNetV2 (with Transfer Learning)
Input Shape          224×224×3
Batch Size           32
Number of Epochs     50–70 (early stopping based on validation loss)
Loss Function        Binary Crossentropy
Optimizer            Adam
Learning Rate        Adaptive (using ReduceLROnPlateau strategy)
Evaluation Metrics   Accuracy, Precision, Recall, F1-score, Confusion Matrix

Table 3.1.5 Model Configuration
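The binary cross-entropy loss in the table is simply the mean negative log-likelihood of the true class. A NumPy sketch with toy predictions (illustrative values, not actual model outputs) shows why confident, correct predictions yield a lower loss than uninformative ones:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)], with clipping for numerical stability."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
good = np.array([0.9, 0.1, 0.8, 0.2])   # confident and mostly correct
bad = np.array([0.5, 0.5, 0.5, 0.5])    # uninformative coin-flip predictions

loss_good = binary_crossentropy(y_true, good)   # about 0.164
loss_bad = binary_crossentropy(y_true, bad)     # ln(2), about 0.693
```

Keras' `binary_crossentropy` computes the same quantity; the training setup in the table minimizes it with the Adam optimizer.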

3.1.6 Deployment Environment

 Interface:
o Web-based interface built using Flask.
 Deployment Tools:
o Jupyter Notebook (for testing) and Flask (for browser-based deployment).
 Target Platforms:
o Hospitals, clinics, mobile applications, and cloud-based health monitoring tools.
 Special Features:
o Visual display of model predictions with highlighted regions in the image.
o Upload functionality for new test images.

3.2 Existing System

In current clinical practice, the detection and diagnosis of blood cancer primarily
rely on the manual examination of peripheral blood smear images under a
microscope. Experienced pathologists analyze the morphology of white blood cells
(WBCs) to identify any structural abnormalities or atypical cell appearances indicative
of leukemia or other hematological disorders. Although this method has been the gold
standard for decades, it is inherently subjective, time-consuming, and dependent on
expert interpretation. Variability in human observation may lead to inconsistent or
delayed diagnoses.

To overcome some of these challenges, automated diagnostic systems have been introduced, using either traditional machine learning algorithms such as Support Vector Machines (SVM), Decision Trees, and K-Nearest Neighbors (KNN), or, more recently, deep learning models such as Convolutional Neural Networks (CNNs). While deep learning has significantly improved the accuracy and robustness of medical image analysis, many existing models are computationally intensive and require powerful hardware such as GPUs. This restricts their practical deployment in real-time or mobile-based diagnostic applications, especially in low-resource environments.

3.2.1 Disadvantages of Existing System

Subjectivity: Manual visual diagnosis is prone to human error and varies between pathologists.

High Cost: Diagnostic procedures like biopsies and advanced imaging tools are expensive and inaccessible in many rural regions.

Expert Dependency: Diagnosis requires the involvement of highly trained specialists, which poses a significant barrier to early detection in underdeveloped areas.

Computational Complexity: Most CNN-based systems involve millions of parameters, making them unsuitable for lightweight applications or real-time web deployment without dedicated hardware.
3.3 Proposed System

To overcome the limitations of manual diagnosis and computationally heavy models, the system is designed to leverage the efficiency and performance of MobileNetV2, a lightweight convolutional neural network (CNN) architecture developed for mobile and embedded applications. The core objective is to provide a fast, accurate, and computationally efficient solution for detecting blood cancer from microscopic blood smear images, suitable for real-time usage and deployment on low-resource devices.

The process begins with image acquisition, using publicly available datasets such as
ALL-IDB and Kaggle. These images undergo a series of preprocessing steps including
noise reduction, contrast enhancement, brightness adjustment, resizing (typically to
224×224 pixels), and normalization. These operations are critical for eliminating
irrelevant variations and enhancing the quality of features extracted by the model.

Once preprocessing is complete, the images are passed into the MobileNetV2
architecture, which performs feature extraction using depthwise separable convolutions.
This architecture drastically reduces the number of parameters compared to traditional
CNNs, while maintaining competitive accuracy. Key features such as cellular structure,
shape, and texture are extracted, allowing the model to identify patterns associated with
cancerous cells.

The system integrates transfer learning, utilizing a pre-trained MobileNetV2 model that is fine-tuned on the domain-specific dataset to improve generalization. To address the class imbalance commonly observed in medical datasets, data augmentation and class balancing techniques are employed during training.

The final classification layer uses a softmax activation function to distinguish between cancerous and non-cancerous samples. The model’s effectiveness is rigorously evaluated using standard performance metrics such as accuracy, precision, recall, and F1-score, ensuring reliability for potential clinical use.
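The parameter savings from the depthwise separable convolutions mentioned above can be verified with simple arithmetic: a standard k×k convolution with C_in input and C_out output channels costs k·k·C_in·C_out weights, while the depthwise-separable version costs k·k·C_in (depthwise) plus C_in·C_out (pointwise). The layer sizes below are illustrative, not MobileNetV2's actual configuration.

```python
def standard_conv_params(k, c_in, c_out):
    # one k*k*c_in filter per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # one k*k filter per input channel, then a 1x1 "pointwise" channel mix
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernels, 64 -> 128 channels.
std = standard_conv_params(3, 64, 128)          # 73,728 weights
sep = depthwise_separable_params(3, 64, 128)    # 8,768 weights
ratio = std / sep                               # roughly 8.4x fewer parameters
```

This roughly order-of-magnitude reduction, repeated across the whole network, is what makes real-time inference feasible on low-resource devices.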

3.3.1 Advantages of Proposed System

High Accuracy and Efficiency

The MobileNetV2-based deep learning model ensures accurate classification of blood smear images into cancerous and non-cancerous categories, reducing the chances of misdiagnosis.

Lightweight and Fast Inference

MobileNetV2’s depthwise separable convolutions significantly reduce computational complexity, allowing faster processing and real-time diagnosis.

Automated and Objective Diagnosis

Unlike traditional manual microscopic analysis, which is prone to human error, the proposed system provides consistent, unbiased, and automated detection of blood cancer.

Effective Image Preprocessing

Advanced preprocessing techniques such as image enhancement, noise reduction, and normalization improve feature extraction, leading to better model performance.

Scalability and Deployment Flexibility

The lightweight architecture of MobileNetV2 makes the system suitable for deployment on mobile devices, cloud platforms, and edge computing environments, ensuring accessibility in remote or resource-limited areas.

Reduced Dependence on Medical Experts

The automated detection system minimizes the need for continuous expert
supervision, allowing hospitals and clinics to optimize their workflow and focus on
critical cases.
Cost-Effective Solution

By integrating deep learning with automated analysis, the system reduces the
costs associated with manual microscopic examinations and extensive laboratory
procedures.

Feature                  Manual System         Traditional ML                    Deep Learning (CNN)
Expertise Required       High (pathologists)   Medium (engineers + clinicians)   Low after training
Accuracy                 Varies                Moderate                          High
Processing Time          Slow                  Moderate                          Fast
Hardware Requirement     Low                   Low to Medium                     High
Deployment Feasibility   Low in remote areas   Moderate                          Limited due to complexity

Table 3.3.1 Manual system Vs. TML Vs. DL

CHAPTER 4

MODULES DESCRIPTION

4.1 Data Collection and Preprocessing

Accurate and early diagnosis of blood cancer using deep learning heavily
relies on the availability of high-quality, well-labeled image data. This phase of the
system involves two essential components: assembling a diverse and representative
dataset of microscopic blood smear images, and applying rigorous preprocessing
techniques to enhance the data's suitability for training a robust classification model.

Data Collection

To train the MobileNetV2-based deep learning model for classifying cancerous and non-cancerous blood smear images, reliable datasets were collected from verified public sources.

The primary datasets used include:

 Kaggle Blood Smear Dataset:

A widely used repository containing thousands of labeled images of peripheral blood smears, which serve as a reference standard in research and educational settings. This dataset includes images with varying staining quality, lighting conditions, and blood cell morphology, providing a broad learning base for the model.

 ALL-IDB (Acute Lymphoblastic Leukemia Image Database):

Specifically tailored for leukemia research, this dataset includes detailed annotations for white blood cell types and is widely used in hematological image processing tasks. It provides high-resolution images of both healthy and leukemic cells, facilitating binary classification for cancer detection.

Together, these datasets contributed approximately 3,700 labeled images, classified into two main categories:

 Cancerous

 Non-cancerous

These images represent diverse patient demographics, microscopes, and staining protocols, which enhances the dataset's richness and increases the model’s ability to generalize across varied clinical environments.

Figure 4.1.1 Peripheral Blood Smear Images

Figure 4.1.2 Cancerous & Non Cancerous PBS image

Figure 4.1.3 Blood Cancer Types

Data Preprocessing

Microscopic blood smear images are inherently noisy, with potential variations in color intensity, resolution, and focus. Preprocessing is crucial to eliminate inconsistencies, enhance critical visual features, and standardize the data before it is passed into the MobileNetV2 architecture.

The following preprocessing steps were implemented:

 Image Resizing:

Since MobileNetV2 requires a fixed input size of 224x224 pixels, all images
were resized accordingly using bilinear interpolation. This resizing ensures
compatibility with the architecture and reduces computational cost without
significant loss of detail.

 Noise Reduction:

Image noise was removed using filters such as Gaussian blur and median
filters. These techniques help suppress unwanted pixel variations and background
artifacts, making key features like cell boundaries and nuclei clearer.
 Contrast Enhancement:

Histogram equalization and CLAHE (Contrast Limited Adaptive Histogram Equalization) were applied to improve contrast and highlight cellular features. This step ensures that critical variations in texture and structure are more distinguishable.

 Brightness Adjustment:

Brightness normalization helps reduce the variability caused by different lighting conditions during image acquisition. Standardizing the brightness ensures that the model focuses on content-based features rather than lighting differences.

 Normalization:

Pixel intensity values were normalized to the range 0 to 1 by dividing the RGB values by 255. Normalization improves training efficiency and helps maintain numerical stability during optimization.

 Data Augmentation:

One of the major challenges faced was class imbalance, as fewer cancerous images were available. To mitigate this and increase model generalization, several augmentation techniques were applied:

o Horizontal and vertical flipping
o Random rotations (up to 30 degrees)
o Zooming in/out
o Width and height shifts
o Random cropping and brightness variation

These augmentations were performed dynamically during training using the Keras ImageDataGenerator to prevent overfitting and improve the model’s robustness.
 Dataset Splitting: The dataset was divided into three subsets:

o Training Set (70%) – used for model learning

o Validation Set (15%) – used to tune model hyperparameters and monitor overfitting

o Test Set (15%) – used to evaluate final performance on unseen data

Stratified splitting ensured that both cancerous and non-cancerous classes were evenly distributed across all subsets.
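The 70/15/15 stratified split can be sketched with scikit-learn's train_test_split. The label array below is synthetic; in the project the labels would come from the dataset's class folders.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 3700 image indices with balanced binary labels.
X = np.arange(3700).reshape(-1, 1)
y = np.array([0, 1] * 1850)   # cancerous / non-cancerous labels

# First carve out 70% for training, stratified on the class label...
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# ...then split the remaining 30% evenly into validation and test sets.
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)
```

The `stratify` argument is what guarantees the even class distribution across all three subsets.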

Implementation Tools

The preprocessing pipeline was built using libraries and frameworks such as:

 OpenCV – for image enhancement and manipulation

 TensorFlow and Keras – for preprocessing utilities and image pipelines

 NumPy and Matplotlib – for image array manipulation and visualization

By meticulously processing the input data, the system ensures that only clean, high-
quality, and well-augmented images are fed into the model. This not only improves
training outcomes but also enhances the final classification accuracy of blood cancer
detection, making the model more reliable in clinical and real-time applications.

Figure 4.1.4 Preprocessing

4.2 Model Training

The MobileNetV2 Model Integration Module plays a central role in the classification pipeline, leveraging the efficiency and high performance of the MobileNetV2 architecture for medical image analysis. MobileNetV2 belongs to a family of lightweight deep convolutional neural networks optimized for mobile and embedded vision applications, making it an ideal choice for this use case, where real-time analysis and deployment on low-resource devices are critical.

Key Steps Involved:

1. Importing Pre-trained MobileNetV2 Model:

The process begins by importing a MobileNetV2 model pre-trained on the ImageNet dataset using TensorFlow and Keras. This pre-trained model provides a strong foundation of learned image features such as edges, textures, and object shapes. By transferring this knowledge, the model can quickly adapt to the domain of medical images with minimal training data.

2. Model Customization for Blood Cancer Classification:

Since the original MobileNetV2 model is built for multi-class classification on ImageNet (1,000 classes), the top (output) layers are removed or replaced. A new set of dense (fully connected) layers is added to suit the binary classification problem: cancerous vs. non-cancerous. Dropout layers may also be introduced to prevent overfitting during training.

3. Transfer Learning and Fine-Tuning:

Transfer learning is employed to retain the learned features from the early layers of the pre-trained model while adapting the deeper layers to the specific characteristics of blood smear images. For smaller datasets, freezing most of the base layers and training only the top layers may yield optimal results. Where the dataset is sufficiently large or diverse, selective fine-tuning of deeper layers can be performed to improve feature specialization.

4. Addition of Custom Classifier Layers:

After feature extraction through MobileNetV2, the output is pooled and passed through dense layers. These layers include:

A Global Average Pooling layer to reduce spatial dimensions.

One or more Dense layers with activation functions (e.g., ReLU).

A Dropout layer to prevent overfitting.

A final Dense output layer with a sigmoid (for binary classification) or softmax (for multi-class classification) activation function.

5. Compilation and Optimization:

The integrated model is compiled using an optimizer such as Adam, with a binary cross-entropy loss function (for binary classification) or categorical cross-entropy (for multi-class classification). The learning rate is carefully tuned to ensure stable convergence.

6. Model Summary and Architecture Visualization:

After integration, the model architecture is summarized and visualized to verify the
structure. The number of trainable vs. non-trainable parameters is monitored,
especially when partial fine-tuning is applied.

Figure 4.2.1 General CNN architecture for leukocytes classification

Figure 4.2.2 MobileNetV2 Architecture

Figure 4.2.3 MobileNetV2 Transfer Learning

4.3 Web based Interface development and Deployment

The Web-Based Interface Development and Deployment module serves as the bridge between the end user and the deep learning backend, providing an accessible and user-friendly environment for uploading blood smear images and receiving diagnostic predictions in real time. This interface plays a crucial role in translating complex AI processes into simple, intuitive workflows suitable for clinics, hospitals, and even remote health centers.

1. Front-End Development:

The front-end is designed to be clean, intuitive, and responsive across devices, including desktops, tablets, and smartphones. Core elements include:

 Image Upload Panel: Allows users to browse and upload blood smear images for
diagnosis.

 Preview Window: Displays the uploaded image for confirmation.

 Prediction Button: Initiates the model inference process.

 Results Display: Shows prediction results (e.g., cancerous or non-cancerous), confidence scores, and visual explanations if enabled (e.g., saliency maps).

 Styling: Achieved using HTML5, CSS3, and optionally Bootstrap for responsiveness and a clean UI layout.

2. Backend Development with Flask:

The backend is developed using the Python Flask framework due to its
simplicity, flexibility, and seamless integration with machine learning models.

 Model Integration: The fine-tuned MobileNetV2 model is loaded using TensorFlow/Keras and integrated with the Flask app.

 Prediction Pipeline: Upon receiving an image, the backend handles preprocessing
(resizing, normalization) and passes it to the model for inference.

 Routing: Flask routes manage page navigation (/, /upload, /predict) and handle data
exchange between frontend and backend.

3. Image Processing Workflow:

Before prediction, every uploaded image undergoes:

 Preprocessing: Resizing to 224×224 pixels, normalization, and channel adjustment to match the MobileNetV2 input.

 Validation: Ensuring the file format is acceptable (JPEG/PNG) and the file size is
manageable.

 Error Handling: Ensures robustness against unsupported inputs or system failures.

4. Model Output and Interpretation:

 Classification Result: “Cancerous” or “Non-Cancerous”, based on the trained MobileNetV2 model.

 Confidence Score: The model's certainty about its prediction.

 Optional Visualization: Features such as heatmaps or saliency overlays may be included to aid pathologists in interpreting the result.

5. Deployment Strategy:

The system is designed to be platform-independent and suitable for multiple deployment environments:

 Local Servers: For use in clinics or labs with internal infrastructure.

 Cloud Hosting: Deployment on platforms like AWS or Google Cloud enables remote access and scalability.

 Mobile/Edge Deployment (Future Scope): Given MobileNetV2’s efficiency, the system can be extended to Android apps or edge devices for field use.

6. Security and Data Privacy:

Security measures are taken to ensure that sensitive medical images are not stored
permanently:

 Temporary in-memory processing or immediate deletion after prediction.

 Secure connection protocols (HTTPS) for data transmission.

Figure 4.3.1 Web based Interface


4.4 Performance Evaluation and Improvement

Evaluating the model’s performance is a critical step in ensuring that the system not
only achieves high accuracy but also generalizes well to unseen data. This section outlines
the metrics used for evaluation, visualization techniques to interpret performance, and
strategies applied to enhance the overall accuracy and reliability of the system.

Model Performance Metrics

To quantify the effectiveness of the MobileNetV2-based blood cancer detection system, multiple performance metrics are employed. These include:

 Accuracy:
Measures the overall percentage of correctly predicted instances, both
cancerous and non-cancerous. It provides a general indication of model performance
but can be misleading in imbalanced datasets.
 Precision:
Calculates how many of the samples predicted as "cancerous" are truly
positive. High precision reduces false positives, making it essential in medical
diagnostics to avoid unnecessary concern or treatment.
 Recall (Sensitivity):
Determines how many of the actual cancerous cases were correctly identified.
A high recall is vital in medical applications to avoid missing a true positive case,
which can be life-threatening.
 F1-Score:
A harmonic mean of precision and recall. This score is particularly useful in
cases of class imbalance, providing a balanced measure of the model's ability to
identify true positives without over-predicting.
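All four metrics follow directly from the confusion-matrix counts, so they can be computed without any library; a minimal sketch (with illustrative label vectors, not project data):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels (1 = cancerous)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```

In practice the same numbers would usually be obtained from scikit-learn, but writing them out makes the precision/recall trade-off discussed above concrete.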

Evaluation Techniques

To gain deeper insights into the model’s behavior during training and testing, the
following evaluation techniques are used:

 Confusion Matrix:
A tabular representation of actual versus predicted classes. It helps visualize
the true positives, true negatives, false positives, and false negatives, offering a clear
breakdown of where the model may be underperforming.

 Training and Validation Graphs:
Loss and accuracy curves are plotted across epochs. These graphs help in
identifying:

o Overfitting: Where the model performs well on training data but poorly on
validation data.

o Underfitting: Where the model fails to capture patterns in both training and
validation data.

 Classification Report:
Includes a summary of precision, recall, F1-score, and support for each class,
helping further evaluate how the model performs on individual categories
(cancerous vs non-cancerous).
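The confusion matrix underlying these reports is just counting; the project would likely use sklearn.metrics for this, but the computation itself is a short sketch:

```python
def confusion_matrix(y_true, y_pred):
    """Return [[TN, FP], [FN, TP]]: rows are actual classes, columns are
    predicted classes (0 = non-cancerous, 1 = cancerous)."""
    matrix = [[0, 0], [0, 0]]
    for actual, predicted in zip(y_true, y_pred):
        matrix[actual][predicted] += 1
    return matrix
```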

Improvement Strategies

To optimize model performance and ensure robustness, several strategies are
applied:

 Data Augmentation:
Techniques such as flipping, rotation, zooming, and shifting are used to
synthetically expand the dataset. This helps in making the model more generalizable
and reduces overfitting, especially in cases of limited medical image data.
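In the project these transforms would typically come from Keras' ImageDataGenerator; a NumPy stand-in (an assumption for illustration) shows the kind of label-preserving changes involved:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip, rotate (by multiples of 90 degrees) and shift an image.

    These transforms change the view of the smear without changing its class.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)                            # horizontal flip
    image = np.rot90(image, k=int(rng.integers(0, 4)))      # 0-270 degree rotation
    image = np.roll(image, int(rng.integers(-3, 4)), axis=1)  # small horizontal shift
    return image
```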

 Transfer Learning and Fine-Tuning:
The MobileNetV2 model is initially used with pre-trained weights. Later,
selected layers are unfrozen and fine-tuned using the blood smear dataset. This
process adapts the model to the specific features relevant to blood cancer
classification.
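The fine-tuning step amounts to freezing most of the pre-trained backbone and leaving only the top layers trainable. A runnable sketch of that pattern, using a stand-in Layer class instead of real Keras layers (an assumption to keep the example self-contained without TensorFlow):

```python
class Layer:
    """Minimal stand-in for a Keras layer; in the real project these
    would be the entries of model.layers."""
    def __init__(self, name: str):
        self.name = name
        self.trainable = True

def freeze_for_fine_tuning(layers, num_trainable: int):
    """Freeze every layer except the last num_trainable ones, the usual
    pattern when fine-tuning a pre-trained backbone such as MobileNetV2."""
    cutoff = len(layers) - num_trainable
    for layer in layers[:cutoff]:
        layer.trainable = False
    return layers
```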

 Hyperparameter Tuning:
Key training parameters such as learning rate, batch size, number of epochs,
and optimizer are systematically adjusted. This tuning enhances convergence and
helps achieve optimal performance on validation data.

 Class Balancing Techniques:
In the case of imbalanced datasets (e.g., more non-cancerous than cancerous
samples), class weights or resampling strategies are used to avoid model bias toward
the majority class.
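Inverse-frequency class weights, as would typically be passed to Keras' model.fit(class_weight=...), can be derived from the label counts alone; the 80/20 split below is illustrative:

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency so the minority
    (cancerous) class contributes equally to the training loss."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}
```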

Figure 4.4.1 Model Training Accuracy and Loss

Figure 4.4.2 Performance Evaluation

Figure 4.4.3 Confusion Matrix

CHAPTER 5
SYSTEM DESIGN

5.1 System Overview

System design is a crucial phase that defines the framework for building an
efficient, reliable, and user-friendly solution for blood cancer detection based on deep
learning techniques. The design decisions made during this phase impact the overall
system's performance, scalability, maintainability, and user adoption, especially when
applied in critical healthcare environments.

The primary objective of this system is to accurately classify blood smear images
into cancerous or non-cancerous categories using a MobileNetV2-based deep learning
model. The design focuses not only on achieving high predictive accuracy but also on
ensuring smooth user interaction, fast response times, and the ability to integrate
seamlessly with various deployment platforms, such as web browsers, mobile devices,
and cloud environments.

The general design considerations are broken down into the following components:

1. Understanding Requirements

Before initiating the system design, a thorough analysis of functional and non-
functional requirements was performed:

 Functional Requirements:

o Allow users to upload blood smear images.


o Preprocess the images for optimal model performance.
o Predict and display whether the uploaded image indicates cancerous or non-
cancerous conditions.
o Present prediction confidence and key evaluation metrics.

 Non-Functional Requirements:

o High accuracy and reliability.


o Intuitive and user-friendly interface.
o Support for responsive design (desktop and mobile compatibility).
o Data security and privacy compliance, especially for sensitive medical
information.
o Scalability for handling multiple users concurrently.

2. System Modularity

The system is designed using a modular approach where each major task is handled
by a dedicated component:

 Data Preprocessing Module:


Responsible for preparing uploaded images by applying noise reduction,
contrast enhancement, brightness correction, resizing to MobileNetV2’s input
dimensions, and normalization.

 Prediction and Inference Module:


Utilizes the pre-trained MobileNetV2 model with transfer learning to predict
the image class. The module is optimized for real-time performance with minimal
latency.

 Web Interface Module:


Developed using Flask for the backend server and HTML/CSS/Bootstrap for
the frontend. This module handles user interactions, image upload, and result
display.

 Visualization Module:
Displays outputs like prediction labels (Cancerous/Non-Cancerous) and
generates graphical representations of model performance metrics, including
confusion matrix, accuracy scores, and others.

3. Selection of Tools and Technologies

To build a robust, efficient, and user-friendly blood cancer detection system, a
careful selection of tools and technologies was made. These tools were chosen based on
their compatibility with deep learning workflows, ease of deployment, and ability to
support a smooth frontend–backend integration. The technologies used are described
below:

 Deep Learning Framework: TensorFlow and Keras


TensorFlow, combined with its high-level API Keras, was used for model
building, training, and inference. These libraries offer a wide range of pre-trained
models, including MobileNetV2, and provide the necessary flexibility for
implementing transfer learning and image classification tasks.

 Web Framework: Flask


Flask is a lightweight Python web framework ideal for integrating machine
learning models into web applications. It allows quick setup of web routes, efficient
form handling for image uploads, and seamless connection to the backend logic.

 Frontend Technologies: HTML5, CSS3, Bootstrap, and JavaScript


The frontend of the application was built using standard web technologies.
HTML5 and CSS3 provide structure and style, while Bootstrap ensures responsive
design and modern UI elements. JavaScript supports interactivity where needed.

 Data Visualization Tools: Matplotlib and Seaborn


These Python libraries were used to generate informative charts, such as
confusion matrices, accuracy plots, and classification results. This helps in better
understanding the model’s performance and aids in validation.

 Development Environment: Jupyter Notebook under Anaconda Navigator


Model development, training, and experimentation were conducted in Jupyter
Notebook, a flexible and interactive environment that simplifies debugging and
visualization. Anaconda Navigator was used for managing dependencies and
environments efficiently.
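Tying these tools together, the upload-and-predict route in Flask might be sketched as below. The route name, form field name, and the fixed reply are assumptions; in the real system the model inference would replace the stubbed response:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ALLOWED = {"jpg", "jpeg", "png"}

@app.route("/predict", methods=["POST"])
def predict():
    """Validate the uploaded smear image and return a classification."""
    upload = request.files.get("image")
    if (upload is None or "." not in upload.filename
            or upload.filename.rsplit(".", 1)[-1].lower() not in ALLOWED):
        return jsonify(error="Please upload a JPEG or PNG image."), 400
    # Here the bytes would be preprocessed and passed to MobileNetV2;
    # a fixed response stands in for the model call in this sketch.
    return jsonify(label="Non-Cancerous", confidence=0.97)
```

Because the image is read from the request and never written to disk, this shape also matches the report's in-memory processing goal.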

4. Data Flow and Processing Pipeline

The typical data flow through the system is designed as follows:

1. Image Upload: The user uploads a blood smear image via the web interface.

2. Preprocessing: The image undergoes automatic enhancement operations
(denoising, contrast stretching, normalization).

3. Model Prediction: The MobileNetV2 model processes the image and outputs a
prediction.

4. Result Interpretation: The predicted class is mapped to a human-readable output
("Cancerous" or "Non-Cancerous").

5. Display and Visualization: The result is shown on the interface along with any
relevant confidence scores or additional visualizations.
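Step 2 of this pipeline (resizing to MobileNetV2's 224x224 input and scaling pixels to [0, 1]) can be sketched with NumPy alone. The project would normally use an image library such as OpenCV or PIL; nearest-neighbour resizing here is an illustrative simplification:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to (size, size) and normalise to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype("float32") / 255.0
```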

Figure 5.1.1 System Overview

5.2 System Architecture

The system architecture defines the overall structure, communication, and
interaction between the various components that constitute the blood cancer detection
system. Establishing a robust and well-organized architecture is essential to ensure high
performance, scalability, reliability, and maintainability, all of which are especially critical
in healthcare-related applications where diagnostic accuracy directly impacts patient
outcomes. A clear architectural blueprint also simplifies future enhancements, debugging,
and deployment across different environments, including clinical settings, mobile devices,
and cloud platforms.

The architecture adopts a modular and layered design philosophy, where each
module focuses on a specific responsibility. This separation of concerns allows the system
to be more organized, flexible, and easier to maintain or upgrade without disrupting the
entire workflow. The user interface layer handles the interactions with users, including
uploading blood smear images. The backend processing layer is responsible for
preprocessing these images through techniques such as noise reduction, contrast
enhancement, resizing, and normalization to ensure high-quality inputs for the model.

The core model inference layer incorporates a MobileNetV2-based deep learning
model trained using transfer learning. This layer is responsible for accurately classifying
the input images as either cancerous or non-cancerous. The prediction results are then
passed to the visualization layer, where they are presented to the user through an intuitive
and interpretable display. By modularizing these functionalities, the architecture not only
improves system clarity but also enhances testing, troubleshooting, and future integration
of additional features like expanded cancer types or advanced interpretability techniques.

The blood cancer detection system comprises the following major layers:

1. Presentation Layer (Frontend)

 Purpose:
Facilitates interaction between the user and the system.

 Components:
o Web pages developed using HTML5, CSS3, and Bootstrap for
responsive design.
o Image upload interface where users can select and submit blood smear
images.
o Display area for prediction results and performance visualizations.

 Key Features:
o User-friendly and intuitive layout.
o Mobile-responsive interface for accessibility on different devices.

2. Application Layer (Backend Server)

 Purpose:
Acts as a bridge between the frontend and the machine learning model,
handling business logic, requests, and responses.

 Components:

o Flask web framework (Python-based lightweight server).


o Route handlers for processing HTTP requests (e.g., uploading images,
returning predictions).
o Preprocessing pipeline to prepare images for model input.

 Key Features:

o Efficient request handling for minimal latency.


o Secure processing of user-uploaded medical images.
o Integration with the deep learning model for on-the-fly predictions.

3. Model Layer (Deep Learning Engine)

 Purpose:
Performs the core machine learning tasks: analyzing the input images and
classifying them as cancerous or non-cancerous.

 Components:
o Pre-trained MobileNetV2 model fine-tuned for blood smear
classification.
o TensorFlow and Keras frameworks for model loading, inference, and
performance evaluation.
 Key Features:
o Lightweight architecture for fast inference.
o High classification accuracy (achieving around 98.6% on test data).
o Ability to run on local servers, cloud, or mobile platforms.

4. Visualization Layer (Result Interpretation)


 Purpose:
Enhances the user experience by visualizing model predictions and
performance metrics.

 Components:

o Display of prediction results (Cancerous/Non-Cancerous) with
confidence scores.
o Graphical representation of confusion matrices, accuracy curves, and
precision-recall charts.

 Key Features:

o Easy-to-understand output for healthcare professionals.


o Real-time visualization generation based on each prediction.

5. Storage Layer (Temporary or Permanent Storage - Optional)

 Purpose:
Manages temporary handling of images during processing. Permanent storage
is optional based on deployment requirements.

 Components:

o In-memory storage for uploaded images (default in Flask).


o Option for cloud-based or local storage if future extensions require
saving image histories or reports.

 Key Features:

o No permanent storage by default to maintain patient privacy.


o Scalable storage solution if needed for advanced deployments.

6. Security Layer

 Purpose:
Protects user data and ensures secure operation of the system.

 Components:

o HTTPS protocol for data transmission (when deployed in production).


o Input validation and sanitization.
o Session management and server-side security practices.

Figure 5.2.1 System Architecture

5.3 Use Case Diagram

The use case diagram represents the functional interactions between the user and the
blood cancer detection system.

It captures the different functionalities provided by the system and shows how the
external user interacts with these functionalities.

In this system, the primary actor is the User, who interacts with the system to
perform tasks such as uploading blood smear images, receiving prediction results, and
visualizing outcomes.

The use case diagram ensures that all major user activities are clearly defined,
helping to guide both the development and validation processes.

The main use cases included in the diagram are:

 Upload Image: Allows the user to upload a blood smear image for analysis.

 Preprocess Image: Automatically triggered after upload to enhance and prepare the
image.

 Classify Image: Uses the MobileNetV2 model to predict if the image is cancerous
or non-cancerous.

 View Prediction Result: Displays the classification result to the user.

 Visualize Details: Offers additional visualization, such as probability scores or
highlighted regions in the image (optional).

This use case structure ensures that the system remains user-friendly while
maintaining a high degree of technical accuracy and reliability.

Figure 5.3.1 Use Case Diagram

The use case diagram illustrates the interactions between the user and the blood
cancer detection system. The primary actor, the user, engages with the system by
uploading blood smear images for analysis. Once an image is uploaded, the system
automatically initiates preprocessing operations to enhance the image quality and
prepare it for classification. The deep learning model based on MobileNetV2 processes
the image and predicts whether it is cancerous or non-cancerous.

The system then displays the prediction result to the user in a clear and interpretable
format. Additionally, users have the option to view detailed visualizations of the
processed image, such as highlighted affected regions or prediction confidence levels.
This structured representation of system functionalities helps in understanding user
roles, system boundaries, and the flow of activities at a glance. It also assists developers
in ensuring that all user needs are systematically addressed in the system design.

5.4 Class Diagram

The class diagram provides a static view of the blood cancer detection
system’s structure by illustrating the key classes, their attributes, methods, and the
relationships between them. It captures the object-oriented design of the system
and helps in visualizing how data and functionalities are logically organized.

In this system, the main classes identified are:

 User Interface: Handles user interactions, including uploading images and
displaying prediction results.

 Image Processor: Manages preprocessing tasks such as noise reduction,
contrast enhancement, resizing, and normalization of the blood smear images.

 Prediction Model: Contains methods for loading the trained MobileNetV2
model, making predictions, and returning classification results.

 Result Visualizer: Deals with creating and displaying visualizations such as
prediction confidence and highlighted affected areas.

 File Handler: Responsible for managing file operations like saving uploaded
images and accessing processed images.

The relationships between these classes are primarily associations where
different classes communicate and interact to complete the end-to-end workflow.
For example, the UserInterface class invokes methods from the ImageProcessor to
prepare images before passing them to the PredictionModel. After classification,
results are forwarded to the ResultVisualizer for final presentation to the user.

Organizing the system into these interconnected classes ensures a modular,
reusable, and maintainable codebase, which is crucial for healthcare applications
where frequent updates and expansions may be necessary.

Figure 5.4.1 Class Diagram

The class diagram shows the main components of the blood cancer detection system
and their relationships. It highlights the core classes, including UserInterface,
ImageProcessor, PredictionModel, ResultVisualizer, and FileHandler.

Each class has specific attributes and methods that define its responsibilities. The
diagram also shows how these classes interact with one another to perform tasks such as
image uploading, preprocessing, prediction, and result visualization.

This helps in understanding the system’s structure and supports a modular,
maintainable design.

5.5 Activity Diagram

The activity diagram models the workflow of the blood cancer detection
system by illustrating the sequence of activities performed from image upload to
final result visualization.

It helps in understanding the dynamic behavior of the system by focusing
on the flow of operations rather than the structural aspects.

The typical flow starts when a user uploads a blood smear image through
the user interface. The system then preprocesses the image, applying operations
such as noise removal, contrast adjustment, resizing, and normalization to
enhance image quality.

After preprocessing, the image is fed into the trained MobileNetV2 model
for classification. Based on the prediction output, the system identifies whether
the image indicates a cancerous or non-cancerous condition.

The classified result is then displayed to the user. Additionally, the system
may provide detailed visualization of the results, such as highlighting affected
areas or displaying prediction confidence.

The activity diagram clearly outlines these sequential steps, decisions, and
possible outcomes, offering a high-level view of the entire operational flow of the
system.

Using an activity diagram makes it easier to visualize the paths users and
data take through the system, supports better process understanding, and assists
developers and stakeholders in identifying potential improvements or
optimizations in the workflow.

Figure 5.5.1 Activity Diagram

The activity diagram illustrates the detailed workflow followed by the blood
cancer detection system from the moment a user uploads a blood smear image to the
final display of the diagnosis result.

The process begins with the user uploading the blood smear image through the
system interface. Once uploaded, the system initiates preprocessing on the image to
ensure it is of suitable quality for classification.

Preprocessing steps include Noise Reduction (removing unwanted artifacts),
Contrast Enhancement (improving visibility of important features), and Resizing
(standardizing the image dimensions for model input).

After preprocessing, the image is passed to the MobileNetV2 model, a
lightweight deep learning model specifically chosen for its efficiency and accuracy.

The MobileNetV2 model then extracts features using its convolutional neural
network (CNN) layers, capturing the critical patterns and structures present in the
blood smear image.

Following feature extraction, the system proceeds to classify the image. At this
point, a decision is made:

 If cancer is detected, the system will display a "Cancerous" result to the user and
proceed to generate a detailed report summarizing the findings.

 If no cancer is detected, it will display a "Non-Cancerous" result directly to the
user.

Finally, after either path, the activity concludes, ensuring that every blood smear
image is processed efficiently and results are made available clearly to the user.

This structured flow ensures that all necessary operations are executed
systematically and that both the user experience and the backend processing are
optimized for accuracy and reliability, critical factors in healthcare-related
applications.

5.6 Sequence Diagram

The sequence diagram illustrates the dynamic behavior of the blood cancer
detection system by modeling the flow of interactions between key components over time.
It specifically captures how objects in the system collaborate and in what order these
interactions occur. In a healthcare-focused application like blood cancer detection,
understanding this interaction sequence is crucial to ensure accurate, efficient, and timely
predictions. In this system, the interaction begins when the User initiates an action by
uploading a blood smear image via the User Interface. This interaction is the starting
point of the sequence. The uploaded image is then passed to the Image Preprocessing
Module, which plays a critical role in preparing the image for analysis.

Preprocessing involves several important steps, including noise reduction to
eliminate irrelevant artifacts, contrast enhancement to highlight essential features, and
resizing to match the input requirements of the deep learning model. Each of these steps is
performed in a sequential and automated manner to standardize the image data and ensure
optimal model performance.

After preprocessing, the image is transferred to the MobileNetV2 Model, a
lightweight deep learning model pre-trained and fine-tuned for the task of blood cancer
detection. The MobileNetV2 model performs feature extraction through multiple
convolutional layers, detecting intricate patterns, cell structures, and abnormalities in the
image that are indicative of cancerous conditions.

Once feature extraction is complete, the model processes the features to classify the
image as either cancerous or non-cancerous. The result of this classification is sent back
to the User Interface, which promptly displays the result to the user in an easy-to-
understand format.

In addition to displaying the result, the system may also offer the functionality to
generate a detailed report, summarizing the prediction confidence, probability scores,
and possibly including visual highlights to make the result interpretation easier for medical

professionals or patients.

Throughout the sequence, the diagram maintains:

 Clear flow of control: each component knows exactly when and how to trigger the
next action.

 Logical separation of responsibilities: the user interface handles interactions, the
preprocessing module handles image enhancement, and the MobileNetV2 model
handles classification.

 Synchronization of operations: the system ensures that no process is skipped or
performed out of order, preserving system reliability.

 Feedback loop: once classification is complete, the system communicates back to
the user without unnecessary delay.

By visualizing this workflow, developers, testers, and healthcare stakeholders can better
understand:

 How system modules are dependent on each other.

 Where delays or bottlenecks might occur.

 How information travels between components.

 How to handle exceptions or errors if, for example, an image upload fails or model
inference is delayed.

Designing a clear and structured sequence diagram also greatly supports system
scalability and maintenance. If new functionalities like "multi-class cancer detection" or
"integration with cloud storage" are to be added in the future, the sequence diagram
provides a blueprint showing exactly where to introduce new processes without disrupting
the core workflow.

Thus, the sequence diagram plays a vital role in modeling not just the current
system but also preparing for its growth, making the blood cancer detection system robust,
understandable, and future-ready.

Figure 5.6.1 Sequence Diagram

The sequence diagram shows how the user, user interface, preprocessing module,
and MobileNetV2 model interact over time. It begins with the image upload, followed by
preprocessing, feature extraction, and classification. The classified result is sent back to
the user for display.

This diagram helps visualize the flow of operations and the timing of
each interaction clearly and systematically.

CHAPTER 6

SYSTEM TESTING

System testing is a crucial phase in the development lifecycle of the blood cancer
detection system. It ensures that all components work together as intended and verifies that
the system meets the specified requirements in terms of functionality, performance,
reliability, and usability. In a healthcare-related application, testing becomes even more
critical because incorrect outputs can have serious consequences.

The objective of system testing is to validate the end-to-end workflow, from image
upload to final diagnosis display, under various conditions. This includes testing the
system’s ability to handle different types of blood smear images, ensuring consistent
performance across different devices, and verifying the accuracy of classification results.

Both functional and non-functional aspects of the system are thoroughly tested.
Functional testing ensures that each module, such as image preprocessing, model
prediction, and result visualization, behaves correctly. Non-functional testing focuses on
performance (response time), usability (ease of interaction), scalability (handling multiple
requests), and reliability (producing consistent results without failure).

Various types of testing activities are carried out, including:

 Unit Testing: Testing individual modules like preprocessing and model inference
separately.
 Integration Testing: Testing the interaction between modules, ensuring smooth
data flow from upload to prediction.
 System Testing: Evaluating the complete system as a whole in a real-world
simulated environment.
 User Acceptance Testing (UAT): Allowing potential end-users to interact with the
system and validate its usefulness and usability.

Proper documentation of test cases, expected outcomes, actual outcomes, and
analysis of discrepancies (if any) is maintained to support traceability and continuous
improvement.

Through comprehensive system testing, the reliability and robustness of the blood
cancer detection system are confirmed, building confidence in its deployment for clinical
or research use.

6.1 Functional Testing

Functional testing is conducted to verify that each component of the blood
cancer detection system operates in accordance with the defined functional
requirements. It ensures that the system behaves correctly under different user
interactions and scenarios, covering both expected and unexpected use cases. The main
goal is to validate that the system delivers the correct output for a given input and that
the core features work seamlessly from end to end.

In this project, functional testing covered the following major areas:

1. Image Upload Functionality

The system was tested to ensure users can upload blood smear images without errors.
Tests were performed for:

 Different image formats (JPEG, PNG).


 Different image sizes (small, medium, large).
 Handling unsupported file types (such as PDF or text files) with appropriate error
messages.
 Ensuring uploaded images are correctly passed to the preprocessing module.

2. Image Preprocessing

Functional testing was applied to check if the preprocessing steps were properly
executed:
 Resizing the image to match MobileNetV2's required input dimensions (224x224
pixels).
 Normalization of pixel values to scale between 0 and 1.
 Noise reduction and contrast enhancement were verified to ensure image quality
improvement without losing critical features.
 Augmentation techniques like rotation, flipping, and zooming were applied during
training data preparation.

3. Model Prediction

The MobileNetV2 model's ability to generate predictions was tested by:


 Feeding preprocessed images and checking if a prediction label (Cancerous / Non-
Cancerous) is returned.
 Verifying that the prediction probability/confidence scores are generated alongside
the labels.
 Checking that the correct classification flow is maintained and no crashes occur
even if unusual images are uploaded.

4. User Interface and Output Display

The web-based interface was tested to ensure:


 The system displays a "Processing" message during model inference.
 After prediction, results (including label and confidence percentage) are shown
clearly to the user.
 If an invalid image is uploaded, appropriate error alerts guide the user without
crashing the application.
 Buttons like "Upload Another Image" or "View Result" function correctly and reset
the session without errors.

5. Error Handling

Testing also focused on verifying robust error handling, including:

 Handling missing or corrupt images gracefully.


 Timeouts or failures in model loading or predictions resulting in user-friendly error
messages instead of server crashes.
 Ensuring the system prevents multiple uploads simultaneously that could overload
the model.

6. Integration Between Modules

Each module - upload, preprocessing, model prediction, and result display - was
tested to ensure smooth data flow without any data corruption or misinterpretation
between steps.

Figure 6.1.1 Functional Test

6.2 Non-functional Testing
Non-functional testing ensures that the blood cancer detection system not
only works correctly but also performs well under different conditions. It focuses on
aspects like usability, performance, reliability, and security, rather than specific
functionalities. Non-functional testing is important to verify that the system meets
quality standards and provides a good user experience.

The following non-functional tests were performed:

1. Performance Testing

• The system's response time was tested by uploading different image sizes.
• The prediction result was returned within 2 to 4 seconds for standard blood smear
images.
• This ensures that the model processes inputs efficiently without noticeable delays.

2. Usability Testing

• The web interface was evaluated for user-friendliness.


• The system allows easy image upload, clear result display, and minimal user
interaction.
• Non-technical users were able to understand and use the system without needing
technical assistance.

3. Reliability Testing

• The system was tested with multiple blood smear images back-to-back.
• It consistently provided accurate results without crashing or producing random
outputs.
• The prediction remained stable even after multiple consecutive uses.

4. Scalability Testing

• Although currently designed for single-image input, the system architecture
allows for future scaling to batch processing if required.

• Tested with different images, and the server handled multiple prediction requests
without failure.

5. Compatibility Testing

• The web application was accessed using multiple browsers such as Chrome,
Firefox, and Edge.

• The interface and functionality remained consistent across all platforms without
errors.

6. Security Testing

• Basic security checks were implemented to prevent malicious file uploads (only
image files are allowed).

• No user data is stored permanently, ensuring privacy protection for the uploaded
images.

6.3 Test Cases and Strategies

The following testing strategies were employed:

 Functional Testing
To verify that each feature (image upload, prediction generation, result display)
operates according to specifications.

75
 Performance Testing
To assess system responsiveness, especially the time taken to preprocess images and
generate predictions.

 Usability Testing
To ensure the web interface is intuitive, responsive, and user-friendly.

 Validation Testing
To confirm that the trained MobileNetV2 model provides accurate classifications on
unseen data.

 Security Testing
To ensure that user data (uploaded images) is handled securely and not exposed.

Test Case           | Input              | Expected Output                           | Actual Result                 | Status
--------------------|--------------------|-------------------------------------------|-------------------------------|-------
Image Upload        | Blood smear image  | Image successfully uploaded               | Image uploaded without errors | Pass
Preprocessing Check | Uploaded image     | Resized, normalized image ready for model | Image processed correctly     | Pass
Model Prediction    | Preprocessed image | Cancerous or Non-Cancerous label          | Correct label generated       | Pass
Performance Test    | Batch of 50 images | Response time < 2 seconds/image           | Average response: 1.5 seconds | Pass
Interface Usability | User interaction   | Smooth navigation and result display      | No UI glitches or confusion   | Pass

Table 6.3.1 Test Cases


6.4 Bug Reporting, Resolution, and Validation Results

During the testing phase of the Blood Cancer Detection system, a few minor issues
were identified that needed resolution for ensuring smooth and accurate operation in
practical environments such as hospitals, clinics, and cloud-based platforms.

Bug Reporting and Resolution

The following issues were encountered during testing:

1. Slight delays when uploading very large images:

The system experienced noticeable delays when processing large images, which could affect the user experience, especially in high-traffic environments such as hospitals where quick diagnostics are essential.

2. Occasional misclassification on rare edge-case samples:

While the model performed well in general, it occasionally misclassified rare edge-case samples. These were typically outliers that did not fully resemble the general characteristics of cancerous or non-cancerous cells in the dataset.

To address these issues, several solutions were implemented:

 Size limit for image uploads: A size limit was set for image uploads to prevent the
system from encountering performance issues when processing large images. This
ensured the model could handle more typical image sizes quickly and efficiently.

 Enhanced data augmentation: To improve the model's robustness and generalization, data augmentation techniques were applied more extensively, including rotation, flipping, and scaling of the training images. By exposing the model to a wider variety of sample images, the model became more adaptable to edge cases, leading to better overall performance on unseen data.
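The flip-and-rotate transforms described above can be illustrated with a minimal NumPy stand-in (the project's actual pipeline likely used Keras augmentation utilities; the specific transforms and probabilities here are illustrative):

```python
import numpy as np

def augment(image, rng):
    """Randomly flip and rotate an image array, in the spirit of the
    augmentation strategy described above (illustrative parameters only)."""
    if rng.random() < 0.5:
        image = np.fliplr(image)   # horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)   # vertical flip
    return np.rot90(image, k=int(rng.integers(0, 4)))  # random 90-degree turn

rng = np.random.default_rng(seed=0)
sample = np.arange(12).reshape(3, 4)   # toy "image" for demonstration
augmented = augment(sample, rng)
```

Each transform rearranges pixels without changing their values, so the augmented copies remain valid training samples while varying cell orientation.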

Validation Results

After implementing the necessary fixes and refinements, the MobileNetV2-based model demonstrated excellent performance on the test set. The validation results are summarized as follows:

 Accuracy: 98.6% – The model correctly classified nearly all images in the test set,
showing its high reliability in detecting both cancerous and non-cancerous blood
cells.

 Precision: 98.2% – This indicates that the model's positive predictions were highly
accurate, with minimal false positives.

 Recall: 98.5% – The model performed well in identifying true positive cases,
ensuring that most cancerous cells were correctly flagged.

 F1-Score: 98.3% – The F1-score, a balance between precision and recall, highlights
the model's effectiveness in maintaining both high sensitivity and accuracy.
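These metrics follow directly from confusion-matrix counts; the helper below shows the standard formulas (the example counts are illustrative, not the project's actual results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)                          # how many positives were right
    recall = tp / (tp + fn)                             # how many true cases were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only; these are not the project's actual test results.
precision, recall, f1, accuracy = classification_metrics(tp=8, fp=2, fn=2, tn=8)
```

In a diagnostic setting, recall is especially important, since a false negative (a missed cancerous sample) is costlier than a false positive.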

These results reinforce that the system is highly effective and can be trusted for
practical deployment in clinical settings, hospitals, and on cloud-based platforms, where
accuracy and reliability are crucial. The high performance across various metrics
showcases the robustness of the MobileNetV2-based approach for blood cancer
detection.

Conclusion of Testing:

The results of system testing validate that the developed blood cancer detection
platform is both functional and reliable. The platform performs with high accuracy,
speed, and user-friendliness. During testing, only minimal bugs were identified, all of
which were promptly addressed. With these improvements in place, the system is now
fully prepared for real-world deployment, and it offers potential for future
enhancements.
CHAPTER 7

SYSTEM IMPLEMENTATION

The successful implementation of the blood cancer detection platform requires seamless integration between the frontend, backend, and machine learning components. Each of these elements plays a crucial role in delivering an efficient, accurate, and user-friendly system.

 Frontend Development focuses on the creation of the user interface, ensuring that
healthcare professionals can easily interact with the system by uploading blood
smear images, receiving predictions, and interpreting the results with minimal
effort.
 Backend Development is responsible for the underlying system architecture,
managing the interaction between the frontend and the machine learning model. It
ensures smooth data flow, image processing, and result generation while
maintaining the integrity and performance of the system.
 Machine Learning Integration bridges the technical gap between the data and the
user. This component integrates the deep learning model into the system, enabling it
to process uploaded images, classify them accurately, and provide reliable
predictions.

Together, these elements form a cohesive platform that enables efficient blood
cancer detection. In this chapter, we will explore each of these components in detail,
describing their design, implementation, and the challenges overcome to create a robust
and scalable solution.

7.1 Frontend Development

The frontend of the blood cancer detection platform plays a critical role in providing an intuitive and responsive interface for users, ensuring smooth and efficient interaction with the system. It is designed with healthcare professionals in mind, who may not necessarily have extensive technical expertise but need to upload images and receive results quickly and accurately.

Key Features:

 User Interface (UI): The design prioritizes simplicity and clarity, allowing users to easily navigate the platform. The main features include an image upload section, a progress indicator showing the status of the image upload, and a results section that displays the classification outcome: whether the image is cancerous or non-cancerous.

 Responsive Design: Using HTML and CSS, the interface is optimized for
accessibility across different devices, ensuring that it works on desktops, tablets,
and mobile phones. The layout adjusts seamlessly to various screen sizes, allowing
users to access the platform in diverse environments such as hospitals, clinics, and
remote locations.

 Interactive Elements: The frontend leverages JavaScript to implement interactive features. These include drag-and-drop functionality for image uploads, dynamic content updates to show progress, and real-time feedback for users while the image is being processed. JavaScript was also used to validate uploaded files, ensuring that only supported image formats are accepted and that the file size remains within the set limits.

 User Experience (UX): The platform was designed with the goal of making the
user experience as smooth as possible. Healthcare professionals can upload a blood
smear image, view its classification results, and receive actionable insights with just
a few clicks. This design approach minimizes user error and enhances efficiency in
clinical environments.

Technologies Used:

 HTML/CSS: For structuring and styling the web pages. The design was kept simple
and clean to focus on usability and ease of navigation.
 JavaScript: For client-side scripting, enabling real-time updates, drag-and-drop
upload, and user-friendly feedback mechanisms.

 Bootstrap: A responsive front-end framework was used to create a fluid design that
adapts to different screen sizes. This is particularly useful for ensuring that the
platform can be accessed from mobile devices, a common scenario in clinical
environments.

7.2 Backend Development

The backend development of the blood cancer detection platform is the foundation
that supports the system’s functionality. It ensures the smooth processing of user
requests, the integration of the machine learning model, and the management of data
flow between the frontend and the model. The backend is responsible for image
handling, communication with the deep learning model, result generation, and ensuring
that all operations are executed efficiently.

Key Components:

 Image Upload and Processing: Upon receiving an image from the frontend, the
backend handles the upload process, ensuring that images are validated (e.g.,
checking file format and size) before being processed. After validation, the backend
resizes the image to a format suitable for the machine learning model and stores it
temporarily for model inference.

 Model Integration: The core functionality of the backend lies in integrating the
deep learning model with the web platform. Using TensorFlow and Keras, the
backend sends the processed image data to the trained MobileNetV2 model. Once
the model processes the image, the backend retrieves the prediction and returns it to
the frontend.

 API and Server Communication: The backend is built using the Flask framework, which handles all HTTP requests. When a user uploads an image, Flask routes the request to the appropriate backend logic. It then communicates with the model and returns the prediction results to the frontend, where they are displayed to the user.

 Data Security and Privacy: Ensuring the privacy and security of the data is
paramount, especially when dealing with medical information. The backend
includes security measures such as data encryption, secure image upload, and
access control mechanisms to protect sensitive information. Additionally, secure
communication between the frontend and backend is achieved using HTTPS.

 Database Management: While the platform currently focuses on real-time processing, the backend is designed to be scalable for future updates, such as storing user data, images, and results in a database for later retrieval. A SQL or NoSQL database can be integrated to log user interactions, track system performance, and allow for additional features, such as saving user profiles and results.

Technologies Used:

 Flask: A lightweight Python web framework used to build the backend API. Flask
allows for efficient handling of HTTP requests, routing, and integration with the
machine learning model.

 TensorFlow/Keras: These deep learning frameworks are used for integrating the
MobileNetV2 model into the backend, allowing the system to process uploaded
images and return predictions based on the model’s output.

 Python: The programming language used for both backend development and model
integration. Python’s extensive support for libraries like Flask, TensorFlow, and
image processing makes it an ideal choice for this project.

 Security: Technologies like SSL/TLS encryption and JWT (JSON Web Tokens)
are used to secure communication and protect data during transmission.

CHAPTER 8

CONCLUSION

The project successfully developed a lightweight, efficient, and highly accurate deep
learning-based system for blood cancer detection from microscopic blood smear images.
By leveraging the MobileNetV2 architecture along with transfer learning, the system was
able to achieve an optimal balance between classification performance and computational
efficiency, making it suitable for real-world deployment in clinics, hospitals, cloud
platforms, and mobile applications, particularly in resource-constrained environments
where access to expert diagnosis is limited.

Comprehensive data preprocessing techniques such as noise reduction, contrast enhancement, resizing, normalization, and data augmentation played a crucial role in enhancing the quality of the input data.

These preprocessing steps improved the training process and enabled the model to
learn more robust and generalizable features. As a result, the MobileNetV2-based model
demonstrated superior performance, achieving an impressive 98.6% accuracy on the test
dataset while maintaining a lightweight architecture suitable for real-time inference.

The integration of a web-based interface using Python Flask further enhanced the
usability of the system, allowing healthcare professionals to easily upload and classify
blood smear images without requiring extensive technical expertise.

This automation reduces diagnostic time, minimizes human error, supports early
disease detection, and facilitates faster clinical decision-making processes.

Throughout the project, several challenges were encountered, including limited dataset availability, data imbalance between cancerous and non-cancerous samples, the risk of overfitting, and ensuring model explainability.

These challenges were systematically addressed through techniques such as data
augmentation, transfer learning, fine-tuning of model layers, hyperparameter optimization,
and careful evaluation using performance metrics like precision, recall, F1-score,
confusion matrix analysis, and accuracy.

Overall, this project demonstrates that deep learning, when carefully designed and
optimized, can serve as a powerful assistive tool in medical diagnostics. The work
provides a strong foundation for future research and development aimed at expanding AI-
based diagnostic systems into broader applications, including multi-class classification for
different types of leukemia and other hematological disorders.

In conclusion, the developed system not only advances the field of medical image
analysis but also contributes towards democratizing healthcare by making advanced
diagnostic capabilities more accessible, affordable, and efficient for diverse populations
across the world.

It opens a pathway towards integrating AI-powered tools into clinical workflows, potentially improving patient outcomes and supporting healthcare systems in delivering faster, more accurate, and more equitable care.

CHAPTER 9

FUTURE ENHANCEMENTS

The current blood cancer detection platform demonstrates high accuracy, reliability,
and user-friendliness. However, as technology and healthcare needs continue to evolve,
there are several potential areas for future improvement and expansion. Incorporating
these enhancements will ensure the system remains cutting-edge, scalable, and even more
beneficial for clinical usage.

1. Model Improvements

 Multi-class Classification:
Extend the model to not only detect the presence of blood cancer but also
differentiate between various subtypes (e.g., ALL, AML, CLL, CML) for more
detailed diagnostic support.

 Advanced Architectures:
Explore and integrate newer deep learning architectures like EfficientNet,
Vision Transformers, or hybrid models to potentially achieve even higher accuracy
and robustness.

 Continual Learning:
Implement systems that allow the model to learn from new data over time,
improving its performance as more images and cases are collected.

2. Expanded Dataset

 Diverse Data Collection:
Collect more diverse blood smear images from multiple demographics and imaging conditions to reduce bias and improve model generalization.

 Real-World Data:
Integrate real-world clinical data, including images with different staining
techniques and magnifications, to further validate and strengthen the model.

3. Enhanced User Interface

 Batch Image Upload:
Allow users to upload multiple images at once for bulk testing, increasing efficiency for laboratory usage.

 Visualization Features:
Incorporate heatmaps or attention maps (e.g., Grad-CAM) that visually highlight the regions of the image that contributed most to the model's prediction, helping doctors understand the system's reasoning.
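As a simplified sketch of how such a heatmap is formed, a class-activation map takes a weighted sum of the final convolutional feature maps, applies a ReLU, and normalizes the result. Full Grad-CAM derives the channel weights from gradients of the class score, which is omitted here; the arrays below are toy values:

```python
import numpy as np

def cam_heatmap(feature_maps, channel_weights):
    """Combine conv feature maps (H, W, C) with per-channel weights (C,)
    into a normalized heatmap in [0, 1]."""
    heatmap = np.maximum(feature_maps @ channel_weights, 0.0)  # weighted sum + ReLU
    peak = heatmap.max()
    return heatmap / peak if peak > 0 else heatmap             # scale to [0, 1]

# Toy inputs for illustration only.
maps = np.ones((2, 2, 3))
weights = np.array([1.0, -1.0, 0.5])
heat = cam_heatmap(maps, weights)
```

The resulting heatmap can be upsampled to the input image size and overlaid on the blood smear to show which regions drove the classification.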

4. Mobile Application Development

 Mobile App Integration:

Develop a dedicated mobile application for Android and iOS devices, making
the platform more accessible, especially in remote and resource-limited areas.

 Offline Mode:
Enable offline prediction capabilities, allowing the model to run locally
without the need for constant internet access.

5. Cloud Deployment and Scalability

 Cloud-Based Services:

Host the platform on scalable cloud services like AWS, Azure, or Google
Cloud to support a larger number of users simultaneously.

 API Services:
Offer API endpoints so that hospitals and clinics can integrate the detection
capabilities into their own systems.

6. Advanced Security Measures

 HIPAA Compliance:
Upgrade the platform’s security to comply with medical data protection
standards such as HIPAA, ensuring full patient data confidentiality.

 Blockchain for Data Integrity:
Explore blockchain technology for tamper-proof storage of diagnostic results, enhancing trust and security.

7. Integration with Electronic Health Records (EHR)

 EHR Connectivity:
Allow seamless integration with hospital EHR systems, enabling automatic
saving of diagnostic results into patient records for better case tracking and
historical analysis.

CHAPTER 10

APPENDICES

10.1 Code Snippet

# app.py

from flask import Flask, render_template, request
from main import getPrediction
import pandas as pd
import os

# Save images to the 'static' folder as Flask serves images from this directory
UPLOAD_FOLDER = 'static/images/'

# Create an app object using the Flask class
app = Flask(__name__, static_folder="static")
app.debug = False

# Routes
@app.route('/')
@app.route('/first')
def first():
    return render_template('index.html')

@app.route('/about')
def about():
    return render_template('about.html')

@app.route('/service')
def service():
    return render_template('service.html')

@app.route('/working')
def working():
    return render_template('working.html')

@app.route('/upload')
def upload():
    return render_template('upload.html')

@app.route('/login')
def login():
    return render_template('login.html')

@app.route('/chart')
def chart():
    return render_template('chart.html')

@app.route('/preview', methods=["POST"])
def preview():
    if request.method == 'POST':
        dataset = request.files['datasetfile']
        df = pd.read_csv(dataset, encoding='unicode_escape')
        df.set_index('Id', inplace=True)
        return render_template("preview.html", df_view=df)

@app.route("/prediction", methods=['GET', 'POST'])
def prediction():
    return render_template("index1.html")

@app.route("/notebook")
def notebook():
    return render_template("notebook.html")

@app.route("/submit", methods=['GET', 'POST'])
def get_output():
    if request.method == 'POST':
        img = request.files['my_image']

        if img:
            img_path = os.path.join(UPLOAD_FOLDER, img.filename)
            img.save(img_path)

            # getPrediction returns the class label as a string
            # ("Benign", "Early Pre-B", "Pre-B", or "Pro-B")
            result = getPrediction(img_path)

            return render_template("index_report.html", prediction=result,
                                   img_path=img_path)
        else:
            return "No image uploaded."

if __name__ == "__main__":
    if not os.path.exists(UPLOAD_FOLDER):
        os.makedirs(UPLOAD_FOLDER)
    app.run()

# main.py

import os
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Load the model once when the app starts
model = load_model("models_dump/MobileNetV2_Model.h5")

def getPrediction(filename):
    try:
        # Load and preprocess the image to the model's 224x224 input size
        img = load_img(filename, target_size=(224, 224))
        input_arr = img_to_array(img).astype('float32') / 255.0
        input_arr = np.array([input_arr])

        # Predict the class index
        pred = np.argmax(model.predict(input_arr))

        # Class labels mapping
        class_labels = {0: "Benign", 1: "Early Pre-B", 2: "Pre-B", 3: "Pro-B"}
        diagnosis = class_labels.get(pred, "Unknown")
        print(f"Diagnosis is: {diagnosis}")

        return diagnosis
    except Exception as e:
        print(f"Error during prediction: {e}")
        return None

10.2 Visuals of the Website

Figure 10.2.1 Index Page
Figure 10.2.2 About Us Page
Figure 10.2.3 Login Page
Figure 10.2.4 Services Page
Figure 10.2.5 Features Page
Figure 10.2.6 Prediction Page
Figure 10.2.7 Image Upload
Figure 10.2.8 Image Upload (2)
Figure 10.2.9 Result Report Page
Figure 10.2.10 Stages Description Page
CONFERENCE PARTICIPATION CERTIFICATES

