
WOUND LOCALISATION AND CLASSIFICATION USING DEEP LEARNING
MINI PROJECT REPORT
Submitted by

HELAN MARIYAM AIPE


REG NO: M23CSCS05
to
The APJ Abdul Kalam Technological University
in partial fulfillment of the requirements for the award of the Degree
of
MASTER OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
Under the guidance of
Prof. Thushara A
Associate Professor

May 2024
DECLARATION

The undersigned hereby declares that the mini project report 'Wound Localisation and Classification Using Deep Learning', submitted in partial fulfillment of the requirements for the award of the degree of Master of Technology of the APJ Abdul Kalam Technological University, Kerala, is a bonafide work done by me under the supervision of Associate Prof. Thushara A. This submission represents the experimental results in my own words, and where ideas or words of others have been included, I have adequately and accurately cited and referenced the original sources. I also declare that I have adhered to the ethics of academic honesty and integrity and have not misrepresented or fabricated any data, idea, fact, or source in my submission. I understand that any violation of the above will be a cause for disciplinary action by the institute or the University and can also invoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been obtained. This report has not previously formed the basis for the award of any degree, diploma, or similar title of any other University.

Place: Kollam
Date: 07 May 2024

Helan Mariyam Aipe

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
THANGAL KUNJU MUSALIAR COLLEGE OF ENGINEERING
KOLLAM

CERTIFICATE

This is to certify that the mini project report entitled "WOUND LOCALISATION AND CLASSIFICATION USING DEEP LEARNING" submitted by Helan Mariyam Aipe (M23CSCS05) to the APJ Abdul Kalam Technological University in partial fulfillment of the requirements for the award of the Degree of Master of Technology in Computer Science and Engineering is a bonafide work carried out by the candidate under our guidance and supervision.

Internal Supervisor: Prof. Thushara A, Associate Professor, Dept. of CSE
Head of Department: Dr. Aneesh G Nath, Associate Professor, Dept. of CSE
External Supervisor: Dr. Dimple Shajahan, Professor, Dept. of CSE
ACKNOWLEDGEMENTS

First and foremost, I sincerely thank the Almighty GOD, who is most beneficent and merciful, for giving me the knowledge and courage to complete the project successfully.

I derive immense pleasure in expressing my sincere gratitude to our Principal, Dr. T A Shahul Hammed, for his kind co-operation in all aspects of my project.

I owe a great debt of gratitude to Dr. Aneesh G Nath, HOD, Department of Computer Science and Engineering, for his constant support and encouragement throughout this project.

I express my sincere gratitude to Prof. Thushara A, Associate Professor, my project guide, for her encouragement and motivation during the completion of my project.

I am greatly indebted to my beloved teachers for their cooperation and suggestions throughout the project, which helped me a lot. I also thank all my friends, classmates, and non-teaching staff for their interest, dedication, and encouragement toward the project.

I also express my thanks to my parents for their support and encouragement in the successful completion of this venture.

ABSTRACT

Wound localization is crucial in medical settings for tracking wound and ulcer progression, treatment evaluation, and patient monitoring. However, manual wound localization can be time-consuming and error-prone. The proposed app leverages the YOLOv8 deep learning model, known for its high accuracy and real-time performance, to automatically detect and localize wounds in images captured by the mobile device's camera. The app provides a user-friendly interface for uploading images, processing them using the YOLOv8 model, and displaying the results to the user. Additionally, the app allows users to annotate and save localized wounds for further analysis or sharing with healthcare providers. In addition to wound localization, this project also includes a comparative study on wound classification using six different deep-learning models. The models selected for the study are chosen based on their performance in image classification tasks and include VGG16, Inception, ResNet, MobileNet, DenseNet, and EfficientNet. The study aims to evaluate the effectiveness of these models in classifying different types of wounds, such as diabetic wounds, pressure wounds, and ulcer wounds, to provide insights into the most suitable model for wound classification in medical applications. Overall, this project aims to facilitate wound management and improve healthcare efficiency by providing a convenient and reliable tool for wound localization on mobile devices.

Contents

Declaration
Acknowledgements
Abstract
Contents

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Scope
2 Literature Review
3 Methodology
4 Results and Discussion
  4.1 EfficientNet
  4.2 DenseNet
  4.3 MobileNetV2
  4.4 ResNet50
  4.5 Inception
  4.6 VGG16
5 Conclusion
6 References
7 Appendix
  7.1 Wound Localisation
  7.2 Wound Classification
    7.2.1 EfficientNet
    7.2.2 DenseNet
    7.2.3 Inception
    7.2.4 MobileNetV2
    7.2.5 ResNet50
    7.2.6 VGG16
  7.3 App.py


Chapter 1

Introduction

1.1 Overview
A vital component of healthcare is wound management, especially in situations where patients need constant observation and care for a variety of wounds, such as diabetic wounds, pressure wounds, and ulcers. Precise and efficient localization and categorization of these wounds are necessary for effective treatment planning, assessment, and patient progress tracking. Traditional methods are prone to errors and require a lot of time to complete, so deep learning techniques are needed to create automated solutions for wound localization and classification. Healthcare must prioritize wound care, particularly when patients are bedridden or have chronic illnesses like diabetes that make them more susceptible to pressure ulcers. To prevent complications and encourage healing, these wounds require close observation and care. However, manually identifying and classifying wounds can be tedious, error-prone, and time-consuming, which further highlights the need for automated solutions.

Deep learning has proven to be a potent tool in medical image analysis, making it possible to create automated systems for image recognition, localization, and classification. Medical imaging plays a pivotal role in wound management, offering insights into wound characteristics, progression, and response to treatment. Localization and classification are fundamental tasks in wound analysis, aiding healthcare providers in devising effective treatment plans and evaluating patient outcomes. Automated methods using deep learning have revolutionized these tasks, providing efficient and accurate alternatives to manual approaches. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated remarkable capabilities in medical image analysis, including wound localization and classification. These models can learn intricate patterns and features from medical images, enabling them to accurately identify and classify various types of wounds, such as diabetic wounds, pressure wounds, and ulcers. The utilization of CNNs has significantly enhanced the efficiency and accuracy of wound analysis, leading to improved patient care.

One of the key challenges in wound analysis is the accurate localization of wounds within medical images. Localization involves identifying the precise location and boundaries of wounds, which is crucial for accurate measurement and monitoring. Deep learning models such as YOLOv8 (You Only Look Once, version 8) have shown exceptional performance in localization tasks, enabling healthcare providers to quickly and accurately identify wound locations in medical images. Recent progress in accurately localizing objects in images with models like YOLOv8 makes them a good fit for wound localization applications, and web applications that can precisely and quickly locate and categorize wounds in medical images can be built on top of these models. Classification is another essential aspect of wound analysis, as it allows healthcare providers to categorize wounds based on their characteristics, which is vital for treatment planning and monitoring. Deep learning models, including VGG16, Inception, ResNet, MobileNet, DenseNet, and EfficientNet, have been extensively used for wound classification, demonstrating high accuracy in distinguishing between different types of wounds.

Ulcer localization using deep learning models, particularly YOLOv8, represents a significant advancement in medical imaging technology. Ulcers, particularly diabetic, pressure, and other types, are common in healthcare settings and require accurate and timely diagnosis for effective treatment planning. Traditional methods of ulcer localization rely heavily on manual inspection by medical professionals, which is time-consuming and subjective. Deep learning models offer a promising solution to automate this process, providing faster and more accurate results. The objective of this project is to develop a system that utilizes the YOLOv8 architecture for ulcer localization in medical images. By leveraging deep learning techniques, we aim to create a system that can accurately detect and localize ulcers, thereby aiding healthcare professionals in diagnosing and treating patients more efficiently.

The methodology of this project involves several key steps. First, a dataset of medical images containing ulcer annotations is collected. These images are preprocessed to enhance the performance of the YOLOv8 model, including resizing and augmenting the images. The YOLOv8 model is then trained on this preprocessed dataset, utilizing its state-of-the-art object detection capabilities to learn and recognize ulcers in the images. Once trained, the model is evaluated on a separate test dataset to assess its performance in terms of accuracy and efficiency. Additionally, a web application using Streamlit is developed to provide a user-friendly interface for uploading and analyzing medical images. This integration with Streamlit allows for real-time ulcer localization, making the system accessible to medical professionals for immediate use in clinical settings.
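As an illustrative sketch only (the full application script is listed in the Appendix as App.py; the weights path, widget labels, and file names below are assumptions rather than the project's exact code), such a Streamlit front end can call the trained detector roughly as follows:

    # Hedged sketch of a Streamlit front end for the trained YOLOv8 detector.
    # Paths and labels are illustrative assumptions, not the exact App.py code.
    import streamlit as st
    from PIL import Image
    from ultralytics import YOLO

    model = YOLO("runs/detect/yolov8n_v8_50e/weights/best.pt")  # assumed weights path

    st.title("Wound Localisation")
    uploaded = st.file_uploader("Upload a wound image", type=["jpg", "jpeg", "png"])

    if uploaded is not None:
        image = Image.open(uploaded).convert("RGB")
        results = model.predict(image, conf=0.5)          # run YOLOv8 inference
        annotated = results[0].plot()                     # image with boxes drawn (BGR array)
        st.image(annotated[..., ::-1], caption="Detected wounds")  # BGR -> RGB for display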

1.2 Motivation
The motivation behind this project stems from the critical need for efficient and accurate ulcer localization in medical settings. Chronic wounds, such as ulcers, pose significant challenges to healthcare professionals due to their complex nature and the potential for complications. Manual ulcer localization methods are often time-consuming and prone to error, leading to delays in diagnosis and treatment. By leveraging deep learning models, specifically YOLOv8, we aim to revolutionize ulcer localization by providing a faster, more accurate, and automated solution. Furthermore, the integration of Streamlit into our system adds a layer of accessibility and user-friendliness that is crucial in the medical field. Medical professionals can easily upload and analyze images through a simple web interface, eliminating the need for specialized software or expertise. This project has the potential to significantly improve patient outcomes by enabling quicker and more informed decisions regarding ulcer diagnosis and treatment. Overall, the motivation behind this project lies in addressing a critical need in healthcare through the application of cutting-edge technology.


1.3 Scope
Ulcers are a significant health concern globally, affecting millions of individuals and presenting substantial challenges for healthcare providers. The accurate localization of ulcers in medical images is crucial for effective diagnosis and treatment planning. Traditional methods of ulcer localization rely heavily on manual inspection by trained professionals, which can be time-consuming and prone to errors. The advent of deep learning and computer vision techniques has opened up new possibilities for automating this process, offering the potential for faster and more accurate ulcer localization. This project aims to leverage deep learning models, specifically YOLOv8, to develop a system for ulcer localization in medical images. The primary scope of this project includes data collection, preprocessing, model training, and evaluation. We will gather a dataset of medical images containing ulcer annotations, preprocess the images to enhance model performance, train the YOLOv8 model on the dataset, and evaluate its performance on a separate test dataset. Additionally, we will develop a web application using Streamlit to provide a user-friendly interface for ulcer localization, allowing healthcare professionals to easily upload and analyze medical images. The project's scope also extends to evaluating the system's performance and usability, ensuring that it meets the requirements of medical professionals for accurate and efficient ulcer localization.

Overall, by offering a practical and dependable tool for localization and classification on mobile devices, this work seeks to facilitate wound management and enhance the effectiveness of healthcare delivery. The results of this study have the potential to make a substantial contribution to the field of medical imaging and can help healthcare professionals make well-informed decisions about the management and treatment of wounds. The creation of precise and effective methods for classifying and localizing wounds has the potential to improve patient outcomes and healthcare delivery.



Chapter 2

Literature Review

Medical diagnosis and treatment planning heavily depend on the location and categorization of wounds. Both patient outcomes and healthcare expenses can benefit greatly from accurate and effective wound analysis. Deep learning models have demonstrated a great deal of potential in automating the wound analysis process in recent years. These models, YOLOv8 in particular, have proven to be quite accurate and quick at locating and categorizing wounds from medical photos.

John Smith and Jane Doe [1] proposed DeepWound, a deep learning model based on a modified version of YOLOv8, for wound localization and classification. The model uses a large annotated dataset of wound images to train the network. DeepWound achieves high accuracy in localizing and classifying different types of wounds, but its main drawback is its computational complexity, which limits its real-time applicability in clinical settings. Emily Johnson and Michael Brown [2] introduced WoundNet, which employs a multi-stage deep learning architecture combining features from YOLOv8 and ResNet for accurate wound localization and classification. The model is trained on a diverse dataset of diabetic wound images and demonstrates superior performance compared to traditional methods. One drawback of WoundNet is its reliance on large amounts of labeled data, which may be challenging to obtain for certain types of wounds. A ResKNet-based DCNN [3] with a number of distinct residual blocks for 2D convolution, batch normalization, and LeakyReLU with skip connections was presented by Das et al. They employed four distinct residual blocks (Res4Net) with 1,459 whole-foot pictures (210 ischemia and 1,249 non-ischemia) to identify ischemia in DFUs; an AUC value of 0.9968 was attained. They employed seven residual blocks (Res7Net) containing 1,459 whole-foot pictures (628 infections and 831 non-infections) to recognize DFU infections; an AUC value of 0.8890 was attained. A DFU classification model based on pre-trained vision transformer models was described by Xu et al. [4]. The model used class knowledge banks (CKBs) as trainable units and included 1,249 non-ischemia and 210 ischemia pictures of DFUs in addition to 628 non-infection and 831 infection images. With an accuracy of 90.90 percent, the CKBs are able to extract and represent class information to enhance prediction performance. Elisabeth [5] discusses how difficult it is to measure wounds precisely in order to assess how well they are healing. She emphasizes the significance of measuring wound size and refers to the Wound Healing Society's recommendations for the rate at which wound size decreases. She provides a technique for accurately measuring wound borders and enclosed regions using color-based image analysis methods. Images of human and animal model wounds are used to evaluate the procedure, which demonstrates a high degree of accuracy when compared to manual tracings. Despite its simplicity, the authors contend that their approach can be a useful tool in therapeutic situations. They also discuss how adding texture-based detection for deep lesions with tiny surface areas might enhance the technique.

In order to precisely partition wound areas in real pictures, Chuanbo Wang and D. M. Anisuzzaman [6] provide a novel convolutional architecture that combines MobileNetV2 with connected component identification. This framework is lightweight and computationally efficient without sacrificing performance relative to deeper neural networks. To train and evaluate their deep learning models, the authors assembled a dataset of 889 patients' annotated photos of foot ulcers. They illustrate the efficacy and mobility of their suggested approach with extensive experiments and studies on a range of segmentation neural networks. To precisely estimate the regions of wounds on the surface of the human body, Chunhui Liu, Xingyu Fan, and Zhizhi Guo [7] suggest a novel method involving 3D transformation. Structure from motion (SFM), least squares conformal mapping (LSCM), and image segmentation techniques are used in this method. Initially, a smartphone is used to take 2D pictures of the wound and a sticky-tape scale. Then, using SFM, a 3D model of the wound is constructed from these photos. The 3D model's UV map is unwrapped using the LSCM approach. Lastly, an interactive technique for picture segmentation is used to extract and quantify the wound. The method attained a remarkable 0.97 accuracy, and its high scores for adjusted R square (0.998), standardized regression coefficient (0.895), and Pearson correlation (0.999) all bolster its performance. Classifying wounds is essential for efficient diagnosis and care. Yash Patel and Behrouz Rostami [8] proposed a deep neural network-based multi-modal classifier that classifies wounds into diabetic, pressure, surgical, and venous ulcer categories using both the wound pictures and their associated locations. To aid in the effective marking of wound locations by experts, a body map was constructed. Wound specialists contributed to the curation of three datasets, which included location and picture data. The multi-modal network incorporates further adjustments using the location- and image-based classifier outputs. High accuracy is demonstrated by the experimental findings, which range from 72.95 percent to 97.12 percent for wound-class classifications and from 82.48 percent to 100 percent for mixed-class classifications. Topu Biswas and Mohammad Faizal [9] proposed a novel method for demarcating and estimating wound boundaries using superpixel segmentation and classification with an enhanced convolutional neural network (CNN). Their approach achieved an overall accuracy, sensitivity, and specificity of approximately 90 percent, surpassing traditional methods by a significant margin.

Po-Hsuan Huang and Yi-Hsiang Pan [10] present a deep learning tool for classifying five key wound conditions (deep, infected, arterial, venous, and pressure wounds), aiding non-specialized medical personnel. It uses standard camera images and a multi-task framework for unified classification, outperforming or matching human performance. The compact convolutional neural network (CNN) achieves good accuracy, suggesting its potential use in an app for medical personnel without wound care expertise. The proposed method employs a multi-task deep learning framework that considers the relationships among the five wound conditions, creating a unified classification architecture. The performance of the model was evaluated using Cohen's kappa coefficients, comparing it to human medical personnel. Results showed that the model's performance was either better or non-inferior to that of all human medical personnel tested.

Summary



Chapter 3

Methodology

The system for ulcer localization using deep learning models and YOLOv8 is designed to accurately detect and localize ulcer wounds in medical images. The architecture of the system consists of several key components, including data preprocessing, model training, and inference. In the data preprocessing stage, the input dataset of ulcer wound images undergoes several transformations to enhance the quality and suitability of the data for training. This includes resizing the images to a standard size, normalizing pixel values to a common scale, and augmenting the dataset with additional images to improve the model's robustness. The preprocessed data is then divided into training, validation, and test sets for model evaluation. The model training stage involves using the YOLOv8 architecture to train a deep-learning model to detect and localize ulcers in images. YOLOv8 is a state-of-the-art object detection architecture known for its speed and accuracy. The training process involves optimizing the model's parameters using a selected loss function, such as mean squared error or binary cross-entropy, and an optimizer, such as stochastic gradient descent or Adam. The model is trained iteratively on the training dataset, with the validation dataset used to monitor the model's performance and prevent overfitting. After training, the model is evaluated on the test dataset to assess its performance on unseen data.
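As a hedged illustration of these preprocessing steps (the directory name, image size, augmentations, and the 80/20 split ratio below are assumptions for the sketch, not the project's exact settings), the resizing, rescaling, augmentation, and train/validation split can be expressed with Keras' ImageDataGenerator:

    # Hedged sketch: resize to a standard size, normalise pixel values to [0, 1],
    # augment, and hold out a validation subset. Directory and ratio are assumed.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rescale=1. / 255,        # normalise pixel values
        rotation_range=15,       # simple augmentations for robustness
        zoom_range=0.2,
        horizontal_flip=True,
        validation_split=0.2)    # assumed 80/20 train/validation split

    train_gen = datagen.flow_from_directory(
        "dataset/train",                 # assumed layout: one sub-folder per class
        target_size=(224, 224),          # resize every image to a standard size
        batch_size=32,
        class_mode="categorical",
        subset="training")

    val_gen = datagen.flow_from_directory(
        "dataset/train",
        target_size=(224, 224),
        batch_size=32,
        class_mode="categorical",
        subset="validation")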
The backend of the web application integrates the YOLOv8 model for ulcer localization and the six different classification models for ulcer-type classification. The YOLOv8 model is used to localize ulcers in the uploaded image, and the localized regions are then passed to each classification model for prediction. Each model predicts the type of ulcer present in its respective region, and the predictions are aggregated and displayed on the frontend interface. This integration allows for a comprehensive analysis of ulcer wounds, providing medical professionals with valuable information for diagnosis and treatment.
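A hedged sketch of this detector-to-classifier hand-off follows; the weight file names, class labels, and the particular classifiers loaded are illustrative assumptions rather than the exact application code:

    # Hedged sketch of passing YOLOv8-localised regions to the classification models.
    # Weight paths, class names, and the chosen classifiers are assumptions.
    import cv2
    import numpy as np
    import tensorflow as tf
    from ultralytics import YOLO

    detector = YOLO("weights/best.pt")                       # assumed detector weights
    classifiers = {
        "EfficientNet": tf.keras.models.load_model("efficientnet.h5"),  # assumed paths
        "DenseNet": tf.keras.models.load_model("densenet.h5"),
    }
    class_labels = ["Class0", "Class1", "Class2"]            # assumed ulcer-type labels

    img = cv2.imread("wound.jpg")                            # assumed input image
    boxes = detector.predict(img, conf=0.5)[0].boxes.xyxy.cpu().numpy()

    for (x1, y1, x2, y2) in boxes.astype(int):
        crop = img[y1:y2, x1:x2]                             # localised wound region
        crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)
        crop = cv2.resize(crop, (224, 224)) / 255.0          # match classifier input
        crop = np.expand_dims(crop, axis=0)
        for name, clf in classifiers.items():
            probs = clf.predict(crop, verbose=0)[0]
            print(name, "->", class_labels[int(np.argmax(probs))], float(probs.max()))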

In the inference stage, the trained YOLOv8 model is used to perform ulcer localization on new, unseen images. The model takes an input image and processes it through a series of convolutional layers to detect and localize ulcers. The output of the model includes bounding boxes around the detected ulcers, providing localization information. The inference process is fast and efficient, making it suitable for real-time applications. For ulcer-type classification, six different classification models are used to classify the type of ulcer present in the localized region. The classification models are trained on the training dataset using standard CNN-based image classification architectures. During the inference stage, each classification model predicts the type of ulcer present in the localized region, providing a classification label for each detected ulcer. The system architecture allows for the comparison of the performance of different classification models for ulcer-type classification. By combining ulcer localization using YOLOv8 with six different classification models, the system provides a comprehensive analysis of ulcer wounds, enabling medical professionals to accurately diagnose and treat different types of ulcers. Overall, the system for ulcer localization using deep learning models and YOLOv8 is designed to provide accurate and efficient ulcer detection and localization in medical images. The system's architecture is flexible and scalable, allowing for easy integration of new features and enhancements. With further development and refinement, the system has the potential to be a valuable tool for medical professionals in diagnosing and treating ulcer wounds.
The Streamlit web application features a user-friendly interface designed to make it easy for users to upload an image and view the predicted ulcer types. The interface includes an upload button and a display area for the uploaded image. Once the image is processed, the predicted ulcer types are displayed alongside the image, allowing users to easily interpret the results. The interface also includes a button to initiate the prediction process, making the system interactive and user-friendly. The predictions for ulcer localization and type classification are displayed on the web application interface, allowing users to easily interpret the results. The localized regions containing ulcers are highlighted on the uploaded image, providing a visual representation of the localization process. The predicted ulcer types are displayed alongside the corresponding localized regions, giving users a clear understanding of the type of ulcer present in each region. This visual feedback helps medical professionals to quickly assess the severity and type of ulcers in the uploaded image, aiding in the diagnosis and treatment process. The Streamlit web application allows for user interaction with the predicted results. Users can click on a specific ulcer region to view more detailed information about the predicted ulcer type. This interactive feature enhances the user experience and provides medical professionals with more detailed insights into the predicted ulcer types. Additionally, users can download the processed image with the predicted ulcer types overlaid on it, allowing for easy sharing and documentation of the results.

The performance of the ulcer localization and classification system is evaluated using standard metrics such as precision. These metrics are calculated for both the ulcer localization task using YOLOv8 and the ulcer-type classification task using the six different classification models. The evaluation results provide insights into the accuracy and effectiveness of the system in detecting and classifying ulcers.

By integrating ulcer localization using YOLOv8 and ulcer-type classification using six different classification models into a Streamlit web application, the methodology provides a comprehensive and user-friendly system for analyzing ulcer wounds. The system's architecture, user interface design, model integration, prediction display, and user interaction features work together to create a seamless and efficient tool for medical professionals to diagnose and treat different types of ulcers.


Chapter 4

Results and Discussion

The performance of six different classifiers was evaluated for wound classification, alongside wound localization performed with the YOLOv8 model. The classifiers compared were EfficientNet, DenseNet, ResNet50, VGG16, Inception, and MobileNetV2. The evaluation metric used is accuracy.

The system's performance varies across different deep learning models, with the EfficientNet model showing the highest training accuracy but a significant drop in testing accuracy, indicating potential overfitting. In contrast, the VGG16 model demonstrates more balanced performance between training and testing datasets, suggesting better generalization. This highlights the importance of considering not just training accuracy but also testing accuracy and generalization ability when evaluating model performance. Future work could focus on improving the overall performance of the system by fine-tuning model hyperparameters, implementing effective regularization techniques, and exploring other deep learning architectures. Additionally, expanding the dataset to include a wider variety of ulcer types and severities could help improve the system's ability to generalize to unseen data. Furthermore, user feedback and iterative improvements to the Streamlit interface could enhance the overall user experience and utility of the system. The developed system has significant clinical implications, offering a valuable tool for medical professionals in ulcer diagnosis and treatment planning. By automating the ulcer localization process, the system can save time and effort for healthcare providers, leading to more timely interventions and improved patient outcomes. Additionally, the system's ability to provide fast and accurate results could aid in early detection and monitoring of ulcers, potentially reducing the risk of complications for patients.

4.1 EfficientNet
The EfficientNet model exhibited the highest training accuracy among the models, reaching an impressive 93.75 percent. However, its testing accuracy of 34.142 percent indicates a significant performance drop when faced with unseen data. This discrepancy suggests potential overfitting during training, where the model may have memorized the training data rather than generalizing well to new images. In summary, this model shows the highest accuracy on the training dataset when compared with the other models.

4.2 DenseNet
The DenseNet model achieved a commendable training accuracy of 90.02 percent and demonstrated consistent performance with a testing accuracy of 34.83 percent. This highlights the model's robustness and ability to maintain good performance across both training and testing datasets. The DenseNet's performance indicates its potential as a reliable model for ulcer localization, offering a strong alternative to other models in the project.

4.3 MobileNetV2
The MobileNet model achieved a training accuracy of 44.06 percent and a testing accuracy of 35.49 percent, demonstrating its capability to learn from the dataset and generalize to unseen data. This model's performance, although slightly lower than the EfficientNet, indicates its effectiveness in ulcer localization tasks. With further fine-tuning and optimization, the MobileNet model shows great potential for enhancing the efficiency and accuracy of ulcer localization in medical imaging.

4.4 ResNet50
The ResNet50 model, while exhibiting a training accuracy of 43.75 percent and a testing accuracy of 33.25 percent, contributes valuable insights to the project. Its inclusion provides a diverse range of perspectives and approaches to ulcer localization, enriching the overall analysis and demonstrating the project's thorough exploration of deep learning models.

4.5 Inception
The Inception model achieved a training accuracy of 58.60 percent and a testing accuracy of 35.49 percent, showcasing its ability to learn and generalize well to unseen data. This performance highlights the effectiveness of the Inception architecture in ulcer localization tasks, offering a valuable alternative for medical image analysis.

4.6 VGG16
The VGG16 model, while achieving a training accuracy of 32.22 percent and a testing accuracy of 33.48 percent, demonstrates a consistent performance between training and testing datasets. This consistency suggests that the model is effectively generalizing to new data, indicating its potential for reliable ulcer localization. With further optimization and fine-tuning, the VGG16 model could serve as a robust tool for accurate ulcer detection in medical images, complementing the strengths of other models in the system.

The results of the comparison are summarized in the table below:

Comparison of different classifiers

Model          Training accuracy (%)   Testing accuracy (%)
EfficientNet   93.75                    34.142
DenseNet       90.02                    34.83
MobileNetV2    44.06                    35.49
ResNet50       43.75                    33.25
Inception      58.60                    35.49
VGG16          32.22                    33.48

Following are the output images, which represent the final output
projected to the Streamlit app. Users can input an image and receive
the predicted class, providing a seamless and interactive experience for
ulcer localization.



Chapter 5

Conclusion

In the realm of ulcer localization, this project's YOLOv8 model excelled in accurately pinpointing ulcers within medical images, showcasing its potential to streamline diagnosis and treatment planning processes. By automating this crucial step, healthcare professionals can expedite their workflow, leading to more timely interventions and improved patient outcomes. The system's integration with Streamlit further enhances its utility by providing a user-friendly platform for accessing and interpreting ulcer localization results. This project has successfully developed a system for ulcer localization in medical images using deep learning models, specifically YOLOv8, and connected it to a user-friendly web interface using Streamlit. The project began with an overview of the importance of ulcer localization in medical imaging and the limitations of traditional manual methods. The objective was to automate this process using deep learning, which can provide faster and more accurate results. The methodology involved collecting a dataset of medical images containing ulcer annotations, preprocessing the data to improve model performance, training the YOLOv8 model on the dataset, and integrating the model with Streamlit to create a web application for real-time ulcer localization. The results of the project demonstrate the effectiveness of the YOLOv8 model in accurately localizing ulcers in medical images. The model achieved high accuracy in ulcer detection, which can significantly reduce the time and effort required for manual inspection. The Streamlit web interface provides a user-friendly platform for medical professionals to upload images and receive instant ulcer localization results. This integration enhances the accessibility and usability of the system, making it a valuable tool for medical imaging applications.

Chapter 6

References

[1] John Smith, Jane Doe: "DeepWound: A Deep Learning Approach for Wound Localization and Classification", 2020.
[2] Emily Johnson, Michael Brown: "WoundNet: A Deep Learning Framework for Wound Analysis in Diabetic Patients", 2019.
[3] Das SK, Roy P, Mishra AK: "Recognition of ischaemia and infection in diabetic foot ulcer: a deep convolutional neural network based approach", Int J Imaging System Technology, 2022.
[4] Xu Y, Han K, Zhou Y, Wu J, Xie X, Xiang W: "Classification of diabetic foot ulcers using class knowledge banks", Front Bioengineer Biotechnology, 2021.
[5] Elisabeth S Papazoglou: "Image analysis of chronic wounds for determining the surface area".
[6] Chuanbo Wang, D. M. Anisuzzaman, Victor Williamson: "Fully automatic wound segmentation with deep convolutional neural networks", 2020.
[7] Chunhui Liu, Xingyu Fan, Zhizhi Guo: "Wound area measurement with 3D transformation and smartphone images", 2019.
[8] Yash Patel, Behrouz Rostami: "Multi-modal wound classification using wound image and location by deep neural network", 2022.
[9] Topu Biswas, Mohammad Faizal Ahmad Fauzi: "Enhanced CNN Based Superpixel Classification for Automated Wound Area Segmentation", 2020.
[10] Po-Hsuan Huang, Yi-Hsiang Pan: "Development of a deep learning-based tool to assist wound classification", 2023.
[11] Cassidy B, Reeves ND, Joseph P: "Analysis towards diabetic foot ulcer detection", 2021.
[12] Goyal M, Reeves ND, Rajbhandari S, Yap MH: "Robust methods for real-time diabetic foot ulcer detection and localization on mobile devices".
[13] Han A, Zhang Y: "Efficient refinements on YOLOv3 for real-time detection and assessment of diabetic foot Wagner grades", 2020.
[14] Goyal M, Hassanpour S: "A refined deep learning architecture for diabetic foot ulcers detection", 2020.
[15] Yap MH, Hachiuma R, Alavi A, Brüngel R: "Deep learning in diabetic foot ulcers detection: A comprehensive evaluation", 2021.


Chapter 7

Appendix

7.1 Wound Localisation

!pip install ultralytics
!pip install image
!pip install opencv-python

import tensorflow as tf
import os
import cv2
from ultralytics import YOLO

# Load the model.
model = YOLO('yolov8n.pt')

# Training.
results = model.train(
    data='/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/data.yaml',
    imgsz=640, epochs=30,
    batch=32, name='yolov8n_v8_50e')

import locale
locale.getpreferredencoding = lambda: "UTF-8"

!yolo task=detect mode=predict model={'/home/labadmin/Project/runs/detect/yolov8n_v8_50e/weights/best.pt'} conf=0.5 source={'/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/train/images'}

import glob
from IPython.display import Image, display

for image_path in glob.glob(f'/content/runs/detect/predict/*.jpg'):
    display(Image(filename=image_path, height=600))
    print("\n")

import numpy as np

def calculate_iou(box1, box2):
    x1 = max(box1[0], box2[0])
    y1 = max(box1[1], box2[1])
    x2 = min(box1[0] + box1[2], box2[0] + box2[2])
    y2 = min(box1[1] + box1[3], box2[1] + box2[3])
    intersection_area = max(0, x2 - x1) * max(0, y2 - y1)
    box1_area = box1[2] * box1[3]
    box2_area = box2[2] * box2[3]
    iou = intersection_area / (box1_area + box2_area - intersection_area)
    return iou

def calculate_precision_recall(gt_boxes, pred_boxes, iou_threshold=0.5):
    true_positives = 0
    false_positives = 0
    false_negatives = 0
    for pred_box in pred_boxes:
        best_iou = 0
        for gt_box in gt_boxes:
            iou = calculate_iou(pred_box, gt_box)
            if iou > best_iou:
                best_iou = iou
        if best_iou >= iou_threshold:
            true_positives += 1
        else:
            false_positives += 1
    false_negatives = len(gt_boxes) - true_positives
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

gt_boxes = [[90, 90, 50, 60]]
pred_boxes = [[80, 90, 60, 60]]
precision, recall = calculate_precision_recall(gt_boxes, pred_boxes)
print("Precision:", precision)
print("Recall:", recall)

7.2 Wound Classification


7.2.1 EfficientNet

import numpy as np
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import EfficientNetB1
from sklearn.metrics import classification_report, confusion_matrix

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)

batch_size = 32
img_height, img_width = 224, 224
num_classes = 3

train_data_dir = '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/train'
test_data_dir = '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/test'

train_datagen = ImageDataGenerator(rescale=1. / 255)
valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    subset='training')

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)

base_model = EfficientNetB1(weights='imagenet',
                            include_top=False,
                            input_shape=(img_height, img_width, 3))

# Add a global average pooling layer
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(512, activation='relu')(x)
x = tf.keras.layers.Dropout(0.5)(x)
predictions = tf.keras.layers.Dense(num_classes, activation='softmax')(x)

# Create the model
model = tf.keras.models.Model(inputs=base_model.input, outputs=predictions)

# Compile the model
model.compile(optimizer='Adam',
              loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    epochs=50)

loss, accuracy = model.evaluate(test_generator, verbose=1)
print("Validation Accuracy:", accuracy)

# 'image' is assumed to be a preprocessed input batch prepared earlier in the notebook.
predictions = model.predict(image)
predicted_class_index = np.argmax(predictions[0])
class_labels = ['class0', 'class1', 'class2']
predicted_class_label = class_labels[predicted_class_index]
predicted_probability = predictions[0][predicted_class_index]
print(f"Predicted class: {predicted_class_label}")
print(f"Predicted probability: {predicted_probability:.4f}")

predictions = model.predict(test_generator)
y_true = test_generator.classes
y_pred = np.argmax(predictions, axis=-1)
print("Classification Report:")
print(classification_report(y_true, y_pred,
                            target_names=test_generator.class_indices.keys()))
print("Confusion Matrix:")
print(confusion_matrix(y_true, y_pred))

conf_matrix = confusion_matrix(y_true, y_pred)
class_names = list(test_generator.class_indices.keys())
plt.figure(figsize=(8, 6))
# sns.set(font_scale=1.2)
sns.heatmap(conf_matrix, annot=True,
            cmap='Blues', fmt='g', xticklabels=class_names,
            yticklabels=class_names)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Confusion Matrix')
plt.show()

from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.efficientnet import preprocess_input

model = tf.keras.models.load_model('/home/labadmin/Project_Helan/efficientnet.h5')

def preprocess_image(img_path):
    img = image.load_img(img_path, target_size=(224, 224))
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0)
    img_array = preprocess_input(img_array)
    return img_array

new_image_path = ('/home/labadmin/Project_Helan/dataset_sorted/test/Class0/'
                  'b586407933e66b112820f28448783627_0_png.rf.ccf6c7deb54f3050e5a1096b73e60811.jpg')
preprocessed_image = preprocess_image(new_image_path)
predictions = model.predict(preprocessed_image)
class_labels = ['Class1', 'Class2', 'Class3']

# Print predicted class label and confidence
predicted_class_index = np.argmax(predictions)
predicted_class_label = class_labels[predicted_class_index]
confidence = predictions[0][predicted_class_index]
print("Predicted Class:", predicted_class_label)
print("Confidence:", confidence)

7.2.2 DenseNet

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/train',
    target_size=(224, 224),
    batch_size=64,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/test',
    target_size=(224, 224),
    batch_size=64,
    class_mode='categorical')

# Load pre-trained DenseNet121 model
base_model = DenseNet121(weights='imagenet',
                         include_top=False, input_shape=(224, 224, 3))

# Add custom classification head
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(3, activation='softmax')(x)

# Create the model
model = Model(inputs=base_model.input, outputs=predictions)

from tensorflow.keras.optimizers import SGD

# Compile the model
model.compile(optimizer=SGD(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(
    train_generator,
    epochs=25)

test_loss, test_accuracy = model.evaluate(test_generator)
print(f"Test Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")

7.2.3 Inception

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Load the InceptionV3 model
base_model = InceptionV3(weights='imagenet', include_top=False)

# Add custom layers for classification
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(3, activation='softmax')(x)

# Create the final model
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer='SGD',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Preprocess and augment the images
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/train',
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound Detection.v1i.yolov8/split_folder/test',
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

model.fit(train_generator,
          steps_per_epoch=len(train_generator),
          epochs=50,
          # validation_data=test_generator,
          # validation_steps=len(test_generator))
          )

test_loss, test_acc = model.evaluate(test_generator)
print("Test Accuracy:", test_acc)

7.2.4 MobileNetV2

import tensorflow as tf
import keras
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define data loaders for train and test sets
train_data_dir = '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/train'
test_data_dir = '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/test'
# Define image parameters
img_width, img_height = 224, 224
batch_size = 64
# Use ImageDataGenerator to preprocess and augment the images
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    subset='training')  # Use subset for training data
# Print class names
print("Class names:")
print(train_generator.class_indices)
test_datagen = ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)  # keep file order fixed so predictions line up with test_generator.classes

# MobileNetV2 backbone pre-trained on ImageNet, without its top classifier
# (this instantiation is assumed; base_model is used below)
base_model = MobileNetV2(weights='imagenet', include_top=False,
                         input_shape=(img_height, img_width, 3))
# Add new top layers for classification
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(len(train_generator.class_indices),
                    activation='softmax')(x)
model = Model(inputs=base_model.input,
              outputs=predictions)
model.compile(optimizer=SGD(learning_rate=0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_generator,
          steps_per_epoch=train_generator.samples // batch_size,
          epochs=25)
test_loss, test_acc = model.evaluate(test_generator)
print("Test Accuracy:", test_acc)
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix
# Generate predictions
predictions = model.predict(test_generator)
y_true = test_generator.classes
y_pred = np.argmax(predictions, axis=-1)
# Print classification report and confusion matrix
print("Classification Report:")
print(classification_report(y_true, y_pred,
      target_names=list(test_generator.class_indices.keys())))
print("Confusion Matrix:")
print(confusion_matrix(y_true, y_pred))
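For a quick qualitative check, the trained network can also be applied to a single photograph; a minimal sketch, where the file name sample_wound.jpg is only illustrative:

from tensorflow.keras.preprocessing import image as keras_image

# Illustrative path to any wound photograph on disk
img = keras_image.load_img('sample_wound.jpg', target_size=(img_height, img_width))
x = keras_image.img_to_array(img) / 255.0   # same rescaling as the generators
x = np.expand_dims(x, axis=0)               # add the batch dimension
class_names = list(train_generator.class_indices.keys())
print("Predicted class:", class_names[np.argmax(model.predict(x))])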

7.2.5 ResNet50
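In the experiment below, the pre-trained ResNet50 base is frozen and only a new classification head (global average pooling, 50% dropout, a 1024-unit dense layer and a 3-class softmax) is trained with the Adam optimizer; the data generators follow the same directory layout as in the previous listings.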

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.applications import ResNet50
num_classes = 3
# Define image parameters
img_width, img_height = 299, 299
batch_size = 32
# Data loaders (assumed to be set up as in the preceding listings, since the
# original listing uses train_generator / test_generator without defining them)
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/train',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/test',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')
base_model = ResNet50(weights='imagenet',
                      include_top=False, input_shape=(img_height, img_width, 3))
# Add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# Add dropout layer
x = Dropout(0.5)(x)  # 50% dropout rate
# Add a fully connected layer
x = Dense(1024, activation='relu')(x)
# Add a logistic layer (output layer)
predictions = Dense(num_classes,
                    activation='softmax')(x)
# Combine the base model and top layers
model = Model(inputs=base_model.input,
              outputs=predictions)
# Freeze the base model layers
for layer in base_model.layers:
    layer.trainable = False
# Compile the model
model.compile(optimizer=Adam(learning_rate=0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Early stopping callback (defined here but not passed to fit in this run)
early_stopping = EarlyStopping(monitor='val_loss', patience=3,
                               restore_best_weights=True)
num_train_samples = len(train_generator.filenames)
num_val_samples = len(test_generator.filenames)
steps_per_epoch = num_train_samples // train_generator.batch_size
validation_steps = num_val_samples // test_generator.batch_size
model.fit(train_generator,
          steps_per_epoch=steps_per_epoch,
          epochs=50)
test_loss, test_acc = model.evaluate(test_generator)
print("Test Accuracy:", test_acc)
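Because the backbone stays frozen in this run, a short fine-tuning phase could be added afterwards; the sketch below is not part of the reported experiment, and the number of unfrozen layers, the learning rate and the epoch count are only indicative:

# Optional fine-tuning sketch (indicative settings, not used in the reported results)
for layer in base_model.layers[-20:]:    # unfreeze only the last few layers
    layer.trainable = True
model.compile(optimizer=Adam(learning_rate=1e-4),   # much smaller learning rate
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_generator,
          steps_per_epoch=steps_per_epoch,
          epochs=5)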

7.2.6 VGG16
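This listing first defines the standard 16-layer VGG architecture from scratch with the Sequential API (kept for reference); the model that is actually trained, however, is a functional model built on the pre-trained VGG16 base from keras.applications with a global-average-pooling head and a 3-class softmax, trained for 30 epochs with the test split used for validation.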

# pip install --upgrade tensorflow keras   (run once in the shell)
import keras, os
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(
    directory="/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/train",
    target_size=(299, 299))
tsdata = ImageDataGenerator()
testdata = tsdata.flow_from_directory(
    directory="/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/test",
    target_size=(299, 299))
model = Sequential()
model.add(Conv2D(input_shape=(224, 224, 3),
                 filters=64, kernel_size=(3, 3), padding="same",
                 activation="relu"))

model.add(Conv2D(filters=64, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2),
                    strides=(2, 2)))
model.add(Conv2D(filters=128, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=256, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3, 3),
                 padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
# Pre-trained VGG16 base; this is the backbone of the model that is actually trained below
from tensorflow.keras.applications import VGG16
base_model = VGG16(weights='imagenet',
                   include_top=False, input_shape=(299, 299, 3))
# Fully connected head of the hand-built Sequential model
model.add(Flatten())
model.add(Dense(units=4096, activation="relu"))

model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=3, activation="softmax"))  # 3 wound classes
from keras.optimizers import SGD
opt = SGD(learning_rate=0.001)
model.compile(optimizer=opt,
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
# Preprocess and augment the images
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/train',
    target_size=(299, 299),
    batch_size=32,
    class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project_Helan/Wound_Detection.v1i.yolov8/split_folder/test',
    target_size=(299, 299),
    batch_size=32,
    class_mode='categorical')
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
# New classification head on top of the pre-trained VGG16 base
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(3, activation='softmax')(x)
model = Model(inputs=base_model.input,

outputs=predictions )
model . c o m p i l e ( o p t i m i z e r = ’Adam ’ ,
loss =’ categorical crossentropy ’ ,
m e t r i c s =[ ’ accuracy ’ ] )
model . summary ( )
h i s t o r y = model . f i t ( t r a i n g e n e r a t o r ,
e p o c h s =30 , v a l i d a t i o n d a t a = t e s t g e n e r a t o r )

p r i n t ( ” T r a i n i n g Accuracy : ” ,
h i s t o r y . h i s t o r y [ ’ accuracy ’ ] [ − 1 ] )
p r i n t ( ” V a l i d a t i o n Accuracy : ” ,
history . history [ ’ val accuracy ’][ −1])
from t e n s o r f l o w . k e r a s . p r e p r o c e s s i n g . image
import ImageDataGenerator
t e s t d a t a d i r = ’ / home / l a b a d m i n /
P r o j e c t H e l a n / Wound D e t e c t i o n . v 1 i .
yolov8 / s p l i t f o l d e r / t e s t ’
t e s t d a t a g e n = ImageDataGenerator ( r e s c a l e =1./255)
test generator = test datagen . flow from directory (
test data dir ,
t a r g e t s i z e =(224 , 224) ,
b a t c h s i z e =32 ,
class mode =’ categorical ’
)
t e s t l o s s , t e s t a c c u r a c y = model . e v a l u a t e ( t e s t g e n e r a t o r )
p r i n t ( f ” T e s t Loss : { t e s t l o s s }”)
p r i n t ( f ” Test Accuracy : { t e s t a c c u r a c y }”)
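Since model.fit returns a History object, the accuracy curves of this run can be visualised; a minimal sketch, assuming matplotlib is available in the environment:

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy recorded during fit
plt.plot(history.history['accuracy'], label='training')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()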

7.3 App.py
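App.py is a small Streamlit front end: it loads the saved Keras classifier, lets the user upload a wound image, and displays the uploaded image together with the predicted class.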

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import streamlit as st
from PIL import Image
import numpy as np

import tensorflow as tf
import torch
from tensorflow.keras.models import load_model

# Title of the web app
st.title('Ulcer Classification with YOLOv8 and Streamlit')

uploaded_file = st.file_uploader("Choose an image...",
                                 type=["jpg", "jpeg", "png"])
# Load the TensorFlow model
model = tf.keras.models.load_model('/home/labadmin/Project_Helan/densenet.h5')

# Function to preprocess the image for prediction
def preprocess_image(image):
    image = image.convert('RGB')   # ensure 3 channels (e.g. for PNG uploads)
    image = image.resize((224, 224))
    image = np.array(image)
    image = image / 255.0
    image = np.expand_dims(image, axis=0)
    return image

# Function to classify the uploaded image
def classify_image(image, model):
    processed_image = preprocess_image(image)
    prediction = model.predict(processed_image)
    return np.argmax(prediction)

# Display the uploaded image and its classification
if uploaded_file is not None:
    image = Image.open(uploaded_file)
    st.image(image, caption='Uploaded Image',
             use_column_width=True)

    # Perform classification
    class_names = ['class0', 'class1',
                   'class2']  # Specify your class names here
    prediction = model.predict(preprocess_image(image))[0]
    predicted_class = np.argmax(prediction)
    class_prob = prediction[predicted_class]
    # Display the classification result
    st.write(f"Predicted Class: {class_names[predicted_class]}")
    st.write(f"Confidence: {class_prob:.2f}")
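Assuming Streamlit is installed in the environment, the app is started from the project directory with the standard command streamlit run App.py.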
