Miniproject Report 1
May 2024
DECLARATION
The undersigned hereby declares that the mini project report 'Wound Localisation and Classification Using Deep Learning', submitted in partial fulfillment of the requirements for the award of the degree of Master of Technology of the APJ Abdul Kalam Technological University, Kerala, is a bonafide work done by me under the supervision of Associate Prof. Thushara A. This submission represents the experimental results in my own words, and where ideas or words of others have been included, I have adequately and accurately cited and referenced the original sources. I also declare that I have adhered to the ethics of academic honesty and integrity and have not misrepresented or fabricated any data, idea, fact, or source in my submission. I understand that any violation of the above will be cause for disciplinary action by the institute or the University and can also invoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been obtained. This report has not previously formed the basis for the award of any degree, diploma, or similar title of any other University.
Place: Kollam
Date: 07 May 2024
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
THANGAL KUNJU MUSALIAR COLLEGE OF ENGINEERING
KOLLAM
CERTIFICATE
This is to certify that the mini project report entitled "WOUND LOCALISATION AND CLASSIFICATION USING DEEP LEARNING", submitted by Helan Mariyam Aipe (M23CSCS05) to APJ Abdul Kalam Technological University in partial fulfillment of the requirements for the award of the Degree of Master of Technology in Computer Science and Engineering, is a bonafide work carried out by her under our guidance and supervision.
ACKNOWLEDGEMENTS
First and foremost, I sincerely thank the Almighty GOD, who is most beneficent and merciful, for giving us the knowledge and courage to complete the project successfully.
ABSTRACT
Contents

Declaration
Acknowledgements
Abstract
Contents

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Scope
2 Literature Review
3 Methodology
4 Results and Discussion
5 Conclusion
6 References
7 Appendix
  7.1 Wound Localisation
  7.2 Wound Classification
    7.2.1 EfficientNet
    7.2.2 DenseNet
    7.2.3 Inception
    7.2.4 MobileNetV2
    7.2.5 ResNet50
    7.2.6 VGG16
  7.3 App.py
Introduction
1.1 Overview
A vital component of healthcare is wound management, especially in situations where patients need constant observation and care for a variety of wounds, such as diabetic wounds, pressure wounds, and ulcers. Precise and efficient localization and categorization of these wounds are necessary for effective treatment planning, assessment, and patient progress tracking. Deep learning techniques are needed to create automated solutions for wound localization and classification, as traditional methods are prone to errors and time-consuming. Healthcare must prioritize wound care, particularly when patients are bedridden or have chronic illnesses such as diabetes that make them more susceptible to pressure ulcers. To prevent complications and encourage healing, these wounds require close observation and care. However, manually identifying and classifying wounds is tedious, error-prone, and time-consuming, which highlights the need for automated solutions.

Deep learning has proved to be a potent tool in medical image analysis, making it possible to create automated systems for image recognition, localization, and classification. Medical imaging plays a pivotal role in wound management, offering insights into wound characteristics, progression, and response to treatment. Localization and classification are fundamental tasks in wound analysis, aiding healthcare providers in devising effective treatment plans and evaluating patient outcomes.
Automated methods using deep learning have revolutionized these tasks, providing efficient and accurate alternatives to manual assessment. By applying these techniques, we aim to create a system that can accurately detect and localize
ulcers, thereby aiding healthcare professionals in diagnosing and treating patients more efficiently.

The methodology of this project involves several key steps. First, a dataset of medical images containing ulcer annotations is collected. These images are preprocessed, including resizing and augmentation, to improve the performance of the YOLOv8 model. The YOLOv8 model is then trained on this preprocessed dataset, utilizing its state-of-the-art object detection capabilities to learn to recognize ulcers in the images. Once trained, the model is evaluated on a separate test dataset to assess its accuracy and efficiency. Additionally, a web application is developed using Streamlit to provide a user-friendly interface for uploading and analyzing medical images. This integration allows for real-time ulcer localization, making the system accessible to medical professionals for immediate use in clinical settings.
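As a rough sketch of how such a system is driven in code (the weight file and image names below are placeholders rather than the project's actual files), the Ultralytics YOLOv8 API can load trained detector weights and return bounding boxes for a single photograph:

from ultralytics import YOLO

# Load trained detector weights (file name is a placeholder).
model = YOLO('best.pt')

# Localize wounds in a single photograph; each result carries bounding boxes.
results = model.predict('wound_photo.jpg', conf=0.5)
for box in results[0].boxes:
    print(box.xyxy.tolist(), float(box.conf))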
1.2 Motivation
The motivation behind this project stems from the critical need for efficient and accurate ulcer localization in medical settings. Chronic wounds, such as ulcers, pose significant challenges to healthcare professionals due to their complex nature and the potential for complications. Manual ulcer localization methods are often time-consuming and prone to error, leading to delays in diagnosis and treatment. By leveraging deep learning models, specifically YOLOv8, we aim to revolutionize ulcer localization by providing a faster, more accurate, and automated solution.

Furthermore, the integration of Streamlit into our system adds a layer of accessibility and user-friendliness that is crucial in the medical field. Medical professionals can easily upload and analyze images through a simple web interface, eliminating the need for specialized software or expertise. This project has the potential to significantly improve patient outcomes by enabling quicker and more informed decisions regarding ulcer diagnosis and treatment. Overall, the motivation behind this project lies in addressing a critical need in healthcare through the application of cutting-edge technology.
1.3 Scope
Ulcers are a significant health concern globally, affecting millions of individuals and presenting substantial challenges for healthcare providers. The accurate localization of ulcers in medical images is crucial for effective diagnosis and treatment planning. Traditional methods of ulcer localization rely heavily on manual inspection by trained professionals, which can be time-consuming and prone to errors. The advent of deep learning and computer vision techniques has opened up new possibilities for automating this process, offering the potential for faster and more accurate ulcer localization.

This project aims to leverage deep learning models, specifically YOLOv8, to develop a system for ulcer localization in medical images. The primary scope of this project includes data collection, preprocessing, model training, and evaluation. We will gather a dataset of medical images containing ulcer annotations, preprocess the images to enhance model performance, train the YOLOv8 model on the dataset, and evaluate its performance on a separate test dataset. Additionally, we will develop a web application using Streamlit to provide a user-friendly interface for ulcer localization, allowing healthcare professionals to easily upload and analyze medical images. The project's scope also extends to evaluating the system's performance and usability, ensuring that it meets the requirements of medical professionals for accurate and efficient ulcer localization.

Overall, by offering a practical and dependable tool for localization and classification on mobile devices, this work seeks to facilitate wound management and enhance the effectiveness of healthcare delivery. The results of this study have the potential to make a substantial contribution to the field of medical imaging and can help healthcare professionals make well-informed decisions about the management and treatment of wounds. The creation of precise and effective methods for classifying and localizing wounds has the potential to improve patient outcomes and healthcare delivery.
Literature Review
In one reviewed approach, a camera is used to take 2D pictures of the wound together with a sticky-tape scale. Then, using structure from motion (SfM), a 3D model of the wound is constructed from these photos. The 3D model's UV map is unwrapped using the least-squares conformal mapping (LSCM) approach. Lastly, an interactive technique for picture segmentation is used to extract and quantify the wound. The method attained a remarkable accuracy of 0.97, and its performance is further supported by high scores for adjusted R-square (0.998), standardized regression coefficient (0.895), and Pearson correlation (0.999).

Classifying wounds is essential for efficient diagnosis and care. Yash Patel and Behrouz Rostami [8] proposed a deep neural network-based multi-modal classifier that classifies wounds into diabetic, pressure, surgical, and venous ulcer categories using both the wound pictures and their associated locations. To aid in the effective marking of wound locations by experts, a body map was constructed. Wound specialists contributed to the curation of three datasets, which included location and picture data. The multi-modal network incorporates further adjustments using the location- and image-based classifier outputs. The experimental findings demonstrate high accuracy, ranging from 72.95 percent to 97.12 percent for wound-class classifications and from 82.48 percent to 100 percent for mixed-class classifications. Topu Biswas and Mohammad Faizal [9] proposed a novel method for demarcating and estimating wound boundaries using superpixel segmentation and classification with an enhanced convolutional neural network (CNN). Their approach achieved an overall accuracy, sensitivity, and specificity of approximately 90 percent, surpassing traditional methods by a significant margin.

Po-Hsuan Huang and Yi-Hsiang Pan [10] present a deep learning tool for classifying five key wound conditions (deep, infected, arterial, venous, and pressure wounds), aiding non-specialized medical personnel. It uses standard camera images and a multi-task framework for unified classification, outperforming or matching human performance. The compact convolutional neural network (CNN) achieves good accuracy, suggesting its potential use in an app for medical personnel without wound care expertise. The proposed method employs a multi-task deep learning framework that considers the relationships among the five wound conditions, creating a unified classification architecture. The performance of the model was evaluated using Cohen's kappa coefficient.
Summary
Methodology
The system for ulcer localization using deep learning models and YOLOv8 is designed to accurately detect and localize ulcer wounds in medical images. The architecture of the system consists of several key components, including data preprocessing, model training, and inference.

In the data preprocessing stage, the input dataset of ulcer wound images undergoes several transformations to enhance the quality and suitability of the data for training. This includes resizing the images to a standard size, normalizing pixel values to a common scale, and augmenting the dataset with additional images to improve the model's robustness. The preprocessed data is then divided into training, validation, and test sets for model evaluation.
model evaluation. The model training stage involves using the YOLOv8
architecture to train a deep-learning model to detect and localize ulcers
in images. YOLOv8 is a state-of-the-art object detection architecture
known for its speed and accuracy. The training process involves opti-
mizing the model’s parameters using a selected loss function, such as
mean squared error or binary cross-entropy, and an optimizer, such as
stochastic gradient descent or Adam. The model is trained iteratively
on the training dataset, with the validation dataset used to monitor the
model’s performance and prevent overfitting. After training, the model
is evaluated on the test dataset to assess its performance on unseen data.
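For the detector, the Ultralytics API wraps training and evaluation in a few calls; the snippet below is only a sketch, with the dataset YAML, epoch count, and weight file treated as placeholders rather than the project's final settings:

from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on the annotated wound dataset.
model = YOLO('yolov8n.pt')
model.train(data='data.yaml', imgsz=640, epochs=30, batch=32)

# Validate on the held-out split defined in data.yaml and report detection metrics.
metrics = model.val()
print(metrics.box.map50)  # mean average precision at an IoU threshold of 0.5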
The backend of the web application integrates the YOLOv8 model for ulcer localization and the six different classification models for ulcer-type classification. The YOLOv8 model is used to localize ulcers in the uploaded image, and the localized regions are then passed to each classification model for prediction. Each model predicts the type of ulcer present in its respective region, and the predictions are aggregated and presented to the user through the web interface.
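A rough sketch of this hand-off, assuming a single Keras classifier, placeholder file names, and a 224x224 classifier input, is given below:

import numpy as np
from PIL import Image
from ultralytics import YOLO
from tensorflow.keras.models import load_model

detector = YOLO('best.pt')              # trained YOLOv8 weights (placeholder)
classifier = load_model('densenet.h5')  # one of the trained classifiers (placeholder)
class_names = ['class0', 'class1', 'class2']

image = Image.open('uploaded.jpg')
results = detector.predict(image, conf=0.5)

# Crop each detected region, resize it for the classifier, and predict its type.
for box in results[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    crop = image.crop((x1, y1, x2, y2)).resize((224, 224))
    crop = np.expand_dims(np.array(crop) / 255.0, axis=0)
    probs = classifier.predict(crop)[0]
    print(class_names[int(np.argmax(probs))], float(probs.max()))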
Results and Discussion
4.1 EfficientNet
The EfficientNet model exhibited the highest training accuracy among the models, reaching an impressive 93.75 percent. However, its testing accuracy of 34.142 percent indicates a significant performance drop when faced with unseen data. This discrepancy suggests potential overfitting during training, where the model may have memorized the training data rather than generalizing well to new images. In summary, this model shows the highest accuracy on the training dataset when compared with the other models.
4.2 DenseNet
The DenseNet model achieved a commendable training accuracy of 90.02 percent, but its testing accuracy of 34.83 percent reveals a train-test gap similar to that of EfficientNet, indicating limited generalization to unseen data. Even so, DenseNet remains a strong candidate for ulcer-type classification and offers a useful point of comparison against the other models in the project.
4.3 MobileNetV2
The MobileNetV2 model achieved a training accuracy of 44.06 percent and a testing accuracy of 35.49 percent. Although its training accuracy is far lower than EfficientNet's, its testing accuracy is marginally higher, and the small gap between the two suggests less overfitting. With further fine-tuning and optimization, the MobileNetV2 model shows potential for improving the efficiency and accuracy of ulcer-type classification in medical imaging.
4.4 ResNet50
The ResNet50 model, while exhibiting a training accuracy of 43.75 percent and a testing accuracy of 33.25 percent, contributes valuable insights to the project. Its inclusion provides a diverse range of perspectives and approaches to ulcer-type classification, enriching the overall analysis and demonstrating the project's thorough exploration of deep learning models.
4.5 Inception
The Inception model achieved a training accuracy of 58.60 percent and a testing accuracy of 35.49 percent, matching the best testing accuracy observed in this comparison. This performance highlights the potential of the Inception architecture in ulcer-type classification, offering a valuable alternative for medical image analysis.
4.6 VGG16
The VGG16 model, while achieving a training accuracy of 32.22 percent and a testing accuracy of 33.48 percent, shows consistent performance between the training and testing datasets. This consistency suggests that the model is not overfitting, although its overall accuracy remains low. With further optimization and fine-tuning, the VGG16 model could serve as a useful complement to the other models in the system for ulcer-type classification.
The final outputs are presented through the Streamlit app: users can upload an image and receive the predicted class, providing a seamless and interactive experience for ulcer localization.
Conclusion
References
Appendix
7.1 Wound Localisation

!pip install ultralytics
!pip install image
!pip install opencv-python

import os
import cv2
import tensorflow as tf
from ultralytics import YOLO

# Load the pretrained YOLOv8 nano model.
model = YOLO('yolov8n.pt')

# Training on the annotated wound dataset.
results = model.train(
    data='/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/data.yaml',
    imgsz=640, epochs=30,
    batch=32, name='yolov8n_v8_50e')

import locale
locale.getpreferredencoding = lambda: "UTF-8"

# Run inference with the best trained weights on the training images.
!yolo task=detect mode=predict model='/home/labadmin/Project/runs/detect/yolov8n_v8_50e/weights/best.pt' conf=0.5 source='/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/train/images'

import glob
from IPython.display import Image, display

# Display the predicted images saved by YOLO.
for image_path in glob.glob('/content/runs/detect/predict/*.jpg'):
    display(Image(filename=image_path, height=600))
    print("\n")

import numpy as np

def calculate_iou(box1, box2):
    # Boxes are given as [x, y, width, height].
    x1 = max(box1[0], box2[0])
    y1 = max(box1[1], box2[1])
    x2 = min(box1[0] + box1[2], box2[0] + box2[2])
    y2 = min(box1[1] + box1[3], box2[1] + box2[3])
    intersection_area = max(0, x2 - x1) * max(0, y2 - y1)
    box1_area = box1[2] * box1[3]
    box2_area = box2[2] * box2[3]
    iou = intersection_area / (box1_area + box2_area - intersection_area)
    return iou

def calculate_precision_recall(gt_boxes, pred_boxes, iou_threshold=0.5):
    true_positives = 0
    false_positives = 0
    false_negatives = 0
    for pred_box in pred_boxes:
        best_iou = 0
        for gt_box in gt_boxes:
            iou = calculate_iou(pred_box, gt_box)
            if iou > best_iou:
                best_iou = iou
        if best_iou >= iou_threshold:
            true_positives += 1
        else:
            false_positives += 1
    false_negatives = len(gt_boxes) - true_positives
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

gt_boxes = [[90, 90, 50, 60]]
pred_boxes = [[80, 90, 60, 60]]
precision, recall = calculate_precision_recall(gt_boxes, pred_boxes)
print("Precision:", precision)
print("Recall:", recall)
7.2 Wound Classification

7.2.1 EfficientNet

import numpy as np
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import EfficientNetB1
from sklearn.metrics import classification_report, confusion_matrix

# Allow TensorFlow to grow GPU memory as needed.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)

batch_size = 32
img_height, img_width = 224, 224
num_classes = 3

train_data_dir = '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/train'
test_data_dir = '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test'

train_datagen = ImageDataGenerator(rescale=1./255)
valid_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    subset='training')

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)

# Load EfficientNetB1 pretrained on ImageNet without its top classifier.
base_model = EfficientNetB1(weights='imagenet',
    include_top=False,
    input_shape=(img_height, img_width, 3))

# Add a global average pooling layer and a small classification head.
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(512, activation='relu')(x)
x = tf.keras.layers.Dropout(0.5)(x)
predictions = tf.keras.layers.Dense(num_classes, activation='softmax')(x)

# Create the model
model = tf.keras.models.Model(inputs=base_model.input, outputs=predictions)

# Compile the model
model.compile(optimizer='Adam',
    loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    epochs=50)

loss, accuracy = model.evaluate(test_generator, verbose=1)
print("Validation Accuracy:", accuracy)

# Predict the class of a single image; 'image' is assumed to be a
# preprocessed array of shape (1, img_height, img_width, 3).
predictions = model.predict(image)
predicted_class_index = np.argmax(predictions[0])
class_labels = ['class0', 'class1', 'class2']
predicted_class_label = class_labels[predicted_class_index]
predicted_probability = predictions[0][predicted_class_index]
print(f"Predicted class: {predicted_class_label}")
print(f"Predicted probability: {predicted_probability:.4f}")

# Classification report and confusion matrix on the test set.
predictions = model.predict(test_generator)
y_true = test_generator.classes
y_pred = np.argmax(predictions, axis=-1)
print("Classification Report:")
print(classification_report(y_true, y_pred,
    target_names=list(test_generator.class_indices.keys())))
print("Confusion Matrix:")
print(confusion_matrix(y_true, y_pred))

conf_matrix = confusion_matrix(y_true, y_pred)
class_names = list(test_generator.class_indices.keys())
plt.figure(figsize=(8, 6))
# sns.set(font_scale=1.2)
sns.heatmap(conf_matrix, annot=True,
    cmap='Blues', fmt='g', xticklabels=class_names,
    yticklabels=class_names)
7.2.2 DenseNet
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/train',
    target_size=(224, 224),
    batch_size=64,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test',
    target_size=(224, 224),
    batch_size=64,
    class_mode='categorical')

# Load pre-trained DenseNet121 model
base_model = DenseNet121(weights='imagenet',
    include_top=False, input_shape=(224, 224, 3))

# Add custom classification head
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(3, activation='softmax')(x)

# Create the model
model = Model(inputs=base_model.input, outputs=predictions)

from tensorflow.keras.optimizers import SGD

# Compile the model
model.compile(optimizer=SGD(learning_rate=0.001),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

# Train the model
history = model.fit(
    train_generator,
    epochs=25)

test_loss, test_accuracy = model.evaluate(test_generator)
print(f"Test Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")
7.2.3 Inception
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Load the InceptionV3 model
base_model = InceptionV3(weights='imagenet', include_top=False)

# Add custom layers for classification
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(3, activation='softmax')(x)

# Create the final model
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze the pretrained base layers.
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer='SGD',
    loss='categorical_crossentropy',
    metrics=['accuracy'])

# Preprocess and augment the images
train_datagen = ImageDataGenerator(rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/train',
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test',
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

model.fit(train_generator,
    steps_per_epoch=len(train_generator),
    epochs=50,
    # validation_data=test_generator,
    # validation_steps=len(test_generator)
    )

test_loss, test_acc = model.evaluate(test_generator)
print("Test Accuracy:", test_acc)
7.2.4 MobileNetV2
import tensorflow as tf
import keras
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define data loaders for train and test sets
train_data_dir = '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/train'
test_data_dir = '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test'

# Define image parameters
img_width, img_height = 224, 224
batch_size = 64

# Use ImageDataGenerator to preprocess the images
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    subset='training')  # Use subset for training data

# Print class names
print("Class names:")
print(train_generator.class_indices)

test_datagen = ImageDataGenerator(rescale=1./255)

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')
# The MobileNetV2 classification head, compilation, and training follow the
# same pattern as the other backbones in this appendix.
7.2.5 ResNet50
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.applications import ResNet50

num_classes = 3

# Define image parameters
img_width, img_height = 299, 299
batch_size = 32

base_model = ResNet50(weights='imagenet',
    include_top=False, input_shape=(img_height, img_width, 3))

# Add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# Add dropout layer
x = Dropout(0.5)(x)  # 50% dropout rate
# Add a fully connected layer
x = Dense(1024, activation='relu')(x)
# Add a logistic layer (output layer)
predictions = Dense(num_classes, activation='softmax')(x)

# Combine the base model and top layers
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze the base model layers
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.01),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

early_stopping = EarlyStopping(monitor='val_loss', patience=3,
    restore_best_weights=True)

# train_generator and test_generator are created as in the previous sections.
num_train_samples = len(train_generator.filenames)
num_val_samples = len(test_generator.filenames)
steps_per_epoch = num_train_samples // train_generator.batch_size
validation_steps = num_val_samples // test_generator.batch_size

model.fit(train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=50)

test_loss, test_acc = model.evaluate(test_generator)
print("Test Accuracy:", test_acc)
7.2.6 VGG16
!pip install --upgrade tensorflow keras

import os
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
from tensorflow.keras.preprocessing.image import ImageDataGenerator

trdata = ImageDataGenerator(rescale=1./255)
traindata = trdata.flow_from_directory(
    directory="/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/train",
    target_size=(224, 224), class_mode='categorical')
tsdata = ImageDataGenerator(rescale=1./255)
testdata = tsdata.flow_from_directory(
    directory="/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test",
    target_size=(224, 224), class_mode='categorical')

# Build the VGG16 architecture layer by layer.
model = Sequential()
model.add(Conv2D(input_shape=(224, 224, 3), filters=64,
    kernel_size=(3, 3), padding="same", activation="relu"))
# Remaining convolutional blocks and dense head, assuming the standard VGG16
# configuration (the original listing shows only the first convolutional layer).
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
for filters, n_convs in [(128, 2), (256, 3), (512, 3), (512, 3)]:
    for _ in range(n_convs):
        model.add(Conv2D(filters=filters, kernel_size=(3, 3),
            padding="same", activation="relu"))
    model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(4096, activation="relu"))
model.add(Dense(4096, activation="relu"))
model.add(Dense(3, activation="softmax"))

model.compile(optimizer='Adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])
model.summary()

history = model.fit(traindata, epochs=30, validation_data=testdata)

print("Training Accuracy:", history.history['accuracy'][-1])
print("Validation Accuracy:", history.history['val_accuracy'][-1])

from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_data_dir = '/home/labadmin/Project Helan/Wound Detection.v1i.yolov8/split_folder/test'
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical')

test_loss, test_accuracy = model.evaluate(test_generator)
print(f"Test Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")
7.3 App.py
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import streamlit as st
from PIL import Image
import numpy as np
import tensorflow as tf
import torch
from tensorflow.keras.models import load_model

# Title of the web app
st.title('Ulcer Classification with YOLOv8 and Streamlit')

uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "jpeg", "png"])

# Load the TensorFlow model
model = tf.keras.models.load_model('/home/labadmin/Project Helan/densenet.h5')

# Function to preprocess the image for prediction
def preprocess_image(image):
    image = image.resize((224, 224))
    image = np.array(image)
    image = image / 255.0
    image = np.expand_dims(image, axis=0)
    return image

# Function to classify the uploaded image; returns the class probabilities.
def classify_image(image, model):
    processed_image = preprocess_image(image)
    prediction = model.predict(processed_image)
    return prediction

# Display the uploaded image and its classification
if uploaded_file is not None:
    image = Image.open(uploaded_file)
    st.image(image, caption='Uploaded Image', use_column_width=True)

    # Perform classification
    class_names = ['class0', 'class1', 'class2']  # Specify your class names here
    prediction = classify_image(image, model)[0]
    predicted_class = np.argmax(prediction)
    class_prob = prediction[predicted_class]

    # Display the classification result
    st.write(f"Predicted Class: {class_names[predicted_class]}")
    st.write(f"Predicted Probability: {class_prob:.4f}")
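Assuming the script above is saved as App.py, the interface can be started locally with the standard Streamlit command streamlit run App.py, which serves the application in the browser for uploading and classifying wound images.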