
Intelligent Healthcare for Medical Decision Making: AI and Big Data for Cancer Prevention

Squamous Cell Carcinoma of Skin Cancer Margin Classification From Digital Histopathology Images Using Deep Learning

Cancer Control, Volume 29: 1-16, © The Author(s) 2022
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/10732748221132528
journals.sagepub.com/home/ccx

Beshatu Debela Wako1,2, Kokeb Dese1,3, Roba Elala Ulfata4,5, Tilahun Alemayehu Nigatu6, Solomon Kebede Turunbedu4, and Timothy Kwa1,7,8

Abstract
Objectives: Nowadays, squamous cell carcinoma (SCC) margin assessment is done by examining histopathology images and inspecting whole slide images (WSI) using a conventional microscope. This is time-consuming, tedious, and depends on the experts' experience, which may lead to misdiagnosis and mistaken treatment plans. This study aims to develop a system for the automatic diagnosis of the skin cancer margin for squamous cell carcinoma from histopathology microscopic images by applying deep learning techniques.
Methods: The system was trained, validated, and tested using histopathology images of SCC locally acquired from the Jimma Medical Center Pathology Department from seven different skin sites using an Olympus digital microscope. All images were preprocessed and trained with pre-trained transfer learning models by fine-tuning the hyper-parameters of the selected models.
Results: The overall best training accuracies of the models were 95.3%, 97.1%, 89.8%, and 89.9% on EfficientNetB0, MobileNetV2, ResNet50, and VGG16, respectively. The best validation accuracies of the models were 94.7%, 91.8%, 87.8%, and 86.7%, respectively, and the best testing accuracies at the same epoch were 95.2%, 91.5%, 87%, and 85.5%, respectively. Of these models, EfficientNetB0 showed the best average training and testing accuracy.
Conclusions: The system assists the pathologist during the margin assessment of SCC by decreasing the diagnosis time from an average of 25 minutes to less than a minute.

Keywords
histopathological margins, squamous cell carcinoma, deep learning, transfer learning, classification, recurrence rate,
reconstruction surgery

Received May 25, 2022. Received revised September 17, 2022. Accepted for publication September 26, 2022.

1 School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
2 Center of Biomedical Engineering, Jimma University Medical Center, Jimma, Ethiopia
3 Artificial Intelligence and Biomedical Imaging Research Lab, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
4 Department of Pathology, Jimma Institute of Health, Jimma University, Jimma, Ethiopia
5 Department of Pathology, Adama General Hospital and Medical College, Adama, Ethiopia
6 Department of Biomedical Sciences (Anatomy Course Unit), Jimma Institute of Health, Jimma University, Jimma, Ethiopia
7 Department of Biomedical Engineering, University of California, 451 Health Sciences, Davis, CA, USA
8 Medtronic MiniMed, 18000 Devonshire St., Northridge, Los Angeles, CA, USA

Corresponding Authors:
Kokeb Dese, Department of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia.
Email: kokebdese86@gmail.com, dese.gebremeskel@ju.edu.et
Timothy Kwa, Department of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia.
Email: tkwa@ucdavis.edu

Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons
Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use,
reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and
Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Introduction

Skin cancer is the most common type of cancer affecting humans worldwide. According to the literature, out of every three people diagnosed with cancer, one is likely to have skin cancer.1 It is a common type of cancer that starts to grow in the epidermis layer of the skin.2,3 The number of people affected by skin cancer is expected to exceed 13.1 million by 2030.2,4 In the United States, the occurrence of skin cancer is reported to be 22.1 per 100 000 people. The number of new patients predicted yearly is expected to be more than 63 000, and skin cancer is now rated as the sixth most common of all cancers.4 Skin cancer is generally classified into two major groups: melanoma and non-melanoma. The frequency of non-melanoma skin cancer (NMSC), including basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), has increased from 3.4 to 4.9 million cases per year.5 It can be fatal when left undiagnosed and untreated.6 SCC accounts for most NMSC-related metastatic cancer and death; according to Ref. 1, it is the second most frequent kind of skin cancer. Generally, scaly red spots, open sores, raised growths with a central depression, or warts are frequent signs of SCC.

There are three differentiation stages of SCC (see Figure 1): (1) well-differentiated SCC, (2) poorly differentiated SCC, and (3) undifferentiated/invasive SCC. Well-differentiated SCC is characterized by the properties of grade I SCC, poorly differentiated cells indicate grade II and grade III SCC, and grade IV SCC can be characterized as the invasive/undifferentiated type. As shown in Figure 1, in well-differentiated tumors the cells are organized and have a shape usually seen in normal tissue images. Poorly differentiated cells look disorganized under the eyepiece of the microscope and tend to grow and spread faster than grade I (well-differentiated) tumors. SCC tumor cells that are undifferentiated look highly disorganized and spread far more aggressively than the poorly differentiated category.

Visual inspection and histopathology are the current diagnostic methods for surgeons to differentiate between tumor and normal tissue for skin cancers including SCC. Of these techniques, histopathology diagnosis is the gold standard method, used not only to identify the cancer type but also for grading and for diagnosing/assessing the tumor margins.7 The differential diagnosis between SCC histologic grades is crucial, as it further determines the therapeutic approach and follow-up of the tumor.8 However, since this research focuses on the marginal diagnosis of the tumor, we consider all grades as malignant and the cancer-free margin as normal/benign.

Therefore, early detection of the skin cancer margin is required to prevent the progression of cancer to advanced stages and to reduce cancer fatality. Nowadays, SCC is clinically diagnosed using dermoscopic examination and tissue biopsy, followed by Mohs micrographic surgery (MMS).6,9 Among these, biopsy tests are the gold standard in the diagnosis of SCC. After diagnosis, for treatment planning, surgical excision is the routinely used method for all SCC treatments, followed by a histopathological margin assessment of all rims of the tumor; this helps confirm the total removal of the tumor cells.10 A cancer margin, as defined by the National Cancer Institute (NCI), is "the edge or border of the tissue removed in cancer surgery".11 If the margin is assessed correctly, this border surrounds the cancerous tissue as well as a rim of normal tissue to later confirm a successful resection. Histopathological assessment of the surgical margin is performed by taking sample tissue from all margins and examining it under the microscope. Surgery can cure ∼45% of all patients with cancer;5 however, in 40% to 50% of cases a remaining tumor cell is found at the margins,7 and extra surgery is required, which results in complicated treatment, high cost, greater morbidity, infection risk, and delayed therapy.12 Unfortunately, up to 39% of the patients who undergo surgery leave the operating room without a complete resection due to positive or close margins.

Manual histopathology, the conventional-microscope margin assessment method, is a time-consuming and tedious process. An accurate margin diagnosis needs an experienced pathologist, and sometimes it may require the decision of two or more experts to provide a reliable pathology report, which directly delays the treatment plan and affects the cure rate.

Figure 1. Sample histopathology SCC images acquired from Jimma University Medical Center. (a) Well-differentiated SCC, (b) poorly differentiated SCC, (c) undifferentiated/invasive SCC. Abbreviation: SCC, squamous cell carcinoma.

The current procedural protocol for any skin cancer-related treatment in Ethiopia is the removal of the tumor part and waiting for a pathology report to confirm the complete removal of the cancer; the report can take more than a month.12-14 A current topic of research focuses on creating computer-aided diagnostic (CAD) systems for skin lesions, intending to help dermatologists by reliably analyzing histopathology images of skin lesions for automated identification of SCC.

Related Works

To date, various image processing and machine learning techniques have been used to diagnose the SCC margin. However, the accuracy of the developed systems was not sufficient, most probably due to the use of only a few data sets from online sources. Most recently, M. Halicek et al15 proposed studies on hyperspectral imaging (HSI) and fluorescence imaging of head and neck SCC in fresh surgical samples from 102 patients/293 tissue samples. HSI was captured using a Maestro spectral imaging system, and the autofluorescence images were acquired from 500 to 720 nm in 10 nm increments to produce a hypercube of 23 spectral bands. They used Inception V4 transfer learning to classify the whole tissue specimens into cancerous and normal. In this study, two experiments were performed. The first consisted of training the CNN on the primary tumor (T) and all normal (N) tissues while testing on T and N tissues from other patients; the second consisted of training on the primary tumor (T) and all normal (N) tissues while testing only tumor-involved cancer margin (TN) tissues from other patients. HSI detected conventional SCC in the larynx, oropharynx, and nasal cavity with a .85-.95 AUC score, and autofluorescence imaging detected HPV+ SCC in tonsillar tissue with a .91 AUC score for different organ sites. Generally, the results show that AUCs upwards of .80-.90 were obtained for HSI-based SCC detection. Another study by M. Halicek et al in Ref. 16 shows the ability of HSI-based cancer margin detection for thyroid cancer and oral SCC. The CNN-based method classifies the tumor-normal margin of oral squamous cell carcinoma (SCC) vs normal oral tissue with an area under the curve (AUC) of .86, with 81% accuracy, 84% sensitivity, and 77% specificity. In the same study, thyroid carcinoma tumor-normal margins were classified with an AUC of .94 for interpatient validation, with 90% accuracy, 91% sensitivity, and 88% specificity. This study compared a support vector machine (SVM) with a radial basis function (RBF) kernel against a CNN deep neural network model to classify SCC, and .80 and .85 AUC were achieved by the models, respectively. In Ref. 7, L. Ma et al proposed a fully convolutional network (FCN) model based on the U-Net architecture, implemented and trained for tissue classification in hyperspectral images (HSI) of 25 ex vivo SCC surgical specimens from 20 different patients. They used only patches containing the tumor-normal margin to train the model, while patches with only tumor or only normal tissue were not used in the training process. The model was evaluated per patient and achieved pixel-level tissue classification with an average area under the curve (AUC) of .88, as well as .83 accuracy, .84 sensitivity, and .70 specificity. Kassem M.A. et al17 proposed Skin Lesions Classification Into Eight Classes for ISIC 2019 Using Deep Convolutional Neural Network and Transfer Learning. This paper proposes a model for highly accurate classification of skin lesions, utilizing transfer learning with a pre-trained GoogleNet. The proposed model successfully classified eight different classes of skin lesions, namely melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, and squamous cell carcinoma. The achieved classification accuracy, sensitivity, specificity, and precision percentages are 94.92%, 79.8%, 97%, and 80.36%, respectively. They used online datasets to train and test their models.

L. Zhang et al14 proposed deep learning-based stimulated Raman scattering (SRS) microscopy of laryngeal squamous cell carcinoma on fresh surgical specimens, using a 34-layer residual convolutional neural network (ResNet34) to classify 33 fresh surgical samples into normal and neoplasia to diagnose the abnormality of the samples. Even though they modeled the system with high accuracy (100%) for the classification of samples into normal and neoplasia, margin assessment was not addressed. On the other hand, Khalid M. et al in Ref. 18 proposed Classification of Skin Lesions into Seven Classes Using Transfer Learning with AlexNet. The parameters of the original model were used as initial values, while the weights of the last three replaced layers were randomly initialized. The proposed method was tested using the most recent public dataset, ISIC 2018. Based on the obtained results, the proposed method accurately classifies the skin lesions into seven classes: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion. The achieved percentages were 98.70%, 95.60%, 99.27%, and 95.06% for accuracy, sensitivity, specificity, and precision, respectively. In Ref. 19, B. Fei et al proposed a machine learning-based quantification method for HSI data from 16 patients who underwent head and neck surgery, used for binary classification of cancer and normal tissues. They used normal and tumor tissues for training, and the model was evaluated on the histopathology of the tumor-normal interface from the same patients. The study classifies normal and cancer tissues, but not the boundary of the tumor margin. They achieved 90% ± 8% accuracy, 89% ± 9% sensitivity, and a specificity of 91% ± 6%. The above-mentioned studies used hyperspectral imaging (HSI) modalities for the peripheral margins, which have a limitation in penetrating the deep margins, where the most positive margin cases were reported.

Starting with the primary clinical samples obtained from the Jimma Medical Center (JMC), Department of Pathology, histopathology images tainted with typical artifacts such as fringing, dust, and non-collimated lighting were acquired using a locally available microscope. Our setup closely resembled a clinical microscope that is often seen in resource-poor hospital settings. The images were then preprocessed to remove the artifacts and to increase the number of training data sets. Different transfer learning and deep learning artificial intelligence-based models were applied and their classification performance was compared.

Proposed Models

The acquired microscopic histology images often contained artifacts from diverse sources that needed to be rectified using appropriate preprocessing methods. Therefore, this section explains the details of the image acquisition and image processing techniques required for the margin classification, followed by a brief discussion of the transfer learning methods used in this work. The overall workflow/block diagram used for developing the system is laid out in Figure 2.

In this research, four models were selected and trained with the locally collected SCC data sets. These models were selected because they outperformed others in related works. They were VGG16, ResNet-50, MobileNetV2, and EfficientNetB0. A detailed explanation of each model is found in Supplementary Material 1.

Experimental Design

Data Collection/Image Acquisition

In collaboration with the Pathology, Histology, and Dermatology Departments at Jimma University Medical Center, tissue samples were obtained from skin cancer surgical resections. The tissues were obtained from different skin parts (legs, feet, hands, toes, eyes, face, and neck) of the patients (see Table 1) for SCC, which is the most abundant and most frequently diagnosed skin cancer type in Jimma University Medical Center (JUMC). The tissue images were acquired using a digital compound light microscope (Olympus, CX21FS1, Guangzhou, China) equipped with a ×100 oil immersion objective and a ×10 eyepiece magnification, integrated with a camera of 5 MP digital resolution (see Figure 3(a)). For a given slide (see Figure 3(b)), a magnification of ×10 was used in the acquisition of the histopathology image (see Figure 3(c) and (d)). To do this, the tissue biopsies were processed via formaldehyde fixing and paraffin embedding (FFPE) and cut into thin sections. Finally, they were stained with hematoxylin and eosin (H&E) to observe the structure of the cells (see Figure 3(b)).

The safest margin for surgical resection differs between cancer types based on the tumor resection margin standards of the providers.20-23 For the oral tongue, a negative margin was proposed to be 2.2 mm. Another study found cuts within 1 mm of oral cavity tumor margins are associated with significantly increased recurrence rates. Negative resection margins are the primary prevention of disease relapse of the cancer cells.16,24 For this study, based on the JUMC standard of care for skin cancer histopathology margin assessments, a surgical margin of more than 1 mm is considered margin negative, and less than 1 mm is considered margin positive. Taking Refs. 19 and 22 as references, three regions of interest were selected and imaged in this study: the tumor, normal, and tumor-normal interface regions.

The collected slides (see sample slides in Figure 3(b)) were from 50 patients. The number of patients for each organ was: 12 patients with SCC of the legs, 8 on hands, 3 on the eyes, 14 on feet, 6 on toes, 4 on the neck, and 3 on the face. Regarding histologic grading, 17 patients had well-differentiated SCC, 15 patients had poorly differentiated SCC, and 18 patients had invasive SCC, as stipulated in Table 1.

Figure 2. The general diagram of the proposed system.
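The JUMC margin rule described above reduces to a one-line decision function. A minimal sketch follows; the function name is ours, and since the text only gives "more than" and "less than" 1 mm, the handling of exactly 1 mm is our assumption:

```python
def margin_label(margin_mm: float) -> str:
    """Label a resection margin per the JUMC standard described in the text:
    a surgical margin greater than 1 mm is margin negative, otherwise margin
    positive. (The boundary case of exactly 1 mm is not specified in the text;
    treating it as margin positive here is an assumption.)"""
    return "margin negative" if margin_mm > 1.0 else "margin positive"
```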



Table 1. Squamous Cell Carcinoma Data Set: Patient Information and Whole Slide Images.

Site                    Number of Patients   Normal (WSI)   Tumor (WSI)   Tumor-Normal (WSI)
Leg                     12                   104            80            56
Hand                    8                    60             58            36
Foot                    14                   101            88            56
Toe                     6                    46             4             24
Eye                     3                    17             18            13
Neck                    4                    8              4             8
Face                    3                    9              9             6
Total                   50                   345            284           199

Based on histological grading
Well-differentiated     17                   110            82            67
Poorly-differentiated   15                   112            95            60
Invasive                18                   123            10            72
Total                   50                   345            284           199

Abbreviation: WSI, whole slide images.

Figure 3. Data acquisition procedure in the Jimma University Medical Center pathology department. (a) The setup used for image acquisition, (b) sample slides with SCC, (c) during the image acquisition, (d) a sample acquired well-differentiated SCC histopathology image. Abbreviation: SCC, squamous cell carcinoma.

Tissue samples that are entirely normal were used as the Margin Negative category, and samples that contain tumor-normal margins or entire tumor were used as the Margin Positive category. All H&E-stained histopathology images were labeled as margin negative or margin positive and confirmed by two (2) pathologists for histopathologic assessment. Finally, both pathologists and histologists validated the correct labeling of the captured slide images, which were used as the acquired data for developing our model. In this research, a total of 345 normal, 284 tumor, and 199 tumor-normal histopathology images were originally acquired (see a sample acquired image in Figure 3(d)).

From Table 1, out of 50 patients, 345 margin negative and 483 margin positive (the combination of pure tumor and tumor-normal sections) histopathology images were originally acquired. Seven different skin organs and three histologic grades of SCC were used, aiming to make the models applicable to most skin parts of the body.
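The labeling scheme above (normal regions as Margin Negative; tumor and tumor-normal regions as Margin Positive) can be checked against the totals in Table 1. A small sketch, with dictionary names that are ours, not the paper's:

```python
# Whole-slide image counts per region, taken from the totals in Table 1.
wsi_counts = {"normal": 345, "tumor": 284, "tumor_normal": 199}

# Region-to-class mapping described in the text.
region_to_class = {
    "normal": "margin_negative",
    "tumor": "margin_positive",
    "tumor_normal": "margin_positive",
}

class_totals: dict = {}
for region, n in wsi_counts.items():
    label = region_to_class[region]
    class_totals[label] = class_totals.get(label, 0) + n

# Reproduces the text's 345 margin negative and 483 margin positive images.
```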

As the research did not involve the direct use of humans, animals, or other subjects, formal ethics approval was not required for this study. This was checked, and confirmation was received from Jimma University's institutional review board.

Image Preprocessing

The acquired images usually contained noise due to excessive irregularities arising from the staining procedure. In addition, the number of originally acquired images might not be enough to train our model. Thus, the purpose of preprocessing is to improve image quality by removing unwanted objects and noise from the histopathology images, and to increase the number of images by applying different image augmentation techniques.25,26 In the preprocessing step, the following methodology was adopted.

1. Resize: deep learning models are computationally expensive and require all input images to have the same size. Therefore, to decrease the computational time,20,27 the original Red Green Blue (RGB) image (2048 × 1536) was reduced to 224 by 224 pixels (see Figure 4).
2. Image Smoothing: microscopic images are susceptible to different noises during capture; additive, random, impulsive, and multiplicative noises are normally associated with any image, and noise removal is very important in medical image analysis.28 The noises that most frequently affect medical images are Gaussian, pepper, speckle, and Poisson noise. Compared with other filters, in this research a median filter was used to remove the salt-and-pepper noise in the whole slide images. One of the major advantages of the median filter is that it strongly preserves the edges of an image29 (see Figure 5).
3. Stain Normalization: color normalization is an important preprocessing task for whole-slide images (WSI) in digital pathology.30,31 It refers to standardizing the color distribution across input images and is focused on hematoxylin and eosin (H&E) stained slides.

Figure 4. Original and resized image.

Figure 5. The original resized image and the median filtered image.
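The resize and smoothing steps above can be sketched with NumPy alone. The nearest-neighbour resize and the 3 × 3 window below are simplifications; the text does not state which interpolation or kernel size the authors actually used:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int = 224, out_w: int = 224) -> np.ndarray:
    """Nearest-neighbour resize of an H x W (x C) image to out_h x out_w."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row index for each output row
    cols = np.arange(out_w) * w // out_w   # source column index for each output column
    return img[rows][:, cols]

def median_filter_3x3(channel: np.ndarray) -> np.ndarray:
    """3 x 3 median filter on one 2-D channel: removes salt-and-pepper noise
    while preserving edges, as described in the text."""
    h, w = channel.shape
    padded = np.pad(channel, 1, mode="edge")
    # Stack the nine shifted views of the image, then take the per-pixel median.
    windows = np.stack([padded[r:r + h, c:c + w] for r in range(3) for c in range(3)])
    return np.median(windows, axis=0).astype(channel.dtype)
```

For an RGB slide, the filter would be applied per channel after resizing the 2048 × 1536 capture to 224 × 224.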

Color normalization techniques like stain normalization are an important processing task for computer-aided diagnosis (CAD) systems,32 achieved by normalizing the stains for enhancement and reducing the color and intensity variations present in images stained at different laboratories, consequently increasing the estimation accuracy of CAD systems.30 In this study, the Macenko stain normalization algorithm, which is popular for histopathology slides,32-34 was used (see Figure 6).

4. Data Augmentation: a method used to significantly increase the amount and variety of data available for training models.28,35,36 Data augmentation was performed by rotating the images by 90°, 180°, and 270° and by flipping them horizontally and vertically, increasing the available data without affecting their features. As a result, the amount of data was increased six times.

Model Training

The obtained original data was split into 80% for training, 10% for validation, and 10% for testing through a stratified cross-validation method. This means that, out of 828 originally acquired images, 662 were used for training, 82 for validation, and 84 for testing purposes. After 6× augmentation (with 90°, 180°, 270°, horizontal flip, and vertical flip), the number of images becomes 1656 for Margin Negative and 2316 for Margin Positive, excluding the testing data set, which needs to remain original data and consists of 84 images (35 for MN and 49 for MP). Therefore, the training, validation, and testing data sets contain 3972, 492, and 84 images, respectively.

To train the models for the SCC classification task utilizing the concept of transfer learning,37,38 the actual classifier (1000 nodes) in each pre-trained model was replaced with a new one (a sigmoid layer with 1 node) for binary classification of SCC images.

During training, the bottom layers were kept fixed (frozen) and not retrained, using the weight values from the pre-trained model, while a few top layers (dense or fully connected layers) and the appended classifier (a sigmoid activation function, which delivers the output classification and is mostly used for binary classification) were trained. Since training from scratch is computationally expensive and requires a large amount of data to achieve high performance, we applied the concept of transfer learning, adjusting parameters such as the learning rate, the number of epochs, and the optimizer to achieve the best possible results (see Tables 2 and 3).

We took each pre-trained deep neural network (VGG16, ResNet50, MobileNetV2, EfficientNetB0) as a feature extractor and froze the weights of the convolutional layers in the network. The last three layers were replaced with new fully connected, sigmoid, and 2-class output layers on top of the body of the network.

After several trials with different pre-trained transfer learning models, we selected four models and compared their results. These were (1) the Visual Geometry Group network (VGG16), (2) the Residual Network (ResNet50), (3) EfficientNetB0, and (4) MobileNetV2.

The network architecture of VGG16 is a sixteen-layer deep CNN. It consists of thirteen convolution layers arranged into five blocks, each followed by a pooling operation. The network uses filters of size 3 × 3 for convolution and 2 × 2 windows for pooling. The convolutional stack is followed by two fully connected layers, each consisting of 4096 nodes. The final layer is a SoftMax layer that assigns a class to each image.37 The residual network (ResNet50) has a depth of fifty (50) layers: forty-eight (48) convolutions,

Figure 6. The median filtered image and stained normalized image.
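The 6× augmentation described above (the original image, three rotations, and two flips) is a few lines of NumPy. The function name is ours:

```python
import numpy as np

def augment_6x(img: np.ndarray) -> list:
    """Return the six variants used in the text: the original image, its
    90/180/270 degree rotations, and its horizontal and vertical flips."""
    return [
        img,
        np.rot90(img, k=1),   # 90 degrees
        np.rot90(img, k=2),   # 180 degrees
        np.rot90(img, k=3),   # 270 degrees
        np.fliplr(img),       # horizontal flip
        np.flipud(img),       # vertical flip
    ]
```

Applied to the 662 training images, this yields 662 × 6 = 3972 training samples, matching the counts reported in the text.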



Table 2. Fine-Tuning Made on the Layers of the Models.

Model            Frozen Convolutional Layers (Fixed)   New Top Layer       Output Features Extracted   Classifier Input Features   Classifier Output
VGG16            13 convolutional layers               Last three layers   25 088                      256                         2
ResNet50         48 convolutional layers               Last three layers   2048                        256                         2
MobileNetV2      52 convolutional layers               Last three layers   1280                        256                         2
EfficientNetB0   81 convolutional layers               Last three layers   1280                        256                         2

Table 3. Functions and Parameters Used for Each Model During Training.

Function/Parameter        EfficientNetB0         MobileNetV2            ResNet50               VGG16
Classification function   Sigmoid (binary)       Sigmoid                Sigmoid                Sigmoid
Optimizer                 Adam                   Adam                   Adam                   Adam
Loss function             Binary cross-entropy   Binary cross-entropy   Binary cross-entropy   Binary cross-entropy
Epochs                    30                     50                     100                    70
Early stop                10                     10                     10                     10
Learning rate             10^-3                  10^-3                  10^-3                  10^-3
Batch size                64                     64                     64                     64
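The configuration summarized in Tables 2 and 3 (a frozen convolutional base with a retrained sigmoid head minimizing binary cross-entropy) can be illustrated framework-agnostically. Below, a fixed random projection stands in for the frozen pre-trained base, and plain gradient descent stands in for Adam; the names, sizes, and toy data are all illustrative assumptions, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen pre-trained base: a fixed ReLU random projection
# whose weights are never updated during training ("frozen").
W_frozen = rng.normal(size=(64, 16))
def extract_features(x: np.ndarray) -> np.ndarray:
    return np.maximum(x @ W_frozen, 0.0)

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-class data standing in for margin-negative/positive images.
X = np.vstack([rng.normal(-1.0, 1.0, (40, 64)), rng.normal(1.0, 1.0, (40, 64))])
y = np.array([0] * 40 + [1] * 40)

# Trainable head: one sigmoid node, trained under binary cross-entropy.
w = np.zeros(16)
b = 0.0
lr = 1e-2
for _ in range(300):
    feats = extract_features(X)       # base output; W_frozen is not updated
    p = sigmoid(feats @ w + b)
    grad = p - y                      # dBCE/dlogit for a sigmoid output
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

train_acc = ((sigmoid(extract_features(X) @ w + b) > 0.5) == y).mean()
```

Only `w` and `b` change during the loop, mirroring the paper's approach of retraining just the appended head while the convolutional body keeps its pre-trained weights.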

max-pooling, and one average pooling and 3 times deeper Performance Evaluation Metrics
than VGG-16, having less computational complexity.37 The
residual addresses the problem of training a really deep To evaluate the performance, we calculated accuracy, preci-
architecture by introducing an identity skip connection, sion, recall, F1-score, specificity, and AUC value. These
which is also called a shortcut that jumps over a layer.39 On the other hand, EfficientNetB0, a newly developed classifier of the EfficientNet family, uses a compound scaling approach with fixed ratios in all three dimensions to maximize speed and precision; it showed strong results in this study40 and does not change the layer operations of the baseline network while scaling. Furthermore, MobileNetV2 has bottleneck layers in its residual connections: lightweight depth-wise convolutions are used by the intermediate expansion layer to filter features as the source of nonlinearity, and MobileNetV2 starts with an initial fully convolutional layer with 32 filters.39

In this research, different hyper-parameters were fine-tuned to increase the performance of the developed module while it was trained with the modified models. These include choosing the right optimizer, adjusting the learning rate, and choosing the appropriate activation and loss functions. Table 3 shows the functions and parameters used for the models during training. The Adam optimizer was chosen for its speed of convergence and its accuracy.37 The number of epochs differed between the models, the learning rate was set to .0001, and the activation function used was ReLU. The loss function for binary classification was binary cross-entropy.

The statistical evaluation metrics are based on True Positives (TP), False Negatives (FN), False Positives (FP), and True Negatives (TN). Here, TP and TN represent the number of correctly identified margin positive and margin negative images, respectively, while FP and FN denote the number of margin negative images wrongly classified as margin positive and the number of margin positive images incorrectly classified as margin negative, respectively.27,37 Equations (1)-(5) were taken from Ref. 41.

1. Accuracy: the accuracy score tells how often the model produces correct results; it is calculated using equation (1):

   Accuracy = (TP + TN) / (TP + TN + FP + FN)   (1)

2. Precision: it shows "what number of selected data items are relevant", ie, of the observations that the algorithm predicted to be positive, how many are actually positive. Precision thus reflects a model's consistency concerning margin positive outcomes. It is calculated with equation (2):
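The optimizer, learning-rate, activation, and loss choices above can be illustrated with a short Keras configuration sketch. This is an illustration only, not the authors' code: the 224 × 224 input size and the 128-unit dense head are assumptions, and any of the four backbones available in tf.keras.applications could be substituted for EfficientNetB0.

```python
import tensorflow as tf

# Hypothetical classification head on a pre-trained backbone; VGG16,
# ResNet50, and MobileNetV2 from tf.keras.applications can be swapped in
# and fine-tuned the same way.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),   # ReLU activation
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary margin output
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # lr = .0001
    loss="binary_crossentropy",  # binary-class loss function
    metrics=["accuracy"])
```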
Wako et al. 9

   Precision = TP / (TP + FP)   (2)

3. Recall: it presents "what number of relevant data items are selected"; that is, how many of the positive observations have been predicted by the algorithm. According to equation (3), recall equals the number of true positives divided by the sum of true positives and false negatives, ie, the ratio of correctly identified Margin Positive images to all Margin Positive images in the test data:

   Recall = TP / (TP + FN)   (3)

4. Specificity: determines how well the model classifies the Margin Negative images correctly (see equation (4)):

   Specificity = TN / (TN + FP)   (4)

5. F1 score: the F1 score represents a weighted average of precision and recall (equation (5)):

   F1 Score = 2 * (Precision * Recall) / (Precision + Recall)   (5)

6. ROC-AUC score: this metric is calculated from the ROC curve (receiver operating characteristic curve), which represents the relation between the true positive rate (sensitivity or recall) and the false positive rate (1 - specificity). The Area Under the ROC Curve (ROC-AUC) is used for binary classification and demonstrates how good a model is at discriminating between the positive and negative target classes. In our case, where the margin positive (reduced recurrence) and margin negative (organ conservation) classes are equally important, the ROC-AUC score is a useful performance metric.37,42 The ROC curve plots the TP rate (equation (6)) vs the FP rate (equation (7)) and helps us understand the relationship between correctly classified Margin Positive and misclassified Margin Negative images. The area under the curve (AUC) is a scalar value ranging between 0 and 1 that represents how well the model differentiates between Margin Negative and Margin Positive images:

   TPR = TP / (TP + FN)   (6)

   FPR = FP / (TN + FP) = 1 - Specificity   (7)

An excellent model has an AUC near 1, which means it has a good measure of separability; a poor model has an AUC near 0, which means it has the worst measure of separability.

Results

Training and Validation Results

In this study, a binary classification for the histopathology margin of SCC was established. As per the data split ratio used, the training set contained 1656 images for the Margin Negative (MN) class and 2316 for the Margin Positive (MP) class, 3972 images in total; 490 images were used for validation (204 MN and 288 MP) and 84 for testing (34 MN and 48 MP).

During training, the performance on the validation group was calculated and monitored. The optimal operating threshold was determined on the validation group, for generalizable results, and was then used when generating the performance evaluation metrics on the testing group. The early stop trigger would activate when the validation loss did not improve for 10 consecutive epochs, in which case the training phase would stop; the best loss value was saved, and the best validation loss was achieved at the optimal operating threshold. Generally, the training process is monitored by the 'best loss', which quantifies the error between the algorithm output and a given target value; the validation accuracy and training accuracy at this best loss are recorded. After training ends, the best model (checkpoint) is saved; the saved model is then loaded and tested on a testing dataset that is independent of the training and validation data sets. In this study, we used stratified cross-validation: a 10-fold cross-validation was performed, splitting all datasets into 80% for training, 10% for validation, and 10% for the testing group. To reduce bias in the experiment, the fully independent testing group of 84 images was classified only a single time at the end of the experiment, after all network optimization had been determined using the validation set. Different models (as shown in Figure 7) were trained and tested, and from those models the four with the highest accuracy and AUC were selected: VGG16, ResNet 50, Mobile Net v2, and Efficient Net B0. Finally, the learning and generalizability performance of the models was measured using learning curves.

The experimental results demonstrate that applying Efficient Net B0 to the SCC dataset considerably improves the overall performance and thus achieves the best outcome compared to the other convolutional neural networks. Figure 8 shows the training and validation accuracy for the four selected models (VGG16, ResNet 50, Mobile Net v2, and Efficient Net B0).
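Equations (1)-(5) can be computed directly from the four confusion-matrix counts. A minimal Python sketch (the example counts are made up, not results from this study):

```python
def classification_metrics(tp, tn, fp, fn):
    """Evaluation metrics of equations (1)-(5) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)           # eq. (1)
    precision = tp / (tp + fp)                           # eq. (2)
    recall = tp / (tp + fn)                              # eq. (3); also TPR, eq. (6)
    specificity = tn / (tn + fp)                         # eq. (4); FPR = 1 - specificity, eq. (7)
    f1 = 2 * precision * recall / (precision + recall)   # eq. (5)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

# Made-up example: 40 TP, 40 TN, 10 FP, 10 FN -> every metric equals 0.8
print(classification_metrics(40, 40, 10, 10))
```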
10 Cancer Control

Figure 7. Different models’ training accuracy on squamous cell carcinoma data set.

Figure 8. Training and validation accuracy for (a) VGG16, (b) ResNet 50, (c) Mobile Net v2, (d) Efficient Net B0.
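The stratified 80%/10%/10% split used in the cross-validation protocol above can be sketched in plain Python. The function below is illustrative, not the authors' implementation; it preserves each class's proportion in every split:

```python
import random

def stratified_split(labels, val_frac=0.1, test_frac=0.1, seed=0):
    """Split sample indices 80/10/10 while keeping class proportions."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    splits = {"train": [], "val": [], "test": []}
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_val = round(len(idxs) * val_frac)
        n_test = round(len(idxs) * test_frac)
        splits["val"] += idxs[:n_val]
        splits["test"] += idxs[n_val:n_val + n_test]
        splits["train"] += idxs[n_val + n_test:]
    return splits
```

A library routine such as scikit-learn's StratifiedKFold would serve the same purpose in practice.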

The train learning curve is calculated from the training data set and shows how well the model is learning, while the validation learning curve is calculated from a hold-out validation data set to see how well the model is generalizing. For the selected models, the training learning curves are good for all of them, and the validation learning curves are good for three models (VGG16, ResNet 50, and Efficient Net B0); Mobile Net v2 generalizes less well on the validation data set. Table 4 shows the models' best validation and training accuracy, with the best weight values acquired at different epochs.
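The early-stop trigger described above — halt once the validation loss has failed to improve for 10 consecutive epochs, keeping the weights of the best epoch — reduces to a simple patience counter. A sketch, not the authors' implementation:

```python
def early_stopping(val_losses, patience=10):
    """Return (best_epoch, stop_epoch): the epoch whose weights are kept,
    and the epoch at which training halts under the patience rule."""
    best_epoch, best_loss, wait = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0  # improvement: reset
        else:
            wait += 1
            if wait >= patience:          # no improvement for `patience` epochs
                return best_epoch, epoch
    return best_epoch, len(val_losses) - 1
```

In Keras the same behavior is provided by the EarlyStopping and ModelCheckpoint callbacks.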

Testing Results

The performance of the models was tested on 84 images, 35 margin negative and 49 margin positive, obtained from the originally collected data. The confusion matrices in Figure 9 show the performance of each model on the test data.

Once the confusion matrix is computed, the TP, TN, FP, and FN values are easily known. From those values, the overall precision, recall, specificity, F1-score, and test accuracy were calculated; the results for the selected network architectures for SCC margin classification are summarized in Table 5 below. As depicted in Table 4 above, among the four models used, the EfficientNetB0 model achieved the best performance.

On the other hand, the performance of the models can be evaluated using receiver operating characteristic (ROC) curves, which are a useful tool for predicting the probability of binary outcomes and describe how well a model distinguishes the classes. The Area Under the Curve (AUC) measures the ability of a classifier to distinguish between Margin Negative and Margin Positive and is used as a summary of the ROC curve. Figure 10 illustrates the ROC curves generated using SCC histopathology images for histopathology margin classification, with AUC values of 90.5%, 94%, 95%, and 100% for VGG16, ResNet 50, Mobile Net v2, and Efficient Net B0, respectively. As indicated in Figure 10, of all the models used in this research, EfficientNetB0 performs best, with the highest AUC (100%) and the best ability to distinguish the margin positive and margin negative classes.

Table 4. The Models' Best Saved Weight Values, Acquired at the nth Epoch.

Models           | Best Epoch (n) | Validation Loss | Validation Accuracy (%) | Training Accuracy (%)
VGG16            | 54             | .297            | 86.7                    | 89.9
ResNet-50        | 70             | .278            | 87.8                    | 89.8
Mobile Net v2    | 38             | .17             | 91.8                    | 97.1
Efficient Net B0 | 22             | .159            | 94.7                    | 95.3

Figure 9. The normalized confusion matrix for the (a) VGG16, (b) ResNet 50, (c) Mobile Net v2, (d) Efficient Net B0 models.

Table 5. Models' Testing Performance Results Summary.

Models         | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Specificity (%) | AUC (%)
VGG16          | 85.5         | 87            | 86         | 86           | 86              | 90.5
ResNet 50      | 87           | 91            | 87         | 88           | 87              | 94
MobileNetV2    | 91.5         | 91.5          | 91.5       | 91.5         | 95              | 95
EfficientNetB0 | 95.2         | 95            | 96         | 95           | 96              | 100

Figure 10. Receiver operating characteristic curve and area under the curve value for (a) VGG16, (b) ResNet 50, (c) Mobile Net v2, (d)
Efficient Net B0 models.

Discussion

This work focuses on a deep learning-based SCC diagnosis system. The developed system shows promising results for replacing the currently existing manual diagnosis methods with an automated system. Skin cancer SCC can be diagnosed by clinical examination, including visualization,6 optical imaging techniques, and histopathology (biopsy) tests. Among these, the histopathology test is the gold standard and the most common technique used to identify cancer types and classify the grade and margin status of the tumor in low-resource settings.9 The most preferable treatment for SCC is the surgical removal of the entire tumor tissue, followed by margin assessment,10 which can help the surgeon repeat the margin removal process until a margin-free report is gained and then proceed to the next step of reconstruction surgery, which depends on the pathologist's margin status reports. Unfortunately, there is a shortage of pathologists in most developing countries and health care providers, including Ethiopia. The complexity of margin assessments and their subjective decisions, which depend on the expert's experience, lead to misdiagnosis and local recurrence of the cancer cells.

The major aim of this study was to classify SCC histopathological images as Margin Negative and Margin Positive, ie, to classify the histopathological surgical margin. To achieve this, four different models were developed. The best result was achieved by fine-tuning the pre-trained EfficientNetB0 model.

Table 6. Comparing the Proposed Method With Others.

Authors | Preprocessing | Data Size and Site | Model Used | Modality/Output | Results: Accuracy (%)/AUC
Proposed method | Median filter; stain normalization; normalization | 828 images/seven sites: foot, leg, eye, hand, toe, face, and neck | VGG16, ResNet-50, MobileNetV2, EfficientNetB0 | Compound light microscope/binary classification | 95.3% training and 95.2% testing accuracy with the EfficientNetB0 model
L. Ma et al (2021)7 | — | Squamous cell carcinoma/hypopharynx, larynx | U-net architecture | Maestro spectral imaging/binary classification | AUC of 88%; accuracy 83%; sensitivity 84%; specificity 70%
A. R. Triki et al (2017)12 | Sobel edge detector; Gaussian filter | Breast | LeNet (CNN) | OCT/binary classification | 90% accuracy
J. D. Dormer et al (2019)15 | — | 293 tissue samples/head and neck | Inceptionv4 | Fluorescent imaging/binary classification | 80-90% AUC
M. Halicek et al (2018)16 | — | — | CNN-based method | Maestro spectral imaging/multi-class classification | SCC: AUC of 86% with 81% accuracy; thyroid: AUC of 94% with 90% accuracy
E. Kho et al (2019)43 | Spectral normalization | 18 patients | SVM | Maestro spectral imaging/binary classification | 88% accuracy
B. Fei et al19 | Data normalization to remove spectral nonuniformity | 16 patients/head and neck | — | Maestro spectral imaging/binary classification | Average accuracy of 90% ± 8%

Abbreviations: SVM, support vector machine; AUC, area under the curve.

As shown in the testing confusion matrices in Figure 9, ResNet50 classifies the margin positive class with the best result, 98%, and Efficient Net B0 classifies the margin positive class equally well. For the margin negative class, VGG16 reaches about 92%, while ResNet50 performs worst at 76%; however, the margin negative data is 100% correctly classified by both the MobileNetV2 and Efficient Net B0 models. As shown in Table 4, the best overall training and validation accuracy, achieved by Efficient Net B0, was 95.3% and 94.7% respectively, which is on average greater than the other models used in this work. Moreover, as depicted in Table 5, the overall testing performance achieved by Efficient Net B0 (at epoch 22) was 95% accuracy, 95% precision, 96% recall, 95% F1 score, 96% specificity, and 100% AUC. These results show that the EfficientNetB0 model outperformed the other models in classifying SCC.

In this work, a histopathological dataset of SCC was assembled and a state-of-the-art EfficientNetB0 CNN architecture was implemented for margin classification with the best results. To the best of the authors' knowledge, this is the first work to investigate SCC margin classification of skin cancer disease in digitized whole-slide histological images for seven different skin parts and on the three histologic grades of SCC, and with such much-improved accuracy. This is the first attempt to design and develop a deep learning computer-aided diagnosis system for SCC margin classification using whole slide images from locally acquired data sets. We can conclude that the developed system can classify whole-slide SCC histopathology images with good classification accuracy. Moreover, the developed model addresses the gap in margin classification of histopathology images for margin-free results during surgical treatment of SCC.

In Table 6, our proposed system is compared with some previous studies. Almost all of them focused on only one skin organ location for margin classification, ie, oral. For the proposed method, in contrast, images from seven different skin locations were collected and classified with good accuracy.

Nevertheless, this study focuses only on margin classification for the SCC type of skin cancer and was limited by financial and time constraints in acquiring more datasets to study other types of cancer cells. Moreover, the current module is not able to grade the SCC levels; it only classifies the tumor margin.

Algorithm Demonstration

The developed graphical user interface (GUI), using EfficientNetB0 (the model with the highest testing accuracy, ∼95.2%), was tested with respect to response time and ease of use. It was found to be easy to use and convenient for users. Once initialized, the result can be obtained within less than 10 seconds.

Figure 11. The developed graphical user interface.

As shown in Figure 11, the GUI has buttons to load an image, preprocess it, and display/classify the diagnosis result. Moreover, the result obtained can be saved using the "save" button, and it is possible to continue analyzing more images using the "clear" button.

Conclusions

The existing manual histopathology margin assessment method for SCC requires experienced experts; it is time-consuming, tedious, and depends on the knowledge and experience of the pathologist, and it may sometimes require two or more experts to provide a reliable pathology report, which directly affects the treatment plan and cure rate. In this research, we used whole slide images of clinical data collected from the Jimma University Medical Center, Pathology Department, and trained, validated, and tested four different models by fine-tuning their hyperparameters, obtaining significant accuracy. The novelty of our dataset and the promising results of this work demonstrate the potential of such methods to help create a tool that increases the efficiency and accuracy of pathologists performing margin assessment on histological slides for the guidance of skin cancer resection operations, especially in low-resource settings. The developed system provides the margin classification result within a minute, a large improvement over the 20- to 30-minute manual diagnosis methods. In the future, models could be concatenated: ResNet 50, which had more advantage on margin positive (benefiting patients through a reduced recurrence rate of cancer cells), and Efficient Net B0, which had more advantage on margin negative (guaranteeing organ preservation), could together increase the module performance.

Appendix

Abbreviations

AUC: Area Under the Curve
BCC: Basal Cell Carcinoma
CCPDMA: Complete Circumferential Peripheral and Deep Margin Assessment
FN: False Negative
FP: False Positive
H&E: Hematoxylin and Eosin
HPV: Human Papilloma Virus
HFUS: High-Frequency Ultrasonography
IPC: Intraoperative Pathologist Consultant
JUMC: Jimma University Medical Center
OCT: Optical Coherence Tomography
MMS: Mohs Micrographic Surgery
NMSC: Nonmelanoma Skin Cancer
RCM: Reflectance Confocal Microscopy
ROC: Receiver Operating Characteristic
SCC: Squamous Cell Carcinoma
TN: True Negative
TP: True Positive
WSI: Whole Slide Image

Acknowledgments

We would like to thank the Jimma Institute of Technology, School of Biomedical Engineering for the research funding support and all collaborators from Jimma University Medical Center.

Author Contributions

BDW, KD: Conceptualization, Methodology, Software, Formal analysis, Validation, Writing - original draft, and Data curation. KD, TK: Supervision, Methodology, Writing - original draft, Visualization, Software, Writing - review & editing. BDW, RE, TA, and SK: Data acquisition, data labeling, and analysis. All authors read and approved the final manuscript.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Jimma University (JU/2011), Jimma Institute of Technology, School of Biomedical Engineering.

Ethical Approval

This research did not involve direct human, animal, or other subjects. According to Jimma University's institutional review board (IRB), no formal ethical approval was required in this particular case.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding authors on reasonable request.

Guarantor

Jimma University

ORCID iDs

Beshatu Debela Wako https://orcid.org/0000-0003-2449-8491
Kokeb Dese https://orcid.org/0000-0003-3591-9570
Roba Elala Ulfata https://orcid.org/0000-0002-9559-896X

Supplemental Material

Supplemental material for this article is available online.

References

1. Kassem MA, Hosny KM, Damaševičius R, Eltoukhy MM. Machine learning and deep learning methods for skin lesion classification and diagnosis: A systematic review. Diagnostics. 2021;11(8):1390. doi:10.3390/diagnostics11081390
2. Hasan MK, Elahi MTE, Alam MA, Jawad MT. DermoExpert: Skin lesion classification using a hybrid convolutional neural network through segmentation, transfer learning, and augmentation. medRxiv; 2021. doi:10.1101/2021.02.02.21251038
3. Hasan M, Das Barman S, Islam S, Reza AW. Skin cancer detection using convolutional neural network. Proceedings of the 2019 5th International Conference on Computing and Artificial Intelligence, Bali, Indonesia, April 19-22, 2019, pp. 254-258. doi:10.1145/3330482.3330525
4. CDC and United States Cancer Statistics (USCS). United States Cancer Statistics: Highlights from 2019 Incidence; 2019.
5. Liu Y, Walker E, Iyer SR, et al. Molecular imaging and validation of margins in surgically excised nonmelanoma skin cancer specimens. J Med Imaging. 2019;6(1):16001. doi:10.1117/1.JMI.6.1.016001
6. Dildar M, Akram S, Irfan M, et al. Skin cancer detection: A review using deep learning techniques. Int J Environ Res Public Health. 2021;18(10):5479. doi:10.3390/ijerph18105479
7. Ma L, Shahedi M, Shi T, et al. Pixel-level tumor margin assessment of surgical specimen with hyperspectral imaging and deep learning classification. Proc SPIE 11598, Medical Imaging, 15 February 2021, 1159811. doi:10.1117/12.2581046
8. Combalia A, Carrera C. Squamous cell carcinoma: An update on diagnosis and treatment. Dermatol Pract Concept. 2020;10(3):e2020066. doi:10.5826/dpc.1003a66
9. Sun C-K, Kao C-T, Wei M-L, Liao Y-H. Slide-free histopathological imaging of hematoxylin-eosin-stained whole mount tissues using Cr:forsterite laser-based nonlinear microscopy. Proc SPIE, 30 April 2019, 11026. doi:10.1117/12.2520417
10. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B. Histopathological image analysis: A review. IEEE Rev Biomed Eng. 2009;2:147-171. doi:10.1109/RBME.2009.2034865
11. Marsden M, Weyers BW, Bec J, et al. Intraoperative margin assessment in oral and oropharyngeal cancer. IEEE Trans Biomed Eng. 2020;68(3):857-868. doi:10.1109/TBME.2020.3010480
12. Triki AR, Blaschko MB, Jung YM, et al. Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks. Comput Med Imaging Graph. 2017;69:21-32.
13. Robbins KT, Triantafyllou A, Suarez C. Surgical margins in head and neck cancer: Intra- and postoperative considerations. Clin Oncol. 2018;3:1-8.
14. Zhang L, Wu Y, Zheng B, et al. Rapid histology of laryngeal squamous cell carcinoma with deep-learning based stimulated Raman scattering microscopy. Theranostics. 2019;9(9):2541-2554. doi:10.7150/thno.32655
15. Halicek M, Dormer JD, Little JV, et al. Hyperspectral imaging of head and neck squamous cell carcinoma for cancer margin detection in surgical specimens from 102 patients using deep learning. Cancers. 2019;11(9):1367. doi:10.3390/cancers11091367
16. Halicek M, Little JV, Wang X, et al. Tumor margin classification of head and neck cancer using hyperspectral imaging and convolutional neural networks. Proc SPIE Int Soc Opt Eng. 2018;10576:1057605. doi:10.1117/12.2293167

17. Kassem MA, Hosny KM, Fouad MM. Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning. IEEE Access. 2020;8:114822-114832. doi:10.1109/ACCESS.2020.3003890
18. Hosny KM, Kassem MA, Fouad MM. Classification of skin lesions into seven classes using transfer learning with AlexNet. J Digit Imag. 2020;33(5):1325-1334. doi:10.1007/s10278-020-00371-9
19. Fei B, Lu G, Wang X, et al. Label-free reflectance hyperspectral imaging for tumor margin assessment: A pilot study on surgical specimens of cancer patients. J Biomed Opt. 2017;22(8):1. doi:10.1117/1.jbo.22.8.086009
20. You S, Sun Y, Yang L, et al. Real-time intraoperative diagnosis by deep neural network driven multiphoton virtual histology. NPJ Precis Oncol. 2019;3(1):33. doi:10.1038/s41698-019-0104-3
21. Paolino G, Donati M, Didona D, Mercuri SR, Cantisani C. Histology of non-melanoma skin cancers: An update. Biomedicines. 2017;5(4):71. doi:10.3390/biomedicines5040071
22. Halicek M, Shahedi M, Little JV, et al. Head and neck cancer detection in digitized whole-slide histology using convolutional neural networks. Sci Rep. 2019;9(1):14043. doi:10.1038/s41598-019-50313-x
23. Kiss K, Sindelarova A, Krbal L, et al. Imaging margins of skin tumors using laser-induced breakdown spectroscopy and machine learning. J Anal At Spectrom. 2021;36(5):909-916. doi:10.1039/D0JA00469C
24. Thomas Robbins K, Triantafyllou A, Suarez C, et al. Surgical margins in head and neck cancer: Intra- and postoperative considerations. Auris Nasus Larynx. 2019;46(1):10-17. doi:10.1016/j.anl.2018.08.011
25. Van Eycke Y-R, Allard J, Salmon I, Debeir O, Decaestecker C. Image processing in digital pathology: An opportunity to solve inter-batch variability of immunohistochemical staining. Sci Rep. 2017;7(1):42964. doi:10.1038/srep42964
26. Anitha S, Radha V. Comparison of image preprocessing techniques for textile texture images. Int J Eng Sci Technol. 2010;2:12.
27. Dese K, Raj H, Ayana G, et al. Accurate machine-learning-based classification of leukemia from blood smear images. Clin Lymphoma Myeloma Leuk. 2021;21(11):e903-e914. doi:10.1016/j.clml.2021.06.025
28. Rani RU, Amsini P. Image processing techniques used in digital pathology imaging: An overview. Int J Eng Res Comput Sci Eng. 2018;5(1):1-4.
29. Kanmani P, Rajivkannan A, Deepak kumar P, et al. Performance analysis of noise filters using histopathological tissue images. Int Res J Pharm. 2017;8(1):50-54. doi:10.7897/2230-8407.080110
30. Anghel A, Stanisavljevic M, Andani S, et al. A high-performance system for robust stain normalization of whole-slide images in histopathology. Front Med. 2019;6:193. doi:10.3389/fmed.2019.00193
31. Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative adversarial networks in digital pathology and histopathological image processing: A review. J Pathol Inf. 2021;12:43. doi:10.4103/jpi.jpi_103_20
32. Lakshmanan B, Anand S, Jenitha T. Stain removal through color normalization of haematoxylin and eosin images: A review. J Phys Conf Ser. 2019;1362:012108. doi:10.1088/1742-6596/1362/1/012108
33. Smith B, Hermsen M, Lesser E, Ravichandar D. Developing image analysis pipelines of whole-slide images: Pre- and post-processing. J Clin Transl Sci. 2020;5(1):1-33. doi:10.1017/cts.2020.531
34. Macenko M, Niethammer M, Marron JS, et al. A method for normalizing histology slides for quantitative analysis. Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, June 28-July 1, 2009, pp. 1107-1110. doi:10.1109/ISBI.2009.5193250
35. Öztürk Ş, Akdemir B. Effects of histopathological image pre-processing on convolutional neural networks. Procedia Comput Sci. 2018;132:396-403. doi:10.1016/j.procs.2018.05.166
36. Mahbod A, Schaefer G, Wang C, Ecker R, Ellinge I. Skin lesion classification using hybrid deep neural networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, May 12-17, 2019, pp. 1229-1233. doi:10.1109/ICASSP.2019.8683352
37. Amin I, Zamir H, Khan FF. Histopathological image analysis for oral squamous cell carcinoma classification using concatenated deep learning models. medRxiv, 2021. doi:10.1101/2021.05.06.21256741
38. Raheem MA. A deep learning approach for the automatic analysis and prediction of breast cancer for histopathological images using a Webapp. Int J Eng Res Technol. 2021;10(6):996-1001.
39. Praveen Gujjar J, Prasanna Kumar HR, Chiplunkar NN. Image classification and prediction using transfer learning in colab notebook. Glob Transitions Proc. 2021;2(2):382-385. doi:10.1016/j.gltp.2021.08.068
40. Rafi TH. An efficient classification of benign and malignant tumors implementing various deep convolutional neural networks. Int J Comput Sci Eng Appl. 2020;9(2):152-158.
41. Hosny KM, Kassem MA, Foaud MM. Skin melanoma classification using ROI and data augmentation with deep convolutional neural networks. Multimed Tool Appl. 2020;79(33):24029-24055. doi:10.1007/s11042-020-09067-2
42. Rannen Triki A, Blaschko MB, Jung YM, et al. Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks. Comput Med Imaging Graph. 2018;69:21-32. doi:10.1016/j.compmedimag.2018.06.002
43. Kho E, de Boer LL, Van de Vijver KK, et al. Hyperspectral imaging for resection margin assessment during cancer surgery. Clin Cancer Res. 2019;25(12):3572-3580. doi:10.1158/1078-0432.CCR-18-2089
