
symmetry

Article
Within the Lack of Chest COVID-19 X-ray Dataset:
A Novel Detection Model Based on GAN and Deep
Transfer Learning
Mohamed Loey 1, * , Florentin Smarandache 2 and Nour Eldeen M. Khalifa 3
1 Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University,
Benha 13511, Egypt
2 Department of Mathematics, University of New Mexico, Gallup Campus, NM 87301, USA;
smarand@unm.edu
3 Department of Information Technology, Faculty of Computers and Artificial Intelligence, Cairo University,
Cairo 12613, Egypt; nourmahmoud@cu.edu.eg
* Correspondence: mloey@fci.bu.edu.eg

Received: 5 April 2020; Accepted: 16 April 2020; Published: 20 April 2020

Abstract: The coronavirus (COVID-19) pandemic is putting healthcare systems across the world
under unprecedented and increasing pressure, according to the World Health Organization (WHO).
With the advances in computer algorithms, and especially Artificial Intelligence, the detection of
this type of virus in its early stages will help in fast recovery and will help release the pressure on
healthcare systems. In this paper, a GAN with deep transfer learning for coronavirus detection in
chest X-ray images is presented. The lack of COVID-19 datasets, especially of chest X-ray images,
is the main motivation of this scientific study. The main idea is to collect all the possible images for
COVID-19 that exist up to the writing of this research and use the GAN network to generate more
images to help in the detection of this virus from the available X-ray images with the highest accuracy
possible. The dataset used in this research was collected from different sources and is available
for researchers to download and use. The collected dataset contains 307 images
of four different classes: COVID-19, normal, pneumonia bacterial,
and pneumonia virus. Three deep transfer models are selected in this research for investigation.
The models are Alexnet, Googlenet, and Resnet18. Those models are selected for investigation
because they contain a small number of layers in their architectures, which reduces
the complexity, the consumed memory, and the execution time of the proposed model.
Three scenarios are tested through the paper: the first scenario includes four classes from the
dataset, the second scenario includes three classes, and the third scenario includes two classes.
All the scenarios include the COVID-19 class, as it is the main target of this research to be detected.
In the first scenario, Googlenet is selected as the main deep transfer model, as it achieves
80.6% testing accuracy. In the second scenario, Alexnet is selected as the main deep transfer
model, as it achieves 85.2% testing accuracy, while in the third scenario, which includes two classes
(COVID-19 and normal), Googlenet is selected as the main deep transfer model, as it achieves
100% testing accuracy and 99.9% validation accuracy. All the performance measurements
strengthen the results obtained through the research.

Keywords: 2019 novel coronavirus; deep transfer learning; machine learning; COVID-19; SARS-CoV-2;
convolutional neural network; GAN

Symmetry 2020, 12, 651; doi:10.3390/sym12040651 www.mdpi.com/journal/symmetry



1. Introduction

In 2019, Wuhan, a commercial center of Hubei province in China, faced a flare-up of a novel
coronavirus that killed hundreds and infected thousands of individuals within the initial days of
the epidemic. Chinese researchers named the novel virus the 2019 novel coronavirus (2019-nCov),
or the Wuhan virus [1]. The International Committee of Viruses titled the 2019 virus the Severe
Acute Respiratory Syndrome CoronaVirus-2 (SARS-CoV-2) and the malady Coronavirus disease
2019 (COVID-19) [2–4]. The virus subgroups of the coronavirus family are alpha-CoV (α),
beta-CoV (β), gamma-CoV (γ), and delta-CoV (δ). SARS-CoV-2 was announced to be a member
of the beta-CoV (β) group of coronaviruses. In 2003, the people of Kwangtung (Guangdong) were
infected with a virus that led to the Severe Acute Respiratory Syndrome; that virus was confirmed
as a member of the beta-CoV (β) subgroup and titled SARS-CoV [5]. Historically, SARS-CoV
infected more than 8000 individuals across 26 countries, with a death rate of 9%. Moreover,
SARS-CoV-2 has infected more than 750,000 individuals across 150 states, with a death rate of 4%,
up to the date of this writing, which demonstrates that the transmission rate of SARS-CoV-2 is
higher than that of SARS-CoV. The transmission ability is enhanced because of recombination of
the S protein in the RBD (receptor-binding domain) region [6].

Beta-coronaviruses have caused disease in people who have had contact with wild animals,
generally bats or rats [7,8]. SARS-CoV-1 and MERS-CoV (camel flu) were transmitted to people
from wild cats and Arabian camels, respectively, as shown in Figure 1. The sale and purchase of
unknown animals may be the source of coronavirus infection. The discovery of the various lineages
of pangolin coronavirus and their proximity to SARS-CoV-2 suggests that pangolins should be
considered possible hosts of the novel 2019 coronavirus. Wild animals must be removed from
wild-animal markets to stop animal coronavirus transmission [9]. Coronavirus transmission has
been confirmed by the World Health Organization (WHO) and by the US Centers for Disease
Control, with evidence of human-to-human conveyance from five different cases outside China,
namely in Italy [10], the US [11], Nepal [12], Germany [13], and Vietnam [14]. On 31 March 2020,
SARS-CoV-2 accounted for more than 750,000 confirmed cases, 150,000 recovered cases, and 35,000
death cases. Table 1 shows some statistics about SARS-CoV-2 [15].

Figure 1. Coronavirus transmission from animals to humans.

Table 1. SARS-CoV-2 statistics in some countries.

Location          Confirmed   Recovered   Deaths

United States     164,345     5,945       3,171
Italy             101,739     14,620      11,591
Spain             94,417      19,259      8,269
China             81,518      76,052      3,305
Germany           67,051      7,635       682
Iran              44,606      14,656      2,898
France            43,973      7,202       3,018
United Kingdom    22,141      135         1,408
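As an aside, the crude case-fatality ratio implied by Table 1 is simply deaths divided by confirmed cases. A minimal sketch (the `stats` dictionary just transcribes three rows of the table; it is illustrative and not part of the original analysis):

```python
# (confirmed, deaths) pairs transcribed from Table 1 (31 March 2020).
stats = {
    "Italy": (101_739, 11_591),
    "China": (81_518, 3_305),
    "Germany": (67_051, 682),
}

# Crude case-fatality ratio = deaths / confirmed cases.
fatality = {country: deaths / confirmed
            for country, (confirmed, deaths) in stats.items()}
# At that date: Italy ~11.4%, China ~4.1%, Germany ~1.0%.
```

These per-country ratios bracket the worldwide 4% death rate cited in the Introduction.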

1.1. Deep Learning


Deep Learning (DL) is a subfield of machine learning concerned with techniques
inspired by the neurons of the brain [16]. Today, DL is quickly becoming a crucial technology in
image/video classification and detection. DL depends on algorithms for simulating the reasoning
process and for mining data or developing abstractions [17]. Hidden deep layers in DL map input
data to labels to analyze hidden patterns in complicated data [18]. Besides their use in medical X-ray
recognition, DL architectures are also used in other medical image processing and computer vision
applications. DL improves such medical systems to realize higher outcomes, widen illness scope,
and implement applicable real-time medical image disease detection systems [19,20]. Table 2 shows
a series of major contributions in the field of neural networks leading to deep learning [21].

Table 2. Major contributions in the history of the neural network to deep learning [21,22].

Milestone/Contribution Year
McCulloch-Pitts Neuron 1943
Perceptron 1958
Backpropagation 1974
Neocognitron 1980
Boltzmann Machine 1985
Restricted Boltzmann Machine 1986
Recurrent Neural Networks 1986
Autoencoders 1987
LeNet 1990
LSTM 1997
Deep Belief Networks 2006
Deep Boltzmann Machine 2009

1.2. Generative Adversarial Network


Generative Adversarial Network (GAN) is a class of deep learning models invented by Ian
Goodfellow in 2014 [23]. GAN models have two main networks, called the generative network and
the discriminative network. The first neural network is the generator network, responsible for
generating new fake data instances that look like the training data. The discriminator tries to
distinguish between real data and fake (artificially generated) data produced by the generator
network, as shown in Figure 2. In GAN models, the generator network tries to fool the discriminator
network, while the discriminator network tries to avoid being fooled [24–27].

Figure 2. Generative Adversarial Network model.
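The adversarial game described above can be made concrete with the standard GAN losses. The sketch below is a toy illustration, not the paper's implementation: `d_real` and `d_fake` are hypothetical discriminator outputs (probabilities of "real"), and the two loss functions are the usual binary cross-entropy objectives for the discriminator and the non-saturating generator.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy: the discriminator wants D(real) -> 1 and D(fake) -> 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator loss: the generator wants D(fake) -> 1.
    return -np.mean(np.log(d_fake))

# Hypothetical discriminator scores for a tiny batch of real/generated images.
d_real = sigmoid(np.array([2.0, 1.5, 3.0]))     # confident on real samples
d_fake = sigmoid(np.array([-2.0, -1.0, -1.5]))  # suspicious of generated ones

loss_d = discriminator_loss(d_real, d_fake)
loss_g = generator_loss(d_fake)
# Training alternates: the discriminator minimizes loss_d, then the
# generator minimizes loss_g, i.e., tries to push d_fake toward 1.
```

As the generator improves and `d_fake` rises toward 1, `loss_g` falls, which is exactly the "fooling" dynamic described in the text.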

1.3. Convolution Neural Networks


Convolutional Neural Networks (ConvNets or CNNs) are a category of deep learning techniques
used primarily to recognize and classify images. Convolutional Neural Networks have accomplished
extraordinary success in medical image/video classification and detection. In 2012, Ciregan et al.
and Krizhevsky et al. [28,29] showed how CNNs based on the Graphics Processing Unit (GPU)
can enhance many vision benchmark records, such as MNIST [30], Chinese characters [31], Arabic
digits recognition [32], Arabic handwritten characters recognition [33], NORB (jittered, cluttered) [34],
traffic signs [35], and the large-scale ImageNet [36] benchmarks. In the following years, various advances
in ConvNets further increased the accuracy rate on image detection/classification competition
tasks. ConvNet pre-trained models introduced significant improvements in succeeding in the annual
challenges of the ImageNet Large Scale Visual Recognition Competition (ILSVRC). Deep Transfer
Learning (DTL) is a deep learning (DL) approach that focuses on storing the weights gained while
solving one image classification problem and applying them to a related problem. Many DTL models
were introduced, such as VGGNet [37], GoogleNet [38], ResNet [39], Xception [40], Inception-V3 [41],
and DenseNet [42].
The novelty of this paper is as follows: i) the introduced ConvNet models have an
end-to-end structure without classical feature extraction and selection methods; ii) we show that GAN
is an effective technique to generate X-ray images; iii) chest X-ray images are one of the best tools for
the classification of SARS-CoV-2; iv) the deep transfer learning models are shown to yield very
high outcomes on the small COVID-19 dataset. The rest of the paper is organized as follows. Section 2
explores related work and determines the scope of this work. Section 3 discusses the dataset used in
our paper. Section 4 presents the proposed models, while Section 5 illustrates the achieved outcomes
and their discussion. Finally, Section 6 provides conclusions and directions for further research.

2. Related Works
This part conducts a survey of the recent scientific research on applying machine learning and
deep learning in the field of medical pneumonia and coronavirus X-ray classification. Classical image
classification can be divided into three main stages: image preprocessing, feature extraction,
and feature classification. Stephen et al. [43] proposed a new study for classifying and detecting the
presence of pneumonia from a collection of chest X-ray image samples, based on a ConvNet model
trained from scratch on the dataset of [44]. The outcomes obtained were training loss = 12.88%, training
accuracy = 95.31%, validation loss = 18.35%, and validation accuracy = 93.73%.
In [45], the authors introduced an early diagnosis system for pneumonia chest X-ray images
based on Xception and VGG16. In this study, a database containing approximately 5800 frontal chest
X-ray images introduced by Kermany et al. [44] was used: 1600 normal cases and 4200 abnormal
pneumonia cases. The trial outcomes showed that the VGG-16 network outperformed the Xception
network with a classification rate of 87%, whereas the Xception network outperformed VGG-16
in sensitivity (85%), precision (86%), and recall (94%); the Xception network is therefore more
felicitous for classifying X-ray images than the VGG-16 network. Varshni et al. [46] proposed
pre-trained ConvNet models (VGG-16, Xception, Res50, Dense-121, and Dense-169) as feature
extractors followed by different classifiers (SVM, Random Forest, k-nearest neighbors, Naïve Bayes)
for the detection of normal and abnormal pneumonia X-ray images. The authors used ChestX-ray14,
introduced by Wang et al. [47].
Chouhan et al. [48] introduced an ensemble deep model that combines outputs from all transfer
deep models for the classification of pneumonia using the connotation of deep learning. The Guangzhou
Medical Center database [44] introduced a total of approximately 5200 X-ray images, divided into
1300 normal X-rays and 3900 abnormal X-rays. The proposed model reached a miss-classification
error of 3.6% with a sensitivity of 99.6% on test data from the database. Ref. [49] proposed
Compressed Sensing (CS) with a deep transfer learning model for automatic classification of
pneumonia in X-ray images to assist medical physicians. The dataset used for this work contained
approximately 5850 X-ray images of two categories (abnormal/normal) obtained from Kaggle.
Comprehensive simulation outcomes have shown that the proposed approach detects the
classification of pneumonia (abnormal/normal) with 2.66% miss-classification.
In this research, we introduced deep transfer learning models to classify COVID-19 X-ray
images. To adapt chest X-ray images as input to the convolutional neural network, we augmented
the medical X-ray images using GAN to generate additional X-ray images. After that, a classifier is
used to ensemble the outputs of the classification outcomes. The proposed transfer model was
evaluated on the proposed dataset.
3. Dataset

The COVID-19 dataset [50] utilized in this research [51] was created by Dr. Joseph Cohen,
a postdoctoral fellow at the University of Montreal. The Pneumonia Chest X-ray Images dataset [44]
was also used to build the proposed dataset. The dataset [52] is organized into two folders (train, test) and
contains sub-folders for each image category (COVID-19/normal/pneumonia bacterial/pneumonia
virus). There are 306 X-ray images (JPEG) across the four categories. The number of images for each
class is presented in Table 3. Figure 3 illustrates samples of the images used for this research; it also
shows that there is a lot of variation in image sizes and features that may reflect on the accuracy of the
proposed model, which will be presented in the next section.
Table 3. Number of images for each class in the COVID-19 dataset.

Dataset/Class   COVID-19   Normal   Pneumonia_bac   Pneumonia_vir   Total

Train           60         70       70              70              270
Test            9          9        9               9               36
Total           69         79       79              79              306
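The split in Table 3 can be sanity-checked with a few lines (the dictionaries simply transcribe the table; names are illustrative):

```python
# Per-class image counts transcribed from Table 3.
train_counts = {"covid": 60, "normal": 70, "pneumonia_bac": 70, "pneumonia_vir": 70}
test_counts = {"covid": 9, "normal": 9, "pneumonia_bac": 9, "pneumonia_vir": 9}

total_train = sum(train_counts.values())   # 270 training images
total_test = sum(test_counts.values())     # 36 test images
total = total_train + total_test           # 306 images overall
test_fraction = total_test / total         # held-out share of the data
```

Note that the held-out portion is 36/306 ≈ 11.8%, in line with the roughly 10% testing split described in Section 4, and that the COVID-19 class has slightly fewer training images (60 vs. 70) than the other classes.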

Figure 3. Samples of the used images in this research.
Figure 4. The proposed GAN/deep transfer learning model.
4. The Proposed Model

The proposed model includes two main deep learning components: the first component is the
GAN and the second component is the deep transfer model. Figure 4 illustrates the proposed
GAN/deep transfer learning model. Mainly, the GAN is used in the preprocessing phase, while the
deep transfer model is used in the training, validation, and testing phases.
Algorithm 1 introduces the proposed transfer model in detail. Let D = {Alexnet, Googlenet,
Resnet18} be the set of transfer models. Each deep transfer model is fine-tuned with the COVID-19
X-ray Images dataset (X, Y), where X is the set of N input data, each of size 512 length × 512 width,
and Y holds the corresponding class labels, Y = {y | y ∈ {COVID-19; normal; pneumonia bacterial;
pneumonia virus}}. The dataset is divided into train and test sets: the training set (X_train; Y_train)
takes 90% of the data for training and validation, while 10% is held out for testing. The 90% is further
divided into 80% for training and 20% for validation; the selection of 80% for training and 20% for
validation proved efficient in many types of research such as [53–57]. The training data is then
divided into mini-batches, each of size n = 64, such that (X_q; Y_q) ∈ (X_train; Y_train), q = 1, 2, ..., n,
and each DCNN model d ∈ D is iteratively optimized to reduce the loss function illustrated in
Equation (1):

C(w, X_i) = (1/n) Σ_{x ∈ X_i, y ∈ Y_i} c(d(x, w), y),   (1)

where d(x, w) is the ConvNet model that predicts the label y for input x given weights w, and c(·) is
the multi-class cross-entropy loss function.
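Equation (1) is the mini-batch average of the multi-class cross-entropy. A minimal numpy sketch of this loss (an illustrative stand-in for c(d(x, w), y), not the paper's MATLAB code):

```python
import numpy as np

def cross_entropy_batch(probs, labels):
    """Mean multi-class cross-entropy over a mini-batch, as in Equation (1).

    probs  : (n, k) predicted class probabilities, rows summing to 1
    labels : (n,) integer class indices in 0..k-1
    """
    n = probs.shape[0]
    # Pick the predicted probability of the true class for each sample.
    true_class_probs = probs[np.arange(n), labels]
    return -np.mean(np.log(true_class_probs))

# Four classes: COVID-19, normal, pneumonia bacterial, pneumonia virus.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.6, 0.1, 0.1]])
labels = np.array([0, 1])
loss = cross_entropy_batch(probs, labels)
```

The loss is zero only when the model assigns probability 1 to the correct class for every sample, and grows as confidence in the true class drops.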
This research relied on deep transfer learning CNN architectures to transfer the learning
weights, to reduce the training time, the mathematical calculations, and the consumption of the
available hardware resources. Several types of research in [53,58,59] tried to build their own
architectures, but those architectures are problem-specific and cannot fit the data presented in this
paper. The deep transfer learning CNN models investigated in this research are Alexnet [29],
Resnet18 [39], and Googlenet [60]. The mentioned CNN models have a few layers compared to large
CNN models such as Xception [40], Densenet [42], and Inceptionresnet [61], which consist of 71, 201,
and 164 layers, respectively. The choice of these models will reflect on reducing the training time and
the complexity of the calculations.

Algorithm 1 Introduced algorithm.

1: Input data: COVID-19 Chest X-ray Images (X, Y); where Y = {y | y ∈ {COVID-19; normal; pneumonia
   bacterial; pneumonia virus}}
2: Output data: The transfer model that detected the COVID-19 Chest X-ray image x ∈ X
3: Pre-processing steps:
4:   modify the X-ray input to dimension 512 height × 512 width
5:   generate X-ray images using GAN
6:   mean normalize each X-ray data input
7: download and reuse transfer models D = {Alexnet, Googlenet, Resnet18}
8: replace the last layer of each transfer model by a (4 × 1) layer dimension
9: foreach d ∈ D do
10:   µ = 0.01
11:   for epochs = 1 to 20 do
12:     foreach mini-batch (X_i; Y_i) ∈ (X_train; Y_train) do
          modify the coefficients of the transfer model d(·)
          if the error rate has increased for five epochs then
            µ = µ × 0.01
          end
        end
13:   end
14: end
15: foreach x ∈ X_test do
16:   record the outcome of all transfer architectures, d ∈ D
17: end
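One reasonable reading of the learning-rate rule in Algorithm 1 (start at µ = 0.01, multiply by 0.01 once the error rate has increased for five consecutive epochs) can be sketched as follows; `run_schedule` and its error-tracking logic are an assumption about how the rule is applied, since the algorithm does not spell it out:

```python
def run_schedule(error_per_epoch, mu=0.01, patience=5, factor=0.01):
    """Track the learning-rate drops prescribed in Algorithm 1.

    error_per_epoch : error rate observed after each epoch
    Returns the learning rate in effect after each epoch.
    """
    mus = []
    bad_epochs = 0
    best_error = float("inf")
    for err in error_per_epoch:
        if err > best_error:
            bad_epochs += 1          # error increased relative to the best seen
        else:
            best_error = err
            bad_epochs = 0
        if bad_epochs >= patience:
            mu *= factor             # the rule: mu = mu * 0.01
            bad_epochs = 0
        mus.append(mu)
    return mus
```

With, say, two improving epochs followed by five worsening ones, µ stays at 0.01 and then drops to 0.0001, matching line 12's inner rule.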

4.1. Generative Adversarial Network


GANs consist of two different types of networks that are trained simultaneously.
The first network is trained on image generation while the other is used for discrimination. GANs are
considered a special type of deep learning model. The first network is the generator, while the
second network is the discriminator. The generator network in this research consists of five transposed
convolutional layers, four ReLU layers, four batch normalization layers, and a Tanh layer at the end of
the model, while the discriminator network consists of five convolutional layers, four leaky ReLU
layers, and three batch normalization layers. All the convolutional and transposed convolutional layers
use the same window size of 4 × 4 pixels with 64 filters per layer. Figure 5 presents the structure and the
sequence of layers of the GAN network proposed in this research.
The GAN network helped in overcoming the overfitting problem caused by the limited number of
images in the dataset. Moreover, it increased the dataset to 30 times the size of the original:
the number of images reached 8100 after using the GAN network for the 4 classes.
This helps in achieving a remarkable testing accuracy and performance metrics. The achieved
results will be deliberated in detail in the experimental outcomes section. Figure 6 presents samples of
the output of the GAN network for the COVID-19 class.
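The 30× augmentation figure quoted above is straightforward arithmetic over the training split (illustrative only; the dictionary transcribes Table 3):

```python
# Training images per class before augmentation (Table 3).
train_counts = {"covid": 60, "normal": 70, "pneumonia_bac": 70, "pneumonia_vir": 70}
AUGMENTATION_FACTOR = 30   # each original image yields 30 GAN-generated images

augmented_total = sum(n * AUGMENTATION_FACTOR for n in train_counts.values())
# 270 originals x 30 = 8100 images over the 4 classes, matching the text.
```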

Figure 5. The structure and the sequence of layers for the proposed GAN network.


Figure 6. Samples of images generated using the proposed GAN structure.

4.2. Deep Transfer Learning


4.2. Deep Transfer Learning
Convolutional Neural Networks (ConvNet) is the most successful type of model for image
Convolutional Neural Networks (ConvNet) is the most successful type of model for image
classification and detection. A single ConvNet model contains many different layers of neural networks
classification and detection. A single ConvNet model contains many different layers of neural
that work on labeling edges and simple/complex features on neural network layers and more complex
networks that work on labeling edges and simple/complex features on neural network layers and
deep features in deeper network layers. An image is convolved with filters (kernels) and then max
more complex deep features in deeper network layers. An image is convolved with filters (kernels)
pooling is applied, this process may go on for some layers and at last recognizable features are obtained.
and then max pooling is applied, this process may go on for some layers and at last recognizable
Take the size of W l−1 × Hl−1 × Cl−1 (where WVWX × H is width × height) feature map and a filterbank in
features are obtained. Take the size of 𝑊 × 𝐻VWX × 𝐶 VWX (where 𝑊 × 𝐻 is width × height)
layer l − 1 for example within Cl kernels at the size of f l × Cl−1 , augmenting the other two coefficients
feature map and a filterbank in layer 𝑙 − 1 for example withinl 𝐶 V lkernels at the size of 𝑓 V × 𝐶 VWX ,
stride sl and padding pl , the outcome feature box in layer l is W × H × Cl as shown in Equation (2):
augmenting the other two coefficients stride 𝑠 and padding 𝑝 , the outcome feature box in layer 𝑙
V V

is 𝑊 V × 𝐻V × 𝐶 V as shown in Equation (2): "


(W l−1 × Hl−1 ) + 2pl − f l
#
(W l , H l ) = + 1 , (2)
VWX VWX
sl ) + 2𝑝V − 𝑓 V
(𝑊 × 𝐻
(𝑊 V , 𝐻V ) = _ + 1a, (2)
𝑠V
where ⌊·⌋ indicates the floor function. The number of kernel channels must be equal to that of the input map, as in Equation (3):

        x_j^l = σ(Σ_{i∈V_j} x_i^{l-1} × f_{ij}^l + b_j^l),  (3)

where i and j are the indexes of the input and output network maps, of sizes W^{l-1} × H^{l-1} and W^l × H^l, respectively. V_j here indicates the receptive field of the kernel, and b_j^l is the bias term. In Equation (3), σ(·) is a non-linearity function applied to obtain non-linearity in deep transfer learning. In our transfer method, we used the ReLU in Equation (4) as the non-linearity function for a rapid training process:

        σ(x_input) = max(0, x_input).  (4)
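To make Equations (2) and (4) concrete, the sketch below computes the output spatial size of a convolutional layer and applies the ReLU non-linearity. This is a minimal Python illustration, not the authors' MATLAB implementation; the 227 × 227 input with an 11 × 11 kernel, stride 4, and no padding is an Alexnet-style example chosen only for illustration.

```python
import math

def conv_output_size(w_in, f, p, s):
    """Equation (2) for one spatial dimension: floor((w_in + 2p - f) / s + 1)."""
    return math.floor((w_in + 2 * p - f) / s + 1)

def relu(x):
    """Equation (4): sigma(x) = max(0, x)."""
    return max(0.0, x)

# Alexnet-style first layer: 227-pixel input, 11-pixel kernel, stride 4, padding 0.
w_out = conv_output_size(227, f=11, p=0, s=4)  # -> 55
```

Under these settings, a 227-pixel dimension shrinks to a 55-pixel feature-map dimension, matching the floor rule of Equation (2).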

Our cost function is given in Equation (5):

        L(s, t) = L_cls(s_{c*}) + λ[p* > 0] L_reg(g, g*),  (5)

where s_{c*} is the output score of label c*, while g and g* denote the [g_x, g_y, g_w, g_h] of the bounding boxes. The term λ[p* > 0] considers only the non-background boxes (p* = 0 denotes background). This cost function has a detection loss L_cls and a regression loss L_reg, given in Equations (6)–(8):

        L_cls(s_{c*}) = −log(s_{c*})  (6)

and

        L_reg(g, g*) = Σ_{i∈(x,y,w,h)} R_L1(g_i − g_i*),  (7)

where:

        R_L1(x) = 0.5x²,       if |x| < 1
                  |x| − 0.5,   otherwise.  (8)
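The losses in Equations (6)–(8) can be sketched directly. The following is an illustrative Python re-implementation of the formulas above, not the authors' code:

```python
import math

def l_cls(s_true):
    """Equation (6): negative log of the true-class score."""
    return -math.log(s_true)

def smooth_l1(x):
    """Equation (8): quadratic near zero (|x| < 1), linear otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def l_reg(g, g_star):
    """Equation (7): smooth-L1 summed over the (x, y, w, h) box coordinates."""
    return sum(smooth_l1(gi - gsi) for gi, gsi in zip(g, g_star))
```

A perfect classification (s_{c*} = 1) gives zero detection loss, and identical predicted and ground-truth boxes give zero regression loss.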
In terms of the optimizer technique, Stochastic Gradient Descent (SGD) [62] with a momentum of 0.9 is chosen as our optimizer, which updates the weight parameters. This optimizer combines the gradient from the previous iteration with the current gradient to fine-tune the weights. To bypass overfitting problems in the deep learning network, we address this issue by using the dropout technique [63] and the early-stopping technique [64] to select the best training step. As for the learning rate policy, the step-size technique is performed in SGD. We set the learning rate (µ) to 0.01 and the number of iterations to 2000. The mini-batch size is set to 64, and early stopping is triggered after five epochs if the accuracy does not improve.
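The training policy above (momentum SGD plus patience-based early stopping) can be sketched as follows. The quadratic toy objective and its gradient are hypothetical stand-ins for the network loss, used only to show the update and stopping rules:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One momentum-SGD update: v <- 0.9*v - lr*grad; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def train(grad_fn, loss_fn, w0, max_iters=2000, patience=5):
    """Iterate updates, stopping early after `patience` epochs without improvement."""
    w, v = w0, 0.0
    best, stale = float("inf"), 0
    for _ in range(max_iters):
        w, v = sgd_momentum_step(w, grad_fn(w), v)
        loss = loss_fn(w)
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping after 5 stagnant epochs
                break
    return w

# Toy quadratic objective f(w) = w^2, so grad f(w) = 2w.
w_final = train(lambda w: 2 * w, lambda w: w * w, w0=1.0)
```

With momentum, the iterate can overshoot and oscillate around the minimum, which is exactly the situation where the patience counter ends training rather than letting it run for the full iteration budget.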

5. Experimental Results
The introduced model was coded using a software package (MATLAB), and development was CPU-specific. All experiments were conducted on a computer server equipped with an Intel Xeon processor (2 GHz) and 96 GB of RAM. The proposed model was tested under three different scenarios: the first scenario tests the proposed model on four classes, the second scenario on three classes, and the third on two classes. All the test scenarios included the COVID-19 class. Every scenario consists of a validation phase and a testing phase. In the validation phase, 20% of the total generated images are used, while the testing phase uses around 10% of the original dataset.
The main difference between validation-phase and testing-phase accuracy is that, in the validation phase, the data are used to validate the generalization ability of the model or for early stopping during the training process, whereas in the testing phase, the data are used for purposes other than training and validating. The data used in training, validation, and testing never overlap with each other, to build a concrete result for the proposed model.
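A non-overlapping split of the kind described above can be sketched as follows. The list contents, pool sizes, and the exact assignment of the remaining images to training are illustrative assumptions, not the paper's actual data handling:

```python
import random

def split_dataset(original, generated, test_frac=0.10, val_frac=0.20, seed=0):
    """Draw testing from original images only and validation from generated
    images only; the remainder of both pools goes to training, so the three
    sets never overlap."""
    rng = random.Random(seed)
    orig, gen = original[:], generated[:]
    rng.shuffle(orig)
    rng.shuffle(gen)
    n_test = int(len(orig) * test_frac)
    n_val = int(len(gen) * val_frac)
    test = orig[:n_test]
    val = gen[:n_val]
    train = orig[n_test:] + gen[n_val:]
    return train, val, test

originals = [f"orig_{i}" for i in range(100)]   # placeholder original images
generateds = [f"gen_{i}" for i in range(300)]   # placeholder GAN-generated images
train, val, test = split_dataset(originals, generateds)
```

Because each image is assigned to exactly one set, the intersections of the train, validation, and test sets are empty by construction.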
Before listing the major results of this research, Table 4 presents the validation and testing accuracy for four classes before using GAN as an image augmenter. The results in Table 4 show that the validation and testing accuracies are quite low and not acceptable for a coronavirus detection model.
Symmetry 2020, 12, 651 10 of 19

Table 4. Validation and testing accuracy for 4 classes according to 3 deep transfer learning models without using GAN.

Model/Validation-Testing Accuracy    Alexnet    Googlenet    Resnet18
Validation Accuracy                  73.1%      76.9%        67.3%
Testing Accuracy                     52.0%      52.8%        50.0%

5.1. Validation and Testing Accuracy Measurement


Testing accuracy is one of the measures that demonstrates the precision and accuracy of any proposed model. The confusion matrix is also one of the accurate measurements, giving more insight into the achieved validation and testing accuracy. First, the four-class scenario will be investigated with the three types of deep transfer learning models: Alexnet, Googlenet, and Resnet18. Figures 7–9 illustrate the confusion matrices for the validation and testing phases for the four classes in the dataset.

Figure 7. Confusion matrices of Alexnet for 4 classes (a) validation accuracy, and (b) testing accuracy.

Figure 8. Confusion matrices of Googlenet for 4 classes (a) validation accuracy, and (b) testing accuracy.
Figure 9. Confusion matrices of Resnet18 for 4 classes (a) validation accuracy, and (b) testing accuracy.

Table 5 summarizes the validation and the testing accuracy of the different deep transfer models for four classes. According to validation accuracy, Resnet18 achieved the highest accuracy with 99.6%. This is due to the number of parameters in the Resnet18 architecture, which contains 11.7 million parameters; this is not larger than Alexnet, but Alexnet only includes 8 layers while Resnet18 includes 18 layers. According to testing accuracy, Googlenet achieved the highest accuracy with 80.6%, due to its large number of layers compared to the other models, as it contains about 22 layers.
Table 5. Validation and testing accuracy for 4 classes according to 3 deep transfer learning models.

Model/Validation-Testing Accuracy    Alexnet    Googlenet    Resnet18
Validation Accuracy                  98.5%      98.9%        99.6%
Testing Accuracy                     66.7%      80.6%        66.7%


The second scenario tested in this research is when the dataset only contains three classes. Figures 10–12 illustrate the confusion matrices for the validation and testing phases for the three classes in the dataset, including the COVID-19 class.

Figure 10. Confusion matrices of Alexnet for 3 classes (a) validation accuracy, and (b) testing accuracy.



Figure 11. Confusion matrices of Googlenet for 3 classes (a) validation accuracy, and (b) testing accuracy.

Figure 12. Confusion matrices of Resnet18 for 3 classes (a) validation accuracy, and (b) testing accuracy.
Table 6 summarizes the validation and the testing accuracy of the different deep transfer models for three classes. According to validation accuracy, Resnet18 achieved the highest accuracy with 99.6%. According to testing accuracy, Alexnet achieved the highest accuracy with 85.2%; this may be due to the large number of parameters in the Alexnet architecture, which includes 61 million parameters, and also due to the elimination of the fourth class, the pneumonia virus, which has features similar to COVID-19, itself considered a type of pneumonia virus. Eliminating the pneumonia virus class helps all the deep transfer models achieve better testing accuracy than when they are trained over four classes since, as mentioned before, COVID-19 is a special type of pneumonia virus.

Table 6. Validation and testing accuracy for 3 classes according to 3 deep transfer learning models.

Model/Validation-Testing Accuracy    Alexnet    Googlenet    Resnet18
Validation Accuracy                  97.2%      98.3%        99.6%
Testing Accuracy                     85.2%      81.5%        81.5%

The third scenario tested is when the dataset only includes two classes: the COVID-19 class and the normal class. Figure 13 illustrates the confusion matrices of the three different transfer models for validation accuracy, while the confusion matrix for testing accuracy is presented in Figure 14, which is the same for all the deep transfer models selected in this research.


Figure 13. Confusion matrices of the validation accuracy for (a) Alexnet, (b) Googlenet, and (c) Resnet18.

Figure 14. Confusion matrix for testing accuracy for Alexnet, Googlenet, and Resnet18.
Table 7 summarizes the validation and the testing accuracy of the different deep transfer models for two classes. According to validation accuracy, Googlenet achieved the highest accuracy with 99.9%. According to testing accuracy, all the pre-trained models (Alexnet, Googlenet, and Resnet18) achieved the highest accuracy with 100%. This is due to the elimination of the third and fourth classes, pneumonia bacterial and pneumonia virus, which have features similar to COVID-19. This leads to a noteworthy enhancement in the testing accuracy: whichever deep transfer model is used, the testing accuracy reaches 100%. The choice of the best model here is therefore made according to validation accuracy, and Googlenet, which achieved 99.9%, is the selected deep transfer model in the third scenario.
Table 7. Validation and testing accuracy for 2 classes according to 3 deep transfer learning models.

Model/Validation-Testing Accuracy    Alexnet    Googlenet    Resnet18
Validation Accuracy                  99.6%      99.9%        99.8%
Testing Accuracy                     100%       100%         100%


To conclude this part, every scenario has its own selected deep transfer model. In the first scenario, Googlenet was selected; in the second scenario, Alexnet was selected; and finally, in the third scenario, Googlenet was selected as the deep transfer model. To draw a full conclusion on the deep transfer learning model that fits the dataset and all scenarios, the testing accuracy for every class is required for the different deep transfer models. Table 8 presents the testing accuracy for every class for the three different scenarios. Table 8 does not help much to determine the deep transfer model that fits all scenarios, but for the distinction of the COVID-19 class among the other classes, Alexnet and Resnet18 would be selected as deep transfer models, as they achieved 100% testing accuracy for the COVID-19 class whatever the number of classes (2, 3, or 4).

Table 8. Testing accuracy for every class for the different 3 scenarios.

# of Classes    Class Name        Alexnet    Googlenet    Resnet18
4 classes       Covid             100%       100%         100%
                Normal            64.3%      100%         100%
                Pneumonia_bac     44.4%      70%          50%
                Pneumonia_vir     50%        66.7%        40%
3 classes       Covid             100%       81.8%        100%
                Normal            77.7%      75.0%        100%
                Pneumonia_bac     77.8%      87.5%        64.3%
2 classes       Covid             100%       100%         100%
                Normal            100%       100%         100%

5.2. Performance Evaluation and Discussion


To estimate the performance of the proposed model, extra performance metrics are required to be explored through this study. The most widespread performance measures in the field of deep learning are Precision, Sensitivity (recall), and F1 score [65]; they are presented in Equations (9)–(11):

        Precision = TrueP / (TrueP + FalseP)  (9)

        Sensitivity = TrueP / (TrueP + FalseN)  (10)

        F1 Score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity)  (11)
where TrueP is the count of true positive samples, TrueN is the count of true negative samples,
FalseP is the count of false positive samples, and FalseN is the count of false negative samples from a
confusion matrix.
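Equations (9)–(11) follow directly from these counts. The sketch below uses hypothetical confusion-matrix counts, not values from the paper's experiments:

```python
def precision(true_p, false_p):
    """Equation (9)."""
    return true_p / (true_p + false_p)

def sensitivity(true_p, false_n):
    """Equation (10), also called recall."""
    return true_p / (true_p + false_n)

def f1_score(p, r):
    """Equation (11): harmonic mean of precision and sensitivity."""
    return 2 * p * r / (p + r)

# Hypothetical counts for one class of a confusion matrix.
p = precision(true_p=90, false_p=10)    # 0.90
r = sensitivity(true_p=90, false_n=30)  # 0.75
f1 = f1_score(p, r)
```

Note that the F1 score always lies between precision and sensitivity, sitting closer to the smaller of the two.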
Table 9 presents the performance metrics of the different scenarios and deep transfer models for the testing phase. The table illustrates that in the first scenario, which contains four classes, Googlenet achieved the highest percentage for the precision, sensitivity, and F1 score metrics, which strengthens the research decision of choosing Googlenet as the deep transfer model. The table also illustrates that in the second scenario, which contains three classes, Alexnet achieved the highest recall and F1 score, while Resnet18 achieved the highest precision with 88.10%; overall, Alexnet had the highest testing accuracy, which also strengthens the research decision of choosing Alexnet as the deep transfer model.
Table 9 also illustrates that in the third scenario, which contains two classes, all deep transfer learning models achieved the same highest percentage for the precision, recall, and F1 score metrics, which strengthens the research decision of choosing Googlenet, as it achieved the highest validation accuracy with 99.9%, as illustrated in Table 7.
Table 9. Performance measurements for different scenarios.

# of Classes    Metric              Alexnet    Googlenet    Resnet18
4 classes       Precision           64.68%     84.17%       72.50%
                Recall              66.67%     80.56%       66.67%
                F1 Score            65.66%     82.32%       69.46%
                Testing Accuracy    66.67%     80.56%       66.67%
3 classes       Precision           85.19%     81.44%       88.10%
                Recall              85.19%     81.48%       81.48%
                F1 Score            85.19%     81.46%       84.66%
                Testing Accuracy    85.19%     81.48%       81.48%
2 classes       Precision           100%       100%         100%
                Recall              100%       100%         100%
                F1 Score            100%       100%         100%
                Testing Accuracy    100%       100%         100%

6. Conclusions and Future Works


The 2019 novel coronavirus (COVID-19) belongs to a family of viruses that leads to illnesses ranging from the common cold to more severe diseases and may lead to death, according to the World Health Organization (WHO). With the advances in computer algorithms, and especially artificial intelligence, the detection of this type of virus in its early stages will help in fast recovery. In this paper, a GAN with deep transfer learning for COVID-19 detection in limited chest X-ray images is presented. The lack of benchmark datasets for COVID-19, especially in chest X-ray images, was the main motivation of this research. The main idea is to collect all the possible images for COVID-19 and use the GAN network to generate more images to help in the detection of the virus from the available X-ray images. The dataset in this research was collected from different sources. The number of images in the collected dataset was 307 images for four classes: COVID-19, normal, pneumonia bacterial, and pneumonia virus.
Three deep transfer models were selected in this research for investigation. These models were chosen because they contain a small number of layers in their architectures, which reduces the complexity, the consumed memory, and the runtime of the proposed model. A three-scenario case study was tested in the paper: the first scenario included the four classes of the dataset, while the second scenario included three classes and the third scenario included two classes. All the scenarios included the COVID-19 class, as its detection was the main target of this research. In the first scenario, Googlenet was selected as the main deep transfer model as it achieved 80.6% testing accuracy. In the second scenario, Alexnet was selected as the main deep transfer model as it achieved 85.2% testing accuracy, while in the third scenario, which included two classes (COVID-19 and normal), Googlenet was selected as the main deep transfer model as it achieved 100% testing accuracy and 99.9% validation accuracy.
One open door for future work is to apply the deep models to a larger benchmark dataset.

Author Contributions: All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Singhal, T. A Review of Coronavirus Disease-2019 (COVID-19). Indian J. Pediatrics 2020, 87, 281–286.
[CrossRef]
2. Lai, C.-C.; Shih, T.-P.; Ko, W.-C.; Tang, H.-J.; Hsueh, P.-R. Severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2) and coronavirus disease-2019 (COVID-19): The epidemic and the challenges. Int. J. Antimicrob.
Agents 2020, 55, 105924. [CrossRef]
3. Li, J.; Li, J.J.; Xie, X.; Cai, X.; Huang, J.; Tian, X.; Zhu, H. Game consumption and the 2019 novel coronavirus.
Lancet Infect. Dis. 2020, 20, 275–276. [CrossRef]
4. Sharfstein, J.M.; Becker, S.J.; Mello, M.M. Diagnostic Testing for the Novel Coronavirus. JAMA 2020.
[CrossRef] [PubMed]
5. Chang, L.; Yan, Y.; Wang, L. Coronavirus Disease 2019: Coronaviruses and Blood Safety. Transfus. Med. Rev.
2020. [CrossRef]
6. Shereen, M.A.; Khan, S.; Kazmi, A.; Bashir, N.; Siddique, R. COVID-19 infection: Origin, transmission,
and characteristics of human coronaviruses. J. Adv. Res. 2020, 24, 91–98. [CrossRef] [PubMed]
7. Rabi, F.A.; Al Zoubi, M.S.; Kasasbeh, G.A.; Salameh, D.M.; Al-Nasser, A.D. SARS-CoV-2 and Coronavirus
Disease 2019: What We Know So Far. Pathogens 2020, 9, 231. [CrossRef]
8. York, A. Novel coronavirus takes flight from bats? Nat. Rev. Microbiol. 2020, 18, 191. [CrossRef]
9. Lam, T.T.-Y.; Shum, M.H.-H.; Zhu, H.-C.; Tong, Y.-G.; Ni, X.-B.; Liao, Y.-S.; Wei, W.; Cheung, W.Y.-M.; Li, W.-J.;
Li, L.-F.; et al. Identifying SARS-CoV-2 related coronaviruses in Malayan pangolins. Nature 2020, 1–6.
[CrossRef] [PubMed]
10. Giovanetti, M.; Benvenuto, D.; Angeletti, S.; Ciccozzi, M. The first two cases of 2019-nCoV in Italy: Where
they come from? J. Med. Virol. 2020, 92, 518–521. [CrossRef]
11. Holshue, M.L.; DeBolt, C.; Lindquist, S.; Lofy, K.H.; Wiesman, J.; Bruce, H.; Spitters, C.; Ericson, K.;
Wilkerson, S.; Tural, A.; et al. First Case of 2019 Novel Coronavirus in the United States. N. Engl. J. Med.
2020, 382, 929–936. [CrossRef] [PubMed]
12. Bastola, A.; Sah, R.; Rodriguez-Morales, A.J.; Lal, B.K.; Jha, R.; Ojha, H.C.; Shrestha, B.; Chu, D.K.W.;
Poon, L.L.M.; Costello, A.; et al. The first 2019 novel coronavirus case in Nepal. Lancet Infect. Dis. 2020, 20,
279–280. [CrossRef]
13. Rothe, C.; Schunk, M.; Sothmann, P.; Bretzel, G.; Froeschl, G.; Wallrauch, C.; Zimmer, T.; Thiel, V.; Janke, C.;
Guggemos, W.; et al. Transmission of 2019-nCoV Infection from an Asymptomatic Contact in Germany.
N. Engl. J. Med. 2020, 382, 970–971. [CrossRef] [PubMed]
14. Phan, L.T.; Nguyen, T.V.; Luong, Q.C.; Nguyen, T.V.; Nguyen, H.T.; Le, H.Q.; Nguyen, T.T.; Cao, T.M.;
Pham, Q.D. Importation and Human-to-Human Transmission of a Novel Coronavirus in Vietnam. N. Engl. J.
Med. 2020, 382, 872–874. [CrossRef] [PubMed]
15. Coronavirus (COVID-19) Map. Available online: https://www.google.com/COVID-19-map/ (accessed on 31
March 2020).
16. Rong, D.; Xie, L.; Ying, Y. Computer vision detection of foreign objects in walnuts using deep learning.
Comput. Electron. Agric. 2019, 162, 1001–1010. [CrossRef]
17. Eraslan, G.; Avsec, Ž.; Gagneur, J.; Theis, F.J. Deep learning: new computational modelling techniques for
genomics. Nat. Rev. Genet. 2019, 20, 389–403. [CrossRef]
18. Riordon, J.; Sovilj, D.; Sanner, S.; Sinton, D.; Young, E.W.K. Deep Learning with Microfluidics for Biotechnology.
Trends Biotechnol. 2019, 37, 310–324. [CrossRef]
19. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. für
Med. Phys. 2019, 29, 102–127. [CrossRef]
20. Maier, A.; Syben, C.; Lasser, T.; Riess, C. A gentle introduction to deep learning in medical image processing.
Z. für Med. Phys. 2019, 29, 86–101. [CrossRef]
21. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7,
53040–53065. [CrossRef]
22. Pouyanfar, S.; Sadiq, S.; Yan, Y.; Tian, H.; Tao, Y.; Reyes, M.P.; Shyu, M.-L.; Chen, S.-C.; Iyengar, S.S. A Survey
on Deep Learning: Algorithms, Techniques, and Applications. ACM Comput. Surv. 2018, 51, 1–36. [CrossRef]

23. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y.
Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information
Processing Systems—Volume 2, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA,
USA, 2014; pp. 2672–2680.
24. Cao, Y.; Jia, L.; Chen, Y.; Lin, N.; Yang, C.; Zhang, B.; Liu, Z.; Li, X.; Dai, H. Recent Advances of Generative
Adversarial Networks in Computer Vision. IEEE Access 2019, 7, 14985–15006. [CrossRef]
25. Gonog, L.; Zhou, Y. A Review: Generative Adversarial Networks. In Proceedings of the 2019 14th IEEE
Conference on Industrial Electronics and Applications (ICIEA), Xi’an, China, 19–21 June 2019; pp. 505–510.
26. Lee, M.; Seok, J. Controllable Generative Adversarial Network. IEEE Access 2019, 7, 28158–28169. [CrossRef]
27. Caramihale, T.; Popescu, D.; Ichim, L. Emotion Classification Using a Tensorflow Generative Adversarial
Network Implementation. Symmetry 2018, 10, 414. [CrossRef]
28. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification.
In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI,
USA, 16–21 June 2012; pp. 3642–3649.
29. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks.
Adv. Neural Inf. Process. Syst. 2012, 1097–1105. [CrossRef]
30. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition.
Proc. IEEE 1998, 86, 2278–2324. [CrossRef]
31. Yin, F.; Wang, Q.; Zhang, X.; Liu, C. ICDAR 2013 Chinese Handwriting Recognition Competition.
In Proceedings of the 2013 12th International Conference on Document Analysis and Recognition, Washington,
DC, USA, 25–28 August 2013; pp. 1464–1470.
32. El-Sawy, A.; EL-Bakry, H.; Loey, M. CNN for Handwritten Arabic Digits Recognition Based on LeNet-5 BT.
In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2016, Cairo,
Egypt, 24–26 October 2016; Hassanien, A.E., Shaalan, K., Gaber, T., Azar, A.T., Tolba, M.F., Eds.; Springer
International Publishing: Cham, Switzerland, 2017; pp. 566–575.
33. El-Sawy, A.; Loey, M.; EL-Bakry, H. Arabic Handwritten Characters Recognition Using Convolutional Neural
Network. WSEAS Trans. Comput. Res. 2017, 5, 11–19.
34. LeCun, Y.; Huang, F.J.; Bottou, L. Learning methods for generic object recognition with invariance to pose
and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; Volume 2, p. II-104.
35. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class
classification competition. In Proceedings of the The 2011 International Joint Conference on Neural Networks,
San Jose, CA, USA, 31 July–5 August 2011; pp. 1453–1460.
36. Deng, J.; Dong, W.; Socher, R.; Li, L.; Kai, L.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In
Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA,
20–25 June 2009; pp. 248–255.
37. Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training
sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala
Lumpur, Malaysia, 3–6 November 2015; pp. 730–734.
38. Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A.
Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
40. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017;
pp. 1800–1807.
41. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer
vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV,
USA, 26 June–1 July 2016; pp. 2818–2826.

42. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks.
In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu,
HI, USA, 21–26 July 2017; pp. 2261–2269.
43. Stephen, O.; Sain, M.; Maduh, U.J.; Jeong, D.-U. An Efficient Deep Learning Approach to Pneumonia
Classification in Healthcare. J. Healthc. Eng. 2019, 2019, 4180949. [CrossRef] [PubMed]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).