
A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images

Kanimozhi Sampath1, Sivakumar Rajagopal1* & Ananthakrishna Chintanpalli2

1Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India. 2Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India. *email: rsivakumar@vit.ac.in
Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. The benign type of bone cancer is harmless and does not spread to other body parts, whereas the malignant type can spread to other body parts and might be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by enabling treatment at the initial stages. Prior detection of these lumps or masses can reduce the risk of death and allow bone cancer to be treated early. The goal of the current study is to utilize image processing techniques and a deep learning-based Convolutional Neural Network (CNN) to classify normal and cancerous bone images. Medical image processing techniques, such as pre-processing (e.g., median filter), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in Computed Tomography (CT) images for the parosteal osteosarcoma, enchondroma, and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model showed the best performance, with a training accuracy of 98%, validation accuracy of 98%, and testing accuracy of 100%.

Bones are made of two regions, outer and inner. The outer region is compact and enclosed by cancellous tissues, while the inner region consists of blood-producing material1. Bone cancer can originate from any part of the bones and can occur due to hereditary factors or previous radiation exposure. Benign cancer occurs commonly and is asymptomatic until the disease spreads to or injures other body parts, whereas malignant cancer can lead to the patient's death unless treated at an early stage2. Since most of these cancers are asymptomatic, early diagnosis and treatment are critical to stop spreading to other regions of the body. Bone cancer is divided into primary and secondary types. If the unrestricted cell growth is not treated in the primary type, the cancer can develop unwanted new cells that may later lead to death. In the primary type, cancer starts from the cells of the bone, whereas in the secondary type, cancer starts from other body regions and then affects the cells of the bone3. Early detection of bone cancer has a chance of reducing the death rate. In the beginning stage, the symptoms of bone cancer may include changes in bowel movement, formation of new lumps, weight loss, bone loss, pain, and weakness in the bones4. Proper treatment of cancer requires information such as patient history, physical examination, and imaging (e.g., X-ray2, Computed Tomography (CT)5, Magnetic Resonance Imaging (MRI)6, and Positron Emission Tomography (PET)7). Radiologists prefer medical imaging procedures for cancer detection because of their time efficiency, low cost, and suitability for early detection. The pre-processing, segmentation, feature extraction, and classification stages are incorporated in medical devices for early diagnosis8. The pre-processing stage uses a bilateral, median, or Gaussian filter to remove noise from the images9,10. After noise removal, cancer regions can be segmented using threshold-based11, region-based11,12, or edge-based segmentation13 methods. Segmentation techniques such as Prewitt, Canny, Sobel, K-means, and region growing have been used to analyze the osteosarcoma type of bone cancer in X-ray images2,10,13. K-means and edge detection segmentation algorithms have also been used for bone cancer14. After segmenting the cancer regions, seven Gray Level Co-occurrence Matrix (GLCM) features were extracted from the image. These features were then trained and tested using a K-nearest neighbors (KNN) classifier, with a resulting accuracy of 98.18%14.



The fusion of K-means with fuzzy C-means segmentation of MRI images was used to calculate the mean intensity to identify cancer and non-cancer images; the accuracy rate was 98%, with a sensitivity of 65.21% and a specificity of 98.47%15. A set of 105 X-ray images, with 65 cancerous and 40 normal, was used to extract histogram of gradient and GLCM features. Using a support vector machine (SVM) classifier, an accuracy of 92.5% was achieved16. A further 36 X-ray images were used to extract cancer border clarity and GLCM features, and these features were then used to classify benign and cancerous images using random forest and SVM classifiers, with resulting testing accuracies of 85% and 81%, respectively. The random forest outperformed the SVM, which may be due to the small dataset and the use of decision trees in the random forest classifier, whereas the SVM used only a linear kernel; hence, the random forest worked faster and produced better results17. Recently, Artificial Intelligence (AI) has become more advanced in medical image analysis18–20. Deep neural networks (DNNs) are computational models trained to learn image features from large datasets, reducing the false positive and false negative rates and thereby increasing the accuracy during the testing stage20,21. Previous work on DNNs primarily focused on X-ray2,9 and MRI images2,22,23 for bone cancer diagnosis, while the use of CT images is rare due to the limited number of publicly available databases5,24,25.
A set of 2899 X-ray images was used to evaluate three-way classification (benign, intermediate, and malignant) using a Convolutional Neural Network (CNN) classifier, achieving a testing accuracy of 73.4%9. To classify normal and bone cancer images, 1060 MRI images were divided into training (70%), validation (20%), and testing (10%) sets; EfficientNet B0 was then used for image classification and achieved a testing accuracy of 72%6. Thirty-nine MRI images with histopathological confirmation were used to predict malignancy in bone cancer using a DNN. The dataset was split into training (70%), validation (10%), and testing (20%) sets, and a ResNet50 model was used to classify benign and malignant types of bone cancer, with a resulting testing accuracy of 95%23. A set of 832 CT scans, with 732 for training, 40 for validation, and 60 for testing, was used to segment and classify the cancer regions using 2D and 3D UNet models and a 3D ResNet, respectively. This model achieved a testing sensitivity of 82.7% with a 0.617 false positive rate5.
A computer-aided diagnosis (CAD) system was presented to distinguish benign and malignant types of bone cancer in 79 CT images. An active contour model was used to segment the cancer regions, GLCM features were then extracted to train and test a random forest classifier, and an overall testing accuracy of 91.47% was obtained24. The K-means clustering segmentation algorithm was used to segment the cancer regions in 3 MRI and 3 CT images. The surface area of the cancer regions was evaluated using the algorithm and compared with radiologist performance; the relative difference between the algorithm and the radiologists ranged from 0.63 to 1.75% for MRI images and from 0.34 to 1.51% for CT images25. Since CT is the primary scan after X-ray, it is necessary to conduct a thorough investigation using CT scans for detecting early bone cancer. CT scans are usually preferred over other medical imaging modalities due to their excellent spatial resolution and shorter scanning time12. CT is also the best imaging method for visualizing complex bone structures at an early stage for detecting bone metastasis12,26. The current study deals with commonly occurring bone cancers for the early detection of the parosteal osteosarcoma27, enchondroma28, and osteochondroma29 types of bone cancer. Parosteal osteosarcoma is a primary malignant type that arises on the surface of the bone30; its common location is the metaphyseal-to-diaphyseal junction or the diaphysis of long bones such as the humerus, tibia, mandible, and femur31. Enchondroma commonly occurs in the cartilage inside the bone32, and osteochondroma occurs at the end of the growth plate of long bones33. The goal of this study is to detect bone cancer at a preliminary stage by utilizing a larger dataset of CT images and applying image processing and deep learning (DL) techniques to detect the cancer with a high accuracy rate. More specifically, using 1141 bone CT images, the current study applied K-means clustering, Canny edge detection segmentation, and CNN models to classify normal and cancerous images.

Methods
The proposed method involves detection and classification of bone cancer. The cancer region has a higher intensity than the other regions in the image24,34. Figure 1 shows the flowchart of the steps involved in detecting the cancer region from the CT image and classifying normal and cancer-affected bones.

Image collection
The bone cancer images were obtained from publicly available databases: Radiopaedia (radiopaedia.org) and The Cancer Imaging Archive (cancerimagingarchive.net). The dataset used in this study consists of 1141 CT scan images (730 CT scans from Radiopaedia and 411 CT scans from The Cancer Imaging Archive), with 530 bone cancer images and 511 normal images.

Pre‑processing
The image was converted into a grayscale prior to applying the fi ­ lter34. There exists many filters (e.g., Average,
Median, Gaussian, Weiner filters) for noise reduction during the pre-processing s­ tage25. Among these, the median
filter had a better performance for early-stage detection of the bone cancer ­images24. Moreover, this is a non-linear
method that is effective in removing the salt and pepper noise while preserving the e­ dges25,34.
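As an illustration of this pre-processing step, a minimal OpenCV sketch is given below; the file names and the 5 × 5 kernel size are placeholder assumptions, not values reported in the study.

```python
import cv2

# Load a CT slice (hypothetical file name) and convert it to grayscale.
image = cv2.imread("bone_ct_slice.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Median filter: each pixel is replaced by the median of its neighborhood,
# which suppresses salt-and-pepper noise while preserving edges.
denoised = cv2.medianBlur(gray, 5)  # 5 x 5 kernel size is an assumed choice

cv2.imwrite("bone_ct_denoised.png", denoised)
```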

Image segmentation using K‑means clustering


K-means clustering is an unsupervised learning method35 for classifying data into clusters (or groups). In the K-means clustering algorithm, the number of clusters, k, must be known in advance. Initially, k centroids are selected randomly in the dimensional space. The squared Euclidean distance is computed between each data point and every centroid location, and each data point is assigned to the cluster of its nearest centroid. The location of each centroid is then updated by averaging all the data points that belong to its cluster. This procedure of computing the distance metric and updating the centroid locations is repeated until the centroid locations no longer change35,36. This algorithm was used here to segment the cancer region from the original CT image.
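A minimal sketch of intensity-based K-means segmentation with OpenCV is given below; the cluster count k = 3 and the file names are illustrative assumptions, since the study does not report these settings here.

```python
import cv2
import numpy as np

# Filtered CT slice from the pre-processing step (hypothetical file name).
gray = cv2.imread("bone_ct_denoised.png", cv2.IMREAD_GRAYSCALE)
pixels = gray.reshape(-1, 1).astype(np.float32)

# Cluster pixel intensities into k = 3 groups (assumed cluster count).
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Keep the brightest cluster as the candidate cancer region, since the
# cancer region has a higher intensity than its surroundings.
brightest = np.argmax(centers)
mask = (labels.reshape(gray.shape) == brightest).astype(np.uint8) * 255
cv2.imwrite("bone_ct_segmented.png", mask)
```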


[Figure 1 shows the processing pipeline as a flowchart: Image Collection (CT scans) → Pre-processing (Median filter) → K-means clustering → Canny edge detection → Convolutional Neural Network → Normal / Abnormal.]

Figure 1. Flowchart illustrating the steps involved in the detection of bone cancer.


Canny edge detection


Edge detection is used to find object boundaries by detecting discontinuities in the image and is widely applied in image processing for extracting relevant features from an image37. Different types of edge detection techniques include Sobel, Prewitt, Roberts, and Canny10,15,35. Among these, the Canny edge detection method provides better results for early-stage detection of bone cancer, but it requires thresholding, in which low and high threshold values are chosen based on the histogram of the images35. Moreover, this approach performs well compared with other edge detection methods due to specific advantages: localization of edges, reduction of noise, and gradient information37.
Canny edge detection consists of a Gaussian filter, gradient magnitude computation, non-maxima suppression, and two threshold values. This approach gives a single response and better localization, accurately identifying weak and strong edges without missing any detail information36. The gradient magnitude can be calculated using13,36:
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \times A, \qquad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \times A,$$

$$|G| = \sqrt{G_x^2 + G_y^2},$$

$$\text{Angle}(\theta) = \tan^{-1}\!\left(\frac{G_y}{G_x}\right),$$
where Gx represents horizontal edges, Gy represents vertical edges, and A represents the filtered bone cancer image, which is convolved with the 3 × 3 kernels to detect the horizontal and vertical edges. Non-maxima suppression is used to thin the edges of the image. If the gradient of a pixel is less than the lower threshold value, the pixel is rejected, and if the gradient is greater than the higher threshold value, the pixel is accepted36. If the gradient lies between the lower and upper threshold values, the pixel is accepted only if it is connected to an edge10,36.
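A minimal sketch of this step with OpenCV is shown below; the low and high thresholds are placeholder values, whereas the study selects them from the image histogram.

```python
import cv2

# Segmented region from the K-means step (hypothetical file name).
mask = cv2.imread("bone_ct_segmented.png", cv2.IMREAD_GRAYSCALE)

# cv2.Canny internally applies Gaussian smoothing, Sobel gradients,
# non-maxima suppression, and hysteresis with the two thresholds.
edges = cv2.Canny(mask, 50, 150)  # (low, high) thresholds are assumed values

cv2.imwrite("bone_ct_edges.png", edges)
```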

Convolutional neural network


The Convolutional Neural Network (CNN) is commonly used for classifying medical images with good accuracy and performance36,38,39. The CNN is a supervised learning scheme that processes input images and produces an output indicating whether the disease is present. The current study utilized the AlexNet model shown in Fig. 2. This network architecture consists of eight layers: the first five are convolutional layers, some combined with max-pooling, and the last three are fully connected layers36,38.

Figure 2. The AlexNet architecture for detecting normal and cancerous CT bone images38,40.

After each convolutional layer, a rectified linear unit (ReLU) activation function is applied. The convolutional layers use a specific number of filters (along with ReLU) to extract relevant features from the input image. A max-pooling layer (an optional layer) is then used to reduce computational complexity while preserving the features. The convolutional and pooling layers are followed by three fully connected layers that flatten the image features. Dropout layers between the fully connected layers prevent overfitting. The last fully connected layer uses a softmax activation function to compute the probability of each class36,38–40. The layer specifications, such as filter size, kernel size, stride, input shape, and output shape of the AlexNet architecture, are shown in Table 1.
In the current study, various CNN models, namely AlexNet41, ResNet5042, ResNet10143, VGG1643, VGG1943, InceptionV342, Xception44, DenseNet12142,43, EfficientNet B06, and EfficientNet B245, were applied to classify each CT image as either normal or cancerous. Each CNN model was trained to perform two-way classification (normal and malignant). The input image size, number of epochs, loss function, and learning optimizer were kept the same for all CNN models to facilitate comparison in terms of accuracy and computational processing time. The input image size was 227 × 227 and the batch size was set to 32. The Adam optimizer was used with a learning rate of 0.001, owing to its better convergence, lower memory requirements, and computational efficiency compared with stochastic gradient descent and RMSprop optimizers46. Since the model focuses on two-way classification, the binary cross-entropy loss function47 was used for all CNN models during the training, validation, and testing stages. These models were implemented in Python using Jupyter Notebook version 6.4.12. The accuracy of the classification model was calculated using the equation:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$
where TP is the number of true positives (diseased images correctly predicted as diseased), FP the number of false positives (normal images wrongly predicted as diseased), FN the number of false negatives (diseased images wrongly predicted as normal), and TN the number of true negatives (normal images correctly predicted as normal)48,49.
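The formula maps directly to code; the confusion-matrix counts in the sketch below are hypothetical and are not results from the study.

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # Accuracy = (TP + TN) / (TP + TN + FP + FN)
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for a test set with no misclassifications:
print(accuracy(tp=53, tn=51, fp=0, fn=0))  # -> 1.0
```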

| Layer          | Filter size | No. of filters | Stride | Input dimension | Output dimension | Activation function |
|----------------|-------------|----------------|--------|-----------------|------------------|---------------------|
| Convolution 1  | 11 × 11     | 96             | 4      | 227 × 227 × 3   | 55 × 55 × 96     | ReLU                |
| Maxpooling     | 3 × 3       | –              | 2      | 55 × 55 × 96    | 27 × 27 × 96     | –                   |
| Convolution 2  | 5 × 5       | 256            | 1      | 27 × 27 × 96    | 27 × 27 × 256    | ReLU                |
| Maxpooling     | 3 × 3       | –              | 2      | 27 × 27 × 256   | 13 × 13 × 256    | –                   |
| Convolution 3  | 3 × 3       | 384            | 1      | 13 × 13 × 256   | 13 × 13 × 384    | ReLU                |
| Convolution 4  | 3 × 3       | 384            | 1      | 13 × 13 × 384   | 13 × 13 × 384    | ReLU                |
| Convolution 5  | 3 × 3       | 256            | 1      | 13 × 13 × 384   | 13 × 13 × 256    | ReLU                |
| Maxpooling     | 3 × 3       | –              | 2      | 13 × 13 × 256   | 6 × 6 × 256      | –                   |
| Flatten        | –           | –              | –      | 6 × 6 × 256     | 9216             | –                   |
| Dense          | –           | –              | –      | 9216            | 4096             | ReLU                |
| Dense          | –           | –              | –      | 4096            | 4096             | ReLU                |
| Dense (output) | –           | –              | –      | 4096            | 2                | Softmax             |

Table 1. Layer specifications of the AlexNet architecture38,40.
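A minimal Keras sketch of the layout in Table 1, compiled with the training settings stated above (Adam at a learning rate of 0.001, binary cross-entropy), is given below. This is an illustrative reconstruction rather than the authors' code, and the dropout rate of 0.5 is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# AlexNet-style layout following Table 1 (output shapes in comments).
model = models.Sequential([
    layers.Conv2D(96, 11, strides=4, activation="relu",
                  input_shape=(227, 227, 3)),                  # 55 x 55 x 96
    layers.MaxPooling2D(3, strides=2),                         # 27 x 27 x 96
    layers.Conv2D(256, 5, strides=1, padding="same",
                  activation="relu"),                          # 27 x 27 x 256
    layers.MaxPooling2D(3, strides=2),                         # 13 x 13 x 256
    layers.Conv2D(384, 3, strides=1, padding="same",
                  activation="relu"),                          # 13 x 13 x 384
    layers.Conv2D(384, 3, strides=1, padding="same",
                  activation="relu"),                          # 13 x 13 x 384
    layers.Conv2D(256, 3, strides=1, padding="same",
                  activation="relu"),                          # 13 x 13 x 256
    layers.MaxPooling2D(3, strides=2),                         # 6 x 6 x 256
    layers.Flatten(),                                          # 9216
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),                                       # rate assumed
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),                                       # rate assumed
    layers.Dense(2, activation="softmax"),                     # normal/cancer
])

# Training settings stated in the paper: Adam (learning rate 0.001) and
# binary cross-entropy for the two-way classification.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```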


Results and discussion


CT images of the parosteal osteosarcoma, osteochondroma, and enchondroma types of bone cancer were used for analysis in the current study and are shown in Fig. 3.
Figure 4 shows the CT images after applying the median filter. The original CT images (shown in Fig. 3) usually contain noise that reduces the visibility of the low-contrast pixels. This noise was removed with the median filter to increase the contrast of the images (Fig. 4). K-means clustering segments the filtered CT image into different regions based on pixel intensity, which helps identify the areas containing cancerous growth. More specifically, the red label in Fig. 5 marks the bone cancer-affected region. Figure 6 shows the segmented edges and boundaries of the cancer-affected area after applying the Canny edge detection algorithm.
The dataset was divided into 80% for training, 10% for validation, and 10% for testing (a sketch of this split is given below). Figures 7 and 8 depict the binary cross-entropy loss and accuracy of the AlexNet model. As shown in Fig. 7, the total weighted loss was high at the initial epochs and decreased as the epoch count increased. The accuracy, shown in Fig. 8, was low at the initial epochs and improved with increasing epoch count. From epoch 14 onwards (Fig. 7), the training and validation losses converge, indicating that training can be stopped. For the comparative analysis across CNN-based models, the epoch number was fixed at the point where one of the models reached 100% accuracy during the testing stage. AlexNet reached 100% testing accuracy at the 20th epoch, so the number of epochs was set to 20 for all CNN models.
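A minimal sketch of the 80/10/10 split using scikit-learn is given below; the placeholder arrays, stratification, and random seed are assumptions standing in for the real CT data pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the 1141 CT images and their labels;
# in practice these would be loaded from the segmented image files.
images = np.random.rand(1141, 227, 227, 3).astype("float32")
labels = np.random.randint(0, 2, size=1141)

# Hold out 20% first, then split it evenly into validation and test sets,
# yielding the 80% / 10% / 10% division described above.
x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

print(len(x_train), len(x_val), len(x_test))  # 912 114 115
```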
Table 2 presents the results of the two-way classification performed by the AlexNet, ResNet50, ResNet101, VGG16, VGG19, DenseNet121, EfficientNet B0, EfficientNet B2, Xception, and InceptionV3 models. Among these models, AlexNet performed best, with a training accuracy of 98%, a validation accuracy of 98%, and a testing accuracy of 100%, at a lower computational processing time (29 min) than the other CNN models.

Figure 3. Original CT images: (a) lateral CT of parosteal osteosarcoma, (b) coronal CT of osteochondroma, and (c) lateral CT of enchondroma.

Figure 4. Effect of the median filter: (a) lateral CT of parosteal osteosarcoma, (b) coronal CT of osteochondroma, and (c) lateral CT of enchondroma.


Figure 5. Effect of K-means clustering: (a) lateral CT of parosteal osteosarcoma, (b) coronal CT of osteochondroma, and (c) lateral CT of enchondroma.

Figure 6. Canny edge detection: (a) lateral CT of parosteal osteosarcoma, (b) coronal CT of osteochondroma, and (c) lateral CT of enchondroma.

[Figure 7: line plot of binary cross-entropy loss versus epochs (0–20) for the AlexNet model, with training and validation curves.]

Figure 7. Total weighted loss of AlexNet model during training and validation stages.


[Figure 8: line plot of accuracy versus epochs (0–20) for the AlexNet model, with training and validation curves.]

Figure 8. Accuracy of AlexNet model during training and validation stages.

| Classification model | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) | Computational processing time (min) | Number of epochs |
|----------------------|-----------------------|-------------------------|----------------------|-------------------------------------|------------------|
| AlexNet              | 98                    | 98                      | 100                  | 29                                  | 20               |
| ResNet50             | 84                    | 83                      | 81                   | 50                                  | 20               |
| ResNet101            | 88                    | 92                      | 89                   | 71                                  | 20               |
| VGG16                | 83                    | 77                      | 74                   | 120                                 | 20               |
| VGG19                | 86                    | 87                      | 80                   | 150                                 | 20               |
| DenseNet121          | 64                    | 64                      | 68                   | 33                                  | 20               |
| EfficientNet B0      | 86                    | 94                      | 89                   | 17                                  | 20               |
| EfficientNet B2      | 87                    | 91                      | 91                   | 48                                  | 20               |
| Xception             | 65                    | 58                      | 68                   | 105                                 | 20               |
| InceptionV3          | 59                    | 59                      | 69                   | 51                                  | 20               |

Table 2. Comparison of the performance of each convolutional neural network (CNN) model.

Conclusion
Bone cancer is a hazardous disease, and early detection is of utmost importance for better diagnosis. It can be diagnosed based on three elements: symptoms, histopathology, and imaging. Symptoms are mostly nonspecific during the initial stages, while histopathological examination is an invasive method that usually detects the cancer only at a late stage rather than an initial one. Imaging, by contrast, can differentiate normal and cancerous images at an early stage. The goal of this study was to detect and classify bone cancer in CT images using various image processing techniques together with various CNN models. The image processing techniques detected the cancer region using pre-processing (median filter) to remove image noise, K-means clustering to segment the cancer region, and Canny edge detection to extract the cancer edges. Compared with the other CNN models, the AlexNet model showed the best performance, with a training accuracy of 98%, validation accuracy of 98%, testing accuracy of 100%, and the lowest computational processing time. Thus, AlexNet could be a useful tool for predicting bone cancer at an early stage from CT images. As future work, low-, medium-, and high-level features of the CT images could be extracted prior to classification with DNNs (e.g., ResNet, VGGNet, and DenseNet) to achieve an automated AI-based model for detecting the stages of bone cancer and classifying normal images and bone cancer subtypes.

Data availability
The datasets generated and/or analyzed during the current study are available in the Radiopaedia and Cancer Imaging Archive repositories, www.radiopaedia.org and www.cancerimagingarchive.net.

Received: 28 February 2023; Accepted: 23 January 2024


References
1. Boulehmi, H., Mahersia, H. & Hamrouni, K. Bone cancer diagnosis using GGD analysis. In 2018 15th International Multi-conference on Systems, Signals & Devices 246–251. https://doi.org/10.1109/SSD.2018.8570658 (IEEE, 2018).
2. Shukla, A. & Patel, A. Bone cancer detection from X-ray and MRI images through image segmentation techniques. Int. J. Recent Technol. Eng. 8, 273–278. https://doi.org/10.35940/ijrte.F7159.038620 (2020).
3. Sujatha, K. et al. Screening and identify the bone cancer/tumor using image processing. In 2018 International Conference on Current Trends Towards Converging Technologies 1–5. https://doi.org/10.1109/ICCTCT.2018.8550917 (IEEE, 2018).
4. Ibrahim, T., Mercatali, L. & Amadori, D. Bone and cancer: The osteoncology. Clin. Cases Mineral Bone Metab. 10, 121 (2013).
5. Noguchi, S. et al. Deep learning-based algorithm improved radiologists' performance in bone metastases detection on CT. Eur. Radiol. 32, 1–12. https://doi.org/10.1007/s00330-022-08741-3 (2022).
6. Eweje, F. R. et al. Deep learning for classification of bone lesions on routine MRI. EBioMedicine 68, 103402. https://doi.org/10.1016/j.ebiom.2021.103402 (2021).
7. Han, S., Li, Y., Li, Y. & Zhao, M. Diagnostic efficacy of PET/CT in bone tumors. Oncol. Lett. 17, 4271–4276. https://doi.org/10.3892/ol.2019.10101 (2019).
8. Xia, C. et al. SVM-based bone tumor detection by using the texture features of X-ray image. In 2018 International Conference on Network Infrastructure and Digital Content 130–134. https://doi.org/10.1109/ICNIDC.2018.8525806 (IEEE, 2018).
9. Zimbalist, T. et al. Detecting bone lesions in X-ray under diverse acquisition conditions. https://doi.org/10.48550/arXiv.2212.07792 (2022).
10. Huo, Y. K., Wei, G., Zhang, Y. D. & Wu, L. N. An adaptive threshold for the Canny operator of edge detection. In 2010 International Conference on Image Analysis and Signal Processing 371–374. https://doi.org/10.1109/IASP.2010.5476095 (IEEE, 2010).
11. Hossain, E. & Rahaman, M. A. Comparative evaluation of segmentation algorithms for tumor cells detection from bone MR scan imagery. In 2018 International Conference on Innovations in Science, Engineering and Technology 361–366. https://doi.org/10.1109/ICISET.2018.8745612 (IEEE, 2018).
12. Kaur, E. C. & Garg, U. Bone cancer detection techniques using machine learning. In 2022 International Conference on Computational Modelling, Simulation and Optimization 315–319. https://doi.org/10.1109/ICCMSO58359.2022.00068 (IEEE, 2022).
13. Pandey, A. & Shrivastava, S. K. A survey paper on calcaneus bone tumor detection using different improved canny edge detector. In 2018 IEEE International Conference on System, Computation, Automation and Networking 1–5. https://doi.org/10.1109/ICSCAN.2018.8541194 (IEEE, 2018).
14. Ranjitha, M. M., Taranath, N. L., Arpitha, C. N. & Subbaraya, C. K. Bone cancer detection using K-means segmentation and Knn classification. In 2019 1st International Conference on Advances in Information Technology 76–80. https://doi.org/10.1109/ICAIT47043.2019.8987328 (IEEE, 2019).
15. Mistry, K. D. & Talati, B. J. Integrated approach for bone tumor detection from MRI scan imagery. In 2016 International Conference on Signal and Information Processing 1–5. https://doi.org/10.1109/ICONSIP.2016.7857471 (IEEE, 2016).
16. Sharma, A. et al. Bone cancer detection using feature extraction based machine learning model. Comput. Math. Methods Med. https://doi.org/10.1155/2021/7433186 (2021).
17. Shen, R. et al. Osteosarcoma patients classification using plain X-rays and metabolomic data. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 690–693. https://doi.org/10.1109/EMBC.2018.8512338 (IEEE, 2018).
18. Zhao, Z. et al. Deep neural network based artificial intelligence assisted diagnosis of bone scintigraphy for cancer bone metastasis. Sci. Rep. 10, 17046. https://doi.org/10.1038/s41598-020-74135-4 (2020).
19. Dong, M., Huang, X. & Xu, B. Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network. PLoS ONE 13, e0204596. https://doi.org/10.1371/journal.pone.0204596 (2018).
20. Frank, D. A., Chrysochou, P., Mitkidis, P. & Ariely, D. Human decision-making biases in the moral dilemmas of autonomous vehicles. Sci. Rep. 9, 13080. https://doi.org/10.1038/s41598-019-49411-7 (2019).
21. Xiong, C., Xu, X., Zhang, H. & Zeng, B. An analysis of clinical values of MRI, CT and X-ray in differentiating benign and malignant bone metastases. Am. J. Transl. Res. 13, 7335 (2021).
22. Asuntha, A. et al. Feature extraction to detect bone cancer using image processing. Res. J. Pharm. Biol. Chem. Sci. 8, 434 (2018).
23. Georgeanu, V. A., Mămuleanu, M., Ghiea, S. & Selișteanu, D. Malignant bone tumors diagnosis using magnetic resonance imaging based on deep learning algorithms. Medicina 58, 636. https://doi.org/10.3390/medicina58050636 (2022).
24. Mishra, A. & Suhas, M. V. Classification of benign and malignant bone lesions on CT images using random forest. In 2016 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology 1807–1810. https://doi.org/10.1109/RTEICT.2016.7808146 (2016).
25. Kadhim, W. D. & Abdoon, R. S. Utilizing k-means clustering to extract bone tumor in CT scan and MRI images. J. Phys. Conf. Ser. 1591, 012010. https://doi.org/10.1088/1742-6596/1591/1/012010 (2020).
26. Power, S. et al. Computed tomography and patient risk: Facts, perceptions and uncertainties. World J. Radiol. 8, 902. https://doi.org/10.4329/wjr.v8.i12.902 (2016).
27. Yarmish, G. et al. Imaging characteristics of primary osteosarcoma: Nonconventional subtypes. Radiographics 30, 1653–1672. https://doi.org/10.1148/rg.306105524 (2010).
28. Ravish, V. N., Vinod Kumar, A. C. & Sen, G. Enchondroma—A case study. Int. J. Sci. Res. 4, 2319–7064 (2015).
29. BinMohi, A. M., Alzahrani, A. A. & Reda, B. R. A case report of femur osteochondroma in 22 years old female patient. Int. J. Adv. Res. 8, 1263–1267. https://doi.org/10.21474/IJAR01/11964 (2020).
30. Papathanassiou, Z. G. et al. Parosteal osteosarcoma mimicking osteochondroma: A radio-histologic approach on two cases. Clin. Sarcoma Res. 1, 1–8. https://doi.org/10.1186/2045-3329-1-2 (2011).
31. Larousserie, F. et al. Parosteal osteoliposarcoma: A new bone tumor (from imaging to immunophenotype). Eur. J. Radiol. 82, 2149–2153. https://doi.org/10.1016/j.ejrad.2011.11.035 (2013).
32. Ferrer-Santacreu, E. M., Ortiz-Cruz, E. J., Díaz-Almirón, M. & Pozo Kreilinger, J. J. Enchondroma versus chondrosarcoma in long bones of appendicular skeleton: Clinical and radiological criteria—A follow-up. J. Oncol. https://doi.org/10.1155/2016/8262079 (2016).
33. Tepelenis, K. et al. Osteochondromas: An updated review of epidemiology, pathogenesis, clinical presentation, radiological features and treatment options. In Vivo 35, 681–691. https://doi.org/10.21873/invivo.12308 (2021).
34. Sinthia, P. & Sujatha, K. A novel approach to detect bone cancer using k-means clustering algorithm and edge detection method. Asian Res. Publ. Netw. J. Eng. Appl. Sci. 11, 8002–8007 (2016).
35. Reis, H. C. Calcaneus benign tumor detection using canny edge detector. Int. J. Oncol. Cancer Ther. 2, 1 (2017).
36. Heravi, E. J., Aghdam, H. H. & Puig, D. Classification of foods using spatial pyramid convolutional neural network. In CCIA 163–168 (2016).
37. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698. https://doi.org/10.1109/TPAMI.1986.4767851 (1986).
38. Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 386. https://doi.org/10.1145/3065386 (2012).
39. Sunitha, M. R., Huda, R., Gopinath, C. B. & Sathyabhama, R. Bone cancer detection using AlexNet and VGG16. Int. Res. J. Eng. Technol. 9, 7 (2022).
40. Han, X., Zhong, Y., Cao, L. & Zhang, L. Pre-trained AlexNet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification. Remote Sens. 9, 848. https://doi.org/10.3390/rs9080848 (2017).
41. Lin, C. J., Li, Y. C. & Lin, H. Y. Using convolutional neural networks based on a Taguchi method for face gender recognition. Electronics 9, 1227. https://doi.org/10.3390/electronics9081227 (2020).
42. Pan, C., Lian, L., Chen, J. & Huang, R. FemurTumorNet: Bone tumor classification in the proximal femur using DenseNet model based on radiographs. J. Bone Oncol. 42, 100504. https://doi.org/10.1016/j.jbo.2023.100504 (2023).
43. Gawade, S., Bhansali, A., Patil, K. & Shaikh, D. Application of the convolutional neural networks and supervised deep-learning methods for osteosarcoma bone cancer detection. Healthcare Anal. 3, 100153. https://doi.org/10.1016/j.health.2023.100153 (2023).
44. Mehmood, A. et al. SBXception: A shallower and broader xception architecture for efficient classification of skin lesions. Cancers 15, 3604. https://doi.org/10.3390/cancers15143604 (2023).
45. Park, C. W. et al. Artificial intelligence-based classification of bone tumors in the proximal femur on plain radiographs: System development and validation. PLoS ONE 17(2), e0264140. https://doi.org/10.1371/journal.pone.0264140 (2022).
46. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. https://doi.org/10.48550/arXiv.1412.6980 (2014).
47. Anisuzzaman, D. M. et al. A deep learning study on osteosarcoma detection from histological images. Biomed. Signal Process. Control 69, 102931. https://doi.org/10.48550/arXiv.2011.01177 (2021).
48. Jmour, N., Zayen, S. & Abdelkrim, A. Convolutional neural networks for image classification. In International Conference on Advanced Systems and Electric Technologies 397. https://doi.org/10.1109/ASET.2018.8379889 (2018).
49. Rajoub, B. Supervised and unsupervised learning. In Biomedical Signal Processing and Artificial Intelligence in Healthcare (ed. Rajoub, B.) 51–89 (Elsevier, 2020).

Acknowledgements
This work was supported by the third author’s Seed Grant (SG20220094) awarded by the Vellore Institute of
Technology.

Author contributions
S.K.—Concept and writing—original draft. R.S.—Supervision and reviewing. A.K.C.—Supervision and
validation.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to S.R.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://​creat​iveco​mmons.​org/​licen​ses/​by/4.​0/.

© The Author(s) 2024
