Paper 3
Early Detection of Alzheimer's Disease: An Extensive Review of Advancements in Machine Learning Mechanisms Using an Ensemble and Deep Learning Technique
1 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai 603103, Tamil Nadu, India
2 Department of Computer Science and Engineering, Sri Krishna College of Engineering and Technology, Coimbatore 641008, Tamil Nadu, India; rameshk@skcet.ac.in
3 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, Andhra Pradesh, India; bala08.ap@gmail.com
4 Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R & D Institute of Science and Technology, Chennai 600062, Tamil Nadu, India; drmantobenet@veltech.edu.in
* Correspondence: renjith.pn@vit.ac.in
† Presented at the International Conference on Recent Advances on Science and Engineering, Dubai,
United Arab Emirates, 4–5 October 2023.
Abstract: Alzheimer's disease (AD) is the most common form of dementia in senior individuals. It is a progressive neurological ailment that predominantly affects memory, cognition, and behavior. An early AD diagnosis is essential for effective disease management and timely intervention. Due to its complexity and heterogeneity, however, AD is difficult to diagnose precisely. This paper investigates the integration of disparate machine learning algorithms to improve AD diagnostic accuracy. The dataset used includes instances with missing values, which are managed by employing appropriate imputation techniques. Several feature selection algorithms are applied to the dataset to determine the most relevant characteristics. Moreover, the Synthetic Minority Oversampling Technique (SMOTE) is employed to address class imbalance. The proposed system employs an Ensemble Classification algorithm, which integrates the outcomes of multiple predictive models to enhance diagnostic accuracy. The proposed method shows superior disease prediction capabilities in comparison to existing methods. The experiments employ a robust AD dataset from the UCI Machine Learning Repository. The findings of this study contribute significantly to the field of AD diagnosis and pave the way for more precise and efficient early detection strategies.

Keywords: Alzheimer's disease (AD); machine learning; early detection; diagnosis; Ensemble Classification

1. Introduction

Alzheimer's disease is a progressive neurological illness that affects memory, cognition, and behavior in senior individuals and is the most common cause of dementia [1–3]. An AD diagnosis should be accurate and timely to ensure effective disease management and early intervention, resulting in improved patient care and potential therapeutic interventions. For clinicians, the precise diagnosis of AD is challenging due to its complexity and heterogeneity. In recent years, machine learning has emerged as a powerful tool in medical diagnosis, offering the potential to augment traditional diagnostic approaches and improve accuracy [4,5]. Figure 1 represents the IoT-based patient monitoring system. Through the integration of disparate machine learning algorithms, this study aims to improve AD diagnostic accuracy by leveraging the capabilities of machine learning. Multi-algorithm approaches strive to overcome the limitations of individual models by harnessing the predictive capabilities of multiple algorithms. This study aims to enhance Alzheimer's disease (AD) diagnosis using machine learning techniques.
Analyzing
state-of-the-art techniques. Additionally, the research does not assess the method’s inter-
pretability, potentially limiting its application [11]. Another study introduces an ensemble
learning architecture using 2D CNNs for an AD diagnosis [12]. This method trains on grey
matter density maps and uses ensemble models to improve prediction accuracy. How-
ever, its limitations include reliance on 2D MRI images and the need for testing on larger
datasets [12]. Research on machine learning for AD diagnoses using neuroimaging data
explored techniques like Support Vector Machines and CNNs [13,14]. While some methods
achieve significant accuracy, they often face challenges with real-world healthcare data or
require further testing on more extensive datasets [15].
A study using a stacking-genetic algorithm ensemble learning model reached a high
accuracy, precision, recall, and F1-score in early AD diagnoses [15]. Nevertheless, issues like
variable dataset validation and clinical interpretability remain. Combining MRI classifiers
offers reliable AD detection, but its applicability requires further exploration [16]. On the
other hand, Random Forest achieves high accuracy predicting AD using limited features
from MRI scans [17]. Deep learning has shown potential in AD diagnoses, especially when
studying complex disease pathways [18]. Still, its reliability in predicting AD progression
needs rigorous testing across various imaging modalities and larger datasets. The use of
a deep CNN for a stage-based AD diagnosis shows promise, but a comprehensive method-
ology comparison and general applicability assessment are essential [19]. Other methods,
such as high-pressure liquid chromatography with AI algorithms, offer insights into pre-
dicting Alzheimer’s medication properties [20]. Deep learning techniques integrating
expert knowledge and multi-source data have outperformed many ensemble methods [21].
However, the system might need substantial computational resources and could vary across
datasets. Research on ensemble learning with Conformal Predictors indicates improved
categorization, but a broader dataset is essential for validation [22]. Hierarchical ensem-
ble learning addresses some deep learning challenges, providing enhanced classification
accuracy with pre-trained neural networks [23]. However, this may require substantial
training datasets and high-quality MRI scans. Lastly, ensemble learning for regression
problems shows potential in predicting medication effects, but needs expansion for broader
applications [14,24]. Ensemble learning and advanced algorithms demonstrate significant
promise in AD diagnoses [25,26]. However, broader dataset validations, methodology
comparisons, and evaluations of real-world applicability are crucial.
A. Pre-processing
The pre-processing phase prepares the raw AD dataset for analysis. Categorical
attributes are transformed to numeric for compatibility with machine learning. Median
imputation addresses missing values, ensuring data completeness. Feature extraction
enriches the dataset, while feature selection pinpoints the most informative attributes.
Recursive Feature Elimination (RFE) removes less vital features iteratively. A Univariate
Analysis ranks features based on their importance. A Principal Component Analysis (PCA)
compresses data without losing critical information. This rigorous preparation creates a
solid foundation for the ensemble-based AD diagnosis model.
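As a concrete illustration of this pre-processing pipeline, the following sketch applies categorical encoding and median imputation with scikit-learn. The file name and the target column "Diagnosis" are placeholders for illustration, not the actual schema of the UCI dataset.

```python
# A sketch only: file name, target column, and encoder choice are assumptions.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_csv("alzheimer_dataset.csv")             # hypothetical file name
X = df.drop(columns=["Diagnosis"])                     # "Diagnosis" is a placeholder target
y = df["Diagnosis"]

# Transform categorical attributes to numeric codes.
cat_cols = X.select_dtypes(include="object").columns
if len(cat_cols) > 0:
    X[cat_cols] = OrdinalEncoder().fit_transform(X[cat_cols])

# Median imputation for missing values, as described above.
X = pd.DataFrame(SimpleImputer(strategy="median").fit_transform(X), columns=X.columns)
```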
B. Extraction and Selection of Features
In the ensemble-based model for an AD diagnosis, feature extraction transforms the
raw AD dataset to capture essential patterns, enhancing its richness for better prediction.
Feature selection then identifies the most critical characteristics within this dataset. Several
algorithms assess which features most influence diagnostic accuracy. Recursive Feature
Elimination (RFE) methodically removes less important features to streamline the model,
while a Univariate Analysis ranks each feature’s significance in classification. A Principal
Component Analysis (PCA) compresses data, retaining essential variance for a concise rep-
resentation. By using these feature extraction and selection methods, the model highlights
the AD dataset’s key aspects, improving prediction accuracy and supporting early disease
detection for improved patient results.
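A minimal sketch of how RFE and a univariate ranking could be wired up with scikit-learn is given below; the base estimator and the choice of ten retained features are illustrative assumptions rather than values reported in the paper.

```python
# Illustrative feature selection; the estimator and k=10 are assumptions.
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Recursive Feature Elimination: iteratively drop the least important features.
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=10)
X_rfe = rfe.fit_transform(X, y)

# Univariate analysis: rank features by an ANOVA F-score and keep the top 10.
selector = SelectKBest(score_func=f_classif, k=10)
X_uni = selector.fit_transform(X, y)
print(X.columns[selector.get_support()])               # the retained feature names
```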
Given a dataset X of size n × m (n samples, m features),

X ∈ R^(n × m)    (1)

Compute the mean of each feature and subtract it from the corresponding feature in X, resulting in a zero-mean dataset X_centered:

X_centered = X − (1/n) × Σ_{i=1}^{n} X_i    (2)

Calculate the covariance matrix:

C(X_centered) = (X_centered^T × X_centered)/(n − 1)    (3)

Compute the eigenvalues (λ) and eigenvectors (v) of the covariance matrix C:

C = (X_centered^T × X_centered)/(n − 1)    (4)

C × v_i = λ_i × v_i,  i = 1, …, m    (5)
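The following NumPy sketch mirrors Equations (1)–(5): it centres the data, forms the covariance matrix, and projects onto the leading eigenvectors. The number of retained components is an assumption made for illustration.

```python
# Numerical counterpart of Equations (1)-(5); n_components is an assumed value.
import numpy as np

def pca_project(X, n_components=2):
    X = np.asarray(X, dtype=float)                     # Eq. (1): X in R^(n x m)
    X_centered = X - X.mean(axis=0)                    # Eq. (2): subtract feature means
    C = (X_centered.T @ X_centered) / (len(X) - 1)     # Eqs. (3)-(4): covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)               # Eq. (5): C v_i = lambda_i v_i
    order = np.argsort(eigvals)[::-1]                  # sort by decreasing variance
    return X_centered @ eigvecs[:, order[:n_components]]
```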
Using SMOTE ensures a balanced dataset, enhancing the model’s prediction accuracy for
both classes.
Algorithm: Generate synthetic samples depending on the minority–majority class ratio.
Input
i. Minority class samples: M
ii. k (number of nearest neighbors to consider)
Output
Synthetic samples: S
1. Create an empty synthetic sample list: S = []
2. Calculate the number of synthetic samples (n_synthetic) depending on the minority-
majority class ratio.
3. Each minority class sample m in M:
a. Find k closest neighbors of m from minority class samples, omitting m.
b. Randomly choose one of the k neighbours (nn).
c. Difference vector diff = nn − m.
d. Add a random proportion of diff to ‘m’ to create n_synthetic samples.
4. Add all newly synthesized samples to S.
5. Return synthetic sample list S.
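A minimal NumPy sketch of steps 1–5 is shown below. The neighbour count k and the random seed are illustrative defaults; a production pipeline would more likely rely on the SMOTE implementation in imbalanced-learn.

```python
# Sketch of steps 1-5 above; k and the random seed are illustrative defaults.
import numpy as np

def smote_oversample(M, n_synthetic, k=5, seed=0):
    """M: minority-class samples, shape (n, d). Returns n_synthetic synthetic rows."""
    rng = np.random.default_rng(seed)
    M = np.asarray(M, dtype=float)
    S = []                                              # step 1: empty synthetic list
    while len(S) < n_synthetic:                         # step 2: target count given externally
        m = M[rng.integers(len(M))]                     # pick a minority sample
        dists = np.linalg.norm(M - m, axis=1)
        neighbours = M[np.argsort(dists)[1:k + 1]]      # step 3a: k nearest, omitting m
        nn = neighbours[rng.integers(len(neighbours))]  # step 3b: random neighbour
        diff = nn - m                                   # step 3c: difference vector
        S.append(m + rng.random() * diff)               # step 3d: random step along diff
    return np.array(S)                                  # steps 4-5: collect and return S
```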
The method identifies AD efficiently and correctly. Healthy individuals and AD patients are first separated, and SMOTE synthesizes minority class samples to balance the dataset so that both groups are represented during learning. Splitting the balanced dataset into training and testing sets preserves the class distribution. Logistic Regression, Random Forest, or SVM is then used to predict AD: the chosen model learns from the training set features and labels, and the testing set evaluates the model's accuracy, precision, recall, and F1-score. A successful classification model can discover AD in new data, with the features of fresh instances indicating the presence of AD.
SMOTE builds synthetic samples along line segments linking a minority class sample
and its k nearest neighbors, extending the minority class in feature space. Logistic Regres-
sion, Random Forest, or a Support Vector Machine are used to predict AD. The selected
model learns features and annotations from the training set. To measure the model’s
efficacy, the accuracy, precision, recall, and F1-score are used on the assessment set. The
trained classification model is able to detect AD in new, unlabeled data if its performance is
adequate. By feeding the model the characteristics of new instances, it can precisely predict
the presence of AD. With careful consideration of dataset quality, feature selection, and
model selection, this algorithm provides a promising strategy for early and accurate AD de-
tection. Utilizing SMOTE to resolve class imbalance and advanced classification techniques,
the algorithm improves patient outcomes by facilitating timely diagnoses and intervention.
D. AD Prediction Using SMOTE
The proposed method efficiently classifies AD. The dataset, initially divided into
healthy and AD patients, is balanced using SMOTE. This enhances learning by representing
both classes equally. The data are then split for training and testing with equal class
distribution. The model, using Logistic Regression, Random Forest, or a Support Vector
Machine, learns from the training set and is evaluated based on the accuracy, precision,
recall, and F1-score. Once trained, the model can predict AD in new data. By addressing
class imbalances with SMOTE and using advanced techniques, this approach promises
early and accurate AD detection, improving patient outcomes.
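The workflow in this subsection could be sketched as follows, assuming the scikit-learn and imbalanced-learn libraries, a binary 0/1 target, and illustrative hyperparameters; it is not the exact configuration used in the experiments.

```python
# End-to-end sketch (assumes scikit-learn, imbalanced-learn, and a binary 0/1 target).
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Stratified split preserves the class distribution in the train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample only the training data so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(random_state=42).fit(X_bal, y_bal)  # or LogisticRegression / SVC
y_pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
```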
E. Classification Procedure
A Support Vector Machine (SVM) is a key classification tool with significant potential
for an AD diagnosis. It is a versatile supervised learning algorithm suited for both linear
and nonlinear tasks. Especially useful for complex medical datasets like AD, SVM identifies
the best hyperplane to separate classes. After refining features, SVM can discern complex
patterns and relationships in the dataset. Its ability to handle nonlinear relationships
through various kernel functions and resist outliers ensures reliable predictions. When
trained on a balanced dataset from SMOTE, SVM offers high sensitivity and specificity,
vital for early AD detection.
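A hedged sketch of such an SVM classifier, using feature scaling and an RBF kernel to capture nonlinear relationships, is shown below; the hyperparameters are assumptions rather than the paper's tuned values.

```python
# Assumed hyperparameters; X_bal, y_bal, X_test, y_test come from the previous sketch.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm_clf.fit(X_bal, y_bal)                  # trained on the SMOTE-balanced set
print("SVM test accuracy:", svm_clf.score(X_test, y_test))
```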
Figure 3. A representation of precision scores for different models.

The precision graph represented in Figure 3 clearly illustrates the varying precision scores of the predictive algorithms. SVM stands out with an impressive 96%, indicating accurate positive predictions. Extra Tree shows a lower 76%, while the decision tree, Logistic Regression, and XG Boost perform moderately at 81%. SVM's dominance is evident.

F. Recall

Recall is a performance statistic for binary classification models. It tests the model's ability to identify all positive occurrences from the dataset's total positive instances. Recall is sensitivity and the True Positive Rate (TPR). Figure 4 represents the recall scores for different models. The ratio of True Positive predictions (properly recognized positive cases) to the total of True Positive and False Negative predictions (positive instances mistakenly forecasted as negative) is used:

Recall = (True Positives)/(True Positives + False Negatives)

A high recall score suggests that the model can properly identify a significant proportion of positive cases, meaning few False Negatives. A low recall score means the model misses many positive examples, resulting in more False Negatives. Recall is critical in medical diagnoses (to detect illnesses) and fraud detection (to detect fraudulent transactions) to accurately identify positive instances. However, optimizing one statistic might affect other metrics in a classification assignment; therefore, it is important to balance recall with other metrics like accuracy.

Figure 4. A representation of recall scores for different models.
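For reference, the recall defined above can be computed directly from the confusion matrix or with scikit-learn's helper; y_test and y_pred are the hypothetical test labels and predictions from the earlier classification sketch.

```python
# Recall computed two equivalent ways; y_test and y_pred come from the earlier sketch.
from sklearn.metrics import confusion_matrix, recall_score

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()   # binary case
print("recall = TP / (TP + FN) =", tp / (tp + fn))
print("recall_score            =", recall_score(y_test, y_pred))
```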
The recall scores for various models were evaluated to quantify their ability to accurately identify positive cases in the dataset. The SVM algorithm exhibited an outstanding recall score of 97%, correctly identifying 97% of the positive cases. Surprisingly, the KNN algorithm surpassed even the SVM, obtaining a recall score of 95%, demonstrating its effectiveness in correctly identifying positive cases. In contrast, the decision tree algorithm achieved a lower recall score of 84%, indicating that it missed a considerable portion of the positive cases. The Naive Bayes model achieved a recall score of 75%, while the Logistic Regression model performed relatively better with a recall score of 81%. Overall, the results highlight the superior performance of the KNN model in identifying positive cases compared to the other four algorithms. The models' diagnostic skills on the testing set are assessed using the accuracy, precision, recall, F1-score, and AUC-ROC. The confusion matrix also summarizes True Positive, False Positive, True Negative, and False Negative predictions. External validation on a different dataset assesses the model's capacity to generalize to unobserved data by comparing the proposed model's performance to the baseline classifiers and using statistical tests to discover performance differences.
G. F1-Score

A model with a high F1-score is one that effectively balances precision and recall. Evaluating the F1-scores of the aforementioned models would provide a more thorough comprehension of their overall effectiveness and potential trade-offs between precision and recall. Figure 5 represents a confusion matrix on the prediction of Alzheimer's.

Figure 5. Confusion matrix on prediction of Alzheimer's.
Visualization methods like ROC curves and precision–recall curves show the model's discrimination performance, whereas a feature significance analysis shows feature contributions. Figure 5 illustrates an AD prediction confusion matrix. True Positive (TP) occurrences are accurately predicted as including the disease, while False Positive (FP) instances are wrongly forecasted as positive. True Negative (TN) occurrences are accurately predicted as negative (not having the disease), while False Negative (FN) examples are mistakenly forecasted as negative but include the disease. This research study evaluates the suggested ensemble-based model for an AD diagnosis to improve medical data analytics and patient care by revealing its accuracy and efficacy.
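The evaluation visualizations mentioned here could be produced as in the sketch below, assuming matplotlib and the classifier fitted in the earlier sketches; it is illustrative rather than the authors' plotting code.

```python
# Assumes matplotlib and the classifier "clf" fitted in the earlier sketch.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, RocCurveDisplay

ConfusionMatrixDisplay.from_predictions(y_test, y_pred)   # TP, FP, TN, FN counts
RocCurveDisplay.from_estimator(clf, X_test, y_test)       # discrimination / AUC-ROC
plt.show()
```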
5. Conclusions

This research offers an in-depth study of AD diagnosis through machine learning. Using feature selection and data resampling, our proposed ensemble-based model effectively differentiates between healthy individuals and AD patients. It outperforms baseline classifiers like SVM and Logistic Regression in accuracy, precision, and other metrics. Relevant features enhance the model's clarity and effectiveness, while SMOTE balancing addresses class imbalance. This work contributes significantly to AD diagnoses, promoting early detection and better patient outcomes. Future studies could explore deep learning techniques, such as CNNs and RNNs, for improved brain imaging pattern recognition. Combining varied data sources, like genetics and clinical data, might refine the diagnosis. A longitudinal patient data analysis can track disease progression and risk prediction. Collaborating with medical experts for real-world validation, improving model interpretability, and integrating the model into clinical systems will further its potential in AD diagnoses and treatment.
Author Contributions: Experiment Design and Data Pre-processing, R.P.N. and B.S.; Design, R.K.;
Review and Interpretation, M.A.B.; Data Analysis and Interpreted Result, R.P.N.; Writing—Review
and Editing, R.P.N. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data will be provided on request.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Grueso, S.; Viejo-Sobera, R. Machine learning methods for predicting progression from mild cognitive impairment to AD dementia:
A systematic review. Alzheimer’s Res. Ther. 2021, 13, 1–29.
2. El-Sappagh, S.; Saleh, H.; Ali, F.; Amer, E.; Abuhmed, T. Two-stage deep learning model for AD detection and prediction of the
mild cognitive impairment time. Neural Comp. Appl. 2022, 34, 14487–14509. [CrossRef]
3. Iddi, S.; Li, D.; Aisen, P.S.; Rafii, M.S.; Thompson, W.K.; Donohue, M.C. AD Neuroimaging Initiative. Predicting the course of
Alzheimer’s progression. Brain Inform. 2019, 6, 1–18. [CrossRef]
4. Yuen, S.C.; Liang, X.; Zhu, H.; Jia, Y.; Leung, S.W. Prediction of differentially expressed microRNAs in blood as potential
biomarkers for AD by meta-analysis and adaptive boosting ensemble learning. Alzheimer’s Res. Ther. 2021, 13, 1–30.
5. Naz, S.; Ashraf, A.; Zaib, A. Transfer learning using freeze features for Alzheimer neurological disorder detection using ADNI
dataset. Multi. Syst. 2022, 28, 85–94. [CrossRef]
6. Wang, S.; Du, Z.; Ding, M.; Rodriguez-Paton, A.; Song, T. KG-DTI: A knowledge graph based deep learning method for
drug-target interaction predictions and AD drug repositions. Appl. Intell. 2022, 52, 846–857. [CrossRef]
7. Bermudez, C.; Graff-Radford, J.; Syrjanen, J.A.; Stricker, N.H.; Algeciras-Schimnich, A.; Kouri, N.; Vemuri, P. Plasma biomarkers
for prediction of AD neuropathologic change. Acta Neuropathol. 2023, 146, 13–29. [CrossRef]
8. Diogo, V.S.; Ferreira, H.A.; Prata, D. AD Neuroimaging Initiative. Early diagnosis of AD using machine learning: A multi-
diagnostic, generalizable approach. Alzheimer’s Res. Ther. 2022, 14, 107. [CrossRef]
9. Venkataramana, L.Y.; Jacob, S.G.; Prasad, V.; Athilakshmi, R.; Priyanka, V.; Yeshwanthraa, K.; Vigneswaran, S. Geometric
SMOTE-Based Approach to Improve the Prediction of Alzheimer’s and Parkinson’s Diseases for Highly Class-Imbalanced Data.
In AI, IoT, and Blockchain Breakthroughs in E-Governance; IGI Global: Hershey, PA, USA, 2023; pp. 114–137.
10. Zhang, P.; Lin, S.; Qiao, J.; Tu, Y. Diagnosis of AD with ensemble learning classifier and 3D convolutional neural network. Sensors
2021, 21, 7634. [CrossRef]
11. Rao, K.N.; Gandhi, B.R.; Rao, M.V.; Javvadi, S.; Vellela, S.S.; Basha, S.K. Prediction and Classification of AD using Machine
Learning Techniques in 3D MR Images. In Proceedings of the 2023 International Conference on Sustainable Computing and
Smart Systems (ICSCSS), Coimbatore, India, 14–16 June 2023; pp. 85–90.
12. Khoei, T.T.; Labuhn, M.C.; Caleb, T.D.; Hu, W.C.; Kaabouch, N. A stacking-based ensemble learning model with genetic algorithm
for detecting early stages of AD. In Proceedings of the 2021 IEEE International Conference on Electro Information Technology
(EIT), Mt. Pleasant, MI, USA, 14–15 May 2021; pp. 215–222.
13. Tambe, P.; Saigaonkar, R.; Devadiga, N.; Chitte, P.H. Deep Learning techniques for effective diagnosis of AD using MRI images.
ITM Web Conf. 2021, 40, 03021. [CrossRef]
14. Ghali, U.M.; Usman, A.G.; Chellube, Z.M.; Degm, M.A.A.; Hoti, K.; Umar, H.; Abba, S.I. Advanced chromatographic technique
for performance simulation of anti-Alzheimer agent: An ensemble machine learning approach. SN Appl. Sci. 2020, 2, 1–12.
[CrossRef]
15. An, N.; Ding, H.; Yang, J.; Au, R.; Ang, T.F. Deep ensemble learning for AD classification. J. Biomed. Inform. 2020, 105, 103411.
16. Pereira, T.; Cardoso, S.; Silva, D.; Guerreiro, M.; de Mendonça, A.; Madeira, S.C. Ensemble learning with Conformal Predictors:
Targeting credible predictions of conversion from Mild Cognitive Impairment to AD. arXiv 2018, arXiv:1807.01619.
17. Wang, R.; Li, H.; Lan, R.; Luo, S.; Luo, X. Hierarchical Ensemble Learning for AD Classification. In Proceedings of the 2018 7th
International Conference on Digital Home (ICDH), Guilin, China, 30 November–1 December 2018; pp. 224–229.
18. Orhobor, O.I.; Soldatova, L.N.; King, R.D. Federated ensemble regression using classification. In Proceedings of the Discovery
Science: 23rd International Conference, DS 2020, Thessaloniki, Greece, 19–21 October 2020; Springer International Publishing:
Berlin/Heidelberg, Germany; Volume 23, pp. 325–339.
19. Kang, W.; Lin, L.; Zhang, B.; Shen, X.; Wu, S. AD Neuroimaging Initiative. Multi-model and multi-slice ensemble learning
architecture based on 2D convolutional neural networks for AD diagnosis. Comput. Biol. Med. 2021, 136, 104678.
20. Mirzaei, G.; Adeli, H. Machine learning techniques for diagnosis of Alzheimer disease, mild cognitive disorder, and other types
of dementia. Biomed. Sign. Process. Control 2022, 72, 103293. [CrossRef]
21. Nguyen, D.K.; Lan, C.H.; Chan, C.L. Deep ensemble learning approaches in healthcare to enhance the prediction and diagnosing
performance: The workflows, deployments, and surveys on the statistical, image-based, and sequential datasets. Int. J. Environ.
Res. Public Health 2021, 18, 10811. [CrossRef]
22. Shaikh, T.A.; Ali, R. Enhanced computerised diagnosis of AD from brain MRI images using a classifier merger strategy. Int. J.
Inform. Technol. 2021, 14, 1–13.
23. Song, M.; Jung, H.; Lee, S.; Kim, D.; Ahn, M. Diagnostic classification and biomarker identification of AD with random forest
algorithm. Brain Sci. 2021, 11, 453. [CrossRef]
24. Hemalatha, B.; Renukadevi, M. Analysis of Alzheimer disease prediction using machine learning techniques. Inf. Technol. Ind.
2021, 9, 519–525.
25. Alamro, H.; Thafar, M.A.; Albaradei, S.; Gojobori, T.; Essack, M.; Gao, X. Exploiting machine learning models to identify novel
AD biomarkers and potential targets. Sci. Rep. 2023, 13, 4979. [CrossRef]
26. Albahri, A.S.; Alwan, K.J.; Taha, Z.K.; Ismail, S.F.; Hamid, R.A.; Zaidan, A.A.; Albahri, O.S.; Zaidan, B.B.; Alamoodi, A.H.;
Alsalem, M.A. IoT-based telemedicine for disease prevention and health promotion: State-of-the-Art. J. Netw. Comput. Appl. 2021,
173, 102873. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.