1 Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy;
giovanni.pirrone@cro.it (G.P.); adrigo@cro.it (A.D.)
2 Elekta SA, 92100 Boulogne-Billancourt, France; joseph.stancanello@elekta.com
3 National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy; alessandra.retico@pi.infn.it
* Correspondence: mavanzo@cro.it
Simple Summary: Artificial intelligence, now one of the most promising frontiers of medicine, has a long and tumultuous history punctuated by successes and failures. One of its successes was its application to medical images. We reconstruct the timeline of the advancements in this field, from its origins in the 1940s, before it crossed paths with medical imaging, to the early applications of machine learning to radiology, and on to the present era in which artificial intelligence is revolutionizing radiology.

Abstract: Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician’s decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capabilities of making an automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as ‘black boxes’ that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.

Citation: Avanzo, M.; Stancanello, J.; Pirrone, G.; Drigo, A.; Retico, A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers 2024, 16, 3702. https://doi.org/10.3390/cancers16213702

Academic Editors: David Wong and Jason Li

Received: 27 September 2024; Revised: 26 October 2024; Accepted: 29 October 2024; Published: 1 November 2024

Keywords: artificial intelligence; medical imaging; neural networks; machine learning; deep learning
2.1. Prehistory of AI
The idea of inanimate objects being able to complete tasks that are usually performed
by humans and require “intelligence” dates back to ancient times [8]. The history of AI
started with a group of great visionaries and scientists in the 1900s, including Alan Turing
(London, 1912–Manchester, 1954, Figure 1a), one of the fathers of modern computers. He
devised an abstract computer called the Turing machine, a concept of paramount importance
in modern informatics, as any modern computer device is thought to be a special case of
a Turing machine [9]. He also worked on a device, the Bombe, at Bletchley Park (75 km
northwest of London), which involved iteratively reducing the space of solutions from a set of
message transcriptions to discover the decryption key of enemy messages during World War
II [10]. This process had some resemblance to ML, which hypothesizes a model from a set of
observations [11]. A timeline of the origin and development of AI, starting from the Turing
machine to the triumph of artificial neural networks (ANNs), is provided in Figure 2.
In a public lecture held in 1947, Turing first mentioned the concept of a “machine
that can learn from experience” [12] and posed the question “Can machines think?” in his
seminal paper entitled “Computing machinery and intelligence” [13]. In this paper, the
“imitation game”, also referred to as the Turing test, was proposed to determine whether a
machine can think. In this test, a human interrogates another human and the machine
alternately. If it is not possible to distinguish the machine from the human based on the
answers, then the machine passes the test and is considered able to think.
Turing discussed strategies for achieving a thinking machine by programming and
learning. He likened the learning process to that of a child being educated by an adult
who provides positive and negative examples [11,14]. From its very beginning, different
branches of AI emerged. Symbolic AI searches for the proper rule (e.g., IF-THEN) to apply
to the problem at hand by testing/simulating all the possible rules, like in a chess game,
without training [15,16]. On the other hand, ML is characterized by a training phase, where
the machine analyses a set of data, builds a model, and measures model performance
through a function called goal or cost function [17]. The term ML was introduced by Arthur
L. Samuel (Emporia, USA, 1901–Stanford, USA, 1990), who developed the first machine
able to learn to play checkers [6]. The dawn of AI is considered to be the summer conference at
Dartmouth College (Hanover, NH, USA) in 1956 [18]. At the meeting, “artificial intelligence”
was defined by John McCarthy (Boston, USA, 1927–Stanford, USA, 2011) as “the science
and engineering of making intelligent machines”. This definition, as well as the implicit
definition of AI in the imitation game, escapes the cumbersome issue of defining what
intelligence is [19], making the goals and boundaries of the science of AI blurry. For instance,
in the early years of AI, research clearly targeted computers that could have performance
comparable with that of the human mind (“strong AI”). In later years, the AI community
shifted its aim to more limited, realistic tasks, like solving practical problems and carrying
out individual cognitive functions (“weak AI”) [19].
Figure 2. Timeline of AI (orange) and of AI in medicine (blue).

Figure 3. Scheme of the perceptron.
2.3. Supervised and Unsupervised ML

ML is used to explore data (‘data mining’) to identify variables of interest and uncover
useful correlations and patterns without any predefined hypothesis to test. In this sense,
ML operates inversely to traditional statistical approaches, which begin with a hypothesis [33].
The most common approach is supervised learning, where the system uses training
data with corresponding ground truth labels to learn how to predict these labels [34]. In
unsupervised ML, the training data have no ground truth labels, and the ML learns patterns
or relationships in the data, resulting in data-driven solutions for dimensionality reduction,
data partitioning, and the detection of outliers. To the first category belongs principal
component analysis (PCA) [35], which uses an orthogonal linear transformation to convert
the data into a new coordinate system to perform data dimension reduction [36]. PCA is
useful when a high number of variables may cause ML models to overfit. Overfitting occurs
when a model memorizes the training examples but performs poorly on independent test
sets due to a lack of generalization capability [34].
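The orthogonal transformation at the heart of PCA can be sketched in a few lines. The following minimal example (synthetic data invented for illustration, not from the paper) computes the principal components of a five-feature dataset via the singular value decomposition and projects onto the first two:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples with 5 correlated features,
# generated from only 2 underlying latent factors.
latent = rng.normal(size=(200, 2))          # true low-dimensional signal
mixing = rng.normal(size=(2, 5))            # maps 2 latent factors to 5 features
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

# PCA: center the data, then find the orthogonal directions of
# maximal variance via the singular value decomposition.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first 2 principal components (dimension 5 -> 2).
X_reduced = Xc @ Vt[:2].T

explained = (S ** 2) / (S ** 2).sum()       # fraction of variance per component
print(X_reduced.shape)
print(explained[:2].sum())
```

Because only two latent factors generate the five observed features, the first two principal components capture nearly all of the variance, so the reduced representation loses little information while shrinking the feature space that a downstream ML model would have to fit.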
Figure 4. Application of decision tree (a) and support vector machine (b) learning to the classification of iris flower species from petal width and length. Prediction (areas) and training data (dots) and the resulting decision tree are shown on the left and right sides, respectively.
terms of a feature vector can lead to an extremely large number of features depending on the
complexity of the problem to address, increasing the risk of overfitting. Feature selection is
a process to determine a subset of features such that all the features in the subset are relevant
to the target concept, and no feature is redundant [67,68]. A feature is considered redundant
when adding it on top of the others will not provide additional information; for instance,
if two features are correlated, these are redundant to each other [69]. Feature selection
may be considered an application of Ockham’s razor to ML. According to Ockham’s razor
principle, attributed to the 14th-century English logician William of Ockham (Ockham,
England, 1285–Munich, Bavaria, 1347), given two hypotheses consistent with the observed
data, the simpler one (i.e., the ML model using the lower number of features), should
be preferred [70]. Depending on the type of data, feature selection can be classified as
supervised, semi-supervised, or unsupervised [69]. There are three main classes of feature
selection methods: (i) embedded feature selection, where ML includes the choice of the
optimal subset; (ii) filtering, where features are discarded or passed to the learning phase
according to their relevance; and (iii) wrapping, which requires evaluating the accuracy of
a specific ML model on different feature subsets for choice of the optimal one [67]. “Tuning”
is the task of finding optimal hyperparameters for a learning algorithm for a considered
dataset. For instance, decision trees have several hyperparameters that may influence their
performance, such as the maximum depth of the tree and the minimum number of samples
at a leaf node [71]. Early attempts for parameter optimization include the introduction of
the Akaike information criterion for model selection [72]. More recent strategies include grid
search, in which the entire parameter space is discretized and searched, and random search [73], in
which values are drawn randomly from a specified hyperparameter space, which is more
efficient, especially for ANNs [71].
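The difference between the two search strategies can be sketched concretely. The example below (a hypothetical tuning task on synthetic data, standing in for a real model and dataset) tunes the regularization strength of a ridge regressor by random search: candidate values are drawn from a log-uniform space and scored on a held-out validation set, instead of exhaustively evaluating a discretized grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data, split into training and validation sets.
X = rng.normal(size=(120, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=120)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

def fit_ridge(X, y, alpha):
    # Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def val_error(alpha):
    # Hyperparameter score: mean squared error on the validation set.
    w = fit_ridge(X_tr, y_tr, alpha)
    return np.mean((X_val @ w - y_val) ** 2)

# Random search: draw hyperparameter values from a log-uniform space
# rather than exhaustively discretizing it as in grid search.
candidates = 10 ** rng.uniform(-4, 2, size=30)
best_alpha = min(candidates, key=val_error)
print(best_alpha, val_error(best_alpha))
```

With the same budget of 30 evaluations, random search explores 30 distinct values along each hyperparameter axis, which is why it tends to outperform an equally sized grid when only a few hyperparameters actually matter.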
3.3. First Uses of Neural Networks for Image Recognition

To address the criticism of M. Minsky and S. Papert [37] and enable neural networks
to solve nonlinearly separable problems, many additional layers of neuron-like units must
be placed between input and output layers, leading to multilayer ANNs. The first work
proposing multilayer perceptrons was published in 1965 by Ivakhnenko and Lapa [71].
A multilayered neural network called the “Neocognitron” was proposed in 1980 by
Fukushima [74]; it was used for image recognition [75] and included multiple convolutional
layers to extract image features of increasing complexity. These intermediate layers are
called hidden layers [21], and multilayer architectures of neural networks are called “deep”.
Hence, the term “deep learning” (DL) was coined by R. Dechter [76]. The difference between
single-layer and multilayer ANNs is shown in Figure 5.
Figure 5. Comparison between single-layer (a) and multilayer (b) ANNs.
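The limitation that motivated hidden layers can be seen concretely with XOR, the canonical nonlinearly separable problem raised by Minsky and Papert's criticism: no single-layer perceptron can compute it, while a two-layer network can. The sketch below uses hand-chosen weights (purely illustrative, not a trained network):

```python
def step(z):
    # Threshold activation of the classic perceptron.
    return 1 if z > 0 else 0

def two_layer_xor(x1, x2):
    # Hidden layer: one unit fires for OR, another for AND.
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit combines them: OR AND (NOT AND) == XOR.
    # XOR is not linearly separable, so no single-layer
    # perceptron can represent this mapping.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", two_layer_xor(a, b))  # prints the XOR truth table
```

The hidden units re-describe the inputs in a new feature space (OR and AND) in which the problem becomes linearly separable for the output unit, which is exactly the role the intermediate layers play in deeper networks.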
In a DL neural network, training is performed by updating all the weights simultaneously
in the opposite direction to a vector that indicates the change in the error if the weights
are changed by a small amount, so as to search for a minimum in the error response. This
method is called “standard gradient descent” [77], and one of its limitations was that it
made tasks such as image recognition too computationally expensive.
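The update rule described above can be sketched on the simplest possible model, a one-weight linear fit (synthetic data, a minimal illustration): at each step the weight is nudged a small amount opposite to the derivative of the error.

```python
import numpy as np

# Fit y = w * x to noisy data by standard gradient descent:
# repeatedly move w a small step against d(error)/dw.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=50)
y = 3.0 * x + 0.01 * rng.normal(size=50)

w, lr = 0.0, 0.1
for _ in range(200):
    error = np.mean((w * x - y) ** 2)     # cost function
    grad = np.mean(2 * (w * x - y) * x)   # d(error)/dw
    w -= lr * grad                        # step in the opposite direction
print(w)  # converges close to the true slope 3.0
```

A real image classifier applies the same rule to millions of weights at once, which is why, before efficient gradient computation and modern hardware, the approach was considered too expensive for image recognition.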
The invention of the backpropagation learning algorithm in the mid-1980s [78] significantly
improved the efficiency of training neural networks. The backpropagation equations provide
a way of computing the gradient of the cost function starting from the final layer [79]. The
backpropagation equation can then be applied repeatedly to propagate gradients through
all modules, all the way to the input layer, by using the chain rule of derivatives to estimate
how the cost varies with earlier weights [80]. This learning rule and its variants enabled the
use of neural networks in many hard medical diagnostic tasks [81].
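As a sketch, the backward pass below applies the chain rule layer by layer on a tiny two-layer network (synthetic data and hand-rolled gradients, purely illustrative): the gradient is computed at the output and propagated back through the hidden layer to the first-layer weights.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny two-layer network y = w2 @ tanh(w1 @ x), trained on a toy target.
x = rng.normal(size=(4, 32))           # 32 samples with 4 inputs each
t = np.sin(x.sum(axis=0))              # toy regression target
w1 = 0.5 * rng.normal(size=(8, 4))
w2 = 0.5 * rng.normal(size=(1, 8))

lr, costs = 0.05, []
for _ in range(500):
    # Forward pass, keeping intermediates for the backward pass.
    h = np.tanh(w1 @ x)                # hidden layer
    y = (w2 @ h)[0]                    # output layer
    costs.append(np.mean((y - t) ** 2))

    # Backward pass: chain rule, starting from the final layer.
    dy = 2 * (y - t) / y.size          # d cost / d output
    dw2 = dy[None, :] @ h.T            # gradient of the output weights
    dh = w2.T @ dy[None, :]            # propagate back to the hidden layer
    dz = dh * (1 - h ** 2)             # through the tanh derivative
    dw1 = dz @ x.T                     # gradient of the first-layer weights

    w1 -= lr * dw1                     # gradient descent step on every weight
    w2 -= lr * dw2

print(costs[0], "->", costs[-1])       # the cost decreases over training
```

One forward and one backward pass suffice to obtain the gradient of every weight, which is the efficiency gain over earlier schemes that perturbed weights one at a time.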
The introduction of the rectifier function, or ReLU (rectified linear unit), an activation
function that is zero if the input is negative and equal to the input otherwise, helped
reduce the risk of vanishing/exploding gradients [82]; it remains the most widely used
activation function today.
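In code, the rectifier is a one-liner (a minimal sketch):

```python
import numpy as np

def relu(z):
    # Zero for negative inputs, the input itself otherwise.
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z).tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

Because its derivative is either 0 or 1, repeated multiplication during backpropagation neither shrinks nor amplifies the gradient for active units, which is what mitigates the vanishing/exploding gradient problem compared with saturating activations such as the sigmoid.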
Despite this progress, at the beginning of the 1990s AI entered its second winter, which this
time involved neural networks, to the point that, at a prominent conference, it was noted
that the term “neural networks” in a manuscript title was negatively correlated with
acceptance [83]. This new AI winter was partly due to the vanishing/exploding gradient
problem of DL, which is the exponential decrease or increase in the backpropagated gradient
of the weights in a deep neural network.
research field encompassing the entire view of a system by mining a large amount of
data [102]. Radiomics focused on investigating the tumor phenotype in imaging for build-
ing prognostic and predictive models [103], in particular for oncological applications [14].
Thus, it has largely contributed to the idea that ML can be applied to quantitatively analyze
images [104]. An array of ML techniques is currently used for radiomics, including SVM
and ensemble decision trees [1]. The radiomic approach, complemented by ML, has been
largely implemented in a large variety of studies devoted to the identification of imaging-
based biomarkers of disease severity assessment or staging and patient’s outcome or risk
for side effects [105–109]. The scientific community is still investigating the robustness and
reproducibility of radiomics features and their dependence on image acquisition systems
and parameters across different modalities [110,111].
Among the CNN-based systems approved for clinical use, ENDOANGEL (Wuhan
EndoAngel Medical Technology Company, Wuhan, China) can provide an objective assess-
ment of bowel preparation every 30 s during the withdrawal phase of a colonoscopy [129].
The CNN-based system can also analyze images to predict overall survival and occurrence
of distant metastases [130].
These DL models are characterized by a very large number of free parameters that
must be set during the training phase, making network training from scratch computa-
tionally intensive. In transfer learning [131,132], the knowledge acquired in one domain
is transferred to a different one, much like a person using their guitar-playing skills to
learn the piano [133]. This allows a learner in one domain (e.g., radiographs) to leverage
information previously acquired by models such as the Visual Geometry Group (VGG)
and Residual Network (ResNet), which were trained on a related domain (e.g., images of
common objects).
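The transfer-learning recipe (freeze the pretrained feature extractor, train only a new head on the small target-domain dataset) can be sketched as follows. Here the "backbone" is a hypothetical stand-in (a fixed random projection), not an actual VGG or ResNet, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a pretrained backbone (e.g., VGG or ResNet features): a fixed,
# frozen mapping from raw inputs to feature vectors. This random projection
# is a placeholder; in practice the backbone would be a network trained on a
# large source domain such as natural images.
W_frozen = rng.normal(size=(16, 64))
def backbone(x):
    return np.tanh(W_frozen @ x)          # frozen: never updated

# Small target-domain dataset (too small to train a full model from scratch).
X = rng.normal(size=(64, 40))             # 40 samples with 64 raw inputs
labels = (X.sum(axis=0) > 0).astype(float)

# Transfer learning: keep the backbone fixed and train only a new linear head.
feats = backbone(X)                       # (16, 40) transferred features
w_head = np.zeros(16)
losses = []
for _ in range(300):
    p = 1 / (1 + np.exp(-(w_head @ feats)))          # logistic head
    losses.append(-np.mean(labels * np.log(p + 1e-9)
                           + (1 - labels) * np.log(1 - p + 1e-9)))
    grad = feats @ (p - labels) / labels.size
    w_head -= 0.1 * grad                  # only the head's weights move
print(losses[0], "->", losses[-1])        # cross-entropy drops as the head adapts
```

Because only the small head is trained, far fewer labeled target-domain examples are needed than training the full network from scratch would require, which is the practical appeal of transfer learning in medical imaging.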
have also been applied to rigid [158] and deformable image registration [159], which is
necessary to precisely track the absorbed dose in radiotherapy treatments at the voxel
level [160]. A promising new neural architecture, the neural field, can efficiently perform
any of the above tasks by parameterizing the physical properties of images [161].
demonstrated the ability to predict specific phenotypes from raw genomic data [193], to
assign emergency codes based on symptoms during triage in emergency departments [194],
and other tasks [195].
Recently, multi-input AI models have become able to merge and mine the complementary
information encoded in omics data, EHRs, imaging data, phenomics, and environmental
data of the patient, which represents a current technical challenge [152]. Meta AI’s
ImageBind [196] and Google DeepMind’s Perceiver IO [197] represent significant
advancements in processing and integrating multimodal data. Beyond the flexibility of its
architectures, other reasons contributed to the wide adoption of DL in medical
imaging [198–200]. Despite these successes, there are also challenges. This section explores
both the factors contributing to AI’s success and the concerns surrounding its use.
5.3. Explainability/Interpretability
A key barrier to the widespread adoption of AI-based tools in medical imaging is that
these systems are often viewed as black boxes, making it challenging to understand how
they arrive at their decisions [216].
6. Conclusions
After a long and tumultuous history, we are in a phase of enthusiasm and promises
regarding AI applications to medicine. Fueled by its versatility, impressive results, and
the availability of powerful computing resources and open-source libraries, AI is one of
the most promising frontiers in medicine. Some medical imaging tasks can be successfully
addressed by traditional ML methods like RF, which is less prone to overfitting than DL and
more easily interpretable. Various DL architectures can efficiently and accurately perform a
range of tasks, including image reconstruction and registration. DL networks have also
achieved human-level performance in tasks such as lesion detection, image classification,
and segmentation. Additionally, foundation models, pre-trained on a large scale, can be
fine-tuned for diverse domains, requiring less training data than training a DL model
from scratch.
Indeed, to facilitate the diffusion of AI-based tools in clinical workflows, in addition to
the development of increasingly cutting-edge technological solutions that can answer dif-
ferent clinical questions, AI-based systems should be validated in large-scale clinical trials
to demonstrate their effectiveness. Additional concerns regarding AI in healthcare must be
addressed, including the view of AI tools as ‘black boxes’, which calls for more interpretable
and explainable models to earn the trust of both doctors and patients. Ethical issues, such
as ensuring fairness and reliability in AI systems, also need careful consideration.
References
1. Avanzo, M.; Porzio, M.; Lorenzon, L.; Milan, L.; Sghedoni, R.; Russo, G.; Massafra, R.; Fanizzi, A.; Barucci, A.; Ardu, V.; et al.
Artificial Intelligence Applications in Medical Imaging: A Review of the Medical Physics Research in Italy. Phys. Med. 2021, 83,
221–241. [CrossRef] [PubMed]
2. Dembrower, K.; Crippa, A.; Colón, E.; Eklund, M.; Strand, F. Artificial Intelligence for Breast Cancer Detection in Screening
Mammography in Sweden: A Prospective, Population-Based, Paired-Reader, Non-Inferiority Study. Lancet Digit. Health 2023, 5,
e703–e711. [CrossRef] [PubMed]
3. Zanca, F.; Brusasco, C.; Pesapane, F.; Kwade, Z.; Beckers, R.; Avanzo, M. Regulatory Aspects of the Use of Artificial Intelligence
Medical Software. Semin. Radiat. Oncol. 2022, 32, 432–441. [CrossRef] [PubMed]
4. Armato, S.G.; Drukker, K.; Hadjiiski, L. AI in Medical Imaging Grand Challenges: Translation from Competition to Research
Benefit and Patient Care. Br. J. Radiol. 2023, 96, 20221152. [CrossRef] [PubMed]
5. Radanliev, P.; de Roure, D. Review of Algorithms for Artificial Intelligence on Low Memory Devices. IEEE Access 2021, 9,
109986–109993. [CrossRef]
6. Samuel, A. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev. 1959, 3, 210–229. [CrossRef]
7. Hassabis, D. Artificial Intelligence: Chess Match of the Century. Nature 2017, 544, 413–414. [CrossRef]
8. Fron, C.; Korn, O. A Short History of the Perception of Robots and Automata from Antiquity to Modern Times. In Social Robots:
Technological, Societal and Ethical Aspects of Human-Robot Interaction; Korn, O., Ed.; Springer International Publishing: Cham,
Switzerland, 2019; pp. 1–12; ISBN 978-3-030-17107-0.
9. Common Sense, the Turing Test, and the Quest for Real AI. The MIT Press. Available online: https://mitpress.mit.edu/books/
common-sense-turing-test-and-quest-real-ai (accessed on 7 July 2022).
10. Haenlein, M.; Kaplan, A. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif.
Manag. Rev. 2019, 61, 000812561986492. [CrossRef]
11. Muggleton, S. Alan Turing and the Development of Artificial Intelligence. AI Commun. 2014, 27, 3–10. [CrossRef]
12. Turing, A. Lecture on the Automatic Computing Engine (1947); Oxford University Press: Oxford, UK, 2004.
13. Turing, A.M. I.—Computing Machinery and Intelligence. Mind 1950, LIX, 433–460. [CrossRef]
14. Reaching for Artificial Intelligence: A Personal Memoir on Learning Machines and Machine Learning Pioneers—UNESCO Digital
Library. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000367501?posInSet=38&queryId=2deebfb4-e631-4723-
a7fb-82590e5c3eb8 (accessed on 24 August 2022).
15. Understanding AI—Part 3: Methods of Symbolic AI. Available online: https://divis.io/en/2019/04/understanding-ai-part-3-
methods-of-symbolic-ai/ (accessed on 20 July 2022).
16. Garnelo, M.; Shanahan, M. Reconciling Deep Learning with Symbolic Artificial Intelligence: Representing Objects and Relations.
Curr. Opin. Behav. Sci. 2019, 29, 17–23. [CrossRef]
17. Sorantin, E.; Grasser, M.G.; Hemmelmayr, A.; Tschauner, S.; Hrzic, F.; Weiss, V.; Lacekova, J.; Holzinger, A. The Augmented
Radiologist: Artificial Intelligence in the Practice of Radiology. Pediatr. Radiol. 2021, 52, 2074–2086. [CrossRef] [PubMed]
18. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial
Intelligence, August 31, 1955. AI Mag. 2006, 27, 12. [CrossRef]
19. Wang, P. On Defining Artificial Intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37. [CrossRef]
20. McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133.
[CrossRef]
21. Basheer, I.A.; Hajmeer, M. Artificial Neural Networks: Fundamentals, Computing, Design, and Application. J. Microbiol. Methods
2000, 43, 3–31. [CrossRef]
22. Piccinini, G. The First Computational Theory of Mind and Brain: A Close Look at Mcculloch and Pitts’s “Logical Calculus of
Ideas Immanent in Nervous Activity”. Synthese 2004, 141, 175–215. [CrossRef]
23. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [CrossRef]
24. Wang, H.; Raj, B. On the Origin of Deep Learning. arXiv 2017, arXiv:1702.07800. [CrossRef]
25. Hebb, D.O. The Organization of Behavior; Wiley: New York, NY, USA, 1949. [CrossRef]
26. Chakraverty, S.; Sahoo, D.M.; Mahato, N.R. Hebbian Learning Rule. In Concepts of Soft Computing: Fuzzy and ANN with
Programming; Chakraverty, S., Sahoo, D.M., Mahato, N.R., Eds.; Springer: Singapore, 2019; pp. 175–182. ISBN 9789811374302.
27. Toosi, A.; Bottino, A.G.; Saboury, B.; Siegel, E.; Rahmim, A. A Brief History of AI: How to Prevent Another Winter (A Critical
Review). PET Clin. 2021, 16, 449–469. [CrossRef] [PubMed]
28. Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. 1958, 65,
386–408. [CrossRef] [PubMed]
29. Parhi, K.; Unnikrishnan, N. Brain-Inspired Computing: Models and Architectures. IEEE Open J. Circuits Syst. 2020, 1, 185–204.
[CrossRef]
30. Dawson, M.R.W. Connectionism and Classical Conditioning. Comp. Cogn. Behav. Rev. 2008, 3, 115–133. [CrossRef]
31. Macukow, B. Neural Networks—State of Art, Brief History, Basic Models and Architecture. In Proceedings of the Computer Informa-
tion Systems and Industrial Management; Saeed, K., Homenda, W., Eds.; Springer International Publishing: Cham, Switzerland,
2016; pp. 3–14.
32. Raschka, S. What Is the Difference Between a Perceptron, Adaline, and Neural Network Model? Available online: https:
//sebastianraschka.com/faq/docs/diff-perceptron-adaline-neuralnet.html (accessed on 8 July 2022).
33. Butterworth, M. The ICO and Artificial Intelligence: The Role of Fairness in the GDPR Framework. Comput. Law Secur. Rev. 2018,
34, 257–268. [CrossRef]
34. Avanzo, M.; Wei, L.; Stancanello, J.; Vallières, M.; Rao, A.; Morin, O.; Mattonen, S.A.; El Naqa, I. Machine and Deep Learning
Methods for Radiomics. Med. Phys. 2020, 47, e185–e202. [CrossRef]
35. Wold, S.; Esbensen, K.; Geladi, P. Principal Component Analysis. Proc. Multivar. Stat. Workshop Geol. Geochem. 1987, 2, 37–52.
[CrossRef]
36. Dayan, P.; Sahani, M.; Deback, G. Unsupervised Learning. In Proceedings of the MIT Encyclopedia of the Cognitive Sciences; The MIT
Press: Cambridge, MA, USA, 1999.
37. Minsky, M.; Papert, S. Perceptrons; an Introduction to Computational Geometry; MIT Press: Cambridge, MA, USA, 1969; ISBN
978-0-262-13043-1.
38. Spears, B. Contemporary Machine Learning: A Guide for Practitioners in the Physical Sciences. arXiv 2017, arXiv:1712.08523.
[CrossRef]
39. The Dendral Project (Chapter 15)—The Quest for Artificial Intelligence. Available online: https://www.cambridge.org/core/
books/abs/quest-for-artificial-intelligence/dendral-project/7791DA5FAAF8D57E4B27E4EE387758E1 (accessed on 14 July 2022).
40. Rediscovering Some Problems of Artificial Intelligence in the Context of Organic Chemistry—Digital Collections—National
Library of Medicine. Available online: https://collections.nlm.nih.gov/catalog/nlm:nlmuid-101584906X921-doc (accessed on 25
August 2022).
41. Weiss, S.; Kulikowski, C.A.; Safir, A. Glaucoma Consultation by Computer. Comput. Biol. Med. 1978, 8, 25–40. [CrossRef]
42. Miller, R.A.; Pople, H.E.; Myers, J.D. Internist-I, an Experimental Computer-Based Diagnostic Consultant for General Internal
Medicine. N. Engl. J. Med. 1982, 307, 468–476. [CrossRef]
43. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An Overview of Clinical Decision Support
Systems: Benefits, Risks, and Strategies for Success. NPJ Digit. Med. 2020, 3, 1–10. [CrossRef] [PubMed]
44. Bone Tumor Diagnosis. Available online: http://uwmsk.org/bayes/bonetumor.html (accessed on 29 July 2022).
45. Lodwick, G.S.; Haun, C.L.; Smith, W.E.; Keller, R.F.; Robertson, E.D. Computer Diagnosis of Primary Bone Tumors. Radiology
1963, 80, 273–275. [CrossRef]
46. Belson, W.A. A Technique for Studying the Effects of a Television Broadcast. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1956, 5, 195–202.
[CrossRef]
47. de Ville, B. Decision Trees. WIREs Comput. Stat. 2013, 5, 448–455. [CrossRef]
48. Belson, W.A. Matching and Prediction on the Principle of Biological Classification. J. R. Stat. Soc. Ser. C 1959, 8, 65–75. [CrossRef]
49. Ritschard, G. CHAID and Earlier Supervised Tree Methods; Routledge/Taylor & Francis Group: London, UK, 2010; p. 74;
ISBN 978-0-415-81706-6.
50. Morgan, J.N.; Sonquist, J.A. Problems in the Analysis of Survey Data, and a Proposal. J. Am. Stat. Assoc. 1963, 58, 415–434.
[CrossRef]
51. Gini, C. Variabilità e Mutabilità: Contributo allo Studio delle Distribuzioni e delle Relazioni Statistiche; Tipogr. di P. Cuppini: Bologna,
Italy, 1912. Available online: https://books.google.it/books?id=fqjaBPMxB9kC (accessed on 10 July 2022).
52. Podgorelec, V.; Kokol, P.; Stiglic, B.; Rozman, I. Decision Trees: An Overview and Their Use in Medicine. J. Med. Syst. 2002, 26,
445–463. [CrossRef]
53. Breiman, L.; Friedman, J.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees. Available online: https://www.
taylorfrancis.com/books/mono/10.1201/9781315139470/classification-regression-trees-leo-breiman-jerome-friedman-
richard-olshen-charles-stone (accessed on 10 July 2022).
54. Shim, E.J.; Yoon, M.A.; Yoo, H.J.; Chee, C.G.; Lee, M.H.; Lee, S.H.; Chung, H.W.; Shin, M.J. An MRI-Based Decision Tree
to Distinguish Lipomas and Lipoma Variants from Well-Differentiated Liposarcoma of the Extremity and Superficial Trunk:
Classification and Regression Tree (CART) Analysis. Eur. J. Radiol. 2020, 127, 109012. [CrossRef]
87. Avanzo, M.; Pirrone, G.; Mileto, M.; Massarut, S.; Stancanello, J.; Baradaran-Ghahfarokhi, M.; Rink, A.; Barresi, L.; Vinante, L.;
Piccoli, E.; et al. Prediction of Skin Dose in Low-kV Intraoperative Radiotherapy Using Machine Learning Models Trained on
Results of in Vivo Dosimetry. Med. Phys. 2019, 46, 1447–1454. [CrossRef] [PubMed]
88. Ho, T.K. Random Decision Forests. In Proceedings of the Third International Conference on Document Analysis and Recognition,
Montreal, QC, Canada, 14–16 August 1995; p. 278.
89. Schapire, R.E. The Strength of Weak Learnability. Mach. Learn. 1990, 5, 197–227. [CrossRef]
90. Freund, Y.; Schapire, R.E. A Desicion-Theoretic Generalization of on-Line Learning and an Application to Boosting. In Proceedings
of the Computational Learning Theory; Vitányi, P., Ed.; Springer: Berlin/Heidelberg, Germany, 1995; pp. 23–37.
91. Giger, M.L.; Chan, H.-P.; Boone, J. Anniversary Paper: History and Status of CAD and Quantitative Image Analysis: The Role of
Medical Physics and AAPM. Med. Phys. 2008, 35, 5799–5820. [CrossRef] [PubMed]
92. Doi, K.; Giger, M.L.; Nishikawa, R.M.; Schmidt, R.A. Computer Aided Diagnosis of Breast Cancer on Mammograms. Breast
Cancer 1997, 4, 228–233. [CrossRef]
93. Le, E.P.V.; Wang, Y.; Huang, Y.; Hickman, S.; Gilbert, F.J. Artificial Intelligence in Breast Imaging. Clin. Radiol. 2019, 74, 357–366.
[CrossRef]
94. Ackerman, L.V.; Gose, E.E. Breast Lesion Classification by Computer and Xeroradiograph. Cancer 1972, 30, 1025–1035. [CrossRef]
95. Asada, N.; Doi, K.; MacMahon, H.; Montner, S.M.; Giger, M.L.; Abe, C.; Wu, Y. Potential Usefulness of an Artificial Neural
Network for Differential Diagnosis of Interstitial Lung Diseases: Pilot Study. Radiology 1990, 177, 857–860. [CrossRef]
96. U.S. Food and Drug Administration. Summary of Safety and Effectiveness Data: R2 Technologies (P970058). 1998. Available
online: https://www.accessdata.fda.gov/cdrh_docs/pdf/p970058.pdf (accessed on 28 October 2024).
97. Gilbert, F.J.; Astley, S.M.; Gillan, M.G.C.; Agbaje, O.F.; Wallis, M.G.; James, J.; Boggis, C.R.M.; Duffy, S.W. Single Reading with
Computer-Aided Detection for Screening Mammography. N. Engl. J. Med. 2008, 359, 1675–1684. [CrossRef]
98. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3,
610–621. [CrossRef]
99. Cavouras, D.; Prassopoulos, P. Computer Image Analysis of Brain CT Images for Discriminating Hypodense Cerebral Lesions in
Children. Med. Inform. 1994, 19, 13–20. [CrossRef]
100. Gillies, R.J.; Anderson, A.R.; Gatenby, R.A.; Morse, D.L. The Biology Underlying Molecular Imaging in Oncology: From Genome
to Anatome and Back Again. Clin. Radiol. 2010, 65, 517–521. [CrossRef] [PubMed]
101. Proceedings of the 2010 World Molecular Imaging Congress, Kyoto, Japan, 8–11 September 2010. Mol. Imaging Biol. 2010, 12,
500–1636. [CrossRef]
102. Falk, M.; Hausmann, M.; Lukasova, E.; Biswas, A.; Hildenbrand, G.; Davidkova, M.; Krasavin, E.; Kleibl, Z.; Falkova, I.; Jezkova,
L.; et al. Determining Omics Spatiotemporal Dimensions Using Exciting New Nanoscopy Techniques to Assess Complex Cell
Responses to DNA Damage: Part B–Structuromics. Crit. Rev. Eukaryot. Gene Expr. 2014, 24, 225–247. [CrossRef]
103. Avanzo, M.; Gagliardi, V.; Stancanello, J.; Blanck, O.; Pirrone, G.; El Naqa, I.; Revelant, A.; Sartor, G. Combining Computed
Tomography and Biologically Effective Dose in Radiomics and Deep Learning Improves Prediction of Tumor Response to Robotic
Lung Stereotactic Body Radiation Therapy. Med. Phys. 2021, 48, 6257–6269. [CrossRef] [PubMed]
104. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577.
[CrossRef]
105. Quartuccio, N.; Marrale, M.; Laudicella, R.; Alongi, P.; Siracusa, M.; Sturiale, L.; Arnone, G.; Cutaia, G.; Salvaggio, G.; Midiri,
M.; et al. The Role of PET Radiomic Features in Prostate Cancer: A Systematic Review. Clin. Transl. Imaging 2021, 9, 579–588.
[CrossRef]
106. Ubaldi, L.; Valenti, V.; Borgese, R.F.; Collura, G.; Fantacci, M.E.; Ferrera, G.; Iacoviello, G.; Abbate, B.F.; Laruina, F.; Tripoli, A.;
et al. Strategies to Develop Radiomics and Machine Learning Models for Lung Cancer Stage and Histology Prediction Using
Small Data Samples. Phys. Med. 2021, 90, 13–22. [CrossRef]
107. Pirrone, G.; Matrone, F.; Chiovati, P.; Manente, S.; Drigo, A.; Donofrio, A.; Cappelletto, C.; Borsatti, E.; Dassie, A.; Bortolus, R.;
et al. Predicting Local Failure after Partial Prostate Re-Irradiation Using a Dosiomic-Based Machine Learning Model. J. Pers. Med.
2022, 12, 1491. [CrossRef]
108. Avanzo, M.; Stancanello, J.; Pirrone, G.; Sartor, G. Radiomics and Deep Learning in Lung Cancer. Strahlenther. Onkol. 2020, 196,
879–887. [CrossRef]
109. Peira, E.; Sensi, F.; Rei, L.; Gianeri, R.; Tortora, D.; Fiz, F.; Piccardo, A.; Bottoni, G.; Morana, G.; Chincarini, A. Towards an
Automated Approach to the Semi-Quantification of [18F]F-DOPA PET in Pediatric-Type Diffuse Gliomas. J. Clin. Med. 2023,
12, 2765. [CrossRef]
110. Ubaldi, L.; Saponaro, S.; Giuliano, A.; Talamonti, C.; Retico, A. Deriving Quantitative Information from Multiparametric MRI via
Radiomics: Evaluation of the Robustness and Predictive Value of Radiomic Features in the Discrimination of Low-Grade versus
High-Grade Gliomas with Machine Learning. Phys. Med. 2023, 107, 102538. [CrossRef] [PubMed]
111. Traverso, A.; Wee, L.; Dekker, A.; Gillies, R. Repeatability and Reproducibility of Radiomic Features: A Systematic Review. Int. J.
Radiat. Oncol. Biol. Phys. 2018, 102, 1143–1158. [CrossRef] [PubMed]
112. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to
Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [CrossRef]
113. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks
from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
114. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of
the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25.
115. Reinke, A.; Tizabi, M.D.; Eisenmann, M.; Maier-Hein, L. Common Pitfalls and Recommendations for Grand Challenges in Medical
Artificial Intelligence. Eur. Urol. Focus 2021, 7, 710–712. [CrossRef]
116. Castiglioni, I.; Rundo, L.; Codari, M.; Di Leo, G.; Salvatore, C.; Interlenghi, M.; Gallivanone, F.; Cozzi, A.; D’Amico, N.C.;
Sardanelli, F. AI Applications to Medical Images: From Machine Learning to Deep Learning. Phys. Med. 2021, 83, 9–24. [CrossRef]
117. Yu, K.-H.; Beam, A.L.; Kohane, I.S. Artificial Intelligence in Healthcare. Nat. Biomed. Eng. 2018, 2, 719–731. [CrossRef]
118. Chan, H.-P.; Hadjiiski, L.M.; Samala, R.K. Computer-Aided Diagnosis in the Era of Deep Learning. Med. Phys. 2020, 47, e218–e227.
[CrossRef]
119. Fujita, H. AI-Based Computer-Aided Diagnosis (AI-CAD): The Latest Review to Read First. Radiol. Phys. Technol. 2020, 13, 6–19.
[CrossRef]
120. Wu, Y.C.; Doi, K.; Giger, M.L. Detection of Lung Nodules in Digital Chest Radiographs Using Artificial Neural Networks: A Pilot
Study. J. Digit. Imaging 1995, 8, 88–94. [CrossRef]
121. Lo, S.C.; Lou, S.L.; Lin, J.S.; Freedman, M.T.; Chien, M.V.; Mun, S.K. Artificial Convolution Neural Network Techniques and
Applications for Lung Nodule Detection. IEEE Trans. Med. Imaging 1995, 14, 711–718. [CrossRef] [PubMed]
122. Lo, S.-C.B.; Lin, J.-S.; Freedman, M.T.; Mun, S.K. Computer-Assisted Diagnosis of Lung Nodule Detection Using Artificial
Convolution Neural Network. In Proceedings of the Medical Imaging 1993: Image Processing, Newport Beach, CA, USA, 14–19
February 1993; Volume 1898, pp. 859–869.
123. Chan, H.P.; Lo, S.C.; Sahiner, B.; Lam, K.L.; Helvie, M.A. Computer-Aided Detection of Mammographic Microcalcifications:
Pattern Recognition with an Artificial Neural Network. Med. Phys. 1995, 22, 1555–1567. [CrossRef] [PubMed]
124. Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Van Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers,
R.M. A Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies With Progress Highlights,
and Future Promises. Proc. IEEE 2021, 109, 820–838. [CrossRef] [PubMed]
125. Kaul, V.; Enslin, S.; Gross, S.A. History of Artificial Intelligence in Medicine. Gastrointest. Endosc. 2020, 92, 807–812. [CrossRef]
126. Lipkova, J.; Chen, R.J.; Chen, B.; Lu, M.Y.; Barbieri, M.; Shao, D.; Vaidya, A.J.; Chen, C.; Zhuang, L.; Williamson, D.F.; et al.
Artificial Intelligence for Multimodal Data Integration in Oncology. Cancer Cell 2022, 40, 1095–1110. [CrossRef]
127. Eixelberger, T.; Wolkenstein, G.; Hackner, R.; Bruns, V.; Mühldorfer, S.; Geissler, U.; Belle, S.; Wittenberg, T. YOLO Networks for
Polyp Detection: A Human-in-the-Loop Training Approach. Curr. Dir. Biomed. Eng. 2022, 8, 277–280. [CrossRef]
128. Ragab, M.G.; Abdulkadir, S.J.; Muneer, A.; Alqushaibi, A.; Sumiea, E.H.; Qureshi, R.; Al-Selwi, S.M.; Alhussian, H. A
Comprehensive Systematic Review of YOLO for Medical Object Detection (2018 to 2023). IEEE Access 2024, 12, 57815–57836. [CrossRef]
129. Gong, D.; Wu, L.; Zhang, J.; Mu, G.; Shen, L.; Liu, J.; Wang, Z.; Zhou, W.; An, P.; Huang, X.; et al. Detection of Colorectal
Adenomas with a Real-Time Computer-Aided System (ENDOANGEL): A Randomised Controlled Study. Lancet Gastroenterol.
Hepatol. 2020, 5, 352–361. [CrossRef]
130. Wang, Y.; Lombardo, E.; Avanzo, M.; Zschaek, S.; Weingärtner, J.; Holzgreve, A.; Albert, N.L.; Marschner, S.; Fanetti, G.; Franchin,
G.; et al. Deep Learning Based Time-to-Event Analysis with PET, CT and Joint PET/CT for Head and Neck Cancer Prognosis.
Comput. Methods Programs Biomed. 2022, 222, 106948. [CrossRef]
131. Perkins, D.; Salomon, G. Transfer of Learning. In The International Encyclopedia of Education, 2nd ed.; Husén, T., Postlethwaite,
T.N., Eds.; Pergamon: Oxford, UK, 1994; pp. 425–441.
132. Kim, H.E.; Cosa-Linan, A.; Santhanam, N.; Jannesari, M.; Maros, M.E.; Ganslandt, T. Transfer Learning for Medical Image
Classification: A Literature Review. BMC Med. Imaging 2022, 22, 69. [CrossRef]
133. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A Survey of Transfer Learning. J. Big Data 2016, 3, 9. [CrossRef]
134. Michael, E.; Ma, H.; Li, H.; Kulwa, F.; Li, J. Breast Cancer Segmentation Methods: Current Status and Future Potentials. Biomed.
Res. Int. 2021, 2021, 9962109. [CrossRef] [PubMed]
135. O’Donnell, M.; Gore, J.C.; Adams, W.J. Toward an Automated Analysis System for Nuclear Magnetic Resonance Imaging. II.
Initial Segmentation Algorithm. Med. Phys. 1986, 13, 293–297. [CrossRef] [PubMed]
136. Bezdek, J.C.; Hall, L.O.; Clarke, L.P. Review of MR Image Segmentation Techniques Using Pattern Recognition. Med. Phys. 1993,
20, 1033–1048. [CrossRef] [PubMed]
137. Comelli, A.; Stefano, A.; Russo, G.; Bignardi, S.; Sabini, M.G.; Petrucci, G.; Ippolito, M.; Yezzi, A. K-Nearest Neighbor Driving
Active Contours to Delineate Biological Tumor Volumes. Eng. Appl. Artif. Intell. 2019, 81, 133–144. [CrossRef]
138. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Navab, N., Hornegger, J.,
Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
139. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al.
Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [CrossRef]
140. Lizzi, F.; Agosti, A.; Brero, F.; Cabini, R.F.; Fantacci, M.E.; Figini, S.; Lascialfari, A.; Laruina, F.; Oliva, P.; Piffer, S.; et al.
Quantification of Pulmonary Involvement in COVID-19 Pneumonia by Means of a Cascade of Two U-Nets: Training and
Assessment on Multiple Datasets Using Different Annotation Criteria. Int. J. CARS 2022, 17, 229–237. [CrossRef]
141. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.-W.; Heng, P.A. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor
Segmentation from CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [CrossRef]
142. Khaled, R.; Vidal, J.; Vilanova, J.C.; Martí, R. A U-Net Ensemble for Breast Lesion Segmentation in DCE MRI. Comput. Biol. Med.
2022, 140, 105093. [CrossRef]
143. Moradi, S.; Oghli, M.G.; Alizadehasl, A.; Shiri, I.; Oveisi, N.; Oveisi, M.; Maleki, M.; Dhooge, J. MFP-Unet: A Novel Deep Learning
Based Approach for Left Ventricle Segmentation in Echocardiography. Phys. Med. 2019, 67, 58–69. [CrossRef]
144. Yi, X.; Walia, E.; Babyn, P. Generative Adversarial Network in Medical Imaging: A Review. Med. Image Anal. 2019, 58, 101552.
[CrossRef]
145. Singh, S.P.; Wang, L.; Gupta, S.; Goli, H.; Padmanabhan, P.; Gulyás, B. 3D Deep Learning on Medical Images: A Review. Sensors
2020, 20, 5097. [CrossRef] [PubMed]
146. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from
Sparse Annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Ourselin, S.,
Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 424–432.
147. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial
Nets. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014;
Volume 27.
148. Shavlokhova, V.; Vollmer, A.; Zouboulis, C.C.; Vollmer, M.; Wollborn, J.; Lang, G.; Kübler, A.; Hartmann, S.; Stoll, C.; Roider, E.;
et al. Finetuning of GLIDE Stable Diffusion Model for AI-Based Text-Conditional Image Synthesis of Dermoscopic Images. Front.
Med. 2023, 10, 1231436. [CrossRef] [PubMed]
149. Toda, R.; Teramoto, A.; Tsujimoto, M.; Toyama, H.; Imaizumi, K.; Saito, K.; Fujita, H. Synthetic CT Image Generation of Shape-
Controlled Lung Cancer Using Semi-Conditional InfoGAN and Its Applicability for Type Classification. Int. J. Comput. Assist.
Radiol. Surg. 2021, 16, 241–251. [CrossRef] [PubMed]
150. Chlap, P.; Min, H.; Vandenberg, N.; Dowling, J.; Holloway, L.; Haworth, A. A Review of Medical Image Data Augmentation
Techniques for Deep Learning Applications. J. Med. Imaging Radiat. Oncol. 2021, 65, 545–563. [CrossRef]
151. Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative Adversarial Networks: A Primer for
Radiologists. RadioGraphics 2021, 41, 840–857. [CrossRef]
152. Acosta, J.N.; Falcone, G.J.; Rajpurkar, P.; Topol, E.J. Multimodal Biomedical AI. Nat. Med. 2022, 28, 1773–1784. [CrossRef]
153. Gong, Y.; Shan, H.; Teng, Y.; Tu, N.; Li, M.; Liang, G.; Wang, G.; Wang, S. Parameter-Transferred Wasserstein Generative
Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE Trans. Radiat. Plasma Med. Sci. 2021, 5, 213–223.
[CrossRef]
154. Lee, J. A Review of Deep Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Trans.
Radiat. Plasma Med. Sci. 2020, 5, 160–184. [CrossRef]
155. Liu, M.; Zhu, A.H.; Maiti, P.; Thomopoulos, S.I.; Gadewar, S.; Chai, Y.; Kim, H.; Jahanshad, N.; for the Alzheimer’s Disease
Neuroimaging Initiative. Style Transfer Generative Adversarial Networks to Harmonize Multisite MRI to a Single Reference
Image to Avoid Overcorrection. Hum. Brain Mapp. 2023, 44, 4875–4892. [CrossRef]
156. Eo, T.; Jun, Y.; Kim, T.; Jang, J.; Lee, H.-J.; Hwang, D. KIKI-Net: Cross-Domain Convolutional Neural Networks for Reconstructing
Undersampled Magnetic Resonance Images. Magn. Reson. Med. 2018, 80, 2188–2201. [CrossRef]
157. Zhao, X.; Yang, T.; Li, B. A Review on Generative Based Methods for MRI Reconstruction. J. Phys. Conf. Ser. 2022, 2330, 012002.
[CrossRef]
158. Sloan, J.M.; Goatman, K.A.; Siebert, J.P. Learning Rigid Image Registration—Utilizing Convolutional Neural Networks for
Medical Image Registration. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and
Technologies, Funchal, Madeira, Portugal, 19–21 January 2018; pp. 89–99.
159. Fourcade, C.; Ferrer, L.; Moreau, N.; Santini, G.; Brennan, A.; Rousseau, C.; Lacombe, M.; Fleury, V.; Colombié, M.; Jézéquel, P.;
et al. Deformable Image Registration with Deep Network Priors: A Study on Longitudinal PET Images. Phys. Med. Biol. 2022,
67, 155011. [CrossRef] [PubMed]
160. Avanzo, M.; Barbiero, S.; Trovo, M.; Bissonnette, J.P.; Jena, R.; Stancanello, J.; Pirrone, G.; Matrone, F.; Minatel, E.; Cappelletto, C.;
et al. Voxel-by-Voxel Correlation between Radiologically Radiation Induced Lung Injury and Dose after Image-Guided, Intensity
Modulated Radiotherapy for Lung Tumors. Phys. Med. 2017, 42, 150–156. [CrossRef] [PubMed]
161. Xie, Y.; Takikawa, T.; Saito, S.; Litany, O.; Yan, S.; Khan, N.; Tombari, F.; Tompkin, J.; Sitzmann, V.; Sridhar, S. Neural Fields in
Visual Computing and Beyond. Comput. Graph. Forum 2022, 41, 641–676. [CrossRef]
162. Mao, S.; Sejdic, E. A Review of Recurrent Neural Network-Based Methods in Computational Physiology. IEEE Trans. Neural Netw.
Learn. Syst. 2023, 34, 6983–7003. [CrossRef]
163. Majumdar, A.; Gupta, M. Recurrent Transfer Learning. Neural Netw. 2019, 118, 271–279. [CrossRef]
164. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef]
165. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
[CrossRef]
166. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural
Netw. Learn. Syst. 2017, 28, 2222–2232. [CrossRef]
167. Steinkamp, J.; Cook, T.S. Basic Artificial Intelligence Techniques: Natural Language Processing of Radiology Reports. Radiol. Clin.
N. Am. 2021, 59, 919–931. [CrossRef]
168. Kreimeyer, K.; Foster, M.; Pandey, A.; Arya, N.; Halford, G.; Jones, S.F.; Forshee, R.; Walderhaug, M.; Botsis, T. Natural Language
Processing Systems for Capturing and Standardizing Unstructured Clinical Information: A Systematic Review. J. Biomed. Inform.
2017, 73, 14–29. [CrossRef] [PubMed]
169. Gultepe, E.; Green, J.P.; Nguyen, H.; Adams, J.; Albertson, T.; Tagkopoulos, I. From Vital Signs to Clinical Outcomes for Patients
with Sepsis: A Machine Learning Basis for a Clinical Decision Support System. J. Am. Med. Inform. Assoc. 2013, 21, 315–325.
[CrossRef] [PubMed]
170. Ravuri, M.; Kannan, A.; Tso, G.J.; Amatriain, X. Learning from the Experts: From Expert Systems to Machine-Learned Diagnosis
Models. In Proceedings of the 3rd Machine Learning for Healthcare Conference, Palo Alto, CA, USA, 17–18 August 2018;
pp. 227–243.
171. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need.
arXiv 2023, arXiv:1706.03762v7. [CrossRef]
172. Bajaj, S.; Gandhi, D.; Nayar, D. Potential Applications and Impact of ChatGPT in Radiology. Acad. Radiol. 2023, 31, 1256–1261.
[CrossRef]
173. Langlotz, C.P. The Future of AI and Informatics in Radiology: 10 Predictions. Radiology 2023, 309, e231114. [CrossRef]
174. Huang, J.; Neill, L.; Wittbrodt, M.; Melnick, D.; Klug, M.; Thompson, M.; Bailitz, J.; Loftus, T.; Malik, S.; Phull, A.; et al. Generative
Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department. JAMA Netw. Open 2023, 6, e2336100.
[CrossRef]
175. Ismail, A.; Ghorashi, N.S.; Javan, R. New Horizons: The Potential Role of OpenAI’s ChatGPT in Clinical Radiology. J. Am. Coll.
Radiol. 2023, 20, 696–698. [CrossRef]
176. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.;
Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929.
[CrossRef]
177. Akbari, H.; Yuan, L.; Qian, R.; Chuang, W.-H.; Chang, S.-F.; Cui, Y.; Gong, B. VATT: Transformers for Multimodal Self-Supervised
Learning from Raw Video, Audio and Text. arXiv 2021, arXiv:2104.11178. [CrossRef]
178. Oren, O.; Gersh, B.J.; Bhatt, D.L. Artificial Intelligence in Medical Imaging: Switching from Radiographic Pathological Data to
Clinically Meaningful Endpoints. Lancet Digit. Health 2020, 2, e486–e488. [CrossRef]
179. Hafezi-Nejad, N.; Trivedi, P. Foundation AI Models and Data Extraction from Unlabeled Radiology Reports: Navigating
Uncharted Territory. Radiology 2023, 308, e232308. [CrossRef] [PubMed]
180. Moor, M.; Banerjee, O.; Abad, Z.S.H.; Krumholz, H.M.; Leskovec, J.; Topol, E.J.; Rajpurkar, P. Foundation Models for Generalist
Medical Artificial Intelligence. Nature 2023, 616, 259–265. [CrossRef] [PubMed]
181. Bluethgen, C.; Chambon, P.; Delbrouck, J.-B.; van der Sluijs, R.; Połacin, M.; Zambrano Chaves, J.M.; Abraham, T.M.; Purohit, S.;
Langlotz, C.P.; Chaudhari, A.S. A Vision-Language Foundation Model for the Generation of Realistic Chest X-Ray Images. Nat.
Biomed. Eng. 2024. [CrossRef] [PubMed]
182. Chen, R.J.; Ding, T.; Lu, M.Y.; Williamson, D.F.K.; Jaume, G.; Song, A.H.; Chen, B.; Zhang, A.; Shao, D.; Shaban, M.; et al. Towards
a General-Purpose Foundation Model for Computational Pathology. Nat. Med. 2024, 30, 850–862. [CrossRef]
183. Fink, M.A.; Bischoff, A.; Fink, C.A.; Moll, M.; Kroschke, J.; Dulz, L.; Heußel, C.P.; Kauczor, H.-U.; Weber, T.F. Potential of ChatGPT
and GPT-4 for Data Mining of Free-Text CT Reports on Lung Cancer. Radiology 2023, 308, e231362. [CrossRef]
184. Schäfer, R.; Nicke, T.; Höfener, H.; Lange, A.; Merhof, D.; Feuerhake, F.; Schulz, V.; Lotz, J.; Kiessling, F. Overcoming Data Scarcity
in Biomedical Imaging with a Foundational Multi-Task Model. Nat. Comput. Sci. 2024, 4, 495–509. [CrossRef]
185. Chen, X.; Wang, X.; Zhang, K.; Fung, K.-M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent Advances and
Clinical Applications of Deep Learning in Medical Image Analysis. Med. Image Anal. 2022, 79, 102444. [CrossRef]
186. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al.
Mastering the Game of Go without Human Knowledge. Nature 2017, 550, 354–359. [CrossRef]
187. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A Guide to
Deep Learning in Healthcare. Nat. Med. 2019, 25, 24–29. [CrossRef]
188. Antropova, N.; Huynh, B.Q.; Giger, M.L. A Deep Feature Fusion Methodology for Breast Cancer Diagnosis Demonstrated on
Three Imaging Modality Datasets. Med. Phys. 2017, 44, 5162–5171. [CrossRef]
189. Terpstra, M.L.; Maspero, M.; D’Agata, F.; Stemkens, B.; Intven, M.P.W.; Lagendijk, J.J.W.; Van den Berg, C.A.T.; Tijssen, R.H.N. Deep
Learning-Based Image Reconstruction and Motion Estimation from Undersampled Radial k-Space for Real-Time MRI-Guided
Radiotherapy. Phys. Med. Biol. 2020, 65, 155015. [CrossRef] [PubMed]
190. Furlong, J.W.; Dupuy, M.E.; Heinsimer, J.A. Neural Network Analysis of Serial Cardiac Enzyme Data: A Clinical Application of
Artificial Machine Intelligence. Am. J. Clin. Pathol. 1991, 96, 134–141. [CrossRef]
191. Baxt, W.G. Use of an Artificial Neural Network for the Diagnosis of Myocardial Infarction. Ann. Intern. Med. 1991, 115, 843–848.
[CrossRef] [PubMed]
192. Gross, G.W.; Boone, J.M.; Greco-Hunt, V.; Greenberg, B. Neural Networks in Radiologic Diagnosis. II. Interpretation of Neonatal
Chest Radiographs. Investig. Radiol. 1990, 25, 1017–1023. [CrossRef] [PubMed]
193. Romagnoni, A.; Jégou, S.; Van Steen, K.; Wainrib, G.; Hugot, J.-P. Comparative Performances of Machine Learning Methods for
Classifying Crohn Disease Patients Using Genome-Wide Genotyping Data. Sci. Rep. 2019, 9, 10351. [CrossRef] [PubMed]
194. Vântu, A.; Vasilescu, A.; Băicoianu, A. Medical Emergency Department Triage Data Processing Using a Machine-Learning
Solution. Heliyon 2023, 9, e18402. [CrossRef]
195. Momenzadeh, M.; Vard, A.; Talebi, A.; Mehri Dehnavi, A.; Rabbani, H. Computer-Aided Diagnosis Software for Vulvovaginal
Candidiasis Detection from Pap Smear Images. Microsc. Res. Tech. 2018, 81, 13–21. [CrossRef]
196. Girdhar, R.; El-Nouby, A.; Liu, Z.; Singh, M.; Alwala, K.V.; Joulin, A.; Misra, I. ImageBind: One Embedding Space to Bind Them
All. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC,
Canada, 18–22 June 2023; pp. 15180–15190.
197. Choi, K.-H.; Ha, J.-E. Semantic Segmentation with Perceiver IO. In Proceedings of the 2022 22nd International Conference on
Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; pp. 1607–1610.
198. Mancosu, P.; Lambri, N.; Castiglioni, I.; Dei, D.; Iori, M.; Loiacono, D.; Russo, S.; Talamonti, C.; Villaggi, E.; Scorsetti, M.
Applications of Artificial Intelligence in Stereotactic Body Radiation Therapy. Phys. Med. Biol. 2022, 67. [CrossRef]
199. Robust Machine Learning Challenge: An AIFM Multicentric Competition to Spread Knowledge, Identify Common Pitfalls and
Recommend Best Practice. Phys. Med. 2024, 127, 104834. [CrossRef]
200. Egger, J.; Gsaxner, C.; Pepe, A.; Pomykala, K.L.; Jonske, F.; Kurz, M.; Li, J.; Kleesiek, J. Medical Deep Learning—A Systematic
Meta-Review. Comput. Methods Programs Biomed. 2022, 221, 106874. [CrossRef]
201. Stollmayer, R.; Budai, B.K.; Rónaszéki, A.; Zsombor, Z.; Kalina, I.; Hartmann, E.; Tóth, G.; Szoldán, P.; Bérczi, V.; Maurovich-
Horvat, P.; et al. Focal Liver Lesion MRI Feature Identification Using Efficientnet and MONAI: A Feasibility Study. Cells 2022, 11,
1558. [CrossRef] [PubMed]
202. Gillot, M.; Baquero, B.; Le, C.; Deleat-Besson, R.; Bianchi, J.; Ruellas, A.; Gurgel, M.; Yatabe, M.; Turkestani, N.A.; Najarian, K.;
et al. Automatic Multi-Anatomical Skull Structure Segmentation of Cone-Beam Computed Tomography Scans Using 3D UNETR.
PLoS ONE 2022, 17, e0275033. [CrossRef] [PubMed]
203. Termine, A.; Fabrizio, C.; Caltagirone, C.; Petrosini, L.; on behalf of the Frontotemporal Lobar Degeneration Neuroimaging
Initiative. A Reproducible Deep-Learning-Based Computer-Aided Diagnosis Tool for Frontotemporal Dementia Using MONAI
and Clinica Frameworks. Life 2022, 12, 947. [CrossRef] [PubMed]
204. Vallieres, M.; Zwanenburg, A.; Badic, B.; Cheze Le Rest, C.; Visvikis, D.; Hatt, M. Responsible Radiomics Research for Faster
Clinical Translation. J. Nucl. Med. 2018, 59, 189–193. [CrossRef]
205. Zhang, S.; Liu, R.; Wang, Y.; Zhang, Y.; Li, M.; Wang, Y.; Wang, S.; Ma, N.; Ren, J. Ultrasound-Based Radiomics for Discerning
Lymph Node Metastasis in Thyroid Cancer: A Systematic Review and Meta-Analysis. Acad. Radiol. 2024, 31, 3118–3130.
[CrossRef]
206. Kocak, B.; Baessler, B.; Bakas, S.; Cuocolo, R.; Fedorov, A.; Maier-Hein, L.; Mercaldo, N.; Müller, H.; Orlhac, F.; Pinto Dos Santos, D.;
Stanzione, A.; et al. CheckList for EvaluAtion of Radiomics Research (CLEAR): A Step-by-Step Reporting Guideline for Authors
and Reviewers Endorsed by ESR and EuSoMII. Insights Imaging 2023, 14, 75. [CrossRef]
207. Acharya, U.R.; Hagiwara, Y.; Sudarshan, V.K.; Chan, W.Y.; Ng, K.H. Towards Precision Medicine: From Quantitative Imaging to
Radiomics. J. Zhejiang Univ. Sci. B 2018, 19, 6–24. [CrossRef]
208. Park, C.J.; Park, Y.W.; Ahn, S.S.; Kim, D.; Kim, E.H.; Kang, S.-G.; Chang, J.H.; Kim, S.H.; Lee, S.-K. Quality of Radiomics Research
on Brain Metastasis: A Roadmap to Promote Clinical Translation. Korean J. Radiol. 2022, 23, 77–88. [CrossRef]
209. Avery, E.W.; Behland, J.; Mak, A.; Haider, S.P.; Zeevi, T.; Sanelli, P.C.; Filippi, C.G.; Malhotra, A.; Matouk, C.C.; Griessenauer, C.J.;
et al. Dataset on Acute Stroke Risk Stratification from CT Angiographic Radiomics. Data Brief. 2022, 44, 108542. [CrossRef]
210. Prior, F.W.; Clark, K.; Commean, P.; Freymann, J.; Jaffe, C.; Kirby, J.; Moore, S.; Smith, K.; Tarbox, L.; Vendt, B.; et al. TCIA: An
Information Resource to Enable Open Science. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 2013, 1282–1285. [CrossRef]
211. van Leeuwen, K.G.; Schalekamp, S.; Rutten, M.J.C.M.; van Ginneken, B.; de Rooij, M. Artificial Intelligence in Radiology: 100
Commercially Available Products and Their Scientific Evidence. Eur. Radiol. 2021, 31, 3797–3804. [CrossRef] [PubMed]
212. Park, S.H.; Han, K.; Jang, H.Y.; Park, J.E.; Lee, J.-G.; Kim, D.W.; Choi, J. Methods for Clinical Evaluation of Artificial Intelligence
Algorithms for Medical Diagnosis. Radiology 2023, 306, 20–31. [CrossRef] [PubMed]
213. Wu, E.; Wu, K.; Daneshjou, R.; Ouyang, D.; Ho, D.E.; Zou, J. How Medical AI Devices Are Evaluated: Limitations and
Recommendations from an Analysis of FDA Approvals. Nat. Med. 2021, 27, 582–584. [CrossRef] [PubMed]
214. Han, R.; Acosta, J.N.; Shakeri, Z.; Ioannidis, J.P.A.; Topol, E.J.; Rajpurkar, P. Randomised Controlled Trials Evaluating Artificial
Intelligence in Clinical Practice: A Scoping Review. Lancet Digit. Health 2024, 6, e367–e373. [CrossRef] [PubMed]
215. Fazal, M.I.; Patel, M.E.; Tye, J.; Gupta, Y. The Past, Present and Future Role of Artificial Intelligence in Imaging. Eur. J. Radiol.
2018, 105, 246–250. [CrossRef]
216. Neri, E.; Aghakhanyan, G.; Zerunian, M.; Gandolfo, N.; Grassi, R.; Miele, V.; Giovagnoni, A.; Laghi, A.; SIRM expert group on
Artificial Intelligence. Explainable AI in Radiology: A White Paper of the Italian Society of Medical and Interventional Radiology.
Radiol. Med. 2023, 128, 755–764. [CrossRef]
217. Avanzo, M.; Pirrone, G.; Vinante, L.; Caroli, A.; Stancanello, J.; Drigo, A.; Massarut, S.; Mileto, M.; Urbani, M.; Trovo, M.;
et al. Electron Density and Biologically Effective Dose (BED) Radiomics-Based Machine Learning Models to Predict Late
Radiation-Induced Subcutaneous Fibrosis. Front. Oncol. 2020, 10, 490. [CrossRef]
218. Murdoch, W.J.; Singh, C.; Kumbier, K.; Abbasi-Asl, R.; Yu, B. Definitions, Methods, and Applications in Interpretable Machine
Learning. Proc. Natl. Acad. Sci. USA 2019, 116, 22071–22080. [CrossRef]
219. Neves, J.; Hsieh, C.; Nobre, I.B.; Sousa, S.C.; Ouyang, C.; Maciel, A.; Duchowski, A.; Jorge, J.; Moreira, C. Shedding Light on AI in
Radiology: A Systematic Review and Taxonomy of Eye Gaze-Driven Interpretability in Deep Learning. Eur. J. Radiol. 2024, 172,
111341. [CrossRef]
220. Champendal, M.; Müller, H.; Prior, J.O.; dos Reis, C.S. A Scoping Review of Interpretability and Explainability Concerning
Artificial Intelligence Methods in Medical Imaging. Eur. J. Radiol. 2023, 169, 111159. [CrossRef]
221. Ricci Lara, M.A.; Echeveste, R.; Ferrante, E. Addressing Fairness in Artificial Intelligence for Medical Imaging. Nat. Commun.
2022, 13, 4581. [CrossRef] [PubMed]
222. Burlina, P.; Joshi, N.; Paul, W.; Pacheco, K.D.; Bressler, N.M. Addressing Artificial Intelligence Bias in Retinal Diagnostics. Transl.
Vis. Sci. Technol. 2021, 10, 13. [CrossRef] [PubMed]
223. Mahmood, U.; Shukla-Dave, A.; Chan, H.P.; Drukker, K.; Samala, R.K.; Chen, Q.; Vergara, D.; Greenspan, H.; Petrick, N.; Sahiner,
B.; et al. Artificial Intelligence in Medicine: Mitigating Risks and Maximizing Benefits via Quality Assurance, Quality Control,
and Acceptance Testing. BJR|Artif. Intell. 2024, 1, ubae003. [CrossRef] [PubMed]
224. Kelly, B.S.; Quinn, C.; Belton, N.; Lawlor, A.; Killeen, R.P.; Burrell, J. Cybersecurity Considerations for Radiology Departments
Involved with Artificial Intelligence. Eur. Radiol. 2023, 33, 8833–8841. [CrossRef] [PubMed]
225. Mahadevaiah, G.; Rv, P.; Bermejo, I.; Jaffray, D.; Dekker, A.; Wee, L. Artificial Intelligence-Based Clinical Decision Support
in Modern Medical Physics: Selection, Acceptance, Commissioning, and Quality Assurance. Med. Phys. 2020, 47, e228–e235.
[CrossRef]
226. COCIR. COCIR Analysis on AI in Medical Device Legislation—May 2021. Available online: https://www.cocir.org/latest-news/publications/article/cocir-analysis-on-ai-in-medical-device-legislation-may-2021 (accessed on 9 April 2024).
227. Ebers, M.; Hoch, V.R.S.; Rosenkranz, F.; Ruschemeier, H.; Steinrötter, B. The European Commission’s Proposal for an Artificial
Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS). J 2021, 4, 589–603. [CrossRef]