Cognitive Informatics in Biomedicine and Healthcare

Trevor A. Cohen
Vimla L. Patel
Edward H. Shortliffe
Editors

Intelligent Systems in Medicine and Health: The Role of AI

Cognitive Informatics in Biomedicine and Healthcare

Series Editor
Vimla L. Patel, Center for Cognitive Studies in Medicine and Public Health,
New York Academy of Medicine, Suite 454, New York, NY, USA
Enormous advances in information technology have permeated essentially all facets
of life. Although these technologies are transforming the workplace as well as
leisure time, formidable challenges remain in fostering tools that enhance
productivity, are sensitive to work practices, and are intuitive to learn and to use
effectively. Informatics is a discipline concerned with applied and basic science of
information, the practices involved in information processing, and the engineering
of information systems.
Cognitive Informatics (CI), a term that has been adopted and applied particularly
in the fields of biomedicine and health care, is the multidisciplinary study of cogni-
tion, information, and computational sciences. It investigates all facets of computer
applications in biomedicine and health care, including system design and computer-
mediated intelligent action. The basic scientific discipline of CI is strongly grounded
in methods and theories derived from cognitive science. The discipline provides a
framework for the analysis and modeling of complex human performance in tech-
nology-mediated settings and contributes to the design and development of better
information systems for biomedicine and health care.
Despite the significant growth of this discipline, there have been few systematic
published volumes for reference or instruction, intended for working professionals,
scientists, or graduate students in cognitive science and biomedical informatics,
beyond those published in this series. Although information technologies are now in
widespread use globally for promoting increased self-reliance in patients, there is
often a disparity between the scientific and technological knowledge underlying
healthcare practices and the lay beliefs, mental models, and cognitive representa-
tions of illness and disease. The topics covered in this book series address the key
research gaps in biomedical informatics related to the applicability of theories,
models, and evaluation frameworks of HCI and human factors as they apply to clini-
cians as well as to the lay public.
Trevor A. Cohen • Vimla L. Patel • Edward H. Shortliffe
Editors

Intelligent Systems in Medicine and Health: The Role of AI

Editors

Trevor A. Cohen
University of Washington
Seattle, WA, USA

Vimla L. Patel
New York Academy of Medicine
New York, NY, USA

Edward H. Shortliffe
Columbia University
New York, NY, USA
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
We wish to dedicate this volume to three
giants of science who greatly inspired and
influenced us while helping to lay the
groundwork for artificial intelligence in
medicine, even though each of them made
their primary contributions in other fields.
Furthermore, each has been a personal
friend of at least one of us and we look back
fondly on their humanity and their
contributions.
Herbert Simon (Carnegie Mellon University)
won the Nobel Prize in Economics (1978) for
his pioneering research into the decision-
making process within economic
organizations. He is also known as an early
innovator in the field of artificial intelligence,
whose long partnership with Allen Newell
led to early work on the Logic Theory
Machine and General Problem Solver. As a
psychologist who studied human and
machine problem solving, he pioneered
notions of bounded rationality and
satisficing.
Joshua Lederberg (Stanford and Rockefeller
Universities) won the Nobel Prize in
Physiology or Medicine (1958) for his
discoveries of genetic transfer in bacteria. A
computer scientist as well as a geneticist, he
was instrumental in devising the notion of
capturing expert knowledge in computers.
His Dendral Project led to a body of work
that dealt with the interpretation of mass
spectral data to identify organic compounds,
and he created the first national resource to
support research on artificial intelligence in
medicine (SUMEX-AIM).
Foreword
1. Polya G. How to Solve It: A New Aspect of Mathematical Method. Princeton, NJ: Princeton University Press, 2014 (originally published 1945).
2. Two memorable examples of identifying key subproblems are cited in McCullough's book on building the Panama Canal [McCullough, D. The Path Between the Seas: The Creation of the Panama Canal, 1870–1914. New York: Simon & Schuster, 1978]. First, after several years of failure, a new engineer put in charge of the project rethought the steps of digging a huge trench and recognized that the most rate-limiting step was getting rid of the dirt. Second, the same engineer recognized another rate-limiting step was keeping workers healthy. Neither was "the problem" per se; solving both subproblems made the difference.
and learning—to work smarter, not harder. Acquiring measurements is one thing,
interpreting the data is another. Importantly, early AI research introduced a concep-
tual framework that includes two critical differences from mathematical computa-
tion: symbolic reasoning and heuristic reasoning. Reasoning with symbols, as
opposed to numeric quantities, carries with it the essential link human beings made
when they invented language between words and the world, i.e., semantics. Heuristic
reasoning is even older than humankind, being bound up with the Darwinian notion
of survival. A decision to fight or flee does not leave time to consider the conse-
quences of all possible options,3 and most decisions in everyday life have no algo-
rithm that guarantees a correct answer in a short time (or often not in any amount
of time).
The importance of human health quickly attracted early pioneers to the possibil-
ity of using computers to assist physicians, nurses, pharmacists, and other health-
care professionals.
The initial applications of AI limited their scope so they could be dealt with suc-
cessfully in single demonstration projects on the small computers of the day.
Cognitive psychologists recognized that clinicians reason about the available data
in medicine just as they do in other fields like chess. The process of diagnostic rea-
soning became the focus of work on expert systems, with early programs becoming
convincing examples that AI could replicate human reasoning in this area on nar-
rowly defined problems.
An important part of this demonstration was with Bayesian programs that used
statistics to reason in a mathematically rigorous way for clinical decision support.
However, those initial applications of Bayes’ Theorem to medicine were also lim-
ited until means were found to reduce their complexity with heuristic reasoning.
Another important set of demonstrations encoded knowledge accumulated by
human experts, along the lines suggested by cognitive science. One of the short-
comings of the expert systems approach, however, is the time and effort it takes to
acquire the knowledge from experts in the first place, and to maintain a large knowl-
edge base thereafter.
In practice, medical professionals are faced with several inconvenient truths,
which further complicate efforts to use computers in health care. The chapters of
this book address many of them. For example:
• The body of knowledge in, and relevant to, medicine is growing rapidly. E.g.,
diagnosis and treatment options for genetic diseases in the last few decades, and
of viral infections in the last few years, have come into the mainstream.
• Complete paradigm shifts in medicine require rethinking whole areas of medi-
cine. E.g., prion diseases have forced a reconceptualization of the mechanisms of
pathogen transmission.
3. Herb Simon, one of the founding fathers of AI, received a Nobel Prize in Economics in 1978 for elucidation of this concept in the world of decision making. His term for it was "Bounded Rationality." See https://www.cmu.edu/simon/what-is-simon/herbert-a-simon.html (accessed August 11, 2022).
• Medical knowledge is incomplete and there are no good treatment options, and
sometimes no good diagnostic criteria, for some conditions. E.g., Parkinson’s
disease can be managed, but is still not curable after decades of research.
• Information about an individual patient is often erroneous and almost always
incomplete. E.g., false positive and false negative test results are expected for
almost every diagnostic test.
• Patients’ medical problems exist within a larger emotional, social, economic,
and cultural context. E.g., the most effective treatment options may be unafford-
able or unacceptable to an individual.
• Professionals are expected to learn from their own, and others’, experience (both
positive and negative). E.g., continuing to recommend a failed treatment modal-
ity is reason for censure.
• Professionals at the level of recognized specialists are expected to deal with
unique cases for which there are neither case studies nor established diagnostic
or treatment wisdom. E.g., primary care providers refer recalcitrant cases to spe-
cialists for just these reasons.
• Communication between patients and professionals is imperfect. E.g., language
is full of ambiguity, and we all have biases in what we want to hear or fear most.
Collectively, these issues are more than merely “inconvenient,” but are humbling
reminders that “the problem” of providing health care is overwhelming in the large.
They also represent significant barriers to harnessing the presently available power
of computers to actual healthcare delivery. The perspectives offered in this book
summarize current approaches to these issues and highlight work that remains to be
done. As such it is valuable as a textbook for biomedical informatics and as a road-
map for the possibilities of using AI for the benefit of humankind.
The book’s emphasis on reasoning provides a central focus not found in other
collections. The chapters here deal with transforming data about patients, once
acquired, to actionable information and using that information in clinical contexts.
With today’s understanding, AI offers the means to augment human intelligence by
making the accumulated knowledge available, suggesting possible options, and
considering consequences. We are betting that computers can help to overcome
human limitations of imperfect memory, reasoning biases, and sheer physical stam-
ina. We are betting on the power of knowledge over the persistent forces of random
mutations. Most of all we are betting that the synergy of human and computer intel-
ligence will succeed in the noble quest of improving the quality of human lives.

Preface

Recent advances in computing power and the availability of large amounts of train-
ing data have spurred tremendous advances in the accuracy with which computers
can perform tasks that were once considered the exclusive province of human intel-
ligence. These include perceptual tasks related to medical diagnosis, where deep
neural networks have attained expert-level performance for the diagnosis of diabetic
retinopathy and the identification of biopsy-proven cases of skin cancer from der-
matological images. These accomplishments reflect an increase in activity in
Artificial Intelligence (AI), both in academia and industry. According to the 2021 AI
Index report assembled and published at Stanford University, there was a close to
twelvefold increase in the total number of annual peer-reviewed AI publications
between 2000 and 2019, and a close to fivefold increase in annual private invest-
ment in funded AI companies between 2015 and 2020, with commercial applica-
tions of AI technologies such as speech-based digital assistants and personalized
advertising and newsfeeds by now woven deeply into the threads of our everyday
lives. These broad developments throughout society have led to a resurgence of
public interest in the role of AI in medicine1 (AIM), reviving long-standing debates
about the nature of intelligence, the relative value of data-driven predictive models
and human decision makers, and the potential for technology to enhance patient
safety and to disseminate expertise broadly.
Consequently AIM is in the news, with frequent and often thoughtful accounts of
the ways in which it might influence—and hopefully improve—the practice of med-
icine appearing in high visibility media venues such as The New York Times [1, 2],
The Atlantic [3], The New Yorker [4], The Economist [5], and others [6, 7]. As bio-
medical informaticians with long-standing interests in AIM, we are encouraged to
see this level of attention and investment in the area. However, we are also aware
1. We consider the scope of AIM to include public health and clinically driven basic science research, as well as clinical practice.
that this is by no means the first time that the promise of AIM has emerged as a
focus of media attention. For example, a 1977 New York Times article [8] describes
the MYCIN system (discussed in Chap. 2), noting the ability of this system to make
medical diagnoses, to request missing information, and to explain how it reaches
conclusions. The article also mentions the extent of government funding for AI
research at the time at $5 million a year, which when adjusted for inflation would be
around $23 million annually. The question arises as to what has changed between
then and now, and how these changes might affect the ongoing and future prospects
for AI technologies to improve medical care. Some answers to this question are
evident in this 1977 article, which is critical of the potential of the field to accom-
plish its goals of championship-level chess performance and machine translation
(“neither has been accomplished successfully, and neither is likely to be any time
soon”) and face recognition (“they cannot begin to distinguish one face from another
as babies can”). Today all of these tasks are well within the capabilities of contem-
porary AI systems, which is one indication of the methodological advances that
have been made over the intervening four decades. However, translating methods
with strong performance on tightly constrained tasks to applications with positive
impact in health care is not (and has never been) a straightforward endeavor, on
account of the inherent complexities of the healthcare system and the prominent
role of uncertain and temporarily unavailable information in medical decision mak-
ing, among other factors. While AI technologies do have the potential to transform
the practice of medicine, computer programs demonstrating expert-level perfor-
mance in diagnostic tasks have existed for decades, but significant challenges to
realizing the potential value of AI in health care—such as how such AI systems
might best be integrated into clinical practice—remain unresolved.
It is our view that the discipline of Cognitive Informatics (CI) [9], which brings
the perspective of the cognitive sciences to the study of medical decision making by
human beings and machines, is uniquely positioned to address many of these chal-
lenges. Through its roots in the study of medical reasoning, CI provides a sound
scientific basis from which to consider the relationship between current technolo-
gies and human intelligence. CI has extended its area of inquiry to include both
human-computer interaction and the study of the effects of technology on the flow
of work and information in clinical settings [10]. Accordingly, CI can inform the
integration of AI systems into clinical practice. In recent years, patient safety has
emerged as a research focus of the CI community, providing new insights into the
ways in which technology might mitigate or, despite best intentions, facilitate medi-
cal error. Consequently, a volume describing approaches to AIM from a CI perspec-
tive seemed like an excellent fit for the Springer Nature Cognitive Informatics book
series led by one of us (VLP). We fondly recall that this volume is a project that we
first discussed several years ago over a bottle of Merlot in proximity to a conference
in Washington, DC.
However, as our discussions developed over the course of subsequent meetings,
it became apparent that there was a need for a more comprehensive account of the
field. As educators, we considered the knowledge and skills that future researchers
and practitioners in the field might need in order to realize the transformative
potential of AI methods for the practice of health care. We were aware of books on
the subject, such as cardiologist Eric Topol’s cogent account of the implications of
contemporary machine learning approaches for the practice of medicine [11], and
the development of pediatric cardiologist Anthony C. Chang’s excellent clinician-
oriented introduction to AI methods with a focus on their practical application to
problems in health care and medicine [12].2 Books drawing together the perspec-
tives of multiple authors were also available, including a compendium of chapters
in which authors focus on their research interests within the field [13], and an
account of the organizational implications of “big data” and predictive models that
can be derived from such large collections of information [14]. However, none of
the books we encountered was developed with the focused intention to provide a
basis for curricular development in AIM, and we were hearing an increasing demand
for undergraduate, graduate, and postgraduate education from our own trainees—
students and aspiring physician-informaticians.
Consequently, we pivoted from our original goal of a volume primarily con-
cerned with highlighting the role of CI in AIM, to the goal of developing the first
comprehensive coauthored textbook in the AIM field, still with a CI emphasis. We
reached out to our friends and colleagues, prominent researchers with deep exper-
tise in the application of AI methods to clinical problems, several of whom have
been engaged with AIM since the inception of the field. This was a deliberate deci-
sion on our part, as we felt that in addition to lacking a cognitive perspective (despite
the emergence of the term “cognitive computing” as a catch-all for AI methods),
much work we encountered was presented without apparent consideration of the
history of the field. Our concern with this disconnect was not only a matter of the
academic impropriety of failure to acknowledge prior work. We were also con-
cerned that work conducted from this perspective would not be informed by the
many lessons learned from decades of work and careful consideration of the issues
involved in implementing AI at the point of care. Thus, in developing our ideas for
the structure of this volume, and in our selection of chapter authors, we endeavored
to make sure that presentations of current methods and applications were contextu-
alized in relation to the history of the field, and informed by a CI perspective.
The result of these efforts is the current volume, a comprehensive textbook that
takes stock of the current state of the art of AIM, places current developments in
their historical context, and identifies cognitive and systemic factors that may help
or hinder their ability to improve patient care. It is our intention that a reader of this
2. A trained data scientist, Dr. Chang also contributed a chapter on the future of medical AI and data science in this volume.
volume will attain an accurate picture of the strengths and limitations of these
emerging technologies, emphasizing how they relate to the AI systems that pre-
ceded them, to the intelligence of human decision makers in medicine, and to the
needs and expectations of those who use the resulting tools and systems. This will
lay a foundation for an informed discussion of the potential of such technology to
enhance patient care, the obstacles that must be overcome for this to take place, and
the ways in which emerging and as-yet-undeveloped technologies may transform
the practice of health care.
With increasing interest and investment in AIM technologies will come the rec-
ognition of the need for professionals with the prerequisite expertise to see these
technologies through to the point of positive impact. Progress toward this goal will
require both advancement of the state of the science through scholarly research and
measurably successful deployment of AIM systems in clinical practice settings. As
the first comprehensive coauthored textbook in the field of AIM, this volume aims
to define and aggregate the knowledge that researchers and practitioners in the field
will require to advance it. As such, it draws together a range of expert perspectives
to provide a holistic picture of the current state of the field, to identify opportunities
for further research and development, and to provide guidance to inform the suc-
cessful integration of AIM into clinical and public health practice.
We intend to provide a sound basis for a seminar series or a university level
course on AIM. To this end, authors have been made aware of the context of their
chapters within the logical flow of the entire volume. We have sought to assure
coordination among authors to facilitate cross-references between chapters, and to
minimize either coverage gaps or redundancies. Furthermore, all chapters have fol-
lowed the same basic organizational structure, which includes explicit learning
objectives, questions for self-study, and annotated suggestions for further reading.
Chapters have been written for an intended audience of students in biomedical
informatics, AI, machine learning, cognitive engineering, and clinical decision sup-
port. We also offer the book to established researchers and practitioners of these
disciplines, as well as those in medicine, public health, and other health professions,
who would like to learn more about the potential for these emerging technologies to
transform their fields.
The book is divided into four parts. They are designed to emphasize pertinent con-
cepts rather than technical detail. There are other excellent sources for exploring the
technical details of the topics we introduce. The Introduction provides readers with
an overview of the field. Chapter 1 provides an introduction to the fields of Artificial
Intelligence and Cognitive Informatics and describes how they relate to one another.
Chapter 2 provides a historical perspective, drawing attention to recurring themes,
issues, and approaches that emerged during the course of the development of early
AI systems, most of which remain highly relevant today. Chapter 3 provides an
education, including the roles of cognitive and learning sciences in informing how
clinicians-in-training should be educated about AIM, and how AI might support the
education of clinicians in their chosen fields.
The final part in the volume considers the road ahead for AIM. It addresses
issues that are likely to be of importance for the successful progression of the field,
including two potential stumbling blocks: inadequate evaluation, and failure to con-
sider the ethical issues that may accompany the deployment of AI systems in health-
care settings. Accordingly, the first chapter in this part, Chap. 17, is focused on
evaluation of AIM systems, including enabling capabilities such as usability and the
need to move beyond “in situ” evaluations of accuracy toward demonstrations of
acceptance and clinical utility in the natural world. Chapter 18 concerns the need for
a robust ethical framework to address issues proactively such as algorithmic bias,
and exacerbation of healthcare inequities due to limited portability of methods and
algorithms. Chapter 19 projects from the trajectory of current trends to anticipate
the future of AI in medicine with an emphasis on data science, and how broader
deployment of AI systems may affect the practice of medicine. Finally, Chap. 20
provides a summary and synthesis of the volume, including the editors’ perspectives
on the prospects and challenges for the field. The final part is then followed by a
detailed glossary that provides definitions of all terms displayed in bold throughout
the body of the book (with an indication of the chapter(s) in which each term was
used). The book closes with a subject index for the entire volume.
This book is written as a textbook, such that it can be used in formal courses or
seminars. For this purpose, we would anticipate curricular design to follow the over-
all structure of the book, with a logical progression from introduction through
approaches to applications and projections for the future. For example, this structure
could support an undergraduate or graduate level course in a Computer Science,
Biomedical Informatics, or Cognitive Engineering program that aims to provide
students with a comprehensive survey of current applications and concerns in the
field. At the graduate level, this could be coupled to a student-led research project.
Alternatively, one might imagine an MS level course that aims to provide clinical
practitioners seeking additional training in clinical informatics with the knowledge
they will need to be informed users of AIM systems, in which case content could be
drawn from the book selectively with an emphasis on introductory content, and
clinical applications and issues that relate to them directly (Chaps. 1–3, 10–12, 15,
and 17–19). Of course, the book may be used for self-study and reference, and read-
ers may wish to explore particular topics in greater detail—starting with a particular
chapter (say, machine learning methods) and then exploring the cross-references in
this chapter to find out more about how this topic features in the context of particu-
lar applications, or issues that are anticipated to emerge as the field progresses.
This is an exciting time to be working in the field of AIM, and an ideal time to
enter it. There is increasing support for AIM work, both through federal funding
initiatives such as the NIH-wide Bridge to Artificial Intelligence program3 and in
light of an acceleration in private investment in digital health technologies. Such
support has been stimulated in part by the field’s demonstrated utility and accep-
tance as a way to diagnose disease, to deliver care, and to support public health
efforts during the COVID-19 pandemic. On account of the pervasiveness of AI tech-
nologies across industries outside of health care, skepticism about the ability of
these technologies to deliver meaningful improvements is balanced by enthusiasm
for their potential to improve the practice of medicine. It is our goal that readers of
this volume will emerge equipped with the knowledge needed to realize this poten-
tial and to proceed to lead the advancement of health care through AIM.
References
1. Metz C. A.I. shows promise assisting physicians. The New York Times. 2019 [cited 2021 Apr 15]. https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-diagnosis.html.
2. O'Connor A. How artificial intelligence could transform medicine. The New York Times. 2019 [cited 2021 Apr 15]. https://www.nytimes.com/2019/03/11/well/live/how-artificial-intelligence-could-transform-medicine.html.
3. Cohn J. The robot will see you now. The Atlantic. 2013 [cited 2021 Jun 14]. https://www.theatlantic.com/magazine/archive/2013/03/the-robot-will-see-you-now/309216/.
4. Mukherjee S. A.I. versus M.D. The New Yorker. [cited 2021 Apr 15]. https://www.newyorker.com/magazine/2017/04/03/ai-versus-md.
5. Artificial intelligence will improve medical treatments. The Economist. 2018 [cited 2021 Jun 16]. https://www.economist.com/science-and-technology/2018/06/07/artificial-intelligence-will-improve-medical-treatments.
6. Aaronovitch D. DeepMind, artificial intelligence and the future of the NHS. The Times. [cited 2021 Jun 16]. https://www.thetimes.co.uk/article/deepmind-artificial-intelligence-and-the-future-of-the-nhs-r8c28v3j6.
7. Artificial intelligence has come to medicine. Are patients being put at risk? Los Angeles Times. 2020 [cited 2021 Jun 16]. https://www.latimes.com/business/story/2020-01-03/artificial-intelligence-healthcare.
8. Experts argue whether computers could reason, and if they should. The New York Times. 1977 [cited 2021 Apr 15]. https://www.nytimes.com/1977/05/08/archives/experts-argue-whether-computers-could-reason-and-if-they-should.html.
9. Patel VL, Kaufman DR, Cohen T, editors. Cognitive informatics in health and biomedicine: case studies on critical care, complexity and errors. London: Springer; 2014.
3. https://commonfund.nih.gov/bridge2ai (accessed August 11, 2022).
While my coeditors are well known for their prescience in anticipating (and influ-
encing) the evolution of AIM, I think it is safe to say that none of us envisioned
developing a textbook together when we first met at the annual retreat of Columbia
University’s Department of Medical Informatics in the Catskills in 2002. At the
time, I had just joined the program as an incoming graduate student, attracted in part
by what I had learned of Ted’s work in medical AI and Vimla’s in medical cognition.
As these topics have remained core components of my subsequent research, it is
especially encouraging to see the upsurge in interest in the field. The expanding
pool of talented graduate students and physician-informaticians excited about the
potential of AIM was a key motivator for our development of this volume, as was
our recognition of the need for a textbook in the field that represented historical and
cognitive perspectives, in addition to recent methodological developments. However,
given the breadth of methods and biomedical applications that by now fall under the
AIM umbrella, it was apparent to us that we would need to draw upon the expertise
of leaders in relevant fields to develop a multiauthored textbook. Our efforts to
weave the perspectives of these authors into a coherent textbook benefited consider-
ably from Ted’s experience as lead editor of a multiauthored textbook in biomedical
informatics (fondly known as the “blue bible” of BMI, and now in its fifth edition).
We modeled both our approach to encouraging the integration of ideas across chap-
ters and the key structural elements of each chapter on the example of that book. We
were aided in this endeavor by our authors, who were highly responsive to our sug-
gestions for points of reference between chapters, as well as our recommendations
to reduce overlap. We especially appreciated our correspondence with authors dur-
ing the editing process, which broadened our own perspectives on AIM, and influ-
enced the themes we focused on when developing the final chapter (Reflections and
Projections). We owe additional gratitude to Grant Weston, Executive Editor of
Springer’s Medicine and Life Sciences division, for his steadfast support and guid-
ance throughout the development of this volume. We would also like to acknowl-
edge our Production Editor Rakesh Jotheeswaran, Project Manager Hashwini
Vytheswaran, and Editorial Assistant Leo Johnson for their assistance in keeping
the project on track. Special thanks also go to Bruce Buchanan (an AI luminary, key
mentor for Ted’s work in the 1970s, and his long-term collaborator and colleague)
for his willingness to craft the foreword for this volume. The development of this
volume coincided with a global pandemic. Many involved in the project were
affected by this in a professional capacity through their practice of medicine, their
role in an informatics response at their institutions, or the position of their profes-
sional home within a healthcare system under siege. We trust the development of
their chapters provided a welcome respite for these authors, as ours did for us, and
hope that readers of this volume will be inspired to develop AIM solutions that
equip us to anticipate and manage global health crises more effectively in the future.
Part I  Introduction

1  Introducing AI in Medicine .................................................. 3
   Trevor A. Cohen, Vimla L. Patel, and Edward H. Shortliffe

2  AI in Medicine: Some Pertinent History .................................. 21
   Edward H. Shortliffe and Nigam H. Shah

3  Data and Computation: A Contemporary Landscape ..................... 51
   Ida Sim and Marina Sirota

Part II  Approaches

4  Knowledge-Based Systems in Medicine .................................... 75
   Peter Szolovits and Emily Alsentzer

5  Clinical Cognition and AI: From Emulation to Symbiosis ............. 109
   Vimla L. Patel and Trevor A. Cohen

6  Machine Learning Systems .................................................. 135
   Devika Subramanian and Trevor A. Cohen

7  Natural Language Processing ............................................... 213
   Hua Xu and Kirk Roberts

8  Explainability in Medical AI ............................................... 235
   Ron C. Li, Naveen Muthu, Tina Hernandez-Boussard, Dev Dash, and Nigam H. Shah

9  Intelligent Agents and Dialog Systems ................................... 257
   Timothy Bickmore and Byron Wallace

Part III  Applications

10 Integration of AI for Clinical Decision Support ........................ 285
   Shyam Visweswaran, Andrew J. King, and Gregory F. Cooper
1 Introducing AI in Medicine

Trevor A. Cohen, Vimla L. Patel, and Edward H. Shortliffe
After reading this chapter, you should know the answers to these questions:
• How does one define artificial intelligence (AI)? What are some ways in
which AI has been applied to the practice of medicine and to health care more
broadly?
• How does one define cognitive informatics (CI)? How can the CI perspective
inform the development, evaluation and implementation of AI-based tools to
support clinical decision making?
• What are some factors that have driven the current wave of interest in AI
methods?
• How can one compare and contrast knowledge-based systems with machine
learning models? What are some of the relative advantages and disadvantages of
these approaches?
• Considering the current state of progress, where is research and development
most urgently needed in the field and why?
T. A. Cohen (*)
University of Washington, Seattle, WA, USA
e-mail: cohenta@uw.edu
V. L. Patel
New York Academy of Medicine, New York, NY, USA
E. H. Shortliffe
Columbia University, New York, NY, USA
Knowledge-Based Systems
The term “artificial intelligence” (AI) can first be found in a proposal for a confer-
ence that took place at Dartmouth College in 1956, which was written by John
McCarthy and his colleagues [1]. The research to be conducted in this two-month
conference was built upon the “conjecture that every aspect of learning or any
other feature of intelligence can in principle be so precisely described that a
machine can be made to simulate it.” This conference is considered a seminal event
in AI, and was followed by a steady growth of interest in the field that is reflected
by the frequency with which the term ‘artificial intelligence’ appeared in books of
this era (Fig. 1.1). There was a first peak of activity in the mid-1980s that followed
a period of rapid progress in the development of knowledge-based expert systems,
systems that were developed by eliciting knowledge from human experts and ren-
dering this content in computer-interpretable form. Diagnostic reasoning in medi-
cine was one of the first focus areas for the development of such systems, providing
proof that AI methods could approach human performance in tasks demanding a
command of a rich base of knowledge [3]. This shows that medical decision mak-
ing has long been considered a paradigmatic example of intelligent human behav-
ior, and has been a focus of—and has had an influence on—AI research for decades.
The historical trend in term usage in Fig. 1.1 also reveals a dip in enthusiasm and
in support for AI endeavors following the peak in the 1980s (one of the so-called ‘AI
Winters’), for reasons that are discussed in Chap. 2. For the purpose of this introduc-
tion, we focus on the events of recent years, which have seen rapid growth in inter-
est in AIM applications driven by media attention to AI in general (evident to the
right of Fig. 1.1), coupled with high profile medical demonstrations of diagnostic
Loosely inspired by the interconnections between neurons in the human brain, arti-
ficial neural networks consist of interconnected functional units named neurons,
each producing an output signal determined by their input data, weights assigned to
incoming connections, and an activation function that transforms cumulative
incoming signals into an output that is passed on to a next layer of the network. The
weights of a neural network serve as parameters that can be altered during training
of a model, so that the output of the neural network better approximates a desired
result, such as assigning a high probability to the correct diagnostic label for a radio-
logical image. When used in this way, neural networks exemplify the paradigm of
supervised machine learning, in which models learn from labels (such as diagno-
ses) assigned to training data. This approach is very different in nature from the
deliberate engineering of human knowledge that supported the expert systems in the
first wave of AIM (see Chap. 2 and, for detailed accounts of knowledge modeling
and machine learning methods, see Chaps. 4 and 6 respectively).
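To ground these concepts, the following minimal sketch (Python with NumPy; the synthetic features, labels, and learning rate are illustrative rather than drawn from any clinical data set) trains a single sigmoid "neuron" by gradient descent so that its output better approximates supervised binary labels:

```python
import numpy as np

# Toy supervised learning: one "neuron" with a sigmoid activation function,
# trained so its output approaches binary labels (e.g., 0 = benign, 1 = malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # 100 cases, 3 input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels for illustration

w = np.zeros(3)        # connection weights (trainable parameters)
b = 0.0                # bias term
lr = 0.1               # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))               # activation function

for step in range(500):
    p = sigmoid(X @ w + b)                        # output signal for each case
    grad_w = X.T @ (p - y) / len(y)               # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                              # adjust weights to reduce the error
    b -= lr * grad_b

print("learned weights:", w)
```

Deep networks stack many such layers of neurons, with all of their weights adjusted jointly by backpropagating the same kind of gradient signal.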
While machine learning models can learn to make impressively accurate predic-
tions, especially when large data sets are available for training, systems leveraging
explicitly modeled human knowledge—systems intended to reason as humans do—
are much better positioned to explain themselves (for an example, see Box 1.1) than
systems that have been developed to optimize accuracy without considering human
cognition. Explanation has long been recognized as a desirable property of AI sys-
tems for automated diagnosis, and as a prerequisite for their acceptance by clini-
cians [6] (and see Chap. 8). However, the general trend in machine learning has
been that accuracy comes at the cost of interpretability, to the point at which restor-
ing some semblance of interpretability to the predictions made by contemporary
machine learning models has emerged as a field of research in its own right—
explainable AI—with support from the Defense Advanced Research Projects
Agency (DARPA),1 the same agency that initiated the research program on network
protocols that ultimately led to a consumer-accessible internet.
1. See https://www.darpa.mil/program/explainable-artificial-intelligence (accessed August 18, 2022) for details.
This trend toward accurate but opaque predictions has accelerated with the advent
of deep learning models—neural networks that have multiple intervening layers of
neurons between input data and output predictions. While deep neural network
architectures are not new phenomena (see for example the important paper by
Hinton et al. [8]), their performance when trained on large data sets has produced
dramatic improvements in results attained across fundamental tasks such as speech
recognition, question answering and image recognition.
Figure 1.2 shows the extent of recent improvements for three key benchmarks:
the Stanford Question Answering Dataset (SQUAD [9])—over 100,000 compre-
hension questions related to short articles; ImageNet—over 14 million images each
assigned one of two hundred possible class labels [10]; and LibriSpeech—over
1000 hours of speech with matching text from audiobooks [11]. Of note, with both
SQUAD and ImageNet, human performance on the tasks concerned has been esti-
mated, and superseded by deep learning models.
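For readers unfamiliar with the reported metrics, the sketch below (Python with NumPy; the toy predictions are hypothetical) shows how the balanced F1 score used for SQUAD and the top-5 accuracy used for ImageNet are typically computed; the LibriSpeech metric, word error rate, is an edit distance between recognized and reference transcripts and is not shown:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Balanced F1: harmonic mean of precision and recall for binary labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def top5_accuracy(scores, labels):
    """Fraction of examples whose correct label is among the 5 highest-scoring classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]
    return np.mean([label in row for row, label in zip(top5, labels)])

# Toy usage with hypothetical predictions
print(f1_score(np.array([1, 0, 1, 1, 0]), np.array([1, 0, 0, 1, 0])))   # 0.8

rng = np.random.default_rng(1)
scores = rng.random((4, 200))          # 4 images, 200 candidate class labels
labels = np.array([3, 17, 42, 199])
print(top5_accuracy(scores, labels))
```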
Conceptually, the advantages of deep learning models over previous machine
learning approaches have been attributed to their capacity for representation learn-
ing [12]. With prior machine learning approaches, performance generally advanced
through engineering ways to represent incoming data (such as pixels of an image
representing a handwritten digit) that led to better downstream machine learning
performance (representations such as a count of the number of loops in a handwrit-
ten digit [13]). With deep learning models, the lower layers of a network can learn
to represent incoming data in ways that facilitate task performance automatically.2
Of particular importance for domains such as medicine, where large labeled data
2. While deep learning models excel at learning representations that lead to better predictive modeling performance, representation learning is broader than deep learning and includes a number of previously established methods. For a review of developments up to 2013, see [14].
[Fig. 1.2 chart: curves of best performance by year for SQUAD1.1 (F1), ImageNet (top 5 acc), and LibriSpeech (100-wer), y-axis approximately 80.0–97.5; dashed line marks SQUAD1.1 human performance at 91.2]
Fig. 1.2 Best documented performance, by year, on three key benchmarks (data from the 2021 AI
Index Report [15, 16]). (1) SQUAD1.1 = Stanford Question Answering Dataset (version 1.1).
Performance metric is “F1” (the balanced f-measure; see Chap. 6); (2) ImageNet - performance
metric is “top 5 acc” (the percent of images in which the correct label, among 200 possibilities,
appeared in the top 5 predictions); (3) LibriSpeech - performance metric is "100-wer" (a transfor-
mation of the word error rate, with 100 indicating every word in a recording was recognized cor-
rectly). Dashed lines indicate documented human performance on the task concerned, which has
been superseded by AI in both cases
sets are relatively difficult to obtain, the ability to extract useful representations for
one task can often be learned from training on another related one. This ability to
apply information learned from one task or data set to another is known as transfer
learning, and is perhaps best exemplified by what has become a standard approach
to classifying medical images (see Chap. 12): adding a classification layer to a deep
neural network that has been pretrained on the task of recognizing non-medical
images in ImageNet [17]. Similarly, fine-tuning of models such as Google’s BERT
and Open-AI’s GPT series, which were originally trained to predict held-out words
in large amounts of text from a range of digital data sources, has advanced perfor-
mance across a broad range of natural language processing (NLP) tasks [18, 19].
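A minimal sketch of this standard transfer learning recipe, assuming PyTorch and a recent version of torchvision are available (older torchvision versions use a pretrained=True argument instead of the weights enum; the two-class head and the decision to freeze the pretrained layers are illustrative choices, not prescriptions from this chapter):

```python
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet; its lower layers already encode
# general-purpose visual representations learned from non-medical images.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the pretrained layers so that only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the medical task
# (e.g., two classes such as "benign" vs. "malignant").
model.fc = nn.Linear(model.fc.in_features, 2)

# The model can now be fine-tuned on a (typically much smaller) labeled set
# of medical images using a standard training loop.
```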
Fig. 1.3 Recognition of a subtle diagnostic cue by a deep neural network trained to detect thyroid
cancer in different ultrasound images of the same nodule. Each image (top row) is annotated with
the probability of malignancy according to the model, and is paired with a visualization of the
pixels attended to by the deep learning model when making a prediction for whether an image is
in the “malignant class”, developed using the GradCam method [20]. Only the second image from
the left exhibits the diagnostic feature of interrupted eggshell calcification, in which the rim of the
opaque “shell” of calcification (blue arrows in the top row) is disrupted (red arrow). The GradCam
visualization reveals the model has learned to attend to this subtle diagnostic feature. Image cour-
tesy of Dr. Nikita Pozdeyev
3. In the United States this increase in adoption is attributable to the incentivization structures provided by the Health Information Technology for Economic and Clinical Health (HITECH) act of 2009 [21].
For example, a 2016 paper in the Journal of the American Medical Association
describes an impressively accurate deep learning system for the diagnosis of diabetes-
related eye disease in images of the retina [23]. Similarly, a widely-cited 2017 paper
in Nature describes the application of deep learning to detect skin cancer [24], with
the resulting system performing as well as 21 board-certified dermatologists in iden-
tifying two types of neoplastic skin lesions. These systems leveraged recent advances
in AI, including deep neural network architectures and approaches to train them effi-
ciently, as well as large sets of labeled data that were used to train the networks—over
125,000 images in each study. The dermatology system benefitted further from pre-
training on over 1.25 million non-medical images labeled with 1000 object categories.
Beyond imaging, deep learning models trained on EHR data have learned to predict
in-hospital mortality, unplanned readmission, prolonged length of stay, and final dis-
charge diagnosis—in many cases outperforming traditional predictive models that are
still widely used in clinical practice [25]. In this case, models were trained on data
from over 200,000 hospitalized adult patients from two academic medical centers,
considering over 40 billion sequential data points collectively.
These advances have attracted a great deal of press attention, with frequent arti-
cles in prominent media outlets considering the potential of AI to enhance—or dis-
rupt—the practice of medicine [26–28]. As we have discussed in the preface to this
volume, neither AI systems with physician-level performance nor media attention to
such systems are without precedent, even in the days before advances in computa-
tional power and methodology mediated the current explosive interest in machine
learning. However, the convergence of an unprecedented availability of clinical data
with the maturation of machine learning models (and the computational resources
to train them at scale) has allowed the rapid development of AI-based predictive
models in medicine. Many demonstrate impressive results beyond those we have
briefly described here. Furthermore, the proven commercial viability and public
acceptance of such models in other areas have offset some of the skepticism with
which AI models were greeted initially. Having seen the effectiveness with which
machine learning models leverage data to deliver our entertainment and shopping
recommendations on a daily basis, why would we not wish such systems to assist
our clinicians in their medical practice? A strong indicator of the commercial poten-
tial of AI-based systems in medicine is the emergence of regulatory frameworks for
their application in practice (see also Chap. 18) [29], with a number of AI systems
already approved for medical use in the United States (Fig. 1.4) and Europe [30].
A fundamental question in the study (and regulation) of AIM systems concerns the
definition of the term “Artificial Intelligence”. Given the breadth of approaches that
have been categorized as related to AI, it is perhaps not surprising that there is no
universally-accepted definition of this term, and that the extent to which contempo-
rary deep learning approaches constitute AI is still vigorously debated [32, 33]. A
representative sample of AI definitions is provided in Box 1.2. While there are
[Fig. 1.4 chart: approval counts (y-axis 10–70) by specialty — Cardiology, Radiology, Neurology, Psychiatry, General medicine, Hospital monitoring, Ophthalmology, Endocrinology, Orthopedics, Internal Medicine, Urology/General practice]
Fig. 1.4 FDA approvals for AI-related products by specialty (data drawn from medicalfuturist.com [30, 31]), with radiology systems the most common category
clearly common threads that run among them, notably the emphasis on intelligence
(loosely defined by Barr as exhibiting the characteristics we associate with intelli-
gence in human behavior, or by Winston as emphasizing the use of knowledge and
an ability to communicate ideas), the definitions also reflect a departure from the
cognitive motivations of AI at its inception—performance of tasks as humans do—
to the more pragmatic motivations of the performance-oriented systems that are
commonly termed AI today. Note that McCarthy in particular asserts explicitly that
biological constraints need not apply. Of course, motivations for understanding how
machines might solve a problem presumed to require human intelligence are not
exclusively pragmatic, as this topic is also of considerable academic interest.
As one might anticipate given the fluidity of definitions of AI in general, the
notion of what qualifies as AI in medicine is also in flux. At the inception of the
field, the focus was on systems that could reason, leveraging encoded knowledge
(including probabilistic estimates or uncertainty) derived from clinical experts.
Such formalization of knowledge to render it computable also underlies the clinical
decision support rules embedded in contemporary EHR systems. However, few
would argue that the individual rules firing alerts in such systems constitute AI, even
when considered collectively (see the discussion of warnings and alerts in Chap.
17). It seems, therefore, that the perceived difficulty of the tasks accomplished by a
system determines whether it is thought to have exhibited intelligent behavior. Today,
machine learning approaches (including deep neural networks) are strongly associ-
ated with the term AI. These systems are not designed to reason, but instead learn to
recognize patterns, such as diagnostic features of radiology images, leading to per-
formance on constrained tasks that is comparable to that of highly trained physi-
cians. As such it is easy to argue that they exhibit intelligent human behavior, at
least in the context of a task for which large amounts of labeled training data are
readily available. Furthermore, such models can make predictions that are beyond
the capabilities of human experts at times, such as prediction of cardiovascular risk
factor status from retinal fundus photographs [39], or prediction of 3-D protein
structure from an amino acid sequence [40]. Perhaps as a consequence of the lack
of funding for research associated with the term AI during periods in which it was
out of favor (see Chap. 2), a great deal of machine learning work in the field was not
framed as AI research, but would be perceived this way in retrospect. Analogous to
the case with rule-based models, this raises the question of how sophisticated a
machine learning model is required to qualify as AI. For example, would a system
based on a logistic regression model trained on a handful of features, with less than
ten trainable parameters constitute AI? Perhaps, as with rules, the main question
concerns the nature of the task that the model is able to accomplish, with a bench-
mark for AIM being the automated accomplishment of tasks that would be chal-
lenging for a highly trained human.
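For concreteness, such a minimal model might look like the following sketch (Python with scikit-learn; the synthetic data and the notion of four clinical input features are hypothetical), which fits a classifier with only five trainable parameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a small clinical data set with four features per patient
# (e.g., age, systolic blood pressure, heart rate, creatinine -- purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.8, 0.5, 0.3, 1.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Trainable parameters: one coefficient per feature plus an intercept.
n_params = clf.coef_.size + clf.intercept_.size
print(n_params)   # 5
```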
Why CI?
It is our view that the discipline of cognitive informatics (CI) [44–46], which brings
the perspective of the cognitive sciences to the study of medical decision making by
human beings and machines, is uniquely positioned to address these challenges.
Through its roots in the study of medical reasoning [47–49], CI provides a sound
scientific basis from which to consider the relationship between current technolo-
gies and human intelligence. CI has extended its area of inquiry to include both
human-computer interaction and the study of the effects of technology on the flow
of work and information in clinical settings [50–53]. Accordingly CI is well-
positioned to inform the integration of AIM systems into clinical practice, and more
broadly to inform the design of AI systems that complement the cognitive capabili-
ties of human decision makers, in alignment with seminal ideas concerning the
potential of cooperative human-machine systems [54].
decision making [60], such as biases in diagnostic reasoning that have been identi-
fied through cognitive research [61], or distracted attention in busy clinical settings
[62]. Alternatively, one might envision developing ways to distribute labor across a
human/AI collaborative system to maximize the expected utility of this system, tak-
ing into account both the accuracy of model predictions and the time required for a
human actor to reassess them. Recent work has developed an approach to optimiz-
ing collaborative systems in this way, resulting in some experiments in systems that
increase high-confidence predictions (i.e. predictions to which the model assigns
extremely high or low probability) at the expense of its accuracy in edge cases (i.e.
predictions close to the model’s decision boundary), where human input could
resolve model uncertainty [63].
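One simple way to operationalize such a division of labor is to let the model act only on its high-confidence predictions and route cases near its decision boundary to a clinician. The sketch below (plain Python; the thresholds and probabilities are hypothetical, and this is an illustration rather than the specific optimization approach of the work cited above) makes the idea concrete:

```python
def triage(probabilities, low=0.05, high=0.95):
    """Split cases into model-decided predictions and cases deferred to a human.

    probabilities: model-estimated probability of the positive class per case.
    low/high: illustrative confidence thresholds; in practice these would be chosen
    to trade off model accuracy against the time a clinician spends on deferred cases.
    """
    decided, deferred = [], []
    for i, p in enumerate(probabilities):
        if p >= high or p <= low:
            decided.append((i, int(p >= high)))   # model issues a prediction
        else:
            deferred.append(i)                    # near the decision boundary: ask a human
    return decided, deferred

decided, deferred = triage([0.99, 0.02, 0.60, 0.45, 0.97])
print("model decides:", decided)           # cases 0, 1, and 4
print("deferred to clinician:", deferred)  # cases 2 and 3
```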
CI methods are already well established as means to evaluate the usability of deci-
sion support tools [45, 46]. Findings from this line of research have led to recom-
mendations that the usability of clinical systems should be prioritized as a means
to enhance their acceptability and safety [64]. In contrast to system-centric meth-
ods of usability evaluation, such as heuristic evaluations by usability experts [65],
CI approaches attempt to understand the thought process of a user, which is par-
ticularly important in knowledge-rich domains, such as medicine, where both
knowledge of the system being used and of the domain are required to perform
tasks correctly [66]. This can be accomplished through analysis of a think-aloud
protocol, collected by prompting users to verbalize their thoughts during the pro-
cess of completing representative tasks [67]. This approach is similarly well-suited
to the study of clinician interactions with AI-based systems, where users must
make clinical decisions on the basis of their estimation of the veracity of sys-
tem output.
Critical questions concerning the nature of these interactions remain to be
answered. One such question concerns how best to represent model predictions. For
example, recent work in dermatology diagnosis found that advantages in perfor-
mance for a human-computer collective were contingent upon the granularity (prob-
abilities of all of the diseases in the differential diagnosis vs. a single global risk of
malignancy) and cognitive demand of the representation used to convey predictions
to physicians [57]. Analysis of verbal protocols collected during interactions with
interfaces, using alternative representations of the same predictions, could inform our
understanding of why this is the case by revealing the reasoning dermatologists use
when deciding whether to accept a particular recommendation. Another important
question concerns the role of explanations provided by a system in influencing human
decision making. Intriguingly, recent research has shown that revealing the influence
of input features (here, words in a passage of text) on model predictions increases the
likelihood that users will accept the validity of these predictions, irrespective of
whether they are accurate [68]. This suggests that displaying feature salience may not
be adequate to support the fault detection procedures that are a prerequisite to safe
and resilient human/AI collaborative systems. CI methods are well-suited to identify
the thought processes through which faulty AI decisions are (or are not) identified
when considering explanations, to inform the development of effective systems in which processes are both highly automated and subject to human control. This should
arguably be the case for systems making critical medical decisions, where mistakes
have irreversible consequences [69].
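As a small illustration of the granularity question raised above, the sketch below shows two presentations derived from the same hypothetical model output: the full differential-diagnosis distribution and a single aggregated probability of malignancy. The disease names and probabilities are invented and do not correspond to the interface evaluated in the cited study [57].

# Hypothetical per-diagnosis probabilities from a skin-lesion classifier.
differential = {
    "melanoma": 0.18,
    "basal cell carcinoma": 0.07,
    "benign nevus": 0.55,
    "seborrheic keratosis": 0.20,
}
malignant = {"melanoma", "basal cell carcinoma"}

# Fine-grained display: the full differential diagnosis, highest probability first.
for dx, p in sorted(differential.items(), key=lambda kv: -kv[1]):
    print(f"{dx:<22s} {p:.2f}")

# Coarse display: one global risk of malignancy aggregated from the same output.
risk = sum(p for dx, p in differential.items() if dx in malignant)
print(f"\nGlobal risk of malignancy: {risk:.2f}")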
Concluding Remarks
Further Reading
Chang, AC. Intelligence-Based Medicine: Artificial Intelligence and Human
Cognition in Clinical Medicine and Healthcare. Academic Press (Elsevier); July
8th 2020.
• This book provides a survey of AI methods from clinical and data science per-
spectives, with an emphasis on their implementation in, and impact upon, medi-
cine and its subspecialties.
Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare:
review, opportunities and challenges. Briefings in Bioinformatics. 2018 Nov 27;19
(6):1236–1246.
• This paper provides an overview of deep learning applications in healthcare up to 2018 and introduces a number of issues that are addressed in the current volume.
Patel VL, Kannampallil TG. Cognitive informatics in biomedicine and healthcare.
Journal of biomedical informatics. 2015 Feb 1;53:3–14.
• This paper provides a definition and overview of the field of cognitive informat-
ics, with a focus on biomedical applications.
Topol EJ. High-performance medicine: the convergence of human and artificial
intelligence. Nature Medicine. Nature Publishing Group; 2019 Jan;25 (1):44–56.
• This paper provides an overview of AI applications in healthcare, including a
thoughtful account of challenges that distinguish this domain from others in
which AI applications have established their value.
Zhang D, Mishra S, Brynjolfsson E, Etchemendy J, Ganguli D, Grosz B, Lyons T,
Manyika J, Niebles JC, Sellitto M, Shoham Y, Clark J, Perrault R. The AI Index
2021 Annual Report. arXiv:2103.06312 [cs] [Internet]. 2021 Mar 8 [cited 2021 Apr
24]; Available from: http://arxiv.org/abs/2103.06312
• Stanford’s AI Index Report provides an overview of national and global AI trends
in research and industry.
References
1. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer
research project on artificial intelligence, August 31, 1955. AIMag. 2006;27(4):12.
2. Google Books Ngram Viewer [Internet]. [cited 2021 June 25]. Available from: https://books.
google.com/ngrams.
3. Yu VL, Buchanan BG, Shortliffe EH, Wraith SM, Davis R, Scott AC, Cohen SN. Evaluating the
performance of a computer-based consultant. Comput Programs Biomed. 1979;9(1):95–102.
4. Rosenblatt F. The perceptron: a probabilistic model for information storage and organization
in the brain. Psychol Rev. 1958;65(6):386.
5. McClelland JL, Rumelhart DE, PDP Research Group. Parallel distributed processing. Cambridge, MA: MIT Press; 1986. p. 1.
6. Swartout WR. Explaining and justifying expert consulting programs. Computer-assisted medi-
cal decision making. Springer; 1985. p. 254–71.
7. Shortliffe EH, Davis R, Axline SG, Buchanan BG, Green CC, Cohen SN. Computer-based
consultations in clinical therapeutics: explanation and rule acquisition capabilities of the
MYCIN system. Comput Biomed Res. 1975;8(4):303–20.
8. Hinton GE, Osindero S, Teh Y-W. A fast learning algorithm for deep belief nets. Neural
Comput. 2006;18(7):1527–54.
9. Rajpurkar P, Zhang J, Lopyrev K, Liang P. SQuAD: 100,000+ questions for machine compre-
hension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural
Language Processing, pages 2383–2392, Austin, Texas. Association for Computational
Linguistics.
10. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image
database. 2009 IEEE conference on computer vision and pattern recognition. IEEE; 2009.
p. 248–55.
11. Panayotov V, Chen G, Povey D, Khudanpur S. Librispeech: an ASR corpus based on public
domain audio books. 2015 IEEE international conference on acoustics, speech and signal pro-
cessing (ICASSP). 2015. p. 5206–5210.
12. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.
13. Kumar G, Bhatia PK. A detailed review of feature extraction in image processing systems.
2014 fourth international conference on advanced computing communication technologies.
2014. p. 5–12.
14. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives.
IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828.
15. Zhang D, Mishra S, Brynjolfsson E, Etchemendy J, Ganguli D, Grosz B, Lyons T, Manyika
J, Niebles JC, Sellitto M, Shoham Y, Clark J, Perrault R. The AI Index 2021 annual report.
arXiv:2103.06312 [cs] [Internet]. 2021 Mar 8 [cited 2021 Apr 24]. Available from: http://arxiv.
org/abs/2103.06312.
16. AI Index 2021 [Internet]. Stanford HAI. [cited 2021 June 25]. Available from: https://hai.
stanford.edu/research/ai-index-2021.
17. Shin H-C, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep
convolutional neural networks for computer-aided detection: CNN architectures, dataset char-
acteristics and transfer learning. IEEE Trans Med Imaging. 2016;35(5):1285–98.
18. Devlin J, Chang M-W, Lee K, Toutanova K. BERT: pre-training of deep bidirectional trans-
formers for language understanding. Proceedings of the 2019 conference of the North
American Chapter of the Association for computational linguistics: human language technolo-
gies, Vol. 1 (Long and Short Papers). 2019. p. 4171–4186.
19. Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised
multitask learners. OpenAI Blog. 2019;1(8):9.
20. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-cam: Visual expla-
nations from deep networks via gradient-based localization. Proceedings of the IEEE interna-
tional conference on computer vision. 2017. p. 618–626.
21. Adler-Milstein J, Jha AK. HITECH act drove large gains in hospital electronic health record
adoption. Health Aff. 2017;36(8):1416–22.
22. Bauman RA, Gell G, Dwyer SJ. Large picture archiving and communication systems of the
world--part 1. J Digit Imaging. 1996;9(3):99–103.
23. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner
K, Madams T, Cuadros J. Development and validation of a deep learning algorithm for detec-
tion of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10.
24. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level
classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.
25. Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, Hardt M, Liu PJ, Liu X, Marcus J, Sun M,
Sundberg P, Yee H, Zhang K, Zhang Y, Flores G, Duggan GE, Irvine J, Le Q, Litsch K, Mossin
A, Tansuwan J, Wang D, Wexler J, Wilson J, Ludwig D, Volchenboum SL, Chou K, Pearson
M, Madabushi S, Shah NH, Butte AJ, Howell MD, Cui C, Corrado GS, Dean J. Scalable and
accurate deep learning with electronic health records. NPJ Digit Med. 2018;1(1):1–10.
26. Mukherjee S. A.I. versus M.D. [Internet]. The New Yorker. [cited 2021 Apr 15]. https://www.
newyorker.com/magazine/2017/04/03/ai-versus-md.
27. Metz C. A.I. shows promise assisting physicians. The New York Times [Internet]. 2019 Feb
11 [cited 2021 Apr 15]. https://www.nytimes.com/2019/02/11/health/artificial-intelligence-
medical-diagnosis.html.
28. O’Connor A. How artificial intelligence could transform medicine. The New York Times
[Internet]. 2019 Mar 11 [cited 2021 Apr 15]. https://www.nytimes.com/2019/03/11/well/live/
how-artificial-intelligence-could-transform-medicine.html.
29. Center for Devices and Radiological Health. Artificial intelligence and machine learning in software as a medical device. FDA [Internet]. FDA; 2021 Jan 11 [cited 2021 Apr 19]. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.
30. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved
medical devices and algorithms: an online database. NPJ Digit Med. 2020;3(1):1–8.
31. The Medical Futurist [Internet]. The Medical Futurist. [cited 2021 Apr 19]. Available from:
https://medicalfuturist.com/fda-approved-ai-based-algorithms.
32. Marcus G. Deep learning: a critical appraisal. arXiv preprint arXiv:180100631. 2018.
33. Zador AM. A critique of pure learning and what artificial neural networks can learn from ani-
mal brains. Nat Commun. 2019;10(1):1–7.
34. Marr D. Artificial intelligence—a personal view. Artif Intell. 1977;9(1):37–48.
35. Winston PH. Artificial Intelligence. Reading, MA: Addison-Wesley; 1977.
36. Barr A, Feigenbaum EA. The handbook of artificial intelligence (Vol. 1). Los Altos, CA:
William Kaufman; 1981.
37. Luger GF, Stubblefield WA. Artificial intelligence (2nd ed.): structures and strategies for com-
plex problem-solving. USA: Benjamin-Cummings Publishing Co., Inc.; 1993.
38. McCarthy J. What is artificial intelligence? [Internet]. What is artificial intelligence? 2007
[cited 2021 Apr 20]. http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.
39. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, Peng L, Webster
DR. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learn-
ing. Nat Biomed Eng. 2018;2(3):158–64.
40. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K,
Bates R, Žídek A, Potapenko A, Bridgland A, Meyer C, Kohl SAA, Ballard AJ, Cowie A,
Romera-Paredes B, Nikolov S, Jain R, Adler J, Back T, Petersen S, Reiman D, Clancy E,
Zielinski M, Steinegger M, Pacholska M, Berghammer T, Bodenstein S, Silver D, Vinyals O,
Senior AW, Kavukcuoglu K, Kohli P, Hassabis D. Highly accurate protein structure prediction
with AlphaFold. Nature. 2021;15:1–11.
41. Berg M. Patient care information systems and health care work: a sociotechnical approach. Int
J Med Inform. 1999;55:87–101.
42. Shortliffe T, Davis R. Some considerations for the implementation of knowledge-based expert
systems. SIGART Bull. 1975;55:9–12.
43. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence.
Nat Med. 2019;25(1):44–56.
44. Wang Y. The theoretical framework of cognitive informatics. Int J Cogn Inform Nat Intell.
2007;1(1):1–27.
45. Patel VL, Kaufman DR. Cognitive science and biomedical informatics. In: Shortliffe EH,
Cimino JJ, editors. Biomedical informatics: computer applications in health care and biomedi-
cine. 5th ed. New York: Springer; 2021. p. 133–85.
46. Patel VL, Kannampallil TG. Cognitive informatics in biomedicine and healthcare. J Biomed
Inform. 2015;53:3–14.
47. Lesgold A, Rubinson H, Feltovich P, Glaser R, Klopfer D, Wang Y. Expertise in a complex
skill: diagnosing x-ray pictures. In: Chi MTH, Glaser R, Farr MJ, editors. The nature of exper-
tise. Hillsdale, NJ: Lawrence Erlbaum; 1988. p. 311–42.
48. Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reason-
ing. Cambridge, MA: Harvard University Press; 1978.
49. Patel VL, Arocha JF, Kaufman DR. Diagnostic reasoning and medical expertise. Psychol
Learn Motiv. 1994;31:187–252.
50. Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive
approaches to evaluation of information systems and user interfaces. Proceedings/AMIA
annual fall symposium. 1997. p. 218–222.
51. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of
clinical information systems. J Biomed Inform. 2004;37:56–76.
52. Malhotra S, Jordan D, Shortliffe E, Patel VL. Workflow modeling in critical care: piecing
together your own puzzle. J Biomed Inform. 2007;40:81–92.
53. Cohen T, Blatter B, Almeida C, Shortliffe E, Patel V. A cognitive blueprint of collaboration
in context: distributed cognition in the psychiatric emergency department. Artif Intell Med.
2006;37:73–83.
54. Licklider JC. Man-computer symbiosis. IRE transactions on human factors in electronics.
IEEE. 1960;1:4–11.
55. Patel BN, Rosenberg L, Willcox G, Baltaxe D, Lyons M, Irvin J, Rajpurkar P, Amrhein T,
Gupta R, Halabi S, Langlotz C, Lo E, Mammarappallil J, Mariano AJ, Riley G, Seekins J, Shen
L, Zucker E, Lungren MP. Human–machine partnership with artificial intelligence for chest
radiograph diagnosis. NPJ Digit Med. 2019;2(1):1–10.
56. Hekler A, Utikal JS, Enk AH, Hauschild A, Weichenthal M, Maron RC, Berking C, Haferkamp
S, Klode J, Schadendorf D, Schilling B, Holland-Letz T, Izar B, von Kalle C, Fröhling S,
Brinker TJ, Schmitt L, Peitsch WK, Hoffmann F, Becker JC, Drusio C, Jansen P, Klode J,
Lodde G, Sammet S, Schadendorf D, Sondermann W, Ugurel S, Zader J, Enk A, Salzmann
M, Schäfer S, Schäkel K, Winkler J, Wölbing P, Asper H, Bohne A-S, Brown V, Burba B,
Deffaa S, Dietrich C, Dietrich M, Drerup KA, Egberts F, Erkens A-S, Greven S, Harde V,
Jost M, Kaeding M, Kosova K, Lischner S, Maagk M, Messinger AL, Metzner M, Motamedi
R, Rosenthal A-C, Seidl U, Stemmermann J, Torz K, Velez JG, Haiduk J, Alter M, Bär C,
Bergenthal P, Gerlach A, Holtorf C, Karoglan A, Kindermann S, Kraas L, Felcht M, Gaiser
MR, Klemke C-D, Kurzen H, Leibing T, Müller V, Reinhard RR, Utikal J, Winter F, Berking C,
Eicher L, Hartmann D, Heppt M, Kilian K, Krammer S, Lill D, Niesert A-C, Oppel E, Sattler
E, Senner S, Wallmichrath J, Wolff H, Gesierich A, Giner T, Glutsch V, Kerstan A, Presser D,
Schrüfer P, Schummer P, Stolze I, Weber J, Drexler K, Haferkamp S, Mickler M, Stauner CT,
Thiem A. Superior skin cancer classification by the combination of human and artificial intel-
ligence. Eur J Cancer. 2019;120:114–21.
57. Tschandl P, Rinner C, Apalla Z, Argenziano G, Codella N, Halpern A, Janda M, Lallas A,
Longo C, Malvehy J, Paoli J, Puig S, Rosendahl C, Soyer HP, Zalaudek I, Kittler H. Human–
computer collaboration for skin cancer recognition. Nat Med. 2020;26(8):1229–34.
58. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional
neural networks for radiologic images: a Radiologist’s guide. Radiology. 2019;290(3):
590–606.
59. Kimeswenger S, Tschandl P, Noack P, Hofmarcher M, Rumetshofer E, Kindermann H, Silye
R, Hochreiter S, Kaltenbrunner M, Guenova E, Klambauer G, Hoetzenecker W. Artificial neu-
ral networks and pathologists recognize basal cell carcinomas based on different histological
patterns. Mod Pathol. 2020;13:1–9.
60. Horvitz E. One hundred year study on artificial intelligence: reflections and framing. Microsoft.com; 2014.
61. Chapman GB, Elstein AS. Cognitive processes and biases in medical decision-making. In:
Chapman GB, Sonnenberg FS, editors. Decision-making in health care: theory, psychology,
and applications. Cambridge: Cambridge University Press; 2000. p. 183–210.
62. Franklin A, Liu Y, Li Z, Nguyen V, Johnson TR, Robinson D, Okafor N, King B, Patel VL,
Zhang J. Opportunistic decision making and complexity in emergency care. J Biomed Inform.
2011;44(3):469–76.
63. Bansal G, Nushi B, Kamar E, Horvitz E, Weld DS. Is the most accurate AI the best teammate?
Optimizing AI for teamwork. Proc AAAI Conf Artif Intell. 2021;35(13):11405–14.
64. Middleton B, Bloomrosen M, Dente MA, Hashmat B, Koppel R, Overhage JM, Payne TH,
Rosenbloom ST, Weaver C, Zhang J. Enhancing patient safety and quality of care by improv-
ing the usability of electronic health record systems: recommendations from AMIA. J Am Med
Inform Assoc. 2013;20(e1):e2–8.
65. Nielsen J, Molich R. Heuristic evaluation of user interfaces. Proceedings of the SIGCHI con-
ference on human factors in computing systems. 1990. p. 249–256.
66. Horsky J, Kaufman DR, Oppenheim MI, Patel VL. A framework for analyzing the cognitive
complexity of computer-assisted clinical ordering. J Biomed Inform. 2003;36:4–22.
67. Ericsson KA, Simon HA. Protocol analysis: verbal reports as data. Cambridge, MA: MIT
Press; 1993.
68. Bansal G, Wu T, Zhou J, Fok R, Nushi B, Kamar E, Ribeiro MT, Weld D. Does the whole exceed
its parts? The effect of AI explanations on complementary team performance. Proceedings of
the 2021 CHI conference on human factors in computing systems. New York, NY: Association
for Computing Machinery; 2021. p. 1–16. https://doi.org/10.1145/3411764.3445717.
69. Shneiderman B. Human-centered artificial intelligence: reliable, safe & trustworthy. Int J Hum
Comput Interact. 2020;36(6):495–504.
Chapter 2
AI in Medicine: Some Pertinent History
After reading this chapter, you should know the answers to these questions:
• What are the roots of artificial intelligence in human history, even before the
general introduction of digital computers?
• How did computer science emerge as an academic and research discipline and
how was AI identified as a component of that revolution?
• How did a medical focus on AI applications emerge from the early general prin-
ciples of the field?
• How did the field of cognitive science influence early work on AI in Medicine
(AIM) and how have those synergies evolved to the present?
• What were the early medical applications of AI and how were they received in
the clinical and medical research communities?
• How has the focus of medical AI research and application evolved in parallel
with AI itself, and with the progress in computing power, communications tech-
nology, and interactive devices?
• To what extent are the early problems and methods developed by early AIM
researchers still relevant today? What has been lost and what has been gained?
• How have the advances in hardware and the availability of labeled data made
certain forms of AI popular? How can we combine these recent advances with
what we learned from the previous 40 years?
• How might we anticipate the further evolution of AI in medicine in light of the
way the field has evolved to date and its likely trajectory?
E. H. Shortliffe (*)
Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
e-mail: ted@shortliffe.net
N. H. Shah
Stanford University School of Medicine, Stanford, CA, USA
e-mail: nigam@stanford.edu
Introduction
The history of artificial intelligence in medicine (AIM) is intimately tied to the his-
tory of AI itself, since some of the earliest work in applied AI dealt with biomedi-
cine. In this chapter we provide a brief overview of the early history of AI, but then
focus on AI in medicine (and in human biology), providing a summary of how the
field has evolved since the earliest recognition of the potential role of computers in
the modeling of medical reasoning and in the support of clinical decision making.
The growth of medical AI has been influenced not only by the evolution of AI itself,
but also by the remarkable ongoing changes in computing and communication tech-
nologies. Accordingly, this chapter anticipates many of the topics that are covered
in subsequent chapters, providing a concise overview that lays out the concepts and
progression that are reflected in the rest of this volume.
1
For more discussion, see “A Brief History of AI” at https://aitopics.org/misc/brief-history
(accessed August 13, 2022) and “History of Artificial Intelligence” at https://en.wikipedia.org/
wiki/History_of_artificial_intelligence. (accessed August 13, 2022).
In the early twentieth century Bertrand Russell and Alfred North Whitehead
published Principia Mathematica, which revolutionized formal logic [1].
Subsequent philosophers pursued the logical analysis of knowledge. The first use of
the word “robot” in English occurred in a play by Karel Capek that was produced
in 1921.2 Thereafter a mechanical man, Elektro, was introduced by Westinghouse Electric at the New York World's Fair in 1939 (along with a mechanical dog
named Sparko). It was a few years earlier (1936–37) that Alan Turing proposed the
universal Turing Machine concept and proved notions of computability.3 Turing’s
analysis imagined an abstract machine that can manipulate symbols on a strip of
tape, guided by a set of rules. He showed that such a simple machine was capable
of simulating the logic of any computer algorithm that could be constructed. Also
relevant (in 1943) were the introduction of the term cybernetics, the publication by
McCulloch and Pitts of A Logical Calculus of the Ideas Immanent in Nervous
Activity (an early stimulus to the notion of artificial neural networks) [2], and
Emil Post’s proof that production systems are a general computational mecha-
nism [3].
Especially important for AI was George Polya’s 1945 book How to Solve It,
which introduced the notion of heuristic problem solving [4]—a key influential
concept in the AI community to this day. That same year Vannevar Bush published
As We May Think, which offered a remarkable vision of how, in the future, comput-
ers could assist human beings in a wide range of activities [5]. In 1950, Turing
published Computing Machinery and Intelligence, which introduced the Turing
Test as a way of defining and testing for intelligent behavior [6]. In that same year,
Claude Shannon (of information theory fame) published a detailed analysis show-
ing that chess playing could be viewed as search (Programming A Computer to
Play Chess) [7]. The dawn of computational artificial intelligence was upon us as
computers became viable and increasingly accessible devices.
Modern History of AI
The history of AI, as we think of it today, began with the development of stored-
program digital computers and the ground-breaking work of John von Neumann and his team at the Institute for Advanced Study in Princeton in the 1950s. As the potential of computers
2
Čapek K. Rossumovi Univerzální Roboti (Rossum’s Universal Robots). It premiered on 25
January 1921 and introduced the word “robot” to the English language and to science fiction as a
whole. https://en.wikipedia.org/wiki/R.U.R. (accessed August 13, 2022).
3
Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its
Proceedings, but it was published in early 1937. https://en.wikipedia.org/wiki/Turing_machine.
(accessed August 13, 2022).
4
http://shelf1.library.cmu.edu/IMLS/MindModels/logictheorymachine.pdf. (accessed August
13, 2022).
5
http://bitsavers.informatik.uni-stuttgart.de/pdf/rand/ipl/P-1584_Report_On_A_General_Problem-Solving_Program_Feb59.pdf. (accessed August 13, 2022).
6
McCarthy’s original paper is available at http://www-formal.stanford.edu/jmc/recursive.html.
(accessed August 13, 2022).
7
See https://en.wikipedia.org/wiki/Douglas_Engelbart. (accessed August 13, 2022). SRI became
an independent entity outside of Stanford University and is known today simply as SRI
International.
8
https://www.sri.com/hoi/shakey-the-robot/. (accessed August 13, 2022).
9
Also often called DARPA, for Defense Advanced Research Projects Agency.
10
See https://tools.ietf.org/html/rfc439 for a transcript of the interchange between the two pro-
grams. (accessed August 13, 2022).