INTERNATIONAL SCHOOL
INTRODUCTION TO THE HISTORY OF AI
COMPUTER PROJECT
Introduction to the History of AI
Artificial Intelligence (AI) is a branch of computer science that aims to
create systems capable of performing tasks that typically require human
intelligence, such as reasoning, learning, problem-solving, and language
understanding. The history of AI spans decades, with significant milestones
shaping its progress.
Early Foundations (Before 20th Century)
The idea of intelligent machines has its roots in ancient mythology and
philosophy. For instance, Aristotle's syllogisms (e.g., "All men are mortal;
Socrates is a man; therefore Socrates is mortal") laid the foundation for
formal reasoning. In the 17th and 18th centuries, philosophers such as
Descartes and Leibniz speculated about mechanical reasoning and logic.
The Birth of AI (1940s-1950s)
Alan Turing (1936–1950): Proposed the Turing machine (1936), a formal
model of computation, and the Turing Test (1950) to evaluate machine
intelligence.
Dartmouth Conference (1956): Widely considered the birth of AI as a field,
where the term “Artificial Intelligence” was coined by John McCarthy.
Challenges and AI Winters (1970s-1980s)
Early enthusiasm waned as limitations in computational power and
overambitious promises led to reduced funding and interest—periods
known as AI winters. Despite this, expert systems (rule-based programs)
found commercial applications in medical diagnostics and business.
Resurgence and Modern AI (1990s-Present)
Machine Learning (1990s): The focus shifted to data-driven approaches,
emphasizing statistical learning and neural networks.
Deep Learning (2010s): The rise of GPUs and large datasets enabled
breakthroughs in speech recognition, computer vision, and natural
language processing.
Milestones: IBM’s Deep Blue defeated chess champion Garry Kasparov
(1997), and AlphaGo defeated Go champion Lee Sedol (2016).
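To give a flavor of what "data-driven" means here, the following is a minimal sketch (in Python) of a single perceptron, one of the simplest neural-network units, learning the AND function from labeled examples. The learning rate, epoch count, and toy dataset are illustrative assumptions, not any specific historical system.

# Minimal perceptron sketch: learns AND from examples (illustrative values).
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted from data rather than hand-coded rules
b = 0.0         # bias
LR = 0.1        # learning rate (assumed value)

for _ in range(20):  # a few passes over the training data
    for (x1, x2), target in DATA:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out          # perceptron update rule
        w[0] += LR * err * x1
        w[1] += LR * err * x2
        b += LR * err

# After training, the unit reproduces AND on all four inputs.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in DATA])

The key contrast with earlier rule-based AI is that nothing about AND was programmed in: the weights were inferred from examples.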
Ancient and Philosophical Roots
Early myths and stories featured intelligent beings created by humans
(e.g., automatons in Greek mythology). Philosophers like Aristotle explored
logic and reasoning, laying a conceptual foundation for AI.
Birth of AI as a Field (1956)
The term “Artificial Intelligence” was coined at the Dartmouth Conference
by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude
Shannon. Early AI programs such as the Logic Theorist (1955) and the
General Problem Solver (1957) soon emerged.
Rise of Big Data and Deep Learning (1990s–2010s)
Access to vast amounts of data and better computational power
transformed AI. Breakthroughs in image and speech recognition followed,
driven by deep learning models such as convolutional neural networks
(CNNs).
1997: IBM’s Deep Blue defeated chess champion Garry Kasparov.
2011: IBM’s Watson won the quiz show Jeopardy!.
Modern AI Era (2010s–Present)
Advances in natural language processing (e.g., GPT models, chatbots). AI
applied across industries: autonomous vehicles, healthcare diagnostics,
and personalized recommendations. Ethical concerns, such as bias and
job displacement, gained prominence.
The Birth of Computer Science (1930s-1940s)
Alan Turing introduced the concept of a “universal machine” (1936), laying
the groundwork for computation. During World War II, Turing helped design
code-breaking machines at Bletchley Park, showcasing automated
problem-solving.
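As a concrete illustration of Turing's idea, here is a minimal sketch (in Python) of a Turing machine simulator. The states, alphabet, and transition table are invented for illustration (this tiny machine just flips every bit on its tape) and are not Turing's original construction.

# Minimal Turing machine sketch: a state, a tape, a head, a transition table.
# (state, symbol) -> (next_state, symbol_to_write, head_move); all illustrative.
TRANSITIONS = {
    ("flip", "0"): ("flip", "1", 1),   # write 1, move right
    ("flip", "1"): ("flip", "0", 1),   # write 0, move right
    ("flip", "_"): ("halt", "_", 0),   # blank cell: stop
}

def run(tape, state="flip", blank="_"):
    """Step the machine until it reaches the 'halt' state."""
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = TRANSITIONS[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape).rstrip(blank)

print(run("10110"))  # -> "01001"

Turing's deeper point was that one "universal" machine of this kind can simulate any other, given a description of that machine on its tape.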
Formalization of AI (1956)
The Dartmouth Conference coined the term Artificial Intelligence.
Researchers aimed to create machines that could think and learn like
humans.
Rise of Expert Systems (1980s)
AI gained traction with rule-based systems designed to mimic human
expertise in specific domains. Despite commercial success, these systems
lacked adaptability.
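As a rough sketch of how such a rule-based system works, the Python fragment below forward-chains simple if-then rules over a set of known facts. The medical-style rules and facts are invented for illustration and are not taken from any real expert system.

# Minimal rule-based system sketch: forward chaining over if-then rules.
# Rules and facts are invented examples, not from a real expert system.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
# -> includes "flu_suspected" and "see_doctor"

The lack of adaptability noted above is visible here: the system can only conclude what its hand-written rules cover, and it learns nothing from new cases.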
Machine Learning and Data Revolution (1990s-2000s)
Statistical machine learning methods were developed, leading to
milestones such as IBM’s Deep Blue defeating chess champion Garry
Kasparov (1997).
Symbolic AI and Early Successes (1950s-1960s)
Programs like Logic Theorist and General Problem Solver were developed.
Optimism grew as AI systems began solving complex mathematical and
logical problems.
Modern AI and Ethical Considerations (Present)
AI is used in healthcare, finance, and other industries, with a growing focus
on ethical challenges, including bias, job displacement, and AI regulation.
Mechanical Machines and Computing Concepts (1600s-1800s)
Blaise Pascal (1642): Invented the Pascaline, an early mechanical calculator
capable of addition and subtraction.
Charles Babbage and Ada Lovelace (1830s): Babbage designed the Analytical
Engine, a theoretical general-purpose computer. Ada Lovelace introduced the
concept of programming, recognizing the potential of machines to manipulate
symbols beyond numbers.
Early AI Systems
Logic Theorist (1955): Developed by Allen Newell and Herbert Simon to
prove mathematical theorems. IBM’s Shoebox (1961): An early speech
recognition system.
Golden Age of AI (1950s-1970s)
Game-Playing AI: Machines like Samuel’s Checkers Program (1959) showed
AI’s ability to learn from experience.
Natural Language Processing (NLP): Programs like SHRDLU (1970) could
interpret and respond to human language within specific contexts.
Expert Systems: Systems like DENDRAL for chemistry and MYCIN for medicine
demonstrated decision-making in specialized fields.