Categories of AI: What Is Artificial Intelligence?

What is Artificial Intelligence?


• A broad definition: The science and engineering of making intelligent
machines, especially intelligent computer programs.
• A more specific definition: The study of how to make computers do things
that humans can do.
• Goals of AI
To create intelligent agents that can reason, learn, and act autonomously.
To develop systems that can understand and respond to human language.
To build machines that can perceive and interact with the physical world.

Categories of AI

Narrow AI:
AI systems designed to perform a specific task.
Examples: facial recognition, recommendation systems, self-driving cars.

General AI:
AI systems with the ability to understand or learn any intellectual task that a human being can.
This level of AI has not yet been achieved.

Superintelligent AI:
Hypothetical AI that surpasses human intelligence in every aspect.
This is a subject of much debate and speculation.


TYPES OF ARTIFICIAL INTELLIGENCE

Types of artificial intelligence


According to different experts, there are several types of artificial intelligence. One of the main classifications is the following:
(a) Reactive machines
This type of AI does not have the ability to form memories or rely on past experiences to make decisions. It is guided purely by the present situation and has no knowledge of the past.
(b) Limited memory
These machines hold information about the past, but only transiently. Unlike the human mind, which can store long-term memories, their storage is limited, so they retain past information only for a short time.


• The main classifications for the different types of Artificial Intelligence (AI):


• a) Reactive Machines:
• These are the most basic form of AI.
• They cannot learn from past experiences and have no memory.
• They react solely based on the current input they receive.
• Imagine a simple vending machine. When you insert money (input), it dispenses the selected product
(output). There's no memory of previous purchases or adjustments based on past experiences.
• b) Limited Memory:
• These AI systems have a limited ability to learn from recent experiences.
• They can store information for a short period of time and use it to inform their current decisions.
• Think of a basic chatbot. It might remember the keywords used in the previous message to tailor its
current response. However, it wouldn't remember details from a conversation hours ago.
• Here's an analogy:
• Reactive machines are like simple calculators, responding only to the numbers currently entered.
• Limited memory AI is like a person with short-term memory, able to use recent information but not
long-term knowledge.

Types of artificial intelligence


(c) Theory of mind
These machines will be able to understand that human beings have feelings and thoughts that shape their interaction with the world. The behavior of these machines will need to accommodate social interaction.
(d) Self-awareness
The ultimate goal of artificial intelligence is to create machines that are self-aware.


• c) Theory of Mind AI:


• This type of AI, also known as social AI or empathetic AI, is still largely theoretical.
• It refers to machines that can understand the mental states of others, including humans.
• This includes understanding things like emotions, beliefs, and intentions.
• With this understanding, they could adapt their behavior to better interact and collaborate in social settings.
• Imagine an AI assistant that can not only complete tasks but also sense your frustration or hesitation and
adjust its communication style accordingly.
• d) Self-aware AI:
• This is the most futuristic and controversial category.
• Self-aware AI refers to machines that have a conscious understanding of their own existence and internal state.
• This is a highly debated topic, with some experts believing it's achievable in the distant future, while others
consider it entirely theoretical.
• The implications of self-aware AI are vast and complex, raising philosophical and ethical questions about the
nature of consciousness and machine sentience.
• Current State of AI:
• It's important to note that while theory of mind and self-aware AI are fascinating concepts, most current AI
applications fall under reactive or limited memory categories. Research in social AI and self-awareness is
ongoing, but significant breakthroughs are likely still far in the future.


Brief History and Evolution of AI

• 1950s: The Birth of AI
• 1960s-1970s: Early AI Research
• 1980s: The Rise of Expert Systems
• 1990s-2000s: Machine Learning and Data-Driven Approaches
• 2010s-Present: Deep Learning and Big Data

A Brief History and Evolution of Artificial Intelligence


Artificial intelligence (AI) has a long and fascinating history, dating back to the early days of computing. Over the years, AI has
undergone many periods of growth and decline, but it has always been driven by the human desire to create machines that can
think and act like humans.
The 1950s: The Birth of AI
The 1950s are often considered to be the birth of AI. It was during this decade that many of the foundational concepts of AI were
developed, including the Turing Test, which is still used today to measure the intelligence of machines.
•Alan Turing: In his 1950 paper, "Computing Machinery and Intelligence," Turing proposed the idea of a test to determine
whether a machine could be considered intelligent. This test, now known as the Turing Test, is still used today to measure the
progress of AI research.
•John McCarthy: In 1956, McCarthy coined the term "artificial intelligence" at a conference at Dartmouth College. This event is
often considered to be the official start of the field of AI.
The 1960s and 1970s: Early AI Research
The 1960s and 1970s were a period of rapid growth for AI research. During this time, researchers developed a number of new AI
techniques, including:
•Symbolic reasoning: This approach to AI is based on the idea that intelligence can be represented using symbols.
•Search algorithms: These algorithms are used to find solutions to problems by systematically exploring all possible options.
•Machine learning: This field of AI is concerned with the development of algorithms that can learn from data.
The 1980s: The Rise of Expert Systems
The 1980s saw the rise of expert systems, which are AI programs that are designed to mimic the expertise of human experts.
Expert systems were used in a variety of applications, including medical diagnosis, financial planning, and engineering.
•MYCIN: MYCIN was an expert system developed at Stanford University in the 1970s. It was designed to diagnose and treat
bacterial infections.


The 1990s and 2000s: Machine Learning and Data-Driven Approaches


The 1990s and 2000s saw a shift away from symbolic AI and towards machine learning. Machine learning algorithms are able
to learn from data without being explicitly programmed. This made it possible to develop AI systems that could perform tasks
that were previously considered too difficult for machines, such as image recognition and natural language processing.
•Deep Blue: Deep Blue was an IBM chess computer that defeated world champion Garry Kasparov in 1997. Deep Blue was the
first computer to defeat a reigning world champion under tournament conditions.
The 2010s and Present: Deep Learning and Big Data
The 2010s have seen the rise of deep learning, a type of machine learning that uses artificial neural networks. Deep learning
has been responsible for some of the most impressive AI breakthroughs in recent years, such as the development of self-
driving cars and the ability to generate human-quality text.
•AlphaGo: AlphaGo was a computer program developed by DeepMind that defeated world champion Go player Lee Sedol in
2016. Go is a complex game that was previously considered to be too difficult for machines to master.
AI is a rapidly evolving field with the potential to revolutionize many aspects of our lives. As AI research continues to progress,
we can expect to see even more amazing advances in the years to come.
Additional notes:
•The history of AI is often divided into periods of "hype" and "disappointment," because researchers have often over-promised and under-delivered. However, each period of hype has led to new advances in the field.
•AI is a multidisciplinary field that draws on a variety of disciplines, including computer science, mathematics, psychology, and
philosophy.
•AI has the potential to benefit society in many ways, such as by improving healthcare, education, and transportation.
However, it is important to use AI responsibly and ethically.


Key Techniques
Deep learning is a subset of machine learning, which is in turn a subset of artificial intelligence (DL ⊂ ML ⊂ AI):
• Artificial Intelligence (AI). Example: a chatbot that can answer customer service queries.
• Machine Learning (ML). Example: a spam filter that learns to identify spam emails based on user feedback and email characteristics.
• Deep Learning (DL). Example: a facial recognition system that identifies individuals by analyzing facial features.


Artificial Intelligence vs. Machine Learning vs. Deep Learning

Artificial Intelligence (AI)
• The study/process that enables machines to mimic human behavior through particular algorithms.
• AI is a computer algorithm that exhibits intelligence through decision making.
• The efficiency of AI is largely the efficiency provided by ML and DL respectively.
• AI can be further broken down into various subfields such as robotics, natural language processing, computer vision, expert systems, and more.
• AI systems can be rule-based, knowledge-based, or data-driven.

Machine Learning (ML)
• The study that uses statistical methods to enable machines to improve with experience.
• ML is an AI technique that allows a system to learn from data.
• Less efficient than DL, as it struggles with higher-dimensional data or larger amounts of data.
• ML algorithms can be categorized as supervised, unsupervised, or reinforcement learning. In supervised learning, the algorithm is trained on labeled data, where the desired output is known. In unsupervised learning, the algorithm is trained on unlabeled data, where the desired output is unknown. In reinforcement learning, the algorithm learns by trial and error, receiving feedback in the form of rewards or punishments.

Deep Learning (DL)
• The study that uses neural networks (analogous to the neurons in the human brain) to imitate how the brain functions.
• DL is an ML technique that uses deep (more than one layer) neural networks to analyze data and produce output accordingly.
• More powerful than classic ML, as it scales readily to larger sets of data.
• DL algorithms are inspired by the structure and function of the human brain, and they are particularly well suited to tasks such as image and speech recognition.
• DL networks consist of multiple layers of interconnected neurons that process data in a hierarchical manner, allowing them to learn increasingly complex representations.

Agent Definition
An agent is an entity that can perceive its environment through sensors and
act upon that environment through actuators. It can be anything from a
simple thermostat to a complex robot or a software program.
Examples of Agents
 Humans: We perceive the world through our senses (sight,
hearing, touch, taste, smell) and act through our limbs and vocal
cords.
 Robots: Robots use sensors like cameras and lidar to perceive
their surroundings and actuators like motors and grippers to
interact with the world.
 Software Programs: Software agents, such as web crawlers or
game-playing programs, perceive input data and produce output
actions.
Environment
 The environment is the world or context in which the agent
operates. It can be fully observable or partially observable,
deterministic or stochastic, static or dynamic, discrete or
continuous.
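
To make the sensor/actuator loop concrete, here is a minimal Python sketch using the classic two-square vacuum world. The names (VacuumEnvironment, vacuum_agent) and the environment dynamics are illustrative assumptions, not part of the slides.

```python
# Percept -> action loop: the agent senses through env.percept() (sensors)
# and changes the world through env.apply() (actuators).

class VacuumEnvironment:
    def __init__(self):
        self.dirty = {"A": True, "B": True}   # both squares start dirty
        self.location = "A"

    def percept(self):
        # Sensors: where am I, and is this square dirty?
        return self.location, self.dirty[self.location]

    def apply(self, action):
        # Actuators: suck up dirt or move between squares.
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def vacuum_agent(percept):
    location, is_dirty = percept
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for _ in range(4):
    p = env.percept()
    a = vacuum_agent(p)
    print(p, "->", a)        # e.g. ('A', True) -> Suck
    env.apply(a)
```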


Rational Agent
• A rational agent, given a specific performance measure, always
takes the action that is expected to maximize that measure, given
the agent's percept history. This means that a rational agent:
• Perceives: It accurately senses its environment.
• Thinks: It reasons about its percepts to form beliefs about the world.
• Acts: It chooses actions that are expected to maximize its performance
measure.
• It's important to note that rationality is not about always making
the correct decision. It's about making the best decision given the
available information and the agent's goals.
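
As a minimal illustration of "maximize expected performance," the sketch below picks the action with the highest expected value under the agent's beliefs. The action set, outcome probabilities, and performance values are invented for illustration; a real agent would derive them from its percept history and domain knowledge.

```python
# Rational choice = argmax over actions of expected performance.

ACTIONS = ["stay", "move"]

# The agent's beliefs: P(outcome | action).
OUTCOME_MODEL = {
    "stay": {"safe": 0.9, "damaged": 0.1},
    "move": {"safe": 0.6, "damaged": 0.4},
}

PERFORMANCE = {"safe": 10, "damaged": -50}   # the performance measure

def expected_performance(action):
    return sum(p * PERFORMANCE[outcome]
               for outcome, p in OUTCOME_MODEL[action].items())

def rational_action():
    # Choose the action expected to maximize the performance measure.
    return max(ACTIONS, key=expected_performance)

print(rational_action())   # -> 'stay' (expected 4.0 vs. -14.0)
```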


The Structure of Agents


• An agent can be thought of as a system with four
main components:
• Perception: The agent's ability to receive
sensory input from its environment.
• Action: The agent's ability to take actions that
affect its environment.
• Goal: The desired outcome or objective of the
agent.
• Internal State: The agent's memory or
knowledge base, which allows it to maintain
information about the past and plan for the
future.
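
A skeletal class showing the four components together; the method names and dictionary-based state are illustrative design choices, not a prescribed structure.

```python
class Agent:
    """An agent with the four components named above."""

    def __init__(self, goal):
        self.goal = goal          # Goal: the desired outcome or objective
        self.state = {}           # Internal state: memory about the past

    def perceive(self, environment):
        # Perception: receive sensory input and fold it into the state.
        self.state["last_percept"] = environment.percept()

    def act(self, environment):
        # Action: choose and execute something that affects the environment.
        environment.apply(self.choose_action())

    def choose_action(self):
        # Filled in by concrete agent types (reflex, goal-based, ...).
        raise NotImplementedError
```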


Agent Categories

Simple Reflex Agents
• Definition: These agents act solely based on the current percept. They don't consider the past or future.
• Example: A thermostat that turns on the heater when the temperature drops below a certain threshold (sketched in code below).
• Limitations: Simple reflex agents are limited in their ability to handle complex environments. They can only react to immediate stimuli.

Model-based Reflex Agents
• Definition: These agents maintain an internal state that represents their knowledge of the world. They use this model to predict the future and make decisions.
• Example: A self-driving car that uses a map to navigate and avoid obstacles.
• Advantages: Model-based agents can handle dynamic environments and make more informed decisions.

Goal-based Agents
• Definition: These agents have explicit goals and use search and planning techniques to find actions that will achieve their goals.
• Example: A chess-playing program that uses a search algorithm to find the best move.
• Advantages: Goal-based agents can plan ahead and make strategic decisions.

Utility-based Agents
• Definition: These agents consider the expected utility of different actions and choose the action that maximizes their expected utility.
• Example: A recommendation system that suggests products based on a user's preferences and past purchases.
• Advantages: Utility-based agents can handle complex environments with uncertain outcomes.

Learning Agents
• Definition: These agents learn from experience and improve their performance over time.
• Components:
  • Learning Element: Responsible for making improvements.
  • Performance Element: Selects external actions.
  • Critic Element: Provides feedback on how the agent is doing.
  • Problem Generator: Suggests actions to explore new parts of the problem space.
• Example: A machine learning model that learns to recognize patterns in data.
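
The thermostat from the first category above is the smallest possible simple reflex agent: one condition-action rule mapping the current percept directly to an action, with no memory. The threshold value is an assumed example.

```python
# Simple reflex agent: acts on the current percept only.

def thermostat_agent(temperature, threshold=20.0):
    # Condition-action rule: IF temperature < threshold THEN heat.
    return "heater_on" if temperature < threshold else "heater_off"

for temp in (17.5, 19.9, 20.0, 23.1):
    print(temp, "->", thermostat_agent(temp))
```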

Types of Search


Uninformed (Blind) Search Algorithms


• The search algorithms in this section have no information about the goal node other than what is provided in the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length of actions. Uninformed search is also called blind search. These algorithms can only generate successors and distinguish a goal state from a non-goal state.

1. Depth First Search (DFS)
2. Breadth First Search (BFS)
3. Uniform Cost Search (UCS)

All three share one search loop and differ only in which frontier node is expanded next, as the sketch below shows.
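
As a rough sketch (an illustrative framing, not from the slides): a last-in-first-out frontier gives DFS, first-in-first-out gives BFS, and cheapest-path-first gives UCS.

```python
def generic_search(start, goal, successors, strategy="bfs"):
    """Uninformed graph search. `successors(state)` yields (next_state, step_cost).
    strategy: "dfs" (stack), "bfs" (FIFO queue), "ucs" (cheapest first)."""
    frontier = [(0, start, [start])]        # entries: (path_cost, state, path)
    visited = set()
    while frontier:
        if strategy == "dfs":
            entry = frontier.pop()          # LIFO: deepest node first
        elif strategy == "ucs":
            frontier.sort(key=lambda e: e[0])
            entry = frontier.pop(0)         # cheapest path cost first
        else:
            entry = frontier.pop(0)         # FIFO: shallowest node first
        cost, state, path = entry
        if state == goal:
            return path, cost
        if state in visited:
            continue
        visited.add(state)
        for nxt, step in successors(state):
            if nxt not in visited:
                frontier.append((cost + step, nxt, path + [nxt]))
    return None

# Tiny illustrative graph: {state: [(neighbor, step_cost), ...]}
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
print(generic_search("S", "G", lambda s: GRAPH[s], strategy="ucs"))  # (['S', 'B', 'G'], 5)
```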



Depth First Search (DFS)


Definition: A graph/tree traversal algorithm that explores as far as possible along each branch before
backtracking.
Basic Steps:
1. Start at the root (or any arbitrary node).
2. Mark the current node as visited.
3. Recursively visit each unvisited neighbor, diving deeper before exploring siblings.
4. Backtrack when no unvisited neighbors remain.
Advantages
• Low Memory Overhead: Only needs to store a stack of nodes on the current path (O(h) where h is max depth).
• Simple to Implement: Easily coded with recursion or an explicit stack.
• Good for Path Finding: Quickly finds a solution in deep trees if the target is far from the root.
Disadvantages
• Not Guaranteed to Find Shortest Path: May explore a long branch first, missing shorter routes.
• Risk of Infinite Loops: Without proper “visited” checks, can revisit nodes endlessly in cyclic graphs.
• Poor for Wide Trees: Can consume time exploring deep irrelevant branches if the solution is near the root or in
another subtree.
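
A minimal recursive sketch of the basic steps above; the adjacency-list graph is an assumed example.

```python
# Recursive DFS: dive as deep as possible before visiting siblings.
# The `visited` set is the "proper visited check" that prevents
# infinite loops in cyclic graphs.

def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)                    # mark the current node visited
    print(node)                          # process the node
    for neighbor in graph[node]:         # recurse into unvisited neighbors
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited                       # backtracking happens on return

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
dfs(graph, "A")   # visit order: A, B, D, C
```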


Breadth-First Search (BFS)


Definition: A graph/tree traversal algorithm that explores all neighbors at the current depth before moving
on to nodes at the next depth level.
Basic Steps:
1. Start at the root (or any arbitrary node).
2. Enqueue the starting node and mark it as visited.
3. While the queue is not empty:
• Dequeue a node, process it.
• Enqueue all its unvisited neighbors and mark them as visited.
Advantages
• Finds Shortest Path: Guarantees the shortest number of edges to the target in an unweighted graph.
• Complete: Will find a solution if one exists (in finite graphs).
• Level-Order Traversal: Useful for problems requiring processing by layers (e.g., nearest neighbors).
Disadvantages
• High Memory Usage: Stores all nodes at the current frontier in a queue (O(b^d) where b is branching factor and d is
depth).
• Slower for Deep Solutions: Explores all nodes at each level before reaching deeper targets.
• Less Efficient on Deep Trees: May consume time and space exploring wide shallow levels when the target is deep.
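
A minimal iterative sketch of the queue-based steps above, on an assumed example graph.

```python
from collections import deque

# BFS: process all nodes at the current depth before the next level.

def bfs(graph, start):
    visited = {start}
    queue = deque([start])              # enqueue start, mark it visited
    order = []
    while queue:                        # until the queue is empty
        node = queue.popleft()          # dequeue a node, process it
        order.append(node)
        for neighbor in graph[node]:    # enqueue unvisited neighbors
            if neighbor not in visited:
                visited.add(neighbor)   # mark when enqueued
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```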


Uniform Cost Search (UCS)

UCS expands the frontier node with the lowest path cost g(n) first, using a priority queue ordered by cost. With non-negative step costs it is complete and optimal; it corresponds to the "ucs" strategy in the skeleton above.

Informed Search Algorithms

• These algorithms have information about the goal state, which helps make the search more efficient. This information is obtained from a function called a heuristic.

1. Greedy Search
2. A*


Greedy Search

Greedy (best-first) search expands the node that appears closest to the goal according to the heuristic alone, i.e. f(n) = h(n). It is typically fast, but it is not guaranteed to find the least-cost path.


A* Search
Definition: An informed search algorithm that uses both the cost to reach a node (g(n)) and a heuristic estimate to the goal (h(n)) to select the next
node to expand.
Evaluation Function:
• f(n)=g(n)+h(n) where
• g(n) = cost from start to node n
• h(n) = estimated cost from n to goal

Basic Steps
1. Initialize an open list (priority queue) with the start node, where f(start) = h(start).
2. While open list is not empty:
• Remove node n with lowest f(n).
• If n is the goal, reconstruct path and return.
• Otherwise, generate successors, compute g and h for each, set f = g + h, and add to open list (or update if a better path is found).
3. Maintain a closed list to avoid re-expanding nodes.
Advantages
• Optimality: Finds the least-cost path if the heuristic h(n) is admissible (never overestimates) and consistent.
• Efficiency: Explores fewer nodes than uninformed algorithms by focusing on promising paths.
• Flexibility: Heuristic can be tailored to the problem to improve performance.
Disadvantages
• Memory Intensive: Stores all generated nodes in memory (open and closed lists), which can be large.
• Heuristic Quality Dependent: Poor heuristics degrade performance toward that of Uniform-Cost Search.
• Can Be Slow: with a weak heuristic, A* may still expand many nodes, so runtime can grow large on big graphs.
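
A compact sketch of these steps using Python's heapq as the open-list priority queue. The example graph and heuristic values are illustrative assumptions (chosen so the heuristic is admissible for this graph).

```python
import heapq

# A* over an adjacency-list graph with non-negative step costs.
# h(n) must never overestimate the true cost for the result to be optimal.

def a_star(graph, start, goal, h):
    open_list = [(h(start), 0, start, [start])]  # (f, g, node, path); f(start) = h(start)
    best_g = {start: 0}        # cheapest known g(n) per node (closed-list bookkeeping)
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # remove node with lowest f(n)
        if node == goal:
            return path, g                            # goal reached: return path and cost
        for neighbor, step in graph[node]:
            new_g = g + step
            if new_g < best_g.get(neighbor, float("inf")):   # better path found
                best_g[neighbor] = new_g
                heapq.heappush(open_list,
                               (new_g + h(neighbor),         # f = g + h
                                new_g, neighbor, path + [neighbor]))
    return None

GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)], "G": []}
H = {"S": 5, "A": 4, "B": 2, "G": 0}   # assumed heuristic estimates to G
print(a_star(GRAPH, "S", "G", H.get))  # (['S', 'A', 'B', 'G'], 6)
```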
