I Unit

Uploaded by Anbarasa Pandian

UNIT-I

INTRODUCTION
Introduction–Definition - Future of Artificial Intelligence – Characteristics of Intelligent
Agents–Typical Intelligent Agents – Problem Solving Approach to Typical AI problems. Search
Strategies- Uninformed- BFS-Greedy best first search-A* search

1 INTRODUCTION
1.1 Artificial Intelligence
Artificial Intelligence (AI) is the study of how to make computers do things which, at the
moment, people do better. Artificial Intelligence is a branch of computer science and
engineering that deals with intelligent behavior, learning, and adaptation in machines.

1.2 Applications of AI
AI has been applied in various fields, such as:
a. Gaming − AI plays a crucial role in strategic games such as chess, poker, and
tic-tac-toe, where a machine can evaluate a large number of possible positions
using heuristic knowledge.
b. Natural Language Processing − It is possible to interact with a computer that
understands natural language spoken by humans.
c. Expert Systems − There are some applications which integrate machine, software,
and special information to impart reasoning and advising. They provide explanation
and advice to the users.
d. Vision Systems − These systems understand, interpret, and comprehend visual input
on the computer. For example,
i. A surveillance airplane takes photographs, which are used to derive spatial
information or a map of the area.
ii. Doctors use clinical expert systems to diagnose patients.
iii. Police use computer software that can match the face of a criminal against
a stored portrait made by a forensic artist.
e. Speech Recognition − Some intelligent systems are capable of hearing and
comprehending language in terms of sentences and their meanings while a human
talks to them. They can handle different accents, slang words, background noise,
changes in a human's voice due to a cold, etc.
f. Handwriting Recognition − Handwriting recognition software reads text written
on paper with a pen or on a screen with a stylus. It can recognize the shapes of
the letters and convert them into editable text.
g. Intelligent Robots − Robots are able to perform tasks given by a human. They
have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bumps, and pressure. They have efficient processors,
multiple sensors, and large memory to exhibit intelligence. In addition, they are
capable of learning from their mistakes and can adapt to a new environment.

1.3 History of AI
Here is the history of AI during the 20th century:

Year    Milestone / Innovation

The Gestation of AI
1923    Karel Capek's play "Rossum's Universal Robots" (RUR) opens in London,
        the first use of the word "robot" in English.
1943    The first ideas of AI were put forward by Warren McCulloch and Walter
        Pitts. Their model of AI drew on three sources:
         Physiology (the function of neurons in the brain)
         Analysis of propositional logic
         Turing's theory of computation
1945    Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950    Alan Turing introduced the Turing Test for evaluating intelligence and
        published "Computing Machinery and Intelligence". Claude Shannon
        published a detailed analysis of chess playing as search.

The Birth of AI
1956    John McCarthy coined the term Artificial Intelligence. Demonstration of
        the first running AI program at Carnegie Mellon University.

Early enthusiasm, great expectations
        Newell & Simon presented the General Problem Solver (GPS) within the
        limited class of puzzles it could handle. GPS was the first program with
        a "thinking humanly" approach.
1958    John McCarthy invented the LISP programming language for AI.
1964    Danny Bobrow's dissertation at MIT showed that computers can understand
        natural language well enough to solve algebra word problems correctly.
1965    Joseph Weizenbaum at MIT built ELIZA, an interactive program that
        carries on a dialogue in English.

Knowledge-based systems
1969    The DENDRAL program was developed by Buchanan. It used domain-specific
        knowledge.
1970    The MYCIN program was developed to diagnose blood infections.

AI becomes an industry
1981    Japan announced the Fifth Generation project, a 10-year plan to build
        intelligent computers running PROLOG. The US formed the Microelectronics
        and Computer Technology Corporation (MCC) for research in AI.
1988    Judea Pearl's "Probabilistic Reasoning in Intelligent Systems" led to a
        new acceptance of probability theory in AI.

AI adopts the scientific method
1990    Hidden Markov Models come to dominate speech work in AI, based on two
        aspects:
         a rigorous mathematical theory
         training on a large corpus of real speech data

Emergence of intelligent agents
1990s   Major advances in all areas of AI:
         significant demonstrations in machine learning
         case-based reasoning
         multi-agent planning
         scheduling
         data mining, WebCrawler
         natural language understanding and translation
         vision, virtual reality
         games
1997    The Deep Blue chess program beats the then world chess champion, Garry
        Kasparov.
2000    Interactive robot pets become commercially available. MIT displays
        Kismet, a robot with a face that expresses emotions. The robot Nomad
        explores remote regions of Antarctica and locates meteorites.

1.4 Intelligence

The ability of a system to calculate, reason, perceive relationships and analogies,
learn from experience, store and retrieve information from memory, solve
problems, comprehend complex ideas, use natural language fluently, classify,
generalize, and adapt to new situations.

1.4.1 What is Intelligence Composed of?

Intelligence is composed of:
• Reasoning
• Learning
• Problem Solving
• Perception
• Linguistic Intelligence

1.4.2 Real Life Applications of AI Research Areas


 Expert Systems − Flight-tracking systems, clinical systems.
 Natural Language Processing − The Google Now feature, speech recognition,
automatic voice output.
 Neural Networks − Pattern recognition systems such as face recognition,
character recognition, handwriting recognition.
 Robotics − Industrial robots for moving, spraying, painting, precision
checking, drilling, cleaning, coating, carving, etc.
 Fuzzy Logic Systems − Consumer electronics, automobiles, etc.
1.5 Foundations of AI
i) Philosophy
• Provide relationship between physical brain & mental mind
• Provide information about knowledge origin & knowledge leads to action
ii) Mathematics
• Used to develop valid conclusions
• Deals with uncertain information
iii) Economics
• To make decisions & maximize payoff
iv) Neuroscience
 Related to brain processing
v) Psychology
• How do people behave, perceive, process information, and represent
knowledge (cognitive science)
• Helps AI in developing processes of thinking & action
• Provides insight into how humans and animals think
vi) Computer Engineering
 Building fast computers
vii) Control theory & Cybernetics
• Design systems that maximize an objective function over time
• Scientific study of control and communication in the animal and the machine
viii) Linguistics
• Linguistics is the scientific study of language. It involves analyzing
language form, language meaning, and language in context.

2) Definitions of AI
https://www.dummies.com/article/technology/information-technology/ai/general-ai/4-ways-define-
artificial-intelligence-ai-254174
Some definitions of AI are organized into four categories:
1. Systems that act like humans (Turing Test approach)
2. Systems that think like humans (cognitive modeling approach)
3. Systems that act rationally (rational agent approach)
4. Systems that think rationally (laws of thought approach)
1) Acting humanly: The Turing Test approach
 The Turing Test, proposed by Alan Turing

 The test is based on common features that match the most intelligent
entity: human beings.
 A computer is interrogated by a human via a teletype, and passes the test
if the interrogator cannot tell whether there is a computer or a human at the
other end.
 The computer would need to possess the following capabilities:
• Natural language processing − to enable it to communicate
successfully in English (or some other human language);
• Knowledge representation − to store information provided before or
during the interrogation;
• Automated reasoning − to use the stored information to answer
questions and to draw new conclusions;
• Machine learning − to adapt to new circumstances and to detect and
extrapolate patterns.
• Additional capabilities:
 Computer vision − to perceive objects (to understand the content of images)
 Robotics − to manipulate objects and move about
2) Thinking humanly: The cognitive modeling approach
When a computer thinks as a human, it performs tasks that require intelligence (as
contrasted with rote procedures) from a human to succeed, such as driving a car. To determine
whether a program thinks like a human, you must have some method of determining how humans
think, which the cognitive modeling approach defines. This model relies on three techniques:
i) Through introspection
Detecting and documenting the techniques used to achieve goals by monitoring one’s
own thought processes.
ii) Through psychological experiments
Observing a person’s behavior and adding it to a database of similar behaviors from other
persons given a similar set of circumstances, goals, resources, and environmental conditions
(among other things).
iii) Through brain imaging
Monitoring brain activity directly through various mechanical means, such as Computerized
Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance
Imaging (MRI), and Magnetoencephalography (MEG).
After creating a model, you can write a program that simulates the model. This category of
thinking humanly is often used in psychology and other fields in which modeling the human thought
process to create realistic simulations is essential.
3) Thinking rationally: The laws of thought approach
 A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to
interact with an environment based on the data at hand. The goal of this approach is to solve
problems logically, when possible.
 For example:
Premise 1: All men are mortal.
Premise 2: Socrates is a man.
Conclusion: Therefore, Socrates is mortal.
 These laws of thought were supposed to govern the operation of the mind, and initiated the field
of logic.
4) Acting rationally: The rational agent approach
 Acting rationally means acting so as to achieve one's goals, given one's beliefs.
 An agent is just something that perceives and acts.
 Making correct inferences is sometimes part of being a rational agent.
 One way to act rationally is to reason logically to the conclusion that a given
action will achieve one's goals, and then to act on that conclusion.

3) INTELLIGENT AGENTS

Introduction - Agents and Environments

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon
that environment through actuators.

Different types of agents

Example:
1) Agent: Human agent
Sensors: eyes, ears, etc.
Actuators: hands, legs, mouth
2) Agent: Robotic agent
Sensors: Cameras and Infrared range finder
Actuators: Motors

The term percept refers to the agent's perceptual inputs at any given instant.

PERCEPT SEQUENCE: Agent's percept sequence is the complete history of everything the agent has
ever perceived.

An agent's behavior is described by the agent function that maps any given percept sequence to an
action.

Structure of the Agents

The job of AI is to design the agent program that implements the agent function mapping percepts to
actions.

Intelligent agent = Architecture + Agent program

Agent programs take the current percept as input from the sensors and return an action to the
actuators

The agent program takes the current percept as input, and the agent function takes the entire percept
history

Architecture is a computing device used to run the agent program.

The agent programs use internal data structures that are updated as new percepts arrive.
These data structures are operated on by the agent's decision-making procedures to generate
an action choice, which is then passed to the architecture to be executed. Two types of agent
programs are:

1. A Skeleton Agent
2. A Table Lookup Agent

Skeleton Agent

The agent program receives only a single percept as its input.


If the percept is a new input then the agent updates the memory with the new percept

function SKELETON-AGENT(percept) returns action
  static: memory, the agent's memory of the world

  memory <- UPDATE-MEMORY(memory, percept)
  action <- CHOOSE-BEST-ACTION(memory)
  memory <- UPDATE-MEMORY(memory, action)
  return action
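The skeleton above can be sketched in Python. The memory representation (a plain list of percepts and actions) and the action-selection rule are illustrative assumptions, not part of the original pseudocode:

```python
# A minimal Python sketch of the skeleton agent. Memory is assumed to be a
# plain list; the decision rule is a placeholder for CHOOSE-BEST-ACTION.
class SkeletonAgent:
    def __init__(self):
        self.memory = []          # the agent's memory of the world

    def choose_best_action(self):
        # Placeholder rule: react to the most recently stored percept.
        return "Suck" if self.memory[-1] == "Dirty" else "NoOp"

    def __call__(self, percept):
        self.memory.append(percept)          # UPDATE-MEMORY with the percept
        action = self.choose_best_action()   # CHOOSE-BEST-ACTION
        self.memory.append(action)           # UPDATE-MEMORY with the action
        return action

agent = SkeletonAgent()
print(agent("Dirty"))   # -> Suck
print(agent("Clean"))   # -> NoOp
```

Note that the skeleton receives only a single percept per call; any history it needs must live in its own memory.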

Table-lookup agent

A table-driven agent keeps a table of actions indexed by percept sequences; the
current percept sequence is looked up in the table to find the corresponding action.

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences

  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action
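A Python sketch of the table-driven scheme, assuming the table is a dictionary keyed by the tuple of all percepts seen so far (the sample entries mirror the vacuum-world tabulation later in this unit, restricted to single-percept sequences):

```python
# A table-driven agent: the table maps percept sequences (tuples) to actions.
def make_table_driven_agent(table):
    percepts = []                         # static: the percept sequence so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts)) # LOOKUP(percepts, table)
    return agent

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
```

The drawback is visible immediately: a complete table needs one entry per possible percept sequence, so its size grows exponentially with the sequence length.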
Drawbacks of table lookup agent

• Huge table
• Take a long time to build the table
• No autonomy
• Even with learning, need a long time to learn the table entries

Agent program Example: The vacuum-cleaner world has just two locations: squares A and B. The
vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to
move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following:
if the current square is dirty, then suck, otherwise move to the other square.

Fig : A vacuum-cleaner world with just two locations

Partial tabulation of a simple agent function for the vacuum-cleaner world


• Percepts: location and status, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp

Percept Action
sequence

[A, Clean] Right

[A, Dirty] Suck

[B, Clean] Left

[B, Dirty] Suck

The agent program for a simple agent in the two-state vacuum environment, for the above
tabulation, is:

function VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
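The agent function above can be sketched in Python together with a tiny simulator; the world representation (a dict mapping square to status) is an assumption for illustration:

```python
# The two-square reflex vacuum agent, plus a toy environment simulator.
def vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def simulate(world, location, steps=4):
    """Apply the agent for a few steps; world maps square -> status."""
    for _ in range(steps):
        action = vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return world

print(simulate({"A": "Dirty", "B": "Dirty"}, "A"))  # both squares end up Clean
```

Starting in a dirty square A, the agent sucks, moves right, sucks again, and from then on just shuttles between the two clean squares.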
PEAS Representation
PEAS is a model used to describe the task environment of an AI agent.
P: Performance measure
E: Environment
A: Actuators
S: Sensors

Agent Type: An automated taxi driver
  Performance Measure: safe, fast, legal, comfortable trip
  Environment: roads, traffic, customers
  Actuators: steering, acceleration, brake, signal, horn, display
  Sensors: cameras, speedometer, GPS, engine sensors, keyboard

Agent Type: Hospital management system
  Performance Measure: patient's health, admission process, payment
  Environment: hospital, doctors, patients
  Actuators: prescription, diagnosis, scan report
  Sensors: symptoms, patient's response

Agent Type: Subject tutoring
  Performance Measure: maximize scores, improvement in students
  Environment: classroom, desk, chair, board, staff, students
  Actuators: smart displays, corrections
  Sensors: keyboard entries

Agent Type: ATM system
  Performance Measure: secure, reliable
  Environment: ATM machine, customer
  Actuators: display screens with options
  Sensors: touch screen

Agent Type: Satellite image analysis
  Performance Measure: correct image categorization
  Environment: downlink from satellite
  Actuators: display categorization of scene
  Sensors: color pixel arrays

Agent Type: Refinery controller
  Performance Measure: maximum purity, safety
  Environment: refinery, operators
  Actuators: valves, pumps, heaters, displays
  Sensors: temperature, pressure, chemical sensors

4 Characteristics of intelligent agent:


• Adaptive- It is capable of reacting flexibly to changes within its environment
and capable of learning from its own experiences, environment and
interaction with others.
• Autonomy-An agent is able to act without direct intervention from humans
or other agents.
• Sociability - An agent is capable of interacting in a peer-to-peer manner with
other agents or humans.
• Situatedness-The agent receives some form of sensory input from its
environment, and it performs some action that changes its environment in
some way. Examples of environments: the physical world and the Internet.

Steps in designing an agent:

1. Define problem area


2. Tabulate PEAS
3. Tabulate agent function
4. Design agent program
5. Design architecture to implement agent program
6. Implement an agent program

Structure of an AI Agent
The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of architecture
and agent program. It can be viewed as:
Agent = Architecture + Agent program
• Architecture: Architecture is machinery that an AI agent executes on.
• Agent Function: Agent function is used to map a percept to an action.
• Agent program: An agent program is an implementation of the agent function. An
agent program executes on the physical architecture to produce the function f.

5 Typical Intelligent Agents


Agents can be grouped into five classes based on their degree of perceived
intelligence and capability. All these agents can improve their performance and
generate better actions over time.
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
i) Simple Reflex Agent
 Agents take decisions on the basis of the current percepts and ignore the
rest ofthe percept history.
 These agents only succeed in the fully observable environment.
 The simple reflex agent works on the condition-action rule, which means it
maps the current state to an action. For example, a room-cleaner agent works
only if there is dirt in the room.
 Problems for the simple reflex agent design approach:
 They have very limited intelligence
 They do not have knowledge of non-perceptual parts of the current state
 Mostly too big to generate and to store.

o Not adaptive to changes in the environment.

Agent Program for Simple Reflex

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
ii) Model-based reflex agent
 The model-based agent can work in a partially observable environment,
and track the situation.
 A model-based agent has two important factors:
 Model: It is knowledge about "how things happen in the world," so it is
called a Model-based agent.
 Internal State: It is a representation of the current state based on percept
history.
 These agents have the model, "which is knowledge of the world," and
based on the model they perform actions.
 Updating the agent state requires information about:
 How the world evolves
 How the agent's action affects the world.

iii) Goal-based agents


 The knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.

 The agent needs to know its goal which describes desirable situations.
 Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.
 They choose an action, so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions
before deciding whether the goal is achieved or not. Such considerations
of different scenario are called searching and planning, which makes an
agent proactive.

iv) Utility-based agents


 These agents are similar to the goal-based agent but provide an extra
component of utility measurement which makes them different by
providing a measure of success at a given state.
 A utility-based agent acts based not only on goals but also on the best way to
achieve the goal.
 The Utility-based agent is useful when there are multiple possible
alternatives, and an agent has to choose in order to perform the best
action.
 The utility function maps each state to a real number to check how
efficiently each action achieves the goals.

v) Learning Agents
 A learning agent in AI is the type of agent which can learn from its past
experiences, or it has learning capabilities.
 It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
 A learning agent has mainly four conceptual components, which are:
 Learning element: It is responsible for making improvements by
learning from the environment.
 Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed performance
standard.
 Performance element: It is responsible for selecting external action
 Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look
for new ways to improve the performance.

Properties of task environments(Types of Environment)


1) Fully observable vs. Partially Observable
2) Deterministic vs. nondeterministic
3) Episodic vs. non-episodic
4) Single-agent vs. Multi-agent
5) Static vs. dynamic
6) Discrete vs. continuous
Fully observable vs. Partially Observable:
 If an agent’s sensor can sense or access the complete state of an environment
at each point of time then it is a fully observable environment
 If there is noise, the agent has inaccurate sensors, or some states of the
environment are missing from the agent's percepts, then the environment is
partially observable.
 If an agent has no sensors at all, the environment is called unobservable.
 Example
 Fully observable : 1)chess 2)Image analysis
 Partially observable. 1)Interactive science tutor 2) Military planning

Deterministic vs. nondeterministic(stochastic).


 If the next state of the environment is completely determined by the current
state and the actions selected by the agents, then we say the environment is
deterministic.
 In principle, an agent need not worry about uncertainty in an accessible,
deterministic environment.
 If the environment is inaccessible, however, then it may appear to be
Non-deterministic (Stochastic).
 This is particularly true if the environment is complex, making it hard to
keep track of all the inaccessible aspects.
 Example:
 Deterministic : i)Vacuum Cleaner ii) Video analysis iii) Trading agent
 Non-deterministic (Stochastic). i)Taxi Driving ii) Robot firing in crowd
Episodic vs. non-episodic(sequential).
 In an episodic environment, the agent's experience is divided into "episodes."
 Each episode consists of the agent perceiving and then acting.
 The quality of its action depends just on the episode itself,
because subsequent episodes do not depend on what actions occurred
in previous episodes.
 Non Episodic (Sequential): Next episode depends on the actions
taken in previous episodes.
 Example:
 Episodic environment :i)Blood testing for patient ii)Card games
 Non Episodic (Sequential) :Chess and Refinery controller
Single-agent vs Multi-agent
 If only one agent is involved in an environment, and it operates by itself,
then such an environment is called a single-agent environment.
 However, if multiple agents are operating in an environment, then such
an environment is called a multi-agent environment.
 Example
 single agent: i)Boat driving
 multi-agent: i) Fantasy football, ii)Trading agent, iii) War-games
Static vs. dynamic.
 Static: The environment does not change while the agent is deliberating.
 Dynamic: The environment changes while the agent is working and
making decisions as time goes by.
 Semi-dynamic: The environment itself does not change with the passage of
time, but the agent's performance score does.
 Example: chess when played with a clock (semi-dynamic).
 Dynamic: i) Taxi driving ii) Tutor
 Static: i) Crossword puzzle
Discrete vs. continuous.

 If there are a limited number of distinct, clearly defined percepts and
actions, we say that the environment is discrete.
 Continuous: The environment is continuous where state changes are
continuous and the agent needs to perceive continuously.
 Example
 Discrete: i) Chess, ii) 8 queen puzzle
 Continuous: i) Flight controller ii) Taxi driving

Environment Characteristics

Examples of task environments and their characteristics

Task Environment          Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle          Fully       Deterministic  Sequential  Static   Discrete    Single
Chess with a clock        Fully       Stochastic     Sequential  Semi     Discrete    Multi
Poker                     Partially   Stochastic     Sequential  Static   Discrete    Multi
Backgammon                Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving              Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis         Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Image analysis            Fully       Deterministic  Episodic    Semi     Continuous  Single
Part-picking robot        Partially   Stochastic     Episodic    Dynamic  Continuous  Single
Refinery controller       Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Interactive English tutor Partially   Stochastic     Sequential  Dynamic  Discrete    Multi

6) Problem Solving Approach to Typical AI problems


Problem solving – Introduction

Search is one of the operational tasks that characterize AI programs best. Almost every AI
program depends on a search procedure to perform its prescribed functions. Problems are

typically defined in terms of states, and solutions correspond to goal states.

Problem solving using search techniques involves two steps:

(i) Define the problem - The given problem is identified with its required initial and goal states.
(ii) Analyze the problem - The best search technique for the given problem is chosen from
the different AI search techniques, one that derives one or more goal states in a minimum
number of steps.

Types of problem

In general, a problem can be classified under any one of the following four types, depending
on two important properties:

(i) The amount of knowledge the agent has about the state and action descriptions.
(ii) How the agent is connected to its environment through its percepts and actions.

The four different types of problems are:

(i) Single state problem


(ii) Multiple state problem
(iii) Contingency problem
(iv) Exploration problem

Problem solving Agents

A problem-solving agent is a kind of goal-based agent, which decides what to do by
finding sequences of actions that lead to desirable states. The complexity here lies in the
knowledge of the formulation process (from the current state to the outcome action) of the agent.

If the agent has understood the definition of the problem, it is relatively straightforward to
construct a search process for finding solutions, which implies that a problem-solving agent
should be an intelligent agent that maximizes the performance measure.

The sequence of steps done by the intelligent agent to maximize the performance measure:
i) Goal formulation - based on the current situation; the first step in problem solving. Actions
that result in a failure can be rejected without further consideration.
(ii) Problem formulation - is the process of deciding what actions and states to consider and
follows goal formulation.
(iii) Search - is the process of finding different possible sequence of actions that lead to state of
known value, and choosing the best one from the states.
(iv) Solution - a search algorithm takes a problem as input and returns a solution in the form of
action sequence.
(v) Execution phase - if the solution exists, the action it recommends can be carried out.

Well-defined problems and solutions

A problem can be defined formally by four components:

1. initial state
2. successor function
3. goal test
4. path cost
The initial state that the agent starts in.

Successor function (S) - Given a particular state x, S(x) returns a set of states reachable from x by
any single action.

The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit
set of possible goal states, and the test simply checks whether the given state is one of them.

A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a
cost function that reflects its own performance measure.

A solution to a problem is a path from the initial state to a goal state

Operator - The set of possible actions available to the agent.

State space (or) state set space - The set of all possible states reachable from the initial state by any
sequence of actions.

Path (state space) - The sequence of action leading from one state to another

The effectiveness of a search can be measured using three factors. They are:

1. Is a solution identified?
2. Is it a good solution? If yes, then the path cost should be minimum.
3. The search cost of the problem, i.e., the time and memory required to find a solution.
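The four problem components (initial state, successor function, goal test, path cost) can be captured in a small, generic Python structure together with a breadth-first search over it. All names here are illustrative, and the toy state space at the end is an assumption for demonstration:

```python
# Generic breadth-first search over a problem given as
# (initial state, successor function, goal test); path cost = number of steps.
from collections import deque

def bfs(initial, successors, is_goal):
    """Return a shortest list of states from initial to a goal, or None."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: states are integers; actions double or increment a state.
path = bfs(1, lambda s: {s * 2, s + 1}, lambda s: s == 10)
print(path)  # a shortest path from 1 to 10, e.g. [1, 2, 4, 5, 10]
```

Because BFS expands paths in order of length, the first goal it pops from the frontier is reached by a minimum number of actions, which is exactly the "good solution" criterion above when each step costs 1.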

Example:

1. Missionaries and cannibals


Problem Statement: “Three missionaries and three cannibals are present at one side of
a river and need to cross the river. There is only one boat available. At any point of time, the
cannibals should not outnumber the missionaries on either bank. It is also
known that only two persons can occupy the boat at a time."
1. States: three numbers (i,j,k) representing the number of missionaries,
cannibals, and canoes on the left bank of the river.
2. Initial state: (3, 3, 1)
3. Operators: take one missionary, one cannibal, two missionaries, two
cannibals, or one missionary and one cannibal across the river in a given
direction (i.e., ten operators).
4. Goal Test: Nobody left on the initial river bank (reached state (0, 0, 0))
5. Path Cost: Number of crossings.
Solution = the sequence of states along the path: [(3,3,1) → (2,2,0) → (3,2,1) → (3,0,0)
→ (3,1,1) → (1,1,0) → (2,2,1) → (0,2,0) → (0,3,1) → (0,1,0) → (0,2,1) → (0,0,0)];
Cost = 11 crossings
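A solution like the one above can be found mechanically. The sketch below runs breadth-first search over the (missionaries, cannibals, boats) states of the formulation given; the function names are illustrative:

```python
# Breadth-first search for missionaries and cannibals; states are
# (missionaries, cannibals, boats) counts on the left bank.
from collections import deque

def valid(m, c):
    """No bank may have cannibals outnumbering the missionaries present."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    left_ok = (m == 0) or (m >= c)
    right_ok = (3 - m == 0) or (3 - m >= 3 - c)
    return left_ok and right_ok

def successors(state):
    m, c, b = state
    for dm, dc in [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]:
        nm, nc = (m - dm, c - dc) if b == 1 else (m + dm, c + dc)
        if valid(nm, nc):
            yield (nm, nc, 1 - b)

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

solution = solve()
print(len(solution) - 1)  # 11 crossings
```

BFS guarantees the minimum number of crossings, which for this puzzle is 11, matching the path cost listed above.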

2) 8 Puzzle
Problem Statement: Given an initial configuration of 8 numbered tiles on a 3 x 3
board, move the tiles in such a way as to produce a desired goal configuration of
the tiles.

States: 3 x 3 array configuration of the tiles on the board.


Initial State: Any state can be designated as the initial state.
Operators: Move Blank square Left, Right, up or down.
Goal: A particular configuration of the board.
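The four operators (move the blank Left, Right, Up, or Down) can be sketched as a successor function; the tuple-of-tuples state encoding with 0 for the blank is an assumption for illustration:

```python
# 8-puzzle successors: states are 3 x 3 tuples with 0 marking the blank
# square; each successor slides an adjacent tile into the blank.
def successors(state):
    """Yield all states reachable by one blank move."""
    cells = [list(row) for row in state]
    r, c = next((i, j) for i in range(3) for j in range(3) if cells[i][j] == 0)
    for dr, dc in [(0, -1), (0, 1), (-1, 0), (1, 0)]:  # Left, Right, Up, Down
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            cells[r][c], cells[nr][nc] = cells[nr][nc], cells[r][c]
            yield tuple(tuple(row) for row in cells)
            cells[r][c], cells[nr][nc] = cells[nr][nc], cells[r][c]  # undo

start = ((1, 2, 3), (4, 0, 5), (6, 7, 8))
print(len(list(successors(start))))  # blank in the centre: 4 moves
```

A blank in the centre has 4 successors, on an edge 3, and in a corner only 2, so the branching factor of this state space is between 2 and 4.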

3) "8 Queens" problem
Problem Statement: Consider the problem of trying to place 8 queens on a chess
board such that no queen can attack another queen.
1. Initial State - No queens on the board
2. Successor function - Add a queen to any empty square, such that it does not
attack another queen
3. Goal : 8 queens on the board
4. Path cost: each step cost 1
5. States: Any arrangement of 1 to 8 queens on the board is a state.

(1,1), (2,6), (3,8), (4,3), (5,7), (6,4), (7,2), (8,5)

5 × 5 Queens

(1,1), (2,4), (3,2), (4,5), (5,3)
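The goal test for N-queens can be sketched as a pairwise attack check over (row, column) placements; applying it to the two placements listed above confirms they are goal states:

```python
# Goal test for N-queens: no two queens share a row, column, or diagonal.
def no_attacks(placement):
    """placement is a list of (row, column) pairs, one per queen."""
    for i, (r1, c1) in enumerate(placement):
        for r2, c2 in placement[i + 1:]:
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                return False
    return True

eight = [(1, 1), (2, 6), (3, 8), (4, 3), (5, 7), (6, 4), (7, 2), (8, 5)]
five = [(1, 1), (2, 4), (3, 2), (4, 5), (5, 3)]
print(no_attacks(eight), no_attacks(five))  # True True
```

Two queens lie on the same diagonal exactly when their row difference equals their column difference in absolute value, which is the third condition in the check.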

4) Vacuum Cleaner Problem (Toy Problem)

Problem Statement:
Vacuum cleaner problem is a well-known search problem for an agent which
works on Artificial Intelligence. In this problem, our vacuum cleaner is our
agent. It is a goal based agent, and the goal of this agent, which is the vacuum
cleaner, is to clean up the whole area. So, in the classical vacuum cleaner
problem, we have two rooms and one vacuum cleaner. There is dirt in both the
rooms and it is to be cleaned. The vacuum cleaner is present in any one of these
rooms. So, we have to reach a state in which both the rooms are clean and
dust free. There are eight possible states in our vacuum cleaner
problem.

States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are
2 x 2^2 = 8 possible world states.
Initial state: Any state can be designated as the initial state.
Successor function: generated by the actions (Left, Right, Suck, NoOp).
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Here, states 1 and 2 are the initial states and states 7 and 8 are the final
(goal) states. This means that, initially, both rooms are full of dirt and the
vacuum cleaner can reside in either room; to reach the final goal state, both
rooms should be clean and the vacuum cleaner can again reside in either of
the two rooms. In general a vacuum cleaner can perform the following functions:
move left, move right, move forward, move backward and suck dust. But as there
are only two rooms in this problem, the vacuum cleaner performs only the
following functions here: move left, move right and suck. The performance of the
agent (vacuum cleaner) depends upon many factors such as the time taken in
cleaning, the path followed in cleaning, the total number of moves the agent
takes, etc. The two main factors usually considered for estimating the
performance of the agent are the time taken and the number of moves.
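The eight world states and the transition model can be sketched as follows; the (location, dirt_A, dirt_B) encoding, with rooms named 'A' and 'B', is an assumption made for illustration:

```python
# Two-room vacuum world sketch: a state is (location, dirt_in_A, dirt_in_B).
from itertools import product

# 2 locations x 2 x 2 dirt configurations = 8 world states
STATES = [(loc, da, db) for loc, da, db in product('AB', (True, False), (True, False))]

def result(state, action):
    """Transition model for the actions Left, Right, Suck and NoOp."""
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':          # clean the square the agent is standing on
        return (loc, dirt_a and loc != 'A', dirt_b and loc != 'B')
    return state                  # NoOp leaves the state unchanged

def goal_test(state):
    """Goal: both squares are clean."""
    return not state[1] and not state[2]
```

Starting from a fully dirty world, the sequence Suck, Right, Suck reaches a goal state.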

5) Cryptarithmetic Problem

In cryptarithmetic problems letters stand for digits, and the aim is to find a substitution of digits
for letters such that the resulting sum is arithmetically correct; each letter stands for a different digit.

Rules

 There should be no more than 10 distinct letters
 The sum should be the longest word
 The sum cannot be too long
 There must be a one-to-one mapping between letters and digits
 The leftmost letter cannot be zero in any word.

States: A cryptarithmetic puzzle with some letters replaced by digits

Initial state: No digits are assigned to the letters
Successor function: Replace all occurrences of a letter with a digit not already appearing in the
puzzle
Goal test: The puzzle contains only digits and represents a correct sum
Path cost: Zero

Example 1:

SEND
+MORE

MONEY

Solution : S=9 , E = 5, N = 6, D=7, M= 1, O= 0, R = 8, Y=2

Example 2:

FORTY
+TEN
+TEN

SIXTY

Solution : F=2, O=9, R=7, T=8 , Y=6, E=5, N=0
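A brute-force solver for Example 1 can be sketched as below; it simply tries every assignment of distinct digits to the eight letters of SEND + MORE = MONEY, rejecting assignments with a leading zero:

```python
# Brute-force SEND + MORE = MONEY solver: try every assignment of distinct
# digits to the eight letters S, E, N, D, M, O, R, Y.
from itertools import permutations

def solve_send_more_money():
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:              # leftmost letters cannot be zero
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:          # goal test: the sum is correct
            return {'S': s, 'E': e, 'N': n, 'D': d,
                    'M': m, 'O': o, 'R': r, 'Y': y}
    return None
```

The puzzle has a unique solution, 9567 + 1085 = 10652, which matches the assignment given above.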

6) Water Jug Problem
Problem statement: Given two jugs, a 4-gallon jug and a 3-gallon jug, a water pump and a
sink, how do you get exactly 2 gallons into the 4-gallon jug?

1) State: The state space for this problem can be described as the set of ordered pairs
of integers (x,y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3; x represents the
number of gallons of water in the 4-gallon jug and y represents the quantity of
water in the 3-gallon jug. The start state is (0,0). The goal state is (2,0).
2) Operations
i) Empty jug
ii) Fill jug
iii) Transfer
3) Initial state: (0,0)
4) Goal state: (2,0)
5) Solution sequence

Gallons in the 4-gallon jug   Gallons in the 3-gallon jug   Rule applied
            0                             0                 Initial state
            0                             3                 Rule 2
            3                             0                 Rule 9
            3                             3                 Rule 2
            4                             2                 Rule 7
            0                             2                 Rule 5
            2                             0                 Rule 9
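The solution sequence above can be reproduced by a breadth-first search over the (x, y) state space, using the fill, empty and transfer operations listed. The helper below is a sketch written for illustration, not part of the original rule set:

```python
# Water jug sketch: BFS over (x, y) states with fill, empty and pour moves.
from collections import deque

def water_jug(cap_x=4, cap_y=3, goal=2):
    """Return the list of (x, y) states from (0, 0) to x == goal gallons."""
    start = (0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        x, y = state
        if x == goal:                         # goal: 2 gallons in the 4-gallon jug
            path = []
            while state is not None:          # reconstruct the solution path
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_xy = min(x, cap_y - y)           # amount pourable from x into y
        pour_yx = min(y, cap_x - x)           # amount pourable from y into x
        for nxt in {(cap_x, y), (x, cap_y),   # fill either jug
                    (0, y), (x, 0),           # empty either jug
                    (x - pour_xy, y + pour_xy),
                    (x + pour_yx, y - pour_yx)}:
            if nxt not in parent:
                parent[nxt] = (x, y)
                frontier.append(nxt)
    return None
```

Breadth-first search finds a shortest solution of six steps (seven states), the same length as the sequence tabulated above.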

Real World Problem Examples


1. VLSI Layout Problem
Problem Statement: A VLSI layout problem requires positioning millions of components and
connections on a chip to minimize stray capacitances and maximize manufacturing yield.
1) States: Positions of components and wires on a chip.
2) Initial state: No component placed.
3) Successor function (operators): Place components, route wire.
4) Goal test: All components placed and connected as specified.
5) Path cost: May be complex. One can consider distance, capacity, and the number of connections per
component for path cost computation.
Detailed description of the VLSI layout problem
Once the logical design is made, the layout problem arises, and the problem is divided into two parts:
1) Cell layout 2) Channel routing, as below.
1) Cell layout
a) The basic components of the circuit are grouped into cells. The cells of standard size are
connected to each of the other cells.
b) Overlapping between the cell components is avoided, and some room is provided
for connecting wires.
2) Channel routing
a) The cell components are fixed on a chip.
b) The channel routing task is to search a particular route for each wire; the functional task is
achieved through the gaps between the cells.
c) A complex algorithm is designed to perform the channel routing. Although the degree of
complexity is high, it is always possible to solve the problem with a specific algorithm.
2. Robot Navigation
1) States: Locations, Position of actuators.
2) Initial state: Start position (dependent on the task).
3) Successor function (operators): Movement, actions of actuators.
4) Goal test: Task-dependent.
5) Path cost: May be very complex, Distance, energy consumption.
3. Assembly Sequencing
Problem Statement: In assembly problems, the aim is to find an order in which to assemble
the parts of some object. If the wrong order is chosen, there will be no way to add some part
later in the sequence without undoing some of the work already done.
1) States: Location of components.
2) Initial state: No components assembled.
3) Successor function (operators): Place component.
4) Goal test: System fully assembled.
5) Path cost: Number of moves
4. Route Finding Problem
1) States: Locations
2) Initial state: Starting point
3) Successor function (operators): Move from one location to another
4) Goal test: Arrive at certain location
5) Path cost: May be complex, involving factors like money, time, travel
comfort, and scenery.

21CS1612 – Artificial Intelligence

Searching for Solutions

Search techniques use an explicit search tree that is generated by the initial
state and the successor function, which together define the state space. In general,
we may have a search graph rather than a search tree when the same state can
be reached from multiple paths.

Example Route finding problem

The root of the search tree is a search node corresponding to the initial state,
In(Arad).

The first step is to test whether this is a goal state.

Apply the successor function to the current state, and generate a new set of
states

In this case, we get three new states: In(Sibiu),In(Timisoara), and In(Zerind).


Now we must choose which of these three possibilities to consider further.

CSE – Panimalar Engineering College 1


Continue choosing, testing, and expanding until either a solution is found or
there are no more states to be expanded.

The choice of which state to expand is determined by the search strategy

Tree Search algorithm

Task : Find a path to reach F from A

1. Start the sequence with the initial state and check whether it is a goal state or
not.

a. If it is a goal state, return success.

b. Otherwise perform the following sequence of steps:

From the initial state (current state), generate and expand the new set of states.
The collection of nodes that have been generated but not expanded is called the
fringe. Each element of the fringe is a leaf node, a node with no successors in
the tree.

Expanding A

Expanding B

Expanding C


Sequence of steps to reach the goal state F from A: A - C - F


2. Search strategy: In the above example we repeated the sequence of choosing,
testing and expanding until a solution was found or until there were no more states
to be expanded. The choice of which state to expand first is determined by the
search strategy.
3. Search tree: The tree which is constructed for the search process over the
state space.
4. Search node: A node of the search tree; the root of the search tree corresponds
to the initial state of the problem. Each node has the following components:

STATE: the state in the state space to which the node corresponds
PARENT-NODE: the node in the search tree that generated this node;
ACTION (RULE): the action that was applied to the parent to generate the
node;
PATH-COST: the cost, traditionally denoted by g(n) , of the path from the initial
state to the node
DEPTH: the number of steps along the path from the initial state.

The collection of nodes represented in the search tree is defined using set or
queue representation.

Set : The search strategy would be a function that selects the next node to be
expanded from the set

Queue: Collection of nodes are represented, using queue. The queue operations
are defined as:

MAKE-QUEUE(elements) - creates a queue with the given elements

EMPTY(queue) - returns true only if there are no more elements in the queue
FIRST(queue) - returns the first element of the queue
REMOVE-FIRST(queue) - removes the element at the front of the queue and
returns it
INSERT(element, queue) - inserts an element into the queue and returns the
resulting queue
INSERT-ALL(elements, queue) - inserts a set of elements into the queue and
returns the resulting queue

The general tree search algorithm with queue representation
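The tree-search loop built on these queue operations can be sketched as follows. The ordering imposed by REMOVE-FIRST and INSERT-ALL determines the strategy; FIFO ordering, as here, gives breadth-first search. As a tree search, the sketch does not guard against repeated states:

```python
# Generic TREE-SEARCH sketch: the fringe is a queue of (state, path) pairs.
from collections import deque

def tree_search(initial, goal_test, successors):
    fringe = deque([(initial, [initial])])   # MAKE-QUEUE with the root node
    while fringe:                            # loop while not EMPTY(fringe)
        state, path = fringe.popleft()       # REMOVE-FIRST(fringe)
        if goal_test(state):
            return path
        for s in successors(state):          # expand the current state
            fringe.append((s, path + [s]))   # INSERT-ALL at the end (FIFO)
    return None
```

With the earlier example (successors A -> B, C and C -> F), this returns the path A - C - F.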

Example: Route finding problem

Task: Find a path to reach E using the queuing function in the general tree search
algorithm

Measuring problem-solving performance

Search strategies are evaluated along four important criteria:

(i) Completeness: Is the strategy guaranteed to find a solution when there is
one?
(ii) Time complexity: The time taken to find a solution.
(iii) Space complexity: The memory needed to perform the search.
(iv) Optimality: If more than one way exists to derive the solution, is the
best one selected?

Definition of branching factor (b): The maximum number of successors of any
node in the search tree. The branching factor is used to find the space and
time complexity of a search strategy.


1. Solving Problems by Searching


The searching algorithms are divided into two categories

1. Uninformed Search Algorithms (Blind Search)


2. Informed Search Algorithms (Heuristic Search)

There are six Uninformed Search Algorithms

1. Breadth First Search


2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening depth-first search
6. Bidirectional Search

There are three Informed Search Algorithms

1. Best First Search(Greedy search)


2. A* Search
3. Memory Bounded A* search

Uninformed search (Blind search) vs Informed search (Heuristic search)

Blind search | Heuristic search

No information about the number of steps or the path cost from the current state to the goal state. | The path cost from the current state to the goal state is estimated, so that the minimum path cost can be selected as the next state.
Less effective search method. | More effective search method.
The problem is solved only with the given information. | Additional information can be added as an assumption to solve the problem.

1. Uninformed Search Algorithms (Blind Search)

a) Breadth-first search

Breadth-first search is a simple strategy in which the root node is expanded


first, then all the successors of the root node are expanded next, then their
successors, and so on. In general, all the nodes are expanded at a given depth
in the search tree before any nodes at the next level are expanded.

Breadth-first search can be implemented by calling TREE-SEARCH with an empty

fringe that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are
visited first will be expanded first.

In other words, calling TREE-SEARCH(Problem, FIFO-QUEUE())results in a


breadth-first search. The FIFO queue puts all newly generated successors at the
end of the queue, which means that shallow nodes are expanded before deeper
nodes

Breadth first search trees after node expansions

Example 1:

The traversal starts from the source node: a is pushed into the queue and marked as visited.

First iteration

 a is popped from the queue

 the children of a, i.e. b and c, are traversed
 b and c, which have not been traversed earlier, are:
o Pushed into the queue
o Marked as visited

Second iteration

 b is popped from the queue

 the children of b, i.e. d and e, are traversed
 d and e, which have not been traversed earlier, are:
o Pushed into the queue
o Marked as visited

Third iteration

 c is popped from the queue

 the children of c, i.e. f and g, are traversed
 f and g, which have not been traversed earlier, are:
o Pushed into the queue
o Marked as visited

Fourth iteration

 d is popped from the queue


 d does not have children

Fifth iteration

 e is popped from the queue


 e does not have children

Sixth iteration

 f is popped from the queue


 f does not have children

Seventh iteration

 g is popped from the queue


 g does not have children

The queue is empty and the loop terminates. All the nodes have been traversed using BFS.

Example 2: Route finding problem


If all the edges in a graph are of the same weight, then BFS can also be used to find the minimum
distance between the nodes in a graph.


Task: Find a path from S to G using BFS

The path at the 2nd depth level is selected, i.e. S-B-G (or) S-C-G.

Algorithm :
Step 1: Enter starting node on queue
Step 2: If queue is empty, then return fail and stop
Step 3: If first element on queue is goal node then return success and stop
else
remove and expand first element from queue and place children at end
of queue
Step 4: go to step 2
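The algorithm steps above can be sketched in Python; representing the graph as an adjacency dict is an assumption made for illustration:

```python
# BFS sketch following the algorithm steps above.
from collections import deque

def bfs(graph, start, goal):
    queue = deque([start])                 # Step 1: enter starting node on queue
    visited = {start}
    while queue:                           # Step 2: loop while queue is not empty
        node = queue.popleft()
        if node == goal:                   # Step 3: goal test on the front element
            return True
        for child in graph.get(node, []):  # expand: place children at end of queue
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return False                           # Step 2: queue emptied, return fail
```

Run on the a-to-g graph of Example 1, this visits the nodes level by level in the order described there.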

Time and space complexity:

Example:

Time complexity

= 1 + b + b^2 + ... + b^d

= O(b^d)

The space complexity is the same as the time complexity, because all the leaf nodes of
the tree must be maintained in memory at the same time: O(b^d)

Completeness: Yes

Optimality: Yes, provided the path cost is a non decreasing function of the
depth of the node

Advantage: Guaranteed to find the single solution at the shallowest depth level

Disadvantage: Suitable only for small problem instances (i.e. the number of
levels or the branching factor must be small)

b) Depth-first search

Depth-first search always expands the deepest node in the current fringe of
the search tree

The search proceeds immediately to the deepest level of the search tree, where
the nodes have no successors. As those nodes are expanded, they are dropped
from the fringe, so then the search "backs up" to the next shallowest node that
still has unexplored successors. This strategy can be implemented by TREE-
SEARCH with a last-in-first-out (LIFO) queue, also known as a stack.

Depth first search tree with 3 level expansion


Algorithm:

Step 1: Enter root node to stack

Step 2: Do until stack is not empty


a) Remove node
i) if node = goal stop
ii) push all children of node in stack
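The stack-based steps above can be sketched in Python; the adjacency-dict graph shape is, again, an assumption for illustration:

```python
# DFS sketch following the algorithm steps above.
def dfs(graph, start, goal):
    stack = [start]                            # Step 1: enter root node on stack
    visited = set()
    while stack:                               # Step 2: do until stack is empty
        node = stack.pop()                     # a) remove node
        if node == goal:                       # i) if node == goal, stop
            return True
        if node not in visited:
            visited.add(node)
            # ii) push all children of the node onto the stack
            stack.extend(graph.get(node, []))
    return False
```

The visited set prevents re-expansion when the same node is pushed more than once.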

Example 1

Example 2: Route finding problem


Task: Find a path from S to G using DFS

The path at the 3rd depth level is selected, i.e. S-A-D-G.

Time and space complexity:

In the worst case depth-first search has to expand all the nodes.

Time complexity: O(b^m), where m is the maximum depth.

The nodes are expanded along one particular path, so memory is required
only for the nodes on that path.

Space complexity: O(bm)

Example: b = 2, m = 2 gives b^m = 4

Completeness: No


Optimality: No

Advantage: If more than one solution exists, or the number of levels is high, then
DFS is best because exploration is done in only a small portion of the whole
space.

Disadvantage: Not guaranteed to find a solution

Difference Between BFS and DFS

1. BFS visits the nodes of a graph level by level. DFS visits nodes depth-wise:
it follows a path until it reaches a leaf or a node that has no unvisited
successors.
2. In BFS a node is fully explored before any other can begin. In DFS the
exploration of a node is suspended as soon as another unexplored node is found.
3. BFS uses a queue data structure to store unexplored nodes. DFS uses a stack
data structure to store unexplored nodes.
4. BFS is slower and requires more memory. DFS is faster and requires less
memory.
5. Some applications of BFS: finding all connected components in a graph;
finding the shortest path between two nodes; finding all nodes within one
connected component; testing a graph for bipartiteness. Some applications of
DFS: topological sorting; finding connected components; solving puzzles such
as mazes; finding strongly connected components; finding articulation points
(cut vertices) of the graph.

c) Uniform-cost search

Breadth-first search is optimal when all step costs are equal, because it always
expands the shallowest unexpanded node. By a simple extension, we can find an
algorithm that is optimal with any step cost function. Instead of expanding the
shallowest node, uniform-cost search expands the node n with the lowest
path cost. Note that if all step costs are equal, this is identical to breadth-first
search.

Uniform-cost search does not care about the number of steps a path has, but
only about their total cost.
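A sketch of uniform-cost search with a priority queue ordered on the path cost g(n); the graph encoding, with (neighbour, step-cost) pairs, is an assumption for illustration:

```python
# Uniform-cost search sketch: always expand the node with the lowest g(n).
import heapq

def uniform_cost_search(graph, start, goal):
    """graph maps a node to a list of (neighbour, step_cost) pairs."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest path cost first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None
```

With unit step costs this expands nodes in the same order as breadth-first search; with varying step costs it prefers a longer but cheaper path, as in the test graph below.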


Time and space complexity:

The time complexity is the same as for breadth-first search, because instead of
the depth level the minimum path cost is considered.

Time complexity: O(b^d)    Space complexity: O(b^d)

Completeness: Yes    Optimality: Yes

Advantage: Guaranteed to find the solution with minimum path cost.

Disadvantage: Suitable only for small problem instances.

Advantages:

o Uniform cost search is optimal because at every state the path with the
least cost is chosen.

Disadvantages:

o It does not care about the number of steps involved in searching and is only
concerned about path cost, due to which this algorithm may get stuck in
an infinite loop.

d) Depth - limited search

A depth-limited search algorithm is similar to depth-first search with a
predetermined limit. Depth-limited search can solve the drawback of the infinite
path in depth-first search. In this algorithm, the node at the depth limit is
treated as if it has no further successor nodes.

A cut-off (maximum depth level) is introduced in this search technique to
overcome the disadvantage of depth-first search. The cut-off value depends on
the number of states.

Example: Cutoff or limit is 2

Answer : S A C D B I J

Time and space complexity:

The worst-case time complexity is equivalent to BFS and to worst-case DFS.

Time complexity: O(b^l), where l is the depth limit.

Only the nodes expanded along one particular direction need to be stored.

Space complexity: O(bl)

Optimality: No, because it is not guaranteed to find the shortest solution first.

Completeness: Yes, guaranteed to find the solution if it exists within the depth limit.


Advantage: Cut off level is introduced in the DFS technique

Disadvantage : Not guaranteed to find the optimal solution.



e) Iterative deepening search

 The iterative deepening algorithm is a combination of DFS and BFS


algorithms. This search algorithm finds out the best depth limit and does it
by gradually increasing the limit until a goal is found.
 This algorithm performs depth-first search up to a certain "depth limit",
and it keeps increasing the depth limit after each iteration until the goal
node is found.
 This Search algorithm combines the benefits of Breadth-first search's fast
search and depth-first search's memory efficiency.
 Iterative deepening is a useful uninformed search when the search
space is large and the depth of the goal node is unknown.

Example 1

1st iteration: A

2nd iteration: A, B, C
3rd iteration: A, B, D, E, C, F, G
4th iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.


Example 2: Route finding problem

Task: Find a path from A to G

Limit = 0

Limit = 1

Limit = 2

Solution: The goal state G can be reached from A in four ways. They are:

1. A - B - D - E - G ------ Limit 4
2. A - B - D - E - G ------ Limit 4
3. A - C - E - G ------ Limit 3
4. A - F - G ------ Limit 2

Since it is an iterative deepening search, it selects the lowest depth limit, i.e. A-F-G is
selected as the solution path.

The iterative deepening search algorithm:

function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
  inputs: problem
  for depth <- 0 to infinity do
    result <- DEPTH-LIMITED-SEARCH(problem, depth)
    if result != cutoff then return result
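The pseudocode can be sketched in Python. DEPTH-LIMITED-SEARCH here returns a path, a cutoff sentinel (the limit was hit somewhere), or None for outright failure; the adjacency-dict graph encoding is an assumption:

```python
# Iterative deepening sketch: repeated depth-limited search with growing limit.
CUTOFF = object()   # sentinel: the depth limit was reached somewhere

def depth_limited_search(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for depth in range(max_depth + 1):       # for depth <- 0 to infinity do
        result = depth_limited_search(graph, start, goal, depth)
        if result is not CUTOFF:             # solution found, or definite failure
            return result
    return None
```

On the tree of Example 1 (goal K), the search fails with cutoff at limits 0 to 2 and finds the path A, C, F, K at limit 3, matching the fourth iteration shown above.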

Time and space complexity:

Iterative deepening combines the advantages of breadth-first search and depth-first
search: the expansion of states is done as in BFS, while the memory requirement
is equivalent to that of DFS.

Time complexity: O(b^d)

Space complexity: O(bd)

Optimality: Yes, because the order of expansion of states is similar to breadth


first search.

Completeness: yes, guaranteed to find the solution if it exists.


Advantage: This method is preferred for large state space and the depth of the
search is not known.

Disadvantage: Many states are expanded multiple times

Example: The state D is expanded twice with limit 2

f) Bidirectional search

The bidirectional search algorithm runs two simultaneous searches, one from the
initial state, called forward search, and the other from the goal node, called
backward search, to find the goal node.

Bidirectional search replaces one single search graph with two small subgraphs in
which one starts the search from an initial vertex and other starts from goal
vertex. The search stops when these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
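A sketch of bidirectional search built on BFS, assuming an undirected graph given as an adjacency dict (so backward successors equal predecessors); the two frontiers are expanded a layer at a time until they intersect:

```python
# Bidirectional BFS sketch: expand forward and backward frontiers alternately.
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return a node where the forward and backward searches meet, or None."""
    if start == goal:
        return start
    seen_fwd, seen_bwd = {start}, {goal}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])
    while frontier_fwd and frontier_bwd:
        # Expand one layer of the forward search.
        for _ in range(len(frontier_fwd)):
            node = frontier_fwd.popleft()
            for n in graph.get(node, []):
                if n in seen_bwd:
                    return n                  # the two searches intersect here
                if n not in seen_fwd:
                    seen_fwd.add(n)
                    frontier_fwd.append(n)
        # Expand one layer of the backward search.
        for _ in range(len(frontier_bwd)):
            node = frontier_bwd.popleft()
            for n in graph.get(node, []):
                if n in seen_fwd:
                    return n
                if n not in seen_bwd:
                    seen_bwd.add(n)
                    frontier_bwd.append(n)
    return None
```

On a simple chain A-B-C-D-E, the searches from A and E meet at the middle node C after expanding only two layers each.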


EXAMPLE 1:

FORWARD SEARCH: AEBGFH

BACKWARD SEARCH: O K N I J H

Step 1: Say, A is the initial node and O is the goal node, and H is the intersection
node.

Step 2: We will start searching simultaneously from start to goal node and
backward from goal to start node.

Step 3: Whenever the forward search and backward search intersect at one
node, then the searching stops.

EXAMPLE 2

Task : Find a path from A to E.


Search from forward (A) :

Search from backward (E) :

Time and space complexity:

The forward and backward searches done at the same time will lead to the
solution in O(2bd/2) = O(bd/2)step, because search is done to go only halfway
If the two searches meet at all, the nodes of at least one of them must all be
retained in memory requires O(bd/2) space.
Optimality: Yes, because the order of expansion of states is done in both the
directions.

Completeness: Yes, guaranteed to find the solution if it exists.

Advantage : Time and space complexity is reduced.

Disadvantage: If the two searches (forward and backward) do not meet at all,

complexity arises in the search technique. In the backward search, calculating
predecessors is a difficult task. If more than one goal state exists, then an
explicit multiple-state search is required.

Comparing uninformed search strategies

Criterion   Breadth-First   Uniform-Cost   Depth-First   Depth-Limited   Iterative Deepening   Bidirectional
Complete    Yes             Yes            No            No              Yes                   Yes
Time        O(b^d)          O(b^d)         O(b^m)        O(b^l)          O(b^d)                O(b^(d/2))
Space       O(b^d)          O(b^d)         O(bm)         O(bl)           O(bd)                 O(b^(d/2))
Optimal     Yes             Yes            No            No              Yes                   Yes

