Artificial Intelligence (CSE)
UNIT-I
1.1 Introduction
Formal Definition of AI:
AI is a branch of computer science concerned with the study and creation of computer
systems that exhibit
some form of intelligence
OR
those characteristics which we associate with intelligence in human behavior.
AI is a broad area consisting of different fields, from machine vision and expert systems to the
creation of machines that can "think". In order to classify machines as "thinking", it is necessary
to define intelligence.
Intelligence:
Intelligence is a property of mind that encompasses many related mental abilities, such as the
capacity to
plan
solve problems
think abstractly
comprehend ideas and language and
learn
Intelligent System:
An intelligent system is a system that can imitate and automate some intelligent behaviors of
human beings. Expert systems, intelligent agents and knowledge-based systems are examples
of intelligent systems. Intelligent systems perform search and optimization along with learning
capabilities. So they are technologically advanced machines that perceive and respond to the
world around them. The field of intelligent systems also focuses on how these systems interact
with human users in changing and dynamic physical and social environments.
Categories of AI Systems
• Systems that think like humans
• Systems that act like humans
• Systems that think rationally
• Systems that act rationally
Systems that think like humans
Most of the time it is a black box where we are not clear about our thought process. One has
to know the functioning of the brain and its mechanism for processing information. It is an area of
cognitive science. The stimuli are converted into mental representations. Cognitive processes
manipulate these representations to build new representations that are used to generate actions. A
neural network is a computing model for processing information similar to the brain.
Systems that act like humans
The overall behavior of the system should be human-like. It could be achieved by observation.
Systems that think rationally
Such systems rely on logic rather than humans to measure correctness. For thinking rationally
or logically, logic formulas and theories are used for synthesizing outcomes.
For example,
given John is a human and all humans are mortal, then one can conclude logically that John
is mortal.
Not all intelligent behavior is mediated by logical deliberation.
Systems that act rationally
Rational behavior means doing the right thing. Even if the method is illogical, the observed
behavior must be rational.
Foundations of AI
The foundation of AI is based on:
Philosophy
Mathematics
Economics
Neuroscience
Control Theory
Linguistics
Computer Engineering
Psychology
Philosophy:
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Mathematics:
• More formal logical methods
• Boolean logic
• Fuzzy logic
• Uncertainty
• The basis for most modern approaches to handling uncertainty in AI
applications: probability theory, modal and temporal logics
Economics:
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
Neuroscience:
• How does the brain work?
• Early studies (1824) relied on injured and abnormal people to understand what
parts of the brain work
• More recent studies use accurate sensors to correlate brain activity to human
thought
• By monitoring individual neurons, monkeys can now control a computer
mouse using thought alone
• How close are we to having a mechanical brain?
• Parallel computation, remapping, interconnections...
Control Theory:
• Machines can modify their behavior in response to the environment (sense/action loop)
• Water-flow regulator, steam engine governor, thermostat
• The theory of stable feedback systems (1894)
• Build systems that transition from an initial state to a goal state with minimum energy
• In 1950, control theory could only describe linear systems, and AI largely rose as a
response to this shortcoming
Linguistics:
• How does language relate to thought?
• Speech demonstrates so much of human intelligence
• Analysis of human language reveals thought taking place in ways not understood
in other settings
• Children can create sentences they have never heard before
• Language and thought are believed to be tightly intertwined
Computer Engineering:
• How can we build an efficient computer?
Psychology:
• How do humans and animals think and act?
History of AI
Maturation / Gestation of Artificial Intelligence (1943-1952):
• Year 1943: The first work which is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician, pioneered machine learning in
1950. Turing published "Computing Machinery and Intelligence", in which he proposed a
test that can check a machine's ability to exhibit intelligent behavior equivalent to human
intelligence, called the Turing test.
The birth of Artificial Intelligence (1952-1956):
• Year 1955: Allen Newell and Herbert A. Simon created the first artificial
intelligence program, which was named the "Logic Theorist". This program proved 38
of 52 mathematics theorems, and found new and more elegant proofs for some theorems.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an
academic field.
• At that time high-level computer languages such as FORTRAN, LISP, and COBOL were
invented. And the enthusiasm for AI was very high at that time.
The Golden Years - Early Enthusiasm (1956-1974):
• Year 1966: The researchers emphasized developing algorithms which can solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was
named ELIZA.
• Year 1972: The first intelligent humanoid robot was built in Japan, which was named
WABOT-1.
The first AI winter (1974-1980):
• The duration between the years 1974 to 1980 was the first AI winter duration. AI winter
refers to the time period where computer scientists dealt with a severe shortage of funding
from government for AI research.
• During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987):
• Year 1980: After the AI winter duration, AI came back with "Expert Systems". Expert
systems were programmed to emulate the decision-making ability of a human expert.
• In the year 1980, the first national conference of the American Association for Artificial
Intelligence was held at Stanford University.
The second AI winter (1987-1993):
• The duration between the years 1987 to 1993 was the second AI winter duration.
• Again, investors and government stopped funding AI research due to high cost and
inefficient results. Expert systems such as XCON proved very costly to maintain.
The emergence of intelligent agents (1993-2011):
• Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry
Kasparov, and became the first computer to beat a world chess champion.
• Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
• Year 2006: AI came into the business world by the year 2006. Companies like Facebook,
Twitter and Netflix also started using AI.
Deep learning, big data and artificial general intelligence (2011-present):
• Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show, where it had to
solve complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
• Year 2012: Google launched an Android app feature, "Google Now", which was able
to provide information to the user as a prediction.
• Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the
infamous "Turing test."
Sub-areas of AI
Artificial Intelligence has various subfields in its domain. All the subfields can be
distinguished by the techniques they use.
Neural Networks
• Neural Networks are inspired by human brains and copy the working process of human
brains. They are based on a collection of connected units or nodes called artificial
neurons or perceptrons.
• The objective of this approach is to solve problems in the same way that a human
brain does.
Vision
• In Artificial Intelligence, vision (visioning applications) means processing any
image/video source to extract meaningful information and take action based on that.
• In this field of Artificial Intelligence we have also developed robots which
acquire human activities within some days, or sometimes some hours, and train
themselves. For example: object recognition, image understanding, playing robots, etc.
Machine Learning
• The capability of Artificial Intelligence systems to learn by extracting patterns from data
is known as Machine Learning.
• It is an approach or subset of Artificial Intelligence that is based on the idea that
machines can be given access to data along with the ability to learn from it.
Speech Processing / Speech Recognition
• Speech processing / recognition is the ability of a computer and a program to identify
words and phrases in spoken language and convert them to a machine-readable format.
• Real-life examples of speech processing are Google Assistant, Amazon Alexa and
Apple's Siri, etc.
Robotics
• Robots are artificial agents which behave like humans and are built for the purpose of
manipulating objects: perceiving, picking, moving, or modifying the physical
properties of objects, thereby freeing manpower from repetitive
functions without getting bored, distracted, or exhausted.
Applications
Some of the applications are given below:
• Business: financial strategies, give advice
• Engineering: check designs, offer suggestions to create new products
• Manufacturing: assembly, inspection & maintenance
• Mining: used when conditions are dangerous
• Hospital: monitoring, diagnosing & prescribing
• Education: in teaching
• Household: advice on cooking, shopping, etc.
• Farming: prune trees & selectively harvest mixed crops
Structure of Agents
Definition: An agent perceives its environment via sensors and acts upon that environment
through actuators.
[Figure: an agent interacting with its environment through sensors and actuators]
agent = architecture + program
Example: Vacuum Cleaner World
iRobot Corporation
Founder: Rodney Brooks (MIT)
• Powerful suction and rotating brushes
• Automatically navigates for best cleaning coverage
• Cleans under and around furniture, into corners and along wall edges
• Self-adjusts from carpets to hard floors and back again
• Automatically avoids stairs, drop-offs and off-limit areas
• Simple to use: just press the Clean button and Roomba does the rest
Rational Agents:
An agent should strive to "do the right thing", based on what it can perceive and
the actions it can perform.
The right action is the one that will cause the agent to be most successful.
Definition:
For each possible percept sequence, a rational agent should select an action that maximizes
its performance measure (in expectation), given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
Fully Observable / Partially Observable Environments
• If an agent's sensors give it access to the complete state of the environment
needed to choose an action, the environment is fully observable.
  (e.g., chess)
• An environment might be partially observable because of noisy and inaccurate
sensors, or because parts of the state are simply missing from the sensor data.
  (e.g., Kriegspiel chess)
Characterizing a Task Environment
PEAS:
Performance measure, Environment, Actuators, Sensors.
Example: the task of designing a self-driving car
Performance measure: Safe, fast, legal, comfortable trip
Environment: Roads, other traffic, pedestrians
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, LIDAR (light/radar), speedometer, GPS, odometer, engine sensors,
keyboard
Types of Agents
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
1. Simple reflex agents:
• Agents do not have memory of past world states or percepts. So, actions depend solely on
the current percept.
• In this agent, the action becomes a "reflex."
• So it uses condition-action rules.
• The agent selects actions on the basis of the current percept only.
Ex: If the tail-light of the car in front is red, then brake.
[Figure: Schematic diagram of a simple reflex agent]
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
Example: The agent program for a simple reflex agent in the two-state vacuum
environment.
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
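As a concrete illustration, here is a direct Python transcription of this two-state vacuum
agent (a sketch; the location labels A and B and the action names come from the pseudocode
above):

# Python transcription of REFLEX-VACUUM-AGENT: the action is a pure
# "reflex" on the current percept (location, status), with no memory.
def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # -> Suck
print(reflex_vacuum_agent("A", "Clean"))  # -> Right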
2. Model-based reflex agents
They use a model of the world to choose their actions. They maintain an internal state.
Model = knowledge about "how things happen in the world".
Internal State = a representation of unobserved aspects of the current state, depending on
the percept history.
Updating the state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
[Figure: Schematic diagram of a model-based reflex agent]
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent:
    state, the agent's current conception of the world state
    model, a description of how the next state depends on the current state and action
    rules, a set of condition-action rules
    action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
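A rough Python mirror of this pseudocode is sketched below; the update_state function and the
condition-action rules are hypothetical placeholders that a concrete environment would supply:

# Rough Python mirror of the pseudocode; update_state and the condition-action
# rules are hypothetical placeholders that a concrete environment would supply.
class ModelBasedReflexAgent:
    def __init__(self, model, rules, update_state):
        self.state = None              # the agent's current conception of the world state
        self.model = model             # how the next state depends on state and action
        self.rules = rules             # list of (condition, action) pairs
        self.update_state = update_state
        self.action = None             # the most recent action, initially none

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.action, percept, self.model)
        for condition, action in self.rules:
            if condition(self.state):  # first matching condition-action rule fires
                self.action = action
                break
        return self.action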
An example: Brooks' Subsumption Architecture
Main idea: build complex, intelligent robots by decomposing behaviors into a hierarchy of
skills, each defining a percept-action cycle for one very specific task.
• Examples: collision avoidance, exploring, recognizing doorways, etc.
• Each behavior is modeled by a finite-state machine with a few states (though each state may
correspond to a complex function or module; this provides internal state to the agent).
• Behaviors are loosely coupled via asynchronous interactions.
3. Goal-based agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible
than the reflex agent, since the knowledge supporting a decision is explicitly modeled, thereby
allowing for modifications.
Goal = a description of desirable situations.
[Figure: Schematic diagram of a goal-based agent]
The agent keeps track of the world state as well as the set of goals it is trying to achieve, and
chooses actions that will (eventually) lead to the goal(s).
More flexible than reflex agents; may involve search and planning.
4. Utility-based agents
They choose actions based on a preference (utility) for each state.
Goals are inadequate when:
• There are conflicting goals, out of which only a few can be achieved.
• Goals have some uncertainty of being achieved, and you need to weigh the likelihood of
success against the importance of a goal.
[Figure: Schematic diagram of a utility-based agent]
5. Learning agents
• Learning agents are agents which adapt and improve over time.
• Learning is more complicated when the agent needs to learn utility information:
reinforcement learning.
[Figure: Schematic diagram of a learning agent]
Overview of Structure of Agents:
An agent perceives and acts in an environment, has an architecture, and is implemented by an
agent program.
A rational agent always chooses the action which maximizes its expected performance, given its
percept sequence so far.
An autonomous agent uses its own experience rather than the built-in knowledge of the
environment given by the designer.
An agent program maps from percepts to actions and updates its internal state.
• Simple reflex agents
  are based on condition-action rules, implemented with an appropriate production
  system. They are stateless devices which do not have memory of past world
  states.
• Agents with memory - Model-based reflex agents
  have internal state, which is used to keep track of past states of the world.
• Agents with goals - Goal-based agents
  are agents that, in addition to state information, have goal information that
  describes desirable situations. Agents of this kind take future events into
  consideration.
• Utility-based agents
  base their decisions on classic axiomatic utility theory in order to act rationally.
• Learning agents
  have the ability to improve performance through learning.
Representing knowledge is important for successful agent design.
1.2 Problem Solving
Well-defined Problems and Solutions:
A problem can be defined formally by five components:
Initial State:
The state where the agent starts, from which it performs the search to reach the goal state.
Actions:
The set of applicable actions that can be performed in a state S.
Transition Model:
A description of what each action does.
Goal Test:
Determines whether a given state is a goal state or not.
Path Cost:
Assigns a numeric cost to each path.
State Space
• Together, the initial state, actions and transition model implicitly define the state space of
the problem: the set of all states reachable from the initial state by any sequence of
actions.
• The state space forms a directed network or graph in which the nodes are states and the
links between nodes are actions.
• A path in the state space is a sequence of states connected by a sequence of actions.
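To make the five components concrete, here is a minimal Python sketch for a small fragment of
the Romania route-finding problem introduced next. The function names are illustrative; the
road distances are the standard ones from that example.

# A minimal sketch of the five problem components for a fragment of the
# Romania route-finding problem.
ROADS = {
    "Arad":      {"Sibiu": 140},
    "Sibiu":     {"Arad": 140, "Fagaras": 99},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211},
}
initial_state = "Arad"

def actions(state):             # applicable actions in state S: drivable neighbors
    return list(ROADS[state])

def result(state, action):      # transition model: driving to a city puts you there
    return action

def goal_test(state):           # goal test
    return state == "Bucharest"

def step_cost(state, action):   # path cost = sum of step costs along the path
    return ROADS[state][action]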
Example: Travelling in Romania
Scenario:
• On holiday in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest
Formulate Goal: Be in Bucharest
Formulate Problem:
States: various cities
Actions: drive between cities
Find Solution: an appropriate sequence of cities,
e.g.: Arad, Sibiu, Fagaras, Bucharest
Formulating Problems:
• We proposed a formulation of the problem of getting to Bucharest in terms of the initial
state, actions, transition model, goal test and path cost.
• This formulation seems reasonable, but it is still a model: an abstract mathematical
description.
• We left out many details of travelling, like the travelling companions, the current radio
program, the scenery out of the window, the proximity of law enforcement officers, the
distance to the next rest stop, the condition of the road, the weather and so on.
• These details are irrelevant to the problem of finding a route to Bucharest. The process of
removing detail from a representation is called abstraction.
Problem Types
1. Single-state problem
2. Multiple-state problem
3. Contingency problem
Single-state problem
• observable (at least the initial state)
• deterministic
• static
• discrete
Multiple-state problem
• partially observable (initial state not observable)
• deterministic
• static
• discrete
Contingency problem
• partially observable (initial state not observable)
• non-deterministic
Single-state problem formulation
Defined by the following four items:
1. Initial state
   Example: Arad
2. Successor function S
   Example: S(Arad) = { (goZerind, Zerind), (goSibiu, Sibiu), ... }
3. Goal test
   Example: x = Bucharest (explicit test)
            noDirt(x) (implicit test)
4. Path cost (optional)
   Example: sum of distances, number of operators executed, etc.
Solution:
A sequence of operators leading from the initial state to a goal state
Selecting a State Space
Abstraction
The real world is absurdly complex, so the state space must be abstracted for problem solving.
(Abstract) state = set of real states
(Abstract) operator = complex combination of real actions
Example: Arad -> Zerind represents a complex set of possible routes
(Abstract) solution = set of real paths that are solutions in the real world
Problems on State Space Representation
Example: The 8-puzzle
• It is also called the 8-puzzle problem or sliding puzzle problem.
• An N-puzzle consists of N tiles (N+1 cells, including one empty cell) where N can be 8, 15,
24 and so on.
• In our example N = 8 (that is, square root of (8+1) = 3 rows and 3 columns).
• In the same way, if we have N = 15 or 24, then they have rows and columns as
follows: square root of (N+1) rows and square root of (N+1) columns.
• If N = 15, the number of rows and columns = 4, and if N = 24, the number of rows and
columns = 5.
• So, basically in these types of problems we are given an initial state or initial
configuration (start state) and a goal state or goal configuration.
• Here we are solving a problem of the 8-puzzle, that is, a 3x3 matrix.
[Figure: start state and goal state of the 8-puzzle]
States: integer locations of tiles
Actions: left, right, up, down
Goal Test: goal state (given)
Path Cost: 1 per move
The 8-puzzle has 9!/2 = 181440 reachable states
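As a sketch of the action/transition model, the following Python function enumerates the legal
moves, assuming a state is encoded as a tuple of 9 integers read row by row, with 0 standing
for the empty cell:

# Sketch of the 8-puzzle transition model; a state is a tuple of 9 integers
# read row by row, with 0 standing for the empty cell.
def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = {"up": -3, "down": 3, "left": -1, "right": 1}
    result = []
    for name, delta in moves.items():
        # Stay on the board: no wrap-around at the edges.
        if name == "left" and col == 0: continue
        if name == "right" and col == 2: continue
        if name == "up" and row == 0: continue
        if name == "down" and row == 2: continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        result.append((name, tuple(new)))
    return result

# Example: from the goal state the blank (bottom-right) can move up or left.
print([name for name, s in successors((1, 2, 3, 4, 5, 6, 7, 8, 0))])  # -> ['up', 'left']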
Example: Vacuum-cleaner world state space graph
States: integer dirt and robot locations
Actions: left, right, suck, noOp
Goal Test: not dirty?
Path Cost: 1 per operation (0 for noOp)
Example: Eight Queens
Place eight queens on a chess board such that no queen can attack another queen.
No path cost, because only the final state counts!
Possible formulations: incremental formulations and complete-state formulations.
Eight Queens Problem Formulation 1:
• States:
  • Any arrangement of 0 to 8 queens on the board
• Initial state:
  • No queens on the board
• Successor function:
  • Add a queen to an empty square
• Goal Test:
  • 8 queens on the board and none are attacked
64 * 63 * ... * 57 ≈ 1.8 x 10^14 possible sequences
[Figure: empty board (initial state) and desired goal]
Eight Queens Problem Formulation 2:
• States:
  • Any arrangement of 8 queens on the board
• Initial state:
  • All queens are in column 1
• Successor function:
  • Change the position of any one queen
• Goal Test:
  • 8 queens on the board and none are attacked
[Figure: initial state and desired goal for formulation 2]
Eight Queens Problem Formulation 3:
• States:
  • Any arrangement of k queens in the first k rows such that none are attacked
• Initial state:
  • No queens on the board
• Successor function:
  • Add a queen to the (k+1)th row so that none are attacked (see the sketch below)
• Goal Test:
  • 8 queens on the board and none are attacked
[Figure: the kth row (here k = 3) and adding a queen at the (k+1)th row]
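A small Python sketch of formulation 3's successor function, assuming a state is a tuple of
column indices, one per filled row (the helper names are illustrative):

# Sketch of the formulation-3 successor function; state[i] is the column
# of the queen placed in row i.
def attacks(r1, c1, r2, c2):
    # Queens in different rows attack along a shared column or diagonal.
    return c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def add_queen(state):
    k = len(state)                     # queens occupy rows 0..k-1; fill row k next
    for col in range(8):
        if not any(attacks(row, c, k, col) for row, c in enumerate(state)):
            yield state + (col,)

def is_goal(state):
    return len(state) == 8             # non-attacking holds by construction

# Columns still available for row 1 when the row-0 queen sits in column 0:
print([s[-1] for s in add_queen((0,))])  # -> [2, 3, 4, 5, 6, 7]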
Solving the 8-Puzzle Problem:
The puzzle can be solved by moving the tiles one by one into the single empty
space, thus achieving the goal state.
Rules of solving the puzzle:
• Instead of moving the tiles into the empty space, we can visualize moving the empty space
in place of the tile.
• The empty space can only move in four directions (movement of the empty space):
  • Up
  • Down
  • Right
  • Left
• The empty space cannot move diagonally and can take only one step at a time.
• o-position (corner): total possible moves are 2,
• x-position (edge): total possible moves are 3, and
• #-position (center): total possible moves are 4.
Let's solve the problem without heuristic search, that is, with uninformed search or blind search
(Breadth First Search and Depth First Search).
[Figure: Breadth First Search applied to the eight puzzle problem. Starting from the
start node, all possible moves of the empty space (Up, Down, Right, Left) are explored
for each node, level by level, until the goal node is reached.]
1.3 Search Strategies
Search Algorithm
A search algorithm teaches computers to "act rationally" by achieving a certain goal with a
certain input value.
Search: Searching is a step-by-step procedure to solve a search problem in a given search space.
A search problem can have three main factors:
Search Space: The search space represents the set of possible solutions which a system may
have.
Start State: The state from where the agent begins the search.
Goal Test: A function which observes the current state and returns whether the goal
state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
Search Trees
[Figure: Partial search trees for finding a route from Arad to Bucharest:
(a) the initial state; (b) after expanding Arad; (c) after expanding Sibiu.]
function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier
      only if not in the frontier or explored set
An informal description of the general tree-search and graph-search algorithms.
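A rough Python rendering of GRAPH-SEARCH is sketched below, using a FIFO frontier (i.e. a
breadth-first variant; the pseudocode deliberately leaves the queuing discipline open). It
assumes hashable states and a successors(state) function returning (action, next_state)
pairs, as in the 8-puzzle sketch earlier.

# Rough Python rendering of GRAPH-SEARCH with a FIFO (breadth-first) frontier.
from collections import deque

def graph_search(initial_state, successors, goal_test):
    frontier = deque([(initial_state, [])])   # (state, actions taken so far)
    in_frontier = {initial_state}
    explored = set()
    while frontier:
        state, path = frontier.popleft()      # choose a leaf node, remove from frontier
        in_frontier.discard(state)
        if goal_test(state):
            return path                       # the corresponding solution
        explored.add(state)
        for action, child in successors(state):
            # Expand only into states not already in the frontier or explored set.
            if child not in explored and child not in in_frontier:
                frontier.append((child, path + [action]))
                in_frontier.add(child)
    return None                               # failure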
Following are the four essential properties of search algorithms, used to compare their
efficiency:
• Completeness: A search algorithm is said to be complete if it guarantees to return a solution
whenever at least one solution exists for the input.
• Optimality: If a solution found by an algorithm is guaranteed to be the best solution (lowest
path cost) among all other solutions, then such a solution is said to be an optimal
solution.
• Time Complexity: Time complexity is a measure of the time for an algorithm to complete its
task.
• Space Complexity: It is the maximum storage space required at any point during the search,
based on the complexity of the problem.
Types of search algorithms:
Based on the search problems, we can classify search algorithms into uninformed (blind)
search and informed (heuristic) search algorithms.
Uninformed / Blind Search:
• The uninformed search does not contain any domain knowledge such as closeness or the
location of the goal. It operates in a brute-force way, as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes.
• Uninformed search applies a way in which the search tree is searched without any
information about the search space, like the initial state, operators and test for the goal, so
it is also called blind search. It examines each node of the tree until it achieves the goal node.
  So the distance to the goal is not taken into account.
• Ex: DFS, BFS, iterative deepening DFS
Informed Search:
• Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can find a
solution more efficiently than an uninformed search strategy. Informed search is also
called a heuristic search.
• A heuristic is a way which might not always be guaranteed to find the best solution, but is
guaranteed to find a good solution in reasonable time.
• Informed search can solve much more complex problems which could not be solved in
another way.
• The informed search algorithm is more useful for large search spaces. The informed search
algorithm uses the idea of heuristics, so it is also called heuristic search.
  Information about the cost to the goal is taken into account.
• Ex: Best-first search, A* search
Heuristic function: A heuristic is a function which is used in informed search, and it finds the
most promising path.
• It takes the current state of the agent as its input and produces an estimation of how close
the agent is to the goal.
• The heuristic method, however, might not always give the best solution, but it is guaranteed
to find a good solution in reasonable time.
• A heuristic function estimates how close a state is to the goal. It is represented by h(n), and
it estimates the cost of an optimal path between the pair of states.
• The value of the heuristic function is always positive.
Admissibility of the heuristic function is given as:
  h(n) <= h*(n)
Here h(n) is the heuristic cost, and h*(n) is the true (cheapest) cost. Hence the heuristic cost should
be less than or equal to the true cost.
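As a concrete example, the Manhattan-distance heuristic for the 8-puzzle is admissible: every
move slides one tile one step, so h(n) never overestimates the true cost. The tuple-of-9 state
encoding matches the earlier 8-puzzle sketch.

# Manhattan-distance heuristic for the 8-puzzle (admissible: every move
# shifts one tile by one step, so h(n) never overestimates the true cost).
def h_manhattan(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not count
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(h_manhattan((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # -> 2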
Pure Heuristic Search
• Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes
based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the
CLOSED list, it places those nodes which have already been expanded, and in the OPEN
list, it places nodes which have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded; the algorithm
generates all its successors, and n is placed on the closed list. The algorithm continues until
a goal state is found.
In informed search we will discuss two main algorithms, which are given below:
• Best First Search Algorithm (Greedy search)
• A* Search Algorithm
Best-first Search Algorithm (Greedy search)
• The greedy best-first search algorithm always selects the path which appears best at that
moment.
• It is the combination of depth-first search and breadth-first search algorithms.
• It uses the heuristic function and search. Best-first search allows us to take the advantages
of both algorithms.
• With the help of best-first search, at each step, we can choose the most promising node.
• In the best-first search algorithm, we expand the node which is closest to the goal node,
where the closeness is estimated by a heuristic function,
  i.e. f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
• The greedy best-first algorithm is implemented using a priority queue.
© Let us see how this works for route-finding problems in Romania; we use the straight
line distance heuristic, which we will call hsp.
* If the goal is Bucharest, we need to know the straight-line distances to Bucharest, which
are shown in below figure.
© For example, hsip(In(Arad))=366. Notice that the values of hsp cannot be computed
from the problem description itself.
© Moreover, it takes a certain amount of experience to know that hs.p is correlated with
actual road distances and is, therefore, a useful heuristic.
Romanian path finding problem
Base eg on GPS info. Straight-line
No map needed. dist. to Bucharest
Searching for good path from Arad to Bucharest,
what is a reasonable “desirability measure” to expand nodes
on the fringe?
• The figure below shows the progress of a greedy best-first search using h_SLD to find a
path from Arad to Bucharest.
• The first node to be expanded from Arad will be Sibiu, because it is closer to Bucharest
than either Zerind or Timisoara.
• The next node to be expanded will be Fagaras because it is closest. Fagaras in turn
generates Bucharest, which is the goal.
• For this particular problem, greedy best-first search using h_SLD finds a solution without
ever expanding a node that is not on the solution path; hence, its search cost is minimal. It is
not optimal, however: the path via Sibiu and Fagaras to Bucharest is 32 kilometers longer
than the path through Rimnicu Vilcea and Pitesti.
This shows why the algorithm is called "greedy": at each step it tries to get as close to the goal
as it can.
Greedy best-first search example
[Figure: stages of greedy best-first tree search from Arad to Bucharest, with h_SLD
values shown at each node.]
Properties of greedy best-first search:
• Greedy best-first tree search is also incomplete even in a finite state space, much like
depth-first search.
• Consider the problem of getting from Iasi to Fagaras. The heuristic suggests that Neamt
be expanded first because it is closest to Fagaras, but it is a dead end.
• The solution is to go first to Vaslui (a step that is actually farther from the goal
according to the heuristic) and then to continue to Urziceni, Bucharest, and Fagaras.
• The algorithm will never find this solution, however, because expanding Neamt puts Iasi
back into the frontier.
• Iasi is closer to Fagaras than Vaslui is, and so Iasi will be expanded again, leading to an
infinite loop.
• The worst-case time and space complexity for the tree version is O(b^m),
where b is the maximum branching factor of the search tree and
m is the maximum depth of the search space.
• With a good heuristic function, however, the complexity can be reduced substantially.
The amount of the reduction depends on the particular problem and on the quality of the
heuristic.
A* Search
• A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost to reach the node n from the start state, g(n).
• The A* search algorithm finds the shortest path through the search space using the heuristic
function.
• This search algorithm expands less of the search tree and provides optimal results faster.
• In the A* search algorithm, we use the search heuristic as well as the cost to reach the
node. Hence we can combine both costs as follows, and this sum is called the fitness
number:
  f(n) = g(n) + h(n)
where f(n) is the estimated cost of the cheapest solution through n.
Algorithm A*:
OPEN = nodes on the frontier; CLOSED = expanded nodes
OPEN = { <s, nil> }   (start node s with empty path)
while OPEN is not empty
  remove from OPEN the node <n, p> with minimum f(n)
  place <n, p> on CLOSED
  if n is a goal node, return success (path p)
  for each edge e connecting n & m with cost c
    if <m, q> is on CLOSED and {p|e} is cheaper than q
      then remove <m, q> from CLOSED, put <m, {p|e}> on OPEN
    else if <m, q> is on OPEN and {p|e} is cheaper than q
      then replace q with {p|e}
    else if m is not on OPEN, put <m, {p|e}> on OPEN
return failure
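A compact Python sketch of this procedure over a weighted graph; ROADS and H_SLD refer to
the earlier sketches, and the OPEN list is a priority queue ordered by f(n) = g(n) + h(n).

# Compact A* sketch over a weighted graph, with f(n) = g(n) + h(n).
import heapq

def a_star(roads, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, city, path); g = cost so far
    best_g = {start: 0}
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        if g > best_g.get(city, float("inf")):
            continue                             # stale entry: a cheaper path was found
        for neighbor, cost in roads[city].items():
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor,
                                          path + [neighbor]))
    return None, float("inf")

# a_star(ROADS, H_SLD, "Arad", "Bucharest")
# -> (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450) on the four-city fragment
# (on the full Romania map the optimum is 418, via Rimnicu Vilcea and Pitesti).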
Example 1:
[Figure: weighted graph with start node S, goal node G, and the heuristic values below]
State    h(n)
S        5
A        3
B        4
C        2
D        6
G        0
Solution:
Start State: {(S, 5)}
Iteration 1: {(S->A, 4), (S->G, 10)}
Iteration 2: {(S->A->C, 4), (S->A->B, 7), (S->G, 10)}
Iteration 3: {(S->A->C->G, 6), (S->A->C->D, 11), (S->A->B, 7), (S->G, 10)}
Iteration 4 will give the final result, as S->A->C->G provides the optimal path with cost 6.
So A* avoids expanding paths that are already expensive,
using f(n) = g(n) + h(n).
A* search example
Straight-line distances to Bucharest:
Arad 366        Bucharest 0       Craiova 160
Dobreta 242     Eforie 161        Fagaras 176
Giurgiu 77      Hirsova 151       Iasi 226
Lugoj 244       Mehadia 241       Neamt 234
Oradea 380      Pitesti 100       Rimnicu Vilcea 193
Sibiu 253       Timisoara 329     Urziceni 80
Vaslui 199      Zerind 374
OAERSETNGG ATEBTG TG OTHSITGIGO $1321
<>
> =D
eneaaieee satbere eT eaTICO
sRB=B08rIGD HTRSTT+1OO 555-9000259
eae onmios :
Fete
SOTaHEEa sbetio—sSeDSEICO TaTaT00 seb
Bucharest appears on the fringe but not selected for expansion since its cost (450)
than that of Pitesti (417). Important to understand for the proof of optimality of A*
higherhas
. ate
D>
Pr saan
SeiT640 6T5=t550160 6OT=ATHVTOI
Optimal Path Found: Arad
--- Sibiu -- Rimnicu —- Pitesti --- Bucharest
Under some reasonable conditions for the heuristics, we have:
• Complete: Yes, unless there are infinitely many nodes with f(n) < f(Goal)
• Time Complexity: The time complexity of the A* search algorithm depends on the heuristic
function, and the number of nodes expanded is exponential in the depth of the solution d. So
the time complexity is O(b^d), where b is the branching factor.
• Space Complexity: The space complexity of the A* search algorithm is O(b^d)
• Optimal: Yes
• Also, optimal use of heuristic information!
• Widely used, e.g. Google Maps.
After almost 40 years, new applications are still being found.
A*: Difficulties
• It often becomes difficult to use A* as the OPEN queue grows very large.
• A solution is to use algorithms that work with less memory.
• Memory-bounded heuristic search algorithms reduce the memory requirements of A*, e.g.
by introducing IDA*.
• IDA* is an iterative deepening algorithm.
• The cut-off for nodes expanded in an iteration is decided by the f-values of the nodes.
IDA* Algorithm
IDA* = A* + Iterative Deepening DFS
• The key feature of the IDA* algorithm is that it doesn't keep track of each visited node,
which helps in saving memory consumption, so it can be used where memory is
constrained.
• It is a path search algorithm which finds the shortest path from the start node to the goal
node in a weighted graph.
• It borrows the idea of using a heuristic function from A*.
• Its memory usage is lower than that of A*, thereby reducing the memory problem of A*.
• IDA* is optimal in terms of space and cost.
• Problem: excessive node generation.
Algorithm:
• Set a certain threshold (f-bound)
• If f(node) > threshold/f-bound, prune the node
• Set threshold/f-bound = minimum cost of any node that was pruned
• Terminate when the goal is reached
IDA* Search Procedure
• Consider the threshold (f-bound) value and cycle through the algorithm.
• In every branch, visit the depth until f(node) is greater than the threshold/f-bound and
note down that f(node) value; do this until all branches are explored up to a certain depth.
• Then the cycle continues from the starting node again with a new threshold (f-bound) value,
namely the minimum of the f(node) values noted down.
• This continues until the goal is found or the time limit is exceeded.
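A sketch of this contour loop in Python, assuming unit step costs and the successors() and
h_manhattan() helpers from the earlier 8-puzzle sketches; the recursion returns either a
solution path or the smallest f-value that exceeded the bound.

# Sketch of IDA*: deepen the f-bound to the smallest f-value that was pruned.
def ida_star(start, successors, h, goal_test):
    def dfs(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f, None             # pruned: report the f-value that broke the bound
        if goal_test(state):
            return f, list(path)
        minimum = float("inf")         # smallest f among pruned descendants
        for _, child in successors(state):
            if child in path:          # avoid trivial cycles on the current path
                continue
            t, found = dfs(path + [child], g + 1, bound)  # unit step costs assumed
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)                   # initial threshold/f-bound
    while True:
        bound, found = dfs([start], 0, bound)
        if found is not None:
            return found
        if bound == float("inf"):
            return None                # search space exhausted, no solution

# Example (8-puzzle):
# ida_star((1, 2, 3, 4, 5, 6, 0, 7, 8), successors, h_manhattan,
#          lambda s: s == (1, 2, 3, 4, 5, 6, 7, 8, 0))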
[Figures: IDA* worked examples]
Properties of IDA*:
• Time: Depends strongly on the number of different values that the heuristic can
take on. If A* expands N nodes, IDA* expands 1 + 2 + ... + N = O(N^2) nodes.
• Space: Since it is DFS, it only requires space proportional to the longest path it explores.
• Optimal: Yes, similar to A*
1.4 Adversarial Search
Introduction:
• Adversarial search is a game-playing technique where the agents are surrounded by a
competitive environment.
• A conflicting goal is given to the agents (multi-agent). These agents compete with one
another and try to defeat one another in order to win the game. Such conflicting goals
give rise to the adversarial search.
• Here, game-playing means discussing those games where human intelligence and the logic
factor are used, excluding other factors such as luck.
• Tic-tac-toe, chess, checkers, etc., are the types of games where no luck factor works,
only the mind works.
• Mathematically, this search is based on the concept of "Game Theory".
• According to game theory, a game is played between two players. To complete the game,
one has to win the game and the other loses automatically.
Perfect decision games:
• In game theory, a sequential game has perfect information if each player, when making
any decision, is perfectly informed of all the events that have previously occurred,
including the "initialization event" of the game (e.g. the starting hands of each player in a
card game).
• Perfect information is importantly different from complete information, which
implies common knowledge of each player's utility functions, payoffs, strategies and
"types". A game with perfect information may or may not have complete information.
• Chess is an example of a game with perfect information, as each player can see all the
pieces on the board at all times.
• Other examples of games with perfect information include tic-tac-toe, checkers and
infinite chess.
Imperfect decision games:
• In game theory, a game is an imperfect-information game if each player, when making any
decision, is uninformed of all the events that have previously occurred.
• Imperfect information implies having no idea of the move taken by the opponent; it means
the player is unaware of the actions chosen by other players.
• Card games where each player's cards are hidden from other players, such
as poker and bridge, are examples of games with imperfect information.
Game Setup:
• Two players: MAX and MIN
• MAX moves first, and they take turns until the game is over
• Winner gets an award, loser gets a penalty
• Games as search:
  • Initial state: e.g. board configuration of chess
  • Successor function: list of (move, state) pairs specifying legal moves
  • Terminal test: is the game finished?
  • Utility function: gives the numerical value of terminal states,
    e.g. win (+1), lose (-1) and draw (0) in tic-tac-toe or chess
• Size of the search tree is O(b^d),
  where b = branching factor, d = number of moves by both players
  Chess: b ~ 35 and d ~ 100
• Game-playing emphasizes being able to make optimal decisions in a finite amount of
time; this is somewhat realistic as a model of a real-world agent, even if games themselves
are artificial.
Partial Game Tree for Tic-Tac-Toe
[Figure: partial game tree for tic-tac-toe, alternating MAX (X) and MIN (O) levels
down to terminal states with utility values -1, 0 and +1.]
* Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-
making and game theory. It provides an optimal move for the player assuming that
opponent is also playing optimally.
+ Mini-Max algorithm uses recursion to search through the game-tree.Min-Max algorithm is mostly used for game playing in Al, Such as Chess, Checkers, tie-
tac-toe, go, and various tow-players game. This Algorithm computes the minimax
decision for the current state,
In this algorithm two players play the game, one is called MAX and other is called MIN.
Both the players fight it as the opponent player gets the minimum benefit while they get
the maximum benefit.
Both Players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
‘The minimax algorithm performs a depth-first search algorithm for the exploration of the
complete game tree
The minimax algorithm proceeds all the way down to the terminal node of the tree, then
backtrack the tree as the recursion.
Two-Ply Game Tree
[Figure: two-ply game tree with a MAX level, a MIN level and terminal utility values.]
Pseudo code for Minimax Algorithm:
function MINIMAX-DECISION(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state)
  return the action in SUCCESSORS(state) with value v

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← -∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s))
  return v
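The following Python sketch implements minimax over an explicit game tree, where a
non-terminal node is encoded as a list of children and a terminal node as its utility value
(the two-ply example tree below is illustrative):

# Minimal minimax over an explicit game tree: a list is a non-terminal node,
# an integer is a terminal utility value.
def minimax(node, maximizing):
    if not isinstance(node, list):
        return node                    # terminal: return its utility
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply example: MAX to move, three MIN nodes below.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # -> 3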
Alpha-Beta Pruning
[Figure: Example 1 of alpha-beta pruning on a two-ply game tree.]
Rules of Thumb:
• α is the best (highest) value found so far along the path for MAX
• β is the best (lowest) value found so far along the path for MIN
• Search below a MIN node may be alpha-pruned if its β ≤ the α of some MAX ancestor
• Search below a MAX node may be beta-pruned if its α ≥ the β of some MIN ancestor
Example 2:
[Figure: step-by-step alpha-beta pruning on an example game tree. At each step:
search below a MIN node is alpha-pruned if its beta value is <= the alpha value of
some MAX ancestor, and search below a MAX node is beta-pruned if its alpha value is
>= the beta value of some MIN ancestor.]
function ALPHA-BETA-SEARCH(stafe) returns an action
inputs: state, current state in game
vs MAX-VALUR(state, co, +00)
return the action in SUCCESSORS(state) with value w
function Max-Varur(state, a, 8) returns a utility value
inputs: state, current state in game
cx, the value of the best alternative for MAX along the path to state
B, the value of the best alternative for MIN along the path to state
if TexainaL-Tesr(state) then return Urinrry(state)
for a,5 in SucCESSONS( state) do
ve Max(v, MIN-VALUR(s.@.8))
if v > 8 then return v
ee MAX(a, v)
return v
function MIN-VaLuE(state, a, 8) returns a utility value
inputs: state, current state in game
@, the value of the best alternative for MAX along the path to state
B, the value of the best alternative for MIN along the path to state
if TeRMINat-Test (state) then return Uritrry(state)
ve too
for a,s in SUCOESSORS(state) do
ue MIN(v. MAX-VALUE(s,a.,))
if v Sa then return v
B— MIN(B, v)
return v
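A Python sketch of the same procedure, over the same explicit tree encoding as the minimax
sketch above; pruning changes the work done, not the result.

# Alpha-beta sketch: a list is a non-terminal node, an integer is a utility.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):
        return node                    # terminal: return its utility
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, False, alpha, beta))
            if v >= beta:
                return v               # beta cut-off
            alpha = max(alpha, v)
        return v
    else:
        v = math.inf
        for child in node:
            v = min(v, alphabeta(child, True, alpha, beta))
            if v <= alpha:
                return v               # alpha cut-off
            beta = min(beta, v)
        return v

# Same tree as the minimax sketch; pruning does not affect the final result.
print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))  # -> 3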
Properties of α-β Pruning:
• Pruning does not affect the final result
• Good move ordering improves the effectiveness of pruning (e.g., in chess: try captures
first, then threats, then forward moves, then backward moves)
• With "perfect ordering," time complexity = O(b^(m/2))
  => this doubles the depth of search that alpha-beta pruning can explore
UNIT-II
2.1.1 Logical Agents:
Knowledge-based agents are agents that have an explicit representation of knowledge that can
be reasoned with.
These agents can manipulate this knowledge to infer new things at the "knowledge level".
A Knowledge-Based Agent
• A knowledge-based agent includes a knowledge base and an inference system.
• A knowledge base is a set of representations of facts about the world.
• Each individual representation is called a sentence.
• The sentences are expressed in a knowledge representation language.
• The agent operates as follows:
  1. It TELLs the knowledge base what it perceives.
  2. It ASKs the knowledge base what action it should perform.
  3. It performs the chosen action.
Knowledge Level.
© The most abstract level: describe agent by saying what it knows.
© Example: A taxi agent might know that the Golden Gate Bridge connects San Francisco
with the Marin County
Logical Level.
© The level at which the knowledge is encoded into sentences.
. imple: Links(GoldenGateBridge. SanFrancisco, MarinCounty).
Implementation Level.
© The physical representation of the sentences in the logical level.
© Example: “(links goldengatebridge sanfrancisco marincounty)
Knowledge Bases:
Inference engine
Knowledge base
* Knowledge base = set of sentences in a formal language
* Declarative approach to building an agent (or other system):
Tell it what it needs to know
© Then it can Ask itself what to do - answers should follow from the KB
© Agents can be viewed at the knowledge level - ie., what they know, regardless of how
implemented
© Orat the implementation level