ARTIFICIAL INTELLIGENCE
RCS-702
Faculty Name: Ms. CHAHAT SHARMA (chahat.sharma@ipec.org.in)
Department of CSE, Inderprastha Engineering College
Topics Covered: Unit 2 - Introduction to search, Searching for solutions, Some searching problems, Terminologies of Search Tree, Construction of State Space, Performance Measures of Search Tree, Uninformed Search Strategies, Informed Search Strategies
Books:
• Russell & Norvig, “Artificial Intelligence, A Modern Approach”, Pearson
Education
• Rich and Knight, “Artificial Intelligence”, McGraw Hill
AI Problem Solving
• (Problem Definition) Define the problem precisely, specifying both the initial situation and the final situation that constitutes a solution of the problem.
• (Problem Analysis) Analyze the problem to identify the few important features that determine the appropriateness of various solution techniques.
• (Knowledge Representation) Isolate and represent the knowledge that is necessary for the solution.
• (Problem Solving) Select the best problem-solving technique and apply it.
Aspects of the Problem
• Initial Solution
• Ability to perform
• Implicit criteria for success
• Explicit Goal of the problem
Well Defined Problems
• A well-defined problem can be defined formally by four components (a minimal sketch follows below):
– Initial state
– Description of the possible actions available to the agent (successor function)
– Goal test (checks whether a given state is a goal state or not)
– Path cost function (performance measure): c(x, a, y), where x and y are states and a is the action that takes the agent from x to y
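The four components above can be packaged together for a search routine. The following is a minimal illustrative sketch (the class and method names are not from the textbook); a concrete problem would subclass it and fill in the details.

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state            # Initial State

    def successors(self, state):
        # Successor function: yield (action, next_state) pairs.
        raise NotImplementedError

    def goal_test(self, state):
        # Goal test: return True if state is a goal state.
        raise NotImplementedError

    def step_cost(self, x, a, y):
        # Path cost component c(x, a, y) for taking action a from state x to state y.
        return 1                                      # default: unit step cost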
Types of Problem Formulation
• Problem formulation is the process of deciding what actions and
states to consider
1. Incremental Formulation
2. Complete Formulation
• Incremental Formulation
– Starts with an empty state and uses operators that augment the state description.
– Builds up each candidate solution step by step, one action at a time.
– Requires less memory, since not all complete states are represented.
– Ex: 8-Queens Problem (a sketch of this formulation follows below)
• States: any arrangement of 0 to 8 queens on the board
• Initial State: no queen on the board
• Successor Function: add a queen to any empty square
• Goal Test: all 8 queens are on the board and no queen is attacked
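A minimal Python sketch of the incremental formulation above, assuming states are represented as tuples of (row, column) queen positions; the function names are illustrative only, not part of the slides.

from itertools import product

N = 8

def successors(state):
    # Incremental formulation: add a queen to any empty square.
    occupied = set(state)
    for square in product(range(N), range(N)):
        if square not in occupied:
            yield state + (square,)

def attacks(q1, q2):
    # Two queens attack each other if they share a row, a column, or a diagonal.
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(state):
    # Goal: all 8 queens on the board and no queen attacked.
    return len(state) == N and not any(
        attacks(a, b) for i, a in enumerate(state) for b in state[i + 1:])

initial_state = ()   # initial state: no queen on the board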
Types of Problem Formulation
• Complete State Formulation
– Starts with a complete configuration of the problem and modifies it.
– Typically requires more memory, since each state describes the complete configuration.
– Ex: 8-Queens Problem
• States: any arrangement of all 8 queens on the board
• Initial State: all 8 queens on the board
• Successor Function: move a queen to a different square
• Goal Test: no queen is attacked
Examples - Problem Formulation for Toy Problems
A. Black Ball Picker Agent
1. Problem Statement: Two buckets are full of balls; each ball is either white or black. The aim is to pick all the black balls from both buckets.
2. Initial State: any state can be designated as the initial state.
3. Successor Function: generates the legal states that result from trying the three actions (Left, Right, Pick).
4. Goal Test: checks whether all black balls have been picked up from both buckets.
5. Path Cost: each step costs 1, so the path cost is the number of steps in the path.
6. States: the agent is at one of the locations (L1, L2), at each of which the black balls may or may not yet have been picked up.
Examples - Problem Formulation for Toy Problems
B. Crypt-arithmetic Agent
1. Problem Statement: Find an assignment of unique digits (0..9)
to letters so that a given arithmetic expression is true.
  FORTY
+   TEN
+   TEN
-------
  SIXTY
Examples - Problem Formulation for Toy Problems
1. Initial State: only letters are present in the puzzle.
2. Successor Function: replace all occurrences of a letter by a digit that has not been used previously.
3. Goal Test: the puzzle contains only digits and the calculation is correct, e.g.
   F=2; O=9; R=7; T=8; Y=6; E=5; N=0; S=3; I=1; X=4
4. Path Cost: each step costs 1, so the path cost is the number of steps in the path.
5. States: the puzzle with some letters and some digits (a brute-force solver sketch follows below).
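A brute-force solver sketch for this puzzle: it tries every assignment of distinct digits to the ten letters and tests the arithmetic. This code is illustrative only and not part of the original notes.

from itertools import permutations

def solve_forty_ten_ten_sixty():
    # Assign unique digits to the letters of FORTY + TEN + TEN = SIXTY
    # and test each complete assignment (goal test).
    letters = "FORTYENSIX"                     # the 10 distinct letters
    for digits in permutations(range(10)):
        env = dict(zip(letters, digits))
        if env["F"] == 0 or env["T"] == 0 or env["S"] == 0:
            continue                           # leading letters cannot be 0
        value = lambda word: int("".join(str(env[ch]) for ch in word))
        if value("FORTY") + 2 * value("TEN") == value("SIXTY"):
            return env
    return None

print(solve_forty_ten_ten_sixty())
# One valid assignment is the one listed above:
# F=2, O=9, R=7, T=8, Y=6, E=5, N=0, S=3, I=1, X=4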
Examples - Problem Formulation for Toy Problems
C. Water Jug Problem
1. Problem Statement: You are given a 4-gallon jug and a 3-gallon jug, a water pump, and a sink. Neither jug has any measuring marks on it. How can you get exactly 2 gallons of water into the 4-gallon jug?
Rules for the Water Jug Problem
The state space for the WJP can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3, or 4 and y = 0, 1, 2, or 3, where x is the amount in the 4-gallon jug and y the amount in the 3-gallon jug.
Initial state: (0, 0). Goal test: (2, n) for any n. A search sketch over this state space follows below.
1. {(x, y) | x < 4} → (4, y)            Fill the 4-gallon jug
2. {(x, y) | y < 3} → (x, 3)            Fill the 3-gallon jug
3. {(x, y) | x > 0} → (0, y)            Empty the 4-gallon jug
4. {(x, y) | y > 0} → (x, 0)            Empty the 3-gallon jug
5. {(x, y) | x + y ≥ 4 and y > 0} → (4, x + y − 4)   Pour from the 3-gallon jug into the 4-gallon jug until it is full
6. {(x, y) | x + y ≥ 3 and x > 0} → (x + y − 3, 3)   Pour from the 4-gallon jug into the 3-gallon jug until it is full
7. {(x, y) | x + y ≤ 4 and y > 0} → (x + y, 0)       Pour all the water from the 3-gallon jug into the 4-gallon jug
8. {(x, y) | x + y ≤ 3 and x > 0} → (0, x + y)       Pour all the water from the 4-gallon jug into the 3-gallon jug
9. (0, 2) → (2, 0)                      Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
10. (2, y) → (0, y)                     Empty the 2 gallons in the 4-gallon jug on the ground
11. {(x, y) | y > 0} → (x, y − d)       Pour some water out of the 3-gallon jug (useless rule)
12. {(x, y) | x > 0} → (x − d, y)       Pour some water out of the 4-gallon jug (useless rule)
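A minimal breadth-first sketch over this (x, y) state space. For brevity, the successor function below folds the production rules into fill / empty / pour cases (the "useless" rules 11 and 12 are omitted); it is illustrative code, not part of the original notes.

from collections import deque

def water_jug_bfs(goal_x=2):
    # Breadth-first search over states (x, y): x in 0..4, y in 0..3.
    def successors(state):
        x, y = state
        return {
            (4, y), (x, 3),                        # fill either jug
            (0, y), (x, 0),                        # empty either jug
            (min(4, x + y), max(0, x + y - 4)),    # pour the 3-gallon jug into the 4-gallon jug
            (max(0, x + y - 3), min(3, x + y)),    # pour the 4-gallon jug into the 3-gallon jug
        }
    start = (0, 0)
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:                     # goal test: (2, n)
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]                      # path from (0, 0) to the goal
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None

print(water_jug_bfs())
# e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]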
Problem Formulation for Real World Problems
A. Travelling Salesman Problem
1. Problem Statement: It is a touring problem in which each city must be visited exactly once; the aim is to find the shortest tour (a brute-force sketch follows below).
2. Initial State: the starting point; no city has been visited.
3. Successor Function: move from one city (location) to another.
4. Goal Test: all cities have been visited and the agent is back at the initial position.
5. Path Cost: the distance between locations.
6. States: locations (cities); the set of visited cities is kept as state information.
   Legal state: each city visited exactly once.
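A brute-force sketch for a tiny TSP instance, enumerating every possible tour; the 4-city distance matrix is hypothetical and for illustration only (real instances need much better algorithms).

from itertools import permutations

def shortest_tour(dist):
    # Try every ordering of the other cities, starting and ending at city 0,
    # and keep the cheapest complete tour.
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                                # return to the start
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))   # path cost = total distance
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

dist = [[0, 10, 15, 20],      # hypothetical symmetric distances between 4 cities
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(shortest_tour(dist))    # ((0, 1, 3, 2, 0), 80)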
Problem Formulation for Real World Problems
B. Robot Navigator
1. Problem Statement: It is a route-finding problem. The robot can
move in a continuous space with an infinite set of possible
states and actions.
2. Initial State: Starting Position
3. Successor Function: Movement, action of actuator
4. Goal Test: Task dependent
5. Path Cost: Distance and energy consumption
6. States: Locations; Position of actuators
Problem Solving Agents
• Intelligent agents are supposed to act in such a way that the
environment goes through a sequence of states that maximizes the
performance measure.
• Example: Suppose the agent is in Auckland and wishes to get to
Wellington. There are a number of factors to consider e.g. cost, speed
and comfort of journey.
Step 1: Goal Setting
Step 2: Goal Formulation
Step 3: Problem Formulation
Step 4: Search Formulation
Step 5: Execution Phase
Searching Solutions: Terminologies in Search Tree
• State space may be a tree or a graph.
• The state space of a problem includes :
1. An initial state,
2. One or more goal states.
3. Sequence of intermediate states through which the system makes
transition while applying various rules.
• Node: a bookkeeping data structure in the search tree that represents a state (together with its parent, the action applied, and the path cost).
• State: a configuration of the world; applying an action to a state maps it to another state.
• Fringe: the collection of nodes that have been generated but not yet expanded.
• Leaf Node: every node on the fringe is a leaf node (it has no successors in the tree yet).
Searching Solutions: Terminologies in Search Tree
• Search Strategy: the function that selects the next node to be expanded from the current fringe.
• Parent Node: the node in the search tree from which a given node was generated.
• Action: the action that was applied (via the successor function) to the parent node to generate the node.
• Path Cost: denoted g(n); the cost of the path from the initial state to the current node (or goal node).
• Depth: the number of steps along the path from the initial state.
State Space
Figure: Search Tree Exploration
Problem Characteristics
Problems can be :
• Ignorable (e.g., theorem proving)
• Recoverable (e.g., the 8-puzzle)
• Irrecoverable (e.g., chess, playing cards)
Note:
1. Ignorable problems can be solved using a simple control structure that never backtracks. Such a structure is easy to implement.
2. Recoverable problems can be solved by a slightly more complicated control strategy in which mistakes can be made and later undone by backtracking (here solution steps can be undone).
3. Irrecoverable problems are solved by a system that expends a great deal of effort in making each decision, since each decision must be final (here solution steps cannot be undone).
Search Strategies Performance Measure
• A search strategy is defined by picking the order of node
expansion.
• Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
b: maximum branching factor, (max no. of successor nodes of any
node) of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
Classification of Search Strategies
I. Uninformed Search Strategies use only the information available in
the problem definition
Breadth-first search
Depth-first search
Uniform Cost Search
Depth-limited search
Iterative deepening search
Bidirectional Search
II. Informed Search (Heuristic Search)
Best First Search
Greedy Best First Search
A*, AO* algorithms
Memory Bounded Search
Hill Climbing
Simulated Annealing
Uninformed Search Strategies
Breadth First Search (B.F.S.)
• The breadth-first search algorithm visits the nodes of the tree level by level (along its breadth), starting from the level at depth 0 and working down to the maximum depth.
• Within the tree, nodes are traversed in the order in which they are generated (ascending order of their labels).
• Uses a queue (FIFO) as its data structure.
Algorithm
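A minimal queue-based sketch of breadth-first search, assuming the problem supplies a successors(state) generator and a goal_test(state) predicate (hypothetical names):

from collections import deque

def breadth_first_search(start, successors, goal_test):
    # Expand the shallowest unexpanded node first: the fringe is a FIFO queue.
    frontier = deque([[start]])                 # queue of paths from the start
    visited = {start}
    while frontier:
        path = frontier.popleft()               # take the front element of the queue
        state = path[-1]
        if goal_test(state):
            return path                         # path from the start to the goal
        for nxt in successors(state):           # add the offspring at the rear
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                                 # failure: queue exhausted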
How does the algorithm work?
• If the current node is not the goal, add the offspring of the current node (in any order) to the rear of the queue, and make the new front element of the queue the current node.
• The algorithm terminates when the goal is found.
Example: Breadth First Search
• Expand the shallowest unexpanded node
• Implementation:
– the fringe is a FIFO queue, i.e., new successors go at the end
Properties of Breadth First Search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
Time and Memory Requirement : Example
Depth First Search (D.F.S.)
• The depth-first search algorithm generates nodes and compares them with the goal along the greatest depth of the tree, and moves up to the parent of the last visited node only when no further nodes can be generated below the last visited node.
• After moving up to the parent, the algorithm attempts to generate a new offspring of the parent node.
• Uses a stack (LIFO) as its data structure.
Algorithm
How does the algorithm work?
• The starting node is pushed onto the stack, whose top is pointed to by the stack top. To examine a node, it is popped off the stack.
• If it is the goal, the algorithm terminates; otherwise its children are pushed onto the stack in any order.
• The process continues until the stack is empty (a sketch follows below).
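A minimal stack-based sketch of depth-first search matching the description above; successors(state) and goal_test(state) are assumed to be supplied by the problem, and repeated states along generated paths are avoided.

def depth_first_search(start, successors, goal_test):
    # Expand the deepest unexpanded node first: the fringe is a stack (LIFO).
    stack = [[start]]                           # stack of paths from the start
    visited = {start}
    while stack:
        path = stack.pop()                      # pop the node on top of the stack
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):           # push the children onto the stack
            if nxt not in visited:
                visited.add(nxt)
                stack.append(path + [nxt])
    return None                                 # stack empty: failure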
Depth First Search
• Expand the deepest unexpanded node
• Implementation:
– the fringe is a LIFO queue (stack), i.e., successors are put at the front
Properties of Depth First Search
• Complete? No: fails in infinite-depth spaces and in spaces with loops.
  Modified to avoid repeated states along the path, it is complete in finite spaces.
• Time? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first search
• Space? O(bm), i.e., linear space!
• Optimal? Not necessarily
Uniform Cost Search
• Breadth-first search is optimal when all step costs are equal.
• Uniform-cost search is optimal even when step costs vary.
• Uniform-cost search expands the node n with the lowest path cost g(n).
Example: Uniform Cost Search
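A minimal priority-queue sketch of uniform-cost search, assuming successors(state) yields (next_state, step_cost) pairs; all names are illustrative.

import heapq, itertools

def uniform_cost_search(start, successors, goal_test):
    # Always expand the node with the lowest path cost g(n).
    tie = itertools.count()                     # tie-breaker so states are never compared
    frontier = [(0, next(tie), start, [start])] # min-heap of (g, tie, state, path)
    best_g = {start: 0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g                      # least-cost path and its cost
        if g > best_g.get(state, float("inf")):
            continue                            # outdated queue entry, skip it
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, next(tie), nxt, path + [nxt]))
    return None, float("inf")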
Properties of Uniform Cost Search
• Let C* be the cost of the optimal solution, and assume every action costs at least ε > 0.
• Complete? Yes (given step cost ≥ ε)
• Time? O(b^(1 + ⌊C*/ε⌋))
• Space? O(b^(1 + ⌊C*/ε⌋))
• Optimal? Yes
Depth Limited Search
• Depth-limited search avoids the pitfalls of depth-first search by
imposing a cutoff on the maximum depth of a path.
• This cutoff can be implemented with a special depth-limited search
algorithm, or by using the general search algorithm with operators that
keep track of the depth.
• To avoid the infinite-depth problem of DFS, we can decide to search only until depth L, i.e., we do not expand nodes beyond depth L (a recursive sketch follows below).
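A recursive sketch of depth-limited search under the same assumptions as the earlier sketches (successors and goal_test supplied by the problem):

def depth_limited_search(state, successors, goal_test, limit):
    # DFS that never expands nodes beyond depth `limit`.
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                             # cutoff reached
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, goal_test, limit - 1)
        if result is not None:
            return [state] + result             # prepend the current state to the path
    return None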
Properties of Depth Limited Search
• Complete? No (fails if the shallowest goal lies beyond the depth limit l)
• Time? O(b^l), where l is the depth limit
• Space? O(bl), i.e., linear space
• Optimal? No (not optimal in general)
Iterative Deepening Depth-First Search (IDDFS)
• To avoid the infinite-depth problem of DFS, we can decide to search only until depth L, i.e., we do not expand beyond depth L → depth-limited search.
• What if the solution is deeper than L? → Increase L iteratively → iterative deepening search.
Iterative Deepening Depth-First Search (IDDFS)
• When the initial depth cut-off is one, it generates only the root
node and examines it.
• If the root node is not the goal, then depth cut-off is set to two
and the tree up to depth 2 is generated using typical depth first
search.
• Similarly, when the depth cut-off is set to m, the tree is
constructed up to depth m by depth first search.
• Iterative deepening search is a strategy that sidesteps the issue of
choosing the best depth limit by trying all possible depth limits:
first depth 0, then depth 1, then depth 2, and so on.
Algorithm
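A minimal sketch of iterative deepening, wrapping a depth-limited DFS and raising the cut-off until a goal is found; max_depth is an arbitrary safety bound for illustration.

def iterative_deepening_search(start, successors, goal_test, max_depth=50):
    def dls(state, limit):
        # Depth-limited DFS used as the inner search.
        if goal_test(state):
            return [state]
        if limit == 0:
            return None
        for nxt in successors(state):
            result = dls(nxt, limit - 1)
            if result is not None:
                return [state] + result
        return None

    for limit in range(max_depth + 1):          # try depth cut-offs 0, 1, 2, ...
        result = dls(start, limit)
        if result is not None:
            return result
    return None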
Example: IDDFS
Properties of IDDFS
• Complete? Yes
• Time? O(b^d), where d is the depth of the shallowest solution
• Space? O(bd), i.e., linear space
• Optimal? Yes, if the step cost is 1 or an increasing function of depth
Bi-Directional Search
• Alternate searching from the start state toward the goal and
from the goal state toward the start.
• Stop when the two frontiers intersect (a sketch follows below).
• Works well only when there are unique start and goal
states.
• Requires the ability to generate “predecessor” states.
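A minimal sketch of bidirectional breadth-first search; it assumes the problem supplies both successors(state) and predecessors(state) generators, as noted above.

from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    # Alternate one BFS layer from the start and one from the goal;
    # stop as soon as the two frontiers intersect.
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def build_path(meet):
        # Join the start-to-meet path with the meet-to-goal path.
        path, s = [], meet
        while s is not None:
            path.append(s)
            s = fwd_parent[s]
        path.reverse()
        s = bwd_parent[meet]
        while s is not None:
            path.append(s)
            s = bwd_parent[s]
        return path

    while fwd_frontier and bwd_frontier:
        for _ in range(len(fwd_frontier)):              # expand one layer forward
            state = fwd_frontier.popleft()
            for nxt in successors(state):
                if nxt not in fwd_parent:
                    fwd_parent[nxt] = state
                    if nxt in bwd_parent:
                        return build_path(nxt)          # frontiers intersect
                    fwd_frontier.append(nxt)
        for _ in range(len(bwd_frontier)):              # expand one layer backward
            state = bwd_frontier.popleft()
            for prev in predecessors(state):
                if prev not in bwd_parent:
                    bwd_parent[prev] = state
                    if prev in fwd_parent:
                        return build_path(prev)
                    bwd_frontier.append(prev)
    return None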
Example: Bi-Directional Search
Properties of Bi-Directional Search
• Complete? Yes
• Time? O(b^(d/2))
• Space? O(b^(d/2))
• Optimal? Yes
Comparing Uninformed Search Strategies
Comparison of Uninformed/Informed Search
Uninformed (Blind) Search:
1. Nodes in the state space are searched mechanically until the goal is reached, the time limit expires, or failure occurs.
2. Information about the goal state may not be given.
3. Blind groping is done (the search proceeds blindly).
4. Search efficiency is low.
5. There are practical limits on the storage available to blind methods.
6. Impractical for very large problems.
7. The best (optimal) solution can be achieved. E.g., DFS, BFS, Branch & Bound, Iterative Deepening, etc.
Informed (Heuristic) Search:
1. More information about the initial state and the operators is available.
2. Some information about the goal is always given.
3. Based on heuristic methods.
4. Searching is fast.
5. Less computation is required.
6. Can handle large search problems.
7. Mostly a good-enough solution is accepted as the optimal solution. E.g., Best First Search, A*, AO*, Hill Climbing, etc.
Informed Search Strategies
Informed/Heuristic Search
• Search strategies like DFS and BFS can find solutions for simple problems.
• For complex problems, although DFS and BFS can in principle guarantee to find a solution, the search may not be practical.
• Thus, it is often better to sacrifice the guarantee of completeness and find an efficient solution.
• Heuristic search techniques improve efficiency by sacrificing the claim of completeness, and they find a solution that is very close to the optimal solution.
Informed/Heuristic Search
• When more information than just the initial state, the operators, and the goal test is available, the size of the search space can usually be constrained; the better the information available, the more efficient the search process. Methods that use such information are called informed search methods.
• They depend on heuristic information.
• Heuristics are good to the extent that they point the search in generally interesting (promising) directions, and bad to the extent that they may miss points of interest in particular cases.
Informed/Heuristic Search
• A heuristic function is a function that maps problem-state descriptions to measures of desirability, usually represented as a number.
• Well-designed heuristic functions can play an important role in efficiently guiding a search process towards a solution.
• Evaluation function: f(n) = g(n) + h(n), where
  g(n) is the cost (so far) to reach the node,
  h(n) is the estimated cost to get from the node to the goal, and
  f(n) is the estimated total cost of the path through n to the goal.
Informed/Heuristic Search Techniques
• Best First Search
– It exploits state description to estimate how “good” each search
node is.
• Generate and Test (Simple Hill Climbing)
– The generate-and-test algorithm is as follows:
  1. Generate a possible solution.
  2. Test to see whether it is the expected solution.
  3. If the solution has been found, quit; else go to step 1.
• Branch and Bound Algorithm (A*)
– This strategy applies to problems having a graph search space, where more than one alternative path may exist between two nodes. It saves the path lengths from a node to all generated nodes and chooses the shortest path for further expansion. It then compares the new path lengths with all the old ones and again chooses the shortest path for expansion. In this way, the path found to a goal node is certain to be a minimal-length path (a sketch in this spirit is given below).
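A minimal priority-queue sketch in the spirit of the branch-and-bound / A* strategy described above, assuming successors(state) yields (next_state, step_cost) pairs and h(state) is a heuristic estimate of the remaining cost; the names are illustrative.

import heapq, itertools

def a_star_search(start, successors, goal_test, h):
    # Expand the node with the smallest f(n) = g(n) + h(n).
    tie = itertools.count()
    frontier = [(h(start), next(tie), 0, start, [start])]    # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                         # outdated entry, skip it
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):        # keep only the cheapest path found
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + h(nxt), next(tie), new_g, nxt, path + [nxt]))
    return None, float("inf")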
Informed/Heuristic Search Techniques
• Problem Reduction Technique (AO*)
– Following the divide-and-conquer strategy, a solution to a problem can be obtained by decomposing it into smaller sub-problems. Each of these sub-problems can then be solved to get its sub-solution, and the sub-solutions can then be recombined to get a solution to the problem as a whole. This is called problem reduction.
• Constraint Satisfaction Technique (Crypt-arithmetic, Map Coloring, Water Jug Problem)
– A Constraint Satisfaction Problem (CSP) is a problem whose solution must satisfy some limitations or conditions, called constraints.
Informed/Heuristic Search Techniques
• Means-Ends Analysis: first solve the major parts of a problem, and then go back and solve the smaller problems that arise while combining the big parts of the problem. Such a technique is called means-ends analysis.
• Hill Climbing
– Simple Hill Climbing
– Steepest Hill Climbing
Informed/Heuristic Search Techniques
• Best First Search
(also refer AI_Unit2_best_first_8 Puzzle pdf)
• Greedy Best First Search
(refer AI_Unit2_Greedy BFS_Astar.pdf)
• A* Search
(refer AI_Unit2_Greedy BFS_Astar.pdf)
• AO* Search
• Iterative Deepening A*
(refer AI_Unit2_Informed_IDAstar.pdf)
Best First Search
• It exploits state description to estimate how “good” each search
node is
• An evaluation function f maps each node N of the search tree to
a real number f(N) >= 0
[Traditionally, f(N) is an estimated cost; so, the smaller f(N), the
more promising N]
• Best-first search sorts the FRINGE in increasing f
[Arbitrary order is assumed among nodes with equal f]
Best First Search
• How to construct f?
• Typically, f(N) estimates:
– either the cost of a solution path through N
• Then f(N) = g(N) + h(N), where
– g(N) is the cost of the path from the initial node to N
– h(N) is an estimate of the cost of a path from N to a goal
node
– or the cost of a path from N to a goal node
• Then f(N) = h(N), i.e., greedy best-first search
• But there are no limitations on f. Any function of your choice is
acceptable.
But will it help the search algorithm?
How does Best First Search work?
• At each step, select the most promising of the nodes generated so far.
• Implementation of best-first search requires the following:
– A node structure containing the description of the problem state, its heuristic value, a parent link, and the list of nodes that were generated from it.
– Two lists: OPEN, containing nodes that have been generated and had their heuristic value calculated but have not yet been expanded (generally a priority queue); and CLOSED, containing nodes that have already been expanded (required in the case of graph search).
Algorithm
1. INSERT(initial-node, FRINGE)
2. Repeat:
   a. If empty(FRINGE) then return failure
   b. N ← REMOVE(FRINGE)
   c. s ← STATE(N)
   d. If GOAL?(s) then return path or goal state
   e. For every state s' in SUCCESSORS(s):
      i. Create a node N' as a successor of N
      ii. INSERT(N', FRINGE)
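A minimal Python sketch of the pseudocode above, using an OPEN priority queue ordered by increasing f and a CLOSED set, as described earlier; the evaluation function f(state) is supplied by the caller (with f estimating the cost to the goal this behaves as greedy best-first search).

import heapq, itertools

def best_first_search(start, successors, goal_test, f):
    # OPEN: generated but unexpanded nodes, ordered by f; CLOSED: expanded states.
    tie = itertools.count()
    OPEN = [(f(start), next(tie), start, [start])]
    CLOSED = set()
    while OPEN:
        _, _, state, path = heapq.heappop(OPEN)     # most promising node so far
        if goal_test(state):
            return path
        if state in CLOSED:
            continue
        CLOSED.add(state)
        for nxt in successors(state):
            if nxt not in CLOSED:
                heapq.heappush(OPEN, (f(nxt), next(tie), nxt, path + [nxt]))
    return None                                     # OPEN exhausted: failure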
8 Puzzle Problem (Full Working in Notes)
Properties of Best First Search
• Complete? Yes (if the heuristic function is admissible and consistent)
• Time? O(b^m)
• Space? O(b^m)
• Optimal? Yes (if the heuristic function is admissible and consistent)
Example: Best First Search
AO* Search
• Real-life problems usually cannot be decomposed exactly into either an AND tree or an OR tree; they are a combination of both, i.e., an AND-OR graph. For such problems we need the AO* algorithm, which works on AND-OR graphs and maintains the part of the search graph that has been explicitly generated so far.
AO* Search Algorithm
Step-1: Create an initial graph with a single node (start node).
Step-2: Traverse the graph along the current best path, accumulating the nodes that have not yet been expanded or solved.
Step-3: Select one of these nodes and expand it. If it has no successors, assign it the value FUTILITY; otherwise compute f'(n) for each of its successors.
Step-4: If f'(n) = 0, mark the node as SOLVED.
Step-5: Change the value of f'(n) for the newly expanded node to reflect its successors, and propagate this change backwards through the graph.
Step-6: Whenever possible, follow the most promising routes; if a node is marked SOLVED, mark its parent node SOLVED as well.
Step-7: If the start node is marked SOLVED, or its value is greater than FUTILITY, then stop; otherwise repeat from Step-2.
Example
Properties of AO* Search
• Complete? Yes
• Optimal? No
• Data Structure? Graph
Local Search & Optimization Strategies
Local Search & Optimization Techniques
• Hill Climbing
– (refer AI_Unit2_hill_&_simulated.pdf, also AI_Unit2_hillclimbing.pdf
(slide 136-149))
• Simulated Annealing Search
– (refer AI_Unit2_hill_&_simulated.pdf)
• Local Beam Search
• Genetic Algorithms
Simulated Annealing Search
• Simulated Annealing (SA) is an effective and general form
of optimization.
• It is useful in finding global optima in the presence of large
numbers of local optima.
• “Annealing” refers to an analogy with thermodynamics,
specifically with the way that metals cool and anneal.
• Simulated annealing uses the objective function of an
optimization problem instead of the energy of a material.
• Implementation of SA is surprisingly simple.
Simulated Annealing Search
• The algorithm is basically hill-climbing except instead of
picking the best move, it picks a random move.
• If the selected move improves the solution, then it is always
accepted. Otherwise, the algorithm makes the move
anyway with some probability less than 1.
• The probability decreases exponentially with the “badness” of the move, which is the amount ΔE by which the solution is worsened (i.e., by which the energy is increased).
• Prob(accepting an uphill move) ≈ exp(−ΔE / (kT))
• A parameter T is also used to determine this probability.
• It is analogous to temperature in an annealing system. At higher
values of T, uphill moves are more likely to occur.
Simulated Annealing Search
• As T tends to zero, they become more and more unlikely,
until the algorithm behaves more or less like hill-climbing.
• In a typical SA optimization, T starts high and is gradually
decreased according to an “annealing schedule”.
• The parameter k is some constant that relates temperature to
energy (in nature it is Boltzmann’s constant.)
• Simulated annealing is typically used in discrete, but very
large, configuration spaces, such as the set of possible
orders of cities in the Traveling Salesman problem and in
VLSI routing. It has a broad range of application that is still
being explored.
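A minimal simulated-annealing sketch following the scheme above (accept all downhill moves, accept uphill moves with probability exp(−ΔE / (kT)), and cool T geometrically); all parameter values and the toy usage are illustrative, not prescribed by the notes.

import math, random

def simulated_annealing(initial, neighbor, energy,
                        T0=1.0, cooling=0.995, T_min=1e-3, k=1.0):
    current, T = initial, T0
    while T > T_min:
        candidate = neighbor(current)                 # pick a random move
        dE = energy(candidate) - energy(current)      # "badness" of the move
        if dE <= 0 or random.random() < math.exp(-dE / (k * T)):
            current = candidate                       # accept the move
        T *= cooling                                  # annealing schedule: lower T
    return current

# Toy usage: minimise the energy (x - 3)^2 over the reals.
best = simulated_annealing(initial=0.0,
                           neighbor=lambda x: x + random.uniform(-1, 1),
                           energy=lambda x: (x - 3) ** 2)
print(round(best, 2))   # typically close to 3.0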
Local Beam Search
• Keep track of k states instead of one:
– Initially: k random states
– Next: determine all successors of the k states
– If any successor is a goal, finish; else select the k best successors and repeat.
• Major difference from random-restart search: information is shared among the k search threads.
• Can suffer from a lack of diversity.
• Stochastic variant: choose the k successors randomly, with probability proportional to state quality.
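A minimal sketch of local beam search as described above; random_state, successors, value, and goal_test are assumed to be supplied by the problem, and max_iters is an arbitrary bound for illustration.

def local_beam_search(k, random_state, successors, value, goal_test, max_iters=100):
    # Keep k states; at each step pool all successors of the k states,
    # stop if any is a goal, otherwise keep the k best successors.
    states = [random_state() for _ in range(k)]       # k random initial states
    for _ in range(max_iters):
        candidates = [s for state in states for s in successors(state)]
        if not candidates:
            break                                     # no successors anywhere
        for s in candidates:
            if goal_test(s):
                return s
        # information is shared: the k best are chosen from the pooled successors
        states = sorted(candidates, key=value, reverse=True)[:k]
    return max(states, key=value)                     # best state found so far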
Genetic Algorithms
A Genetic Algorithm is a search heuristic that is inspired by
Charles Darwin’s theory of natural evolution. This
algorithm reflects the process of natural selection where the
fittest individuals are selected for reproduction in order to
produce offspring of the next generation.
Genetic Algorithms
• The process of natural selection starts with the selection of fittest
individuals from a population.
• They produce offspring which inherit the characteristics of the parents
and will be added to the next generation.
• If the parents have better fitness, their offspring will tend to be better than the parents and have a better chance of surviving.
• This process keeps iterating, and at the end a generation with the fittest individuals will be found.
• This notion can be applied to a search problem: we consider a set of candidate solutions to a problem and select the best ones out of them.
Genetic Algorithms
• Variant of local beam search with parent recombination.
Genetic Algorithms
Five phases are considered in a genetic algorithm:
• Initial population (Data set of solutions)
• Fitness function (Quality of the solution)
• Selection (Two best parents)
• Crossover (Mixing the values of best parents)
• Mutation (Changing of existing values)
Genetic Algorithms
• The process begins with a set of
individuals which is called
a Population. Each individual is a
solution to the problem you want to
solve.
• An individual is characterized by a set
of parameters (variables) known
as Genes. Genes are joined into a string
to form a Chromosome (solution).
• In a genetic algorithm, the set of genes
of an individual is represented using a
string, in terms of an alphabet. Usually,
binary values are used (string of 1s and
0s). We say that we encode the genes in
a chromosome.
Genetic Algorithms
1. Fitness Function
The fitness function determines how fit an individual is (the ability of an
individual to compete with other individuals). It gives a fitness
score to each individual. The probability that an individual will be
selected for reproduction is based on its fitness score.
2. Selection
The idea of the selection phase is to select the fittest individuals and let them pass their genes on to the next generation.
Two pairs of individuals (parents) are selected based on their fitness
scores. Individuals with high fitness have more chance to be selected
for reproduction.
Genetic Algorithms
3. Crossover
Crossover is the most significant phase in a
genetic algorithm. For each pair of parents
to be mated, a crossover point is chosen at
random from within the genes.
For example, the crossover point might be chosen to be 3, i.e., after the third gene.
Genetic Algorithms
4. Offspring are created by exchanging
the genes of parents among themselves
until the crossover point is reached.
The new offspring are added to the population.
Genetic Algorithms
5. Mutation
In certain new offspring formed, some of
their genes can be subjected to
a mutation with a low random probability.
This implies that some of the bits in the bit
string can be flipped.
Mutation occurs to maintain diversity within the population and prevent
premature convergence.
Genetic Algorithms
6. Termination
The algorithm terminates if the population has converged (does not
produce offspring which are significantly different from the previous
generation). Then it is said that the genetic algorithm has provided a set of
solutions to our problem.
Genetic Algorithms
function GENETIC_ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that determines the quality of an individual
  repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
      x ← RANDOM_SELECTION(population, FITNESS-FN)
      y ← RANDOM_SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual
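A minimal Python sketch matching the pseudocode above, using bit-string chromosomes, fitness-proportional selection, single-point crossover, and bit-flip mutation; the parameter values and the one-max fitness example are illustrative only.

import random

def genetic_algorithm(fitness_fn, gene_length=8, pop_size=20,
                      mutation_rate=0.05, generations=100):
    def reproduce(x, y):
        point = random.randint(1, gene_length - 1)        # crossover point
        return x[:point] + y[point:]

    def mutate(child):
        i = random.randrange(gene_length)                 # flip one random bit
        return child[:i] + (1 - child[i],) + child[i + 1:]

    # Initial population: random bit strings (chromosomes).
    population = [tuple(random.randint(0, 1) for _ in range(gene_length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Small epsilon keeps the selection weights valid even if all fitnesses are 0.
        weights = [fitness_fn(p) + 1e-6 for p in population]
        new_population = []
        for _ in range(pop_size):
            x, y = random.choices(population, weights=weights, k=2)   # selection
            child = reproduce(x, y)                                   # crossover
            if random.random() < mutation_rate:                       # mutation
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness_fn)            # best individual found

# Toy usage ("one-max"): fitness is the number of 1s in the chromosome.
print(genetic_algorithm(fitness_fn=sum))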
Adversarial Search