Unit 2 16 Marks

PART – B

1 Discuss in detail about Constraint Satisfaction Problems (CSPs) with the backtracking search algorithm.
 Constraint satisfaction problems (CSPs) need solutions that satisfy all the associated constraints.
 Consider a Sudoku game with some numbers filled initially in some squares.
 You are expected to fill the empty squares with numbers ranging from 1 to 9 in such a way that no
row, column or a block has a number repeating itself.
 This is a very basic constraint satisfaction problem. You are supposed to solve a problem keeping in
mind some constraints.
 The remaining squares that are to be filled are known as variables, and the range of numbers (1-9)
that can fill them is known as a domain.
 Variables take on values from the domain. The conditions governing how a variable will choose its
domain are known as constraints.
 A constraint satisfaction problem (CSP) is a problem that requires its solution within some
limitations or conditions also known as constraints.
 It consists of the following:
o A finite set of variables which stores the solution (V = {V1, V2, V3,...., Vn})
o A set of discrete values known as domain from which the solution is picked (D
= {D1, D2, D3,....,Dn})
o A finite set of constraints (C = {C1, C2, C3,....., Cn})
 The elements in the domain can be both continuous and discrete but in AI, we generally only deal with
discrete values.
 Also, all these sets should be finite except for the domain set.
 Each variable in the variable set can have different domains.
 For example, consider the Sudoku problem again.
 Suppose that a row, column and block already have 3, 5 and 7 filled in.
 Then the domain for all the variables in that row, column and block will be {1, 2, 4, 6, 8, 9}.
Backtracking Search for CSPs
 The term backtracking search is used for depth-first search that chooses values for one variable
at a time and backtracks when a variable has no legal values left to assign.

Figure: Part of the search tree generated by simple backtracking for the map-colouring problem.
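As an illustration of backtracking search, the following is a minimal Python sketch for a map-colouring CSP. The regions, colours, variable ordering and dict-based representation are illustrative assumptions of this sketch, not taken from the figure.

```python
# Minimal backtracking-search sketch for a map-colouring CSP (illustrative data).
VARIABLES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
DOMAINS = {v: ["red", "green", "blue"] for v in VARIABLES}
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def consistent(var, value, assignment):
    """A value is consistent if no already-assigned neighbour uses the same colour."""
    return all(assignment.get(n) != value for n in NEIGHBORS[var])

def backtracking_search(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(VARIABLES):                      # every variable assigned
        return assignment
    var = next(v for v in VARIABLES if v not in assignment)    # choose an unassigned variable
    for value in DOMAINS[var]:
        if consistent(var, value, assignment):
            result = backtracking_search({**assignment, var: value})
            if result is not None:
                return result
    return None                                                # no legal value left: backtrack

print(backtracking_search())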
Propagating information through constraints
 So far our search algorithm considers the constraints on a variable only at the time that the
variable is chosen by SELECT-UNASSIGNED-VARIABLE.
 But by looking at some of the constraints earlier in the search, or even before the search has started, we
can drastically reduce the search space.
Forward checking
 One way to make better use of constraints during search is called forward checking.
 Whenever a variable X is assigned, the forward checking process looks at each unassigned variable Y
that is connected to X by a constraint and deletes from Y 's domain any value that is inconsistent with the
value chosen for X.
 The following figure shows the progress of a map-coloring search with forward checking.
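A minimal sketch of the forward-checking step described above, assuming the remaining domains, the constraint neighbours and the current assignment are kept as Python dicts (this representation is an assumption of the sketch, as in map colouring where "inconsistent" simply means "equal"):

```python
# Forward checking sketch: after X is assigned a value, delete from each
# unassigned neighbour Y's domain every value inconsistent with that value.
def forward_check(x, value, domains, neighbors, assignment):
    for y in neighbors[x]:
        if y in assignment:
            continue
        domains[y] = [v for v in domains[y] if v != value]  # prune inconsistent values
        if not domains[y]:            # empty domain: this assignment cannot be extended
            return False
    return True
```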

Constraint propagation
 Although forward checking detects many inconsistencies, it does not detect all of them.
 Constraint propagation is the general term for propagating the implications of a constraint on one
variable onto other variables.
Arc Consistency
 An arc from variable X to variable Y is consistent if, for every value of X, there is some value of Y that satisfies the binary constraint between X and Y. Arc consistency can be enforced by repeatedly removing unsupported values from the domains (the idea behind the AC-3 algorithm), and it detects more inconsistencies than forward checking alone.
K-Consistency
 A CSP is k-consistent if, for any set of k - 1 variables and any consistent assignment to those variables, a consistent value can always be assigned to any k-th variable. 1-consistency is node consistency, 2-consistency is arc consistency, and 3-consistency is path consistency.
2 List out the types of uninformed search.
(1) Breadth-first Search:
 Breadth-first search is the most common search strategy for traversing a tree or graph.
 This algorithm searches breadth-wise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
 Example:
 In the below tree structure, we have shown the traversing of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
 Time Complexity: The time complexity of the BFS algorithm is obtained from the number of nodes traversed in BFS up to the shallowest node, where d = depth of the shallowest solution and b = branching factor (maximum number of successors of any node).
 T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)
 Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
 Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.
 Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
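As a hedged illustration of the FIFO-queue idea, the following Python sketch performs BFS over an adjacency dict; the small example graph and node names are illustrative, not the exact tree of the figure.

```python
# BFS sketch: expand nodes level by level using a FIFO queue of paths.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
print(bfs(graph, "S", "K"))              # ['S', 'B', 'E', 'K']
```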
(2) Depth-first Search:
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Example:
 In the below search tree, we have shown the flow of depth-first search, and it will follow the order:
 Root node ---> Left node ---> Right node.
 It will start searching from root node S and traverse A, then B, then D and E; after traversing E, it will backtrack the tree as E has no other successor and the goal node has still not been found. After backtracking it will traverse node C and then G, and here it will terminate as it has found the goal node.
 Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
 Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
 T(b) = 1 + b + b² + b³ + ... + b^m = O(b^m)
 where m = maximum depth of any node, and this can be much larger than d (the shallowest solution depth).
 Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
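A minimal recursive DFS sketch, again over an illustrative adjacency dict, showing how one path is followed to its deepest node before backtracking (the graph below mirrors the S, A, B, D, E, C, G walkthrough only loosely):

```python
# Recursive DFS sketch: follow one branch as deep as possible, then backtrack.
def dfs(graph, node, goal, path=None, visited=None):
    path = (path or []) + [node]
    visited = visited if visited is not None else set()
    visited.add(node)
    if node == goal:
        return path
    for succ in graph.get(node, []):
        if succ not in visited:
            result = dfs(graph, succ, goal, path, visited)
            if result is not None:
                return result
    return None                          # dead end: backtrack to the previous node

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
print(dfs(graph, "S", "G"))              # ['S', 'C', 'G'] after exploring A's subtree first
```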
(3) Depth-limited Search
 The problem of unbounded trees can be alleviated by supplying depth-first search with a predetermined depth limit l.
 That is, nodes at depth l are treated as if they have no successors.
 This approach is called depth-limited search.
 The depth limit solves the infinite-path problem.
 Depth-limited search will be non-optimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(bl).
 Depth-first search can be viewed as a special case of depth-limited search with l = ∞.
Sometimes, depth limits can be based on knowledge of the problem.
 For example, on the map of Romania there are 20 cities.
 Therefore, we know that if there is a solution, it must be of length 19 at the longest, so l = 19 is a possible choice.
 However, it can be shown that any city can be reached from any other city in at most 9 steps.
 This number, known as the diameter of the state space, gives us a better depth limit.
 Depth-limited search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first-search algorithm.
 The pseudocode for recursive depth-limited search is shown below.
 It can be noted that the algorithm can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.
Depth-limited search = depth-first search with depth limit l; it returns cutoff if any path is cut off by the depth limit.
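The following is a hedged Python sketch of recursive depth-limited search under the description above; it returns the special value "cutoff" when the depth limit cuts off a path and None for standard failure. The adjacency-dict graph representation is an assumption of this sketch.

```python
# Recursive depth-limited search sketch with the two failure values described above.
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                           # depth limit reached on this path
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None  # None = standard failure (no solution)
```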

Advantages:
Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has a disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
Properties:
Completeness: The DLS algorithm is complete if the solution lies within the depth limit (d ≤ l).
Time Complexity: The time complexity of the DLS algorithm is O(b^l).
Space Complexity: The space complexity of the DLS algorithm is O(b×l).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal even if l > d.

3 Explain the Hill Climbing method in detail with an example.

 Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
 Hill climbing algorithm is a technique which is used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
 It is also called greedy local search as it only looks to its good immediate neighbor state and not beyond that.
 A node of the hill climbing algorithm has two components: state and value.
 Hill climbing is mostly used when a good heuristic is available.
 In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a single current state.
Types of Hill Climbing Algorithm:
 Simple hill Climbing
 Steepest-Ascent hill-climbing
 Stochastic hill Climbing
(a) Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state. It checks only one successor state, and if that successor is better than the current state, it moves to it; otherwise it stays in the same state. This algorithm has the following features:
 Less time consuming
 Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
 Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
 Step 2: Loop Until a solution is found or there is no new operator left to apply.
 Step 3: Select and apply an operator to the current state.
 Step 4: Check new state:
o If it is goal state, then return success and quit.
o Else if it is better than the current state then assign new state as a current state.
o Else if it is not better than the current state, then return to Step 2.
 Step 5: Exit.
(b) Steepest-Ascent Hill Climbing:
 The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state. This algorithm consumes more time as it searches for multiple neighbors.
Algorithm for Steepest-Ascent Hill Climbing:
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make current state
as initial state.
 Step 2: Loop until a solution is found or the current state does not change.
o Let SUCC be a state such that any successor of the current state will be better than it.
o For each operator that applies to the current state:
 Apply the new operator and generate a new state.
 Evaluate the new state.
 If it is goal state, then return it and quit, else compare it to the SUCC.
 If it is better than SUCC, then set new state as SUCC.
 If the SUCC is better than the current state, then set current state to SUCC.
 Step 3: Exit.
(c) Stochastic Hill Climbing:
 Stochastic hill climbing does not examine all of its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
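The following is a minimal steepest-ascent sketch in Python; the objective function (maximise -x², with its peak at x = 0) and the unit-step neighborhood are illustrative assumptions, not part of the original material.

```python
# Steepest-ascent hill climbing sketch: always move to the best neighbor
# while it improves the objective value; stop at a peak (possibly only local).
def steepest_ascent(value, neighbors, start):
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):     # no neighbor is higher: a peak
            return current
        current = best

value = lambda x: -x * x                      # illustrative objective, peak at x = 0
neighbors = lambda x: [x - 1, x + 1]          # illustrative unit-step neighborhood
print(steepest_ascent(value, neighbors, start=7))   # 0
```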
4 Implement toy problems for i) Vacuum world ii) 8-Queens iii) 8-Puzzle
A toy problem is intended to illustrate various problem-solving methods. It can be easily used by different researchers to compare the performance of algorithms.
Vacuum World Example
 States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 x 2² = 8 possible world states.
 Initial state: Any state can be designated as the initial state.
 Successor function: This generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in the figure.
 Goal test: This tests whether all the squares are clean.
 Path cost: Each step costs one, so the path cost is the number of steps in the path.
Figure: The state space for the vacuum world.
Arcs denote actions: L = Left, R = Right, S = Suck
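A minimal sketch of the vacuum-world formulation above; the (location, dirt-at-A, dirt-at-B) state encoding is an assumption of this sketch.

```python
# Vacuum world sketch: a state is (agent_location, dirt_at_A, dirt_at_B);
# with 2 locations and 2 dirt flags there are 2 x 2 x 2 = 8 states, as noted above.
def successors(state):
    loc, dirt_a, dirt_b = state
    result = {
        "Left": ("A", dirt_a, dirt_b),                 # moving left ends (or stays) at A
        "Right": ("B", dirt_a, dirt_b),                # moving right ends (or stays) at B
    }
    if loc == "A":
        result["Suck"] = ("A", False, dirt_b)          # sucking removes dirt at A
    else:
        result["Suck"] = ("B", dirt_a, False)          # sucking removes dirt at B
    return result

def goal_test(state):
    return not state[1] and not state[2]               # all squares clean

print(successors(("A", True, True)))
```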

The 8-puzzle
An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the goal state, as shown in the figure.

The problem formulation is as follows :


 States : A state description specifies the location of each of the eight tiles and the blank in one of the nine
squares.
 Initial state: Any state can be designated as the initial state. It can be noted that any given goal can be
reached from exactly half of the possible initial states.
 Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up or Down).
 Goal test: This checks whether the state matches the goal configuration shown in the figure.
 (Other goal configurations are possible.)
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
 The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete.
 The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
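A hedged sketch of the 8-puzzle successor function, assuming a state is stored as a tuple of nine entries read row by row with 0 standing for the blank (this encoding is an assumption of the sketch):

```python
# 8-puzzle successor sketch: the blank may move Left, Right, Up or Down
# within the 3x3 board; each move slides the adjacent tile into the blank.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = {}
    for action, delta in MOVES.items():
        if action == "Left" and col == 0:   continue   # blank already in left column
        if action == "Right" and col == 2:  continue
        if action == "Up" and row == 0:     continue
        if action == "Down" and row == 2:   continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]   # slide the tile
        result[action] = tuple(new)
    return result

print(successors((1, 2, 3, 4, 0, 5, 6, 7, 8)))
```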
8-queens problem
 The goal of 8-queens problem is to place 8 queens on the chessboard such that no queen attacks any other.
(A queen attacks any piece in the same row, column or diagonal).
Figure shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.
The first incremental formulation one might try is the following:
 States: Any arrangement of 0 to 8 queens on the board is a state.
 Initial state: No queens on the board.
 Successor function: Add a queen to any empty square.
 Goal test: 8 queens are on the board, none attacked.
 In this formulation, we have 64 · 63 ··· 57 ≈ 3 × 10^14 possible sequences to investigate.
A better formulation places one queen per column and never on an attacked square:
 States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
 Successor function: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
This formulation reduces the 8-queens state space from 3 × 10^14 to just 2,057, and solutions are easy to find.
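A minimal backtracking sketch of this improved formulation (one queen per column, placed only on unattacked squares); representing a partial state as a tuple of row indices, one per filled column, is an assumption of this sketch.

```python
# 8-queens sketch: place one queen per column, left to right, only on safe squares.
def safe(placed, row):
    """placed[i] is the row of the queen in column i; test the next column."""
    col = len(placed)
    return all(r != row and abs(r - row) != abs(c - col)   # same row or diagonal attacks
               for c, r in enumerate(placed))

def place_queens(placed=()):
    if len(placed) == 8:
        return placed
    for row in range(8):
        if safe(placed, row):
            result = place_queens(placed + (row,))
            if result is not None:
                return result
    return None                                            # no safe square: backtrack

print(place_queens())    # (0, 4, 7, 5, 2, 6, 1, 3), one of the 92 solutions
```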
5 a) Evaluate the measuring performance of uninformed search strategies.
Strategy              Complete?   Time              Space             Optimal?
Breadth-first         Yes (a)     O(b^d)            O(b^d)            Yes (c)
Uniform-cost          Yes (a,b)   O(b^(1+⌊C*/ε⌋))   O(b^(1+⌊C*/ε⌋))   Yes
Depth-first           No          O(b^m)            O(bm)             No
Depth-limited         No          O(b^l)            O(bl)             No
Iterative deepening   Yes (a)     O(b^d)            O(bd)             Yes (c)
Bidirectional         Yes (a,d)   O(b^(d/2))        O(b^(d/2))        Yes (c,d)

 Evaluation of search strategies: b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit. The caveats (a)-(d) are as follows:
 (a) complete if b is finite; (b) complete if step costs are ≥ ε for some positive ε; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.

b) Explain in detail:
i) Iterative Deepening Depth-First Search
ITERATIVE DEEPENING DEPTH-FIRST SEARCH
 Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first search, that finds the best depth limit.
 It does this by gradually increasing the limit - first 0, then 1, then 2, and so on - until a goal is found.
 This will occur when the depth limit reaches d, the depth of the shallowest goal node.
 Iterative deepening combines the benefits of depth-first and breadth-first search.
 Like depth-first search, its memory requirements are modest: O(bd) to be precise.
 Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a non-decreasing function of the depth of the node.

1st Iteration -----> A
2nd Iteration ----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Advantages:
It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
The main drawback of IDDFS is that it repeats all the work of the previous phase.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let's suppose b is the branching factor and the depth is d; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
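A hedged sketch of the iterative-deepening loop, assuming a depth-limited search function like the one sketched earlier is passed in as dls and returns "cutoff" when its limit is reached and None for true failure:

```python
# Iterative deepening sketch: retry depth-limited search with limits 0, 1, 2, ...
from itertools import count

def iterative_deepening_search(graph, start, goal, dls):
    for limit in count(0):                      # limit = 0, 1, 2, ...
        result = dls(graph, start, goal, limit)
        if result != "cutoff":                  # either a solution or standard failure
            return result
```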

ii) Bidirectional Search
Bidirectional Search
The idea behind bidirectional search is to run two simultaneous searches:
 one forward from the initial state, and
 the other backward from the goal.
 It stops when the two searches meet in the middle.
 The motivation is that b^(d/2) + b^(d/2) is much less than b^d.

Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.

Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS in both directions is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimal: Bidirectional search is optimal.
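A hedged sketch of bidirectional search using BFS frontiers from both ends on an undirected adjacency dict (edges listed in both directions); for brevity it returns only the state where the two searches meet rather than the reconstructed path.

```python
# Bidirectional BFS sketch: expand one frontier from the start and one from the
# goal alternately, and stop as soon as a node generated on one side has
# already been visited by the other side.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return start
    frontier = {start: deque([start]), goal: deque([goal])}
    visited = {start: {start}, goal: {goal}}
    while frontier[start] and frontier[goal]:
        for side, other in ((start, goal), (goal, start)):
            node = frontier[side].popleft()
            for succ in graph.get(node, []):
                if succ in visited[other]:        # the two searches have met
                    return succ
                if succ not in visited[side]:
                    visited[side].add(succ)
                    frontier[side].append(succ)
    return None
```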

6 Discuss in detail about informed search strategies.
 An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.
 Best-first search is an instance of general TREE-SEARCH or GRAPH-SEARCH algorithm in
which a node is selected for expansion based on an evaluation function f(n). The node with
lowest evaluation is selected for expansion, because the evaluation measures the distance to the
goal.
1. Greedy Best-first Search
 Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
 It evaluates the nodes by using the heuristic function f(n) = h(n).
 Taking the example of Route-finding problems in Romania , the goal is to reach Bucharest
starting from the city Arad. We need to know the straight-line distances to Bucharest from
various cities as shown in Figure. For example, the initial state is In(Arad) ,and the straight line
distance heuristic hSLD(In(Arad)) is found to be 366.
 Using the straight-line distance heuristic hSLD, the goal state can be reached faster.
 The figure shows the progress of greedy best-first search using hSLD to find a path from Arad to Bucharest.
 The first node to be expanded from Arad will be Sibiu, because it is closer to Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras, because it is closest. Fagaras in turn generates Bucharest, which is the goal.
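A minimal greedy best-first sketch, ordering the frontier by h(n) alone; the adjacency-dict and heuristic-dict representation is an assumption, and suitable example data appears in the A* sketch further below.

```python
# Greedy best-first sketch: always expand the frontier node with smallest h(n).
import heapq

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)       # node that looks closest to the goal
        if node == goal:
            return path
        explored.add(node)
        for succ, _cost in graph.get(node, []):       # step costs are ignored by greedy search
            if succ not in explored:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None
```

On the Romania data in the A* sketch below, this expands Arad, then Sibiu, then Fagaras, and returns Arad -> Sibiu -> Fagaras -> Bucharest (450 km), which is found quickly but is not the cheapest route.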
2. A* Search
 A* search is the most widely used form of best-first search. The evaluation function f(n) is obtained by combining
(1) g(n) = the cost to reach the node, and
(2) h(n) = the cost to get from the node to the goal:
f(n) = g(n) + h(n).
 A* search is both optimal and complete. A* is optimal if h(n) is an admissible heuristic, that is, one that never overestimates the cost to reach the goal.
 An obvious example of an admissible heuristic is the straight-line distance hSLD that we used in getting to Bucharest. The progress of an A* tree search for Bucharest is shown in the figure.
 The values of g are computed from the step costs shown in the Romania map. The values of hSLD are given in the figure.
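A hedged A* sketch ordered by f(n) = g(n) + h(n); the small Romania fragment and hSLD values below follow the textbook example, but the code structure and data layout are illustrative assumptions.

```python
# A* sketch: priority queue ordered by f(n) = g(n) + h(n).
import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]      # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")): # keep only the cheapest known path
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Illustrative fragment of the Romania map with step costs and h_SLD values.
graph = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
print(a_star(graph, h, "Arad", "Bucharest"))
```

On this data A* expands Rimnicu Vilcea and Pitesti before committing to Bucharest and returns the cheaper route Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest with cost 418.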
