
VISHNU INSTITUTE OF TECHNOLOGY :: BHIMAVARAM
DEPARTMENT OF CSE (ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING)

FUNDAMENTALS OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

UNIT 3
Topics :
1. Problem Solving Agents
2. Example Problems
a) Two-State Vacuum Cleaner
b) 8 Queens
c) 8 Puzzle
3. Real World Problems
a) Route Finding Problems
b) Touring Problem
c) Travelling Salesman Problem
d) VLSI Layout Problem
e) Robot Navigation Problem
4. Searching for Solutions
5. Uninformed Search Strategies
a) Breadth First Search
b) Depth First Search
c) Uniform Cost Search
d) Depth Limited Search
e) Iterative Deepening Depth First Search
f) Bidirectional Search

Problem Solving Agents


Problem solving agents decide what to do by finding a sequence of
actions that leads to a desirable state or solution. An agent may
need to plan when the best course of action is not immediately
obvious: it may have to think through a series of moves that will
lead it to its goal state. Such an agent is known as a problem
solving agent, and the computation it performs is known as search.
Intelligent agents are supposed to maximize their performance
measure. Achieving this is simplified if the agent can adopt a goal
and aim to satisfy it. Setting goals helps the agent organize its
behavior by limiting the objectives it is trying to achieve and
hence the actions it needs to consider. Goal formulation, based on
the current situation and the agent's performance measure, is the
first step in problem solving. We consider the agent's goal to be a
set of states. The agent's task is to find a sequence of actions,
in the present and in the future, that reaches a goal state from
the present state.
The problem solving agent follows this four-phase problem
solving process:
1. Goal Formulation: This is the first and most basic phase in
problem solving. Based on the current situation and the agent's
performance measure, the agent adopts a goal that demands some
activity to reach it.
2. Problem Formulation: This is one of the fundamental steps in
problem solving; it decides which states and actions to consider
in order to reach the goal.
3. Search: After goal and problem formulation, the agent simulates
sequences of actions and looks for a sequence that reaches the
goal. This process is called search, and such a sequence is
called a solution. The agent may have to simulate many sequences
that do not reach the goal, but eventually it will either find a
solution or find that no solution is possible. A search algorithm
takes a problem as input and outputs a sequence of actions.
4. Execution: The agent can now execute the actions recommended by
the search algorithm, one at a time. This final stage is known as
the execution phase.
Formulating the Problem : Before we move into the problem
formulation phase, we must first define a problem in terms of
problem solving agents.
A formal definition of a problem consists of five components:
1. Initial State
2. Actions
3. Transition Model
4. Goal Test
5. Path Cost
Initial State : It is the agent's starting state, its first step
towards the goal. For example, if a taxi agent needs to travel to
location B but the taxi is currently at location A, the problem's
initial state is location A.

Actions : A description of the possible actions that the agent can
take. Given a state s, Actions(s) returns the set of actions that
can be executed in s. Each of these actions is said to be
applicable in s.

Transition Model : It describes what each action does. It is
specified by a function Result(s, a) that returns the state that
results from doing action a in state s.
The initial state, actions, and transition model together define
the state space of the problem: the set of all states reachable
from the initial state by any sequence of actions. The state space
forms a graph in which the nodes are states and the links between
the nodes are actions.

Goal Test : It determines if the given state is a goal state.


Sometimes there is an explicit list of potential goal states, and
the test merely verifies whether the provided state is one of
them. The goal is sometimes expressed via an abstract attribute
rather than an explicitly enumerated set of conditions.
Path Cost : It assigns a numerical cost to each path that leads to
the goal. The problem solving agent chooses a cost function that
matches its performance measure. Remember that the optimal
solution has the lowest path cost of all the solutions.
After goal formulation and problem formulation, the agent has to
look for a sequence of actions that reaches the goal. This process
is called search. A search algorithm takes a problem as input and
returns a sequence of actions as output. After the search phase,
the agent carries out the actions recommended by the search
algorithm. This final phase is called the execution phase.
Formulate --> Search --> Execute
Thus the agent has a formulate, search, and execute design.
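The five components above can be collected into an abstract problem class. A minimal sketch in Python (the class and method names are illustrative, not taken from any particular library):

```python
class Problem:
    """Abstract five-component problem definition."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # where the agent starts
        self.goal_state = goal_state

    def actions(self, s):
        """Return the actions applicable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Transition model: the state resulting from doing a in s."""
        raise NotImplementedError

    def goal_test(self, s):
        """Does s satisfy the goal?"""
        return s == self.goal_state

    def step_cost(self, s, a, s2):
        """Cost of taking action a in s to reach s2 (default: 1)."""
        return 1
```

A concrete problem such as the vacuum world or the 8-puzzle would subclass this and fill in actions and result; the path cost of a solution is then the sum of the step costs along it.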

Example Problems

The problem solving approach has been used in a wide range of
work contexts. There are two kinds of problem:
1. Standardized/Toy Problems: Their purpose is to demonstrate or
practice various problem solving techniques. They can be
described concisely and precisely, making them appropriate as
benchmarks for academics to compare the performance of
algorithms.
2. Real-world Problems: These are real-world problems that need
solutions. Unlike a toy problem, a real-world problem does not
have a single agreed-upon description, but we can give a basic
description of the issue.
The solution to a problem is an action sequence that leads from
the initial state to a goal state, and solution quality is
measured by the path cost function. An optimal solution has the
lowest path cost among all solutions.
An Example Toy Problem 1: Vacuum World Problem
There is a vacuum cleaner agent; it can move left or right, and
its job is to suck up the dirt from the floor.
The problem for vacuum world can be formulated as follows:

States: The state is determined by both the agent location and the
dirt locations. The agent is in one of two locations, each of which
might or might not contain dirt. Therefore, there are 2 x 2² = 8
possible world states. A larger environment with n locations would
have n x 2ⁿ states.
Initial State: Any state can be designated as the initial state.
Actions: In this environment there are three actions: Move Left,
Move Right, and Suck up the dirt.
Transition Model: All the actions have their expected effects,
except that Left has no effect when the agent is in the leftmost
square, Right has no effect when the agent is in the rightmost
square, and Suck has no effect when the square is already clean.
Goal Test: Checks whether all the squares are clean.
Path Cost: Each step costs 1, so the path cost is the number of
steps in the path.
The vacuum world involves only discrete locations, discrete dirt,
and so on; therefore it is a toy problem. There are many
real-world problems, such as the automated taxi world. Try to
formulate real-world problems and see what the states would be,
what actions could be chosen, and so on.
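The formulation above can be written out directly. A minimal Python sketch, assuming two squares named 'A' and 'B' and a state of the form (agent location, frozenset of dirty squares):

```python
LOCS = ('A', 'B')

def actions(state):
    """All three actions are always applicable."""
    return ['Left', 'Right', 'Suck']

def result(state, action):
    """Transition model for the two-location vacuum world."""
    loc, dirt = state
    if action == 'Left':
        return ('A', dirt)          # no effect if already in the leftmost square
    if action == 'Right':
        return ('B', dirt)          # no effect if already in the rightmost square
    if action == 'Suck':
        return (loc, dirt - {loc})  # no effect if the square is already clean
    raise ValueError(action)

def goal_test(state):
    return not state[1]             # goal: no dirty squares remain

# 2 agent locations x 2^2 dirt configurations = 8 world states
states = [(loc, frozenset(d))
          for loc in LOCS
          for d in [set(), {'A'}, {'B'}, {'A', 'B'}]]
```

Enumerating `states` confirms the 2 x 2² = 8 count claimed above, and each step of a solution costs 1, matching the path cost definition.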
An Example Toy Problem 2: 8 Puzzle Problem
In a sliding-tile puzzle, a number of tiles (sometimes called
blocks or pieces) are arranged in a grid with one or more blank
spaces so that some of the tiles can slide into a blank space. One
variant is the Rush Hour puzzle, in which cars and trucks slide
around a 6 x 6 grid in an attempt to free a car from the traffic
jam. Perhaps the best-known variant is the 8-puzzle (see Figure
below), which consists of a 3 x 3 grid with eight numbered tiles
and one blank space; the 15-puzzle is played on a 4 x 4 grid. The
object is to reach a specified goal state, such as the one shown
on the right of the figure. The standard formulation of the
8-puzzle is as follows:
STATES: A state description specifies the location of each of the
tiles.
INITIAL STATE: Any state can be designated as the initial state.
(Note that a parity property partitions the state space: any given
goal can be reached from exactly half of the possible initial
states.)
ACTIONS: While in the physical world it is a tile that slides, the
simplest way of describing the actions is to think of the blank
space moving Left, Right, Up, or Down. If the blank is at an edge
or corner then not all actions are applicable.
TRANSITION MODEL: Maps a state and an action to a resulting state;
for example, if we apply Left to the start state in the Figure
below, the resulting state has the 5 and the blank switched.
A typical instance of the 8-puzzle
GOAL TEST: It identifies whether we have reached a goal state.
Although any state could be the goal, we typically specify a state
with the numbers in order, as in the Figure above.
ACTION COST: Each action costs 1.
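The blank-moving formulation can be sketched as follows. This hedged Python version represents a state as a tuple of nine numbers read row by row, with 0 standing for the blank; the sample start state is illustrative, chosen so that Left swaps the 5 with the blank as described above:

```python
# Index offsets for moving the blank within the 3 x 3 grid.
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def actions(state):
    """Moves of the blank that stay on the board."""
    i = state.index(0)
    acts = []
    if i >= 3:     acts.append('Up')     # not in the top row
    if i < 6:      acts.append('Down')   # not in the bottom row
    if i % 3 > 0:  acts.append('Left')   # not in the left column
    if i % 3 < 2:  acts.append('Right')  # not in the right column
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

With the blank in the centre, all four actions are applicable; in a corner only two are, which is exactly the edge/corner restriction noted above.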
An Example Toy Problem 3: 8 Queens Problem
The 8-queens problem can be defined as follows: Place 8
queens on an (8 by 8) chess board such that none of the queens
attacks any of the others. A configuration of 8 queens on the board
is shown in figure 1, but this does not represent a solution as the
queen in the first column is on the same diagonal as the queen in
the last column.

Figure 1: Almost a solution of the 8-queens problem


This problem can be solved by searching for a solution. The
initial state is given by the empty chess board. Placing a queen on
the board represents an action in the search problem. A goal state is
a configuration where none of the queens attacks any of the others.
Note that every goal state is reached after exactly 8 actions.
This formulation as a search problem can be improved when
we realize that, in any solution, there must be exactly one queen in
each of the columns. Thus, the possible actions can be restricted to
placing a queen in the next column that does not yet contain a
queen. This reduces the branching factor from (initially) 64 to 8.
Furthermore, we need only consider those rows in the next
column that are not already attacked by a queen that was
previously on the board. This is because the placing of further
queens on the board can never remove the mutual attack and turn
the configuration into a solution.
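The improved incremental formulation can be written as a small backtracking search. In this Python sketch (function names are illustrative), a state is a tuple of row numbers, one per already-filled column, and the actions place a queen in the next column on a row not attacked by any earlier queen:

```python
def actions(state):
    """Safe rows in the next column: not the same row, not the same diagonal."""
    col = len(state)
    return [row for row in range(8)
            if all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(state))]

def solve(state=()):
    """Depth-first search over the restricted state space."""
    if len(state) == 8:
        return state                 # goal: 8 mutually non-attacking queens
    for row in actions(state):
        solution = solve(state + (row,))
        if solution:
            return solution
    return None
```

Because attacked rows are pruned as soon as a queen is placed, the branching factor never exceeds 8, and every goal state is reached after exactly 8 actions, as noted above.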

Real World Problems


1. Route Finding Problem : It is defined in terms of specified
locations and transitions along links between them. Route-finding
algorithms are used in a variety of applications. Some, such as Web
sites and in-car systems that provide driving directions, are
relatively straightforward. Others, such as routing video streams in computer
networks, military operations planning, and airline travel-planning
systems, involve much more complex specifications. Consider the
airline travel problems that must be solved by a travel-planning Web
site:
• States: Each state obviously includes a location (e.g., an
airport) and the current time. Furthermore, because the cost of
an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or
international, the state must record extra information about
these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any
seat class, leaving after the current time, leaving enough time
for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight
will have the flight's destination as the current location and the
flight's arrival time as the current time.
• Goal test: Are we at the final destination specified by the
user?
• Path cost: This depends on monetary cost, waiting time,
flight time, customs and immigration procedures, seat quality,
time of day, type of airplane, frequent-flyer mileage awards,
and so on.
2. Touring Problem : Touring problems are closely related to
route-finding problems, but with an important difference. Consider,
for example, the problem “Visit every city in Figure below, at least
once, starting and ending in Bucharest.” As with route finding, the
actions correspond to trips between adjacent cities. The state space,
however, is quite different. Each state must include not just the
current location but also the set of cities the agent has visited. So
the initial state would be In(Bucharest), Visited({Bucharest}), a
typical intermediate state would be In(Vaslui), Visited({Bucharest,
Urziceni, Vaslui}), and the goal test would check whether the agent
is in Bucharest and all 20 cities have been visited.

3. Travelling Salesman Problem : The traveling salesperson


problem (TSP) is a touring problem in which each city must be
visited exactly once. The aim is to find the shortest tour. The
problem is known to be NP-hard, but an enormous amount of effort
has been expended to improve the capabilities of TSP algorithms. In
addition to planning trips for traveling salespersons, these
algorithms have been used for tasks such as planning movements of
automatic circuit-board drills and of stocking machines on shop
floors.
4. VLSI Layout Problem : A VLSI layout problem requires
positioning millions of components and connections on a chip to
minimize area, minimize circuit delays, minimize stray
capacitances, and maximize manufacturing yield. The layout
problem comes after the logical design phase and is usually split
into two parts: cell layout and channel routing. In cell layout, the
primitive components of the circuit are grouped into cells, each of
which performs some recognized function. Each cell has a fixed
footprint (size and shape) and requires a certain number of
connections to each of the other cells. The aim is to place the cells
on the chip so that they do not overlap and so that there is room for
the connecting wires to be placed between the cells. Channel
routing finds a specific route for each wire through the gaps
between the cells. These search problems are extremely complex,
but definitely worth solving.
5. Robot Navigation Problem : Robot Navigation is a
generalization of the route finding problem described earlier. Rather
than following a discrete set of routes, a robot can move in a
continuous space with an infinite set of possible actions and states.
For a circular robot moving on a flat surface, the space is essentially
two-dimensional. When the robot has arms and legs or wheels that
must also be controlled, the search space becomes many-
dimensional. Advanced techniques are required just to make the
search space finite.

Searching for Solutions


In Artificial Intelligence, search techniques are universal
problem-solving methods. Rational agents or problem-solving agents
in AI mostly use these search strategies or algorithms to solve a
specific problem and provide the best result. Problem-solving
agents are goal-based agents and use an atomic representation of
states.
Terminologies:
o Search: Searching is a step-by-step procedure to solve a search
problem in a given search space. A search problem has three main
factors:
1. Search Space: The set of possible solutions a system may
have.
2. Start State: The state from which the agent begins the
search.
3. Goal Test: A function which observes the current state and
returns whether the goal state has been achieved.
o Search tree: A tree representation of the search problem is
called a search tree. The root of the search tree is the root
node, which corresponds to the initial state.
o Actions: A description of all the actions available to the
agent.
o Transition model: A description of what each action does.
o Path Cost: A function which assigns a numeric cost to each
path.
o Solution: An action sequence which leads from the start node to
a goal node.
o Optimal Solution: A solution with the lowest cost among all
solutions.
Properties of Search Algorithms:
Following are the four essential properties used to compare the
efficiency of search algorithms:
1. Completeness: A search algorithm is said to be complete if it
is guaranteed to return a solution whenever at least one solution
exists for the given input.
2. Optimality: If the solution found by an algorithm is guaranteed
to be the best solution (lowest path cost) among all solutions,
then it is said to be an optimal solution.
3. Time Complexity: A measure of the time an algorithm takes to
complete its task.
4. Space Complexity: The maximum storage space required at any
point during the search, expressed in terms of the complexity of
the problem.
Types of search algorithms
Based on the search problems we can classify the search
algorithms into uninformed (Blind search) search and informed
search (Heuristic search) algorithms.

Uninformed Search Strategies


Uninformed search is a class of general-purpose search algorithms
which operate in a brute-force way. Uninformed search algorithms
have no additional information about states or the search space
other than how to traverse the tree, so uninformed search is also
called blind search.
Following are the various types of uninformed search
algorithms:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
Breadth-first Search
o Breadth-first search is the most common search strategy for
traversing a tree or graph. This algorithm searches breadthwise
in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the
tree and expands all successor nodes at the current level before
moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general
graph-search algorithm.
o Breadth-first search is implemented using a FIFO queue data
structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS
will find the minimal solution, i.e. the one requiring the least
number of steps.
Disadvantages:
o It requires lots of memory, since each level of the tree must be
saved in memory in order to expand the next level.
o BFS needs lots of time if the solution is far away from the root
node.
Example:
In the tree structure below, we show traversal using the BFS
algorithm from root node S to goal node K. BFS traverses level by
level, so it follows the path shown by the dotted arrow, and the
traversed path will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS can be obtained from
the number of nodes traversed until the shallowest goal node:
T(b) = 1 + b + b² + ... + b^d = O(b^d), where d is the depth of
the shallowest solution and b is the branching factor (the number
of successors of each node).
Space Complexity: The space complexity of BFS is given by the
memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means that if the shallowest
goal node is at some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing
function of the depth of the node.
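A minimal BFS sketch in Python, using a FIFO queue of paths over a small illustrative graph (the graph below is not the figure from the text):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a shallowest path from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # expand the oldest (shallowest) path
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:       # avoid revisiting states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Illustrative adjacency-list graph.
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G']}
```

Because the queue is FIFO, whole levels are expanded before the next level starts, which is why BFS finds the solution with the fewest steps.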
Depth-first Search
o Depth-first search is a recursive algorithm for traversing a
tree or graph data structure.
o It is called depth-first search because it starts from the root
node and follows each path to its greatest depth before moving
to the next path.
o DFS uses a stack data structure for its implementation.
The process of the DFS algorithm is similar to that of the BFS
algorithm.
Advantages:
o DFS requires much less memory, as it only needs to store a stack
of the nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm
(if it traverses the right path).
Disadvantages:
o There is the possibility that many states keep re-occurring, and
there is no guarantee of finding a solution.
o The DFS algorithm searches deep into the tree and may sometimes
enter an infinite loop.
Example:
In the search tree below, we show the flow of depth-first search;
it follows the order:
Root node ---> left node ---> right node.
It starts searching from root node S and traverses A, then B, then
D and E. After traversing E it backtracks, as E has no other
successor and the goal node has not yet been found. After
backtracking it traverses node C and then G, where it terminates,
as it has found the goal node.
Completeness: The DFS algorithm is complete within a finite state
space, as it will expand every node within a bounded search tree.
Time Complexity: The time complexity of DFS is equivalent to the
number of nodes traversed by the algorithm:
T(b) = 1 + b + b² + ... + b^m = O(b^m),
where m is the maximum depth of any node; this can be much larger
than d (the depth of the shallowest solution).
Space Complexity: DFS needs to store only a single path from the
root node, so its space complexity is equivalent to the size of
the fringe, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may take a large
number of steps or incur a high cost to reach the goal node.
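A minimal DFS sketch in Python using an explicit stack. The small graph below is illustrative, chosen so the traversal mirrors the S, A, B, D, E, backtrack, C, G order described above:

```python
def dfs(graph, start, goal):
    """Return some path from start to goal (not necessarily shortest), or None."""
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # expand the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children reversed so the leftmost child is expanded first.
        for nxt in reversed(graph.get(node, [])):
            if nxt not in visited:
                stack.append(path + [nxt])
    return None

# Illustrative adjacency-list graph.
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G'], 'E': []}
```

The LIFO stack is what drives the search deep before wide: the last path pushed is always the next one popped, so one branch is followed to its greatest depth before any sibling is touched.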

Depth-Limited Search Algorithm

A depth-limited search algorithm is similar to depth-first search
with a predetermined limit ℓ. Depth-limited search can solve the
drawback of infinite paths in depth-first search: in this
algorithm, a node at the depth limit is treated as if it has no
successors.
Depth-limited search can terminate with two kinds of failure:
o Standard failure value: it indicates that the problem has no
solution.
o Cutoff failure value: it indicates that there is no solution for
the problem within the given depth limit.
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of
incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:

Completeness: The DLS algorithm is complete if the shallowest
solution is within the depth limit.
Time Complexity: The time complexity of the DLS algorithm is
O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is
O(b×ℓ).
Optimal: Depth-limited search can be viewed as a special case of
DFS, and it is also not optimal, even when ℓ > d.
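The two failure values can be made concrete in a recursive sketch. This hedged Python version returns a path on success, the string 'cutoff' when the limit was hit (a solution may still exist deeper), or None for standard failure:

```python
def dls(graph, node, goal, limit):
    """Depth-limited search: a path to goal, 'cutoff', or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                  # depth limit reached: treat node as leaf
    cutoff = False
    for nxt in graph.get(node, []):
        r = dls(graph, nxt, goal, limit - 1)
        if r == 'cutoff':
            cutoff = True                # remember that something was pruned
        elif r is not None:
            return [node] + r            # prepend this node to the found path
    # 'cutoff' if any branch was pruned, else standard failure.
    return 'cutoff' if cutoff else None
```

Returning 'cutoff' rather than None whenever a branch was pruned is what lets a caller distinguish "no solution exists" from "no solution within this limit".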
Uniform-cost Search Algorithm
Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph. This algorithm comes into play when a
different cost is available for each edge. The primary goal of
uniform-cost search is to find a path to the goal node which has
the lowest cumulative cost. Uniform-cost search expands nodes
according to their path costs from the root node. It can be used
on any graph/tree where an optimal-cost solution is in demand. The
uniform-cost search algorithm is implemented with a priority
queue, which gives maximum priority to the lowest cumulative cost.
Uniform-cost search is equivalent to the BFS algorithm if the path
cost of all edges is the same.
Advantages:
o Uniform-cost search is optimal, because at every step the path
with the least cost is chosen.
Disadvantages:
o It does not care about the number of steps involved in the
search and is only concerned with path cost, so the algorithm
may get stuck in an infinite loop (for example, along a path of
zero-cost actions).
Example:

Completeness:
Uniform-cost search is complete: if there is a solution, UCS will
find it (provided every step cost is at least some small positive
ε).
Time Complexity:
Let C* be the cost of the optimal solution and ε the smallest step
cost toward the goal. Then the number of steps is about C*/ε + 1;
we add the +1 because we start from state 0 and go up to C*/ε.
Hence, the worst-case time complexity of uniform-cost search is
O(b^(1 + [C*/ε])).
Space Complexity:
The same logic applies to space, so the worst-case space
complexity of uniform-cost search is O(b^(1 + [C*/ε])).
Optimal:
Uniform-cost search is always optimal, as it only selects a path
with the lowest path cost.
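A minimal uniform-cost search sketch in Python, using heapq as the priority queue over a small illustrative weighted graph:

```python
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of a cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]     # priority queue of (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path first
        if node == goal:
            return cost, path            # goal test on expansion keeps it optimal
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# Illustrative weighted graph: node -> [(neighbour, edge cost), ...]
graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2)], 'B': [('G', 1)]}
```

Note that the direct edge S->B costs 5, yet UCS reaches B via A for a cumulative cost of 3, which is exactly the "expand by path cost, not by depth" behaviour described above.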

Iterative deepening depth-first Search

The iterative deepening algorithm is a combination of the DFS and
BFS algorithms. This search algorithm finds the best depth limit
by gradually increasing the limit until a goal is found.
The algorithm performs depth-first search up to a certain
"depth limit" and keeps increasing the depth limit after each
iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first
search's fast search and depth-first search's memory efficiency.
Iterative deepening is a useful uninformed search strategy when
the search space is large and the depth of the goal node is
unknown.
Advantages:
o It combines the benefits of the BFS and DFS search algorithms in
terms of fast search and memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of
the previous phase.
Example:
The following tree structure shows the iterative deepening
depth-first search. The IDDFS algorithm performs several
iterations until it finds the goal node. The iterations performed
by the algorithm are:

1st Iteration -----> A
2nd Iteration ----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
If b is the branching factor and d is the depth of the shallowest
goal, the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
The IDDFS algorithm is optimal if the path cost is a
non-decreasing function of the depth of the node.
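A sketch of iterative deepening in Python, recording the visit order of the final iteration. The tree below is illustrative, chosen to reproduce the iteration orders listed above (note the last iteration stops as soon as it reaches the goal K):

```python
def dls_order(tree, node, goal, limit, order):
    """Depth-limited DFS that records each visited node in preorder."""
    order.append(node)
    if node == goal:
        return True
    if limit == 0:
        return False                     # depth limit reached: prune here
    # any() short-circuits, so the search stops once the goal is found.
    return any(dls_order(tree, c, goal, limit - 1, order)
               for c in tree.get(node, []))

def iddfs(tree, root, goal, max_depth=10):
    """Repeat depth-limited search with a growing limit."""
    for limit in range(max_depth + 1):
        order = []                       # each iteration redoes earlier work
        if dls_order(tree, root, goal, limit, order):
            return limit, order
    return None

# Illustrative tree matching the iteration listing above.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
```

Rerunning the shallow levels on every iteration is the "repeats all the work of the previous phase" drawback; it costs only a constant factor, because most nodes of a tree live at the deepest level.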
Bidirectional Search Algorithm
The bidirectional search algorithm runs two simultaneous searches,
one from the initial state, called the forward search, and the
other from the goal node, called the backward search, to find the
goal node. Bidirectional search replaces one single search graph
with two small subgraphs, one starting the search from the initial
vertex and the other starting from the goal vertex. The search
stops when the two frontiers intersect.
Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, the goal state must be known in
advance.
Example:
In the search tree below, the bidirectional search algorithm is
applied. The algorithm divides one graph/tree into two sub-graphs.
It starts traversing from node 1 in the forward direction and from
goal node 16 in the backward direction.
The algorithm terminates at node 9, where the two searches meet.

Completeness: Bidirectional search is complete if we use BFS in
both searches.
Time Complexity: The time complexity of bidirectional search using
BFS is O(b^(d/2)), since each search need only go about half the
distance.
Space Complexity: The space complexity of bidirectional search is
O(b^(d/2)).
Optimal: Bidirectional search is optimal when both directions use
BFS and all step costs are equal.
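A sketch of bidirectional search in Python, running BFS from both ends of a small illustrative undirected graph; the function returns the node where the two frontiers meet (recovering the full path would additionally require parent pointers on both sides):

```python
from collections import deque

def bidirectional(graph, start, goal):
    """Return the node where the forward and backward BFS frontiers meet."""
    if start == goal:
        return start
    f_seen, b_seen = {start}, {goal}
    f_q, b_q = deque([start]), deque([goal])
    while f_q and b_q:
        # Alternate: expand one full level of each search.
        for q, seen, other in ((f_q, f_seen, b_seen),
                               (b_q, b_seen, f_seen)):
            for _ in range(len(q)):
                node = q.popleft()
                for nxt in graph.get(node, []):
                    if nxt in other:
                        return nxt       # the two frontiers intersect here
                    if nxt not in seen:
                        seen.add(nxt)
                        q.append(nxt)
    return None

```

On a simple chain 1-2-3-4-5 the two searches meet in the middle, at node 3, after each side has explored only about half the path, which is where the O(b^(d/2)) saving comes from.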
