ITECH WORLD AKTU
Artificial Intelligence (BCAI501)
Unit 2: Problem Solving Methods
Syllabus
• Problem Solving Methods
• Search Strategies: Uninformed, Informed, Heuristics, Local Search
• Algorithms and Optimization Problems
• Searching with Partial Observations
• Constraint Satisfaction Problems: Constraint Propagation, Backtracking Search
• Game Playing: Optimal Decisions in Games, Alpha-Beta Pruning, Stochastic Games
Problem Solving Methods
Definition: Problem solving in AI involves using algorithms to find solutions
to given problems, either by searching through a space of possible solutions or by
employing heuristics to make informed decisions.
Search Strategies
Uninformed Search
Uninformed search strategies do not have any additional information about states beyond
the problem definition. They explore the search space blindly. Common uninformed
search methods include:
• Breadth-First Search (BFS): Explores all nodes at the present depth before
moving to the next level. Example: Navigating through a maze.
• Depth-First Search (DFS): Explores as far as possible along a branch before
backtracking. Example: Solving a puzzle by trying all possible moves.
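The two strategies above can be sketched in a few lines. This is a minimal illustration on a hypothetical adjacency-list graph (node names A–F are made up for the example): BFS returns a path with the fewest edges, while DFS follows one branch to the end before backtracking.

```python
from collections import deque

# Hypothetical directed graph as an adjacency list.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
         "D": ["F"], "E": ["F"], "F": []}

def bfs(start, goal):
    """Breadth-first search: explores level by level; finds a fewest-edges path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def dfs(start, goal, path=None, visited=None):
    """Depth-first search: follows one branch as far as possible, then backtracks."""
    path = (path or []) + [start]
    visited = visited or set()
    visited.add(start)
    if start == goal:
        return path
    for nxt in graph[start]:
        if nxt not in visited:
            result = dfs(nxt, goal, path, visited)
            if result:
                return result
    return None
```

On this graph both happen to reach F through B and D; in general, only BFS guarantees a shortest path in terms of edge count.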
Informed Search (Heuristic Search)
Informed search uses problem-specific knowledge beyond the problem definition. It uses
heuristics to find solutions more efficiently:
• A* Search: Combines the cost already paid to reach a node, g(n), with a heuristic
estimate of the remaining cost to the goal, h(n), and expands nodes in increasing
order of f(n) = g(n) + h(n). Example: Pathfinding in maps.
• Greedy Best-First Search: Expands the node that appears closest to the goal
according to the heuristic.
Explanation: Greedy Best-First Search (GBFS) is a search algorithm that selects
the node that appears to be closest to the goal based on a given heuristic function
h(n), which estimates the cost from the current node n to the goal. GBFS is called
“greedy” because it always chooses the path that looks most promising, based
solely on the heuristic value. It does not consider the path cost already incurred,
only the estimated cost to reach the goal; as a result, it is not guaranteed to find
an optimal solution.
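A compact A* sketch makes the f(n) = g(n) + h(n) ordering concrete. The graph, edge costs, and heuristic values below are invented for the example; the heuristic is chosen to be admissible (it never overestimates the true cost to G).

```python
import heapq

# Hypothetical weighted graph and heuristic estimates to goal "G".
edges = {"S": [("A", 1), ("B", 4)],
         "A": [("B", 2), ("G", 6)],
         "B": [("G", 2)],
         "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}  # admissible estimates

def a_star(start, goal):
    """Expand nodes in increasing order of f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in edges[node]:
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h[nxt], ng, nxt, path + [nxt]))
    return None, float("inf")
```

Here A* finds S-A-B-G with total cost 5. On the same graph, GBFS would move from S straight to B (lowest h) and reach G with cost 6, illustrating why ignoring g(n) can sacrifice optimality.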
Local Search Algorithms and Optimization Problems
Local search algorithms operate using a single current state and try to improve it
iteratively. Common methods:
• Hill Climbing: An iterative optimization algorithm that starts with an arbitrary
solution to a problem and attempts to find a better solution by incrementally
changing a single element of the solution. At each step, Hill Climbing selects the
neighboring state with the highest value according to an objective function. The
process continues until it reaches a peak, where no neighbor has a higher value.
Explanation: Hill Climbing is a local search algorithm that operates by evaluating
the neighboring states of the current state. It moves to the neighbor with the highest
value (for maximization problems) or the lowest value (for minimization problems).
The algorithm terminates when it reaches a state that is better than all of its
neighbors, known as a “local maximum” or “local minimum.” Hill Climbing can be
quick but is prone to getting stuck in local optima, ridges, or plateaus. Variants like
Stochastic Hill Climbing, Random-Restart Hill Climbing, or Simulated Annealing
are used to overcome these limitations.
• Simulated Annealing: A probabilistic optimization technique used to approximate
the global optimum of a given function.
Key Points:
– Basic Idea: Mimics the annealing process in metallurgy where a material is
heated and then slowly cooled to remove defects, allowing it to reach a stable,
low-energy state.
– Exploration vs. Exploitation: Balances between exploring new solutions
and exploiting known good solutions. At higher temperatures, it is more likely
to accept worse solutions to avoid local minima.
– Probability of Acceptance: Always accepts improving moves; accepts a worse
solution with probability exp(-ΔE/T), which shrinks as the temperature T
decreases and as the candidate gets worse.
– Cooling Schedule: Gradually lowers the temperature according to a predefined
schedule. A slower cooling rate increases the chances of finding the
global optimum.
– Avoiding Local Minima: By allowing occasional moves to worse solutions,
the algorithm can escape from local minima and continue searching for a global
optimum.
– Termination: The process stops when the system reaches a stable state, or
the temperature reaches near zero, making further exploration unlikely.
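The Hill Climbing loop described above is short enough to sketch directly. The objective function and integer neighborhood below are hypothetical toy choices, picked so the landscape has a single peak at x = 3.

```python
def hill_climb(objective, start, neighbors):
    """Greedy local search: move to the best neighbor until none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current  # local maximum: no neighbor is better
        current = best

# Toy objective with its peak at x = 3 (hypothetical example).
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]  # integer neighborhood
```

Starting from 10 (or -5) the climb converges to 3. On a multimodal objective the same loop would stop at whichever local peak is nearest the start, which is exactly the weakness the variants above address.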
Constraint Satisfaction Problems (CSPs)
Constraint Satisfaction Problems (CSPs) are mathematical problems defined as a set of
objects whose state must satisfy a number of constraints or limitations.
• Constraint Propagation: A technique used to reduce the search space by deducing
variable domains based on the constraints.
– Basic Concept: Propagates the implications of a constraint through the
variables involved, thereby reducing the number of possible values that variables
can take.
– Types:
∗ Node Consistency: Ensures each variable meets its unary constraints.
∗ Arc Consistency: Ensures that for every value of a variable, there is a
consistent value of a connected variable.
∗ Path Consistency: Extends arc consistency to triples of variables to
ensure any consistent pair of assignments can be extended to a third
variable.
– Advantages: Reduces the need for backtracking by narrowing down the
search space before searching begins.
– Limitations: Might not completely eliminate the need for search; it only
simplifies the problem.
• Backtracking Search: A depth-first search algorithm that incrementally builds
candidates for the solution and abandons each candidate (“backtracks”) as soon as
it determines that the candidate cannot possibly be completed to a valid solution.
– Basic Concept: Starts with an empty assignment and makes a series of
incremental assignments to variables. When a constraint is violated, the
algorithm backtracks to the most recent decision point and tries a different path.
– Advantages: Guarantees finding a solution if one exists and is simpler to
implement compared to more advanced techniques.
– Heuristics:
∗ Minimum Remaining Values (MRV): Chooses the variable with the
fewest legal values.
∗ Degree Heuristic: Chooses the variable with the most constraints on
remaining variables.
∗ Least Constraining Value: Chooses the value that rules out the fewest
choices for neighboring variables.
– Limitations: Can be slow if the problem is large or if many constraints are
present.
– Improvements:
∗ Forward Checking: Keeps track of remaining legal values for the
unassigned variables.
∗ Constraint Learning: Records constraints discovered during search to
avoid exploring the same failed paths.
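Arc consistency, the most widely used form of constraint propagation above, can be sketched as the classic AC-3 procedure. This is a minimal version assuming a single binary constraint shared by all arcs; the two-variable domains used in the demo are invented for the example.

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """Enforce arc consistency: remove values that have no supporting value
    in a neighboring variable's domain. Returns False if a domain empties."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        pruned = [vx for vx in domains[x]
                  if not any(constraint(vx, vy) for vy in domains[y])]
        if pruned:
            domains[x] = [v for v in domains[x] if v not in pruned]
            if not domains[x]:
                return False  # inconsistency detected
            # Re-examine arcs pointing at x, since x's domain shrank.
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))
    return True

# Toy inequality CSP: A is fixed to red, so B must lose red.
doms = {"A": ["red"], "B": ["red", "green"]}
nbrs = {"A": ["B"], "B": ["A"]}
```

Running `ac3(doms, nbrs, lambda a, b: a != b)` prunes red from B before any search starts, which is the "narrowing down the search space" advantage noted above.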
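Backtracking search with the MRV heuristic can likewise be sketched in a few lines. The three-region map-coloring instance is a hypothetical toy problem; all constraints are assumed to be pairwise inequalities between neighbors.

```python
def backtrack(assignment, variables, domains, neighbors):
    """Depth-first assignment with MRV variable ordering; backtracks on conflict."""
    if len(assignment) == len(variables):
        return assignment  # every variable assigned consistently
    # MRV: pick the unassigned variable with the fewest remaining values.
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        # Consistency check against already-assigned neighbors (inequality).
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result:
                return result
            del assignment[var]  # undo and try the next value
    return None  # no value works: trigger backtracking in the caller

# Toy 3-coloring of three mutually adjacent regions (hypothetical).
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
```

Because every pair of regions is adjacent, any returned assignment must use three distinct colors; adding forward checking would prune each neighbor's domain immediately after an assignment instead of waiting for a conflict.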
Game Playing
Game playing in Artificial Intelligence involves designing agents that can make optimal
decisions in competitive environments. These agents use search algorithms and heuristics
to evaluate possible moves and select the most advantageous one.
• Optimal Decisions in Games:
– Definition: Involves making the best possible moves to maximize the chance
of winning in a two-player zero-sum game.
– Minimax Algorithm: A decision rule for minimizing the possible loss for
a worst-case scenario. In games like chess, it aims to maximize a player’s
minimum payoff.
– Strategy: Considers all possible moves and countermoves by the opponent to
choose the optimal move that leads to the most favorable outcome.
– Limitations: The computation can be very intensive for games with large
search spaces (e.g., Go, Chess).
• Alpha-Beta Pruning:
– Definition: An optimization technique for the minimax algorithm that
reduces the number of nodes evaluated in the search tree.
– How it Works: Maintains two values, Alpha (the best value the maximizer
can guarantee so far) and Beta (the best value the minimizer can
guarantee so far).
– Pruning: As soon as Beta ≤ Alpha at a node, the remaining children cannot
influence the final decision, so the algorithm stops evaluating them, effectively
“pruning” the search tree and reducing computational time.
– Advantage: Significantly decreases the number of nodes that are evaluated
by the minimax algorithm without affecting the final result, making it more
efficient.
• Stochastic Games:
– Definition: Games that incorporate elements of chance or randomness,
making them more complex to analyze and solve.
– Characteristics: Unlike deterministic games (e.g., Chess), stochastic
games have outcomes that depend on both players’ strategies and random
events (e.g., rolling dice in Backgammon).
– Strategies: Use probabilistic methods and decision theory to compute
the optimal strategy that maximizes the expected payoff, considering all
possible outcomes.
– Approach: Monte Carlo simulations and Expectimax algorithms are often
used to handle the uncertainty in these games.
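Minimax with alpha-beta pruning, as described above, can be sketched over an abstract game tree. The two-ply tree and its leaf evaluations below are invented for the example; `children` and `value` are assumed callbacks supplied by the game.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning; returns the backed-up value of `node`."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)  # evaluate leaf / depth-limited node
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # cut-off: the minimizer would never allow this branch
        return best
    best = math.inf
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if beta <= alpha:
            break  # cut-off
    return best

# Toy two-ply tree (hypothetical): MAX at the root, MIN at a, b, c.
tree = {"root": ["a", "b", "c"],
        "a": ["a1", "a2"], "b": ["b1", "b2"], "c": ["c1", "c2"]}
vals = {"a1": 3, "a2": 5, "b1": 6, "b2": 9, "c1": 1, "c2": 2}

best = alphabeta("root", 2, -math.inf, math.inf, True,
                 lambda n: tree.get(n, []), lambda n: vals.get(n, 0))
```

The backed-up value is 6 (via branch b). At node c, after seeing c1 = 1 the minimizer's bound (Beta = 1) already falls below Alpha = 6, so c2 is never evaluated; this is exactly the pruning step with the result unchanged.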
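For stochastic games, the Expectimax idea mentioned above replaces MIN nodes with chance nodes that average over outcomes. This is a minimal sketch under the assumption that `children` returns (child, probability) pairs and `kind` labels each node; the tiny tree and its payoffs are hypothetical.

```python
def expectimax(node, children, value, kind):
    """MAX nodes take the best child; chance nodes take the
    probability-weighted average (expected value) of their children."""
    kids = children(node)
    if not kids:
        return value(node)  # leaf payoff
    if kind(node) == "max":
        return max(expectimax(child, children, value, kind)
                   for child, _ in kids)
    # Chance node: expectation over random outcomes.
    return sum(p * expectimax(child, children, value, kind)
               for child, p in kids)

# Toy tree (hypothetical): the root chooses between two chance-dependent
# moves, e.g. two dice rolls with different outcome distributions.
tree = {"root": [("c1", None), ("c2", None)],
        "c1": [("win", 0.5), ("lose", 0.5)],
        "c2": [("draw", 1.0)]}
vals = {"win": 10, "lose": 0, "draw": 4}

ev = expectimax("root", lambda n: tree.get(n, []), lambda n: vals[n],
                lambda n: "max" if n == "root" else "chance")
```

The root prefers the risky move c1 because its expected payoff (0.5 × 10 + 0.5 × 0 = 5) exceeds the sure payoff 4, which is the "maximize the expected payoff" strategy described above.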