PAI Unit-1

The document provides an overview of Artificial Intelligence (AI), detailing its definition, history, applications, and various techniques used in problem-solving. It covers the evolution of AI from early foundations to the deep learning era, along with real-world applications in gaming, computer vision, natural language processing, and robotics. Additionally, it explains different search strategies and algorithms, such as A* and AO*, used in AI problem-solving.

Principles of Artificial Intelligence

Asfiya Abidi
Assistant Professor
Dr. Akhilesh Das Gupta Institute of Professional Studies (ADGIPS)
Unit-1 Introduction to AI
 What is Artificial Intelligence?
 Artificial Intelligence is a branch of computer science that aims to create systems capable of performing tasks that typically require human intelligence.
 These tasks include problem solving, speech recognition, language translation, decision making, etc.
 In short, AI enables machines to "think" and act intelligently.

• Thought process and reasoning: Thinking Humanly / Thinking Rationally
• Behaviour: Acting Humanly / Acting Rationally


History of Artificial Intelligence

 Early Foundations (1940s–1950s)


 Alan Turing proposes the Turing Test (1950).
 Claude Shannon develops information theory.
 John von Neumann builds early computer architecture.

 Birth of AI (1956)
 Term 'Artificial Intelligence' coined by John McCarthy.
 Dartmouth Conference marks the beginning of AI research.
 Attended by Marvin Minsky, Herbert Simon, Allen Newell.
 Early Years (1956–1970s)
 In this period, AI focused on symbolic methods for problem solving, and early game-playing programs (e.g., chess) were developed.

 Machine Learning (1990s–2000s)


 Shift to data-driven approaches.
 Rise of Support Vector Machines, Bayesian networks.

 Deep Learning Era (2010–Present)


 Big data and computational power drive progress.
 Rise of Large Language Models (LLMs) like GPT, BERT, ChatGPT.
Applications of AI in the Real World
 Gaming
 AI Role: Controls non-player characters (NPCs), decision-making, difficulty adjustments.
 Examples:
 Chess
 Video Games: Enemy behavior and strategy adaptation in games.

 Computer Vision
 AI Role: Enables machines to interpret and understand images or videos.
 Examples:
 Facial Recognition: Used in phones, surveillance.
 Medical Imaging: AI detects tumors or fractures in X-rays, MRIs.

 Expert Systems
 AI Role: Mimics decision-making ability of a human expert.
 Example: Medical Diagnosis Systems - Suggest treatment based on symptoms.
 Natural Language Processing (NLP)
 AI Role: Enables machines to understand, interpret, and respond in human language.
 Examples:
 Chatbots: Customer support bots on websites.
 Language Translation: Google Translate.
 Speech Assistants: Alexa, Siri, Google Assistant.

 Robotics
 AI Role: Empowers robots with autonomy, learning, and decision-making capabilities.
 Examples:
 Industrial Robots: Automated manufacturing, like in car factories.
 Medical Robots: Surgical robots for precision operations.
 Service Robots: Used in hotels, delivery, or elderly care.
 Autonomous Vehicles: AI-based navigation and control.
AI Techniques
 Problem Solving in AI

 Problem solving in AI means finding a sequence of actions that leads from an initial
state to a desired goal state.
 To do this, AI uses several structured methods:

 Production Systems
 State Space Search
 Depth First Search (DFS)
 Breadth First Search (BFS)
 Heuristic Search
 Hill Climbing
 Best First Search
 A*
 Problem Reduction
 AO*
 Constraint Satisfaction
 Means-End Analysis
Production Systems
 Definition: A production system is a framework for solving problems using a set of
rules and a working memory.

 Components:
• Set of production rules: If–then rules (e.g., If condition is true, then take action).
• Working memory: Stores the current state of the problem.
• Control strategy: Decides which rule to apply next.

 Example: In a medical diagnosis system:


• Rule: If patient has fever and cough, then possible flu.
• Working memory: Patient symptoms.
• Control strategy: Match symptoms to rules and apply the most relevant.
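The three components above can be sketched as a tiny forward-chaining rule engine (a minimal illustration; the rules and symptom names are invented for the example):

```python
# Minimal production system: rules + working memory + control strategy.
# Each production rule: (set of conditions, conclusion to add).
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible cold"),
]

def run_production_system(working_memory):
    """Control strategy: keep applying any rule whose conditions hold."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire a rule only if it matches and has not fired yet.
            if conditions <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)
                changed = True
    return working_memory

print(run_production_system({"fever", "cough"}))  # includes "possible flu"
```

The working memory is just a set of facts; a real system would add conflict resolution when several rules match at once.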
State Space Search
 Definition: A representation of all possible states of a problem and the transitions
between them.

 Idea: Start from the initial state → move through intermediate states → reach the goal
state.

 Components:
• Initial state (where you start).
• Goal state (where you want to reach).
• Operators/actions (how you move from one state to another).
 Example: Robot Path Planning

• Initial state: A warehouse robot is at its starting location (e.g., near the
charging station).
• Goal state: The robot must reach a target location (e.g., a shelf to pick up an
item).
• Operators: Move forward, backward, left, or right by one grid cell (or turn in
place).
• State space: All possible positions and orientations of the robot in the
warehouse.
• Search process: The AI searches through possible sequences of moves until it
finds a safe path from start to goal while avoiding obstacles (walls, boxes,
other robots).
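The robot example can be encoded directly: states are grid cells, operators are one-cell moves, and a goal test checks the target cell (a minimal sketch; the grid size, obstacle positions, and goal cell are invented):

```python
# State space for a robot on a 4x4 grid with obstacles.
OBSTACLES = {(1, 1), (2, 3)}   # blocked cells (boxes, walls)
SIZE = 4
GOAL = (3, 3)

def successors(state):
    """Operators: move one cell up/down/left/right, staying on the grid."""
    x, y = state
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in moves
            if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in OBSTACLES]

def is_goal(state):
    return state == GOAL

print(successors((0, 0)))  # [(1, 0), (0, 1)]
```

Any of the search strategies that follow (DFS, BFS, A*) can then explore this state space through the `successors` function.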
Depth First Search (DFS)

 Definition: A search strategy that explores as far as possible along one branch before
backtracking.

 Key Points:
• Uses a stack (LIFO order).
• Can be implemented using recursion.
• May find a solution faster if it's deep in the tree.
• Risk: Can get stuck in an infinite loop if cycles exist and are not handled.
 Example: File System Search

• Imagine your computer’s folders form a tree structure.


• Initial state: Start at the root directory.
• Goal: Find a file named “xyz”.
• DFS process:
1. Open the first folder you find.
2. Go inside it, then inside its first subfolder, and so on — going as deep as possible.
3. If the file isn’t found, backtrack to the previous folder and try the next subfolder.
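The file-system example can be sketched with a nested dictionary standing in for folders (the folder and file names are invented):

```python
# DFS over a folder tree: nested dicts are folders, None marks a file.
TREE = {
    "root": {
        "docs": {"a.txt": None, "xyz": None},
        "pics": {"b.png": None},
    }
}

def dfs_find(tree, target, path=""):
    """Go as deep as possible into each entry, backtracking on failure."""
    for name, child in tree.items():
        current = f"{path}/{name}"
        if name == target:
            return current
        if isinstance(child, dict):          # a folder: recurse (stack via recursion)
            found = dfs_find(child, target, current)
            if found:
                return found
    return None                              # backtrack: nothing in this branch

print(dfs_find(TREE, "xyz"))  # /root/docs/xyz
```

Recursion gives the LIFO behaviour for free: the call stack plays the role of the explicit stack.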
Breadth First Search (BFS)

 Definition: A search strategy that explores all the nodes at the current depth before
going deeper.

 Key Points:
• Uses a queue (FIFO order).
• Guaranteed to find the shortest path in terms of number of steps.
• Drawback: Can require a lot of memory.
 Example: File System Search

• Imagine your computer’s folders form a tree structure.


• Initial state: Start at the root directory.
• Goal: Find a file named “xyz”.
• BFS process:
1. Check all the folders in the root directory first.
2. Then open each of those folders in turn and check all their subfolders, level by level, until the file is found.
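The same folder-tree example can be searched breadth-first with an explicit queue (folder and file names are invented):

```python
from collections import deque

# BFS over a folder tree: check everything at one level before descending.
TREE = {
    "root": {
        "docs": {"a.txt": None, "xyz": None},
        "pics": {"b.png": None},
    }
}

def bfs_find(tree, target):
    queue = deque([("", tree)])
    while queue:
        path, node = queue.popleft()         # FIFO: oldest (shallowest) first
        for name, child in node.items():
            current = f"{path}/{name}"
            if name == target:
                return current
            if isinstance(child, dict):      # enqueue folders for the next level
                queue.append((current, child))
    return None

print(bfs_find(TREE, "xyz"))  # /root/docs/xyz
```

Swapping `popleft()` for `pop()` would turn this into an iterative DFS, which is exactly the queue-vs-stack contrast in the comparison that follows.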
Comparison Between BFS and DFS

• Definition: BFS explores all nodes at the current depth level before going deeper; DFS explores as far as possible along one branch before backtracking.
• Data structure used: BFS uses a queue (FIFO: First In, First Out); DFS uses a stack (LIFO: Last In, First Out).
• Search order: BFS searches level by level; DFS searches branch by branch.
• Completeness: BFS always finds a solution if one exists; DFS may get stuck in an infinite path if cycles are not handled.
• Optimality: BFS finds the shortest path (least number of steps); DFS does not guarantee the shortest path.
• Time complexity: BFS is O(b^d), where b = branching factor and d = depth of the shallowest solution; DFS is O(b^m), where m = maximum depth.
• Space complexity: BFS is high, storing all nodes at a given level, O(b^d); DFS is low, storing only one path at a time, O(bm).
• Best for: BFS suits finding the shortest/quickest solution (e.g., shortest route in maps); DFS suits deep solutions when memory is limited.
• Example: BFS is like checking all your friends, then friends-of-friends, to find someone in a social network; DFS is like searching for a file by opening one folder, then its subfolders, going deeper until the file is found.
Heuristic Search
 Definition: A heuristic search is a problem-solving technique in AI that uses heuristics
(rules of thumb, educated guesses, or domain knowledge) to guide the search process
towards the goal more efficiently than blind methods like BFS or DFS.

 Idea: Instead of exploring all possibilities, the algorithm uses a heuristic value that
estimates how close a state is to the goal.

 Real-Life Examples

• Google Maps / GPS Navigation: Uses heuristics like straight-line distance (“as the
crow flies”) to estimate which route is most promising before refining with actual road
distances.
• Robot Path Planning: A robot uses distance-to-goal as a heuristic to move efficiently in
a warehouse.
Hill Climbing

 Definition: Hill Climbing is a heuristic search algorithm that continuously moves towards the best neighbour (the state that looks closest to the goal according to a heuristic function) until it reaches a peak (solution).

 Idea: Imagine standing on a hill in the fog. You can’t see the whole mountain, but you
want to reach the top.
• You feel around your immediate surroundings.
• If a nearby spot is higher (better), you move there.
• You keep moving uphill until no higher move is possible.
 Algorithm (Simple Hill Climbing Steps)

• Start at an initial state.


• Evaluate all neighbouring states.
• Move to the neighbor with the highest evaluation (best heuristic value).
• Repeat until no better neighbor exists.
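The steps above can be sketched as steepest-ascent hill climbing on a toy one-dimensional landscape (the scoring function and step size are invented for illustration):

```python
# Steepest-ascent hill climbing on a toy 1-D landscape.
def score(x):
    """Heuristic to maximize: a single peak at x = 7 (invented)."""
    return -(x - 7) ** 2

def hill_climb(start, step=1):
    current = start
    while True:
        neighbours = [current - step, current + step]
        best = max(neighbours, key=score)     # evaluate all neighbours
        if score(best) <= score(current):     # no better neighbour: at a peak
            return current
        current = best                        # move uphill

print(hill_climb(0))  # 7
```

Because this landscape has only one peak, the climb always reaches it; with several peaks the algorithm could stop at a local maximum, which is why the stochastic variant exists.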

 Types of Hill Climbing

• Simple Hill Climbing → Chooses the first better neighbor, not necessarily the best.
• Steepest-Ascent Hill Climbing → Evaluates all neighbors and picks the best one.
• Stochastic Hill Climbing → Randomly selects from uphill moves to avoid getting
stuck.
 Real-Life Examples

• Route optimization: A delivery app chooses slightly shorter roads step by step to reach
the quickest route.
• Job scheduling: Assigning tasks to workers by improving the schedule incrementally.
• Game playing: A chess program improving its position move by move using
evaluation functions.
• Business Strategy: A company gradually improves profits by making small local
decisions (reduce cost here, add feature there). They may stop at a “local maximum”
profit instead of finding the best possible global strategy.
• Website SEO Optimization: A webmaster tweaks keywords, improves load speed, and
adjusts layout step by step. Each improvement raises ranking slightly until no more
obvious changes help.
Best-First Search
 Definition: A search strategy that selects the next node to expand based on an evaluation function f(n).
• This function estimates how promising a node is.
• The search always chooses the “best-looking” node next.

 Why it’s useful: Unlike blind searches (DFS, BFS), best-first search uses knowledge
(heuristics) to guide the search more intelligently.

 How it works:
• Maintains a list (often a priority queue) of nodes to be explored.
• Each node is given a value based on the evaluation function f(n).
• Always expands the node with the lowest f(n) (the “best” one).
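A greedy variant, where f(n) is just the heuristic h(n), fits in a few lines with a priority queue (the graph and heuristic values are invented for illustration):

```python
import heapq

# Greedy best-first search: priority queue ordered by f(n) = h(n).
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
H = {"S": 5, "A": 2, "B": 4, "G": 0}   # estimated distance to the goal

def best_first(start, goal):
    open_list = [(H[start], start, [start])]
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # lowest f(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in GRAPH[node]:
            heapq.heappush(open_list, (H[nxt], nxt, path + [nxt]))
    return None

print(best_first("S", "G"))  # ['S', 'A', 'G']
```

Here the search jumps to A first because h(A) = 2 looks more promising than h(B) = 4; A* (next) refines this by also counting the cost already paid.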
A* Search Algorithm

 Definition: A* is a best-first heuristic search algorithm that finds the shortest (optimal) path to the goal by considering both:
• the cost so far from the start to the current node, g(n), and
• the estimated cost to reach the goal from the current node, h(n).

 Evaluation Function:
f(n) = g(n) + h(n)
• g(n): Actual cost from start → node n.
• h(n): Heuristic estimate (how far from goal)
• f(n): Estimated total cost of path through n
 Why A* is powerful

 Combines the advantages of Uniform Cost Search (uses actual cost, g(n)) and Greedy Search (uses heuristic, h(n)).
 If the heuristic is:
• Admissible (never overestimates the true cost), then A* is guaranteed to find the
optimal solution.
• Consistent (monotonic), then A* is both optimal and efficient.
 Steps of A*

• Start at the initial node.


• Put it in an open list (priority queue sorted by f(n)).
• Expand the node with the lowest f(n).
• For each neighbour:
 Compute g(n), h(n), and f(n).
 Add to the open list if not already explored.
• Repeat until the goal node is reached.
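The steps above can be sketched on a small weighted graph (the graph, edge costs, and heuristic values are invented, with h chosen to be admissible):

```python
import heapq

# A* on a small weighted graph. h never overestimates the true
# remaining cost, so the result is optimal.
GRAPH = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 6}, "B": {"G": 3}, "G": {}}
H = {"S": 5, "A": 4, "B": 2, "G": 0}

def a_star(start, goal):
    open_list = [(H[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:  # already reached more cheaply
            continue
        best_g[node] = g
        for nxt, cost in GRAPH[node].items():
            g2 = g + cost                         # actual cost so far
            heapq.heappush(open_list, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(a_star("S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Note how A* rejects the direct route S-A-G (cost 7) in favour of S-A-B-G (cost 6), which pure greedy search would miss.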
 Example (Pathfinding in a Grid Map)

• You want to find the shortest path from your home to a park.
• g(n): Distance already traveled (e.g., 2 km).
• h(n): Straight-line distance to park (e.g., 5 km).
• f(n) = g(n) + h(n) = 7 km (estimated total).
• The algorithm always expands the path with the smallest f(n).

 Real-World Applications

• Robotics → Helps robots move efficiently in warehouses or terrain.


• Video Games (NPC Pathfinding) → Enemies/characters find shortest paths.
• Network Routing → Optimal packet delivery across networks.
Problem Reduction
 Definition: Problem reduction is a problem-solving technique in AI where a
complex problem is broken down into smaller, simpler subproblems, which are
easier to solve. The solutions of the subproblems are then combined to form the
solution of the original problem.
• It is based on the principle of “divide and conquer.”

 Key Idea
 Instead of solving a difficult problem directly, reduce it into simpler versions.
 Each subproblem may itself be reducible, until we reach primitive problems that
can be solved directly.
 These relationships are often represented using AND-OR graphs:
 AND nodes: All subproblems must be solved (e.g., to bake a cake: mix ingredients
AND bake in oven).
 OR nodes: Only one choice among subproblems is enough (e.g., to travel: go by bus
OR train OR flight).
 Steps in Problem Reduction
• Identify the main problem.
• Break it into subproblems.
• Solve subproblems recursively until reaching solvable base cases.
• Combine solutions to solve the original problem.

 Example: Planning a Family Vacation Trip


Main Problem: “Plan and execute a family vacation.”
Instead of tackling it all at once, break it down:
• Book travel: Choose mode of travel (car OR train OR flight), Book tickets (if
train/flight), Arrange rental car/local transport at destination.
• Arrange stay: Search hotels/resorts online, Compare prices/reviews, Confirm booking.
• Plan activities: Research sightseeing spots, Pick some (since can’t do all → OR
choice), Make a daily itinerary.
• Pack essentials
AO* Algorithm
 Definition: AO* (And-Or Star) is a graph search algorithm used to solve problems that can
be represented as an AND-OR graph.
• It finds the least-cost solution graph (not just a path) from a given start node.
• It is an extension of the A* search algorithm, designed for problems that require solving
subproblems (AND) or choosing between alternatives (OR).

 Why AO*?
• Standard A* works only on simple path problems.
• Many AI problems (planning, diagnosis, game playing) involve AND-OR structures:
• OR-node: Choose one of the alternatives.
• AND-node: Must solve all subproblems together.
• AO* handles both.
 Basic Idea of AO*
• Start from the root node (initial problem).
• Expand nodes using heuristics, just like A*.
• For each node:
• If it is an OR-node: select the child with minimum cost.
• If it is an AND-node: expand all children and add up their costs.
• Update the costs of parent nodes using the rule:
f(n)=g(n)+h(n)
(same as A*, but extended to handle AND-combinations).
• Continue expanding until the least-cost solution subgraph is found.
 AO* Algorithm Steps (simplified):

 Place the start node in OPEN list.


 Expand nodes using best-first strategy.
 Use heuristics to estimate costs.
 Update costs of ancestors when subproblems are solved.
 Mark nodes as solved when all their subproblems are solved.
 Repeat until the start node is solved.
 Example: Medical Diagnosis
• Suppose a doctor is diagnosing a patient:
• Initial Problem (Root): Find disease causing symptoms.
• Reduction (AND/OR):
• Disease A (OR)
• Disease B (OR)
• Disease C (OR) requires Test1 AND Test2 results.
• AO* will:
• Explore possible diseases.
• For Disease C, it will ensure both tests (AND) are solved.
• Finally, it picks the disease with minimum combined cost (time, resources, or
likelihood).
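AO*'s core cost rule (minimum over OR children, sum over AND children) can be illustrated by evaluating a fully known AND-OR tree for the diagnosis example; this shows only the cost computation, not the incremental best-first expansion of the full algorithm, and the costs are invented:

```python
# Cost of the least-cost solution graph in an AND-OR tree.
# Node forms: ("LEAF", cost) | ("OR", [children]) | ("AND", [children]).
def solution_cost(node):
    kind, payload = node
    if kind == "LEAF":
        return payload
    costs = [solution_cost(child) for child in payload]
    if kind == "OR":
        return min(costs)      # OR: choose the cheapest alternative
    return sum(costs)          # AND: must solve all subproblems together

# Diagnosis example: Disease A costs 5, Disease B costs 4,
# Disease C needs Test1 AND Test2 (2 + 1 = 3), so C is cheapest.
tree = ("OR", [
    ("LEAF", 5),                            # Disease A
    ("LEAF", 4),                            # Disease B
    ("AND", [("LEAF", 2), ("LEAF", 1)]),    # Disease C = Test1 AND Test2
])
print(solution_cost(tree))  # 3
```

Full AO* interleaves this cost propagation with heuristic-guided expansion, so it never has to build the whole tree first.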
Constraint Satisfaction
 Definition: A Constraint Satisfaction Problem (CSP) is a problem where the solution
must satisfy a set of constraints (rules or conditions).
 A CSP is defined by:
• Variables – things we need to assign values to.
• Domains – possible values each variable can take.
• Constraints – restrictions that specify which combinations of values are allowed.
 The goal is to assign values to all variables without violating any constraints.
 Structure of a CSP
 A CSP is usually written as a triple:
 CSP = (V, D, C) where:
• V = {V1, V2, …, Vn} : set of variables
• D = {D1, D2, …, Dn} : domains of values
• C = {C1, C2, …, Cm} : set of constraints
 Examples of Constraint Satisfaction
 Map Coloring Problem
• Variables: {WA, NT, SA, Q, NSW, V, T}
• Domains: {Red, Green, Blue}
• Constraints: Neighboring states must have different colors.
 Sudoku Puzzle
• Variables: Each empty cell.
• Domain: {1, 2, …, 9}.
• Constraints: Each row, column, and 3×3 grid must contain all digits without repetition.
 Timetable Scheduling
• Variables: Time slots for each subject/class.
• Domains: Available times.
• Constraints: No teacher or classroom can be double-booked.
 Solving CSPs
 Backtracking Search
• Assign a value to a variable.
• If consistent, move to the next variable.
• If inconsistent, backtrack (undo and try another value).
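The backtracking steps above can be sketched for the map-coloring example (a minimal version; practical CSP solvers add variable-ordering heuristics and constraint propagation on top):

```python
# Backtracking search for map coloring (Australia map example).
NEIGHBOURS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["Red", "Green", "Blue"]

def backtrack(assignment):
    if len(assignment) == len(NEIGHBOURS):
        return assignment                    # every variable assigned
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for color in COLORS:
        # Constraint check: neighbouring states must differ.
        if all(assignment.get(n) != color for n in NEIGHBOURS[var]):
            assignment[var] = color
            result = backtrack(assignment)
            if result:
                return result
            del assignment[var]              # inconsistent: undo, try another value
    return None

print(backtrack({}))
```

Each recursive call assigns one variable; a dead end unwinds the most recent assignment, exactly as described in the three steps above.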
Means–End Analysis (MEA)

 Definition
Means–End Analysis (MEA) is a problem-solving strategy in AI where the system
repeatedly compares the current state and goal state, identifies differences, and applies
actions (operators) to reduce these differences until the goal is achieved.

 Steps
 Identify Differences: Compare the current state with the goal state and note the differences.
 Set Sub-Goals: Create intermediate goals aimed at reducing the identified differences.
 Find Operators: Look for actions that can achieve these sub-goals.
 Apply Operators: Execute the most promising actions.
 Iterate: Repeat these steps until the goal is achieved.
 Example 1: Real Life – Planning a Trip
 Goal: Be in Delhi tomorrow.
 Current state: You are in Lucknow.
 Difference: Location mismatch (Lucknow vs. Delhi).
 Operator: Take a train, bus, or flight.
 Subgoals: If flight → need ticket → book online → pay.
 Keep applying steps until goal state is reached (you in Delhi).
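The trip example can be sketched as a toy MEA planner, with states as dictionaries and invented operators; note how it sub-goals on an operator's precondition (needing a ticket before flying):

```python
# Means-End Analysis sketch (toy trip example; operators are invented).
# Operator form: (name, precondition, effect).
OPERATORS = [
    ("book ticket", {}, {"has_ticket": True}),
    ("take flight", {"has_ticket": True}, {"city": "Delhi"}),
]

def differences(state, goal):
    """Keys where the current state disagrees with the goal."""
    return {k for k, v in goal.items() if state.get(k) != v}

def achieve(state, goal, plan):
    if not differences(state, goal):
        return state
    for name, pre, effect in OPERATORS:
        if differences(state, goal) & set(effect):  # operator reduces a difference
            # Sub-goal: first satisfy the operator's precondition.
            state = achieve(state, pre, plan)
            if state is None:
                return None
            plan.append(name)
            return achieve({**state, **effect}, goal, plan)
    return None   # stuck: no operator addresses the remaining differences

plan = []
achieve({"city": "Lucknow", "has_ticket": False}, {"city": "Delhi"}, plan)
print(plan)  # ['book ticket', 'take flight']
```

The location difference selects "take flight", whose unmet precondition spawns the sub-goal solved by "book ticket", mirroring the five MEA steps above.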
Thank You
