
AI - Unit 1

The document outlines an agenda for an Artificial Intelligence course, detailing the syllabus, evaluation components, and reference materials. It covers various units on topics such as intelligent agents, knowledge representation, learning methods, expert systems, and AI applications in different fields. Additionally, it discusses the history of AI, the definition of AI, and the characteristics of agents and environments in AI systems.


AGENDA

1. Quick look on Syllabus
2. Evaluation Components
3. Text / Reference Books
4. Introduction to AI

Abhay Srivastav
Assistant Professor
2
QUICK LOOK ON SYLLABUS
Unit 1 – 13 Hrs
Artificial Intelligence: Definition, AI Problems-Task Domains of Artificial Intelligence; The
Underlying Assumption - Physical Symbol System Hypothesis; AI technique - Knowledge properties,
Knowledge Representation.

Problems, Problem Spaces and Search: Steps in building a System; Production Systems; Control
Strategies-Requirements of a good control strategy; Problem Characteristics; Production System
Characteristics-Categories of Production Systems .

Heuristic Search Techniques: Generate-and-test, Hill Climbing-Simple Hill Climbing, Best First
Search-OR Graphs

3
QUICK LOOK ON SYLLABUS
Unit 2 – 13 Hrs
Intelligent Agents: Agents and Environments; A Rational Agent – Performance measures; Examples
of Agent Types and PEAS (Performance Measure, Environment, Actuators, Sensors ) descriptions,
Properties of Task environment; The Structure of Agents, Types of Agent Programs.

Knowledge Representation: Introduction, Definition, Importance, Representation and Mappings - mappings between facts and representations, Representation of Facts; Approaches to Knowledge Representation - Properties, Types of Knowledge; Issues in Knowledge Representation - Important Attributes, Relationship among Attributes.

4
QUICK LOOK ON SYLLABUS
Unit 3 – 13 Hrs
Symbolic Reasoning under Uncertainty: Introduction to Nonmonotonic Reasoning; Logics for Nonmonotonic
Reasoning, Default Reasoning and Minimalist Reasoning.

Learning: Introduction, Different methods of Learning – Rote Learning, Inductive Learning, Reinforcement
Learning, Unsupervised Learning, Supervised Learning, Analogy, Derivational and Transformational.

Expert Systems: Introduction, Rule based and Knowledge based, knowledge acquisition, Maintenance and
Manipulations.

Planning: Components of a Planning System; Goal Stacks Planning - A very Simple Blocks World Problem;
Reactive Systems; Other Planning techniques.

5
QUICK LOOK ON SYLLABUS
Unit 4 – 13 Hrs
Parallel and Distributed AI: Psychological modeling; Parallelism in Reasoning Systems; Distributed Reasoning Systems.

Perception and Action: A design for Autonomous Robot; Perception-Vision, Speech Recognition; Action-navigation,
Manipulation; Robot Architectures.

Fuzzy Logic Systems: Introduction; Crisp Sets; Fuzzy Sets; Fuzzy Terminology; Fuzzy Logic Control-Fuzzy Room Cooler.

Prolog: Introduction; Converting English to Prolog Facts and Rules; Goals; Prolog Terminology; Variables; Control
Structures; Arithmetic Operators; Matching in Prolog; Backtracking; Recursion.

LISP: Introduction, Syntax and Numeric Functions, Basic List Manipulation Functions, Functions, Predicates and
Conditionals Input, Output and Local variables, Iteration and Recursion, Property List and Arrays.

6
7
TEXT BOOKS

1. Elaine Rich, Kevin Knight, Shivashankar B Nair, "Artificial Intelligence", 3rd Edition, Tata McGraw-Hill, 2013.
2. Ivan Bratko, "PROLOG: Programming for Artificial Intelligence", 3rd Edition, Pearson.
3. Dan W. Patterson, "Introduction to Artificial Intelligence and Expert Systems", Prentice-Hall of India, 1992.

8
TEXT BOOKS

9
REFERENCE BOOKS

1. Jean-Louis Ermine, "Expert Systems: Theory and Practice", Prentice Hall of India, 1995.
2. Stuart Russell, Peter Norvig, "Artificial Intelligence: A Modern Approach", 3rd Edition, Pearson, 2013.

10
11
Applications of Artificial Intelligence

• Robotic surgeries
• Precision medicine
• Virtual nursing assistants

12
Applications of Artificial Intelligence

• Plagiarism Detection.
• Exam Integrity.
• Chatbots for Enrollment and Retention.
• Learning Management Systems.
• Transcription of Faculty Lectures.
• Enhanced Online Discussion Boards.
• Analyzing Student Success Metrics.
• Academic Research.

13
Applications of Artificial Intelligence

• Crop and Soil Monitoring.


• Insect and Plant Disease Detection.
• Livestock Health Monitoring.
• Intelligent Spraying.
• Automatic Weeding.
• Aerial Survey and Imaging.
• Produce Grading and Sorting.

14
Applications of Artificial Intelligence

15
ARTIFICIAL INTELLIGENCE - DEFINITION

Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better.
- Elaine Rich, Kevin Knight, Shivashankar B Nair

16
ARTIFICIAL INTELLIGENCE - DEFINITION

Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans.
- Internet sources

17
ARTIFICIAL INTELLIGENCE - DEFINITION

Examples:
• Manufacturing robots
• Self-driving cars
• Smart assistants
• Proactive healthcare management
• Disease mapping
• Automated financial investing
• Virtual travel booking agent
• Social media monitoring
• Inter-team chat tool
• Conversational marketing bot
• Natural Language Processing (NLP) tools

18
HISTORY OF AI
• The origin of artificial intelligence lies in the earliest days of machine computation. During the 1940s and 1950s, AI began to grow with the emergence of the modern computer. Among the first researchers to attempt to build intelligent programs were Newell and Simon.
• Their first well-known program, the Logic Theorist, was a program that proved statements using the accepted rules of logic and a problem-solving program of their own design. By the late fifties, programs existed that could do a passable job of translating technical documents, and it was seen as only a matter of extra databases and more computing power to apply the techniques to less formal, more ambiguous texts. Most problem-solving work revolved around the work of Newell, Shaw and Simon on the General Problem Solver (GPS).

19
HISTORY OF AI
Unfortunately, GPS did not fulfill its promise, and not simply because of a lack of computing capacity. In the 1970s the most important concept of AI was developed, known as the Expert System, which represents the knowledge of an expert as a set of rules. The application area of expert systems is very large. The 1980s saw the development of neural networks as a method of learning from examples.
Prof. Peter Jackson (University of Edinburgh) classified the history of AI into three periods:
1. Classical
2. Romantic
3. Modern
20
HISTORY OF AI
1. Classical Period:
It started in 1950. In 1956, the concept of Artificial Intelligence came into existence. During this period, the main research work carried out included game playing, theorem proving and the state-space approach for solving a problem.
2. Romantic Period:
It started in the mid-1960s and continued until the mid-1970s. During this period people were interested in making machines understand, which usually means understanding natural language. During this period the knowledge representation technique "semantic net" was developed.

21
HISTORY OF AI
3. Modern Period:
It started in 1970 and continues to the present day. This period is aimed at solving more complex problems. It includes research on both the theoretical and practical aspects of Artificial Intelligence, and it saw the birth of concepts like Expert Systems, Artificial Neurons, Pattern Recognition, etc. Research on the various advanced concepts of Pattern Recognition and Neural Networks is still going on.

22
THE AI PROBLEMS
 Intelligence does not imply perfect understanding; every intelligent being has limited perception, memory and computation.
 Many points on the spectrum of intelligence versus cost are viable, from insects to humans.
 AI seeks to understand the computations required for intelligent behaviour and to produce computer systems that exhibit intelligence.
 Aspects of intelligence studied by AI include perception, communication using human languages, reasoning, planning, learning and memory.

23
THE AI
PROBLEMS
The following questions are to be considered before we can step forward:
1. What are the underlying assumptions about intelligence?
2. What kinds of techniques will be useful for solving AI problems?
3. At what level human intelligence can be modelled?
4. When will it be realized when an intelligent program has been built?

24
THE AI PROBLEMS

25
THE AI PROBLEMS

 Other Tasks include:

26
THE AI PROBLEMS

Expert Systems

Expert Systems are programs that are operational in our day-to-day life throughout all areas of industry and government.

Each of these systems attempts to solve part, or all, of a practical, significant problem that previously required scarce human expertise.

Expert systems have knowledge specific to one problem domain, e.g., medicine, science, engineering, etc.

27
AI RESEARCH AREAS

28
AI - AGENTS & ENVIRONMENTS

An AI system is composed of an agent and its environment. The agents act in their environment. The environment may contain other agents.

29
WHAT ARE AGENT AND ENVIRONMENT?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
 Human Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
 Robotic Agent: A robotic agent can have cameras, infrared range finders and NLP for sensors, and various motors for actuators.
 Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

30
WHAT ARE AGENT AND ENVIRONMENT?

31
WHAT ARE AGENT AND ENVIRONMENT?

• Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of machines that convert energy into motion. The actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
INTELLIGENT AGENTS:

32
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve its goals. An intelligent agent may learn from the environment to achieve those goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observation must be used to make decisions.
Rule 3: Decisions should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.
RATIONAL AGENT:

33
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
 A rational agent is said to perform the right things. AI is about creating rational agents that use game theory and decision theory for various real-world scenarios.
 For an AI agent, the rational action is most important because, in AI reinforcement learning algorithms, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

34
AGENT ENVIRONMENT IN AI

• An environment can be described as a situation in which an agent is present.
• The environment is where an agent lives and operates, and it provides the agent with something to sense and act upon.
• An environment is mostly said to be non-deterministic.

35
FEATURES OF ENVIRONMENT

As per Russell and Norvig, an environment can have various features from the point of view of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

36
1. FULLY OBSERVABLE VS PARTIALLY OBSERVABLE
• If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
• A fully observable environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
• If an agent has no sensors in an environment, then such an environment is called unobservable.

37
2. STATIC VS DYNAMIC
• If the environment can change itself while an agent is deliberating, then such an environment is called a dynamic environment; otherwise it is called a static environment.
• Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
• However, in a dynamic environment, agents need to keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.

38
3. DISCRETE VS CONTINUOUS
• If there is a finite number of percepts and actions that can be performed within an environment, then such an environment is called a discrete environment; otherwise it is called a continuous environment.
• A chess game comes under a discrete environment, as there is a finite number of moves that can be performed.
• A self-driving car is an example of a continuous environment.

39
4. DETERMINISTIC VS STOCHASTIC
• If an agent's current state and selected action completely determine the next state of the environment, then such an environment is called a deterministic environment.
• A stochastic environment is random in nature and cannot be determined completely by an agent.
• In a deterministic, fully observable environment, an agent does not need to worry about uncertainty.

40
5. SINGLE-AGENT VS MULTI-AGENT
• If only one agent is involved in an environment and is operating by itself, then such an environment is called a single-agent environment.
• However, if multiple agents are operating in an environment, then such an environment is called a multi-agent environment.
• The agent design problems in a multi-agent environment are different from those in a single-agent environment.

41
6. EPISODIC VS SEQUENTIAL
• In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.
• However, in a sequential environment, an agent requires memory of past actions to determine the next best actions.

42
8. ACCESSIBLE VS INACCESSIBLE
• If an agent can obtain complete and accurate information about the state of the environment, then such an environment is called an accessible environment; otherwise it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible environment.
• Information about an event on earth is an example of an inaccessible environment.

43
7. KNOWN VS UNKNOWN
• Known and unknown are not actually features of an environment but rather of an agent's state of knowledge for performing an action.
• In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform an action.
• It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

44

45
RATIONAL AGENT:

Rationality:
The rationality of an agent is measured by its performance measure.
Rationality can be judged on the basis of following points:
• Performance measure, which defines the success criterion.
• The agent's prior knowledge of its environment.
• The best possible actions that the agent can perform.
• The sequence of percepts.

46
STRUCTURE OF AN AI AGENT
The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of
architecture and agent program. It can be viewed as:

Agent = Architecture + Agent program


Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is the machinery that the AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action:
f : P* → A
Agent Program: The agent program is an implementation of the agent function. It executes on the physical architecture to produce the function f.
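
As an illustration of the Agent = Architecture + Agent Program idea and of the agent function f : P* → A, here is a minimal Python sketch; the class and function names (Agent, thermostat_program) and the thermostat rule are our own assumptions, not part of the slides:

```python
# Minimal sketch of "Agent = Architecture + Agent program".
# Names and the thermostat rule are illustrative, not from the slides.

class Agent:
    def __init__(self, agent_program):
        self.percepts = []               # percept sequence P*
        self.agent_program = agent_program

    def act(self, percept):
        """Agent function f: P* -> A, realised by the agent program."""
        self.percepts.append(percept)
        return self.agent_program(self.percepts)

def thermostat_program(percepts):
    """A trivial agent program: switch the heater based on the latest temperature."""
    temperature = percepts[-1]
    return "heater_on" if temperature < 20.0 else "heater_off"

# Usage: the "architecture" here is simply the Python interpreter feeding percepts;
# on a real agent it would be the physical hardware.
agent = Agent(thermostat_program)
print(agent.act(18.5))   # -> heater_on
print(agent.act(22.0))   # -> heater_off
```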

47
STRUCTURE OF AN AI AGENT
PEAS Representation
PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors

48
STRUCTURE OF AN AI AGENT
Example of agents with their PEAS representation:

1. Medical Diagnosis
   • Performance measure: Healthy patient, Minimized cost
   • Environment: Patient, Hospital, Staff
   • Actuators: Tests, Treatments
   • Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
   • Performance measure: Cleanness, Efficiency, Battery life, Security
   • Environment: Room, Table, Wood floor, Carpet, Various obstacles
   • Actuators: Wheels, Brushes, Vacuum extractor
   • Sensors: Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor
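
As a small illustration (our own sketch, not part of the original slides), the vacuum-cleaner PEAS description above can be captured as a plain data structure:

```python
# Hedged sketch: a PEAS description held as a plain dictionary.
# The keys follow the PEAS model; the values mirror the vacuum-cleaner entry above.
vacuum_cleaner_peas = {
    "Performance measure": ["Cleanness", "Efficiency", "Battery life", "Security"],
    "Environment": ["Room", "Table", "Wood floor", "Carpet", "Various obstacles"],
    "Actuators": ["Wheels", "Brushes", "Vacuum extractor"],
    "Sensors": ["Camera", "Dirt detection sensor", "Cliff sensor",
                "Bump sensor", "Infrared wall sensor"],
}

for component, values in vacuum_cleaner_peas.items():
    print(f"{component}: {', '.join(values)}")
```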

49
STRUCTURE OF AN AI AGENT
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
1. Simple Reflex Agents
2. Model-Based Reflex Agents
3. Goal-Based Agents
4. Utility-Based Agents
5. Learning Agent

50
STRUCTURE OF AN AI AGENT
1. Simple Reflex Agents
• Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
• The agent function is based on the condition-action rule (a minimal sketch follows after this list).
• For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable.
Problems with simple reflex agents are:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The set of rules is usually too big to generate and store.
• If there occurs any change in the environment, then the collection of rules needs to be updated.
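
A minimal sketch of a condition-action simple reflex agent, assuming a toy two-location vacuum world; the rule table and names are our own illustration, not from the slides:

```python
# Hedged sketch of a simple reflex agent: it looks only at the current percept
# and matches it against condition-action rules; no percept history is kept.

RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """percept = (location, status); return the action for the matching rule."""
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("B", "Clean")))  # -> Left
```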
51
STRUCTURE OF AN AI AGENT
•Simple Reflex Agents

52
STRUCTURE OF AN AI AGENT
2. Model-Based Reflex Agents
• A model-based agent can handle partially observable environments by using a model of the world. The agent has to keep track of an internal state, which is adjusted by each percept and depends on the percept history.
• Updating the state requires information about:
How the world evolves independently of the agent.
How the agent's actions affect the world.
A minimal sketch of such an agent follows below.
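
This is our own illustrative sketch of a model-based reflex agent in the same toy vacuum world; the internal state and its update rule are assumptions, not taken from the slides:

```python
# Hedged sketch of a model-based reflex agent: it maintains an internal state
# (its best guess of the world) updated from each percept, and it chooses
# actions using that state rather than the raw percept alone.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: believed status of each location (unknown at first).
        self.model = {"A": None, "B": None}

    def update_state(self, percept):
        """Fold the latest percept into the internal model of the world."""
        location, status = percept
        self.model[location] = status

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # If every location we know about is clean, there is nothing left to do.
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # -> Suck
print(agent.act(("A", "Clean")))  # -> Right
print(agent.act(("B", "Clean")))  # -> NoOp (both locations believed clean)
```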

53
STRUCTURE OF AN AI AGENT
2. Model-Based Reflex Agents

54
STRUCTURE OF AN AI AGENT
3. Goal-Based Agents
• These agents take decisions based on how far they currently are from their goal (a description of desirable situations). Every action they take is intended to reduce the distance from the goal.
• This gives the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state (see the sketch after this list).
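
A minimal sketch of the goal-based idea on a one-dimensional grid; the world model, the distance measure and all names are our own illustrative assumptions. The agent predicts the result of each action and picks the one that most reduces its distance to the goal:

```python
# Hedged sketch of a goal-based agent: it uses a model of what each action does
# and a goal description, and selects the action whose predicted outcome is
# closest to the goal.

GOAL = 7          # desired position on a 1-D grid
ACTIONS = {"Left": -1, "Right": +1, "Stay": 0}

def predict(state, action):
    """World model: where the agent ends up if it takes `action` in `state`."""
    return state + ACTIONS[action]

def goal_based_agent(state):
    """Pick the action whose predicted successor is nearest the goal."""
    return min(ACTIONS, key=lambda a: abs(GOAL - predict(state, a)))

position = 4
while position != GOAL:
    action = goal_based_agent(position)
    position = predict(position, action)
    print(action, "->", position)   # Right -> 5, Right -> 6, Right -> 7
```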

55
STRUCTURE OF AN AI AGENT
3. Goal-Based Agents

56
STRUCTURE OF AN AI AGENT
4. Utility-Based Agents
• The agents which are developed having their end uses as building blocks are called utility-based agents.
• They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough:
• we may look for a quicker, safer, or cheaper trip to reach a destination. A minimal sketch follows below.
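
A minimal sketch of utility-based action selection; the candidate routes, the utility function and its weights are our own illustrative assumptions. Instead of only testing whether a goal is reached, the agent scores each option and takes the one with the highest utility:

```python
# Hedged sketch of a utility-based agent: every option gets a numeric utility,
# and the agent picks the option that maximises it. Here the "options" are
# routes to the same destination, trading off time, safety and cost.

routes = {
    "highway":  {"time_min": 30, "safety": 0.7, "cost": 8.0},
    "backroad": {"time_min": 45, "safety": 0.9, "cost": 3.0},
    "toll":     {"time_min": 25, "safety": 0.8, "cost": 12.0},
}

def utility(route):
    """Higher is better: reward safety, penalise time and cost (weights are arbitrary)."""
    return 10.0 * route["safety"] - 0.1 * route["time_min"] - 0.2 * route["cost"]

best = max(routes, key=lambda name: utility(routes[name]))
print(best, round(utility(routes[best]), 2))   # -> backroad 3.9
```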

57

58
STRUCTURE OF AN AI AGENT
4. Utility-Based Agents

59
STRUCTURE OF AN AI AGENT
5. Learning Agents

• A learning agent in AI is the type of agent that can learn from its past experiences, i.e. it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.

60
STRUCTURE OF AN AI AGENT
5. Learning Agent
A learning agent has mainly four conceptual components:
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting external actions.
Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences. A minimal sketch of how these components fit together follows below.
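
A minimal skeleton showing how the four components above could interact in one step of a learning agent; this is our own illustrative sketch, and the reward values, learning rate and exploration probability are arbitrary assumptions:

```python
# Hedged skeleton of a learning agent: the performance element picks actions,
# the critic scores outcomes against a fixed standard, the learning element
# adjusts the performance element, and the problem generator proposes
# exploratory actions.
import random

class LearningAgent:
    def __init__(self, actions, performance_standard=1.0):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}   # knowledge used by the performance element
        self.standard = performance_standard

    def performance_element(self):
        return max(self.actions, key=lambda a: self.values[a])

    def problem_generator(self):
        return random.choice(self.actions)        # suggest something new to try

    def critic(self, reward):
        return reward - self.standard             # feedback relative to the standard

    def learning_element(self, action, feedback, rate=0.5):
        self.values[action] += rate * feedback    # improve future action selection

    def step(self, environment_reward_for, explore=0.2):
        action = (self.problem_generator() if random.random() < explore
                  else self.performance_element())
        feedback = self.critic(environment_reward_for(action))
        self.learning_element(action, feedback)
        return action

# Usage: a toy environment where action "b" is the rewarding one.
agent = LearningAgent(["a", "b"])
for _ in range(20):
    agent.step(lambda a: 2.0 if a == "b" else 0.0)
print(agent.values)   # the value of "b" should end up higher than "a"
```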
61
PROBLEM SOLVING IN ARTIFICIAL INTELLIGENCE

The problem-solving agent performs precisely by defining problems and their several solutions. Problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees and heuristic algorithms, to solve a problem.

62
PROBLEM SOLVING IN ARTIFICIAL
INTELLIGENCE

The problem-solving agent performs precisely by defining problems and several solutions
problem solving is a part of artificial intelligence that encompasses a number of techniques such as a
tree, B-tree, heuristic algorithms to solve a problem.
There are basically three types of problem in artificial intelligence:
1. Ignorable: In which solution steps can be ignored.
2. Recoverable: In which solution steps can be undone.
3. Irrecoverable: Solution steps cannot be undo.

63
STEPS FOR PROBLEM-SOLVING IN AI:

The problems of AI are directly associated with the nature of humans and their activities. We need a finite number of steps to solve a problem, which makes the work easy for humans.
The following steps are required to solve a problem:
Problem definition: Detailed specification of inputs and acceptable system solutions.
Problem analysis: Analyse the problem thoroughly.
Knowledge representation: Collect detailed information about the problem and define all possible techniques.
Problem-solving: Selection of the best technique.

64
STEPS FOR PROBLEM-SOLVING IN AI:
Components used to formulate the associated problem:
Initial state: The state from which the AI agent starts towards a specified goal; it initialises the search in the problem domain.
Actions: This stage of problem formulation works as a function on a specific state, giving all possible actions that can be taken from that state.
Transition model: This stage describes the state that results from applying the action chosen in the previous stage, and forwards it to the next stage.
Goal test: This stage determines whether the specified goal has been achieved via the integrated transition model; whenever the goal is achieved, the actions stop and the next stage determines the cost of achieving the goal.

65
Path costing: This component of problem-solving assigns a numeric cost to achieving the goal. It includes all hardware, software and human working costs.

66
PROBLEM EXAMPLE: 8-PUZZLE
States: A state specifies the location of each of the eight tiles and the blank in one of the nine squares.
Initial state: Any state can be designated as the initial state.
• Note that the goal can be reached from exactly half of the possible initial states.
Actions: Movements of the blank space: Left, Right, Up, or Down.
• Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and an action, this returns the resulting state.
Goal test: This checks whether the state matches the goal configuration.
Path cost: Each step costs 1, so the path cost is the number of steps in the path. (A sketch of this formulation in code follows below.)
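
A minimal sketch of the 8-puzzle formulation above in Python; representing a state as a 9-tuple with 0 for the blank, and the helper names, are our own assumptions:

```python
# Hedged sketch of the 8-puzzle formulation: states, actions, transition model,
# goal test and unit path cost. A state is a 9-tuple read row by row; 0 is the blank.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def actions(state):
    """Actions available in a state: which way the blank can move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    acts = []
    if row > 0: acts.append("Up")
    if row < 2: acts.append("Down")
    if col > 0: acts.append("Left")
    if col < 2: acts.append("Right")
    return acts

def result(state, action):
    """Transition model: the state obtained by sliding the blank."""
    blank = state.index(0)
    target = blank + MOVES[action]
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)

def goal_test(state):
    return state == GOAL

def path_cost(path):
    """Each step costs 1, so the cost is the number of steps."""
    return len(path)

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(actions(start))                     # ['Down', 'Left', 'Right']
print(result(start, "Left"))              # (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(goal_test(result(start, "Left")))   # True
```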

67
SOLVING PROBLEM BY SEARCHING
Search algorithms are one of the most important areas of Artificial Intelligence.

Problem-solving agents:
 In Artificial Intelligence, search techniques are universal problem-solving methods.
 Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result.
 Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.

68
SOLVING PROBLEM BY SEARCHING
Search Algorithm Terminologies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
• Search space: The search space represents the set of possible solutions which a system may have.
• Start state: The state from which the agent begins the search.
• Goal test: A function which observes the current state and returns whether the goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does, represented as a transition model.
Path cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal solution: A solution that has the lowest cost among all solutions.
69
SOLVING PROBLEM BY SEARCHING
Properties of Search Algorithms:

The following are the four essential properties used to compare the efficiency of search algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.
Time complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
Space complexity: The maximum storage space required at any point during the search, in terms of the complexity of the problem.

70
SOLVING PROBLEM BY SEARCHING
Types of search algorithms:
Based on the search problem, we can classify search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.

71
SOLVING PROBLEM BY SEARCHING
Uninformed/Blind Search:
 An uninformed search does not use any domain knowledge, such as closeness to or the location of the goal.
 It examines each node of the tree until it reaches the goal node.
Informed Search:
 Informed search algorithms use domain knowledge.
 Informed search strategies can find a solution more efficiently than an uninformed search strategy.
 Informed search is also called heuristic search.
 A heuristic is a technique which might not always be guaranteed to find the best solution, but is guaranteed to find a good solution in a reasonable time.

72
BREADTH-FIRST SEARCH:
 This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general graph-search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure. (A minimal sketch follows after the advantages and disadvantages below.)

Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, which requires the least number of steps.

Disadvantages:
 It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
 BFS needs lots of time if the solution is far away from the root node.
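
A minimal sketch of breadth-first search on an explicit graph; the toy adjacency-list graph is our own illustration, while the FIFO queue is the defining data structure of BFS:

```python
# Hedged sketch of breadth-first search: expand nodes level by level using a
# FIFO queue, returning the path from the start node to the goal node.
from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # the shallowest goal is found first
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None                          # no solution exists

# Toy graph (adjacency lists) used only for illustration.
graph = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G"],
    "C": [], "D": [], "G": [],
}
print(breadth_first_search(graph, "S", "G"))   # ['S', 'B', 'G']
```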

73
BREADTH-FIRST SEARCH:
The traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K

Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed by BFS up to the shallowest node, where d = depth of the shallowest solution and b = number of successors at every state (branching factor):
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
74
DEPTH-FIRST SEARCH
 Depth-first search is a recursive algorithm for traversing a tree or graph: it starts from the root node and follows each path to its greatest depth before backtracking to the next path, so it is called depth-first search.
 DFS is implemented using a LIFO stack data structure. (A minimal sketch follows below.)

Advantages:
 DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantages:
 There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
 The DFS algorithm goes for deep-down searching, and sometimes it may go into an infinite loop.
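
A minimal sketch of depth-first search in the same style; the toy graph is our own illustration, and the explicit stack makes the LIFO behaviour visible:

```python
# Hedged sketch of depth-first search: follow one path as deep as possible
# before backtracking, using a LIFO stack of paths.

def depth_first_search(graph, start, goal):
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for successor in graph.get(node, []):
                stack.append(path + [successor])
    return None                          # no path to the goal

# Toy graph used only for illustration.
graph = {
    "Root": ["Left", "Right"],
    "Left": ["L1", "L2"],
    "Right": ["Goal"],
    "L1": [], "L2": [], "Goal": [],
}
print(depth_first_search(graph, "Root", "Goal"))   # ['Root', 'Right', 'Goal']
```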

75
DEPTH-FIRST SEARCH
It will follow the order:
Root node ---> Left node ---> Right node.

Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + n^2 + n^3 + ... + n^m = O(n^m)
where m = maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimality: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.

76
Thank You
