AI Unit-1

The document provides an introduction to Artificial Intelligence (AI), covering its definition, history, intelligent systems, foundations, and applications. AI is described as a technology that enables machines to mimic human intelligence, with various advantages such as high accuracy and reliability, but also disadvantages like high costs and lack of emotional understanding. Key historical milestones in AI development are highlighted, along with its applications in fields like robotics, speech recognition, and autonomous planning.

UNIT-I

Introduction to Artificial Intelligence:


1. Introduction
2. History
3. Intelligent Systems
4. Foundations of AI
5. Applications
6. Tic-Tac-Toe game playing
7. Development of AI languages
8. Current trends in AI
9. Problem Solving: State-Space Search and Control Strategies: Introduction
10. General Problem Solving
11. Characteristics of Problem
12. Exhaustive searches
13. Heuristic search techniques
14. Iterative deepening A*
15. Constraint Satisfaction

1. INTRODUCTION
Artificial Intelligence is one of the booming technologies of computer science, which is ready to
create a new revolution in the world by making intelligent machines. AI is now all around us. It is
currently working with a variety of subfields, ranging from general to specific, such as self-driving cars,
playing chess, proving theorems, playing music, painting etc. AI holds a tendency to cause a machine to
work as a human.

What is Artificial Intelligence?


Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
So, we can define AI as "a branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and are able to make decisions."

Why Artificial Intelligence?


 With the help of AI, we can create software or devices which can solve real-world problems easily and with accuracy, such as health issues, marketing, traffic issues, etc.
 With the help of AI, we can create personal virtual assistants, such as Cortana, Google Assistant, etc.
 With the help of AI, we can build robots which can work in environments where the survival of humans may be at risk.
 AI opens a path for other new technologies, new devices, and new opportunities.

Goals of Artificial Intelligence: Following are the main goals of Artificial Intelligence:
i. Replicate human intelligence
ii. Solve knowledge-intensive tasks
iii. An intelligent connection of perception and action
iv. Building a machine which can perform tasks that require human intelligence, such as:
a. Proving a theorem
b. Playing chess
c. Planning a surgical operation
d. Driving a car in traffic
v. Creating systems which can exhibit intelligent behavior, learn new things by themselves, demonstrate, explain, and advise their users.
Definition: “AI is the study of how to make computers do things at which, at the moment, people are
better”.

Advantages of Artificial Intelligence:

i. High Accuracy with fewer errors: AI machines or systems are less prone to errors and offer high accuracy, as they take decisions based on prior experience or information.
ii. High Speed: AI systems can be very fast in decision making; because of this, AI systems can beat a chess champion in the game of chess.

iii. High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
iv. Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human can be risky.
v. Digital Assistant: AI can be very useful in providing digital assistants to users; for example, AI technology is currently used by various e-commerce websites to show products as per customer requirements.
vi. Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars which can make our journey safer and hassle-free, facial recognition for security purposes, natural language processing to communicate with humans in human language, etc.

Disadvantages of Artificial Intelligence:

i. High Cost: The hardware and software requirements of AI are very costly, as AI systems require a lot of maintenance to meet current world requirements.
ii. Can't think out of the box: Even though we are making smarter machines with AI, they still cannot work outside their design, as a robot will only do the work for which it is trained or programmed.
iii. No feelings and emotions: An AI machine can be an outstanding performer, but it has no feelings, so it cannot form any kind of emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.
iv. Increased dependency on machines: With the advancement of technology, people are becoming more dependent on devices and hence are losing some of their mental capabilities.
v. No Original Creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence and cannot be creative and imaginative.
2. History

• Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between
neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence", in which he proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was established as an academic field.
• Year 1966: Researchers emphasized developing algorithms which could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
• Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
• The duration between the years 1974 and 1980 was the first AI winter. AI winter refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence decreased.
• Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.
• In the year 1980, the first national conference of the American Association for Artificial Intelligence was held at Stanford University.
• The duration between the years 1987 to 1993 was the second AI Winter duration.
• Investors and governments again stopped funding AI research due to high costs and inefficient results, even though expert systems such as XCON had initially been very cost effective.
• Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a world chess champion.
• Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
• Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
• Year 2012: Google launched the Android app feature "Google Now", which was able to provide information to the user as a prediction.
• Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test."
• Year 2015: Amazon Echo, often shortened to Echo, is an American brand of smart speakers developed
by Amazon. Echo devices connect to the voice-controlled intelligent personal assistant service Alexa,
which will respond when a user says "Alexa". Users may change this wake word to "Amazon", "Echo",
“Computer”, as well as some other options.

3. INTELLIGENT SYSTEMS
AI is the combination of computer science, physiology and philosophy. AI is a broad topic consisting of
different fields from Machine vision to expert systems. John McCarthy was one of the founders of AI field
who stated that “AI is the science and engineering of making intelligent machines, especially intelligent
computer programs”.

Other Definitions of AI:


1. AI is the study of mental faculties through the use of computational models.
2. The art of creating machines that perform functions which require intelligence when performed by people.
3. AI is a field of study that seeks to explain & emulate intelligent behavior in terms of computational processes.
4. AI is the study of how to make computers do things at which, at the moment, people are better.
5. AI is the study of the computations that make it possible to perceive, reason & act.
6. AI is the branch of computer science that is concerned with the automation of intelligent behavior.

ELIZA: ELIZA was a program that conversed with users in English. The program was able to converse about any subject, because it stored subject information in data banks. Another feature of ELIZA was its ability to pick up speech patterns from the user's questions & provide responses using those patterns. The following is a passage that contains a dialogue between Eliza & a teenage girl.
ELIZA: Hello. I am Eliza.
GIRL: Can you help me?
ELIZA: Don’t you believe that I can help you?
GIRL: Yes.
ELIZA: I see.
GIRL: I feel depressed.
ELIZA: How long have you been feeling depressed?
GIRL: Since last month.
ELIZA: That is quite interesting.
GIRL: What is interesting in this?
ELIZA: Have you asked anyone else?
GIRL: NO.
ELIZA: Are you saying no just to be negative?
GIRL: NO I did not ask anyone.
ELIZA: Why no?
GIRL: I do not believe other people.
ELIZA: Say, do you have any psychological problems?
GIRL: I do not know.
ELIZA: Tell me more.
Since then, many versions of ELIZA have been developed & are available on the internet. The basic philosophy & characteristics of all these programs are the same.
Characteristics of ELIZA:
i. Simulation of Intelligence
ii. Quality of Response
iii. Coherence
iv. Semantics
i. Simulation of Intelligence: Eliza programs are not intelligent at all in the real sense. They do not understand the meaning of an utterance. Instead, these programs simulate intelligent behavior quite effectively by recognizing keywords & phrases. Using a table lookup, one of a few ways of responding to a question is chosen.
ii. Quality of Response: It is limited by the sophistication of the ways in which the program can process the input text at a syntactic level. For example, the number of templates available is a serious limitation. However, the success depends heavily on the fact that the user has a fairly restricted notion of the expected response from the system.
iii. Coherence: The earlier version of the system imposed no structure on the conversation. Each
statement was based entirely on the current input & no context information was used. More complex
versions of Eliza can do a little better. Any sense of intelligence depends strongly on the coherence of
the conversation as judged by the user.
iv. Semantics: Such systems have no semantic representation of the content of either the user's input or the reply. That is why we say that it has no real understanding of what we are saying; it only imitates the human conversational style. Because of this, it passed the Turing test.
Categorization of Intelligent Systems:
i. System that thinks like humans
ii. System that acts like humans
iii. System that thinks rationally
iv. Systems that act rationally

i. System that thinks like humans: This requires cognitive modeling approaches. Most of the time, it
is a black box where we are not clear about our thought process. One has to know the functioning of
the brain & its mechanism for processing information.
ii. System that acts like humans: This requires that the overall behavior of the system should be human
like which could be achieved by observation. Turing test is an example.
Turing Test: Turing Test was introduced by Alan Turing in 1950. A Turing Test is a method of inquiry in
artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human.
To conduct this test, we need two people and a machine to be evaluated. One person plays the role of an
interrogator, who is in a separate room from the computer and the other person. The interrogator can ask
questions of either the person or the computer by typing the questions and receiving typed responses. The
interrogator knows them only as A and B and aims to determine which is the person and which is the machine. The goal of the machine is to fool the interrogator into believing that the machine can think. For example, if asked to do a large multiplication, the computer can deliberately take a long time and even give a wrong answer, as a human might, in order to fool the interrogator.

Turing proposed that if the human interrogator in Room C is not able to identify who is in Room A or Room B, then the machine possesses intelligence. Turing considered this a sufficient test for attributing thinking capacity to a machine. As of today, the Turing test is the ultimate test a machine must pass in order to be called intelligent.

iii. System that thinks rationally: This relies on logic, rather than humans, to measure correctness. For example, given that John is a human and all humans are mortal, one can conclude logically that John is mortal.
iv. System that acts rationally: This is the final category of intelligent system, where by rational behavior we mean doing the right thing. Even if the method is illogical, the observed behavior must be rational.

Components of AI:
i. Knowledge base
ii. Control strategy
iii. Inference mechanism

i. Knowledge base: AI programs should be learning in nature & update their knowledge accordingly.
The knowledge base generally consists of facts & rules & has the following characteristics:
 It is voluminous in nature & requires proper structuring
 It may be incomplete & imprecise
 It may be dynamic & keep on changing

ii. Control Strategy: It determines which rule is to be applied next. To choose this rule, some heuristics or thumb rules based on the problem domain may be used.
iii. Inference mechanism: It requires search through knowledge base & derives new knowledge using
the existing knowledge with the help of inference rules.

4. FOUNDATIONS OF AI
The foundations of AI in various fields are as follows
 Mathematics
 Neuroscience
 Control theory
 Linguistics

Mathematics: AI systems use formal logical methods & Boolean logic, analysis of limits to what can be computed, probability theory & uncertainty (which forms the basis for most approaches to AI), fuzzy logic, etc.
Neuroscience: This science of medicine helps in studying the functioning of the brain. Recent studies use accurate sensors to correlate brain activity to human thought. By monitoring individual neurons, monkeys can now control a computer mouse using thought alone. Moore's law suggests that computers will eventually have as many gates as humans have neurons. Researchers are working on how to build a mechanical brain. Such systems will require parallel computation, remapping and interconnections to a large extent.
Control Theory: Machines can modify their behaviour in response to the environment. Steam engine
governor, thermostat & water flow regulator are few examples of Control Theory. This theory of stable
feedback systems helps in building systems that transition from initial state to goal state with minimum
energy.
Linguistics: Speech demonstrates so much of human intelligence. Analysis of human language reveals
thought taking place in ways not understood in other settings. Children can create sentences they have never
heard before. Languages & thoughts are believed to be tightly intertwined.

5. Applications of AI:
A concise answer is difficult because there are so many activities in so many subfields. Here we sample a few
applications;
i. Robotic vehicles:
ii. Speech recognition:
iii. Autonomous planning and scheduling:
iv. Game playing:
v. Spam fighting:
vi. Logistics planning:
vii. Robotics:
viii. Machine Translation:

i. Robotic vehicles:
 A driverless robotic car named STANLEY sped through the rough terrain of the Mojave desert at
22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
 STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to sense
the environment and onboard software to command the steering, braking, and acceleration (Thrun,
2006).
 The following year CMU’s BOSS won the Urban Challenge, safely driving in traffic through the
streets of a closed Air Force base, obeying traffic rules, and avoiding pedestrians and other
vehicles.
ii. Speech recognition:
 A traveler calling United Airlines to book a flight can have the entire conversation guided by an automated speech recognition and dialog management system.

iii. Autonomous planning and scheduling:
 A hundred million miles from Earth, NASA’s Remote Agent program became the first on-board
autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et
al., 2000).
 REMOTE AGENT generated plans from high-level goals specified from the ground and monitored
the execution of those plans—detecting, diagnosing, and recovering from problems as they
occurred.
 Successor program MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA’s Mars
Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning—both logistics and
science planning—for the European Space Agency’s Mars Express mission in 2008.
iv. Game playing:
 IBM’s DEEP BLUE became the first computer program to defeat the world champion in a chess
match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman
and Keene, 1997). Kasparov said that he felt a “new kind of intelligence” across the board from
him.
 Newsweek magazine described the match as “The brain’s last stand.” The value of IBM’s stock
increased by $18 billion.
 Human champions studied Kasparov’s loss and were able to draw a few matches in subsequent
years, but the most recent human-computer matches have been won convincingly by the computer.
v. Spam fighting:
 Each day, learning algorithms classify over a billion messages as spam, saving the recipient from
having to waste time deleting what, for many users, could comprise 80% or 90% of all messages,
if not classified away by algorithms.
 Because the spammers are continually updating their tactics, it is difficult for a static programmed
approach to keep up, and learning algorithms work best (Sahami et al., 1998; Goodman and
Heckerman, 2004).
vi. Logistics planning:
 During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning
Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for
transportation.
 This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting
points, destinations, routes, and conflict resolution among all parameters.
 The AI planning techniques generated in hours a plan that would have taken weeks with older
methods.
 The Defense Advanced Research Project Agency (DARPA) stated that this single application more
than paid back DARPA’s 30-year investment in AI.
vii. Robotics:
 The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use.
 The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is used to
handle hazardous materials, clear explosives, and identify the location of snipers.
viii. Machine Translation:
 A computer program automatically translates from Arabic to English, allowing an English speaker
to see the headline “Ardogan Confirms That Turkey Would Not Accept Any Pressure, Urging
Them to Recognize Cyprus.”
 The program uses a statistical model built from examples of Arabic-to-English translations and
from examples of English text totalling two trillion words (Brants et al., 2007).
 None of the computer scientists on the team speak Arabic, but they do understand statistics and
machine learning algorithms.
These are just a few examples of artificial intelligence systems that exist today. Not magic or science fiction, but rather science, engineering, and mathematics, to which this unit provides an introduction.

6. TIC-TAC-TOE GAME PLAYING
TIC-TAC-TOE is a two-player game with one player marking O & the other marking X, taking turns to place their marks in the spaces of a 3X3 grid. The player who succeeds in placing 3 of their marks in any horizontal, vertical or diagonal row wins the game.

Here we are considering one human player & the other player to be a computer program. The objective of playing this game using a computer is to write a program which never loses. Below are 3 approaches to play this game, which increase in
• Complexity
• Use of generalization
• Clarity of their knowledge
• Extensibility of their approach

Approach 1:

Let us represent 3X3 board as nine elements vector. Each element in a vector can contain any of the following
3 digits:
0 - representing blank position
1 - indicating X player move
2 - indicating O player move

It is assumed that this program makes use of a move table that consists of a vector of 3^9 (19,683) elements.

Index Current Board Position New Board Position


0 000000000 000010000
1 000000001 020000001
2 000000002 000100002
3 000000010 002000010
.
.
.

All the possible board positions are stored in Current Board Position column along with its corresponding next
best possible board position in New Board Position column.
Algorithm:
 View the vector (board) as a ternary number.
 Get an index by converting this vector to its corresponding decimal number.
 Get the vector from New Board Position stored at the index. The vector thus selected represents the
way the board will look after the move that should be made.
 So set board position equal to that vector.
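The table-lookup idea can be sketched in a few lines of Python. The helper names below (board_to_index, next_board, move_table) are illustrative assumptions; the move table itself still has to be filled in by hand, as noted under the disadvantages.

def board_to_index(board):
    # Treat the 9-element board vector (digits 0/1/2) as a ternary number.
    index = 0
    for cell in board:            # most significant digit first
        index = index * 3 + cell
    return index

def next_board(board, move_table):
    # Look up the precomputed best successor position for this board.
    return move_table[board_to_index(board)]

# Example: an empty board maps to index 0, so move_table[0] would hold the
# chosen opening position, e.g. [0, 0, 0, 0, 1, 0, 0, 0, 0] (X in the centre).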

Advantage:
 Very efficient in terms of time.
Disadvantages:
 Requires a lot of memory space to store the move table.
 A lot of work is required to specify the entries in the move table manually.
 This approach cannot be extended to 3D TIC-TAC-TOE, as 3^27 board positions would have to be stored.

Approach 2:

The board B[1..9] is represented by a 9-element vector.
2 - Representing blank position
3 - Indicating X player move
5 - Indicating O player move

In this approach we use the following 3 sub-procedures.


 Go(n) – Using this function the computer can make a move in square n.
 Make_2 – This function helps the computer make two in a row (a valid "two").
 PossWin(P) – If player P can win in the next move, then it returns the index (from 1 to 9) of the square that constitutes the winning move; otherwise it returns 0.

The strategy applied for this game is: if the player can win in the next move, the player plays in the winning square; otherwise, check whether the opponent can win in the next move. If so, block that square; otherwise, try to make a valid 2 in any row, column or diagonal.
The function PossWin operates by checking, one at a time, each of the rows, columns & diagonals (see the sketch below).
 If PossWin(P)=0, then P cannot win. Find whether the opponent can win; if so, block it. This can be achieved as follows:
 If the product of the values in a row, column or diagonal is 3*3*2=18, then player X can win there, as there are two X's and one blank square in that line.
 If the product is 5*5*2=50, then player O can win there.
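A possible Python sketch of PossWin using this 2/3/5 encoding is shown below; the 1-to-9 board indexing and the LINES table are assumptions made for illustration.

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),     # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),     # columns
         (1, 5, 9), (3, 5, 7)]                # diagonals

def poss_win(board, p):
    # board is a 10-element list (index 0 unused); p is 3 for X or 5 for O.
    # Returns the square (1-9) where p can win on the next move, else 0.
    target = p * p * 2                        # 18 for X, 50 for O
    for line in LINES:
        if board[line[0]] * board[line[1]] * board[line[2]] == target:
            for square in line:
                if board[square] == 2:        # the blank square in that line
                    return square
    return 0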

 The algorithm has a built-in strategy for each move it may have to make.
 It makes the odd-numbered moves if it is playing X, the even-numbered moves if it is playing O.
 The strategy for each turn is as follows:
 Turn=1 : Go(1) (Upper left corner)
 Turn=2 : if Board [5] is blank, Go(5), else Go(1)
 Turn=3 : if Board [9] is blank, Go(9), else Go(3)
 Turn=4 : if Posswin(X) is not 0, then Go(Posswin(X)) [i.e., block opponent’s
win],
 else Go(Make2)
 Turn=5 : if Posswin(X) is not 0 then Go(Posswin(X)) [i.e., win]
o else if Posswin(0) is not 0, then Go(posswin(0))[i.e., block win],
o else if Board[7] is blank, then Go(7),
o else Go(3). [Here the program is trying to make a fork.]
 Turn=6 : if Posswin(0) is not 0 then Go(Posswin(0)),
o else if Posswin(X) is not 0, then Go(Posswin(X)),
o else Go(Make2).
 Turn=7 : if Posswin(X) is not 0 then Go(Posswin(X)),
o else if Posswin(0) is not 0, then Go(Posswin(0)),
o else go anywhere that is blank.
 Turn=8 : if Posswin(0) is not 0 then Go(Posswin(0)),
o else if Posswin(X) is not 0, then Go(Posswin(X)),
o else go anywhere that is blank.
 Turn=9 : same as Turn=7.

Advantages:
 Memory Efficient
 Easy to understand
Disadvantages:
 Not as efficient as first one with respect to time
 This approach cannot be extended to 3D TIC-TAC-TOE

Approach 3:
In this approach, we choose the board positions to be a magic square of order 3, with the blocks numbered by the magic numbers. A magic square of order n consists of n^2 distinct numbers (from 1 to n^2) such that the numbers in every row, every column & both diagonals have the same sum; for a magic square of order 3 this sum is 15.

In this approach, we maintain a list of the blocks played by each player. For the sake of convenience, each
block is identified by its number.
The following strategy for possible win for a player is used.
 Each pair of blocks a player owns is considered.
 Difference D between 15 & the sum of the two blocks is computed.
o If D<0 or D>9, then these two blocks are not collinear & so can be ignored. Otherwise if the
block representing difference is blank (not in either list) then player can move in that block.

• This strategy will produce a possible win for a player.
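A small Python sketch of this possible-win test is given below; the function name and argument layout are assumptions for illustration only.

def possible_win(player_squares, opponent_squares):
    # Returns a square number (1-9) that completes a line for the player, or None.
    taken = set(player_squares) | set(opponent_squares)
    squares = list(player_squares)
    for i in range(len(squares)):
        for j in range(i + 1, len(squares)):
            d = 15 - (squares[i] + squares[j])      # the square that completes 15
            if 1 <= d <= 9 and d not in taken:      # collinear and still blank
                return d
    return None

# From Move 5 of the trace below: the machine owns {5, 4} and the human owns {8, 6}.
# possible_win([5, 4], [8, 6]) is None, while possible_win([8, 6], [5, 4]) == 1,
# so the machine blocks the human by taking square 1.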

Execution: Here, the human uses the mind & the machine uses calculations. Assume that the machine is the first player.
Move 1: Machine: Takes the element 5.
Move 2: Human: Takes the element 8.
Move 3: Machine: Takes the element 4. So, the machine's elements are 5, 4.
Move 4: Human: Takes the element 6. So, the human's elements are 8, 6.
Move 5: Machine: From the above moves the machine has the elements 5, 4. First the machine checks whether it can win: 5+4=9, 15-9=6. Since the element 6 is already taken by the human, there is no possibility of the machine winning in this move. Now the machine checks whether the human can win or not. The elements taken by the human are 8, 6. So, 8+6=14, 15-14=1. Since the element 1 is available, the machine takes the element 1. Finally, the elements taken by the machine are 5, 4, 1.
Move 6: Human: Takes the element 3. So, the human's elements are 8, 6, 3.
Move 7: Machine: Considering the elements taken by the machine (5, 4, 1): 5+1=6, 15-6=9. Since the element 9 is available, the machine takes the element 9.
Therefore, 1+5+9=15, which leads to a machine winning state.
Advantage:
 This approach could extend to handle three-dimensional TIC-TAC-TOE.

Disadvantage:
 Requires more time than other 2 approaches as it must search a tree representing all possible move
sequences before making each move.

7. DEVELOPMENT OF AI LANGUAGES
AI Languages have traditionally been those which stress knowledge representation schemes, pattern matching, flexible search & programs as data. Examples of such languages are
 LISP
 Prolog
 Pop-2
 Machine Learning (ML)

LISP: LISP is a functional Language based on lambda Calculus.


Prolog: It is based on first-order predicate logic. Both LISP and Prolog are declarative languages, where one is concerned about 'what to compute' & not 'how to compute'.
Pop-2: It is a stack-based language providing greater flexibility. It is similar to LISP.
Machine Learning (ML): It is an application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. The primary aim is to allow computers to learn automatically without human intervention.

8. CURRENT TRENDS IN AI
 Hard Computing Techniques
 Soft Computing Techniques
Conventional Computing (Hard Computing) is based on the concept of precise modeling & analysis to yield accurate results. Hard computing techniques work well for simple problems, but are bound by the NP-complete set, which includes problems often occurring in biology, medicine, humanities, management sciences & similar fields.
Soft Computing is a formal computer science area of study which refers to a collection of computational techniques in computer science, machine learning & some engineering disciplines, which study, model & analyze very complex phenomena. Components of Soft Computing include Neural Networks, Fuzzy Systems, Evolutionary Algorithms, Swarm Intelligence, etc.
 Neural Networks have been developed based on the functioning of human brains. Attempts to model the biological neuron have led to the development of the field called Artificial Neural Networks.

 Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. The approach of FL
imitates the way of decision making in humans that involves all intermediate possibilities between
digital values YES and NO. The conventional logic block that a computer can understand takes
precise input and produces a definite output as TRUE or FALSE, which is equivalent to human’s
YES or NO.

 Evolutionary techniques mostly involve meta-heuristic optimization algorithms such as evolutionary algorithms & Swarm Intelligence.

 Genetic algorithms were developed mainly by emulating nature & the behavior of biological chromosomes.

 Ant colony algorithm was developed to emulate the behavior of real ants. An ant algorithm is one
in which a set of artificial ants (agents) cooperate to find the solution of a problem by exchanging
information on graph edges.

 Swarm Intelligence (SI) is a type of AI based on the collective behavior of decentralized, self
organized systems. Social insects like ants, bees, wasps & termites perform their tasks independent
of other members of the colony. However, they are able to solve complex problems emerging in
their daily lives by mutual cooperation. This emergent behavior of self organization by a group of
social insects is known as Swarm Intelligence.

 Expert systems continue to remain an attractive field for their practical utility in all walks of real life.
 The emergence of agent technology as a subfield of AI is a significant paradigm shift for software development & a breakthrough as a new revolution. Agents are generally situated in some environment & are capable of taking autonomous decisions while solving a problem. Multi-Agent Systems are designed using several independent & interacting agents to solve problems of a distributed nature.

9. PROBLEM SOLVING:
STATE-SPACE SEARCH AND CONTROL STRATEGIES
 Problem solving is a method of deriving solution steps beginning from an initial description of the problem to the desired solution.
 In AI, problems are frequently modeled as state space problems, where the state space is the set of all possible states from the start state to the goal state.
 The 2 types of problem-solving methods that are generally followed are
i. general purpose methods &
ii. special purpose methods
 A general purpose method is applicable to a wide variety of problems, whereas a special purpose method is tailor-made for a particular problem.
 The most general approach for solving a problem is to generate a solution & test it.
 For generating a new state in the search space, an action/operator/rule is applied & the new state is tested to check whether it is the goal state or not.
 In case the state is not the goal state, the procedure is repeated.
 The order of application of rules to the current state is called the control strategy.
10. GENERAL PROBLEM SOLVING
Production System:
 Production System (PS) is one of the formalisms that help AI programs perform the search process more conveniently in state-space problems.
 This system consists of the start (initial) state(s) & goal (final) state(s) of the problem, along with one or more databases consisting of suitable & necessary information for a particular task.
 A Production System consists of a number of production rules.
• Example 1: Water Jug Problem
• Problem Statement: There are two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can we get exactly 2 gallons of water into the 4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3. x represents the number of gallons of water in the 4-gallon jug and y represents the number of gallons of water in the 3-gallon jug. The start state is (0, 0). The goal state is (2, n) for any value of n.
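As a rough illustration, the following Python sketch explores this state space breadth-first using fill, empty and pour rules; the rule set and the function name are assumptions for illustration, not part of the original formulation.

from collections import deque

def water_jug(goal_x=2, cap_x=4, cap_y=3):
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal_x:                       # goal state (goal_x, n)
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        successors = [
            (cap_x, y), (x, cap_y),           # fill a jug
            (0, y), (x, 0),                   # empty a jug
            (min(cap_x, x + y), max(0, y - (cap_x - x))),  # pour 3-gallon into 4-gallon
            (max(0, x - (cap_y - y)), min(cap_y, x + y)),  # pour 4-gallon into 3-gallon
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)
    return None

# Prints one shortest solution, e.g. (0,0) -> (4,0) -> (1,3) -> (1,0) -> (0,1) -> (4,1) -> (2,3).
print(water_jug())
# water_jug(4, 5, 3) would handle the 5-gallon / 3-gallon variant of Example 2 below.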

Example 2: Water Jug Problem
Problem Statement: There are two jugs, a 5-gallon one and a 3-gallon one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can we get exactly 4 gallons of water into the 5-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3, 4 or 5 and y = 0, 1, 2 or 3. x represents the number of gallons of water in the 5-gallon jug and y represents the number of gallons of water in the 3-gallon jug. The start state is (0, 0). The goal state is (4, n) for any value of n.

• State Space Search :


 A State Space Search is another method of problem representation that facilitates easy search.
 Using this method, we can also find a path from start state to goal state while solving a problem.
 A state space basically consists of 4 components:
1. A set S containing start states of the problem.
2. A set G containing goal states of the problem.
3. Set of nodes (states) in the graph/tree. Each node represents the state in problem-solving process.
4. Set of arcs connecting nodes. Each arc corresponds to operator that is a step in a problem-solving
process.
 A solution path is a path through the graph from a node in S to a node in G.
 The main objective of a search algorithm is to determine a solution path in graph.
 There may be more than one solution path, as there may be more than one way of solving the problem

Example: The Eight-Puzzle Problem
• Problem Statement:
 The eight-puzzle problem has a 3X3 grid with 8 randomly numbered (1 to 8) tiles arranged on it with one empty cell.
 At any point, a tile adjacent to the empty cell can move into it, creating a new empty cell.
 Solving this problem involves arranging tiles such that we get the goal state from the start state.

A state for this problem should keep track of the position of all tiles on the game board, with 0 representing the
blank (empty cell) position on the board.
The start & goal states may be represented as follows with each list
representing corresponding row:
1. Start state: [ [3, 7, 6], [5, 1, 2], [4, 0, 8] ]
2. Goal state: [ [5, 3, 6], [7, 0, 2], [4, 1, 8] ]
3. The operators can be thought of as moving the blank {Up, Down, Left, Right}, i.e., the direction in which the blank space effectively moves.
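A minimal Python sketch of this move (operator) generation is shown below; the representation of a state as a 3x3 list of lists with 0 for the blank follows the convention above, while the function name is an assumption.

def successors(state):
    # state is a 3x3 list of lists with 0 for the blank; returns (move, new_state) pairs.
    r, c = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
    moves = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}
    result = []
    for name, (dr, dc) in moves.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            new_state = [row[:] for row in state]          # copy the board
            new_state[r][c], new_state[nr][nc] = new_state[nr][nc], new_state[r][c]
            result.append((name, new_state))
    return result

start = [[3, 7, 6], [5, 1, 2], [4, 0, 8]]
for move, s in successors(start):
    print(move, s)        # the blank at row 2, column 1 can move Up, Left or Right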

Solution: Following is a Partial Search Tree for Eight Puzzle Problem

The search will be continued until the goal state is reached.
Search Tree for Water Jug Problem:

Control Strategies:
Control strategy is one of the most important components of problem solving; it describes the order of application of the rules to the current state.
A control strategy should be such that it causes motion towards a solution.
The second requirement of a control strategy is that it should explore the solution space in a systematic manner.
Depth-First & Breadth-First are systematic control strategies.
There are 2 directions in which a search could proceed:
Data-Driven Search, called Forward Chaining, from the Start State
Goal-Driven Search, called Backward Chaining, from the Goal State
Forward Chaining:
The process of forward chaining begins with known facts & works towards a solution.
For example, in the 8-puzzle problem, we start from the start state & work forward to the goal state.
In this case, we begin building a tree of move sequences with the root of the tree as the start state.

The states at the next level of the tree are generated by finding all rules whose left sides match the root & using their right sides to create the new states.
This process is continued until a configuration that matches the goal state is generated.
Backward Chaining:
It is a goal-directed strategy that begins with the goal state & continues working backward, generating more sub-goals that must also be satisfied to satisfy the main goal, until we reach the start state.
The Prolog (Programming in Logic) language uses this strategy.
In this case, we begin building a tree of move sequences with the goal state as the root of the tree.
The states at the next level of the tree are generated by finding all rules whose right sides match the goal state & using their left sides to create the new states.
This process is continued until a configuration that matches the start state is generated.
Note: We can use both Data-Driven & Goal-Driven strategies for Problem Solving, depending on the nature of
the problem.
• One Solution to the Water Jug Problem:

11. CHARACTERISTICS OF PROBLEM


• Type of Problems: There are 3 types of problems in real life:
i. Ignorable
ii. Recoverable
iii. Irrecoverable
• Ignorable:
These are the problems where we can ignore the solution steps.
For example, in proving a theorem, if some lemma is proved to prove a
theorem & later on we realize that it is not useful, then we can ignore this
solution step & prove another lemma.
Such problems can be solved using simple control strategy.
• Recoverable:
These are the problems where solution steps can be undone.
For example, in Water Jug Problem, if we have filled up the jug,
we can empty it also.
Any state can be reached again by undoing the steps.
These problems are generally puzzles played by a single player.
Such problems can be solved by back tracking, so control strategy
can be implemented using a push-down stack.
•Irrecoverable:
These are the problems where solution steps cannot be undone.
For example, any 2-player game such as chess, playing cards or snakes & ladders is an example of this category.
Such problems can be solved by the planning process.

Decomposability of a Problem:
Divide the problem into a set of independent smaller sub-problems, solve them and combine the solution to get
the final solution.
The process of dividing sub-problems continues till we get the set of the smallest sub-problems for which a
small collection of specific rules are used.
Divide- And-Conquer technique is the commonly used method for solving such problems.
It is an important & useful characteristic, as each sub-problem is simpler to solve & can be handed over to a
different processor.
Thus, such problems can be solved in the parallel processing environment.
•Role of Knowledge:
Knowledge plays an important role in solving any problem.
Knowledge could be in the form of rules & facts which help generate search space for finding the solution.
• Consistency of Knowledge Base used in Solving Problem:
Make sure that the knowledge base used to solve the problem is consistent. An inconsistent knowledge base will lead to wrong solutions.
For example, suppose we have knowledge in the form of rules & facts as follows:
o If it is humid, it will rain.
o If it is sunny, then it is day time.
o It is a sunny day.
o It is night time.
This knowledge is not consistent: there is a contradiction, because 'it is day time' can be deduced from the knowledge, & thus both 'it is night time' and 'it is day time' are asserted, which is not possible at the same time.
If the knowledge base has such inconsistency, then some methods may be used to avoid such conflicts.
•Requirement of Solution:
We should analyze whether the solution required is absolute or relative.
We call a solution absolute if we have to find the exact solution, whereas it is relative if we have a reasonably good, approximate solution.
For example, in the Water Jug Problem, if there is more than one way to solve the problem, then we follow one path successfully.
There is no need to go back & find a better solution.
In this case, the solution is absolute.
In the travelling salesman problem, our goal is to find the shortest route; unless all routes are known, it is difficult to know the shortest route.
This is a Best-Path problem, whereas the Water Jug Problem is an Any-Path problem.
Any-Path problems are generally solved in a reasonable amount of time by using heuristics that suggest good paths to explore.
Best-Path problems are computationally harder than Any-Path problems.
12. EXHAUSTIVE SEARCHES (OR) UNINFORMED SEARCHES
i. Breadth-First Search
ii. Depth-First Search
iii. Depth-First Iterative Deepening
iv. Bidirectional Search
i. Breadth-First Search (BFS):
BFS is the most common search strategy for traversing a tree or graph.
This algorithm searches breadth-wise in a tree or graph.
The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
The breadth-first search algorithm is an example of a general graph search algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Loop until the goal state is found or NODE-LIST is empty.
a. Remove the first element, say E, from the NODE-LIST. If NODE-LIST was empty then quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is the goal state, quit and return this state.
iii. Otherwise add this state to the end of NODE-LIST.
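A compact Python sketch of this NODE-LIST procedure, using a FIFO queue, might look as follows; the successors function stands for the rule-application step and is an assumed problem-specific input.

from collections import deque

def breadth_first_search(start, is_goal, successors):
    node_list = deque([start])          # step 1: NODE-LIST holds the initial state
    visited = {start}
    while node_list:                    # step 2: loop until goal found or list empty
        e = node_list.popleft()         # step 2a: remove the first element E
        for new_state in successors(e): # step 2b: apply every matching rule
            if is_goal(new_state):
                return new_state        # step 2b(ii): goal reached
            if new_state not in visited:
                visited.add(new_state)
                node_list.append(new_state)   # step 2b(iii): add to the end of NODE-LIST
    return None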

• Advantages:
BFS will provide a solution if any solution exists.
If there is more than one solution for a given problem, then BFS will provide the minimal solution, which requires the least number of steps.
• Disadvantages:
BFS requires lots of memory, since each level of the tree must be saved into memory to expand the next level.
BFS needs lots of time if the solution is far away from the root node.
• Example 1: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K.

• Example 2: a ---> b ---> c ---> d ---> e ---> f ---> g ---> h ---> i ---> j ---> k.

• Example 3: BFS for Water Jug Problem

• Example 4: 8-Puzzle Problem

• Example 5:

Depth-First Search (DFS):


DFS is a recursive algorithm for traversing a tree or graph data structure.
It is called the depth-first search because it starts from the root node and follows each path to its greatest depth
node before moving to the next path.
DFS uses a stack data structure for its implementation.
• Algorithm:
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled.
a) Generate a state, say E, and let it be a successor of the initial state. If there are no more successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
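The recursive procedure above can be sketched in Python as follows; a depth limit is added here as an assumption, to guard against the infinite descent mentioned under the disadvantages below.

def depth_first_search(state, is_goal, successors, limit=50):
    if is_goal(state):                       # step 1: initial state is the goal
        return [state]
    if limit == 0:
        return None                          # assumed cutoff: treat as failure
    for e in successors(state):              # step 2a: generate a successor E
        path = depth_first_search(e, is_goal, successors, limit - 1)
        if path is not None:                 # step 2c: success was returned
            return [state] + path
    return None                              # no more successors: signal failure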

• Advantages:
DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
• Disadvantages:
There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
The DFS algorithm goes for deep-down searching and sometimes it may go into an infinite loop.
• Example 1:

Note: It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking it will traverse node C and then G, and here it will terminate, as it has found the goal node.
• Example 2:

• Example 3: Water Jug Problem

• Example 4: 8- Puzzle Problem

Depth-First Iterative Deepening (DFID):


DFID is a combination of the DFS and BFS algorithms.
This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
The algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first search's fast (shallowest-solution) search and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

• Example:

• Iteration 4: A, B, D, H, I, E, C, F, K, G. In the fourth iteration, the algorithm finds the goal node.

Advantages:
It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
It repeats all the work of the previous phase.
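A short Python sketch of the idea (a depth-limited DFS called with limits 0, 1, 2, ...) is given below; the function names are illustrative assumptions.

def depth_limited(state, is_goal, successors, limit):
    # Depth-first search that gives up below the given depth limit.
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        path = depth_limited(child, is_goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    # Gradually increase the depth limit until the goal is found.
    for limit in range(max_depth + 1):
        path = depth_limited(start, is_goal, successors, limit)
        if path is not None:
            return path          # found at the shallowest possible depth
    return None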

Bidirectional Search:
Bidirectional search is a graph search algorithm that runs 2 simultaneous searches.
One search moves forward from the start state & the other moves backward from the goal state; the search stops when the two meet in the middle.
It is useful for those problems which have a single start state & a single goal state.
Advantages:
Bidirectional search is fast.
Bidirectional search requires less memory.
Disadvantages:
Implementation of the bidirectional search tree is difficult.
In bidirectional search, one should know the goal state in advance.
• Example:

If a match is found, then the path can be traced from the start state to the matched state & from the matched state to the goal state.
It should be noted that each node has links to its successors as well as to its parent.
These links help generate the complete path from start to goal states.
The trace of finding the path from node 1 to node 16 using Bidirectional Search is given below.
The path obtained is 1, 2, 6, 11, 14, 16.

Travelling Salesman Problem (TSP):
• Statement: In the travelling salesman problem (TSP), one is required to find the shortest route that visits all the cities once & returns back to the starting point. Assume that there are 'n' cities & the distance between each pair of cities is given.
• The problem seems to be simple, but is deceptive.
• If all the possible paths of the search tree are explored & the shortest path is returned, this will require (n-1)! paths to be examined for 'n' cities.
A better (branch & bound) strategy is:
Start generating complete paths, keeping track of the shortest path found so far.
Stop exploring any path as soon as its partial length becomes greater than the shortest path length found so far.
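This pruning idea can be sketched in Python as below; dist is an assumed symmetric distance matrix and the tour is assumed to start and end at city 0.

def tsp_branch_and_bound(dist):
    # dist[i][j] is the distance between city i and city j.
    n = len(dist)
    best = {'length': float('inf'), 'tour': None}

    def extend(tour, length):
        if length >= best['length']:              # bound: prune this partial path
            return
        if len(tour) == n:                        # complete tour: close the loop
            total = length + dist[tour[-1]][tour[0]]
            if total < best['length']:
                best['length'], best['tour'] = total, tour + [tour[0]]
            return
        for city in range(n):
            if city not in tour:                  # branch: try each unvisited city
                extend(tour + [city], length + dist[tour[-1]][city])

    extend([0], 0)                                # start (and end) at city 0
    return best['tour'], best['length']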

In this example with 5 cities, there will be 4! = 24 possible paths.

• In the performance comparison below, we can notice that out of the 13 paths shown, 5 paths are only partially evaluated.

13. HEURISTIC SEARCH TECHNIQUES
Heuristic: A heuristic is helpful in improving the efficiency of the search process. The main heuristic search techniques are:
Generate & Test
Branch & Bound Search (Uniform Cost Search)
Hill Climbing
Beam Search
Best-First Search
A* Algorithm
Generate & Test:
The generate-and-test strategy is the simplest of all approaches.
This method generates a candidate solution for the given problem and tests whether the generated solution is the required solution.
Algorithm:
Start
Generate a possible solution
Test if it is a goal
If not, go to Start, else quit
End
Advantage:
Guaranteed to find a solution if a solution really exists.
Disadvantage:
Not suitable for larger problems.

Branch & Bound Search (Uniform Cost Search):
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge.
The primary goal of the uniform-cost search is to find a path to the goal node which has the lowest cumulative
cost.
Uniform-cost search expands nodes according to their path costs from the root node.
It can be used to solve any graph/tree where the optimal cost is in demand.
A uniform-cost search algorithm is implemented using a priority queue.
It gives maximum priority to the lowest cumulative cost.
• Algorithm: Branch and Bound Technique:
• Step 1: Place the start node of zero path length on the queue.
• Step 2: Until the queue is empty or a goal node has been found;
a. Determine if the first path in the queue contains a goal node
b. If the first path contains a goal node exit with success
c. If the first path does not contain a goal node, remove the path from the queue and form new paths by extending the removed path by one step.
d. Compute the cost of new paths and add them to the queue.
e. Sort the paths on the queue with lowest cost paths in front.
• Step 3: Otherwise exit with failure
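A minimal Python sketch of this procedure, with the queue implemented as a priority queue (heap) keyed on cumulative path cost, is given below; the graph argument (an adjacency dictionary of (neighbour, edge cost) pairs) is an assumed input format.

import heapq
from itertools import count

def uniform_cost_search(graph, start, goal):
    counter = count()                            # tie-breaker for equal costs
    queue = [(0, next(counter), [start])]        # (path cost, tie-breaker, path)
    visited = set()
    while queue:
        cost, _, path = heapq.heappop(queue)     # the cheapest path comes out first
        node = path[-1]
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, next(counter), path + [neighbour]))
    return None, float('inf')

# Hypothetical example:
# graph = {'S': [('A', 1), ('B', 5)], 'A': [('G', 9)], 'B': [('G', 2)]}
# print(uniform_cost_search(graph, 'S', 'G'))    # (['S', 'B', 'G'], 7)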
Example:

Advantage:
Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantage:
It does not care about the number of steps involved in searching and is only concerned about path cost, due to which this algorithm may get stuck in an infinite loop.
Hill Climbing:
i. Simple Hill Climbing
ii. Steepest-Ascent Hill Climbing (Gradient Search)
Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm.
It evaluates one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state.
It checks only one successor state at a time and, if that successor is better than the current state, it moves there; otherwise it stays in the same state.
Algorithm:

32
Step 1: Evaluate the initial state, if it is goal state then return success
and Stop.
Step 2: Loop Until a solution is found or there is no new operator
left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
If it is goal state, then return success and quit.
Else if it is better than the current state, then assign the new state as the current state.
Else if it is not better than the current state, then return to step 2.
Step 5: Exit.
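A possible Python sketch of simple hill climbing is shown below; the neighbours, value and is_goal functions are assumed problem-specific inputs, and a higher value is treated as better.

def simple_hill_climbing(start, neighbours, value, is_goal):
    current = start
    while True:
        if is_goal(current):
            return current
        improved = False
        for candidate in neighbours(current):        # one successor at a time
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current):    # first improvement wins
                current = candidate
                improved = True
                break
        if not improved:                             # no operator improves the state
            return current                           # possibly a local maximum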
Steepest-Ascent Hill Climbing (Gradient Search):
The Steepest-Ascent algorithm is a variation of the simple hill climbing algorithm.
This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state.
This algorithm consumes more time as it searches multiple neighbors.
Algorithm: Steepest-Ascent Hill Climbing (Gradient Search)
Step 1: Evaluate the initial state; if it is the goal state then return success and stop, else make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
a) Let SUCC be a state such that any successor of the current state will be better than it.
b) For each operator that applies to the current state:
Apply the new operator and generate a new state.
Evaluate the new state.
If it is goal state, then return it and quit, else compare it to the SUCC.
If it is better than SUCC, then set new state as SUCC.
If the SUCC is better than the current state, then set current state to
SUCC.
Step 3: Exit.
Disadvantages of Hill Climbing:
i. Local Maximum:
It is a state that is better than all its neighbours but not better than some other states which are far away.
From this state all moves look worse.
In such a situation, backtrack to some earlier state & try going in a different direction to find a solution.

ii. Plateau:
It is a flat area of the search space where all neighboring states have the same value.
It is not possible to determine the best direction.
In such a situation, make a big jump in some direction & try to get to a new section of the search space.

iii. Ridge:
It is an area of the search space that is higher than the surrounding areas but that cannot be traversed by single moves in any one direction.
It is a special kind of local maximum.

• Beam Search:
Beam Search is a heuristic search algorithm in which the W best nodes at each level are always expanded.
It progresses level by level & moves downward only from the best W nodes at each level.
Beam Search uses BFS to build its search tree.
At each level of the tree it generates all the successors of the states at the current level and sorts them in order of increasing heuristic value.
However, it only considers W states at each level, whereas the other nodes are ignored.
Here,
W - Width of the Beam Search
B - Branching Factor
There will only be W * B nodes under consideration at any depth, but only W nodes will be selected.
Algorithm:
1. NODE = Root_Node; FOUND = false
2. If NODE is the goal node, then FOUND = true; else find the SUCCs of NODE, if any, with their estimated costs and store them in the OPEN list;
3. While (FOUND = false and it is possible to proceed further) do
{
Sort the OPEN list;
Select the top W elements from the OPEN list, put them in the W_OPEN list and empty the OPEN list;
For each NODE in the W_OPEN list
{
If NODE = goal state then FOUND = true; else find the SUCCs of NODE, if any, with their estimated costs and store them in the OPEN list;
}
}
4. If FOUND = true then return Yes, otherwise return No;
5. Stop
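A simplified level-by-level Python sketch with beam width W is given below; the successors and h (heuristic) functions are assumed problem-specific inputs.

def beam_search(root, is_goal, successors, h, w=2):
    level = [root]
    while level:
        next_level = []
        for node in level:
            if is_goal(node):
                return node
            next_level.extend(successors(node))      # at most W * B candidates
        if not next_level:
            return None                              # unable to proceed further
        next_level.sort(key=h)                       # order of increasing heuristic value
        level = next_level[:w]                       # keep only the best W nodes
    return None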
Example:
Below is the search tree generated using the Beam Search Algorithm.
Assume W=2 & B=3.
Here the black nodes are selected, based on their heuristic values, for further expansion.
Example:

Best-First Search:
It is a way of combining the advantages of both Depth-First and Breadth-First Search into a single method.
At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
This is done by applying an appropriate heuristic function to each of them.
In some cases we have many options to solve a problem, but only one of them needs to be solved.
In AI this can be represented as an OR graph.
In this graph, among all the available sub-problems, any one of them needs to be solved.
Hence the name OR graph.
To implement such a graph-search procedure, we will need to use two lists of nodes.
OPEN:
This list contains all the nodes that have been generated and have had the heuristic function applied to them but which have not yet been examined.
OPEN is actually a priority queue in which the elements with the highest priority are those with the most
promising value of the heuristic function.
CLOSED:
This list contains all the nodes that have already been examined.
We need to keep these nodes in memory if we want to search a graph rather than a tree.
Algorithm: Best-First Search:
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a) Pick the best node on OPEN
b) Generate its successors
c) For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and
record its parent.
ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case,
update the cost of getting to this node and to any successors that this node may already have.
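A simplified Python sketch of best-first search is shown below, with OPEN as a priority queue ordered by heuristic value; nodes already generated are tracked through a parent dictionary (playing the role of CLOSED), and the re-parenting step 2(c)(ii) is omitted for brevity. The successors and h functions are assumed problem-specific inputs.

import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h):
    counter = count()                            # tie-breaker for equal heuristic values
    open_list = [(h(start), next(counter), start)]
    parent = {start: None}
    while open_list:
        _, _, node = heapq.heappop(open_list)    # pick the best node on OPEN
        if is_goal(node):
            path = []
            while node is not None:              # rebuild the path via parent links
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for child in successors(node):           # generate its successors
            if child not in parent:              # not generated before
                parent[child] = node
                heapq.heappush(open_list, (h(child), next(counter), child))
    return None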
• Example:

Step 1: At this level we have only one node, i.e., initial node A

• Step 2: Now we generate the successors of A; three new nodes are generated, namely B, C, and D, with costs of 3, 5 and 1 respectively. So these nodes are added to the OPEN list and A can be shifted to the CLOSED list since it has been processed.

Among these three nodes, D has the least cost and hence is selected for expansion.
So this node is shifted to the CLOSED list.
• Step 3: At this stage the node D is expanded generating the new nodes E and F with the costs 4 and 6
respectively. The newly generated nodes will be added to the OPEN list. And node D will be added to CLOSED
list.

• Step 4: At this stage node B is expanded generating the new nodes G & H with costs 6 and 5 respectively. The
newly generated nodes will be added to the OPEN list. And node B will be added to CLOSED list.

• Step 5: At this stage node E is expanded generating the new nodes I & J with costs 2 and 1 respectively. The newly
generated nodes will be added to the OPEN list. And node E will be added to CLOSED list.

A* Algorithm:
• A* is a Best-First Search algorithm that finds the least-cost path from the initial node to a goal node (out of one or more possible goals).
The A* search algorithm finds the shortest path through the search space using a heuristic function.
It uses the heuristic h(n) & the cost to reach node n from the start state, g(n).
This algorithm expands a smaller search tree and provides optimal results faster.
It is similar to uniform cost search except that it uses f(n) = g(n) + h(n) instead of g(n).
A* uses the search heuristic as well as the cost to reach the node.

Hence we combine both costs as

f(n) = g(n) + h(n)

where
f(n) = estimated cost of the cheapest solution through node n,
g(n) = cost to reach node n from the start state,
h(n) = estimated cost to reach the goal node from node n.
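A small Python sketch of A* on an explicit graph is given below; the graph (an adjacency dictionary of (neighbour, edge cost) pairs) and the heuristic table h are assumed inputs.

import heapq
from itertools import count

def a_star(graph, h, start, goal):
    counter = count()                                            # tie-breaker for equal f values
    open_list = [(h[start], next(counter), 0, start, [start])]   # (f, tie, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                new_f = new_g + h[neighbour]                     # f(n) = g(n) + h(n)
                heapq.heappush(open_list, (new_f, next(counter), new_g, neighbour, path + [neighbour]))
    return None, float('inf')

# Hypothetical example: with an admissible h, the path returned is optimal.
# graph = {'A': [('B', 1), ('C', 3)], 'B': [('G', 5)], 'C': [('G', 1)]}
# h = {'A': 3, 'B': 4, 'C': 1, 'G': 0}
# print(a_star(graph, h, 'A', 'G'))                  # (['A', 'C', 'G'], 4)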

Example 1:

Example 2 :

Example 3: Assuming all the values (arcs) as ‘1’

Optimal Solution by A* Algorithm:

Underestimation
Overestimation
Underestimation:
In the figure below, start node A is expanded to B, C & D with f values of 4, 5 & 6 respectively.
Here, we are assuming that the cost of all arcs is 1 for the sake of simplicity.
Note that node B has the minimum f value, so we expand this node to E, which has an f value of 5.
Since the f value of C is also 5, we resolve the tie in favour of E, the path we are currently expanding.
Now node E is expanded to node F with an f value of 6.
Clearly, expansion of node F is stopped, as the f value of C is now the smallest.
Thus, we see that by underestimating the heuristic value of B we have wasted some effort, but we eventually discover that B was farther away than we thought.
Now we go back & try another path & will find the optimal path.

Overestimation:
Here we are overestimating the heuristic value of each node in the graph/tree.
We expand B to E, E to F & F to G, for a solution path of length 4.
But assume that there is a direct path from D to a solution, giving a path of length 2, since the h value of D is also overestimated.
We will never find it because of the overestimated h(D).
We may find some other, worse solution without ever expanding D.
So, by overestimating h, we cannot be guaranteed to find the shortest path.
Admissibility of A*:
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to the goal state, if such a path exists.
We have seen earlier that if the heuristic function 'h' underestimates the actual cost from the current state to the goal state, then it is bound to give an optimal solution & hence is called an admissible function.
So, we can say that A* always terminates with the optimal path in case h is an admissible heuristic function.

15. CONSTRAINT SATISFACTION PROBLEM (CSP)
A CSP or Constraint Satisfaction Problem is a problem whose solution must satisfy certain conditions, also called constraints. It comprises:
• A finite set of variables which store the solution
(V = {V1, V2, V3, ....., Vn})
• A set of discrete values (domains) from which the solution is picked
(D = {D1, D2, D3, ....., Dn})
• A finite set of constraints
(C = {C1, C2, C3, ....., Cn})
In the case of AI, we deal most of the time with discrete values.
Common problems which can be solved using a CSP are Sudoku, cryptarithmetic, crosswords, etc.
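A minimal backtracking sketch for a CSP is shown below, using a small map-colouring instance as a stand-in example; the variables, domains and constraint used here are assumptions for illustration only.

def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):
        return assignment                        # every variable has a value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):               # check the constraints so far
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                      # undo and try the next value
    return None

# Example: colour three mutually adjacent regions with three colours.
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]
consistent = lambda a: all(a[x] != a[y] for x, y in adjacent if x in a and y in a)
print(backtrack({}, variables, domains, consistent))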
