
Unit I

Basics of AI
Unit I Foundation: Lecture Plan and References

Unit I: Basics of AI (04 lectures)

Lecture 1: Categories of AI; Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer Vision, Intelligent Control (R1, R2, R3)
Lecture 2: Intelligent agents; Agents and environments; Good behaviour (R1)
Lecture 3: The nature of environments (R1)
Lecture 4: Structure of agents (R1)

R1. Stuart Russell, Peter Norvig, "Artificial Intelligence: A Modern Approach", Pearson Education/Prentice Hall of India, 3rd Edition, 2010
R2. Vinod Chandra, Anand Hareendran, "Artificial Intelligence and Machine Learning", PHI Learning Pvt. Ltd., 2014
R3. Elaine Rich, Kevin Knight, Shivashankar Nair, "Artificial Intelligence", Tata McGraw Hill, 3rd Edition, 2009
Intelligent Agents
What is AI : https://www.youtube.com/watch?v=ad79nYk2keg&t=30s

Sophia, the first humanoid robot: https://www.youtube.com/watch?v=E8Ox6H64yu8&t=46s

https://www.youtube.com/watch?v=G-zyTlZQYpE

How Alexa works: https://www.youtube.com/watch?v=95jDQf78lbc


ML behind Alexa: https://www.youtube.com/watch?v=Dkg1ULBASNA
NLP in Alexa: https://www.youtube.com/watch?v=U1yT_4xcglY
Game to color the map puzzle: https://www.novelgames.com/en/fourcolour/
Cleaning Robot video: Google video search for "automatic cleaning robot"
Intelligent Agents
● What is AI: "the designing and building of intelligent agents that receive percepts from the
environment and take actions that affect that environment." By its nature, this definition
unites many different splintered fields: speech recognition, machine vision, learning
approaches, etc.
● Strong AI versus Weak AI
● Strong AI / deep AI: a computer can be made, or raised, to intelligence levels that match
human beings'.
● Weak AI / narrow AI: computers have abilities and features that mirror or mimic thought
processes, making them useful tools for figuring out how our own minds work.
● Narrow AI systems enhance or augment human "intelligence" by delivering calculations,
patterns, and analyses more efficiently than a human brain can.

● Difference between AI and ML, and terminologies in AI:
https://oilgains.medium.com/why-machine-learning-is-not-artificial-intelligence-61b174a3c9a2
Applications of AI
● Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language
Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer
Vision, Intelligent Control
● Game Playing : IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in
an exhibition match (Goodman and Keene, 1997).
● Video: Google video search for "ibm deep blue"

● Deep Blue was a supercomputer developed by IBM specifically for playing chess.
● Deep Blue could evaluate 200 million positions per second.
● It was a then-state-of-the-art expert system, relying upon rules and variables defined and
fine-tuned by chess masters and computer scientists.
● Current chess engines such as Leela Chess Zero typically use reinforcement learning to
train a neural network to play, developing their own internal logic rather than relying
upon rules defined by human experts.
● Deep Blue is an expert system designed to play chess.
● https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
Applications of AI
● Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language
Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer
Vision, Intelligent Control
● Natural Language Processing:
● It is concerned with giving computers the ability to understand text and spoken words in much
the same way human beings can.
● NLP combines computational linguistics—rule-based modeling of human language—with
statistical, machine learning, and deep learning models.
● Together, these technologies enable computers to process human language in the form of
text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s
intent and sentiment.
● NLP drives computer programs that translate text from one language to another, respond to
spoken commands, and summarize large volumes of text rapidly—even in real time.
● We have used NLP in the form of voice-operated GPS systems, digital assistants,
speech-to-text dictation software, customer service chatbots
● Video: Google video search for "language translator hand held device"
Applications of AI
● Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language
Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer
Vision, Intelligent Control
● Robotics:
● In industrial robots, machine learning is used to improve production efficiency and reduce
errors on the assembly line, in order to improve production capacity.
● AI is also being used to increase the capabilities of these tools so that they can perform
increasingly complex tasks, from soldering and assembling electronic components, to
complex surgery with greater precision and control, and less invasive procedures for the
patient.
● In health care, AI is also very important in diagnosis, as AI and machine learning can be
combined with Big Data tools to collect and analyse large volumes of information relating
to other diagnoses.
● Video: Google video search for "Amazon warehouses robots video"

https://www.linkedin.com/posts/stevenouri_artificialintelligence-technology-innovation-ugcPost-7155841105475854336-8SYt?utm_source=share&utm_medium=member_android
Applications of AI
● Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language
Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer
Vision, Intelligent Control
● Pattern Recognition :
● Pattern recognition is applied for data of all types, including image, video, text, and audio.
● As pattern recognition models can identify recurring patterns in data, predictions made by
such models are quite reliable.
● Pattern recognition involves three key steps: analyzing the input data, extracting patterns,
and comparing them with the stored data.
● Video: Google video search for "pattern recognition in ai"
Applications of AI
● Applications of Artificial Intelligence: Game Playing, Expert Systems, Natural Language
Processing, Image Understanding, Robotics, Pattern Recognition, Virtual Reality, Computer
Vision, Intelligent Control
● Virtual Reality :
● AI-generated content for VR and AR involves using machine learning algorithms to analyze
data and generate virtual objects and environments.
● This technology is changing the way we experience VR and AR, and is allowing developers
to create increasingly immersive and realistic virtual worlds for users to explore.
● Gaming: AI algorithms can generate realistic 3D models of game characters and
environments, and can even create dynamic game scenarios based on user interactions,
creating more immersive and engaging gaming experiences for users.
● Training and Simulation: in health care and aviation
● Education and learning
● Video: Google video search for "applications of ai in virtual reality"

Categories of AI
• Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence.
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• Programming a computer to pass a rigorously applied test provides plenty to work on.
• The computer would need to possess the following capabilities:
• NATURAL LANGUAGE PROCESSING: to enable it to communicate successfully in English;
• KNOWLEDGE REPRESENTATION: to store what it knows or hears;
• AUTOMATED REASONING: to use the stored information to answer questions and to draw
new conclusions;
• MACHINE LEARNING: to adapt to new circumstances and to detect and extrapolate patterns.
Video: Google video search for "turing test how does it work"

● The Turing test is based on a party game, the "imitation game," with some modifications.
● This game involves three players: one player is a computer, another is a human responder,
and the third is a human interrogator, who is isolated from the other two players and whose
job is to determine which of the two is the machine.
● Consider: Player A is a computer, Player B is a human, and Player C is the interrogator.
The interrogator knows that one of them is a machine, but he needs to identify which, on
the basis of questions and their responses.
https://www.javatpoint.com/turing-test-in-ai
Categories of AI
• Acting humanly: The Turing Test approach
• Turing’s test deliberately avoided direct physical interaction between the interrogator and
the computer, because physical simulation of a person is unnecessary for intelligence.

• The TOTAL TURING TEST includes a video signal so that the interrogator can test the
subject's perceptual abilities, as well as the opportunity for the interrogator to pass
physical objects "through the hatch."

• To pass the total Turing Test, the computer will need:

• COMPUTER VISION: to perceive objects, and
• ROBOTICS: to manipulate objects and move about.
Captcha and Turing Test
● A Turing test assesses a computer's ability to mimic human behavior
● A computer program "passes" the Turing test if its performance during the test is
indistinguishable from that of a human – if it acts the way that a human would act.
● A Turing test is not dependent on getting answers correct; it's about how "human" the
answers sound, regardless of whether they're right or wrong.
● A "Public Turing test," a CAPTCHA is really the opposite of a Turing test – it
determines whether a supposedly human user is actually a computer program (a bot) or
not, instead of trying to determine if a computer is human
● CAPTCHA needs to assign a brief task that people tend to be good at and computers
struggle with. Identifying text and images usually fits those criteria.
● Two widely used CAPTCHA services are hCaptcha (from an independent company) and
reCAPTCHA (offered by Google).
● It takes the average person approximately 10 seconds to solve a typical CAPTCHA.
https://www.cloudflare.com/learning/bots/how-captchas-work/
https://en.wikipedia.org/wiki/CAPTCHA
Categories of AI
● Thinking humanly: The cognitive modeling approach
● If we say that a given program thinks like a human, we must have some way of
determining how humans think and get inside the actual workings of human
minds.
● There are three ways to do this:
● Through introspection—trying to catch our own thoughts as they go by;
● Through psychological experiments—observing a person in action;
● Through brain imaging—observing the brain in action.
● If the program’s input–output behavior matches corresponding human behavior, that
is evidence that some of the program’s mechanisms could also be operating in
humans.
● COGNITIVE SCIENCE:
● The interdisciplinary field of cognitive science brings together computer models from
AI and experimental techniques from psychology to construct precise and testable
theories of the human mind.
Categories of AI
● Thinking rationally: The "laws of thought" approach
● The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking," that is, irrefutable reasoning processes.
● His syllogisms provided patterns for argument structures that always yielded correct
conclusions when given correct premises—for example, “Socrates is a man; all men
are mortal; therefore, Socrates is mortal.”
● These laws of thought were supposed to govern the operation of the mind; their study
initiated the field called logic.
● Programs based on such logical notation can, in principle, solve any solvable problem
stated in it (although, if no solution exists, a program might loop forever). The so-called
logicist tradition within artificial intelligence hopes to build on such programs to create
intelligent systems.
Categories of AI
● Thinking rationally:
● There are two main obstacles to this approach.
● First, it is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than 100%
certain.
● Second, there is a big difference between solving a problem “in principle” and
solving it in practice.
● Even problems with just a few hundred facts can exhaust the computational
resources of any computer unless it has some guidance as to which reasoning steps
to try first.
● Although both of these obstacles apply to any attempt to build computational
reasoning systems, they appeared first in the logicist tradition.
Categories of AI

● Acting rationally: The rational agent approach
● An agent is just something that acts.
● All computer programs do something, but computer agents are expected to do more:
operate autonomously, perceive their environment, persist over a prolonged time
period, adapt to change, and create and pursue goals.
● A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
● In the “laws of thought” approach to AI, the emphasis was on correct inferences.
● Making correct inferences is sometimes part of being a rational agent, because one way to
act rationally is to reason logically to the conclusion that a given action will achieve
one’s goals and then to act on that conclusion.
● Correct inference is not all of rationality;
● In some situations, there is no provably correct thing to do, but something must still be
done.
● There are also ways of acting rationally that cannot be said to involve inference.
● For example, recoiling from a hot stove is a reflex action that is usually more successful
than a slower action taken after careful deliberation.
● All the skills needed for the Turing Test also allow an agent to act rationally.
● Knowledge representation and reasoning enable agents to reach good decisions.
● We need to be able to generate comprehensible sentences in natural language to get by in
a complex society.
● We need learning not only to gain knowledge, but also because it improves our ability
to generate effective behavior.
● The rational-agent approach has two advantages over the other approaches.
○ It is more general than the “laws of thought” approach because correct inference is
just one of several possible mechanisms for achieving rationality.
○ It is more amenable to scientific development than are approaches based on human
behavior or human thought.
Categories of AI
● Acting rationally: The rational agent approach
● The standard of rationality is mathematically well defined and completely general, and
can be “unpacked” to generate agent designs that provably achieve it.
● Human behavior, on the other hand, is well adapted for one specific environment and is
defined by the sum total of all the things that humans do.
Intelligent Agents
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through effectors.
• A human agent has eyes, ears, and other organs for sensors, and hands, legs,
mouth, and other body parts for effectors.
• A robotic agent substitutes cameras and infrared range finders for the sensors and
various motors for the effectors.
• An agent's behaviour is described by the agent function that maps any given
percept sequence to an action.
• Agent function is implemented by an agent program.
• The agent function is an abstract mathematical description;
• The agent program is a concrete implementation, running on the agent
architecture.
Intelligent Agents
• Examples of Agents
1. Humans can be looked upon as agents. They have eyes, ears, skin, taste buds, etc.
for sensors; and hands, fingers, legs, mouth for effectors.
2. Robots are agents. Robots may have camera, sonar, infrared, bumper, etc. for
sensors. They can have grippers, wheels, lights, speakers, etc. for actuators. Some
examples of robots are Xavier from CMU, COG from MIT, etc.
3. Software agents or softbots or soft robots that have some functions as sensors
and some functions as actuators.
4. Expert systems, such as a cardiologist expert system, are agents.
5. Autonomous spacecrafts.

(Images: AIBO entertainment robot from SONY; Xavier robot from CMU)


Agent: Human — Percepts (sensors): eyes, ears, nose, etc.; Actuators: body part movements, etc.
Agent: Robotic — Percepts (sensors): infrared sensors, cameras, etc.; Actuators: various motors.
Intelligent Agents
• An agent acts in an environment and perceives its environment through sensors.
• The complete set of inputs at a given time is called a percept.
• The current percept, or a sequence of percepts can influence the actions of an
agent.
• The agent can change the environment through actuators or effectors.
• An operation involving an effector is called an action.
• Actions can be grouped into action sequences.
• The agent can have goals which it tries to achieve.
• Thus, an agent can be looked upon as a system that implements a mapping
from percept sequences to actions.
• A performance measure has to be used in order to evaluate an agent.
• An autonomous agent decides autonomously which action to take in the current
situation to maximize progress towards its goals.
Behavior of an Agent
An agent may be characterized as: autonomous, omniscient, intelligent, ideal rational.

• An autonomous agent decides autonomously which action to take in the current situation
to maximize progress towards its goals

• Ideal Rational Agent: For each possible percept sequence, an ideal rational agent should
do whatever action is expected to maximize its performance measure, on the basis of the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.

• An Intelligent Agent must sense, must act, must be autonomous (to some extent) and
rational

• An omniscient agent knows the actual outcome of its actions, and can act accordingly
Intelligent Agents
• Rationality And Omniscience:
• An omniscient agent knows the actual outcome of its actions, and can act
accordingly; but omniscience is impossible in reality.
• Example:
• I am walking along a road meant only for pedestrians and I see an old friend
across the street.
• There is no traffic nearby and I'm not otherwise engaged, so, being rational, I start
to cross the street.
• Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner, and before I
make it to the other side of the street I am flattened.
• Was I irrational to cross the street? It is unlikely that my obituary would read "Idiot
attempts to cross street."
• So Rationality is concerned with expected success given what has been
perceived.
• Crossing the street was rational because most of the time the crossing would be
successful, and there was no way I could have foreseen the falling door.
Intelligent Agents
• Rationality And Omniscience:
• Another agent that was equipped with radar for detecting falling doors or a
steel cage strong enough to repel them would be more successful, but it would
not be any more rational.
• So we cannot blame an agent for failing to take into account something it could not
perceive, or for failing to take an action (such as repelling the cargo door) that it is
incapable of taking.
• But relaxing the requirement of perfection is not just a question of being fair to
agents.
• If we require that an intelligent agent always do what is actually the right thing, it
will be impossible to design an agent that fulfills this specification.
• Rational at any given time depends on four things:
– The performance measure that defines degree of success.
– Everything that the agent has perceived so far. We will call this complete
perceptual history the percept sequence.
– What the agent knows about the environment.
– The actions that the agent can perform.
Intelligent Agents
• Ideal Rational Agent: For each possible percept sequence, an ideal rational agent
should do whatever action is expected to maximize its performance measure, on the
basis of the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
• For the previous example:
• if an agent does not look both ways before crossing a busy road, then its
percept sequence will not tell it that there is a large truck approaching at high
speed.
• The definition seems to say that it would be OK for it to cross the road. In fact, this
interpretation is wrong on two counts.
• First, it would not be rational to cross the road: the risk of crossing without looking
is too great.
• Second, an ideal rational agent would have chosen the "looking" action before
stepping into the street, because looking helps maximize the expected performance.
• Doing actions in order to obtain useful information is an important part of
rationality
Intelligent Agents
• An Intelligent Agent must sense, must act, must be autonomous (to some
extent) and rational.
• AI is about building rational agents. An agent is something that perceives and acts.
• A rational agent always does the right thing.
• Agent Performance
• The performance measure is a subjective measure to characterize how successful
an agent is.
• The success can be measured in various ways :
– It can be measured in terms of speed or efficiency of the agent.
– It can be measured by the accuracy or the quality of the solutions achieved by
the agent.
– It can also be measured by power usage, money, etc.
• Example: consider an agent that is supposed to vacuum a dirty floor.
• AI assistants like Alexa and Siri are examples of intelligent agents: they use sensors to
perceive a request made by the user and then automatically collect data from the internet
without the user's help.
• Alexa Video
Intelligent Agents
• The vacuum-cleaner world:
• There are two locations: squares A and B.
• The vacuum agent perceives which square it is in and whether there is dirt in the square.
• If the current square is dirty, then suck; otherwise move to the other square.
• Partial tabulation of a simple agent function for the vacuum-cleaner world (a code sketch
follows below).
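As a minimal sketch (not from the slides), this tabulated agent function can be written directly in Python; the percept format (location, status) is an assumed representation:

def vacuum_agent_function(percept):
    # Percept assumed to be a (location, status) pair, e.g. ('A', 'Dirty').
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

For instance, vacuum_agent_function(('A', 'Clean')) returns 'Right', matching the rule "if the current square is dirty then suck, otherwise move to the other square."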
Intelligent Agents
• Agent Performance
• A performance measure embodies the criterion for success of an agent's
behaviour.
• Example: consider an agent that is supposed to vacuum a dirty floor.
• The performance measure would be the amount of dirt cleaned up in a single
eight-hour shift.
• A more sophisticated performance measure would factor in the amount of
electricity consumed and the amount of noise generated as well.
• A third performance measure might give highest marks to an agent that not only
cleans the floor quietly and efficiently, but also finds time to go windsurfing at the
weekend
• Thus evaluating performance is also important.
• If we measured how much dirt the agent had cleaned up in the first hour of the day,
we would be rewarding those agents that start fast (even if they do little or no work
later on), and punishing those that work consistently. Thus,
• we want to measure performance over the long run, be it an eight-hour shift or a
lifetime.
Intelligent Agents
• The notion of an agent is meant to be a tool for analyzing systems, not an
absolute characterization that divides the world into agents and non-agents.
• Consider a clock; it can be thought of as a simple agent.
• As an agent, most clocks always do the right action: moving their hands (or
displaying digits) in the proper fashion.
• Clocks are a kind of degenerate agent in that their percept sequence is empty; no
matter what happens outside, the clock's action should be unaffected.
• This is not quite true. If the clock and its owner take a trip from California to
Australia, the right thing for the clock to do would be to turn itself back six
hours.
• We do not get upset at our clocks for failing to do this because we realize that they
are acting rationally, given their lack of perceptual equipment
• So an agent's behavior depends only on its percept sequence to date
• We can describe any particular agent by making a table of the action it takes
in response to each possible percept sequence and have a mapping from
percept sequences to actions
• So mappings describe agents, then ideal mappings describe ideal agents.
• Specifying which action an agent ought to take in response to any given percept
sequence provides a design for an ideal agent.
Intelligent Agents
• A very simple agent: the square-root function on a calculator (an ideal mapping
and ideal agent design).
• The percept sequence for this agent is a sequence of keystrokes representing a
number, and the action is to display a number on the display screen.
• The ideal mapping is that when the percept is a positive number x, the right action is
to display a positive number z such that z² ≈ x, accurate to, say, 15 decimal places.
• This specification of the ideal mapping does not require the designer to actually
construct a table of square roots. Nor does the square-root function have to use a
table to behave correctly
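One way to realize this ideal mapping without storing a table is an iterative procedure. The sketch below uses Newton's method; the method choice and stopping rule are illustrative assumptions, not something prescribed by the slides:

def sqrt_agent(x):
    # Newton's method for the square root of a positive number x:
    # repeatedly refine the estimate z using z <- (z + x/z) / 2.
    z = x if x > 1 else 1.0
    for _ in range(60):               # ample iterations for double precision
        next_z = 0.5 * (z + x / z)
        if next_z == z:               # converged to machine precision
            break
        z = next_z
    return z

This illustrates the slide's point: the agent behaves correctly without any table of square roots.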
The Nature of Environment
• Structure of Task Environment
• Agent program: a function that implements the agent mapping from percepts to
actions.
• The agent program runs on a computing device called the architecture, which can be
a plain computer or special-purpose hardware, together with software that connects
the computer and the agent program.
• The architecture makes the percepts from the sensors available to the
program, runs the program, and feeds the program's action choices to the
effectors as they are generated.
• The relationship among agents, architectures, and programs can be summarized as:
– agent = architecture + program
• Software agents (or software robots or softbots) exist in rich, unlimited
domains.
• A softbot designed to fly a flight simulator for a 747.
• The simulator is a very detailed, complex environment, and the software agent must
choose from a wide variety of actions in real time.
• Simple health care softbots
The Nature of Environment
• Task Environment
• To specify the rationality of an agent (the right activity that makes the agent
successful), we must specify the Performance measure, Environment, Actuators, and
Sensors (PEAS).

Agent Type: Taxi Driver
Performance Measure: safe, fast, legal, comfortable trip; maximize profit
Environment: roads, traffic, customers, pedestrians
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: cameras, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
The performance measure lists desirable qualities, the environment describes what the agent
will face, the actuators are the controls, and the sensors provide the inputs on which the
actuators work or react.
Examples of agent types and their PEAS descriptions.

AI Videos for PEAS


The Nature of Environment
Properties of Task Environments:
● Fully vs partially observable
● Single-agent vs multi-agent
● Deterministic vs stochastic
● Episodic vs sequential
● Discrete vs continuous
● Static vs dynamic
The hardest case is partially observable, stochastic, sequential, dynamic, continuous, and
multiagent.
It also turns out that most real situations are so complex that whether they are really
deterministic is a moot point. For practical purposes, they must be treated as stochastic.
Properties of Task Environments
• Fully observable :
• If an agent's sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.
• A task environment is effectively fully observable if the sensors detect all
aspects that are relevant to the choice of action; relevance, in turn, depends on the
performance measure.
• Fully observable environments are convenient because the agent need not maintain
any internal state to keep track of the world.
• For example, a checkers game can be classed as fully observable, because the
agent can observe the full state of the game (how many pieces the opponent has,
how many pieces we have, etc.). A crossword puzzle and chess with a clock are
also fully observable.
• By contrast, a poker game is only partially observable: the agent may not know what
cards the opponent has, and will have to make the best decision based on what cards
the opponent has played.
Properties of Task Environments
• Partially observable:
• An environment might be partially observable because of noisy and inaccurate
sensors, or because parts of the state are simply missing from the sensor data. For
example,
● a vacuum agent with only a local dirt sensor cannot tell whether there is dirt
in other squares
● an automated taxi cannot see what other drivers are thinking.


Properties of Task Environments
• Deterministic vs Stochastic
• If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic (pattern that may be analysed statistically
but may not be predicted precisely)
• An agent need not worry about uncertainty in a fully observable, deterministic
environment.
• Whether an environment is deterministic or stochastic is judged from the point of view of the agent.
• Taxi driving is clearly stochastic in this sense, because one can never predict
the behavior of traffic exactly; moreover, one's tires blow out and one's engine
seizes up without warning.
• The vacuum world as we described it is deterministic, but variations can include
stochastic elements such as randomly appearing dirt and an unreliable suction
mechanism

Properties of Task Environments
• Deterministic vs Stochastic
• Deterministic environment is one where your agent’s actions uniquely determine the
outcome.
• For example, in chess, there's really no randomness when you move a piece.
• The effect of moving a piece is completely predetermined, and no matter where I’m
going to move the same piece, the outcome is the same.
• A crossword puzzle is likewise deterministic.
• Games with dice, for example, like backgammon, are stochastic. While you can
still deterministically move your pieces, the outcome of an action also involves
throwing of the dice, and you can’t predict those.

• There’s a certain amount of randomness involved for the outcome of dice, and
therefore, we call this stochastic
Properties of Task Environments
• Episodic vs sequential
• In an episodic task environment, the agent's experience is divided into atomic
episodes.
• Each episode consists of the agent perceiving and then performing a single action.
• Crucially, the next episode does not depend on the actions taken in previous
episodes.
• In episodic environments, the choice of action in each episode depends only on the
episode itself.
• Example :classification tasks, an agent that has to spot defective parts on an
assembly line bases each decision on the current part, regardless of previous
decisions also the current decision doesn't affect whether the next part is defective.
• In sequential environments, the current decision could affect all future
decisions.
• Example: Chess and taxi driving are sequential
• In both cases, short-term actions can have long-term consequences.
• Episodic environments are much simpler than sequential environments because the
agent does not need to think ahead.
Properties of Task Environments
• Static vs Dynamic
• If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static.
• Static environments are easy to deal with because the agent need not keep looking
at the world while it is deciding on an action, nor need it worry about the passage of
time.
• Dynamic environments are continuously asking the agent what it wants to do; if it
hasn't decided yet, that counts as deciding to do nothing.
• If the environment itself does not change with the passage of time but the agent's
performance score does, then we say the environment is semi dynamic.
• Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving
while the driving algorithm dithers about what to do next.
• Chess, when played with a clock, is semi dynamic.
• Crossword puzzles are static.
Properties of Task Environments
• Discrete vs Continuous
• The discrete/continuous distinction can be applied to the state of the environment, to
the way time is handled, and to the percepts and actions of the agent.
• For example, a discrete-state environment such as a chess game has a finite
number of distinct states.
• Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range of continuous
values and do so smoothly over time.
• Taxi-driving actions are also continuous (steering angles, etc.).
• Input from digital cameras is discrete, but is typically treated as representing
continuously varying intensities and locations.
Properties of Task Environments
• Single agent/ Multi-agent
• For example, an agent solving a crossword puzzle by itself is clearly in a single-agent
environment,
• An agent playing chess is in a two-agent environment.
• We have described how an entity may be viewed as an agent, but we have not
explained which entities must be viewed as agents.
• Does an agent A (the taxi driver) have to treat an object B (another vehicle) as an
agent, or can it be treated merely as a stochastically behaving object, analogous to
waves at the beach or leaves blowing in the wind?
• The key distinction is whether B's behavior is best described as maximizing a
performance measure whose value depends on agent A's behavior.
• For example, in chess, the opponent entity B is trying to maximize its performance
measure, which, by the rules of chess, minimizes agent A's performance measure.
• Thus, chess is a competitive multiagent environment.
• In the taxi-driving environment, avoiding collisions maximizes the performance
measure of all agents, so it is a partially cooperative multiagent environment.
• It is also partially competitive because, for example, only one car can occupy a
parking space.
Structure of Agents
• Agent programs

• The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns
an action each time.
• It is a trivial agent program that keeps track of the percept sequence using its own
private data structure, and then uses the sequence to index into a table of actions to
decide what to do.
• The table represents explicitly the agent function that the agent program embodies.
• To build a rational agent in this way, we must construct a table that contains the
appropriate action for every possible percept sequence.
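A minimal Python rendering of this idea (a sketch; representing the table as a dictionary keyed by percept-sequence tuples is an assumption made here):

percepts = []  # the agent's private record of the percept sequence so far

def table_driven_agent(percept, table):
    # Append the new percept, then look up the entire sequence in the table.
    percepts.append(percept)
    return table[tuple(percepts)]

The table needs one entry for every possible percept sequence, which is exactly why the design is impractical, as the disadvantages listed below make clear.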
Structure of Agents
• Table based agent
• In table based agent the action is looked up from a table based on information about
the agent’s percepts.
• A table is a simple way to specify a mapping from percepts to actions.
• The mapping may instead be implicitly defined by a program: it may be implemented
by a rule-based system, by a neural network, or by a procedure.
• There are several disadvantages to a table based system.
– The tables may become very large.
– Learning a table may take a very long time, especially if the table is large.
– Such systems usually have little autonomy, as all actions are pre-determined.
• Four basic kinds of agent program that embody the principles underlying almost
all intelligent systems:
– Simple reflex agents;
– Model-based reflex agents;
– Goal-based agents; and
– Utility-based agents.
Types of Agents
● Simple reflex agents
● Model-based reflex agents
● Goal-based agents
● Utility-based agents
● Learning agents
Simple Reflex agent:
● They are the simplest agents.
● They take decisions on the basis of the current percepts and ignore the rest of the
percept history.
● These agents only succeed in a fully observable environment.
● The simple reflex agent works on the condition-action rule, which maps the current
state to an action, such as a room-cleaner agent that works only if there is dirt in the
room.
● A reflex agent could be, for example, a home thermostat that knows to start heating or
cooling your house when a certain temperature is reached (a code sketch of this follows
below).
● For example, if a Mars lander found a rock in a specific place that it needed to collect,
it would collect it; if it was a simple reflex agent and it found the same rock in a
different place, it would still pick it up, as it does not take into account that it has
already picked one up.
● Example: the vacuum agent whose agent function was tabulated earlier is a simple reflex
agent, as its decision is based only on the current location and on whether that location
contains dirt.
https://www.doc.ic.ac.uk/project/examples/2005/163/g05163
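The thermostat mentioned above can be written as a single condition-action rule in Python; the setpoint and tolerance values here are illustrative assumptions:

def thermostat_agent(temperature, setpoint=21.0, tolerance=1.0):
    # Condition-action rule: map the current percept (a temperature reading)
    # directly to an action; no percept history is kept.
    if temperature < setpoint - tolerance:
        return 'Heat'
    if temperature > setpoint + tolerance:
        return 'Cool'
    return 'Off'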
● Problems with the simple reflex agent design approach:
● They have very limited intelligence.
● They do not have knowledge of non-perceptual parts of the current state.
● Their condition-action tables are mostly too big to generate and to store.
● They are not adaptive to changes in the environment.
● Model-based reflex agent
● The Model-based agent can work in
a partially observable environment,
and track the situation.
● A model-based agent has two
important factors:
● Model: It is knowledge about "how
things happen in the world," so it is
called a Model-based agent.
● Internal State: It is a representation
of the current state based on
percept history.
● These agents have the model,
"which is knowledge of the world"
and based on the model they
perform actions.
● Updating the agent state requires
information about:
● How the world evolves
● How the agent's action affects the world.
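A structural skeleton of this update loop (a sketch only; the model and the condition-action rules are assumed to be supplied as functions):

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.state = None        # internal state: best guess of the world state
        self.last_action = None
        self.model = model       # model(state, last_action, percept) -> new state
        self.rules = rules       # rules(state) -> action

    def step(self, percept):
        # Update the internal state using how the world evolves and how the
        # agent's last action affected it, then apply the condition-action rules.
        self.state = self.model(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action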
● Goal-based agents
● The knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
● The agent needs to know its goal, which describes desirable situations.
● Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
● They choose an action so that they can achieve the goal.
● These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.
● Google's Waymo driverless cars are good examples of a goal-based agent when they are
programmed with an end destination, or goal, in mind.
● If our Mars lander needed to get up a hill, the agent can update its knowledge of how
much power to put into the wheels to gain certain speeds; through this, all relevant
behaviours will automatically follow from the new knowledge on moving.
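The "searching" mentioned above can be sketched as a breadth-first search for an action sequence that reaches a goal state (an illustrative sketch; goal_test and successors are assumed callables, and states are assumed hashable):

from collections import deque

def plan_to_goal(start, goal_test, successors):
    # Breadth-first search over states; returns a list of actions or None.
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no action sequence reaches the goal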
● Utility-based agents
● These agents are similar to the goal-based agent but provide an extra component of
utility measurement, which makes them different by providing a measure of success
at a given state.
● Utility-based agents act based not only on goals but also on the best way to achieve
the goal.
● The utility-based agent is useful when there are multiple possible alternatives and an
agent has to choose the best action to perform.
● The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
● For example, consider our Mars lander on the surface of Mars with an obstacle in its
way. In a goal-based agent it is uncertain which path will be taken by the agent, and
some are clearly not as efficient as others; in a utility-based agent, the best path will
have the best output from the utility function, and that path will be chosen.
(Figures: architectures of the simple reflex, model-based reflex, goal-based, and utility-based agents)
• Example: a utility function maps a state (or a sequence of states) onto a real number,
which describes the associated degree of happiness.
• A complete specification of the utility function allows rational decisions in two
kinds of cases where goals are inadequate.
• First, when there are conflicting goals, only some of which can be achieved (for
example, speed and safety), the utility function specifies the appropriate trade off.
• Second, when there are several goals that the agent can aim for, none of which
can be achieved with certainty, utility provides a way in which the likelihood of
success can be weighed up against the importance of the goals
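In code, the extra component is simply a state-to-number function that the agent maximizes over the predicted outcomes of its actions (a sketch; actions, result, and utility are assumed callables):

def utility_based_choice(state, actions, result, utility):
    # Predict the outcome of each available action and pick the action
    # whose resulting state has the highest utility.
    return max(actions(state), key=lambda a: utility(result(state, a)))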
Learning agents

• The learning element is responsible for making improvements, and the performance
element is responsible for selecting external actions.
• The performance element takes in percepts and decides on actions.
• The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the future.
• The design of the learning element depends very much on the design of the
performance element.
• Problem generator is responsible for suggesting actions that will lead to new and
informative experiences
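A purely structural skeleton showing how these four components fit together (every component here is an assumed callable, not a prescribed interface):

class LearningAgent:
    def __init__(self, performance, learner, critic, generator):
        self.performance = performance  # percept -> action (selects external actions)
        self.learner = learner          # (performance, feedback) -> improved performance
        self.critic = critic            # percept -> feedback against a fixed standard
        self.generator = generator      # percept -> exploratory action, or None

    def step(self, percept):
        # The critic scores behaviour, the learning element revises the
        # performance element, and the problem generator may suggest exploring.
        feedback = self.critic(percept)
        self.performance = self.learner(self.performance, feedback)
        exploratory = self.generator(percept)
        return exploratory if exploratory is not None else self.performance(percept)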
Learning agents
• The critic tells the learning element how well the agent is doing with respect to a
fixed performance standard.
• The critic is necessary because the percepts themselves provide no
indication of the agent's success.
• For example, a chess program could receive a percept indicating that it has
checkmated its opponent, but it needs a performance standard to know
that this is a good thing; the percept itself does not say so.
• It is important that the performance standard be fixed.
• Problem generator is responsible for suggesting actions that will lead to new and
informative experiences
• For example, for a taxi-driving agent, the external performance standard must inform the
agent that the loss of tips is a negative contribution to its overall performance; then the
agent might be able to learn that violent maneuvers do not contribute to its own utility.
• In a sense, the performance standard distinguishes part of the incoming percept as a
reward (or penalty) that provides direct feedback on the quality of the agent's
behaviour.
(Figures: simple reflex, model-based reflex, goal-based, utility-based, and learning agents)
The major points to recall are as follows:
• An agent is something that perceives and acts in an environment. The
agent function for an agent specifies the action taken by the agent in
response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment.
• A rational agent acts so as to maximize the expected value of the
performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure,
the external environment, the actuators, and the sensors.
• In designing an agent, the first step must always be to specify the task
environment as fully as possible. Task environments vary along several
significant dimensions. They can be fully or partially observable,
deterministic or stochastic, episodic or sequential, static or dynamic,
discrete or continuous, and single-agent or multiagent.
The major points to recall are as follows:
• The agent program implements the agent function. There exists a
variety of basic agent-program designs, reflecting the kind of information
made explicit and used in the decision process. The designs vary in
efficiency, compactness, and flexibility. The appropriate design of the
agent program depends on the nature of the environment.
• Simple reflex agents respond directly to percepts, whereas
model-based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept.
Goal-based agents act to achieve their goals, and utility-based agents try
to maximize their own expected "happiness."
• All agents can improve their performance through learning.
Problem-solving agent

• A simple problem-solving agent first formulates a goal and a problem,
• searches for a sequence of actions that would solve the problem,
• and then executes the actions one at a time.
• When this is complete, it formulates another goal and starts over.
• While it is executing the sequence it ignores its percepts: it assumes that the
solution it has found will always work.
• Its phases are: goal formulation → problem formulation → search algorithm → execution phase.
Problem-solving Agent.
• Problem-solving agents decide what to do by finding sequences of actions that
lead to desirable states.
• Goal formulation, based on the current situation and the agent's performance
measure, is the first step in problem solving.
– Goals help organize behaviour by limiting the objectives that the agent is
trying to achieve.
• Problem formulation is the process of deciding what actions and states to
consider, given a goal
• An agent with several immediate options of unknown value can decide what to
do by just examining different possible sequences of actions that lead to states of
known value, and then choosing the best sequence
• This process of looking for such a sequence is called search.
• Search algorithm takes a problem as input and returns a solution in the form of
an action sequence.
• Execution Phase: Once a solution is found, the actions it recommends can be
carried out.
Example: Romania

• On holiday in Romania; currently in Arad.


• Flight leaves tomorrow from Bucharest

• Formulate goal:
• be in Bucharest

• Formulate problem:
– states: various cities
– actions: drive between cities

• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Well-defined problems and solutions

• A problem can be defined formally by four components:


1. The initial state that the agent starts in
2. Description of the possible actions available to the agent. The most common
formulation uses a successor function.
– Given a particular state x, SUCCESSOR-FN(x)
– returns a set of (action, successor) ordered pairs,
– where each action is one of the legal actions in state x
– and each successor is a state that can be reached from x by applying the action.
– Together, the initial state and successor function implicitly define the state
space of the problem-the set of all states reachable from the initial state.
– The state space forms a graph in which the nodes are states and the arcs
between nodes are actions.
3. The goal test, which determines whether a given state is a goal state.
– Sometimes there is an explicit set of possible goal states, and the test simply
checks whether the given state is one of them.
Well-defined problems and solutions

• A problem can be defined formally by four components:


4. A path cost function that assigns a numeric cost to each path. The
problem-solving agent chooses a cost function that reflects its own performance
measure
– The step cost of taking action a to go from state x to state y is denoted by c(x, a, y).
• The preceding 4 elements define a problem and can be gathered together into as
single data structure that is given as input to a problem-solving algorithm.
• A solution to a problem is a path from the initial state to a goal state.
• Solution quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
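The four components map naturally onto a small class (a sketch of one possible representation, not code from the references):

class Problem:
    def __init__(self, initial, successor_fn, goal_test, step_cost):
        self.initial = initial            # the state the agent starts in
        self.successor_fn = successor_fn  # state -> set of (action, successor) pairs
        self.goal_test = goal_test        # state -> True if the state is a goal
        self.step_cost = step_cost        # (x, action, y) -> numeric cost c(x, a, y)

    def path_cost(self, states, actions):
        # Total cost of a path s0 -a1-> s1 -a2-> ...: the solution-quality measure.
        return sum(self.step_cost(x, a, y)
                   for x, a, y in zip(states, actions, states[1:]))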
Well-defined problems and solutions

• A problem can be defined formally by four components:

Initial State: the initial state for our agent in Romania might be described as In(Arad).
Actions: from the state In(Arad), the successor function for the Romania problem would
return {(Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind))}.
State Space: the map of Romania, with the roads as paths.
Goal: the agent's goal in Romania is the singleton set {In(Bucharest)}.
Path Cost: for the agent trying to get to Bucharest, time is of the essence, so the cost
of a path might be its length in kilometers.
Problem Solving Approaches
• The problem-solving approach has been applied to a vast array of task
environments.
1) A toy problem is intended to illustrate or exercise various problem-solving
methods.
– It can be given a concise, exact description.
– This means that it can be used easily by different researchers to compare the
performance of algorithms.
– Ex. Vacuum world problem, 8-puzzle problem, 8-queens problem
2) A real-world problem is one whose solutions people actually care about.
• They tend not to have a single agreed-upon description, but we will attempt to
give the general flavor of their formulations.
• Ex.
– Route finding problem, Touring problem, Travelling Salesperson problem
– VLSI Layout problem, Robot Navigation
– Automatic assembly sequencing, Protein design problem
– Internet searching problem
(Images: Rubik's Cube and Sudoku, two further toy problems)
Problem Solving Approaches
• Example: the vacuum agent.
• This can be formulated as a problem as follows:
• States: The agent is in one of two locations, each of which might or might not
contain dirt. Thus there are 2 × 2² = 8 possible world states.
• Initial state: Any state can be designated as the initial state.
• Successor function: This generates the legal states that result from trying the three
actions (Left, Right, and Suck).
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The state space for the vacuum world. Arcs denote actions: L = Left, R = Right, S =
Suck.

States: integer dirt and robot location


Actions: Left, Right, Suck
goal test: no dirt at all locations
path cost: 1 per action
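A sketch of this formulation in Python, encoding a state as (robot location, set of dirty squares); the encoding is an assumption made here for illustration:

def vacuum_successors(state):
    # state = (location, dirty): location is 'A' or 'B', dirty is a frozenset
    # of the squares that still contain dirt (8 states in total).
    location, dirty = state
    return {('Left',  ('A', dirty)),
            ('Right', ('B', dirty)),
            ('Suck',  (location, dirty - {location}))}

def vacuum_goal_test(state):
    return not state[1]  # goal: no dirt at any location

With these two functions, a generic search (such as the breadth-first sketch shown earlier) can solve the problem, e.g. starting from the state ('A', frozenset({'A', 'B'})).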
8 Puzzle problem

● State Space Search: the number of states through which the problem can pass until it
reaches the goal state.
● A problem is specified as {S, A, Action(s), Result(s,a), Cost(s,a)}:
● S: the various states, namely start, goal, and intermediate states
● A: the set of all possible actions: up, down, left, right movements in the puzzle
● Precise representation of the 8-puzzle: a 3 × 3 board has 9 spaces, with 8 numbered
tiles and 1 empty space.
● As intermediate states are created they are compared with the goal state, and if they
match, the search is stopped.
● Action(s): the selected action out of the available moves (up, down, left, or right)
● Result(s,a): the resulting state after the action is completed
● Cost(s,a): the cost of moving to the next state
8 Puzzle problem

The 8-puzzle consists of a 3 x 3 board with eight numbered tiles and a blank space.
A tile adjacent to the blank space can slide into the space. The object is to reach a
specified goal state, such as the one shown on the right of the figure.
The standard formulation is as follows:
• States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Successor function: This generates the legal states that result from trying the
four actions (blank moves Left, Right, Up, or Down).
• Goal test: This checks whether the state matches the goal configuration
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Summary:
– States: locations of tiles
– Actions/operators: move blank left, right, up, or down
– Goal test: goal state (given)
– Path cost: 1 per move
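A sketch of the successor function under this formulation is given below; representing a state as a flat tuple with 0 for the blank is our assumption.

# 8-puzzle successor sketch (assumed encoding): a state is a tuple of 9
# entries read row by row, with 0 standing for the blank.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}  # index shift of the blank

def puzzle_successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = {}
    for action, delta in MOVES.items():
        if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
           or (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue  # the blank would leave the board
        s = list(state)
        s[blank], s[blank + delta] = s[blank + delta], s[blank]  # slide the tile
        result[action] = tuple(s)
    return result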

• The 8-puzzle belongs to the family of sliding-block puzzles, which are often used
as test problems for new search algorithms in AI.
• This general class is known to be NP-complete (NP: nondeterministic polynomial time, a complexity class used to classify decision problems).
• In the state space of the (n^2 - 1)-puzzle, only half of the states are reachable from any given state.
• The 8-puzzle has 9!/2 = 181,440 reachable states (out of 9! total states) and is easily solved.
• The 15-puzzle (on a 4 x 4 board) has around 1.3 trillion states, and random
instances can be solved optimally in a few milliseconds by the best search
algorithms.
• The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances
are still quite difficult to solve optimally with current machines and algorithms.
Example 1: from the initial state to the final (goal) state in 5 moves.

Initial state: { (1 2 3) (4 8 -) (7 6 5) }
Goal state: { (1 2 3) (4 5 6) (7 8 -) }

Move 1: { (1 2 3) (4 8 5) (7 6 -) }
Move 2: { (1 2 3) (4 8 5) (7 - 6) }
Move 3: { (1 2 3) (4 - 5) (7 8 6) }
Move 4: { (1 2 3) (4 5 -) (7 8 6) }
Move 5: { (1 2 3) (4 5 6) (7 8 -) } -> goal reached

Path cost is 5, i.e., 5 moves are required to achieve the final goal state.
Example 2: a different initial state, solved in 6 moves.

Initial state: { (1 2 3) (4 - 8) (7 6 5) }
Goal state: { (1 2 3) (4 5 6) (7 8 -) }

Move 1: { (1 2 3) (4 8 -) (7 6 5) }
Move 2: { (1 2 3) (4 8 5) (7 6 -) }
Move 3: { (1 2 3) (4 8 5) (7 - 6) }
Move 4: { (1 2 3) (4 - 5) (7 8 6) }
Move 5: { (1 2 3) (4 5 -) (7 8 6) }
Move 6: { (1 2 3) (4 5 6) (7 8 -) } -> goal reached

Path cost is 6, i.e., 6 moves are required to achieve the final goal state.
Example 3: the same initial state as Example 2, solved by an alternative 6-move sequence.

Initial state: { (1 2 3) (4 - 8) (7 6 5) }
Goal state: { (1 2 3) (4 5 6) (7 8 -) }

Move 1: { (1 2 3) (4 6 8) (7 - 5) }
Move 2: { (1 2 3) (4 6 8) (7 5 -) }
Move 3: { (1 2 3) (4 6 -) (7 5 8) }
Move 4: { (1 2 3) (4 - 6) (7 5 8) }
Move 5: { (1 2 3) (4 5 6) (7 - 8) }
Move 6: { (1 2 3) (4 5 6) (7 8 -) } -> goal reached

Path cost is 6, i.e., 6 moves are required to achieve the final goal state.
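Using the puzzle_successors sketch given earlier, Example 1 can be replayed and checked programmatically; note that the action names describe the blank's movement, which is an assumption of that encoding.

# Replaying Example 1 (path cost 5) with the sketch above.
state = (1, 2, 3, 4, 8, 0, 7, 6, 5)        # initial state, 0 is the blank
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
for action in ["Down", "Left", "Up", "Right", "Down"]:  # 5 moves = path cost 5
    state = puzzle_successors(state)[action]
assert state == goal  # goal reached after exactly 5 moves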
8 Queens Problem
• The goal of the 8-queens problem is to place
eight queens on a chessboard (8 by 8) such
that no queen attacks any other.
• The figure shows almost a solution of the 8-queens problem, except that the queen in the first column is on the same diagonal as the queen in the last column.

• This problem can be solved by searching for a solution.


• The initial state is given by the empty chess board.
• Placing a queen on the board represents an action in the search problem.
• A goal state is a configuration where none of the queens attacks any of the others.
Note that every goal state is reached after exactly 8 actions.
8 Queens Problem

• There are efficient special-purpose algorithms to solve n-queens problems.


• It is nevertheless an interesting test problem for search algorithms.
• There are two main kinds of formulation.
– An incremental formulation involves operators that augment the state
description, starting with an empty state; for the 8-queens problem, this means
that each action adds a queen to the state
– A complete-state formulation starts with all 8 queens on the board and moves
them around.
• In either case, the path cost is of no interest because only the final state counts
• Incremental formulation :
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Successor function: Add a queen to any empty square.
• Goal test: 8 queens are on the board, none attacked.
4-Queens Problem: Attacking Mode
[Board diagrams: 4-queens configurations in which the queens attack one another.]
Place the queens such that no queen shares a row, column, or diagonal with another queen.
[Board diagrams: two sample solutions of the 8-queens problem, showing placements of queens Q1-Q8.]
The 8-queens problem has 12 unique solutions (92 solutions in total, counting rotations and reflections).
8-Queens Problem

• In this formulation, we have 64 x 63 x ... x 57, approximately 1.8 x 10^14, possible sequences to investigate.
• A better formulation would prohibit placing a queen in any square that is already
attacked:
• States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another.
• Successor function: Add a queen to any square in the leftmost empty column such
that it is not attacked by any other queen.
• This formulation reduces the 8-queens state space from about 1.8 x 10^14 sequences to just 2,057, and solutions are easy to find.
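A sketch of this improved formulation, plus simple backtracking to extract a solution, is shown below; the encoding (a tuple of row numbers, one per filled column from the left) and the function names are our assumptions.

# Improved incremental 8-queens sketch (assumed encoding).
def attacked(rows, row):
    # True if a queen placed in the next column at this row would be attacked.
    col = len(rows)
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(rows))

def queen_successors(rows):
    # Add a queen to any non-attacked square of the leftmost empty column.
    return [rows + (row,) for row in range(8) if not attacked(rows, row)]

def solve_queens(rows=()):
    if len(rows) == 8:          # goal test: 8 queens placed, none attacked
        return rows
    for nxt in queen_successors(rows):
        solution = solve_queens(nxt)
        if solution:
            return solution
    return None

print(solve_queens())  # e.g. (0, 4, 7, 5, 2, 6, 1, 3): row of the queen in each column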
Water Jug Problem
• Given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring mark
on it.
• There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
• The state space for this problem can be described as the set of ordered pairs of
integers (x,y)
• where X represents the quantity of water in the 4-gallon jug (X = 0, 1, 2, 3, 4)
• and Y represents the quantity of water in the 3-gallon jug (Y = 0, 1, 2, 3)
• Start State: (0,0)
• Goal State: (2,0)

Water Jug Problem: Production Rules
● Generate production rules for the water jug problem:

Rule 1: (X,Y | X<4) -> (4,Y)  {Fill the 4-gallon jug}
Rule 2: (X,Y | Y<3) -> (X,3)  {Fill the 3-gallon jug}
Rule 3: (X,Y | X>0) -> (0,Y)  {Empty the 4-gallon jug}
Rule 4: (X,Y | Y>0) -> (X,0)  {Empty the 3-gallon jug}
Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X))  {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Rule 6: (X,Y | X+Y>=3 ^ X>0) -> (X-(3-Y), 3)  {Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full}
Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y, 0)  {Pour all the water from the 3-gallon jug into the 4-gallon jug}
Rule 8: (X,Y | X+Y<=3 ^ X>0) -> (0, X+Y)  {Pour all the water from the 4-gallon jug into the 3-gallon jug}
Rule 9: (0,2) -> (2,0)  {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
http://aimaterials.blogspot.com/p/blog-page_18.html
Initialization:
Start state: (0,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (0,3).

Iteration 1:
Current state: (0,3)
Apply Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y,0) {Pour all the water from the 3-gallon jug into the 4-gallon jug}
Now the state is (3,0).

Iteration 2:
Current state: (3,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (3,3).

Iteration 3:
Current state: (3,3)
Apply Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X)) {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Now the state is (4,2).

Iteration 4:
Current state: (4,2)
Apply Rule 3: (X,Y | X>0) -> (0,Y) {Empty the 4-gallon jug}
Now the state is (0,2).

Iteration 5:
Current state: (0,2)
Apply Rule 9: (0,2) -> (2,0) {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
Now the state is (2,0).

Goal achieved.
State space tree: see the diagram in the handout linked below.
https://www.eecis.udel.edu/~mccoy/courses/cisc4-681.10f/lec-materials/handouts/search-water-jug-handout.pdf
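Since the rules and trace above define a small state space, a breadth-first solver can be sketched as follows; this implementation (with the fill/empty/pour rules collapsed into one successor function) is our own assumption, not taken from the linked handout.

# Water jug BFS sketch (assumed implementation). State = (x, y) with
# x gallons in the 4-gallon jug and y gallons in the 3-gallon jug.
from collections import deque

def jug_successors(x, y):
    return [
        (4, y), (x, 3), (0, y), (x, 0),      # Rules 1-4: fill / empty a jug
        (min(4, x + y), max(0, x + y - 4)),  # pour 3-gal into 4-gal (Rules 5, 7)
        (max(0, x + y - 3), min(3, x + y)),  # pour 4-gal into 3-gal (Rules 6, 8)
    ]

def solve_jugs(start=(0, 0), goal=(2, 0)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:                    # reconstruct the path to the goal
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in jug_successors(*state):
            if nxt not in parent:            # avoid revisiting states
                parent[nxt] = state
                frontier.append(nxt)

print(solve_jugs())  # one shortest rule sequence from (0,0) to (2,0)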
Real World Problems: Route Finding Problem
• Route-finding algorithms are used in a variety of applications, such as routing in
computer networks, military operations planning, and airline travel planning
systems.
• These problems are typically complex to specify.
• Example of an airline travel problem :
• States: Each is represented by a location (e.g., an airport) and the current time.
• Initial state: This is specified by the problem.
• Successor function: This returns the states resulting from taking any scheduled
flight (perhaps further specified by seat class and location), leaving later than the
current time plus the within-airport transit time, from the current airport to another.
• Goal test: Are we at the destination by some prespecified time?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer
mileage awards, and so on.
Real World Problems: Route Finding Problem
• Commercial travel advice systems use a problem formulation of this kind, with
many additional complications to handle the fare structures that airlines impose.
• Any seasoned traveller knows, however, that not all air travel goes according to
plan.
• A really good system should include contingency plans, such as backup reservations on alternate flights, to the extent that these are justified by the cost and likelihood of failure of the original plan.
Real World Problems: Travelling Salesperson Problem

• Touring problems are closely related to route-finding problems, with a small difference.
• In Route finding, the actions correspond to trips between adjacent cities
• The Traveling Salesperson Problem (TSP) is a touring problem in which each
city must be visited exactly once.
• The aim is to find the shortest tour.
• These algorithms are used for
● planning trips for traveling salespersons,
● planning movements of automatic circuit-board drills
● Stocking machines on shop floors.
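For very small instances, the shortest tour can be found by brute-force enumeration, as sketched below; the four-city distance table is an invented example, not data from the references.

# Brute-force TSP sketch (assumed data): enumerate every tour over a
# symmetric distance table and keep the shortest. Feasible only for small n.
from itertools import permutations

dist = {("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
        ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30}

def d(u, v):
    return dist.get((u, v)) or dist[(v, u)]  # symmetric lookup

def shortest_tour(cities):
    start, rest = cities[0], cities[1:]
    best = None
    for perm in permutations(rest):          # each city visited exactly once
        tour = (start,) + perm + (start,)
        length = sum(d(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

print(shortest_tour(("A", "B", "C", "D")))  # (80, ('A', 'B', 'D', 'C', 'A'))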
Search Solutions
• The toy and the real world problems can be solved by a search through the state
space (successive configurations or states of an instance are considered, with the
goal of finding a goal state with a desired property).

• Search techniques build an explicit search tree that is generated by the initial state and the successor function, which together define the state space.

• A search graph is used rather than a search tree when the same state can be reached from multiple paths.

• The root of the search tree is a search node corresponding to the initial state


Search Solutions
• Example : finding a route from Arad to Bucharest.
• Initial State :In Arad
• The first step is to test whether this is a goal state.
• It is not, so we must consider other states.
• This is done by expanding the current state, i.e., applying the successor function to the current state, thereby generating a new set of states.
• In this case, we get three new states: In(Sibiu), In(Timisoara), and In(Zerind).
• Each time, we check whether the current state is a goal state; if not, we expand it.
• The choice of which state to expand is determined by the search strategy, using the general tree-search algorithm (sketched below).
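A skeletal rendering of that general tree-search scheme is sketched below (our own Python rendering; with a FIFO fringe the strategy is breadth-first). It reuses the successor_fn and goal_test sketched earlier for the Romania problem.

# General tree-search sketch (assumed rendering): the search strategy is
# determined entirely by the order in which fringe states are removed.
def tree_search(initial_state, successor_fn, goal_test):
    fringe = [initial_state]                # generated but not yet expanded
    while fringe:
        state = fringe.pop(0)               # FIFO choice -> breadth-first
        if goal_test(state):
            return state
        fringe.extend(result for _, result in successor_fn(state))  # expand
    return None                             # failure: fringe exhausted

print(tree_search("Arad", successor_fn, goal_test))  # 'Bucharest'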
Search Solutions
• It is important to distinguish between the state space and the search tree.
• For the route finding problem, there are only 20 states in the state space, one for
each city.
• But there are an infinite number of paths in this state space, so the search tree
has an infinite number of nodes.
• For example, the three paths Arad-Sibiu, Arad-Sibiu-Arad, Arad-Sibiu-Arad-Sibiu
are the first three of an infinite sequence of paths.
• A good search algorithm avoids such repeated paths
• A node is a data structure with five components:
• STATE: the state in the state space to which the node corresponds;
• PARENT-NODE: the node in the search tree that generated this node;
• ACTION: the action that was applied to the parent to generate the node;
• PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and
• DEPTH: the number of steps along the path from the initial state.
• A node is a bookkeeping data structure used to represent the search tree.
• A state corresponds to a configuration of the world
Search Solutions
• Nodes are on particular paths, as defined by PARENT-NODE pointers, whereas
states are not.
• Two different nodes can contain the same world state, if that state is generated via
two different search paths.
• The collection of nodes that have been generated but not yet expanded is called the fringe.
• Each element of the fringe is a leaf node, i.e., a node with no successors in the tree.
• The collection of nodes is implemented as a queue
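The node structure described above can be sketched as a small Python class; this rendering is an assumption.

# Node sketch (assumed rendering) with the five components listed above.
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state        # STATE: the configuration of the world
        self.parent = parent      # PARENT-NODE: the node that generated this one
        self.action = action      # ACTION applied to the parent
        self.path_cost = (parent.path_cost if parent else 0) + step_cost  # g(n)
        self.depth = parent.depth + 1 if parent else 0  # steps from the root

    def path(self):
        # Follow PARENT-NODE pointers back to the root (used to return a solution).
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return states[::-1]

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Go(Sibiu)", step_cost=140)
print(child.path(), child.path_cost, child.depth)  # ['Arad', 'Sibiu'] 140 1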
Search Solutions
• The operations on a queue are as follows:
• MAKE-QUEUE(elements): creates a queue with the given element(s).
• EMPTY(queue): returns true only if there are no more elements in the queue.
• FIRST(queue): returns the first element of the queue.
• REMOVE-FIRST(queue): returns FIRST(queue) and removes it from the queue.
• INSERT(element, queue): inserts an element into the queue and returns the resulting queue.
• INSERT-ALL(elements, queue): inserts a set of elements into the queue and returns the resulting queue.
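These operations map naturally onto Python's collections.deque, as sketched below; the mapping is our assumption.

# Queue-operation sketch (assumed mapping onto collections.deque).
from collections import deque

def make_queue(*elements):
    return deque(elements)

def empty(queue):
    return len(queue) == 0

def first(queue):
    return queue[0]

def remove_first(queue):
    return queue.popleft()        # returns first(queue) and removes it

def insert(element, queue):
    queue.append(element)         # FIFO insert; other strategies differ here
    return queue

def insert_all(elements, queue):
    queue.extend(elements)
    return queue

q = make_queue("A")
insert_all(["B", "C"], q)
print(remove_first(q), list(q))   # A ['B', 'C']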

Performance measure for Problem Solving

• Measuring problem-solving performance:


• The output of a problem-solving algorithm is either failure or a solution.
• The algorithm's performance can be evaluated in 4 ways:
– Completeness: Is the algorithm guaranteed to find a solution when there is one?
– Optimality: Does the strategy find the optimal solution?
– Time complexity: How long does it take to find a solution?
– Space complexity: How much memory is needed to perform the search?

Performance measure for Problem Solving

• Measuring problem-solving performance:


• Time and space complexity are always considered with respect to some
measure of the problem difficulty
• The measure is the size of the state space graph, because the graph is viewed as an
explicit data structure that is input to the search program.
• In AI, where the graph is represented implicitly by the initial state and successor
function and is frequently infinite, complexity is expressed in terms of three
quantities:
– b, the branching factor or maximum number of successors of any node;
– d, the depth of the shallowest goal node;
– m, the maximum length of any path in the state space.
• Search cost: used to assess the effectiveness of a search algorithm; it depends on the time complexity and the memory usage:
– time, in terms of the number of nodes generated during the search;
– space, in terms of the maximum number of nodes stored in memory.
• Total cost: combines the search cost and the path cost of the solution found.
Search Algorithms

Uninformed (blind) search algorithms:
1. Breadth-first search
2. Depth-first search
– Backtracking search
– Depth-limited search
– Iterative deepening depth-first search
3. Bidirectional search

Informed search algorithms:
1. Best-first search
2. A* search
3. Memory-bounded heuristic search
– Iterative deepening A* (IDA*)
– Recursive best-first search

Uninformed (blind) search is a searching technique that has no additional information about the distance from the current state to the goal.
Informed search is a technique that does have additional information: an estimate of the distance from the current state to the goal.
ChatGPT by Dr. Yogesh Harbhau Kulkarni:
https://www.youtube.com/watch?v=ZX-vtEGoP7M
