Unit 1 Notes

UNIT-1 AI

Introduction: AI problems, foundations of AI, and history of AI; intelligent agents: agents and environments, the concept of rationality, the nature of environments, structure of agents, problem-solving agents, problem formulation.

Artificial Intelligence:
• Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that
historically only a human could do, such as reasoning, making decisions, or solving problems.
• AI is one of the newest fields in science and engineering. Work started in earnest soon after World
War II, and the name itself was coined in 1956.
• AI currently encompasses a huge variety of subfields, ranging from the general (learning and
perception) to the specific, such as playing chess, proving mathematical theorems, writing poetry,
driving a car on a crowded street, and diagnosing diseases.
AI is relevant to any intellectual task; it is truly a universal field.
More Formal Definition of AI
AI is a branch of computer science which is concerned with the study and creation of computer systems that exhibit
o some form of intelligence, OR
o those characteristics which we associate with intelligence in human behavior.

The definitions of AI:

a) "The exciting new effort to make computers think . . . machines with minds, in the full and literal sense" (Haugeland, 1985); "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . ." (Bellman, 1978)

b) "The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985); "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)

c) "The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990); "The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)

d) "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990); "The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)

The definitions on the top, (a) and (b), are concerned with thought processes and reasoning, whereas those on the bottom, (c) and (d), address behavior. The definitions on the left, (a) and (c), measure success in terms of human performance, and those on the right, (b) and (d), measure against an ideal concept of intelligence called rationality.
Intelligence
 Intelligence is a property of mind that encompasses many related mental abilities, such as the
capabilities to
 reason
 plan
 solve problems
 think abstractly
 comprehend ideas and language and
 Learn
• Intelligence is about how we think: how we perceive, understand, predict, and manipulate a world far larger and more complicated than ourselves.
• Artificial intelligence, or AI: it attempts not just to understand but also to build intelligent
entities.

Categories of Intelligent Systems:


In order to design intelligent systems, it is important to categorize them into four categories (Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that behave like humans
2. Systems that think like humans
3. Systems that think rationally
4. Systems that behave rationally

1. Systems that behave like humans - Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational
definition of intelligence. A computer passes the test if a human interrogator, after posing some
written questions, cannot tell whether the written responses come from a person or from a computer.

The test is for a program to have a conversation (via online typed messages) with an interrogator for
five minutes. The interrogator then has to guess if the conversation is with a program or a person; the
program passes the test if it fools the interrogator 30% of the time. Turing conjectured that, by the year 2000, a computer with a storage of 10^9 units could be programmed well enough to pass the test.

The computer would need to possess the following capabilities:


• natural language processing to enable it to communicate successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.


• Turing’s test deliberately avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for intelligence. However, the
so-called Total Turing Test includes a video signal so that the interrogator can test the subject’s
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
“through the hatch.” To pass the total Turing Test, the computer will need
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
2. Systems that think like humans - Thinking humanly: The cognitive modeling approach
 If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think.
 We need to get inside the actual workings of human minds.
 There are three ways to do this: through introspection—trying to catch our own thoughts as
they go by; through psychological experiments—observing a person in action; and through
brain imaging—observing the brain in action.
 Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program.
 If the program’s input–output behavior matches corresponding human behavior, that is
evidence that some of the program’s mechanisms could also be operating in humans

3. Systems that think rationally - Thinking rationally: The laws of thought approach


The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,” that is,
irrefutable reasoning processes.
His syllogisms (logical arguments) provided patterns for argument structures that always yielded correct conclusions when given correct premises.
For example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
These laws of thought were supposed to govern the operation of the mind; their study initiated the
field called logic.
The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
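The syllogism above ("Socrates is a man; all men are mortal; therefore, Socrates is mortal") can be sketched as a tiny forward-chaining program. This is only an illustrative sketch: the string-based fact format and the function name `forward_chain` are assumptions of the example, not part of the notes.

```python
# Minimal forward chaining over single-argument facts and one-variable rules.
facts = {"man(Socrates)"}
rules = [("man(X)", "mortal(X)")]  # "all men are mortal"

def forward_chain(facts, rules):
    """Apply each rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            predicate = premise.split("(")[0]
            for fact in list(derived):
                if fact.startswith(predicate + "("):
                    argument = fact[fact.index("(") + 1:-1]
                    new_fact = conclusion.replace("X", argument)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))  # contains "mortal(Socrates)"
```

Given correct premises, the derived conclusion is guaranteed correct, which is exactly the appeal of the logicist approach.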


4. Systems that behave rationally - Acting rationally: The rational agent approach


 An agent is just something that acts. Of course, all computer programs do something, but
computer agents are expected to do more: operate autonomously, perceive their environment,
persist over a prolonged time period, adapt to change, and create and pursue goals.
 A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty,
the best expected outcome.
 Making correct inferences is sometimes part of being a rational agent, because one way to act
rationally is to reason logically to the conclusion that a given action will achieve one’s goals
and then to act on that conclusion.
The Foundations of Artificial Intelligence
Foundations of AI are in various fields such as
1) Philosophy
2) Mathematics
3) Economics
4) Neuroscience
5) Psychology
6) Computer engineering
7) Linguistics
1) Philosophy
The final element in the philosophical picture of the mind is the connection between knowledge and
action. This question is vital to AI because intelligence requires action as well as reasoning.

 The AI point of view is that philosophical theories are useful for building human-level artificial systems, providing a basis for designing systems that hold beliefs, reason, and plan.
 A key problem for both AI and philosophy is understanding common sense knowledge and
abilities.


2) Mathematics
 Fundamental ideas of AI required a level of mathematical formalization in three areas:
logic, computation, and probability.
 AI systems use formal logical methods and Boolean logic; analysis of the limits of what can be computed; probability theory and reasoning under uncertainty, which form the basis for most modern approaches to AI; fuzzy logic; and so on.
 Mathematical concepts give the real solution of hypothetical or virtual problems.
 It is about structure, developing principles that remain true even if you make any
alteration in the components.
3) Economics
 In financial economics, there is a wide use of AI in making decisions of trading in
financial securities like stocks based on prediction of their prices.
 It can increase the efficiency and improve the decision-making process by analyzing
large amounts of data.
 AI deployment can increase overall productivity and create new products, leading to job
creation and economic growth.
4) Neuroscience
Neuroscience is the study of the nervous system, particularly the brain. The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness.

 Neuroscience studies are characterized by recording high-dimensional and complex brain data, making the data analysis computationally expensive and time consuming.
 Neuroscience takes advantage of AI techniques and the increasing processing power of modern computers, which has helped improve the understanding of brain behavior.
5) Psychology
 Since psychology is the study of the human mind and its nature, and AI is the branch which deals with intelligence in machines, understanding the intelligence of a machine requires comparing it with human intelligence.
 Artificial intelligence (AI), sometimes known as machine intelligence, refers to the ability of computers to perform human-like tasks, including learning, problem-solving, perception, decision-making, and speech and language understanding.


 AI is closely linked to human psychology.


6) Computer engineering
 Artificial intelligence is the simulation of human intelligence processes by machines,
especially computer systems.
 Specific applications of AI include expert systems, natural language processing and
speech recognition.
 An Artificial Intelligence engineer works with different algorithms, neural networks, and
other tools to advance the field of artificial intelligence.
7) Linguistics
 AI aims at simulating human intelligence by computer. Language is one of the primary
expressions of human intelligence.
 Natural Language Processing (NLP) used to refer to all processes related to analysis of
texts in natural language - imitation of a human language by a machine.
 By tradition, the term Computational Linguistics refers to written language, whereas
Speech Technology is used for the analysis and synthesis of spoken language.

The History of Artificial Intelligence


The gestation (the time between conception and birth) of artificial intelligence (1943–1955)
 The first work that is now generally recognized as AI was done by Warren McCulloch and
Walter Pitts (1943).
 They drew on three sources: knowledge of the basic physiology and function of neurons in
the brain; a formal analysis of propositional logic; and Turing’s theory of computation.
 They proposed a model of artificial neurons in which each neuron is characterized as being
“on” or “off,” in response to stimulation by a sufficient number of neighboring neurons.
 Donald Hebb (1949) demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule is called Hebbian learning.
 Marvin Minsky and Dean Edmonds, two Harvard students, built the first neural network computer, called SNARC, in 1950.
 Alan Turing introduced the Turing Test, machine learning, genetic algorithms, and
reinforcement learning.


The birth of artificial intelligence (1956)


 John McCarthy organized a conference on Machine Intelligence in 1956 and then the field
was known as Artificial Intelligence.
 Two researchers, Allen Newell and Herbert Simon, invented a computer program capable of thinking non-numerically, and thereby (it was claimed) solved the venerable mind–body problem.
Early enthusiasm, great expectations (1952–1969)
 In 1957, the first version of a new program named as General Problem Solver (GPS) was
developed and tested, by Newell and Simon.
 The GPS was capable of solving to some extent the problems requiring common sense.
 GPS was probably the first program to add the “thinking humanly” approach.
 Then, many programs were developed, and McCarthy announced his new development called List Processing Language (LISP) in 1958.
 LISP was adopted as the language of choice by most of AI developers.
 In 1958, John McCarthy made three crucial contributions in MIT lab.
 McCarthy defined the high-level language Lisp, which was to become the dominant AI programming language for the next 30 years.
 With Lisp, McCarthy found that access to scarce (insufficient) and expensive computing resources was also a serious problem. In response, he and others at MIT invented timesharing.
 Also in 1958, McCarthy published a paper entitled Programs with Common Sense, in which he described Advice Taker, a hypothetical program that can be seen as the first complete AI system.
 McCarthy's program was designed to use knowledge to search for solutions to problems.
 Then, Marvin Minsky of MIT demonstrated that computer programs could solve spatial and logical problems when limited to a specific domain.
 Another program, STUDENT, was developed during the late 1960s; it could solve algebra story problems. Fuzzy logic, with its unique ability to make decisions under uncertain conditions, also emerged in this period.
 Also, neural networks were considered as possible ways of achieving artificial intelligence.


 During the same time, the system named SHRDLU (named after a column of keys on a Linotype machine, where letters are arranged in descending order of their frequency in English) was developed by Winograd at the MIT AI laboratory as a part of the micro-worlds project.
 SHRDLU is a program that carried out a simple dialogue with a user in English.
 Frame theory was one of the new methods developed by Minsky during the 1970s for storing structured knowledge to be used by AI systems.
 Another development during the same time was PROLOG (Programmation en Logique, "Programming in Logic"), proposed by Robert Kowalski. PROLOG is a declarative language, unlike a procedural language.
Knowledge-based systems: The key to power (1969-1979)
 The 1970s saw the advent of expert systems, designed and developed to predict the probability of a solution under set conditions.
 Research organizations and the corporate sectors during 1980 started developing AI systems.
 Companies such as Digital Equipment Corporation (DEC) were using XCON, an Expert system designed for DEC's large VAX computers.
 The Expert system R1 (internally called XCON, for eXpert CONfigurer) was a production rule-based system written in OPS5 by John P. McDermott in 1978 to assist in the ordering of DEC's VAX computer systems by automatically selecting the computer system components based on the customer's requirements. DEC stands for Digital Equipment Corporation, and VAX stands for Virtual Address eXtension.
 Then AI work started to be framed from the agent point of view.
 Agents can be used in solving any problem which requires intelligence: understanding of the problem and its situations is embedded in the agent, and the problem is solved.

The State of the Art


The following are the subfields or applications of AI.
• Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft. Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

• Game playing: IBM's Deep Blue became the first computer program to defeat the world
champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an
exhibition match. Kasparov said that he felt a "new kind of intelligence" across the board from
him. The value of IBM's stock increased by $18 billion.

• Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to
perform at the level of an expert physician in several areas of medicine. Heckerman describes a
case where a leading expert on lymph-node pathology scoffs at a program's diagnosis of an
especially difficult case.

• Logistics Planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic
Analysis and Replanning Tool, DART, to do automated logistics planning and scheduling for
transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to
account for starting points, destinations, routes, and conflict resolution among all parameters.
The AI planning techniques allowed a plan to be generated in hours that would have taken weeks
with older methods.

• Robotics: Many surgeons now use robot assistants in microsurgery. HipNav is a system that
uses computer vision techniques to create a three-dimensional model of a patient's internal
anatomy and then uses robotic control to guide the insertion of a hip replacement prosthesis.

• Language understanding and problem solving: PROVERB is a computer program that solves
crossword puzzles better than most humans, using constraints on possible word fillers, a large
database of past puzzles, and a variety of information sources including dictionaries and online
databases such as a list of movies and the actors that appear in them.
Agents and Environments
 An agent is something that perceives and acts in an environment.
 It can also be described as a software entity that conducts operations in the place of users or
programs after sensing the environment.
 An environment is the surrounding of the agent.
 The agent takes input from the environment through sensors and delivers the output to the
environment through actuators.


 The following are the examples of agents:


1) Human-Agent: A human agent has eyes, ears, and other organs which work for sensors
and hand, legs, vocal tract work for actuators.
2) Robotic Agent: A robotic agent can have cameras and infrared range finders for sensors and various motors for actuators.
3) Software Agent: Software agent can have keystrokes, file contents as sensory input and
act on those inputs and display output on the screen.
 A sensor is a device that detects and responds to some type of input from the physical
environment. The output is generally a signal that is converted to human-readable display at
the sensor location or transmitted electronically over a network for reading or further
processing.
 The following are examples of sensors:
1) Temperature Sensor.
2) Proximity Sensor.
3) Accelerometer.
4) IR Sensor (Infrared Sensor)
5) Pressure Sensor.
6) Light Sensor.


7) Ultrasonic Sensor.
8) Smoke sensor
9) Alcohol Sensor.
 An actuator is a device that converts electrical signals into physical events or characteristics.

 The following are the examples of actuators.

1) Electric motors
2) Stepper motors
3) Jackscrews
 The agent function for an agent specifies the action taken by the agent in response to any
percept sequence.
 The performance measure evaluates the behavior of the agent in an environment.
 A rational agent acts so as to maximize the expected value of the performance measure,
given the percept sequence it has seen so far.
 The agent program implements the agent function.
 There exists a variety of basic agent-program designs reflecting the kind of information made
explicit and used in the decision process.
 The designs vary in efficiency, compactness, and flexibility.
 The appropriate design of the agent program depends on the nature of the environment.
 We use the term percept to refer to the agent’s perceptual inputs at any given instant.
 An agent’s percept sequence is the complete history of everything the agent has ever perceived.
 In general, an agent’s choice of action at any given instant can depend on the entire percept
sequence observed to date, but not on anything it hasn’t perceived.
 Mathematically speaking, we say that an agent’s behavior is described by the agent function that
maps any given percept sequence to an action.
 The agent function for an artificial agent will be implemented by an agent program.
 The agent function is an abstract mathematical description; the agent program is a concrete
implementation, running within some physical system.


Percept sequence                            Action
[A, Clean]                                  Right
[A, Dirty]                                  Suck
[B, Clean]                                  Left
[B, Dirty]                                  Suck
[A, Clean], [A, Clean]                      Right
[A, Clean], [A, Dirty]                      Suck
...                                         ...
[A, Clean], [A, Clean], [A, Clean]          Right
[A, Clean], [A, Clean], [A, Dirty]          Suck
...                                         ...

Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Figure 2.2.
Figure 2.2 A vacuum-cleaner world with just two locations, squares A and B.

This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in the figure above.
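The partial table above can be turned directly into a table-driven agent program that looks up the action for the entire percept sequence observed so far. This is a sketch; the dictionary layout and the name `table_driven_agent` are choices of the example, not from the notes.

```python
# Partial agent-function table: percept sequence (tuple of percepts) -> action.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def table_driven_agent(percept_sequence, percept):
    """Append the new percept and look up the action for the whole sequence."""
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence))

seq = []  # the percept sequence observed to date
print(table_driven_agent(seq, ("A", "Clean")))  # Right
print(table_driven_agent(seq, ("A", "Dirty")))  # Suck
```

The lookup key is the full history, which is why the table grows so quickly: a complete table needs one entry for every possible percept sequence, making this design impractical beyond toy worlds.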

Good Behavior: The Concept of Rationality


 A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing.


Rationality
 Rationality depends on four things:
1) The performance measure that defines the criterion of success.
2) The agent’s prior knowledge of the environment.
3) The actions that the agent can perform.
4) The agent’s percept sequence to date.
 This leads to a definition of a rational agent. i.e., For each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.
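The definition above says a rational agent selects the action with the highest expected performance. A minimal sketch of that choice rule follows; the action names and the probability/value numbers are made up for illustration.

```python
# Expected outcomes per action: {action: [(probability, performance_value)]}.
outcomes = {
    "Suck":  [(0.9, 10), (0.1, -1)],   # usually removes dirt, rarely fails
    "Right": [(1.0, 0)],               # just moves; no immediate reward
}

def rational_choice(outcomes):
    """Return the action that maximizes expected performance-measure value."""
    def expected_value(action):
        return sum(p * v for p, v in outcomes[action])
    return max(outcomes, key=expected_value)

print(rational_choice(outcomes))  # Suck  (expected value 8.9 vs. 0.0)
```

Note that the agent maximizes *expected* performance given what it knows; it is not required to know the actual outcome in advance, which is the distinction drawn in the next subsection between rationality and omniscience.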
Omniscience, learning, and autonomy
 An omniscient (perfect) agent knows the actual outcome of its actions and can act
accordingly; but perfection is impossible in reality.
 Rationality is NOT the same as perfection. Rationality maximizes expected performance,
while perfection maximizes actual performance.
 Successful agents split the task of computing the agent function into three different
periods:
 when the agent is being designed, some of the computation is done by its
designers;
 when it is deliberating on its next action, the agent does more computation;
 and as it learns from experience, it does even more computation to decide how
to modify its behavior.

 A rational agent should not only gather information but also learn as much as possible from what it perceives.
 The agent’s initial configuration could reflect some prior knowledge of the environment, but
as the agent gains experience this may be modified and augmented.


 A rational agent should be autonomous—it should learn.


 After sufficient experience of its environment, the behavior of a rational agent can become
effectively independent of its prior knowledge.
 Hence, the incorporation of learning allows one to design a single rational agent that will
succeed in a vast variety of environments.
The Nature of the Environment
Specifying the task environment
 A task environment specification includes the performance measure, the external
environment, the actuators, and the sensors.
 We group all these under the heading of the task environment, called PEAS (Performance,
Environment, Actuators, Sensors).
 In designing an agent, the first step must always be to specify the task environment as fully as possible.
• Let us consider a more complex problem: an automated taxi driver.

Agent Type:           Taxi driver
Performance Measure:  Safe, fast, legal, comfortable trip, maximize profits
Environment:          Roads, other traffic, pedestrians, customers
Actuators:            Steering, accelerator, brake, signal, horn, display
Sensors:              Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Figure 2.4 PEAS description of the task environment for an automated taxi.


• Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear
and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to
other drivers; maximizing safety and passenger comfort; maximizing profits.
• Obviously, some of these goals conflict, so tradeoffs (compromises) will be required.
• The actuators for an automated taxi include those available to a human driver: control over the
engine through the accelerator and control over steering and braking. In addition, it will need output
to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to
communicate with other vehicles, politely or otherwise.
• The basic sensors for the taxi will include one or more controllable video cameras so that it can see
the road; it might augment these with infrared or sonar sensors to detect distances to other cars and
obstacles.
• To avoid speeding, the taxi should have a speedometer, and to control the vehicle properly,
especially on curves, it should have an accelerometer.
• To determine the mechanical state of the vehicle, it will need the usual array of engine, fuel, and
electrical system sensors.
• Like many human drivers, it might want a global positioning system (GPS) so that it doesn’t get
lost. Finally, it will need a keyboard or microphone for the passenger to request a destination.
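The PEAS description of the taxi (Figure 2.4) can be written down as a small data structure, which makes the specification explicit and easy to check. The class name `TaskEnvironment` and the field names are illustrative choices, not from the notes.

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """A PEAS task-environment specification."""
    performance: list  # P: criteria of success
    environment: list  # E: what the agent operates in
    actuators: list    # A: what the agent acts with
    sensors: list      # S: what the agent perceives with

taxi = TaskEnvironment(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
print(taxi.sensors[0])  # cameras
```

Writing the specification first, as the notes recommend, forces the designer to decide exactly what the agent can perceive and do before writing any agent program.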
Software agents
• In contrast, some software agents (or software robots or softbots) exist in rich, unlimited domains.
• Imagine a softbot Web site operator designed to scan Internet news sources and show the interesting
items to its users, while selling advertising space to generate revenue.
• To do well, that operator will need some natural language processing abilities, it will need to learn
what each user and advertiser is interested in, and it will need to change its plans dynamically—for
example, when the connection for one news source goes down or when a new one comes online.
• The Internet is an environment whose complexity rivals that of the physical world and whose
inhabitants include many artificial and human agents.

Properties of Task Environment

• 1. Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.

• 2. Single agent vs. multiagent: If only one agent is involved in an environment and it operates by itself, then such an environment is called a single-agent environment. However, if multiple agents are operating in an environment, then such an environment is called a multiagent environment. The distinction between single-agent and multiagent environments may seem simple enough.

• For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment; chess is a competitive multiagent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.
• 3. Deterministic vs. stochastic: If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic. If the environment is partially observable, however, then it could appear to be stochastic.
• 4. Episodic vs. sequential: In an episodic task environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action. The next episode does not depend on the actions taken in previous episodes, so the choice of action in each episode depends only on the episode itself.
• In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
• 5. Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing. If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.
• 6. Discrete vs. continuous: An environment is said to be discrete if there are a finite number of distinct states, percepts, and actions within it. Continuous environments depend on unknown and rapidly changing data sources; vision systems in drones or self-driving cars operate in continuous environments.
• The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock), and chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem.
• 7. Known vs. unknown: In a known environment, the outcomes for all actions are given. If the environment is unknown, the agent has to learn how it works in order to make good decisions.
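The properties above can be tabulated for the environments discussed in this section. The classifications follow the notes (crossword, chess, taxi driving); the dictionary layout and the helper name `is_hardest_case` are choices of this sketch.

```python
# Task-environment properties for three familiar environments.
environments = {
    "crossword": {"observable": "fully", "agents": "single",
                  "deterministic": True, "static": True, "discrete": True},
    "chess": {"observable": "fully", "agents": "multi",
              "deterministic": True, "static": True, "discrete": True},
    # chess with a clock is semidynamic rather than strictly static
    "taxi driving": {"observable": "partially", "agents": "multi",
                     "deterministic": False, "static": False, "discrete": False},
}

def is_hardest_case(name):
    """True for taxi-style environments: partially observable, stochastic,
    dynamic, and continuous -- the hardest combination for an agent."""
    e = environments[name]
    return (e["observable"] == "partially" and not e["deterministic"]
            and not e["static"] and not e["discrete"])

print(is_hardest_case("taxi driving"))  # True
print(is_hardest_case("chess"))         # False
```

Checking each property in turn, as this helper does, is exactly how one decides which agent-program design the next section's structures are appropriate for.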

The Structure of Agents


 The task of AI is to design an agent program that implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It can be
viewed as:
Agent = Architecture + Agent program
 The following are the three main terms involved in the structure of an AI agent:
1) Architecture: the machinery that the agent program executes on.
2) Agent function: a mapping from percepts to actions.
3) Agent program: an implementation of the agent function. The agent program executes on
the physical architecture to produce the agent function's behavior.
 There are four basic kinds of agent programs:
1) Simple reflex agents
2) Model-based reflex agents
3) Goal-based agents
4) Utility-based agents
1. Simple reflex agents
 The simplest kind of agent is the simple reflex agent. These agents select actions on the basis
of the current percept, ignoring the rest of the percept history.
 It acts according to a rule whose condition matches the current state, as defined by the
percept.
 Example: the vacuum agent is a simple reflex agent whose decision is based only on the
current location and on whether that location contains dirt.

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

 This is the agent program for a simple reflex agent in the two-state vacuum environment; it
implements the agent function tabulated previously.
 The simple reflex agent works on condition–action rules, mapping the current state directly to
an action. A room-cleaner agent, for example, acts only if there is dirt in the room.
 The following is the schematic diagram for a simple reflex agent.

 The following is the simple reflex agent function.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

 The INTERPRET-INPUT function generates an abstracted description of the current state from
the percept, and the RULE-MATCH function returns the first rule in the set of rules that
matches the given state description.
 The agent will work only if the correct decision can be made on the basis of only the current
percept—that is, only if the environment is fully observable.
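As a concrete sketch, the two-location vacuum agent above can be written in a few lines of Python. The percept format (location, status) follows the pseudocode; the string constants are illustrative assumptions.

```python
# A simple reflex agent for the two-square vacuum world: the action depends
# only on the current percept, never on the percept history.

def reflex_vacuum_agent(percept):
    """Select an action from the current percept alone."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because the agent is a pure function of the percept, it cannot notice, for example, that both squares are already clean; that limitation motivates the model-based agent below.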

2. Model based reflex agents


 The most effective way to handle partial observability is for the agent to keep track of the
part of the world it can’t see now.
 That is, the agent should maintain some sort of internal state that depends on the percept
history and thereby reflects at least some of the unobserved aspects of the current state.
 Updating this internal state information as time goes by requires two kinds of knowledge to
be encoded in the agent program.
 First, we need some information about how the world evolves independently of the agent.
 Second, we need some information about how the agent's own actions affect the world.
 These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
 This knowledge about "how the world works", whether implemented in simple Boolean
circuits or in complete scientific theories, is called a model of the world. An agent that uses
such a model is called a model-based agent.
 The following is the schematic diagram for Model based reflex agent.

 The following is the model-based reflex agent function.

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition–action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
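A minimal sketch of this idea in the vacuum world: the internal state remembers the last known status of each square, so the agent can stop once both are clean even though it only ever sees its current square. The state representation and the extra NoOp action are assumptions for illustration.

```python
# A model-based reflex vacuum agent: a closure holds the internal state,
# which is updated from each percept and from the model of what Suck does.

def make_model_based_vacuum_agent():
    state = {"A": None, "B": None}   # last known status per square; None = unknown

    def agent(percept):
        location, status = percept
        state[location] = status                 # fold the percept into the state
        if status == "Dirty":
            state[location] = "Clean"            # model: Suck leaves the square clean
            return "Suck"
        if all(s == "Clean" for s in state.values()):
            return "NoOp"                        # model says nothing is left to do
        return "Right" if location == "A" else "Left"

    return agent

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right (B's status is still unknown)
print(agent(("B", "Clean")))   # NoOp  (model now knows both squares are clean)
```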

3. Goal based agents


 Goal-based agents expand the capabilities of the model-based agent by adding "goal"
information: they choose actions that achieve the goal.
 Knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.
 The agent also needs to know its goal, which describes desirable situations.
 These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.

The above is the schematic diagram of goal based agent.

4. Utility based agents


 These agents are similar to the goal-based agent but add an extra component of utility
measurement, which provides a measure of success at a given state.

 A utility-based agent acts based not only on the goal but also on the best way to achieve it.
This is useful when there are multiple possible alternatives and the agent has to choose the
best action among them.

 A utility function maps a state (or a sequence of states) onto a real number, which describes
the associated degree of happiness. A complete specification of the utility function allows
rational decisions in two kinds of cases where goals are inadequate.

 First, when there are conflicting goals, only some of which can be achieved, the utility
function specifies the appropriate tradeoff.

 Second, when there are several goals that the agent can aim for, none of which can be
achieved with certainty, utility provides a way in which the likelihood of success can be
weighed against the importance of the goals.
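The second case can be sketched by choosing the action with the highest expected utility. The taxi-route scenario, its outcome probabilities, and the utility values below are all invented for illustration.

```python
# Utility-based action selection under uncertainty: weigh each possible
# outcome's utility by its likelihood, then pick the best action.

def expected_utility(action, outcomes, utility):
    """Sum of P(outcome) * U(outcome) for the given action."""
    return sum(p * utility[state] for state, p in outcomes[action].items())

# Invented model: two routes with different risk/reward trade-offs.
utility = {"arrived_fast": 10, "arrived_slow": 4, "stuck_in_traffic": -5}
outcomes = {
    "highway":    {"arrived_fast": 0.6, "stuck_in_traffic": 0.4},
    "back_roads": {"arrived_slow": 0.9, "stuck_in_traffic": 0.1},
}

best = max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))
print(best)  # highway: EU = 0.6*10 + 0.4*(-5) = 4.0, beating 3.1 for back_roads
```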
 The following is the schematic diagram for utility based agent.

Learning Agents
 A learning agent in AI is an agent that can learn from its past experiences; that is, it has
learning capabilities.
 It starts to act with basic knowledge and is then able to act and adapt automatically through
learning.
 A learning agent has four main conceptual components:
 Learning element: responsible for making improvements by learning from the environment.
 Critic: the learning element takes feedback from the critic, which describes how well the agent
is doing with respect to a fixed performance standard.
 Performance element: responsible for selecting external actions.
 Problem generator: responsible for suggesting actions that will lead to new and informative
experiences.
 Hence, learning agents are able to learn, analyze performance, and look for new ways to improve
the performance.
 The performance element takes in percepts and decides on actions. The learning element uses
feedback from the critic on how the agent is doing and determines how the performance element
should be modified to do better in the future.
 The critic is necessary because the percepts themselves provide no indication of the agent's
success.
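The interplay of these components can be sketched on a toy task: the performance element picks actions from a rule table, and the learning element updates that table using the critic's feedback. All names, the feedback convention, and the task itself are illustrative assumptions.

```python
# A toy learning agent: the critic's feedback (a signed score against a fixed
# standard) drives the learning element, which modifies the performance
# element's rule table so it does better in the future.

class LearningAgent:
    def __init__(self):
        self.rules = {}                            # state -> action

    def performance_element(self, state):
        # Selects the external action; "explore" plays the problem generator's
        # role of trying something new when no rule is known yet.
        return self.rules.get(state, "explore")

    def learning_element(self, state, action, feedback):
        # Reinforce actions the critic scored positively.
        if feedback > 0:
            self.rules[state] = action

agent = LearningAgent()
agent.learning_element("dirty", "suck", feedback=+1)   # critic approved "suck"
print(agent.performance_element("dirty"))  # suck
print(agent.performance_element("clean"))  # explore
```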

Problem Solving Agents


• One kind of goal-based agent is the problem-solving agent. Problem-solving agents decide
what to do by finding sequences of actions that lead to desirable states.

• Goals help organize behavior by limiting the objectives that the agent is trying to achieve.

• Goal formulation, based on the current situation and the agent's performance measure, is the first
step in problem solving.

• We will consider a goal to be a set of world states-exactly those states in which the goal is
satisfied. The agent's task is to find out which sequence of actions will get it to a goal state.
Before it can do this, it needs to decide what sorts of actions and states to consider.

• In general, an agent with several immediate options of unknown value can decide what to do by
first examining different possible sequences of actions that lead to states of known value, and
then choosing the best sequence.

• This process of looking for such a sequence is called search. A search algorithm takes a problem
as input and returns a solution in the form of an action sequence. Once a solution is found, the
actions it recommends can be carried out. This is called the execution phase. Thus, we have a
simple "formulate, search, execute" design for the agent.
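The search phase of this design can be sketched with a simple breadth-first search over a small fragment of the Romania map. The use of BFS, the function names, and the action strings are illustrative assumptions, not a specific textbook algorithm.

```python
# "Formulate, search, execute": the search step takes a problem (initial
# state, goal, successor function) and returns a solution as an action list.
from collections import deque

def search(initial, goal, successors):
    """Breadth-first search; returns a list of actions, or None if no path."""
    frontier = deque([(initial, [])])
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, path + [action]))
    return None

# Fragment of the Romania map; actions are "Go(city)".
roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": ["Fagaras"]}
succ = lambda s: [(f"Go({c})", c) for c in roads[s]]

plan = search("Arad", "Bucharest", succ)
print(plan)  # ['Go(Sibiu)', 'Go(Fagaras)', 'Go(Bucharest)']
```

In the execution phase the agent would simply carry out the actions in `plan` one by one.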

 Our agent has now adopted the goal of driving to Bucharest, and is considering where to go
from Arad. There are three roads out of Arad, one toward Sibiu, one to Timisoara, and one to
Zerind.

Well-defined problems and solutions

A problem can be defined formally by four components:

1) initial state

2) actions

3) goal test

4) path cost

The initial state that the agent starts in. For example, the initial state for our agent in Romania
might be described as In(Arad).

A description of the possible actions available to the agent. The most common formulation uses
a successor function.

Given a particular state x, SUCCESSOR-FN(x) returns a set of (action, successor) ordered pairs.

For example, from the state In(Arad), the successor function for the Romania problem would
return:

{ (Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind)) }

Definition: The state space is the set of all states reachable from the initial state.

• The goal test, which determines whether a given state is a goal state.

• Sometimes there is an explicit set of possible goal states, and the test simply checks whether
the given state is one of them.

• The agent's goal in Romania is the singleton set {In(Bucharest)}.

• For example, in chess, the goal is to reach a state called "checkmate," where the
opponent's king is under attack and can't escape.

• A path cost function that assigns a numeric cost to each path.

The problem-solving agent chooses a cost function that reflects its own performance
measure.

For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its
length in kilometres.

• The cost of a path can be described as the sum of the costs of the individual actions along
the path.

• The step cost of taking action a to go from state x to state y is denoted by c(x, a, y).

• The step costs for Romania are shown in Figure 3.2 as route distances.

• A solution to a problem is a path from the initial state to a goal state.

• Solution quality is measured by the path cost function, and an optimal solution has the lowest
path cost among all solutions.

• Solution path: Arad to Sibiu to Rimnicu Vilcea to Pitesti to Bucharest.
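The four components can be bundled into a single problem definition, sketched here for the Romania route-finding problem. The class layout is an illustrative assumption; the distances are the route distances from the map fragment around Arad, and only a fragment is included.

```python
# A problem defined by its four components: initial state, actions
# (successor function), goal test, and path cost (sum of step costs).

class Problem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def successor_fn(self, state):
        """Return (action, successor) pairs, as in SUCCESSOR-FN(x)."""
        return [(f"Go({city})", city) for city in self.graph[state]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        return self.graph[state][result]      # c(x, a, y): road distance in km

# Fragment of the Romania map (route distances in km).
graph = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80}}

romania = Problem("Arad", "Bucharest", graph)
print(romania.successor_fn("Arad"))
print(romania.step_cost("Arad", "Go(Sibiu)", "Sibiu"))  # 140
```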

Example: Path Finding problem


Formulate goal:
– be in Bucharest (Romania)

Formulate problem:
– action: drive between a pair of connected cities (direct road)
– state: be in a city (20 world states)

Find solution:
– sequence of cities leading from the start to the goal state, e.g., Arad, Sibiu, Fagaras, Bucharest

Execution:
– drive from Arad to Bucharest according to the solution

Environment: fully observable (map), deterministic, and the agent knows the effects of each action.

Toy problems are

1.Vacuum cleaner

2.8-puzzle problem

3.8-queens problem

Real-world problems

1) route-finding problem

2) Touring problems

3) traveling salesperson problem

4) VLSI layout

1) Vacuum cleaner

• States: The agent is in one of two locations, each of which might or might not contain dirt.
Thus there are 2 × 2² = 8 possible world states.

• Initial state: Any state can be designated as the initial state.

• Successor function: This generates the legal states that result from trying the three actions
(Left, Right, and Suck). The complete state space is shown in Figure 3.3.

• Goal test: This checks whether all the squares are clean.

• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
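The 2 × 2² = 8 count can be checked by enumerating the states directly. The tuple encoding (location, dirt in A, dirt in B) is an assumption for illustration.

```python
# Enumerate the vacuum world's state space: 2 agent locations times
# 2^2 dirt configurations = 8 states.
from itertools import product

states = [(loc, dirt_a, dirt_b)
          for loc in ("A", "B")
          for dirt_a, dirt_b in product((True, False), repeat=2)]
print(len(states))  # 8

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b          # all squares clean

print(sum(goal_test(s) for s in states))      # 2 goal states (agent in A or B)
```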

2) 8-puzzle problem
• The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3 × 3 board with eight
numbered tiles and a blank space.

• A tile adjacent to the blank space can slide into the space. The object is to reach a specified goal
state

1) States: A state description specifies the location of each of the eight tiles and the blank in one of
the nine squares.

2) Initial state: Any state can be designated as the initial state.

3) Successor function: This generates the legal states that result from trying the four actions (blank
moves Left, Right, Up, or Down).

4) Goal test: This checks whether the state matches the goal configuration shown in Figure 3.4.

5) Path cost: Each step costs 1, so the path cost is the number of steps in the path.
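The successor function for the 8-puzzle can be sketched directly: the four actions move the blank Left, Right, Up, or Down when legal. States are encoded as 9-tuples with 0 as the blank; the encoding is an assumption for illustration.

```python
# 8-puzzle successor function: yield (action, next_state) for each legal
# move of the blank, swapping it with the adjacent tile.

def successors(state):
    i = state.index(0)                 # position of the blank, 0..8
    row, col = divmod(i, 3)
    moves = {"Left": (0, -1), "Right": (0, 1), "Up": (-1, 0), "Down": (1, 0)}
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]    # slide the adjacent tile into the blank
            yield action, tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # blank in the centre: all four moves legal
print([a for a, _ in successors(start)])  # ['Left', 'Right', 'Up', 'Down']
```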

8-queens problem
• The goal of the 8-queens problem is to place eight queens on a chessboard such
that no queen attacks any other. (A queen attacks any piece in the same row,
column or diagonal.)

• The Figure shows an attempted solution that fails: the queen in the rightmost
column is attacked by the queen at the top left.

1) States: Any arrangement of 0 to 8 queens on the board is a state.

2) Initial state: No queens on the board.

3) Successor function: Add a queen to any empty square.

4) Goal test: 8 queens are on the board, none attacked.

A better formulation:

States: Arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no
queen attacking another.

Successor function: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.
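The "no queen attacks another" check can be sketched as follows. Representing a state as a list where `queens[i]` is the row of the queen in column i makes column conflicts impossible by construction; this encoding is an assumption for illustration.

```python
# 8-queens attack check: two queens conflict if they share a row, or if their
# row difference equals their column difference (a diagonal).

def no_attacks(queens):
    """True if no two queens share a row or a diagonal."""
    for i in range(len(queens)):
        for j in range(i + 1, len(queens)):
            same_row = queens[i] == queens[j]
            same_diag = abs(queens[i] - queens[j]) == j - i
            if same_row or same_diag:
                return False
    return True

print(no_attacks([0, 4, 7, 5, 2, 6, 1, 3]))  # True: a valid 8-queens placement
print(no_attacks([0, 1]))                    # False: diagonal attack
```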

• Route-finding problem
Route-finding algorithms are used in a variety of applications, such as routing in computer
networks, military operations planning, and airline travel planning systems.

Touring problems
• Touring problems are closely related to route-finding problems, but with an important difference.
Consider, for example, the problem, "Visit every city in the previous example at least once, starting
and ending in Bucharest."

Traveling salesperson problem

• The traveling salesperson problem (TSP) is a touring problem in which each city must be visited
exactly once. The aim is to find the shortest tour.

VLSI layout

• A VLSI layout problem requires positioning millions of components and connections on a chip to
minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing
yield.

• The layout problem comes after the logical design phase, and is usually split into two parts: cell
layout and channel routing.
