Department of
Artificial Intelligence and Machine Learning
ARTIFICIAL INTELLIGENCE
[23AII403]
Prepared by
Mrs. Shashikala AB
Asst. Professor, Dept. of AI&ML, SJBIT
WHAT IS INTELLIGENCE?
Homo sapiens: the name is Latin for "wise man."
Intelligence is the capacity of a mere handful of matter to perceive, understand, predict, and manipulate a world far larger and more complicated than itself.
WHAT IS ARTIFICIAL INTELLIGENCE?
AI is achieved by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
Thinking humanly
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)
Thinking rationally
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)
Thinking humanly: The cognitive modeling approach
To say that a program thinks like a human, we must first determine how humans think:
i. Through introspection: trying to catch our own thoughts as they go by.
ii. Through psychological experiments: observing a person in action.
iii. Through brain imaging: observing the brain in action.
• Cognitive Science brings together computer models from AI and experimental techniques from
psychology to try to construct precise and testable theories of the working of the human mind.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as
a computer program.
• If the program’s input-output behaviour matches corresponding human behaviour, that is evidence
that the program’s mechanisms could also be working in humans.
• For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver," were concerned not merely with their program solving problems correctly, but with comparing the trace of its reasoning steps to traces of human subjects solving the same problems.
Thinking rationally: The “laws of thought” approach
Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.
E.g.:
Socrates is a man;
All men are mortal;
Therefore, Socrates is mortal.
These laws of thought were supposed to govern the operation of the mind; their study
initiated the field called logic.
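To see how such reasoning can be mechanized, here is a minimal sketch (the representation and names are illustrative, not from the slides) that derives the Socrates conclusion by applying the rule "all men are mortal" to the fact "Socrates is a man":

    # Facts are (predicate, subject) pairs; rules say "premise implies conclusion".
    facts = {("man", "Socrates")}
    rules = [("man", "mortal")]  # "All men are mortal"

    def forward_chain(facts, rules):
        """Apply every rule to every matching fact until nothing new is derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for predicate, subject in list(derived):
                    if predicate == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    print(("mortal", "Socrates") in forward_chain(facts, rules))  # -> True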
The logicist tradition within artificial intelligence hopes to build on such laws of thought to create intelligent systems.
Acting rationally: The rational agent approach
All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty,
the best expected outcome.
In the "thinking rationally" approach to AI, the emphasis was on correct inferences, and making correct inferences is sometimes part of being a rational agent.
On the other hand, correct inference is not all of rationality; in some situations, there is no
provably correct thing to do, but something must still be done.
For example, recoiling from a hot stove is a reflex action that is usually more successful than a
slower action taken after careful deliberation.
Foundations of Artificial Intelligence
1. Philosophy
Aristotle was the first to formulate a precise set of laws governing the rational part of
the mind. He developed an informal system of syllogisms for proper reasoning, which
in principle allowed one to generate conclusions mechanically, given initial premises.
Utilitarianism: decision makers should consider the interests of the many; rational decision making based on maximizing utility should apply to all spheres of human activity, including public decisions made on behalf of many individuals.
Deontological ethics: doing the right thing is determined by universal laws (don't lie, don't kill).
2. Mathematics
a. What are the formal rules to draw valid conclusions?
b. What can be computed?
c. How do we reason with uncertain information?
Decision theory, which combines probability theory with utility theory, provides
a formal and complete framework for decisions made under uncertainty.
For the most part, economists did not address the third question: how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the emerging field of operations research.
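As a small worked illustration of decision theory (the numbers and action names are hypothetical), the sketch below combines probabilities with utilities and picks the action with the highest expected utility:

    # Each action maps to a list of (probability, utility) outcomes.
    actions = {
        "carry_umbrella": [(0.3, 60), (0.7, 80)],   # rain / no rain
        "leave_umbrella": [(0.3, 0), (0.7, 100)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # -> 'carry_umbrella' (expected utility 74 vs. 70)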
The best-known example of game theory is a classical scenario called the "Prisoner's Dilemma." In this situation, two people are arrested for stealing a car, for which they must serve a 2-year imprisonment. But the police also suspect that these two people have committed a bank robbery. The police place each prisoner in a separate cell; both are told that they are suspected of being bank robbers, and they are interrogated separately, unable to communicate with each other.
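The resulting payoffs can be written down directly; the sketch below (the sentence lengths are illustrative) checks that confessing is a dominant strategy for each prisoner, whatever the other does:

    # (prisoner_1_move, prisoner_2_move) -> (years_1, years_2); lower is better.
    payoffs = {
        ("stay_silent", "stay_silent"): (2, 2),
        ("stay_silent", "confess"): (10, 0),
        ("confess", "stay_silent"): (0, 10),
        ("confess", "confess"): (5, 5),
    }

    # "confess" dominates "stay_silent" for prisoner 1: it yields fewer years
    # against either move by prisoner 2 (and the game is symmetric).
    for other in ("stay_silent", "confess"):
        assert payoffs[("confess", other)][0] < payoffs[("stay_silent", other)][0]
    print("'confess' dominates 'stay_silent' for each prisoner")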
5. Psychology
Behaviorism discovered a lot about rats and pigeons but had less success at understanding humans.
The common view among psychologists is that a cognitive theory should be like a computer program (Anderson, 1980), i.e., it should describe a detailed information-processing mechanism whereby some cognitive function might be implemented.
6. Computer engineering:
How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact (object) of choice.
The first operational computer was the electromechanical Heath Robinson, built in 1940
by Alan Turing's team for a single purpose: deciphering German messages.
The first operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in 1941.
The first electronic computer, the ABC, was assembled by John Atanasoff and his student
Clifford Berry between 1940 and 1942 at Iowa State University.
The first programmable machine was a loom, devised in 1805 by Joseph Marie Jacquard
(1752-1834) that used punched cards to store instructions for the pattern to be woven.
7. Control theory and cybernetics
How can artifacts operate under their own control?
Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water
clock with a regulator that maintained a constant flow rate. This invention changed the
definition of what an artifact could do.
Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally.
8. Linguistics
In 1957, B. F. Skinner published Verbal Behavior; a celebrated review of the book was written by the linguist Noam Chomsky, who had just published a book on his own theory, Syntactic Structures. Chomsky pointed out that the behaviourist theory did not address the notion of creativity in language.
Modern linguistics and AI were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
The problem of understanding language soon turned out to be considerably more complex
than it seemed in 1957. Understanding language requires an understanding of the subject
matter and context, not just an understanding of the structure of sentences.
Knowledge representation (the study of how to put knowledge into a form that a computer can reason with) is tied to language and informed by research in linguistics.
History of Artificial Intelligence
The gestation of artificial intelligence (1943–1955)
The gestation of artificial intelligence (AI) during the period from 1943 to 1955 marked the early
theoretical and conceptual groundwork for the field. This period laid the foundation for the
subsequent development of AI.
What can AI do today? A few examples:
Robotic vehicles: A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
Speech recognition: A traveler calling United Airlines to book a flight can have the entire
conversation guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA’s Remote
Agent program became the first on-board autonomous planning program to control the scheduling
of operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT generated plans from high-
level goals specified from the ground and monitored the execution of those plans—detecting,
diagnosing, and recovering from problems as they occurred. Successor program MAPGEN (Al-
Chang et al., 2004) plans the daily operations for NASA’s Mars
Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning, both logistics and science planning, for the European Space Agency's Mars Express mission in 2008.
Game playing: IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in
an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a “new kind of
intelligence” across the board from him. Newsweek magazine described the match as “The
brain’s last stand.” The value of IBM’s stock increased by $18 billion. Human champions
studied Kasparov’s loss and were able to draw a few matches in subsequent years, but the
most recent human-computer matches have been won convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam,
saving the recipient from having to waste time deleting what, for many users, could comprise 80%
or 90% of all messages, if not classified away by algorithms. Because the spammers are continually
updating their tactics, it is difficult for a static programmed approach to keep up, and learning
algorithms work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics
planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people
at a time, and had to account for starting points, destinations, routes, and conflict resolution among
all parameters.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum
cleaners for home use. The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify
the location of snipers.
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract,
and so on for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
We use the term percept to refer to the agent's perceptual inputs at any given instant.
In general, an agent’s choice of action at any given instant can depend on the entire
percept sequence observed to date, but not on anything it hasn’t perceived.
The agent function describes an agent's behavior by mapping any given percept sequence to an action.
-To describe any given agent, we can tabulate the agent function; this will typically be a very large table (potentially an infinitely large one).
-The table can be constructed by trying out all possible percept sequences and recording which actions the agent does in response.
-This table is an external characterization of the agent; the agent function itself is an abstract mathematical description.
-Internally, the agent function for an artificial agent is implemented by an agent program running on the agent's architecture.
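As a rough illustration (the table entries and names below are ours, not from the slides), a table-driven agent for a two-square vacuum world (squares A and B) looks up the entire percept sequence observed so far in such a table:

    # Keys are percept sequences (tuples of (location, status)); values are actions.
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
        # ... the full table needs an entry for every possible percept sequence
    }

    percepts = []  # the percept sequence observed to date

    def table_driven_agent(percept):
        """Append the new percept, then look up the action for the whole sequence."""
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Dirty")))  # -> Suck

This makes concrete why the table is only an external characterization: it grows without bound, so real agent programs compute the mapping instead of storing it.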
It is better to design ‘performance measures’ according to what one actually wants in the
environment, rather than according to how one thinks the agent should behave.
Eg: for the vacuum agent, we propose to measure the performance by the amount of dirt cleaned up in a single 8-hour shift.
Rationality
Prior knowledge: The geography of the environment is known in advance, but the dirt distribution and the initial location of the agent are not. Clean squares stay clean, and sucking cleans the current square. The Left and Right actions move the agent left and right, except when this would take the agent outside the environment, in which case the agent remains where it is.
Percept sequence: The agent correctly perceives its location and whether that location contains dirt.
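Under this task description, a simple reflex vacuum agent can be sketched as follows (a minimal illustration, using the two-square world with squares A and B):

    def reflex_vacuum_agent(percept):
        """Choose an action based only on the current percept (location, status)."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck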
Omniscience, learning, and autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.
Learning: Our definition requires a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented.
Autonomy: To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.
The nature of environments
Task environments are essentially the "problems" to which rational agents are the "solutions."
Specifying the performance measure, the environment, and the agent's actuators and sensors is called the PEAS (Performance, Environment, Actuators, Sensors) description.
In designing an agent, the first step must always be to specify the task environment as
fully as possible.
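As an illustration, a PEAS description can be written down as plain data; the entries below follow the standard automated-taxi example from the textbook, and the field layout is ours:

    # PEAS description for an automated taxi driver.
    peas_taxi = {
        "Performance": ["safe", "fast", "legal", "comfortable trip",
                        "maximize profits"],
        "Environment": ["roads", "other traffic", "pedestrians", "customers"],
        "Actuators": ["steering", "accelerator", "brake", "signal", "horn",
                      "display"],
        "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors", "keyboard"],
    }

    for component, items in peas_taxi.items():
        print(f"{component}: {', '.join(items)}")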
• If an agent’s sensors give it access to the complete state of the environment at each point
in time, then we say that the task environment is fully observable.
• A task environment is effectively fully observable if the sensors detect all aspects that
are relevant to the choice of action; relevance, in turn, depends on the performance
measure.
• Fully observable environments are convenient because the agent need not maintain any
internal state to keep track of the world.
• An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data: for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all, then the environment is unobservable.
Single agent vs. multiagent:
The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
Episodic vs. sequential:
• In an episodic task environment, the agent's experience is divided into atomic episodes; in each episode the agent receives a percept and performs a single action, and the next episode does not depend on the actions taken in previous ones.
• In sequential environments, on the other hand, the current decision could affect all future decisions.
Static vs. dynamic:
• If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static.
• Static environments are easy to deal with because the agent need not keep looking at
the world while it is deciding on an action, nor need it worry about the passage of
time.
• Dynamic environments, on the other hand, are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
Discrete vs. continuous:
• The discrete/continuous distinction applies to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent.
• For example, the chess environment has a finite number of distinct states (excluding
the clock). Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location
of the taxi and of the other vehicles sweep through a range of continuous values and do
so smoothly over time.
Properties of the Agent's State of Knowledge
The known/unknown distinction refers not to the environment itself but to the agent's (or designer's) knowledge of the environment's "laws of physics." Notably, an unknown environment can be fully observable (e.g., a game whose rules I don't know, even though I can see the whole board).
Task Environment            Observable   Agents   Deterministic   Episodic     Static    Discrete
Crossword puzzle            Fully        Single   Deterministic   Sequential   Static    Discrete
Image analysis              Fully        Single   Deterministic   Episodic     Semi      Continuous
Interactive English tutor   Partially    Multi    Stochastic      Sequential   Dynamic   Discrete
The structure of agents
The job of AI is to design an agent program that implements the agent function. The program runs on some computing device with physical sensors and actuators, called the agent architecture; in short, agent = architecture + program.
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
• Sometimes goal-based action selection is straightforward: for example, when goal satisfaction results immediately from a single action.
• Sometimes it will be trickier: for example, when the agent has to consider long sequences of
twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences that achieve
the agent’s goals.
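As a minimal sketch of such searching (the toy state space and names are illustrative), breadth-first search returns an action sequence that reaches the goal, if one exists:

    from collections import deque

    def plan(start, goal, successors):
        """Return a list of actions leading from start to goal, or None."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, next_state in successors(state):
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, actions + [action]))
        return None

    # Toy state space: integer states; actions add or subtract 1.
    succ = lambda s: [("add", s + 1), ("sub", s - 1)]
    print(plan(0, 3, succ))  # -> ['add', 'add', 'add']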
Utility-based agents
• These agents are similar to the goal-based agent but add a utility measurement, which provides a measure of success at a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
• The utility function maps each state to a real number to check how efficiently each action achieves the goals.
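A minimal sketch of utility-based action selection (the utility function and state space below are illustrative): the agent picks the action whose successor state has the highest utility:

    def utility(state):
        # Hypothetical utility: prefer states closer to a target value of 10.
        return -abs(10 - state)

    def utility_based_choice(state, actions, transition):
        """Return the action leading to the highest-utility successor state."""
        return max(actions, key=lambda a: utility(transition(state, a)))

    actions = ["add", "sub"]
    transition = lambda s, a: s + 1 if a == "add" else s - 1
    print(utility_based_choice(7, actions, transition))  # -> 'add'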
Utility-based agents' advantage w.r.t. goal-based agents: when goals conflict, or when several goals can each be achieved only with some likelihood of success, the utility function weighs that likelihood against the importance of the goals.
Learning agents
• Consider the automated taxi: after it makes a quick left turn across three lanes of traffic, the critic observes the shocking language used by other drivers.
• From this experience, the learning element formulates a rule saying this was a bad action.
• The performance element is modified by adding the new rule.
• The problem generator might identify certain areas of behavior in need of
improvement, and suggest trying out the brakes on different road surfaces under
different conditions.
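As a rough sketch of this loop (all names are illustrative, not a definitive design), the critic flags negative feedback, the learning element formulates a rule, and the performance element uses that rule thereafter:

    rules = {}  # situation -> preferred action, learned over time

    def performance_element(situation, default_action):
        """Select an action, preferring any learned rule for this situation."""
        return rules.get(situation, default_action)

    def learning_element(situation, feedback, safer_action):
        """Critic: negative feedback means the last action was bad; learn a rule."""
        if feedback < 0:
            rules[situation] = safer_action

    # The quick left turn draws negative feedback, so a new rule is installed.
    learning_element("three_lane_road", -1, "slow_left_turn")
    print(performance_element("three_lane_road", "quick_left_turn"))  # -> slow_left_turn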