Unit 1

The document provides an overview of Artificial Intelligence (AI), covering its foundations, history, and various techniques. It discusses the capabilities of AI systems, including their ability to think and act like humans, and introduces concepts like the Turing Test and rational decision-making. Additionally, it outlines the structure of intelligent agents, their environments, and the different types of agents based on their decision-making processes.


Artificial Intelligence

Introduction
Outline: Unit 1
o Foundations and History of Artificial Intelligence
o Can machines think?
o AI techniques
o Components of AI
o Applications of Artificial Intelligence
o Intelligent Agents
o Structure of Intelligent Agents
o Computer Vision
o Natural Language Processing
AI Today
o What is artificial intelligence?
o Where did it come from? What can AI do?
What is AI?
The science of making machines that:
o Think like people
o Think rationally
o Act like people
o Act rationally

What is AI?
o Systems that think like humans: cognitive science, neuroscience
o Systems that think rationally: Aristotle's logic ("this is how to think so that you don't make mistakes")
o Systems that act like humans: Alan Turing and the Turing Test
o Systems that act rationally: maximally achieving pre-defined goals
Turing Test
o A (human) judge communicates with a human and a machine over a text-only channel.
o Both the human and the machine try to act like a human.
o The judge tries to tell which is which.

Image from http://en.wikipedia.org/wiki/Turing_test
Rational Decisions
We’ll use the term rational in a very specific, technical way:
 Rational: maximally achieving pre-defined goals
 Rationality only concerns what decisions are made
(not the thought process behind them)
 Goals are expressed in terms of the utility of outcomes
 Being rational means maximizing your expected utility

Computational Rationality: Maximize Your Expected Utility
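The definition above (rationality as maximizing expected utility over probabilistic outcomes) can be sketched in a few lines. This is a minimal illustration, not any particular system; the umbrella decision, its probabilities, and its utility numbers are all made up for the example.

```python
# Toy sketch of expected-utility maximization. All numbers are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: carry an umbrella or not, with a 30% chance of rain.
actions = {
    "take umbrella": [(0.3, 5), (0.7, 8)],     # dry but encumbered either way
    "leave umbrella": [(0.3, -10), (0.7, 10)],  # soaked if it rains, free if not
}

# The rational agent picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Note that rationality here concerns only the decision made, not the reasoning process: any procedure that lands on the utility-maximizing action counts as rational.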
The Foundations of AI
o Economics: maximize payoff; decision theory, game theory, operations research
o Computer engineering: efficient computers
A (Short) History of AI
o 1940-1950: Early days
o 1943: McCulloch & Pitts: Boolean circuit model of brain
o 1950: Turing's “Computing Machinery and Intelligence”
o 1950—70: Excitement: Look, Ma, no hands!
o 1950s: Early AI programs, including Samuel's checkers
program, Newell & Simon's Logic Theorist, Gelernter's
Geometry Engine
o 1956: Dartmouth meeting: “Artificial Intelligence” adopted
o 1965: Robinson's complete algorithm for logical reasoning
o 1970—90: Knowledge-based approaches
o 1969—79: Early development of knowledge-based systems
o 1980—88: Expert systems industry booms
o 1988—93: Expert systems industry busts: “AI Winter”
o 1990—: Statistical approaches
o Resurgence of probability, focus on uncertainty
o General increase in technical depth
o Agents and learning systems… “AI Spring”?

o 2000—: Where are we now?


What Can AI Do?
Quiz: Which of the following can be done at present?
o Play a decent game of Jeopardy?
o Win against any human at chess?
o Play a decent game of tennis?
o Grab a particular cup and put it on a shelf?
o Unload any dishwasher in any home?
o Drive safely along the highway?
o Buy a week's worth of groceries on the web?
o Discover and prove a new mathematical theorem?
o Perform a surgical operation?
o Unload a known dishwasher in collaboration with a person?
o Translate spoken Chinese into spoken English in real time?
o Write an intentionally funny story?
Game Agents
o Classic moment: May 1997, Deep Blue vs. Kasparov
o First match won against a reigning world champion
o "Intelligent, creative" play
o 200 million board positions per second
o Humans understood 99.9% of Deep Blue's moves
o Can do about the same now with a PC cluster
o 1996: Kasparov beats Deep Blue: "I could feel --- I could smell --- a new kind of intelligence across the table."
o 1997: Deep Blue beats Kasparov: "Deep Blue hasn't proven anything."

Text from Bart Selman, image from IBM's Deep Blue pages
Game Agents
o Reinforcement learning
[Atari game screenshots: Pong, Enduro, Beamrider, Q*bert]
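The core idea behind these reinforcement-learning game agents can be sketched with the tabular Q-learning update (the Atari agents replace the table with a neural network). Everything below is a toy illustration: the learning rate, discount, states, and actions are invented for the example.

```python
# Minimal sketch of the tabular Q-learning update rule.
alpha, gamma = 0.5, 0.9  # learning rate and discount factor (chosen arbitrarily)

q = {}  # Q-table mapping (state, action) -> estimated long-term value

def q_update(s, a, r, s_next, actions):
    """One Q-learning step: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Toy transition: taking "right" in state 0 yields reward 1 and lands in state 1.
q_update(0, "right", 1.0, 1, ["left", "right"])
```

Repeating this update while the agent plays (with some exploration) makes the Q-values, and hence the greedy policy, improve from experience rather than from hand-coded rules.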
Robotics
o Part mechanical engineering, part AI
o Reality is much harder than simulations!
o Technologies: vehicles, rescue, help in the home, lots of automation…
o In this class: we ignore the mechanical aspects and focus on methods for planning and control

Images from UC Berkeley, Boston Dynamics, RoboCup, Google
Robots
Human-AI Interaction
Tools for Predictions & Decisions
Natural Language
o Speech technologies (e.g. Siri)
o Automatic speech recognition (ASR)
o Text-to-speech synthesis (TTS)
o Dialog systems
o Language processing technologies
o Question answering
o Machine translation
o Web search
o Text classification, spam filtering, etc.
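Of the language technologies listed above, text classification is the easiest to sketch end to end. Below is a toy Naive Bayes spam filter; the four-sentence "corpus" is invented for illustration, and real systems train on large labeled datasets with far more careful feature handling.

```python
# Toy Naive Bayes text classifier (spam vs. ham) on a hand-made corpus.
from collections import Counter
import math

train = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon today", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the class with the highest log-probability, add-one smoothed."""
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

The add-one (Laplace) smoothing keeps unseen words from zeroing out a class's probability, which is why the classifier degrades gracefully on new vocabulary.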
Computer Vision

Karpathy & Fei-Fei, 2015; Donahue et al., 2015; Xu et al., 2015; many more
What About the Brain?
 Brains (human minds) are very good at making rational decisions, but not perfect
 Brains aren't as modular as software, so they are hard to reverse engineer!
 "Brains are to intelligence as wings are to flight"
 Lessons learned from the brain: memory and simulation are key to decision making
Designing Rational Agents
o An agent is an entity that perceives and acts.
o A rational agent selects actions that maximize its (expected) utility.
o Characteristics of the percepts, environment, and action space dictate the techniques for selecting rational actions.

[Diagram: the agent receives percepts from the environment through sensors and acts on the environment through actuators]
Agents and environments

[Diagram: agent and environment, connected by sensors (percepts in) and actuators (actions out)]

o An agent perceives its environment through sensors and acts upon it through actuators (or effectors, depending on whom you ask).
o The agent function maps percept sequences to actions.
o It is generated by an agent program running on a machine.
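The distinction between the agent function (percept sequences to actions) and the agent program that implements it can be made concrete in a few lines. This is a minimal sketch; the class names and the trivial echo behavior are invented for illustration.

```python
# Sketch: an agent program implementing the agent function
# (a mapping from the full percept sequence to an action).

class Agent:
    def __init__(self):
        self.percepts = []  # percept sequence, initially empty

    def program(self, percepts):
        """The agent function: percept sequence -> action."""
        raise NotImplementedError

    def act(self, percept):
        self.percepts.append(percept)       # record the new percept
        return self.program(self.percepts)  # act on the whole history

class EchoAgent(Agent):
    """Trivial illustrative agent whose action repeats its latest percept."""
    def program(self, percepts):
        return percepts[-1]
```

In principle the agent function depends on the entire percept history, though most practical agent programs summarize that history into a compact internal state, as the model-based agents later in this unit do.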
The Nature of Environments
o The task environment - PEAS
(Performance, Environment, Actuators, Sensors)

The task environment - PEAS: a human agent in Pacman
o Performance measure
o -1 per step; +10 food; +500 win; -500 die; +200 hit scared ghost
o Environment
o Pacman dynamics (incl. ghost behavior)
o Actuators
o Left, Right, Up, Down (or NSEW)
o Sensors
o Entire state is visible
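A PEAS description is just structured data, so it can be recorded as a plain record type. This sketch uses a hypothetical `PEAS` dataclass of my own naming, populated with the Pacman description above.

```python
# Sketch: a PEAS task-environment description as a simple record.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: str
    environment: str
    actuators: str
    sensors: str

pacman = PEAS(
    performance="-1 per step; +10 food; +500 win; -500 die; +200 hit scared ghost",
    environment="Pacman dynamics, including ghost behavior",
    actuators="Left, Right, Up, Down",
    sensors="entire state is visible",
)
```

Writing the four fields out explicitly is the point of the exercise: it forces you to state what "doing well" means before choosing an agent design.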
PEAS: Automated taxi
o Performance measure
o Income, happy customers, vehicle costs, fines, insurance premiums
o Environment
o Roads, streets, other drivers, customers, weather, police…
o Actuators
o Steering, brake, gas, display/speaker, horn
o Sensors
o Camera, radar, accelerometer, engine sensors, microphone, GPS

Image: http://nypost.com/2014/06/21/how-google-might-put-taxi-drivers-out-of-business/
PEAS: Medical diagnosis system
o Performance measure
o Patient health, cost, reputation
o Environment
o Patients, medical staff, hospitals
o Actuators
o Screen display, email, diagnoses, treatment referrals
o Sensors
o Keyboard/mouse for entry of patient's records
Environment types
o Fully vs. partially observable: can the sensors reveal the entire state?
o Single-agent vs. multiagent: crossword (single-agent) vs. chess (competitive multiagent) vs. taxi driving (cooperative multiagent)
o Deterministic vs. stochastic: is the next state completely determined by the current state and action? Vacuum cleaner (deterministic) vs. traffic (stochastic)
o Static vs. dynamic: crossword (static) vs. taxi driving (dynamic)
o Discrete vs. continuous: chess (discrete with respect to time) vs. taxi driving (continuous)
o Known vs. unknown: the agent's state of knowledge about the environment. Solitaire: known environment but partially observable; a new video game: unknown environment but fully observable
o Episodic vs. sequential: defective-part detection (episodic) vs. chess (sequential)
Structure of Agents

o The agent program implements the agent function: the mapping from percept sequences to actions.

o Agent = architecture + program
Structure of Agents

o Agent program
function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
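The TABLE-DRIVEN-AGENT pseudocode above translates almost line for line into Python. This is a sketch: the two-entry lookup table is a toy, and for any realistic environment the full table would be astronomically large, which is why this design is only of theoretical interest.

```python
# Sketch of TABLE-DRIVEN-AGENT: look the whole percept sequence up in a table.

def make_table_driven_agent(table):
    percepts = []  # persistent percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to the sequence
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

    return agent

# Toy table indexed by percept sequences (assumed fully specified for the demo).
table = {("A",): "go", ("A", "B"): "stop"}
agent = make_table_driven_agent(table)
```

Because the table is indexed by the entire percept sequence, its size grows exponentially with the agent's lifetime; the reflex and model-based designs below exist precisely to avoid that blow-up.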
Agent programs
o Simple reflex agents
o Model-based reflex agents
o Goal-based agents
o Utility-based agents
Simple reflex agents

[Diagram: sensors report "What the world is like now"; condition-action rules select "What action I should do now"; actuators act on the environment]
SIMPLE-REFLEX-AGENT
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

A simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept.

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
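The REFLEX-VACUUM-AGENT pseudocode above is small enough to render directly as a Python function; this is a straight transcription for the two-square vacuum world, with percepts encoded as `(location, status)` pairs.

```python
# Sketch of REFLEX-VACUUM-AGENT: the action depends only on the current percept.

def reflex_vacuum_agent(percept):
    location, status = percept  # percept is a [location, status] pair
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```

Because the action depends only on the current percept, this agent is simple and fast, but it can never do better than react: in a fully clean world it shuttles between A and B forever, since it keeps no memory of what it has already seen.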
Reflex agents with state (model-based)

[Diagram: sensors feed "What the world is like now"; internal state, "How the world evolves", and "What my actions do" keep that picture up to date; condition-action rules select "What action I should do now" for the actuators]
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              transition-model, a description of how the next state depends on the current state and action
              sensor-model, a description of how the current world state is reflected in the agent's percepts
              rules, a set of condition–action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, transition-model, sensor-model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

A model-based reflex agent. It keeps track of the current state of the world using an internal model, then chooses an action in the same way as the simple reflex agent.
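A concrete way to see what the internal state buys you is to revisit the vacuum world. This sketch (my own illustrative class, not code from the text) tracks the believed status of both squares, so unlike the simple reflex agent it can stop once everything is known to be clean.

```python
# Sketch of a model-based reflex agent for the two-square vacuum world.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.belief = {"A": None, "B": None}  # believed status; None = unknown
        self.location = None

    def update_state(self, percept):
        """Sensor model: the percept directly reveals the current square."""
        location, status = percept
        self.location = location
        self.belief[location] = status

    def act(self, percept):
        self.update_state(percept)
        # Condition-action rules applied to the internal state:
        if self.belief[self.location] == "Dirty":
            self.belief[self.location] = "Clean"  # transition model: Suck cleans
            return "Suck"
        if all(v == "Clean" for v in self.belief.values()):
            return "NoOp"  # internal state lets it stop, unlike the pure reflex agent
        return "Right" if self.location == "A" else "Left"
```

The `NoOp` branch is exactly what the percept-only agent cannot express: deciding based on squares it is not currently looking at requires remembered state.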
Goal-based agents

[Diagram: internal state plus "How the world evolves" and "What my actions do" produce "What it will be like if I do action A"; goals then determine "What action I should do now" for the actuators]

A model-based, goal-based agent. It keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
Utility-based agents

A model-based, utility-based agent. It uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.
General Learning Agent

A general learning agent. The "performance element" box represents what we have previously considered to be the whole agent program. Now the "learning element" box gets to modify that program to improve its performance.
Spectrum of representations

Three ways to represent states and the transitions between them. (a) Atomic representation: a state (such as B or C) is a black box with no internal structure. (b) Factored representation: a state consists of a vector of attribute values; values can be Boolean, real-valued, or one of a fixed set of symbols. (c) Structured representation: a state includes objects, each of which may have attributes of its own as well as relationships to other objects.
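The three representation levels in the caption can be made concrete with one toy state rendered three ways. The attribute names, the truck, and the crate below are invented purely for illustration.

```python
# Sketch: the same world state at the three representation levels.

# (a) Atomic: the state is an opaque label with no internal structure.
atomic = "B"

# (b) Factored: the state is a vector of attribute values
#     (Boolean, real-valued, or symbolic).
factored = {"location": "B", "fuel": 0.7, "lights_on": True}

# (c) Structured: the state contains objects with their own attributes
#     and relationships to other objects.
structured = {
    "truck1": {"type": "truck", "in": "B"},
    "crate7": {"type": "crate", "on": "truck1"},
}
```

Moving rightward along this spectrum gains expressiveness (the structured state can say that the crate is on the truck, which the factored state cannot) at the cost of more complex reasoning algorithms.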
