ARTIFICIAL INTELLIGENCE
UNIT-I
What is artificial intelligence?
1. Artificial Intelligence is the branch of computer science concerned with making
computers behave like humans.
2. Major AI textbooks define artificial intelligence as "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and
takes actions which maximize its chances of success.
3. John McCarthy, who coined the term in 1956, defines it as "the science and
engineering of making intelligent machines, especially intelligent computer
programs."
4. The definitions of AI according to some text books are categorized into
approaches and are summarized in the table below :
Systems that think like humans          | Systems that think rationally
"The exciting new effort to make        | "The study of mental faculties through
computers think … machines with minds,  | the use of computer models."
in the full and literal sense."         | (Charniak and McDermott, 1985)
(Haugeland, 1985)                       |
Systems that act like humans            | Systems that act rationally
"The art of creating machines that      | "Computational intelligence is the study
perform functions that require          | of the design of intelligent agents."
intelligence when performed by people." | (Poole et al., 1998)
(Kurzweil, 1990)                        |
Goals of AI
• To Create Expert Systems − Systems which exhibit intelligent behavior: they learn,
demonstrate, explain, and advise their users.
• To Implement Human Intelligence in Machines − Creating systems that understand, think,
learn, and behave like humans.
Applications of Artificial Intelligence:
1. Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc.,
where a machine can think of a large number of possible positions based on heuristic
knowledge.
2. Natural Language Processing − It is possible to interact with a computer that
understands the natural language spoken by humans.
3. Expert Systems − There are some applications which integrate machine, software, and
special information to impart reasoning and advising. They provide explanation and advice
to the users.
4. Vision Systems − These systems understand, interpret, and comprehend visual input on the
computer. For example:
- A spying aeroplane takes photographs, which are used to figure out spatial information or
a map of the area.
- Doctors use a clinical expert system to diagnose patients.
- Police use computer software that can match the face of a criminal with the stored
portrait made by a forensic artist.
5. Speech Recognition − Some intelligent systems are capable of hearing and comprehending
language in terms of sentences and their meanings while a human talks to them. They can
handle different accents, slang words, background noise, changes in a human's voice due
to a cold, etc.
6. Handwriting Recognition − Handwriting recognition software reads text written on
paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and
convert them into editable text.
7. Intelligent Robots − Robots are able to perform the tasks given by a human. They have
sensors to detect physical data from the real world such as light, heat, temperature,
movement, sound, bumps, and pressure. They have efficient processors, multiple sensors,
and large memory to exhibit intelligence. In addition, they are capable of learning from
their mistakes and can adapt to a new environment.
History of AI
Maturation of Artificial Intelligence (1943-1952)
1. Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
2. Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
3. Year 1950: Alan Turing, an English mathematician who pioneered machine learning,
published "Computing Machinery and Intelligence" in 1950, in which he proposed a test.
The test, now called the Turing test, checks a machine's ability to exhibit intelligent
behavior equivalent to human intelligence.
The birth of Artificial Intelligence (1952-1956)
1. Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", which was named "Logic Theorist". This program proved 38 of 52
mathematical theorems, and found new and more elegant proofs for some theorems.
2. Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was
recognized as an academic field.
At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being
invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
1. Year 1966: Researchers emphasized developing algorithms which could solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was
named ELIZA.
2. Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
1. The period between 1974 and 1980 was the first AI winter. "AI winter" refers to a
period in which computer scientists dealt with a severe shortage of government funding
for AI research.
2. During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)
1. Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems
were programs that emulated the decision-making ability of a human expert.
2. In the Year 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
The second AI winter (1987-1993)
1. The duration between the years 1987 to 1993 was the second AI Winter duration.
2. Again, investors and the government stopped funding AI research due to high costs and
inefficient results. Expert systems such as XCON had proved very expensive to maintain.
The emergence of intelligent agents (1993-2011)
1. Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov,
and became the first computer to beat a reigning world chess champion.
2. Year 2002: for the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
3. Year 2006: By the year 2006, AI had entered the business world. Companies like Facebook,
Twitter, and Netflix also started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
1. Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to
solve complex questions as well as riddles. Watson proved that it could understand
natural language and solve tricky questions quickly.
2. Year 2012: Google launched an Android app feature, "Google Now", which was able to
provide information to the user as a prediction.
3. Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the
famous "Turing test".
4. Year 2018: The "Project Debater" from IBM debated on complex topics with two master
debaters and also performed extremely well.
5. Google demonstrated an AI program, "Duplex", a virtual assistant which booked a
hairdresser's appointment over a phone call, and the person on the other side did not
notice that she was talking with a machine.
STATE OF ART
Agents and environments:
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
1. A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
2. A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
3. A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
FIG: Agent working
Sensor: Sensor is a device which detects the change in the environment and sends the information
to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. The
actuators are responsible for moving and controlling a system. An actuator can be an electric
motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels,
arms, fingers, wings, fins, and display screen.
Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent function
that maps any given percept sequence to an action.
Agent program
1. The agent function for an artificial agent will be implemented by an agent program.
2. It is important to keep these two ideas distinct.
3. The agent function is an abstract mathematical description;
4. the agent program is a concrete implementation, running on the agent architecture.
5. To illustrate these ideas, we will use a very simple example - the vacuum-cleaner world
shown in Figure.
6. This particular world has just two locations: squares A and B.
7. The vacuum agent perceives which square it is in and whether there is dirt in the
square.
8. It can choose to move left, move right, suck up the dirt, or do nothing.
9. One very simple agent function is the following:
10. if the current square is dirty, then suck;
11. otherwise, move to the other square.
12. A partial tabulation of this agent function is shown in Figure.
Agent function
Percept Sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck
….. …..
Agent program
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
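The pseudocode above can be sketched directly in Python. The [location, status] percept format follows the tabulation above; the function name and string values are illustrative assumptions.

```python
# A minimal Python sketch of the reflex vacuum agent pseudocode above.

def reflex_vacuum_agent(percept):
    """Map a [location, status] percept to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# Example percepts and the actions the agent chooses:
print(reflex_vacuum_agent(["A", "Dirty"]))   # Suck
print(reflex_vacuum_agent(["A", "Clean"]))   # Right
print(reflex_vacuum_agent(["B", "Clean"]))   # Left
```

Note that this agent program is a few lines of code, while the corresponding agent function table grows with every possible percept sequence; this is exactly the distinction between the two ideas drawn above.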
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors
and actuators to achieve goals. An intelligent agent may learn from the environment to
achieve its goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
Structure of an AI Agent: The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of architecture and agent program. It
can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is the machinery that an AI agent executes on.
Agent Function: The agent function is used to map a percept sequence to an action.
Agent program: An agent program is an implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
Rational Agent:
A rational agent is an agent which has clear preference, models uncertainty, and acts in a way to
maximize its performance measure with all possible actions.
A rational agent is said to perform the right things. AI is about creating rational agents to use for
game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in an AI reinforcement learning
algorithm, for each best possible action the agent gets a positive reward, and for each
wrong action the agent gets a negative reward.
Good Behavior: The concept of Rationality
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on
the basis of following points:
o Performance measure which defines the success criterion.
o Agent prior knowledge of its environment.
o Best possible actions that an agent can perform.
o The sequence of percepts.
Performance measures
1. A performance measure embodies the criterion for success of an agent's behavior.
2. When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives.
3. This sequence of actions causes the environment to go through a sequence of states.
4. If the sequence is desirable, then the agent has performed well.
PEAS Representation
PEAS is a type of model on which an AI agent works. When we define an AI agent or
rational agent, we can group its properties under the PEAS representation model. It is
made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Examples:
1) Self Driving car
Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrian
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
2) Medical Diagnosis
Performance: Healthy patient, minimized cost
Environment: Patient, hospital, staff
Actuators: Tests, treatments
Sensors: Keyboard (entry of symptoms)
3) Vacuum Cleaner
Performance: Cleanliness, efficiency, battery life, security
Environment: Room, carpet, wood floor, various obstacles
Actuators: Wheels, brushes, vacuum extractor
Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor
4) Part-picking Robot
Performance: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arms, hand
Sensors: Camera, joint angle sensors
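A PEAS description is just structured data, so it can be recorded as a plain data structure. A minimal sketch, using the self-driving car example above; the class and field names are illustrative assumptions:

```python
# A hypothetical way to record a PEAS description in code.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # success criteria
    environment: list   # what surrounds the agent
    actuators: list     # how the agent acts
    sensors: list       # how the agent perceives

self_driving_car = PEAS(
    performance=["Safety", "Time", "Legal drive", "Comfort"],
    environment=["Roads", "Other vehicles", "Road signs", "Pedestrians"],
    actuators=["Steering", "Accelerator", "Brake", "Signal", "Horn"],
    sensors=["Camera", "GPS", "Speedometer", "Odometer", "Accelerometer", "Sonar"],
)
print(self_driving_car.sensors)
```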
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability :
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple reflex agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of the
current percept. Percept history is the history of all that an agent has perceived to date. The
agent function is based on the condition-action rule. A condition-action rule is a rule that maps
a state (i.e., a condition) to an action. If the condition is true, then the action is taken, else
not. This agent function only succeeds when the environment is fully observable. For simple
reflex agents operating in partially observable environments, infinite loops are often
unavoidable. It may be possible to escape from infinite loops if the agent can randomize its
actions.
Problems with simple reflex agents are :
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The table of condition-action rules is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.
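The randomization escape mentioned above can be sketched as follows; the two-square world and action names are illustrative assumptions, not part of any standard library:

```python
import random

# A deterministic reflex agent can loop forever in a partially observable
# world (e.g. bouncing between the same squares). Choosing a random move
# when no dirt is perceived is one way to break out of such loops.

def randomized_reflex_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # No dirt seen: pick a direction at random instead of a fixed rule.
    return random.choice(["Left", "Right"])
```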
Model-based reflex agents
It works by finding a rule whose condition matches the current situation. A model-based agent can
handle partially observable environments by the use of a model about the world. The agent has to keep track
of the internal state which is adjusted by each percept and that depends on the percept history. The current
state is stored inside the agent which maintains some kind of structure describing the part of the world which
cannot be seen.
Updating the state requires information about :
• how the world evolves independently from the agent, and
• how the agent’s actions affect the world.
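The internal-state bookkeeping described above can be sketched in the vacuum-cleaner world; the class design and string values are illustrative assumptions:

```python
# A model-based reflex agent that tracks the cleanliness of both squares,
# even though each percept only reveals the current one.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update the world model
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                      # everything known to be clean
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can stop acting once its model says both squares are clean, even though no single percept tells it that.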
Goal-based agents
These kinds of agents take decisions based on how far they are currently from their goal (a
description of desirable situations). Every action they take is intended to reduce the distance
from the goal. This allows the agent a way to choose among multiple possibilities, selecting
the one which reaches a goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible. They usually require
search and planning. A goal-based agent's behavior can easily be changed.
Utility-based agents
When there are multiple possible alternatives, utility-based agents are used to decide
which one is best. They choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not
enough. We may look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be taken
into consideration. Utility describes how “happy” the agent is. Because of the uncertainty in the world, a
utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real
number which describes the associated degree of happiness.
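Choosing the action that maximizes expected utility can be sketched as follows; the actions, outcome probabilities, and utility values are invented purely for illustration:

```python
# Expected utility of an action = sum over outcomes of (probability * utility).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "fast_route": [(0.7, 10), (0.3, -20)],   # quick but risky
    "safe_route": [(1.0, 4)],                # slower but certain
}

# Pick the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # safe_route (EU of fast_route is 1.0, of safe_route 4.0)
```

This captures the point above: the "fast" option has the single best outcome, but under uncertainty the agent rationally prefers the action whose expected utility is higher.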
Learning Agent :
A learning agent in AI is a type of agent that can learn from its past experiences; it has
learning capabilities. It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the environment
2. Critic: The learning element takes feedback from critics which describes how well the agent is doing with
respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action
4. Problem Generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.
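How the four components fit together can be sketched as a skeleton; all class and method names here are illustrative assumptions:

```python
# Skeleton of a learning agent with the four conceptual components above.

class LearningAgent:
    def __init__(self):
        self.rules = {}  # knowledge used by the performance element

    def performance_element(self, percept):
        # Select an external action from current knowledge.
        return self.rules.get(percept, self.problem_generator())

    def critic(self, percept, action, reward):
        # Feedback against a fixed performance standard.
        return reward

    def learning_element(self, percept, action, feedback):
        # Improve the performance element when feedback is positive.
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self):
        # Suggest an exploratory action to gain new experience.
        return "explore"
```

In one cycle the agent acts, the critic scores the result, and the learning element updates the rules the performance element will use next time.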
The Nature of Environments
An environment is everything in the world which surrounds the agent, but it is not a part of
the agent itself. An environment can be described as a situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with
something to sense and act upon. An environment is often said to be non-deterministic.
Features of Environment
An environment can have various features from the point of view of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
1. Fully observable vs Partially Observable:
1. If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise it is partially observable.
2. A fully observable environment is easy, as there is no need to maintain an internal state
to keep track of the history of the world.
3. If an agent has no sensors in an environment, then such an environment is called
unobservable.
2. Deterministic vs Stochastic:
1. If an agent's current state and selected action can completely determine the next state of the
environment, then such environment is called a deterministic environment.
2. A stochastic environment is random in nature and cannot be determined completely by an
agent.
3. In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
1. In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
2. However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions
4. Single-agent vs Multi-agent
1. If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
2. However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
3. The agent design problems in the multi-agent environment are different from single agent
environment.
5. Static vs Dynamic:
1. If the environment can change itself while an agent is deliberating then such environment is
called a dynamic environment else it is called a static environment.
2. Static environments are easy to deal because an agent does not need to continue looking at
the world while deciding for an action.
3. However for dynamic environment, agents need to keep looking at the world at each action.
4. Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.
6. Discrete vs Continuous:
1. If in an environment there are a finite number of percepts and actions that can be performed
within it, then such an environment is called a discrete environment else it is called
continuous environment.
2. A chess game comes under a discrete environment as there is a finite number of moves
that can be performed.
3. A self-driving car is an example of a continuous environment.
7. Known vs Unknown
1. Known and unknown are not actually features of the environment, but rather describe the
agent's state of knowledge needed to perform an action.
2. In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an action.
3. It is quite possible that a known environment to be partially observable and an Unknown
environment to be fully observable.
8. Accessible vs Inaccessible
1. If an agent can obtain complete and accurate information about the state's environment,
then such an environment is called an Accessible environment else it is called inaccessible.
2. An empty room whose state can be defined by its temperature is an example of an
accessible environment.
3. Information about an event on earth is an example of Inaccessible environment.
Rationality:
• Rationality is the ability of an agent to choose actions that maximize its expected
performance based on available knowledge and percepts.
Four Key Factors:
• Performance measure
• Agent’s prior knowledge
• Available actions
• Percept sequence
Autonomy and Rationality:
Autonomy: The ability to operate independently based on experience
Ideal Rational Agent:
• Learns
• Adapts
• Reduces reliance on initial programming
• Closing Thought: Rationality is context-aware, experience-driven, and performance-focused.
PART A
1. Differentiate intelligence and artificial intelligence. [ May/June 2018].
2. What are the applications of AI?
3. Define AI. List any two of its applications.
4. What is meant by Turing test?
5. Define Rational Agent.
6. List the various type of agent program.
7. Define Agent Function.
8. What is Rationality?
9. Give the differences between “Thinking – humanly” and “Thinking – rationally” in artificial
intelligence.
10. List the fields that form the basis for AI.
PART B
1. Explain the history of Artificial intelligence. [ May/June 2018].
2. Why would evolution tend to result in systems that act rationally? What goals are such systems
designed to achieve? [ May/June 2018].
3. For each of the following agents, develop a PEAS description of the task environment:
(I) Robot Soccer Player. [ May/June 2018].
(II) Internet-book-shopping agent. [ May/June 2018].
4. How can you define AI? Explain briefly about the overview of AI.
(b) Describe learning agent, its functionality with neat diagram. [Feb 2018]
5. Describe the nature of the environment with its characteristics.
(b) What is agent? Draw the diagram how an agent interacts with environment.
6. Describe the need of following disciplines that contributed ideas, viewpoints and techniques to AI:
(i) Philosophy. (ii) Mathematics. (iii) Neuroscience.
7.Explain in detail with a neat sketch, general learning agent and its components.
8. Describe the challenges in developing AI application.
(b) What is environment with respect to AI? Describe its properties.
9. (a) Give the PEAS description for the following activities and state its properties:
(i) Medical diagnosis. (ii) Interactive English tutor.
(b) Compare goal-based and utility-based agent.
10. Define the following words with suitable examples: (i) Intelligence. (ii) Artificial intelligence.
(b) Explain the history of artificial intelligence.