
Unit 1

The document provides an overview of Artificial Intelligence (AI), defining it as the capability of computers to perform tasks that typically require human intelligence. It discusses the differences between natural intelligence and AI, outlines key terminologies, branches, and techniques such as machine learning and natural language processing, and highlights the future applications of AI in various fields. Additionally, it covers the history of AI, characteristics of intelligent agents, and the properties of environments in which these agents operate.


UNIT 1:

INTRODUCTION: Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem-Solving Approach to Typical AI Problems.

Difference between Intelligence and Artificial Intelligence

INTELLIGENCE vs. ARTIFICIAL INTELLIGENCE

• Intelligence is a natural process; Artificial Intelligence is programmed by humans.
• Intelligence is hereditary; Artificial Intelligence is not.
• Intelligence requires knowledge; Artificial Intelligence requires a knowledge base (KB) and electricity to generate output.
• No single human is an expert, and we may get better solutions from other humans; expert systems are built that aggregate many people's experience and ideas.

Definition of AI:

The study of how to make computers do things at which, at the moment, people are better. "Artificial Intelligence is the ability of a computer to act like a human being."

• Systems that think like humans

• Systems that act like humans

• Systems that think rationally

• Systems that act rationally

OR

An area of computer science that emphasizes the creation of intelligent machines that work and react like humans. AI is now an essential part of the technology industry. The goal of AI is to create algorithms and systems that can learn from data, reason, make predictions, and take actions.
Some terminologies:

(a) Intelligence - The ability to apply knowledge in order to perform better in an environment.

(b) Artificial Intelligence - The study and construction of agent programs that perform well in a given environment, for a given agent architecture.

(c) Agent - An entity that takes action in response to percepts from an environment.

(d) Rationality - The property of a system which does the "right thing" given what it knows.

(e) Logical Reasoning - A process of deriving new sentences from old ones, such that the new sentences are necessarily true if the old ones are true.

Branches Of Artificial Intelligence


Artificial Intelligence can be used to solve real-world problems by implementing the following processes/techniques:

1. Machine Learning
2. Deep Learning
3. Natural Language Processing
4. Robotics
5. Expert Systems
6. Fuzzy Logic

Figure 2: Types of AI

1. Machine Learning:

Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed.

Under Machine Learning there are three categories:

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Supervised learning, as the name indicates, has the presence of a supervisor as a teacher. Basically, supervised learning is when we teach or train the machine using data that is well labelled.

Supervised learning is classified into two categories of algorithms:


 Classification: A classification problem is when the output variable is a category, such as "red" or "blue", "disease" or "no disease".
 Regression: A regression problem is when the output variable is a real value, such as "dollars" or "weight".
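As a minimal illustration of supervised classification, the sketch below labels a query point with the label of its nearest labelled training point (1-nearest-neighbour). The tiny dataset and the "red"/"blue" labels are made up for illustration.

```python
# Minimal 1-nearest-neighbour classifier: supervised classification on a
# tiny hand-made labelled dataset (hypothetical points and labels).

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Labelled examples: (features, label)
train = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
         ((5.0, 5.0), "blue"), ((4.8, 5.2), "blue")]

print(nearest_neighbour(train, (1.1, 0.9)))  # near the "red" cluster
print(nearest_neighbour(train, (5.1, 4.9)))  # near the "blue" cluster
```

The same idea with a real-valued output instead of a label would be a regression problem.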

Unsupervised learning is the training of a machine using information that is neither classified nor labelled, allowing the algorithm to act on that information without guidance.

Unsupervised learning is classified into two categories of algorithms:


 Clustering: A clustering problem is where you want to discover the inherent
groupings in the data, such as grouping customers by purchasing behavior.
 Association: An association rule learning problem is where you want to
discover rules that describe large portions of your data, such as people that
buy X also tend to buy Y.
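A single step of k-means illustrates the clustering idea: points are grouped by their nearest centre with no labels involved. The 1-D data and starting centres below are invented for the sketch.

```python
# One k-means step (hypothetical 1-D data): assign each point to the
# nearest of two centres, then recompute each centre as its cluster mean.

def kmeans_step(points, centres):
    clusters = {c: [] for c in centres}
    for p in points:
        nearest = min(centres, key=lambda c: abs(p - c))
        clusters[nearest].append(p)
    # New centre = mean of the points assigned to it (skip empty clusters)
    return [sum(ps) / len(ps) for c, ps in clusters.items() if ps]

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centres = [0.0, 10.0]
print(kmeans_step(points, centres))  # centres move toward the two groups
```

Repeating this step until the centres stop moving yields the usual k-means clustering algorithm.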

Deep Learning:

Deep Learning is the process of implementing neural networks on high-dimensional data to gain insights and form solutions. Deep Learning is an advanced field of Machine Learning that can be used to solve more advanced problems. It is the logic behind the face-verification algorithm on Facebook, self-driving cars, virtual assistants like Siri and Alexa, and so on.

Natural Language Processing:

Natural Language Processing (NLP) refers to the science of drawing insights from natural human language in order to communicate with machines and grow businesses.

Twitter uses NLP to filter out terroristic language in tweets, and Amazon uses NLP to understand customer reviews and improve the user experience.
Robotics

Robotics is a branch of Artificial Intelligence which focuses on the design and application of robots. AI robots are artificial agents acting in a real-world environment to produce results by taking accountable actions.

Sophia the humanoid is a good example of AI in robotics.

Fuzzy Logic

Fuzzy logic is a computing approach based on the principle of "degrees of truth" instead of the usual modern computer logic, which is Boolean in nature.

Fuzzy logic is used in the medical field to solve complex problems that involve decision making. It is also used in automatic gearboxes, vehicle environment control, and so on.
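The "degrees of truth" idea can be sketched with a membership function that returns a value in [0, 1] rather than a strict True/False. The temperature thresholds below are arbitrary illustrative numbers.

```python
# Fuzzy-logic sketch: membership in the set "warm" is a degree in [0, 1]
# rather than a Boolean. The 15/25 degree thresholds are hypothetical.

def warm_membership(temp_c):
    """Degree to which a temperature counts as 'warm' (piecewise linear)."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10.0  # linear ramp between 15 and 25 degrees

print(warm_membership(10))  # 0.0  (definitely not warm)
print(warm_membership(20))  # 0.5  (partially warm)
print(warm_membership(30))  # 1.0  (fully warm)
```

A Boolean system would be forced to call 20 degrees either warm or not warm; the fuzzy version records that it is warm to degree 0.5.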

FUTURE OF ARTIFICIAL INTELLIGENCE

• Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day ferry us from place to place.
• Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive-analysis sensors keep equipment running smoothly.
• Healthcare: In the comparatively AI-nascent field of healthcare, diseases are
more quickly and accurately diagnosed, drug discovery is sped up and
streamlined, virtual nursing assistants monitor patients and big data analysis
helps to create a more personalized patient experience.
• Education: Textbooks are digitized with the help of AI, early-stage virtual tutors
assist human instructors and facial analysis gauges the emotions of students to
help determine who’s struggling or bored and better tailor the experience to their
individual needs.
• Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs the natural language abilities of Automated Insights to produce 3,700 earnings-report stories per year, nearly four times more than in the recent past.
• Customer Service: Last but hardly least, Google is working on an AI assistant
that can place human-like calls to make appointments at, say, your neighborhood
hair salon. In addition to words, the system understands context and nuance.

History of Artificial Intelligence:

Artificial Intelligence is not a new term or a new technology for researchers; it is much older than you might imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology. The following are some milestones in the history of AI, tracing the journey from its origins to present-day developments.
Maturation of Artificial Intelligence (1943-1952)

o Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician who pioneered machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test, now called the Turing test, to check a machine's ability to exhibit intelligent behaviour equivalent to human intelligence.

The birth of Artificial Intelligence (1952-1956)

o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of 52 mathematical theorems and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was established as an academic field.

Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974):

o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980):


o The period between 1974 and 1980 was the first AI winter. An AI winter refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

o Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.

The second AI winter (1987-1993):

o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to high costs and inefficient results. Expert systems such as XCON had proved very expensive to maintain.

The emergence of intelligent agents (1993-2011):

o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a world chess champion.
o Year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.

Deep learning, big data and artificial general intelligence (2011-present)

o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".
o Year 2018: "Project Debater" from IBM debated complex topics with two master debaters and performed extremely well.
o Google demonstrated an AI program, "Duplex", a virtual assistant which made a hairdresser appointment over the phone; the person on the other end did not notice she was talking to a machine.

Now AI has developed to a remarkable level.

Intelligent Agent:

An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals.

CHARACTERISTICS OF INTELLIGENT AGENTS:

 Situatedness: The agent receives some form of sensory input from its environment, and it performs some action that changes its environment in some way. Examples of environments: the physical world and the Internet.

 Autonomy: The agent can act without direct intervention by humans or other agents, and it has control over its own actions and internal state.

 Adaptivity: The agent is capable of
(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., being proactive) when appropriate; and
(3) learning from its own experience, its environment, and interactions with others.

 Sociability: The agent is capable of interacting in a peer-to-peer manner with other agents or humans.

AGENTS AND ITS TYPES:

 An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.

An agent can be:

 Human agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
 Robotic agent: A robotic agent can have cameras and infrared range finders as sensors, and various motors as actuators.
 Software agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

Hence the world around us is full of agents, such as thermostats, cell phones, and cameras, and even we ourselves are agents. Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.

PROPERTIES OF ENVIRONMENT:

An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself. An environment can be described as a situation in which an agent is present.

The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.

Fully observable vs Partially observable: If an agent's sensors can access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable. A fully observable environment is easy to deal with, as there is no need to maintain an internal state to keep track of the history of the world.

If an agent has no sensors at all, the environment is called unobservable.

Example: in chess, the board and the opponent's moves are fully observable. In driving, what is around the next bend is not observable, so the environment is partially observable.

Deterministic vs Stochastic:

• If an agent's current state and selected action can completely determine the next state of the environment, then such an environment is called a deterministic environment. A stochastic environment is random in nature and cannot be completely determined by the agent.

• In a deterministic, fully observable environment, an agent does not need to worry about uncertainty.

Episodic vs Sequential:

 In an episodic environment, there is a series of one-shot actions, and only the current percept is required for each action.
 However, in a sequential environment, an agent requires memory of past actions to determine the next best action.

Single-agent vs Multi-agent:

 If only one agent is involved in an environment and operates by itself, then such an environment is called a single-agent environment.
 However, if multiple agents are operating in an environment, then such an environment is called a multi-agent environment.
 The agent design problems in a multi-agent environment are different from those in a single-agent environment.

Static vs Dynamic:

 If the environment can change while an agent is deliberating, then such an environment is called a dynamic environment; otherwise it is called a static environment.
 Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
 However, in a dynamic environment, agents need to keep looking at the world before each action.
 Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.

Discrete vs Continuous

• If there are a finite number of percepts and actions that can be performed within an environment, then such an environment is called a discrete environment; otherwise it is called a continuous environment.

• A chess game is a discrete environment, as there is a finite number of moves that can be performed.

• A self-driving car is an example of a continuous environment.

Known vs Unknown

• Known and unknown are not actually features of an environment; they describe the agent's state of knowledge for performing an action.

• In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform an action.

Accessible vs. Inaccessible:

If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an accessible environment; otherwise it is called inaccessible.

• An empty room whose state can be defined by its temperature is an example of an accessible environment. Information about an event on Earth is an example of an inaccessible environment.

PEAS: Performance measure, Environment, Actuators, Sensors. PEAS is a representation system for AI agents which describes an agent in terms of its performance measure, environment, actuators, and sensors.

Performance: The output which we get from the agent. All the necessary results
that an agent gives after processing comes under its performance.
Environment: All the surrounding things and conditions of an agent fall in this
section. It basically consists of all the things under which the agents work.

Actuators: The devices, hardware or software through which the agent performs
any actions or processes any information to produce a result are the actuators of
the agent.

Sensors: The devices through which the agent observes and perceives its
environment are the sensors of the agent.
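A PEAS description can be written down as a simple record. The taxi-driver entries below are a commonly used illustration, not taken from any real system.

```python
# PEAS description as a plain record; the taxi-driver values are the
# standard textbook-style illustration (hypothetical, not a real system).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # what counts as doing well
    environment: list  # what surrounds the agent
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer"],
)
print(taxi.performance)
```

Filling in the four fields before designing an agent makes the task environment explicit.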
Rational Agent - A system is rational if it does the "right thing" given what it knows. Doing actions in order to modify future percepts - sometimes called information gathering - is an important part of rationality.

• A rational agent should be autonomous - it should learn from its own prior knowledge (experience).

Characteristics of a Rational Agent:

▪ The agent's prior knowledge of the environment.


▪ The performance measure that defines the criterion of success.
▪ The actions that the agent can perform.
▪ The agent's percept sequence to date.

For every possible percept sequence, a rational agent should select an action that
is expected to maximize its performance measure, given the evidence provided
by the percept sequence and whatever built-in knowledge the agent has.

Omniscient agent: An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality.

An ideal rational agent perceives and then acts, and it has a higher performance measure. E.g. crossing a road: perception of both sides occurs first, and only then the action. By contrast, a clock does not view its surroundings; no matter what happens outside, it works based on its inbuilt program.

TYPES OF AGENTS:

Agents can be grouped into the following classes based on their degree of perceived intelligence and capability:

• Simple Reflex Agents


• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent

Simple Reflex Agents:

• Simple reflex agents are the simplest agents. These agents make decisions on the basis of the current percept and ignore the rest of the percept history (past state).

• These agents only succeed in a fully observable environment.

• A simple reflex agent does not consider any part of the percept history during its decision and action process.

• The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. An example is a room-cleaner agent, which works only if there is dirt in the room.

• Problems with the simple reflex agent design approach:

 They have very limited intelligence.
 They do not have knowledge of non-perceptual parts of the current state.
 Their rule tables are mostly too big to generate and store.
 They are not adaptive to changes in the environment.

Condition-Action Rule − It is a rule that maps a state (condition) to an action. Ex: if car-in-front-is-braking then initiate-braking.
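The condition-action idea can be sketched for the room-cleaner example: the action depends only on the current percept (location, dirt status), with no percept history. The two-square "A"/"B" layout is the usual vacuum-world illustration.

```python
# Simple reflex vacuum agent: condition-action rules over the current
# percept only (two-square vacuum world, squares "A" and "B").

def reflex_vacuum_agent(location, is_dirty):
    """Map the current percept directly to an action."""
    if is_dirty:
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent("A", True))   # Suck
print(reflex_vacuum_agent("A", False))  # Right
print(reflex_vacuum_agent("B", False))  # Left
```

Note that nothing in the function depends on what the agent perceived earlier, which is exactly the limitation listed above.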
Model Based Reflex Agents:

 The model-based agent can work in a partially observable environment and track the situation.
 A model-based agent has two important factors:
 Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
 Internal state: a representation of the current state based on percept history.
 These agents have a model, "which is knowledge of the world", and they perform actions based on the model.
 Updating the agent's state requires information about how the world evolves and how the agent's actions affect the world.
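A minimal sketch of the model-and-internal-state idea, reusing the two-square vacuum world: the agent remembers the last known dirt status of each square and uses a simple model (dirt stays until sucked) to act on squares it cannot currently see. The design is illustrative, not a standard implementation.

```python
# Model-based reflex agent sketch: an internal state is updated from the
# percept and a simple (hypothetical) model of how the world evolves.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}  # internal state: last known dirt status per square

    def update_state(self, percept):
        location, is_dirty = percept
        self.state[location] = is_dirty  # model: dirt stays until sucked

    def act(self, percept):
        self.update_state(percept)
        location, is_dirty = percept
        if is_dirty:
            self.state[location] = False  # model predicts Suck cleans it
            return "Suck"
        # Move toward a square the model still believes is dirty
        for square, dirty in self.state.items():
            if dirty and square != location:
                return "Right" if square > location else "Left"
        return "NoOp"

agent = ModelBasedAgent()
print(agent.act(("A", True)))   # Suck
print(agent.act(("A", False)))  # NoOp: no square is known to be dirty
```

Unlike the simple reflex agent, the decision here can depend on squares the agent is not currently perceiving.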
Goal Based Agents:

 Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
 The agent needs to know its goal, which describes desirable situations.
 Goal-based agents expand the capabilities of model-based agents by having "goal" information.
 They choose an action so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Utility Based Agents:

 These agents are similar to goal-based agents but provide an extra component of utility measurement (a "level of happiness") which distinguishes them by providing a measure of success at a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve them. The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action.
 The utility function maps each state to a real number to check how efficiently each action achieves the goals.
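The utility-function idea can be sketched as choosing the action whose resulting state scores highest. The route-choice scenario, the two routes, and the cost weights below are all made up for illustration.

```python
# Utility-based choice sketch: pick the action whose resulting state has
# the highest utility. Routes, times, tolls and weights are hypothetical.

def choose_action(actions, result, utility):
    """Pick the action maximizing utility(result(action))."""
    return max(actions, key=lambda a: utility(result(a)))

routes = {"highway": {"time": 20, "toll": 5},
          "backroad": {"time": 35, "toll": 0}}

def result(action):
    return routes[action]

def utility(state):
    # Higher utility = lower weighted cost; tolls weighted twice as heavily
    return -(state["time"] + 2 * state["toll"])

print(choose_action(list(routes), result, utility))  # highway
```

Both routes reach the goal; the utility function is what lets the agent prefer one way of reaching it over another.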
Learning Agents:

 A learning agent in AI is a type of agent which can learn from its past experiences; that is, it has learning capabilities.
 It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
 A learning agent has mainly four conceptual components, which are:
a) Learning element: It is responsible for making improvements by learning from the environment.
b) Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c) Performance element: It is responsible for selecting external actions.
d) Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze performance, and look for new ways to improve performance.
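Three of the four components can be sketched as a toy loop: the performance element selects an action, the critic scores it, and the learning element adjusts an internal parameter. The problem generator is omitted, and all numbers are purely illustrative.

```python
# Toy learning-agent loop (illustrative only): the critic's feedback
# drives the learning element, which tunes a parameter used by the
# performance element. The problem generator is omitted for brevity.

class LearningAgent:
    def __init__(self):
        self.threshold = 0.0   # knowledge the learning element improves

    def performance_element(self, percept):
        """Select an external action from the current percept."""
        return "act" if percept > self.threshold else "wait"

    def critic(self, action, reward):
        """Score the behaviour against a fixed performance standard."""
        return reward if action == "act" else 0.0

    def learning_element(self, feedback):
        # Lower the threshold after good feedback, raise it after bad
        self.threshold += -0.1 if feedback > 0 else 0.1

agent = LearningAgent()
action = agent.performance_element(0.5)
feedback = agent.critic(action, reward=1.0)
agent.learning_element(feedback)
print(action, round(agent.threshold, 1))  # act -0.1
```

Even in this toy form, the division of labour matches the description above: acting, judging, and improving are separate components.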

PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS:

Problem-solving agents:

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.

Some of the problems most popularly solved with the help of artificial intelligence are:

1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem

Problem Searching:

In general, searching refers to finding the information one needs. Searching is the most commonly used technique of problem solving in artificial intelligence.

A searching algorithm helps us to search for the solution of a particular problem.

Problem: Problems are the issues which come across any system. A solution is
needed to solve that particular problem.

Steps: Solve Problem Using Artificial Intelligence

The process of solving a problem consists of five steps. These are:


Defining the problem: The definition of the problem must be stated precisely. It should contain the possible initial as well as final situations which should result in an acceptable solution.

Analyzing the problem: The problem and its requirements must be analyzed, as a few features can have an immense impact on the resulting solution.

Identification of solutions: This phase generates a reasonable number of solutions to the given problem within a particular range.

Choosing a solution: From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.

Implementation: After choosing the best solution, it is implemented.

Measuring problem-solving performance:

We can evaluate an algorithm’s performance in four ways:

Completeness: Is the algorithm guaranteed to find a solution when there is one?


Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?

Search Algorithm Terminologies

• Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:

1. Search space: The search space represents the set of possible solutions which a system may have.
2. Start state: The state from which the agent begins the search.
3. Goal test: A function which observes the current state and returns whether the goal state is achieved or not.

 Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition model: A description of what each action does; it can be represented as a transition model.
 Path cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal solution: A solution that has the lowest cost among all solutions.
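The terminology above maps naturally onto a small class. The three-node route-finding instance (the graph and its step costs) is made up purely to show where each component lives.

```python
# Search-problem components as a small class. The "A"/"B"/"C" graph and
# its step costs are a hypothetical instance for illustration.

class SearchProblem:
    def __init__(self, start, goal, graph, costs):
        self.start, self.goal = start, goal
        self.graph = graph    # transition model: state -> reachable states
        self.costs = costs    # step costs: (state, next_state) -> cost

    def actions(self, state):
        """All states reachable from `state` (the available actions)."""
        return self.graph.get(state, [])

    def goal_test(self, state):
        """Does `state` satisfy the goal?"""
        return state == self.goal

    def path_cost(self, path):
        """Sum of step costs along a path (a list of states)."""
        return sum(self.costs[(a, b)] for a, b in zip(path, path[1:]))

problem = SearchProblem("A", "C",
                        graph={"A": ["B"], "B": ["C"]},
                        costs={("A", "B"): 1, ("B", "C"): 2})
print(problem.actions("A"), problem.goal_test("C"),
      problem.path_cost(["A", "B", "C"]))
```

Any of the search algorithms discussed later operate on exactly this interface: actions, goal test, and path cost.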

Example Problems:

TOY PROBLEM:
 A toy problem is intended to illustrate or exercise various problem-solving methods.
 It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about.
Toy Problems:

The first example we examine is the vacuum world. This can be formulated as a
problem as follows:

• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving
Left in the leftmost square, moving Right in the rightmost square, and Sucking in a
clean square have no effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
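The formulation above can be made executable for the two-square version. A state is written here as (agent_location, dirt_in_A, dirt_in_B); this encoding is one convenient choice, not the only one.

```python
# Vacuum-world transition model and goal test for the two-square case.
# A state is (agent_location, dirt_in_A, dirt_in_B).

def transition(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        if loc == "A":
            return ("A", False, dirt_b)
        return ("B", dirt_a, False)
    return state  # unknown action: no effect

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

s = ("A", True, True)
s = transition(s, "Suck")    # cleans A
s = transition(s, "Right")   # move to B
s = transition(s, "Suck")    # cleans B
print(s, goal_test(s))  # ('B', False, False) True
```

Note the "no effect" cases from the formulation: moving Left in the leftmost square (or Sucking a clean square) simply returns an equivalent state.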
8-Puzzle Problem:

States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.

Initial state: Any state can be designated as the initial state.

Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.

Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure 3.4, the resulting state has
the 5 and the blank switched.

Goal test: This checks whether the state matches the goal configuration shown in
Figure.

Path cost: Each step costs 1, so the path cost is the number of steps in the path.
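The transition model can be sketched by encoding a state as a 9-tuple read row by row, with 0 marking the blank; this flat encoding is one common convention, not the only one.

```python
# 8-puzzle transition model sketch: a state is a 9-tuple read row by
# row, 0 marks the blank, and an action slides the blank one cell.

def move_blank(state, action):
    """Return the successor state, or the same state if the move is illegal."""
    i = state.index(0)
    row, col = divmod(i, 3)
    delta = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}
    dr, dc = delta[action]
    r, c = row + dr, col + dc
    if not (0 <= r < 3 and 0 <= c < 3):
        return state  # move would leave the board: no effect
    j = r * 3 + c
    s = list(state)
    s[i], s[j] = s[j], s[i]  # swap the blank with the neighbouring tile
    return tuple(s)

start = (1, 2, 3,
         4, 0, 5,
         6, 7, 8)
print(move_blank(start, "Left"))  # the 4 and the blank swap places
```

Applying Left to this state swaps the blank with the 4, mirroring the Figure 3.4 example in the text.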

8-Queens Problem:
The goal of the 8-queens problem is to place eight queens on a chessboard such
that no queen attacks any other. (A queen attacks any piece in the same row,
column or diagonal.)

• States: Any arrangement of 0 to 8 queens on the board is a state.


• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
•Transition model: Returns the board with a queen added to the specified
square.
• Goal test: 8 queens are on the board, none attacked.
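The goal test can be sketched compactly with the common one-queen-per-column encoding: a state is a tuple where entry i is the row of the queen in column i. (This encoding already rules out same-column attacks.)

```python
# 8-queens goal test with one queen per column: cols[i] is the row of
# the queen in column i, so only row and diagonal attacks remain.

def attacks(c1, r1, c2, r2):
    """Two queens attack iff they share a row or a diagonal."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(cols):
    return len(cols) == 8 and not any(
        attacks(i, cols[i], j, cols[j])
        for i in range(8) for j in range(i + 1, 8))

solution = (0, 4, 7, 5, 2, 6, 1, 3)        # a known 8-queens solution
print(goal_test(solution))                  # True
print(goal_test((0, 1, 2, 3, 4, 5, 6, 7)))  # False: all on one diagonal
```

Checking every pair of columns once, as above, is enough because attack is symmetric.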

Water Jug Problem:


Consider the given problem and describe the operators involved in it. Consider the water jug problem: you are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?

Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
State Space Representation: we will represent a state of the problem as a tuple
(x, y) where x represents the amount of water in the 4-gallon jug and y represents
the amount of water in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.

To solve this we have to make some assumptions not mentioned in the problem.
They are:
• We can fill a jug from the pump.
• We can pour water out of a jug to the ground.
• We can pour water from one jug to another.
• There is no measuring device available.

Operators - we must define a set of operators that will take us from one state to
another.
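The operators listed above can be implemented directly and the state space explored with breadth-first search. This is one straightforward way to solve the problem, not the only one; the goal is taken as 2 gallons in the 4-gallon jug.

```python
# Breadth-first search over water-jug states (x, y): x gallons in the
# 4-gallon jug, y in the 3-gallon jug, using the operators listed above.
from collections import deque

def successors(state):
    x, y = state
    return {
        (4, y), (x, 3),                          # fill either jug from the pump
        (0, y), (x, 0),                          # empty either jug on the ground
        (x - min(x, 3 - y), y + min(x, 3 - y)),  # pour 4-gal jug into 3-gal jug
        (x + min(y, 4 - x), y - min(y, 4 - x)),  # pour 3-gal jug into 4-gal jug
    }

def solve(start=(0, 0)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                  # goal: 2 gallons in the 4-gal jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

path = solve()
print(path)  # e.g. (0,0) -> (0,3) -> (3,0) -> (3,3) -> (4,2) -> (0,2) -> (2,0)
```

Because breadth-first search expands states level by level, the returned path is a shortest solution: six operator applications from (0, 0).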
