AI - Unit 1,2

The document provides an overview of Artificial Intelligence (AI), defining it as a branch of computer science focused on creating intelligent machines that can mimic human behavior and decision-making. It discusses the importance of AI, its goals, advantages, and disadvantages, as well as its historical development and current state, highlighting significant milestones and advancements in the field. Key areas of AI development mentioned include machine learning, reinforcement learning, deep learning, and natural language processing.

Unit - 1

Introduction

1.1 Introduction to Artificial Intelligence

What is Artificial Intelligence?

In today's world, technology is growing very fast, and we come into contact with new technologies every day.

One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It spans a variety of subfields, ranging from the general to the specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is one of the most fascinating and universal fields of computer science, with great scope in the future. AI aims to make a machine work like a human.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

"A branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and are able to make decisions."

Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem solving. With Artificial Intelligence you do not need to pre-program a machine for every task; instead, you can create a machine with learning algorithms that works with its own intelligence, and that is the power of AI.

It is believed that AI is not a new idea: according to Greek myth, there were mechanical men in ancient times which could work and behave like humans.

Why Artificial Intelligence?

Before learning about Artificial Intelligence, we should know why AI matters and why we should learn it. Following are some main reasons to learn about AI:

● With the help of AI, you can create software or devices which can solve real-world problems easily and accurately, in areas such as health, marketing, and traffic.

● With the help of AI, you can create your own personal virtual assistant, such as Cortana, Google Assistant, or Siri.

● With the help of AI, you can build robots which can work in environments where human survival would be at risk.

● AI opens a path for other new technologies, new devices, and new opportunities.

Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. Establish an intelligent connection of perception and action
4. Build a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic
5. Create a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

What Comprises Artificial Intelligence?

Artificial Intelligence is not just a part of computer science; it is vast and draws on many other disciplines. To create AI, we should first understand how intelligence is composed: intelligence is an intangible property of our brain which is a combination of reasoning, learning, problem solving, perception, language understanding, etc.

To achieve these capabilities in a machine or software, Artificial Intelligence requires the following disciplines:

● Mathematics

● Biology

● Psychology

● Sociology

● Computer Science

● Neuroscience

● Statistics

Advantages of Artificial Intelligence


Following are some main advantages of Artificial Intelligence:

● High accuracy with fewer errors: AI machines or systems are less prone to errors and achieve high accuracy, as they take decisions based on prior experience or information.

● High speed: AI systems can make decisions at very high speed; because of this, AI systems can beat a chess champion at chess.

● High reliability: AI machines are highly reliable and can perform the same action many times with high accuracy.

● Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.

● Digital assistance: AI can provide digital assistance to users; for example, AI technology is currently used by various e-commerce websites to show products matching customer requirements.

● Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars which can make our journeys safer and hassle-free, facial recognition for security purposes, and natural language processing to communicate with humans in human language.

Disadvantages of Artificial Intelligence

Every technology has some disadvantages, and the same goes for Artificial Intelligence. Despite being such an advantageous technology, it still has some disadvantages which we need to keep in mind while creating an AI system. Following are the disadvantages of AI:

● High cost: The hardware and software requirements of AI are very costly, as AI systems require a lot of maintenance to meet current-world requirements.

● Can't think outside the box: Even though we are making smarter machines with AI, they still cannot work outside the box; a robot will only do the work for which it is trained or programmed.

● No feelings and emotions: An AI machine can be an outstanding performer, but it does not have feelings, so it cannot form an emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.

● Increased dependency on machines: As technology advances, people are becoming more dependent on devices, and some worry they are losing their mental capabilities.

● No original creativity: Humans are creative and can imagine new ideas; AI machines cannot match this power of human intelligence and cannot be truly creative and imaginative.

Key takeaway

AI is one of the most fascinating and universal fields of computer science, with great scope in the future. AI aims to make a machine work like a human.

1.2 Foundations and History of Artificial Intelligence

Artificial Intelligence is not a new word and not a new technology for researchers. This technology is much older than you might imagine: there are even myths of mechanical men in ancient Greek and Egyptian legends. Following are some milestones in the history of AI which trace the journey from the beginnings of AI to the present day.

Maturation of Artificial Intelligence (1943-1952)

o Year 1943: The first work which is now recognized as AI was done
by Warren McCulloch and Walter Pitts in 1943. They proposed a
model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for
modifying the connection strength between neurons. His rule is
now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered
machine learning in 1950. Turing published "Computing Machinery
and Intelligence", in which he proposed a test that can check a
machine's ability to exhibit intelligent behavior equivalent to
human intelligence, now called the Turing test.

The birth of Artificial Intelligence (1952-1956)

o Year 1955: Allen Newell and Herbert A. Simon created the first
artificial intelligence program, named "Logic Theorist". This
program proved 38 of 52 mathematics theorems and found new and
more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by
American computer scientist John McCarthy at the Dartmouth
Conference. For the first time, AI was established as an academic field.

At that time high-level computer languages such as FORTRAN, LISP, and
COBOL were invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)

o Year 1966: Researchers emphasized developing algorithms which
could solve mathematical problems. Joseph Weizenbaum created
the first chatbot in 1966, named ELIZA.
o Year 1972: The first intelligent humanoid robot, named WABOT-1,
was built in Japan.

The first AI winter (1974-1980)

o The period between 1974 and 1980 was the first AI winter. "AI
winter" refers to a period when computer scientists dealt with a
severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence
decreased.

A boom of AI (1980-1987)

o Year 1980: After the AI winter, AI came back with "expert
systems", programs that emulate the decision-making ability of a
human expert.
o In 1980, the first national conference of the American Association
for Artificial Intelligence (AAAI) was held at Stanford University.

The second AI winter (1987-1993)

o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due
to high costs and disappointing results. Expert systems such as
XCON proved very expensive to maintain.

The emergence of intelligent agents (1993-2011)

o Year 1997: In 1997, IBM's Deep Blue beat world chess champion
Garry Kasparov, becoming the first computer to beat a reigning
world chess champion.
o Year 2002: For the first time, AI entered the home in the form of
Roomba, a vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies
like Facebook, Twitter, and Netflix started using AI.

Deep learning, big data and artificial general intelligence (2011-present)

o Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in
which it had to answer complex questions and riddles. Watson
proved that it could understand natural language and solve tricky
questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now",
which provided information to the user as predictions.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a
competition in the famous "Turing test".
o Year 2018: IBM's "Project Debater" debated complex topics with
two master debaters and performed extremely well.
o Google demonstrated an AI program, "Duplex", a virtual assistant
which booked a hairdresser's appointment over the phone, and the
person on the other side did not notice that she was talking to a
machine.

Now AI has developed to a remarkable level. Concepts such as deep
learning, big data, and data science are booming. Nowadays companies like
Google, Facebook, IBM, and Amazon are working with AI and creating
amazing devices. The future of Artificial Intelligence is inspiring and
will bring even higher intelligence.

Key takeaway
Artificial Intelligence is not a new word and not a new technology for
researchers. This technology is much older than you might imagine: there
are even myths of mechanical men in ancient Greek and Egyptian legends.

1.3 State of the Art

Researchers now have access to new tools that can help them achieve critical
objectives, and these technologies are excellent starting points in and of
themselves. The following are some specific domains in which notable results
have been achieved in recent years:

● Machine learning;

● Reinforcement learning;

● Deep learning;

● Natural language processing.

Machine learning

Machine learning is a subtype of artificial intelligence that employs statistical
approaches to enable machines to learn from data without being explicitly
programmed. This is referred to as 'training' a 'model' with a learning
'algorithm', which improves performance on a given task over time. Researchers
have been inspired to push further as a result of their triumphs in this field.
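The idea of training a model with a learning algorithm can be illustrated with a minimal sketch: fitting a straight line to data by gradient descent. This is a generic illustration of the training loop, not any specific system from the text; the data and learning rate are made-up examples.

```python
# Minimal sketch of "training a 'model' with a learning 'algorithm'":
# fit y = w*x + b to example data by gradient descent.

def train_linear_model(data, lr=0.01, epochs=1000):
    """Learn a weight w and bias b that minimise squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        # gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The model "absorbs" the pattern y = 2x + 1 from examples alone.
points = [(x, 2 * x + 1) for x in range(10)]
w, b = train_linear_model(points)
print(round(w, 2), round(b, 2))  # should be close to 2.0 and 1.0
```

With each pass over the data, the parameters move a small step against the error gradient; performance on the task improves over time, which is exactly the sense of 'training' used above.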

The capacity of machines to learn how to create molecules is one of these
achievements. Using a system made up of three neural networks and a search-tree
technique called 'Monte Carlo', it was feasible to train computers on roughly
12.4 million reactions, solving various retrosynthetic analyses. This process is
far faster than current computerised molecular-synthesis planning: it solves
more than 80% of a single-molecule test set within a time limit of 5 seconds
per molecule.

Research also continues on improving techniques for choosing hyperparameters,
the fixed parameters given to the learning algorithm as starting values, and
for designing neural network architectures. By reducing the complexity and size
of the computation, new algorithms can help to maximise network performance.
LEAF (Learning Evolutionary AI Framework) is an example of this, since it uses
these methods to precisely optimise hyperparameters and network designs by
welding together smaller, more effective networks.

Reinforcement Learning (RL)

Reinforcement learning is a branch of machine learning that deals with software
agents that learn 'goal-oriented' behaviour by trial and error in environments
that reward their actions; the agent's mapping from situations to actions is
called its 'policy'.
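The reward-driven learning loop can be sketched with tabular Q-learning on a tiny corridor world. This is a generic illustration of the idea, not the RND or AlphaStar systems discussed below; the world, rewards, and learning constants are made-up examples.

```python
import random

# Toy sketch of reward-driven "policy" learning: tabular Q-learning on a
# 5-cell corridor whose rightmost cell gives reward +1.

N_STATES = 5           # cells 0..4; the goal sits in cell 4
ACTIONS = [1, -1]      # move right / move left

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    random.seed(0)     # fixed seed so the sketch is repeatable
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # the Q-learning update rule
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
# After training, the greedy policy should move right (+1) from every cell.
policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told how to reach the goal; it discovers the goal-oriented behaviour purely from the reward signal, which is the defining trait of RL described above.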

This is the field that has piqued the interest of researchers the most in the
recent decade. In 2018, OpenAI, an artificial intelligence research organisation
dedicated to promoting and creating friendly AI, achieved significant results in
the game Montezuma's Revenge. A technique called Random Network Distillation
(RND) was used to achieve superhuman performance by encouraging the RL agent
to explore unanticipated states; this strategy far outperformed the game's
other AI algorithms.

This is only one of a number of such results. DeepMind's AlphaStar is another
AI worth considering. It used multi-agent training to defeat a world-class
professional player in the real-time strategy game StarCraft 2. It began by
having agents compete against one another, allowing it to learn about the vast
strategic space; later, a new agent was created that integrated the best
methods the earlier agents had devised. Separately, multiple agents that learnt
independently and operated together achieved human-level performance in
Quake 3 Arena.

Deep Learning

Deep learning, another type of machine learning, is inspired by the action of
neurons in the brain and learns to discern complicated patterns from training
data using algorithms that are mostly statistical calculations. The term 'deep'
alludes to the large number of layers of neurons that these models stack,
allowing for the acquisition of rich data representations and performance
increases.
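What "layers of neurons" means can be sketched in a few lines: each layer applies weights, adds a bias, and passes the result through a nonlinearity, and stacking layers makes the model "deep". The weights below are arbitrary illustrative numbers, not a trained model.

```python
import math

# Sketch of a layered network: each neuron computes a weighted sum of its
# inputs plus a bias, squashed through a sigmoid nonlinearity.

def neuron(inputs, weights, bias):
    """Weighted sum followed by a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """One layer = several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A 2-layer network: 3 inputs -> 2 hidden neurons -> 1 output neuron.
x = [0.5, -1.0, 2.0]
hidden = layer(x, [[0.1, 0.4, -0.2], [-0.3, 0.2, 0.5]], [0.0, 0.1])
output = layer(hidden, [[1.2, -0.8]], [0.05])
print(output)  # a single value between 0 and 1
```

Real deep learning systems stack dozens of much wider layers and learn the weights from data; this sketch only shows the structure that the word 'deep' refers to.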

The year 2019 marked a watershed moment for deep learning and its applications
in a variety of fields, particularly medicine. For example, a "two-phase"
technique has produced expert-level diagnoses and treatment recommendations for
a variety of eye illnesses. The first phase used a computerised 3D scanner to
reconstruct an eye-tissue map, and the second phase used this map to predict
the severity of the condition. Another example is a deep learning model trained
on 54,000 ECG traces, which can identify 12 different types of arrhythmia.

Even more significant is what researchers hope future work will make possible,
namely recovering speech in paralysed patients and movement in quadriplegics.

In the first case, Columbia University researchers were able to synthesise
voice with a vocoder by analysing the brain activity of five epileptic
patients. The system achieved 75 percent accuracy, and a deep learning model
improved voice comprehensibility by 65 percent.

In the second, even more astonishing example, researchers implanted a
microelectrode array in the hand-and-arm region of the left primary motor
cortex of a paralysed patient. A neural network used the voltage signals
recorded inside the patient's skull to predict the patient's intended arm
movements. Without supervision, the patient was able to maintain high-accuracy
use of his paralysed forearm with functional electrical stimulation for almost
a year.

Natural Language Processing (NLP)

Natural language processing (NLP) is the process by which computers learn to
interpret, comprehend, and manipulate textual material. With Google AI's BERT
and Transformer, the Allen Institute's ELMo, OpenAI's Transformer, Ruder and
Howard's ULMFiT, and Microsoft's MT-DNN, 2019 was a banner year for NLP. All of
these studies demonstrated that pre-trained language models can increase
performance on a wide range of NLP tasks.
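The core task these language models perform, predicting the next word from context, can be sketched with a bigram model. This is a drastic simplification of systems like GPT-2 (which use deep neural networks over vast corpora), shown only to make the idea concrete; the corpus is a made-up example.

```python
from collections import defaultdict
import random

# A bigram "language model": count which word follows which, then sample
# continuations. The crudest version of next-word prediction.

def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def generate(counts, start, length=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:          # no known continuation: stop
            break
        choices = list(nxt)
        weights = [nxt[w] for w in choices]
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

A model like GPT-2 replaces the count table with a neural network conditioned on long contexts, which is why its continuations (like the example below) stay coherent over whole paragraphs.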

Take the OpenAI GPT-2 pre-trained language model, for example:

INPUT: Miley Cyrus was busted shoplifting on Hollywood Boulevard from
Abercrombie and Fitch.

OUTPUT (written entirely by the machine on its second attempt): Security guards
were recorded on camera escorting the 19-year-old singer out of the store. The
singer was dressed in a black hoodie with the labels 'Blurred Lines' and
'Fashion Police' on the front and back, respectively. The video can be found by
scrolling below.

Miley Cyrus was busted shoplifting today on Hollywood Boulevard from
Abercrombie & Fitch (pictured).

1.4 Risks and Benefits of AI

Artificial intelligence is dreaded by many experts and ordinary individuals
alike, despite being heralded as a gift to humanity by tech pundits. This dread
has been shown on film countless times in the shape of dystopian futures
created by AI computers that have taken over the world; The Matrix and The
Terminator are two of the most well-known examples.
AI is Unsustainable

Intelligent machines are known for their great computational power, which is
provided by a slew of processors. These computer chips depend on scarce
elements such as selenium, and the batteries that power such machines are made
with lithium, another limited resource extracted from the earth's crust.
Increased mining of these materials is rapidly and irreversibly harming our
ecosystem. Furthermore, these machines consume a large amount of power to
operate, putting a strain on our power plants and, once again, damaging the
environment.

Lesser Jobs

Machines, without a doubt, perform routine and repetitive activities far better
than people. Many firms would prefer machines to replace humans in order to
boost their profits, lowering the number of jobs available for humans.

A threat to Humanity(?)

Elon Musk is one of the most prominent people working on artificial
intelligence today, and he has stated openly that artificial intelligence is
the greatest future threat to human civilisation. This suggests that the dismal
futures depicted in science-fiction films may not be implausible. Stephen
Hawking also long expressed his concern about AI advancements.

The most serious danger associated with AI is the fear that computers could
develop consciousness and turn against humans if they become self-aware.

Benefits of AI

Artificial Intelligence is a computer program's ability to learn and think.
Anything that involves a program doing something that we would ordinarily
associate with human intelligence can be termed artificial intelligence.

Artificial intelligence applications offer huge benefits and have the potential
to disrupt any industry. Let's take a look at a few of them.

1) Reduction in Human Error:

Because humans make mistakes from time to time, the term "human error" was
coined. Computers, on the other hand, do not make these errors if they are
correctly programmed. Artificial intelligence makes choices based on previously
gathered data and a set of algorithms. As a result, errors are reduced, and the
prospect of achieving better precision and accuracy is increased.

For example, AI has removed the majority of human error in weather forecasting.

2) Takes risks instead of Humans:

This is one of the most significant advantages of artificial intelligence. By
building AI robots that can do risky tasks for us, we can transcend many of
humanity's risky limits. They can be used effectively in any type of natural or
man-made disaster, whether it is going to Mars, defusing a bomb, exploring the
deepest regions of the oceans, or mining for coal and oil.

Have you heard about the explosion at the Chernobyl nuclear power plant in
Ukraine? There were no AI-powered robots available at the time to help minimise
the effects of radiation by controlling the fire early on, as any human who
came close to the core died within minutes. Eventually, helicopters were used
to drop sand and boron from a safe distance.

AI robots can be utilised in situations where human intervention is risky.

3) Available 24x7:

An average human will work for 4-6 hours a day, excluding breaks. Humans are
built to take time off to replenish themselves and prepare for a new day of
work, and they have weekly days off to keep their professional and home lives
separate. Unlike humans, we can make AI machines work 24 hours a day, seven
days a week, with no breaks, and they don't get bored.

For example, educational institutions and helpline centres receive a large
number of queries and issues that AI can address successfully.

4) Helping in Repetitive Jobs:

In our day-to-day work we do a lot of repetitive labour, such as sending
thank-you emails and double-checking documents for errors. We can use
artificial intelligence to efficiently automate these monotonous operations and
even remove "boring" duties from humans' schedules, freeing them to be more
creative.

For example, banks frequently require numerous document verifications to
approve a loan, which is a time-consuming task. Banks can use AI cognitive
automation to speed up document verification, benefiting both the customers
and the bank.

5) Digital Assistance:

Some of the most modern enterprises use digital assistants to engage with
people, reducing the need for human personnel. Many websites also use digital
assistants to deliver what users seek; we can discuss what we are searching for
with them. Some chatbots are built in such a way that it is difficult to tell
whether we are conversing with a machine or a human.

For example, businesses have customer-service teams responsible for answering
customers' questions and concerns. Using AI, organisations can create a voice
bot or chatbot that assists customers with all their queries. Many firms have
already begun to implement them on their websites and mobile applications.
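The request-to-reply loop of such a chatbot can be sketched with simple keyword matching. Real voice bots and chatbots use far richer NLP, but the structure is the same; the intents and canned replies below are made-up examples.

```python
import re

# Minimal sketch of a rule-based customer-service chatbot: match keywords
# in the user's message against canned replies.

RULES = [
    ({"refund", "return"}, "You can request a refund from the Orders page."),
    ({"hours", "open"}, "Support is available 24 hours a day."),
    ({"human", "agent"}, "Transferring you to a human agent."),
]

def reply(message):
    # normalise the message to lowercase words, ignoring punctuation
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in RULES:
        if words & keywords:   # any keyword present?
            return answer
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("How do I get a refund?"))
```

Production systems replace the keyword sets with learned intent classifiers, but the perceive-classify-respond cycle remains the same.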

6) Faster Decisions:
By combining AI with other technologies, we can make machines decide and act
faster than humans. While a human will weigh many factors, both emotional and
practical, before making a decision, an AI-powered machine focuses on what it
has been designed to do and delivers results more quickly.

For instance, we have all played chess games on Windows. Because of the AI in
the game, beating the CPU in hard mode is practically impossible. Following its
algorithms, it will take the best feasible step in the shortest amount of time.
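How a game AI picks "the best feasible step" can be sketched with minimax search, the classic algorithm behind chess programs. The tiny hand-made game tree below is illustrative; real chess engines search the same way, just vastly deeper and with pruning.

```python
# Minimax: the maximizing player picks the move that is best assuming the
# opponent will then pick the move that is worst for them.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: score of a final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a choice point; the numbers score the final positions.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))  # 3
```

Here the first move looks worse than the second (5 < 9 at best), but minimax correctly prefers it: after move two the opponent can force a score of 2, while move one guarantees at least 3.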

7) Daily Applications:

Apple's Siri, Microsoft's Cortana, and Google's OK Google are commonplace in
our daily lives, whether for finding a location, taking a selfie, making a
phone call, or responding to an email.

For example, when we wanted to go somewhere 20 years ago, we used to ask
someone who had already been there for directions. Now all we have to do is
say, "OK Google, where is Visakhapatnam?", and Google will show the location of
Visakhapatnam on a map along with the best route to it.

8) New Inventions:

AI powers many technologies in practically every domain and will help humans
solve the majority of complex problems.

For instance, using powerful AI-based technologies, clinicians can now identify
breast cancer in women at an early stage.

1.5 Intelligent Agents, Agents and Environments

Agents in Artificial Intelligence


An AI system can be defined as the study of the rational agent and its environment.
The agents sense the environment through sensors and act on their environment
through actuators. An AI agent can have mental properties such as knowledge,
belief, intention, etc.

What is an Agent?

An agent can be anything that perceives its environment through sensors and
acts upon that environment through actuators. An agent runs in a cycle
of perceiving, thinking, and acting. An agent can be:

● Human agent: A human agent has eyes, ears, and other organs which work as
sensors, and hands, legs, and the vocal tract which work as actuators.

● Robotic agent: A robotic agent can have cameras and infrared range finders
as sensors and various motors as actuators.

● Software agent: A software agent can take keystrokes and file contents as
sensory input, act on those inputs, and display output on the screen.

Hence the world around us is full of agents, such as thermostats, cell phones,
and cameras, and even we ourselves are agents.

Before moving forward, we should first understand sensors, effectors, and
actuators.

Sensor: A sensor is a device which detects changes in the environment and sends
the information to other electronic devices. An agent observes its environment
through sensors.

Actuators: Actuators are the components of machines that convert energy into
motion. Actuators are responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors
can be legs, wheels, arms, fingers, wings, fins, and display screens.
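The perceive-think-act cycle can be sketched using the thermostat example from the text. The temperatures, threshold, and heating step below are made-up illustrative values.

```python
# Sketch of the agent cycle: perceive (sensor) -> think -> act (actuator).

class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target

    def perceive(self, environment):
        return environment["temperature"]      # sensor reading

    def think(self, temperature):
        # simple decision rule: heat while below the target
        return "heat_on" if temperature < self.target else "heat_off"

    def act(self, action, environment):
        # actuator: the heater nudges the room temperature upward
        if action == "heat_on":
            environment["temperature"] += 0.5

agent = ThermostatAgent()
room = {"temperature": 19.0}
for _ in range(6):
    percept = agent.perceive(room)
    action = agent.think(percept)
    agent.act(action, room)
print(room["temperature"])  # 21.0
```

Once the room reaches the target, the think step returns "heat_off" and the act step leaves the environment unchanged, so the loop settles exactly at the target.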

Intelligent Agents:

An intelligent agent is an autonomous entity which acts upon an environment
using sensors and actuators to achieve goals. An intelligent agent may learn
from the environment to achieve its goals. A thermostat is an example of a
simple intelligent agent.

Following are the four main rules for an AI agent:

● Rule 1: An AI agent must have the ability to perceive the environment.

● Rule 2: The observation must be used to make decisions.

● Rule 3: Decision should result in an action.

● Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:

A rational agent is an agent which has clear preferences, models uncertainty,
and acts in a way that maximizes its performance measure over all possible
actions.

A rational agent is said to do the right thing. AI is about creating rational
agents, using game theory and decision theory, for various real-world
scenarios. For an AI agent, rational action is most important because in
reinforcement learning, the agent gets a positive reward for each best possible
action and a negative reward for each wrong action.

Note: Rational agents in AI are very similar to intelligent agents.

Rationality:

The rationality of an agent is measured by its performance measure. Rationality
can be judged on the basis of the following points:

● The performance measure, which defines the success criterion.

● The agent's prior knowledge of its environment.

● The best possible actions that the agent can perform.

● The sequence of percepts.

Note: Rationality differs from omniscience, because an omniscient agent knows
the actual outcome of its actions and acts accordingly, which is not possible
in reality.

Structure of an AI Agent

The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of
architecture and agent program. It can be viewed as:

Agent = Architecture + Agent program

Following are the three main terms involved in the structure of an AI agent:

Architecture: Architecture is the machinery that the AI agent executes on.

Agent function: The agent function maps a percept sequence to an action:

f: P* → A

Agent program: The agent program is an implementation of the agent function. An
agent program executes on the physical architecture to produce the function f.
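The agent function f: P* → A can be sketched as a lookup from the percept sequence to an action, with the agent program keeping the history. The two-cell vacuum world used here (locations "A"/"B", statuses "Dirty"/"Clean") is a standard illustrative example, not something defined in the text.

```python
# Sketch of the agent function f: P* -> A as a table. The agent program
# records the percept sequence and maps it to an action.

TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table
        self.percepts = []                 # the sequence P* so far

    def __call__(self, percept):
        self.percepts.append(percept)
        # look up the full history; fall back to the latest percept alone
        key = tuple(self.percepts)
        if key not in self.table:
            key = (percept,)
        return self.table.get(key, "NoOp")

agent = TableDrivenAgent(TABLE)
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

A full table over all percept sequences is astronomically large for real problems, which is exactly why practical agent programs compute the function f rather than store it.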

PEAS Representation

PEAS is a model on which an AI agent works. When we define an AI agent or
rational agent, we can group its properties under the PEAS representation
model. It is made up of four terms:

● P: Performance measure

● E: Environment

● A: Actuators

● S: Sensors

Here the performance measure is the objective that defines the success of an
agent's behavior.

PEAS for a self-driving car:

For a self-driving car, the PEAS representation will be:

Performance: Safety, time, legal driving, comfort

Environment: Roads, other vehicles, road signs, pedestrians

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar

Example of Agents with their PEAS representation

1. Medical Diagnosis System
● Performance measure: Healthy patient, minimized cost
● Environment: Patient, hospital, staff
● Actuators: Tests, treatments
● Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
● Performance measure: Cleanness, efficiency, battery life, security
● Environment: Room, table, wood floor, carpet, various obstacles
● Actuators: Wheels, brushes, vacuum extractor
● Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

3. Part-picking Robot
● Performance measure: Percentage of parts in correct bins
● Environment: Conveyor belt with parts, bins
● Actuators: Jointed arms, hand
● Sensors: Camera, joint angle sensors

1.6 Good Behavior: Concept of Rationality, Nature of Environments

Rationality is the state of being reasonable, sensible, and having a good
sense of judgment.

Rationality is concerned with the actions and consequences that can be foreseen
based on the agent's perceptions. Taking actions with the objective of
obtaining useful information is a fundamental part of rationality.

The rationality of an agent is determined by its performance measure.
Rationality can be judged using the following criteria:

● The performance measure, which defines the success criterion.

● The agent's prior knowledge of its environment.

● The most effective actions that the agent can take.

● The order in which percepts appear.

The Nature of Environments

Some programs operate in an entirely artificial environment, relying solely on


keyboard input, databases, computer file systems, and character output on a
screen.

On the other hand, some software agents (software robots or softbots) exist in rich,
almost unlimited softbot domains. The simulated environment can be extremely
detailed and complex, and the software agent must choose among a vast number of
possibilities in real time. For example, a softbot that examines a customer's
internet interests and offers appealing items operates in both a real and an
artificial setting.

The Turing Test is the most well-known artificial scenario, in which one real and
other artificial agents are put to the test on an equal footing. Because a software
agent cannot perform as effectively as a human, this is a challenging environment
to work in.

Turing test

The Turing Test can be used to determine whether a system's intelligent behavior
is successful.

Two humans will participate in the test, as well as a machine that will be checked.
One of the two individuals is assigned to the role of tester. They're all in separate
rooms. The tester has no way of knowing who is human and who is not. He types
the questions and transmits them to both intelligences, who react with typed
answers.

The goal of this test is to deceive the tester. The machine is deemed to be
intelligent if the tester is unable to distinguish the machine's response from the
human response.

Key takeaway

An AI system can be defined as the study of the rational agent and its environment.
The agents sense the environment through sensors and act on their environment
through actuators. An AI agent can have mental properties such as knowledge,
belief, intention, etc.

1.7 Structure of Agents


An intelligent agent (IA) is an entity that makes a decision that enables artificial
intelligence to be put into action. It can also be described as a software entity that
conducts operations in the place of users or programs after sensing the
environment. It uses actuators to initiate action in that environment.

Characteristics of intelligent agents

Intelligent agents have the following distinguishing characteristics:

● They have some level of autonomy that allows them to perform certain tasks on
their own.

● They have a learning ability that enables them to learn even as tasks are carried
out.

● They can interact with other entities such as agents, humans, and systems.

● New rules can be accommodated by intelligent agents incrementally.

● They exhibit goal-oriented habits.

● They are knowledge-based. They use knowledge regarding communications,


processes, and entities.

The structure of intelligent agents

The IA structure consists of three main parts: architecture, agent function, and
agent program.

1. Architecture: This refers to the machinery or devices, consisting of
   actuators and sensors, on which the intelligent agent executes.
   Examples include a personal computer, a car, or a camera.
2. Agent function: This is a function in which actions are mapped from a
   certain percept sequence. The percept sequence is the history of what the
   intelligent agent has perceived.
3. Agent program: This is an implementation or execution of the agent
   function. The agent function is produced through the agent program's
   execution on the physical architecture.
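A minimal sketch of how these three parts relate, using a hypothetical table-driven agent function (the percept names and actions below are made up for illustration; the "architecture" is just the Python runtime):

```python
def agent_function(percept_sequence):
    # Hypothetical mapping from the most recent percept to an action,
    # with a default when the percept is unrecognised.
    table = {"dirty": "suck", "clean": "move"}
    latest = percept_sequence[-1]
    return table.get(latest, "no-op")

class AgentProgram:
    """Runs the agent function on the (simulated) architecture,
    accumulating the percept history as it goes."""
    def __init__(self):
        self.percepts = []

    def step(self, percept):
        self.percepts.append(percept)
        return agent_function(self.percepts)

agent = AgentProgram()
print(agent.step("dirty"))  # suck
print(agent.step("clean"))  # move
```

Note how the agent program, not the agent function, owns the percept history; the function is a pure mapping from sequences to actions.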

Categories of intelligent agents

There are 5 main categories of intelligent agents. The grouping of these agents is
based on their capabilities and level of perceived intelligence.

Simple reflex agents

These agents perform actions using the current percept, rather than the percept
history. The condition-action rule is used as the basis for the agent function. In this
category, a fully observable environment is ideal for the success of the agent
function.
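The condition-action rule idea can be sketched for the classic two-location vacuum world. This is an illustrative assumption, not code from the text: the locations "A" and "B" and the action names are made up, and note that only the current percept is consulted, never the history:

```python
def simple_reflex_vacuum(percept):
    """Condition-action rules using only the current percept
    (location, status); no percept history is kept."""
    location, status = percept
    if status == "dirty":
        return "suck"
    if location == "A":
        return "right"
    return "left"

print(simple_reflex_vacuum(("A", "dirty")))  # suck
print(simple_reflex_vacuum(("A", "clean")))  # right
print(simple_reflex_vacuum(("B", "clean")))  # left
```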

Model-based reflex agents

Unlike simple reflex agents, model-based reflex agents consider the percept
history in their actions. The agent function can still work well even in an
environment that is not fully observable. These agents use an internal model that
tracks the percept history and the effects of actions, keeping track of aspects of
the present state that cannot be directly observed.

Goal-based agents

These agents have higher capabilities than model-based reflex agents. Goal-based
agents use goal information to describe desirable situations. This allows
them to choose among various possibilities, selecting the best action
that enhances the attainment of the goal.
Utility-based agents

These agents make choices based on utility. They are more advanced than goal-
based agents because of an extra component of utility measurement. Using a utility
function, a state is mapped against a certain measure of utility. A rational agent
selects the action that optimizes the expected utility of the outcome.

Learning agents

These are agents that have the capability of learning from their previous
experience.

Learning agents have the following elements.

● The learning element: This element enables learning agents to learn from
previous experiences.

● The critic: It provides feedback on how the agent is doing.

● The performance element: This element decides on the external action that
needs to be taken.

● The problem generator: This acts as a feedback agent that performs certain
tasks such as making suggestions (new) and keeping history.

How intelligent agents work

Intelligent agents work through three main components: sensors, actuators, and
effectors. Getting an overview of these components can improve our
understanding of how intelligent agents work.

● Sensors: These are devices that detect any changes in the environment. This
information is sent to other devices. In artificial intelligence, the environment of the
system is observed by intelligent agents through sensors.
● Actuators: These are components through which energy is converted into
motion. They perform the role of controlling and moving a system. Examples
include rails, motors, and gears.

● Effectors: The environment is affected by effectors. Examples include legs,


fingers, wheels, display screen, and arms.

Key takeaway

An intelligent agent (IA) is an entity that makes a decision that enables artificial
intelligence to be put into action. It can also be described as a software entity that
conducts operations in the place of users or programs after sensing the
environment.

1.8 Case Study: Kroger: How This U.S. Retail Giant Is Using AI And
Robots to Prepare for the 4th Industrial Revolution

Kroger, one of the major grocery companies in the United States, has decided to
embrace technology in order to survive and grow in the fourth industrial
revolution. Kroger wants to use its data, shopper insights, and scale to help it
remain a leader in the future marketplace, with 2,782 grocery shops under nearly
two dozen names in 35 states. According to the Food Marketing Institute, by 2022,
online grocery will account for 20% of all grocery retail and generate $100 billion
in consumer sales, so Kroger and its competitors would do well to figure out how
to leverage technology.

Restock Kroger Initiative

Kroger announced an ambitious three-year $9 billion plan called Restock Kroger


in the fall of 2017, with the objective of expanding its e-commerce, digital, and
omnichannel operations and redefining the consumer experience. The retailer
already sends out 3 billion tailored recommendations per year, but they'll step up
their efforts to "create unique client experiences." Shoppers will receive not just
essential digital content, but also "inspiration" in the form of product-related
content and recipes. The Restock Kroger effort also includes the expansion of
Kroger's Scan, Bag, Go trial technology, which allows users to scan products while
shopping with their smartphone. It will be rolled out to 400 stores by the end of 2018
after being tested in 20 stores. Kroger's operations will be made more efficient by
investing in Internet of Things (IoT) sensors, machine learning, and artificial
intelligence.

Delivery by autonomous vehicles

We can get groceries delivered today, but Kroger is trying the delivery of the
future: driverless vehicles delivering groceries. On its trial programme, Kroger
collaborated with Nuro, a Silicon Valley startup that specialised in autonomous
delivery cars. Customers can place same-day delivery orders via Kroger's
ClickList ordering system and Nuro's app, but customers must be home to get their
groceries from the car after entering a unique code to unlock the doors. There's no
indication yet on which locations will be chosen to test autonomous deliveries, but
local trade is expected to be disrupted.

Automated warehouses

Kroger and Ocado, a British online-only grocer, have formed a relationship that is
anticipated to help Kroger automate its warehouses and employ artificial
intelligence to boost its bottom line. Ocado claims to have the world's most
advanced automated grocery warehouses and has tested delivery options with
Uber and Instacart, and it's this know-how that Kroger hopes to capitalise on with
its investment. The companies announced that Ocado would operate three new
warehouses, with another 17 to come in the next three years. Ocado's warehouses
are run by robots that explore the warehouse and pick products for orders using
machine learning algorithms. Kroger will be able to get products to shops faster
thanks to this investment and access to Ocado's technologies.

Marketing gets a boost from analytics


Kroger Precision Marketing, which uses customer purchase data from Kroger's 60
million shopper households to conduct marketing campaigns across a digital
spectrum, was launched by Kroger's in-house analytics business 84.51. This not
only improves personalization for customers, but it also provides product
producers with fantastic marketing opportunities on Kroger.com, branded digital
media, and the MyMagazine Sharing Network.

Machine learning

In a project called Embedded Machine Learning, 84.51 has made it a goal to enable
and embed machine learning into Kroger's operations, where a "machine learning
machine" can generate and deploy a large number of models with very little human
intervention. With an aim to "enable, empower, and engage" machine learning
throughout the business, this was a comprehensive approach to machine learning,
with three phases to machine learning methodology: Solution Engineering, Model
Development, and Model Deployment.

Smart shelves

When a Kroger consumer travels down the aisle with the Kroger app open, sensors
recognise the shopper and use smart shelves technology to deliver personalised
pricing and highlight products the customer might be interested in. Smart shelves
benefit not only the customer, but they also assist the business check inventory to
ensure that expired products aren't on the shelves and that everything is stocked
correctly—and this has an impact on the customer experience. Kroger has been
testing this technology since 2015, and while adoption has been slow, the retailer
is attempting to improve it before deploying it.
Unit - 2

Problem-solving

2.1 Solving Problems by Searching, Problem-Solving Agents

In Artificial Intelligence, search techniques are universal problem-solving
procedures. To solve a given problem and deliver the optimum
outcome, rational agents or problem-solving agents in AI use these
search techniques or algorithms. Problem-solving agents are goal-based
agents that use atomic representation. In this section, we will explore a
variety of problem-solving search approaches.

Searching Algorithms Terminologies:

● Search: In a given search space, searching is a step-by-step
procedure for addressing a search problem. There are three
primary variables that can influence the outcome of a search:

   o Search space: A search space is the collection of probable solutions
   that a system could have.
   o Start state: It is the starting point for the agent's search.
   o Goal test: It is a function that looks at the current state and returns
   whether or not the goal state has been reached.

● Search tree: A search tree is a tree representation of a search
problem. The root node, which corresponds to the initial state, is
at the top of the search tree.

● Actions: It provides the agent with a list of all available actions.

● Transition model: A transition model is a description of what each


action does.
● Path cost: It's a function that gives each path a numerical cost.

● Solutions: It is an action sequence that connects the start and end


nodes.

● Optimal solutions: If a solution is the cheapest of all the options.

Formulating problems

The problem of getting to Bucharest had been previously defined in terms


of the starting state, activities, transition model, goal test, and path cost.
Despite the fact that this formulation appears to be reasonable, it is still a
model—an abstract mathematical description—rather than the real thing.

Compare our simple state description, In(Arad), to a cross-country trip,


where the state of the world includes a wide range of factors such as the
traveling companions, what's on the radio, the scenery out the window,
whether there are any law enforcement officers nearby, how far it is to the
next rest stop, the road condition, the weather, and so on.

All of these factors aren't mentioned in our state descriptions because


they have nothing to do with finding a route to Bucharest. The process of
removing details from a representation is known as abstraction.

In addition to the state description, we must abstract both the state


description and the actions themselves. A driving action can result in a
number of different outcomes. It takes time, consumes fuel, emits
emissions, and changes the agent, in addition to shifting the vehicle's and
its occupants' positions (as they say, travel is broadening). In our system,
only the change in position is taken into account.

Can we be more precise about the proper level of abstraction? Consider


the abstract states and actions we've selected as large collections of
detailed world states and action sequences. Consider a path from Arad to
Sibiu to Rimnicu Vilcea to Pitesti to Bucharest as an example of a solution
to the abstract problem. A wide number of more detailed paths correlate
to this abstract solution.

Searching for solution

We must now address the issues we've identified. Because a solution


consists of a series of activities, search algorithms consider a number of
different action sequences. The SEARCH TREE possible action sequences
start with the starting state and build a search tree with the initial state
NODE at the root; the branches are actions, and the nodes are states in
the problem's state space.

Figure depicts the first few steps in developing a search tree for finding a
route from Arad to Bucharest. The root node of the tree represents the
initial state (Arad). The first stage is to determine whether or not this is a
goal state. (Of course, it isn't, but it's worth double-checking to avoid
problems like "starting in Arad, get to Arad.") Then we must weigh a
number of possibilities. This is done by extending the current state; in
other words, each legal action is applied to the existing state, resulting in
the production of a new set of states. In this scenario, we create three
additional child nodes from the parent node In(Arad): In(Sibiu),
In(Timisoara), and In(Zerind).
Fig: Partial search tree

This is the essence of searching: pursuing one option at a time while
putting the others on hold in the event that the first does not yield a
solution. Let's use Sibiu as an example. We first check to see whether it
is a goal state (it isn't), then expand it to get In(Arad), In(Fagaras),
In(Oradea), and In(RimnicuVilcea). Then we can choose one of these four
possibilities, or we can go back and choose Timisoara or Zerind. Each of
these six nodes in the tree is a leaf node, which means it has no children.

At any given point, the FRONTIER is the collection of all leaf nodes
that are available for expansion. Each tree's frontier is marked by the
nodes with bold outlines in the figure.

The process of choosing and expanding nodes on the frontier continues


until a solution is found or no more states can be added to the frontier.
Informally, the TREE-SEARCH algorithm is depicted in the figure. All
search algorithms have the same core basis; the difference is in how they
choose which state to expand next—the so-called search strategy.

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    initialize the explored set to be empty
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        add the node to the explored set
        expand the chosen node, adding the resulting nodes to the frontier
            only if not in the frontier or explored set
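The GRAPH-SEARCH skeleton can be turned into runnable code. The sketch below instantiates it with a FIFO frontier (so it expands shallowest nodes first) and a small fragment of the Romania road map from the text; the exact adjacency lists here are an illustrative subset, not the complete map:

```python
from collections import deque

def graph_search(start, goal, successors):
    """FIFO instantiation of the GRAPH-SEARCH skeleton: the frontier
    holds whole paths, and an explored set prevents re-expansion."""
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = frontier.popleft()          # choose a leaf node
        node = path[-1]
        if node == goal:                   # goal test
            return path
        explored.add(node)
        for child in successors.get(node, []):
            # add only if not already in the frontier or explored set
            if child not in explored and all(child != p[-1] for p in frontier):
                frontier.append(path + [child])
    return None                            # frontier empty -> failure

# A fragment of the Romania map used in the text (assumed subset).
romania = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "RimnicuVilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "RimnicuVilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["RimnicuVilcea", "Bucharest"],
}
print(graph_search("Arad", "Bucharest", romania))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Swapping the `deque` for a stack or priority queue changes the search strategy without touching the rest of the skeleton, which is exactly the point made above about TREE-SEARCH variants.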

Problem solving agents

The problem-solving agent works by precisely defining problems and their
possible solutions.

In the field of Artificial Intelligence, a problem-solving agent is a
goal-based agent that focuses on goals. It is one embodiment of a combination
of algorithms and strategies to tackle a well-defined problem. These
agents differ from reflex agents, which directly map states to actions
and cannot cope when the mapping is too large to store and learn.
The following are the stages that problem-solving agents go through to
reach a desired state or solution:

● Clearly articulating or expressing the desired goal and the problem to
be worked on.

● Explore and examine

● Find the solution from the various algorithms on the table.

● The final step is Execution!

The steps taken by the problem-solving agent

● Goal formulation: is the first and most basic step in solving a problem.
It organises the steps/sequence needed to construct a single goal from
many goals, as well as the actions needed to achieve that goal. The
existing circumstance and the agent's performance measure are used to
formulate goals.

● Problem formulation: is the most crucial step in the problem-solving


process because it determines what activities should be taken to attain
the stated goal. In the formulation of a problem, there are five elements
to consider:

● First State: This is the agent's initial state, or first step toward its goal.

● Actions: This is a list of the several options available to the agent.

● Transition Model: The Transition Model explains what each step does.

● Goal test: It determines whether or not a given state is a desired state.

● Path cost: Each path that leads to the goal is given a numerical cost. A
cost function is chosen by the problem-solving agent to reflect its
performance measure. Remember that the optimal option has the
cheapest path cost of all the alternatives.

Note that the problem's state-space is implicitly defined by the initial


state, actions, and transition model. A problem's state-space is a
collection of all states that can be achieved by following any series of
activities from the beginning state. The state-space is represented as a
directed map or graph, with nodes representing states, links between
nodes representing actions, and a path representing a series of states
connected by a series of actions.
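The five elements of a problem formulation can be collected into a sketch class. The `RouteProblem` instance below is a toy illustration with a made-up two-road map, not the full Romania problem; all names here are assumptions for the sake of the example:

```python
class Problem:
    """Sketch of the five elements of a problem formulation. The state
    space is defined implicitly by the initial state, actions, and the
    transition model (result)."""
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        raise NotImplementedError

    def result(self, state, action):       # transition model
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1             # each step costs 1 by default

class RouteProblem(Problem):
    """Toy route-finding instance over a hypothetical two-road map."""
    roads = {"Arad": ["Sibiu", "Zerind"], "Sibiu": ["Bucharest"]}

    def actions(self, state):
        return self.roads.get(state, [])

    def result(self, state, action):
        return action                      # the action names the next city

p = RouteProblem("Arad", "Bucharest")
print(p.actions("Arad"))         # ['Sibiu', 'Zerind']
print(p.goal_test("Bucharest"))  # True
```

A search algorithm only ever talks to a problem through these five methods, which is what makes the formulation reusable across algorithms.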

● Search - It determines the best potential sequence of actions to get


from the current condition to the goal state. It receives an issue as input
and returns a solution as output.

● Solution - It selects the best algorithm from a set of algorithms that


can be demonstrated to be the most optimal solution.

● Execution - It uses the best optimal solution found by the searching


algorithms to get from the present state to the goal state.

Key takeaway

Search techniques are universal problem-solving strategies in Artificial


Intelligence.

The problem of traveling to Bucharest was previously formulated in terms


of the beginning state, activities, transition model, goal test, and path cost.

A solution is a series of actions, search algorithms explore a variety of


alternative action sequences.
2.2 Example Problems

In general, there are two sorts of problem-solving strategies:

● Toy problem - Researchers use it to compare the performance of


algorithms because it provides a succinct and clear explanation of the
problem.

● Real world problem - The problems that need to be solved are those
that occur in the real world. It does not require descriptions, unlike a toy
problem, yet we can have a generic formulation of the problem.

1. Some Toy Problems

8 Puzzle Problem: A 3×3 matrix with movable tiles numbered 1 to 8 and
a vacant space is given. A tile adjacent to the vacant space can be slid
into it. The goal is to reach a goal state similar to the one indicated
in the diagram below.

Our goal is to slide digits into the blank space in the figure to change the
current state to the goal state.

Fig: 8 Puzzle problem

By sliding digits into the vacant space in the above diagram, we can
change the current (Start) state to the desired state.
The following is the problem statement:

● States: It shows where each numbered tile and the blank tile are
located.

● Initial State: Any state can be used as the starting point.

● Actions: The blank space's actions are defined here, i.e., left, right,
up, or down.

● Transition model: It returns the final state, which is determined by the


provided state and actions.

● Goal test: It determines if we have arrived at the correct goal-state.

● Path cost: The path cost is the number of steps in the path, where each
step costs one unit.
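The actions and transition model above can be sketched directly. The board representation below (a tuple of nine numbers, row by row, with 0 for the blank) is a common convention assumed here, not prescribed by the text:

```python
def puzzle_actions(state):
    """Legal moves of the blank (0) tile on a 3x3 board stored as a
    tuple of 9 numbers, row by row."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    return moves

def puzzle_result(state, action):
    """Transition model: slide the neighbouring tile into the blank."""
    i = state.index(0)
    delta = {"left": -1, "right": 1, "up": -3, "down": 3}
    j = i + delta[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)       # blank in the centre
print(puzzle_actions(start))               # ['left', 'right', 'up', 'down']
print(puzzle_result(start, "right"))       # (1, 2, 3, 4, 5, 0, 6, 7, 8)
```

The goal test is then a simple equality check against the goal tuple, and the path cost is the number of moves, matching the formulation above.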

8-queens problem: The goal of this problem is to arrange eight queens on a
chessboard in such a way that none of them can attack another queen. A
queen can attack other queens in the same row, column, or diagonal.

We can grasp the problem and its correct solution by looking at the
diagram below.
Fig: 8 Queen puzzle

As can be seen in the diagram above, each queen is placed on the
chessboard in such a way that no other queen lies in the same row,
column, or diagonal. As a result, it is one feasible solution to the
eight-queens problem.

For this problem, there are two main kinds of formulation:

Incremental formulation: It begins with an empty state and progresses
in steps, with the operator adding a queen at each step.

Following steps are involved in this formulation:


● States: Arrangement of any 0 to 8 queens on the chessboard.

● Initial State: An empty chessboard

● Actions: Add a queen to any empty box.

● Transition model: Returns the chessboard with the queen added in a


box.

● Goal test: Checks whether 8-queens are placed on the chessboard


without any attack.

● Path cost: There is no need for path cost because only final states are
counted.

In this formulation, there are approximately 1.8 × 10^14 possible
sequences to investigate.
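A common refinement of the incremental formulation places one queen per column, which prunes the search dramatically. The backtracking sketch below is an illustration of that idea (the representation, a tuple of row indices indexed by column, is an assumption made for this example):

```python
def safe(placed, row):
    """Check whether a queen in the next column at `row` is attacked by
    any already-placed queen. placed[c] is the row of the queen in
    column c (one queen per column)."""
    col = len(placed)
    for c, r in enumerate(placed):
        if r == row or abs(r - row) == abs(c - col):
            return False
    return True

def solve(placed=()):
    """Incremental formulation: add one queen per step, only into
    squares that survive the attack check, backtracking on failure."""
    if len(placed) == 8:
        return placed
    for row in range(8):
        if safe(placed, row):
            result = solve(placed + (row,))
            if result:
                return result
    return None

print(solve())  # one valid arrangement of row indices, column by column
```

Because each column gets exactly one queen, the search never even generates the vast majority of the 1.8 × 10^14 unconstrained sequences.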

Complete-state formulation: It starts with all the 8-queens on the


chessboard and moves them around, saving from the attacks.

In this formulation, the steps are as follows:

States: Each of the eight queens is placed in its own column, with no
queen attacking another.

Actions: Move the queen to a spot where it will be secure from attacks.

This formulation is superior to the incremental formulation since it shrinks
the state space from 1.8 × 10^14 to 2,057 states and makes finding solutions
much easier.

2. Some Real-world problems


Traveling salesperson problem (TSP): It is a touring problem, in which
the salesman may visit each city only once. The goal is to discover
the shortest tour and sell out all of the merchandise in each place.

VLSI Layout problem: Millions of components and connections are


placed on a chip in order to reduce area, circuit delays, and stray
capacitances while increasing manufacturing yield.

The layout problem is split into two parts:

● Cell layout: The circuit's primitive components are arranged into


cells, each of which performs a distinct purpose. Each cell is the same size
and shape. The goal is to arrange the cells on the chip so that they do not
overlap.

● Channel routing: It determines a unique path through the spaces


between the cells for each wire.

● Protein Design: The goal is to develop an amino acid sequence that


will fold into a 3D protein with the ability to treat an illness.

2.3 Search Algorithms

Problem-solving agents:

In Artificial Intelligence, Search techniques are universal problem-solving


methods. Rational agents or Problem-solving agents in AI mostly used these search
strategies or algorithms to solve a specific problem and provide the best result.
Problem-solving agents are the goal-based agents and use atomic representation.

Search Algorithm Terminologies:


● Search: Searching is a step-by-step procedure to solve a search problem in a
given search space. A search problem can have three main factors:

1. Search Space: Search space represents a set of possible solutions, which


a system may have.
2. Start State: It is a state from where agent begins the search.
3. Goal test: It is a function which observes the current state and returns
whether the goal state is achieved or not.

● Search tree: A tree representation of search problem is called Search tree. The
root of the search tree is the root node which is corresponding to the initial state.

● Actions: It gives the description of all the available actions to the agent.

● Transition model: A description of what each action do, can be represented as


a transition model.

● Path Cost: It is a function which assigns a numeric cost to each path.

● Solution: It is an action sequence which leads from the start node to the goal
node.

● Optimal Solution: If a solution has the lowest cost among all solutions.

Properties of Search Algorithms:

Following are the four essential properties of search algorithms to compare the
efficiency of these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return


a solution if at least any solution exists for any random input.

Optimality: If the solution found by an algorithm is guaranteed to be the best
solution (lowest path cost) among all other solutions, then it is said to be
an optimal solution.
Time Complexity: Time complexity is a measure of time for an algorithm to
complete its task.

Space Complexity: It is the maximum storage space required at any point during
the search, as a function of the complexity of the problem.

Types of search algorithms

Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.

Fig: Search algorithm

Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge such as closeness,
the location of the goal. It operates in a brute-force way as it only includes
information about how to traverse the tree and how to identify leaf and goal nodes.
Uninformed search applies a way in which search tree is searched without any
information about the search space like initial state operators and test for the goal,
so it is also called blind search. It examines each node of the tree until it achieves
the goal node.

It can be divided into five main types:

● Breadth-first search

● Uniform cost search

● Depth-first search

● Iterative deepening depth-first search

● Bidirectional Search

Informed Search

Informed search algorithms use domain knowledge. In an informed search,


problem information is available which can guide the search. Informed search
strategies can find a solution more efficiently than an uninformed search strategy.
Informed search is also called a Heuristic search.

A heuristic is a technique that is not always guaranteed to find the best
solution, but is guaranteed to find a good solution in reasonable time.

Informed search can solve many complex problems that could not be solved
otherwise.

A classic problem tackled with informed search algorithms is the traveling
salesman problem. Two well-known informed search algorithms are:

1. Greedy Search
2. A* Search
Uninformed Search Algorithms

Uninformed search is a class of general-purpose search algorithms which operates


in brute force-way. Uninformed search algorithms do not have additional
information about state or search space other than how to traverse the tree, so it is
also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:

● Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.

● BFS algorithm starts searching from the root node of the tree and expands all
successor node at the current level before moving to nodes of next level.

● The breadth-first search algorithm is an example of a general-graph search


algorithm.

● Breadth-first search implemented using FIFO queue data structure.

Advantages:

● BFS will provide a solution if any solution exists.


● If there is more than one solution for a given problem, then BFS will provide
the minimal solution, which requires the least number of steps.

Disadvantages:

● It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.

● BFS needs lots of time if the solution is far away from the root node.

Example:

In the tree structure below, we have shown the traversal of the tree using the
BFS algorithm from root node S to goal node K. The BFS algorithm traverses
level by level, so it will follow the path shown by the dotted arrow, and the
traversed path will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
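The traversal can be reproduced in code. The edges below are a hypothetical tree consistent with the level-by-level order described (the original figure is not reproduced here, so the exact structure is an assumption):

```python
from collections import deque

def bfs(start, goal, graph):
    """Breadth-first search with a FIFO queue: the shallowest
    unexpanded node is always expanded first."""
    frontier = deque([start])
    parents = {start: None}            # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:    # walk parents back to the root
                path.append(node)
                node = parents[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in parents:
                parents[child] = node
                frontier.append(child)
    return None

# Hypothetical tree matching the visit order S,A,B,C,D,G,H,E,F,I,K.
tree = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G", "H"],
    "C": ["E", "F"],
    "D": ["I"],
    "I": ["K"],
}
print(bfs("S", "K", tree))  # ['S', 'A', 'D', 'I', 'K']
```

The FIFO `deque` is what makes this breadth-first; the same loop with a stack would become depth-first.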

Time Complexity: The time complexity of the BFS algorithm can be obtained from
the number of nodes traversed in BFS until the shallowest node, where d = depth
of the shallowest solution and b = branching factor (maximum number of
successors of any state):

T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the
memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some
finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of
the node.

2. Depth-first Search

● Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.

● It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.

● DFS uses a stack data structure for its implementation.

● The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions


using recursion.

Advantage:

● DFS requires very less memory as it only needs to store a stack of the nodes on
the path from root node to the current node.

● It takes less time to reach to the goal node than BFS algorithm (if it traverses in
the right path).

Disadvantage:

● There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.

● The DFS algorithm searches deep down a path and may sometimes descend into an infinite loop.
Example:

In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:

Root node--->Left node ----> right node.

It will start searching from root node S, and traverse A, then B, then D and E, after
traversing E, it will backtrack the tree as E has no other successor and still goal
node is not found. After backtracking it will traverse node C and then G, and here
it will terminate as it found goal node.

Completeness: DFS search algorithm is complete within finite state space as it will
expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes generated by the algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, so the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: DFS is non-optimal, as it may generate a large number of steps or incur a high cost to reach the goal node.
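A minimal iterative sketch of DFS using an explicit stack, as described above (the graph and names are illustrative):

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO)."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push neighbours; the last one pushed is explored first (deepest-first)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None

graph = {'S': ['A', 'C'], 'A': ['B', 'D'], 'C': ['G'], 'B': [], 'D': []}
print(dfs(graph, 'S', 'G'))  # ['S', 'C', 'G']
```

Swapping the FIFO queue of BFS for a LIFO stack is the only structural change; the visited set prevents the re-occurring states mentioned in the disadvantages.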

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

● Standard failure value: It indicates that the problem does not have any solution.

● Cutoff failure value: It defines no solution for the problem within a given depth
limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

● Depth-limited search also has a disadvantage of incompleteness.

● It may not be optimal if the problem has more than one solution.

Example:
Completeness: DLS search algorithm is complete if the solution is above the depth-
limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also
not optimal even if ℓ>d.
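The two failure conditions can be made concrete in a small sketch, assuming a recursive DFS with a decreasing limit; the `'cutoff'` marker and the graph below are illustrative choices:

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited search: DFS that treats nodes at the limit as leaves.
    Returns a path, 'cutoff' (failure within the limit), or None (no solution)."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'                  # cutoff failure value
    cutoff_occurred = False
    for neighbour in graph.get(node, []):
        result = dls(graph, neighbour, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    # standard failure value (None) only when no branch was cut off
    return 'cutoff' if cutoff_occurred else None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
print(dls(graph, 'S', 'G', limit=2))   # ['S', 'A', 'G']
print(dls(graph, 'S', 'G', limit=1))   # cutoff
```

Distinguishing `'cutoff'` from `None` lets a caller know whether raising the limit could still help, which is exactly what iterative deepening exploits.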

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:

● Uniform cost search is optimal because at every state the path with the least
cost is chosen.

Disadvantages:

● It does not care about the number of steps involved in the search and is concerned only with path cost. Because of this, the algorithm may get stuck in an infinite loop.

Example:
Completeness:

Uniform-cost search is complete, such as if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution and ε the smallest step cost toward the goal node. The number of steps is then C*/ε + 1 (we take +1 because we start from state 0 and end at C*/ε).

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path
cost.
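The priority-queue behaviour described above can be sketched with Python's `heapq` module as the min-priority queue (the graph and edge costs are illustrative):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expands the node with the lowest
    cumulative path cost, popped from a min-priority queue."""
    frontier = [(0, start, [start])]     # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # cheapest path and its cost
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Edge costs differ, so UCS prefers the cheaper route S->B->G over S->A->G:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 12)], 'B': [('G', 2)]}
print(ucs(graph, 'S', 'G'))  # (6, ['S', 'B', 'G'])
```

Note that the goal test happens when a node is popped, not when it is generated; testing earlier could return a more expensive path that merely reached the goal first.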

5. Iterative deepening depth-first Search:

The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing
the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it


keeps increasing the depth limit after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search
and depth-first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:

● It Combines the benefits of BFS and DFS search algorithm in terms of fast search
and memory efficiency.

Disadvantages:

● The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:

Following tree structure is showing the iterative deepening depth-first search.


The IDDFS algorithm performs successive iterations until it finds the goal node. The iterations performed by the algorithm are:

1st Iteration: A
2nd Iteration: A, B, C
3rd Iteration: A, B, D, E, C, F, G
4th Iteration: A, B, D, H, I, E, C, F, K, G

In the fourth iteration, the algorithm finds the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

If b is the branching factor and d is the depth of the shallowest solution, then the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS is O(b×d).

Optimal:
IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth of
the node.
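The iteration scheme can be sketched by wrapping a depth-limited DFS in a loop over growing limits. The graph below is modelled loosely on the example above but is illustrative; note that the limit here counts edges from the root, so the goal K is found when the limit reaches 3:

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: repeated depth-limited DFS with a growing limit."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                  # cut off at this depth
        for neighbour in graph.get(node, []):
            result = dls(neighbour, limit - 1, path + [neighbour])
            if result is not None:
                return result
        return None

    for depth in range(max_depth + 1):   # limits 0, 1, 2, ...
        result = dls(start, depth, [start])
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': ['H', 'I'], 'F': ['K']}
print(iddfs(graph, 'A', 'K'))  # ['A', 'C', 'F', 'K']
```

Re-running the shallow levels each iteration is the repeated work mentioned in the disadvantages, but since most nodes of a tree sit at the deepest level, the overhead is modest in practice.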

6. Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward search, and the other from the goal node, called the backward search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:

● Bidirectional search is fast.

● Bidirectional search requires less memory

Disadvantages:

● Implementation of the bidirectional search tree is difficult.

● In bidirectional search, one should know the goal state in advance.

Example:

In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and starts from goal node 16 in the backward direction.

The algorithm terminates at node 9 where two searches meet.


Completeness: Bidirectional Search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)), since each search only needs to go half the solution depth.

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is Optimal.
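A sketch of bidirectional search using BFS from both ends on an undirected graph. The small chain below is illustrative rather than the 16-node graph of the figure; each loop step expands one node of each frontier and stops where the two searches meet:

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Two simultaneous BFS searches whose frontiers grow toward each other.
    Assumes an undirected graph given as an adjacency dict."""
    if start == goal:
        return [start]
    forward = {start: [start]}           # node -> path from start to node
    backward = {goal: [goal]}            # node -> path from node to goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        node = fq.popleft()              # expand the forward frontier
        for nb in graph.get(node, []):
            if nb in backward:           # the two searches intersect here
                return forward[node] + backward[nb]
            if nb not in forward:
                forward[nb] = forward[node] + [nb]
                fq.append(nb)
        node = bq.popleft()              # expand the backward frontier
        for nb in graph.get(node, []):
            if nb in forward:
                return forward[nb] + backward[node]
            if nb not in backward:
                backward[nb] = [nb] + backward[node]
                bq.append(nb)
    return None

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_bfs(graph, 1, 4))  # [1, 2, 3, 4]
```

The backward search illustrates the stated disadvantage: it is only possible because the goal state is known in advance.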

Key takeaway

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation.
2.4 Uninformed Search Strategies

Uninformed search, also known as blind, exhaustive, or brute-force search, is a


type of search that doesn't use any information about the problem to guide it and
so isn't very efficient.

Breadth first search

The breadth first search technique is a general strategy for traversing a graph. A
breadth first search takes up more memory, but it always discovers the shortest
path first. In this type of search, the state space is represented as a tree. The answer
can be discovered by traveling the tree. The tree's nodes indicate the initial value
of starting state, multiple intermediate states, and the end state.

This search makes use of a queue data structure, which is traversed level by level. In a breadth first search, nodes are expanded in order of their distance from the root. It is a path-finding algorithm that can always find a solution if one exists, and the solution found is always the optimal (shallowest) one. This task demands a significant amount of memory. At each level, every node in the search tree is expanded in breadth.

Algorithm:

Step 1: Place the root node inside the queue.

Step 2: If the queue is empty then stops and returns failure.

Step 3: If the FRONT node of the queue is a goal node, then stop and return success.
Step 4: Remove the FRONT node from the queue. Process it and find all its
neighbours that are in a ready state then place them inside the queue in any order.

Step 5: Go to Step 3.

Step 6: Exit.
Implementations

Let us implement the above algorithm of BFS by taking the following suitable
example

Fig: Example

Consider the graph below, where A is the


initial node and F is the destination node (*).

Step 1: Place the root node inside the queue, i.e. A.

Step 2: The queue is not empty, and the FRONT node, i.e. A, is not our goal node. So, proceed to step three.

Step 3: Remove the FRONT node, A, from the queue and look for A's neighbors, B
and C.

Step 4: Now, B is the queue's FRONT node. As a result, process B and look for B's neighbours, i.e., D.

Step 5: Find out who C's neighbors are, which is E.

Step 6: As D is the queue's FRONT node, find out who D's neighbours are.
Step 7: E is now the queue's first node. As a result, E's
neighbor is F, which is our objective node.

Step 8: Finally, F is the FRONT of the queue, which is our goal node. As a result,
leave.
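The step-by-step trace above can be reproduced with a short sketch; the adjacency list mirrors the example graph (A→B,C; B→D; C→E; E→F), and the names are illustrative:

```python
from collections import deque

# Adjacency list for the worked example: A->B,C  B->D  C->E  D->none  E->F
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F']}

def bfs_trace(start, goal):
    """Returns the order in which nodes enter the queue, goal included."""
    queue = deque([start])
    visited = [start]
    while queue:
        node = queue.popleft()           # the FRONT of the queue
        if node == goal:
            return visited
        for nb in graph[node]:
            if nb not in visited:
                visited.append(nb)
                queue.append(nb)
    return visited

print(bfs_trace('A', 'F'))  # ['A', 'B', 'C', 'D', 'E', 'F']
```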

Advantages

● Using this strategy, the goal will be reached if it is reachable at all.

● It does not pursue unproductive paths for long.

● When there are several paths, it finds the shortest one to the solution.

Disadvantages

● BFS consumes a significant amount of memory.

● It has high time complexity.

● It explores long paths when all paths to the destination have about the same search depth.

Key takeaway

Breadth-first search is a search method in which the highest layer of a decision tree
is entirely searched before moving on to the next layer (BFS).

Because no feasible solutions are omitted in this technique, an optimal solution is


certain to be found.
When the search space is large, this method is frequently impractical.

2.5 Informed (Heuristic) Search Strategies

A heuristic is a technique for making our search process more efficient. Some heuristics help guide a search process without sacrificing any claim to completeness, whereas others may sacrifice it. A heuristic is a piece of problem-specific knowledge that helps you spend less time looking for answers. It is a strategy that works on many occasions, but not all of the time.

The heuristic search algorithm uses the problem information to help steer the way
over the search space. These searches make use of a number of functions that,
assuming the function is efficient, estimate the cost of going from where you are
now to where you want to go.

A heuristic function is a function that converts problem situation descriptions into


numerical measures of desirability. The heuristic function's objective is to direct
the search process in the most profitable routes by recommending which path to
take first when there are multiple options.

The following procedures should be followed when using the heuristic technique
to identify a solution:

1. Add domain—specific information to select what is the best path to continue


searching along.

2. Define a heuristic function h(n) that estimates the ‘goodness’ of a node n.


Specifically, h(n) = estimated cost (or distance) of minimal cost path from n to a
goal state.

3. The term heuristic means ‘serving to aid discovery’. A heuristic is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.
A search problem in which multiple search orders and the use of heuristic
knowledge are clearly understood is finding a path from one city to another.

1. State: The current city in which the traveller is located.

2. Operators: Roads linking the current city to other cities.

3. Cost Metric: The cost of taking a given road between cities.

4. Heuristic information: The search could be guided by the direction of the goal
city from the current city, or we could use airline distance as an estimate of the
distance to the goal.

Key takeaway

Informed Search also called heuristic or intelligent search, this uses information
about the problem to guide the search—usually guesses the distance to a goal state
and is therefore efficient, but the search may not always be possible.

The purpose of heuristic function is to guide the search process in the most
profitable directions by suggesting which path to follow first when more than is
available.

2.6 Heuristic Functions

As we've seen, an informed search employs heuristic functions in order to reach the destination node more directly. There are typically several ways to get from the current node to the goal node in a search tree, so it is undeniably important to choose a good heuristic function. The usefulness of a heuristic function is determined by its efficiency: the more knowledge about the problem it encodes, the less time it takes to solve the problem.

A heuristic function can help solve some toy problems more efficiently, such as 8-
puzzle, 8-queen, tic-tac-toe, and so on. Let's have a look at how:

Consider the eight-puzzle problem below, which has a start and a target state. Our
goal is to slide the tiles from the current/start state into the goal state in the correct
order. There are four possible movements: left, right, up, and down. There are
various ways to transform the current/start state to the desired state, but we can
solve the problem more efficiently by using the heuristic function h(n).

The following is a heuristic function for


the 8-puzzle problem:

h(n) = number of tiles out of position.

So, there are three tiles that are out of place, namely 6, 5, and 4 (the empty tile in the goal state is not counted). Here h(n) = 3. The value of h(n) must now be minimised to 0.

To reduce the h(n) value to 0, we can build a state-space tree as shown below:

As seen in the state-space tree above, the objective is minimised from h(n)=3 to h(n)=0. Depending on the requirement, we can design and employ a number of different heuristic functions. As the example shows, a heuristic function h(n) can also be defined as the knowledge needed to solve a problem more efficiently. This information can relate to the nature of the state, the cost of changing states, the characteristics of the goal nodes, and so on.
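The misplaced-tiles heuristic h(n) can be computed directly from a flattened board. The start and goal states below are illustrative rather than the exact boards from the figure; here three tiles are out of place, so h(n) = 3:

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of position (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# Flattened 3x3 boards, 0 = empty tile (illustrative states):
start = (1, 2, 3,
         5, 4, 6,
         7, 0, 8)
goal  = (1, 2, 3,
         4, 5, 6,
         7, 8, 0)
print(misplaced_tiles(start, goal))  # 3 tiles (5, 4 and 8) are out of place
```

A search that always expands the successor with the smallest h(n) will drive this count toward 0, reproducing the state-space-tree minimisation described above.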
Key takeaway

An informed search employs heuristic functions to reach the destination node more directly. Choosing a good heuristic function is important: its usefulness is determined by its efficiency, and the more problem-specific knowledge it encodes, the less time the search takes.

2.7 Search in Complex Environments

Agents rarely have complete control over their surroundings, and are more likely to have only partial control. This implies that they can merely influence the environment rather than dictate it. Changes in the environment, in turn, will have a direct impact on the agents in the environment.

Environments are often described as non-deterministic. That is to say, activities


taken in particular circumstances may fail. Furthermore, an agent executing the
same task in two different contexts can provide radically different results.

An agent in an environment will have a pre-programmed set of capabilities that it may use to deal with the various circumstances it may encounter. These skills are known as effectoric capabilities. The agent has a sensor that is plainly affected by the surroundings, as seen in the diagram on agents. The agent can use data from this sensor, together with previously collected data, to determine which action to take.

Obviously, not every action can be carried out in every circumstance. An agent,
for example, might be able to 'open door,' but only if the door is unlocked. A
precondition is that the door must be unlocked before the action may be
conducted.

The most difficult problem that agents face in an environment is determining which action to take at any given time in order to maximise their chances of achieving their goal, or at least of working toward it. The properties of an environment determine the complexity of this decision-making process.

There are several types of environments:


● Fully Observable vs Partially Observable

● Deterministic vs Stochastic

● Competitive vs Collaborative

● Single-agent vs Multi-agent

● Static vs Dynamic

● Discrete vs Continuous

Fully Observable vs Partially Observable

● A fully observable environment is one in which an agent sensor can perceive or


access the complete state of an agent at any given time; otherwise, it is a partially
observable environment.

● It's simple to preserve a completely observable environment because there's


no need to keep track of the environment's past.

● When the agent has no sensors in all environments, it is said to be unobservable.

Deterministic vs Stochastic

● The environment is considered to be deterministic when the agent's current state and chosen action completely determine the agent's next state.

● The stochastic environment is random in nature, not unique, and the agent
cannot entirely control it.

Competitive vs Collaborative

● When an agent competes with another agent to optimise the output, it is said to
be in a competitive environment.
● Chess is a competitive game in which the agents compete against one another
to win the game, which is the output.

● When numerous agents work together to generate the required result, the agent
is said to be in a collaborative environment.

● When many self-driving cars are discovered on the road, they work together to
avoid collisions and arrive at their planned destination.

Single-agent vs Multi-agent

● A single-agent environment is defined as one in which there is just one agent.

● A single-agent system is exemplified by a person who is left alone in a maze.

● A multi-agent environment is one in which there are multiple agents.

● Football is a multi-agent sport since each team has 11 players.

Dynamic vs Static

● Dynamic refers to an environment that changes constantly while the agent is


engaged in some action.

● A roller coaster ride is dynamic since it is set in motion and the surroundings
changes at all times.

● A static environment is one that is idle and does not modify its state.

● When an agent enters a vacant house, there is no change in the surroundings.

Discrete vs Continuous

● A discrete environment is one in which there are a finite number of actions that
can be deliberated in the environment to produce the output.
● Chess is a discrete game since it has a limited number of moves. The amount of
moves varies from game to game, but it is always finite.

● Continuous refers to an environment in which actions cannot be numbered, i.e.


it is not discrete.

● Self-driving cars are an example of continuous environments as their actions are


driving, parking, etc. which cannot be numbered.

2.8 Local Search and Optimization Problems

Local search algorithms

A local search algorithm completes its task by operating on a single current node rather than on multiple paths, generally moving only among the neighbours of that node.

Although local search algorithms are not systematic, still they have the following
two advantages:

● Local search algorithms use a very little or constant amount of memory as they
operate only on a single path.

● Most often, they find a reasonable solution in large or infinite state spaces
where the classical or systematic algorithms do not work.

Working of a Local search algorithm

Consider the below state-space landscape having both:

● Location: It is defined by the state.

● Elevation: It is defined by the value of the objective function or heuristic cost


function.
The local search algorithm explores the above landscape by finding the following two points:

● Global Minimum: If the elevation corresponds to cost, then the task is to find the lowest valley, which is known as the global minimum.

● Global Maximum: If the elevation corresponds to an objective function, then the task is to find the highest peak, which is called the global maximum.

Some different types of local searches:

● Hill-climbing Search

● Simulated Annealing

● Local Beam Search

Note: Local search algorithms are not burdened with remembering all the nodes in memory; they operate on a complete-state formulation.

Hill Climbing Algorithm:


Hill climbing search is a local search problem. The purpose of the hill climbing
search is to climb a hill and reach the topmost peak/ point of that hill. It is based on
the heuristic search technique where the person who is climbing up on the hill
estimates the direction which will lead him to the highest peak.

State-space Landscape of Hill climbing algorithm

To understand the concept of hill climbing algorithm, consider the below


landscape representing the goal state/peak and the current state of the climber.
The topographical regions shown in the figure can be defined as:

● Global Maximum: It is the highest point on the hill, which is the goal state.

● Local Maximum: It is the peak higher than all other peaks but lower than the
global maximum.

● Flat local maximum: It is the flat area over the hill where it has no uphill or
downhill. It is a saturated point of the hill.

● Shoulder: It is also a flat area where the summit is possible.

● Current state: It is the current position of the person.


Types of Hill climbing search algorithm

There are following types of hill-climbing search:

● Simple hill climbing

● Steepest-ascent hill climbing

● Stochastic hill climbing

● Random-restart hill climbing


Simple hill climbing search

Simple hill climbing is the simplest technique to climb a hill. The task is to reach
the highest peak of the mountain. Here, the movement of the climber depends on
his move/steps. If he finds his next step better than the previous one, he continues
to move else remain in the same state. This search focus only on his previous and
next step.

Simple hill climbing Algorithm

1. Create a CURRENT node, NEIGHBOUR node, and a GOAL node.


2. If the CURRENT node=GOAL node, return GOAL and terminate the
search.
3. Else, if a NEIGHBOUR node is better than the CURRENT node, set CURRENT <= NEIGHBOUR and move ahead.
4. Loop until the goal is reached or no better neighbour can be found.
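The loop above can be sketched on an illustrative one-dimensional landscape; the function and step generator below are assumptions for demonstration. The climber moves to the first better neighbour and stops when no single step improves:

```python
def simple_hill_climb(objective, neighbours, current):
    """Simple hill climbing: takes the first neighbour that improves on the
    current state; stops when none does (possibly at a local maximum)."""
    while True:
        improved = False
        for candidate in neighbours(current):
            if objective(candidate) > objective(current):
                current = candidate      # first better step wins
                improved = True
                break
        if not improved:
            return current               # no uphill move left

# Toy landscape: f(x) = -(x - 3)^2, a single peak at x = 3.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climb(f, step, 0))  # 3
```

On this single-peaked landscape the climb always reaches the global maximum; on a landscape with plateaus or multiple peaks it would halt at whatever local maximum it finds first.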

Steepest-ascent hill climbing

Steepest-ascent hill climbing is different from simple hill climbing search. Unlike simple hill climbing, it considers all the successor nodes, compares them, and chooses the node which is closest to the solution. Steepest-ascent hill climbing is similar to best-first search because it examines every successor instead of just one.
Note: Both simple, as well as steepest-ascent hill climbing search, fails when there
is no closer node.

Steepest-ascent hill climbing algorithm

1. Create a CURRENT node and a GOAL node.


2. If the CURRENT node=GOAL node, return GOAL and terminate the
search.
3. Loop until no better node can be found to reach the solution.
4. If there is any better successor node present, expand it.
5. When the GOAL is attained, return GOAL and terminate.

Stochastic hill climbing

Stochastic hill climbing does not focus on all the nodes. It selects one node at
random and decides whether it should be expanded or search for a better one.

Random-restart hill climbing

The random-restart algorithm is based on a try-and-try strategy. It restarts the search from a new randomly chosen state whenever it gets stuck, keeping the best result found at each step, until the goal is reached. Success most commonly depends on the shape of the hill: if there are few plateaus, local maxima, and ridges, it becomes easy to reach the destination.

Limitations of Hill climbing algorithm

The hill climbing algorithm is a fast, greedy approach. It finds a solution state rapidly because it is quite easy to improve a bad state. But the search has the following limitations:

● Local Maxima: It is a peak of the mountain that is higher than all its neighbouring states but lower than the global maximum. It is not the goal peak, because another peak is higher than it.
● Plateau: It is a flat surface area where no uphill exists. It becomes difficult for the climber to decide in which direction he should move to reach the goal point. Sometimes, the person gets lost in the flat area.

● Ridges: It is a challenging problem where the person commonly finds two or more local maxima of the same height. It becomes difficult for the person to navigate to the right point, and he may get stuck at that point.

Simulated Annealing

Simulated annealing is similar to the hill climbing algorithm. It works on the current situation.
It picks a random move instead of picking the best move. If the move leads to the
improvement of the current situation, it is always accepted as a step towards the
solution state, else it accepts the move having a probability less than 1.

This search technique was first used in 1980 to solve VLSI layout problems. It is also
applied for factory scheduling and other large optimization tasks.
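The acceptance rule can be sketched as follows: improving moves are always taken, while a worsening move with objective change Δ < 0 is accepted with probability exp(Δ/T), where the temperature T shrinks over time. The landscape, cooling schedule, and parameters below are illustrative assumptions:

```python
import math
import random

def simulated_annealing(objective, neighbour, state,
                        t0=10.0, cooling=0.95, steps=500):
    """Picks a random move each step; accepts it always if it improves,
    otherwise with probability exp(delta / T). T decays geometrically, so
    worsening moves become increasingly rare as the search cools."""
    temperature = t0
    for _ in range(steps):
        candidate = neighbour(state)
        delta = objective(candidate) - objective(state)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            state = candidate            # accept the move
        temperature *= cooling           # geometric cooling schedule
    return state

random.seed(0)                           # fixed seed for a repeatable run
f = lambda x: -(x - 3) ** 2              # maximised at x = 3
move = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(f, move, 0)
print(result)
```

Early on, the high temperature lets the search escape local maxima; late in the run it behaves almost like hill climbing.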

Local Beam Search

Local beam search is quite different from random-restart search. It keeps track
of k states instead of just one. It selects k randomly generated states, and expand
them at each step. If any state is a goal state, the search stops with success. Else it
selects the best k successors from the complete list and repeats the same process.

In random-restart search, each search process runs independently, but in local beam search, the necessary information is shared between the parallel search processes.

Disadvantages of Local Beam search

● This search can suffer from a lack of diversity among the k states.

● It is an expensive version of hill climbing search.
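A sketch of local beam search on the same kind of toy landscape: it keeps the k best states each round, drawn from a shared pool of all their successors (the landscape, range of start states, and parameters are illustrative):

```python
import random

def local_beam_search(objective, neighbours, k=3, iterations=200, seed=1):
    """Local beam search: tracks k states and keeps the k best candidates
    from the pooled successors of all of them each round."""
    rng = random.Random(seed)
    states = [rng.randint(-50, 50) for _ in range(k)]   # k random start states
    for _ in range(iterations):
        pool = set(states)               # share information across the beams
        for s in states:
            pool.update(neighbours(s))
        # keep only the k best states from the common pool
        states = sorted(pool, key=objective, reverse=True)[:k]
    return states[0]

f = lambda x: -(x - 3) ** 2              # maximised at x = 3
step = lambda x: [x - 1, x + 1]
print(local_beam_search(f, step))  # 3
```

Pooling the successors is what distinguishes this from k independent hill climbs, and it is also the source of the diversity problem noted above: the beams tend to cluster around the same peak.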

2.9 Case Study: 4th Industrial Revolution Using AI, Big Data and
Robotics

The fourth industrial revolution, often known as Industry 4.0, is getting a lot of
attention, especially because of its potential influence on humanity. 4IR, according
to Schwab, will alter how people live, work, and conduct business, as well as how
we are governed. It is also thought that the industrial revolutions began in the 18th century, with Britain leading the way in the so-called "first" industrial revolution. The term "industrial revolution" refers to a period of economic upheaval that forced a shift in people's livelihood from agrarian rural life to life in cities and towns.
According to Blinov, economic activities were limited prior to the arrival of the first
industrial revolution, which was championed by Britain, and as a result, many
people were destitute. Human livelihood was based on what individuals got from
small farms, which made life tough for the average person.

People were employing simple tools for production, which were mostly hand tools,
and production was mostly confined to people's houses throughout this period.

The steam engine replaced animal power, signalling the "transition from rural
livelihood" to industrialization, when "special-purpose machinery" was now used,
kicking off the first industrial revolution in Britain at the turn of the century.

Industry 4.0 is a new stage in the structure and control of the industrial value chain
that is used interchangeably with the fourth industrial revolution.

The intelligent networking of equipment and processes for industry using


information and communication technologies is known as Industrie 4.0. (Plattform
Industrie 4.0).
