AI - Unit 1,2
Introduction
Artificial Intelligence exists when a machine can exhibit human-like skills
such as learning, reasoning, and problem solving. With Artificial
Intelligence you do not need to pre-program a machine for every task;
instead, you can create a machine with programmed algorithms that can
work with its own intelligence, and that is the power of AI.
AI is not believed to be a new idea: according to Greek myth, there were
mechanical men in early days that could work and behave like humans.
● With the help of AI, you can create software or devices that can
solve real-world problems easily and accurately, in areas such as
health, marketing, and traffic.
● With the help of AI, you can create a personal virtual assistant,
such as Cortana, Google Assistant, or Siri.
● With the help of AI, you can build robots that can work in
environments where human survival would be at risk.
● AI opens a path to other new technologies, new devices, and new
opportunities.
Artificial Intelligence is not just a part of computer science; it is vast
and draws on many other fields. To create AI, we should first understand
how intelligence is composed: intelligence is an intangible faculty of the
brain that combines reasoning, learning, problem solving, perception,
language understanding, and more.
To achieve these capabilities in a machine or software, Artificial
Intelligence requires the following disciplines:
● Mathematics
● Biology
● Psychology
● Sociology
● Computer Science
● Neuroscience
● Statistics
Every technology has some disadvantages, and the same goes for
Artificial Intelligence. Despite being such an advantageous technology, it
still has some disadvantages that we need to keep in mind while creating
an AI system. The following are the disadvantages of AI:
● High cost: The hardware and software requirements of AI are very
costly, as AI systems require a lot of maintenance to meet current-world
requirements.
● Can't think outside the box: Even though we are making smarter
machines with AI, they still cannot think outside the box; a robot will only
do the work for which it is trained or programmed.
Key takeaway
Artificial Intelligence is not a new term or a new technology for
researchers; it is much older than you might imagine. There are even
myths of mechanical men in ancient Greek and Egyptian mythology. The
following are some milestones in the history of AI, tracing the journey
from AI's beginnings to today's developments.
o Year 1943: The first work that is now recognized as AI was done
by Warren McCulloch and Walter Pitts in 1943. They proposed a
model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for
modifying the connection strength between neurons. His rule is
now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician who pioneered
machine learning, published "Computing Machinery and
Intelligence" in 1950, in which he proposed a test that can check a
machine's ability to exhibit intelligent behavior equivalent to
human intelligence, now called the Turing test.
The second AI winter (1987-1993)
o The duration between the years 1987 and 1993 was the second AI
winter.
o Again, investors and governments stopped funding AI research
due to its high cost and inefficient results. Expert systems such as
XCON had proven very expensive to maintain relative to the results
they delivered.
o Year 1997: In 1997, IBM's Deep Blue beat world chess champion
Garry Kasparov, becoming the first computer to beat a reigning
world chess champion.
o Year 2002: For the first time, AI entered the home in the form of
Roomba, a vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies
like Facebook, Twitter, and Netflix started using AI.
o Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in
which it had to solve complex questions as well as riddles. Watson
proved that it could understand natural language and solve tricky
questions quickly.
o Year 2012: Google launched an Android app feature, "Google
Now", which could provide predictive information to the user.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a
competition based on the famous Turing test.
o Year 2018: The "Project Debater" from IBM debated on complex
topics with two master debaters and also performed extremely
well.
o Google has demonstrated an AI program "Duplex" which was a
virtual assistant and which had taken hairdresser appointment on
call, and lady on other side didn't notice that she was talking with
the machine.
Researchers now have access to new tools that can help them achieve critical
objectives, and these technologies are excellent starting points in and of
themselves. The following are some specific domains in which notable results
have been achieved in recent years:
● Machine learning;
● Reinforcement learning;
● Deep learning;
Machine learning
This is the field that has piqued researchers' interest the most over the recent
decade. In 2018, OpenAI, a non-profit artificial intelligence research organisation
dedicated to promoting and creating friendly AI, achieved significant results in the
game Montezuma's Revenge. A technique called Random Network Distillation
(RND) was used to achieve superhuman performance by encouraging the RL agent
to explore unanticipated states. This strategy far outperformed the game's other
AI algorithms.
This is only one of a number of notable results. DeepMind's AlphaStar is another
AI worth considering: it used multi-agent training to defeat a five-time world
champion in the real-time strategy game StarCraft 2. It began by forcing agents to
compete against one another, allowing it to learn about the vast strategic space;
later, a new agent was created that integrated the best strategies the earlier agents
had devised. Similarly, multiple agents that learnt independently and operated
together while competing against one another achieved human-level performance
in Quake 3 Arena.
Deep Learning
The year 2019 marked a watershed moment for deep learning and its applications
in a variety of fields, particularly medicine. For example, a technique known as
"two phases" has resulted in expert-level diagnosis and therapy prescriptions for
a variety of eye illnesses. The first stage used a computerised 3D scanner to
reconstruct an eye tissue map, and the second stage used this map to make
predictions about the severity of the condition. Another example is a deep
learning model trained on 54,000 ECG traces, which can identify 12 different
types of arrhythmias.
Even more significant is what the researchers hope future research will reveal,
namely the possibility of recovering speech in paralysed patients and movement
in quadriplegics.
In the first case, Columbia University researchers were able to synthesise voice
with a vocoder employing ways of analysing the brain activity of five epileptic
patients. Although the system has achieved 75 percent accuracy, a deep learning
model has shown a 65 percent improvement in voice comprehensibility.
The following is sample output written entirely by a language model on its second
attempt: "Security guards were recorded on camera escorting the 19-year-old
singer out of the store. The singer was dressed in a black hoodie with the labels
'Blurred lines' and 'Fashion Police' on the front and back, respectively. The video
can be found by scrolling below."
Environmental harm
Intelligent machines are known for their great computational power, which is
provided by a slew of processors. Scarce elements such as selenium are
components of these computer chips, and such robots' batteries are made with
lithium, another scarce element mined from the earth's crust. Increased mining of
these materials is rapidly and irreversibly harming our ecosystem. Furthermore,
these machines consume a large amount of power to operate, putting a strain on
our power plants and, once again, damaging the environment.
Fewer Jobs
Machines, without a doubt, perform routine and repetitive activities far better than
people. Many firms would prefer machines to replace humans in order to boost
their profits, lowering the number of jobs available for humans.
A threat to Humanity(?)
Elon Musk is widely regarded as one of the most intelligent people working on
artificial intelligence today. He has also stated openly that artificial intelligence is
the greatest future threat to human civilisation. This suggests that the dismal future
depicted in science fiction films is not implausible. In addition, Stephen Hawking
has long expressed his opposition to AI advancements.
The most serious fear connected with AI is that computers could develop
consciousness and, if they become self-aware, turn against humans.
Benefits of AI
1) Reduced Human Error:
Because humans make mistakes from time to time, the term "human error" was
coined. Computers, on the other hand, do not make these errors if they are
correctly programmed. Artificial intelligence makes choices based on previously
obtained data and a set of algorithms. As a result, errors are reduced, and the
prospect of achieving better precision and accuracy is increased.
2) Takes Risks Instead of Humans:
Have you heard about the explosion at the Chernobyl nuclear power facility in
Ukraine? There were no AI-powered robots available at the time to assist in
minimising the effects of radiation by controlling the fire early on, as any human
who came close to the core died in minutes. The responders eventually used
helicopters to drop sand and boron from a safe distance.
3) Available 24x7:
Without breaks, an average human will labour for 4–6 hours every day. Humans
are created in such a way that they can take time off to replenish themselves and
prepare for a new day at work, and they even have weekly off days to keep their
professional and home lives separate. But, unlike humans, we can use AI to make
machines work 24 hours a day, seven days a week with no breaks, and they don't
get bored.
For example, educational institutions and helpline centres receive a large number
of requests and difficulties that AI can successfully address.
4) Helping in Repetitive Jobs:
In our day-to-day work we do a lot of repetitive labour, such as writing thank-you
emails and double-checking documents for flaws. We can use artificial intelligence
to efficiently automate these monotonous operations and even remove "boring"
duties from humans' schedules, allowing them to be more creative.
5) Digital Assistance:
Digital assistants are used by some of the most modern enterprises to engage with
people, reducing the requirement for human personnel. Many websites also use
digital assistants to supply things that users seek. We can discuss what we're
searching for with them. Some chatbots are created in such a way that it's difficult
to tell whether we're conversing with a machine or a human.
For example, we all know that businesses have a customer service team that is
responsible for answering customers' questions and concerns. Organizations can
use AI to create a voice bot or a chatbot that can assist customers with all of their
questions. Many firms have already begun to implement them on their websites
and mobile applications.
6) Faster Decisions:
We can make computers make decisions and carry out activities faster than
humans by combining AI with other technologies. While a human will consider
numerous aspects, both emotionally and practically, before making a decision, an
AI-powered machine will focus on what it has been designed to do and will
produce results more quickly.
For instance, we've all played Chess games on Windows. Because of the AI in the
game, beating the CPU in hard mode is practically impossible. According to the
algorithms utilised, it will take the best feasible step in the shortest amount of time.
7) Daily Applications:
Apple's Siri, Microsoft's Cortana, and Google's OK Google are all commonplace in
our daily lives, whether it's for finding a location, taking a selfie, making a phone
call, or responding to an email.
8) New Inventions:
Many technologies in practically every domain are powered by AI, which will aid
humans in solving the majority of complicated problems.
For instance, using powerful AI-based technologies, clinicians can now identify
breast cancer in women at an early stage.
What is an Agent?
An agent can be anything that perceives its environment through sensors and acts
upon that environment through actuators. An agent runs in a cycle
of perceiving, thinking, and acting. An agent can be:
● Human agent: A human agent has eyes, ears, and other organs that work as
sensors, and hands, legs, and the vocal tract that work as actuators.
● Robotic agent: A robotic agent can have cameras and infrared range finders
as sensors and various motors as actuators.
● Software agent: A software agent can receive keystrokes and file contents as
sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cell phones, and
cameras; even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and
actuators.
Sensor: A sensor is a device that detects changes in the environment and sends
the information to other electronic devices. An agent observes its environment
through sensors.
Actuators: Actuators are the components of machines that convert energy into
motion. Actuators are responsible for moving and controlling a system;
an actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices that affect the environment, such as
legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents:
Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and
acts in a way that maximizes its performance measure over all possible actions.
A rational agent is said to perform the right thing. AI is about creating rational
agents for use in game theory and decision theory in various real-world scenarios.
For an AI agent, rational action is most important: in reinforcement learning, the
agent gets a positive reward for each best possible action and a negative reward
for each wrong action.
Rationality:
Note: Rationality differs from omniscience: an omniscient agent knows the actual
outcome of its actions and acts accordingly, which is not possible in reality.
Structure of an AI Agent
The task of AI is to design an agent program that implements the agent function.
The structure of an intelligent agent is a combination of architecture and an agent
program. It can be viewed as:
Agent = Architecture + Agent Program
Following are the main three terms involved in the structure of an AI agent:
PEAS Representation
● P: Performance measure
● E: Environment
● A: Actuators
● S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Agent 1: Medical Diagnosis
● Performance measure: Healthy patient, minimized cost
● Environment: Patient, hospital, staff
● Actuators: Tests, treatments
● Sensors: Keyboard (entry of symptoms)
Agent 2: Vacuum Cleaner
● Performance measure: Cleanness, efficiency, battery life, security
● Environment: Room, table, wood floor, carpet, various obstacles
● Actuators: Wheels, brushes, vacuum extractor
● Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared
wall sensor
Agent 3: Part-picking Robot
● Performance measure: Percentage of parts in correct bins
● Environment: Conveyor belt with parts, bins
● Actuators: Jointed arms, hand
● Sensors: Camera, joint angle sensors
The state of being reasonable, sensible, and having a good sense of judgment is
known as rationality. Rationality is concerned with the actions and consequences
that can be expected based on the agent's percepts. Taking actions with the
objective of obtaining useful information is a fundamental part of rationality.
On the other hand, some software agents (software robots, or softbots) exist in
rich, unlimited softbot domains, where the simulated environment is incredibly
detailed and complex and the software agent must choose among a vast number
of possibilities in real time. For example, a softbot that examines a customer's
internet interests and offers appealing items operates in both a real and an
artificial setting.
The most well-known artificial scenario is the Turing Test, in which a real agent
and an artificial agent are put to the test on an equal footing. Because a software
agent cannot perform as effectively as a human, this is a challenging environment
to work in.
Turing test
The Turing Test can be used to determine whether a system's intelligent behavior
is successful.
Two humans participate in the test, along with the machine to be evaluated. One
of the two humans plays the role of the tester. All three are in separate rooms, so
the tester has no way of knowing who is human and who is not. The tester types
questions and transmits them to both the other participants, who respond with
typed answers.
The goal of this test is to deceive the tester. The machine is deemed to be
intelligent if the tester is unable to distinguish the machine's response from the
human response.
Key takeaway
An AI system can be defined as the study of the rational agent and its environment.
The agents sense the environment through sensors and act on their environment
through actuators. An AI agent can have mental properties such as knowledge,
belief, intention, etc.
Intelligent agents have the following characteristics:
● They have some level of autonomy that allows them to perform certain tasks on
their own.
● They have a learning ability that enables them to learn even as tasks are carried
out.
● They can interact with other entities such as agents, humans, and systems.
The IA structure consists of three main parts: architecture, agent function, and
agent program.
There are 5 main categories of intelligent agents. The grouping of these agents is
based on their capabilities and level of perceived intelligence.
Simple reflex agents
These agents act on the current percept rather than the percept history. The
condition-action rule is used as the basis for the agent function. For this
category, a fully observable environment is ideal for the success of the agent
function.
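The condition-action mapping can be sketched in a few lines of Python. The two-location vacuum world, the `(location, status)` percept format, and the rule set below are illustrative assumptions, not a definitive implementation:

```python
# A minimal simple reflex agent for a two-location vacuum world.
# Locations "A"/"B" and statuses "Dirty"/"Clean" are invented for this sketch.

def simple_reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":        # rule 1: if dirty, clean
        return "Suck"
    elif location == "A":        # rule 2: if clean and at A, move right
        return "Right"
    else:                        # rule 3: if clean and at B, move left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that the agent ignores everything except the current percept; it keeps no memory of where it has already cleaned.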
Model-based reflex agents
Unlike simple reflex agents, model-based reflex agents consider the percept
history in their actions. The agent function can still work well even in an
environment that is not fully observable. These agents use an internal model that
keeps track of the percept history and the effects of actions, allowing them to
reflect aspects of the present state that are unobserved.
Goal-based agents
These agents have higher capabilities than model-based reflex agents. Goal-
based agents use goal information to describe desirable capabilities. This allows
them to choose among various possibilities. These agents select the best action
that enhances the attainment of the goal.
Utility-based agents
These agents make choices based on utility. They are more advanced than goal-
based agents because of an extra component of utility measurement. Using a utility
function, a state is mapped against a certain measure of utility. A rational agent
selects the action that optimizes the expected utility of the outcome.
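A utility-based choice can be sketched as picking the action whose resulting state scores highest under a utility function. The `result` and `utility` functions below are invented toy examples, not part of any standard library:

```python
# Sketch of a utility-based agent's decision rule: choose the action
# that maximizes the utility of the resulting state.

def choose_action(state, actions, result, utility):
    """Return the action whose resulting state maximizes the utility function."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are integers; utility prefers values close to 10.
utility = lambda s: -abs(10 - s)       # higher utility nearer 10
result = lambda s, a: s + a            # an action just adds to the state
print(choose_action(3, [1, 5, 20], result, utility))  # 5  (3+5=8 is closest to 10)
```

A goal-based agent would only distinguish goal from non-goal states; the utility function here additionally ranks the non-goal states, which is what lets the agent trade off between imperfect outcomes.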
Learning agents
These are agents that have the capability of learning from their previous
experience.
● The learning element: This element enables learning agents to learn from
previous experiences.
● The performance element: This element decides on the external action that
needs to be taken.
● The problem generator: This component acts as a feedback element, performing
tasks such as suggesting new actions to try and keeping a history.
Intelligent agents work through three main components: sensors, actuators, and
effectors. Getting an overview of these components can improve our
understanding of how intelligent agents work.
● Sensors: These are devices that detect any changes in the environment. This
information is sent to other devices. In artificial intelligence, the environment of the
system is observed by intelligent agents through sensors.
● Actuators: These are components through which energy is converted into
motion. They perform the role of controlling and moving a system. Examples
include rails, motors, and gears.
● Effectors: These are devices that affect the environment, such as legs, wheels,
arms, fingers, wings, fins, and display screens.
Key takeaway
An intelligent agent (IA) is an entity that makes a decision that enables artificial
intelligence to be put into action. It can also be described as a software entity that
conducts operations in the place of users or programs after sensing the
environment.
1.8 Case Study: Kroger: How This U.S. Retail Giant Is Using AI And
Robots to Prepare for the 4th Industrial Revolution
Kroger, one of the major grocery companies in the United States, has decided to
embrace technology in order to survive and grow in the fourth industrial
revolution. Kroger wants to use its data, shopper insights, and scale to help it
remain a leader in the future marketplace, with 2,782 grocery shops under nearly
two dozen names in 35 states. According to the Food Marketing Institute, by 2022,
online grocery will account for 20% of all grocery retail and generate $100 billion
in consumer sales, so Kroger and its competitors would do well to figure out how
to leverage technology.
We can get groceries delivered today, but Kroger is trying the delivery of the
future: driverless vehicles delivering groceries. In its trial programme, Kroger
collaborated with Nuro, a Silicon Valley startup that specialises in autonomous
delivery cars. Customers can place same-day delivery orders via Kroger's
ClickList ordering system and Nuro's app, but customers must be home to get their
groceries from the car after entering a unique code to unlock the doors. There's no
indication yet on which locations will be chosen to test autonomous deliveries, but
local trade is expected to be disrupted.
Automated warehouses
Kroger and Ocado, a British online-only grocer, have formed a relationship that is
anticipated to help Kroger automate its warehouses and employ artificial
intelligence to boost its bottom line. Ocado claims to have the world's most
advanced automated grocery warehouses and has tested delivery options with
Uber and Instacart, and it's this know-how that Kroger hopes to capitalise on with
its investment. The companies announced that Ocado would operate three new
warehouses, with another 17 to come in the next three years. Ocado's warehouses
are run by robots that explore the warehouse and pick products for orders using
machine learning algorithms. Kroger will be able to get products to shops faster
thanks to this investment and access to Ocado's technologies.
Machine learning
In a project called Embedded Machine Learning, 84.51° (Kroger's data analytics
subsidiary) has made it a goal to enable and embed machine learning into Kroger's
operations, where a "machine learning machine" can generate and deploy a large
number of models with very little human
intervention. With an aim to "enable, empower, and engage" machine learning
throughout the business, this was a comprehensive approach to machine learning,
with three phases to machine learning methodology: Solution Engineering, Model
Development, and Model Deployment.
Smart shelves
When a Kroger consumer travels down the aisle with the Kroger app open, sensors
recognise the shopper and use smart shelves technology to deliver personalised
pricing and highlight products the customer might be interested in. Smart shelves
benefit not only the customer, but they also assist the business check inventory to
ensure that expired products aren't on the shelves and that everything is stocked
correctly—and this has an impact on the customer experience. Kroger has been
testing this technology since 2015, and while adoption has been slow, the retailer
is attempting to improve it before deploying it.
Unit - 2
Problem-solving
Formulating problems
Figure depicts the first few steps in developing a search tree for finding a
route from Arad to Bucharest. The root node of the tree represents the
initial state (Arad). The first stage is to determine whether or not this is a
goal state. (Of course, it isn't, but it's worth double-checking to avoid
problems like "starting in Arad, get to Arad.") Then we must consider a
number of possibilities. This is done by expanding the current state; in
other words, each legal action is applied to the current state, generating
a new set of states. In this scenario, we create three child nodes from the
parent node In(Arad): In(Sibiu), In(Timisoara), and In(Zerind).
Fig: Partial search tree
This is the essence of searching: pursuing one option at a time while
putting the others on hold in case the first does not yield a solution.
Suppose we choose Sibiu first. We check whether it is a goal state (it
isn't), then expand it to get In(Arad), In(Fagaras), In(Oradea), and
In(RimnicuVilcea). We can then choose any of these four possibilities, or
go back and choose Timisoara or Zerind instead. Each of these six nodes
in the tree is a leaf node, that is, a node with no children.
At any given point, the frontier is the collection of all leaf nodes available
for expansion. In the figure, the frontier of each tree consists of the nodes
with bold outlines.
function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
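The tree-search loop can be sketched in Python. The `RouteProblem` interface (`initial_state`, `is_goal`, `successors`) and the road map are illustrative assumptions; like plain tree search, this sketch does not detect repeated states, so the toy map deliberately contains no cycles:

```python
from collections import deque

def tree_search(problem):
    """Generic tree search with a FIFO frontier (breadth-first expansion order)."""
    frontier = deque([(problem.initial_state, [problem.initial_state])])
    while frontier:
        state, path = frontier.popleft()            # choose a leaf node
        if problem.is_goal(state):                  # goal test
            return path                             # the corresponding solution
        for next_state in problem.successors(state):
            frontier.append((next_state, path + [next_state]))
    return None                                     # frontier empty -> failure

class RouteProblem:
    """Toy route-finding problem over a map given as an adjacency dict."""
    def __init__(self, roads, start, goal):
        self.roads, self.initial_state, self.goal = roads, start, goal
    def is_goal(self, state):
        return state == self.goal
    def successors(self, state):
        return self.roads.get(state, [])

roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Oradea", "RimnicuVilcea"],
         "Fagaras": ["Bucharest"]}
print(tree_search(RouteProblem(roads, "Arad", "Bucharest")))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```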
● Goal formulation: This is the first and most basic step in solving a
problem. It organises the steps/sequence needed to form a single goal
from multiple goals, as well as the actions needed to achieve that goal.
Goal formulation is based on the current situation and the agent's
performance measure.
● Initial state: This is the agent's starting state, its first step toward the
goal.
● Transition model: The transition model describes what each action
does.
● Path cost: Each path that leads to the goal is assigned a numerical
cost. The problem-solving agent chooses a cost function that reflects its
performance measure. Remember that the optimal solution has the
lowest path cost of all the alternatives.
Key takeaway
● Real-world problem - These are problems that need to be solved in the
real world. Unlike a toy problem, a real-world problem does not have a
single agreed-upon description, yet we can give a generic formulation of
the problem.
Our goal is to slide digits into the blank space in the figure to change the
current state to the goal state.
● States: It shows where each numbered tile and the blank tile are
located.
● Actions: The blank space's actions are defined here, i.e., left, right,
up, or down.
● Path cost: The path cost is the number of steps in a path, where each
step costs 1.
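The 8-puzzle action set above can be sketched as the legal moves of the blank tile. Storing the board as a flat row-major tuple (with 0 as the blank) is an assumption made for illustration:

```python
# Sketch of the 8-puzzle action set: the directions the blank tile can move.
# The 3x3 board is stored as a flat tuple in row-major order; 0 is the blank.

def legal_moves(state):
    """Return the directions the blank (0) can move in a 3x3 board."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    return moves

# Blank in the centre: all four moves are legal.
print(legal_moves((1, 2, 3, 4, 0, 5, 6, 7, 8)))  # ['Up', 'Down', 'Left', 'Right']
# Blank in the top-left corner: only two moves are legal.
print(legal_moves((0, 1, 2, 3, 4, 5, 6, 7, 8)))  # ['Down', 'Right']
```

Since every move has cost 1, the path cost of any solution is simply the number of moves made.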
We can grasp the problem and its correct solution by looking at the
diagram below.
Fig: 8-queens puzzle
● Actions: Move a queen to a square where it will be safe from attack.
● Path cost: There is no need for a path cost because only final states
are counted.
Problem-solving agents:
● Search tree: A tree representation of search problem is called Search tree. The
root of the search tree is the root node which is corresponding to the initial state.
● Actions: It gives the description of all the available actions to the agent.
● Solution: It is an action sequence which leads from the start node to the goal
node.
● Optimal solution: A solution that has the lowest cost among all solutions.
The following are the four essential properties of search algorithms used to
compare their efficiency:
Completeness: A search algorithm is complete if it is guaranteed to find a solution
whenever one exists.
Optimality: A search algorithm is optimal if the solution it finds is guaranteed to
be the best (lowest-cost) solution.
Time complexity: A measure of how long the algorithm takes to complete its task.
Space complexity: The maximum storage space required at any point during the
search, which depends on the complexity of the problem.
Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.
Uninformed/Blind Search:
The uninformed search does not use any domain knowledge, such as the closeness
or location of the goal. It operates in a brute-force way, as it only includes
information about how to traverse the tree and how to identify leaf and goal nodes.
Uninformed search explores the search tree without any information about the
search space (such as the initial state, operators, and goal test), so it is also called
blind search. It examines each node of the tree until it reaches the goal node.
● Breadth-first search
● Depth-first search
● Bidirectional Search
Informed Search
A heuristic is a technique that is not always guaranteed to find the best solution
but is guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems than could be solved
otherwise.
1. Greedy Search
2. A* Search
Uninformed Search Algorithms
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
● Breadth-first search is the most common search strategy for traversing a tree or
graph. Because this algorithm searches breadthwise in a tree or graph, it is called
breadth-first search.
● The BFS algorithm starts searching from the root node of the tree and expands
all successor nodes at the current level before moving to nodes of the next level.
Advantages:
● BFS will provide a solution if any solution exists.
● If there are several solutions, BFS will find the one that requires the fewest steps.
Disadvantages:
● It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
● BFS needs lots of time if the solution is far away from the root node.
Example:
In the tree structure below, we show the traversal of a tree using the BFS
algorithm from the root node S to the goal node K. The BFS algorithm traverses in
layers, so it follows the path shown by the dotted arrow, and the traversed path
will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is the number of nodes
traversed down to the shallowest solution, which is O(b^d), where b is the
branching factor and d is the depth of the shallowest solution.
Space Complexity: The space complexity of the BFS algorithm is given by the
memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some
finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of
the node.
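The level-by-level expansion described above can be sketched over an adjacency-list graph; the toy graph below is invented for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns the shallowest path from start to goal."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # avoid re-expanding states
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # no solution

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "D": ["K"]}
print(bfs(graph, "S", "K"))  # ['S', 'B', 'D', 'K']
```

Note the memory behaviour the text describes: the whole frontier (up to O(b^d) paths) must be held in the queue at once.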
2. Depth-first Search
● Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
● It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
Advantage:
● DFS requires much less memory, as it only needs to store a stack of the nodes
on the path from the root node to the current node.
● It takes less time to reach to the goal node than BFS algorithm (if it traverses in
the right path).
Disadvantage:
● There is a possibility that many states keep recurring, and there is no
guarantee of finding a solution.
● The DFS algorithm searches deep down and may sometimes enter an
infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:
It starts searching from root node S and traverses A, then B, then D and E; after
traversing E, it backtracks the tree, as E has no other successors and the goal
node has not yet been found. After backtracking, it traverses node C and then G,
where it terminates, having found the goal node.
Completeness: DFS search algorithm is complete within finite state space as it will
expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by O(b^m), where m is the maximum depth
of any node, which can be much larger than d (the depth of the shallowest
solution).
Space Complexity: DFS algorithm needs to store only single path from the root
node, hence space complexity of DFS is equivalent to the size of the fringe set,
which is O(bm).
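The depth-first traversal described above can be sketched with an explicit stack. This is a minimal illustration, not a general-purpose implementation; the graph below is a hypothetical one built to mirror the earlier walkthrough (S leads to A then B with dead ends D and E, while the goal G sits under C).

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit LIFO stack of paths."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()            # take the deepest unexplored path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push successors in reverse so the leftmost child is explored first.
        for successor in reversed(graph.get(node, [])):
            stack.append(path + [successor])
    return None                       # failure: goal not reachable

# Hypothetical tree matching the walkthrough: D and E are dead ends,
# and G (under C) is the goal node.
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))  # ['S', 'C', 'G']
```

Note that only the path to the current node is kept on the stack, which is why DFS needs so little memory compared to BFS.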
3. Depth-limited Search
A depth-limited search (DLS) is a depth-first search run with a predetermined depth limit ℓ, and it can terminate with two conditions of failure:
● Standard failure value: It indicates that the problem does not have any solution.
● Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
Advantages:
● Depth-limited search is memory efficient.
Disadvantages:
● It may not be optimal if the problem has more than one solution.
Example:
Completeness: DLS search algorithm is complete if the solution is above the depth-
limit.
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also
not optimal even if ℓ>d.
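The two failure values can be made concrete with a small recursive sketch. The graph here is hypothetical; `'cutoff'` is used as a sentinel for the cutoff failure value, while None is the standard failure value.

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive DLS: returns a path, 'cutoff' (cutoff failure),
    or None (standard failure: no solution at all)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'               # depth limit reached before the goal
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

# Hypothetical tree: the goal G lies at depth 2 below the root S.
graph = {'S': ['A', 'C'], 'A': ['B'], 'C': ['G']}
print(depth_limited_search(graph, 'S', 'G', 1))  # 'cutoff' (goal below limit)
print(depth_limited_search(graph, 'S', 'G', 2))  # ['S', 'C', 'G']
```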
4. Uniform-cost Search
Uniform-cost search (UCS) traverses a weighted tree or graph by always expanding the node with the lowest cumulative path cost, typically using a priority queue.
Advantages:
● Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
● It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution, and ε the cost of each step taken toward the goal node. Then the number of steps is C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε). Hence the worst-case time complexity of uniform-cost search is O(b^(1 + ⌈C*/ε⌉)).
Space Complexity:
The same logic applies to space, so the worst-case space complexity of uniform-cost search is also O(b^(1 + ⌈C*/ε⌉)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path
cost.
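The lowest-cost-first expansion can be sketched with a priority queue (a binary heap). The weighted graph below is hypothetical: it is chosen so that the cheapest path is not the one with the fewest edges, which is exactly the case UCS handles correctly.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]      # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for successor, step_cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (cost + step_cost, successor, path + [successor]))
    return None

# Hypothetical weighted graph: edges stored as (successor, step cost).
# S->A->G costs 6 in two cheap-looking hops; S->B->G costs only 5.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
print(uniform_cost_search(graph, 'S', 'G'))  # (5, ['S', 'B', 'G'])
```

A plain BFS would have returned either two-edge path arbitrarily; UCS returns the genuinely cheapest one.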
5. Iterative Deepening Depth-first Search
The iterative deepening algorithm (IDDFS) is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
It combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
● It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
● The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
1'st Iteration ---> A
2'nd Iteration ---> A, B, C
3'rd Iteration ---> A, B, D, E, C, F, G
4'th Iteration ---> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let's suppose b is the branching factor and d is the depth; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
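The iteration example above can be sketched as a depth-limited DFS wrapped in a loop that raises the limit. The tree below is hypothetical, built to mirror the iterations listed earlier (root A, goal K at depth 3); the inner DLS is simplified and does not distinguish cutoff from standard failure.

```python
def iddfs(graph, start, goal, max_depth=10):
    """IDDFS: run depth-limited DFS with a gradually increasing depth limit."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for successor in graph.get(node, []):
            result = dls(successor, limit - 1)
            if result is not None:
                return [node] + result
        return None

    for depth in range(max_depth + 1):    # limit 0, 1, 2, ... until found
        result = dls(start, depth)
        if result is not None:
            return result
    return None

# Hypothetical tree mirroring the iteration example: goal K at depth 3.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': ['H', 'I'], 'F': ['K']}
print(iddfs(graph, 'A', 'K'))  # ['A', 'C', 'F', 'K'], found at limit 3
```

The shallower iterations are repeated in full each time the limit grows, which is exactly the drawback noted above, but their cost is dominated by the deepest iteration.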
6. Bidirectional Search
The bidirectional search algorithm runs two simultaneous searches to find the goal node: one from the initial state, called the forward search, and the other from the goal node, called the backward search. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other from the goal vertex. The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
● Bidirectional search is fast and requires less memory than a single full search.
Disadvantages:
● The implementation of the bidirectional search tree is difficult, and the goal state must be known in advance.
Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and starts from goal node 16 in the backward direction.
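The two-frontier idea can be sketched with a pair of BFS waves that stop when they meet. The graph below is a hypothetical undirected chain standing in for the missing figure (node 1 as the start, node 16 replaced by 4 to keep the sketch short); each frontier records the path that reached every discovered node.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run two BFS frontiers, one from start and one from goal,
    and stop as soon as they intersect."""
    if start == goal:
        return [start]
    fwd, bwd = {start: [start]}, {goal: [goal]}   # node -> path so far
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        for queue, this, other, forward in ((fq, fwd, bwd, True),
                                            (bq, bwd, fwd, False)):
            node = queue.popleft()
            for neighbour in graph.get(node, []):
                if neighbour in other:            # the frontiers intersect
                    a, b = this[node], other[neighbour]
                    return a + b[::-1] if forward else b + a[::-1]
                if neighbour not in this:
                    this[neighbour] = this[node] + [neighbour]
                    queue.append(neighbour)
    return None

# Hypothetical undirected chain: 1 - 2 - 3 - 4, searched from both ends.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_search(graph, 1, 4))  # [1, 2, 3, 4]
```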
Key takeaway
The breadth first search technique is a general strategy for traversing a graph. A
breadth first search takes up more memory, but it always discovers the shortest
path first. In this type of search, the state space is represented as a tree, and the answer is discovered by traversing the tree. The tree's nodes represent the starting state, the various intermediate states, and the end state.
This search makes use of a queue data structure, which is traversed level by level. In a breadth first search, nodes are expanded in order of their distance from the root. It is a path-finding algorithm that can always find a solution if one exists, and the solution it discovers is always the shallowest one. This task demands a significant amount of memory. At each level, every node in the search tree is expanded breadth-wise.
Algorithm:
Step 1: Place the root node at the rear of the queue.
Step 2: If the queue is empty, return failure and stop.
Step 3: If the FRONT node of the queue is a goal node, then stop and return success.
Step 4: Remove the FRONT node from the queue. Process it and find all its neighbours that are in a ready state, then place them inside the queue in any order.
Step 5: Go to Step 2.
Step 6: Exit.
Implementations
Let us implement the above algorithm of BFS by taking the following suitable
example
Fig: Example
Step 2: The queue is not empty, and the FRONT node, i.e. A, is not our goal node. So, proceed to step three.
Step 3: Remove the FRONT node, A, from the queue and look for A's neighbors, B
and C.
Step 8: Finally, F is the FRONT of the queue, which is our goal node. As a result, we stop and return success.
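The queue-based steps above can be sketched in code. Since the original figure is missing, the graph below is hypothetical, chosen so that A's neighbours are B and C and the goal node is F, matching the walkthrough.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level via a FIFO queue."""
    queue = deque([[start]])          # queue of paths; FRONT is queue[0]
    visited = {start}
    while queue:
        path = queue.popleft()        # remove the FRONT node's path
        node = path[-1]
        if node == goal:              # goal test on the FRONT node
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None                       # queue empty: failure

# Hypothetical graph matching the walkthrough: A's neighbours are B and C,
# and the goal node F lies below C.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A', 'F'))  # ['A', 'C', 'F']
```

Because nodes enter the queue in order of distance from the root, the first path that reaches F is also the shortest one.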
Advantages
● Using this strategy, the goal will be reached by whatever path exists.
Disadvantages
● It explores long pathways when all paths to a destination have about the same search depth.
Key takeaway
Breadth-first search (BFS) is a search method in which the highest layer of a decision tree is entirely searched before moving on to the next layer.
A heuristic is a technique for making our search process more efficient. Some heuristics guide a search process without claiming completeness, whereas others do guarantee it. A heuristic is a piece of problem-specific knowledge that helps you spend less time looking for answers. It is a strategy that works on occasion, but not all of the time.
The heuristic search algorithm uses the problem information to help steer the way
over the search space. These searches make use of a number of functions that,
assuming the function is efficient, estimate the cost of going from where you are
now to where you want to go.
The following procedures should be followed when using the heuristic technique
to identify a solution:
3. The term heuristic means 'serving to aid discovery'; a heuristic is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.
A search problem in which multiple search orders and the use of heuristic
knowledge are clearly understood is finding a path from one city to another.
4. Heuristic information: The search could be guided by the direction of the goal
city from the current city, or we could use airline distance as an estimate of the
distance to the goal.
Key takeaway
Informed search, also called heuristic or intelligent search, uses information about the problem to guide the search. It usually estimates the distance to a goal state and is therefore efficient, but the search may not always be possible.
The purpose of a heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow first when more than one is available.
A heuristic function can help solve some toy problems more efficiently, such as 8-
puzzle, 8-queen, tic-tac-toe, and so on. Let's have a look at how:
Consider the eight-puzzle problem below, which has a start and a goal state. Our goal is to slide the tiles from the current/start state into the goal state in the correct order. There are four possible moves: left, right, up, and down. There are various ways to transform the current/start state to the goal state, but we can solve the problem more efficiently by using the heuristic function h(n).
So, there are three tiles that are out of place, namely 6, 5, and 4 (the empty tile in the goal state is not counted), so h(n) = 3 in this case. The value of h(n) must now be minimised to 0.
To reduce the h(n) value to 0, we can build a state-space tree as shown below:
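The misplaced-tiles heuristic can be sketched as follows. Since the original puzzle figure is missing, the start and goal states below are hypothetical, constructed so that exactly tiles 4, 5, and 6 are out of place, matching the h(n) = 3 discussion above.

```python
def h_misplaced(state, goal):
    """h(n) = number of tiles out of place; the blank (0) is not counted."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

# Hypothetical 8-puzzle states, flattened row by row; 0 is the empty tile.
# Tiles 4, 5, and 6 are out of place, so h(start) = 3.
start = (1, 2, 3,
         8, 4, 0,
         7, 5, 6)
goal  = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)
print(h_misplaced(start, goal))  # 3
print(h_misplaced(goal, goal))   # 0: the goal state itself
```

A search that always expands the successor with the smallest h(n) steadily drives this count toward 0.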
Agents rarely have complete control over their surroundings; more often they have only partial control. This implies that while they can influence the environment, they cannot fully determine it. Changes in the environment, in turn, will have a direct impact on the agents in it.
Obviously, not every action can be carried out in every circumstance. An agent,
for example, might be able to 'open door,' but only if the door is unlocked. A
precondition is that the door must be unlocked before the action may be
conducted.
The most difficult problem that agents face in an environment is determining which action to take at any given time in order to maximise their chances of achieving their goal, or at least working toward it. The qualities of an environment influence the complexity of this decision-making process.
● Deterministic vs Stochastic
● Competitive vs Collaborative
● Single-agent vs Multi-agent
● Static vs Dynamic
● Discrete vs Continuous
Deterministic vs Stochastic
● An environment is deterministic when the agent's current state and selected action completely determine its next state.
● A stochastic environment is random in nature, not unique, and the agent cannot entirely control it.
Competitive vs Collaborative
● When an agent competes with another agent to optimise the output, it is said to
be in a competitive environment.
● Chess is a competitive game in which the agents compete against one another
to win the game, which is the output.
● When numerous agents work together to generate the required result, the agent
is said to be in a collaborative environment.
● When many self-driving cars are discovered on the road, they work together to
avoid collisions and arrive at their planned destination.
Single-agent vs Multi-agent
● An environment operated on by a single agent is said to be a single-agent environment.
● An environment involving more than one agent is said to be a multi-agent environment.
Dynamic vs Static
● A roller coaster ride is dynamic since it is set in motion and the surroundings change at all times.
● A static environment is one that is idle and does not modify its state.
Discrete vs Continuous
● A discrete environment is one in which there are a finite number of actions that
can be deliberated in the environment to produce the output.
● Chess is a discrete game since it has a limited number of moves. The number of moves varies from game to game, but it is always finite.
A local search algorithm completes its task by operating on a single current node rather than multiple paths, generally moving only to the neighbours of that node.
Although local search algorithms are not systematic, still they have the following
two advantages:
● Local search algorithms use very little or a constant amount of memory, as they operate only on a single path.
● Most often, they find a reasonable solution in large or infinite state spaces
where the classical or systematic algorithms do not work.
● Global Minimum: If the elevation corresponds to the cost, then the task is to find the lowest valley, which is known as the Global Minimum.
● Hill-climbing Search
● Simulated Annealing
Note: Local search algorithms are not burdened with remembering all the nodes in memory; they operate on a complete-state formulation.
● Global Maximum: It is the highest point on the hill, which is the goal state.
● Local Maximum: It is a peak higher than its neighbouring peaks but lower than the global maximum.
● Flat local maximum: It is the flat area over the hill where it has no uphill or
downhill. It is a saturated point of the hill.
Simple hill climbing is the simplest technique to climb a hill. The task is to reach the highest peak of the mountain. Here, the movement of the climber depends on his moves/steps. If he finds his next step better than the previous one, he continues to move; otherwise, he remains in the same state. This search focuses only on his previous and next step.
Steepest-ascent hill climbing is different from simple hill climbing search. Unlike simple hill climbing, it considers all the successor nodes, compares them, and chooses the node which is closest to the solution. Steepest-ascent hill climbing is similar to best-first search because it examines every successor node instead of just one.
Note: Both simple, as well as steepest-ascent hill climbing search, fails when there
is no closer node.
Stochastic hill climbing does not focus on all the nodes. It selects one node at
random and decides whether it should be expanded or search for a better one.
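The simple hill climbing variant described above can be sketched in a few lines: accept the first neighbour that improves on the current state, and stop when none does. The objective function and step rule below are hypothetical, a one-dimensional landscape with its peak at x = 3.

```python
def simple_hill_climbing(evaluate, neighbours, state):
    """Move to the first neighbour that improves on the current state;
    stop when no successor is better (a peak, possibly only local)."""
    while True:
        improved = False
        for candidate in neighbours(state):
            if evaluate(candidate) > evaluate(state):
                state = candidate          # accept the first better step
                improved = True
                break
        if not improved:
            return state                   # no better neighbour: stop here

# Hypothetical 1-D landscape: maximise f(x) = -(x - 3)^2, step size 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(f, step, 0))  # climbs 0 -> 1 -> 2 -> 3
```

On a landscape with several peaks, the same loop would halt at whichever peak it reaches first, which is exactly the local-maximum limitation discussed next.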
The hill climbing algorithm is a fast and aggressive approach. It finds the solution state rapidly because it is quite easy to improve a bad state. But this search has the following limitations:
● Local Maxima: It is a peak of the mountain which is higher than all its neighbouring states but lower than the global maximum. It is not the goal peak, because there is another peak higher than it.
● Plateau: It is a flat surface area where no uphill exists. It becomes difficult for the climber to decide in which direction he should move to reach the goal point. Sometimes the person gets lost in the flat area.
● Ridges: It is a challenging problem where the person commonly finds two or more local maxima of the same height. It becomes difficult for the person to navigate to the right point, and he may get stuck at one of them.
Simulated Annealing
Simulated annealing is similar to the hill climbing algorithm. It works on the current situation. It picks a random move instead of picking the best move. If the move leads to an improvement of the current situation, it is always accepted as a step towards the solution state; otherwise, it accepts the move with a probability less than 1.
This search technique was first used in 1980 to solve VLSI layout problems. It is also
applied for factory scheduling and other large optimization tasks.
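The accept-with-probability-less-than-1 rule can be sketched with the standard exponential acceptance criterion exp(Δ/T), where a "temperature" T shrinks over time. The objective, cooling schedule, and step rule below are all hypothetical illustration choices, not the VLSI formulation.

```python
import math
import random

def simulated_annealing(evaluate, neighbour, state, t0=10.0, cooling=0.95,
                        steps=500):
    """Pick a random move each step; always accept improvements, and accept
    worse moves with probability exp(delta / T), which shrinks as T cools."""
    best = state
    t = t0
    for _ in range(steps):
        candidate = neighbour(state)
        delta = evaluate(candidate) - evaluate(state)
        # Downhill moves are accepted with probability < 1, more often early.
        if delta > 0 or random.random() < math.exp(delta / t):
            state = candidate
        if evaluate(state) > evaluate(best):
            best = state
        t *= cooling                       # gradually lower the temperature
    return best

# Hypothetical objective: maximise f(x) = -(x - 6)^2 from a poor start at 0.
f = lambda x: -(x - 6) ** 2
result = simulated_annealing(f, lambda x: x + random.choice([-1, 1]), 0)
print(result, f(result))
```

Early on, the high temperature lets the walk escape bad regions; as T decays, the rule degenerates into ordinary hill climbing.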
Local beam search is quite different from random-restart search. It keeps track of k states instead of just one. It selects k randomly generated states and expands them at each step. If any state is a goal state, the search stops with success. Otherwise, it selects the best k successors from the complete list and repeats the same process.
● This search can suffer from a lack of diversity among the k states.
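The keep-the-best-k loop can be sketched as below. The objective function, step rule, and starting states are hypothetical; a best-so-far record is kept so the sketch returns a single answer even though the beam itself may keep oscillating around the peak.

```python
def local_beam_search(evaluate, neighbours, k, initial_states, iterations=100):
    """Track k states at once; at each step, pool every successor of the
    current k states and retain only the k best."""
    states = list(initial_states)
    best = max(states, key=evaluate)
    for _ in range(iterations):
        # Complete list of successors of all k current states.
        pool = [s for state in states for s in neighbours(state)]
        if not pool:
            break
        states = sorted(pool, key=evaluate, reverse=True)[:k]
        best = max(states + [best], key=evaluate)
    return best

# Hypothetical landscape: maximise f(x) = -(x - 5)^2 over integer states.
f = lambda x: -(x - 5) ** 2
step = lambda x: [x - 1, x + 1]
print(local_beam_search(f, step, 3, [0, 10, 20]))  # converges on 5
```

Note how the start at 20 is quickly dropped from the beam: successors of the better states crowd it out, which is also how the k states can lose diversity and bunch up in one region.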
2.9 Case Study: 4th Industrial Revolution Using AI, Big Data and
Robotics
The fourth industrial revolution, often known as Industry 4.0, is getting a lot of
attention, especially because of its potential influence on humanity. 4IR, according
to Schwab, will alter how people live, work, and conduct business, as well as how
we are governed. It is also thought that the industrial revolutions began in the 17th
century, with Britain leading the way with the so-called "first" industrial revolution.
The term "industrial revolution" refers to a period of economic upheaval that forced
a shift in "people's livelihood from agrarian rural livelihood to city and town
livelihood."
According to Blinov, economic activities were limited prior to the arrival of the first
industrial revolution, which was championed by Britain, and as a result, many
people were destitute. Human livelihood was based on what individuals got from
small farms, which made life tough for the average person.
People were employing simple tools for production, which were mostly hand tools,
and production was mostly confined to people's houses throughout this period.
The steam engine replaced animal power, signalling the "transition from rural
livelihood" to industrialization, when "special-purpose machinery" was now used,
kicking off the first industrial revolution in Britain at the turn of the century.
Industry 4.0 is a new stage in the structure and control of the industrial value chain
that is used interchangeably with the fourth industrial revolution.