AI QB

The document discusses artificial intelligence and various AI concepts like intelligent agents, rational agents, different types of agents and environments. It also covers search strategies like uniform cost search, iterative deepening depth-first search and properties of A* algorithm.


1. What is Artificial Intelligence? Discuss with respect to its benefits and risks.

Artificial intelligence (AI) is the ability of a computer, or a robot controlled by a computer, to do tasks that are usually done by humans because they require human intelligence and discernment. Speech recognition, decision-making, and visual perception, for example, are features of human intelligence that artificial intelligence may possess.
Benefits of AI include automation of repetitive tasks, speed and consistency at scale, and round-the-clock availability. Risks include bias inherited from training data, opaque decision-making, job displacement, and potential misuse.

2. Explain Intelligent Agents and their function with the help of an example?

An intelligent agent is a program that can make decisions or perform a service based on its environment, user input and experiences. These programs can be used to autonomously gather information on a regular, programmed schedule or when prompted by the user in real time. An intelligent agent may also be referred to as a bot, which is short for robot.

Examples of intelligent agents
AI assistants, like Alexa and Siri, are examples of intelligent agents: they use sensors to perceive a request made by the user and then automatically collect data from the internet without the user's help. They can be used to gather information about their perceived environment, such as the weather and the time.

Types of intelligent agents

Reflex agents: These agents act only on the current state, ignoring percept history.
Model-based agents: These agents choose an action in the same way as a reflex agent, but they maintain a more comprehensive internal view of the environment.
Goal-based agents: These agents expand upon the information model-based agents store by also including goal information, i.e. information about desirable situations.
Utility-based agents: These agents are similar to goal-based agents but add a utility measure that rates each possible scenario by how desirable its result is, and they choose the action that maximizes the expected outcome.
Learning agents: These agents can gradually improve and become more knowledgeable about an environment over time through an additional learning element.

3. Illustrate task environment in detail. Apply PEAS description for automated taxi.

The first step in designing an AI agent is to specify the task environment. The task environment is described by PEAS (Performance measure, Environment, Actuators, Sensors).
PEAS description for an automated taxi:
• Performance measure: safe, fast, legal, comfortable trip; maximize profits.
• Environment: roads, other traffic, pedestrians, customers.
• Actuators: steering wheel, accelerator, brake, signal, horn, display.
• Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard.

4. What is a Rational Agent? Rationality depends on what?

Artificial intelligence is often defined as the study of rational agents. A rational agent could be anything that makes decisions, such as a person, firm, machine, or software. It carries out the action with the best expected outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
The rationality of an agent depends on the following:
• The performance measure, which determines the degree of success.
• The agent's percept sequence to date.
• The agent's prior knowledge about the environment.
• The actions that the agent can carry out.

5. What are the types of agents in AI? Illustrate any two.

Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time. They are:

o Simple reflex agent
o Model-based reflex agent
o Goal-based agent
o Utility-based agent
o Learning agent

Simple reflex agent
o Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history.
o These agents succeed only in a fully observable environment.
o A simple reflex agent works on condition-action rules, meaning it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The rule set is usually too big to generate and store.
o They are not adaptive to changes in the environment.
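As an illustration, the condition-action rule of a simple reflex agent can be sketched in Python for the classic two-square vacuum world; the location names and actions here are illustrative, not taken from the original text.

```python
# A minimal sketch of a simple reflex agent: the two-location vacuum world.
# The agent maps the current percept (location, status) directly to an action
# via condition-action rules; it keeps no percept history.

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'      # condition-action rule: dirt -> clean it
    if location == 'A':
        return 'Right'     # otherwise move to the other square
    return 'Left'
```

Note that the agent's choice depends only on the current percept, which is exactly why it fails in partially observable environments.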

Model-based reflex agent
o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important components:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on percept history.
o These agents hold a model, "which is knowledge of the world", and perform actions based on that model.
o Updating the agent's state requires information about:
o how the world evolves, and
o how the agent's actions affect the world.

6. What is an Agent Program? Describe any two agents?

An agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.
Agent = Architecture + Agent Program

7. Explore any two types of AI environments.

An environment in artificial intelligence is the surroundings of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators. Environments can be classified as:

• Fully observable vs partially observable
• Deterministic vs stochastic
• Competitive vs collaborative
• Single-agent vs multi-agent
• Static vs dynamic
• Discrete vs continuous
• Episodic vs sequential

4. Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
• An environment that keeps changing while the agent is acting is said to be dynamic.
• A roller coaster ride is dynamic, as it is set in motion and the environment changes every instant.
• An environment whose state does not change is called a static environment.
• An empty house is static, as there is no change in the surroundings when an agent enters.

8. How can we measure the performance of problem solving?

The performance of a problem-solving (search) algorithm is usually evaluated along four criteria: completeness (does it always find a solution when one exists?), optimality (does it find the least-cost solution?), time complexity (how long does it take to find a solution?), and space complexity (how much memory is needed to perform the search?).

9. Discuss Uninformed search strategies – UCS, IDDFS with suitable Example

Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, which is why they are also called blind search.

Uniform-cost Search Algorithm

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node with the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used on any graph or tree where the optimal cost is in demand. The algorithm is implemented with a priority queue that gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm when the path cost of all edges is the same.
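The description above can be sketched in Python using a priority queue; the graph below is a hypothetical example, not taken from the original.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    Returns (total_cost, path) for the cheapest path, or (None, None)."""
    frontier = [(0, start, [start])]   # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:               # goal test on pop: cheapest path first
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + edge_cost, neighbor, path + [neighbor]))
    return None, None

# hypothetical weighted graph: cheapest S->G path is S-A-B-G with cost 6
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
```

Because the goal test happens when a node is popped (not when it is generated), the first path returned is guaranteed to have the lowest cumulative cost.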

Iterative Deepening Depth-First Search

The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

It performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.

This search algorithm combines the benefits of breadth-first search's completeness for shallow goals and depth-first search's memory efficiency.

The iterative deepening search algorithm is useful for uninformed search when the search space is large and the depth of the goal node is unknown.
1st iteration: A
2nd iteration: A, B, C
3rd iteration: A, B, D, E, C, F, G
4th iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
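A minimal Python sketch of IDDFS, assuming the example tree implied by the iterations above (A with children B, C; B with D, E; C with F, G; D with H, I; F with K):

```python
def depth_limited(tree, node, goal, limit):
    """DFS from node that never descends below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in tree.get(node, []):
        result = depth_limited(tree, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iddfs(tree, root, goal, max_depth=10):
    """Run depth-limited search with limits 0, 1, 2, ... until goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(tree, root, goal, limit)
        if result is not None:
            return result
    return None

# the tree from the iterations above
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
```

Each iteration repeats the shallower levels, but since a tree's size is dominated by its deepest level, the repeated work adds only a constant factor.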

10. Discuss the properties of the A* algorithm?

A* search is the most commonly known form of best-first search.
• It uses a heuristic function h(n) and the cost to reach node n from the start state, g(n).
• It combines features of UCS and greedy best-first search, which lets it solve problems efficiently.
• A* finds the shortest path through the search space using the heuristic function.
• It expands a smaller search tree and provides an optimal result faster.
• A* is similar to UCS except that it uses g(n) + h(n) instead of g(n) alone, i.e. f(n) = g(n) + h(n).
In the A* search algorithm, we use the search heuristic h(n) as well as the cost to reach the node, g(n). We combine both costs, and the sum f(n) = g(n) + h(n) is called the fitness number.

Properties of A*:
Complete: A* is complete as long as the branching factor is finite and every action has a fixed, positive cost.

Optimal: A* is optimal if the following two conditions hold:
• Admissibility: the first condition required for optimality is that h(n) is an admissible heuristic for A* tree search, i.e. it never overestimates the true remaining cost (it is optimistic in nature).
• Consistency: the second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: the time complexity of A* depends on the heuristic function; in the worst case the number of nodes expanded is exponential in the depth of the solution d, so the time complexity is O(b^d), where b is the branching factor.

Space Complexity: the space complexity of A* is O(b^d), since it keeps all generated nodes in memory.
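A minimal A* sketch in Python; the graph and heuristic table below are hypothetical, chosen so that h never overestimates the true remaining cost (i.e. it is admissible).

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n).
    graph: node -> list of (neighbor, cost); h: heuristic estimates per node."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g       # better path found: record it
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g,
                                neighbor, path + [neighbor]))
    return None, None

# hypothetical weighted graph and admissible heuristic table
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 5, 'A': 5, 'B': 2, 'G': 0}
```

Compared with UCS, the only change is that the priority queue is ordered by f(n) = g(n) + h(n) instead of by g(n) alone.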

13. Explain with example the Beam search algorithm?

Ans. Beam search is a heuristic search algorithm that explores a graph by expanding only the most promising nodes in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements.

Best-first search is a graph search that orders all partial solutions according to some heuristic. In beam search, however, only a predetermined number of best partial solutions are kept as candidates; it is therefore a greedy algorithm.

Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level and sorts them in increasing order of heuristic cost. However, it stores only a predetermined number (β) of best states at each level, called the beam width. Only those states are expanded next.

A beam search takes three components as its input:

1. A problem to be solved,
2. A set of heuristic rules for pruning,
3. A memory with limited available capacity.

Example: let β = 2 for the tree shown below. Follow these steps to find the goal node.
Step 1: OPEN = {A}
Step 2: OPEN = {B, C}
Step 3: OPEN = {D, E}
Step 4: OPEN = {E}
Step 5: OPEN = { }

The OPEN set becomes empty without the goal node being found: pruning to the beam width discarded every path to the goal, which shows that beam search is incomplete.
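A sketch of level-by-level beam search in Python. Because the original tree figure is not reproduced, the tree and heuristic values below are hypothetical; with β = 2 the goal is found, while β = 1 prunes away the path to the goal, illustrating incompleteness.

```python
def beam_search(tree, h, root, goal, beam_width):
    """Level-by-level beam search: at each level, keep only the beam_width
    successors with the lowest heuristic cost and discard the rest."""
    level = [root]
    while level:
        if goal in level:
            return True
        # generate all successors of the states at the current level
        successors = [child for node in level for child in tree.get(node, [])]
        # prune: keep only the beam_width most promising states
        level = sorted(successors, key=lambda n: h[n])[:beam_width]
    return False

# hypothetical tree and heuristic values (the original figure is not shown)
tree = {'A': ['B', 'C', 'D'], 'B': ['E'], 'C': ['F', 'G'], 'D': []}
h = {'A': 5, 'B': 2, 'C': 3, 'D': 4, 'E': 6, 'F': 1, 'G': 0}
```

With β = 1 the search keeps only B at the first level, so the subtree under C containing the goal G is pruned and the search fails, exactly as in the OPEN-set trace above.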

14. State and illustrate the limitations of the Hill Climbing search algorithm with a suitable diagram

Ans.
Local Maxima:

A local maximum is a state that is better than each of its neighbouring states, but not better than some other states further away; it is generally lower than the global maximum. At this point, one cannot easily decide in which direction to move. This difficulty can be escaped by backtracking: backtrack to an earlier node position and try a different direction. To implement this strategy, maintain a list of the paths already taken, and if a path leads to a dead end, go back to one of the earlier choice points.
Ridges:

A ridge is a special type of local maximum: an area of the search space that is higher than surrounding areas, but whose slope makes it difficult to traverse with single moves, so it produces a sequence of local maxima. In this situation, apply two or more rules before testing, which corresponds to moving in several directions at once.
Plateau:

A plateau is a flat area of the search space in which the neighbouring states have the same value, so it is very difficult to choose a best direction locally. To get out of this situation, make a big jump in some direction and try to land in a new region; this is the usual way to handle a plateau.
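The local-maximum limitation can be demonstrated with a short Python sketch; the one-dimensional landscape below is hypothetical.

```python
def hill_climbing(f, start, neighbors):
    """Greedy local search: move to the best neighbor until none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):
            return current   # stuck: local maximum, plateau, or global maximum
        current = best

# hypothetical 1-D objective: local maximum at index 2, global maximum at index 6
landscape = [0, 2, 4, 3, 2, 5, 8, 6]
f = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
```

Starting at index 0, the search climbs to index 2 (value 4) and stops there, even though the global maximum (value 8) sits at index 6; starting at index 4 instead reaches the global maximum. This sensitivity to the start state is precisely the local-maxima problem described above.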

15. Genetic algorithm with example

Ans. A genetic algorithm (GA) is a heuristic search algorithm used to solve search and optimization problems. It is a subset of the evolutionary algorithms used in computation. Genetic algorithms employ the concepts of genetics and natural selection to provide solutions to problems: they simulate the process of natural selection, in which the individuals that can adapt to changes in their environment survive, reproduce, and pass on to the next generation. In simple words, they simulate "survival of the fittest" among individuals of consecutive generations in order to solve a problem. Each generation consists of a population of individuals, and each individual represents a point in the search space and a possible solution. Each individual is encoded as a string of characters/integers/floats/bits; this string is analogous to a chromosome.
Five phases are considered in a genetic algorithm.
• Initial population
• Fitness function
• Selection
• Crossover
• Mutation
Applications of Genetic Algorithms
Genetic algorithms have many applications; some of them are:
• Training recurrent neural networks
• Mutation testing
• Code breaking
• Filtering and signal processing
• Learning fuzzy rule bases, etc.
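The five phases above can be sketched with a toy GA for the OneMax problem (maximize the number of 1-bits in a string); the population size, rates, and seeded RNG are illustrative choices, not prescribed values.

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=100,
                      mutation_rate=0.05, seed=0):
    """Toy GA for bit-string problems: selection, crossover, mutation."""
    rng = random.Random(seed)
    # initial population: random bit strings
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population as parents
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            point = rng.randrange(1, length)       # one-point crossover
            child = a[:point] + b[point:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]             # per-bit mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

# OneMax: the fitness function is simply the number of 1-bits
best = genetic_algorithm(fitness=sum, length=16)
```

After a few generations the population converges toward the all-ones string, illustrating how selection, crossover, and mutation together drive the search.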
16. Explain depth limited search with example?

Ans. Depth-limited search (DLS) is an uninformed search algorithm. The unbounded-tree problem that appears in the depth-first search algorithm can be fixed by imposing a boundary, or limit, on the depth of the search domain. We call this limit the depth limit, denoted l; it makes the DFS search strategy more refined and bounded, and it provides the solution to the infinite-path problem of plain DFS. Depth-limited search can therefore be called an extended and refined version of the DFS algorithm: to avoid infinite loops during execution, the search is restricted to a finite depth called the depth limit.
If we fix the depth limit to 2, DLS proceeds like DFS but never expands nodes below depth 2, so it finds the goal node only if it lies within that portion of the tree's search domain.

[Figure: Depth-limited search]
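A minimal depth-limited search sketch in Python; the tree is hypothetical, with the goal G placed at depth 3 so that a limit of 2 fails while a limit of 3 succeeds.

```python
def depth_limited_search(tree, node, goal, limit):
    """DFS that never descends below the given depth limit.
    Returns the path to the goal, or None if the goal lies beyond the limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None          # cutoff: do not expand below the depth limit
    for child in tree.get(node, []):
        path = depth_limited_search(tree, child, goal, limit - 1)
        if path:
            return [node] + path
    return None

# hypothetical tree: the goal 'G' sits at depth 3
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['G']}
```

With limit l = 2 the search is cut off before reaching G, demonstrating that DLS is incomplete whenever the shallowest goal lies deeper than l.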

17. What do you mean by Heuristic function? Discuss Recursive Best First Search algorithm.
Ans.
An informed search makes use of heuristic functions in order to reach the goal node more efficiently.
• A heuristic function takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.
• The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.
• It is represented by h(n), and it estimates the cost of an optimal path between the given state and the goal. The value of the heuristic function is always non-negative.
• A good heuristic function is judged by its efficiency: the more information about the problem it uses, the better its guidance, but also the greater the processing time per node.
• The 8-puzzle, 8-queens, tic-tac-toe, etc., can be solved more efficiently with the help of a heuristic function.
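For instance, the misplaced-tiles heuristic for the 8-puzzle can be written in a few lines of Python; states are flat 9-tuples with 0 denoting the blank, and the sample state is illustrative.

```python
def misplaced_tiles(state, goal):
    """h(n) for the 8-puzzle: the number of tiles out of place.
    The blank (0) is ignored. Admissible, since every misplaced tile
    needs at least one move to reach its goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 3x3 board read row by row
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # tiles 5 and 8 are out of place
```

Because it never overestimates the true number of moves, this heuristic can be plugged into A* while preserving optimality.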

Recursive Best-First Search (RBFS) requires only linear space but uses backtracking.
• It is a heuristic depth-first search in which backtracking occurs when a given node no longer has the best f-value among the OPEN nodes.
• Interesting feature: having explored a path once, if it backtracks from a node, it remembers the f-values it has found.
• It uses these backed-up f-values to update the values of nodes it has explored and backtracked from.
• The backed-up value of a node is given by:
f_backed_up(node) = f(node), if node is a leaf node
f_backed_up(node) = min{ f_backed_up(child) | child is a successor of node }, otherwise

18. What is bidirectional search? Explain its properties?

Ans.
The idea behind bidirectional search is to run two simultaneous searches: one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle. The motivation is that b^(d/2) + b^(d/2) is much less than b^d; geometrically, the area of two small circles is less than the area of one big circle centred on the start and reaching to the goal.
Bidirectional search is implemented by replacing the goal test with a check of whether the frontiers of the two searches intersect; if they do, a solution has been found.
Advantages: bidirectional search is fast and requires less memory.
Disadvantages: implementation of the bidirectional search (in particular, searching backward from the goal) is difficult, and the goal state must be known in advance.

Properties of bidirectional search:

Completeness: bidirectional search is complete if we use BFS in both searches.
Time Complexity: the time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: the space complexity of bidirectional search is O(b^(d/2)).
Optimal: bidirectional search is optimal (with BFS in both directions and uniform step costs).
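A minimal bidirectional BFS sketch in Python over an undirected adjacency dict; it alternates expanding one full layer from each side and returns the length of the connecting path when the frontiers meet. The graph is hypothetical, and a production version would need a stricter termination test to guarantee optimality in all cases.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two simultaneous BFS frontiers; stop when they intersect.
    Returns the number of edges on the connecting path, or None."""
    if start == goal:
        return 0
    dist = ({start: 0}, {goal: 0})          # node -> depth, from each side
    queues = (deque([start]), deque([goal]))
    side = 0
    while queues[0] and queues[1]:
        # expand one whole BFS layer on the current side
        for _ in range(len(queues[side])):
            node = queues[side].popleft()
            for neighbor in graph.get(node, []):
                if neighbor in dist[1 - side]:          # the frontiers meet
                    return dist[side][node] + 1 + dist[1 - side][neighbor]
                if neighbor not in dist[side]:
                    dist[side][neighbor] = dist[side][node] + 1
                    queues[side].append(neighbor)
        side = 1 - side                      # alternate forward/backward
    return None

# hypothetical undirected graph: shortest A-E path is A-B-D-E (3 edges)
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'E'], 'E': ['D']}
```

Each side only searches to roughly half the solution depth, which is where the O(b^(d/2)) time and space bounds come from.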

19. Explain 8-puzzle problem with heuristic functions?

Ans. The 8-puzzle consists of a 3x3 board with eight numbered tiles and one blank space; a tile adjacent to the blank can slide into it, and the goal is to transform a given start configuration into the goal configuration. Two common admissible heuristics are h1(n), the number of misplaced tiles, and h2(n), the sum of the Manhattan (city-block) distances of each tile from its goal position. Since h2 dominates h1 (h2(n) >= h1(n) for every n), A* with h2 never expands more nodes than A* with h1.
20. Discuss SMA* Algorithm?
Ans. SMA* (Simplified Memory-Bounded A*) is a shortest-path algorithm based on the A* algorithm. The difference between SMA* and A* is that SMA* uses bounded memory, while the A* algorithm might need exponential memory.

Like A*, it expands the most promising branches according to the heuristic. What sets SMA* apart is that it prunes nodes whose expansion has revealed them to be less promising than expected.

SMA*, just like A*, evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal: f(n) = g(n) + h(n).

Since g(n) gives the path cost from the start node to node n, and h(n) is the
estimated cost of the cheapest path from n to the goal, we have f(n) = estimated
cost of the cheapest solution through n. The lower the f value is, the higher priority
the node will have.
The difference from A* is that the f value of the parent node will be updated to reflect
changes to this estimate when its children are expanded. A fully expanded node will
have an f value at least as high as that of its successors.

In addition, the node stores the f value of the best forgotten successor (or best
forgotten child). This value is restored if the forgotten successor is revealed to be
the most promising successor.
