1. Explain the four categories of definitions of artificial intelligence.
Thinking Humanly:
This definition focuses on the idea of creating machines that think like humans. It is concerned with
replicating human cognitive processes. The goal is to develop systems that can reason, learn, and make
decisions similar to how a human would. This category involves understanding how human intelligence
works and creating AI systems that emulate these processes. Examples include research in cognitive science
and attempts to model human thought processes in AI systems.
Thinking Rationally:
This definition is based on the idea of creating machines that think rationally or logically. It emphasizes the
development of AI systems that use logic and reasoning to solve problems and make decisions. The focus is
on creating systems that follow principles of rationality, such as drawing valid conclusions from premises.
This includes approaches like formal logic, rule-based systems, and algorithms that ensure logical
consistency in problem-solving.
Acting Humanly:
This definition aims at creating machines that exhibit human-like behavior. The focus is on making AI
systems that can perform tasks in a way that mimics human actions and interactions. This includes natural
language processing, robotics, and human-computer interaction. For instance, chatbots designed to engage
in conversation in a human-like manner and robots that can perform tasks in a human-like way fall under
this category.
Acting Rationally:
This definition focuses on creating machines that act in ways that are considered rational given their goals
and environment. Rational behavior is defined as that which maximizes the chances of achieving the
agent’s objectives. This involves developing AI systems that can make decisions and take actions that are
expected to lead to the best possible outcome according to their design. Examples include autonomous
vehicles making driving decisions and AI systems optimizing supply chain management.
2. Explain the contributions of Mathematics, Psychology, and Linguistics to AI.
Mathematics:
Mathematics is foundational to AI as it provides the theoretical framework for algorithms and models. Key
areas of mathematics that contribute to AI include:
- Linear Algebra: Essential for understanding data structures, transformations, and machine learning
algorithms like neural networks.
- Calculus: Used in optimization problems, such as training machine learning models through gradient
descent (a short sketch follows this list).
- Probability and Statistics: Fundamental for making predictions, handling uncertainty, and learning from
data. Techniques such as Bayesian inference and hypothesis testing are integral to AI.
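As a small runnable illustration of the calculus point above, the following Python sketch minimizes
f(x) = (x - 3)^2 by gradient descent; the function, learning rate, and step count are illustrative
assumptions, not from the original text.

# Toy gradient descent on f(x) = (x - 3)^2, whose derivative is 2 * (x - 3).
def gradient_descent(df, x0, learning_rate=0.1, steps=50):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * df(x)  # move opposite the slope
    return x

df = lambda x: 2 * (x - 3)  # derivative of f(x) = (x - 3)^2
print(gradient_descent(df, x0=0.0))  # converges toward the minimum at x = 3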
Psychology:
Psychology contributes to AI by offering insights into human cognition, behavior, and learning processes.
Understanding human thought and behavior helps in designing AI systems that interact naturally with
humans. Key contributions include:
- Cognitive Psychology: Provides models of human cognitive processes that inspire AI algorithms, such as
decision-making and problem-solving.
- Behavioral Psychology: Helps in designing reinforcement learning systems where agents learn from
interactions and feedback, similar to how humans learn from rewards and punishments.
Linguistics:
Linguistics plays a crucial role in natural language processing (NLP), a subfield of AI focused on
understanding and generating human language. Contributions include:
- Syntax and Semantics: Understanding the structure and meaning of language helps in developing
algorithms for text analysis, translation, and generation.
- Pragmatics: Helps in interpreting context and meaning beyond the literal content of language, which is
essential for conversational AI and chatbots.
- Phonetics and Phonology: Important for speech recognition and synthesis technologies, enabling
machines to understand and generate spoken language.
3. Explain any two applications of artificial intelligence.
1. Autonomous Vehicles:
Autonomous vehicles, or self-driving cars, use AI to navigate and control vehicles without human
intervention. Key AI technologies involved include:
- Computer Vision: For detecting and interpreting the surroundings, such as recognizing traffic signs,
pedestrians, and other vehicles.
- Machine Learning: For making driving decisions based on data from sensors and cameras, predicting the
behavior of other road users, and optimizing driving strategies.
- Sensor Fusion: Combining data from various sensors (e.g., lidar, radar, cameras) to create a comprehensive
understanding of the vehicle's environment.
2. Medical Diagnosis:
AI is increasingly used in medical diagnosis to assist healthcare professionals in detecting and diagnosing
diseases. Applications include:
- Image Analysis: AI algorithms analyze medical images (e.g., X-rays, MRIs) to identify anomalies or patterns
indicative of conditions such as tumors or fractures.
- Predictive Analytics: Machine learning models predict patient outcomes based on historical data and
patient characteristics, aiding in early diagnosis and personalized treatment plans.
- Natural Language Processing: Extracting and analyzing information from medical records and research
papers to support diagnosis and treatment recommendations.
4. Explain Artificial Intelligence with Turing Test approach.
Turing Test:
The Turing Test, proposed by Alan Turing in 1950, is a test for determining whether a machine exhibits
intelligent behavior indistinguishable from that of a human. The test involves a human evaluator engaging
in a conversation with both a human and a machine (which are hidden from view). If the evaluator cannot
reliably distinguish between the human and the machine based solely on the conversation, the machine is
said to have passed the test.
Key Concepts:
- Imitation Game: The original name for the Turing Test, where the machine's ability to imitate human
responses is evaluated.
- Behavioral Approach: The test focuses on the machine's behavior rather than its internal processes or
consciousness.
- Threshold for Intelligence: Passing the Turing Test implies that the machine demonstrates a level of
intelligence comparable to human capabilities in specific conversational contexts.
Implications:
- Measure of AI: The Turing Test is a benchmark for evaluating the performance and capabilities of AI
systems in natural language understanding and generation.
- Philosophical Debate: Raises questions about the nature of intelligence, consciousness, and the criteria for
evaluating machine intelligence.
5. Explain the following terms: agent, agent function, agent program, rationality, autonomy, architecture
of an agent, performance measure.
Agent:
An agent is an entity that perceives its environment through sensors and acts upon that environment using
actuators. It can be a software program, a robot, or any entity capable of autonomous decision-making and
action.
Agent Function:
The agent function is a mathematical description of the agent's behavior. It maps percept sequences
(histories of sensory inputs) to actions, defining which action the agent chooses for any given perceptual
history.
Agent Program:
The agent program is the actual implementation of the agent function. It runs on the agent's hardware or
software platform and executes the logic required to map perceptions to actions.
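To make the distinction between the agent function and the agent program concrete, here is a minimal
Python sketch in which a lookup table plays the role of the agent function and the code that consults it is
the agent program; the two-square vacuum-world percepts and actions are illustrative assumptions.

# The table is the (abstract) agent function: a mapping from percepts to
# actions. The code that consults it is the agent program.
AGENT_FUNCTION = {
    ('A', 'Dirty'): 'Suck',
    ('A', 'Clean'): 'Right',
    ('B', 'Dirty'): 'Suck',
    ('B', 'Clean'): 'Left',
}

def agent_program(percept):
    """Implements the agent function: maps the current percept to an action."""
    return AGENT_FUNCTION[percept]

print(agent_program(('A', 'Dirty')))  # -> 'Suck'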
Rationality:
Rationality refers to the quality of making decisions or taking actions that are expected to achieve the best
possible outcome based on the agent's goals and knowledge. A rational agent selects actions that maximize
its performance measure given its percepts and knowledge.
Autonomy:
Autonomy refers to the ability of an agent to operate independently of human intervention. An
autonomous agent can make decisions and take actions based on its own perceptions and reasoning,
rather than relying on external commands.
Architecture of an Agent:
The architecture of an agent refers to the underlying hardware and software framework that supports the
agent's operation. It includes components such as sensors, actuators, processors, and memory, which
enable the agent to perceive, reason, and act.
Performance Measure:
The performance measure is a quantitative metric used to evaluate the effectiveness of an agent's actions.
It defines the criteria for success and how well the agent is achieving its goals. Performance measures vary
depending on the specific task and environment of the agent.
6. Explain the components of a learning agent.
1. Learning Element:
The learning element is responsible for improving the agent's performance over time based on its
experiences. It updates the agent's knowledge or policy to enhance decision-making and adapt to changes
in the environment.
2. Performance Element:
The performance element executes actions based on the current knowledge or policy. It interacts with the
environment, receives feedback, and performs tasks to achieve the agent's goals.
3. Critic:
The critic provides feedback to the learning element about the performance of the agent's actions. It
evaluates the outcomes of actions and helps the learning element understand whether the actions were
successful or not.
4. Problem Generator:
The problem generator suggests new situations or problems for the agent to explore. It helps the learning
element by creating scenarios that are likely to improve the agent's learning and performance.
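As a hedged sketch of how these four components might fit together, the hypothetical Python skeleton
below pairs one method with each component; the interfaces and the simple value-update rule are
illustrative assumptions, not a standard design.

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # learned knowledge/policy

    def performance_element(self):
        """Choose the action that currently looks best."""
        return max(self.values, key=self.values.get)

    def critic(self, action, reward):
        """Evaluate the outcome of an action as feedback for learning."""
        return reward  # here the feedback is simply the observed reward

    def learning_element(self, action, feedback, rate=0.1):
        """Update the agent's knowledge from the critic's feedback."""
        self.values[action] += rate * (feedback - self.values[action])

    def problem_generator(self):
        """Suggest an exploratory action so the agent tries new situations."""
        return random.choice(self.actions)

agent = LearningAgent(['left', 'right'])
chosen = agent.performance_element()         # act on current knowledge
feedback = agent.critic(chosen, reward=1.0)  # evaluate the outcome
agent.learning_element(chosen, feedback)     # improve from the feedback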
7. For each of the following activities, give a PEAS description of the task environment and characterize it
in terms of the properties.
a. Playing Soccer:
PEAS Description:
- Performance Measure: Goals scored, number of assists, successful tackles, overall game outcome.
- Environment: Soccer field, other players, goalposts, ball, referees.
- Actuators: Legs for kicking and running, head for heading, hands for goalkeepers.
- Sensors: Vision (to see ball, teammates, opponents), auditory (to hear referee’s whistle, coach’s
instructions).
- Properties:
- Observability: Partially observable, due to the dynamic nature of the game and occlusion by other
players.
- Multi-Agent: Multiple agents (players) with different roles and strategies.
- Stochastic: The game involves randomness in ball movement and player actions.
- Sequential: Actions have long-term consequences on the game outcome.
b. Shopping for Used AI Books on the Internet:
PEAS Description:
- Performance Measure: Cost of books, relevance of books to the topic, delivery time.
- Environment: Online bookstores, search engines, book listings, user reviews.
- Actuators: Clicks, keyboard input, checkout actions.
- Sensors: Screen display (for browsing), search queries, user reviews.
- Properties:
- Observability: Largely observable, since the entire book catalog and user reviews are accessible.
- Single-Agent: Typically a single user browsing and making decisions.
- Deterministic: Largely deterministic, since outcomes follow predictably from the choices made.
- Static: The environment remains mostly unchanged during a session, though listings can change with
updates.
c. Playing a Tennis Match:
PEAS Description:
- Performance Measure: Points won, games won, set wins, match outcome.
- Environment: Tennis court, ball, racket, opponent, umpire.
- Actuators: Racket for hitting, legs for movement.
- Sensors: Vision (to track ball and opponent), auditory (to hear umpire’s calls and ball impact).
- Properties:
- Observability: Partially observable, since the opponent's actions and the ball's trajectory must be
continuously monitored.
- Multi-Agent: Two agents (players) interacting and competing.
- Stochastic: Some unpredictability in ball movement and opponent’s actions.
- Sequential: Actions have a significant impact on future points and games.
8. Explain the following properties of task environments (any two):
a. Fully Observable vs Partially Observable:
- Fully Observable: In a fully observable environment, the agent has access to the complete state of the
environment at all times. There is no uncertainty or missing information about the environment. For
example, in a chess game, the entire board is visible, and all pieces are known to the player.
- Partially Observable: In a partially observable environment, the agent does not have complete information
about the state of the environment. There may be hidden or uncertain aspects. For example, in poker,
players do not know the cards of their opponents, only their own.
b. Deterministic vs Stochastic:
- Deterministic: In a deterministic environment, the outcome of actions is predictable and can be precisely
determined based on the current state and actions. For example, a calculator performs deterministic
operations with known inputs and produces consistent outputs.
- Stochastic: In a stochastic environment, the outcome of actions involves randomness and uncertainty. The
same action may lead to different outcomes. For example, in weather prediction, the outcome is influenced
by many uncertain factors.
c. Single Agent vs Multi-Agent:
- Single Agent: A single agent environment involves only one agent interacting with the environment. The
agent's performance is based solely on its own actions and decisions. For example, a vacuum cleaner
operating in a room is a single agent system.
- Multi-Agent: A multi-agent environment involves multiple agents interacting with each other and the
environment. The performance of each agent can be affected by the actions of other agents. For example,
in a soccer game, multiple players (agents) interact and compete.
d. Episodic vs Sequential:
- Episodic: In an episodic environment, each action or episode is independent of previous ones. The
outcome of one episode does not affect future episodes. For example, a single email classification task is
episodic because each email is classified independently.
- Sequential: In a sequential environment, actions are interdependent, and the outcome of one action
affects future actions. For example, in a chess game, each move affects the subsequent moves and the
overall game strategy.
e. Static vs Dynamic:
- Static: In a static environment, the environment remains unchanged while the agent is deciding or taking
actions. For example, a board game like chess is static because the game state does not change except for
the actions of the players.
- Dynamic: In a dynamic environment, the environment can change while the agent is making decisions or
taking actions. For example, in autonomous driving, the road conditions and traffic can change dynamically.
f. Discrete vs Continuous:
- Discrete: In a discrete environment, the state space, action space, and time are all finite and countable.
For example, a grid-based maze has a discrete set of positions and actions.
- Continuous: In a continuous environment, the state space, action space, and time are infinite and not
countable. For example, controlling a robot arm in continuous space involves infinite positions and
movements.
g. Known vs Unknown:
- Known: In a known environment, the agent has complete knowledge of the environment's dynamics and
can predict the outcomes of its actions accurately. For example, a puzzle with a well-defined solution space
is known.
- Unknown: In an unknown environment, the agent does not know the outcomes of its actions in advance
and must learn how the environment works through experience. For example, playing a new game without
knowing its rules places the agent in an unknown environment.
9. Describe the following agents:
a. Reflex Agent:
A reflex agent selects actions based on the current percept without considering the history of past
percepts. It uses a set of condition-action rules (or reflexes) to respond to specific stimuli. For example, a
simple thermostat adjusts temperature based on the current reading, without considering past
temperatures.
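A minimal Python sketch of such a reflex rule, with the temperature thresholds assumed for illustration:

def thermostat_agent(temperature):
    # The action depends only on the current percept, not on any history.
    if temperature < 18:
        return 'heat_on'
    elif temperature > 24:
        return 'heat_off'
    return 'no_op'

print(thermostat_agent(15))  # -> 'heat_on'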
b. Model-Based Agent:
A model-based agent maintains an internal model of the environment to handle partial observability. It
uses this model to make decisions based on both current and past percepts. For example, a navigation
system that keeps track of known obstacles and updates its map as it moves is model-based.
c. Goal-Based Agent:
A goal-based agent chooses actions based on achieving specific goals or objectives. It uses search and
planning to determine the sequence of actions required to achieve its goals. For example, a travel planning
system that selects routes based on reaching a destination efficiently is goal-based.
d. Utility-Based Agent:
A utility-based agent makes decisions based on a utility function that quantifies the desirability of different
states. It aims to maximize its overall satisfaction or utility by evaluating the trade-offs between different
actions. For example, a recommendation system that suggests products based on user preferences and
expected satisfaction is utility-based.
e. Learning Agent:
A learning agent improves its performance over time based on experience. It adapts its behavior by
learning from feedback and adjusting its actions to enhance its effectiveness. For example, a chess-playing
AI that refines its strategy by learning from past games and outcomes is a learning agent.
10. Explain the 5 components required to define a problem.
1. Initial State:
The initial state represents the starting configuration of the problem. It provides the baseline from which
the agent begins its search for a solution. For example, in the 8-puzzle problem, the initial state is the
configuration of tiles at the start.
2. Actions:
Actions are the possible moves or operations the agent can perform to transition from one state to
another. They define the set of operations available to the agent. For example, in a robot navigation
problem, actions might include moving forward, turning left, or turning right.
3. Transition Model:
The transition model describes how actions lead to changes in the state of the environment. It specifies the
resulting state after applying a given action to the current state. For example, in the vacuum world, the
transition model explains how moving the vacuum cleaner affects the environment's cleanliness.
4. Goal State:
The goal state defines the desired outcome or final configuration that the agent aims to achieve. It
represents the conditions under which the problem is considered solved. For example, in the 8-puzzle
problem, the goal state is the configuration where all tiles are in their correct positions.
5. Path Cost:
Path cost represents the cost associated with a sequence of actions taken to reach the goal state. It
quantifies the effort, time, or resources required to achieve the goal. For example, in a pathfinding
problem, the path cost might be the total distance traveled or the total time spent.
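These five components map naturally onto a small interface; the Python class below is a hedged sketch of
one common way to encode them, not a standard API.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state  # component 1: initial state
        self.goal_state = goal_state        # component 4: goal state

    def actions(self, state):
        """Component 2: the actions available in a given state."""
        raise NotImplementedError

    def result(self, state, action):
        """Component 3: the transition model (state reached by an action)."""
        raise NotImplementedError

    def goal_test(self, state):
        """Checks component 4: is this state the goal?"""
        return state == self.goal_state

    def step_cost(self, state, action):
        """Component 5: one action's contribution to the path cost."""
        return 1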
11. Write the States, Initial State, Actions, Transition Model, State Cost, and Path Cost to formulate the
following problems:
a. Vacuum World:
- States: All possible configurations of the vacuum cleaner's location and the cleanliness of the rooms.
- Initial State: The initial position of the vacuum cleaner and the cleanliness status of the rooms (e.g., dirty
or clean).
- Actions: Move the vacuum cleaner to the left, move it to the right, clean the current location.
- Transition Model: Describes how the vacuum cleaner's position and the cleanliness of the rooms change
after an action. For example, moving the vacuum cleaner to the right changes its position, and cleaning
changes the cleanliness status of the current location.
- State Cost: Typically binary (clean or dirty).
- Path Cost: The number of actions performed (e.g., number of moves and cleaning actions); a code sketch
of this formulation follows at the end of this question.
b. 8-Puzzle:
- States: All possible configurations of the 8-puzzle board, where tiles can be arranged in various ways.
- Initial State: The initial configuration of the tiles on the 3x3 board.
- Actions: Move a tile into the empty space (up, down, left, or right).
- Transition Model: Describes how the position of tiles changes when a tile is moved into the empty space.
- State Cost: Number of misplaced tiles or Manhattan distance.
- Path Cost: Number of moves taken to reach the goal state.
c. 8-Queen:
- States: All possible configurations of 8 queens placed on a chessboard.
- Initial State: An empty chessboard with no queens placed.
- Actions: Place a queen on an empty square in any column.
- Transition Model: Describes how the placement of a queen affects the board's configuration, including
conflicts with other queens.
- State Cost: Number of pairs of queens attacking each other.
- Path Cost: Number of queens placed so far.
d. Traveling Salesperson:
- States: All possible permutations of city visits.
- Initial State: The starting city and the list of cities to be visited.
- Actions: Travel from the current city to another city.
- Transition Model: Describes the cost (distance or time) of traveling between cities.
- State Cost: Total distance or travel time for the current route.
- Path Cost: Total distance or travel time accumulated during the trip.
e. Robot Navigation:
- States: All possible positions and orientations of the robot in the environment.
- Initial State: The starting position and orientation of the robot.
- Actions: Move forward, turn left, turn right.
- Transition Model: Describes how the robot's position and orientation change based on actions.
- State Cost: Distance from the robot to the goal or obstacles encountered.
- Path Cost: Total distance traveled or time taken to reach the goal.
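To make formulation (a) concrete, here is a minimal Python sketch of the two-square vacuum world; the
state encoding (vacuum location plus the status of each square) and the action names are illustrative
conventions.

# A state is (location, status_of_A, status_of_B).
INITIAL_STATE = ('A', 'Dirty', 'Dirty')

def actions(state):
    return ['Left', 'Right', 'Suck']

def result(state, action):
    """Transition model: the state reached by applying an action."""
    loc, a, b = state
    if action == 'Left':
        return ('A', a, b)
    if action == 'Right':
        return ('B', a, b)
    # 'Suck' cleans the current square.
    return (loc, 'Clean', b) if loc == 'A' else (loc, a, 'Clean')

def goal_test(state):
    return state[1] == 'Clean' and state[2] == 'Clean'

# Path cost: one unit per action, so a solution's cost is its length.
print(result(INITIAL_STATE, 'Suck'))  # -> ('A', 'Clean', 'Dirty')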
12. Describe General Tree-Search Algorithm
The General Tree-Search Algorithm is a foundational algorithm used in AI to explore the possible states and
actions in a problem domain. It is used to find a solution path from the initial state to a goal state by
systematically exploring the state space.
Steps:
1. Initialize: Place the initial state in the search tree as the root node and add it to the frontier.
2. Select: Choose a node from the frontier to expand, based on the search strategy (e.g., breadth-first,
depth-first).
3. Test: Check whether the selected node is a goal state; if so, return the solution path.
4. Expand: Otherwise, generate all successor states of the node and add them to the frontier.
5. Repeat: Continue selecting, testing, and expanding nodes until the goal is found or the frontier is empty.
6. Terminate: The search terminates when a goal state is found or when all possible states have been
explored.
Characteristics:
- Space Complexity: Can be high as it stores all nodes in memory.
- Time Complexity: Depends on the size of the state space and the search strategy used.
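A minimal Python sketch of this loop follows; how the frontier is popped fixes the search strategy, and the
tiny state space is an assumed example.

def tree_search(start, goal, successors, pop_index):
    """Generic tree search: pop_index 0 gives FIFO (breadth-first),
    pop_index -1 gives LIFO (depth-first)."""
    frontier = [[start]]  # the frontier holds whole paths
    while frontier:
        path = frontier.pop(pop_index)  # select per the search strategy
        node = path[-1]
        if node == goal:  # goal test on selection
            return path
        for child in successors(node):  # expand: generate successors
            frontier.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(tree_search('A', 'D', lambda s: graph[s], pop_index=0))  # ['A', 'B', 'D']

Because plain tree search keeps no record of visited states, it can re-explore the same state along
different paths, which is exactly what the graph-search variant below avoids.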
13. Describe General Graph-Search Algorithm
The General Graph-Search Algorithm is an extension of the tree-search algorithm that avoids redundant
exploration of previously visited states by maintaining a record of visited states.
Steps:
1. Initialize: Place the initial state in the open list (or frontier) and create an empty closed list (or
explored set).
2. Select: Remove a node from the open list according to the search strategy.
3. Test: Check whether this node is a goal state; if so, return the solution path.
4. Expand: Generate the node's successors and add to the open list any that appear in neither the open list
nor the closed list.
5. Add to Closed List: Add the expanded node to the closed list so it is not expanded again.
6. Terminate: The search terminates when a goal state is found or when all reachable states have been
explored.
Characteristics:
- Avoids Redundancy: Prevents re-expanding previously visited states, improving efficiency compared to
tree search.
- Space Complexity: Maintains both open and closed lists, which can be memory-intensive.
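A minimal Python sketch: the loop is the same as tree search except for the closed list. The small state
space, including a cycle between A and B that plain tree search would loop on, is an assumed example.

from collections import deque

def graph_search(start, goal, successors):
    frontier = deque([[start]])  # open list: paths waiting to be expanded
    explored = set()             # closed list: states already expanded
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue  # skip states that were already expanded
        explored.add(node)
        for child in successors(node):
            if child not in explored:
                frontier.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['D'], 'D': []}
print(graph_search('A', 'D', lambda s: graph[s]))  # ['A', 'B', 'D']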
14. Explain Steps for Breadth-First Search Algorithm with an Example
Breadth-First Search (BFS): BFS explores all nodes at the present depth level before moving on to nodes at
the next depth level.
Steps:
1. Initialize: Start with the initial state and add it to the queue (open list). Mark it as visited.
2. Expand: Remove the front node from the queue and generate its successors.
3. Enqueue: Add each successor to the queue if it has not been visited.
4. Check Goal: If any successor is the goal state, return the path to this goal.
5. Repeat: Continue the process until the queue is empty or the goal state is found.
Example:
For a simple maze where you want to find a path from start (A) to goal (B):
- Initial State: Add A to the queue.
- Expand: Dequeue A, explore its neighbors (B, C).
- Enqueue: Add B and C to the queue.
- Goal Check: B is the goal; return the path A -> B.
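A runnable Python sketch of these steps on the maze example, with the neighbor lists assumed for
illustration:

from collections import deque

def bfs(start, goal, neighbors):
    queue = deque([[start]])  # FIFO queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for n in neighbors[node]:
            if n not in visited:
                visited.add(n)
                queue.append(path + [n])
    return None

maze = {'A': ['B', 'C'], 'B': [], 'C': []}
print(bfs('A', 'B', maze))  # -> ['A', 'B']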
15. Explain Steps for Depth-First Search Algorithm
Depth-First Search (DFS): DFS explores as far as possible along a branch before backtracking.
Steps:
1. Initialize: Start with the initial state and push it onto the stack (open list). Mark it as visited.
2. Expand: Pop the top node from the stack and generate its successors.
3. Push: Push each successor onto the stack if it has not been visited.
4. Check Goal: If any successor is the goal state, return the path to this goal.
5. Repeat: Continue the process until the stack is empty or the goal state is found.
Example:
For the same maze:
- Initial State: Push A onto the stack.
- Expand: Pop A, explore its neighbors (B, C).
- Push: Push B and C onto the stack.
- Goal Check: Continue until B is found.
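The corresponding Python sketch, again with assumed neighbor lists, simply replaces the queue with a
stack:

def dfs(start, goal, neighbors):
    stack = [[start]]  # LIFO stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()  # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for n in neighbors[node]:
            if n not in visited:
                stack.append(path + [n])
    return None

maze = {'A': ['B', 'C'], 'B': [], 'C': []}
print(dfs('A', 'B', maze))  # -> ['A', 'B']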
16. Explain Steps for A* Search Algorithm
A* Search: A* is an informed search algorithm that uses a heuristic to estimate the remaining cost to the
goal, combining the path cost so far with the heuristic estimate.
Steps:
1. Initialize: Start with the initial state, add it to the open list with a cost of f(n) = g(n) + h(n), where g(n) is
the cost to reach the node and h(n) is the heuristic estimate to the goal.
2. Expand: Remove the node with the lowest f(n) from the open list.
3. Check Goal: If this node is the goal state, return the path.
4. Generate Successors: Calculate f(n) for each successor and add them to the open list if not already
visited or if a cheaper path is found.
5. Update Costs: Update the costs and paths for successors as necessary.
6. Repeat: Continue until the goal state is reached or the open list is empty.
Characteristics:
- Optimal: A* is guaranteed to find an optimal path if the heuristic is admissible (never overestimates the
true cost).
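A minimal Python sketch of A*; the weighted graph and heuristic values are assumed for illustration, and
the heuristic shown is admissible for this graph.

import heapq

def a_star(start, goal, neighbors, h):
    # Open-list entries are (f, g, path), ordered by f(n) = g(n) + h(n).
    open_list = [(h(start), 0, [start])]
    best_g = {start: 0}  # cheapest known cost to each state
    while open_list:
        f, g, path = heapq.heappop(open_list)  # lowest f(n) first
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, cost in neighbors[node]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):  # cheaper path found
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, path + [nxt]))
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}.get  # assumed admissible heuristic
print(a_star('A', 'D', graph, h))  # -> (['A', 'B', 'C', 'D'], 3)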
17. Explain Best First Search Algorithm
Best First Search: Best First Search explores the search space by selecting nodes based on a heuristic
function.
Steps:
1. Initialize: Start with the initial state and add it to the open list with an evaluation based on the heuristic
function.
2. Expand: Remove the node with the best heuristic value (lowest cost estimate) from the open list.
3. Check Goal: If this node is the goal state, return the path.
4. Generate Successors: Evaluate successors based on the heuristic function and add them to the open list.
5. Repeat: Continue until the goal state is found or the open list is empty.
Characteristics:
- Heuristic-Based: Focuses on nodes that appear more promising according to the heuristic.
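Greedy best-first search, the common special case in which the evaluation function is the heuristic h(n)
alone, can be sketched as follows; the graph and heuristic values are assumed for illustration. Because it
ignores the path cost already incurred, it is fast but, unlike A*, not guaranteed to find an optimal path.

import heapq

def best_first(start, goal, neighbors, h):
    open_list = [(h(start), [start])]  # ordered by heuristic value alone
    visited = set()
    while open_list:
        _, path = heapq.heappop(open_list)  # most promising node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors[node]:
            if nxt not in visited:
                heapq.heappush(open_list, (h(nxt), path + [nxt]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}.get
print(best_first('A', 'D', graph, h))  # -> ['A', 'C', 'D']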
18. Define Heuristic Function. Give an Example Heuristic Function for Solving 8-Puzzle Problem.