AOA 2019 Solution

Q1.) Define complexity with its notations.

Ans) In Algorithm Analysis or Analysis of Algorithms (AOA), complexity refers to the efficiency or resource usage of an algorithm. It is commonly categorized into time complexity and space complexity.

*Time Complexity (T(n)):

Time complexity is a measure of the amount of time an algorithm takes to complete as a function of the size of the input. It is denoted by T(n), where "n" represents the size of the input.

Common notations used to describe time complexity include:

# Big O Notation (O): Describes the upper bound of the time complexity.

# Omega Notation (Ω): Describes the lower bound of the time complexity.

# Theta Notation (θ): Describes both the upper and lower bounds, providing a tight bound on the growth rate.

For example:

O(n): Linear time complexity.

O(log n): Logarithmic time complexity.


O(n^2): Quadratic time complexity.

*Space Complexity (S(n)):

Space complexity is a measure of the amount of memory or space an algorithm requires as a function of the size of the input. It is denoted by S(n).

Similar to time complexity, common notations for space complexity include:

# Big O Notation (O): Describes the upper bound of the space complexity.

# Omega Notation (Ω): Describes the lower bound of the space complexity.

# Theta Notation (θ): Provides a tight bound on the growth rate of the space requirements.

For example:

O(1): Constant space complexity.

O(n): Linear space complexity.

O(n^2): Quadratic space complexity.
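For illustration, here is a short Python sketch (an added example, not part of the original answer) showing how these notations describe real code:

```python
def contains(items, target):
    # Scans each element once: T(n) = O(n) time, S(n) = O(1) extra space.
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    # Nested loops compare every pair: T(n) = O(n^2) time, S(n) = O(1) extra space.
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```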

Q2.) Explain what a greedy algorithm and the dynamic programming approach are.

Ans) Greedy Algorithm:

A greedy algorithm is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In other words, it makes the best decision at each step without considering the global picture, with the belief that this will lead to an optimal solution. Greedy algorithms are often used for optimization problems.

The key components of a greedy algorithm are:

i) Greedy Choice Property: A global optimum can be arrived at by selecting a local optimum.

ii) Optimal Substructure: An optimal solution to the problem contains an optimal solution to its subproblems.

* Dynamic Programming:

Dynamic programming is a method for solving optimization problems by breaking them down into simpler overlapping subproblems and solving each subproblem only once, storing the solutions to subproblems in a table to avoid redundant work. Dynamic programming is applicable to problems exhibiting the property of optimal substructure, meaning that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.

Key features of dynamic programming include:

i) Optimal Substructure: An optimal solution to the problem contains optimal solutions to its subproblems.

ii) Overlapping Subproblems: The problem can be broken down into subproblems, and the same subproblems are encountered multiple times.

Dynamic programming is often implemented using a bottom-up approach (tabulation) or a top-down approach (memoization). Common problems solved using dynamic programming include:

~Shortest Path Problems: Like the Floyd-Warshall algorithm for finding the shortest paths in a weighted graph.

~Longest Common Subsequence (LCS): Finding the longest subsequence common to two given sequences.

Dynamic programming can be more time- and space-consuming compared to greedy algorithms, but it ensures optimal solutions to problems with overlapping subproblems. It is particularly useful because each subproblem needs to be solved only once.
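As a small illustration (a sketch, not part of the original answer), the memoized Fibonacci function below shows the dynamic programming idea of solving each overlapping subproblem only once:

```python
from functools import lru_cache

# Top-down dynamic programming (memoization): each Fibonacci number
# is computed once and cached, giving O(n) time instead of the
# exponential time of plain recursion.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155
```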

Q3.) Explain the greedy approach to solving problems.

Ans) The greedy approach is a problem-solving paradigm where, at each step, the algorithm makes the locally optimal choice with the hope that these choices will lead to a globally optimal solution. In other words, it makes the best possible decision at the current moment without considering the consequences of that decision on future steps.

Here are the key characteristics of the greedy approach:

i) Greedy Choice Property: A global optimum can be reached by selecting a local optimum (the best available option at the current step).

ii) Optimal Substructure: The solution to the entire problem can be constructed from the solutions to its subproblems.

The basic steps of a greedy algorithm are as follows:

1. Initialization: Start with an empty solution or a trivial initial solution.

2. Greedy Choice: At each step, make the best possible choice according to some criterion. This choice is the local optimum for that step.

3. Feasibility Check: Ensure that the chosen solution is feasible and doesn't violate any constraints.

4. Update Solution: Update the solution with the chosen option.

5. Termination: Repeat steps 2-4 until a complete solution is reached or a termination condition is met.

The greedy approach is often used to solve optimization problems where the goal is to find the best solution among a set of feasible solutions. It is important to note that while greedy algorithms are simple, they may not always guarantee a globally optimal solution. Therefore, it's crucial to analyze the problem and prove that the greedy choice at each step leads to an optimal solution.

Common examples of problems solved using the greedy approach include:

Activity Selection Problem: Selecting the maximum number of non-overlapping activities in a set with given start and finish times (see the sketch after this list).

Fractional Knapsack Problem: Selecting items with maximum value while respecting a weight constraint.

Huffman Coding: Constructing an optimal prefix-free binary code for data compression.
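To make the steps above concrete, here is a minimal Python sketch (an added illustration) of the Activity Selection Problem: sorting by finish time drives the greedy choice, and the start-time comparison is the feasibility check:

```python
def select_activities(activities):
    # Greedy activity selection; activities is a list of (start, finish) pairs.
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # feasibility check
            chosen.append((start, finish))  # update solution
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)]))
# [(1, 4), (5, 7)]
```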

Q4.) What is a minimum spanning tree?

Ans) In the context of Algorithm Analysis or Analysis of Algorithms (AOA), the term "minimum spanning tree" (MST) refers to a specific problem related to graphs. Let's break down the concept:

Definition:

*A minimum spanning tree is a tree that spans all the vertices of a connected, undirected graph.

*The goal is to find a tree that connects all vertices with the minimum possible sum of edge weights.

Graph Representation:

* The graph is represented as a set of vertices and edges, where each edge has an associated weight.

* The weights typically represent the cost, distance, or some other measure associated with the edges.

#Problem Formulation:

Given a connected, undirected graph with weighted edges, the task is to find a subset of edges that forms a tree and spans all vertices with the minimum possible total edge weight.

Notations in AOA:

In the analysis of algorithms, the time complexity and space complexity of algorithms for finding minimum spanning trees are often expressed using big O notation (O).

*The time complexity is expressed as O(f(n)), where "n" is the number of vertices or edges in the graph, and "f(n)" represents the computational cost of the algorithm in terms of the input size.

*The space complexity is similarly expressed using O(g(n)), where "g(n)" represents the amount of memory or storage required by the algorithm as a function of the input size.

Algorithms for Minimum Spanning Tree in AOA:

*Algorithms such as Kruskal's algorithm and Prim's algorithm are commonly used to find the minimum spanning tree in a graph.

*The efficiency of these algorithms is analyzed in terms of time and space complexity.

Applications:

The minimum spanning tree problem has practical applications in network design, circuit design, and various scenarios where the goal is to connect nodes while minimizing the overall cost.
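As an illustration of how such an algorithm works, here is a compact Python sketch of Kruskal's algorithm (an added example, assuming vertices are numbered 0 to n-1 and a simple union-find); sorting the edges dominates, giving O(E log E) time:

```python
def kruskal(num_vertices, edges):
    # edges is a list of (weight, u, v) tuples.
    parent = list(range(num_vertices))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):  # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v, weight))
            total += weight
    return mst, total

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
# ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)
```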

Q5) What is a cut and a min cut?

Ans) Cut:

*A cut in a graph is essentially a way of dividing the vertices into two disjoint sets.

*Formally, a cut is represented by a partition (S, V - S), where S is a subset of vertices and V - S is the complement set of vertices.

*The cut is defined by the edges that have one endpoint in S and the other in V - S.

Min Cut:

*The minimum cut is the cut with the smallest total weight of edges
crossing the cut.

*The goal is to find the cut that minimizes the sum of the weights of the
edges that are severed when the cut is made.

*The concept of minimum cut is particularly important in network flow problems and is used in various algorithms and applications.
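A minimal Python sketch (an added illustration with a made-up edge list) of computing the weight of a given cut:

```python
def cut_weight(edges, s_side):
    # Sum the weights of edges with exactly one endpoint in S;
    # edges is a list of (u, v, weight), s_side is the set S.
    return sum(w for u, v, w in edges if (u in s_side) != (v in s_side))

edges = [("a", "b", 3), ("b", "c", 1), ("a", "c", 4)]
print(cut_weight(edges, {"a"}))  # edges a-b and a-c cross the cut: 3 + 4 = 7
```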

Q6) Define Cook's theorem.

Ans) Cook's theorem, also known as the Cook-Levin theorem, is a fundamental result in theoretical computer science, particularly in the area of algorithm analysis and computational complexity. In the context of Algorithm Analysis or Analysis of Algorithms (AOA), Cook's theorem can be defined as follows:

# Decision Problem and Complexity Classes:

*Cook's theorem deals with decision problems, which are problems that require a yes/no answer.

*It introduces the concept of complexity classes, specifically the class NP (Non-deterministic Polynomial time), which consists of problems for which a solution can be verified quickly (in polynomial time).

Cook's Theorem Statement:

*Cook's theorem asserts the existence of an NP-complete problem, which is a decision problem that is both in NP and, in a certain sense, as hard as the hardest problems in NP.

*The theorem identifies a specific problem, the Boolean satisfiability problem (SAT), as the first NP-complete problem.

NP-Completeness:

NP-complete problems have the property that if a polynomial-time algorithm exists for one of them, then a polynomial-time algorithm exists for all problems in NP. Whether such an algorithm exists is the famous P vs. NP question.

Implications for Algorithmic Complexity:

Cook's theorem has significant implications for the field of algorithmic complexity. It suggests that solving certain problems efficiently (in polynomial time) is inherently challenging, as the difficulty of these problems is tied to the entire class of problems in NP.

Cook's Contributions:

Cook's work laid the foundation for the theory of NP-completeness and
computational complexity. It demonstrated the existence of problems
that are likely to be computationally hard and influenced subsequent
research in algorithms and complexity theory.

In summary, in the realm of Algorithm Analysis, Cook's theorem provides insight into the inherent difficulty of certain decision problems. It introduces the concept of NP-completeness and identifies the Boolean satisfiability problem as the first NP-complete problem, contributing to our understanding of the boundaries and challenges in computational complexity.

Q7)Define backtracking.

Ans) Backtracking is a problem-solving algorithmic technique that involves finding a solution incrementally by trying different options and undoing them if they lead to a dead end. It is commonly used in situations where you need to explore multiple possibilities to solve a problem, like searching for a path in a maze or solving puzzles like Sudoku. When a dead end is reached, the algorithm backtracks to the previous decision point and explores a different path until a solution is found or all possibilities have been exhausted.
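For example, the following Python sketch (an added illustration) uses backtracking to find a subset of numbers summing to a target; note the explicit undo step when a branch dead-ends:

```python
def subset_sum(nums, target, chosen=None, start=0):
    # Backtracking: extend the partial solution, undo on dead ends.
    chosen = [] if chosen is None else chosen
    if target == 0:
        return list(chosen)        # valid complete solution found
    for i in range(start, len(nums)):
        if nums[i] <= target:      # feasibility check (assumes positive numbers)
            chosen.append(nums[i])  # tentative choice
            result = subset_sum(nums, target - nums[i], chosen, i + 1)
            if result is not None:
                return result
            chosen.pop()           # dead end: backtrack and try another option
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [3, 4, 2]
```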

Q8) P, NP and NP-hard problems?

Ans) 1) P (Polynomial Time):

*Problems in class P are the ones that can be solved in polynomial time.

*This means that the running time of the algorithm to solve the
problem is polynomial in terms of the size of the input.

Examples include sorting algorithms like quicksort and mergesort.

2) NP (Non-deterministic Polynomial Time):

*Problems in class NP are those for which a solution can be verified quickly, but finding a solution quickly is not guaranteed.

*If you have a solution to an NP problem, you can check its correctness
efficiently.

*The term "non-deterministic" refers to the theoretical concept of a non-deterministic Turing machine, which can guess the correct solution.

Examples include the traveling salesman problem and the Boolean satisfiability problem.

3) NP-hard (Non-deterministic Polynomial Time hard):

*A problem is NP-hard if it is at least as hard as the hardest problems in NP.

*Informally, an NP-hard problem is one to which any problem in NP can be reduced in polynomial time.

*If you could solve an NP-hard problem in polynomial time, you could
solve any problem in NP in polynomial time as well (and therefore
prove P = NP).

*NP-hard problems may or may not be in NP themselves.

Examples include the Boolean satisfiability problem (SAT) and the traveling salesman problem.

4) NP-complete (Non-deterministic Polynomial Time complete):

*A problem is NP-complete if it is both in NP and NP-hard.

*NP-complete problems are, in a sense, the "hardest" problems in NP.


*If you can find a polynomial-time algorithm for any NP-complete
problem, you can solve all problems in NP in polynomial time (and
prove P = NP).

*The concept of NP-completeness was introduced by Stephen Cook with his discovery of Cook's theorem.

*The first problem shown to be NP-complete was the Boolean satisfiability problem (SAT).

Q9)Define Assignment Problem.

Ans) The Assignment Problem is a classic optimization problem in the field of operations research and combinatorial optimization. It involves finding the most efficient assignment of a set of tasks (jobs) to a set of agents (workers) in terms of minimizing the total cost or maximizing the total profit.

Here's a formal description:

#Given:

A set of tasks (or jobs).

A set of agents (or workers).

A cost (or profit) associated with assigning a particular agent to a specific task.

# Objective:

Minimize the total cost or maximize the total profit by determining the
optimal assignment of tasks to agents.

The problem can be represented as a cost (or profit) matrix, where the
rows correspond to tasks, the columns correspond to agents, and the
entries represent the cost (or profit) associated with assigning a
particular task to a specific agent. The goal is to find a one-to-one
assignment of tasks to agents that minimizes the total cost or
maximizes the total profit.

The Assignment Problem is a special case of the more general bipartite matching problem, where the goal is to find a maximum cardinality matching in a bipartite graph. Various algorithms, such as the Hungarian algorithm, are commonly used to efficiently solve the Assignment Problem and find the optimal assignment. The problem has practical applications in various fields, including logistics, scheduling, and resource allocation.
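As an illustration (assuming SciPy is available; the cost matrix below is made up), scipy.optimize.linear_sum_assignment solves the Assignment Problem with a Hungarian-style algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = tasks, columns = agents; entries are assignment costs.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)           # optimal one-to-one assignment
print(list(zip(rows.tolist(), cols.tolist())))     # [(0, 1), (1, 0), (2, 2)]
print(cost[rows, cols].sum())                      # minimum total cost: 5
```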

Q10.) Differentiate between a feasible and an optimal solution.

Ans)

1) Feasible Solution:

*A feasible solution is a solution that satisfies all the constraints of the problem.

*It is a point in the solution space that meets the requirements imposed by the problem's constraints.

*Feasible solutions may or may not be the best or optimal solutions; they just need to adhere to the problem's constraints.

2) Optimal Solution:
*An optimal solution is a feasible solution that either maximizes or
minimizes the objective function of the problem.

*In optimization problems, there is typically an objective function that needs to be optimized (either maximized or minimized).

*An optimal solution is the one that achieves the best possible value of
the objective function while still satisfying all the constraints.

There can be many feasible solutions, but only those that achieve the best value of the objective function are optimal.

Part-B
Q1) Using the Quick Sort algorithm, sort the following sequence:

A={13,19,9,5,12,8,7,4,21,2,6,11}

Ans) Quick Sort is a divide-and-conquer algorithm that works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays according to whether they are less than or greater than the pivot. The sub-arrays are then sorted recursively. Here are the steps for A = {13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11}:

Choose a Pivot:

Let's choose the last element, 11, as the pivot.

Partition:

Rearrange the array such that all elements less than the pivot are on the left, and all elements greater than the pivot are on the right.

Before Partitioning:
[13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11]

After Partitioning:
[9, 5, 8, 7, 4, 2, 6] [11] [13, 19, 12, 21]

Recursion:

Apply the Quick Sort algorithm recursively to the sub-arrays on the left and right of the pivot:

[9, 5, 8, 7, 4, 2, 6] (recursive sort) [11] [13, 19, 12, 21] (recursive sort)

Repeat:

Repeat the process until each sub-array is of length 0 or 1:

[5, 4, 2, 6, 7, 8, 9] [11] [12, 13, 19, 21]
[2, 4, 5, 6, 7, 8, 9] [11] [12, 13, 19, 21]
[2, 4, 5, 6, 7, 8, 9, 11, 12, 13, 19, 21]

Now, the entire array is sorted. The sorted array using the Quick Sort algorithm is:

[2, 4, 5, 6, 7, 8, 9, 11, 12, 13, 19, 21]
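A runnable Python version of the procedure described above (a sketch using the Lomuto partition scheme with the last element as pivot):

```python
def quick_sort(a, low=0, high=None):
    # In-place Quick Sort; sorts a[low..high] recursively.
    if high is None:
        high = len(a) - 1
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)    # sort elements left of the pivot
        quick_sort(a, p + 1, high)   # sort elements right of the pivot
    return a

def partition(a, low, high):
    pivot = a[high]                  # choose the last element as pivot
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:            # smaller elements go to the left side
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]  # place pivot in its final position
    return i + 1

print(quick_sort([13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11]))
# [2, 4, 5, 6, 7, 8, 9, 11, 12, 13, 19, 21]
```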

Q2) Given 10 files with lengths {28, 32, 12, 5, 84, 53, 91, 35, 3, 11}, find the optimal merge pattern. Also calculate the total number of moves.

Ans) The optimal merge pattern and the total number of moves can be found using the Huffman (greedy two-way merge) algorithm. The algorithm builds a binary tree that represents the optimal merge pattern, and the total number of moves corresponds to the sum of the merged file sizes at each step in the construction of the tree.

Here are the steps to find the optimal merge pattern:

1) Sort the File Lengths:

*Sort the file lengths in ascending order.

*File lengths: {3, 5, 11, 12, 28, 32, 35, 53, 84, 91}

2) Build the Merge Tree:

*Create a priority queue (min-heap) based on the sorted file lengths.

*Repeat until only one element remains in the priority queue: remove the two smallest elements, create a new node with a length equal to the sum of the removed elements (this sum is the cost of that merge), and insert the new node back into the priority queue.

3) Trace the Merges:

3 + 5 = 8 → {8, 11, 12, 28, 32, 35, 53, 84, 91}
8 + 11 = 19 → {12, 19, 28, 32, 35, 53, 84, 91}
12 + 19 = 31 → {28, 31, 32, 35, 53, 84, 91}
28 + 31 = 59 → {32, 35, 53, 59, 84, 91}
32 + 35 = 67 → {53, 59, 67, 84, 91}
53 + 59 = 112 → {67, 84, 91, 112}
67 + 84 = 151 → {91, 112, 151}
91 + 112 = 203 → {151, 203}
151 + 203 = 354 → {354}

Total number of moves = 8 + 19 + 31 + 59 + 67 + 112 + 151 + 203 + 354 = 1004.
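The same greedy procedure as a short Python sketch (an added illustration) using a min-heap:

```python
import heapq

def optimal_merge_cost(lengths):
    # Always merge the two smallest files; the accumulated merge
    # sizes give the total number of record moves.
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        merged = heapq.heappop(heap) + heapq.heappop(heap)
        total += merged              # every record in both files is moved
        heapq.heappush(heap, merged)
    return total

print(optimal_merge_cost([28, 32, 12, 5, 84, 53, 91, 35, 3, 11]))  # 1004
```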

Q3) What do you understand by the dynamic programming approach? Also illustrate its elements.

Ans) Dynamic programming is a powerful algorithmic technique used to solve optimization problems by breaking them down into smaller overlapping subproblems and solving each subproblem only once, storing the solutions to subproblems in a table to avoid redundant computations. It is particularly useful when a problem has overlapping subproblems and optimal substructure, meaning the optimal solution to the problem can be constructed from the optimal solutions of its subproblems.

The key elements of dynamic programming include:

*Optimal Substructure:

The principle of optimal substructure means that an optimal solution to the problem contains optimal solutions to its subproblems. In other words, if we know the optimal solutions to smaller instances of the problem, we can efficiently construct the optimal solution to the larger problem.

*Overlapping Subproblems:

Overlapping subproblems occur when the same subproblems are solved multiple times in the process of solving the larger problem. Dynamic programming exploits this property by storing the solutions to subproblems in a table or memoization array, preventing redundant computations.
*Memoization:

Memoization involves storing the solutions to subproblems in a data structure (often an array or a hash table). Before solving a subproblem, the algorithm checks whether the solution is already available in the table. If it is, the stored solution is used instead of recomputing it.

*Bottom-Up Approach (Tabulation):

In the bottom-up approach, the algorithm starts by solving the smallest subproblems and gradually builds up to the larger problem. The solutions to subproblems are stored in a table, and the algorithm iteratively fills in the table until the solution to the original problem is obtained.

*Top-Down Approach (Memoization):

In the top-down approach, also known as memoization, the problem is solved by recursively breaking it down into smaller subproblems. The solutions to subproblems are memoized (stored) to avoid redundant computations. This approach is often implemented using recursion and is more intuitive, but may suffer from stack overflow for deep recursion.

*State Transition and Recurrence Relation:

Dynamic programming problems can often be expressed in terms of state transitions and recurrence relations. The relationship between the solution to a problem and the solutions to its subproblems is captured by a recurrence relation, and this helps in defining the state transition.

*Optimization and Subproblem Dependencies:

Dynamic programming is particularly effective for optimization problems where there are choices to be made and the goal is to find the best combination of choices. The solution to a subproblem depends on the choices made and the solutions to other subproblems.
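As an illustration of tabulation and a recurrence relation (a sketch, not part of the original answer), here is bottom-up dynamic programming for the Longest Common Subsequence:

```python
def lcs_length(x, y):
    # dp[i][j] = LCS length of x[:i] and y[:j]. Recurrence:
    #   dp[i][j] = dp[i-1][j-1] + 1             if x[i-1] == y[j-1]
    #            = max(dp[i-1][j], dp[i][j-1])  otherwise
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```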

Q4) Using Strassen's matrix multiplication algorithm, compute the matrix product

A=[

Q5) Explain the vertex cover and set cover problem.

Ans) Vertex Cover Problem:

The Vertex Cover Problem is a classic optimization problem in graph theory. Given an undirected graph G = (V, E), where V is the set of vertices and E is the set of edges, the goal is to find the smallest set of vertices such that every edge in the graph is incident to at least one vertex from the set. This set of vertices is called a "vertex cover."

Formally, a vertex cover C is a subset of vertices C ⊆ V such that for every edge (u, v) ∈ E, at least one of the vertices u or v belongs to C. The Vertex Cover Problem seeks to find a vertex cover of minimum size.

The Vertex Cover Problem is known to be an NP-complete problem, meaning that it is unlikely to have a polynomial-time algorithm for solving it optimally unless P equals NP.

Set Cover Problem:

The Set Cover Problem is a classical problem in combinatorial optimization. Given a finite set U and a collection S of subsets of U, the goal is to find the smallest subcollection of S such that the union of the selected subsets covers all elements of U.

Formally, let U be a finite set and S = {S1, S2, ..., Sm} a collection of subsets of U whose union is U. A set cover is a subcollection C ⊆ S whose union is also U. The Set Cover Problem seeks to find a set cover of minimum size.

The Set Cover Problem is known to be NP-complete as well. There is a straightforward greedy algorithm for approximating the solution to the Set Cover Problem, but finding an optimal solution is generally computationally expensive.

Both the Vertex Cover and Set Cover Problems are examples of
problems that are computationally hard to solve exactly in the general
case. They have applications in various fields, including network design,
resource allocation, and logistics optimization.
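The straightforward greedy approximation for Set Cover mentioned above, as a short Python sketch (an added illustration; it picks the subset covering the most uncovered elements at each step and yields an approximate, not necessarily optimal, cover):

```python
def greedy_set_cover(universe, subsets):
    # Repeatedly choose the subset that covers the most uncovered elements.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            return None          # some element cannot be covered
        cover.append(best)
        uncovered -= best
    return cover

print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
# [{1, 2, 3}, {4, 5}]
```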
Q6) Differentiate between backtracking and branch and bound algorithms.

Q7) Explain the quadratic assignment problem.


Part-C
