
DAA VIVA:-

1) What is DAA?

Ans: The design and analysis of algorithms is a field in computer science that focuses on creating
efficient and effective methods for solving computational problems. It involves designing algorithms to
solve specific problems and analyzing their performance in terms of time complexity, space
complexity, and other factors to determine their efficiency and scalability. The goal is to develop
algorithms that can solve problems quickly and with minimal resource usage.

2) What is BFS?

Ans: Breadth-First Search (BFS) is a graph traversal algorithm that explores the vertices of a graph level by level. It starts at a specified source vertex and visits all of its neighboring vertices before moving on to their neighbors, and so on. Time complexity: O(V + E). Data structure: queue.
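A minimal sketch of BFS in Python, assuming the graph is given as an adjacency list (a dict mapping each vertex to a list of neighbors); the names graph, source and bfs are illustrative:

```python
from collections import deque

def bfs(graph, source):
    # graph: dict mapping each vertex to a list of neighboring vertices
    visited = {source}
    order = []                 # vertices in the order they are visited
    queue = deque([source])    # BFS uses a FIFO queue
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbour in graph.get(vertex, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Example: bfs({'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}, 'A') -> ['A', 'B', 'C', 'D']
```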

3) What is DFS?

Ans: Depth-First Search (DFS) is a graph traversal algorithm that starts from a chosen vertex and explores as far as possible along each branch before backtracking. It follows the concept of depth: it goes as deep as it can before moving on to other branches. The algorithm can be implemented recursively or with an explicit stack data structure. Time complexity: O(V + E). Data structure: stack.
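A minimal recursive sketch under the same assumed adjacency-list representation (function and argument names are illustrative):

```python
def dfs(graph, source, visited=None, order=None):
    # Recursive DFS: go as deep as possible along each branch before backtracking.
    if visited is None:
        visited, order = set(), []
    visited.add(source)
    order.append(source)
    for neighbour in graph.get(source, []):
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)
    return order

# Example: dfs({'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}, 'A') -> ['A', 'B', 'D', 'C']
```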

4) What is merge sort?

Ans: Merge Sort is a popular sorting algorithm that follows the divide-and-conquer strategy to efficiently sort an array or list of elements. It works by recursively dividing the input array into smaller subarrays until each subarray contains only one element, and then merging these subarrays in sorted order to rebuild the full sorted array. The time complexity of Merge Sort is O(n log n), where n is the number of elements being sorted; this holds for the worst, best, and average cases.
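A short, illustrative Python sketch of the divide and merge steps (not tuned for performance):

```python
def merge_sort(arr):
    # Divide: split until subarrays of length <= 1 remain.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example: merge_sort([5, 2, 4, 1, 3]) -> [1, 2, 3, 4, 5]
```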

5) What is Quick sort?

Ans: Quick Sort is a widely used sorting algorithm that follows the divide-and-conquer approach to efficiently sort an array or list of elements. It works by selecting a "pivot" element from the array and partitioning the other elements into two subarrays: one with elements less than the pivot and one with elements greater than the pivot. The subarrays are then sorted recursively, and the results are combined to obtain the final sorted array. The average and best-case time complexity of Quick Sort is O(n log n), where n is the number of elements being sorted. The worst-case time complexity is O(n^2), which occurs when the chosen pivots repeatedly produce highly unbalanced partitions (for example, on an already sorted array when the first or last element is always taken as the pivot).
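An illustrative Python sketch that, for simplicity, picks the middle element as the pivot and builds the partitions with list comprehensions (an in-place partitioning version would avoid the extra memory):

```python
def quick_sort(arr):
    # Pick a pivot, partition into smaller / equal / larger, recurse on the parts.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]          # middle element chosen as pivot here
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

# Example: quick_sort([3, 6, 1, 8, 2]) -> [1, 2, 3, 6, 8]
```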

6) What is greedy method?


Ans: The Greedy method is the simplest and most straightforward design approach. It is not a single algorithm but a technique: at each step a decision is made based only on the information currently available, without worrying about how that choice will affect future steps, and a choice once made is never reconsidered.
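As a small illustration of the greedy idea, the sketch below makes change using the assumed denominations 25, 10, 5 and 1, always taking the largest coin that still fits; for these particular denominations the greedy choice happens to be optimal, but in general a greedy method gives no such guarantee:

```python
def greedy_coin_change(amount, denominations=(25, 10, 5, 1)):
    # At every step take the largest coin that still fits -- a purely local choice.
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

# Example: greedy_coin_change(63) -> [25, 25, 10, 1, 1, 1]
```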
7) What is dynamic programming method?
Ans: Dynamic programming is a technique that breaks a problem into overlapping sub-problems and saves the result of each sub-problem so that it does not need to be computed again. Its main use is solving optimization problems.
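A minimal illustration of the idea using Fibonacci numbers with memoization (the function name fib and the memo dictionary are illustrative choices):

```python
def fib(n, memo=None):
    # Each sub-problem fib(k) is solved once and its result is stored (memoized).
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

# Example: fib(10) -> 55   (computed in O(n) time instead of exponential time)
```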
8) Difference between greedy method and dynamic programming Method?
Ans: Greedy method:-
1. The Greedy method also aims at an optimal solution, but it may not always reach one.
2. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the sub-problems that arise after the choice is made.
3. Example: Fractional Knapsack (see the sketch after this list).
4. In the Greedy method there is no general guarantee of obtaining the optimal solution.
Dynamic programming method:-
1. Dynamic Programming is used to obtain the optimal solution.
2. In Dynamic Programming we also choose at each step, but the choice may depend on the solutions to sub-problems.
3. Example: 0/1 Knapsack (see the sketch after this list).
4. Dynamic Programming is guaranteed to generate an optimal solution by the Principle of Optimality.
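A short contrast sketch on illustrative (value, weight) data: the fractional knapsack is solved greedily by value-to-weight ratio, while the 0/1 knapsack is solved with a dynamic-programming table. The function names and example numbers below are assumptions made for illustration:

```python
def fractional_knapsack(items, capacity):
    # Greedy: repeatedly take the item with the best value/weight ratio,
    # splitting the last item if it does not fit entirely.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
    return total

def knapsack_01(items, capacity):
    # Dynamic programming: dp[c] = best value achievable with capacity c.
    dp = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):   # backwards: each item used at most once
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

# items given as (value, weight) pairs:
# fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) -> 240.0
# knapsack_01([(60, 10), (100, 20), (120, 30)], 50)         -> 220
```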

9)Asymptotic Notations:

Asymptotic notations are mathematical notations that allow you to analyze an algorithm's running time by describing its behavior as the input size grows.

There are mainly three asymptotic notations:

Big-O Notation (O-notation)

Omega Notation (Ω-notation)

Theta Notation (Θ-notation)

1) Theta notation encloses the function from above and below. Since it represents both an upper and a lower bound on the running time of an algorithm, it is often used to state the average-case complexity of an algorithm.
Let f and g be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
2) Big-O notation represents an upper bound on the running time of an algorithm; therefore it gives the worst-case complexity. It is the most widely used notation in asymptotic analysis.
If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist a positive constant c and a natural number n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
3) Omega notation represents a lower bound on the running time of an algorithm; thus it provides the best-case complexity.
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c * g(n) ≤ f(n) for all n ≥ n0 }.
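As a small worked example: for f(n) = 3n^2 + 2n we have 3n^2 ≤ 3n^2 + 2n ≤ 5n^2 for all n ≥ 1, so taking c1 = 3, c2 = 5 and n0 = 1 in the definition gives f(n) = Θ(n^2); the same inequalities also show f(n) = O(n^2) and f(n) = Ω(n^2).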

10 )
