DAA Unit-1

Algorithm

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e., an algorithm can be implemented in more than one programming language.

From the data structure point of view, the following are some important categories of algorithms −

 Search − Algorithm to search for an item in a data structure.

 Sort − Algorithm to sort items in a certain order.

 Insert − Algorithm to insert an item in a data structure.

 Update − Algorithm to update an existing item in a data structure.

 Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm

Not all procedures can be called an algorithm. An algorithm should have the following characteristics −

 Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.

 Input − An algorithm should have 0 or more well-defined inputs.

 Output − An algorithm should have 1 or more well-defined outputs, and they should match the desired output.

 Finiteness − An algorithm must terminate after a finite number of steps.

 Feasibility − An algorithm should be feasible with the available resources.

 Independent − An algorithm should have step-by-step directions, which should be independent of any programming code.
How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, writing is problem- and resource-dependent. Algorithms are never written to support a particular programming code.

All programming languages share basic code constructs like loops (do, for, while) and flow control (if-else). These common constructs can be used to write an algorithm.

We usually write algorithms in a step-by-step manner, but that is not always the case. Algorithm writing is a process that is carried out after the problem domain is well-defined. That is, we should know the problem domain for which we are designing a solution.

Example

Let's try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.

Step 1 − START

Step 2 − declare three integers a, b & c

Step 3 − define values of a & b

Step 4 − add values of a & b

Step 5 − store output of step 4 to c

Step 6 − print c

Step 7 − STOP

Algorithms tell the programmers how to code the program. Alternatively, the
algorithm can be written as −

Step 1 − START ADD

Step 2 − get values of a & b

Step 3 − c ← a + b

Step 4 − display c

Step 5 − STOP
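
As a sketch, the second form maps almost directly onto a real language; in Python (with illustrative values chosen by us):

a = 5            # Step 2: get values of a and b
b = 7
c = a + b        # Step 3: c ← a + b
print(c)         # Step 4: display c (prints 12)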

In design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions, and to observe what operations are being used and how the process flows.

Writing step numbers is optional.

We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way.

Hence, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solution algorithms and implement the best suitable solution.

Fundamentals of Algorithmic Problem Solving

Designing an algorithm is a structured, logical process that involves multiple steps to ensure that the solution to a problem is both correct and efficient. The figure titled Figure 1.2.1: Algorithm design and analysis process outlines a step-by-step approach to designing and analyzing algorithms. Below is a detailed explanation of each step, including the critical distinction between exact and approximate solving.
Understand the Problem

The first and most crucial step in algorithm design is understanding the
problem thoroughly. This includes:

 Identifying the input and output clearly,

 Understanding the constraints (e.g., time limits, size of input),

 Clarifying what needs to be computed,

 Recognizing any special conditions or edge cases.

A poor understanding at this stage often leads to incorrect or incomplete solutions. Therefore, one must spend time reading and analyzing the problem before jumping into designing a solution.

Decide on Approach: Computational Means, Exact vs. Approximate Solving, and Algorithm Design Technique

Once the problem is clearly understood, the next step involves deciding how
to solve it effectively. This decision includes several sub-steps:

a. Computational Means

You must consider the computational model:

 Will the algorithm run on a sequential, parallel, or distributed system?

 What are the resource limitations (CPU, memory, etc.)?

b. Exact vs. Approximate Solving

This is a critical choice in algorithm design:

 Exact Algorithms always produce the correct and optimal solution. They are used when the problem demands a precise result, such as sorting, searching, and mathematical computations. For example, Dijkstra’s algorithm gives the exact shortest path in a weighted graph with non-negative edges.

 Approximate Algorithms are used when the problem is too complex (often NP-hard) to solve exactly within a reasonable time. These algorithms provide a solution that is close to optimal but much faster to compute. For example, in the Travelling Salesman Problem (TSP), exact solutions are computationally expensive, so heuristics or approximation techniques are used.

Choosing between exact and approximate depends on:

 The nature of the problem,

 The acceptable level of accuracy,

 The available computational resources,

 The time constraints.

c. Choose an Algorithm Design Technique

Some common algorithm design strategies include:

 Brute-force: Try all possible solutions.

 Divide and conquer: Break the problem into sub-problems, solve them independently, and combine results.

 Greedy approach: Make the best decision at each step, hoping for the global optimum.

 Dynamic programming: Solve complex problems by breaking them into simpler overlapping sub-problems.

 Backtracking: Try all possibilities and backtrack upon failure.

This step ensures that a well-suited technique is used to approach the problem efficiently.

Design the Algorithm

Once the strategy is decided, the algorithm is designed. This involves:

 Defining a clear sequence of steps or pseudocode,

 Ensuring that the steps solve the problem for all valid inputs,

 Identifying any loops, conditions, and function calls.

The algorithm should be general, robust, and logically consistent.

Prove Correctness

Designing an algorithm is not enough — we must prove that it is correct. This means:

 The algorithm must give the correct output for all possible valid inputs,

 It must terminate in a finite number of steps,

 Techniques like mathematical induction, loop invariants, and assertions can be used to establish correctness.

This step ensures that the algorithm works not only in some cases but in all cases that meet the problem definition.

Analyze the Algorithm

This step focuses on evaluating the performance of the algorithm:

 Time complexity: How does the algorithm’s running time grow with
the input size? (e.g., O(n), O(n log n), O(n²), etc.)

 Space complexity: How much memory does it consume?

 Consider the best case, worst case, and average case performance.

This analysis helps compare different algorithms and choose the most
efficient one for practical use.

Code the Algorithm

The final step is to implement the algorithm in a programming language (e.g., Python, C++, Java). During implementation:

 The logic from the algorithm must be translated correctly into code,

 Boundary cases must be handled,

 The solution must be tested using different inputs to verify correctness and efficiency.

Coding transforms the theoretical algorithm into a practical, usable tool.

Q : Important Problem Types in Algorithm Design

In computer science, certain types of problems are considered especially important due to their practical applications and theoretical interest.

There are different types:

1. Sorting
2. Searching
3. String processing
4. Graph problems
5. Combinatorial problems
6. Geometric problems
7. Numerical problems

1. Sorting

Sorting is the problem of arranging elements of a list in a specific order, usually non-decreasing. It plays a fundamental role in computer science because sorted data simplifies many operations such as searching and ranking. In many applications, sorting is performed based on a key, such as sorting student records by GPA or name. Sorting algorithms can differ in performance, stability, and memory usage. A stable sorting algorithm preserves the order of equal elements, which is useful when multiple attributes are involved. An in-place sorting algorithm requires minimal extra memory. Examples include Quicksort, Merge Sort, and Bubble Sort.

2. Searching

Involves finding a specific value, called a search key, within a set or multiset.
It is widely used in databases and information retrieval. Simple methods like
linear search work on any data, while more efficient methods like binary
search require sorted input. In dynamic applications, where data changes
frequently, search algorithms must be balanced with insertion and deletion
operations. The choice of data structures plays a key role in such situations.

3. String processing

Deals with operations on strings, which are sequences of characters. It has become increasingly important due to applications in text editing, data compression, and DNA analysis. One of the most studied problems is string matching, which involves finding a word or pattern in a larger text. Special algorithms like the Boyer-Moore algorithm have been developed to solve this efficiently.

4. Graph problems

Involve structures made up of vertices (points) and edges (connections). Graphs are used to model networks such as transportation, social networks, and communication systems. Common graph problems include graph traversal, shortest path, and topological sorting. More complex problems like the Traveling Salesman Problem (TSP) and graph coloring are also studied. TSP seeks the shortest tour visiting all cities once, while graph coloring is used in scheduling problems to minimize conflicts.

5. Combinatorial problems

Are those that involve finding specific combinations, subsets, or arrangements of items that satisfy given constraints. These problems are often very complex because the number of possible combinations grows rapidly with input size. Many combinatorial problems, such as TSP and graph coloring, are difficult to solve exactly in reasonable time, and are believed to have no efficient solutions.

6. Geometric problems

Focus on objects like points, lines, and polygons. They have gained renewed
importance with applications in computer graphics, robotics, and computer
vision. Classic problems include finding the closest pair of points and the
convex hull, which is the smallest polygon enclosing a set of points.

7. Numerical problems

Involve operations on continuous mathematical objects like real numbers. They include solving equations, computing integrals, and evaluating functions. Since computers represent real numbers approximately, issues like round-off errors are common. Though historically central to computing, numerical algorithms are now less emphasized due to the rise of business and data-related applications. Nevertheless, they remain important in scientific and engineering fields.

FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY

The efficiency of an algorithm can be measured in terms of time and space. Algorithm efficiency can be analyzed in the following ways.

a. Analysis Framework.

b. Asymptotic Notations and its properties.

c. Mathematical analysis for Recursive algorithms.

d. Mathematical analysis for non-recursive algorithms.


1.1 Analysis Framework

There are two kinds of efficiencies to analyze the efficiency of any algorithm.

They are:

• Time efficiency, indicating how fast the algorithm runs, and

• Space efficiency, indicating how much extra memory it uses.

The algorithm analysis framework consists of the following:

• Measuring an Input’s Size

• Units for Measuring Running Time

• Orders of Growth

• Worst-Case, Best-Case, and Average-Case Efficiencies

(i) Measuring an Input’s Size

• An algorithm’s efficiency is defined as a function of some parameter n indicating the algorithm’s input size. In most cases, selecting such a parameter is quite straightforward. For example, it will be the size of the list for problems of sorting and searching.

• For the problem of evaluating a polynomial p(x) = aₙxⁿ + . . . + a₀ of degree n, the size parameter will be the polynomial’s degree or the number of its coefficients, which is larger by 1 than its degree.

• In computing the product of two n × n matrices, the choice of a parameter indicating input size does matter.

• Consider a spell-checking algorithm. If the algorithm examines individual characters of its input, then the size is measured by the number of characters.

• In measuring input size for algorithms solving problems such as checking the primality of a positive integer n, the input is just one number. Here the input size is measured by the number b of bits in n’s binary representation: b = ⌊log₂ n⌋ + 1.
(ii) Units for Measuring Running Time

Some standard unit of time measurement, such as a second or millisecond, can be used to measure the running time of a program after implementing the algorithm. Drawbacks:

• Dependence on the speed of a particular computer.

• Dependence on the quality of a program implementing the algorithm.

• Dependence on the compiler used in generating the machine code, and the difficulty of clocking the actual running time of the program.

So, we need a metric to measure an algorithm’s efficiency that does not depend on these extraneous factors. One possible approach is to count the number of times each of the algorithm’s operations is executed, but this approach is excessively difficult. Instead, the most important operation of the algorithm (e.g., +, -, *, /) is identified and called the basic operation. Computing the number of times the basic operation is executed is easy, and the total running time is estimated by this basic operation count.

(iii) Orders of Growth

• A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.

• For example, when computing the greatest common divisor of two small numbers, it is not immediately clear how much more efficient Euclid’s algorithm is compared to the other algorithms; the difference in algorithm efficiencies becomes clear for larger numbers only.

• For large values of n, it is the function’s order of growth that counts, as shown in Table 1.1, which contains values of a few functions particularly important for the analysis of algorithms.

TABLE 1.1 Values (approximate) of several functions important for analysis of algorithms

n         | √n    | log₂n | n       | n·log₂n | n²     | n³     | 2ⁿ       | n!
1         | 1     | 0     | 1       | 0       | 1      | 1      | 2        | 1
2         | 1.4   | 1     | 2       | 2       | 4      | 8      | 4        | 2
4         | 2     | 2     | 4       | 8       | 16     | 64     | 16       | 24
8         | 2.8   | 3     | 8       | 24      | 64     | 512    | 256      | 40,320
10        | 3.2   | 3.3   | 10      | 33      | 100    | 1,000  | 1,024    | 3.6×10⁶
16        | 4     | 4     | 16      | 64      | 256    | 4,096  | 65,536   | 2.1×10¹³
100       | 10    | 6.6   | 100     | 660     | 10,000 | 10⁶    | 1.3×10³⁰ | 9.3×10¹⁵⁷
1,000     | 31.6  | 10    | 1,000   | 10,000  | 10⁶    | 10⁹    | ~        | very big
10,000    | 100   | 13.3  | 10,000  | 130,000 | 10⁸    | 10¹²   | ~        | ~
100,000   | 316   | 17    | 10⁵     | 1.7×10⁶ | 10¹⁰   | 10¹⁵   | ~        | ~
1,000,000 | 1,000 | 20    | 10⁶     | 2×10⁷   | 10¹²   | 10¹⁸   | ~        | ~

Explanation of Columns:

Column  | Meaning
n       | Input size (number of elements)
√n      | Square root of n — very slow growth
log₂n   | Log base 2 — grows slowly (e.g., binary search time)
n       | Linear growth — grows proportionally (e.g., a single for loop)
n·log₂n | Common in efficient sorting algorithms like merge sort
n²      | Quadratic time — nested loops (e.g., bubble sort)
n³      | Cubic time — three nested loops (e.g., matrix multiplication)
2ⁿ      | Exponential time — grows extremely fast (e.g., brute force on subsets)
n!      | Factorial time — worst case (e.g., permutations, traveling salesman)

(iv) Worst-Case, Best-Case, and Average-Case Efficiencies

Consider the sequential search algorithm, which searches for some search key K.

ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: The index of the first element in A that matches K, or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n
    return i
else
    return -1
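
A direct Python transcription of this pseudocode (a minimal sketch; the function name is ours) behaves the same way:

def sequential_search(A, K):
    """Return the index of the first element equal to K, or -1."""
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([9, 4, 7], 7))   # 2  (worst case: key in last position)
print(sequential_search([9, 4, 7], 9))   # 0  (best case: key in first position)
print(sequential_search([9, 4, 7], 5))   # -1 (unsuccessful search)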
Clearly, the running time of this algorithm can be quite different for the same list size n.

In the worst case, there is no matching element, or the first matching element is found at the last position in the list. In the best case, the matching element is found at the first position in the list.

Worst-case efficiency

• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n.

• The algorithm runs the longest among all possible inputs of that size.

• For the input of size n, the running time is Cworst(n) = n.

Best-case efficiency

• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n.

• The algorithm runs the fastest among all possible inputs of that size n.

• In sequential search, if the first element of the list equals the search key, then the running time is Cbest(n) = 1.

Average-case efficiency

• The average-case efficiency lies between the best case and the worst case.

• To analyze the algorithm’s average-case efficiency, we must make some assumptions about possible inputs of size n.

• The standard assumptions are that

o The probability of a successful search is equal to p (0 ≤ p ≤ 1), and

o The probability of the first match occurring in the ith position of the list is the same for every i.

Under these assumptions, Cavg(n) = p(n + 1)/2 + n(1 − p); for p = 1 (a guaranteed successful search) this gives (n + 1)/2, and for p = 0 (an unsuccessful search) it gives n.

Yet another type of efficiency is called amortized efficiency. It applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure.

Asymptotic Notations and Basic Efficiency Classes

Understanding the performance of algorithms is essential in computer


science. We use asymptotic notations to describe how the time or space
requirements of an algorithm grow with input size. These notations abstract
away hardware details, machine-specific constants, and lower-order terms,
allowing us to focus on the growth behavior of an algorithm.

What Are Asymptotic Notations?

Asymptotic notations provide a mathematical framework for analyzing an


algorithm's efficiency by describing its behavior as the input size approaches
infinity (i.e., large inputs). These notations help to standardize the way we
express runtime complexity and offer a common language to compare
algorithms.

There are three primary asymptotic notations:

 Big-O Notation (O) – Worst-case (Upper bound)

 Omega Notation (Ω) – Best-case (Lower bound)

 Theta Notation (Θ) – Tight bound (Average-case or exact bound)

 1. Theta Notation (Θ-Notation):

Theta notation bounds a function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm. For the average case, one adds the running times for every possible input combination and takes the average.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

Mathematical Representation of Theta notation:

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}
Note: Θ(g) is a set.
The above expression can be read as follows: if f(n) is theta of g(n), then the value of f(n) is always between c1·g(n) and c2·g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) be non-negative for values of n greater than n0.

Example:
If f(n) = 5n + 4, then f(n) = Θ(n), because it grows exactly like n: 5n ≤ 5n + 4 ≤ 6n for all n ≥ 4 (so c1 = 5, c2 = 6, n0 = 4).

2. Big-O Notation (O-notation):

Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.

• It is the most widely used notation for asymptotic analysis.
• It specifies the upper bound of a function: the maximum time required by an algorithm, i.e., the worst-case time complexity.
• It returns the highest possible output value (big-O) for a given input.
• Big-O (Worst Case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.

The execution time serves as an upper bound on the algorithm’s time complexity.

Mathematical Representation of Big-O Notation:

O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}

For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²).
Note: O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n²).
 The best-case time complexity of Insertion Sort is Θ(n).
Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.

Example:
For f(n) = 3n² + 2n + 1, ignore the smaller terms and constants → it becomes f(n) = O(n²).

Real Example:
 Insertion Sort → Worst case: O(n²)
3. Omega Notation (Ω-Notation):

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm’s time complexity.

It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n)
≤ f(n) for all n ≥ n0 }

Let us consider the same Insertion sort example here. The time complexity of
Insertion Sort can be written as Ω(n), but it is not very useful information
about insertion sort, as we are generally interested in worst-case and
sometimes in the average case.

Example:

For f(n) = n² + 3n, we say f(n) = Ω(n²), because it takes at least that much time.

Real Example:

 Insertion Sort → Best case: Ω(n) (when the array is already sorted)
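
These definitions can be sanity-checked numerically. The sketch below (a Python illustration; the constants c1, c2, n0 are our own picks) verifies that f(n) = 3n² + 2n + 1 satisfies both the upper- and lower-bound inequalities for g(n) = n², and is therefore Θ(n²):

# Verify c1*g(n) <= f(n) <= c2*g(n) for f(n) = 3n^2 + 2n + 1 and g(n) = n^2.
f = lambda n: 3*n*n + 2*n + 1
g = lambda n: n*n
c1, c2, n0 = 3, 6, 1    # candidate constants: 3n^2 <= f(n) <= 6n^2 for n >= 1
assert all(c1*g(n) <= f(n) <= c2*g(n) for n in range(n0, 10_000))
print("f(n) = Theta(n^2) holds with c1=3, c2=6, n0=1")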

General Framework for Analyzing Recursive Algorithms

The process for analyzing the time efficiency of a recursive algorithm follows
a structured plan:

General Steps:

1. Identify the input size parameter (e.g., n).

2. Identify the basic operation (e.g., multiplication, move, comparison).

3. Check variation in input: determine whether the operation count varies for inputs of the same size → if yes, analyze worst, average, and best cases separately.

4. Set up a recurrence relation to express the number of times the basic operation is executed.

5. Solve the recurrence to get the time complexity or asymptotic behavior.

Example 1: Factorial Function F(n) = n!

Recursive Definition:

 F(n) = n × F(n−1), for n > 0

 F(0) = 1
Recursive Algorithm:

Algorithm F(n)
    if n = 0:
        return 1
    else:
        return F(n−1) * n

Time Complexity Analysis:

Let M(n) be the number of multiplications:

 M(n) = M(n−1) + 1, for n > 0

 M(0) = 0

Solving the Recurrence (Backward Substitution):

M(n) = M(n-1) + 1

= M(n-2) + 1 + 1

= M(n-3) + 1 + 1 + 1

...

= M(0) + n

= 0 + n

⇒ M(n) = n ⇒ Time Complexity: Θ(n)

Space Complexity:

 Due to recursion depth: Θ(n)
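
A quick instrumented Python version (the counter is our own addition, for illustration) confirms that M(n) = n:

def factorial(n, counter):
    """Recursive n! that tallies multiplications in counter[0]."""
    if n == 0:
        return 1
    counter[0] += 1                  # one multiplication per recursive call
    return factorial(n - 1, counter) * n

for n in (0, 1, 5, 10):
    counter = [0]
    factorial(n, counter)
    print(n, counter[0])             # prints M(n) = n in each case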

Example 2: Tower of Hanoi

Problem Description:

Move n disks from peg A to peg C using peg B, following the rules:

 Only one disk at a time.

 No larger disk on a smaller one.

Recursive Solution:
Move n−1 disks from A to B

Move 1 disk from A to C

Move n−1 disks from B to C

Recursive Formula (Moves M(n)):

 M(n) = 2M(n−1) + 1

 M(1) = 1

Solving the Recurrence (Recursion Tree or Backward Substitution):

M(n) = 2M(n-1) + 1

= 2(2M(n-2) + 1) + 1 = 4M(n-2) + 2 + 1

= 8M(n-3) + 4 + 2 + 1

...

= 2^k * M(n-k) + (2^k - 1)

Let k = n−1:

= 2^(n−1) * M(1) + (2^(n−1) − 1)

= 2^(n−1) * 1 + (2^(n−1) − 1)

= 2^n − 1

⇒ Time Complexity: Θ(2ⁿ) (Exponential)

Space Complexity:

 Recursion depth = Θ(n)
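
The closed form M(n) = 2ⁿ − 1 can be checked by actually generating the moves. A minimal Python sketch (the peg names are arbitrary labels):

def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst using aux, recording every move."""
    if n == 1:
        moves.append((src, dst))
        return
    hanoi(n - 1, src, dst, aux, moves)   # clear n-1 disks onto the spare peg
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top

for n in (1, 3, 5):
    moves = []
    hanoi(n, 'A', 'B', 'C', moves)
    print(n, len(moves))                 # 1, 7, 31 = 2^n - 1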

Key Learnings from Both Examples:

Aspect               | Factorial Example | Tower of Hanoi Example
Input Size Parameter | n (number)        | n (number of disks)
Basic Operation      | Multiplication    | Disk move
Recurrence Relation  | M(n) = M(n−1) + 1 | M(n) = 2M(n−1) + 1
Initial Condition    | M(0) = 0          | M(1) = 1
Time Complexity      | Θ(n)              | Θ(2ⁿ)
Space Complexity     | Θ(n)              | Θ(n) (due to recursion depth)
Growth Rate          | Linear            | Exponential

General Insights for Recursive Algorithm Analysis:

 Recursive algorithms are natural and elegant, but they often hide
expensive time or space costs.

 Recurrence relations are the mathematical heart of analysis in


recursive algorithms.

 Solving a recurrence gives us a closed-form time complexity.

 Always provide an initial condition to uniquely determine the solution.

Mathematical Analysis for Non-Recursive Algorithms

1.1 General Plan for Analyzing Time Efficiency

Steps to analyze non-recursive algorithms:

1. Identify the input size (n).

2. Determine the basic operation (usually in the innermost loop).

3. Check if the operation count depends only on n. If not, analyze worst, average, and best cases.

4. Set up a summation for the count of basic operations.

5. Simplify the summation using standard formulas or find its order of growth (Big-O notation).
Example 1: Finding Maximum Element in Array

Algorithm:

Max ← A[0]
for i = 1 to n-1 do
    if A[i] > Max
        Max ← A[i]
return Max

Analysis:

 Input size: n (number of elements)

 Basic operation: Comparison A[i] > Max

 Comparisons done: n - 1 (one per loop)

 Same for all cases (no best/worst/average needed)

 Total: C(n) = n - 1

 Time complexity: O(n)
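
As a runnable Python sketch of the same loop:

def find_max(A):
    """Return the largest element using exactly n - 1 comparisons."""
    max_val = A[0]
    for i in range(1, len(A)):
        if A[i] > max_val:           # the basic operation
            max_val = A[i]
    return max_val

print(find_max([3, 9, 2, 7]))        # 9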

Example 2: Check for Unique Elements

Algorithm (sample input: 1, 2, 5, 6, 1, 5):

for i = 0 to n - 2 do
    for j = i + 1 to n - 1 do
        if A[i] == A[j]
            return false
return true

Analysis:

 Input size: n

 Basic operation: Comparison A[i] == A[j]

 Case considered: Worst case (no duplicates)

 Total comparisons:

C(n) = Σ (i = 0 to n−2) Σ (j = i+1 to n−1) 1 = n(n−1)/2

 Time complexity: O(n²)
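
A direct Python rendering (a sketch) that can be run on the sample input above:

def all_unique(A):
    """Return False as soon as any duplicate pair is found."""
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if A[i] == A[j]:         # the basic operation
                return False
    return True

print(all_unique([1, 2, 5, 6]))          # True
print(all_unique([1, 2, 5, 6, 1, 5]))    # False (1 and 5 repeat)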

Example 3: Matrix Multiplication (n × n)

Algorithm:

for i = 0 to n - 1 do
    for j = 0 to n - 1 do
        C[i][j] = 0
        for k = 0 to n - 1 do
            C[i][j] += A[i][k] * B[k][j]

Analysis:

 Input size: n (matrix size)

 Basic operation: Multiplication A[i][k] * B[k][j]

 Multiplications per element: n

 Elements in result: n²

 Total multiplications: n × n × n = n³

 Time complexity: O(n³)
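
The same triple loop in runnable Python (a sketch for small matrices):

def matrix_multiply(A, B):
    """Multiply two n x n matrices with the classic triple loop."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # basic operation, n^3 times
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]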

Example 4: Count Binary Digits

Algorithm:

count = 1
while n > 1 do
    count = count + 1
    n = floor(n / 2)
return count

Analysis:

 Input size: n

 Each iteration halves n

 Number of steps: ≈ ⌊log₂ n⌋ + 1

 Time complexity: O(log n)
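
In Python (a sketch; works for any positive integer n):

def binary_digit_count(n):
    """Count the bits in n's binary representation by repeated halving."""
    count = 1
    while n > 1:
        count += 1
        n //= 2                      # halve n on each iteration
    return count

print(binary_digit_count(16))        # 5  (16 = 10000 in binary)
print(binary_digit_count(255))       # 8  (255 = 11111111 in binary)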

Q : What is Selection Sort?

Selection Sort is a simple comparison-based sorting algorithm. It divides the input list into two parts:

 A sorted sublist built from left to right.

 An unsorted sublist from which the smallest (or largest) element is repeatedly selected and moved to the sorted sublist.

It gets its name because it selects the minimum (or maximum) element from the unsorted part during each iteration.

Selection Sort Algorithm

Step-by-step logic:

1. Start from the first element and search for the minimum element in
the rest of the array.

2. Swap it with the first element.

3. Move to the second element and repeat the process for the remaining
unsorted portion.
4. Repeat this until the entire array is sorted.

Pseudocode:


SelectionSort(A, n):
    for i = 0 to n-2:
        minIndex = i
        for j = i+1 to n-1:
            if A[j] < A[minIndex]:
                minIndex = j
        swap A[i] and A[minIndex]
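
This pseudocode translates almost line for line into runnable Python (a sketch):

def selection_sort(A):
    """In-place selection sort; one swap per pass of the outer loop."""
    n = len(A)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            if A[j] < A[min_index]:  # the comparison counted in the analysis
                min_index = j
        A[i], A[min_index] = A[min_index], A[i]
    return A

print(selection_sort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]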

Analysis

Selection sort is among the simplest sorting techniques, and it works very well for small files. It has one quite important property: each item is moved at most once.

Selection sort is a method of choice for sorting files with very large objects (records) and small keys. The worst case occurs if the array is already sorted in descending order and we want to sort it in ascending order.

Nonetheless, the time required by the selection sort algorithm is not very sensitive to the original order of the array to be sorted: the test A[j] < A[minIndex] is executed exactly the same number of times in every case.

Selection sort spends most of its time trying to find the minimum element in the unsorted part of the array. This clearly shows the similarity between selection sort and bubble sort.

 Bubble sort selects the maximum remaining element at each stage, but wastes some effort imparting some order to the unsorted part of the array.

 Selection sort is quadratic in both the worst and the average case, and requires no extra memory.

For each i from 1 to n - 1, there is one exchange and n - i comparisons, so there is a total of n - 1 exchanges and (n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1)/2 comparisons.

These observations hold no matter what the input data is.

The number of times the running minimum is updated does depend on the input: in the worst case it is quadratic, but in the average case it is only O(n log n). Either way, the n(n − 1)/2 comparisons dominate, which is why the running time of selection sort is quite insensitive to the input.

Example – Selection Sort on [29, 10, 14, 37, 13]

Start:
[29, 10, 14, 37, 13]
→ Minimum is 10, swap with 29
[10, 29, 14, 37, 13]

→ Minimum in remaining [29, 14, 37, 13] is 13, swap with 29
[10, 13, 14, 37, 29]

→ Minimum in [14, 37, 29] is 14, already in place
[10, 13, 14, 37, 29]

→ Minimum in [37, 29] is 29, swap with 37
[10, 13, 14, 29, 37]

✅ Final Sorted Array:

[10, 13, 14, 29, 37]

Complexity Analysis of Selection Sort

Time Complexity: O(n²), as there are two nested loops:

 One loop to select an element of the array one by one = O(n)

 Another loop to compare that element with every other array element = O(n)

 Therefore, overall complexity = O(n) × O(n) = O(n²)

Auxiliary Space: O(1) as the only extra memory used is for temporary
variables.
Q: Bubble Sort Algorithm

Bubble Sort is an elementary sorting algorithm, which works by repeatedly exchanging adjacent elements, if necessary. When no exchanges are required, the file is sorted.

We assume list is an array of n elements. We further assume that the swap function swaps the values of the given array elements.

Step 1 − Check if the first element in the input array is greater than the
next element in the array.

Step 2 − If it is greater, swap the two elements; otherwise move the pointer
forward in the array.

Step 3 − Repeat Step 2 until we reach the end of the array.

Step 4 − Check if the elements are sorted; if not, repeat the same process
(Step 1 to Step 3) from the last element of the array to the first.

Step 5 − The final output achieved is the sorted array.

Algorithm: Sequential-Bubble-Sort(A)

for i ← 1 to length[A] do
    for j ← length[A] down-to i+1 do
        if A[j] < A[j-1] then
            Exchange A[j] ⟷ A[j-1]

Pseudocode

We observe in the algorithm that Bubble Sort compares each pair of array elements unless the whole array is completely sorted in ascending order. This may cause a few complexity issues, for example when the array needs no more swapping because all the elements are already in ascending order.

To address this issue, we use one flag variable, swapped, which helps us see whether any swap has happened or not. If no swap has occurred, i.e., the array requires no more processing to be sorted, it will come out of the loop.

Pseudocode of bubble sort algorithm can be written as follows −

void bubbleSort(int numbers[], int array_size) {
    int i, j, temp;
    int swapped;                         /* flag: did this pass swap anything? */
    for (i = array_size - 1; i >= 0; i--) {
        swapped = 0;
        for (j = 1; j <= i; j++) {
            if (numbers[j-1] > numbers[j]) {
                temp = numbers[j-1];
                numbers[j-1] = numbers[j];
                numbers[j] = temp;
                swapped = 1;
            }
        }
        if (!swapped)                    /* no swaps: array already sorted */
            break;
    }
}

Analysis

Here, the number of comparisons is

1 + 2 + 3 + ... + (n - 1) = n(n - 1)/2 = O(n²)

This clearly shows the n² nature of bubble sort.

In the basic algorithm (without the swapped flag), the number of comparisons is the same irrespective of the data set, i.e., whether the provided input elements are in sorted order, in reverse order, or at random.

Memory Requirement

From the algorithm stated above, it is clear that bubble sort does not require
extra memory.

Example of Bubble Sort

Input Array: [5, 1, 4, 2, 8]

We'll sort this array using Bubble Sort step by step by comparing and swapping adjacent elements.

Step-by-step Comparisons & Swaps:

Pass 1:

1. Compare 5 and 1 → 5 > 1 → swap → [1, 5, 4, 2, 8]
2. Compare 5 and 4 → 5 > 4 → swap → [1, 4, 5, 2, 8]
3. Compare 5 and 2 → 5 > 2 → swap → [1, 4, 2, 5, 8]
4. Compare 5 and 8 → 5 < 8 → no swap → [1, 4, 2, 5, 8]

Now the largest element, 8, has reached its correct position at the end.

Pass 2:

1. Compare 1 and 4 → 1 < 4 → no swap → [1, 4, 2, 5, 8]
2. Compare 4 and 2 → 4 > 2 → swap → [1, 2, 4, 5, 8]
3. Compare 4 and 5 → 4 < 5 → no swap → [1, 2, 4, 5, 8]

Pass 3:

1. Compare 1 and 2 → 1 < 2 → no swap → [1, 2, 4, 5, 8]
2. Compare 2 and 4 → 2 < 4 → no swap → [1, 2, 4, 5, 8]

Pass 4:

1. Compare 1 and 2 → 1 < 2 → no swap → [1, 2, 4, 5, 8]

✅ Sorted Output: [1, 2, 4, 5, 8]

Complexity Analysis of Bubble Sort:

Time Complexity: O(n²)

Auxiliary Space: O(1)

Q : Sequential Searching or Linear Search

Linear search is a type of sequential searching algorithm. In this method, every element within the input array is traversed and compared with the key element to be found. If a match is found in the array, the search is said to be successful; if there is no match, the search is said to be unsuccessful, and this gives the worst-case time complexity.

For instance, suppose we are searching for an element 33 in an array. The linear search method searches for it sequentially from the very first element until it finds a match, returning a successful search.

If we instead search for an element 46 that is not present in the input, the result is an unsuccessful search.

Linear Search Algorithm

The algorithm for linear search is relatively simple. The procedure starts at
the very first index of the input array to be searched.

Step 1 − Start from the 0th index of the input array, compare the key value
with the value present in the 0th index.

Step 2 − If the value matches with the key, return the position at which the
value was found.

Step 3 − If the value does not match with the key, compare the next
element in the array.

Step 4 − Repeat Step 3 until there is a match found. Return the position at
which the match was found.

Step 5 − If it is an unsuccessful search, print that the element is not present in the array and exit the program.

Pseudocode

procedure linear_search (list, value)
    for each item in the list
        if item == value
            return the item's location
        end if
    end for
end procedure

Analysis

Linear search traverses every element sequentially; therefore, the best case is when the element is found in the very first iteration. The best-case time complexity would be O(1).

However, the worst case of the linear search method would be an unsuccessful search that does not find the key value in the array; it performs n iterations. Therefore, the worst-case time complexity of the linear search algorithm would be O(n).

Example

Let us look at the step-by-step searching of the key element (say 47) in an
array using the linear search method.

Step 1

The linear search starts from the 0th index. Compare the key element with
the value in the 0th index, 34.

However, 47 ≠ 34. So it moves to the next element.

Step 2

Now, the key is compared with the value in the 1st index of the array.
Still, 47 ≠ 10, making the algorithm move on to another iteration.

Step 3

The next element 66 is compared with 47. They are both not a match so the
algorithm compares the further elements.

Step 4

Now the element in 3rd index, 27, is compared with the key value, 47. They
are not equal so the algorithm is pushed forward to check the next element.

Step 5

Comparing the element in the 4th index of the array, 47, to the key 47, we find that both elements match. Now, the position at which 47 is present, i.e., 4, is returned.
The output achieved is "Element found at 4th index".

Time and Space Complexity of Linear Search Algorithm:

Time Complexity:

 Best Case: In the best case, the key might be present at the first
index. So the best case complexity is O(1)

 Worst Case: In the worst case, the key might be present at the last
index i.e., opposite to the end from which the search has started in the
list. So the worst-case complexity is O(N) where N is the size of the list.

 Average Case: O(N)

Auxiliary Space: O(1) as except for the variable to iterate through the list,
no other variable is used.

Q : What is Brute Force String Matching?

Brute Force (or Naive) String Matching is the simplest algorithm to find a
pattern inside a text. It works by checking for the pattern at every possible
position in the text until a match is found (or the search ends).

Behind the Algorithm

Given:

 A text T of length n

 A pattern P of length m

We want to find all occurrences of P in T.

In brute force, we:

 Align P at every position i in T (from 0 to n − m)

 Compare the pattern with the substring of T starting at position i

 If all characters match, we report a match.

🧾 Algorithm (Pseudocode)


BruteForceMatch(T[0...n-1], P[0...m-1]):
    for i = 0 to n - m
        j = 0
        while j < m and T[i + j] == P[j]
            j = j + 1
        if j == m
            print "Pattern found at index", i

🧪 Example

Text T: "ABABDABACDABABCABAB"
Pattern P: "ABABCABAB"

We slide the pattern over the text one character at a time and compare each
character until:

🔎 Found a match at index 10
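
A runnable Python version of the same idea (a sketch) reproduces this result:

def brute_force_match(T, P):
    """Return every index at which pattern P occurs in text T."""
    n, m = len(T), len(P)
    hits = []
    for i in range(n - m + 1):
        j = 0
        while j < m and T[i + j] == P[j]:
            j += 1
        if j == m:                   # all m characters matched at shift i
            hits.append(i)
    return hits

print(brute_force_match("ABABDABACDABABCABAB", "ABABCABAB"))   # [10]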

📊 Time Complexity Analysis

Case         | Time Complexity | Explanation
Best Case    | O(n)            | Pattern matches on the first try, or mismatches happen early
Worst Case   | O(n × m)        | Pattern and text have repeated characters (e.g., "AAAA...")
Average Case | O(n × m)        | Checks every character shift

Space Complexity:

 O(1) (no extra space used apart from input)

Q : What is Exhaustive Search?

Exhaustive Search (also known as Brute Force Search) is a general problem-solving technique in algorithm design where every possible solution to a problem is systematically enumerated and tested for correctness or optimality. It guarantees finding the correct solution—whether it’s the best (optimal) solution or just any solution that satisfies the conditions—by trying all possible options. While conceptually simple and effective for small input sizes, it is often computationally expensive and impractical for large-scale problems due to exponential time complexity.

Problems Solved Using Exhaustive Search

1. Traveling Salesman Problem (TSP)

In the TSP, the goal is to find the shortest possible route that visits a set of
cities exactly once and returns to the starting city. Using exhaustive search,
all (n-1)! possible tours are evaluated, and the one with the minimum total
distance is selected.


2. Knapsack Problem (0/1 Knapsack)

Given a set of items, each with a weight and a value, the task is to determine
the combination of items that can be included in a knapsack of limited
capacity so that the total value is maximized. The exhaustive search
approach checks all 2ⁿ subsets of items, calculates total weight and value,
and selects the one with the highest value within the weight limit.

✅ This is effective for small values of n but inefficient for larger values.

3. Assignment Problem

In the assignment problem, tasks are to be assigned to workers (or jobs to machines) in such a way that the total cost is minimized. Using exhaustive search, all possible assignments (permutations) are tried, and the cost is computed for each. The one with the least cost is chosen.

📌 For n workers and n jobs, there are n! possible assignments.


⏱ Time Complexity

Exhaustive search algorithms often have exponential time complexity:

 TSP: O((n-1)!)

 Knapsack: O(2ⁿ)

 Assignment Problem: O(n!)

Because of this, exhaustive search is mostly suitable for small input sizes
or used in combination with optimization techniques like backtracking,
branch and bound, or heuristics to reduce the search space.
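
To make the idea concrete, here is a brute-force TSP solver in Python (a sketch; the 4-city distance matrix is made-up illustrative data). It enumerates all (n−1)! tours that start and end at city 0:

from itertools import permutations

def tsp_brute_force(dist):
    """Evaluate every tour from city 0 and return the cheapest one."""
    n = len(dist)
    best_tour, best_cost = None, float('inf')
    for perm in permutations(range(1, n)):            # (n-1)! orderings
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Symmetric distance matrix for 4 cities (illustrative values only)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp_brute_force(dist))          # ((0, 1, 3, 2, 0), 23)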

Q: Breadth-First Search (BFS)

Breadth-First Search (BFS) is a fundamental graph traversal algorithm used to explore the nodes (or vertices) and edges of a graph systematically. It begins at a starting node (source) and explores all its neighboring nodes before moving to the next level of nodes. BFS is most often used in unweighted graphs to find the shortest path between nodes.

🔧 How BFS Works

BFS uses a queue data structure to keep track of nodes to visit. Here's how
the algorithm works:

1. Start from a selected node, mark it as visited, and enqueue it.

2. While the queue is not empty:

o Dequeue a node.

o Visit all its unvisited neighbors, mark them as visited, and enqueue them.

3. Repeat the process until all reachable nodes are visited.

This ensures that nodes are visited level by level, or in increasing distance
from the source.
BFS Algorithm (Pseudocode)

BFS(Graph G, Start node S):
    create a queue Q
    mark S as visited and enqueue S into Q
    while Q is not empty:
        current = Q.dequeue()
        for each neighbor N of current:
            if N is not visited:
                mark N as visited
                enqueue N into Q
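
A minimal runnable Python version using collections.deque as the queue (a sketch; the adjacency-list graph matches the example traversal shown below):

from collections import deque

def bfs(graph, start):
    """Return the vertices in BFS order from start (graph = adjacency dict)."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        current = queue.popleft()
        order.append(current)
        for neighbor in graph[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'F'],
         'D': ['B'], 'E': ['B'], 'F': ['C']}
print(bfs(graph, 'A'))               # ['A', 'B', 'C', 'D', 'E', 'F']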

🎯 Applications of BFS

 Finding the shortest path in an unweighted graph

 Solving puzzles like Rubik’s Cube or word ladder

 Web crawling (breadth-wise page exploration)

 Peer-to-peer networks (finding nearest peers)

✅ Example of BFS Traversal

Let’s consider a simple undirected graph:

      A
     / \
    B   C
   / \   \
  D   E   F

Step-by-Step BFS Traversal starting from A:


1. Start at A → Queue: [A]

2. Visit A → Enqueue B and C → Queue: [B, C]

3. Visit B → Enqueue D and E → Queue: [C, D, E]

4. Visit C → Enqueue F → Queue: [D, E, F]

5. Visit D → No new nodes → Queue: [E, F]

6. Visit E → No new nodes → Queue: [F]

7. Visit F → No new nodes → Queue: []

🔄 Final BFS Traversal Order:

A→B→C→D→E→F

⏱ Time and Space Complexity

Complexity | Value
Time       | O(V + E), where V = vertices and E = edges
Space      | O(V) for the queue and visited list

Q: Depth First Search (DFS) Algorithm


Depth First Search (DFS) is a recursive algorithm for searching all the vertices of a graph or tree data structure. This algorithm traverses a graph in a depthward motion and uses a stack to remember where to pick up the next vertex to start a search when a dead end occurs in an iteration.

For example, on a suitable graph, a DFS traversal might run from S to A to D to G to E to B first, then to F, and lastly to C. DFS employs the following rules.

 Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it onto a stack.

 Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the vertices from the stack which do not have adjacent vertices.)

 Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.


Step 1 − Initialize the stack.

Step 2 − Mark S as visited and put it onto the stack. Explore any unvisited adjacent node from S. We have three nodes, and we can pick any of them. For this example, we shall take the nodes in alphabetical order.

Step 3 − Mark A as visited and put it onto the stack. Explore any unvisited adjacent node from A. Both S and D are adjacent to A, but we are concerned with unvisited nodes only.

Step 4 − Visit D, mark it as visited, and put it onto the stack. Here we have B and C, which are adjacent to D, and both are unvisited. However, we shall again choose in alphabetical order.

Step 5 − We choose B, mark it as visited, and put it onto the stack. Here B does not have any unvisited adjacent node. So we pop B from the stack.

Step 6 − We check the stack top to return to the previous node and check whether it has any unvisited nodes. Here we find D on the top of the stack.

Step 7 − The only unvisited adjacent node from D is now C. So we visit C, mark it as visited, and put it onto the stack.

As C does not have any unvisited adjacent node so we keep popping the
stack until we find a node that has an unvisited adjacent node. In this case,
there's none and we keep popping until the stack is empty.

Complexity of DFS Algorithm

Time Complexity

The time complexity of the DFS algorithm is represented in the form of O(V +
E), where V is the number of nodes and E is the number of edges.

Space Complexity

The space complexity of the DFS algorithm is O(V).
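
An iterative Python sketch that follows Rule 1 and Rule 2 literally, with alphabetical tie-breaking as in the walkthrough above (the adjacency list is our reconstruction of that graph from the step descriptions, so treat it as illustrative):

def dfs(graph, start):
    """Visit vertices depth-first using an explicit stack (Rules 1 and 2)."""
    visited = [start]
    stack = [start]
    while stack:
        top = stack[-1]
        # alphabetically first unvisited neighbor of the stack top, if any
        unvisited = [v for v in sorted(graph[top]) if v not in visited]
        if unvisited:
            visited.append(unvisited[0])   # Rule 1: visit and push
            stack.append(unvisited[0])
        else:
            stack.pop()                    # Rule 2: dead end, backtrack
    return visited

graph = {'S': ['A', 'B', 'C'], 'A': ['S', 'D'], 'B': ['S', 'D'],
         'C': ['S', 'D'], 'D': ['A', 'B', 'C']}
print(dfs(graph, 'S'))                     # ['S', 'A', 'D', 'B', 'C']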

Difference between BFS and DFS

Breadth-First Search (BFS) and Depth-First Search (DFS) are two fundamental
algorithms used for traversing or searching graphs and trees. This article
covers the basic difference between Breadth-First Search and Depth-First
Search.
Difference between BFS and DFS

Stands for:
  BFS − Breadth First Search.
  DFS − Depth First Search.

Data Structure:
  BFS uses the Queue data structure for finding the shortest path.
  DFS uses the Stack data structure.

Definition:
  BFS is a traversal approach in which we first walk through all nodes on the same level before moving on to the next level.
  DFS is a traversal approach in which the traversal begins at the root node and proceeds through the nodes as far as possible, until we reach a node with no unvisited nearby nodes.

Conceptual Difference:
  BFS builds the tree level by level.
  DFS builds the tree sub-tree by sub-tree.

Approach used:
  BFS works on the concept of FIFO (First In First Out).
  DFS works on the concept of LIFO (Last In First Out).

Suitable for:
  BFS is more suitable for searching vertices closer to the given source.
  DFS is more suitable when there are solutions away from the source.

Applications:
  BFS is used in applications such as bipartite graphs, shortest paths, etc. If the weight of every edge is the same, BFS gives the shortest path from the source to every other vertex.
  DFS is used in applications such as acyclic graphs and finding strongly connected components. There are many applications where both BFS and DFS can be used, like Topological Sorting and Cycle Detection.
