DAA Module1

The document outlines the course structure for 'Design & Analysis of Algorithms (CSE2007)', including classroom discipline rules, course outcomes, assessment schedule, and key topics covered in the syllabus. It emphasizes the importance of algorithms, their properties, and various design strategies, such as brute force and dynamic programming. Additionally, it discusses the analysis of algorithms in terms of time and space complexity, including asymptotic notations for measuring efficiency.

Design & Analysis of Algorithms (CSE2007)
General Instructions
• Follow the instructions below throughout the semester:
1) Maintain complete discipline in the classroom and be on time to every class.
2) Students who arrive late (after the scheduled time) will not be allowed in, so be in the class before the faculty arrives.
3) Students should fill the seats starting from the front rows. No row should be left empty in between, and there should be no scattered seating in the classroom.
4) Boys and girls should sit separately: girls on the right side and boys on the left side, as seen from the dais.
5) Students should keep their bags beside their chairs, not on the desks or on their laps.
6) Students should switch off their mobile phones and keep them in their bags (not in their pockets).
COs
TABLE 1: COURSE OUTCOMES

CO Number | Course Outcome                                                             | Bloom's Level
CO1       | Identify the efficiency of a given algorithm.                              | Understand
CO2       | Illustrate the brute force technique used for solving a problem.           | Apply
CO3       | Apply the divide and conquer technique for searching and sorting problems. | Apply
CO4       | Apply the dynamic programming and greedy techniques for solving a problem. | Apply
CO5       | Demonstrate the backtracking technique and limitations of algorithms.      | Apply
Evaluation Components
TABLE 6: ASSESSMENT SCHEDULE

Sl. No | Assessment type | Contents           | Course outcome          | Duration  | Marks | Weightage | Tentative date
1      | Surprise Test-1 | Module 1           | CO1                     | 30 mins   | 10    | 5%        | 01/02/25 to 07/02/25
2      | Assignment 1    | Module 1, Module 2 | CO2                     | 1 week    | 10    | 5%        | 10/02/25 to 28/02/25
3      | Mid-Term (CAT)  | Module 1, Module 2 | CO1, CO2, CO3           | 1.5 hours | 50    | 25%       | 17/3/25 to 21/3/25
4      | Surprise Test-2 | Module 3           | CO3                     | 30 mins   | 10    | 5%        | 28/3/25 to 2/04/25
5      | Surprise Test-3 | Module 4           | CO4                     | 30 mins   | 10    | 5%        | 11/4/25 to 18/04/25
6      | Assignment 2    | Module 5           | CO5                     | 1 week    | 10    | 5%        | 02/05/25 to 9/05/25
7      | End Term (FAT)  | All Modules        | CO1, CO2, CO3, CO4, CO5 | 3 hours   | 100   | 50%       | --
Design & Analysis of Algorithms (CSE2007)

MODULE 1
Fundamentals of Algorithmic Problem Solving
Module 1
• Module 1: Introduction to Algorithms [8 Lecture Hours] [Understand]
• Important problem types, asymptotic notations and their properties, basic efficiency classes, mathematical analysis of recursive and non-recursive algorithms.
What is an algorithm?
➢The word algorithm comes from the name of the ninth-century Persian mathematician Abu Ja'far Muhammad ibn Musa al-Khwarizmi, who wrote a textbook on mathematics.
➢An algorithm is an unambiguous step-by-step procedure that solves a given problem in a finite number of steps, accepting a set of legitimate inputs and producing the desired output.
➢After producing the result, the algorithm should terminate.
➢It is a tool for solving computational problems.
Notion of Algorithm

[Figure: a problem is solved by an algorithm; input → "computer" (executing the algorithm) → output]
Importance of writing an algorithm
➢Writing an algorithm is like preparing a plan to solve the problem.
➢An algorithm gives the method of solution, not the answer itself.
➢The importance of writing algorithms:
✓We can save resources like time, human effort & cost
✓Debugging will be easier
✓Since they are written in pseudocode, any technical person can understand them easily
✓Can be used as a design document
✓Once an algorithm is written & understood, it is easily converted into an executable program using any programming language
Features/Properties of an Algorithm

➢ Every algorithm must satisfy the following criteria:

Input: should accept one or more external inputs
Output: should produce at least one output
Effectiveness: every instruction should transform the given input to the desired output
Finiteness: the algorithm should terminate after a finite number of steps
Definiteness: each instruction should be clear and unambiguous
Algorithm Design and Analysis Process
Analysis of algorithms
• How good is the algorithm?
  • Correctness of solution
    • Validate whether the algorithm gives the correct solution.
  • Efficiency of solution
    • time efficiency
    • space efficiency
• Does there exist a better algorithm?
  • lower bounds
  • optimality
Important problem types
• sorting
• searching
• string processing
Sorting (I)
• Rearrange the items of a given list in ascending order.
• Input: a sequence of n numbers <a1, a2, …, an>
• Output: a reordering <a′1, a′2, …, a′n> of the input sequence such that a′1 ≤ a′2 ≤ … ≤ a′n.
• Why sorting?
  • Helps searching
  • Algorithms often use sorting as a key subroutine.
• Sorting key
  • A specially chosen piece of information used to guide sorting, e.g., sort student records by names.
Sorting (II)
• Examples of sorting algorithms
  • Selection sort
  • Bubble sort
  • Insertion sort
  • Merge sort
  • Heap sort …
• Evaluate sorting algorithm complexity by the number of key comparisons.
• Two properties
  • Stability: a sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input.
  • In-place: a sorting algorithm is in-place if it does not require extra memory, except possibly for a few memory units.
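The stability property can be seen directly in Python, whose built-in sorted() is guaranteed to be stable (the records and scores below are purely illustrative):

```python
# Python's built-in sorted() is stable: records with equal sort keys
# keep their original relative order.
records = [("Bob", 85), ("Alice", 90), ("Carol", 85), ("Dave", 90)]

# Sort by score only; the two 85s and the two 90s are "equal" elements.
by_score = sorted(records, key=lambda rec: rec[1])
print(by_score)
# Bob (85) still comes before Carol (85), and Alice (90) before Dave (90).
```

A sorting algorithm without the stability guarantee could legally swap Bob and Carol, since their keys compare equal.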
Searching
• Find a given value, called a search key, in a given set.
• Examples of searching algorithms
  • Sequential search
  • Binary search …

Binary search sketch:
Input: sorted array a[i] < … < a[j] and key x
m ← (i + j)/2
while i < j and x != a[m] do
    if x < a[m] then j ← m − 1
    else i ← m + 1
if x = a[m] then output a[m]
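The pseudocode above can be made runnable; this sketch uses the common i <= j loop bound and returns -1 when the key is absent (an assumption, since the slide only prints the found element):

```python
def binary_search(a, x):
    """Return an index of key x in sorted list a, or -1 if x is absent."""
    i, j = 0, len(a) - 1
    while i <= j:
        m = (i + j) // 2          # middle index
        if x == a[m]:
            return m
        elif x < a[m]:
            j = m - 1             # search the left half
        else:
            i = m + 1             # search the right half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```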
String Processing
• A string is a sequence of characters from an alphabet.
• Text strings: letters, numbers, and special characters.
• String matching: searching for a given word/pattern in a text.
Examples:
(i) searching for a word or phrase on the Web or in a Word document
(ii) searching for a short read in a reference genomic sequence
Algorithm design strategies
• Classic design strategies (most problems are solved by these):
  • Brute force
  • Divide and conquer
  • Decrease and conquer
  • Transform and conquer
  • Dynamic programming
  • Greedy approach
  • Space and time tradeoffs
• Niche approaches that search the solution space heuristically:
  • Backtracking
  • Branch-and-bound
• Advanced techniques such as approximation algorithms, AI/ML, etc., are used to solve other problems that can't be solved with these techniques.
Analysis Framework
➢The process of finding the efficiency of an algorithm is called the analysis of the algorithm.
➢The efficiency of an algorithm is found in 2 ways:
1) Time efficiency/complexity
2) Space efficiency/complexity
➢Importance of analysis of algorithms:
1) To compare different algorithms for the same task
2) To predict the performance in a new environment
3) To specify the range of inputs on which the algorithm works properly
Time complexity or time efficiency
• The time complexity of an algorithm is the amount of time taken by the program to run completely & efficiently.
• Factors on which the time complexity of an algorithm depends:
1) Speed of the computer
2) Choice of programming language
3) Compiler used
4) Choice of algorithmic design technique
5) Number of inputs/outputs
6) Size of the inputs/outputs
Measurement of running time
• Physically measure
• Using code:

import time
start = time.time()
# do work
end = time.time()
elapsed = end - start

• This gives different values for different executions.
• What we want is a machine-independent measure.
C code for measuring time:

/* Measures the running time of a loop that finds the maximum element */
#include <stdio.h>
#include <time.h>

int main(void)
{
    int n = 5, a[] = {12, 34, 10, 77, 34};
    int i, max;
    clock_t start, end;
    double total;

    start = clock();
    max = a[0];
    for (i = 1; i < n; i++) {
        if (a[i] > max)
            max = a[i];
    }
    end = clock();
    total = (double)(end - start) / CLOCKS_PER_SEC;
    printf("max = %d, time = %f s\n", max, total);
    return 0;
}
Time complexity or time efficiency
• Considering the number of inputs given to the algorithm & the size of those inputs, time efficiency is normally computed in terms of the basic operation.
• The statement/instruction that consumes the most time is called the basic operation.
Time complexity or time efficiency
• To find the time efficiency, we need to
1) Identify the basic operation
2) Find the time taken by the basic operation to execute once
3) Find how many times this basic operation executes
• Basic operation: the operation that contributes the most towards the running time of the algorithm

T(n) ≈ cop · C(n)

where n is the input size, T(n) is the running time of the algorithm, cop is the time taken by the basic operation to execute once, and C(n) is the total number of times the basic operation executes.

Note: Different basic operations may cost differently!
Input size and basic operation examples

Problem                                  | Input size measure                                     | Basic operation
Searching for a key in a list of n items | Number of list's items, i.e. n                         | Key comparison
Multiplication of two matrices           | Matrix dimensions or total number of elements          | Multiplication of two numbers
Checking primality of a given integer n  | n's size = number of digits (in binary representation) | Division
Typical graph problem                    | #vertices and/or edges                                 | Visiting a vertex or traversing an edge
Step count Method
• An intuitive method to calculate the time taken to execute, in a machine-independent manner.
• Assume all steps/basic operations take unit time.
• Ignore comments.
• Conditional statements:
  • The condition is evaluated once.
  • Only count the steps in the branch that gets executed.
• Loops:
  • Multiply the steps in the loop body by the number of times the loop is executed.
Step count Method

Algorithm Sum(N) | Steps | Frequency | Total times step is executed
i ← 0            | 1     | -         | 1
sum ← 0          | 1     | -         | 1
while i < N      | 1     | N         | N
  i ← i + 1      | 1     | N         | N
  sum ← sum + i  | 1     | N         | N
print sum        | 1     | -         | 1
Total            |       |           | 3N + 3

Running time T(N) = 3N + 3, so the running time depends on the input size N.
If N doubles, T(N) roughly doubles.
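The step count can be verified mechanically; this sketch instruments the Sum(N) algorithm exactly as the table counts it (the while test is charged once per iteration, matching the table's frequency of N):

```python
def sum_with_steps(n):
    """Sum 1..n while counting unit-cost steps as in the step-count table."""
    steps = 0
    i = 0
    steps += 1            # i <- 0
    total = 0
    steps += 1            # sum <- 0
    while i < n:
        steps += 1        # while-condition check (charged N times, as in the table)
        i += 1
        steps += 1        # i <- i + 1
        total += i
        steps += 1        # sum <- sum + i
    steps += 1            # print sum
    return total, steps

print(sum_with_steps(5))   # (15, 18), and 18 = 3*5 + 3
```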
Order of growth
• As the value of n (the input size) increases, the time required for execution also increases; i.e., the behavior of the algorithm changes with the increase in the value of n. This change in behavior is called the order of growth.
• Most important: order of growth within a constant multiple as n → ∞
• Example:
  • How much faster will the algorithm run on a computer that is twice as fast?
  • How much longer does it take to solve a problem of double the input size?
Order of growth for a few values of n

n  | log₂ n | n  | n log₂ n | n²   | n³    | 2ⁿ         | n!
1  | 0      | 1  | 0        | 1    | 1     | 2          | 1
2  | 1      | 2  | 2        | 4    | 8     | 4          | 2
4  | 2      | 4  | 8        | 16   | 64    | 16         | 24
8  | 3      | 8  | 24       | 64   | 512   | 256        | 40320
16 | 4      | 16 | 64       | 256  | 4096  | 65536      | HIGH
32 | 5      | 32 | 160      | 1024 | 32768 | 4294967296 | VERY HIGH
Basic asymptotic efficiency classes

1        constant
log n    logarithmic
n        linear
n log n  n-log-n
n²       quadratic
n³       cubic
2ⁿ       exponential
n!       factorial
Space Complexity or Space Efficiency
• The space complexity of an algorithm is the amount of space consumed by the program to run completely & efficiently.
• The total amount of space consumed by the program is the sum of the following components:
  • Fixed space requirements
  • Variable space requirements
Space Complexity or Space Efficiency
• Fixed space requirements are the requirements that do not depend on the number of inputs & outputs, or on the size of the inputs & outputs of the program.
  • Program space - instruction space to store the code
  • Data space - space for constants, variables, structures, etc.
  • Stack space - space for parameters, local variables, return values, etc.

Fixed space requirements = Program space + Data space + Stack space

Note: Dynamically allocated memory is in the Heap space.
Space Complexity or Space Efficiency
Variable space requirement
➢Along with the fixed space requirements, it includes the extra space required
1) When a function uses recursion
2) For dynamically allocated arrays, structures, etc.
➢So, the space SP(I) of a program P on a particular instance I is given by
SP(I) = Fixed space requirements + space used during recursion + space used by run-time variables
➢The total space S(P) of a program P is given by
S(P) = c + SP(I)
Asymptotic Notations
• Mathematical notations used to express the time complexity of an algorithm are called asymptotic notations.
• They are machine-independent.
• The 3 asymptotic notations are
1) O (Big-Oh), typically used for the worst case
2) Ω (Big-Omega), typically used for the best case
3) Θ (Big-Theta), typically used for the average case
• They are functions showing asymptotic growth with respect to the input size.
• Expressed as monotonically growing functions: f(n) increases or stays the same as n increases; it cannot fall as n increases.
O (Big-Oh) notation
• Big-Oh is the formal method of expressing the upper bound of an algorithm's running time.
• It is a measure of the longest amount of time it could possibly take for the algorithm to complete.
• "The function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if there exist a positive constant c and a positive integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0."
Big-Oh
[Figure: t(n) bounded above by c·g(n) for n ≥ n0; a priori (analytical) vs a posteriori (measured) running time]
Big-Oh
• For n > n0, c·g(n) is bigger than t(n) (a priori analysis).
• As the input size grows, the value of t(n) grows the same as or slower than a constant times g(n).

Example (a posteriori check):
If an algorithm takes T(N) = 2N + 1 time, and we take g(N) = N, then for N ≥ 2,
T(N) < 3·g(N), so c = 3 and n0 = 2 satisfy the definition.

For linear g(N), T(N) roughly doubles as N doubles:

N | N : 2 | T(N) = 2N+1 | T(N) : T(2) | 3·g(N) = 3N | 3·g(N) : 3·g(2)
2 | 1     | 5           | 5/5 = 1     | 6           | 6/6 = 1
4 | 2     | 9           | 9/5 = 1.8   | 12          | 12/6 = 2
6 | 3     | 13          | 13/5 = 2.6  | 18          | 18/6 = 3
Ω (Big-Omega) notation
• Big-Omega (Ω) is the formal method of expressing the lower bound of an algorithm's running time.
• It is a measure of the minimum amount of time it could possibly take for the algorithm to complete.
• "The function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if there exist a positive constant c and a positive integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0."
Big-Omega
[Figure: t(n) bounded below by c·g(n) for n ≥ n0]
Θ (Big-Theta) notation
• Big-Theta (Θ) is the formal method of expressing the average-case efficiency of the algorithm.
• "The function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if there exist positive constants c1, c2 and a positive integer n0 such that c1·g(n) ≤ t(n) ≤ c2·g(n) for all n ≥ n0."
Big-Theta
[Figure: t(n) sandwiched between c1·g(n) and c2·g(n) for n ≥ n0]
Example: Sequential search
• Worst case: n key comparisons
• Best case: 1 comparison
• Average case: (n+1)/2 comparisons, assuming K is in A
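A minimal sketch of sequential search that also reports the comparison count (the names A and K follow the slide's notation):

```python
def sequential_search(A, K):
    """Return (index of K in A, number of key comparisons); index is -1 if absent."""
    comparisons = 0
    for i in range(len(A)):
        comparisons += 1          # basic operation: key comparison
        if A[i] == K:
            return i, comparisons
    return -1, comparisons

print(sequential_search([4, 2, 7], 4))   # (0, 1)  best case: 1 comparison
print(sequential_search([4, 2, 7], 9))   # (-1, 3) worst case: n comparisons
```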
Worst Case
Most useful for most problems; this is what we always calculate.

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.

How:
• Analyze the algorithm to see what kind of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n, and then compute this worst-case value Cworst(n).
Best Case
Not really useful for most problems.

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.

Steps:
• First, determine the kind of inputs for which the count C(n) will be the smallest among all possible inputs of size n.
• Then ascertain the value of C(n) on these most convenient inputs.
Average Case
Useful for most problems; we just want to estimate this for our planning/implementation.

The average-case efficiency of an algorithm yields the necessary information about an algorithm's behavior on a "typical" or "random" input.
• This corresponds to the expected running time.
Mathematical Analysis of Non-recursive Algorithms
General plan for analysis of non-recursive algorithms:
• Decide the number of input parameters given to the algorithm & the size of the inputs (identify the problem size).
• Identify the algorithm's basic operation.
• Decide whether we need to find only the average case (Θ) or all 3 cases separately.
  • For this, check whether the number of times the basic operation is executed depends only on the problem size.
  • If it depends only on the problem size, then we need to find only Θ.
  • If it also depends on some other additional properties, then we need to find all 3 cases separately.
• Find the total number of times the basic operation executes & express it as a sum (Σ).
• Simplify the expression using standard formulas & rules of sum manipulation & obtain its order of growth.
Example 1: Maximum element

C(n) is the number of times the comparison (the basic operation) is executed.
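The slide's algorithm listing did not survive extraction; here is a sketch in the style of the standard MaxElement algorithm, with the comparison count C(n) made explicit:

```python
def max_element(A):
    """Return (largest element of A, C(n)), where C(n) counts executions
    of the basic operation A[i] > maxval."""
    maxval = A[0]
    comparisons = 0
    for i in range(1, len(A)):
        comparisons += 1          # basic operation: comparison
        if A[i] > maxval:
            maxval = A[i]
    return maxval, comparisons

print(max_element([3, 9, 2, 9, 1]))   # (9, 4)
```

Note that C(n) = n − 1 for every input of size n, so the count depends only on the problem size and the algorithm is Θ(n).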


Example 2: Element uniqueness problem
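The algorithm itself is missing from the extracted slide; the standard brute-force version compares every pair of elements once:

```python
def unique_elements(A):
    """Return True if all elements of A are distinct (brute force).
    Worst case: C(n) = n(n-1)/2 key comparisons, i.e. quadratic."""
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if A[i] == A[j]:      # basic operation: key comparison
                return False      # an early duplicate stops the scan
    return True

print(unique_elements([1, 2, 3]))   # True
print(unique_elements([1, 2, 1]))   # False
```

Here the comparison count depends on the input as well as on n (a duplicate found early ends the scan), so worst, best, and average cases must be analyzed separately.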
Mathematical Analysis of Recursive Algorithms
General plan for analysis of recursive algorithms:
• Decide the number of input parameters given to the algorithm & the size of the inputs (identify the problem size).
• Identify the algorithm's basic operation.
• Decide whether we need to find only the average case (Θ) or all 3 cases separately.
  • For this, check whether the number of times the basic operation is executed depends only on the problem size.
  • If it depends only on the problem size, then we need to find only Θ.
  • If it also depends on some other additional properties, then we need to find all 3 cases separately.
• Obtain the recurrence relation, with an appropriate initial condition, for the number of times the basic operation executes.
• Solve the recurrence relation, obtain the order of growth & then express it using asymptotic notations.
Example 3: factorial of a number
// Purpose: to compute n! recursively
// Input: a non-negative integer n
// Output: the value of n!

ALGORITHM fact(n)
{
    if n == 0 then
        return 1
    else
        return n * fact(n-1)
}
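A runnable version of the algorithm that also tallies M(n), the number of multiplications (the mutable counter argument is just one convenient way to expose the count):

```python
def fact(n, counter):
    """Compute n! recursively; counter[0] accumulates the multiplication count M(n)."""
    if n == 0:
        return 1
    counter[0] += 1               # one multiplication per recursive step
    return n * fact(n - 1, counter)

c = [0]
print(fact(5, c), c[0])   # 120 5 — the count equals n, i.e. M(n) = n
```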
Analysis of example 3

The basic operation of the algorithm is multiplication, whose number of executions we denote M(n).

The recurrence relation and initial condition for the algorithm's number of multiplications M(n):

M(n) = M(n − 1) + 1 for n > 0
M(0) = 0

Solving the recurrence relation using the method of backward substitutions:

M(n) = M(n − 1) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3
     = …
     = M(n − i) + i = … = M(0) + n = n

M(n) = n
Time complexity of the factorial algorithm: T(n) = Θ(n)
Tower of Hanoi
• There are three pegs: Source (A), Auxiliary (B) and Destination (C). Peg A contains a set of disks stacked to resemble a tower, with the largest disk at the bottom and the smallest disk at the top. The objective is to transfer the entire tower of disks on peg A to peg C, maintaining the same order of the disks.
Constraints
• Only one disk can be transferred at a time.
• Each move consists of taking the upper disk from one of the pegs and placing it on top of another peg, i.e., a disk can only be moved if it is the uppermost disk of a peg.
• A larger disk is never placed on a smaller disk during the transfer.
Tower of Hanoi
[Figures: successive states of the three pegs during the transfer]
Tower of Hanoi
❑ Move (n−1) disks from the source peg to the auxiliary peg.
❑ Move the last disk to the destination peg.
❑ Move (n−1) disks from the auxiliary peg to the destination peg.

Source    Aux    Destination
Example 2: The Tower of Hanoi Puzzle

Procedure Hanoi(disk, source, dest, aux)
    IF disk == 1 THEN
        move disk from source to dest
    ELSE
        Hanoi(disk - 1, source, aux, dest)   // Step 1
        move disk from source to dest        // Step 2
        Hanoi(disk - 1, aux, dest, source)   // Step 3
    END IF
END Procedure

Recurrence for the number of moves:
M(n) = 2M(n-1) + 1
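A direct transcription of the procedure; recording the moves lets us check the recurrence M(n) = 2M(n−1) + 1 empirically:

```python
def hanoi(disk, source, dest, aux, moves):
    """Move `disk` disks from source to dest via aux, recording each move."""
    if disk == 1:
        moves.append((source, dest))
    else:
        hanoi(disk - 1, source, aux, dest, moves)   # Step 1
        moves.append((source, dest))                # Step 2
        hanoi(disk - 1, aux, dest, source, moves)   # Step 3

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))   # 7
```

Each additional disk doubles the move count plus one, exactly as the recurrence says.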
Tree of calls for the Tower of Hanoi Puzzle

Solving the recurrence for the number of moves:
• The number of moves M(n) depends on n only.
• The recurrence equation is:
  M(n) = M(n − 1) + 1 + M(n − 1) for n > 1
• The initial condition: M(1) = 1
• So the recurrence relation for the number of moves M(n) is:
  M(n) = 2M(n − 1) + 1 for n > 1
  M(1) = 1
M(n) = 2M(n − 1) + 1 for n > 1
M(1) = 1

• Solving the above recurrence relation using the method of backward substitutions:
We have M(n−1) = 2M(n−2) + 1.
Back-substituting, we get
M(n) = 2(2M(n−2) + 1) + 1 = 2²M(n−2) + 2 + 1
Substituting M(n−2) = 2M(n−3) + 1, we get
M(n) = 2²(2M(n−3) + 1) + 2 + 1
     = 2³M(n−3) + 2² + 2 + 1
     = …
     = 2^i M(n−i) + 2^(i−1) + … + 2² + 2 + 1 = 2^i M(n−i) + 2^i − 1
2^(i−1) + 2^(i−2) + … + 2 + 1 is the sum of i terms of a geometric progression of the form a, ar, ar², ar³, …, ar^(n−1)

• Here a is the first term, r is the common ratio, and n is the number of terms.
• Sn = a[(r^n − 1)/(r − 1)]
• Here a = 1, r = 2 and n = i, so

2^(i−1) + 2^(i−2) + … + 2 + 1 = 2^i − 1
• Since the initial condition is specified for n = 1, which is achieved for i = n − 1, we get the following solution to the recurrence:

M(n) = 2^(n−1) M(1) + 2^(n−1) − 1 = 2^(n−1) + 2^(n−1) − 1 = 2^n − 1

T(n) = Θ(2^n)
This is an exponential algorithm, which will run for an unimaginably long time even for moderate values of n.
END OF MODULE 1

Graph Problems
• Informal definition
  • A graph is a collection of points called vertices, some of which are connected by line segments called edges.
• Types of graph representations
  • Adjacency matrix
  • Cost adjacency matrix
• Spanning tree - a subgraph of a graph that contains all the graph's vertices and some of its edges, without forming any cycles.
Graphs
• Formal definition
  • A graph G = <V, E> is defined by a pair of two sets: a finite set V of items called vertices and a set E of vertex pairs called edges.
• Undirected and directed graphs (digraphs).
• Complete, dense, and sparse graphs.
  • A graph with every pair of its vertices connected by an edge is called complete, denoted K|V|.

[Figure: a graph on vertices 1, 2, 3, 4]
Graph Representation
• Adjacency matrix
  • An n x n boolean matrix if |V| is n.
  • The element in the ith row and jth column is 1 if there's an edge from the ith vertex to the jth vertex; otherwise 0.
  • The adjacency matrix of an undirected graph is symmetric.
• Adjacency linked lists
  • A collection of linked lists, one for each vertex, that contain all the vertices adjacent to the list's vertex.

Example (digraph with edges 1→2, 1→3, 1→4, 2→4, 3→4):

Adjacency matrix:    Adjacency lists:
0 1 1 1              1 → 2, 3, 4
0 0 0 1              2 → 4
0 0 0 1              3 → 4
0 0 0 0              4 →
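Both representations can be built side by side; this sketch uses the digraph from the example above (the edge list is an assumption read off the matrix):

```python
# Build both representations for the digraph with edges
# 1->2, 1->3, 1->4, 2->4, 3->4 on vertices 1..4.
edges = [(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)]
n = 4

matrix = [[0] * n for _ in range(n)]      # n x n boolean matrix
adj = {v: [] for v in range(1, n + 1)}    # adjacency lists
for u, v in edges:
    matrix[u - 1][v - 1] = 1              # 1 iff there is an edge from u to v
    adj[u].append(v)

print(matrix[0])   # [0, 1, 1, 1] — row for vertex 1
print(adj[2])      # [4]
```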
Weighted Graphs
• Weighted graphs
  • Graphs or digraphs with numbers assigned to the edges.

[Figure: a weighted graph on vertices 1–4 with edge weights 5, 6, 7, 8, 9]
Graph Properties -- Paths and Connectivity
• Paths
  • A path from vertex u to vertex v of a graph G is defined as a sequence of adjacent (connected by an edge) vertices that starts with u and ends with v.
  • Simple paths: all edges of a path are distinct.
  • Path length: the number of edges, or the number of vertices − 1.
• Connected graphs
  • A graph is said to be connected if for every pair of its vertices u and v there is a path from u to v.
• Connected component
  • A maximal connected subgraph of a given graph.
Graph Properties -- Acyclicity
• Cycle
  • A simple path of positive length that starts and ends at the same vertex.
• Acyclic graph
  • A graph without cycles.
• DAG (Directed Acyclic Graph)

[Figure: a directed graph on vertices 1, 2, 3, 4]
