
Competitive Programming (CSEN2191)

Unit-1
Dept. of CSE, GITAM Visakhapatnam
June 2025

Contents
1 Time and Space Complexity 3
1.1 Input Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Experimental Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Online Judge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 Time & Space Experimental Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Measure Execution Time with the given input size . . . . . . . . . . . . . . . . . . . . 7
1.3.4 Exercise: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2 Time Limit Exceeded (TLE) Errors 9


2.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 When Does TLE Occur? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Source Code Example (Check Prime) – to check TLE error . . . . . . . . . . . . . . . . . . . 9
2.4 Why is TLE Important? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 How to Avoid TLE? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.6 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Avoiding off-by-one errors 11


3.1 Why Off-by-One Is Dangerous in Contests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Common Patterns with Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Strategy to Avoid Off-by-One Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

4 Analysis of Loop 13
4.1 Single Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.1.1 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 Nested Loop Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

5 Analysis of Sorting Algorithm 17


5.1 Bubble Sort Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2 Merge Sort Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.3 Time Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.4 Space Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

6 Analysis of Searching Algorithms 19


6.1 Linear Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1.1 Example Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1.2 Time Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Binary Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.3 Example Code (Python) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.3.1 Example Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.3.2 Time Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

6.3.3 Space Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.4 Binary Search Common Mistake in Competitive Coding . . . . . . . . . . . . . . . . 21
6.5 Linear Search vs Binary Search: Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.6.1 Time Complexity: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.6.2 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

7 Bit Manipulation 22
7.1 Key Bitwise Operators (C++, Python, Java) . . . . . . . . . . . . . . . . . . . . . . . 22
7.2 Advantages of Bit Manipulation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7.3 Decimal Vs Bit Manipulation: A Quick Comparison . . . . . . . . . . . . . . . . . . . . . . . 23
7.4 Common Bit Manipulation Operations (Verify with Examples) . . . . . . . . . . . . . . . . . 23
7.5 Examples of Bit Manipulation Tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7.5.1 Example 1: Check if a number is power of 2 . . . . . . . . . . . . . . . . . . . . . . . . 23
7.5.2 Example 2: Counting 1 Bits (LeetCode 191) . . . . . . . . . . . . . . . . . . . . . . . 23
7.5.3 Example 3: Set the k-th Bit. Making k-th bit 1 . . . . . . . . . . . . . . . . . . . . . . 23
7.5.4 Example 4: Toggle/Reverse the k-th Bit (LeetCode 190) . . . . . . . . . . . . . . . . 24
7.6 Classic Competitive Problems Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.6.1 Problem 1: Single Number (Leetcode 136) . . . . . . . . . . . . . . . . . . . . . . . . 24
7.6.2 Problem 2: Subset Generation (Leetcode 78) . . . . . . . . . . . . . . . . . . . . . . . 24
7.6.3 Problem 3: Counting Bits (Leetcode 338) . . . . . . . . . . . . . . . . . . . . . . . . 24
7.7 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

8 Implementing Data Structures (Python) 25


8.1 Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
8.2 Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
8.3 Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
8.4 Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8.5 Deque (Double-ended Queue) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
8.6 Priority Queue (Min-Heap / Max-Heap) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
8.7 Heap (Binary Heap) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
8.8 Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.9 Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.9.1 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

1 Time and Space Complexity
• Time Complexity: It estimates how the execution time of an algorithm increases as the input size
n increases. It is denoted as T(n)
– Example: O(1), O(n), O(n²), etc.¹
∗ O(1) Constant Time – independent of n
∗ O(n) Linear Time
∗ O(n²) Quadratic Time
– T(n) also depends on the number of key operations that are performed in a given algorithm. For
example, in searching, the key operation is comparison. In sorting, it is the number of swaps.
• Space Complexity: It estimates how much extra memory (not counting input) an algorithm needs
as input size n increases. It is denoted as S(n) in general or M(n)
– Total Space = Input Space (given) + Auxiliary Space (extra used)
– We usually analyze auxiliary space.
– Example: same as time – O(1), O(n), O(n²), etc.

• Complexity Analysis: Unless otherwise stated, complexity analysis is assumed to mean time complexity analysis.


– Best Case: It occurs when the input data allows the algorithm to complete its task with the
fewest operations. It is denoted as Big-Omega (Ω)
– Worst Case: It occurs when the input data forces the algorithm to perform the maximum
number of operations. It is denoted as Big-O (O)
– Average Case: It’s calculated by averaging the running time over all possible inputs of a specific
size. It is denoted by Big-Theta (Θ)
– These Ω, O, and Θ are used to describe the complexity in Asymptotic notations.
∗ In theoretical analysis, Asymptotic notations are used.
• Examples: The Table 1 shows some methods and their complexity involved with it.

Complexity: O(1)

Time Complexity: Code Demonstration

def get_first_element(arr):  # arr is an array
    return arr[0]

Reason: Always does one operation, regardless of array size.
Analogy: Picking the first book from a shelf, regardless of the shelf size.
Let's Think: S(n) = ?

Space Complexity: Code Demonstration

def print_max(arr):
    max_val = arr[0]
    for i in range(1, len(arr)):
        if arr[i] > max_val:
            max_val = arr[i]
    print(max_val)

Reason: Uses only one extra variable (max_val), regardless of n.
Analogy: Finding the tallest student in a class.
Let's Think: T(n) = ?

Complexity: O(n)

Time Complexity: Code Demonstration

def print_all(arr):
    for item in arr:
        print(item)

Reason: It visits each element once, i.e., n operations.
Analogy: Reading all books one by one from a shelf.
Let's Think: S(n) = ?

Space Complexity: Code Demonstration

def copy_array(arr):
    # creates a new array of the same size
    copy = arr[:]
    return copy

Reason: Allocates an extra array of size n.
Analogy: Making an exact photocopy of each page in a book.
Let's Think: T(n) = ?

Complexity: O(n²)

Time Complexity: Code Demonstration

def print_pairs(arr):
    for i in arr:
        for j in arr:
            print(i, j)

Reason: Nested loops each run n times; total operations: n × n = n².
Analogy: Comparing each student with every other student.
Let's Think: S(n) = ?

Space Complexity: Code Demonstration

def create_friend_matrix(n):
    # Initialize all entries to False
    is_friend = [[False for _ in range(n)]
                 for _ in range(n)]

    # Mark some friendships
    is_friend[0][1] = True
    is_friend[1][0] = True
    is_friend[2][3] = True
    is_friend[3][2] = True

    return is_friend

Reason: Uses an auxiliary 2D array of size n × n.
Analogy: Listing each student with every other student, noting whether they are friends.
Let's Think: T(n) = ?

Complexity: O(log₂ n)

Time Complexity: Code Demonstration

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

Reason: Each iteration halves the search space: log₂(n) iterations.
Analogy: Looking for a word in a very large physical dictionary.
Let's Think: S(n) = ?

Space Complexity: Code Demonstration

def binary_search_recursive(arr, target, low, high):
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, high)
    else:
        return binary_search_recursive(arr, target, low, mid - 1)

Reason: Each recursive call adds a new frame to the call stack, with depth log₂(n).
Analogy: Calling an assistant each time to check whether the word is in the middle of this section.
Let's Think: T(n) = ?

¹ There exist several other complexities, such as √n, n!, log₂ n, n log₂ n, 2ⁿ.

Table 1: Examples: Code demonstration for T(n) and S(n)
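To make the best/worst-case distinction above concrete, here is a small sketch (not part of the original notes; function name is mine) that counts comparisons in a linear search. The best case (Ω) finds the target at index 0 with one comparison; the worst case (O) scans all n elements.

```python
def linear_search_count(arr, target):
    """Linear search that also reports the number of comparisons made."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [10, 20, 30, 40, 50]

# Best case (Omega): target is the first element -> 1 comparison
print(linear_search_count(arr, 10))   # (0, 1)

# Worst case (Big-O): target is absent -> n comparisons
print(linear_search_count(arr, 99))   # (-1, 5)
```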

1.1 Input Size


In competitive programming, understanding the input size (n) is crucial when analyzing the time and space complexity of algorithms. The input size is the total amount of data provided to the algorithm, and it varies depending on the problem type (arrays, trees, graphs, numbers, strings, etc.).
Below is a comprehensive list of problem types and what counts as their input size:

Table 2: The type of problem and its input size n

Problem Type         | Input Size n                                  | Example Problems
Array                | n = # of items                                | Find max/min, sorting, searching
Number (Decimal)     | n = the number itself ²                       | Primality test, GCD(a, b), modular exponentiation
Number (Binary)      | n = # of digits = log₂(number)                | Bit operations, fast exponentiation
Matrix               | n = # of cells = rows × columns               | Matrix multiplication, path in grid
Tree                 | n = # of nodes                                | Traversals, height/diameter, LCA
Graph                | n = vertices + edges                          | BFS, DFS, Dijkstra, Kruskal, Topological sort
String               | n = length(string) (or n + m for 2 strings)   | Pattern matching, LCS, edit distance
Set / Map / Hash     | n = # of inserted items                       | Insert/search/delete, frequency count
Sorting              | n = # of items                                | Merge Sort, Quick Sort, Heap Sort
DP (1D or 2D)        | size of DP table (1D: n, 2D: r × c)           | Fibonacci, Knapsack, LCS, Coin change
Geometry             | n = # of points/objects/shapes                | Convex Hull, Closest Pair, Line sweep
Combinatorics        | n = # of items, but result space is 2ⁿ (subsets) or n! (permutations) | Subset generation, permutation backtracking

² If individual digits are involved, then n = log_b(Number), where b is the base of the number. In cases involving numerical inputs, the effective input size n can depend on the implementation details of the algorithm used to solve the problem.

Abbreviations: LCA = Lowest Common Ancestor, LCS = Longest Common Subsequence.

• Example Interpretation
– In Graphs, input includes both vertices and edges, so time/space complexities are usually in terms
of V + E.
∗ BFS/DFS: O(V + E)
∗ Dijkstra's: O((V + E) log V) with a priority queue
∗ Floyd-Warshall: O(V³)
– In DP, if you’re solving a 0/1 Knapsack problem with n items and capacity W, the input size is n
+ W and table size is n × W.
∗ Fibonacci DP: T(n) = O(n)
∗ Knapsack DP: T(n) = O(nW)
– In Combinatorics, even though the input is small (say n = 20), the solution space (2²⁰ ≈ 10⁶
subsets) can be huge.

• Order of Growth:

O(1) < O(log₂ n) < O(n) < O(n log₂ n) < O(n²) < O(2ⁿ) (1)

Where, n = input size


Q1) Why should we learn this order of growth for competitive coding, given that it depends on the input size?
Q2) How does the input size n affect the time complexity? Run time depends on the number of instructions executed, so how does n influence the number of instructions?

To answer Q1, let's solve the following question.

Question: For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) seconds.

Table 3: Largest n vs Time [1]

f(n)      | 1 second | 1 minute  | 1 hour      | 1 day | 1 month | 1 year | 1 century
lg₂ n     |          |           |             |       |         |        |
√n        |          |           |             |       |         |        |
n         | 10⁸      | 60 × 10⁸  | 3600 × 10⁸  |       |         |        |
n lg₂ n   |          |           |             |       |         |        |
n²        |          |           |             |       |         |        |
n³        |          |           |             |       |         |        |
2ⁿ        |          |           |             |       |         |        |
n!        |          |           |             |       |         |        |

Hint: Assume 10⁸ operations/sec. (The remaining cells are left for you to fill in.)
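As a worked example for two cells of Table 3 (a sketch I added, assuming the hinted 10⁸ operations per second): for f(n) = n², the largest n solvable in 1 second satisfies n² = 10⁸, i.e. n = 10⁴; for f(n) = 2ⁿ, it is the largest n with 2ⁿ ≤ 10⁸, i.e. n = 26.

```python
import math

OPS_PER_SEC = 10**8  # assumed machine speed from the hint
t = 1                # time budget in seconds

# f(n) = n^2  ->  largest n with n^2 <= t * OPS_PER_SEC
n_quadratic = math.isqrt(t * OPS_PER_SEC)
print(n_quadratic)    # 10000

# f(n) = 2^n  ->  largest n with 2^n <= t * OPS_PER_SEC
n_exponential = (t * OPS_PER_SEC).bit_length() - 1
print(n_exponential)  # 26
```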

1.2 Exercise
Consider the input sizes n = [1, 2, 4, 10, 20, 25, 50, 100, 10³, 10⁸] and prepare a table to validate
Equation 1.
a) Identify the minimum input size (n₀) for which Equation 1 holds.

b) Find the proper places of √n and n! in Equation 1.

The Space and Time complexity Analysis can be done in two ways

• Experimental Analysis
• Asymptotic/Theoretical Analysis (Already discussed at the beginning)

1.3 Experimental Analysis


Experimental analysis refers to measuring the actual performance (typically run-time and memory usage) of
your program using tools or manual timing. It complements theoretical analysis by validating whether the
algorithm performs well on actual hardware.
• It emphasizes the practical aspect: write the program, then use real input sizes and profilers
(like clock(), chrono, time(), or IDE profiling tools) to determine actual time/memory usage.
• It helps verify the theoretical analysis on actual hardware.
• In this analysis, we test algorithms with increasing input sizes (e.g., n = 10³, 10⁴, 10⁵, ...)
and plot time vs. input size to see whether the curve matches the expected complexity.

• It helps to choose algorithm(s) for a given input size to solve a particular problem.
• A modern computer can perform about 10⁸ to 10⁹ basic operations per second [1, 2]. Based on this,
the maximum input size depends on the algorithm's time complexity and the available time.
• Table 4 (from the book CP4 by Halim et al. [2]) shows accepted (AC) time complexities depending on input
size.

n            | Worst-case AC Complexity | Algorithm Example(s)
≤ [10..11]   | O(n!), O(n⁶)             | Enumerating permutations
≤ [17..19]   | O(2ⁿ × n²)               | Dynamic Programming (DP) TSP
≤ [18..22]   | O(2ⁿ × n)                | DP with bitmask technique
≤ [24..26]   | O(2ⁿ)                    | Try 2ⁿ possibilities with an O(1) check each
≤ 100        | O(n⁴)                    | DP with 3 dimensions + O(n) loop, nCk = 4
≤ 450        | O(n³)                    | Floyd-Warshall
≤ 2.5K       | O(n² log n)              | 2 nested loops + a tree-related DS
≤ 10⁴        | O(n²)                    | Bubble/Selection/Insertion Sort
≤ 2 × 10⁵    | O(n^1.5)                 | Square Root Decomposition
≤ 4.5 × 10⁶  | O(n log n)               | Merge Sort
≤ 10⁷        | O(n log log n)           | Sieve of Eratosthenes
≤ 10⁸        | O(n), O(log n), O(1)     | Most contest problems have n ≤ 1M (I/O bottleneck)

Rule of thumb for the "worst case AC algorithm" for various single-test-case input sizes n, assuming
that a year-2020 CPU can compute 10⁸ operations in 1 second.
Table 4: Acceptable Time vs Input size [2]

1.3.1 Online Judge
• It is a software system or platform that automatically evaluates the correctness, performance, and
resource usage (time and space) of code solutions submitted for programming problems.
• It is used in competitive coding to measure:

– Correctness of output
– Time taken (Execution Time) → for Time Complexity
– Memory used (RAM): for Space Complexity
– Edge case handling, runtime errors, or crashes

• Examples of Popular Online Judges: Codeforces, LeetCode, AtCoder, HackerRank, Google Kick Start
/ Code Jam (discontinued)
• Common Features:
Feature                    | Description
Automatic Grading          | Verifies correctness of output against multiple test cases
Time Limit Enforcement     | Typically 1–2 seconds per test case
Memory Limit Enforcement   | Usually in MBs (e.g., 256 MB, 512 MB)
Language Support           | C++, Python, Java, Go, Rust, etc.
Feedback                   | Accepted (AC), TLE (Time Limit Exceeded), MLE (Memory Limit Exceeded), WA (Wrong Answer), RE (Runtime Error), CE (Compilation Error)

1.3.2 Time & Space Experimental Tools

Language | Time Function             | Space Function            | Profiler Tools
C++      | chrono, clock()           | sizeof, Valgrind          | Valgrind, gperftools
Java     | System.nanoTime()         | Runtime.getRuntime()      | VisualVM, JProfiler, YourKit
Python   | time(), perf_counter()    | sys.getsizeof, pympler    | cProfile, memory_profiler
Table 5: Time & Space Experimental Tools in Various Languages

Let's demonstrate this experimental analysis. Consider the examples given in Table 6, which
have already been discussed in Table 1.

O(1): Constant Time

def constant_time_op(arr):
    # Always takes the same time
    return arr[0]

O(n): Linear Time

def linear_time_op(arr):
    total = 0
    for i in arr:
        total += i
    return total

O(n²): Quadratic Time

def quadratic_time_op(arr):
    total = 0
    for i in arr:
        for j in arr:
            total += i * j
    return total

Table 6: Three different functions showing T(n) = O(1), O(n), and O(n²)

1.3.3 Measure Execution Time with the given input size


In Python, we can import time and use time.time() to compute the actual execution time of a
function. Similarly, in Java, it is System.nanoTime().

import time
start = time.time()
# Place your function call here
end = time.time()
total_runTime = end - start
print(total_runTime)

The expected execution time vs. input size for the methods given in Table 6 is shown in Table 7.

Table 7: Input size Vs Expected Execution time (Not actual)

Input Size (n) O(1) Time O(n) Time O(n2 ) Time


100 ∼ 0µs ∼ 0.0001 s ∼ 0.001 s
200 ∼ 0µs ∼ 0.0002 s ∼ 0.004 s
400 ∼ 0µs ∼ 0.0004 s ∼ 0.016 s
800 ∼ 0µs ∼ 0.0008 s ∼ 0.065 s
1600 ∼ 0µs ∼ 0.0016 s –
3200 ∼ 0µs ∼ 0.0032 s –
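Putting the pieces together, here is a complete runnable sketch (the helper name measure is mine, not from the notes) that times the O(n) function from Table 6 for growing input sizes:

```python
import time

def linear_time_op(arr):
    # O(n) example from Table 6: sum all elements
    total = 0
    for i in arr:
        total += i
    return total

def measure(func, arr):
    """Return the wall-clock time (in seconds) of one call to func(arr)."""
    start = time.time()
    func(arr)
    return time.time() - start

for n in [1000, 2000, 4000, 8000]:
    arr = list(range(n))
    print(n, measure(linear_time_op, arr))
# Doubling n should roughly double the O(n) time (exact values vary by machine).
```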

1.3.4 Exercise:
a) Use the time function as given in Table 5 to compute and plot the experimental time complexity, T(n) vs.
input size n = [100, 10³, 10⁶], of the iterative binary search (refer to Table 1) and the linear search
algorithm. The answer should follow the template of Table 7. Hint: To generate a sorted array of the
given size, use the following code

def generate_sorted_array(size):
return list(range(size))

b) Use the space function as given in Table 5 to compute the experimental space complexity, S(n) vs. input
size n, of the algorithms given in Table 6.
c) Try using one of the profilers (refer to Table 5) to visualize the experimental time complexity of the
algorithms in Table 6.

2 Time Limit Exceeded (TLE) Errors


Understanding TLE (Time Limit Exceeded) is crucial for solving problems efficiently. It means your program
takes longer to run than the maximum time allowed for a test case. It doesn't mean your logic is wrong – it
means your code is too slow.
TLE is not a failure; it's a signal to optimize.

2.1 Example
Checking whether a decimal number n (where n ≤ 10¹⁰) is prime within 1 second.

Let's consider two solution approaches: Approach-1 takes O(n) time and Approach-2 takes
O(√n) time. If the input size is n ≤ 10⁸, then Approach-1 may solve it within 1 second (refer to Tables 4 and 3).
However, if n > 10⁸, it may take more than 1 second, which is not permissible as per the problem
statement, so the judge will report a TLE error. In this scenario, your obvious choice should be
Approach-2, which can handle input sizes up to 10¹⁰ without a TLE error (since √(10¹⁰) = 10⁵ operations fit easily within the limit).
Refer to the source code example (Check Prime) in Section 2.3.

2.2 When Does TLE Occur?


• On competitive coding platforms, the online judge sets a time limit (e.g., 1 or 2 seconds) for
each test case.

• If your code doesn’t produce output within that time, the system terminates it and reports TLE.

2.3 Source Code Example (Check Prime) – to check TLE error


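The toggled source code is not reproduced in this export. The following sketch reconstructs the two prime checks discussed in Section 2.1 (both reappear in Table 9): the naive O(n) version that triggers TLE for large n, and the O(√n) version that avoids it.

```python
def is_prime_naive(n):
    # O(n): tries every candidate divisor from 2 up to n - 1
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

def is_prime_root_n(n):
    # O(sqrt(n)): divisors come in pairs (d, n // d),
    # so testing up to sqrt(n) suffices
    if n <= 1:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# For n near 10**9 the naive version far exceeds a 1-second limit,
# while the sqrt version needs only about 3 * 10**4 iterations.
```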

2.4 Why is TLE Important?
• TLE helps enforce efficient algorithms.

• It eliminates brute force and rewards optimized logic.


• Encourages learning time complexity, data structures, and algorithmic design.

2.5 How to Avoid TLE?
• Analyze Time Complexity: choose an algorithm with an acceptable time for the input size (refer to Table 4).
– n ≤ 10⁸: O(n) or O(n log n)
– n ≤ 10⁴: O(n²) may work
• Optimize Input/Output

– Use sys.stdin.readline() in Python for fast input


– Use scanf/printf instead of cin/cout in C++

• Avoid Unnecessary Computation

– Don’t compute the same result repeatedly


– Use memoization, prefix sums, or precomputation

• Use Fast Data Structures: set, dict, heapq, deque, etc in python
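As a concrete illustration of the input/output advice above, here is a minimal sketch of the fast-input idiom in Python (the problem format — read n integers and print their sum — is hypothetical):

```python
import sys

def solve():
    # One bulk read of all of stdin, then cheap tokenizing: far faster than
    # calling input() once per line for large inputs.
    data = sys.stdin.buffer.read().split()
    n = int(data[0])
    arr = list(map(int, data[1:1 + n]))
    # sys.stdout.write avoids the per-call overhead of print for large outputs.
    sys.stdout.write(str(sum(arr)) + "\n")

# solve()  # call once; the judge supplies stdin
```

In C++, the analogous trick is `ios::sync_with_stdio(false); cin.tie(nullptr);` (or scanf/printf as noted above).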

2.6 Exercise
Q.1) Test each method listed in Table 9 to check whether it results in a Time Limit Exceeded (TLE) error
under a 1-second execution limit. For each method, record the maximum input size that executes
within the time limit.
Then, write your observations by explaining the relationship between the input size, the time taken,
and the occurrence of the TLE error.
Q.2) The following Python method results in a TLE (Time Limit Exceeded) error when n = 10¹⁶ + 61, but
does not produce a TLE for n = 10¹⁸. What could be the reason behind this behavior?

def is_prime_optimized_rootN(n):
    if n <= 1:
        return False
    i = 2
    while i * i <= n:  # equivalent to i <= sqrt(n), without floating-point sqrt
        if n % i == 0:
            return False
        i += 1
    return True

3 Avoiding off-by-one errors


An off-by-one error (OBOE) happens when you loop one step too few or too many, or use the wrong
index, usually because of incorrect loop bounds.
This usually happens when:
• You confuse < with ≤, or > with ≥
• You mix up 0-based vs. 1-based indexing
• You overstep the bounds of arrays or ranges

3.1 Why Off-by-One Is Dangerous in Contests


• Leads to wrong answers (WA)
• Can cause TLE (loop goes one step too far)
• Can cause runtime errors (e.g., IndexError, Segmentation fault)
• Hard to debug quickly

3.2 Common Patterns with Examples

1. Skipping the first element due to a wrong loop start index:

arr = [10, 20, 30]
for i in range(1, len(arr)):
    print(arr[i])

Fix – correct 0-based indexing:

arr = [10, 20, 30]
for i in range(len(arr)):
    print(arr[i])

2. Using range(0, n+1) on an array of size n (IndexError):

arr = [0] * n
for i in range(0, n+1):
    arr[i] = i

Fix – loop up to n-1:

arr = [0] * n
for i in range(n):
    arr[i] = i

3. Misinterpreting an inclusive range (R is skipped):

L, R = 5, 10
for i in range(L, R):
    print(i)

Fix – include the upper bound explicitly:

L, R = 5, 10
for i in range(L, R+1):
    print(i)

4. Palindrome check oversteps the midpoint:

s = "racecar"
for i in range(len(s)//2 + 1):
    if s[i] != s[len(s)-1-i]:
        print("Not Palindrome")

Fix – stop at the midpoint (exclusive):

s = "racecar"
for i in range(len(s)//2):
    if s[i] != s[len(s)-1-i]:
        print("Not Palindrome")

5. Binary search with the wrong loop condition (low < high):

def binary_search(arr, low, high, item):
    while low < high:
        mid = (low + high) // 2
        if arr[mid] == item:
            return mid  # item found
        elif arr[mid] < item:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # item not found

Fix – use the standard binary search form (low <= high):

def binary_search(arr, low, high, item):
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == item:
            return mid
        elif arr[mid] < item:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # item not found

Table 8: Common Off-by-One Errors and Their Fixes with Code

3.3 Strategy to Avoid Off-by-One Errors


Step Practice
1 Know whether the loop should be inclusive or exclusive
2 Know whether array is 0-based or 1-based (Usually 0-based)
3 When in doubt, print loop indices to debug
4 Watch out for corner cases: n = 0, n = 1, n = 2; arrays of size 1; empty strings/arrays
5 Use helper functions and unit tests when possible
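Step 5 in practice: a tiny sketch (the helper name is mine) of guarding an inclusive-range sum against off-by-one errors with unit checks on the corner cases from step 4.

```python
def range_sum_inclusive(L, R):
    """Sum of the integers from L to R, both endpoints included."""
    return sum(range(L, R + 1))  # R + 1 because range() excludes its upper bound

# Unit checks covering the corner cases
assert range_sum_inclusive(5, 5) == 5     # single-element range
assert range_sum_inclusive(5, 10) == 45   # 5+6+7+8+9+10
assert range_sum_inclusive(0, 0) == 0
```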

4 Analysis of Loop
In competitive coding, analyzing loops (single or nested) is often the first step to estimate time
complexity, check for Time Limit Exceeded (TLE) risks (discussed in Section 2), and optimize the
solution accordingly.

• Analysis Strategy:
– Identify the number of iterations of the loop.
– Estimate work inside the loop – is it constant O(1), or some function of n?
– Calculate total time complexity.
– Verify via code (optional) using a timer to confirm feasibility.

4.1 Single Loop


One of the best ways to build intuition for time complexity is to show how a single loop can behave like
O(n), O(log n), or even O(√n), depending on how the loop control variable changes (or how inner
operations behave).

• If there exists only a single loop in an algorithm/program, which runs about n times (it may run
n ± k times for some constant k ≥ 0), and the statements/instructions inside the loop take constant time O(1),
then the algorithm's complexity is treated as O(n). Some examples of this complexity:
– Checking a Prime number
– Finding the divisors of a number
– Finding ab
– Finding the frequency of each item in an array
– Checking if the array is sorted
– Searching for an item in an array (Linear search)

• If the loop condition is i * i <= n (or i <= sqrt(n)) and the loop control variable is incremented as
i += 1, then T(n) = O(√n). For example:
– Checking a prime number
– Finding divisors of a number
– Grover’s Algorithm (A quantum algo, used to search an item in an unsorted database of n entries)

• If the loop condition is i ≤ n and the loop control variable i is doubled or halved each time, i.e., i *= 2
or i //= 2, then T(n) = O(log₂ n)
– Finding ab
– Searching for an item in an array (Binary search, Refer Table 1)
– Euclidean Algorithm for GCD

The following table presents a comparison of various algorithms that address the same problem, high-
lighting their differences in computational complexity.

Table 9: Simple Loop Analysis and Identifying T(n)

Checking prime — O(n):

def is_prime_NaiveApproach(n):
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

Checking prime — O(√n):

def is_prime_optimized_rootN(n):
    if n <= 1:
        return False
    i = 2
    while i * i <= n:  # equivalent to i <= sqrt(n); avoids floating-point sqrt
        if n % i == 0:
            return False
        i += 1
    return True

Finding divisors of a number — O(n):

def find_divisors_Naive(n):
    divisors = set()
    for i in range(1, n+1):
        if n % i == 0:
            divisors.add(i)
    return divisors

Finding divisors of a number — O(√n):

import math

def find_divisors_Optimized(n):
    divisors = set()
    for i in range(1, int(math.sqrt(n)) + 1):
        if n % i == 0:
            divisors.add(i)
            divisors.add(n // i)  # divisors come in pairs (i, n // i)
    return divisors

Finding a^b — O(b):

def power_naive(a, b):
    result = 1
    for _ in range(b):
        result *= a
    return result

Finding a^b — O(log₂ b). How??

def power_fast(a, b):
    result = 1
    while b > 0:
        if b % 2 == 1:
            result *= a
        a *= a
        b //= 2
    return result
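The "How??" for power_fast: b is halved every iteration, so the loop runs ⌊log₂ b⌋ + 1 times. Writing b in binary, the loop multiplies result by a^(2^k) exactly for those bits of b that are 1. A sketch tracing this (the trace instrumentation is mine):

```python
def power_fast_traced(a, b):
    """power_fast from Table 9, also returning the number of loop iterations."""
    result = 1
    steps = 0
    while b > 0:
        if b % 2 == 1:   # current lowest bit of b is 1
            result *= a
        a *= a           # a now holds the original a raised to 2**steps
        b //= 2          # drop the lowest bit: b is halved
        steps += 1
    return result, steps

val, steps = power_fast_traced(3, 13)  # 13 = 0b1101, four bits
print(val, steps)  # 1594323 4  (3**13, and 4 = floor(log2(13)) + 1 iterations)
```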

4.1.1 Exercise
Q.1) For each method listed in Table 10, determine the time complexity and auxiliary space com-
plexity in Big-O notation. Additionally, clearly state the objective of each method by describing
the expected output for a given sample input.

Q.2) In Table 10, Can Method-3 and 4 be further optimized?


Q.3) In Table 10, rewrite the Method-5 to count number of digits in an integer number. Also, analyze the
time complexity of this method.
Q.4) Explain the time complexity of the following code snippet.

• If Some work is of O(1)
• If Some work is of O(n)

def generate_strings(n):
    for i in range(2**n):
        # Some work

Table 10: Exercise

Method-1:

def divisor_groups(n):
    i = 1
    while i <= n:
        q = n // i
        next_i = n // q + 1
        print(f"For quotient {q}, i goes from {i} to {next_i - 1}")
        i = next_i

Method-2:

from collections import Counter

def count_pairs(arr, target):
    freq = Counter()
    count = 0
    for num in arr:
        count += freq[target - num]
        freq[num] += 1
    return count

Method-3:

def is_sorted(arr):
    for i in range(1, len(arr)):
        if arr[i] < arr[i-1]:
            return False
    return True

Method-4:

def count_frequency(arr):
    freq = {}
    for num in arr:
        if num in freq:
            freq[num] += 1
        else:
            freq[num] = 1
    return freq

Method-5:

# n = a decimal number
def count_digits(n):
    count = 0
    while n > 0:
        n //= 2
        count += 1
    return count

Method-6:

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

Note: In this table, n, a, and b are decimal numbers. arr denotes an array.

4.2 Nested Loop Analysis


Reason for Analyzing the Nested Loops
• Nested loops often imply higher complexity, e.g., O(n2 ) or O(n3 )
• These are often too slow for large inputs (e.g., n = 104 or 105 )

• Understanding when and how you can optimize nested loops helps avoid TLE (Time Limit
Exceeded) and win contests!
• Goal:
– Detect inefficient nested loops and see if you can replace them with a more optimized solution
using better data structures or mathematical insight.
Let's explore some classic examples, given in Table 11, where O(n²) solutions can be optimized to
O(n), O(n log n), O(log₂ n), or even O(√n).

Table 11: Nested Loop Optimizations: Naive vs Optimized Code

Problem: Count pairs with sum = target

Naive (O(n²)):

count = 0
for i in range(len(arr)):
    for j in range(i+1, len(arr)):
        if arr[i] + arr[j] == target:
            count += 1

Optimized:

from collections import defaultdict
count = 0
freq = defaultdict(int)
for num in arr:
    count += freq[target - num]
    freq[num] += 1
# Using a hash map. Optimized: O(n)

Problem: Count frequency of all divisors up to n

Naive (O(n²)):

for i in range(1, n+1):
    for j in range(1, i+1):
        if i % j == 0:
            div_count[i] += 1

Optimized:

for i in range(1, n+1):
    for j in range(i, n+1, i):
        div_count[j] += 1
# Optimized: O(n log n)

Problem: Sum of divisors for all numbers up to n

Naive (O(n²)):

for i in range(1, n+1):
    for j in range(1, i+1):
        if i % j == 0:
            div_sum[i] += j

Optimized:

for i in range(1, n+1):
    for j in range(i, n+1, i):
        div_sum[j] += i
# Optimized: O(n log n)

Problem: Number of distinct pairs (i, j) with i * j ≤ n

Naive (O(n²)):

count = 0
for i in range(1, n+1):
    for j in range(1, n+1):
        if i * j <= n:
            count += 1

Optimized:

count = 0
i = 1
while i <= n:
    count += n // i
    i += 1
# Optimized: O(n); grouping equal quotients n // i reduces it further to O(sqrt(n))
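Why the "optimized" divisor loops above run in O(n log n): for each i, the loop range(i, n+1, i) does about n/i steps, so the total work is n/1 + n/2 + ... + n/n = n · H(n) ≈ n ln n (the harmonic series). A quick numerical check (a sketch I added):

```python
import math

n = 10**5
# Exact number of inner-loop steps across all i
total_steps = sum(n // i for i in range(1, n + 1))
# Harmonic-series estimate: n * ln(n)
estimate = n * math.log(n)
print(total_steps, round(estimate))
# The two agree to within a small constant factor, confirming O(n log n).
```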

4.3 Exercise
Refer the code snippets given in Table 12 and answer the following
Q.1) Given an array of size n, count the number of pairs (i, j) such that i < j and arr[i] + arr[j] is
even.
(a) Write a brute-force solution using nested loops.
(b) Analyze its time complexity.
(c) Can you optimize it to O(n)? If yes, how?

Q.2) Analyze the time complexity of the following code ( Table 12, Q.2) and suggest an optimization if
possible
Q.3) Given an n x n sorted 2D matrix (each row and column is sorted), find whether a given number x
exists in the matrix.
Analyze the naive approach as given in the Table 12 and write an optimized code.
Q.4) Analyze the following code (Table 12, Q.4)
(a) What is the objective of this code?
(b) What is its Time complexity?
(c) Optimize the code so that T(n) will be O(n log n)
Q.5) Given a number n, count how many pairs (i, j) exist such that i divides j, where 1 ≤ i, j ≤ n.
The brute-force approach is given below.

    for i in range(1, n+1):
        for j in range(i, n+1):
            if j % i == 0:
                count += 1

• What is the time complexity of this code?
• Suggest a more efficient approach.
Table 12: Code snippets associated with the Exercise in Section 4.3

Q.2:
    for i in range(1, n+1):
        for j in range(1, i+1):
            total += j

Q.3:
    for i in range(n):
        for j in range(n):
            if matrix[i][j] == x:
                return True

Q.4:
    for i in range(1, n+1):
        for j in range(1, i+1):
            if i % j == 0:
                div_count[i] += 1
5 Analysis of Sorting Algorithm
5.1 Bubble Sort Analysis
Bubble Sort is a simple comparison-based sorting algorithm in which adjacent elements are repeatedly swapped
if they are in the wrong order. Two versions are given below in Table 13.
Table 13: Bubble Sort – Inefficient Vs Efficient

Standard Bubble Sort (Inefficient Version):

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            for j in range(0, n - i - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]

Always runs n(n-1)/2 comparisons, even if the array is already sorted. Always T(n) = O(n^2).

Optimized Bubble Sort (Efficient Version):

    def optimized_bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            swapped = False
            for j in range(0, n - i - 1):  # last i elements already sorted
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            if not swapped:  # No swaps in this pass → array is sorted
                break

The swapped flag allows an early exit if no swaps are made in a pass, i.e., when the array is already sorted. Best case: O(n), Worst case: O(n^2).
• Bubble sort is a stable sort, meaning the relative order of equal elements is preserved.
• Sorting is done in-place (an in-place sorting algorithm3), i.e., no extra space is used. It modifies the
input array without needing a large copy of it.
– Space complexity is O(1)
• Suitable for tiny data sets
• Use it in situations that require a stable, in-place sort where performance is not critical
3 In-place sorting Algos: Insertion Sort, Selection Sort, Quick Sort, Heap Sort, Shell Sort
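The early-exit behaviour of the optimized version can be observed by counting passes. This sketch re-implements optimized_bubble_sort from Table 13 with an added pass counter (the verbose name and counter are illustrative additions, not from the table):

```python
def optimized_bubble_sort_verbose(arr):
    # Same algorithm as the optimized version in Table 13, but works
    # on a copy and also returns how many passes were performed.
    a = arr[:]
    n = len(a)
    passes = 0
    for i in range(n):
        passes += 1
        swapped = False
        for j in range(0, n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:      # already sorted -> stop early
            break
    return a, passes

print(optimized_bubble_sort_verbose([1, 2, 3, 4, 5]))  # ([1, 2, 3, 4, 5], 1)
print(optimized_bubble_sort_verbose([5, 4, 3, 2, 1]))  # ([1, 2, 3, 4, 5], 5)
```

A sorted input needs only one (swap-free) pass, while a reversed input still needs the full quadratic work.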
5.2 Merge Sort Analysis
• Merge Sort is a divide-and-conquer sorting algorithm.
• It divides the array into halves, recursively sorts them, and merges the sorted halves.
• Given an array arr[0..n-1]:
1. Divide: Split the array into two halves: left = arr[0..mid-1], right = arr[mid..n-1]
2. Recur: Recursively sort left and right
3. Merge: Combine the two sorted halves into one sorted array
Merge Sort Code (Python)
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0

    # Merge two sorted arrays
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result.extend(left[i:])
    result.extend(right[j:])
    return result
5.3 Time Complexity Analysis
Let T(n) be the time to sort n elements.
• Divide step: O(1)
• Recursive calls: 2 calls on size n/2 → 2T(n/2)
• Merge step: O(n) comparisons and insertions
• Recurrence Relation:

    T(n) = Θ(1)               if n = 1
    T(n) = 2T(n/2) + Θ(n)     if n > 1

• By solving:
    T(n) = O(n log n)
• The best, average and worst case T (n) = O(n log n)
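The O(n log n) bound can be seen by unrolling the recurrence (a standard recursion-tree sketch; c is an assumed constant standing in for the Θ(n) merge cost):

```latex
T(n) = 2T(n/2) + cn
     = 2[2T(n/4) + c(n/2)] + cn = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = \dots
     = 2^k T(n/2^k) + kcn
% The recursion stops when n/2^k = 1, i.e. k = \log_2 n:
T(n) = nT(1) + cn\log_2 n = O(n \log n)
```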
5.4 Space Complexity
• Merge Sort uses O(n) auxiliary space for the temporary arrays created during merging.
• Not in-place4 (unlike Quick Sort or Heap Sort)
Note:
• Stable Sort: It preserves the order of equal elements
• Works on data sets that don’t fit in memory (merge sort is the basis of external sorting)
• Efficient for Linked Lists
– Merge Sort can be implemented with O(1) auxiliary space on linked lists.
– Unlike arrays, you can’t easily jump across a linked list, so merge sort is ideal.
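To make the linked-list point concrete, here is a hedged sketch (the Node class and helper names are illustrative, not from the text). It relinks existing nodes instead of allocating temporary arrays; the recursion stack still uses O(log n), and a bottom-up iterative version would achieve true O(1) auxiliary space:

```python
class Node:
    # Minimal singly linked list node (assumed for this sketch)
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def sort_list(head):
    if head is None or head.next is None:
        return head
    # Find the middle with slow/fast pointers, then cut the list in two
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = sort_list(head), sort_list(mid)
    # Merge by relinking the existing nodes (no extra arrays)
    dummy = tail = Node(0)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(vals):
    # Build a linked list from a Python list (test helper)
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_list(head):
    # Flatten a linked list back to a Python list (test helper)
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

print(to_list(sort_list(from_list([4, 2, 1, 3]))))  # [1, 2, 3, 4]
```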
6 Analysis of Searching Algorithms
Searching is the operation of locating a specific value or record within a given data structure (such as an
array, list, or database) based on a given search key or condition.
6.1 Linear Search
Linear search simply scans each element one-by-one from the start until it either finds the target or reaches
the end.
Example Code (Python)
def linear_search(arr, target):
for i in range(len(arr)):
if arr[i] == target:
return i # Return the index if found
return -1 # Return -1 if not found
6.1.1 Example Execution

Input: arr = [5, 3, 7, 2, 9], target = 2

Index   Value   Compare to 2   Result
0       5       5 ≠ 2          Continue
1       3       3 ≠ 2          Continue
2       7       7 ≠ 2          Continue
3       2       2 = 2          Return 3
6.1.2 Time Complexity

Case      Complexity   Reason
Best      O(1)         Target found at the first element
Average   O(n)         About n/2 comparisons on average
Worst     O(n)         Target at the end or not present

Space Complexity: O(1) — constant extra space
6.1.3 Limitations
• Too slow for large n (e.g., n > 10^6 to 10^8, depending on the time limit)
• Does not leverage sorted data
4 Non-in-place sorting Algos: Merge, Counting, Radix, Bucket, TimSort (TimSort in Python and Java; best case O(n), worst
case O(n log n))
6.2 Binary Search
Binary search requires the array to be sorted. It works by repeatedly halving the search space:
• Compare target with the middle element.
• If equal, done.
• If less, search the left half.
• If greater, search the right half.
6.3 Example Code (Python)
def binary_search(arr, target):
low, high = 0, len(arr) - 1
while low <= high:
mid = (low + high) // 2
if arr[mid] == target:
return mid
elif arr[mid] < target:
low = mid + 1
else:
high = mid - 1
return -1
6.3.1 Example Execution
Input: arr = [1, 3, 5, 7, 9], target = 7
low high mid arr[mid] Action
0 4 2 5 5 < 7 → right half (low = 3)
3 4 3 7 Found at index 3
6.3.2 Time Complexity
In binary search, the search space halves on each iteration. If the number of items is n:

Iteration                        Search space size
0th                              n
1st                              n/2
2nd                              n/4
3rd                              n/8
...                              ...
kth (when the loop terminates)   n/2^k = 1

Solving n/2^k = 1 gives k = log2 n, i.e., T(n) = O(log2 n).
Case      Complexity   Why
Best      O(1)         Found at the middle directly
Average   O(log n)     Halving each time
Worst     O(log n)     Halving until the search space is exhausted
6.3.3 Space Complexity
• Iterative: O(1)
• Recursive: O(log n) (due to call stack) Refer Table 1 for recursive binary search code
6.4 Binary Search Common Mistake in Competitive Coding
• Forgetting that binary search requires sorted input.
• Using mid = (low + high) // 2 naively: in C, C++, and Java, low + high can overflow the integer range,
so it is suggested to compute mid = low + (high - low) / 2 instead. (Can you explore the reason WHY ??)
• Failing to think in terms of searching over the answer, not just arrays.
6.5 Linear Search vs Binary Search: Analysis

Aspect         Linear Search                      Binary Search
Prerequisite   Works on unsorted or sorted data   Requires sorted data
Approach       Sequentially checks each element   Repeatedly halves the search space
Best Case      O(1) (first element)               O(1) (middle element is target)
Average Case   O(n)                               O(log n)
Worst Case     O(n)                               O(log n)
Space          O(1)                               O(1) iterative / O(log n) recursive (stack)
Stability      Not relevant                       Not relevant
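The asymptotic gap can be made visible with a rough, illustrative step-count (not a rigorous benchmark; the helper names are assumptions of this sketch):

```python
def linear_steps(arr, target):
    # Number of elements inspected by a linear scan
    for steps, v in enumerate(arr, start=1):
        if v == target:
            return steps
    return len(arr)

def binary_steps(arr, target):
    # Number of iterations performed by binary search
    low, high, steps = 0, len(arr) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps

arr = list(range(1_000_000))
target = 999_999                   # worst case for linear search
print(linear_steps(arr, target))   # 1000000
print(binary_steps(arr, target))   # about 20 (log2 of 10^6)
```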
6.6 Example
Problem: Find the smallest x such that x2 ≥ target

Solution (how it works):
a) We search x within the range [0, target].
b) For each mid:
• If mid2 ≥ target store mid as a potential answer and search smaller.
• If mid2 < target search larger.
c) Finally, answer holds the smallest x satisfying the condition.
Python Code:
def find_smallest_x(target):
    low, high = 0, target
    answer = -1

    while low <= high:
        mid = (low + high) // 2
        if mid * mid >= target:
            answer = mid
            high = mid - 1  # Search smaller possible x
        else:
            low = mid + 1   # Search larger possible x

    return answer
# Example usage:
target = 20
result = find_smallest_x(target)
print(f"The smallest x such that x^2 >= {target} is {result}")
Example: For target = 20, the output is: 5. Because 52 = 25 ≥ 20 and 42 = 16 < 20.
6.6.1 Time Complexity:
Binary search on range [0, target] → T(n) = O(log target)
6.6.2 Exercise
Q.1) Given a sorted array and a target, return the index if found, else return the index where it would be
inserted.(LeetCode 35)

Q.2) Compute and return the integer square root of a non-negative integer x. Return the floor of the square
root.(LeetCode 69)
Q.3) Return True if a given number is a perfect square using Binary Search.(LeetCode 367)
Q.4) Find the peak index in a mountain array (first strictly increasing, then strictly decreasing). (LeetCode
852)
Q.5) As we know, binary search requires the input data to be sorted beforehand. Therefore,
if we are given an unsorted dataset, we must first sort it, which incurs a time complexity of T1(n)
(where T1(n) depends on the sorting algorithm used, typically O(n log n) for efficient algorithms).
Once sorted, binary search can be applied with a time complexity of O(log n).
Thus, the overall time complexity becomes:
T(n) = T1(n) + O(log n).
In comparison, a linear search operates directly on unsorted data with a time complexity of O(n).
Therefore, binary search (preceded by sorting) may actually take more time than linear search.
This raises an important question: What is the practical relevance of binary search in real-
world applications?
Q.6) Write a method/function to implement ternary search to find an item in a sorted array.
Hint: Here you need to divide the array into 3 parts. mid1 = left + (right - left) // 3 and
mid2 = right - (right - left) // 3
a) Analyse the code and find out the time complexity of the Ternary search?
b) Which one (Binary or Ternary) is more efficient while searching an item in a sorted array? Justify
your answer.
7 Bit Manipulation
• Bit Manipulation refers to performing operations directly on bits (binary digits) of integers using
bitwise operators.
• Bit Manipulation is essential for solving specific optimization problems, subsets generation,
parity problems, and problems related to binary representations efficiently in competitive
coding.
7.1 Key Bitwise Operators (C++, Python, Java)
Operator Symbol Description
AND & 1 if both bits are 1 otherwise 0
OR | 0 if both bits are 0 otherwise 1
XOR ^ 1 if bits differ otherwise 0
NOT ~ Flips all bits (0 to 1 and 1 to 0)
Left Shift << Shift bits to the left
Right Shift >> Shift bits to the right
7.2 Advantages of Bit Manipulation
Advantage Why?
Faster Operations are performed directly at the hardware level (O(1))
Compact Bit-level tricks reduce complex logic to a few lines
Space-efficient Can use individual bits to represent states (bitmasking)
Elegant Solutions Solves certain problems that are cumbersome otherwise
Enables Optimization Reduces memory usage (bitsets, flags)
7.3 Decimal Vs Bit Manipulation: A Quick Comparison
Operation Decimal Approach Bit Manipulation Approach
Multiply by 2 x * 2 x << 1 (left shift 1 bit)
Divide by 2 x // 2 x >> 1 (right shift 1 bit)
Check if even x % 2 == 0 (x & 1) == 0 (parenthesized: in Python, == binds tighter than &)
Swap variables (with XOR) Using temp variable x = x ^ y; y = x ^ y; x = x ^ y
Count number of 1s (bits) Use loop / log2 x & (x-1) repeatedly or bin(x).count(’1’)
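The equivalences in the table above can be verified directly (a small illustrative script; popcount is an assumed helper name for the x & (x-1) trick):

```python
# Verify decimal vs bit-level equivalences for a few sample values
for x in [0, 1, 7, 10, 255]:
    assert x * 2 == x << 1
    assert x // 2 == x >> 1
    assert (x % 2 == 0) == ((x & 1) == 0)  # note the parentheses around x & 1

# XOR swap (works on integers; a temp variable is clearer in practice)
x, y = 3, 5
x = x ^ y; y = x ^ y; x = x ^ y
print(x, y)                                # 5 3

def popcount(x):
    # Count set bits by clearing the lowest set bit each iteration
    c = 0
    while x:
        x &= x - 1
        c += 1
    return c

print(popcount(13), bin(13).count('1'))    # 3 3
```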
7.4 Common Bit Manipulation Operations (Verify with Examples)
Operation Purpose Expression Explanation
Get i-th Bit Check if i-th bit is 1 (n >> i) & 1 Shift i bits to right, AND with 1
Set i-th Bit Make i-th bit 1 n | (1 << i) (1 << i) is the mask
Clear i-th Bit Make i-th bit 0 n & ∼(1 << i) AND with NOT of 1 at i-th position
Update i-th Bit Set i-th bit to b (0 or 1) n = (n & ∼(1 << i)) | (b << i) First clear bit i, then OR in the mask (b << i)
Clear Last i Bits Clear last i bits n & (∼0 << i) AND with 1s everywhere except the last i bits
(∼0 is all 1s)
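As the section heading suggests, these operations can be verified with a few assertions (n = 0b101101 is an arbitrary example value; the update row uses the two-step clear-then-OR form, since clearing alone cannot set a bit):

```python
n = 0b101101  # 45

assert (n >> 2) & 1 == 1                 # get bit 2 (it is 1)
assert n | (1 << 1) == 0b101111          # set bit 1
assert n & ~(1 << 0) == 0b101100         # clear bit 0
b = 0
assert (n & ~(1 << 3)) | (b << 3) == 0b100101   # update bit 3 to b = 0
assert n & (~0 << 3) == 0b101000         # clear the last 3 bits
print("all bit operations verified")
```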
7.5 Examples of Bit Manipulation Tricks
7.5.1 Example 1: Check if a number is power of 2
def is_power_of_two(n):
if n > 0 and (n & (n - 1)) == 0:
return 1
else:
return 0
Reason: A power of 2 has exactly one bit set. For example, n = 8 (= 1000 in binary) has only bit 3 set, so n & (n - 1) == 0.
7.5.2 Example 2: Counting 1 Bits (LeetCode 191)
Counting the number of 1s in the binary representation of a decimal number–Hamming Weight
def count_ones(n):
count = 0
while n:
n &= (n - 1)
count += 1
return count
Reason: Each step clears the lowest set bit.
7.5.3 Example 3: Set the k-th Bit. Making k-th bit 1
def set_kth_bit(n, k):
result = n | (1 << k)
return result
Reason: Here, the mask is 1 << k (decimal 1 shifted left by k positions). Then apply the OR operation.
7.5.4 Example 4: Toggle/Reverse the k-th Bit (LeetCode 190)
def toggle_kth_bit(n, k):
result = n ^ (1 << k)
return result
Reason: Here, the mask is 1 << k (decimal 1 shifted left by k positions). Then apply the XOR operation.
7.6 Classic Competitive Problems Examples
7.6.1 Problem 1: Single Number (Leetcode 136)
Statement: Find the number that appears once while all others appear twice.
def singleNumber(nums):
ans = 0
for num in nums:
ans ^= num
return ans
Input: nums = [4, 1, 2, 1, 2], Output: 4
7.6.2 Problem 2: Subset Generation (Leetcode 78)
Statement: Generate all subsets of an n-element set; each bitmask from 0 to 2^n − 1 encodes one subset.
def generate_subset(n):
all_subsets = []
for mask in range(1 << n):
subset = []
for i in range(n):
if mask & (1 << i):
subset.append(i)
all_subsets.append(subset)
return all_subsets
Input: n = 3, that is, you need to generate the subsets of [0, 1, 2] Output: [ ], [0], [1], [2], [0,
1], [0, 2], [1, 2], [0, 1, 2]
7.6.3 Problem 3: Counting Bits (Leetcode 338)
Statement: For every number from 0 to n, count the number of 1s in its binary representation.
def countBits(n):
dp = [0] * (n+1)
for i in range(1, n+1):
dp[i] = dp[i >> 1] + (i & 1)
return dp
Input: n = 5, Output: [0, 1, 1, 2, 1, 2]
7.7 Exercise
Q.1) Given an array containing n distinct numbers in the range [0, n]. Find the one number missing from
the array. (Missing number–LeetCode 268)
Q.2) Find the sum of two integers a and b without using + or - operators.
Q.3) Given an array where every element appears three times except for one element which appears exactly
once. Find that unique element. I/P: nums = [2, 2, 6, 2], O/P: 6
Q.4) Given an integer, return True if it is a power of 4, otherwise return False.
Solution:
8 Implementing Data Structures (Python)
8.1 Array
• It is a fixed-size, contiguous memory structure that stores elements of the same type. Elements are
accessed via indexing.
• Direct, fast access to elements by index. Efficient for read-heavy scenarios.
• It can be implemented in Python through a list or a numpy array
• Python Example: list based Array
arr = [10, 20, 30]   # list implementation
print(arr[1])        # Access: O(1)
arr.append(40)       # Insert at end: O(1)
arr.remove(20)       # Delete the first occurrence by value; returns None. T(n) = O(n)
arr.pop()            # Delete by index (default: last): O(1); returns the popped item
8.2 Vector
• A vector grows dynamically. It resizes (typically doubles its capacity) when the limit is reached.
• Provides array-like access but with flexibility for growth.
• Like array, it is also implemented using list
• Python Example: list-based Vector (very similar to the array example)
vec = []
vec.append(10)
vec.append(20)
vec.pop()
print(vec) # [10]
8.3 Stack
• A stack allows operations only on the top element (Last-In-First-Out).
• All the operations (push, pop, peek) incur O(1) time.
• Useful in function calls, expression evaluation, and undo mechanisms.
• It is implemented using list in python
• Python Example: list based stack
stack = []

# Push elements (insert at top)
stack.append(10)
stack.append(20)
stack.append(30)
print("Stack:", stack)                # Output: [10, 20, 30]

# Pop element (remove from top)
top_element = stack.pop()
print("Popped:", top_element)         # Output: 30
print("Stack after pop:", stack)      # Output: [10, 20]

# Peek (top element without removing)
print("Top element:", stack[-1])      # Output: 20

# isEmpty
if not stack:
    print("Stack is empty")
else:
    print("Stack is not empty")

# Size of stack
print("Size of stack:", len(stack))   # Output: 2
8.4 Queue
• A queue adds elements at one end (rear) and removes from the other (front).
• Useful in scheduling, BFS traversal, producer-consumer problems.
• Implemented using deque
• Python Example: deque based Queue
from collections import deque

# Initialize queue
queue = deque()

# Enqueue (insert at rear)
queue.append(10)
queue.append(20)
queue.append(30)

# Dequeue (remove from front), O(1)
front = queue.popleft()
print("Removed:", front)      # Output: 10

# Peek (access front element without removing), O(1)
print("Front:", queue[0])     # Output: 20

# Check if empty
if not queue:
    print("Queue is empty")
else:
    print("Queue is not empty")

# Size of queue
print("Size:", len(queue))    # Output: 2
8.5 Deque (Double-ended Queue)
• A deque allows insertion and deletion from both ends.
• All the operations incur O(1) time
• Useful in sliding window problems, palindromic checks
• Python Example:
from collections import deque

# Initialize deque
dq = deque()

# Add/append operations
dq.append(10)       # Add item at rear/right end
dq.appendleft(5)    # Add item at front/left end
dq.append(20)

print("Deque:", dq)             # deque([5, 10, 20])

# Pop operations
dq.pop()            # Removes 20 from rear/right
dq.popleft()        # Removes 5 from front/left

print("Deque after pops:", dq)  # deque([10])

# Peek operations: dq[0] or dq[-1]
print("Front:", dq[0])   # Output: 10. Access the first/front element
print("Rear:", dq[-1])   # Output: 10. Access the last/rear element

# Check empty
print("Is empty?", not dq)

# Size: len(dq)
print("Size:", len(dq))

# reverse()
dq.append(30)
dq.append(40)
dq.reverse()        # Reverse the order of elements in place
print("Reversed deque:", dq)

# rotate(k): rotate elements to the right (k > 0) or left (k < 0)
dq.rotate(1)        # Moves elements right by 1
print("Rotated deque:", dq)

8.6 Priority Queue (Min-Heap / Max-Heap)
• A priority queue allows retrieval of the highest or lowest priority element efficiently.
• Useful in scheduling, greedy algorithms (Dijkstra, Huffman).
• Python Example (min-heap):
import heapq

heap = [5, 1, 9, 3]
heapq.heapify(heap)    # Converts the list into a min-heap, O(n)
print(heap)            # Example output: [1, 3, 9, 5]

min_element = heap[0]  # Get min value, which is at index 0; O(1)
print(min_element)     # Output: 1

min_element = heapq.heappop(heap)  # Removes and returns the smallest element,
                                   # then restores the heap property; O(log n)
print(heap)            # Heap is updated

heapq.heappush(heap, 2)  # Insert 2 while maintaining heap order, O(log n)
print(heap)              # The smallest element will again be at index 0
8.7 Heap (Binary Heap)
• A binary heap is a complete binary tree that maintains the heap property (min-heap: every parent ≤ its
children; a max-heap reverses the inequality).
• Used in Priority queues, scheduling, efficient min/max extraction.
• Basic Common operations: Insert–O(log n), Remove Min/Max – O(log n), Peek Min/Max – O(1)
• The implementation is very similar to the priority queue using heapq.
• Python Example: Using Python heapq
import heapq

# Initialize a list as a heap
heap = [5, 1, 9, 3]
heapq.heapify(heap)        # Convert list to min-heap
print("Heapified:", heap)  # Structure may vary but maintains the min-heap property

# Insert: O(log n)
heapq.heappush(heap, 2)
print("After push:", heap)

# Get min: access the smallest element, O(1)
print("Min element:", heap[0])

# Remove min: remove and return the smallest element, O(log n)
min_val = heapq.heappop(heap)
print("Removed min:", min_val)
print("After pop:", heap)

# Replace min: pop and push in one operation, O(log n)
heapq.heapreplace(heap, 4)
print("After replace:", heap)

# k smallest and largest: O(n log k)
nums = [10, 20, 5, 8, 30, 2]
print("3 smallest:", heapq.nsmallest(3, nums))  # Get the k smallest elements
print("3 largest:", heapq.nlargest(3, nums))    # Get the k largest elements
8.8 Hashing
• Hashing uses a hash function to map keys to indices, giving (average-case) constant-time lookup
• Efficient search, insertion, and deletion: insert, search, and delete all take O(1) time on average
• Implemented in Python using the dictionary type, dict
• Python Example: Using Python dict
# Initialize hash table (dictionary)
hash_map = {}

# Insert / put: store key-value pairs
hash_map['apple'] = 5
hash_map['banana'] = 10

# Update: change an existing key's value
hash_map['apple'] = 7

# Get / search: retrieve value by key
print(hash_map['apple'])        # Output: 7
print(hash_map.get('banana'))   # Output: 10
print(hash_map.get('grape'))    # Output: None (safe if key missing)

# Check existence: check if a key exists
if 'apple' in hash_map:
    print("Apple exists")

# Delete: remove a key-value pair
del hash_map['banana']

# Get keys, values, items
print(hash_map.keys())    # dict_keys(['apple']). Get all stored keys
print(hash_map.values())  # dict_values([7]). Get all stored values
print(hash_map.items())   # dict_items([('apple', 7)]). Get key-value pairs as tuples

# Size: count number of key-value pairs
print(len(hash_map))      # Output: 1

# Clear: remove all elements
hash_map.clear()
print(hash_map)           # Output: {}
8.9 Exercise
Q.1) You are given two sorted arrays nums1 and nums2. LeetCode 88
• nums1 has a length of m + n, where the first m elements are valid and the rest are empty (filled
with 0) for accommodating nums2.
• nums2 has exactly n elements.
Merge nums2 into nums1 in-place (without using any extra memory space) as a sorted array.
Q.2) Given a string containing only (), [], and {}, determine if the input string is valid (i.e., brackets are
closed in the correct order). Hint: Use a stack. (LeetCode 20)
Input: ([]()), Output: True
Q.3) Find the first Unique Character in a String and return its index. Hint: Use Hashing (LeetCode 387)
Input: "hahb", Output: 1 (index of a)
8.9.1 Solution
References
[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algo-
rithms. MIT Press, Cambridge, MA, 3rd edition, 2009.
[2] Steven Halim and Felix Halim. Competitive Programming 4: The Lower Bound of Programming Contests
in the 2020s, Book 1. CP4 Books, 1st edition, 2020. Released on July 19, 2020.