
ADS&AA

UNIT—2

Heap Trees (Priority Queues) – Min and Max Heaps, Operations and Applications.
Graphs–Terminology, Representations, Basic Search and Traversals, Connected
Components and Biconnected Components, Applications. Divide and Conquer: The
General Method, Quick Sort, Merge Sort, Strassen's Matrix Multiplication,
Convex Hull
---------------------------------------------------------------------------------------------------------

Heap Trees:

Heap trees (priority queues) are specialized tree-based data structures that satisfy the heap property.
Heaps are commonly used to implement priority queues. They are used in various applications such as
heap sort and graph algorithms.

A heap is essentially a complete binary tree and hence can be efficiently represented using an array-based
representation.

There are two main types of heap trees:

1. Max-Heap:
o In a max-heap, the key at a parent node is always greater than or equal to the keys
of its children.
o The largest key is at the root of the tree.
2. Min-Heap:
o In a min-heap, the key at a parent node is always less than or equal to the keys of
its children.
o The smallest key is at the root of the tree.

Properties of Heaps:

 Complete Binary Tree: A heap is a complete binary tree, meaning all levels are fully filled
except possibly for the last level, which is filled from left to right.

 Heap Property: Each node must follow the heap property (either max-heap or min-heap).

o Min Heap: The value of each node is greater than or equal to the value of its parent.
Thus, the smallest value is at the root node.

A[parent(i)] <= A[i]


o Max Heap: The value of each node is less than or equal to the value of its parent. Thus,
the largest value is at the root node.

A[parent(i)] >= A[i]

Operations on Heaps:

1. Insertion:
o Add the new element at the end of the heap.
o Adjust the heap to maintain the heap property (heapify up).
2. Deletion:
o Typically, the root node (the largest element in a max-heap or the smallest element in a
min-heap) is removed.
o Replace the root with the last element in the heap.
o Adjust the heap to maintain the heap property (heapify down).
3. Heapify:
o The process of adjusting the heap to maintain the heap property after insertion or
deletion.
o Can be done in two ways: heapify up (for insertion) or heapify down (for deletion).

Applications of Heaps:

 Priority Queues: Heaps are widely used to implement priority queues where you need to
efficiently access the highest or lowest priority element.
 Heap Sort: An efficient sorting algorithm with a worst-case time complexity of O(n log n).
 Graph Algorithms: Heaps are used in algorithms like Dijkstra's and Prim's for finding the
shortest path and minimum spanning tree, respectively.
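In practice, most languages ship a heap-backed priority queue. As a small illustration (my own sketch, not from these notes), C++'s std::priority_queue is a binary heap over a vector; the comparator choice decides whether it behaves as a max- or min-priority queue:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Max-priority queue: std::priority_queue is a max-heap by default.
inline int demoMaxTop() {
    std::priority_queue<int> pq;
    for (int x : {30, 10, 50, 20}) pq.push(x);
    return pq.top();                          // largest element is on top
}

// Min-priority queue: supply std::greater<int> as the comparator.
inline int demoMinTop() {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    for (int x : {30, 10, 50, 20}) pq.push(x);
    return pq.top();                          // smallest element is on top
}
```

push and pop each cost O(log n), top is O(1), matching the heap operations described above.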

Structure

 Nodes: Each node in a heap contains a value. For a min heap, the smallest value is at the root,
and for a max heap, the largest value is at the root.
 Child Nodes: Every node has at most two children.
 For a node at index i, its children are located at indices 2i + 1 (left child) and 2i + 2 (right child).
 The parent of a node at index i is located at index (i - 1) / 2.
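These index formulas can be written as tiny helper functions (an illustrative sketch with my own names, assuming 0-based array indexing as above):

```cpp
#include <cassert>

// Array-based heap index arithmetic for the node stored at index i
// (0-based indexing, matching the formulas above).
inline int leftChild(int i)  { return 2 * i + 1; }
inline int rightChild(int i) { return 2 * i + 2; }
inline int parentOf(int i)   { return (i - 1) / 2; }
```

For example, the node at index 1 has children at indices 3 and 4, and both of those children map back to parent index 1.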
Difference between Min Heap and Max Heap

1. In a Min-Heap, the key at each parent node must be less than or equal to the keys of all of its
children. In a Max-Heap, the key at each parent node must be greater than or equal to the keys of
all of its children.

2. In a Min-Heap, the minimum key element is present at the root. In a Max-Heap, the maximum
key element is present at the root.

3. A Min-Heap uses ascending priority. A Max-Heap uses descending priority.

4. In the construction of a Min-Heap, the smallest element has priority. In the construction of a
Max-Heap, the largest element has priority.

5. In a Min-Heap, the smallest element is the first to be popped from the heap. In a Max-Heap, the
largest element is the first to be popped from the heap.

The insertion operation in a heap (whether min-heap or max-heap) involves adding a new element to the
heap and then ensuring that the heap property is maintained. Below is the algorithm for inserting an
element into a min-heap, with C++ code.

Min-Heap

1. Insertion in a Min-Heap

When inserting an element into a min-heap:

1. Add the element to the end of the heap (maintaining the complete binary tree property).
2. "Heapify up" to restore the heap property by comparing the added element with its parent and
swapping them if the heap property is violated.

Algorithm

1. Insert the element at the end:
 Append the new element at the end of the heap array.

2. Restore the heap property:
 Compare the inserted element with its parent.
 If the inserted element is smaller than its parent, swap them.
 Repeat this process until the heap property is restored.

Insertion in a Min-Heap

#include <vector>
using namespace std;

class MinHeap {
public:
    vector<int> heap;

    // Function to get the index of the parent of a given node
    int parent(int i) { return (i - 1) / 2; }

    void insert(int key) {
        heap.push_back(key);                  // Add the new key at the end
        int i = heap.size() - 1;
        // Fix the min-heap property if it is violated
        while (i != 0 && heap[parent(i)] > heap[i]) {
            swap(heap[i], heap[parent(i)]);
            i = parent(i);
        }
    }
};

Explanation:

1. Insert Method:
o Adds the new element at the end of the heap.
o Fixes the heap property by comparing the inserted element with its parent and swapping
them if necessary. This is done repeatedly until the element is at the correct position.
2. Parent Method:
o Returns the index of the parent of the given node.

2. Algorithm for Deletion in a Min-Heap

1. Remove the root: Save the root value to be returned later.


2. Move the last element: Replace the root with the last element.
3. Heapify down: Restore the min-heap property by comparing the new root with its children and
swapping with the smaller child if necessary, and repeat the process until the heap property is
restored.

int deleteMin()
{
    int n = heap.size();
    if (n == 0)
        return -1;            // Heap is empty

    int root = heap[0];
    heap[0] = heap[n - 1];    // Move the last element to the root
    heap.pop_back();          // Delete the last element
    heapify(0);               // Restore the min-heap property
    return root;
}

Delete Min Function:


 Removes and returns the root (minimum element) of the heap.
 Replaces the root with the last element in the heap.
 Restores the min-heap property by heapifying down from the root.

3. Heapify Function for Min heap

 Ensures the subtree rooted at index i maintains the min-heap property.


 Swaps the current node with the smallest of its children if necessary and recursively heapifies the
affected subtree.
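The heapify-down step described above can be sketched as a free function over the array representation (an illustrative sketch of my own; the notes' MinHeap class keeps the array in a vector<int>, which is passed in directly here):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Heapify-down for a min-heap stored in a vector: restore the min-heap
// property for the subtree rooted at index i.
inline void minHeapify(std::vector<int>& heap, int i) {
    int n = heap.size();
    int left = 2 * i + 1, right = 2 * i + 2;
    int smallest = i;
    if (left < n && heap[left] < heap[smallest])   smallest = left;
    if (right < n && heap[right] < heap[smallest]) smallest = right;
    if (smallest != i) {
        std::swap(heap[i], heap[smallest]);
        minHeapify(heap, smallest);   // recursively fix the affected subtree
    }
}
```

This is exactly the routine deleteMin() relies on after moving the last element to the root.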

Max Heap

1. Heapify Algorithm for a Max Heap

1. Assume the largest value is at the root.
2. Compare the root with its children.
3. If one of the children is larger, swap the root with the largest child.
4. Recursively apply heapify to the affected subtree.

Insertion in a Max-Heap

A max-heap is a binary tree where the key at each node is greater than or equal to the keys of its
children, and the tree is a complete binary tree. In a max-heap, the largest element is at the root.

void insert(int key)
{
    heap.push_back(key);                  // Add the new key at the end
    int i = heap.size() - 1;
    // Fix the max-heap property if it is violated
    while (i != 0 && heap[parent(i)] < heap[i]) {
        swap(heap[i], heap[parent(i)]);
        i = parent(i);
    }
}

Insert Function : explanation

 Inserts a new element at the end of the heap.


 Restores the max-heap property by moving the new element up if necessary.

2. DeleteMax Function:

// Function to delete the maximum element (root) from the heap
int deleteMax()
{
    int n = heap.size();
    if (n == 0) return -1;    // Heap is empty
    int root = heap[0];
    heap[0] = heap[n - 1];
    heap.pop_back();
    heapify(0);
    return root;
}

Delete Function: explanation

 Removes and returns the root (maximum element) of the heap.


 Replaces the root with the last element in the heap.
 Restores the max-heap property by heapifying down from the root.

3 Heapify Function for Max Heap

 Ensures the subtree rooted at index i maintains the max-heap property.


 Swaps the current node with the largest of its children if necessary and recursively heapifies the
affected subtree.
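The max-heap heapify described above, which deleteMax() calls after moving the last element to the root, can be sketched symmetrically to the min-heap version (my own illustrative code, operating on the vector directly):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Heapify-down for a max-heap stored in a vector: restore the max-heap
// property for the subtree rooted at index i.
inline void maxHeapify(std::vector<int>& heap, int i) {
    int n = heap.size();
    int left = 2 * i + 1, right = 2 * i + 2;
    int largest = i;
    if (left < n && heap[left] > heap[largest])   largest = left;
    if (right < n && heap[right] > heap[largest]) largest = right;
    if (largest != i) {
        std::swap(heap[i], heap[largest]);
        maxHeapify(heap, largest);    // recursively fix the affected subtree
    }
}
```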

Example:

The tree above satisfies both the ordering property and the structural property of the max heap data
structure.
Example
Consider the above max heap. Insert a new node with value 85.

 Step 1 - Insert the newNode with value 85 as last leaf from left to right. That means newNode is
added as a right child of node with value 75. After adding max heap is as follows...

 Step 2 - Compare newNode value (85) with its Parent node value (75). That means 85 > 75
 Step 3 - Here newNode value (85) is greater than its parent value (75), then swap both of
them. After swapping, max heap is as follows...

 Step 4 - Now, again compare newNode value (85) with its parent node value (89).

Here, newNode value (85) is smaller than its parent node value (89). So, we stop insertion
process. Finally, max heap after insertion of a new node with value 85 is as follows...

Delete:
In a max heap, deleting the last node is simple, as it does not disturb the max heap properties.

Deleting the root node from a max heap is a little more difficult, as it disturbs the max heap properties.
Example
Consider the above max heap. Delete root node (90) from the max heap.

 Step 1 - Swap the root node (90) with last node 75 in max heap. After swapping max heap is as
follows...
 Step 2 - Delete last node. Here the last node is 90. After deleting node with value 90 from heap,
max heap is as follows...

 Step 3 - Compare root node (75) with its left child (89).

Here, root value (75) is smaller than its left child value (89). So, compare left child (89) with its
right sibling (70).

 Step 4 - Here, left child value (89) is larger than its right sibling (70), So, swap root
(75) with left child (89).

 Step 5 - Now, again compare 75 with its left child (36).


Here, node with value 75 is larger than its left child. So, we compare node 75 with its right
child 85.

 Step 6 - Here, node with value 75 is smaller than its right child (85). So, we swap both of them.
After swapping max heap is as follows...

 Step 7 - Now, compare node with value 75 with its left child (15).

Here, node with value 75 is larger than its left child (15) and it does not have right child. So we stop the
process.

Finally, max heap after deleting root node (90) is as follows...


Graphs:
Graph is a nonlinear data structure consisting of vertices and edges. The vertices are sometimes also
referred to as nodes and the edges are lines or arcs that connect any two nodes in the graph.
Graph is a set of vertices (V) and a set of edges (E). The graph is denoted by G(V,E)

A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C), (C,E), (E,D), (D,B),
(D,A)) is shown in the following undirected figure. A directed graph is shown in the following figure.

Terminology

1. Vertex: An individual data element of a graph is called a vertex. A vertex is also known
as a node. In the above example graph, A, B, C, D and E are known as vertices.

2. Edge: An edge is a connecting link between two vertices. An edge is also known as an arc. An edge is
represented as (starting vertex, ending vertex).

In above graph, the link between vertices A and B is represented as (A,B).

Edges are of three types:

 Undirected Edge - An undirected edge is a bidirectional edge. If there is an undirected edge


between vertices A and B then edge (A , B) is equal to edge (B , A).

 Directed Edge - A directed edge is a unidirectional edge. If there is a directed edge between
vertices A and B then edge (A , B) is not equal to edge (B , A).
 Weighted Edge - A weighted edge is an edge with cost on it.

Types of Graphs

1. Undirected Graph

A graph with only undirected edges is said to be an undirected graph.

2. Directed Graph

A graph with only directed edges is said to be a directed graph.


3. Complete Graph

A graph in which every node is adjacent to all other nodes present in the graph is known as a
complete graph. A complete undirected graph contains n(n-1)/2 edges, where n is the number of
vertices present in the graph. The following figure shows a complete graph.

4. Regular Graph

A regular graph is a graph in which every vertex has the same degree, i.e., every node has the
same number of adjacent nodes.

5. Cycle Graph

A graph containing a cycle is called a cyclic graph. In a cycle, the first and last nodes are the same;
a closed simple path is a cycle.

6. Acyclic Graph

A graph without cycles is called an acyclic graph.


7. Weighted Graph

A graph is said to be weighted if a non-negative value is assigned to each edge of the graph.
The value is often interpreted as the length or cost between the two vertices. A weighted graph
is also called a network.

Outgoing Edge

A directed edge is said to be an outgoing edge of its origin vertex.

Incoming Edge

A directed edge is said to be an incoming edge of its destination vertex.

Degree

Total number of edges connected to a vertex is said to be degree of that vertex.

Indegree

Total number of incoming edges connected to a vertex is said to be indegree of that vertex.

Outdegree

Total number of outgoing edges connected to a vertex is said to be outdegree of that vertex.

Parallel edges or Multiple edges

Two undirected edges are parallel if they have the same end vertices; two directed edges are parallel
if they have the same origin and the same destination. Such edges are called parallel edges or
multiple edges.

Self-loop

An edge (undirected or directed) is a self-loop if its two endpoints coincide.

Simple Graph

A graph is said to be simple if there are no parallel and self-loop edges.


Adjacent nodes

When there is an edge from one node to another then these nodes are called adjacent nodes.
Path
A path is a sequence of edges that connects a sequence of vertices.

If e1 and e2 are two edges between the pairs of vertices (v1, v3) and (v1, v2) respectively, then
v3 e1 v1 e2 v2 is a path.
Length of a path

The number of edges in a path is called the length of that path. In the following figure, the length of
the path is 3.


Graph Representations

Graph data structure is represented using following representations

1. Adjacency Matrix

2. Adjacency List

1. Adjacency Matrix

In this representation, a graph is represented using a matrix of size (total number of vertices) by
(total number of vertices); for example, a graph with 4 vertices can be represented using a matrix of
size 4x4.

In this matrix, rows and columns both represent vertices.

This matrix is filled with either 1 or 0. Here, 1 represents there is an edge from row vertex to column
vertex and 0 represents there is no edge from row vertex to column vertex.

Adjacency Matrix:

Let G = (V, E) be a graph with n >= 1 vertices. The adjacency matrix of G is a 2-dimensional
n x n matrix A such that A(i, j) = 1 iff the edge (vi, vj) is in E(G); otherwise A(i, j) = 0.

example : for undirected graph


For a Directed graph

The adjacency matrix for an undirected graph is symmetric; the adjacency matrix for a directed graph
need not be symmetric.
The space needed to represent a graph using an adjacency matrix is n^2 bits. To identify the edges in a
graph, adjacency matrices require at least O(n^2) time.
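The representation described above can be sketched in a few lines (an illustrative example with my own names; vertices are numbered 0..n-1 and edges are given as a list of pairs):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Build the adjacency matrix of an undirected graph with n vertices
// from an edge list. A[i][j] = 1 iff there is an edge between i and j.
inline std::vector<std::vector<int>>
adjMatrix(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> A(n, std::vector<int>(n, 0));
    for (const auto& e : edges) {
        A[e.first][e.second] = 1;
        A[e.second][e.first] = 1;   // symmetric, since the graph is undirected
    }
    return A;
}
```

Dropping the second assignment inside the loop would give the (possibly asymmetric) matrix of a directed graph.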

2. Adjacency List

In this representation, every vertex of the graph contains a list of its adjacent vertices. The n rows of
the adjacency matrix are represented as n chains. The nodes in chain i represent the vertices that are
adjacent to vertex i.

It can be represented in two forms. In one form, an array is used to store the n vertices and a chain is
used to store the adjacencies of each vertex. Example:

0 1 2
This way we can access the adjacency list for any vertex in O(1) time. Adjlist[i] is a pointer to the
first node in the adjacency list for vertex i. The structure is

#define MAX_VERTICES 50

typedef struct node *node_pointer;
typedef struct node {
    int vertex;
    struct node *link;
} node;

node_pointer graph[MAX_VERTICES];
int n = 0;    /* vertices currently in use */

Another type of representation is given below.


Example: consider the following directed graph representation implemented using a linked list. This
representation can also be implemented using an array.
Basic Search and Traversals:

Graph traversal techniques: “The process of traversing all the nodes or vertices on a graph is called
graph traversal”.
We have two traversal techniques on graphs

DFS (Depth First Search):

DFS explores each possible path to its conclusion before another path is tried. In other words, go as
far as you can; if you have no unvisited node to visit, go back and try another way. This strategy is
called "backtracking".

 Begin the search by visiting the start vertex v


o If v has an unvisited neighbour, traverse it recursively
o Otherwise, backtrack
 Time complexity
o Adjacency list: O(|E|)
o Adjacency matrix: O(|V|^2)

We begin by visiting the start vertex v. Next an unvisited vertex w adjacent to v is selected, and a
depth-first search from w is initiated. When a vertex u is reached such that all its adjacent vertices
have been visited, we back up to the last vertex visited that has an unvisited vertex w adjacent to it
and initiate a depth-first search from w.
The search terminates when no unvisited vertex can be reached from any of the visited vertices.

DFS traversal of a graph produces a spanning tree as the final result. A spanning tree is a subgraph
that connects all vertices without any loops. We use a stack data structure, with maximum size equal
to the total number of vertices in the graph, to implement DFS traversal of a graph.

We use the following steps to implement DFS traversal...

Step 1: Define a stack of size equal to the total number of vertices in the graph.

Step 2: Select any vertex as the starting point for traversal. Visit that vertex and push it onto the stack.

Step 3: Visit any one unvisited adjacent vertex of the vertex on top of the stack, and push it onto
the stack.

Step 4: Repeat step 3 until there is no new vertex to visit from the vertex on top of the stack.

Step 5: When there is no new vertex to visit, backtrack and pop one vertex from the stack.

Step 6: Repeat steps 3, 4 and 5 until the stack becomes empty.

Step 7: When the stack becomes empty, produce the final spanning tree by removing unused edges
from the graph.

This function is best described recursively as in the following program.

#define FALSE 0
#define TRUE 1
int visited[MAX_VERTICES];

void dfs(int v)
{
    node_pointer w;
    visited[v] = TRUE;
    printf("%d ", v);
    for (w = graph[v]; w; w = w->link)
        if (!visited[w->vertex])
            dfs(w->vertex);
}

When G is represented by its adjacency lists, the vertices w adjacent to v can be determined by
following a chain of links. Since DFS examines each node in the adjacency lists at most once and
there are 2e list nodes, the time to complete the search is O(e). If G is represented by its
adjacency matrix, then the time to determine all vertices adjacent to v is O(n). Since at most n
vertices are visited, the total time is O(n^2).
BFS (Breadth First Search):

It is one of the simplest algorithms for searching or visiting each vertex in a graph. In this method each
node on the same level is checked before the search proceeds to the next level. BFS makes use of a
queue to store visited vertices, expanding the path from the earliest visited vertices

In a breadth-first search, we begin by visiting the start vertex v. Next all unvisited vertices adjacent
to v are visited. Unvisited vertices adjacent to these newly visited vertices are then visited and so
on. Algorithm BFS (Program 6.2) gives the details.
typedef struct queue *queue_pointer;
typedef struct queue {
    int vertex;
    queue_pointer link;
} queue;

void addq(queue_pointer *, queue_pointer *, int);
int deleteq(queue_pointer *);

void bfs(int v)
{
    node_pointer w;
    queue_pointer front, rear;
    front = rear = NULL;
    printf("%d ", v);
    visited[v] = TRUE;
    addq(&front, &rear, v);
    while (front)
    {
        v = deleteq(&front);
        for (w = graph[v]; w; w = w->link)
            if (!visited[w->vertex])
            {
                printf("%d ", w->vertex);
                addq(&front, &rear, w->vertex);
                visited[w->vertex] = TRUE;
            }
    }
}

BFS traversal of a graph produces a spanning tree as the final result. A spanning tree is a subgraph
that connects all vertices without any loops. We use a queue data structure, with maximum size equal
to the total number of vertices in the graph, to implement BFS traversal of a graph.

We use the following steps to implement BFS traversal...


Step 1: Define a queue of size equal to the total number of vertices in the graph.
Step 2: Select any vertex as the starting point for traversal. Visit that vertex and insert it into the queue.
Step 3: Visit all the unvisited adjacent vertices of the vertex at the front of the queue and insert
them into the queue.
Step 4: When there is no new vertex to visit from the vertex at the front of the queue, delete that
vertex from the queue.
Step 5: Repeat steps 3 and 4 until the queue becomes empty.
Step 6: When the queue becomes empty, produce the final spanning tree by removing unused edges
from the graph.

Analysis of BFS:
Each visited vertex enters the queue exactly once. So the while loop is iterated at most n times If an
adjacency matrix is used the loop takes O(n) time for each vertex visited. The total time is therefore,
O(n2).
If adjacency lists are used, the loop has a total cost of d0 + … + dn-1 = O(e), where di is the degree of
vertex i.
Connected Components

A connected component or simply component of an undirected graph is a subgraph in which each pair
of nodes is connected with each other via a path.
A set of nodes forms a connected component in an undirected graph if any node from the set of
nodes can reach any other node by traversing edges. The main point here is reachability.
In connected components, all the nodes are always reachable from each other.

One Connected Component

Let’s name this graph G1(V, E). Here V ={V1,V2, V3, V4, V5,V6} denotes the vertex set and E=
{E1,E2, E3,E4,E5,E6,E7} denotes the edge set of G1. The graph G1 has one connected component,
let’s name it C1, which contains all the vertices of G1. Now let’s check whether the set C1 holds to the
definition or not.
According to the definition, the vertices in the set C1 should reach one another via a path. We’re
choosing two random vertices V1 and V6:

 V6 is reachable to V1 via: E4E7 or E3E5E7 or E1E2E6E7

 V1 is reachable to V6 via: E7E4 or E7E5E3 or E7E6E2E1

More Than One Connected Component

In this example, the undirected graph has three connected components:


Let's name this graph G2(V, E), where V = {V1, V2, V3, V4, V5, V6, V7, V8, V9, V10, V11, V12}
and E = {E1, E2, E3, E4, E5, E6, E7, E8, E9, E10, E11}. The graph G2 has 3 connected
components: C1 = {V1, V2, V3, V4, V5, V6}, C2 = {V7, V8, V9} and C3 = {V10, V11, V12}.
Now, let’s see whether connected components C1, C2, and C3 satisfy the definition or not. We’ll
randomly pick a pair from each C1, C2, and C3 set.
From the set C1, let’s pick the vertices V4 and V6.

 V6 is reachable to V4via: E2E6E7 or E1E4E7 or E1E3E5E7

 V4 is reachable to V6 via: E7E6E2 or E7E4E1 or E7E5E3E1

Now let’s pick the vertices V8 and V9 from the set C2.

 V9 is reachable to V8: E9E8


 V8 is reachable to V9: E8E9

Finally, let's pick the vertices V11 and V12 from the set C3.

 V11 is reachable to V12: E11E10


 V12 is reachable to V11: E10E11

Example:

For the above graph G: V denotes a vertex and N is the number of vertices.

ConnectedComponents(G)
{
    for each vertex V in G
        flag[V] = -1;             // mark all vertices unvisited
    count = 0;
    for (int V = 0; V < N; V++)
    {
        if (flag[V] == -1)        // V starts a new component
        {
            DFS(V, flag);
            count++;
        }
    }
    cout << count;
}

DFS(int V, int flag[])
{
    flag[V] = 1;
    cout << V;
    for each adjacent node u of V
        if (flag[u] == -1)
            DFS(u, flag);
}

Output:
0, 1, 2, 3,
4, 5
6, 7, 8
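The component-counting pseudocode above translates to a short runnable sketch (an illustration with my own function names; as in the notes, -1 marks an unvisited vertex, and the graph is given as an adjacency list):

```cpp
#include <cassert>
#include <vector>

// DFS that marks every vertex reachable from v as visited (flag = 1).
inline void ccDfs(int v, const std::vector<std::vector<int>>& adj,
                  std::vector<int>& flag) {
    flag[v] = 1;
    for (int u : adj[v])
        if (flag[u] == -1)
            ccDfs(u, adj, flag);
}

// Count connected components: start a new DFS from every vertex
// that has not been visited yet, incrementing the count each time.
inline int countComponents(const std::vector<std::vector<int>>& adj) {
    int n = adj.size(), count = 0;
    std::vector<int> flag(n, -1);             // -1 means "not yet visited"
    for (int v = 0; v < n; v++)
        if (flag[v] == -1) {
            ccDfs(v, adj, flag);
            count++;
        }
    return count;
}
```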

Biconnected Components,

Biconnected component

A connected graph is biconnected if it is connected and does not have any articulation point. We
mainly need to check two things in a graph:
1. The graph is connected.
2. There is no articulation point in the graph.
We start from any vertex and do a DFS traversal. During the DFS traversal, we check if there is any
articulation point. If we do not find any articulation point, then the graph is biconnected. Finally, we
need to check whether all vertices were reachable in the DFS or not. If all vertices were not reachable,
then the graph is not even connected.
 A graph is biconnected if it remains connected after removing any single vertex; this means
that it has no articulation points. Biconnected components are also known as blocks.
 An articulation point (or cut vertex) is a vertex whose removal (along with its incident edges)
divides the graph into multiple components.

Articulation Point

In a graph, a vertex is called an articulation point if removing it and all the edges associated with it results in
the increase of the number of connected components in the graph. For example consider the graph given in
following figure.

If in the above graph, vertex 1 and all the edges associated with it, i.e. the edges 1-0, 1-2 and 1-3 are
removed, there will be no path to reach any of the vertices 2, 3 or 4 from the vertices 0 and 5, that means
the graph will split into two separate components. One consisting of the vertices 0 and 5 and another one
consisting of the vertices 2, 3 and 4 as shown in the following figure.
Likewise removing the vertex 0 will disconnect the vertex 5 from all other vertices. Hence the given graph
has two articulation points: 0 and 1

Finding an Articulation Point

Step 1: Construct the DFS tree.

Step 2: Note the depth of each vertex.

The depths of vertices A, B, C, D, E, F are 1, 2, 6, 3, 4, 5 respectively.

Step 3: Calculate the lowest depth number (low value) reachable from each vertex.
Lowest depth number of A is 1 means the loop is A B D C A in this loop the minimum number is 1. So the
lowest number of A is 1.
Lowest depth number of B is 1 means the loop is B D C A B in this loop the minimum number is 1. So the
lowest number of B is 1.

Lowest depth number of C is 1 means the loop is C A B D C in this loop the minimum number is 1. So the
lowest number of C is 1.

Lowest depth number of D is 1 means the loop is D C A B D in this loop the minimum number is 1. So the lowest
number of D is 1.

Lowest depth number of E is 3 means the loop is E F D E in this loop the minimum number is 3. So the lowest
number of E is 3.

Lowest depth number of F is 3 means the loop is F D E F in this loop the minimum number is 3. So the lowest
number of F is 3.

Step 4: Calculate the articulation points.

For a tree edge (u, v), where u is the parent and v is the child, u is an articulation point if
low[v] >= depth[u].

(B, D): low[D] >= depth[B] gives 1 >= 2, which is false, so B is not an articulation point.

(D, E): low[E] >= depth[D] gives 3 >= 3, which is true, so D is an articulation point.

(E, F): low[F] >= depth[E] gives 3 >= 4, which is false, so E is not an articulation point.
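The depth/low computation above is usually folded into a single DFS pass. The sketch below is my own illustration, not the notes' code: vertices are numbered 0..5 and the edges are chosen to match the earlier figure where 0 and 1 are the articulation points; the root of the DFS tree is handled by the separate "two or more DFS children" rule:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <set>
#include <vector>

// One-pass DFS computing depth[] and low[]; a non-root vertex u is a cut
// vertex when some DFS-tree child v of u satisfies low[v] >= depth[u].
inline std::set<int> articulationPoints(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<int> depth(n, -1), low(n, 0);
    std::set<int> cuts;

    std::function<void(int, int, int)> dfs = [&](int u, int parent, int d) {
        depth[u] = low[u] = d;
        int children = 0;
        for (int v : adj[u]) {
            if (depth[v] == -1) {                       // tree edge
                children++;
                dfs(v, u, d + 1);
                low[u] = std::min(low[u], low[v]);
                if (parent != -1 && low[v] >= depth[u])
                    cuts.insert(u);                     // non-root cut test
            } else if (v != parent) {                   // back edge
                low[u] = std::min(low[u], depth[v]);
            }
        }
        if (parent == -1 && children > 1)
            cuts.insert(u);                             // root with 2+ children
    };

    for (int s = 0; s < n; s++)
        if (depth[s] == -1) dfs(s, -1, 0);
    return cuts;
}
```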

Applications of Graphs in Data Structures

Graphs are a useful tool for illustrating a variety of real-world issues. Some significant graph uses are
listed below:

 Social Networks
 Web Graphs
 Biological Networks
 Information Graphs
 Product Recommendations
 Neural Networks.
 Map Networks
 Blockchains
 Bitcoin Creation Graphs

Divide and Conquer Introduction

Divide and Conquer is an algorithmic pattern. In this design, we take a problem on a large input,
break the input into smaller pieces, solve the problem on each of the small pieces, and then merge
the piecewise solutions into a global solution. This mechanism of solving the problem is called
the Divide and Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

1. Divide the array into 2 halves, recursively sort the left and right halves, then merge the two
halves (merge sort).
2. Partition the array into small items and large items, then recursively sort the two sets (quick sort).

Examples:
1. Searching: binary search
2. Sorting: merge sort, quick sort
3. Tree traversals
4. Matrix multiplication: Strassen's algorithm
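Binary search, listed above as a divide-and-conquer example, halves the search range at every step, giving O(log n) time. A minimal sketch (my own illustration, not from the notes):

```cpp
#include <cassert>
#include <vector>

// Binary search over a sorted vector: returns the index of key,
// or -1 if the key is absent.
inline int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;    // divide: pick the middle element
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;  // conquer the right half
        else              hi = mid - 1;  // conquer the left half
    }
    return -1;
}
```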

Merge Sort:

Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It works by
recursively dividing the input array into smaller subarrays and sorting those subarrays then merging
them back together to obtain the sorted array.

In simple terms, we can say that the process of merge sort is to divide the array into two halves, sort
each half, and then merge the sorted halves back together. This process is repeated until the entire
array is sorted.

Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows the
divide-and-conquer approach to sort a given array of elements.
Here's a step-by-step explanation of how merge sort works:
1. Divide: Divide the list or array recursively into two halves until it can no more be divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process continues until
all elements from both subarrays have been merged.


Divide:
 [38, 27, 43, 10] is divided into [38, 27 ] and [43, 10] .
 [38, 27] is divided into [38] and [27] .
 [43, 10] is divided into [43] and [10] .
Conquer:
 [38] is already sorted.
 [27] is already sorted.
 [43] is already sorted.
 [10] is already sorted.
Merge:
 Merge [38] and [27] to get [27, 38] .
 Merge [43] and [10] to get [10,43] .
 Merge [27, 38] and [10,43] to get the final sorted list [10, 27, 38, 43]
Therefore, the sorted list is [10, 27, 38, 43] .
Complexity Analysis of Merge Sort:
 Time Complexity:
o Best Case: O(n log n), When the array is already sorted or nearly sorted.
o Average Case: O(n log n), When the array is randomly ordered.
o Worst Case: O(n log n), When the array is sorted in reverse order.

Algorithm:
MergeSort(A, lb, ub)
{
    if (lb < ub)
    {
        mid = (lb + ub) / 2;
        MergeSort(A, lb, mid);
        MergeSort(A, mid + 1, ub);
        Merge(A, lb, mid, ub);
    }
}

Divide the array into 2 halves recursively, sort each half, and merge the two halves.

Merge(A, lb, mid, ub)
{
    i = lb;
    j = mid + 1;
    k = lb;
    while (i <= mid && j <= ub)
    {
        if (A[i] <= A[j])
        {
            B[k] = A[i];
            i++; k++;
        }
        else
        {
            B[k] = A[j];
            j++; k++;
        }
    }
    if (i > mid)              // left half exhausted: copy the rest of the right half
    {
        while (j <= ub)
        {
            B[k] = A[j];
            j++; k++;
        }
    }
    else                      // right half exhausted: copy the rest of the left half
    {
        while (i <= mid)
        {
            B[k] = A[i];
            i++; k++;
        }
    }
    for (k = lb; k <= ub; k++)
        A[k] = B[k];
}

Quick sort:
Quicksort is a sorting algorithm based on the divide and conquer approach
The Quicksort algorithm takes an array of values, chooses one of the values as the
'pivot' element, and moves the other values so that lower values are on the left of the
pivot element, and higher values are on the right of it.

Let the elements of array are -

In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.

Since the pivot is at the left, the algorithm starts from the right and moves towards the left.

Now a[pivot] < a[right], so the algorithm moves one position towards the left. Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to the right.

Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm starts from the left and moves to the right.

As a[pivot] > a[left], the algorithm moves one position to the right. Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm again moves one position to the right.

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and a[left]; the pivot is now at the left.

Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now, a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one position to the left.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; the pivot is now at the right.

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm starts from the left and moves to the right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24, so the pivot, left, and right pointers all point to the same element. This terminates the procedure.

Element 24, the pivot element, is now placed at its exact position: the elements to its right are greater than it, and the elements to its left are smaller than it.

Now, in the same manner, the quicksort algorithm is applied separately to the left and right sub-arrays. After sorting is complete, the array will be [9, 14, 19, 24, 27, 29].

Quicksort Time Complexity

Case              Time Complexity
Best Case         O(n log n)
Average Case      O(n log n)
Worst Case        O(n^2)

Algorithm:
QUICKSORT (array A, start, end)
{
if (start < end)
{
p = partition(A, start, end)
QUICKSORT (A, start, p - 1)
QUICKSORT (A, p + 1, end)
}
}
The partition algorithm rearranges the sub-array in place.
PARTITION (array A, start, end)
{
    pivot ← A[end]
    i ← start - 1
    for j ← start to end - 1
    {
        if (A[j] < pivot)
        {
            i ← i + 1
            swap A[i] with A[j]
        }
    }
    swap A[i+1] with A[end]
    return i + 1
}
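The QUICKSORT and PARTITION pseudocode above (Lomuto partition, last element as pivot) can be sketched in Python. Note this pivot choice differs from the leftmost-pivot scheme used in the step-by-step trace earlier, but it produces the same sorted result; the sample array below is assumed from the values in that trace.

```python
def quicksort(a, start, end):
    """Sort a[start..end] in place by recursing on the two partitions."""
    if start < end:
        p = partition(a, start, end)
        quicksort(a, start, p - 1)
        quicksort(a, p + 1, end)

def partition(a, start, end):
    """Lomuto partition: place pivot a[end] at its final index and return it."""
    pivot = a[end]
    i = start - 1
    for j in range(start, end):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[end] = a[end], a[i + 1]   # move pivot into its final position
    return i + 1

arr = [24, 9, 29, 14, 19, 27]   # values assumed from the walkthrough above
quicksort(arr, 0, len(arr) - 1)
print(arr)  # [9, 14, 19, 24, 27, 29]
```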

Strassen's Matrix Multiplication


Strassen's Matrix Multiplication is a divide-and-conquer approach to the matrix multiplication problem. The usual matrix multiplication method multiplies each row with each column to obtain the product matrix; the time complexity of this approach is O(n^3). Three approaches are described below.

1) Basic matrix multiplication: O(N^3)

For an N*N matrix, e.g. for N = 2:
C11 = A11 X B11 + A12 X B21
C12 = A11 X B12 + A12 X B22
C21 = A21 X B11 + A22 X B21
C22 = A21 X B12 + A22 X B22
This takes 8 multiplications; since log2 8 = log2 2^3 = 3, the running time is O(N^3).
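The four block formulas above can be checked with a small sketch of the basic 2x2 product (an illustrative example; the matrices chosen here are assumptions, not from the text):

```python
def multiply_2x2(A, B):
    """Naive 2x2 matrix product: 8 scalar multiplications, 4 additions."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

print(multiply_2x2([[1, 2], [3, 4]], [[2, 2], [3, 1]]))  # [[8, 4], [18, 10]]
```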

2) Divide-and-conquer matrix multiplication: O(N^3)

Divide the matrices A and B into 4 sub-matrices, each of size N/2 * N/2.

 A, B, C are square matrices of size N*N
 A0, A1, A2, A3 are sub-matrices of A, each of size N/2 * N/2
 B0, B1, B2, B3 are sub-matrices of B, each of size N/2 * N/2
 A*B = C is calculated recursively
 Each level performs 8 sub-matrix multiplications, and each addition of two matrices takes O(N^2)
 T(N) = 8T(N/2) + O(N^2)  O(N^3)

3) Strassen's matrix multiplication: O(N^2.81)

Strassen's algorithm, developed by Volker Strassen in 1969, is a fast algorithm for matrix multiplication.
It is an efficient divide-and-conquer method that reduces the number of arithmetic operations required to
multiply two matrices compared to the conventional matrix multiplication algorithm (the naive
approach).

The traditional matrix multiplication algorithm has a time complexity of O(n^3) for multiplying two n x
n matrices. However, Strassen's algorithm improves this to O(n^log2(7)), which is approximately
O(n^2.81). The algorithm achieves this improvement by recursively breaking down the matrix
multiplication into smaller subproblems and combining the results. The Strassen algorithm
partitions A, B and C into equally sized block matrices and computes the following seven products:
P = (A11 + A22) X (B11+B22)

Q = (A21 + A22) X B11

R = A11 (B12 - B22)

S = A22 X (B21 - B11)

T = (A11 + A12) X B22

U = (A21 – A11) X (B11 + B12)

V = (A12 – A22) X (B21 + B22)

C11 = P + S – T + V

C12 = R + T

C21 = Q + S

C22 = P + R – Q + U

Example: Strassen's matrix multiplication for A = [[1, 2], [3, 4]] and B = [[2, 2], [3, 1]]


P = (1 + 4) X (2 + 1) = 5x3 = 15

Q = (3 + 4) x 2 = 7 x 2 = 14

R = 1 (2 - 1) = 1 x 1 = 1

S = 4 (3 - 2 ) = 4 x 1 = 4

T = (1 + 2) x 1 = 3 x 1 = 3

U = (3 – 1) X (2 + 2) = 2 x 4 = 8

V = (2 – 4) X (3 + 1) = -2 x 4 = -8

C11 = P + S – T + V = 15 + 4 – 3 + (-8) = 8

C12 = R + T = 1+ 3 = 4

C21 = Q + S = 14 + 4 = 18

C22 = P + R – Q + U = 15 + 1 – 14 + 8 = 10
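The seven products and the combination formulas can be checked numerically with a small sketch. Scalar entries are used for clarity; in the full algorithm each entry is an N/2 x N/2 block, and only 7 multiplications are needed instead of 8:

```python
def strassen_2x2(A, B):
    """Strassen's seven products for a 2x2 matrix (scalar entries for clarity)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return [[P + S - T + V, R + T],
            [Q + S,         P + R - Q + U]]

A, B = [[1, 2], [3, 4]], [[2, 2], [3, 1]]
print(strassen_2x2(A, B))  # [[8, 4], [18, 10]] -- matches the direct product A x B
```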

Convex Hull
The convex hull of a set of points in a Euclidean space is the smallest convex polygon that
encloses all of the points. In two dimensions (2D), the convex hull is a convex polygon,
and in three dimensions (3D), it is a convex polyhedron.
The below image shows a 2-D convex polygon:

Convex Hull using Divide and Conquer Algorithm:


1. The set of n points is divided into two subsets: L, containing the leftmost ⌈n/2⌉ points, and R, containing the remaining rightmost points.
2. The convex hulls of the subsets L and R are computed recursively.
3. The two hulls are combined.
Pre-requisite: Tangents between two convex polygons

Algorithm: Given the set of points for which we have to find the convex hull.
 Suppose we know the convex hull of the left half points and the right half points,
 then the problem now is to merge these two convex hulls and determine the convex hull for the
complete set.
 This can be done by finding the upper and lower tangent to the right and left convex hulls.
 This is illustrated by the tangents between two convex polygons: let the left convex hull be a
and the right convex hull be b. The lower and upper tangents are labelled 1 and 2
respectively, as shown in the figure.
 Then the red outline shows the final convex hull.

This divide-and-conquer approach solves the convex hull problem efficiently, building the convex hull in O(n log n) time.
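The tangent-merge algorithm described above is lengthy to code in full. As an illustrative alternative with the same O(n log n) bound, the sketch below uses Andrew's monotone-chain construction, which sorts the points once and builds the lower and upper chains separately; note this is a different technique from the tangent-based merge in the text:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: O(n log n) after sorting."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower chain left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper chain right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # chain endpoints are shared, drop duplicates

pts = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]
print(convex_hull(pts))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The interior points (1, 1) and (1, 0.5) are discarded, leaving only the four corners of the enclosing square as the hull.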
