DS Chapter6

The document provides an overview of various sorting algorithms including Bubble Sort, Selection Sort, Quick Sort, Merge Sort, Insertion Sort, Shell Sort, Radix Sort, and Heap Sort. Each algorithm is described with its methodology, complexity analysis, and C programming implementation. The document emphasizes the efficiency and application of each sorting technique in different scenarios.


Sorting and Searching

The process of rearranging the given elements so that they are in ascending order or descending order is called "sorting".
The different sorting techniques are:
a) Bubble Sort
b) Selection Sort
c) Quick Sort
d) Merge Sort
e) Heap Sort
f) Shell Sort
g) Radix sort
h) Insertion Sort
a) Bubble Sort:

This is the simplest and easiest sorting technique. In this technique the two successive items A[i] and A[i+1] are exchanged whenever A[i] > A[i+1].

Another name for bubble sort is "sinking sort", because in each pass (iteration) the larger values sink to the bottom of the array while the smaller elements gradually "bubble" their way upward to the top. Hence it is called "bubble sort".
// C function for bubble sort
void bubble_sort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)         /* n-1 passes */
    {
        for (j = 0; j < n - 1 - i; j++) /* compare adjacent pairs */
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}
Analysis:
The function consists of two for loops. The outer loop controls the number of passes; (n-1) passes are required to sort n elements, so i varies from 0 to n-2.
For example, with n = 6 the inner loop works as follows:

Pass 1 : i=0   j = 0,1,2,3,4   (up to n-2-0)
Pass 2 : i=1   j = 0,1,2,3     (up to n-2-1)
Pass 3 : i=2   j = 0,1,2       (up to n-2-2)
Pass 4 : i=3   j = 0,1         (up to n-2-3)
Pass 5 : i=4   j = 0           (up to n-2-4)

In general, j runs up to n-2-i, so the complexity is C(n) = O(n^2).
b) Selection sort :

In this method, we first determine the smallest element in the given array and exchange it with A[0].

Next, out of the remaining elements, we find the next smallest element and swap it with A[1], and so on, for (n-1) passes.

In each iteration, the unsorted part of the array diminishes and the partially sorted list grows.
// C function for selection sort
void selection_sort(int a[], int n)
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++)
    {
        min = i;                    /* assume a[i] is the smallest */
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        temp = a[i];
        a[i] = a[min];
        a[min] = temp;
    }
}

Analysis:

C(n) = O(n^2)
c)Quick sort

This sorting technique works very well on large sets of data.
Basically, quick sort works as follows:
The first step in this technique is to partition the given table into two sub-tables based on a key element, such that the elements to the left of the key element are less than the key element, and those to the right of the key element are greater than the key element.
The above technique is called the "partition" technique.
Partition technique :

We assume the first element of the given list of n elements as the key (pivot) element.

The remaining (n-1) elements will be compared with this key element to place it at an appropriate position.

This is achieved primarily by maintaining two pointers, i and j.

The initialization of i and j, the comparisons and the swapping are done by the following function.
int partition(int a[], int low, int high)
{
    int i, j, temp, key;
    key = a[low];                       /* first element is the pivot */
    i = low + 1;
    j = high;
    while (1)
    {
        while (i < high && key >= a[i]) /* move i right */
            i++;
        while (key < a[j])              /* move j left */
            j--;
        if (i < j)
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
        else
        {
            temp = a[low];              /* place the pivot at its final position */
            a[low] = a[j];
            a[j] = temp;
            return j;
        }
    }
}
// C function for quick sort
void quick_sort(int a[], int low, int high)
{
    int j;
    if (low < high)
    {
        j = partition(a, low, high);
        quick_sort(a, low, j - 1);
        quick_sort(a, j + 1, high);   /* recurse on the right sub-array */
    }
}
// Complete program for quick sort
#include <stdio.h>
/* include the above two functions here */
void main()
{
    int i, n, a[20];
    printf("enter the value of n");
    scanf("%d", &n);
    printf("enter the numbers to be sorted");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    quick_sort(a, 0, n - 1);
    printf("the sorted array is ");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}
Analysis of Quick Sort:

Running time of partition:

Let n = high - (low + 1).

We observe that during the execution of partition we always have j > i-1, because all keys below i are always smaller than or equal to the pivot and all keys above j are always larger than or equal to the pivot.

This implies that the running time of partition is O(n).
Running time of quick sort: worst case.

In the worst case the array is always partitioned into sub-arrays of sizes 1 and n-1.

T(n) = O(1)            if n = 1
T(n) = T(n-1) + O(n)   if n > 1

Expanding this recurrence gives the sum

n + (n-1) + (n-2) + ... + 1 = O(n^2)

Therefore the worst-case running time of quick sort is O(n^2).
Best Case Analysis:

The best case arises when the array is always split in the middle.
Then,
T(n) = T(n/2) + T(n/2) + O(n)
     = 2T(n/2) + d.n
     = 2(2T(n/4) + d.n/2) + d.n
     = 4T(n/4) + 2.d.n
     = 4(2T(n/8) + d.n/4) + 2.d.n
     = 8T(n/8) + 3.d.n
     = ......
     = 2^k . T(n/2^k) + k.d.n
     = n.T(1) + d.n.log2(n)      (since n = 2^k, i.e. k = log2 n)

T(n) = O(n log n)
Average case Analysis :

In the average case the list may neither be partitioned exactly into two halves (best case) nor pushed entirely to one side (worst case).

Instead, the pivot (key) element may be placed at any position in the array.

The key's final position may range from 0 to n-1, each with equal probability 1/n.

Solving the resulting recurrence, we get
T(n) = 2n ln n, which is approximately equal to 1.38 n log2 n.
Merge Sort:

The process of merging two sorted vectors into a single sorted vector is called simple merge.

The only necessary condition for this problem is that both vectors (arrays) should be sorted.
The algorithm merges two sorted arrays as follows:
Algorithm Merge ( A, B, C, m, n )
// A & B are two sorted input arrays of size m & n respectively.
// C is the output array.
i = j = k = 0;
while ( i < m and j < n )
    if ( A[i] < B[j] ) then
        C[k] = A[i];  i = i + 1;  k = k + 1;
    else
        C[k] = B[j];  j = j + 1;  k = k + 1;
end while
while ( i < m )          // copy the remainder of A
    C[k] = A[i];  i = i + 1;  k = k + 1;
end while
while ( j < n )          // copy the remainder of B
    C[k] = B[j];  j = j + 1;  k = k + 1;
end while
In the above algorithm we considered two input arrays. Suppose instead we consider a single array (vector) which is divided into two sorted parts, as shown below:

Vector A:  10 20 30 40 50 | 15 25 35 45
           low         mid  mid+1  high

Suppose the array is as follows:
65 50 25 10 35 25 75 30

The above elements can be sorted by using merge sort.

Algorithm MergeSort ( A, low, high )
{
    if ( low < high )
        mid = ( low + high ) / 2;
        MergeSort ( A, low, mid );
        MergeSort ( A, mid+1, high );
        Merge ( A, low, mid, high );
    end if
}
// C program for merge sort
#include <stdio.h>
#define MAX 50

int a[MAX], c[MAX];

void merge(int low, int mid, int high)
{
    int i = low, j = mid + 1, k = low;
    while (i <= mid && j <= high)   /* merge the two sorted halves */
    {
        if (a[i] < a[j])
            c[k++] = a[i++];
        else
            c[k++] = a[j++];
    }
    while (i <= mid)                /* copy the remaining left part */
        c[k++] = a[i++];
    while (j <= high)               /* copy the remaining right part */
        c[k++] = a[j++];
    for (i = low; i <= high; i++)
        a[i] = c[i];
}

void merge_sort(int low, int high)
{
    int mid;
    if (low < high)
    {
        mid = (low + high) / 2;
        merge_sort(low, mid);
        merge_sort(mid + 1, high);
        merge(low, mid, high);
    }
}

void main()
{
    int n, i;
    printf("enter the number of elements to sort");
    scanf("%d", &n);
    printf("enter the elements");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    merge_sort(0, n - 1);
    printf("the sorted elements are ");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}
Simple insertion sort :

The insertion sort is efficient only when the numbers to be sorted are very few.

The insertion sort inserts each element at its appropriate position.

If an array A contains n elements, we place each element of the array at its proper place in the previously sorted sub-list.

The sorting takes place in the following passes:
Pass 0 : A[0] is already sorted, because there is only one element.
Pass 1 : A[1] is inserted before or after A[0], so A[0] and A[1] are sorted.
Pass 2 : A[2] is inserted before A[0], between A[0] and A[1], or after A[1], so A[0], A[1] and A[2] are sorted.
.......
Pass n-1 : A[n-1] is inserted into its proper place in A[0] ... A[n-2], so A[0] ... A[n-1] are sorted.
// C function for insertion sort
void insertion_sort(int A[], int n)
{
    int pass, k, j;
    for (pass = 1; pass < n; pass++)
    {
        k = A[pass];                          /* element to insert */
        for (j = pass - 1; j >= 0 && k < A[j]; j--)
        {
            A[j + 1] = A[j];                  /* shift larger elements right */
        }
        A[j + 1] = k;
    }
}
Complexity C(n) = O(n^2)
Shell Sort:
This technique was invented in 1959 by D. L. Shell.
This technique is similar to simple insertion sort, but instead of comparing only adjacent elements one after the other as in insertion sort, far-apart elements at an equal distance (gap) are compared.

Ex. 45, 36, 75, 20, 5, 90, 80, 65, 30, 50, 10, 75, 85.

Let the gap between elements be 3. The sub-files generated with this gap are shown below:

sub file 1 : a[0] a[3] a[6] a[9] a[12]
sub file 2 : a[1] a[4] a[7] a[10]
sub file 3 : a[2] a[5] a[8] a[11]
// C function for shell sort
void shell_sort(int n, int a[])
{
    int i, j, gap, item;
    for (gap = n / 2; gap > 0; gap /= 2)  /* reduce the gap each round */
    {
        for (i = gap; i < n; i++)         /* insertion sort within each sub-file */
        {
            item = a[i];
            j = i - gap;
            while (j >= 0 && item < a[j])
            {
                a[j + gap] = a[j];
                j = j - gap;
            }
            a[j + gap] = item;
        }
    }
}
Radix Sort:
This is one of the oldest sorting techniques, used in card-sorting machines.
Here we assume that all the numbers to be sorted have an equal number of digits.
There are 10 pockets, numbered 0 to 9. Initially all the pockets are empty.
Scan the numbers one by one sequentially and, based on the least significant digit, insert each into the respective pocket.
While inserting, if a pocket is not empty, the new item has to be inserted at the rear end of the pocket.
The numbers are then collected pocket by pocket, from pocket 0 to pocket 9, and the process is repeated for the next digit until the most significant digit has been processed.

Ex. 57, 45, 36, 91, 28, 79, 35, 68, 89, 20, 62, 43, 84,
55, 86, 96, 78, 67.
Heap Sort:

Notion of a heap:
"A heap can be defined as a binary tree with keys assigned to its nodes, provided the following two conditions are met:
a) The tree's shape requirement:
The binary tree is essentially complete, that is, all its levels are full except possibly the last level, where only some rightmost leaves may be missing.

b) The parental dominance requirement:
The key at each node is greater than or equal to the keys at its children."
Examples (three trees, each with root 10 and children 5 and 7):
- Tree 1: 5 has children 4 and 2; 7 has child 1  ->  "Heap"
- Tree 2: 5 has child 2 only; 7 has child 1  ->  "Not a heap" (shape requirement violated)
- Tree 3: 5 has children 9 and 6; 7 has child 11  ->  "Not a heap" (parental dominance violated)
Different types of heap :
It can be divided into two types,
1. Max heap
2. Min heap.
Max heap:
A max heap is a complete binary tree with the property that the value at each node is at least as large as the values at its children.

Min heap:
A min heap is a complete binary tree with the property that the value at each node is no larger than the values at its children.
Construction of a heap :
There are two methods
a) Bottom – up heap construction.
b) Top – down heap construction.

Bottom-up heap construction:

It initializes the essentially complete binary tree with n nodes by placing the keys in the order given, and then "heapifies" the tree as follows.

Ex. 2, 9, 7, 6, 5, 8 gives the initial tree:

        2
      /   \
     9     7
    / \   /
   6   5 8
Starting with the last parental node and ending with the root, the algorithm checks whether parental dominance holds for the key k at that node. If it does not, the algorithm exchanges the node's key k with the larger key of its children and checks whether parental dominance holds for k in its new position. This process continues until the parental dominance requirement for k is satisfied.

After completing the "heapification" of the subtree rooted at the current parental node, the algorithm proceeds to do the same for the node's immediate predecessor. The algorithm stops after this is done for the tree's root.
Algorithm HeapBottomUp ( H[1..n] )
// Constructs a heap from the given array.
for i = n/2 downto 1 do
{
    k = i;  v = H[k];
    heap = false;
    while ( not heap and 2*k <= n ) do
    {
        j = 2*k;
        if j < n                  // the node has two children
        {
            if H[j] < H[j+1]
                j = j + 1;        // j points to the larger child
        }
        if v >= H[j]
            heap = true;
        else
        {
            H[k] = H[j];
            k = j;
        }
    }
    H[k] = v;
}
Heap Sort:

This was discovered by J. W. J. Williams. It is a two-stage algorithm that works as follows:

Stage 1 (Heap construction): Construct a heap for the given array.

Stage 2 (Maximum deletions): Apply the root-deletion operation n-1 times to the remaining heap.
Root-deletion operation algorithm:

Step 1 : Exchange the root's key with the last key k of the heap.

Step 2 : Decrease the heap's size by 1.

Step 3 : "Heapify" the smaller tree by sifting k down the tree exactly as in the bottom-up heap construction algorithm.
#include<stdio.h>
#include<conio.h>
#include<time.h>
#include<stdlib.h>

void heapsort(int[],int);
void heapfy(int[],int);
void adjust(int[],int);

void main()
{
int x[20],n,i;
long m;
char ch;

clrscr();
while(1)
{
printf("\nEnter the number of elements:");
scanf("%d",&n);
printf("\nEnter Elements for Sorting:");
for(i=0;i<n;i++)
scanf("%d",&x[i]);
printf("\nElements Before Sorting:");
for(i=0;i<n;i++)
printf("%d ",x[i]);
heapsort(x,n);
printf("\n\n\n\n\n\nElements After Sorting:\n\n");
for(i=0;i<n;i++)
printf("%d\t",x[i]);
printf("\nDo you want to continue...(y/n)?: ");
ch=getche();
if(ch=='n')
exit(0);
}
}
void heapsort(int x[],int n)
{
int i,temp;
heapfy(x,n);
for(i=n-1;i>0;i--)
{
temp=x[0];
x[0]=x[i];
x[i]=temp;
adjust(x,i);
}
}
void heapfy(int x[],int n)
{
int i,j,k,item;
for(k=1;k<n;k++)
{
item=x[k];
i=k;
j=(i-1)/2;
while(i>0 && item>x[j])
{
x[i]=x[j];
i=j;
j=(i-1)/2;
}
x[i]=item;
}
}
void adjust(int x[],int n)
{
int i,j,item;
j=0; i=2*j+1; item=x[j];
while(i<=n-1)
{
if(i+1<=n-1)
if(x[i]<x[i+1])
i++;
if(item<x[i])
{
x[j]=x[i];
j=i;
i=2*j+1;
}
else
break;
}
x[j]=item;
}
Searching:

The process of finding a particular item in a large amount of data is called searching.

The different searching techniques are:
1. Linear Search
2. Binary Search

Linear Search:
A linear search is a simple searching technique. In this technique we search for a given key item in the list in linear order, i.e. one element after the other.
// C program for linear search
#include <stdio.h>
#include <process.h>
void main()
{
    int i, n, key, a[20];
    printf("Enter the value of n ");
    scanf("%d", &n);
    printf("enter the values ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("enter the item to be searched ");
    scanf("%d", &key);
    for (i = 0; i < n; i++)
    {
        if (a[i] == key)
        {
            printf("Item found");
            exit(0);
        }
    }
    printf("not found");
}
Binary Search:
It is a simple searching technique which can be applied only if the items to be searched are in either ascending order or descending order.

Method:
The program can be designed as follows.

If low is the position of the first element and high the position of the last element, the position of the middle element can be obtained using the statement:

    mid = ( low + high ) / 2;

The key to be searched is compared with the middle element. This can be done using the statement:

    if ( key == a[mid] )
    {
        pos = mid;
        return;
    }

If this condition is false, the key may be in the left part of the array (positions below mid) or in the right part of the array (positions above mid).

If the key is less than a[mid], the table has to be searched from low to mid-1; otherwise it has to be searched from mid+1 to high:

    if ( key < a[mid] )
        high = mid - 1;
    else
        low = mid + 1;

Analysis of Binary Search (Recursive):
The basic operation in binary search is the comparison of the key with the elements of the array.

Best-case analysis:
When the array is divided in half and the key happens to be a[mid], no recursive calls are needed.
T(n) = O(1).

Worst-case analysis:
The worst case occurs when either the key is not found in the list or the key is found as the first or the last element.
T(n) = c + T(n/2), which solves to T(n) = O(log n).
