
A data structure is a special way of organizing and storing data in a computer so that it can be
used efficiently.
Data structures are classified into two types. They are
1. Primitive data structure
2. Non-Primitive data structure

1. Primitive data structure: Primitive data structures are the data structures which are
used directly to store data values. These are the basic data types available in programming
languages.
Ex: int, char, float, double
2. Non-primitive data structure: Non-primitive data structures are classified into two types.
They are
1. Linear data structures
2. Non-Linear data structures
Linear data structures: In linear data structures, data is arranged in sequential form or linear
form.
Ex: Array, Linked List, Stack, Queue.
Array: An array is a collection of data elements of the same type with a common name.
Linked List: A linked list is an ordered collection of finite, homogeneous data elements
called nodes, where the linear order is maintained by means of links or pointers.
Stacks: A stack is a collection of homogeneous data elements where insertion and
deletion take place at one end.
Queues: A queue is an ordered collection of homogeneous data elements where insertion and
deletion take place at two different ends.
Non-Linear Data structures:
Non-linear data structures are the data structures in which data may be arranged in a
non-sequential manner.
Ex: Trees, Graphs, Tables
Trees: A tree is a non-linear data structure which can be represented in a hierarchical manner.
A tree contains nodes.
Graphs: A graph G is an ordered set (V, E), where V represents the set of nodes or vertices and
E represents the set of edges.
Table: A table is a non-linear data structure which is represented in tabular form and
contains rows and columns.
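As a small illustration (a sketch, not part of the original notes; the names a and node are arbitrary), an array and a singly linked list node can be declared in C as follows:

int a[10];          /* array: ten elements of the same type under the common name a            */

struct node         /* linked list node: a data element plus a link (pointer) to the next node */
{
    int data;
    struct node *next;
};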

Data type: A data type specifies what kind of values a variable may hold in a
program.
The primitive data types are the ones mainly used in programs. The primitive data types are also
called fundamental data types or primary data types.
1. Integer type 2. Floating point 3. String 4. Boolean
1. Integer type: This data type represents integer numbers, that is, numbers without any
fractional part or decimal point. Integer types are divided into byte, short, int and long.
Example: 45, -89

   Type     Size
   byte     1 byte
   short    2 bytes
   int      4 bytes
   long     8 bytes

2. Floating point type: This data type holds numbers with decimal points. Floating point
types are classified into float and double. Float occupies 4 bytes and double occupies 8 bytes.
Example: 45.5, 89.6, -56.8
3. String: A sequence of characters is called a string. Strings are always represented
within double quotation marks.
Example: “BADVEL”, “sbvr”
4. Boolean: The Boolean type is used when we want to test a condition. It takes two values,
true or false.
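A small C sketch of these categories is shown below (an illustration, not part of the original notes; C itself has no built-in byte, string or boolean type, so char, a char array and the bool type from <stdbool.h> are used here, and the exact sizes are implementation-dependent):

#include <stdbool.h>

short  s = -89;              /* integer types: short, int, long                         */
int    i = 45;
long   l = 100000L;
float  f = 45.5f;            /* floating point types: float (4 bytes), double (8 bytes) */
double d = 89.6;
char   name[] = "BADVEL";    /* a string: a sequence of characters in double quotes     */
bool   flag = true;          /* boolean: takes the value true or false                  */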

ADT:
Abstract data type: When an application requires a special kind of data which is not
available as a built-in data type, then it is the programmer’s responsibility to implement his
own kind of data. The abstract data type is also alternatively termed a user-defined data type.
or
An Abstract Data Type (ADT) is a triple of D – a set of domains, F – a set of functions and
A – a set of axioms, in which only what is to be done is specified, but how it is to be done is not.
In an ADT, all the implementation details are hidden. In short,
ADT = Type + Function names + Behaviour of each function
Domain (D): This is the range of values that the data may have. This domain is also termed
the data object.
Functions (F): This is the set of operations which may be legally applied to elements of the data
object.
Axioms (A): This is the set of rules with which the different operations belonging to F can
actually be implemented.
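For example (an illustrative sketch, not taken from these notes; the names Stack, create, push, pop and isEmpty are hypothetical), a stack ADT can be specified in C by exposing only the operation names and their behaviour while hiding the implementation details:

/* stack_adt.h -- specification only: WHAT each operation does, not HOW it is done */
typedef struct stack Stack;       /* the representation (array or linked list) is hidden */

Stack *create(void);              /* returns a new, empty stack              (D) */
void   push(Stack *s, int x);     /* adds x at the top of the stack          (F) */
int    pop(Stack *s);             /* removes and returns the top element     (F) */
int    isEmpty(Stack *s);         /* returns 1 if the stack has no elements  (F) */
/* Axioms (A) such as pop(push(s, x)) = x describe how these functions behave.   */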

SEARCHING
Searching is the process of finding a value in a list of values. The different searching
techniques are given below:
1. Binary search
2. Linear search or sequential search

SEQUENTIAL SEARCH OR LINEAR SEARCH (Array Method):


In sequential search, we check each element of an array one by one, sequentially, and see
whether the desired element is found or not. The search is successful if the desired element is
located, and its position is returned. The search is unsuccessful if the desired element is not located.
Algorithm: Linear_Search
Steps:
1. Start
2. for (i = 0; i < len; i++)
3.     if (a[i] == key) then
4.         print “found, search successful”
5.         break
6.     end if
7. end for
8. if (i == len) then
9.     print “search unsuccessful”
10. end if
11. Stop
Example:
Consider the array
A = {21, 24, 45, 46, 49, 54, 59}
key = 46
Find the key in the given array.
Here i = 0
A[0] == key => 21 ≠ 46 (not equal)
Next i = i + 1 => i = 0 + 1 = 1
A[1] == key => 24 ≠ 46 (not equal)
Next i = i + 1 => i = 1 + 1 = 2
A[2] == key => 45 ≠ 46 (not equal)
Next i = i + 1 => i = 2 + 1 = 3
A[3] == key => 46 = 46 (success)
Element found at location 3.

Efficiency of sequential searching:


The time taken or number of comparisons made in searching a record in a search table
determines the efficiency of the technique.
If the desired record is present in the first position of the search table, then only one
comparison is made. If the desired record is the last one, then n comparisons have to be made.
If the record is present somewhere in the search table then, on an average, the number of
comparisons will be (n+1)/2. The worst-case efficiency of this technique is O(n), where O
stands for the order of execution.
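The average figure of (n+1)/2 can be verified with a short calculation (standard reasoning, assuming the key is equally likely to be at any of the n positions): finding the key at position i takes i comparisons, so
Average comparisons = (1 + 2 + ... + n) / n = [n(n+1)/2] / n = (n+1)/2.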
Program
#include<stdio.h>
main()
{
int a[10],i,found=0,key,num;
printf("\n Enter the number of elements ");
scanf("%d",&num);
for(i=0;i<num;i++)
{
scanf("%d",&a[i]);
}
printf("\n Enter the element to be searched ");
scanf("%d",&key);
for(i=0;i<num;i++)
{
if(key==a[i])
{
printf(" The key is found in the list at position is %d",i+1);
found=1;
break;
}
}
if(!found)
printf("unsuccessfull");
}

BINARY SEARCH:
Binary search works on sorted arrays. Binary search begins by comparing the middle element
of the array with the target value. If the target value matches the middle element, its position
in the array is returned. If the target value is less than the middle element, the search
continues in the lower half of the array. If the target value is greater than the middle element,
the search continues in the upper half of the array. By doing this, the algorithm eliminates the
half in which the target value cannot lie in each iteration.
Algorithm: Binary_search
Steps:
1. Start
2. low=0, high=n-1
3. flag=false
4. while (low<=high) and (flag=false) do
5.     mid=(low+high)/2
6.     if (key=a[mid]) then
7.         print “element found”
8.         flag=true
9.     else if (key<a[mid]) then
10.        high=mid-1
11.    else
12.        low=mid+1
13.    end if
14. end while
15. if (flag=false) then
16.    print “element not found”
17. end if
18. Stop
Example:
Consider a sorted array.
A={21,24,45,46,49,54,59}.
key=24
Find the key from the given array.
Step 1: low = 0 and high = 6
        mid = (low + high)/2 = (0 + 6)/2 = 3
        A[3] = 46, which is not the key; key < A[3], so high = mid - 1 = 2

Step 2: low = 0, high = 2
        mid = (low + high)/2 = (0 + 2)/2 = 1
        A[1] = 24 = key
Element found at location A[1]. Search is successful.

Time Complexity

Case            Time Complexity
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)
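The logarithmic bound follows because each comparison halves the remaining portion of the array (standard reasoning, not spelled out above): after k comparisons about n/2^k elements remain, and the search stops when n/2^k = 1, that is, after roughly k = log2 n comparisons.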

Program:
#include<stdio.h>
main()
{
int list[10],i,num,mid,low,high,key,found=0;
printf("\n Enter the number of elements you want ");
scanf("%d",&num);
for(i=0;i<num;i++)
{
scanf("%d",&list[i]);
}
printf("\n Enter the element to be searched ");
scanf("%d",&key);
low=0;high=num-1;
while(low<=high)
{
mid=(low+high)/2;
if(key==list[mid])
{
printf("\n Element is found");
found=1;
break;
}
if(key<list[mid])
high=mid-1;
else
if(key>=list[mid])
low=mid+1;
}
if(found==0)
printf(" \n Element not found ");
}
Difference between sequential search and binary search

Sequential search                                     Binary search
Elements need not be in order.                        Elements must be in sorted order.
Suitable for a small number of elements,              Efficient for a large number of elements.
not for a large number.
Search starts from the first element and              Search is not sequential; the search range
continues sequentially through all the elements.      is halved at each step.
Can be implemented with arrays and linked lists.      Not suitable for implementation with linked lists.

SORTING:
Sorting is the process of arranging a list of items, elements or data in a proper order, either
ascending or descending. The data may be of any type, such as numerical, alphabetical or
alphanumeric.
TYPES OF SORTING:
Internal sorting: When a set of data to be sorted is small enough such that the entire sorting
can be performed in a computer’s internal storage (primary storage) then the sorting is called
internal sorting.
External sorting: Sorting of a large set of data, which is stored in the computer’s low-speed
external memory (such as a hard disk, magnetic tape, etc.), is called external sorting.
BUBBLE SORT:
Bubble sort is the most popular sorting technique because it is very simple to understand and
implement. This algorithm is not suitable for large data sets, as its average and worst-case
complexity are O(n²), where n is the number of items.
The bubble sort works as follows:
If N elements are given in memory, then for sorting we do the following steps:
1. First compare the 1st and 2nd elements of the array. If 1st < 2nd, then do not interchange
them; otherwise interchange the 1st and 2nd elements.
2. Now compare the 2nd and 3rd elements. If 2nd > 3rd, then interchange the values of the 2nd
and 3rd elements; otherwise do not interchange.
3. Similarly compare until the (N-1)th element is compared with the Nth element.
4. At the end of the pass, the largest value has reached the Nth place.
5. The elements are compared in this way for N-1 passes.
In every pass, one more element reaches its final position at the end. So, if the number of
elements is N, then it takes N-1 passes to sort the N elements.

Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
bubble sort is O(n²).

ALGORITHM:
Step 1: for i=0 to N-2 do
Step 2: for j=0 to N-i-2 do
Step 3: if (a[j] > a[j+1]) then
Step 4: temp = a[j]
Step 5: a[j] = a[j+1]
Step 6: a[j+1] = temp
Step 7: end if
Step 8: end for
Step 9: end for
Step 10: stop
Example: int a[ ]={28, 20, 30, 15, 05}
Pass 1: 28 20 30 15 05    28 > 20, so interchange
        20 28 30 15 05    28 < 30, so no interchange
        20 28 30 15 05    30 > 15, so interchange
        20 28 15 30 05    30 > 05, so interchange
        20 28 15 05 30
In the first pass, the largest element of the array, 30, is bubbled up to the last position in the array.
Pass 2: 20 28 15 05 30    20 < 28, so no interchange
        20 28 15 05 30    28 > 15, so interchange
        20 15 28 05 30    28 > 05, so interchange
        20 15 05 28 30
In the second pass, the second largest element, 28, is bubbled up to the position just before 30.
Pass 3: 20 15 05 28 30    20 > 15, so interchange
        15 20 05 28 30    20 > 05, so interchange
        15 05 20 28 30
In the third pass, the third largest element, 20, is bubbled up to its correct position.
Pass 4: 15 05 20 28 30    15 > 05, so interchange
        05 15 20 28 30
After the fourth pass, all the elements are arranged in sorted order.
Program:
#include<stdio.h>
void main()
{
int a[6]={34,66,2,5,31,7};
int temp,i,j;
for(i=0;i<=5;i++)
{
for(j=i+1;j<=5;j++)
{
if(a[i]>a[j])
{
int temp;
temp=a[i];
a[i]=a[j];
a[j]=temp;
}
}
}
printf("\n After sorting \n");
for(i=0;i<=5;i++)
{
printf("%d\t",a[i]);

}
}

SELECTION SORT:
As the name suggests, selection sort works by selecting an element and placing it in its proper
position. In selection sort, we first search for the smallest element in the array and
interchange it with the first element. Then we search for the second smallest element and
interchange it with the second element, and this process continues until all the elements are in place.
This algorithm is not efficient for large arrays.
ALGORITHM:
Step 1: for i=0 to N-2 do
Step 2: small = i
Step 3: for j=i+1 to N-1 do
Step 4: if (a[j] < a[small]) then
Step 5: small = j
Step 6: end if
Step 7: end for
Step 8: temp = a[i]
Step 9: a[i] = a[small]
Step 10: a[small] = temp
Step 11: end for
Step 12: stop
The whole process of selection sort is as below:
Pass 1: search the smallest element from a[0] …..a[n-1]
Interchange a[0] with smallest element
Result: a[0] is sorted
Pass 2: search the smallest element from a[1] …..a[n-1]
Interchange a[1] with smallest element
Result: a[1] is sorted
………………………………….
……………………………………
Pass n-1: search the smallest element from a[n-2] …..a[n-1]
Interchange a[n-2] with smallest element
Result: a[0],a[1],……..a[n-1] are sorted

Example: int a[] = {38, 47, 24, 42, 17}

Pass 1: 38 47 24 42 17
        Exchange 38 with 17, since 17 is the smallest element.
Pass 2: 17 47 24 42 38
        Exchange 47 with 24, since 24 is the second smallest element.
Pass 3: 17 24 47 42 38
        Exchange 47 with 38, since 38 is the third smallest element.
Pass 4: 17 24 38 42 47
        No exchange is required, since 42 is the fourth smallest element.
Sorted array: 17 24 38 42 47
Time Complexity:
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of selection sort is O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of selection sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
selection sort is O(n²).
Program:
#include<stdio.h>
void main()
{
int a[6]={34,66,2,5,31,7};
int temp,i,j,small;
for(i=0;i<=5;i++)
{
small=i;
for(j=i+1;j<=5;j++)
{
if(a[i]>a[j])
{
small=j;
int temp;
temp=a[small];
a[small]=a[i];
a[i]=temp;
}
}
}
printf("\n After sorting \n");
for(i=0;i<=5;i++)
{
printf("%d\t",a[i]);

}
}

INSERTION SORT:
The insertion sort is efficient only when the number of elements to be sorted is small. The insertion
sort inserts each element into its appropriate position. This is similar to playing cards, where we
insert each card into its proper position.
The process of inserting each element in its proper place is shown below with an example:
Example: int a[] = {38, 47, 24, 32, 89}

Initial array:   38 47 24 32 89
                 a[0] a[1] a[2] a[3] a[4]
Pass 0: Pick 38; it stays at array position 0, so a[0] is sorted.
                 38 47 24 32 89
Pass 1: The 1st element, 47, is compared with the 0th element, 38. Since 47 is greater than 38,
nothing is done. Now the first two elements are in sorted order.
                 38 47 24 32 89
Pass 2: The 2nd element, 24, is compared with the 0th element. Since 24 < 38, 24 is inserted at the
0th position and the elements from the 0th to the 1st position are shifted right by one position.
Now the first three elements are in sorted order.
                 24 38 47 32 89
Pass 3: The 3rd element, 32, is compared with the 0th element. Since 32 > 24, nothing is done. Then
32 is compared with the 1st element. Since 32 < 38, 32 is inserted at the 1st position and the
elements from the 1st to the 2nd position are shifted right by one position. Now the first four
elements are in sorted order.
                 24 32 38 47 89
Pass 4: The 4th element, 89, is compared with the 0th, 1st, 2nd and 3rd elements. Since 89 is greater
than all of them, nothing is done. Now all five elements are in sorted order.
                 24 32 38 47 89
ALGORITHM:
Steps:
1. for i=1 to n-1 do
2. temp = a[i]
3. j=i
4. while (j > 0 and a[j-1] > temp) do
5. a[j] = a[j-1]
6. j = j-1
7. end while
8. a[j] = temp
9. end for
10. stop
Time complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
insertion sort is O(n²).
Program
#include<stdio.h>
void main()
{
int a[6]={34,66,2,5,31,7};
int temp,i,j;
for(i=1;i<=6;i++)
{
temp=a[i];
j=i-1;
while(j>=0 &&temp<=a[j] )
{
a[j+1] = a[j];
j = j-1;
}
a[j+1]=temp;
}
printf("\n After sorting \n");
for(i=0;i<=5;i++)
{
printf("%d\t",a[i]);

}
}

MERGE SORT
Merge sort is one of the most efficient sorting algorithms. It works on the principle of divide
and conquer. Merge sort repeatedly breaks a list down into several sublists until each sublist
consists of a single element, and then merges those sublists in a manner that results in a sorted
list. The algorithm and a complete program are given below.

Algorithm: merge_sort(a, start1, end)


Steps:
1. if (start1 < end) then
2. mid = (start1 + end)/2
3. merge_sort(a,start1,mid)
4. merge_sort(a,mid+1,end)
5. merge(a,start1,mid,end)

Algorithm: merge(a, start1, mid, end)


Steps:
1. start
2. p=start1, q=mid+1
3. create a temporary array arr of size end-start1+1, k=0
4. for (i=start1; i<=end; i++)
5.     if (p > mid)
6.         arr[k++]=a[q++]
7.     else if (q > end)
8.         arr[k++]=a[p++]
9.     else if (a[p] < a[q])
10.        arr[k++]=a[p++]
11.    else
12.        arr[k++]=a[q++]
13. end for
14. for (p=0; p<k; p++)
15.     a[start1+p]=arr[p]
16. stop
#include <stdio.h>
void merge(int a[], int beg, int mid, int end)
{
int i, j, k;
int n1 = mid - beg + 1;
int n2 = end - mid;

int LeftArray[n1], RightArray[n2]; //temporary arrays

/* copy data to temp arrays */


for (int i = 0; i < n1; i++)
LeftArray[i] = a[beg + i];
for (int j = 0; j < n2; j++)
RightArray[j] = a[mid + 1 + j];

i = 0; /* initial index of first sub-array */


j = 0; /* initial index of second sub-array */
k = beg; /* initial index of merged sub-array */

while (i < n1 && j < n2)


{
if(LeftArray[i] <= RightArray[j])
{
a[k] = LeftArray[i];
i++;
}
else
{
a[k] = RightArray[j];
j++;
}
k++;
}
while (i<n1)
{
a[k] = LeftArray[i];
i++;
k++;
}

while (j<n2)
{
a[k] = RightArray[j];
j++;
k++;
}
}

void mergeSort(int a[], int beg, int end)
{
if (beg < end)
{
int mid = (beg + end) / 2;
mergeSort(a, beg, mid);
mergeSort(a, mid + 1, end);
merge(a, beg, mid, end);
}
}

/* Function to print the array */


void printArray(int a[], int n)
{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);
printf("\n");
}

int main()
{
int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
int n = sizeof(a) / sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArray(a, n);
mergeSort(a, 0, n - 1);
printf("After sorting array elements are - \n");
printArray(a, n);
return 0;
}

QUICK SORT:
The quick sort was invented by Prof. C.A.R. Hoare in the early 1960s. It is considered to be a
fast method to sort the elements. The method is also called partition exchange sorting. The
method is based on the divide-and-conquer technique, i.e. the entire list is divided into various
partitions and sorting is applied again and again on the partitions.
In this method, the list is divided into two parts based on an element called the pivot
element. Usually the first or the last element is chosen as the pivot; in the algorithm and
example below, the last element is taken as the pivot. The pivot element is moved into its
correct position in the list, so that the elements to the left of the pivot are less than the pivot
while the elements to the right of the pivot are greater than the pivot. This process is
reapplied to each of these partitions and proceeds till we get the sorted list of elements.
Example:
List of elements are 10, 80, 30, 90, 40, 50, 70
Pivot element is 70.
i=-1
j=0
pass 1:
10<=70 condition is true.
i=i+1=>-1+1=0
Swap(10,10)
j=j++=> 0+1 =1
Pass 2:
80<=70 condition is false. No action is taken.
j=j++=> 1+1 =2
Pass 3:
30<=70 condition is true.
I=i+1=>0+1=1
Swap(30,80)
j=j+1=>2+1=3
Pass 4:
90<=70 condition is false. No action is taken
j=j+1=>3+1=4
Pass 5:
40<=70 condition is true.
i=i+1=>1+1=2
swap(80,40)
j=j+1=>4+1=5
Pass 6:
50<=70 condition is true.
i=i+1=>2+1=3
swap(90,50)
j=j+1 => 5+1 = 6
The for loop condition becomes false.
swap(a[i+1], pivot), i.e. swap(80,70)
Now 70 is brought to its appropriate position by
the partition function.
We now quick sort the left part. Since quick sort is a recursive function, the partition
function is called on it again until the left part is sorted.
We then quick sort the right part in the same way, calling the partition function again until
the right part is sorted.
After sorting, the sorted array is 10, 30, 40, 50, 70, 80, 90.
Algorithm:quick_sort(a, left, right)
Steps:
1. if(left<right) then
2. loc=partition(a,left,right)
3. quick_sort(a,left,loc-1)
4. quick_sort(a,loc+1,right)
5. end if
Algorithm:partition(a, left, right)
Steps:
1. x=a[right]
2. i=left-1
3. for(j=left to right-1) do
4. if(a[j]<=x)
5. i=i+1
6. Exchange( a[i] with a[ j])
7. End if
8. End for
9. Exchange (a[i+1] with a[right])
10. Return i+1
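
A C program for quick sort is not given in these notes; the following is a minimal sketch that follows the partition algorithm above (the function names quick_sort and partition match the pseudocode, and the test array is the one used in the example):

#include <stdio.h>

/* partition using the last element as the pivot, as in the algorithm above */
int partition(int a[], int left, int right)
{
    int x = a[right];                 /* pivot */
    int i = left - 1, j, temp;
    for (j = left; j <= right - 1; j++)
    {
        if (a[j] <= x)
        {
            i = i + 1;
            temp = a[i]; a[i] = a[j]; a[j] = temp;   /* exchange a[i] with a[j] */
        }
    }
    temp = a[i + 1]; a[i + 1] = a[right]; a[right] = temp;   /* place the pivot */
    return i + 1;
}

void quick_sort(int a[], int left, int right)
{
    if (left < right)
    {
        int loc = partition(a, left, right);
        quick_sort(a, left, loc - 1);
        quick_sort(a, loc + 1, right);
    }
}

int main()
{
    int a[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(a) / sizeof(a[0]), i;
    quick_sort(a, 0, n - 1);
    printf("After sorting \n");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    return 0;
}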

Space complexity:
The space complexity can be defined as the amount of memory required by an algorithm to run.
To calculate the space complexity we use two factors: constants and instance characteristics.
The space requirement S(p) can be given as
S(p) = C + Sp
where C is the fixed part; it denotes the space taken by the instructions, simple variables and
identifiers, and it does not depend on the particular inputs and outputs. Sp is the space dependent
upon instance characteristics. This is the variable part, whose space requirement depends on the
particular problem instance.
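For example (an illustrative case, not worked out in the notes), consider an algorithm that sums the n elements of an array: the variables holding the sum, the loop counter and n need a constant amount of space (the fixed part C), while the array itself needs space proportional to n, so S(p) = C + n.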

Time complexity:
The time complexity of an algorithm is the amount of computer time required by the
algorithm to run to completion.
It is difficult to compute the time complexity in terms of physically clocked time. For
instance, in a multi-user system, the execution time depends on many factors such as:
1. System load
2. Number of other programs running
3. Instruction set used
4. Speed of underlying hardware
The time complexity is therefore given in terms of the frequency count.
The frequency count is a count denoting the number of times a statement is executed.
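As an illustration (not part of the original notes; the function sum is a hypothetical example), the frequency counts for a simple function that adds up the n elements of an array can be tallied statement by statement:

int sum(int a[], int n)
{
    int s = 0;                /* executed 1 time                                    */
    int i;
    for (i = 0; i < n; i++)   /* i=0 once; i<n tested n+1 times; i++ done n times   */
        s = s + a[i];         /* executed n times                                   */
    return s;                 /* executed 1 time                                    */
}

Total frequency count = 1 + 1 + (n+1) + n + n + 1 = 3n + 4, so the time complexity of this function is O(n).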
