Introduction
Problem:
A problem is something that has to be solved: an unpleasant or undesirable
condition that needs to be corrected.
Data Structures concepts
1. Data:
Data is a set of symbols and characters whose meaning becomes clear only when
connected with a context. It is unprocessed information, also called raw data.
Data is therefore a collection of text, numbers, and symbols with no meaning on its
own; it has to be processed, or given a context, before it can have meaning.
Example: sjadhjsabjhr&%$#2313rfn
2. Information:
Information is data that has been "cleaned" of errors and further processed in a way
that makes it easier to measure, visualize, and analyze for a specific purpose.
When context is added to data, it becomes information.
Example: 16/08/2015
3. Knowledge:
When we not only view information as a description of collected facts but also
understand how to apply it to achieve our goals, we turn it into knowledge.
Example: 16/08/2015 is the birth date of Ajay.
Other examples of data include word files, text files, video files, and audio files.
In programming languages, data is represented by types such as int, char, float,
double, and long.
4. Data Objects:
A data object is a region of storage that contains a value or a group of values; it is
a container that can store one or more elements of data.
Examples: a single variable and an array are both data objects.

    int a = 10;

    Index: 0   1   2   3   4
    Value: 5  10   7  53  35
5. Data Structures:
A data structure is a data organization, management, and storage format that enables
efficient access and modification. More precisely, it is a collection of data values,
the relationships among them, and the functions or operations that can be applied to
the data.
Examples: arrays, trees, graphs, etc.
(Figure: [A] an array with values 5, 10, 7, 53, 35 at indices 0-4; [B] a tree with
root 1, children 2 and 3, and leaves 4 and 5; [C] a graph.)
6. Abstract Data Types (ADT):
An ADT is a data type that is not provided built in by the programming language but
is created by the user for specific programming requirements.
An abstract data type is defined by its behavior from the point of view of a user of
the data, specifically in terms of its possible values and the possible operations on
that data.
Examples: array [A], tree [B], graph [C], linked list [D], etc.
(Figure: [A] an array with values 5, 10, 7, 53, 35; [B] a tree with nodes 1-5; [C] a
graph; [D] a linked list 5 -> 7 -> 9.)
Types of Data Structures
Based on the organization and arrangement of data elements, and on their different
characteristics, data structures are classified into the following types:
1. Primitive & Non-Primitive
2. Linear & Non-Linear
3. Static & Dynamic
4. Persistent & Ephemeral
1. Primitive & Non-Primitive:
Primitive Data Structures: the basic data types provided by a programming language as
building blocks; these are built into the language.
Examples: int, char, float, double, etc.
Non-Primitive Data Structures: data types derived from the primitive data types; these
are created by the user for specific requirements.
Examples: arrays, trees, graphs, linked lists, etc.
    int a[10];
2. Linear & Non-Linear:
Linear Data Structures: data structures in which data items are arranged in a linear
(sequential) order, each element having at most one predecessor and one successor.
Examples: arrays, linked lists, stacks, queues, etc.

    Index: 0   1   2   3   4
    Value: 5  10   7  53  35

Non-Linear Data Structures: data structures in which data items are not arranged
sequentially; an element may be connected to several others.
Examples: trees, graphs.
(Figure: a tree with root 1, children 2 and 3, and leaves 4 and 5.)
3. Static & Dynamic Data Structures:
Static Data Structures: the size of the structure is fixed. The content of the data
structure can be modified, but without changing the memory space allocated to it.
Examples: arrays, and array-based stacks and queues.
    int a[5];

    Index: 0   1   2   3   4
    Value: 5  10   7  53  35

Dynamic Data Structures: the size of the structure is not fixed and can be modified
during the operations performed on it; memory is allocated at run time.
Examples: linked lists, trees, graphs.
(Figure: a tree with root 1, children 2 and 3, and leaves 4 and 5.)
4. Persistent & Ephemeral Data Structures:
Persistent Data Structures: data structures that always preserve their previous
versions when modified; such data structures are effectively immutable.
Example: a persistent stack, where each operation yields a new version and the old
versions remain available.

    Push 10    Push 20    Push 30    Pop
                            30
                20          20         20
      10        10          10         10
Ephemeral Data Structures: data structures that do not preserve their previous
versions when modified; earlier states cannot be retrieved.
Example: a queue.

    Insert 10:  10
    Insert 20:  10 20
    Insert 30:  10 20 30
    Remove:     20 30      (10 is gone; the previous state is lost)
Introduction to algorithms
1. Algorithms:
An algorithm is a finite sequence of well-defined, computer-implementable
instructions, typically used to solve a class of problems or to perform a
computation. In short, it is a step-by-step solution to a problem.
Examples: Algorithm to compute addition of two numbers.
1. Start.
2. Declare three numbers a, b & c
3. Accept Number one a.
4. Accept Number two b.
5. Add both the numbers and store result in c .
6. Print the result c.
7. End
2. Algorithm Design Tools:
a. Flow Charts: A flowchart is a type of diagram that represents a workflow or
process. It can also be defined as a diagrammatic representation of an algorithm: a
step-by-step approach to solving a task. The flowchart shows the steps as boxes of
various kinds, and their order by connecting the boxes with arrows.
Flowchart components:

    Sr. No.  Component Name        Purpose
    1        Oval                  Indicates a beginning or an end
    2        Rectangle             A step in the process
    3        Arrow                 Indicates directional flow
    4        Diamond               Indicates a decision
    5        Input/Output symbol   Shows data coming into and out of the process
    6        Document symbol       Shows additional points of reference involved
                                   in the flowchart
    7        Connector symbol      Connects flowcharts that span multiple pages
    8        Data symbol           Clarifies where the data the flowchart
                                   references is stored
    9        Predefined process    Represents a predefined (named) subprocess
Flowchart example: "Lamp doesn't work"

    Lamp doesn't work
           |
    Is the lamp plugged in? --No--> Plug in the lamp.
           | Yes
    Is the bulb burned out? --Yes--> Replace the bulb.
           | No
    Repair the lamp.
b. Pseudo Code: an informal way of writing a program. It is an algorithm written as a
combination of English statements and programming-like statements, and is a first
step toward writing code for the algorithm.
Analysis of algorithms
1. Time Complexity: the computational complexity that describes the amount of time it
takes to run an algorithm. It can be estimated by counting how often the basic
operation of the algorithm executes.
T(n) = O(b(n))
where:
T(n) : the time complexity of the algorithm,
O    : Big Oh notation, used to represent an upper bound on the time,
b(n) : the number of times the basic operation executes for input size n.
Example: consider linear search.

    for (c = 0; c < n; c++)
    {
        if (array[c] == search)     /* basic operation of the algorithm */
        {
            printf("%d is present at location %d.\n", search, c + 1);
            break;
        }
    }

The basic operation may execute n times while searching for a key element, so the
complexity of the algorithm is O(n).
2. Space Complexity: quantifies the amount of space or memory taken by an algorithm
to run, as a function of the length of the input. It can be measured as
Space Complexity = Auxiliary Space + Input Space
a) Auxiliary Space: the temporary space occupied by the program, which includes
   i.  variables (constant values, temporary values),
   ii. program instructions,
   iii. the execution stack.
b) Input Space: the space occupied by the input itself, which includes
   i.  space required for the input data (e.g. arrays processed by loops),
   ii. file sizes.
3. Frequency Count: a measure of the number of times each instruction in a program
executes. It is calculated by summing the execution counts of all the instructions
in the program.
Consider the following program block:

    int SumElements(int a[], int n)
    {
        int i, sum = 0;
        for (i = 0; i < n; i++)
        {
            sum = sum + a[i];
        }
        return sum;
    }

    Statement          Frequency Count
    int i, sum = 0     1
    i = 0              1
    i < n              n + 1   (the condition runs once more to terminate)
    i++                n
    sum = sum + a[i]   n
    return sum         1
    Total FC:          3n + 4
Characteristics of good algorithms
There can be many algorithms for a particular problem, so how do we decide that one
algorithm is good and another is bad?
Correctness: An algorithm is correct if, for every set of inputs, it halts with the
correct output. If it does not produce the correct output for some particular input,
the algorithm is wrong.
Finiteness: The algorithm must always terminate after a finite number of steps.
Efficiency: By efficiency we mean that:
i. The computational time should be as small as possible.
ii. The memory used by the algorithm should also be as small as possible.
Unambiguity: Every instruction in an algorithm must be unambiguous.
Asymptotic Notations
Asymptotic notations are mathematical tools to represent time complexity of
algorithms for asymptotic analysis.
The main idea of asymptotic analysis is to have a measure of efficiency of algorithms
that doesn’t depend on machine specific constants, and doesn’t require algorithms to
be executed and time taken by programs to be compared.
The following 3 asymptotic notations are mostly used to represent time complexity
of algorithms.
1. Big Oh Notation (O)
2. Omega Notation (Ω)
3. Theta Notation (Θ)
1. Big Oh Notation (O): Big O notation defines an upper bound on an algorithm; it
indicates the highest possible (worst-case) growth of the time complexity.
A function f(n) belongs to the set O(g(n)) if there exist positive constants c and
n0 such that
0 ≤ f(n) ≤ c·g(n)   for all n ≥ n0.
Then we can say
f(n) = O(g(n)).
(Figure: on a graph, f(n) lies below c·g(n) for all n ≥ n0.)
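As a worked instance of the definition, take f(n) = 3n + 4 (a frequency count of the
form obtained earlier) and g(n) = n; choosing the constants c = 5 and n0 = 1 is
enough:

```latex
f(n) = 3n + 4, \qquad g(n) = n. \\
\text{Choose } c = 5,\ n_0 = 1:\qquad 3n + 4 \le 5n \iff 4 \le 2n \iff n \ge 2. \\
\text{Taking } n_0 = 2:\quad 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0,
\ \text{so } f(n) = O(n).
```

Any linear frequency count an + b is O(n) by the same argument; the constants c and
n0 absorb the lower-order terms.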
2. Omega Notation (Ω): Omega notation defines a lower bound on an algorithm; it
indicates the lowest possible (best-case) growth of the time complexity.
A function f(n) belongs to the set Ω(g(n)) if there exist positive constants c and
n0 such that
0 ≤ c·g(n) ≤ f(n)   for all n ≥ n0.
Then we can say
f(n) = Ω(g(n)).
(Figure: on a graph, f(n) lies above c·g(n) for all n ≥ n0.)
3. Theta Notation (Θ): Theta notation encloses the function from above and below.
Since it represents both an upper and a lower bound on the running time of an
algorithm, it is used for analyzing the average-case complexity of an algorithm.
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and
c2 such that f(n) can be sandwiched between c1·g(n) and c2·g(n) for sufficiently
large n:
c1·g(n) ≤ f(n) ≤ c2·g(n)   for all n ≥ n0.
Then f(n) is said to be asymptotically tightly bounded, and we can say
f(n) = Θ(g(n)).
(Figure: on a graph, f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0.)
Best Case, Worst Case & Average Case Analysis
Best-case, worst-case, and average-case analysis give a measure of the efficiency of
an algorithm on different inputs.
Best Case Analysis:
In best-case analysis, we calculate a lower bound on the running time of an
algorithm. We must know the case that causes the minimum number of operations to be
executed. In the linear search problem, the best case occurs when x is present at
the first location, so the time complexity in the best case is Ω(1).
Worst Case Analysis:
In worst-case analysis, we calculate an upper bound on the running time of an
algorithm. We must know the case that causes the maximum number of operations to be
executed. For linear search, the worst case happens when the element to be searched
is not present in the array, giving O(n).
Average Case Analysis:
In average-case analysis, we take all possible inputs and calculate the computing
time for each, sum all the calculated values, and divide the sum by the total number
of inputs. We must know (or predict) the distribution of cases.
(Figure: graphs of the best-case, worst-case, and average-case running times.)
Searching Techniques
Linear Search Algorithm:
Linear Search (Array A, Value x)
Step 1: Set i to 1
Step 2: If i > n then go to Step 7
Step 3: If A[i] = x then go to Step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print "Element x found at index i" and go to Step 8
Step 7: Print "Element not found"
Step 8: Exit
Linear Search Pseudo code:

procedure linear_search (list, value)
    for each item in the list
        if item == value then
            return the item's location
        end if
    end for
end procedure
Types of Algorithms
Algorithms are classified based on the methods or strategies they use to solve the
given problem. The different types, or strategies, of algorithms are:
1. Divide and Conquer:
2. Greedy Method
3. Dynamic Programming
4. Branch & Bound
5. Backtracking
Divide and Conquer: an algorithm design strategy in which we break a single big
problem into smaller sub-problems, solve the smaller sub-problems, and combine their
solutions to find the solution for the original big problem. The concept involves
three steps:
1. Divide the problem into multiple smaller problems.
2. Conquer the sub-problems by solving them. The idea is to break the problem down
into atomic sub-problems, which are actually solved at this stage.
3. Combine the solutions of the sub-problems to find the solution to the actual
problem.
Applications of the Divide and Conquer strategy:
1. Merge Sort
2. Quick Sort
3. Binary Search
4. Fibonacci Search
Greedy Method: among all the algorithmic approaches, the simplest and most
straightforward is the greedy method.
In this approach, each decision is taken on the basis of the currently available
information, without worrying about the effect of that decision in the future.
Greedy algorithms build a solution part by part, choosing the next part in such a
way that it gives an immediate benefit.
This approach never reconsiders the choices taken previously.
It is mainly used to solve optimization problems, and it is easy to implement and
quite efficient in most cases.
Hence, we can say that a greedy algorithm is an algorithmic paradigm based on a
heuristic that follows the locally optimal choice at each step, with the hope of
finding a globally optimal solution.
A greedy algorithm is designed using the following steps:
1. Characterize the structure of an optimal solution.
2. Find the set of feasible solutions.
3. Find the locally optimal solutions.
A greedy solution must have the following properties:
Feasible: it satisfies all the problem's constraints, so that we obtain at least one
valid solution.
Locally Optimal Choice: each choice is the optimum among the currently available
feasible options.
Unalterable: once a decision is made, it is not altered at any subsequent step.
Applications of the Greedy method:
1. Fractional Knapsack Problem
2. Minimum Spanning Tree
3. Huffman Coding
4. Job Sequencing with Deadlines
5. Single-Source Shortest Path Problem
Analysis of programming constructs
Here we discuss some of the common running times taken by algorithms to solve a
given problem:

    Type          Running Time
    Constant      O(1)
    Linear        O(n)
    Logarithmic   O(log n)
    Quadratic     O(n^2)
    Cubic         O(n^3)
    Exponential   O(2^n)
Constant Complexity:
It imposes a complexity of O(1). A constant-time algorithm is one that takes the
same amount of time regardless of its input: it executes a constant number of steps
(1, 5, 10, etc.) to solve the problem, and the count of operations is independent of
the input data size.
Examples:
1. Given two numbers, report the sum.
2. Given a list, report the first element.
In such cases, F(n) = O(1).
Linear Complexity:
It imposes a complexity of O(n): the algorithm takes a number of steps proportional
to the total number of elements. For example, if there are 500 elements, it takes
about 500 steps; the number of steps depends linearly on the number of elements.
Examples:
1. Printing n elements.
2. Searching for an element in an array.
In such cases, F(n) = O(n).
Logarithmic Complexity:
A logarithmic-time algorithm is one that requires a number of steps proportional to
log(n). In most cases we use 2 as the base of the logarithm, but the base does not
matter because constant factors are ignored.
Examples:
1. Binary search.
2. Searching a balanced tree data structure.
In such cases, F(n) = O(log n).
Quadratic Complexity:
A quadratic-time algorithm is one that takes a number of steps proportional to n^2.
That is, if the size of the input doubles, the number of steps quadruples.
A typical pattern of quadratic-time algorithms is performing a linear-time operation
on each item of the input (n steps per item x n items = n^2 steps).
Examples:
1. Comparing each item of a list against all the other items in the list.
2. Sorting a list of elements with a simple sort (e.g. selection or bubble sort).
In such cases, F(n) = O(n^2).
Cubic Complexity:
A cubic-time algorithm is one that takes a number of steps proportional to n^3.
In other words, if the input doubles, the number of steps is multiplied by 8.
Similarly to the quadratic case, this can result from applying an n^2 algorithm to n
items, or a linear algorithm to n^2 items.
Examples:
1. Filling in a 3D board (or environment).
2. For each object in a list, constructing an n-by-n bitmap drawing of the object.
In such cases, F(n) = O(n^3).
Exponential Complexity:
An exponential-time algorithm is one that takes time proportional to 2^n.
In other words, if the size of the input increases by one, the number of steps
doubles. Note that logarithms and exponents are inverses of each other.
Algorithms in this category are often considered too slow to be practical,
especially if the input is typically large.
Example:
1. Given a number n, generate a list of every n-bit binary number.
In such cases, F(n) = O(2^n).