Full Cse 408 Unit 3
Quick Solutions
Floyd-Warshall
Representation of the Input
wij = 0        if i = j
wij = w(i, j)  if i ≠ j and (i, j) in E
wij = ∞        if i ≠ j and (i, j) not in E
Format of the Output
Intermediate Vertices
Key Definition
Remark 1
Remark 2
Recursive Formulation
The Floyd-Warshall Algorithm
Floyd-Warshall(W)
  n = # of rows of W;
  D(0) = W;
  for k = 1 to n do
    for i = 1 to n do
      for j = 1 to n do
        dij(k) = min{ dij(k-1), dik(k-1) + dkj(k-1) };
      od;
    od;
  od;
  return D(n);
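As a runnable companion to the pseudocode, here is a minimal Python sketch. It is 0-indexed and overwrites a single matrix D in place instead of keeping every D(k) — a standard space optimization, not shown on the slide:

```python
# Floyd-Warshall: all-pairs shortest paths from a weight matrix.
# INF marks (i, j) pairs with no edge, per the input representation above.
INF = float('inf')

def floyd_warshall(W):
    n = len(W)
    D = [row[:] for row in W]          # D(0) = W
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D                           # D(n): shortest-path weights
```

D[i][j] then holds the weight of a shortest i→j path; for a 3-vertex cycle with edges 0→1 (weight 3), 1→2 (weight 1) and 2→0 (weight 2), D[0][2] comes out as 4.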
Time and Space Requirements
Θ(n³) time; Θ(n²) space suffices if the D(k) matrices are overwritten in place.
CSE408
Optimal binary search
tree and
Knapsack problem
Lecture # 23
Optimal binary search trees
[Figure: four different binary search trees (a)-(d) over the keys 3, 7, 9, 12]
Optimal binary search trees

Σ_{i=1..n} Pi + Σ_{i=0..n} Qi = 1

• Identifiers: 4, 5, 8, 10, 11, 12, 14
• Internal node: successful search, Pi
• External node: unsuccessful search, Qi
[Figure: BST over the identifiers, with internal nodes 4, 5, 8, 10, 11, 12, 14 and external nodes E0…E7]

Expected cost of a search = Σ_{i=1..n} Pi · level(ai) + Σ_{i=0..n} Qi · (level(Ei) − 1)
◼ The level of the root is 1
The dynamic programming approach
If ak is chosen as the root, the left subtree contains a1…ak-1 (probabilities P1…Pk-1 and Q0…Qk-1) and the right subtree contains ak+1…an (probabilities Pk+1…Pn and Qk…Qn); their optimal costs are C(1, k-1) and C(k+1, n).
General formula

C(i, j) = min over i ≤ k ≤ j of
          { Pk + [ Qi-1 + Σ_{m=i..k-1} (Pm + Qm) + C(i, k-1) ]
               + [ Qk + Σ_{m=k+1..j} (Pm + Qm) + C(k+1, j) ] }

        = min over i ≤ k ≤ j of { C(i, k-1) + C(k+1, j) } + Qi-1 + Σ_{m=i..j} (Pm + Qm)
Computation relationships of subtrees
• e.g. n = 4: C(1,4) is computed from C(1,3) and C(2,4)
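The C(i, j) recurrence can be sketched as follows; this is a hypothetical Python version (the function name and the 1-indexed padding are my own), computing only the optimal cost, not the tree itself:

```python
# Hypothetical sketch of the C(i, j) recurrence (names are mine, not the
# slides'). p[1..n]: success probabilities Pi; q[0..n]: failure
# probabilities Qi; p[0] is an unused pad so indices match the formula.
def optimal_bst_cost(p, q, n):
    # C[i][j]: cost of an optimal tree over a_i..a_j; C[i][i-1] = 0
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            # weight W(i, j) = Q_{i-1} + sum over m=i..j of (P_m + Q_m)
            w = q[i - 1] + sum(p[m] + q[m] for m in range(i, j + 1))
            C[i][j] = min(C[i][k - 1] + C[k + 1][j]
                          for k in range(i, j + 1)) + w
    return C[1][n]
```

For n = 2 with P = (0.3, 0.2) and Q = (0.1, 0.2, 0.2) this gives C(1, 2) = 1.6 (either key as root is optimal here).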
Knapsack problem
Matrix-chain Multiplication …contd
Algorithm to Multiply 2 Matrices
Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B
MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
Scalar multiplication in line 5 dominates the running time.
Number of scalar multiplications = p·q·r
Matrix-chain Multiplication …contd
Dynamic Programming Approach
m[i, j] = 0                                                   if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1 pk pj }   if i < j
Algorithm to Compute Optimal Cost
Input: Array p[0…n] containing matrix dimensions and n
Result: Minimum-cost table m and split table s

MATRIX-CHAIN-ORDER(p[ ], n)
  for i ← 1 to n
    m[i, i] ← 0
  for l ← 2 to n
    for i ← 1 to n-l+1
      j ← i+l-1
      m[i, j] ← ∞
      for k ← i to j-1
        q ← m[i, k] + m[k+1, j] + p[i-1] p[k] p[j]
        if q < m[i, j]
          m[i, j] ← q
          s[i, j] ← k
  return m and s

Takes O(n³) time; requires O(n²) space.
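A runnable Python sketch of MATRIX-CHAIN-ORDER; the tables are padded with an unused row and column so that m[i][j] and s[i][j] can be used 1-indexed, as in the pseudocode:

```python
# Matrix-chain order: p[0..n] holds dimensions; matrix i is p[i-1] x p[i].
INF = float('inf')

def matrix_chain_order(p):
    n = len(p) - 1                     # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):          # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = INF
            for k in range(i, j):      # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

For p = [10, 100, 5, 50], m[1][3] = 7500 with split s[1][3] = 2, i.e. (A1·A2)·A3 is cheaper than A1·(A2·A3).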
Constructing Optimal Solution
Example
CSE408
Longest Common Subsequence
Lecture # 25
Dynamic programming
LCS(X, Y) = BCB
X = ABCB
Y = BDCAB
LCS Example (0)
X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate array c[0..m, 0..n]

  j     0   1   2   3   4   5
i  Xi       B   D   C   A   B
0
1  A
2  B
3  C
4  B

LCS Example (1)
for i = 1 to m: c[i,0] = 0
for j = 1 to n: c[0,j] = 0

  j     0   1   2   3   4   5
i  Xi       B   D   C   A   B
0       0   0   0   0   0   0
1  A    0
2  B    0
3  C    0
4  B    0
LCS Examples (2)–(15)
Fill the table one cell at a time, row by row, using:

if ( Xi == Yj )
    c[i,j] = c[i-1,j-1] + 1
else c[i,j] = max( c[i-1,j], c[i,j-1] )

The completed table:

  j     0   1   2   3   4   5
i  Xi       B   D   C   A   B
0       0   0   0   0   0   0
1  A    0   0   0   0   1   1
2  B    0   1   1   1   1   2
3  C    0   1   1   2   2   2
4  B    0   1   1   2   2   3
2/27/2024
LCS Algorithm Running Time
O(m·n), since each c[i,j] is calculated in constant time and there are m·n elements in the array.
How to find actual LCS
• So far, we have just found the length of the LCS, but not the LCS itself.
• We want to modify this algorithm to make it output the Longest Common Subsequence of X and Y.
• Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1].
• For each c[i,j] we can say how it was acquired. For example, if c[i-1,j-1] = 2 and Xi = Yj, then c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3.
How to find actual LCS - continued
• Remember that
  c[i, j] = c[i-1, j-1] + 1                 if x[i] = y[j]
  c[i, j] = max( c[i, j-1], c[i-1, j] )     otherwise
• So we can start from c[m, n] and go backwards: whenever c[i, j] = c[i-1, j-1] + 1, x[i] is part of the LCS.
Finding LCS (2)

  j     0   1   2   3   4   5
i  Xi       B   D   C   A   B
0       0   0   0   0   0   0
1  A    0   0   0   0   1   1
2  B    0   1   1   1   1   2
3  C    0   1   1   2   2   2
4  B    0   1   1   2   2   3

LCS (reversed order): B C B
LCS (straight order): B C B
(this string turned out to be a palindrome)
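The table fill and the backward trace can be combined in one routine; a minimal Python sketch (0-indexed strings, so X[i-1] plays the role of Xi in the recurrence):

```python
# LCS length table plus backward trace, as described above.
def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # trace back from c[m][n]; matched characters come out in reverse order
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

lcs("ABCB", "BDCAB") returns "BCB", matching the worked example.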
CSE408
Bubble sort
Maximum & Minimum
Lecture #16
Why Study Sorting Algorithms?
• Internal Sort
– The data to be sorted is all stored in the computer's main memory.
• External Sort
– Some of the data to be sorted might be stored in an external, slower device.
• In-Place Sort
– The amount of extra space required to sort the data is constant with respect to the input size.
Bubble Sort
• Idea:
– Repeatedly pass through the array
– Swaps adjacent elements that are out of order
[Trace: bubble sort passes on 8 4 6 9 2 3 1; in pass i = 1 the value 1 is swapped leftwards step by step to the front, and successive passes leave 1 2 3 4 6 8 9]
Bubble Sort
Alg.: BUBBLESORT(A)
  for i ← 1 to length[A]
    do for j ← length[A] downto i + 1
      do if A[j] < A[j-1]
        then exchange A[j] ↔ A[j-1]
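The pseudocode translates directly; a 0-indexed Python sketch that sorts A in place:

```python
# Bubble sort: repeatedly bubble the smallest remaining element to the front.
def bubble_sort(A):
    n = len(A)
    for i in range(n):
        for j in range(n - 1, i, -1):      # scan right-to-left
            if A[j] < A[j - 1]:            # swap out-of-order neighbours
                A[j], A[j - 1] = A[j - 1], A[j]
    return A
```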
Bubble-Sort Running Time
Alg.: BUBBLESORT(A)
  for i ← 1 to length[A]                      c1
    do for j ← length[A] downto i + 1         c2
      do if A[j] < A[j-1]                     c3
        then exchange A[j] ↔ A[j-1]           c4

Comparisons: ≈ n²/2    Exchanges: ≈ n²/2

T(n) = c1(n+1) + c2 Σ_{i=1..n} (n − i + 1) + c3 Σ_{i=1..n} (n − i) + c4 Σ_{i=1..n} (n − i)
     = Θ(n) + (c2 + c3 + c4) Σ_{i=1..n} (n − i)

where Σ_{i=1..n} (n − i) = Σ n − Σ i = n² − n(n+1)/2 = n²/2 − n/2

Thus, T(n) = Θ(n²)
Selection Sort
• Idea:
– Find the smallest element in the array
– Exchange it with the element in the first position
– Find the second smallest element and exchange it
with the element in the second position
– Continue until the array is sorted
• Disadvantage:
– Running time depends only slightly on the amount
of order in the file
Example
8 4 6 9 2 3 1 → 1 4 6 9 2 3 8 → 1 2 6 9 4 3 8 → 1 2 3 9 4 6 8 →
1 2 3 4 9 6 8 → 1 2 3 4 6 9 8 → 1 2 3 4 6 8 9
Selection Sort
Alg.: SELECTION-SORT(A)
  n ← length[A]
  for j ← 1 to n - 1
    do smallest ← j
      for i ← j + 1 to n
        do if A[i] < A[smallest]
          then smallest ← i
      exchange A[j] ↔ A[smallest]
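A 0-indexed Python sketch of SELECTION-SORT, sorting A in place:

```python
# Selection sort: put the minimum of the unsorted suffix into position j.
def selection_sort(A):
    n = len(A)
    for j in range(n - 1):
        smallest = j
        for i in range(j + 1, n):          # find the minimum of A[j..n-1]
            if A[i] < A[smallest]:
                smallest = i
        A[j], A[smallest] = A[smallest], A[j]
    return A
```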
Analysis of Selection Sort
Alg.: SELECTION-SORT(A)                       cost   times
  n ← length[A]                               c1     1
  for j ← 1 to n - 1                          c2     n
    do smallest ← j                           c3     n-1
      for i ← j + 1 to n                      c4     Σ_{j=1..n-1} (n − j + 1)
        do if A[i] < A[smallest]              c5     Σ_{j=1..n-1} (n − j)
          then smallest ← i                   c6     Σ_{j=1..n-1} (n − j)
      exchange A[j] ↔ A[smallest]             c7     n-1

≈ n²/2 comparisons, n − 1 exchanges

T(n) = c1 + c2·n + c3(n − 1) + c4 Σ_{j=1..n-1} (n − j + 1) + c5 Σ_{j=1..n-1} (n − j) + c6 Σ_{j=1..n-1} (n − j) + c7(n − 1) = Θ(n²)
MINIMUM AND MAXIMUM
CSE408
Count, Radix & Bucket
Sort
Lecture #17
How Fast Can We Sort?
Lower-Bound for Sorting
Can we do better?
– Radix Sort
– Bucket sort

Counting Sort
Step 1: count key frequencies (i.e., occurrences of each value) (r = 6)
Step 2: turn the counts into prefix sums for the output array
Algorithm / Example

A (1..8):                    2 5 3 0 2 3 0 3
C (0..5) after counting:     2 0 2 3 0 1
C (0..5) after prefix sums:  2 2 4 7 7 8

Scan A from right to left; place each A[j] at B[C[A[j]]], then decrement C[A[j]]:
j=8: A[8]=3 → B[7]=3,  C = 2 2 4 6 7 8
j=7: A[7]=0 → B[2]=0,  C = 1 2 4 6 7 8
j=6: A[6]=3 → B[6]=3,  C = 1 2 4 5 7 8
j=5: A[5]=2 → B[4]=2,  C = 1 2 3 5 7 8
j=4: A[4]=0 → B[1]=0,  C = 0 2 3 5 7 8
j=3: A[3]=3 → B[5]=3,  C = 0 2 3 4 7 8
j=2: A[2]=5 → B[8]=5,  C = 0 2 3 4 7 7
j=1: A[1]=2 → B[3]=2,  C = 0 2 2 4 7 7
Final: B = 0 0 2 2 3 3 3 5
COUNTING-SORT
A[1..n] holds keys in 0..r; B[1..n] is the output; C[0..r] is scratch space.
Alg.: COUNTING-SORT(A, B, n, r)
1. for i ← 0 to r
2.   do C[ i ] ← 0
3. for j ← 1 to n
4.   do C[A[ j ]] ← C[A[ j ]] + 1
5. // C[i] now contains the number of elements equal to i
6. for i ← 1 to r
7.   do C[ i ] ← C[ i ] + C[i - 1]
8. // C[i] now contains the number of elements ≤ i
9. for j ← n downto 1
10.  do B[C[A[ j ]]] ← A[ j ]
11.     C[A[ j ]] ← C[A[ j ]] - 1
Analysis of Counting Sort
Alg.: COUNTING-SORT(A, B, n, r)
1. for i ← 0 to r                           O(r)
2.   do C[ i ] ← 0
3. for j ← 1 to n                           O(n)
4.   do C[A[ j ]] ← C[A[ j ]] + 1
5. // C[i] contains the number of elements equal to i
6. for i ← 1 to r                           O(r)
7.   do C[ i ] ← C[ i ] + C[i - 1]
8. // C[i] contains the number of elements ≤ i
9. for j ← n downto 1                       O(n)
10.  do B[C[A[ j ]]] ← A[ j ]
Overall: O(n + r)
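A 0-indexed Python sketch of the pseudocode above; the right-to-left placement scan is what makes the sort stable:

```python
# Stable counting sort for keys in 0..r.
def counting_sort(A, r):
    n = len(A)
    C = [0] * (r + 1)
    for x in A:                  # step 1: frequencies
        C[x] += 1
    for i in range(1, r + 1):    # step 2: C[i] = # of elements <= i
        C[i] += C[i - 1]
    B = [0] * n
    for x in reversed(A):        # right-to-left scan keeps equal keys in order
        B[C[x] - 1] = x
        C[x] -= 1
    return B
```

Running it on the worked example, counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) yields [0, 0, 2, 2, 3, 3, 3, 5].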
Radix Sort
• Represents keys as d-digit numbers in some base k:
  key = x1x2...xd, where 0 ≤ xi ≤ k-1
• Example: key = 15 is a d = 2 digit number in base k = 10

RADIX-SORT
Alg.: RADIX-SORT(A, d)
  for i ← 1 to d
    do use a stable sort to sort array A on digit i
(stable sort: preserves the order of identical elements)
Analysis of Radix Sort
• Each of the d passes is one stable sort of n keys; using counting sort on digits in 0..k-1, a pass takes Θ(n + k), so the total is Θ(d(n + k)).
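RADIX-SORT leaves the stable per-digit sort unspecified; this Python sketch uses per-digit bucket lists, which are stable by construction (base k = 10 is an assumption here):

```python
# LSD radix sort of non-negative integers with at most d base-k digits.
def radix_sort(A, d, k=10):
    for i in range(d):                            # least significant digit first
        buckets = [[] for _ in range(k)]
        for x in A:
            buckets[(x // k ** i) % k].append(x)  # stable within each bucket
        A = [x for b in buckets for x in b]       # concatenate bucket contents
    return A
```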
Bucket Sort
• Assumption:
– the input is generated by a random process that distributes
elements uniformly over [0, 1)
• Idea:
– Divide [0, 1) into k equal-sized buckets (k=Θ(n))
– Distribute the n input values into the buckets
– Sort each bucket (e.g., using quicksort)
– Go through the buckets in order, listing elements in each one
Example - Bucket Sort
A = .78 .17 .39 .26 .72 .94 .21 .12 .23 .68

Distribute into buckets B[0..9] (bucket i holds values in [i/10, (i+1)/10)):
B[1] = .17 .12    B[2] = .26 .21 .23    B[3] = .39
B[6] = .68        B[7] = .78 .72        B[9] = .94
(buckets 0, 4, 5, 8 are empty)

Sort within each bucket:
B[1] = .12 .17    B[2] = .21 .23 .26    B[3] = .39
B[6] = .68        B[7] = .72 .78        B[9] = .94

Concatenate the buckets in order:
.12 .17 .21 .23 .26 .39 .68 .72 .78 .94
Analysis of Bucket Sort
Alg.: BUCKET-SORT(A, n)
  for i ← 1 to n                                    O(n)
    do insert A[i] into list B[⌊n·A[i]⌋]
  for i ← 0 to k - 1                                k · O((n/k) lg(n/k)) = O(n lg(n/k))
    do sort list B[i] with quicksort
  concatenate the lists B[0], …, B[k-1] in order    O(k)
With k = Θ(n) and uniformly distributed input, the expected total time is O(n).
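A minimal Python sketch of the algorithm, assuming values uniform over [0, 1) and k = n buckets (the per-bucket sort here is Python's built-in sort rather than quicksort):

```python
# Bucket sort for values in [0, 1), with k = n buckets.
def bucket_sort(A):
    n = len(A)
    B = [[] for _ in range(n)]
    for x in A:
        B[int(n * x)].append(x)          # value x lands in bucket floor(n*x)
    for b in B:
        b.sort()                         # sort each bucket
    return [x for b in B for x in b]     # concatenate buckets in order
```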
CSE408
Heap & Heap sort,Hashing
Lecture #18
Special Types of Trees
• Def: Full binary tree = a binary tree in which each node is either a leaf or has exactly two children
• Def: Complete binary tree = a binary tree in which all leaves are on the same level and all internal nodes have two children
[Figure: example tree; height of root = 3, height of node (2) = 1, level of node (10) = 2]
Useful Properties
[Figure: heights and levels illustrated on the same example tree]
The Heap Data Structure
• Def: A heap is a nearly complete binary tree with
the following two properties:
– Structural property: all levels are full, except
possibly the last one, which is filled from left to right
– Order (heap) property: for any node x
Parent(x) ≥ x
[Figures: example heaps in which A[2], A[4], or A[9] violates the max-heap property]
Alg: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1
3.   do MAX-HEAPIFY(A, i, n)

A: 4 1 3 2 16 9 10 14 8 7

Example: A = 4 1 3 2 16 9 10 14 8 7
[Figure: the trees after each call MAX-HEAPIFY(A, i, n) for i = 5, 4, 3, 2, 1; the array becomes the max-heap 16 14 10 8 7 9 3 2 4 1]
Running Time of BUILD-MAX-HEAP
Alg: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1          O(n)
3.   do MAX-HEAPIFY(A, i, n)       O(lgn)
⇒ O(n lgn) overall; a tighter analysis, summing the work per node height h (there are at most ⌈n/2^(h+1)⌉ nodes of height h), gives O(n).
Heapsort
• Idea:
– Build a max-heap from the array
– Swap the root (the maximum element) with the last
element in the array
– “Discard” this last node by decreasing the heap size
– Call MAX-HEAPIFY on the new root
– Repeat this process until only one node remains
Example: A=[7, 4, 3, 1, 2]
MAX-HEAPIFY(A, 1, 1)
Alg: HEAPSORT(A)
1. BUILD-MAX-HEAP(A) O(n)
2. for i ← length[A] downto 2
3. do exchange A[1] ↔ A[i] n-1 times
4. MAX-HEAPIFY(A, 1, i - 1) O(lgn)
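MAX-HEAPIFY, BUILD-MAX-HEAP and HEAPSORT combine as follows; a 0-indexed Python sketch (the slides use 1-indexed arrays, so the child indices shift by one):

```python
# 0-indexed max-heap primitives and heapsort.
def max_heapify(A, i, n):
    """Sift A[i] down inside the heap A[0..n-1]."""
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and A[l] > A[largest]:
        largest = l
    if r < n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)

def heapsort(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):    # BUILD-MAX-HEAP
        max_heapify(A, i, n)
    for end in range(n - 1, 0, -1):        # move the current max to the end
        A[0], A[end] = A[end], A[0]
        max_heapify(A, 0, end)             # restore the heap on A[0..end-1]
    return A
```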
Operations on Priority Queues
Heap-Maximum(A) returns 7
HEAP-EXTRACT-MAX
Goal:
– Extract the largest element of the heap (i.e., return the max value and also remove that element from the heap)
Idea:
– Exchange the root element with the last
– Decrease the size of the heap by 1 element
– Call MAX-HEAPIFY on the new root, on a heap of size n-1
[Figure: max = 16 is returned, the last element 1 becomes the new root, and the heap size is decreased by 1]
Alg: HEAP-EXTRACT-MAX(A, n)
1. if n < 1
2.   then error "heap underflow"
3. max ← A[1]
4. A[1] ← A[n]
5. MAX-HEAPIFY(A, 1, n - 1)
6. return max
Running time: O(lgn)
HEAP-INCREASE-KEY
• Goal:
– Increases the key of an element i in the heap
• Idea:
– Increment the key of A[i] to its new value
– If the max-heap property does not hold anymore:
traverse a path toward the root to find the proper
place for the newly increased key
[Figure: heap 16 14 10 8 7 9 3 2 4 1; node i holds the value 4 and Key[i] ← 15]
Example: HEAP-INCREASE-KEY
[Figure: after Key[i] ← 15 at the leaf that held 4, the value 15 is exchanged with its parent 8, then with 14, and stops below the root 16, restoring the max-heap property]
Example: MAX-HEAP-INSERT
Insert value 15:
– Start by inserting -∞ at the new last position
– Increase the key to 15: call HEAP-INCREASE-KEY on A[11] = 15
[Figure: -∞ is appended as a new leaf, set to 15, and bubbled up past 7 and 14]
MAX-HEAP-INSERT
Alg: MAX-HEAP-INSERT(A, key, n)
1. heap-size[A] ← n + 1
2. A[n + 1] ← -∞
3. HEAP-INCREASE-KEY(A, n + 1, key)
hashVal %=tableSize;
if (hashVal < 0) /* in case overflow occurs */
hashVal += tableSize;
return hashVal;
};
Hash function for strings:
[Figure: key "ali" with key[0] = 'a', key[1] = 'l', key[2] = 'i' and KeySize = 3; the character codes are combined by the hash function into index 8172 of a table of size 10,006 (TableSize)]
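A hypothetical Python version of such a string hash; the base-37 multiplier is an assumption (any base that spreads keys reasonably works), not a value taken from the slides:

```python
# Polynomial rolling hash, reduced modulo the table size at each step.
def hash_string(key, table_size):
    h = 0
    for ch in key:
        h = (37 * h + ord(ch)) % table_size   # keep h inside 0..table_size-1
    return h
```

hash_string("ali", 10006) then yields some fixed index in 0..10005 for every lookup of "ali".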
Double Hashing
• A second hash function is used to drive the
collision resolution.
– f(i) = i * hash2(x)
• We apply a second hash function to x and probe
at a distance hash2(x), 2*hash2(x), … and so on.
• The function hash2(x) must never evaluate to
zero.
– e.g. Let hash2(x) = x mod 9 and try to insert 99 in the
previous example.
• A function such as hash2(x) = R – ( x mod R)
with R a prime smaller than TableSize will work
well.
– e.g. try R = 7 for the previous example. (hash2(x) = 7 - x mod 7)
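The probe sequence described above can be sketched as follows; the table size 10 and R = 7 follow the example, while the helper name is my own:

```python
# Double-hashing insert into an open-addressing table,
# with hash2(x) = R - (x mod R) driving the probe step.
def double_hash_insert(table, x, R=7):
    m = len(table)
    h2 = R - (x % R)                 # step size; never evaluates to 0
    for i in range(m):
        idx = (x % m + i * h2) % m   # probe h1, h1+h2, h1+2*h2, ...
        if table[idx] is None:
            table[idx] = x
            return idx
    raise RuntimeError("table full")
```

Inserting 89, 18, 49 into an empty table of size 10 places them at slots 9, 8 and 6: 49 collides with 89 at slot 9 and jumps hash2(49) = 7 positions forward.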
Hashing Applications
• Compilers use hash tables to implement the
symbol table (a data structure to keep track of
declared variables).
• Game programs use hash tables to keep track of
positions they have encountered (transposition table)
• Online spelling checkers.
Summary
• Hash tables can be used to implement the insert
and find operations in constant average time.
– the time depends on the load factor, not on the number of
items in the table.
• It is important to have a prime TableSize and a
correct choice of load factor and hash function.
• For separate chaining the load factor should be
close to 1.
• For open addressing load factor should not
exceed 0.5 unless this is completely
unavoidable.
– Rehashing can be implemented to grow (or shrink)
the table.
CSE408
Lower bound theory
Lecture # 31
Lower bound