
Block LU decomposition

Numerical analysis B project

Alexandros Pavlatos,
Simona Aleksandrova

Vienna, February 2021


Contents

1 Introduction

2 Block LU algorithm and its uniqueness

3 Forward and backward substitution

4 Python implementation of the algorithm

5 Conclusion

1 Introduction
The elimination process for n equations in n unknowns Ax = b may also be
described as a factorization method: we look for factors L (a lower triangular
matrix with units on the diagonal) and U (an upper triangular matrix) such that
A = LU.
Gaussian elimination and its interpretation as an LU decomposition are also
feasible with block matrices, whose blocks are treated as indivisible units.
A block matrix, also called a partitioned matrix, is a matrix that is divided
into sub-matrices (blocks). For large matrices, working entry by entry is often
inconvenient because of their size and complexity; this is why the block LU
decomposition is important. Block algorithms work on partitions of the data
containing b^2 elements at a time, and a speed-up ratio on the order of O(b)
can be attained on machines that favour such block operations.
In this work we consider the block matrix A_n ∈ R^{nd×nd} of the form

\[
A_n = \begin{pmatrix}
D_1 & F_1 & & & \\
E_1 & D_2 & F_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \ddots & \ddots & F_{n-1} \\
 & & & E_{n-1} & D_n
\end{pmatrix}
\]

The size of each sub-block is d × d. We now study the linear system A_n x = b
for a given n ∈ N.
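
To make the setting concrete, here is a minimal sketch (with an illustrative choice
of n and d and randomly generated blocks; the helper assemble_block_tridiagonal is
not part of the project code) of how such a block tridiagonal matrix can be
assembled with NumPy:

import numpy as np

def assemble_block_tridiagonal(D, E, F):
    # D[0..n-1]: diagonal blocks, E[0..n-2]: sub-diagonal blocks,
    # F[0..n-2]: super-diagonal blocks, all of size d x d
    n, d = len(D), D[0].shape[0]
    A = np.zeros((n * d, n * d))
    for i in range(n):
        A[i*d:(i+1)*d, i*d:(i+1)*d] = D[i]            # D_{i+1}
        if i < n - 1:
            A[(i+1)*d:(i+2)*d, i*d:(i+1)*d] = E[i]    # E_{i+1}
            A[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = F[i]    # F_{i+1}
    return A

# illustrative data: n = 3 blocks of size d = 2
rng = np.random.default_rng(0)
D = [rng.standard_normal((2, 2)) + 4 * np.eye(2) for _ in range(3)]
E = [rng.standard_normal((2, 2)) for _ in range(2)]
F = [rng.standard_normal((2, 2)) for _ in range(2)]
A = assemble_block_tridiagonal(D, E, F)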

2 Block LU algorithm and its uniqueness
We make the ansatz that A_n = L_n U_n ∈ R^{nd×nd}, with

\[
L_n = \begin{pmatrix}
I & & & \\
L_1 & I & & \\
 & \ddots & \ddots & \\
 & & L_{n-1} & I
\end{pmatrix}
\]

and

\[
U_n = \begin{pmatrix}
U_1 & V_1 & & & \\
 & U_2 & V_2 & & \\
 & & \ddots & \ddots & \\
 & & & U_{n-1} & V_{n-1} \\
 & & & & U_n
\end{pmatrix}
\]

If D1 ∈ R^{d×d} is nonsingular (and D2, here denoting the trailing submatrix, is
certainly square), then

\[
A_n = \begin{pmatrix} D_1 & F_1 \\ E_1 & D_2 \end{pmatrix}
    = \begin{pmatrix} I & 0 \\ L_{21} & I \end{pmatrix}
      \begin{pmatrix} D_1 & F_1 \\ 0 & S \end{pmatrix}
\]

is just one block step for computing a block LU decomposition. Here S is the
Schur complement of D1 in A_n (associated with the trailing submatrix),
S = D2 − E1 D1^{-1} F1. By factorizing S in a similar manner and continuing
recursively, we obtain the following algorithm, which computes a block LU
decomposition:

1. U11 = D1 , U12 = F1 .
2. Solve L21 D1 = E1 for L21 .

3. S = D2 − L21 F1 (Schur complement)


4. Compute the block LU factorization of S, recursively.

Given the block LU factorization of A_n, the solution of a system A_n x = b
can be obtained by first solving L_n y = b by forward substitution (since L_n is
triangular) and then solving U_n x = y by block back substitution.
There are two possibilities to implement step 2 of the above algorithm. In the
first one, D1 is factorized by Gaussian elimination with partial pivoting, and
the linear systems with the diagonal blocks U_ii are then solved by substitution
with these LU factors. The second method is to compute D1^{-1} explicitly;
step 2 then becomes a matrix multiplication, and the block back substitution is
carried out by matrix-vector multiplications.
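
To make the recursion concrete, here is a minimal sketch of a dense block LU
routine (a hypothetical helper block_lu, not part of the project code, assuming
every pivot block that arises is nonsingular). Step 2 is realized with
np.linalg.solve, i.e. essentially the first of the two variants above:

import numpy as np

def block_lu(A, d):
    # block LU factorization A = L @ U of a dense (n*d) x (n*d) matrix,
    # treating d x d blocks as the elementary units
    N = A.shape[0]
    L = np.eye(N)
    U = A.astype(float)
    for k in range(0, N - d, d):
        D1 = U[k:k+d, k:k+d]                  # current pivot block
        E1 = U[k+d:, k:k+d]                   # blocks below the pivot
        F1 = U[k:k+d, k+d:]                   # blocks to the right of the pivot
        # step 2: solve L21 @ D1 = E1, i.e. D1.T @ L21.T = E1.T
        L21 = np.linalg.solve(D1.T, E1.T).T
        L[k+d:, k:k+d] = L21
        # step 3: Schur complement S = D2 - L21 @ F1, stored in place
        U[k+d:, k+d:] -= L21 @ F1
        U[k+d:, k:k+d] = 0.0
    return L, U

# quick check on a random matrix with n = 3 blocks of size d = 2
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
L, U = block_lu(A, 2)
print(np.allclose(L @ U, A))   # True

For the block tridiagonal A_n of Section 1 only the first block below each pivot
and the first block to its right are nonzero, so the computed factors keep the
block bidiagonal structure of the ansatz L_n, U_n above.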

We look into the structure of the Schur complement by considering an LU
factorization in block 2-by-2 form:

\[
\begin{pmatrix} D_1 & F_1 \\ E_1 & D_2 \end{pmatrix}
= \begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix}
  \begin{pmatrix} U_{11} & U_{12} \\ 0 & U_{22} \end{pmatrix}
= \begin{pmatrix} L_{11} U_{11} & L_{11} U_{12} \\ L_{21} U_{11} & L_{22} U_{22} + L_{21} U_{12} \end{pmatrix}
\]

We compute L11 and U11 as the LU factors of the leading sub-block D1. For U12
and L21 we obtain

\[
U_{12} = L_{11}^{-1} F_1, \qquad L_{21} = E_1 U_{11}^{-1}.
\]

Then we have

\[
L_{22} U_{22} = D_2 - L_{21} U_{12} = D_2 - E_1 U_{11}^{-1} L_{11}^{-1} F_1 = D_2 - E_1 D_1^{-1} F_1.
\]
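
A quick numerical sanity check of this identity (a minimal sketch with randomly
generated, purely illustrative 2 × 2 blocks):

import numpy as np

rng = np.random.default_rng(1)
D1 = rng.standard_normal((2, 2)) + 4 * np.eye(2)   # well-conditioned pivot block
F1 = rng.standard_normal((2, 2))
E1 = rng.standard_normal((2, 2))
D2 = rng.standard_normal((2, 2))

L21 = E1 @ np.linalg.inv(D1)          # satisfies L21 @ D1 = E1
S_via_L21 = D2 - L21 @ F1             # S = D2 - L21 F1
S_direct = D2 - E1 @ np.linalg.inv(D1) @ F1

print(np.allclose(S_via_L21, S_direct))   # True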

A particular case of LU factorization is the recursive LU factorization. If,
for example, n is even, we have

\[
\begin{pmatrix} D_1 & F_1 \\ E_1 & D_2 \end{pmatrix}
= \begin{pmatrix} L_{11} & 0 \\ L_{21} & I_{n/2} \end{pmatrix}
  \begin{pmatrix} I_{n/2} & 0 \\ 0 & S \end{pmatrix}
  \begin{pmatrix} U_{11} & U_{12} \\ 0 & I_{n/2} \end{pmatrix}
\]

It is very easy to invert such a 2x2 block matrix: one just has to change the
sign of the (2, 1) block,

\[
\begin{pmatrix} I & 0 \\ L_{21} & I \end{pmatrix}^{-1}
= \begin{pmatrix} I & 0 \\ -L_{21} & I \end{pmatrix}.
\]
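
This can again be checked numerically (a small sketch with an arbitrary, purely
illustrative block L21):

import numpy as np

rng = np.random.default_rng(2)
d = 3
L21 = rng.standard_normal((d, d))
I = np.eye(d)

M = np.block([[I, np.zeros((d, d))], [L21, I]])
M_inv = np.block([[I, np.zeros((d, d))], [-L21, I]])

print(np.allclose(M @ M_inv, np.eye(2 * d)))   # True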
Suppose the matrix A_k := (a_{ij})_{i,j=1}^{kd} is regular for k = 1, ..., n.
Suppose A_{k-1} (the (k-1)-th leading submatrix of A) has the unique LU
factorization A_{k-1} = L_{k-1} U_{k-1}. We look for a factorization

\[
A_k = \begin{pmatrix} A_{k-1} & b \\ c^T & d \end{pmatrix}
    = \begin{pmatrix} L_{k-1} & 0 \\ x^T & 1 \end{pmatrix}
      \begin{pmatrix} U_{k-1} & y \\ 0 & z \end{pmatrix}
    =: L_k U_k
\]

by picking appropriate x, y, z. The equations to be satisfied are
L_{k-1} y = b, U_{k-1}^T x = c, and d = x^T y + z (where d denotes the last
diagonal entry of A_k). The matrices L_{k-1} and U_{k-1} are regular, since
0 ≠ det(A_{k-1}) = det(L_{k-1}) det(U_{k-1}), so these equations have a unique
solution. The factorization clearly exists and is unique for k = 1, so by
induction every A_k = L_k U_k, and in particular A_n itself, has a unique LU
factorization.
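
The bordering step above is itself an algorithm: given the LU factors of the
leading submatrix, the factors of the bordered matrix follow from two triangular
solves and one inner product. A minimal sketch (a hypothetical helper
bordered_lu_step, assuming all leading principal submatrices are nonsingular;
for simplicity the triangular systems are solved with np.linalg.solve):

import numpy as np

def bordered_lu_step(L_prev, U_prev, b, c, d):
    # extend A_{k-1} = L_prev @ U_prev (L unit lower, U upper triangular)
    # to the bordered matrix [[A_{k-1}, b], [c^T, d]] = L_k @ U_k
    y = np.linalg.solve(L_prev, b)        # L_{k-1} y = b
    x = np.linalg.solve(U_prev.T, c)      # U_{k-1}^T x = c
    z = d - x @ y                         # d = x^T y + z
    m = L_prev.shape[0]
    L_k = np.block([[L_prev, np.zeros((m, 1))], [x[None, :], np.ones((1, 1))]])
    U_k = np.block([[U_prev, y[:, None]], [np.zeros((1, m)), np.array([[z]])]])
    return L_k, U_k

# grow the LU factorization of a 3x3 example one border at a time
A = np.array([[4., 1., 2.],
              [2., 5., 1.],
              [1., 2., 6.]])
L, U = np.ones((1, 1)), A[:1, :1].copy()
for k in range(2, 4):
    L, U = bordered_lu_step(L, U, A[:k-1, k-1], A[k-1, :k-1], A[k-1, k-1])
print(np.allclose(L @ U, A))   # True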

3 Forward and backward substitution


Let the notation be as in the previous section. We want to find the solutions
of Ly = b and Ux = y, where b = (b_1, ..., b_n)^T with b_j ∈ R is the right-hand
side of our linear system.

1. We solve Ly = b for y by forward substitution.

     
\[
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
= \begin{pmatrix}
L_{11} & 0 & 0 & \cdots & 0 \\
L_{21} & L_{22} & 0 & \cdots & 0 \\
\vdots & & \ddots & & \vdots \\
L_{n1} & L_{n2} & L_{n3} & \cdots & L_{nn}
\end{pmatrix}
\cdot
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
\]

This is equivalent to the following:

\[
\begin{aligned}
b_1 &= L_{11} y_1 \\
b_2 &= L_{21} y_1 + L_{22} y_2 \\
&\;\,\vdots \\
b_n &= L_{n1} y_1 + L_{n2} y_2 + L_{n3} y_3 + \cdots + L_{nn} y_n
\end{aligned}
\]

It is easy to see that y_1 = b_1 / L_{11}. Once we have y_1, the second equation
yields y_2 = (b_2 - L_{21} y_1) / L_{22}. For the third equation we substitute
y_1 and y_2 and solve for y_3, and so on we repeat the process. From equations
1, ..., j-1 we will have y_1, y_2, ..., y_{j-1} available; substituting them
into equation j and solving for y_j brings us to the following formula:

\[
y_j = \frac{1}{L_{jj}} \left[ b_j - \sum_{i=1}^{j-1} L_{ji} y_i \right]
\]

2. We have y available; now we can solve Ux = y for x by backward substitution.

\[
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
= \begin{pmatrix}
U_{11} & U_{12} & U_{13} & \cdots & U_{1n} \\
0 & U_{22} & U_{23} & \cdots & U_{2n} \\
0 & 0 & U_{33} & \cdots & U_{3n} \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & U_{nn}
\end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\]

This is equivalent to the following:

\[
\begin{aligned}
y_1 &= U_{11} x_1 + U_{12} x_2 + U_{13} x_3 + \cdots + U_{1n} x_n \\
y_2 &= 0 + U_{22} x_2 + U_{23} x_3 + \cdots + U_{2n} x_n \\
&\;\,\vdots \\
y_n &= 0 + 0 + \cdots + 0 + U_{nn} x_n
\end{aligned}
\]

Now we proceed as in the algorithm above, but in reverse order. From equation n
we obtain x_n. Then we plug it into equation n-1, which reads
y_{n-1} = U_{n-1,n-1} x_{n-1} + U_{n-1,n} x_n, and solve it for x_{n-1}. We
repeat this for x_{n-2}: when x_n, x_{n-1} and x_{n-2} are available, we can
find x_{n-3} from equation n-3. In general, after x_n, x_{n-1}, ..., x_{j+1}
have been computed, we can solve equation j for x_j using the following formula:

\[
x_j = \frac{1}{U_{jj}} \left[ y_j - \sum_{i=j+1}^{n} U_{ji} x_i \right]
\]

This process is repeated until x_1 is computed. When all unknowns x_j are
available, we are done.
We solve an example with a Python script.
Input:

import numpy as np

L = np.array([(1, 0, 0, 0), (-1, 1, 0, 0), (2, -1, 1, 0), (-3, 2, -2, 1)])
U = np.array([(2, 1, -1, 3), (0, 1, -1, 3), (0, 0, -1, 3), (0, 0, 0, 3)])
b = np.array([12, -8, 21, -26])

Forward and backward algorithm:


def forward_subs(L, b):
    # solve L y = b row by row from the top
    y = []
    for i in range(len(b)):
        y.append(b[i])
        for j in range(i):
            y[i] = y[i] - L[i, j] * y[j]
        y[i] = y[i] / L[i, i]
    return y

def back_subs(U, y):
    # solve U x = y row by row from the bottom
    x = np.zeros_like(y, dtype=float)
    for i in range(len(x), 0, -1):
        x[i-1] = (y[i-1] - np.dot(U[i-1, i:], x[i:])) / U[i-1, i-1]
    return x
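
Continuing with the L, U and b defined in the input above, the two routines can
be called as follows (the call sequence is not shown in the original listing):

y = forward_subs(L, b)
x = back_subs(U, np.array(y))
print(x)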
 
Output: x ≈ (4, 3, 3, 1.3333333)

4 Python implementation of the algorithm


Now we want to run some tests of the block LU decomposition using a Python
script.

import numpy as np

def LU(entries):
    # Doolittle LU decomposition: entries = L @ U with unit diagonal in L
    rows, columns = np.shape(entries)
    if rows != columns:
        return
    L = np.zeros((rows, columns))
    U = np.zeros((rows, columns))
    for i in range(columns):
        # entries of L in row i (columns j < i)
        for j in range(i):
            s = 0
            for k in range(j):
                s += L[i][k] * U[k][j]
            L[i][j] = (entries[i][j] - s) / U[j][j]
        L[i][i] = 1
        # entries of U in row i (columns j >= i)
        for j in range(i, columns):
            s1 = 0
            for k in range(i):
                s1 += L[i][k] * U[k][j]
            U[i][j] = entries[i][j] - s1
    return L, U

A = np.array([[1, 2], [3, 4]])

n = len(A)
L, U = LU(A)
print("L =", L)
print("U =", U)

# block LU of A viewed as a 2x2 matrix of 1x1 blocks, in the recursive form
# of Section 2: A = [[1,0],[L21,1]] @ [[1,0],[0,S]] @ [[U11,U12],[0,1]]
# with U11 = X, U12 = Y, L21 = Z/X and S = W - Z*Y/X
X = A[0][0]
Y = A[0][1]
Z = A[1][0]
W = A[1][1]
A_1 = np.array([[1, 0], [Z / X, 1]])
A_2 = np.array([[1, 0], [0, W - Z * Y / X]])
A_3 = np.array([[X, Y], [0, 1]])
print(A, "=")
print(A_1)
print("*", A_2)
print("*", A_3)

Output for

\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}:
\]

\[
L = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} 1 & 2 \\ 0 & -2 \end{pmatrix}
\]

\[
A = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}
 \ast \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}
 \ast \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
\]
Output for

\[
\begin{pmatrix} 6 & 8 \\ 2 & 5 \end{pmatrix}:
\]

\[
L = \begin{pmatrix} 1 & 0 \\ \tfrac{1}{3} & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} 6 & 8 \\ 0 & \tfrac{7}{3} \end{pmatrix}
\]

\[
A = \begin{pmatrix} 1 & 0 \\ \tfrac{1}{3} & 1 \end{pmatrix}
 \ast \begin{pmatrix} 1 & 0 \\ 0 & \tfrac{7}{3} \end{pmatrix}
 \ast \begin{pmatrix} 6 & 8 \\ 0 & 1 \end{pmatrix}
\]
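
Continuing with the LU function defined above, a quick check (not part of the
original listing) that the computed factors indeed reproduce A in both examples:

for A in (np.array([[1, 2], [3, 4]]), np.array([[6, 8], [2, 5]])):
    L, U = LU(A)
    X, Y, Z, W = A[0][0], A[0][1], A[1][0], A[1][1]
    A_1 = np.array([[1, 0], [Z / X, 1]])
    A_2 = np.array([[1, 0], [0, W - Z * Y / X]])
    A_3 = np.array([[X, Y], [0, 1]])
    print(np.allclose(L @ U, A), np.allclose(A_1 @ A_2 @ A_3, A))   # True True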

5 Conclusion
The block LU decomposition is an important matrix factorization in numerical
analysis, with applications in solving linear systems. If the leading principal
submatrices of the block matrix are nonsingular (as is the case, for example,
for a nonsingular totally nonnegative matrix), then each Schur complement is
nonsingular (and, in the totally nonnegative case, totally nonnegative) as well,
and a unique block LU factorization can be found.

