Numerical Methods for
Engineers
EE0903301
Lecture 13: LU Factorization (LU Decomposition)
Dr. Yanal Faouri
LU Decomposition
• Gauss elimination becomes inefficient when solving many systems that share the same coefficient matrix [A] but have different right-hand-side vectors {b} in the system Ax = b. For instance, a structure may need to be tested under several different loads, not just one.
• Gaussian elimination with pivoting is the most efficient and accurate way to solve a linear system.
Most of the work in this method is spent on the matrix A itself. If we need to solve several different
systems with the same A, and A is big, then we would like to avoid repeating the steps of Gaussian
elimination on A for every different b. This can be accomplished by the LU decomposition, which in
effect records the steps of Gaussian elimination.
• LU factorization methods separate the time-consuming elimination of the matrix [A] from the
manipulations of the right-hand side {b}. Thus, once [A] has been “factored” or “decomposed,”
multiple right-hand-side vectors can be evaluated in an efficient manner.
Overview of LU Factorization
• Just as was the case with Gauss elimination, LU factorization requires pivoting to avoid division by zero. However,
to simplify the following description, we will omit pivoting. In addition, the following explanation is limited to a
set of three simultaneous equations. The results can be directly extended to n-dimensional systems.
• A two-step strategy for obtaining solutions can be
summarized as follows:
1. LU factorization step. [A] is factored or “decomposed” into
lower [L] and upper [U] triangular matrices. Ax = b ➔ (LU)x = b.
2. Substitution step. [L] and [U] are used to determine a
solution {x} for a right-hand side {b}. This step itself consists of
two steps. First, equation Ld = b where d = Ux is used to
generate an intermediate vector {d} by forward substitution.
Then, the result is substituted into equation Ux = d which can
be solved by back substitution for {x}.
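• The two-step strategy above can be sketched in pure Python (a minimal Doolittle factorization without pivoting, so it assumes no zero pivot is encountered):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L*U.
    L has a unit diagonal and records the elimination multipliers."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]       # multiplier l_ik
            L[i][k] = factor
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]  # eliminate row i, column k
    return L, U

def lu_solve(L, U, b):
    """Substitution step: forward substitution for Ld = b, then
    back substitution for Ux = d."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):                       # Ld = b
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # Ux = d
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

• Once lu_factor has been called for [A], lu_solve can be reused cheaply for any number of right-hand-side vectors {b}.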
Example: LU Factorization with Gauss Elimination
• Problem Statement.
• Use LU factorization to solve:
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4
• Solution. This set of linear algebraic equations has the coefficient matrix [A] = [3 −0.1 −0.2; 0.1 7 −0.3; 0.3 −0.2 10].
Forward Elimination Steps:
1. R2 − (0.1/3) R1
2. R3 − (0.3/3) R1
3. R3 + (0.19/7.00333) R2
Replace the l21, l31 and l32 elements in the identity matrix to get the [L] matrix. Check that LU = A.
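• The factors recorded by these elimination steps can be checked numerically (the multipliers are written exactly here rather than rounded as on the slide):

```python
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]

l21 = 0.1 / 3                   # step 1: R2 - l21*R1
l31 = 0.3 / 3                   # step 2: R3 - l31*R1
u22 = 7.0  - l21 * (-0.1)       # 7.00333
u23 = -0.3 - l21 * (-0.2)       # -0.293333
a32 = -0.2 - l31 * (-0.1)       # -0.19
a33 = 10.0 - l31 * (-0.2)       # 10.02
l32 = a32 / u22                 # step 3 multiplier, -0.0271300
u33 = a33 - l32 * u23           # 10.0120

L = [[1.0, 0.0, 0.0], [l21, 1.0, 0.0], [l31, l32, 1.0]]
U = [[3.0, -0.1, -0.2], [0.0, u22, u23], [0.0, 0.0, u33]]

# Check that [L][U] reproduces [A] up to roundoff
product = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
max_err = max(abs(product[i][j] - A[i][j]) for i in range(3) for j in range(3))
```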
Overview of LU Factorization
• After the matrix is decomposed, a solution can be generated for a particular right-hand side vector {b}.
• This is done in two steps. First, a forward-substitution step is executed by solving Ld = b for {d}. It is important to
recognize that this merely amounts to performing the elimination manipulations on {b}.
• Thus, at the end of this step, the right-hand side will be in the same state that it would have been had we
performed forward manipulation on [A] and {b} simultaneously.
• The forward-substitution step can be represented concisely as:
d1 = b1,  di = bi − Σj=1 to i−1 lij dj,  for i = 2, 3, …, n
• The second step then merely amounts to implementing back substitution to solve Ux = d. Again, it is important
to recognize that this is identical to the back-substitution phase of conventional Gauss elimination:
xn = dn∕unn,  xi = (di − Σj=i+1 to n uij xj)∕uii,  for i = n − 1, n − 2, …, 1
Example: The Substitution Steps
• Problem Statement.
• Complete the previous example by generating the final solution with forward and back substitution.
• Solution.
• Recall that the system being solved has the same [A] as before with {b} = {7.85, −19.3, 71.4}T.
• The forward-elimination phase of conventional Gauss elimination resulted in [U] = [3 −0.1 −0.2; 0 7.00333 −0.293333; 0 0 10.0120] and the multipliers l21 = 0.0333333, l31 = 0.1, and l32 = −0.0271300 stored in [L].
Example: The Substitution Steps
• The forward-substitution phase is implemented by applying Ld = b, which yields {d} = {7.85, −19.5617, 70.0843}T.
• This result can then be substituted into Ux = d and solved by back substitution to find {x} = {3, −2.5, 7}T.
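• Both substitution phases can be traced in a few lines of Python (the multipliers are written exactly; the comments give the lecture's rounded values):

```python
# [L] and [U] entries from the factorization of the example system
l21, l31 = 0.1 / 3, 0.3 / 3
u22 = 7.0  - l21 * (-0.1)
u23 = -0.3 - l21 * (-0.2)
l32 = (-0.2 - l31 * (-0.1)) / u22
u33 = (10.0 - l31 * (-0.2)) - l32 * u23
b = [7.85, -19.3, 71.4]

# Forward substitution, Ld = b
d1 = b[0]                        # 7.85
d2 = b[1] - l21 * d1             # -19.5617 (rounded)
d3 = b[2] - l31 * d1 - l32 * d2  # 70.0843 (rounded)

# Back substitution, Ux = d
x3 = d3 / u33                                # about 7
x2 = (d2 - u23 * x3) / u22                   # about -2.5
x1 = (d1 - (-0.1) * x2 - (-0.2) * x3) / 3.0  # about 3
```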
LU Factorization with Pivoting
• Just as for standard Gauss elimination, partial pivoting is necessary to obtain reliable solutions with LU
factorization. One way to do this involves using a permutation matrix. The approach consists of the following
steps:
1. Elimination. The LU factorization with pivoting of a matrix [A] can be represented in matrix form as
[P][A] = [L][U]
• The upper triangular matrix, [U], is generated by elimination with partial pivoting, while storing the multiplier
factors in [L] and employing the permutation matrix, [P], to keep track of the row switches.
2. Forward substitution. The matrices [L] and [P] are used to perform the elimination step with pivoting on {b} in
order to generate the intermediate right-hand-side vector, {d}. This step can be represented concisely as the
solution of the following matrix formulation:
[L]{d} = [P]{b}
3. Back substitution. The final solution is generated in the same fashion as done previously for Gauss elimination.
This step can also be represented concisely as the solution of the matrix formulation:
[U]{x} = {d}
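• The elimination step with partial pivoting can be sketched in pure Python (a minimal version that records the row switches in a permutation list rather than a full matrix [P]):

```python
def plu_factor(A):
    """LU factorization with partial pivoting: P*A = L*U.
    Returns (perm, L, U), where perm[i] is the original row index
    that ends up in position i (the permutation encoded by [P])."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n - 1):
        # Partial pivoting: choose the largest magnitude in column k
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]          # switch rows of the work array
            L[k], L[p] = L[p], L[k]          # carry along stored multipliers
            perm[k], perm[p] = perm[p], perm[k]  # track the switch
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]      # store multiplier in [L]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    for i in range(n):
        L[i][i] = 1.0
    return perm, L, U
```

• To solve a system, first form [P]{b} as [b[perm[i]] for i in range(n)], then apply forward substitution to [L]{d} = [P]{b} and back substitution to [U]{x} = {d}.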
Example: LU Factorization with Pivoting
• Problem Statement.
• Compute the LU factorization and find the solution for this system:
0.0003x1 + 3x2 = 2.0001
x1 + x2 = 1
• Solution. Before elimination, we set up the initial permutation matrix [P] = [1 0; 0 1].
• It is obvious that pivoting is necessary (the pivot 0.0003 is much smaller in magnitude than the element below it), so prior to elimination we switch the rows of [A].
• At the same time, we keep track of the pivot by switching the rows of the permutation matrix to give [P] = [0 1; 1 0].
Example: LU Factorization with Pivoting
• Then eliminate a21 by subtracting l21 = a21∕a11 = 0.0003∕1 = 0.0003 times the first row from the second row of [A].
• In so doing, we compute that the new value of a′22 = 3 − 0.0003(1) = 2.9997.
• Thus, the elimination step is complete with the result [U] = [1 1; 0 2.9997], with the multiplier stored to give [L] = [1 0; 0.0003 1].
• Before implementing forward substitution, the permutation matrix is used to reorder the right-hand-side vector to reflect the pivots, as in [P]{b} = {1, 2.0001}T.
• Then, forward substitution is applied as in [L]{d} = [P]{b}.
Example: LU Factorization with Pivoting
• This can be solved for d1 = 1 and d2 = 2.0001 − 0.0003(1) = 1.9998. At this point, the system is [1 1; 0 2.9997]{x} = {1, 1.9998}T.
• Applying back substitution gives the final result: x2 = 1.9998∕2.9997 = 0.66667 and x1 = 1 − 0.66667 = 0.33333.
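• The whole pivoted solve for this 2 × 2 example can be traced in a few lines (the system coefficients here are reconstructed from the multipliers used above, so treat them as an assumption of this sketch):

```python
# System: 0.0003*x1 + 3*x2 = 2.0001  and  x1 + x2 = 1
b = [2.0001, 1.0]

# Rows are switched for pivoting, so work with [1 1; 0.0003 3]
Pb = [b[1], b[0]]                # [P]{b} = [1, 2.0001]
l21 = 0.0003 / 1.0
u22 = 3.0 - l21 * 1.0            # 2.9997

# Forward substitution, [L]{d} = [P]{b}
d1 = Pb[0]                       # 1
d2 = Pb[1] - l21 * d1            # 1.9998

# Back substitution, [U]{x} = {d}
x2 = d2 / u22                    # 0.66667 (2/3 to within roundoff)
x1 = (d1 - 1.0 * x2) / 1.0       # 0.33333 (1/3 to within roundoff)
```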
• The LU factorization algorithm requires the same total flops as for Gauss elimination.
• The only difference is that a little less effort is expended in the factorization phase since the operations are not
applied to the right-hand side. Conversely, the substitution phase takes a little more effort.
Example: LU Factorization with MATLAB
• Problem Statement.
• Use MATLAB to compute the LU factorization and find the solution for the same linear system analyzed in the
previous example:
• Solution. The coefficient matrix and the right-hand-side vector can be entered as:
>> A = [3 -0.1 -0.2;.1 7 -0.3;.3 -0.2 10];
>> b = [7.85; -19.3; 71.4];
• Next, the LU factorization can be computed with
>> [L,U] = lu(A)
• Note that with this two-output syntax, MATLAB folds the row switches from partial pivoting into [L], so [L] may not be strictly lower triangular; the three-output form [L,U,P] = lu(A) returns a true lower triangular [L] along with the permutation matrix [P].
Example: LU Factorization with MATLAB
To check the answer:
>> L*U
• To generate the solution, we first compute
>> d = L\b
• And then use this result to compute the solution
>> x = U\d
Cholesky Factorization
• Special solution techniques are available for systems with a symmetric coefficient matrix, [A] = [A]T. They offer
computational advantages because only half the storage is needed and only half the computation time is
required for their solution.
• One of the most popular approaches involves Cholesky factorization (also called Cholesky decomposition). This
algorithm is based on the fact that a symmetric positive definite matrix can be decomposed into an upper triangular matrix [U] and its transpose, as in
[A] = [U]T [U]
• The factorization can be generated efficiently by recurrence relations. For the ith row:
uii = √(aii − Σk=1 to i−1 uki²)
uij = (aij − Σk=1 to i−1 uki ukj)∕uii,  for j = i + 1, …, n
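• These recurrences translate directly into a short function (a minimal sketch for symmetric positive definite input, with no error checking):

```python
import math

def cholesky(A):
    """Cholesky factorization of a symmetric positive definite A,
    returning the upper triangular U such that A = U^T * U."""
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Diagonal term: u_ii = sqrt(a_ii - sum of u_ki^2 over k < i)
        U[i][i] = math.sqrt(A[i][i] - sum(U[k][i] ** 2 for k in range(i)))
        # Remaining terms of row i
        for j in range(i + 1, n):
            U[i][j] = (A[i][j]
                       - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]
    return U
```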
Example: Cholesky Factorization
• Problem Statement. Compute the Cholesky factorization for the symmetric matrix
[A] = [6 15 55; 15 55 225; 55 225 979]
• Solution. For the first row (i = 1), the uii equation is employed to compute u11 = √6 = 2.44949.
• Then, the uij equation can be used to determine u12 = 15∕2.44949 = 6.12372 and u13 = 55∕2.44949 = 22.4537.
Example: Cholesky Factorization
• For the second row (i = 2): u22 = √(55 − (6.12372)²) = 4.18330 and u23 = (225 − 6.12372(22.4537))∕4.18330 = 20.9165.
• For the third row (i = 3): u33 = √(979 − (22.4537)² − (20.9165)²) = 6.11010.
• Thus, the Cholesky factorization yields
[U] = [2.44949 6.12372 22.4537; 0 4.18330 20.9165; 0 0 6.11010]
Example: Cholesky Factorization with MATLAB
• Problem Statement. Use MATLAB to compute the Cholesky factorization for this matrix.
• Also obtain a solution for a right-hand-side vector that is the sum of the rows of [A]. Note that for this case, the
answer will be a vector of ones.
• Solution. The matrix is entered in standard fashion as
>> A = [6 15 55; 15 55 225; 55 225 979];
• A right-hand-side vector that is the sum of the rows of [A] can be generated as
>> b = [sum(A(1,:)); sum(A(2,:)); sum(A(3,:))]
Example: Cholesky Factorization with MATLAB
• Next, the Cholesky factorization can be computed with
>> U = chol(A)
• We can test that this is correct by computing the original matrix as
>> U'*U
• To generate the solution, we first compute
>> d = U'\b
>> x = U\d
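• The two triangular solves MATLAB performs here can be mirrored in pure Python: forward substitution with the lower triangular [U]T, then back substitution with [U] (the Cholesky recurrences from earlier in the lecture are repeated inline):

```python
import math

# Same matrix as the MATLAB example; b is the row sums, so x should be all ones
A = [[6.0, 15.0, 55.0], [15.0, 55.0, 225.0], [55.0, 225.0, 979.0]]
b = [sum(row) for row in A]
n = 3

# Cholesky factor: A = U^T * U
U = [[0.0] * n for _ in range(n)]
for i in range(n):
    U[i][i] = math.sqrt(A[i][i] - sum(U[k][i] ** 2 for k in range(i)))
    for j in range(i + 1, n):
        U[i][j] = (A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]

# Forward substitution with U^T (lower triangular): U^T d = b
d = [0.0] * n
for i in range(n):
    d[i] = (b[i] - sum(U[j][i] * d[j] for j in range(i))) / U[i][i]

# Back substitution with U: U x = d
x = [0.0] * n
for i in range(n - 1, -1, -1):
    x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
```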
MATLAB Left Division
• We previously introduced left division without any explanation of how it works. Now that we have some
background on matrix solution techniques, we can provide a simplified description of its operation.
• When we implement left division with the backslash operator, MATLAB invokes a highly sophisticated algorithm
to obtain a solution. In essence, MATLAB examines the structure of the coefficient matrix and then implements
an optimal method to obtain the solution. A simplified overview can be outlined:
• First, MATLAB checks to see whether [A] is in a format where a solution can be obtained without full Gauss
elimination. These include systems that are (a) sparse and banded, (b) triangular (or easily transformed into
triangular form), or (c) symmetric. If any of these cases are detected, the solution is obtained with the efficient
techniques that are available for such systems. Some of the techniques include banded solvers, back and forward
substitution, and Cholesky factorization.
• If none of these simplified solutions is possible and the matrix is square, a general triangular factorization is computed by Gauss elimination with partial pivoting and the solution is obtained with substitution. (If [A] is not square, a least-squares solution is obtained with an approach called QR factorization.)