
Let $$ A=\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 3 & 0 \\ -.5 & 0 & -.2 & 1 \\ -.5 & -.3 & 1 & 0 \end{array}\right) $$ Decompose \(A\) using partial pivoting as $$ P A=L U $$ where \(U\) is upper triangular, \(L\) is unit lower triangular and \(P\) is a permutation matrix. Record all the elementary matrices, \(P^{(i)}\), and \(M^{(i)}\) along the way. (You can use MATLAB to help you with these small, annoying calculations.) Show that \(L\) relates to \(\left[M^{(3)}\right]^{-1}\left[M^{(2)}\right]^{-1}\left[M^{(1)}\right]^{-1}\) in the way explained in Section \(5.3\), and find \(\tilde{M}^{(1)}, \tilde{M}^{(2)}\), and \(\tilde{M}^{(3)}\).

Short Answer

$$ L = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -.5 & -.3 & 1 & 0 \\ -.5 & 0 & .52 & 1 \end{array}\right), $$ with the permutation \(P\) interchanging rows 1 and 2 and rows 3 and 4.

Step by step solution

01

Find the first pivot and interchange rows

First, identify the entry of largest absolute value in the first column as the pivot element. It is the 1 in the second row, so swap the first and second rows using the permutation matrix \(P^{(1)}\): $$ P^{(1)}= \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$ Applying the permutation matrix to \(A\) gives $$ P^{(1)} A=\left(\begin{array}{cccc} 1 & 0 & 3 & 0 \\ 0 & 1 & 0 & 0 \\ -.5 & 0 & -.2 & 1 \\ -.5 & -.3 & 1 & 0 \end{array}\right) $$
02

Eliminate first column elements below the pivot

To eliminate the entries in the first column below the pivot, add \(1/2\) of the first row to the third and fourth rows, i.e., apply the elementary matrix $$ M^{(1)} =\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1/2 & 0 & 1 & 0 \\ 1/2 & 0 & 0 & 1 \end{array}\right) $$ The resulting matrix is $$ M^{(1)} P^{(1)} A=\left(\begin{array}{cccc} 1 & 0 & 3 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1.3 & 1 \\ 0 & -.3 & 2.5 & 0 \end{array}\right) $$ (for instance, the \((3,3)\) entry is \(-.2 + \frac{1}{2}\cdot 3 = 1.3\), and the \((4,3)\) entry is \(1 + \frac{1}{2}\cdot 3 = 2.5\)).
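As a quick numerical sanity check of this step (an aside, not part of the textbook solution), the following pure-Python snippet applies \(M^{(1)}\) to \(P^{(1)}A\) by adding half of the first row to the third and fourth rows:

```python
# Rows of P^(1) A from the previous step.
PA = [[1.0, 0.0, 3.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [-0.5, 0.0, -0.2, 1.0],
      [-0.5, -0.3, 1.0, 0.0]]

# Applying M^(1): add 1/2 of row 1 to rows 3 and 4 (0-indexed rows 2 and 3).
for i in (2, 3):
    PA[i] = [PA[i][j] + 0.5 * PA[0][j] for j in range(4)]

# Up to rounding, row 3 is now [0, 0, 1.3, 1] and row 4 is [0, -0.3, 2.5, 0].
print(PA[2])
print(PA[3])
```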
03

Find the second pivot

For the second pivot, focus on the second column of the submatrix starting from the second row: its entries are \(1, 0,\) and \(-.3\). The entry of largest absolute value, \(1\), is already in the pivot position, so no interchange is needed: \(P^{(2)} = I\). The matrix is unchanged: $$ P^{(2)} M^{(1)} P^{(1)} A=\left(\begin{array}{cccc} 1 & 0 & 3 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1.3 & 1 \\ 0 & -.3 & 2.5 & 0 \end{array}\right) $$
04

Eliminate second column element below the pivot

The only nonzero entry below the second pivot is the \(-.3\) in the fourth row. Eliminate it by adding \(.3\) of the second row to the fourth row, i.e., by applying the elementary matrix $$ M^{(2)} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & .3 & 0 & 1 \end{array}\right) $$ Applying \(M^{(2)}\) gives the resulting matrix $$ M^{(2)} P^{(2)} M^{(1)} P^{(1)} A=\left(\begin{array}{cccc} 1 & 0 & 3 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1.3 & 1 \\ 0 & 0 & 2.5 & 0 \end{array}\right) $$
05

Find the third pivot, interchange rows, and complete the elimination

For the third pivot, compare the entries \(1.3\) and \(2.5\) in the third column of rows 3 and 4. The larger in absolute value is \(2.5\), so swap the third and fourth rows using the permutation matrix \(P^{(3)}\): $$ P^{(3)} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$ Then eliminate the remaining subdiagonal entry with the multiplier \(1.3/2.5 = .52\): $$ M^{(3)} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -.52 & 1 \end{array}\right) $$ This yields the upper triangular factor $$ U = M^{(3)} P^{(3)} M^{(2)} P^{(2)} M^{(1)} P^{(1)} A=\left(\begin{array}{cccc} 1 & 0 & 3 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2.5 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) $$
06

Compute \(L\) and the modified matrices \(\tilde{M}^{(i)}\)

The overall permutation is $$ P = P^{(3)} P^{(2)} P^{(1)} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$ As explained in Section \(5.3\), \(L\) is not simply \(\left[M^{(3)}\right]^{-1}\left[M^{(2)}\right]^{-1}\left[M^{(1)}\right]^{-1}\): the multipliers recorded in each \(M^{(i)}\) must first be carried along by the row interchanges performed after step \(i\). Define $$ \tilde{M}^{(1)} = P^{(3)} P^{(2)} M^{(1)} \left[P^{(2)}\right]^{-1}\left[P^{(3)}\right]^{-1}, \quad \tilde{M}^{(2)} = P^{(3)} M^{(2)} \left[P^{(3)}\right]^{-1}, \quad \tilde{M}^{(3)} = M^{(3)}. $$ Since \(P^{(2)} = I\) and \(P^{(3)}\) interchanges rows and columns 3 and 4, we get \(\tilde{M}^{(1)} = M^{(1)}\) (its two multipliers \(1/2\) simply trade places), while \(\tilde{M}^{(2)}\) has its multiplier moved from position \((4,2)\) to \((3,2)\): $$ \tilde{M}^{(2)} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & .3 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) $$ Then \(U = \tilde{M}^{(3)} \tilde{M}^{(2)} \tilde{M}^{(1)} P A\), so $$ L = \left[\tilde{M}^{(1)}\right]^{-1}\left[\tilde{M}^{(2)}\right]^{-1}\left[\tilde{M}^{(3)}\right]^{-1} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -.5 & -.3 & 1 & 0 \\ -.5 & 0 & .52 & 1 \end{array}\right) $$ Each subdiagonal entry of \(L\) is just the negative of the corresponding (permuted) multiplier, and one can verify directly that \(PA = LU\).
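As a check on the whole computation, here is a minimal pure-Python sketch of LU decomposition with partial pivoting (the exercise suggests MATLAB, e.g. its built-in `lu`; Python is used here so the snippet is self-contained, and the function name `lu_partial_pivoting` is ours). It factors the exercise's matrix and verifies \(PA = LU\):

```python
def lu_partial_pivoting(A):
    """Return (perm, L, U) with P A = L U, where P reorders rows by perm."""
    n = len(A)
    U = [row[:] for row in A]              # working copy; becomes U
    L = [[0.0] * n for _ in range(n)]      # multipliers accumulate here
    perm = list(range(n))                  # the permutation P, stored as a row order
    for k in range(n - 1):
        # pivot: entry of largest absolute value in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:                         # row interchange (the P^(k+1) step)
            U[k], U[p] = U[p], U[k]
            L[k], L[p] = L[p], L[k]        # carry earlier multipliers along
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):          # elimination (the M^(k+1) step)
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    for i in range(n):
        L[i][i] = 1.0                      # unit lower triangular
    return perm, L, U

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 3.0, 0.0],
     [-0.5, 0.0, -0.2, 1.0],
     [-0.5, -0.3, 1.0, 0.0]]

perm, L, U = lu_partial_pivoting(A)
PA = [A[p] for p in perm]                  # applying P just reorders the rows
LU = matmul(L, U)
assert all(abs(PA[i][j] - LU[i][j]) < 1e-12 for i in range(4) for j in range(4))
print(perm)                                # → [1, 0, 3, 2]: rows 2, 1, 4, 3 of A
```

Note that swapping the partially built rows of `L` during pivoting is exactly the \(\tilde{M}\) bookkeeping of Section 5.3: multipliers formed before an interchange travel with their rows.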


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

LU Decomposition
The LU decomposition is a method used in numerical analysis to factor a given square matrix into two matrices: a lower triangular matrix (L) and an upper triangular matrix (U). The main idea behind LU decomposition is to simplify complex matrix operations by breaking them into manageable pieces. This is particularly useful in solving systems of linear equations, inverting matrices, and computing determinants.

The process starts with a matrix A and transforms it step by step using elementary row operations until the matrix is in upper triangular form; this is the matrix U. Simultaneously, the row operations are recorded as elementary matrices, and the product of their inverses (with the multipliers permuted to account for any later row interchanges) gives the matrix L. Partial pivoting, as seen in the example, is a strategy that provides numerical stability to the decomposition: at each stage, rows are interchanged so that the entry of largest absolute value in the working column becomes the pivot.

LU decomposition can be performed manually on smaller matrices or with computer aid for larger and more complex ones, as the exercise itself suggests by recommending MATLAB. This reduces the risk of calculation errors and saves time.
Permutation Matrix
A permutation matrix is a square matrix that results from permuting the rows of an identity matrix. In the context of LU decomposition with partial pivoting, permutation matrices are used to swap rows of the original matrix to bring the largest pivot into position. These matrices are a key part of ensuring the numerical stability of the decomposition process.

Permutation matrices are orthogonal, so the inverse of a permutation matrix equals its transpose. Note that a single row interchange has determinant \(-1\): applying it flips the sign of the determinant of the matrix it multiplies while leaving its magnitude unchanged. In the solution above, permutation matrices were applied to bring the entry of largest absolute value into the pivot position at each elimination stage.

The fact that permutation matrices are binary—containing only zeros and ones, with exactly one entry of 1 in each row and each column—makes them particularly straightforward to apply and interpret in matrix algebra operations.
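As a small illustration of these properties (an aside, using the interchange matrix from this exercise), the following pure-Python check confirms that the transpose of \(P^{(1)}\) is its inverse:

```python
# P^(1) from the exercise: swaps rows 1 and 2.
P1 = [[0, 1, 0, 0],
      [1, 0, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, 1]]

transpose = [list(col) for col in zip(*P1)]
# Multiply P^T by P; for a permutation matrix this must give the identity.
product = [[sum(transpose[i][k] * P1[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert product == identity      # P^{-1} = P^T
```

A single interchange is in fact symmetric, so here \(P^{(1)}\) is also its own inverse.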
Elementary Matrices
Elementary matrices represent the simplest form of row operations that can be performed on a matrix. These include row swapping, row multiplication, and row addition, or a combination of these. In the LU decomposition process, we use a series of elementary matrices to transform the original matrix A step by step until it becomes an upper triangular matrix U.

In the example provided, the matrices \(M^{(1)}\), \(M^{(2)}\), and \(M^{(3)}\) were used to eliminate the entries below the pivots. Interleaved with the permutation matrices, they reduce \(A\) step by step, producing \(M^{(i)} P^{(i)} \cdots M^{(1)} P^{(1)} A\) after stage \(i\).

A crucial aspect to remember is that to find matrix L, we use the inverses of these elementary matrices, demonstrating a fundamental property: an elementary matrix's inverse is also an elementary matrix, but representing the opposite row operation.
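To make the inverse property concrete (an aside, using \(M^{(1)}\) from this exercise), note that a single Gauss transform—whose multipliers sit in one column below the diagonal—is inverted simply by negating those multipliers:

```python
# M^(1) from the exercise: adds 1/2 of row 1 to rows 3 and 4.
M1 = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.5, 0.0, 1.0, 0.0],
      [0.5, 0.0, 0.0, 1.0]]

# The inverse represents the opposite row operation: subtracting 1/2 of
# row 1 from rows 3 and 4, i.e., the same matrix with negated multipliers.
M1_inv = [row[:] for row in M1]
for i in (2, 3):
    M1_inv[i][0] = -M1_inv[i][0]

product = [[sum(M1[i][k] * M1_inv[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert product == identity      # M^(1) times its inverse is I
```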


