
The linear system $$ \begin{aligned} &10 x_{1}+x_{2}+x_{3}=12 \\ &x_{1}+10 x_{2}+x_{3}=12 \\ &x_{1}+x_{2}+10 x_{3}=12 \end{aligned} $$ has the unique solution \(x_{1}=x_{2}=x_{3}=1\). Starting from \(\mathbf{x}_{0}=(0,0,0)^{T}\), perform two iterations using Jacobi and one iteration using Gauss-Seidel. Calculate error norms for the three iterations in \(\ell_{1}\). Which method seems to converge faster?

Short Answer

Answer: The Gauss-Seidel method seems to converge faster: after a single iteration its \(\ell_{1}\) error norm is 0.308, roughly half the error of 0.6 produced by a single Jacobi iteration, and Jacobi needs a second iteration to bring the error down to 0.12.

Step by step solution

01

Jacobi Method Iteration 1

In the given system of equations, we can rewrite each equation as: $$ \begin{aligned} x_{1} &= \frac{12 - x_{2} - x_{3}}{10} \\ x_{2} &= \frac{12 - x_{1} - x_{3}}{10} \\ x_{3} &= \frac{12 - x_{1} - x_{2}}{10} \end{aligned} $$ Starting from \(\mathbf{x}_{0}=(0,0,0)^{T}\), we can now compute the first iteration for the Jacobi method: $$ \begin{aligned} x_{1}^{(1)} &= \frac{12 - 0 - 0}{10} = 1.2\\ x_{2}^{(1)} &= \frac{12 - 0 - 0}{10} = 1.2\\ x_{3}^{(1)} &= \frac{12 - 0 - 0}{10} = 1.2 \end{aligned} $$ Now we have, \(\mathbf{x}^{(1)} = (1.2, 1.2, 1.2)^{T}\).
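To make these calculations easy to reproduce, here is a minimal Python sketch of one Jacobi sweep (assuming NumPy is available; the helper name `jacobi_step` is ours, not from the textbook):

```python
import numpy as np

A = np.array([[10.0, 1.0, 1.0],
              [1.0, 10.0, 1.0],
              [1.0, 1.0, 10.0]])
b = np.array([12.0, 12.0, 12.0])

def jacobi_step(A, b, x):
    """One Jacobi sweep: every component is computed from the *previous* iterate."""
    D = np.diag(A)              # diagonal entries a_ii
    R = A - np.diag(D)          # off-diagonal part of A
    return (b - R @ x) / D

x1 = jacobi_step(A, b, np.zeros(3))   # -> [1.2, 1.2, 1.2]
x2 = jacobi_step(A, b, x1)            # -> [0.96, 0.96, 0.96]
```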
02

Jacobi Method Iteration 2

Using the values from \(\mathbf{x}^{(1)}\) for the second iteration of the Jacobi method: $$ \begin{aligned} x_{1}^{(2)} &= \frac{12 - 1.2 - 1.2}{10} = 0.96 \\ x_{2}^{(2)} &= \frac{12 - 1.2 - 1.2}{10} = 0.96 \\ x_{3}^{(2)} &= \frac{12 - 1.2 - 1.2}{10} = 0.96 \end{aligned} $$ Now we have, \(\mathbf{x}^{(2)} = (0.96, 0.96, 0.96)^{T}\).
03

Gauss-Seidel Method Iteration 1

For the Gauss-Seidel method we use the same rewritten equations as in the Jacobi method, but each newly computed value is used immediately in the remaining updates of the same sweep. Using the initial \(\mathbf{x}_{0}=(0,0,0)^{T}\), we compute the first iteration for the Gauss-Seidel method: $$ \begin{aligned} x_{1}^{'} &= \frac{12 - 0 - 0}{10} = 1.2 \\ x_{2}^{'} &= \frac{12 - 1.2 - 0}{10} = 1.08 \\ x_{3}^{'} &= \frac{12 - 1.2 - 1.08}{10} = 0.972 \end{aligned} $$ Now we have \(\mathbf{x}^{'} = (1.2, 1.08, 0.972)^{T}\).
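A matching sketch of one Gauss-Seidel sweep (again only an illustration; NumPy assumed, and `gauss_seidel_step` is our own name). The key difference from the Jacobi sketch above is that each component is overwritten and reused within the same sweep:

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: new values are used as soon as they are available."""
    x = x.copy()
    for i in range(len(b)):
        # sum over already-updated components (j < i) and not-yet-updated ones (j > i)
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - sigma) / A[i, i]
    return x

A = np.array([[10.0, 1.0, 1.0],
              [1.0, 10.0, 1.0],
              [1.0, 1.0, 10.0]])
b = np.array([12.0, 12.0, 12.0])
print(gauss_seidel_step(A, b, np.zeros(3)))   # -> [1.2, 1.08, 0.972]
```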
04

Compute Error Norms in \(\ell_{1}\)

To determine which method converges faster, we compute the \(\ell_{1}\) error norms for each iterate. The exact solution is \(\mathbf{x} = (1,1,1)^{T}\). Error norm after the 1st Jacobi iteration: $$ \|\mathbf{x}^{(1)} - \mathbf{x}\|_1 = |1.2-1| + |1.2-1| + |1.2-1| = 0.6 $$ Error norm after the 2nd Jacobi iteration: $$ \|\mathbf{x}^{(2)} - \mathbf{x}\|_1 = |0.96-1| + |0.96-1| + |0.96-1| = 0.12 $$ Error norm after the 1st Gauss-Seidel iteration: $$ \|\mathbf{x}^{'} - \mathbf{x}\|_1 = |1.2-1| + |1.08-1| + |0.972-1| = 0.2 + 0.08 + 0.028 = 0.308 $$
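These norms can be checked in a couple of lines; the sketch below simply evaluates \(\|\mathbf{x}-\mathbf{x}_{\text{exact}}\|_1\) for the three iterates computed above (NumPy assumed):

```python
import numpy as np

x_exact = np.ones(3)
iterates = {
    "Jacobi, 1 iteration":       np.array([1.2, 1.2, 1.2]),
    "Jacobi, 2 iterations":      np.array([0.96, 0.96, 0.96]),
    "Gauss-Seidel, 1 iteration": np.array([1.2, 1.08, 0.972]),
}
for label, x in iterates.items():
    err = np.linalg.norm(x - x_exact, ord=1)   # l1 norm: sum of absolute differences
    print(f"{label}: {err:.3f}")               # 0.600, 0.120, 0.308
```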
05

Results and Conclusions

Comparing the error norms: starting from an initial error of \(\|\mathbf{x}_0 - \mathbf{x}\|_1 = 3\), one Jacobi iteration reduces the error to 0.6 and a second iteration reduces it to 0.12, while a single Gauss-Seidel iteration already brings it down to 0.308, roughly half the error of one Jacobi iteration. On a per-iteration basis, the Gauss-Seidel method therefore seems to converge faster in this case.
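To see the per-iteration behaviour beyond the iterations computed by hand, here is a short self-contained sketch that reuses the update formulas from the earlier snippets, runs a few sweeps of each method, and prints the \(\ell_1\) errors (the numbers beyond the first iterations are our own illustration, not part of the exercise):

```python
import numpy as np

A = np.array([[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]])
b = np.array([12.0, 12.0, 12.0])
x_exact = np.ones(3)

def jacobi_step(x):
    D = np.diag(A)
    return (b - (A - np.diag(D)) @ x) / D

def gauss_seidel_step(x):
    x = x.copy()
    for i in range(3):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - sigma) / A[i, i]
    return x

xj = xg = np.zeros(3)
for k in range(1, 5):
    xj, xg = jacobi_step(xj), gauss_seidel_step(xg)
    print(k, np.linalg.norm(xj - x_exact, 1), np.linalg.norm(xg - x_exact, 1))
# The Jacobi error shrinks by a factor of 5 per sweep, while the Gauss-Seidel
# error shrinks considerably faster, consistent with the conclusion above.
```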


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Jacobi Method
The Jacobi method is a classic iterative technique used to solve linear systems of equations. This method calculates each component of the solution vector independently at each iteration, making it particularly well-suited for parallel computing environments.

To apply the Jacobi method, you solve each equation for its own variable, as demonstrated in the given exercise. Convergence is guaranteed when the matrix is strictly diagonally dominant, meaning the magnitude of each diagonal coefficient exceeds the sum of the magnitudes of the other coefficients in its row; that is the case here, since \(10 > 1 + 1\) in every row.

In the context of the example, the initial guess \(\mathbf{x}_0=(0,0,0)^T\) is used to predict the next set of solutions \(\mathbf{x}^{(1)}\) and then \(\mathbf{x}^{(2)}\) by iteratively substituting the values into the adjusted equations. After two iterations using the Jacobi method, we observe that the values are converging towards the true solution.
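As a quick check of the diagonal-dominance condition mentioned above, a small sketch (NumPy assumed; the helper name is ours) tests strict row diagonal dominance, which guarantees that the Jacobi iteration converges:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off_diag_sums = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

A = np.array([[10.0, 1.0, 1.0],
              [1.0, 10.0, 1.0],
              [1.0, 1.0, 10.0]])
print(is_strictly_diagonally_dominant(A))   # True: 10 > 1 + 1 in every row
```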
Gauss-Seidel Method
The Gauss-Seidel method, another common iterative technique, differs from the Jacobi method primarily in its approach to using the most recent values. It sequentially updates each component of the solution vector, which potentially accelerates the convergence rate as the latest updates are utilized immediately in subsequent computations.

For the system provided, the Gauss-Seidel method starts with the same initial guess as the Jacobi method. But rather than calculating all variables independently, once the new value of \(x_1^{'}\) is found, it is directly used in the calculation of \(x_2^{'}\), and then both \(x_1^{'}\) and \(x_2^{'}\) are used for \(x_3^{'}\). A single Gauss-Seidel iteration already shows promising convergence: its error is roughly half that of a single Jacobi iteration, suggesting that fewer Gauss-Seidel iterations will be needed overall.
Error Norms
Error norms are invaluable tools in numerical analysis to measure the accuracy of iterative methods. When approximating solutions, it's crucial to quantify how 'far' our estimates are from the actual values. One commonly used error norm is the \(\ell_1\) norm, which sums the absolute differences of the respective components of two vectors.

For our example, we compute the \(\ell_1\) norm after each iteration for both the Jacobi and Gauss-Seidel methods, using the exact solution \(\mathbf{x} = (1,1,1)^T\) for comparison. As shown in the step-by-step solution, the error decreases with each successive iteration. Examining the error norms, the Gauss-Seidel method attains a lower error after one iteration (0.308) than the Jacobi method does after one iteration (0.6), which indicates a faster rate of convergence per iteration and suggests it may be the more efficient choice in cases like this one.


Most popular questions from this chapter

Show that the error bound on \(\left\|\mathbf{e}_{k}\right\|_{A}\) given in the CG Convergence Theorem on page 187 implies that the number of iterations \(k\) required to achieve an error reduction by a constant amount is bounded by \(\sqrt{\kappa(A)}\) times a modest constant. [Hint: Recall the discussion of convergence rate from Section 7.3. Also, by Taylor's expansion we can write $$ \ln (1 \pm \varepsilon) \approx \pm \varepsilon $$ for \(0 \leq \varepsilon \ll 1\).]

Let \(A\) be a general nonsymmetric nonsingular square matrix, and consider the following two alternatives. The first is applying GMRES to solve the linear system \(A \mathbf{x}=\mathbf{b}\); the second is applying CG to the normal equations $$ A^{T} A \mathbf{x}=A^{T} \mathbf{b} $$ We briefly discussed this in Section 7.5; the method we mentioned in that context was CGLS. (a) Suppose your matrix \(A\) is nearly orthogonal. Which of the two solvers is expected to converge faster? (b) Suppose your matrix is block diagonal relative to \(2 \times 2\) blocks, where the \(j\)th block is given by $$ \left(\begin{array}{cc} 1 & j-1 \\ 0 & 1 \end{array}\right) $$ with \(j=1, \ldots, n / 2\). Which of the two solvers is expected to converge faster? [Hint: Consider the eigenvalues and the singular values of the matrices.]

Suppose CG is applied to a symmetric positive definite linear system \(A \mathbf{x}=\mathbf{b}\) where the right-hand-side vector \(\mathbf{b}\) happens to be an eigenvector of the matrix \(A\). How many iterations will it take to converge to the solution? Does your answer change if \(A\) is not SPD and instead of CG we apply GMRES?

Consider the class of \(n \times n\) matrices \(A\) defined in Example 7.1. Denote \(N=\sqrt{n}\) and \(h=1/(N+1)\). (a) Derive a formula for computing the condition number of \(A\) in any norm you want without actually forming this matrix. How does \(\kappa(A)\) depend on \(n\)? (b) Write a program that solves the system of equations \(A \mathbf{x}=\mathbf{b}\) for a given \(N\), where the right-hand side \(\mathbf{b}\) is defined to have the same value \(h^{2}\) in all its \(n\) locations. Your program should apply Jacobi iterations without ever storing \(A\) or any other \(n \times n\) matrix, terminating when the residual \(\mathbf{r}=\mathbf{b}-A \mathbf{x}\) satisfies $$ \|\mathbf{r}\| \leq \operatorname{tol}\|\mathbf{b}\| $$ In addition, your program should compute the condition number. Test the program on a small example, say, \(N=3\), before proceeding further. (c) Apply your program for the problem instances \(N=2^{l}-1, l=2,3,4,5\), using \(\operatorname{tol}=10^{-5}\). For each \(l\), record \(n\), the number of Jacobi iterations required to reduce the relative residual to below the tolerance, and the condition number. What relationship do you observe between \(n\) and the iteration count? Confirm the theoretical relationship between \(n\) and \(\kappa(A)\) obtained in Part (a). Explain your observations. (You may wish to plot \(\kappa(A)\) vs. \(n\), for instance, to help you focus this observation.)

The smoothing factor \(\mu^{*}\) for a discrete operator is defined as the worst (i.e., smallest) factor by which high frequency components are reduced in a single relaxation step. For the two-dimensional Laplacian we have discussed throughout this chapter and a basic relaxation scheme, this can be stated as follows. Suppose \(\mathbf{e}_{0}\) is the error before a relaxation step associated with a stationary iteration matrix \(T\) and \(\mathbf{e}_{1}\) the error after that step, and write $$ \mathbf{e}_{0}=\sum_{l, m=1}^{N} \alpha_{l, m} \mathbf{v}_{l, m} $$ where \(\{\mathbf{v}_{l, m}\}_{l, m=1}^{N}\) are the eigenvectors of the iteration matrix. Then $$ \mathbf{e}_{1}=\sum_{l, m=1}^{N} \alpha_{l, m} \mu_{l, m} \mathbf{v}_{l, m} $$ where \(\{\mu_{l, m}\}_{l, m=1}^{N}\) are eigenvalues of the iteration matrix. The smoothing factor is thus given by $$ \mu^{*}=\max \left\{\left|\mu_{l, m}\right|: \frac{N+1}{2} \leq l, m \leq N\right\} $$ (a) Denote the discrete Laplacian by \(A\) and the iteration matrix for damped Jacobi by \(T_{\omega}\). Confirm that the eigenvectors of \(A\) are the same as the eigenvectors of \(T_{\omega}\) for this scheme. (If you have already worked on Exercise 11, this should be old news.) (b) Show that the optimal \(\omega\) that gives the smallest smoothing factor over \(0 \leq \omega \leq 1\) for the two-dimensional Laplacian is \(\omega^{*}=\frac{4}{5}\), and find the smoothing factor \(\mu^{*}=\mu^{*}\left(\omega^{*}\right)\) in this case. Note: \(\mu^{*}\) should not depend on the mesh size. (c) Show that Jacobi (i.e., the case \(\omega=1\)) is not an effective smoother.
