Chapter 2: Q20Q (page 93)
If \(n \times n\) matrices \(E\) and \(F\) have the property that \(EF = I\), then \(E\) and \(F\) commute. Explain why.
Short Answer
Matrices \(E\) and \(F\) are inverses of each other, so \(FE = I = EF\) and the two matrices commute.
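A brief sketch of the reasoning, using the fact (established for square matrices in this chapter) that a square matrix with a one-sided inverse is invertible:
\[
EF = I \;\Longrightarrow\; E \text{ is invertible and } F = E^{-1}(EF) = E^{-1}I = E^{-1},
\]
so
\[
FE = E^{-1}E = I = EF,
\]
that is, \(E\) and \(F\) commute.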
Related exercises from this chapter
Suppose \(A\) and \(B\) are \(n \times n\), \(B\) is invertible, and \(AB\) is invertible. Show that \(A\) is invertible. (Hint: Let \(C = AB\), and solve this equation for \(A\).)
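A short sketch following the hint:
\[
C = AB \;\Longrightarrow\; A = CB^{-1} = (AB)B^{-1},
\]
and a product of two invertible matrices is invertible, with \(A^{-1} = B(AB)^{-1}\).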
Suppose \(CA = I_n\) (the \(n \times n\) identity matrix). Show that the equation \(Ax = 0\) has only the trivial solution. Explain why \(A\) cannot have more columns than rows.
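One way to argue, sketched in the pivot-counting language of Chapter 1 (here \(A\) is taken to be \(m \times n\), so that \(CA\) can equal \(I_n\)): if \(A\mathbf{x} = \mathbf{0}\), multiplying on the left by \(C\) gives
\[
\mathbf{x} = I_n\mathbf{x} = (CA)\mathbf{x} = C(A\mathbf{x}) = C\mathbf{0} = \mathbf{0},
\]
so the equation has only the trivial solution. Hence the columns of \(A\) are linearly independent, which forces a pivot position in every column; since an \(m \times n\) matrix has at most \(m\) pivot positions, \(A\) cannot have more columns than rows (\(n \le m\)).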
In Exercises 5 and 6, compute the product \(AB\) in two ways: (a) by the definition, where \(A\mathbf{b}_1\) and \(A\mathbf{b}_2\) are computed separately, and (b) by the row-column rule for computing \(AB\).
\(A = \begin{pmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{pmatrix}\), \(B = \begin{pmatrix} 1 & 3 \\ 2 & -1 \end{pmatrix}\)
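A worked check for these matrices, computing by the definition first and then confirming entries with the row-column rule (both methods must give the same product):
\[
A\mathbf{b}_1 = 1\begin{pmatrix} 4 \\ -3 \\ 3 \end{pmatrix} + 2\begin{pmatrix} -2 \\ 0 \\ 5 \end{pmatrix} = \begin{pmatrix} 0 \\ -3 \\ 13 \end{pmatrix}, \qquad
A\mathbf{b}_2 = 3\begin{pmatrix} 4 \\ -3 \\ 3 \end{pmatrix} - 1\begin{pmatrix} -2 \\ 0 \\ 5 \end{pmatrix} = \begin{pmatrix} 14 \\ -9 \\ 4 \end{pmatrix},
\]
so
\[
AB = \begin{pmatrix} 0 & 14 \\ -3 & -9 \\ 13 & 4 \end{pmatrix};
\]
the row-column rule reproduces each entry, e.g. \((AB)_{11} = 4(1) + (-2)(2) = 0\) and \((AB)_{32} = 3(3) + 5(-1) = 4\).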
In Exercises 1–9, assume that the matrices are partitioned conformably for block multiplication. Compute the products shown in Exercises 1–4.
1. \(\begin{bmatrix} I & \mathbf{0} \\ E & I \end{bmatrix}\begin{bmatrix} A & B \\ C & D \end{bmatrix}\)
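A sketch of the computation, treating the blocks as if they were scalar entries in the row-column rule (the identity and zero blocks are assumed to have whatever sizes make the partition conformable):
\[
\begin{bmatrix} I & \mathbf{0} \\ E & I \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
=
\begin{bmatrix} IA + \mathbf{0}C & IB + \mathbf{0}D \\ EA + IC & EB + ID \end{bmatrix}
=
\begin{bmatrix} A & B \\ EA + C & EB + D \end{bmatrix}.
\]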
When a deep space probe is launched, corrections may be necessary to place the probe on a precisely calculated trajectory. Radio telemetry provides a stream of vectors, \(\mathbf{x}_1, \ldots, \mathbf{x}_k\), giving information at different times about how the probe's position compares with its planned trajectory. Let \(X_k\) be the matrix \(\left[ \mathbf{x}_1 \; \cdots \; \mathbf{x}_k \right]\). The matrix \(G_k = X_k X_k^T\) is computed as the radar data are analyzed. When \(\mathbf{x}_{k+1}\) arrives, a new \(G_{k+1}\) must be computed. Since the data vectors arrive at high speed, the computational burden could be severe. But partitioned matrix multiplication helps tremendously. Compute the column-row expansions of \(G_k\) and \(G_{k+1}\), and describe what must be computed in order to update \(G_k\) to \(G_{k+1}\).
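A sketch of the column-row (outer-product) expansions and the resulting update rule:
\[
G_k = X_k X_k^T = \mathbf{x}_1\mathbf{x}_1^T + \mathbf{x}_2\mathbf{x}_2^T + \cdots + \mathbf{x}_k\mathbf{x}_k^T,
\qquad
G_{k+1} = X_{k+1} X_{k+1}^T = G_k + \mathbf{x}_{k+1}\mathbf{x}_{k+1}^T.
\]
So updating \(G_k\) to \(G_{k+1}\) requires only forming the single outer product \(\mathbf{x}_{k+1}\mathbf{x}_{k+1}^T\) and adding it to the already-computed \(G_k\); the earlier data vectors need not be reprocessed.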