
Let \({{\bf{u}}_1}, \ldots ,{{\bf{u}}_p}\) be an orthogonal basis for a subspace \(W\) of \({\mathbb{R}^n}\), and let \(T:{\mathbb{R}^n} \to {\mathbb{R}^n}\) be defined by \(T\left( x \right) = {\rm{proj}}_W x\). Show that \(T\) is a linear transformation.

Short Answer


It is verified that \(T\) is a linear transformation.

Step by step solution

01

\(QR\) Factorization of a Matrix and Linear Transformation

A matrix \(A\) of order \(m \times n\) with linearly independent columns can be written as the product \(A = QR\), where \(Q\) is a matrix whose columns form an orthonormal basis for \({\rm{Col}}\left( A \right)\), obtained by applying the Gram–Schmidt orthogonalization process to the columns of \(A\), and \(R\) is an upper triangular invertible matrix.

The matrix \(R\) can be found by the formula \({Q^T}A = R\).
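As a quick illustration of the relation \(R = {Q^T}A\) (a sketch only, not part of the textbook solution; the matrix \(A\) below is chosen arbitrarily), NumPy's built-in QR routine can stand in for the Gram–Schmidt computation:

```python
import numpy as np

# Arbitrary 4x3 matrix with linearly independent columns, chosen only for illustration.
A = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])

# Reduced QR factorization: the columns of Q are an orthonormal basis for Col(A),
# and R is upper triangular.
Q, R = np.linalg.qr(A)

print(np.allclose(A, Q @ R))      # A = QR
print(np.allclose(R, Q.T @ A))    # R can be recovered as Q^T A, since Q^T Q = I
```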

A transformation \(T:X \to Y\) is said to be linear if it satisfies the following two properties:

1. For any \({{\bf{x}}_1},{{\bf{x}}_2} \in X\), \(T\left( {{{\bf{x}}_1} + {{\bf{x}}_2}} \right) = T\left( {{{\bf{x}}_1}} \right) + T\left( {{{\bf{x}}_2}} \right)\).

2. For any \({\bf{x}} \in X\) and any scalar \(c\), \(T\left( {c{\bf{x}}} \right) = cT\left( {\bf{x}} \right)\).

02

Showing the Transformation is Linear

The given mapping is \(T:{\mathbb{R}^n} \to {\mathbb{R}^n}\) such that

\(T\left( x \right) = {\rm{proj}}_W x\)

It is given that \(\left\{ {{{\bf{u}}_1}, \ldots ,{{\bf{u}}_p}} \right\}\) is an orthogonal basis for \(W\). Normalizing each vector (replacing \({{\bf{u}}_i}\) with \({{{\bf{u}}_i}}/{\left\| {{{\bf{u}}_i}} \right\|}\)) yields an orthonormal basis for \(W\); let \(U\) be the \(n \times p\) matrix whose columns are these orthonormal vectors.

Then, by Theorem 10 in Section 6.3 (which applies because the columns of \(U\) are orthonormal),

\({\rm{proj}}_W x = U{U^T}x\) for all \(x\) in \({\mathbb{R}^n}\).

Now,

\(\begin{aligned}T\left( x \right) &= {\rm{proj}}_W x\\ &= \left( {U{U^T}} \right)x\end{aligned}\)

Hence \(T\) is a matrix transformation, with standard matrix \(U{U^T}\). Every matrix transformation is linear, so \(T\) is a linear transformation.
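The argument can be checked numerically. The sketch below assumes a small, arbitrarily chosen orthogonal basis for a subspace \(W\) of \({\mathbb{R}^3}\); it normalizes the basis, forms \(P = U{U^T}\), and verifies the two linearity properties on sample vectors:

```python
import numpy as np

# Arbitrary orthogonal basis for a 2-dimensional subspace W of R^3 (illustration only).
v1 = np.array([1., 1., 0.])
v2 = np.array([1., -1., 0.])

# Normalize to get an orthonormal basis, then build U with these vectors as columns.
u1, u2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
U = np.column_stack([u1, u2])

P = U @ U.T                     # proj_W x = P x  (Theorem 10, Section 6.3)

def T(x):
    return P @ x                # the matrix transformation x -> (U U^T) x

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

print(np.allclose(T(x + y), T(x) + T(y)))   # Property 1: T(x + y) = T(x) + T(y)
print(np.allclose(T(c * x), c * T(x)))      # Property 2: T(cx) = c T(x)
```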


Most popular questions from this chapter

Compute the quantities in Exercises 1-8 using the vectors

\({\bf{u}} = \begin{pmatrix} { - 1} \\ 2 \end{pmatrix},\;{\bf{v}} = \begin{pmatrix} 4 \\ 6 \end{pmatrix},\;{\bf{w}} = \begin{pmatrix} 3 \\ { - 1} \\ { - 5} \end{pmatrix},\;{\bf{x}} = \begin{pmatrix} 6 \\ { - 2} \\ 3 \end{pmatrix}\)

7. \(\left\| {\bf{w}} \right\|\)
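For exercise 7, a minimal check (using NumPy; not part of the original exercise): \(\left\| {\bf{w}} \right\| = \sqrt{3^2 + (-1)^2 + (-5)^2} = \sqrt{35}\).

```python
import numpy as np

w = np.array([3., -1., -5.])
print(np.linalg.norm(w))   # sqrt(35), approximately 5.9161
print(np.sqrt(w @ w))      # same value, directly from ||w|| = sqrt(w . w)
```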

Exercises 19 and 20 involve a design matrix \(X\) with two or more columns and a least-squares solution \(\hat \beta \) of \({\bf{y}} = X\beta \). Consider the following numbers.

(i) \({\left\| {X\hat \beta } \right\|^2}\)—the sum of the squares of the “regression term.” Denote this number by \(SS\left( R \right)\).

(ii) \({\left\| {{\bf{y}} - X\hat \beta } \right\|^2}\)—the sum of the squares for the error term. Denote this number by \(SS\left( E \right)\).

(iii) \({\left\| {\bf{y}} \right\|^2}\)—the “total” sum of the squares of the \(y\)-values. Denote this number by \(SS\left( T \right)\).

Every statistics text that discusses regression and the linear model \({\bf{y}} = X\beta + \epsilon \) introduces these numbers, though terminology and notation vary somewhat. To simplify matters, assume that the mean of the \(y\)-values is zero. In this case, \(SS\left( T \right)\) is proportional to what is called the variance of the set of \(y\)-values.

19. Justify the equation \(SS\left( T \right) = SS\left( R \right) + SS\left( E \right)\). (Hint: Use a theorem, and explain why the hypotheses of the theorem are satisfied.) This equation is extremely important in statistics, both in regression theory and in the analysis of variance.
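A hedged numerical illustration of exercise 19 (the design matrix and observations below are made up for this sketch): the residual \({\bf{y}} - X\hat \beta \) is orthogonal to \(X\hat \beta \), so the Pythagorean Theorem gives \(SS\left( T \right) = SS\left( R \right) + SS\left( E \right)\).

```python
import numpy as np

# Made-up data, used only to illustrate the identity SS(T) = SS(R) + SS(E).
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([0.5, 1.9, 3.2, 4.1])
y = y - y.mean()                       # center y so the mean of the y-values is zero

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat                   # the "regression term" X beta_hat

SS_R = np.sum(y_hat ** 2)              # ||X beta_hat||^2
SS_E = np.sum((y - y_hat) ** 2)        # ||y - X beta_hat||^2
SS_T = np.sum(y ** 2)                  # ||y||^2

print(np.isclose(SS_T, SS_R + SS_E))   # True: the residual is orthogonal to the fit
```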

Let \(T:{\mathbb{R}^n} \to {\mathbb{R}^n}\) be a linear transformation that preserves lengths; that is, \(\left\| {T\left( {\bf{x}} \right)} \right\| = \left\| {\bf{x}} \right\|\) for all \({\bf{x}}\) in \({\mathbb{R}^n}\). (A short numerical sketch follows the two parts below.)

  1. Show that T also preserves orthogonality; that is, \(T\left( {\bf{x}} \right) \cdot T\left( {\bf{y}} \right) = 0\) whenever \({\bf{x}} \cdot {\bf{y}} = 0\).
  2. Show that the standard matrix of T is an orthogonal matrix.
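A small numerical sketch of both parts (the rotation matrix below is an arbitrary example of a length-preserving linear map, chosen only for this illustration):

```python
import numpy as np

theta = 0.7                                 # arbitrary angle; any rotation preserves lengths
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1., 2.])
y = np.array([2., -1.])                     # x . y = 0

print(np.isclose(x @ y, 0.0))               # the inputs are orthogonal
print(np.isclose((A @ x) @ (A @ y), 0.0))   # part 1: T(x) . T(y) = 0 as well
print(np.allclose(A.T @ A, np.eye(2)))      # part 2: the standard matrix is orthogonal
```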

(M) Use the method in this section to produce a \(QR\) factorization of the matrix in Exercise 24.

Show that if \(U\) is an orthogonal matrix, then any real eigenvalue of \(U\) must be \( \pm 1\).
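As a quick check of this statement (the permutation matrix below is one arbitrary example of an orthogonal matrix):

```python
import numpy as np

# A permutation matrix is orthogonal; its real eigenvalues illustrate the claim.
U = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])

print(np.allclose(U.T @ U, np.eye(3)))          # U is orthogonal
eigvals = np.linalg.eigvals(U)
real_eigs = eigvals[np.isclose(eigvals.imag, 0)].real
print(np.round(real_eigs, 6))                   # every real eigenvalue is +1 or -1
```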
