
Let \(X_{1}, X_{2}, \ldots, X_{n}\) denote a random sample of size \(n\) from a distribution which is \(N\left(0, \sigma^{2}\right)\). Prove that \(\sum_{1}^{n} X_{i}^{2}\) and every quadratic form, which is nonidentically zero in \(X_{1}, X_{2}, \ldots, X_{n}\), are dependent.

Short Answer

Expert verified
Write the quadratic form as \(Q = \boldsymbol{X}'\boldsymbol{A}\boldsymbol{X}\) with \(\boldsymbol{A}\) symmetric; \(Q\) nonidentically zero means \(\boldsymbol{A} \neq \boldsymbol{0}\). Since \(\sum_{1}^{n} X_i^2 = \boldsymbol{X}'\boldsymbol{I}\boldsymbol{X}\), Craig's theorem says the two are independent only if \(\boldsymbol{A}\boldsymbol{I} = \boldsymbol{A} = \boldsymbol{0}\), a contradiction; hence every nonidentically zero quadratic form is dependent on \(\sum_{1}^{n} X_i^2\).

Step by step solution

01

Understanding Variables

Consider an arbitrary quadratic form \( Q = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}X_iX_j \) in the sample, where the \(a_{ij}\) are constant scalars. Cross-product terms \(X_iX_j\) with \(i \neq j\) are allowed, so a purely diagonal form \(a_1X_1^2 + a_2X_2^2 + \cdots + a_nX_n^2\) is only a special case. Saying that \(Q\) is nonidentically zero means that not all of the \(a_{ij}\) vanish. The \(X_i\) form a random sample of size \(n\), so they are independent (by definition of a random sample) and identically distributed as \(N\left(0, \sigma^{2}\right)\).
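
In matrix notation (a convention I am adding here, consistent with the displayed formulas elsewhere in this chapter), both statistics are quadratic forms in \(\boldsymbol{X} = \left(X_1, X_2, \ldots, X_n\right)'\):
$$ Q = \boldsymbol{X}'\boldsymbol{A}\boldsymbol{X}, \qquad \sum_{i=1}^{n} X_i^2 = \boldsymbol{X}'\boldsymbol{I}\boldsymbol{X}, $$
where \(\boldsymbol{A} = \left[a_{ij}\right]\) may be taken symmetric, since replacing \(a_{ij}\) by \(\left(a_{ij}+a_{ji}\right)/2\) leaves \(Q\) unchanged, and \(\boldsymbol{I}\) is the \(n \times n\) identity matrix.
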
02

Understanding Statistical Independence

Two statistics are independent when their joint distribution factors into the product of their marginal distributions. This is statistical independence, which is the property at issue here; it should not be confused with linear dependence of vectors (the existence of coefficients \(a_i\), not all zero, with \(a_1X_1 + a_2X_2 + \cdots + a_nX_n = 0\)). For quadratic forms in a sample from \(N\left(0, \sigma^{2}\right)\) there is a clean criterion, Craig's theorem: two quadratic forms \(\boldsymbol{X}'\boldsymbol{A}\boldsymbol{X}\) and \(\boldsymbol{X}'\boldsymbol{B}\boldsymbol{X}\), with \(\boldsymbol{A}\) and \(\boldsymbol{B}\) symmetric, are independent if and only if \(\boldsymbol{A}\boldsymbol{B} = \boldsymbol{0}\).
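
For context, the "only if" half of this criterion can be glimpsed through the joint moment generating function; the display below is a sketch under the stated normality assumptions, not a full proof. For \(t_1, t_2\) near zero,
$$ M(t_1, t_2) = E\left[e^{t_1 \boldsymbol{X}'\boldsymbol{A}\boldsymbol{X} + t_2 \boldsymbol{X}'\boldsymbol{B}\boldsymbol{X}}\right] = \left|\boldsymbol{I} - 2t_1\sigma^2\boldsymbol{A} - 2t_2\sigma^2\boldsymbol{B}\right|^{-1/2}, $$
and this factors as \(M(t_1, 0)\,M(0, t_2)\) for all such \(t_1, t_2\) exactly when \(\boldsymbol{A}\boldsymbol{B} = \boldsymbol{0}\).
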
03

Apply Craig's Theorem to the Quadratic Form and the Sum of Squares

Take \(\boldsymbol{B} = \boldsymbol{I}\) in Craig's theorem. If \(Q = \boldsymbol{X}'\boldsymbol{A}\boldsymbol{X}\) were independent of \(\sum_{1}^{n} X_i^2 = \boldsymbol{X}'\boldsymbol{I}\boldsymbol{X}\), the theorem would require \(\boldsymbol{A}\boldsymbol{I} = \boldsymbol{A} = \boldsymbol{0}\), that is, \(Q\) identically zero. Since \(Q\) is nonidentically zero by hypothesis, \(\boldsymbol{A} \neq \boldsymbol{0}\), so \(Q\) and \(\sum_{1}^{n} X_i^2\) cannot be independent: every nonidentically zero quadratic form in \(X_1, X_2, \ldots, X_n\) is dependent on \(\sum_{1}^{n} X_i^2\).
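
A small simulation makes the dependence visible for one concrete choice of \(Q\). This sketch is an illustration I am adding, not part of the proof; the form \(Q = X_1^2 + X_1X_2\), the sample size, and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma = 5, 100_000, 1.0

# Each row is one realization of the sample X_1, ..., X_n.
X = rng.normal(0.0, sigma, size=(reps, n))
S = (X ** 2).sum(axis=1)                 # sum of squares, X'IX
Q = X[:, 0] ** 2 + X[:, 0] * X[:, 1]     # one nonidentically zero quadratic form

# Independence would force zero correlation; here the theoretical
# correlation is 2 / sqrt(30) ~ 0.365, and the estimate matches.
print(np.corrcoef(Q, S)[0, 1])
```
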

Most popular questions from this chapter

A random sample of size \(n=6\) from a bivariate normal distribution yields a value of the correlation coefficient of \(0.89\). Would we accept or reject, at the 5 percent significance level, the hypothesis that \(\rho=0\)?
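
For orientation, under bivariate normality the statistic \(T = r\sqrt{n-2}/\sqrt{1-r^2}\) has a \(t(n-2)\) distribution when \(\rho = 0\); a minimal sketch of the resulting test (my illustration, not the book's worked solution):

```python
import numpy as np
from scipy import stats

n, r = 6, 0.89
t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)  # observed value of T
t_crit = stats.t.ppf(0.975, df=n - 2)              # two-sided 5% critical value

# t_stat ~ 3.90 exceeds t_crit ~ 2.78, so the data reject rho = 0.
print(t_stat, t_crit)
```
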

Let the \(4 \times 1\) matrix \(\boldsymbol{Y}\) be multivariate normal \(N\left(\boldsymbol{X} \boldsymbol{\beta}, \sigma^{2} \boldsymbol{I}\right)\), where the \(4 \times 3\) matrix \(\boldsymbol{X}\) equals $$ \boldsymbol{X}=\left[\begin{array}{rrr} 1 & 1 & 2 \\ 1 & -1 & 2 \\ 1 & 0 & -3 \\ 1 & 0 & -1 \end{array}\right] $$ and \(\boldsymbol{\beta}\) is the \(3 \times 1\) regression coefficient matrix. (a) Find the mean matrix and the covariance matrix of \(\hat{\boldsymbol{\beta}}=\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\prime} \boldsymbol{Y}\). (b) If we observe \(\boldsymbol{Y}^{\prime}\) to be equal to \((6,1,11,3)\), compute \(\hat{\boldsymbol{\beta}}\).
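
Part (b) is a direct computation once \(\hat{\boldsymbol{\beta}}=\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\prime} \boldsymbol{Y}\) is in hand; a minimal numpy sketch (my illustration, not a full solution to part (a)):

```python
import numpy as np

X = np.array([[1, 1, 2],
              [1, -1, 2],
              [1, 0, -3],
              [1, 0, -1]], dtype=float)
Y = np.array([6.0, 1.0, 11.0, 3.0])

# Least-squares estimate beta_hat = (X'X)^{-1} X'Y;
# here X'X happens to be diagonal, diag(4, 2, 18), so the solve is easy.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)   # [5.25, 2.5, -11/9 ~ -1.2222]
```
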

Often in regression the mean of the random variable \(Y\) is a linear function of \(p\) values \(x_{1}, x_{2}, \ldots, x_{p}\), say \(\beta_{1} x_{1}+\beta_{2} x_{2}+\cdots+\beta_{p} x_{p}\), where \(\boldsymbol{\beta}^{\prime}=\left(\beta_{1}, \beta_{2}, \ldots, \beta_{p}\right)\) are the regression coefficients. Suppose that \(n\) values, \(\boldsymbol{Y}^{\prime}=\left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\), are observed for the \(x\)-values in \(\boldsymbol{X}=\left[x_{i j}\right]\), where \(\boldsymbol{X}\) is an \(n \times p\) design matrix and its \(i\)th row is associated with \(Y_{i}, i=1,2, \ldots, n\). Assume that \(\boldsymbol{Y}\) is multivariate normal with mean \(\boldsymbol{X} \boldsymbol{\beta}\) and variance-covariance matrix \(\sigma^{2} \boldsymbol{I}\), where \(\boldsymbol{I}\) is the \(n \times n\) identity matrix. (a) Note that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) are independent. Why? (b) Since \(\boldsymbol{Y}\) should approximately equal its mean \(\boldsymbol{X} \boldsymbol{\beta}\), we estimate \(\boldsymbol{\beta}\) by solving the normal equations \(\boldsymbol{X}^{\prime} \boldsymbol{Y}=\boldsymbol{X}^{\prime} \boldsymbol{X} \boldsymbol{\beta}\) for \(\boldsymbol{\beta}\). Assuming that \(\boldsymbol{X}^{\prime} \boldsymbol{X}\) is nonsingular, solve the equations to get \(\hat{\boldsymbol{\beta}}=\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\prime} \boldsymbol{Y}\). Show that \(\hat{\boldsymbol{\beta}}\) has a multivariate normal distribution with mean \(\boldsymbol{\beta}\) and variance-covariance matrix $$ \sigma^{2}\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1} $$ (c) Show that $$ (\boldsymbol{Y}-\boldsymbol{X} \boldsymbol{\beta})^{\prime}(\boldsymbol{Y}-\boldsymbol{X} \boldsymbol{\beta})=(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta})^{\prime}\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta})+(\boldsymbol{Y}-\boldsymbol{X} \hat{\boldsymbol{\beta}})^{\prime}(\boldsymbol{Y}-\boldsymbol{X} \hat{\boldsymbol{\beta}}), $$ say \(Q=Q_{1}+Q_{2}\) for convenience. (d) Show that \(Q_{1} / \sigma^{2}\) is \(\chi^{2}(p)\). (e) Show that \(Q_{1}\) and \(Q_{2}\) are independent. (f) Argue that \(Q_{2} / \sigma^{2}\) is \(\chi^{2}(n-p)\). (g) Find \(c\) so that \(c Q_{1} / Q_{2}\) has an \(F\)-distribution. (h) The fact that a value \(d\) can be found so that \(P\left(c Q_{1} / Q_{2} \leq d\right)=1-\alpha\) could be used to find a \(100(1-\alpha)\) percent confidence ellipsoid for \(\boldsymbol{\beta}\). Explain.
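
The identity in part (c) holds exactly for any data, because the cross term \(2(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta})^{\prime}\boldsymbol{X}^{\prime}(\boldsymbol{Y}-\boldsymbol{X}\hat{\boldsymbol{\beta}})\) vanishes by the normal equations; a quick numeric check of this (dimensions, coefficients, and seed below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 20, 3, 2.0

X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
Y = X @ beta + rng.normal(0.0, sigma, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Q  = (Y - X @ beta) @ (Y - X @ beta)
Q1 = (beta_hat - beta) @ (X.T @ X) @ (beta_hat - beta)
Q2 = (Y - X @ beta_hat) @ (Y - X @ beta_hat)

print(np.isclose(Q, Q1 + Q2))   # True: Q = Q1 + Q2 holds exactly
```
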

With the background of the two-way classification with \(c>1\) observations per cell, show that the maximum likelihood estimators of the parameters are $$ \begin{aligned} \hat{\alpha}_{i} &= \bar{X}_{i..}-\bar{X}_{...} \\ \hat{\beta}_{j} &= \bar{X}_{.j.}-\bar{X}_{...} \\ \hat{\gamma}_{ij} &= \bar{X}_{ij.}-\bar{X}_{i..}-\bar{X}_{.j.}+\bar{X}_{...} \\ \hat{\mu} &= \bar{X}_{...} \end{aligned} $$ Show that these are unbiased estimators of the respective parameters. Compute the variance of each estimator.
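
Each estimator is a centered mean over the appropriate margin of the data; a minimal numpy sketch (my illustration, with arbitrary factor levels \(a \times b\) and \(c\) observations per cell) that computes all four from an array indexed as \(X[i, j, k]\):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = 3, 4, 5                      # factor levels, with c > 1 obs per cell
X = rng.normal(size=(a, b, c))         # X[i, j, k]: k-th observation in cell (i, j)

mu_hat    = X.mean()                              # X_bar_...
alpha_hat = X.mean(axis=(1, 2)) - mu_hat          # X_bar_i.. - X_bar_...
beta_hat  = X.mean(axis=(0, 2)) - mu_hat          # X_bar_.j. - X_bar_...
cell      = X.mean(axis=2)                        # X_bar_ij.
gamma_hat = (cell - X.mean(axis=(1, 2))[:, None]
                  - X.mean(axis=(0, 2))[None, :] + mu_hat)

print(alpha_hat.sum(), beta_hat.sum())   # both ~0: the usual side conditions hold
```
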

Let \(X_{1}, X_{2}, X_{3}\) be a random sample from the normal distribution \(N\left(0, \sigma^{2}\right)\). Are the quadratic forms \(X_{1}^{2}+3 X_{1} X_{2}+X_{2}^{2}+X_{1} X_{3}+X_{3}^{2}\) and \(X_{1}^{2}-2 X_{1} X_{2}+\frac{2}{3} X_{2}^{2}-2 X_{1} X_{3}-X_{3}^{2}\) independent or dependent?
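
Craig's theorem from the solution above reduces this to a matrix computation; a sketch (my illustration, with the symmetric matrices read off the two forms as stated):

```python
import numpy as np

# Symmetric matrices of the two forms; each off-diagonal entry is
# half the coefficient of the corresponding cross-product term.
A = np.array([[1.0, 1.5, 0.5],
              [1.5, 1.0, 0.0],
              [0.5, 0.0, 1.0]])    # X1^2 + 3 X1X2 + X2^2 + X1X3 + X3^2
B = np.array([[1.0, -1.0, -1.0],
              [-1.0, 2 / 3, 0.0],
              [-1.0, 0.0, -1.0]])  # X1^2 - 2 X1X2 + (2/3) X2^2 - 2 X1X3 - X3^2

print(A @ B)              # not the zero matrix, so the forms are dependent
print(np.linalg.det(A))   # -1.5: A is nonsingular, so A @ B = 0 would force B = 0
```
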
