
Let \(\mathbf{X}^{\prime}=\left[X_{1}, X_{2}\right]\) be bivariate normal with matrix of means \(\boldsymbol{\mu}^{\prime}=\left[\mu_{1}, \mu_{2}\right]\) and positive definite covariance matrix \(\Sigma\). Let $$ Q_{1}=\frac{X_{1}^{2}}{\sigma_{1}^{2}\left(1-\rho^{2}\right)}-2 \rho \frac{X_{1} X_{2}}{\sigma_{1} \sigma_{2}\left(1-\rho^{2}\right)}+\frac{X_{2}^{2}}{\sigma_{2}^{2}\left(1-\rho^{2}\right)} $$ Show that \(Q_{1}\) is \(\chi^{2}(r, \theta)\) and find \(r\) and \(\theta\). When and only when does \(Q_{1}\) have a central chi-square distribution?

Short Answer

Writing \(Q_{1}=\mathbf{X}'\Sigma^{-1}\mathbf{X}\), the matrix \(A=\Sigma^{-1}\) satisfies \(A\Sigma=I_{2}\), which is idempotent of rank 2, so \(Q_{1}\) is \(\chi^{2}(r, \theta)\) with \(r=2\) and \(\theta=\boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}\). Because \(\Sigma^{-1}\) is positive definite, \(\theta=0\) if and only if \(\boldsymbol{\mu}=\mathbf{0}\); hence \(Q_{1}\) has a central chi-square distribution when and only when \(\mu_{1}=\mu_{2}=0\).

Step by step solution

Step 01: Understanding the chi-square distribution parameters

The chi-square distribution is denoted by \(\chi^{2}(r, \theta)\), where \(r\) is the number of degrees of freedom and \(\theta \geq 0\) is the non-centrality parameter. The distribution is called central if \(\theta = 0\) and non-central otherwise.
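The tool that settles this exercise is the standard theorem on quadratic forms in normal variables: if \(\mathbf{X} \sim N_{n}(\boldsymbol{\mu}, \Sigma)\) with \(\Sigma\) positive definite and \(A\) is a real symmetric matrix, then $$ \mathbf{X}'A\mathbf{X} \sim \chi^{2}(r, \theta), \qquad \theta=\boldsymbol{\mu}'A\boldsymbol{\mu}, $$ if and only if \(A\Sigma\) is idempotent with \(\operatorname{rank}(A\Sigma)=r\).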
Step 02: Express \(Q_1\) in a convenient form

Factor the common \(1/(1-\rho^{2})\) out of \(Q_{1}\) and compare its coefficients with the entries of the inverse covariance matrix. No recentering of \(X_{1}\) and \(X_{2}\) is needed; indeed, subtracting the means would change the statistic. The comparison shows that \(Q_{1}\) is exactly the quadratic form \(\mathbf{X}'\Sigma^{-1}\mathbf{X}\), as displayed below.
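Inverting the \(2\times2\) covariance matrix makes the identification immediate: $$ \Sigma^{-1}=\frac{1}{1-\rho^{2}}\begin{bmatrix} \frac{1}{\sigma_{1}^{2}} & -\frac{\rho}{\sigma_{1}\sigma_{2}} \\ -\frac{\rho}{\sigma_{1}\sigma_{2}} & \frac{1}{\sigma_{2}^{2}} \end{bmatrix}, \qquad \mathbf{X}'\Sigma^{-1}\mathbf{X}=\frac{X_{1}^{2}}{\sigma_{1}^{2}\left(1-\rho^{2}\right)}-\frac{2\rho X_{1}X_{2}}{\sigma_{1}\sigma_{2}\left(1-\rho^{2}\right)}+\frac{X_{2}^{2}}{\sigma_{2}^{2}\left(1-\rho^{2}\right)}=Q_{1}. $$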
Step 03: Recognition of the quadratic form

Recognize that \(Q_{1}\) has the form \(\mathbf{X}'A\mathbf{X}\), a quadratic form with \(\mathbf{X}'=\left[X_{1}, X_{2}\right]\) and \(A=\Sigma^{-1}\). The matrix \(A\) is symmetric, and it is positive definite because \(\Sigma\) is: its diagonal entries are \(1/[\sigma_{i}^{2}(1-\rho^{2})]\) and each off-diagonal entry is \(-\rho/[\sigma_{1}\sigma_{2}(1-\rho^{2})]\), matching the coefficients of \(Q_{1}\).
Step 04: Idempotency and eigenvalues of \(A\Sigma\)

Verify the condition of the theorem: \(A\Sigma=\Sigma^{-1}\Sigma=I_{2}\), and the identity matrix is idempotent (\(I_{2}^{2}=I_{2}\)) with both eigenvalues equal to 1, so its rank is 2. Equivalently, since \(\Sigma\) is positive definite it has a symmetric square root \(\Sigma^{1/2}\), and the transformed vector \(\mathbf{Z}=\Sigma^{-1/2}\mathbf{X}\) has identity covariance, which turns \(Q_{1}\) into a sum of squares of two independent normal variables with unit variance.
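Explicitly, with \(\Sigma^{-1/2}\) denoting the inverse of the symmetric square root of \(\Sigma\): $$ \mathbf{Z}=\Sigma^{-1/2}\mathbf{X}\sim N_{2}\left(\Sigma^{-1/2}\boldsymbol{\mu},\, I_{2}\right), \qquad Q_{1}=\mathbf{X}'\Sigma^{-1}\mathbf{X}=\mathbf{Z}'\mathbf{Z}=Z_{1}^{2}+Z_{2}^{2}. $$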
Step 05: Calculation of \(r\) and \(\theta\)

The parameter \(r\) equals \(\operatorname{rank}(A\Sigma)=\operatorname{rank}(I_{2})=2\). The non-centrality parameter is \(\theta=\boldsymbol{\mu}'A\boldsymbol{\mu}=\boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}\), that is, \(Q_{1}\) evaluated at the mean vector. If \(\theta=0\), the distribution is a central chi-square; otherwise it is non-central.
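Substituting the entries of \(\Sigma^{-1}\) gives the non-centrality parameter in the same form as \(Q_{1}\), with \(\mu_{i}\) in place of \(X_{i}\): $$ \theta=\boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}=\frac{1}{1-\rho^{2}}\left(\frac{\mu_{1}^{2}}{\sigma_{1}^{2}}-\frac{2\rho\,\mu_{1}\mu_{2}}{\sigma_{1}\sigma_{2}}+\frac{\mu_{2}^{2}}{\sigma_{2}^{2}}\right). $$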
Step 06: Conditions for a central chi-square distribution

\(Q_1\) has a central chi-square distribution when \(\theta = 0\). Because \(\Sigma^{-1}\) is positive definite, \(\theta=\boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}=0\) if and only if \(\boldsymbol{\mu}=\mathbf{0}\). Hence \(Q_{1}\sim\chi^{2}(2)\), a central chi-square, when and only when \(\mu_{1}=\mu_{2}=0\).
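As a numerical sanity check, here is a minimal simulation sketch in Python, assuming NumPy and SciPy are available; the values of \(\boldsymbol{\mu}\), \(\sigma_1\), \(\sigma_2\), and \(\rho\) below are illustrative choices, not values from the exercise.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not given in the exercise)
mu = np.array([1.0, -0.5])
sigma1, sigma2, rho = 2.0, 1.5, 0.6
Sigma = np.array([[sigma1**2, rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])

# Simulate X ~ N(mu, Sigma) and evaluate Q1 = X' Sigma^{-1} X row by row
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Sinv = np.linalg.inv(Sigma)
Q1 = np.einsum('ij,jk,ik->i', X, Sinv, X)

# Theory: Q1 ~ chi^2(r = 2, theta = mu' Sigma^{-1} mu), so E[Q1] = r + theta
theta = mu @ Sinv @ mu
print("sample mean of Q1    :", Q1.mean())
print("theoretical r + theta:", 2 + theta)

# Compare the empirical distribution with the noncentral chi-square
print(stats.kstest(Q1, stats.ncx2(df=2, nc=theta).cdf))

With this many draws, the sample mean should land close to \(2+\theta\) and the Kolmogorov–Smirnov statistic should be near zero, matching the conclusion of Step 05.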


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Bivariate Normal Distribution
Understanding the bivariate normal distribution is crucial for analyzing the relationship between two continuous random variables. It describes the joint distribution of two variables which are both normally distributed. A key characteristic of this distribution is that, for any linear combination of the two variables, the resulting variable will also have a normal distribution.

In the context of our exercise, the bivariate normal distribution is defined by the mean vector \( \boldsymbol{\mu}^\prime = [\mu_1, \mu_2] \) and the covariance matrix \( \Sigma \), which encompasses the variances \( \sigma_1^2 \) and \( \sigma_2^2 \) along the diagonal and the covariance \( \sigma_{1}\sigma_{2}\rho \) in the off-diagonal elements. The correlation coefficient \( \rho \) dictates the strength and direction of the linear relationship between these two normally distributed variables. When \( \rho = 0 \) the variables are independent; as \( \rho \) deviates from zero, the relationship strengthens.
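In matrix form, the description above reads $$ \Sigma=\begin{bmatrix}\sigma_{1}^{2} & \rho\sigma_{1}\sigma_{2}\\ \rho\sigma_{1}\sigma_{2} & \sigma_{2}^{2}\end{bmatrix}. $$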
Covariance Matrix
The covariance matrix \( \Sigma \) is a symmetric matrix that serves as the cornerstone of the bivariate normal distribution. In general, for a set of random variables, this matrix holds the covariances between each pair of variables, providing insights into how they change together. The diagonal entries are the variances of individual variables, while the off-diagonal entries are the respective covariances.

The positive definiteness of the matrix, meaning that all of its eigenvalues are strictly positive, is essential for defining a valid bivariate normal distribution. It guarantees that \( \Sigma^{-1} \) exists, so the quadratic form used in the calculation of \( Q_1 \) is well defined and strictly positive for every nonzero vector. In our exercise, the elements of \( \Sigma \) come into play when the quadratic form is written in terms of the variances and the correlation, ultimately shaping the distribution of \( Q_1 \) by weighting the contribution of each variable.
Quadratic Form
A quadratic form is a homogeneous polynomial of degree two in a number of variables. In matrix notation, a quadratic form is expressed as \( x^T A x \), with \( A \) a symmetric matrix and \( x \) a column vector. This form is especially important in statistics because sums of squares, and hence many test statistics, are quadratic forms in normal variables.

In our exercise, \( Q_1 \) is a quadratic form in which the matrix \( A = \Sigma^{-1} \) and the vector \( x = \mathbf{X} \) come from the given variables and the covariance matrix. Diagonalizing \( A \) (or, equivalently, standardizing with \( \Sigma^{-1/2} \)) turns the form into a sum of squares, which is what identifies the chi-square distribution of \( Q_1 \), a critical step in defining the properties of our test statistic.
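For a symmetric \(2\times2\) matrix the expansion can be written out in full, which is how the entries of \(A\) are read off from an expression like \(Q_{1}\): $$ \mathbf{x}'A\mathbf{x}=\begin{bmatrix}x_{1} & x_{2}\end{bmatrix}\begin{bmatrix}a_{11} & a_{12}\\ a_{12} & a_{22}\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=a_{11}x_{1}^{2}+2a_{12}x_{1}x_{2}+a_{22}x_{2}^{2}. $$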
Non-centrality Parameter
The non-centrality parameter \( \theta \) of a chi-square distribution shifts the distribution to the right when \( \theta \) is non-zero. It measures how far the mean vector of the underlying normal variables lies from the origin, as weighted by the matrix of the quadratic form, and it is what distinguishes the central from the non-central chi-square distribution.

When \( \theta = 0 \), which occurs under the null hypothesis in hypothesis testing, we have a central chi-square distribution. If \( \theta \) is greater than zero, a scenario usually encountered in power analyses or when the null hypothesis is false, the distribution becomes non-central. In our exercise, \( \theta = \boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu} \), so whether \( Q_1 \) follows a central chi-square distribution depends only on whether the mean vector of the original variables is zero.
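A quick consistency check on this parameterization (\(\theta=\boldsymbol{\mu}'A\boldsymbol{\mu}\)) is the mean of the distribution: $$ E\left[\chi^{2}(r,\theta)\right]=r+\theta, $$ so for \(Q_{1}\) the expected value is \(2+\boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}\).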


