
Let \(Y\) have the \(p\)-variate multivariate normal distribution with mean vector \(\mu\) and covariance matrix \(\Omega\). Partition \(Y^{\mathrm{T}}\) as \(\left(Y_{1}^{\mathrm{T}}, Y_{2}^{\mathrm{T}}\right)\), where \(Y_{1}\) has dimension \(q \times 1\) and \(Y_{2}\) has dimension \(r \times 1\), and partition \(\mu\) and \(\Omega\) conformably. Find the conditional distribution of \(Y_{1}\) given that \(Y_{2}=y_{2}\) directly from the probability density functions of \(Y\) and \(Y_{2}\).

Short Answer

The conditional distribution of \( Y_1 \) given \( Y_2 = y_2 \) is multivariate normal with mean \( \mu_1 + \Omega_{12}\Omega_{22}^{-1}(y_2 - \mu_2) \) and covariance \( \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} \).

Step by step solution

01

Define the distribution of Y

The random vector \( Y \) is partitioned as \( \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} \). \( Y \) follows a multivariate normal distribution with mean vector \( \mu \) and covariance matrix \( \Omega \). We partition \( \mu \) and \( \Omega \) conformably as follows: \( \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \) and \( \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{pmatrix} \).
02

Write the joint distribution of Y1 and Y2

Using the properties of the multivariate normal distribution, the joint distribution of \( Y_1 \) and \( Y_2 \) can be expressed as:\[\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{pmatrix} \right)\]
03

Find the distribution of Y2

The marginal distribution of \( Y_2 \) is obtained by integrating the joint density over \( y_1 \). Since any subvector of a multivariate normal vector is itself multivariate normal, with mean and covariance given by the corresponding blocks of \( \mu \) and \( \Omega \), we have \( Y_2 \sim \mathcal{N}(\mu_2, \Omega_{22}) \).
04

Derive the conditional distribution of Y1 given Y2=y2

To work directly from the probability density functions, write the conditional density as the ratio \( f(y_1 \mid y_2) = f(y_1, y_2) / f_2(y_2) \). Substituting the multivariate normal densities of \( Y \) and \( Y_2 \) and completing the square in \( y_1 \) in the exponent shows that the result is again a multivariate normal density:\[Y_1 | Y_2 = y_2 \sim \mathcal{N}(\mu_{1 \,|\, 2}, \Omega_{1 \,|\, 2})\]where \( \mu_{1 \,|\, 2} = \mu_1 + \Omega_{12}\Omega_{22}^{-1}(y_2 - \mu_2) \) and \( \Omega_{1 \,|\, 2} = \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} \).
05

Express the final conditional distribution

The final conditional distribution is expressed neatly using the expressions for the mean and covariance derived above:\[Y_1 | Y_2 = y_2 \sim \mathcal{N}\left( \mu_1 + \Omega_{12}\Omega_{22}^{-1}(y_2 - \mu_2), \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} \right)\]
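As a quick numerical sanity check of these two formulas, here is a short NumPy sketch for a hypothetical 3-variate example with \( q = 2 \) and \( r = 1 \); all the numbers (mean, covariance, observed \( y_2 \)) are illustrative assumptions, not values from the exercise.

```python
import numpy as np

# Illustrative 3-variate normal: partition with q = 2, r = 1.
mu = np.array([1.0, 2.0, 3.0])
Omega = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.5],
                  [0.5, 1.5, 2.0]])

q = 2
mu1, mu2 = mu[:q], mu[q:]
O11 = Omega[:q, :q]
O12 = Omega[:q, q:]
O21 = Omega[q:, :q]
O22 = Omega[q:, q:]

y2 = np.array([4.0])  # assumed observed value of Y2

# Conditional mean and covariance from the formulas above;
# solve() applies Omega_22^{-1} without forming the inverse explicitly.
cond_mean = mu1 + O12 @ np.linalg.solve(O22, y2 - mu2)
cond_cov = O11 - O12 @ np.linalg.solve(O22, O21)

print(cond_mean)  # [1.25 2.75]
print(cond_cov)   # [[3.875 0.625]
                  #  [0.625 1.875]]
```

Note that the conditional covariance does not depend on the observed value \( y_2 \), only on the blocks of \( \Omega \).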


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Mean Vector
In the context of a multivariate normal distribution, the mean vector is a critical component. It essentially tells us the "center" of the data in a multivariate space. Imagine a multidimensional graph where each axis is one variable from your data. The mean vector \( \mu \) is a point in this space that represents the average of all the points. For example, if your data has two variables, the mean vector might look something like \( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \), where \( \mu_1 \) is the mean of the first variable and \( \mu_2 \) is the mean of the second variable. This concept is important because it helps in determining how data is distributed around this center point.
Covariance Matrix
The covariance matrix, denoted as \( \Omega \) in our multivariate normal distribution, provides vital insights into the relationships between the different variables in your data. Each element in this matrix represents covariances between pairs of variables, essentially measuring how changes in one variable are associated with changes in another. The diagonal elements of this matrix represent variances of individual variables. Since variance is the square of standard deviation, it provides a measure of the spread of the data for each variable. The off-diagonal elements represent the covariance between different variables, indicating if changes in one variable are linked to changes in another. When the covariance is positive, it indicates that as one variable increases, the other tends to increase, too; if it's negative, as one increases, the other tends to decrease. Understanding the covariance matrix is crucial for grasping the geometry and orientation of the data cloud in a multivariate space.
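To make this concrete, the following sketch estimates a sample covariance matrix from a tiny made-up data set (the numbers are illustrative only) and reads off a variance and a covariance:

```python
import numpy as np

# Tiny illustrative data set: rows are observations, columns are two variables.
data = np.array([[1.0, 2.0],
                 [2.0, 4.1],
                 [3.0, 6.2],
                 [4.0, 7.9]])

Omega_hat = np.cov(data, rowvar=False)  # 2x2 sample covariance matrix

var_x = Omega_hat[0, 0]   # diagonal entry: variance of the first variable
cov_xy = Omega_hat[0, 1]  # off-diagonal entry: covariance of the two variables

print(var_x)   # 1.666...
print(cov_xy)  # 3.3 (positive: the variables increase together)
```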
Conditional Distribution
Conditional distributions allow us to understand how one part of our variable set behaves given some information about another part. In the multivariate normal scenario, we're calculating the distribution for one group of variables given specific values for another group. Simply put, if \( Y_2 \) is one set of variables and you know its value, you can determine how \( Y_1 \) behaves. The conditional distribution \( Y_1 | Y_2 = y_2 \) is still a multivariate normal distribution but with a new mean vector and covariance matrix. The new mean vector, represented as \( \mu_{1 | 2} = \mu_1 + \Omega_{12}\Omega_{22}^{-1}(y_2 - \mu_2) \), is adjusted based on the known value of \( Y_2 \). Similarly, the new covariance matrix \( \Omega_{1 | 2} \) becomes \( \Omega_{11} - \Omega_{12} \Omega_{22}^{-1} \Omega_{21} \), which accounts for this conditioning.
Probability Density Function
The probability density function (pdf) is a fundamental concept in probability and statistics, especially in the context of continuous random variables like those in a multivariate normal distribution. The pdf describes the likelihood of a random variable taking on a particular value. In the case of the multivariate normal distribution, the pdf is a bit complex due to the involvement of multiple variables. It involves an exponential function of the variables’ distances from their mean vector, scaled by the covariance matrix. The pdf is crucial for defining the distribution and finding probabilities over ranges of values for the variables. Understanding the pdf provides a way to compute the probability of observing a set of outcomes within a particular range. It also helps to express the overall shape and spread of the data distribution, and importantly, how similar or different all these data observations are from the mean vector.
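The density just described can be written out directly from its formula. Below is a minimal sketch (the function name `mvn_pdf` is ours, not from any library) that evaluates \( f(y) = (2\pi)^{-p/2} |\Omega|^{-1/2} \exp\{-\tfrac{1}{2}(y-\mu)^{\mathrm{T}} \Omega^{-1} (y-\mu)\} \):

```python
import numpy as np

def mvn_pdf(y, mu, Omega):
    """Multivariate normal density, transcribed directly from the formula."""
    p = len(mu)
    diff = y - mu
    # Quadratic form (y - mu)^T Omega^{-1} (y - mu), via a linear solve
    quad = diff @ np.linalg.solve(Omega, diff)
    norm_const = (2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(Omega))
    return np.exp(-0.5 * quad) / norm_const

# With p = 1, mu = 0, Omega = 1 this reduces to the standard normal density,
# so the value at 0 should be 1 / sqrt(2 * pi) ~= 0.3989.
val = mvn_pdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]]))
print(val)
```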


Most popular questions from this chapter

(a) If \(F \sim F_{v_{1}, v_{2}}\), show that \(1 / F \sim F_{v_{2}, v_{1}}\). Give the quantiles of \(1 / F\) in terms of those of \(F\) (b) Show that as \(v_{2} \rightarrow \infty, v_{1} F\) tends in distribution to a chi-squared variable, and give its degrees of freedom. (c) If \(Y_{1}\) and \(Y_{2}\) are independent variables with density \(e^{-y}, y>0\), show that \(Y_{1} / Y_{2}\) has the \(F\) distribution, and give its degrees of freedom.

Let \(R_{1}, R_{2}\) be independent binomial random variables with probabilities \(\pi_{1}, \pi_{2}\) and denominators \(m_{1}, m_{2}\), and let \(P_{i}=R_{i} / m_{i} .\) It is desired to test if \(\pi_{1}=\pi_{2}\). Let \(\widehat{\pi}=\left(m_{1} P_{1}+m_{2} P_{2}\right) /\left(m_{1}+m_{2}\right) .\) Show that when \(\pi_{1}=\pi_{2}\), the statistic $$ Z=\frac{P_{1}-P_{2}}{\sqrt{\widehat{\pi}(1-\widehat{\pi})\left(1 / m_{1}+1 / m_{2}\right)}} \stackrel{D}{\longrightarrow} N(0,1) $$ when \(m_{1}, m_{2} \rightarrow \infty\) in such a way that \(m_{1} / m_{2} \rightarrow \xi\) for \(0<\xi<1\). Now consider a \(2 \times 2\) table formed using two independent binomial variables and having entries \(R_{i}, S_{i}\) where \(R_{i}+S_{i}=m_{i}, R_{i} / m_{i}=P_{i}\), for \(i=1,2\). Show that if \(\pi_{1}=\pi_{2}\) and \(m_{1}, m_{2} \rightarrow \infty\), then $$ X^{2}=\left(m_{1}+m_{2}\right)\left(R_{1} S_{2}-R_{2} S_{1}\right)^{2} /\left\{m_{1} m_{2}\left(R_{1}+R_{2}\right)\left(S_{1}+S_{2}\right)\right\} \stackrel{D}{\longrightarrow} \chi_{1}^{2} $$ Two batches of trees were planted in a park: 250 were obtained from nursery \(A\) and 250 from nursery \(B\). Subsequently 41 and 64 trees from the two groups died. Do trees from the two nurseries have the same survival probabilities? Are the assumptions you make reasonable?

The Cholesky decomposition of a \(p \times p\) symmetric positive definite matrix \(\Omega\) is the unique lower triangular \(p \times p\) matrix \(L\) such that \(L L^{\mathrm{T}}=\Omega\). Find the distribution of \(\mu+L Z\), where \(Z\) is a vector containing a standard normal random sample \(Z_{1}, \ldots, Z_{p}\), and hence give an algorithm to generate from the multivariate normal distribution.
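The generation algorithm this exercise describes can be sketched in a few lines of NumPy; the mean vector and covariance matrix here are illustrative assumptions, and this is a sketch rather than the textbook's own solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean and symmetric positive definite covariance for illustration.
mu = np.array([1.0, -2.0])
Omega = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Lower triangular factor with L @ L.T == Omega.
L = np.linalg.cholesky(Omega)

# Standard normal sample Z_1, ..., Z_p, then the affine transform:
# var(mu + L Z) = L I L^T = Omega, so Y ~ N(mu, Omega).
Z = rng.standard_normal(2)
Y = mu + L @ Z
```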

If \(W \sim \chi_{v}^{2}\), show that \(\mathrm{E}(W)=v\), \(\operatorname{var}(W)=2 v\) and \((W-v) / \sqrt{2 v} \stackrel{D}{\longrightarrow} N(0,1)\) as \(v \rightarrow \infty\).

Construct a rejection algorithm to simulate from \(f(x)=30 x(1-x)^{4}, 0 \leq x \leq 1\), using the \(U(0,1)\) density as the proposal function \(g\). Give its efficiency.
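One possible rejection sampler for this density (a sketch, not the textbook's solution): since \(f'(x) = 30(1-x)^{3}(1-5x)\), the maximum of \(f\) on \([0,1]\) is at \(x = 1/5\), giving the envelope constant \(M = f(1/5) = 2.4576\).

```python
import random

def f(x):
    # Target density f(x) = 30 x (1 - x)^4 on [0, 1] (a Beta(2, 5) density)
    return 30.0 * x * (1.0 - x) ** 4

# Envelope constant: max of f, attained at x = 1/5 where f'(x) = 0.
M = f(0.2)  # = 2.4576

def sample():
    # Rejection sampling with U(0,1) proposal: accept x w.p. f(x) / M.
    while True:
        x = random.random()
        if random.random() <= f(x) / M:
            return x
```

The efficiency (acceptance probability) is \(1/M = 1/2.4576 \approx 0.41\), so on average about 2.46 proposals are needed per accepted sample.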
