
Let \(\left(X_{1}, Y_{1}\right),\left(X_{2}, Y_{2}\right), \ldots,\left(X_{n}, Y_{n}\right)\) be a random sample from a bivariate normal distribution with parameters \(\mu_{1}, \mu_{2}, \sigma_{1}^{2}=\sigma_{2}^{2}=\sigma^{2}\), and \(\rho=\frac{1}{2}\), where \(\mu_{1}, \mu_{2}\), and \(\sigma^{2}>0\) are unknown real numbers. Find the likelihood ratio \(\Lambda\) for testing \(H_{0}: \mu_{1}=\mu_{2}=0\), \(\sigma^{2}\) unknown, against all alternatives. The likelihood ratio \(\Lambda\) is a function of what statistic that has a well-known distribution?

Short Answer

The likelihood ratio \(\Lambda\) is a monotone function of \(F = \frac{(n-1)\,n\left(\bar{X}^{2}-\bar{X}\bar{Y}+\bar{Y}^{2}\right)}{\sum_{i=1}^{n}\left[(X_{i}-\bar{X})^{2}-(X_{i}-\bar{X})(Y_{i}-\bar{Y})+(Y_{i}-\bar{Y})^{2}\right]}\). Under \(H_{0}\), this statistic has an \(F\)-distribution with \(2\) and \(2(n-1)\) degrees of freedom.

Step by step solution

01

Establish the joint PDF

First, we write down the joint probability density function (PDF) of the bivariate normal distribution for a single pair \((X_{i}, Y_{i})\): \(f(x_{i},y_{i};\mu_{1},\mu_{2},\sigma^{2})=\frac{1}{2\pi\sigma^{2}\sqrt{1-\rho^2}}\exp\left\{-\frac{(x_{i}-\mu_{1})^2 - 2\rho(x_{i}-\mu_{1})(y_{i}-\mu_{2}) + (y_{i}-\mu_{2})^2}{2\sigma^2(1-\rho^2)}\right\}\), where \(\rho=\frac{1}{2}\) is the (known) correlation coefficient, \(\sigma^{2}\) is the common variance, and \(\mu_{1}\) and \(\mu_{2}\) are the means of \(X_{i}\) and \(Y_{i}\), respectively.
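With the given value \(\rho=\frac{1}{2}\) (so \(1-\rho^{2}=\frac{3}{4}\)), the density simplifies. The following display is a routine substitution, recorded here to fix the notation used in the later steps:

\[
f(x_{i},y_{i};\mu_{1},\mu_{2},\sigma^{2}) = \frac{1}{\sqrt{3}\,\pi\sigma^{2}}\exp\left\{-\frac{2}{3\sigma^{2}}\left[(x_{i}-\mu_{1})^{2} - (x_{i}-\mu_{1})(y_{i}-\mu_{2}) + (y_{i}-\mu_{2})^{2}\right]\right\}.
\]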
02

Calculate joint PDF of the sample

Next, since the \(n\) pairs \((X_{i}, Y_{i})\) are mutually independent (they form a random sample), the likelihood of the whole sample is the product \(L(\mu_{1},\mu_{2},\sigma^{2})=\prod_{i=1}^{n}f(X_{i},Y_{i};\mu_{1},\mu_{2},\sigma^{2})\).
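Taking logarithms (and again substituting \(\rho=\frac{1}{2}\)) gives the log-likelihood that is maximized in the later steps:

\[
\log L(\mu_{1},\mu_{2},\sigma^{2}) = -n\log\left(\sqrt{3}\,\pi\sigma^{2}\right) - \frac{2}{3\sigma^{2}}\sum_{i=1}^{n}\left[(X_{i}-\mu_{1})^{2} - (X_{i}-\mu_{1})(Y_{i}-\mu_{2}) + (Y_{i}-\mu_{2})^{2}\right].
\]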
03

Establish the hypotheses

Then we state the hypotheses precisely: under the null hypothesis \(H_{0}\), \(\mu_{1}=\mu_{2}=0\) with \(\sigma^{2}>0\) unknown; under the alternative, \(\mu_{1}\), \(\mu_{2}\), and \(\sigma^{2}\) are all unrestricted.
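In the usual likelihood-ratio notation, the restricted and unrestricted parameter spaces are

\[
\omega = \left\{(0, 0, \sigma^{2}) : \sigma^{2} > 0\right\}, \qquad \Omega = \left\{(\mu_{1}, \mu_{2}, \sigma^{2}) : \mu_{1}, \mu_{2} \in \mathbb{R},\ \sigma^{2} > 0\right\}.
\]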
04

Calculate the likelihood ratio

Next, we form the likelihood ratio, defined as the maximum of the likelihood function over the restricted (null) parameter space divided by its maximum over the full parameter space: \(\Lambda = \frac{\max_{\omega} L}{\max_{\Omega} L}\), as computed below.
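Carrying out the two maximizations (setting the partial derivatives of the log-likelihood to zero; the algebra is routine) gives \(\hat{\mu}_{1}=\bar{X}\) and \(\hat{\mu}_{2}=\bar{Y}\) under \(\Omega\), and in each space \(\hat{\sigma}^{2}=\frac{2Q}{3n}\) for the corresponding quadratic form \(Q\):

\[
Q_{\Omega} = \sum_{i=1}^{n}\left[(X_{i}-\bar{X})^{2} - (X_{i}-\bar{X})(Y_{i}-\bar{Y}) + (Y_{i}-\bar{Y})^{2}\right], \qquad Q_{\omega} = \sum_{i=1}^{n}\left[X_{i}^{2} - X_{i}Y_{i} + Y_{i}^{2}\right].
\]

Since the maximized likelihood in either space equals \(\left(\sqrt{3}\,\pi\hat{\sigma}^{2}\right)^{-n}e^{-n}\), the ratio reduces to

\[
\Lambda = \left(\frac{\hat{\sigma}^{2}_{\Omega}}{\hat{\sigma}^{2}_{\omega}}\right)^{n} = \left(\frac{Q_{\Omega}}{Q_{\omega}}\right)^{n}.
\]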
05

Find the statistic

Finally, note that \(Q_{\omega} = Q_{\Omega} + n\left(\bar{X}^{2} - \bar{X}\bar{Y} + \bar{Y}^{2}\right)\), so \(\Lambda = \left[1 + \frac{n\left(\bar{X}^{2} - \bar{X}\bar{Y} + \bar{Y}^{2}\right)}{Q_{\Omega}}\right]^{-n}\). Thus \(\Lambda\) is a monotone decreasing function of the statistic \(F = \frac{(n-1)\,n\left(\bar{X}^{2} - \bar{X}\bar{Y} + \bar{Y}^{2}\right)}{Q_{\Omega}}\). Under \(H_{0}\), the scaled numerator \(\frac{n\left(\bar{X}^{2} - \bar{X}\bar{Y} + \bar{Y}^{2}\right)}{\sigma^{2}(1-\rho^{2})}\) is \(\chi^{2}(2)\), the scaled denominator \(\frac{Q_{\Omega}}{\sigma^{2}(1-\rho^{2})}\) is \(\chi^{2}(2n-2)\), and the two are independent; hence \(F\) has an \(F\)-distribution with \(2\) and \(2(n-1)\) degrees of freedom.
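As a sanity check on this conclusion, here is a minimal Monte Carlo sketch in Python (assuming NumPy and SciPy are available; the sample size, replication count, and variance below are arbitrary illustrative choices). It simulates the statistic \(F\) under \(H_{0}\) and compares its empirical distribution with \(F(2, 2n-2)\):

```python
import numpy as np
from scipy import stats

# Simulate F = (n-1) * n * (Xbar^2 - Xbar*Ybar + Ybar^2) / Q under H0,
# where Q = sum[(Xi-Xbar)^2 - (Xi-Xbar)(Yi-Ybar) + (Yi-Ybar)^2],
# and compare it with the claimed F(2, 2n-2) distribution.
rng = np.random.default_rng(0)
n, reps, sigma2 = 10, 20_000, 2.0
cov = sigma2 * np.array([[1.0, 0.5], [0.5, 1.0]])  # equal variances, rho = 1/2

fstats = np.empty(reps)
for r in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # H0: mu1 = mu2 = 0
    x, y = xy[:, 0], xy[:, 1]
    xb, yb = x.mean(), y.mean()
    q = np.sum((x - xb) ** 2 - (x - xb) * (y - yb) + (y - yb) ** 2)
    fstats[r] = (n - 1) * n * (xb**2 - xb * yb + yb**2) / q

# Kolmogorov-Smirnov test against F(2, 2n-2); a large p-value is
# consistent with the derived null distribution.
ks = stats.kstest(fstats, stats.f(2, 2 * n - 2).cdf)
print(f"KS statistic: {ks.statistic:.4f}, p-value: {ks.pvalue:.3f}")
```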


