
Let \(X_{1}, X_{2}, \ldots, X_{n}\) and \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be independent random samples from two normal distributions \(N\left(\mu_{1}, \sigma^{2}\right)\) and \(N\left(\mu_{2}, \sigma^{2}\right)\), respectively, where \(\sigma^{2}\) is the common but unknown variance. (a) Find the likelihood ratio \(\Lambda\) for testing \(H_{0}: \mu_{1}=\mu_{2}=0\) against all alternatives. (b) Rewrite \(\Lambda\) so that it is a function of a statistic \(Z\) which has a well-known distribution. (c) Give the distribution of \(Z\) under both null and alternative hypotheses.

Short Answer

Expert verified
Maximizing the likelihood under \( H_{0} \) and under the full model gives the likelihood ratio \[ \Lambda=\left[\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2+\sum_{i=1}^{n}(y_i-\bar{y})^2}{\sum_{i=1}^{n}x_i^2+\sum_{i=1}^{n}y_i^2}\right]^{n}. \] Defining \( Z=\dfrac{n(\bar{X}^2+\bar{Y}^2)/2}{\left[\sum(X_i-\bar{X})^2+\sum(Y_i-\bar{Y})^2\right]/(2n-2)} \), the ratio becomes \( \Lambda=\left(1+\frac{Z}{n-1}\right)^{-n} \), so small values of \( \Lambda \) correspond to large values of \( Z \). Under \( H_{0} \), \( Z \sim F(2,\,2n-2) \); under the alternative, \( Z \) has a noncentral \( F(2,\,2n-2) \) distribution with noncentrality parameter \( \lambda = n(\mu_1^2+\mu_2^2)/\sigma^2 \).

Step by step solution

01

Part (a): Compute the Likelihood Ratio

First write down the joint likelihood of the combined sample \( X = (X_1, X_2, \ldots, X_n) \) and \( Y = (Y_1, Y_2, \ldots, Y_n) \): \[ L(\mu_1,\mu_2,\sigma^2)=\frac{1}{(2\pi \sigma^2)^{n}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left[(x_i-\mu_1)^2+(y_i-\mu_2)^2\right]\right). \] Under \( H_0: \mu_1=\mu_2=0 \) the only free parameter is \( \sigma^2 \); maximizing gives \( \hat{\sigma}_0^2=\frac{1}{2n}\left(\sum_{i=1}^{n} x_i^2+\sum_{i=1}^{n} y_i^2\right) \), and the maximized likelihood is \( L(\hat{\omega})=(2\pi\hat{\sigma}_0^2)^{-n}e^{-n} \). Under the full model the MLEs are \( \hat{\mu}_1=\bar{x} \), \( \hat{\mu}_2=\bar{y} \), and \( \hat{\sigma}^2=\frac{1}{2n}\left(\sum_{i=1}^{n}(x_i-\bar{x})^2+\sum_{i=1}^{n}(y_i-\bar{y})^2\right) \), giving \( L(\hat{\Omega})=(2\pi\hat{\sigma}^2)^{-n}e^{-n} \). The likelihood ratio is therefore \[ \Lambda=\frac{L(\hat{\omega})}{L(\hat{\Omega})}=\left(\frac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{n}=\left[\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2+\sum_{i=1}^{n}(y_i-\bar{y})^2}{\sum_{i=1}^{n}x_i^2+\sum_{i=1}^{n}y_i^2}\right]^{n}. \]
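As a numerical sanity check (not part of the textbook solution), the closed-form \( \Lambda \) can be computed directly from two simulated samples; the function name and sample sizes below are illustrative, assuming standard NumPy.

```python
import numpy as np

def likelihood_ratio(x, y):
    """Lambda = L(omega_hat) / L(Omega_hat) for H0: mu1 = mu2 = 0.

    Both maximized likelihoods reduce to (2*pi*sigma_hat^2)^(-n) * e^(-n),
    so Lambda = (sigma_hat_Omega^2 / sigma_hat_0^2)^n.
    """
    n = len(x)
    # MLE of sigma^2 under H0 (both means fixed at 0), pooling all 2n observations
    s2_null = (np.sum(x**2) + np.sum(y**2)) / (2 * n)
    # MLE of sigma^2 under the full model (means estimated by the sample means)
    s2_full = (np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)) / (2 * n)
    return (s2_full / s2_null)**n

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=20)
y = rng.normal(0.0, 1.0, size=20)
lam = likelihood_ratio(x, y)
print(lam)  # always lies in (0, 1]
```

Since \( \hat{\sigma}^2 \le \hat{\sigma}_0^2 \) always holds (centering at the sample means can only reduce the sum of squares), the computed ratio stays in \( (0, 1] \), as a likelihood ratio must.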
02

Part (b): Rewrite \(\Lambda\) as a Function of the Statistic Z

Rewrite the likelihood ratio in terms of a statistic \( Z \). Using the identities \( \sum x_i^2=\sum(x_i-\bar{x})^2+n\bar{x}^2 \) and \( \sum y_i^2=\sum(y_i-\bar{y})^2+n\bar{y}^2 \), and writing \( D=\sum_{i=1}^{n}(x_i-\bar{x})^2+\sum_{i=1}^{n}(y_i-\bar{y})^2 \) for the pooled sum of squares about the means, \[ \Lambda^{1/n}=\frac{D}{D+n(\bar{x}^2+\bar{y}^2)}=\frac{1}{1+n(\bar{x}^2+\bar{y}^2)/D}. \] Now define \[ Z=\frac{n(\bar{X}^2+\bar{Y}^2)/2}{D/(2n-2)}=(n-1)\,\frac{n(\bar{X}^2+\bar{Y}^2)}{D}, \] so that \( \Lambda=\left(1+\frac{Z}{n-1}\right)^{-n} \). Since \( \Lambda \) is a strictly decreasing function of \( Z \), rejecting \( H_0 \) for small values of \( \Lambda \) is equivalent to rejecting for large values of \( Z \).
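The algebraic identity \( \Lambda=\left(1+\frac{Z}{n-1}\right)^{-n} \) can be verified numerically; the sketch below (names illustrative, assuming NumPy) computes both sides from the same simulated data.

```python
import numpy as np

def lrt_and_z(x, y):
    """Return (Lambda, Z): the closed-form likelihood ratio and the F-type statistic."""
    n = len(x)
    # pooled sum of squares about the sample means
    d = np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)
    # closed-form Lambda from Part (a)
    lam = (d / (d + n * (x.mean()**2 + y.mean()**2)))**n
    # Z = [n(xbar^2 + ybar^2)/2] / [d / (2n - 2)]
    z = (n * (x.mean()**2 + y.mean()**2) / 2) / (d / (2 * n - 2))
    return lam, z

rng = np.random.default_rng(1)
x = rng.normal(size=15)
y = rng.normal(size=15)
lam, z = lrt_and_z(x, y)
# Lambda should equal (1 + Z/(n-1))**(-n) up to rounding
print(lam, (1 + z / 14)**(-15))
```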
03

Part (c): Determine the Distribution of Z

Under \( H_0 \), \( \bar{X}\sim N(0,\sigma^2/n) \) and \( \bar{Y}\sim N(0,\sigma^2/n) \), so \( n\bar{X}^2/\sigma^2 \) and \( n\bar{Y}^2/\sigma^2 \) are independent \( \chi^2(1) \) variables and their sum is \( \chi^2(2) \). Independently of the sample means, \( D/\sigma^2\sim\chi^2(2n-2) \). Hence \( Z \) is the ratio of two independent chi-square variables, each divided by its degrees of freedom, and so has an \( F \) distribution: \( Z\sim F(2,\,2n-2) \) under \( H_0 \). Under the alternative, the numerator chi-square becomes noncentral with noncentrality parameter \( \lambda=n(\mu_1^2+\mu_2^2)/\sigma^2 \), so \( Z \) follows a noncentral \( F(2,\,2n-2;\lambda) \) distribution; setting \( \lambda=0 \) recovers the central null case.
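A Monte Carlo sketch (illustrative, assuming NumPy; not part of the textbook solution) can check the null distribution: simulating under \( H_0 \) with \( n=10 \), the statistic should behave like \( F(2, 18) \), whose mean is \( 18/(18-2)=1.125 \).

```python
import numpy as np

def z_statistic(x, y):
    """Z = [n(xbar^2 + ybar^2)/2] / [pooled SS about the means / (2n - 2)],
    computed row-wise so many replications can be vectorized."""
    n = x.shape[-1]
    xbar = x.mean(axis=-1)
    ybar = y.mean(axis=-1)
    num = n * (xbar**2 + ybar**2) / 2
    den = (((x - xbar[..., None])**2).sum(axis=-1)
           + ((y - ybar[..., None])**2).sum(axis=-1)) / (2 * n - 2)
    return num / den

rng = np.random.default_rng(42)
n, reps = 10, 100_000
x = rng.normal(0.0, 1.0, size=(reps, n))  # H0 true: both means are 0
y = rng.normal(0.0, 1.0, size=(reps, n))
zs = z_statistic(x, y)
# theoretical mean of F(2, 2n-2) = F(2, 18) is 18/16 = 1.125
print(zs.mean())
```

The empirical mean should land close to 1.125; simulating instead with nonzero \( \mu_1, \mu_2 \) shifts the distribution toward larger values, as the noncentral \( F \) predicts.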


Most popular questions from this chapter

If in Example \(8.2.2\) of this section \(H_{0}: \theta=\theta^{\prime}\), where \(\theta^{\prime}\) is a fixed positive number, and \(H_{1}: \theta<\theta^{\prime}\), show that the set \(\left\{\left(x_{1}, x_{2}, \ldots, x_{n}\right): \sum_{1}^{n} x_{i}^{2} \leq c\right\}\) is a uniformly most powerful critical region for testing \(H_{0}\) against \(H_{1}\).

Show that the likelihood ratio principle leads to the same test when testing a simple hypothesis \(H_{0}\) against an alternative simple hypothesis \(H_{1}\), as that given by the Neyman-Pearson theorem. Note that there are only two points in \(\Omega\).

Consider a normal distribution of the form \(N(\theta, 4)\). The simple hypothesis \(H_{0}: \theta=0\) is rejected, and the alternative composite hypothesis \(H_{1}: \theta>0\) is accepted if and only if the observed mean \(\bar{x}\) of a random sample of size 25 is greater than or equal to \(\frac{3}{5}\). Find the power function \(\gamma(\theta), 0 \leq \theta\), of this test.

Consider a random sample \(X_{1}, X_{2}, \ldots, X_{n}\) from a distribution with pdf \(f(x ; \theta)=\theta(1-x)^{\theta-1}\), \(0<x<1\), zero elsewhere, where \(\theta>0\). (a) Find the form of the uniformly most powerful test of \(H_{0}: \theta=1\) against \(H_{1}: \theta>1\). (b) What is the likelihood ratio \(\Lambda\) for testing \(H_{0}: \theta=1\) against \(H_{1}: \theta \neq 1 ?\)

Let \(X\) have the pdf \(f(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), zero elsewhere. We test \(H_{0}: \theta=\frac{1}{2}\) against \(H_{1}: \theta<\frac{1}{2}\) by taking a random sample \(X_{1}, X_{2}, \ldots, X_{5}\) of size \(n=5\) and rejecting \(H_{0}\) if \(Y=\sum_{1}^{n} X_{i}\) is observed to be less than or equal to a constant \(c\). (a) Show that this is a uniformly most powerful test. (b) Find the significance level when \(c=1\). (c) Find the significance level when \(c=0\). (d) By using a randomized test, as discussed in Example \(5.6 .4\), modify the tests given in Parts (b) and (c) to find a test with significance level \(\alpha=\frac{2}{32}\).
