
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be iid \(N\left(\theta_{1}, \theta_{2}\right)\). Show that the likelihood ratio principle for testing \(H_{0}: \theta_{2}=\theta_{2}^{\prime}\) specified, and \(\theta_{1}\) unspecified, against \(H_{1}: \theta_{2} \neq \theta_{2}^{\prime}, \theta_{1}\) unspecified, leads to a test that rejects when \(\sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \leq c_{1}\) or \(\sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \geq c_{2}\), where \(c_{1} < c_{2}\) are selected appropriately.

Short Answer

Expert verified
The hypothesis testing problem is solved using the likelihood ratio principle. The resulting test is based on the sum of squared deviations \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \), which is \( n \) times the unrestricted maximum likelihood estimator of the variance. The test rejects the null hypothesis \( H_{0} \) when this statistic is either smaller than a constant \( c_{1} \) or larger than a constant \( c_{2} \), where \( c_{1} < c_{2} \).

Step by step solution

01

Understanding the likelihood ratio test principle

The likelihood ratio for a test of hypothesis \( H_{0} \) versus \( H_{1} \) is defined as \( \Lambda = \frac{\sup_{H_{0}} L(\theta_{1}, \theta_{2}^{\prime})}{\sup_{\Omega} L(\theta_{1}, \theta_{2})} \), where \( L(\theta_{1}, \theta_{2}) \) is the likelihood function, which measures how likely the observed data are for given parameter values. The numerator is maximized over the free parameter \( \theta_{1} \) with \( \theta_{2} = \theta_{2}^{\prime} \) held fixed, while the denominator is maximized over both \( \theta_{1} \) and \( \theta_{2} \). Since the restricted maximum can never exceed the unrestricted one, \( 0 \leq \Lambda \leq 1 \), and small values of \( \Lambda \) are evidence against \( H_{0} \).
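For this normal model the two maximized likelihoods can be written out explicitly; in both cases the maximizing value of \( \theta_{1} \) is \( \bar{x} \) (a sketch of the standard computation):

```latex
% Likelihood of the N(θ1, θ2) sample:
L(\theta_1,\theta_2) = (2\pi\theta_2)^{-n/2}
  \exp\!\Bigl(-\tfrac{1}{2\theta_2}\textstyle\sum_{i=1}^{n}(x_i-\theta_1)^2\Bigr)

% Under H0 (θ2 = θ2' fixed), the MLE of θ1 is x̄:
\max_{H_0} L = (2\pi\theta_2')^{-n/2}
  \exp\!\Bigl(-\tfrac{1}{2\theta_2'}\textstyle\sum_{i=1}^{n}(x_i-\bar{x})^2\Bigr)

% Unrestricted, the MLEs are θ̂1 = x̄ and θ̂2 = (1/n)Σ(x_i − x̄)², giving:
\max_{\Omega} L = \Bigl(\tfrac{2\pi}{n}\textstyle\sum_{i=1}^{n}(x_i-\bar{x})^2\Bigr)^{-n/2} e^{-n/2}
```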
02

Define the test statistic

Under the full (unrestricted) model, the maximum likelihood estimator of the variance \( \theta_{2} \) of the normal distribution is \( \hat{\theta}_{2} = \frac{1}{n}\sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \); under \( H_{0} \) the variance is fixed at \( \theta_{2}^{\prime} \) and only \( \theta_{1} \) is estimated, by \( \bar{x} \). Substituting these estimates into \( \Lambda \) shows that the likelihood ratio depends on the data only through the sum of squared deviations \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \). This will be our test statistic.
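As a quick numerical sanity check (a sketch, not part of the textbook solution; the simulated sample and the value \( \theta_{2}^{\prime} = 4 \) are made up), the ratio of the two maximized normal likelihoods, computed directly, agrees with the closed form \( (w/n)^{n/2} e^{(n-w)/2} \), where \( w = \sum_{1}^{n}(x_{i}-\bar{x})^{2} / \theta_{2}^{\prime} \):

```python
import numpy as np

def lrt_lambda(x, theta2_0):
    """Λ = (max likelihood under H0: θ2 = theta2_0) / (unrestricted max)."""
    n = len(x)
    s = np.sum((x - x.mean()) ** 2)          # S = Σ(x_i - x̄)², the test statistic
    # Restricted maximum: θ1 = x̄, θ2 = theta2_0
    num = (2 * np.pi * theta2_0) ** (-n / 2) * np.exp(-s / (2 * theta2_0))
    # Unrestricted maximum: θ1 = x̄, θ2 = S/n
    den = (2 * np.pi * s / n) ** (-n / 2) * np.exp(-n / 2)
    return num / den

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=25)  # hypothetical sample, true variance 4
n, theta2_0 = len(x), 4.0
w = np.sum((x - x.mean()) ** 2) / theta2_0
closed_form = (w / n) ** (n / 2) * np.exp((n - w) / 2)
print(lrt_lambda(x, theta2_0), closed_form)  # the two values agree
```

Because the restricted maximum can never exceed the unrestricted one, the computed \( \Lambda \) always lies in \( (0, 1] \).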
03

Determine the rejection regions

The likelihood ratio \( \Lambda \) attains its maximum when the data look exactly as \( H_{0} \) predicts, i.e., when \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \) is close to \( n\theta_{2}^{\prime} \), and \( \Lambda \) decreases toward zero as the statistic moves away from that value in either direction. The likelihood ratio test (LRT) therefore rejects \( H_{0} \) when the test statistic is significantly greater or significantly less than what we would expect if the null hypothesis were true. This means the rejection region is \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \leq c_{1} \) or \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} \geq c_{2} \), where \( c_{1} < c_{2} \) are selected according to the significance level of the test; under \( H_{0} \), \( \sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2}/\theta_{2}^{\prime} \) has a \( \chi^{2}(n-1) \) distribution, which determines the constants for a given level.
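The equivalence of the two rejection regions can be made explicit (a sketch of the standard reduction, writing \( w = \sum_{1}^{n}(x_{i}-\bar{x})^{2}/\theta_{2}^{\prime} \)):

```latex
% Ratio of the restricted and unrestricted maximized likelihoods:
\Lambda
  = \frac{(2\pi\theta_2')^{-n/2}
          \exp\!\bigl(-\sum_{i=1}^{n}(x_i-\bar{x})^2/(2\theta_2')\bigr)}
         {\bigl(\tfrac{2\pi}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\bigr)^{-n/2} e^{-n/2}}
  = \Bigl(\frac{w}{n}\Bigr)^{n/2} e^{(n-w)/2}

% Λ(w) increases on (0, n) and decreases on (n, ∞), with Λ(n) = 1, so
\Lambda \le \lambda_0
\iff
w \le c_1' \ \text{ or } \ w \ge c_2'
\iff
\sum_{i=1}^{n}(x_i-\bar{x})^2 \le c_1 \ \text{ or } \
\sum_{i=1}^{n}(x_i-\bar{x})^2 \ge c_2
```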


Most popular questions from this chapter

If \(X_{1}, X_{2}, \ldots, X_{n}\) is a random sample from a beta distribution with parameters \(\alpha=\beta=\theta>0\), find a best critical region for testing \(H_{0}: \theta=1\) against \(H_{1}: \theta=2\).

Let \(X_{1}, \ldots, X_{n}\) and \(Y_{1}, \ldots, Y_{m}\) follow the location model $$\begin{aligned} X_{i} &=\theta_{1}+Z_{i}, \quad i=1, \ldots, n \\ Y_{i} &=\theta_{2}+Z_{n+i}, \quad i=1, \ldots, m\end{aligned}$$ where \(Z_{1}, \ldots, Z_{n+m}\) are iid random variables with common pdf \(f(z)\). Assume that \(E\left(Z_{i}\right)=0\) and \(\operatorname{Var}\left(Z_{i}\right)=\theta_{3}<\infty\) (a) Show that \(E\left(X_{i}\right)=\theta_{1}, E\left(Y_{i}\right)=\theta_{2}\), and \(\operatorname{Var}\left(X_{i}\right)=\operatorname{Var}\left(Y_{i}\right)=\theta_{3}\). (b) Consider the hypotheses of Example \(8.3 .1\); i.e, $$H_{0}: \theta_{1}=\theta_{2} \text { versus } H_{1}: \theta_{1} \neq \theta_{2}$$ Show that under \(H_{0}\), the test statistic \(T\) given in expression \((8.3 .5)\) has a limiting \(N(0,1)\) distribution. (c) Using Part (b), determine the corresponding large sample test (decision rule) of \(H_{0}\) versus \(H_{1}\). (This shows that the test in Example \(8.3 .1\) is asymptotically correct.)

Consider a distribution having a pmf of the form \(f(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), zero elsewhere. Let \(H_{0}: \theta=\frac{1}{20}\) and \(H_{1}: \theta>\frac{1}{20}\). Use the central limit theorem to determine the sample size \(n\) of a random sample so that a uniformly most powerful test of \(H_{0}\) against \(H_{1}\) has a power function \(\gamma(\theta)\), with approximately \(\gamma\left(\frac{1}{20}\right)=0.05\) and \(\gamma\left(\frac{1}{10}\right)=0.90\).

Let \(X_{1}, \ldots, X_{n}\) denote a random sample from a gamma-type distribution with \(\alpha=2\) and \(\beta=\theta .\) Let \(H_{0}: \theta=1\) and \(H_{1}: \theta>1\) (a) Show that there exists a uniformly most powerful test for \(H_{0}\) against \(H_{1}\), determine the statistic \(Y\) upon which the test may be based, and indicate the nature of the best critical region. (b) Find the pdf of the statistic \(Y\) in Part (a). If we want a significance level of \(0.05\), write an equation which can be used to determine the critical region. Let \(\gamma(\theta), \theta \geq 1\), be the power function of the test. Express the power function as an integral.

Illustrative Example \(8.2 .1\) of this section dealt with a random sample of size \(n=2\) from a gamma distribution with \(\alpha=1, \beta=\theta .\) Thus the mgf of the distribution is \((1-\theta t)^{-1}, t<1 / \theta, \theta \geq 2 .\) Let \(Z=X_{1}+X_{2}\). Show that \(Z\) has a gamma distribution with \(\alpha=2, \beta=\theta .\) Express the power function \(\gamma(\theta)\) of Example \(8.2 .1\) in terms of a single integral. Generalize this for a random sample of size \(n .\)
