
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(N\left(\mu_{0}, \sigma^{2}=\theta\right)\) distribution, where \(0<\theta<\infty\) and \(\mu_{0}\) is known. Show that the likelihood ratio test of \(H_{0}: \theta=\theta_{0}\) versus \(H_{1}: \theta \neq \theta_{0}\) can be based upon the statistic \(W=\sum_{i=1}^{n}\left(X_{i}-\mu_{0}\right)^{2} / \theta_{0}\). Determine the null distribution of \(W\) and give, explicitly, the rejection rule for a level \(\alpha\) test.

Short Answer

Expert verified
The likelihood ratio test can be based on \(W = \sum_{i=1}^{n}(X_{i}-\mu_{0})^{2}/\theta_{0}\). The null distribution of \(W\) is \(\chi^{2}(n)\). For a level \(\alpha\) test, reject \(H_{0}\) if either \(W \geq \chi_{1-\alpha/2}^{2}(n)\) or \(W \leq \chi_{\alpha/2}^{2}(n)\), where \(\chi_{p}^{2}(n)\) denotes the \(p\)th quantile of the \(\chi^{2}(n)\) distribution.

Step by step solution

01

- Deriving the Test Statistic

Firstly, the likelihood function for this normal distribution with known \(\mu_0\) and unknown variance \(\theta\) is $$ L(\theta)=(2\pi\theta)^{-n/2}\exp\left(-\frac{1}{2\theta}\sum_{i=1}^{n}\left(X_{i}-\mu_{0}\right)^{2}\right). $$ Over the full parameter space \(0<\theta<\infty\), the likelihood is maximized at \(\hat{\theta}=\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\mu_{0}\right)^{2}\) (set the derivative of \(\log L\) with respect to \(\theta\) equal to zero). For \(H_{0}: \theta=\theta_{0}\) versus \(H_{1}: \theta \neq \theta_{0}\), the likelihood ratio is the restricted maximum divided by the unrestricted maximum, $$ \Lambda=\frac{L(\theta_{0})}{L(\hat{\theta})}=\left(\frac{\hat{\theta}}{\theta_{0}}\right)^{n/2}\exp\left(-\frac{1}{2}\left[\frac{\sum_{i=1}^{n}(X_{i}-\mu_{0})^{2}}{\theta_{0}}-\frac{\sum_{i=1}^{n}(X_{i}-\mu_{0})^{2}}{\hat{\theta}}\right]\right). $$ Writing \(W = \sum_{i=1}^{n}(X_{i}-\mu_{0})^{2}/\theta_{0}\), we have \(\hat{\theta}/\theta_{0}=W/n\) and \(\sum_{i=1}^{n}(X_{i}-\mu_{0})^{2}/\hat{\theta}=n\), so $$ \Lambda=\left(\frac{W}{n}\right)^{n/2} e^{(n-W)/2}. $$ Thus \(\Lambda\) depends on the data only through \(W\): it increases for \(W<n\), attains its maximum \(\Lambda=1\) at \(W=n\), and decreases for \(W>n\). Consequently \(\Lambda \leq c\) if and only if \(W\) is sufficiently small or sufficiently large, so the test can be based on \(W\).
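
To see the two-sided shape concretely, here is a minimal R sketch (the choice \(n = 10\) is an arbitrary assumption for illustration) that plots \(\Lambda\) as a function of \(W\) using the expression derived above:

```r
# Lambda as a function of W for an illustrative sample size n = 10.
# The curve peaks at W = n and falls off in both directions, so
# {Lambda <= c} corresponds to a two-sided rejection region in W.
n <- 10
Lambda <- function(W) (W / n)^(n / 2) * exp((n - W) / 2)
curve(Lambda, from = 0.01, to = 4 * n, xlab = "W", ylab = "Lambda(W)")
abline(v = n, lty = 2)  # maximum of Lambda occurs at W = n
```
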
02

- Determining the Null Distribution

Under the null hypothesis \(H_{0}: \theta =\theta_{0}\), each \((X_{i}-\mu_{0})/\sqrt{\theta_{0}}\) is a standard normal variable, and the sum of squares of \(n\) independent standard normal variables has a chi-square distribution with \(n\) degrees of freedom. Hence \(W=\sum_{i=1}^{n}\left[(X_{i}-\mu_{0})/\sqrt{\theta_{0}}\right]^{2} \sim \chi^{2}(n)\) under \(H_{0}\).
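
This is easy to check by simulation. A minimal R sketch, using hypothetical values \(\mu_0 = 5\), \(\theta_0 = 2\), and \(n = 10\) (none of which come from the problem), compares the empirical quantiles of simulated \(W\) values with the \(\chi^{2}(10)\) quantiles:

```r
# Monte Carlo check that W ~ chi-square(n) under H0.
set.seed(1)
n <- 10; mu0 <- 5; theta0 <- 2
W <- replicate(1e4, sum((rnorm(n, mean = mu0, sd = sqrt(theta0)) - mu0)^2) / theta0)
round(quantile(W, c(0.025, 0.5, 0.975)), 2)     # empirical quantiles of W
round(qchisq(c(0.025, 0.5, 0.975), df = n), 2)  # chi-square(10) quantiles
```
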
03

- Setting Up the Rejection Rule

Because \(\Lambda\) is small exactly when \(W\) lies far from \(n\) in either direction, the likelihood ratio test is two-sided in \(W\). The conventional equal-tail rule for a level \(\alpha\) test rejects \(H_{0}\) if either \(W \geq \chi_{1-\alpha/2}^{2}(n)\) or \(W \leq \chi_{\alpha/2}^{2}(n)\), where \(\chi_{1-\alpha/2}^{2}(n)\) and \(\chi_{\alpha/2}^{2}(n)\) are the \((1-\alpha/2)\)th and \((\alpha/2)\)th quantiles of the chi-square distribution with \(n\) degrees of freedom, respectively. Each tail then has probability \(\alpha/2\) under \(H_{0}\), so the test has size \(\alpha\).
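Putting the steps together, here is a minimal R sketch of the equal-tail level \(\alpha\) test; the function name lrt_variance and all numeric values in the example are illustrative choices, not part of the original problem.

```r
# Equal-tail level-alpha likelihood ratio test for H0: theta = theta0.
lrt_variance <- function(x, mu0, theta0, alpha = 0.05) {
  n <- length(x)
  W <- sum((x - mu0)^2) / theta0          # the test statistic
  lower <- qchisq(alpha / 2, df = n)       # chi-square(n) alpha/2 quantile
  upper <- qchisq(1 - alpha / 2, df = n)   # chi-square(n) 1-alpha/2 quantile
  list(W = W, lower = lower, upper = upper,
       reject = (W <= lower || W >= upper))
}

# Usage on simulated data (hypothetical mu0 = 5, theta0 = 2):
set.seed(2)
lrt_variance(rnorm(15, mean = 5, sd = sqrt(2)), mu0 = 5, theta0 = 2)
```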


Most popular questions from this chapter

Let \(n\) independent trials of an experiment be such that \(x_{1}, x_{2}, \ldots, x_{k}\) are the respective numbers of times that the experiment ends in the mutually exclusive and exhaustive events \(C_{1}, C_{2}, \ldots, C_{k}\). If \(p_{i}=P\left(C_{i}\right)\) is constant throughout the \(n\) trials, then the probability of that particular sequence of trials is \(L=p_{1}^{x_{1}} p_{2}^{x_{2}} \cdots p_{k}^{x_{k}}\). (a) Recalling that \(p_{1}+p_{2}+\cdots+p_{k}=1\), show that the likelihood ratio for testing \(H_{0}: p_{i}=p_{i 0}>0, i=1,2, \ldots, k\), against all alternatives is given by $$ \Lambda=\prod_{i=1}^{k}\left(\frac{\left(p_{i 0}\right)^{x_{i}}}{\left(x_{i} / n\right)^{x_{i}}}\right). $$ (b) Show that $$ -2 \log \Lambda=\sum_{i=1}^{k} \frac{x_{i}\left(x_{i}-n p_{i 0}\right)^{2}}{\left(n p_{i}^{\prime}\right)^{2}}, $$ where \(p_{i}^{\prime}\) is between \(p_{i 0}\) and \(x_{i} / n\). Hint: Expand \(\log p_{i 0}\) in a Taylor series with the remainder in the term involving \(\left(p_{i 0}-x_{i} / n\right)^{2}\). (c) For large \(n\), argue that \(x_{i} /\left(n p_{i}^{\prime}\right)^{2}\) is approximated by \(1 /\left(n p_{i 0}\right)\) and hence \(-2 \log \Lambda \approx \sum_{i=1}^{k} \frac{\left(x_{i}-n p_{i 0}\right)^{2}}{n p_{i 0}}\) when \(H_{0}\) is true. Theorem 6.5.1 says that the right-hand member of this last equation defines a statistic that has an approximate chi-square distribution with \(k-1\) degrees of freedom. Note that the dimension of \(\Omega\) minus the dimension of \(\omega\) is \((k-1)-0=k-1\).
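As an illustration of part (c), a short R sketch (with arbitrary choices of \(n\) and \(p_{i0}\), simulated under \(H_0\)) compares the exact \(-2 \log \Lambda\) with the Pearson approximation on multinomial counts:

```r
# For large n, -2*log(Lambda) and the Pearson statistic
# sum((x - n*p0)^2 / (n*p0)) should be close. n and p0 are arbitrary.
set.seed(3)
p0 <- c(0.2, 0.3, 0.5)
n  <- 500
x  <- as.vector(rmultinom(1, size = n, prob = p0))
neg2logLambda <- -2 * sum(x * log(n * p0 / x))   # exact, from Lambda above
pearson       <- sum((x - n * p0)^2 / (n * p0))  # Pearson approximation
c(neg2logLambda = neg2logLambda, pearson = pearson)
```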

Let \(X_{1}, \ldots, X_{n}\) and \(Y_{1}, \ldots, Y_{m}\) be independent random samples from the distributions \(N\left(\theta_{1}, \theta_{3}\right)\) and \(N\left(\theta_{2}, \theta_{4}\right)\), respectively. (a) Show that the likelihood ratio for testing \(H_{0}: \theta_{1}=\theta_{2}, \theta_{3}=\theta_{4}\) against all alternatives is given by $$ \frac{\left[\sum_{1}^{n}\left(x_{i}-\bar{x}\right)^{2} / n\right]^{n / 2}\left[\sum_{1}^{m}\left(y_{i}-\bar{y}\right)^{2} / m\right]^{m / 2}}{\left\{\left[\sum_{1}^{n}\left(x_{i}-u\right)^{2}+\sum_{1}^{m}\left(y_{i}-u\right)^{2}\right] /(m+n)\right\}^{(n+m) / 2}}, $$ where \(u=(n \bar{x}+m \bar{y}) /(n+m)\) is the mean of the combined sample. (b) Show that the likelihood ratio test for testing \(H_{0}: \theta_{3}=\theta_{4}\), \(\theta_{1}\) and \(\theta_{2}\) unspecified, against \(H_{1}: \theta_{3} \neq \theta_{4}\), \(\theta_{1}\) and \(\theta_{2}\) unspecified, can be based on the random variable $$ F=\frac{\sum_{1}^{n}\left(X_{i}-\bar{X}\right)^{2} /(n-1)}{\sum_{1}^{m}\left(Y_{i}-\bar{Y}\right)^{2} /(m-1)} $$
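For part (b), the statistic \(F\) is simply the ratio of the two sample variances, which the following R sketch (on simulated data with arbitrary parameter values) computes both by hand and via the built-in var.test:

```r
# F statistic for testing equality of two normal variances.
set.seed(4)
x <- rnorm(12, mean = 1, sd = 2)   # arbitrary simulated samples
y <- rnorm(15, mean = 3, sd = 2)
F_manual <- (sum((x - mean(x))^2) / (length(x) - 1)) /
            (sum((y - mean(y))^2) / (length(y) - 1))
F_manual
var.test(x, y)$statistic  # same value: ratio of sample variances
```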

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(N\left(\theta, \sigma^{2}\right)\) distribution, where \(\sigma^{2}\) is fixed but \(-\infty<\theta<\infty\). (a) Show that the mle of \(\theta\) is \(\bar{X}\). (b) If \(\theta\) is restricted by \(0 \leq \theta<\infty\), show that the mle of \(\theta\) is \(\widehat{\theta}=\max \{0, \bar{X}\}\).
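A brief illustration of part (b) in R, on simulated data with arbitrary parameter values (the restricted mle simply truncates \(\bar{X}\) at zero):

```r
# Restricted MLE of a normal mean on [0, infinity): max(0, sample mean).
set.seed(6)
x <- rnorm(20, mean = -0.1, sd = 1)  # arbitrary simulated sample
theta_hat <- max(0, mean(x))
theta_hat
```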

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a distribution with pmf \(p(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), where \(0<\theta<1\). We wish to test \(H_{0}: \theta=1/3\) versus \(H_{1}: \theta \neq 1/3\). (a) Find \(\Lambda\) and \(-2 \log \Lambda\). (b) Determine the Wald-type test. (c) What is Rao's score statistic?
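A numeric sketch of (a)-(c) in R on a simulated Bernoulli sample. The Wald and score statistics below use one common form (variance evaluated at the estimate and at the null value, respectively); these should be checked against the text's definitions, and the sample itself is an arbitrary assumption:

```r
# Bernoulli likelihood ratio, Wald, and score statistics for H0: theta = 1/3.
set.seed(5)
x <- rbinom(40, size = 1, prob = 0.5)   # simulated data, arbitrary truth
n <- length(x); s <- sum(x)
theta0 <- 1/3; theta_hat <- mean(x)     # the mle is the sample mean
neg2logLambda <- -2 * (s * log(theta0 / theta_hat) +
                       (n - s) * log((1 - theta0) / (1 - theta_hat)))
wald  <- n * (theta_hat - theta0)^2 / (theta_hat * (1 - theta_hat))
score <- n * (theta_hat - theta0)^2 / (theta0 * (1 - theta0))
c(neg2logLambda = neg2logLambda, wald = wald, score = score)
```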

The data file beta30. rda contains 30 observations generated from a beta \((\theta, 1)\) distribution, where \(\theta=4\). The file can be downloaded at the site discussed in the Preface. (a) Obtain a histogram of the data using the argument \(\mathrm{pr}=\mathrm{T}\). Overlay the pdf of a \(\beta(4,1)\) pdf. Comment. (b) Using the results of Exercise \(6.2 .12\), compute the maximum likelihood estimate based on the data. (c) Using the confidence interval found in Part (c) of Exercise 6.2.12, compute the \(95 \%\) confidence interval for \(\theta\) based on the data. Is the confidence interval successful?
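A possible R workflow for parts (a)-(c), with the assumptions flagged: the name of the object stored in beta30.rda is not shown here (inspect with ls() after load()) and is assumed to be a numeric vector x, and the formulas \(\hat{\theta}=-n/\sum \log X_{i}\) with approximate standard error \(\hat{\theta}/\sqrt{n}\) are assumed to be the results of Exercise 6.2.12 for a beta\((\theta,1)\) sample:

```r
# Sketch for beta30.rda; object name x and the MLE/CI forms are assumptions.
load("beta30.rda")                      # check ls() for the actual object name
n <- length(x)
hist(x, pr = TRUE)                      # part (a): density-scale histogram
curve(4 * x^3, from = 0, to = 1, add = TRUE)  # overlay the beta(4,1) pdf
theta_hat <- -n / sum(log(x))           # part (b): assumed MLE from Ex. 6.2.12
theta_hat + c(-1, 1) * 1.96 * theta_hat / sqrt(n)  # part (c): approx. 95% CI
```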
