Consider a distribution having a pmf of the form \(f(x ; \theta)=\theta^{x}(1-\theta)^{1-x}\), \(x=0,1\), zero elsewhere. Let \(H_{0}: \theta=\frac{1}{20}\) and \(H_{1}: \theta>\frac{1}{20}\). Use the Central Limit Theorem to determine the sample size \(n\) of a random sample so that a uniformly most powerful test of \(H_{0}\) against \(H_{1}\) has a power function \(\gamma(\theta)\), with approximately \(\gamma\left(\frac{1}{20}\right)=0.05\) and \(\gamma\left(\frac{1}{10}\right)=0.90\).

Short Answer

The main steps are: write down the power function of the uniformly most powerful test, use the Central Limit Theorem to approximate it by a normal probability, convert the two requirements \(\gamma\left(\frac{1}{20}\right)=0.05\) and \(\gamma\left(\frac{1}{10}\right)=0.90\) into standard normal quantiles, set up the resulting system of two equations in the cutoff and the sample size, and solve for \(n\). Rounding up to the nearest whole number gives an integer sample size, and the approximation should then be checked numerically.

Step by step solution

01

Understanding the question

The problem involves hypothesis testing for the probability mass function (pmf) \(f(x ; \theta)=\theta^{x}(1-\theta)^{1-x}\), \(x=0,1\), which is the pmf of a Bernoulli(\(\theta\)) random variable. The Central Limit Theorem (CLT) is used to approximate the power function of the uniformly most powerful test of \(H_{0}: \theta=\frac{1}{20}\) against \(H_{1}: \theta>\frac{1}{20}\) and to determine the sample size \(n\) that achieves the two stated power values.
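Since \(X\) is Bernoulli(\(\theta\)) and \(X^{2}=X\) when \(x=0,1\), the mean and variance needed later for the normal approximation follow directly:
$$
E(X)=0 \cdot(1-\theta)+1 \cdot \theta=\theta, \qquad \operatorname{Var}(X)=E\left(X^{2}\right)-[E(X)]^{2}=\theta-\theta^{2}=\theta(1-\theta) .
$$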
02

Writing down the power function of the test

Start by writing down the power function of the test, \(\gamma(\theta)\). Because the Bernoulli family has a monotone likelihood ratio in \(\sum_{i=1}^{n} X_{i}\), the uniformly most powerful test of \(H_{0}: \theta=\frac{1}{20}\) against \(H_{1}: \theta>\frac{1}{20}\) rejects \(H_{0}\) when the sample mean is large, say \(\bar{X} \geq c\), and its power function is \(\gamma(\theta)=P_{\theta}(\bar{X} \geq c)\). The two requirements \(\gamma\left(\frac{1}{20}\right)=0.05\) (the size of the test) and \(\gamma\left(\frac{1}{10}\right)=0.90\) give two conditions that together determine the cutoff \(c\) and the sample size \(n\).
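Concretely, writing the rejection region in terms of the number of successes \(\sum_{i=1}^{n} X_{i}\), the two conditions are
$$
\gamma(\theta)=P_{\theta}(\bar{X} \geq c)=P_{\theta}\left(\sum_{i=1}^{n} X_{i} \geq n c\right), \quad \text { with } \quad \gamma\left(\tfrac{1}{20}\right)=0.05, \quad \gamma\left(\tfrac{1}{10}\right)=0.90 .
$$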
03

Apply the Central Limit Theorem

By the Central Limit Theorem, the sum (or mean) of a large number of independent and identically distributed random variables is approximately normally distributed. We can use this to approximate probabilities about the sample mean \(\bar{X}\). Specifically, the statistic \(Z=\sqrt{n}(\bar{X}-E(X)) / \sqrt{\operatorname{Var}(X)}\) is approximately standard normal when \(n\) is large, where for this Bernoulli pmf \(E(X)=\theta\) and \(\operatorname{Var}(X)=\theta(1-\theta)\).
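Applied to the rejection region \(\bar{X} \geq c\), this gives an approximate expression for the power function, with \(\Phi\) denoting the standard normal cdf:
$$
\gamma(\theta)=P_{\theta}(\bar{X} \geq c)=P_{\theta}\left(\frac{\sqrt{n}(\bar{X}-\theta)}{\sqrt{\theta(1-\theta)}} \geq \frac{\sqrt{n}(c-\theta)}{\sqrt{\theta(1-\theta)}}\right) \approx 1-\Phi\left(\frac{\sqrt{n}(c-\theta)}{\sqrt{\theta(1-\theta)}}\right) .
$$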
04

Transform the power probabilities into Z-scores

The quantiles corresponding to the probabilities \(0.05\) and \(0.90\) can be found from the standard normal distribution table or using a computation tool. The corresponding values are \(Z_{0.05}=-1.645\) and \(Z_{0.90} \approx 1.282\) (often rounded to \(1.28\)).
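To see how these values will enter the equations: the test puts probability \(0.05\) in the upper tail of the approximating normal when \(\theta=\frac{1}{20}\) and probability \(0.90\) in the upper tail when \(\theta=\frac{1}{10}\), and
$$
P(Z \geq 1.645)=1-\Phi(1.645)=0.05, \qquad P(Z \geq-1.282)=\Phi(1.282)=0.90 ,
$$
so the standardized cutoff must equal \(1.645\) under \(\theta=\frac{1}{20}\) and \(-1.282\) under \(\theta=\frac{1}{10}\).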
05

Setting up the system of equations

Having the normal approximation of the power function and the quantiles above, we can formulate the following system of equations in the cutoff \(c\) and the sample size \(n\), where \(\operatorname{Var}_{\theta}(X)\) denotes the variance of \(X\) under the given \(\theta\): 1) \(\sqrt{n}\left(c-\frac{1}{20}\right) / \sqrt{\operatorname{Var}_{1 / 20}(X)}=1.645\), 2) \(\sqrt{n}\left(c-\frac{1}{10}\right) / \sqrt{\operatorname{Var}_{1 / 10}(X)}=-1.282\)
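One convenient way to see how the system will be solved: solving each equation for \(c\) and subtracting eliminates \(c\) and leaves a single equation in \(\sqrt{n}\):
$$
c=\frac{1}{20}+\frac{1.645 \sqrt{\operatorname{Var}_{1 / 20}(X)}}{\sqrt{n}}=\frac{1}{10}-\frac{1.282 \sqrt{\operatorname{Var}_{1 / 10}(X)}}{\sqrt{n}} \quad \Longrightarrow \quad \frac{1}{10}-\frac{1}{20}=\frac{1.645 \sqrt{\operatorname{Var}_{1 / 20}(X)}+1.282 \sqrt{\operatorname{Var}_{1 / 10}(X)}}{\sqrt{n}} .
$$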
06

Solving system of equations for sample size \(n\)

To find the required sample size \(n\), solve the system of equations. Since \(E(X)=\theta\) and \(\operatorname{Var}(X)=\theta(1-\theta)\) for a Bernoulli distribution, substitute \(\operatorname{Var}_{1 / 20}(X)=\frac{1}{20} \cdot \frac{19}{20}\) and \(\operatorname{Var}_{1 / 10}(X)=\frac{1}{10} \cdot \frac{9}{10}\) into the equations and solve for \(n\) (and then for \(c\)). The resulting \(n\) will generally not be an integer; since we cannot take a fraction of an observation, round up to the nearest whole number.
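Carrying the arithmetic through as a rough check, using the \(z\)-values \(1.645\) and \(1.282\):
$$
\sqrt{n}=\frac{1.645 \sqrt{\frac{1}{20} \cdot \frac{19}{20}}+1.282 \sqrt{\frac{1}{10} \cdot \frac{9}{10}}}{\frac{1}{10}-\frac{1}{20}} \approx \frac{1.645(0.218)+1.282(0.300)}{0.05} \approx \frac{0.743}{0.05} \approx 14.9 ,
$$
so \(n \approx 220.9\), which rounds up to \(n=221\). Substituting back into either equation gives a cutoff of roughly \(c \approx 0.074\); in terms of the number of successes, \(n c \approx 16.4\), so in practice \(H_{0}\) is rejected when \(\sum x_{i} \geq 17\).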
07

Empirical verification

Once the sample size and cutoff have been determined, the test can be carried out: draw a random sample of size \(n\), compute the number of successes \(\sum x_{i}\) (equivalently \(\bar{x}\)), and reject \(H_{0}\) if it meets or exceeds the cutoff; otherwise fail to reject \(H_{0}\). It is also worth verifying numerically that the resulting test has size and power close to the targets \(0.05\) and \(0.90\), since the Central Limit Theorem only provides an approximation.
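Below is a minimal sketch of such a numerical check, assuming the approximate values obtained above (\(n=221\), reject when the number of successes is at least \(17\)); it evaluates the exact binomial tail probabilities with SciPy. Because the binomial distribution is discrete, the attained size and power will not match the targets exactly, but they should be close to \(0.05\) and \(0.90\).

```python
# Minimal sketch: check the size and power of the rule
# "reject H0 when the number of successes is >= 17" with n = 221.
# n and the cutoff are the approximate values from the CLT calculation above.
from scipy.stats import binom

n = 221       # sample size from the CLT calculation (rounded up)
cutoff = 17   # reject H0 if sum of x_i >= 17 (smallest integer >= n*c)

# binom.sf(k, n, p) returns P(S > k), so P(S >= cutoff) = binom.sf(cutoff - 1, n, p).
size = binom.sf(cutoff - 1, n, 1 / 20)    # P(reject H0) when theta = 1/20
power = binom.sf(cutoff - 1, n, 1 / 10)   # P(reject H0) when theta = 1/10

print(f"size  ~ {size:.3f}  (target 0.05)")
print(f"power ~ {power:.3f}  (target 0.90)")
```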

