
Consider two Bernoulli distributions with unknown parameters \(p_{1}\) and \(p_{2}\). If \(Y\) and \(Z\) equal the numbers of successes in two independent random samples, each of size \(n\), from the respective distributions, determine the mles of \(p_{1}\) and \(p_{2}\) if we know that \(0 \leq p_{1} \leq p_{2} \leq 1\).

Short Answer

Expert verified
The unrestricted maximum likelihood estimates are \(\hat{p}_{1} = \frac{Y}{n}\) and \(\hat{p}_{2} = \frac{Z}{n}\). If \(\frac{Y}{n} \leq \frac{Z}{n}\), these already satisfy the constraint \(0 \leq p_{1} \leq p_{2} \leq 1\) and are the constrained MLEs. If \(\frac{Y}{n} > \frac{Z}{n}\), the constrained maximum occurs on the boundary \(p_{1} = p_{2}\), and both parameters are estimated by the pooled proportion \(\hat{p}_{1} = \hat{p}_{2} = \frac{Y+Z}{2n}\).

Step by step solution

01

Write down the likelihood function

For a Bernoulli sample, the likelihood of observing \(y\) successes in \(n\) trials is proportional to \(p^{y}(1-p)^{n-y}\). Because the two samples are independent, the joint likelihood for \(Y\) and \(Z\) is the product \(L(p_{1}, p_{2}) = p_{1}^{Y}(1-p_{1})^{n-Y}\,p_{2}^{Z}(1-p_{2})^{n-Z}\).
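
As a numerical companion (an addition, not part of the original solution), here is a minimal Python sketch of this joint log-likelihood; the function name joint_log_lik and the use of NumPy are illustrative choices, and it assumes \(0 < p_{1}, p_{2} < 1\) so the logarithms are finite.

    import numpy as np

    def joint_log_lik(p1, p2, y, z, n):
        # log of p1^y (1-p1)^(n-y) * p2^z (1-p2)^(n-z); assumes 0 < p1, p2 < 1
        return (y * np.log(p1) + (n - y) * np.log(1 - p1)
                + z * np.log(p2) + (n - z) * np.log(1 - p2))

Note that the expression separates into a term involving only \(p_{1}\) and a term involving only \(p_{2}\), which is what allows the two parameters to be maximized separately in the unconstrained case.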
02

Log transformation

To simplify the maximization, we take the logarithm of the likelihood function. Because the logarithm is increasing, maximizing the log-likelihood is equivalent to maximizing the likelihood, and any multiplicative constants (such as binomial coefficients) become additive constants that do not affect the location of the maximum. This yields \(\ell(p_{1}, p_{2}) = Y \log p_{1} + (n - Y)\log(1 - p_{1}) + Z \log p_{2} + (n - Z)\log(1 - p_{2})\).
03

Solve for MLEs

Setting the partial derivatives of the log-likelihood with respect to \(p_{1}\) and \(p_{2}\) equal to zero, \[ \frac{\partial \ell}{\partial p_{1}}=\frac{Y}{p_{1}}-\frac{n-Y}{1-p_{1}}=0 \quad \text{and} \quad \frac{\partial \ell}{\partial p_{2}}=\frac{Z}{p_{2}}-\frac{n-Z}{1-p_{2}}=0, \] gives \(Y = n p_{1}\) and \(Z = n p_{2}\), so the unrestricted MLEs are \(\hat{p}_{1} = \frac{Y}{n}\) and \(\hat{p}_{2} = \frac{Z}{n}\). We must still respect the constraint \(0 \leq p_{1} \leq p_{2} \leq 1\). If \(\frac{Y}{n} \leq \frac{Z}{n}\), the unrestricted MLEs satisfy the constraint and are the constrained MLEs. If \(\frac{Y}{n} > \frac{Z}{n}\), the constrained maximum lies on the boundary \(p_{1} = p_{2} = p\); substituting this common value reduces the log-likelihood to \((Y+Z)\log p + (2n - Y - Z)\log(1-p)\), which is maximized by the pooled proportion \[ \hat{p}_{1} = \hat{p}_{2} = \frac{Y+Z}{2n}. \]
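
As a sanity check on the boundary case (an addition to the solution, not part of it), the sketch below compares the closed-form constrained MLE against a brute-force grid search over the feasible region \(p_{1} \leq p_{2}\); the counts y, z, n are made-up illustrative values chosen so that \(Y/n > Z/n\).

    import numpy as np

    def constrained_mle(y, z, n):
        # Constrained MLE of (p1, p2) under 0 <= p1 <= p2 <= 1
        p1, p2 = y / n, z / n
        if p1 <= p2:
            return p1, p2
        pooled = (y + z) / (2 * n)   # boundary case p1 = p2
        return pooled, pooled

    y, z, n = 14, 9, 20              # illustrative counts with y/n > z/n
    grid = np.linspace(0.001, 0.999, 999)
    p1g, p2g = np.meshgrid(grid, grid, indexing="ij")
    ll = (y * np.log(p1g) + (n - y) * np.log(1 - p1g)
          + z * np.log(p2g) + (n - z) * np.log(1 - p2g))
    ll[p1g > p2g] = -np.inf          # enforce the constraint p1 <= p2
    i, j = np.unravel_index(np.argmax(ll), ll.shape)
    print(constrained_mle(y, z, n))  # (0.575, 0.575)
    print(grid[i], grid[j])          # approximately 0.575 0.575

Both approaches land on the pooled proportion \((14+9)/40 = 0.575\), confirming that the constrained maximum sits on the boundary \(p_{1} = p_{2}\).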


Most popular questions from this chapter

Rao (1973, page 368) considers a problem in the estimation of linkage in genetics. McLachlan and Krishnan (1997) also discuss this problem, and we present their model. For our purposes it can be described as a multinomial model with the four categories \(C_{1}, C_{2}, C_{3},\) and \(C_{4}\). For a sample of size \(n\), let \(\mathbf{X}=\left(X_{1}, X_{2}, X_{3}, X_{4}\right)^{\prime}\) denote the observed frequencies of the four categories. Hence, \(n=\sum_{i=1}^{4} X_{i}\). The probability model is $$\begin{array}{|c|c|c|c|}\hline C_{1} & C_{2} & C_{3} & C_{4} \\ \hline \frac{1}{2}+\frac{1}{4}\theta & \frac{1}{4}-\frac{1}{4}\theta & \frac{1}{4}-\frac{1}{4}\theta & \frac{1}{4}\theta \\ \hline\end{array}$$ where the parameter \(\theta\) satisfies \(0 \leq \theta \leq 1\). In this exercise, we obtain the mle of \(\theta\). (a) Show that the likelihood function is given by $$L(\theta \mid \mathbf{x})=\frac{n!}{x_{1}!\, x_{2}!\, x_{3}!\, x_{4}!}\left[\frac{1}{2}+\frac{1}{4}\theta\right]^{x_{1}}\left[\frac{1}{4}-\frac{1}{4}\theta\right]^{x_{2}+x_{3}}\left[\frac{1}{4}\theta\right]^{x_{4}}$$ (b) Show that the log of the likelihood function can be expressed as a constant (not involving parameters) plus the term $$ x_{1} \log [2+\theta]+\left[x_{2}+x_{3}\right] \log [1-\theta]+x_{4} \log \theta $$ (c) Obtain the partial derivative of the last expression, set the result to 0, and solve for the mle. (This results in a quadratic equation with one positive and one negative root.)
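
For part (c), the score equation is \(\frac{x_{1}}{2+\theta}-\frac{x_{2}+x_{3}}{1-\theta}+\frac{x_{4}}{\theta}=0\), which clears denominators to the quadratic \(n\theta^{2}-\left(x_{1}-2x_{2}-2x_{3}-x_{4}\right)\theta-2x_{4}=0\). A minimal Python sketch of the positive root (the counts below are illustrative, not taken from the exercise):

    import numpy as np

    def linkage_mle(x1, x2, x3, x4):
        # Positive root of n*theta^2 - b*theta - 2*x4 = 0, b = x1 - 2*x2 - 2*x3 - x4
        n = x1 + x2 + x3 + x4
        b = x1 - 2 * x2 - 2 * x3 - x4
        return (b + np.sqrt(b * b + 8.0 * n * x4)) / (2 * n)

    print(linkage_mle(125, 18, 20, 34))  # about 0.6268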

Suppose \(X_{1}, X_{2}, \ldots, X_{n_{1}}\) are a random sample from a \(N(\theta, 1)\) distribution and that \(Z_{1}, Z_{2}, \ldots, Z_{n_{2}}\) are missing observations. Show that the first-step EM estimate is $$\widehat{\theta}^{(1)}=\frac{n_{1} \bar{x}+n_{2} \widehat{\theta}^{(0)}}{n},$$ where \(\widehat{\theta}^{(0)}\) is an initial estimate of \(\theta\) and \(n=n_{1}+n_{2}\). Note that if \(\widehat{\theta}^{(0)}=\bar{x}\), then \(\widehat{\theta}^{(k)}=\bar{x}\) for all \(k\).
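
A minimal sketch of this update rule (pure Python, with illustrative values; the iteration converges to \(\bar{x}\) because each step is a contraction toward it):

    def em_iterate(xbar, n1, n2, theta0, steps=25):
        # Repeats theta <- (n1*xbar + n2*theta) / (n1 + n2)
        theta = theta0
        for _ in range(steps):
            theta = (n1 * xbar + n2 * theta) / (n1 + n2)
        return theta

    print(em_iterate(xbar=2.0, n1=30, n2=10, theta0=0.0))  # approaches 2.0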

Prove that \(\bar{X}\), the mean of a random sample of size \(n\) from a distribution that is \(N\left(\theta, \sigma^{2}\right),-\infty<\theta<\infty\), is, for every known \(\sigma^{2}>0\), an efficient estimator of \(\theta\).

Let the table $$\begin{array}{c|cccccc} x & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \text{Frequency} & 6 & 10 & 14 & 13 & 6 & 1 \end{array}$$ represent a summary of a sample of size 50 from a binomial distribution having \(n=5\). Find the mle of \(P(X \geq 3)\).
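
By invariance of the MLE, \(\hat{p}=\sum_{x} x f_{x} /(5 \cdot 50)\) can be plugged into \(P(X \geq 3)\). A minimal sketch (assuming SciPy is available; binom.sf(2, 5, p) is the survival function \(P(X > 2) = P(X \geq 3)\)):

    from scipy.stats import binom

    freq = {0: 6, 1: 10, 2: 14, 3: 13, 4: 6, 5: 1}
    successes = sum(x * f for x, f in freq.items())   # 106
    trials = 5 * sum(freq.values())                   # 5 * 50 = 250
    p_hat = successes / trials                        # 0.424
    print(binom.sf(2, 5, p_hat))                      # MLE of P(X >= 3), about 0.36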

Let \(S^{2}\) be the sample variance of a random sample of size \(n>1\) from \(N(\mu, \theta)\), \(0<\theta<\infty\), where \(\mu\) is known. We know that \(E\left(S^{2}\right)=\theta\). (a) What is the efficiency of \(S^{2}\)? (b) Under these conditions, what is the mle \(\widehat{\theta}\) of \(\theta\)? (c) What is the asymptotic distribution of \(\sqrt{n}(\widehat{\theta}-\theta)\)?
