Chapter 6: Problem 2
Let \(X_{1}, X_{2}, \ldots, X_{n}\) represent a random sample from each of the
distributions having the following pdfs:
(a) \(f(x ; \theta)=\theta x^{\theta-1}, \quad 0<x<1\), zero elsewhere, where \(\theta>0\).
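Assuming, as in the usual version of this exercise, that the task is to find the maximum likelihood estimator of \(\theta\), a brief sketch for part (a): the log-likelihood is
$$
\log L(\theta)=n \log \theta+(\theta-1) \sum_{i=1}^{n} \log x_{i},
$$
and setting \(\partial \log L / \partial \theta=n / \theta+\sum_{i=1}^{n} \log x_{i}=0\) gives
$$
\hat{\theta}=-\frac{n}{\sum_{i=1}^{n} \log X_{i}},
$$
which is positive because each \(\log X_{i}<0\) when \(0<X_{i}<1\).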
Let \(X_{1}, X_{2}, X_{3}, X_{4}, X_{5}\) be a random sample from a Cauchy
distribution with median \(\theta\), that is, with pdf
$$
f(x ; \theta)=\frac{1}{\pi} \frac{1}{1+(x-\theta)^{2}}, \quad-\infty<x<\infty .
$$
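The Cauchy likelihood equation has no closed-form solution, so the mle of \(\theta\) is typically found numerically. Below is a minimal sketch, assuming SciPy is available and using a hypothetical sample of five values (not data from the original exercise):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical sample of n = 5 observations (illustrative only).
x = np.array([1.9, 0.3, 2.7, 1.1, 12.4])

def neg_log_likelihood(theta):
    # Negative log-likelihood, dropping the constant n*log(pi):
    # -log L(theta) = n*log(pi) + sum(log(1 + (x - theta)^2))
    return np.sum(np.log(1.0 + (x - theta) ** 2))

# Search between the smallest and largest observations; the Cauchy
# likelihood can be multimodal, so a bounded search is a simple choice.
result = minimize_scalar(neg_log_likelihood,
                         bounds=(x.min(), x.max()),
                         method="bounded")
print("approximate mle of theta:", result.x)
```

With so few observations the likelihood may have several local maxima, so in practice one would check the surface on a grid before trusting a single optimizer run.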
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be iid, each with the distribution having pdf \(f\left(x ; \theta_{1}, \theta_{2}\right)=\) \(\left(1 / \theta_{2}\right) e^{-\left(x-\theta_{1}\right) / \theta_{2}}, \theta_{1} \leq x<\infty,-\infty<\theta_{1}<\infty, 0<\theta_{2}<\infty\), zero elsewhere. Find the maximum likelihood estimators of \(\theta_{1}\) and \(\theta_{2}\).
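For reference, a brief sketch of how the maximization goes: for fixed \(\theta_{2}>0\) the likelihood \(L\left(\theta_{1}, \theta_{2}\right)=\theta_{2}^{-n} \exp \left[-\sum\left(x_{i}-\theta_{1}\right) / \theta_{2}\right]\) is increasing in \(\theta_{1}\) as long as \(\theta_{1} \leq \min _{i} x_{i}\) and is zero beyond that point, so \(\hat{\theta}_{1}=Y_{1}=\min _{i} X_{i}\); setting \(\partial \log L / \partial \theta_{2}=0\) with \(\theta_{1}=\hat{\theta}_{1}\) then gives
$$
\hat{\theta}_{2}=\frac{1}{n} \sum_{i=1}^{n}\left(X_{i}-Y_{1}\right).
$$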
Let \(n\) independent trials of an experiment be such that \(x_{1}, x_{2}, \ldots, x_{k}\) are the respective numbers of times that the experiment ends in the mutually exclusive and exhaustive events \(C_{1}, C_{2}, \ldots, C_{k}\). If \(p_{i}=P\left(C_{i}\right)\) is constant throughout the \(n\) trials, then the probability of that particular sequence of trials is \(L=p_{1}^{x_{1}} p_{2}^{x_{2}} \cdots p_{k}^{x_{k}}\).
(a) Recalling that \(p_{1}+p_{2}+\cdots+p_{k}=1\), show that the likelihood ratio for testing \(H_{0}: p_{i}=p_{i 0}>0, i=1,2, \ldots, k\), against all alternatives is given by
$$
\Lambda=\prod_{i=1}^{k}\left(\frac{\left(p_{i 0}\right)^{x_{i}}}{\left(x_{i} / n\right)^{x_{i}}}\right).
$$
(b) Show that
$$
-2 \log \Lambda=\sum_{i=1}^{k} \frac{x_{i}\left(x_{i}-n p_{i 0}\right)^{2}}{\left(n p_{i}^{\prime}\right)^{2}},
$$
where \(p_{i}^{\prime}\) is between \(p_{i 0}\) and \(x_{i} / n\). Hint: Expand \(\log p_{i 0}\) in a Taylor's series with the remainder in the term involving \(\left(p_{i 0}-x_{i} / n\right)^{2}\).
(c) For large \(n\), argue that \(x_{i} /\left(n p_{i}^{\prime}\right)^{2}\) is approximated by \(1 /\left(n p_{i 0}\right)\) and hence
$$
-2 \log \Lambda \approx \sum_{i=1}^{k} \frac{\left(x_{i}-n p_{i 0}\right)^{2}}{n p_{i 0}}
$$
when \(H_{0}\) is true. Theorem 6.5.1 says that the right-hand member of this last equation defines a statistic that has an approximate chi-square distribution with \(k-1\) degrees of freedom. Note that the dimension of \(\Omega\) minus the dimension of \(\omega\) is \((k-1)-0=k-1\).
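To see part (c) numerically, here is a minimal sketch with hypothetical counts and null probabilities chosen only for illustration; it evaluates \(-2 \log \Lambda\) directly and the Pearson statistic from the last display, which should be close for large \(n\):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical data: n = 200 trials over k = 4 categories (illustrative only).
x = np.array([44, 58, 52, 46])           # observed counts x_1, ..., x_k
p0 = np.array([0.25, 0.25, 0.25, 0.25])  # null probabilities p_10, ..., p_k0
n = x.sum()

# -2 log Lambda = 2 * sum x_i * log( (x_i / n) / p_i0 )
lrt = 2.0 * np.sum(x * np.log((x / n) / p0))

# Pearson chi-square approximation from part (c)
pearson = np.sum((x - n * p0) ** 2 / (n * p0))

df = len(x) - 1
print("-2 log Lambda :", lrt, " p-value:", chi2.sf(lrt, df))
print("Pearson chi^2 :", pearson, " p-value:", chi2.sf(pearson, df))
```

Both statistics are referred to a chi-square distribution with \(k-1=3\) degrees of freedom, matching the dimension count noted above.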
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a distribution with pmf \(p(x ; \theta)=\theta^{x}(1-\theta)^{1-x}, x=0,1\), where \(0<\theta<1 .\) We wish to test \(H_{0}: \theta=1 / 3\) versus \(H_{1}: \theta \neq 1 / 3\). (a) Find \(\Lambda\) and \(-2 \log \Lambda\). (b) Determine the Wald-type test. (c) What is Rao's score statistic?
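For a quick numerical check, here is a minimal sketch with a hypothetical Bernoulli sample; it uses the standard closed forms \(\hat{\theta}=\bar{x}\), \(-2 \log \Lambda=2\left[\sum x_{i} \log \left(\hat{\theta} / \theta_{0}\right)+\left(n-\sum x_{i}\right) \log \left((1-\hat{\theta}) /\left(1-\theta_{0}\right)\right)\right]\), the Wald statistic \(\left(\hat{\theta}-\theta_{0}\right)^{2} /[\hat{\theta}(1-\hat{\theta}) / n]\), and Rao's score statistic \(\left(\hat{\theta}-\theta_{0}\right)^{2} /\left[\theta_{0}\left(1-\theta_{0}\right) / n\right]\):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical Bernoulli sample (illustrative only); H0: theta = 1/3.
x = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0])
n, s = len(x), x.sum()
theta0, theta_hat = 1 / 3, x.mean()

# Likelihood ratio statistic: -2 log Lambda
lrt = 2 * (s * np.log(theta_hat / theta0)
           + (n - s) * np.log((1 - theta_hat) / (1 - theta0)))

# Wald statistic: Fisher information evaluated at the mle
wald = (theta_hat - theta0) ** 2 / (theta_hat * (1 - theta_hat) / n)

# Rao score statistic: Fisher information evaluated at theta0
score = (theta_hat - theta0) ** 2 / (theta0 * (1 - theta0) / n)

for name, stat in [("-2 log Lambda", lrt), ("Wald", wald), ("Score", score)]:
    print(f"{name}: {stat:.4f}, p-value: {chi2.sf(stat, df=1):.4f}")
```

All three statistics are compared with a chi-square distribution with one degree of freedom, and they agree asymptotically under \(H_{0}\).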