Chapter 6: Problem 3
Let \(Y_{1}
Consider two Bernoulli distributions with unknown parameters \(p_{1}\) and \(p_{2}\). If \(Y\) and \(Z\) equal the numbers of successes in two independent random samples, each of size \(n\), from the respective distributions, determine the mles of \(p_{1}\) and \(p_{2}\) if we know that \(0 \leq p_{1} \leq p_{2} \leq 1\).
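A standard way to see the answer: the unrestricted mles are \(\widehat{p}_{1}=Y/n\) and \(\widehat{p}_{2}=Z/n\); if they already satisfy the order restriction they are also the restricted mles, and otherwise the likelihood is maximized on the boundary \(p_{1}=p_{2}\), giving the pooled estimate \((Y+Z)/(2n)\) for both. A minimal R sketch of this rule (the function name and the numerical inputs are illustrative, not part of the exercise):

```r
# Restricted mles of (p1, p2) under 0 <= p1 <= p2 <= 1.
# y, z: numbers of successes in two independent samples, each of size n.
mle_ordered_bernoulli <- function(y, z, n) {
  p1 <- y / n
  p2 <- z / n
  if (p1 <= p2) {
    c(p1 = p1, p2 = p2)            # unrestricted mles already obey the order
  } else {
    pooled <- (y + z) / (2 * n)    # maximum occurs on the boundary p1 = p2
    c(p1 = pooled, p2 = pooled)
  }
}

mle_ordered_bernoulli(y = 7, z = 4, n = 10)   # order violated: both estimates 0.55
```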
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample from a \(N\left(\theta, \sigma^{2}\right)\) distribution, where \(\sigma^{2}\) is fixed but \(-\infty<\theta<\infty\). (a) Show that the mle of \(\theta\) is \(\bar{X}\). (b) If \(\theta\) is restricted by \(0 \leq \theta<\infty\), show that the mle of \(\theta\) is \(\widehat{\theta}=\max \{0, \bar{X}\}\).
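For part (b), a quick numerical check of the closed form \(\widehat{\theta}=\max\{0, \bar{X}\}\): when \(\bar{X}<0\) the likelihood is decreasing on \([0,\infty)\), so the constrained maximizer sits at the boundary. A small R sketch (the sample, \(\sigma\), and the search interval are made up for illustration):

```r
# Numerical check that the restricted mle equals max(0, xbar).
set.seed(1)
sigma <- 2
x <- rnorm(25, mean = -1, sd = sigma)     # true mean below zero, so xbar is typically negative

loglik <- function(theta) sum(dnorm(x, mean = theta, sd = sigma, log = TRUE))

closed_form <- max(0, mean(x))
numerical   <- optimize(loglik, interval = c(0, 10), maximum = TRUE)$maximum

c(closed_form = closed_form, numerical = numerical)   # agree up to optimizer tolerance
```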
Consider a location model $$ X_{i}=\theta+e_{i}, \quad i=1, \ldots, n $$ where \(e_{1}, e_{2}, \ldots, e_{n}\) are iid with pdf \(f(z)\). There is a nice geometric interpretation for estimating \(\theta\). Let \(\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)^{\prime}\) and \(\mathbf{e}=\left(e_{1}, \ldots, e_{n}\right)^{\prime}\) be the vectors of observations and random errors, respectively, and let \(\boldsymbol{\mu}=\theta \mathbf{1}\), where \(\mathbf{1}\) is a vector with all components equal to 1. Let \(V\) be the subspace of vectors of the form \(\boldsymbol{\mu}\); i.e., \(V=\{\mathbf{v}: \mathbf{v}=a \mathbf{1}\), for some \(a \in R\}\). Then in vector notation we can write the model as $$ \mathbf{X}=\boldsymbol{\mu}+\mathbf{e}, \quad \boldsymbol{\mu} \in V $$ Then we can summarize the model by saying, "Except for the random error vector \(\mathbf{e}\), \(\mathbf{X}\) would reside in \(V\)." Hence, it makes sense intuitively to estimate \(\boldsymbol{\mu}\) by a vector in \(V\) that is "closest" to \(\mathbf{X}\). That is, given a norm \(\|\cdot\|\) in \(R^{n}\), choose $$ \widehat{\boldsymbol{\mu}}=\operatorname{Argmin}_{\mathbf{v} \in V}\|\mathbf{X}-\mathbf{v}\| \quad (6.3.27) $$ (a) If the error pdf is the Laplace, (2.2.4), show that the minimization in (6.3.27) is equivalent to maximizing the likelihood when the norm is the \(l_{1}\) norm given by $$ \|\mathbf{v}\|_{1}=\sum_{i=1}^{n}\left|v_{i}\right| $$ (b) If the error pdf is the \(N(0,1)\), show that the minimization in (6.3.27) is equivalent to maximizing the likelihood when the norm is given by the square of the \(l_{2}\) norm $$ \|\mathbf{v}\|_{2}^{2}=\sum_{i=1}^{n} v_{i}^{2} $$
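Both parts can be previewed numerically: under the \(l_{1}\) norm the minimizer in (6.3.27) is the sample median (the Laplace mle of \(\theta\)), while under the squared \(l_{2}\) norm it is the sample mean (the normal mle). A short R sketch under these assumptions, with an arbitrary made-up sample:

```r
# Minimizing ||X - a1|| over a, for the two norms in the exercise.
set.seed(2)
x <- rexp(15) - 1                          # any numeric sample will do

l1 <- function(a) sum(abs(x - a))          # l1 norm of X - a1
l2 <- function(a) sum((x - a)^2)           # squared l2 norm of X - a1

a1 <- optimize(l1, range(x))$minimum       # approximately median(x)
a2 <- optimize(l2, range(x))$minimum       # approximately mean(x)

c(l1_argmin = a1, median = median(x), l2_argmin = a2, mean = mean(x))
```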
Let \(X_{1}, X_{2}, \ldots, X_{n}\) and \(Y_{1}, Y_{2}, \ldots, Y_{m}\) be independent random samples from the two normal distributions \(N\left(0, \theta_{1}\right)\) and \(N\left(0, \theta_{2}\right)\). (a) Find the likelihood ratio \(\Lambda\) for testing the composite hypothesis \(H_{0}: \theta_{1}=\theta_{2}\) against the composite alternative \(H_{1}: \theta_{1} \neq \theta_{2}\). (b) This \(\Lambda\) is a function of what \(F\)-statistic that would actually be used in this test?
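For reference, the standard reduction that the exercise asks one to derive: writing \(\widehat{\theta}_{1}=\sum x_{i}^{2}/n\), \(\widehat{\theta}_{2}=\sum y_{j}^{2}/m\), and \(\widehat{\theta}=(\sum x_{i}^{2}+\sum y_{j}^{2})/(n+m)\), one obtains \(\Lambda=\widehat{\theta}_{1}^{\,n/2}\widehat{\theta}_{2}^{\,m/2}/\widehat{\theta}^{\,(n+m)/2}\), which is a function of \(F=\widehat{\theta}_{1}/\widehat{\theta}_{2}\); under \(H_{0}\) this \(F\) has an \(F(n, m)\) distribution. A brief R sketch of these quantities (the simulated samples are only for illustration):

```r
# Lambda and the F-statistic for H0: theta1 = theta2 (illustrative simulated data).
set.seed(3)
x <- rnorm(10, mean = 0, sd = 2); y <- rnorm(15, mean = 0, sd = 2)
n <- length(x); m <- length(y)

th1 <- sum(x^2) / n                      # mle of theta1 under the full model
th2 <- sum(y^2) / m                      # mle of theta2 under the full model
th0 <- (sum(x^2) + sum(y^2)) / (n + m)   # common mle under H0

Lambda <- th1^(n/2) * th2^(m/2) / th0^((n + m)/2)
Fstat  <- th1 / th2                      # ~ F(n, m) under H0

c(Lambda = Lambda, F = Fstat)
```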
For a numerical example of the \(F\)-test derived in Exercise 6.5.7, here are two generated data sets. The first was generated by the \(\mathrm{R}\) call \(\operatorname{rexp}(10,1/20)\), i.e., 10 observations from a \(\Gamma(1,20)\)-distribution. The second was generated by \(\operatorname{rexp}(12,1/40)\). The data are rounded and can also be found in the file genexpd.rda. (a) Obtain comparison boxplots of the data sets. Comment. (b) Carry out the \(F\)-test of Exercise 6.5.7. Conclude in terms of the problem at level \(0.05\).
$$
\begin{array}{lrrrrrrrrrrrr}
\mathrm{x}: & 11.1 & 11.7 & 12.7 & 9.6 & 14.7 & 1.6 & 1.7 & 56.1 & 3.3 & 2.6 & & \\
\mathrm{y}: & 55.6 & 40.5 & 32.7 & 25.6 & 70.6 & 1.4 & 51.5 & 12.6 & 16.9 & 63.3 & 5.6 & 66.7
\end{array}
$$
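A sketch of both parts in R, using the data as reconstructed above and the \(F\)-statistic of Exercise 6.5.7 (sums of squares about 0, with \(n\) and \(m\) degrees of freedom); the two-sided p-value shown is one reasonable way to conclude at level \(0.05\):

```r
# Exercise parts (a) and (b) for the genexpd data.
x <- c(11.1, 11.7, 12.7, 9.6, 14.7, 1.6, 1.7, 56.1, 3.3, 2.6)
y <- c(55.6, 40.5, 32.7, 25.6, 70.6, 1.4, 51.5, 12.6, 16.9, 63.3, 5.6, 66.7)
n <- length(x); m <- length(y)

boxplot(list(x = x, y = y), main = "Comparison boxplots")     # part (a)

Fstat <- (sum(x^2) / n) / (sum(y^2) / m)                      # part (b)
pval  <- 2 * min(pf(Fstat, n, m), 1 - pf(Fstat, n, m))        # two-sided p-value
c(F = Fstat, p.value = pval)                                  # reject H0 if p.value < 0.05
```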