
Problem 1

If \(Y_{1}, \ldots, Y_{n}\) is a random sample from the \(N\left(\mu, \sigma^{2}\right)\) distribution with known \(\sigma^{2}\), show that the likelihood ratio statistic for comparing \(\mu=\mu^{0}\) with general \(\mu\) is \(W\left(\mu^{0}\right)=n\left(\bar{Y}-\mu^{0}\right)^{2} / \sigma^{2}\). Show that \(W\left(\mu^{0}\right)\) is a pivot, and give the likelihood ratio confidence region for \(\mu\).
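
A quick numerical companion, not part of the original problem: since \(\widehat{\mu}=\bar{Y}\), the statistic \(W(\mu^{0})\) should follow a \(\chi^{2}_{1}\) distribution whatever the values of \(\mu^{0}\) and \(\sigma^{2}\), which is exactly what makes it a pivot. The sketch below verifies this by simulation; the values of \(n\), \(\mu^{0}\) and \(\sigma\) are illustrative choices.

```python
# Minimal Monte Carlo sketch: W(mu0) = n*(Ybar - mu0)^2 / sigma^2 should be
# chi-squared with 1 df for any mu0 and sigma, i.e. a pivot.
# n, mu0, sigma below are illustrative, not from the problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu0, sigma = 20, 3.0, 2.5
reps = 50_000

ybar = rng.normal(mu0, sigma, size=(reps, n)).mean(axis=1)
W = n * (ybar - mu0) ** 2 / sigma ** 2

# The simulated 0.95 quantile should match the chi^2_1 quantile 3.84, so
# {mu : W(mu) <= 3.84} is the 95% likelihood ratio confidence region.
print(np.quantile(W, 0.95), stats.chi2.ppf(0.95, df=1))
```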

Problem 2

The logistic density with location and scale parameters \(\mu\) and \(\sigma\) is $$ f(y ; \mu, \sigma)=\frac{\exp \{(y-\mu) / \sigma\}}{\sigma[1+\exp \{(y-\mu) / \sigma\}]^{2}}, \quad-\infty<y, \mu<\infty, \quad \sigma>0. $$

(a) If \(Y\) has density \(f(y ; \mu, 1)\), show that the expected information for \(\mu\) is \(1/3\).

(b) Instead of observing \(Y\), we observe the indicator \(Z\) of whether or not \(Y\) is positive. When \(\sigma=1\), show that the expected information for \(\mu\) based on \(Z\) is \(e^{\mu} /\left(1+e^{\mu}\right)^{2}\), and deduce that the maximum efficiency of sampling based on \(Z\) rather than \(Y\) is \(3/4\). Why is this greatest at \(\mu=0\)?

(c) Find the expected information \(I(\mu, \sigma)\) based on \(Y\) when \(\sigma\) is unknown. Without doing any calculations, explain why both parameters cannot be estimated based only on \(Z\).
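
A numerical check of parts (a) and (b), assuming \(\sigma=1\): the score for \(\mu\) from a single \(Y\) is \(2F(y-\mu)-1\), where \(F\) is the logistic distribution function, and \(Z\) is Bernoulli with success probability \(F(\mu)\). The sketch below (an illustration, not a solution) recovers \(1/3\) by quadrature and evaluates the efficiency \(3e^{\mu}/(1+e^{\mu})^{2}\) on a grid of \(\mu\) values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit  # logistic cdf: expit(z) = e^z / (1 + e^z)

def f(z):                        # standard logistic density (mu = 0, sigma = 1)
    return expit(z) * (1 - expit(z))

# Part (a): the score from one observation is 2*F(y - mu) - 1, so the expected
# information is E[(2F(Z) - 1)^2] with Z standard logistic; it does not depend
# on mu, so integrate at mu = 0.
info_Y, _ = quad(lambda z: (2 * expit(z) - 1) ** 2 * f(z), -np.inf, np.inf)
print(info_Y)                    # ~ 0.3333 = 1/3

# Part (b): Z = 1{Y > 0} is Bernoulli with p = F(mu), whose information about
# mu is f(mu)^2 / {p(1 - p)} = p(1 - p) = e^mu / (1 + e^mu)^2.
mu = np.linspace(-4.0, 4.0, 9)
info_Z = np.exp(mu) / (1 + np.exp(mu)) ** 2
print(info_Z / info_Y)           # efficiency, peaking at 3/4 when mu = 0
```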

Problem 3

One model for outliers in a normal sample is the mixture $$ f(y ; \mu, \pi)=(1-\pi) \phi(y-\mu)+\pi g(y-\mu), \quad 0 \leq \pi \leq 1, \quad-\infty<\mu<\infty, $$ where \(g(z)\) has heavier tails than the standard normal density \(\phi(z)\); take \(g(z)=\frac{1}{2} e^{-|z|}\), for example. Typically \(\pi\) will be small or zero. Show that when \(\pi=0\) the likelihood derivative for \(\pi\) has zero mean but infinite variance, and discuss the implications for the likelihood ratio statistic comparing normal and mixture models.
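
To see the infinite variance concretely: at \(\pi=0\) the derivative of the log likelihood with respect to \(\pi\) is \(\sum_{j}\{g(y_{j}-\mu)/\phi(y_{j}-\mu)-1\}\), whose terms have mean \(\int g-\int\phi=0\) but second moment \(\int g^{2}/\phi\), and \(g^{2}(z)/\phi(z)\propto\exp(z^{2}/2-2|z|)\) is not integrable. The sketch below (illustrative, with \(\mu=0\)) shows the truncated integral growing without bound.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import laplace, norm

# Score term per observation at pi = 0: g(z)/phi(z) - 1 with z = y - mu.
# Mean zero since g and phi both integrate to 1; the second moment is
# int g(z)^2/phi(z) dz, which diverges.  Truncated integrals show the blow-up.
ratio2 = lambda z: laplace.pdf(z) ** 2 / norm.pdf(z)

for T in (4, 6, 8, 10):
    val, _ = quad(ratio2, -T, T)
    print(T, val)                # increases without bound as T grows
```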

Problem 4

Independent values \(y_{1}, \ldots, y_{n}\) arise from a distribution putting probabilities \(\frac{1}{4}(1+2 \theta)\), \(\frac{1}{4}(1-\theta)\), \(\frac{1}{4}(1-\theta)\), \(\frac{1}{4}\) on the values \(1,2,3,4\), where \(-\frac{1}{2}<\theta<1\). Show that the likelihood for \(\theta\) is proportional to \((1+2 \theta)^{m_{1}}(1-\theta)^{m_{2}}\) and express \(m_{1}\) and \(m_{2}\) in terms of \(y_{1}, \ldots, y_{n}\). Find the maximum likelihood estimate of \(\theta\) in terms of \(m_{1}\) and \(m_{2}\). Obtain the maximum likelihood estimate and the likelihood ratio statistic for \(\theta=0\) based on data in which the frequencies of \(1,2,3,4\) were \(55, 11, 8, 26\). Is it plausible that \(\theta=0\)?
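
A hedged numerical companion to the last part: with frequencies \(55, 11, 8, 26\) we have \(m_{1}=55\) and \(m_{2}=11+8=19\), the score equation gives \(\widehat{\theta}=(2m_{1}-m_{2})/\{2(m_{1}+m_{2})\}\), and the likelihood ratio statistic can be compared with the \(\chi^{2}_{1}\) distribution.

```python
import numpy as np
from scipy import stats

m1, m2 = 55, 11 + 8              # counts of '1', and of '2' or '3', out of n = 100

def ell(theta):                  # log likelihood up to an additive constant
    return m1 * np.log(1 + 2 * theta) + m2 * np.log(1 - theta)

theta_hat = (2 * m1 - m2) / (2 * (m1 + m2))   # root of the score equation
W = 2 * (ell(theta_hat) - ell(0.0))           # likelihood ratio statistic; ell(0) = 0

print(theta_hat)                              # ~ 0.615
print(W, stats.chi2.ppf(0.95, df=1))          # W ~ 52 >> 3.84: theta = 0 is implausible
```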

Problem 5

Find the likelihood for a random sample \(y_{1}, \ldots, y_{n}\) from the geometric density \(\operatorname{Pr}(Y=y)=\pi(1-\pi)^{y}\), \(y=0,1, \ldots\), where \(0<\pi<1\).
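
The likelihood is \(L(\pi)=\prod_{j}\pi(1-\pi)^{y_{j}}=\pi^{n}(1-\pi)^{\sum y_{j}}\); the sketch below checks this numerically on simulated data and, going one step beyond what is asked, confirms that it is maximized at \(\widehat{\pi}=n/(n+\sum y_{j})\).

```python
import numpy as np

rng = np.random.default_rng(2)
pi_true = 0.3
y = rng.geometric(pi_true, size=500) - 1   # numpy counts from 1; shift support to {0, 1, ...}

n, s = y.size, y.sum()
grid = np.linspace(0.01, 0.99, 9801)
loglik = n * np.log(grid) + s * np.log(1 - grid)   # log of pi^n (1 - pi)^{sum y}

print(grid[np.argmax(loglik)], n / (n + s))        # grid maximizer matches n/(n + sum y)
```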

Problem 6

Find maximum likelihood estimates for \(\theta\) based on a random sample of size \(n\) from the densities (i) \(\theta y^{\theta-1}\), \(0<y<1\), \(\theta>0\); (ii) \(\theta^{2} y e^{-\theta y}\), \(y>0\), \(\theta>0\); and (iii) \((\theta+1) y^{-\theta-2}\), \(y>1\), \(\theta>0\).
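
The score equations give \(\widehat{\theta}=-n/\sum\log y_{j}\), \(\widehat{\theta}=2n/\sum y_{j}\), and \(\widehat{\theta}=n/\sum\log y_{j}-1\) for (i), (ii) and (iii) respectively. The sketch below checks these closed forms by simulation; the sample size and true \(\theta\) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 2.0, 200_000
u = rng.uniform(size=n)

y1 = u ** (1 / theta)                              # inverse-cdf sample from (i)
y2 = rng.gamma(shape=2, scale=1 / theta, size=n)   # (ii) is a Gamma(2, theta) density
y3 = u ** (-1 / (theta + 1))                       # inverse-cdf sample from (iii)

print(-n / np.log(y1).sum())                       # ~ 2, matching (i)
print(2 * n / y2.sum())                            # ~ 2, matching (ii)
print(n / np.log(y3).sum() - 1)                    # ~ 2, matching (iii)
```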

Problem 7

Let \(\psi(\theta)\) be a 1-1 transformation of \(\theta\), and consider a model with log likelihoods \(\ell(\theta)\) and \(\ell^{*}(\psi)\) in the two parametrizations respectively; \(\ell\) has a unique maximum at which the likelihood equation is satisfied. Show that $$ \frac{\partial \ell^{*}(\psi)}{\partial \psi_{r}}=\frac{\partial \theta^{\mathrm{T}}}{\partial \psi_{r}} \frac{\partial \ell(\theta)}{\partial \theta}, \quad \frac{\partial^{2} \ell^{*}(\psi)}{\partial \psi_{r} \partial \psi_{s}}=\frac{\partial \theta^{\mathrm{T}}}{\partial \psi_{r}} \frac{\partial^{2} \ell(\theta)}{\partial \theta \partial \theta^{\mathrm{T}}} \frac{\partial \theta}{\partial \psi_{s}}+\frac{\partial^{2} \theta^{\mathrm{T}}}{\partial \psi_{r} \partial \psi_{s}} \frac{\partial \ell(\theta)}{\partial \theta} $$ and deduce that $$ I^{*}(\psi)=\frac{\partial \theta^{\mathrm{T}}}{\partial \psi} I(\theta) \frac{\partial \theta}{\partial \psi^{\mathrm{T}}} $$ but that a similar equation holds for observed information only when \(\theta=\widehat{\theta}\).
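
A symbolic spot-check of the expected information identity in the scalar case, using the illustrative (not from the problem) choice of a single exponential observation with \(\theta=\lambda\) and \(\psi=\log\lambda\):

```python
import sympy as sp

y, lam, psi = sp.symbols('y lambda psi', positive=True)

ell = sp.log(lam) - lam * y              # log likelihood for one Exp(lambda) observation
I_theta = 1 / lam**2                     # expected information: E[-ell''] with E[Y] = 1/lambda

ell_star = ell.subs(lam, sp.exp(psi))    # same log likelihood in psi = log(lambda)
neg_hess = -sp.diff(ell_star, psi, 2)    # observed information in psi: exp(psi)*y
I_psi = neg_hess.subs(y, sp.exp(-psi))   # take the expectation via E[Y] = exp(-psi)

# Chain-rule prediction I*(psi) = (d lambda/d psi)^2 * I(lambda) at lambda = exp(psi)
pred = sp.diff(sp.exp(psi), psi) ** 2 * I_theta.subs(lam, sp.exp(psi))
print(sp.simplify(I_psi - pred))         # 0, as the identity requires
```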

Problem 8

Verify that the likelihood for \(f(y ; \lambda)=\lambda \exp (-\lambda y), y, \lambda>0\), is invariant to the reparametrization \(\psi=1 / \lambda .\)
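
A minimal numerical illustration on simulated data: the maximized log likelihood is the same whether we work with \(\lambda\) or \(\psi=1/\lambda\), since \(\widehat{\psi}=1/\widehat{\lambda}=\bar{y}\).

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.exponential(scale=2.0, size=100)     # simulated sample; true lambda = 0.5

loglik_lam = lambda lam: y.size * np.log(lam) - lam * y.sum()     # in lambda
loglik_psi = lambda psi: -y.size * np.log(psi) - y.sum() / psi    # in psi = 1/lambda

lam_hat = 1 / y.mean()                       # MLE of lambda
print(loglik_lam(lam_hat), loglik_psi(1 / lam_hat))   # equal: likelihood is invariant
```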

Problem 9

\(Y_{1}, \ldots, Y_{n}\) are independent normal random variables with unit variances and means \(\mathrm{E}\left(Y_{j}\right)=\beta x_{j}\), where the \(x_{j}\) are known quantities in \((0,1]\) and \(\beta\) is an unknown parameter. Show that \(\ell(\beta) \equiv-\frac{1}{2} \sum\left(y_{j}-x_{j} \beta\right)^{2}\) and find the expected information \(I(\beta)\) for \(\beta\). Suppose that \(n=10\) and that an experiment to estimate \(\beta\) is to be designed by choosing the \(x_{j}\) appropriately. Show that \(I(\beta)\) is maximized when all the \(x_{j}\) equal \(1\). Is this design sensible if there is any possibility that \(\mathrm{E}\left(Y_{j}\right)=\alpha+\beta x_{j}\), with \(\alpha\) unknown?
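
Since \(I(\beta)=\sum x_{j}^{2}\) here, a short sketch can compare designs and also show why \(x_{j}\equiv 1\) is fragile: with an intercept, the design matrix for \(\alpha+\beta x_{j}\) then has rank one, so \(\alpha\) and \(\beta\) are confounded. The alternative design below is an arbitrary illustration.

```python
import numpy as np

designs = {
    'all ones':   np.ones(10),
    'spread out': np.linspace(0.1, 1.0, 10),
}
for name, x in designs.items():
    info_beta = (x ** 2).sum()             # I(beta) = sum x_j^2 in the no-intercept model
    X = np.column_stack([np.ones(10), x])  # design matrix if E(Y_j) = alpha + beta*x_j
    rank = np.linalg.matrix_rank(X)        # rank 1 => alpha and beta confounded
    print(name, info_beta, rank)
```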

Problem 10

The Laplace or double exponential distribution has density $$ f(y ; \mu, \sigma)=\frac{1}{2 \sigma} \exp (-|y-\mu| / \sigma), \quad-\infty<y, \mu<\infty, \quad \sigma>0. $$ Sketch the log likelihood for a typical sample, and explain why the maximum likelihood estimate is only unique when the sample size is odd. Derive the score statistic and observed information. Is maximum likelihood estimation regular for this distribution?
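
A numerical illustration of the uniqueness claim: up to a constant, the log likelihood \(-\sum|y_{j}-\mu|/\sigma\) is piecewise linear in \(\mu\), maximized at the sample median for odd \(n\) and constant over the whole interval between the two middle order statistics for even \(n\). The toy samples below are arbitrary.

```python
import numpy as np

y_odd = np.array([1.0, 2.0, 4.0, 7.0, 9.0])   # toy samples (arbitrary values)
y_even = np.array([1.0, 2.0, 4.0, 7.0])

def ell(mu, y, sigma=1.0):                    # log likelihood up to a constant
    return -np.abs(y[None, :] - mu[:, None]).sum(axis=1) / sigma

grid = np.linspace(0.0, 10.0, 10001)
for y in (y_odd, y_even):
    vals = ell(grid, y)
    flat = grid[np.isclose(vals, vals.max(), atol=1e-9)]
    print(flat.min(), flat.max())  # one point (the median) for odd n; [2, 4] for even n
```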
