
Each of \(n\) specimens is to be weighed twice on the same scale. Let \(X_i\) and \(Y_i\) denote the two observed weights for the \(i\)th specimen. Suppose \(X_i\) and \(Y_i\) are independent of one another, each normally distributed with mean value \(\mu_i\) (the true weight of specimen \(i\)) and variance \(\sigma^2\).

a. Show that the maximum likelihood estimator of \(\sigma^2\) is \(\hat\sigma^2 = \sum (X_i - Y_i)^2/(4n)\). (Hint: If \(\bar z = (z_1 + z_2)/2\), then \(\sum (z_i - \bar z)^2 = (z_1 - z_2)^2/2\).)

b. Is the mle \(\hat\sigma^2\) an unbiased estimator of \(\sigma^2\)? Find an unbiased estimator of \(\sigma^2\). (Hint: For any rv \(Z\), \(E(Z^2) = V(Z) + (E(Z))^2\). Apply this to \(Z = X_i - Y_i\).)

Short Answer


a. It is shown that \(\hat\sigma^2 = \frac{\sum_{i=1}^n (X_i - Y_i)^2}{4n}\).

b. The mle is not unbiased. An unbiased estimator is \(\hat\sigma_1^2 = \frac{\sum_{i=1}^n (X_i - Y_i)^2}{2n}\).

Step by step solution

01

Set up the model

Each observed weight is modeled as a normal random variable: for specimen \(i\), the two weighings \(X_i\) and \(Y_i\) are independent, each with mean \(\mu_i\) (the true weight of specimen \(i\)) and variance \(\sigma^2\). The unknown parameters are \(\mu_1, \ldots, \mu_n\) and \(\sigma^2\).

02

Derive the likelihood function

a. The pdf of the random variables \(X_i\), \(i = 1, 2, \ldots, n\), is

\(f_{X_i}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x-\mu_i)^2}{2\sigma^2}\right\}, \quad x \in \mathbb{R}\)

and the pdf of the random variables \(Y_i\), \(i = 1, 2, \ldots, n\), is

\(f_{Y_i}(y) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y-\mu_i)^2}{2\sigma^2}\right\}, \quad y \in \mathbb{R}\)

since each is normally distributed with mean \(\mu_i\) and variance \(\sigma^2\).

The random variables \(X_i\) and \(Y_i\), \(i = 1, 2, \ldots, n\), are independent of one another.

Let \(f\) denote the joint pdf or pmf of the random variables \(X_1, X_2, \ldots, X_n\),

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right){\rm{, n,m}} \in {\rm{N}}\)

where \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown parameters. When \(f\) is regarded as a function of the parameters \(\theta_1, \ldots, \theta_m\), with the observations held fixed, it is called the likelihood function. The maximum likelihood estimates (mle's) are the values \(\hat\theta_i\) for which the likelihood function is maximized,

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{{\rm{\hat \theta }}}_{\rm{1}}}{\rm{,}}{{{\rm{\hat \theta }}}_{\rm{2}}}{\rm{, \ldots ,}}{{{\rm{\hat \theta }}}_{\rm{m}}}} \right) \ge {\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right)\)

for every \(\theta_i\), \(i = 1, 2, \ldots, m\). The maximum likelihood estimators are then obtained by replacing each observed value \(x_i\) with the corresponding random variable \(X_i\).

Because of the independence, the likelihood function becomes,

\(\begin{aligned} f(x,y;\mu,\sigma^2) &= \prod_{i=1}^{n}\left(\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x_i-\mu_i)^2}{2\sigma^2}\right\} \times \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y_i-\mu_i)^2}{2\sigma^2}\right\}\right) \\ &= \frac{1}{\left(\sqrt{2\pi\sigma^2}\right)^{2n}}\exp\left\{-\frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}(x_i-\mu_i)^2 + \sum_{i=1}^{n}(y_i-\mu_i)^2\right)\right\} \\ &= \frac{1}{\left(2\pi\sigma^2\right)^n}\exp\left\{-\frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}(x_i-\mu_i)^2 + \sum_{i=1}^{n}(y_i-\mu_i)^2\right)\right\}\end{aligned}\)

03

Maximize the log likelihood over \(\mu_i\)

Look at the log likelihood function to locate the maximum.

\(\begin{aligned} \ln f(x,y;\mu,\sigma^2) &= \ln\left(\frac{1}{\left(2\pi\sigma^2\right)^n}\exp\left\{-\frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}(x_i-\mu_i)^2 + \sum_{i=1}^{n}(y_i-\mu_i)^2\right)\right\}\right) \\ &= -n\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}(x_i-\mu_i)^2 + \sum_{i=1}^{n}(y_i-\mu_i)^2\right)\end{aligned}\)

The maximum likelihood estimator \(\hat\mu_i\) is obtained by taking the derivative of the log likelihood function with respect to \(\mu_i\), \(i = 1, 2, \ldots, n\), and setting it equal to \(0\). The derivatives are

\(\begin{aligned} \frac{\partial}{\partial\mu_i}\ln f(x,y;\mu,\sigma^2) &= \frac{\partial}{\partial\mu_i}\left(-n\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\sum_{j=1}^{n}(x_j-\mu_j)^2 + \sum_{j=1}^{n}(y_j-\mu_j)^2\right)\right) \\ &= 0 - \frac{1}{2\sigma^2}\left(2(x_i-\mu_i)(-1) + 2(y_i-\mu_i)(-1)\right) \\ &= \frac{1}{\sigma^2}\left(x_i-\mu_i+y_i-\mu_i\right) \\ &= \frac{1}{\sigma^2}\left(x_i+y_i-2\mu_i\right)\end{aligned}\)

Setting each derivative equal to zero and solving gives the maximum likelihood estimator \(\hat\mu_i\):

\(\begin{aligned} \frac{1}{\sigma^2}\left(x_i+y_i-2\hat\mu_i\right) &= 0 \\ 2\hat\mu_i &= x_i+y_i\end{aligned}\)

Hence, the maximum likelihood estimator of the parameter \(\mu_i\), \(i = 1, 2, \ldots, n\), is

\({{\rm{\hat \mu }}_{\rm{i}}}{\rm{ = }}\frac{{{{\rm{X}}_{\rm{i}}}{\rm{ + }}{{\rm{Y}}_{\rm{i}}}}}{{\rm{2}}}\)

To get the maximum likelihood estimator of \(\sigma^2\), first substitute \(\hat\mu_i\) into the log likelihood function, then take the derivative with respect to \(\sigma^2\), set it equal to zero, and solve the resulting equation.

04

Maximize the log likelihood over \(\sigma^2\)

The log likelihood function is simplified by substituting \(\hat\mu_i\):

\(\begin{aligned} \ln f(x,y;\hat\mu,\sigma^2) &= -n\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n}\left(x_i - \frac{x_i+y_i}{2}\right)^2 + \sum_{i=1}^{n}\left(y_i - \frac{x_i+y_i}{2}\right)^2\right) \\ &\overset{(1)}{=} -n\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\frac{1}{2}\sum_{i=1}^{n}\left(x_i-y_i\right)^2\right)\end{aligned}\)

(1): This equality follows from the hint with \(z_1 = x_i\) and \(z_2 = y_i\), since \(\hat\mu_i = (x_i + y_i)/2\) plays the role of \(\bar z\).
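Since the solution leans on this identity, a short verification (not part of the original solution) is worth writing out:

\(\begin{aligned} \sum_{i=1}^{2}\left(z_i - \bar z\right)^2 &= \left(z_1 - \frac{z_1+z_2}{2}\right)^2 + \left(z_2 - \frac{z_1+z_2}{2}\right)^2 \\ &= \left(\frac{z_1-z_2}{2}\right)^2 + \left(\frac{z_2-z_1}{2}\right)^2 \\ &= \frac{(z_1-z_2)^2}{4} + \frac{(z_1-z_2)^2}{4} = \frac{(z_1-z_2)^2}{2}\end{aligned}\)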

The maximum likelihood estimator \(\hat\sigma^2\) is obtained by taking the derivative of the log likelihood function with respect to \(\sigma^2\) and setting it equal to \(0\). The derivative is

\(\begin{aligned} \frac{\partial}{\partial\sigma^2}\ln f(x,y;\hat\mu,\sigma^2) &= \frac{\partial}{\partial\sigma^2}\left(-n\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\frac{1}{2}\sum_{i=1}^{n}\left(x_i-y_i\right)^2\right)\right) \\ &= -\frac{n}{\sigma^2} + \frac{1}{2\sigma^4}\left(\frac{1}{2}\sum_{i=1}^{n}\left(x_i-y_i\right)^2\right)\end{aligned}\)

Setting this equal to zero and solving yields the maximum likelihood estimator \(\hat\sigma^2\):

\(\begin{aligned} -\frac{n}{\hat\sigma^2} + \frac{1}{2\hat\sigma^4}\left(\frac{1}{2}\sum_{i=1}^{n}\left(x_i-y_i\right)^2\right) &= 0 \\ \frac{1}{4\hat\sigma^4}\sum_{i=1}^{n}\left(x_i-y_i\right)^2 &= \frac{n}{\hat\sigma^2} \\ 4n\hat\sigma^2 &= \sum_{i=1}^{n}\left(x_i-y_i\right)^2\end{aligned}\)

Hence, the maximum likelihood estimator of \(\sigma^2\) is

\({{\rm{\hat \sigma }}^{\rm{2}}}{\rm{ = }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\left( {{{\rm{X}}_{\rm{i}}}{\rm{ - }}{{\rm{Y}}_{\rm{i}}}} \right)}^{\rm{2}}}} }}{{{\rm{4n}}}}\)
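As a quick sanity check (a sketch, not part of the textbook solution; it assumes NumPy, and the function name and weighing data are made up for illustration), the estimator can be computed directly from paired weighings:

```python
import numpy as np

def mle_sigma2(x, y):
    """MLE of sigma^2 from n paired weighings: sum((x_i - y_i)^2) / (4n)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum((x - y) ** 2) / (4 * len(x))

# Illustrative data: two weighings of each of n = 4 specimens.
x = [10.2, 25.1, 7.9, 14.3]
y = [10.0, 25.4, 8.1, 14.0]
print(mle_sigma2(x, y))  # sum of squared differences divided by 4n = 0.01625
```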

05

Check whether the mle is unbiased

b. The mle \(\hat\sigma^2\) would be unbiased if \(E(\hat\sigma^2) = \sigma^2\). But since

\(\begin{aligned} E\left(\hat\sigma^2\right) &= E\left(\frac{\sum_{i=1}^{n}\left(X_i-Y_i\right)^2}{4n}\right) \\ &= \frac{1}{4n}\sum_{i=1}^{n} E\left(\left(X_i-Y_i\right)^2\right) \\ &= \frac{1}{4n}\sum_{i=1}^{n}\left(V\left(X_i-Y_i\right) + \left(E\left(X_i-Y_i\right)\right)^2\right) \\ &= \frac{1}{4n}\sum_{i=1}^{n}\left(V\left(X_i\right) + (-1)^2 V\left(Y_i\right) + \left(E\left(X_i\right)-E\left(Y_i\right)\right)^2\right) \\ &= \frac{1}{4n}\sum_{i=1}^{n}\left(2\sigma^2 + \left(\mu_i-\mu_i\right)^2\right) \\ &= \frac{1}{4n}\cdot 2n\sigma^2 \\ &= \frac{1}{2}\sigma^2 \ne \sigma^2\end{aligned}\)

the maximum likelihood estimator is not unbiased. However, the estimator

\({\rm{\hat \sigma }}_{\rm{1}}^{\rm{2}}{\rm{ = }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\left( {{{\rm{X}}_{\rm{i}}}{\rm{ - }}{{\rm{Y}}_{\rm{i}}}} \right)}^{\rm{2}}}} }}{{{\rm{2n}}}}{\rm{ }}\)

is unbiased. This follows from

\(\begin{array}{c}{\rm{E}}\left( {{\rm{\hat \sigma }}_{\rm{1}}^{\rm{2}}} \right){\rm{ = E}}\left( {{\rm{2 \times }}{{{\rm{\hat \sigma }}}^{\rm{2}}}} \right)\\{\rm{ = 2 \times }}\frac{{\rm{1}}}{{\rm{2}}}{{\rm{\sigma }}^{\rm{2}}}\\{\rm{ = }}{{\rm{\sigma }}^{\rm{2}}}\end{array}\)
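A short Monte Carlo sketch (again assuming NumPy; the seed, sample size, and true parameter values are arbitrary choices for illustration) confirms both results numerically: the mle concentrates near \(\sigma^2/2\), while \(\hat\sigma_1^2\) is centered at \(\sigma^2\).

```python
import numpy as np

rng = np.random.default_rng(42)        # arbitrary seed
n, sigma2, reps = 8, 4.0, 50_000       # arbitrary sample size, true variance
mu = rng.uniform(50.0, 150.0, size=n)  # arbitrary true specimen weights

mle_vals = np.empty(reps)
unbiased_vals = np.empty(reps)
for r in range(reps):
    x = rng.normal(mu, np.sqrt(sigma2))   # first weighing of each specimen
    y = rng.normal(mu, np.sqrt(sigma2))   # second, independent weighing
    ssd = np.sum((x - y) ** 2)
    mle_vals[r] = ssd / (4 * n)           # the mle, biased downward
    unbiased_vals[r] = ssd / (2 * n)      # the corrected estimator

print(np.mean(mle_vals))       # close to sigma2 / 2 = 2.0
print(np.mean(unbiased_vals))  # close to sigma2 = 4.0
```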


Most popular questions from this chapter

Each of 150 newly manufactured items is examined and the number of scratches per item is recorded (the items are supposed to be free of scratches), yielding the following data:

Assume that X represents the number of scratches on a randomly selected item and that X has a Poisson distribution with parameter \(\mu\).

a. Find an unbiased estimator of \(\mu\) and use it to calculate the estimate for the data. (Hint: For X Poisson, \(E(X) = \mu\), so \(E(\bar X) = ?\))

c. What is your estimator's standard deviation (standard error)? Calculate the estimated standard error. (Hint: \(\sigma_X^2 = \mu\) for X Poisson.)

Suppose a certain type of fertilizer has an expected yield per acre of \(\mu_1\) with variance \(\sigma^2\), whereas the expected yield for a second type of fertilizer is \(\mu_2\) with the same variance \(\sigma^2\). Let \(S_1^2\) and \(S_2^2\) denote the sample variances of yields based on sample sizes \(n_1\) and \(n_2\), respectively, of the two fertilizers. Show that the pooled (combined) estimator

\({{\rm{\hat \sigma }}^{\rm{2}}}{\rm{ = }}\frac{{\left( {{{\rm{n}}_{\rm{1}}}{\rm{ - 1}}} \right){\rm{S}}_{\rm{1}}^{\rm{2}}{\rm{ + }}\left( {{{\rm{n}}_{\rm{2}}}{\rm{ - 1}}} \right){\rm{S}}_{\rm{2}}^{\rm{2}}}}{{{{\rm{n}}_{\rm{1}}}{\rm{ + }}{{\rm{n}}_{\rm{2}}}{\rm{ - 2}}}}\)

is an unbiased estimator of \(\sigma^2\).

A sample of \({\rm{n}}\) captured Pandemonium jet fighters results in serial numbers\({{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{,}}{{\rm{x}}_{\rm{3}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}\). The CIA knows that the aircraft were numbered consecutively at the factory starting with \({\rm{\alpha }}\)and ending with\({\rm{\beta }}\), so that the total number of planes manufactured is \({\rm{\beta - \alpha + 1}}\) (e.g., if \({\rm{\alpha = 17}}\) and\({\rm{\beta = 29}}\), then \({\rm{29 - 17 + 1 = 13}}\)planes having serial numbers \({\rm{17,18,19, \ldots ,28,29}}\)were manufactured). However, the CIA does not know the values of \({\rm{\alpha }}\) or\({\rm{\beta }}\). A CIA statistician suggests using the estimator \({\rm{max}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ - min}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ + 1}}\)to estimate the total number of planes manufactured.

a. If \(n = 5\), \(x_1 = 237\), \(x_2 = 375\), \(x_3 = 202\), \(x_4 = 525\), and \(x_5 = 418\), what is the corresponding estimate?

b. Under what conditions on the sample will the value of the estimate be exactly equal to the true total number of planes? Will the estimate ever be larger than the true total? Do you think the estimator is unbiased for estimating\({\rm{\beta - \alpha + 1}}\)? Explain in one or two sentences.

Let \(X_1, \ldots, X_n\) be a random sample from a pdf that is symmetric about \(\mu\). An estimator for \(\mu\) that has been found to perform well for a variety of underlying distributions is the Hodges–Lehmann estimator. To define it, first compute for each \(i \le j\) and each \(j = 1, 2, \ldots, n\) the pairwise average \(\bar X_{i,j} = (X_i + X_j)/2\). Then the estimator is \(\hat\mu = \) the median of the \(\bar X_{i,j}\)'s. Compute the value of this estimate using the data of Exercise \(44\) of Chapter \(1\). (Hint: Construct a square table with the \(x_i\)'s listed on the left margin and on top. Then compute averages on and above the diagonal.)

The shear strength of each of ten test spot welds is determined, yielding the following data (psi):

\(\begin{array}{*{20}{l}}{{\rm{392}}}&{{\rm{376}}}&{{\rm{401}}}&{{\rm{367}}}&{{\rm{389}}}&{{\rm{362}}}&{{\rm{409}}}&{{\rm{415}}}&{{\rm{358}}}&{{\rm{375}}}\end{array}\)

a. Assuming that shear strength is normally distributed, estimate the true average shear strength and standard deviation of shear strength using the method of maximum likelihood.

b. Again assuming a normal distribution, estimate the strength value below which\({\rm{95\% }}\)of all welds will have their strengths. (Hint: What is the\({\rm{95 th}}\)percentile in terms of\({\rm{\mu }}\)and\({\rm{\sigma }}\)? Now use the invariance principle.)

c. Suppose we decide to examine another test spot weld. Let \(X = \) shear strength of the weld. Use the given data to obtain the mle of \(P(X \le 400)\). (Hint: \(P(X \le 400) = \Phi((400 - \mu)/\sigma)\).)
