
The mean squared error of an estimator \(\hat\theta\) is \(\mathrm{MSE}(\hat\theta) = E(\hat\theta - \theta)^2\). If \(\hat\theta\) is unbiased, then \(\mathrm{MSE}(\hat\theta) = V(\hat\theta)\), but in general \(\mathrm{MSE}(\hat\theta) = V(\hat\theta) + (\mathrm{bias})^2\). Consider the estimator \(\hat\sigma^2 = KS^2\), where \(S^2\) is the sample variance. What value of \(K\) minimizes the mean squared error of this estimator when the population distribution is normal? (Hint: It can be shown that \(E\left[(S^2)^2\right] = (n+1)\sigma^4/(n-1)\). In general, it is difficult to find \(\hat\theta\) to minimize \(\mathrm{MSE}(\hat\theta)\), which is why we look only at unbiased estimators and minimize \(V(\hat\theta)\).)

Short Answer


The value of \(K = \frac{n-1}{n+1}\).

Step by step solution

01

Define mean squared error

The mean squared error of an estimator \(\hat\theta\) is \(\mathrm{MSE}(\hat\theta) = E(\hat\theta - \theta)^2\), which decomposes into the variance of the estimator plus the square of its bias.

02

Explanation

To begin, calculate the mean squared error of \(\hat\sigma^2 = KS^2\). Note that

\(\mathrm{MSE}\left(\hat\sigma^2\right) = V\left(\hat\sigma^2\right) + \left[\mathrm{Bias}\left(\hat\sigma^2\right)\right]^2\)

where the bias is

\(\begin{aligned}\mathrm{Bias}\left(\hat\sigma^2\right) &= E\left(\hat\sigma^2\right) - \sigma^2\\ &= E\left(KS^2\right) - \sigma^2\\ &= K\sigma^2 - \sigma^2\\ &= \sigma^2(K-1)\end{aligned}\)

Here we used the definition of bias and the fact that \(S^2\) is an unbiased estimator of \(\sigma^2\). The variance can be computed as follows:

\(\begin{aligned}V\left(\hat\sigma^2\right) &= V\left(KS^2\right)\\ &= K^2 V\left(S^2\right)\\ &= K^2\left(E\left[(S^2)^2\right] - \left(E\left[S^2\right]\right)^2\right)\\ &= K^2\left(\frac{n+1}{n-1}\sigma^4 - \left(\sigma^2\right)^2\right)\\ &= K^2\sigma^4\left(\frac{n+1}{n-1} - 1\right)\end{aligned}\)
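The hint's value \(E[(S^2)^2] = (n+1)\sigma^4/(n-1)\) implies \(V(S^2) = \sigma^4\left(\frac{n+1}{n-1} - 1\right) = 2\sigma^4/(n-1)\), which can be checked by simulation. A minimal pure-Python sketch, with assumed illustrative values \(\sigma = 2\) and \(n = 10\):

```python
import random
import statistics

# Monte Carlo check (illustrative setup: sigma = 2, n = 10) that for normal
# samples V(S^2) = sigma^4 * ((n+1)/(n-1) - 1) = 2*sigma^4/(n-1),
# i.e. the variance expression derived above with K = 1.
random.seed(0)
sigma, n, reps = 2.0, 10, 100_000

s2_values = []
for _ in range(reps):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    s2_values.append(statistics.variance(sample))  # unbiased sample variance

var_s2 = statistics.variance(s2_values)  # simulated V(S^2)
theory = 2 * sigma**4 / (n - 1)          # = 32/9, about 3.556
print(var_s2, theory)
```

With 100,000 replications the simulated value typically lands within a few percent of the theoretical one; the mean of the simulated \(S^2\) values also sits near \(\sigma^2 = 4\), confirming unbiasedness.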

03

Finding the value of K that minimizes the MSE

To find the value of \(K\) that minimizes the MSE, differentiate the MSE with respect to \(K\). The mean squared error of the estimator \(\hat\sigma^2\) is

\(\begin{array}{c}\mathrm{MSE}\left(\hat\sigma^2\right) = K^2\sigma^4\left(\frac{n+1}{n-1} - 1\right) + \left(\sigma^2(K-1)\right)^2\\ = K^2\sigma^4\left(\frac{n+1}{n-1} - 1\right) + \sigma^4(K-1)^2\end{array}\)

Its derivative with respect to \(K\) is

\(\begin{array}{c}\frac{d}{dK}\mathrm{MSE}\left(\hat\sigma^2\right) = \frac{d}{dK}\left(K^2\sigma^4\left(\frac{n+1}{n-1} - 1\right) + \sigma^4(K-1)^2\right)\\ = 2K\sigma^4\left(\frac{n+1}{n-1} - 1\right) + 2\sigma^4(K-1)\end{array}\)

and the minimum is found by setting this derivative to zero,

\(2K\sigma^4\left(\frac{n+1}{n-1} - 1\right) + 2\sigma^4(K-1) = 0\)

or, equivalently,

\(\begin{aligned}2\sigma^4 K\left(\frac{n+1}{n-1} - 1\right) &= -2\sigma^4(K-1)\\ K\left(\frac{n+1}{n-1} - 1\right) &= 1 - K\\ K\left(\frac{n+1}{n-1} - 1 + 1\right) &= 1\\ K\,\frac{n+1}{n-1} &= 1\end{aligned}\)

Since the second derivative, \(2\sigma^4\left(\frac{n+1}{n-1} - 1\right) + 2\sigma^4 = \frac{2(n+1)\sigma^4}{n-1}\), is positive, this critical point is a minimum. As a result, the value of \(K\) that minimizes the MSE is

\(K = \frac{n-1}{n+1}\).

The unbiased estimator (derived earlier) corresponds to \(K = 1\), and the maximum likelihood estimator to \(K = \frac{n-1}{n}\). The MSE-minimizing estimator \(\frac{n-1}{n+1}S^2\) is therefore distinct from both; it is neither unbiased nor the MLE.
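As a numeric sanity check, the MSE \(K^2 \cdot 2\sigma^4/(n-1) + \sigma^4(K-1)^2\) can be minimized over a grid of \(K\) values and compared with the closed form \((n-1)/(n+1)\). A minimal sketch with assumed example values \(n = 10\) and \(\sigma^2 = 1\):

```python
# Grid search over K for MSE(K) = K^2 * 2*sigma^4/(n-1) + sigma^4*(K-1)^2,
# the mean squared error of K*S^2 derived above, compared with the
# closed-form minimizer K = (n-1)/(n+1). Example values: n = 10, sigma^2 = 1.
def mse(k, n, sigma2=1.0):
    sigma4 = sigma2 ** 2
    return k * k * 2.0 * sigma4 / (n - 1) + sigma4 * (k - 1) ** 2

n = 10
grid = [i / 100000 for i in range(50000, 150001)]  # K in [0.5, 1.5]
k_best = min(grid, key=lambda k: mse(k, n))
k_closed = (n - 1) / (n + 1)                       # 9/11, about 0.818
print(k_best, k_closed)
```

The grid minimizer agrees with \((n-1)/(n+1)\) to the grid resolution, and its MSE is smaller than that of both the unbiased choice \(K = 1\) and the MLE choice \(K = (n-1)/n\).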


Most popular questions from this chapter

The accompanying data on flexural strength (MPa) for concrete beams of a certain type was introduced in Example 1.2.

\(\begin{array}{rrrrrrr} 5.9 & 7.2 & 7.3 & 6.3 & 8.1 & 6.8 & 7.0\\ 7.6 & 6.8 & 6.5 & 7.0 & 6.3 & 7.9 & 9.0\\ 3.2 & 8.7 & 7.8 & 9.7 & 7.4 & 7.7 & 9.7\\ 7.3 & 7.7 & 11.6 & 11.3 & 11.8 & 10.7 & \end{array}\)

a. Calculate a point estimate of the mean value of strength for the conceptual population of all beams manufactured in this fashion, and state which estimator you used. (Hint: \(\Sigma x_i = 219.8\).)

b. Calculate a point estimate of the strength value that separates the weakest 50% of all such beams from the strongest 50%, and state which estimator you used.

c. Calculate and interpret a point estimate of the population standard deviation \(\sigma\). Which estimator did you use? (Hint: \(\Sigma x_i^2 = 1860.94\).)

d. Calculate a point estimate of the proportion of all such beams whose flexural strength exceeds \(10\,\mathrm{MPa}\). (Hint: Think of an observation as a "success" if it exceeds 10.)

e. Calculate a point estimate of the population coefficient of variation \(\sigma/\mu\), and state which estimator you used.

An estimator \(\hat\theta\) is said to be consistent if for any \(\epsilon > 0\), \(P(|\hat\theta - \theta| \ge \epsilon) \to 0\) as \(n \to \infty\). That is, \(\hat\theta\) is consistent if, as the sample size gets larger, it is less and less likely that \(\hat\theta\) will be further than \(\epsilon\) from the true value of \(\theta\). Show that \(\bar X\) is a consistent estimator of \(\mu\) when \(\sigma^2 < \infty\), by using Chebyshev's inequality from Exercise 44 of Chapter 3. (Hint: The inequality can be rewritten in the form \(P\left(|Y - \mu_Y| \ge \epsilon\right) \le \sigma_Y^2/\epsilon^2\). Now identify \(Y\) with \(\bar X\).)
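The consistency of \(\bar X\) can be illustrated by simulation. The sketch below uses an assumed setup (standard normal data, \(\mu = 0\), \(\sigma = 1\), \(\epsilon = 0.2\)) to show the empirical \(P(|\bar X - \mu| \ge \epsilon)\) shrinking as \(n\) grows, in line with the Chebyshev bound \(\sigma^2/(n\epsilon^2)\):

```python
import random

# Illustration (assumed setup: X_i ~ N(0, 1), eps = 0.2): the empirical
# probability that |Xbar - mu| >= eps decreases as n grows, staying below
# Chebyshev's bound sigma^2 / (n * eps^2) once that bound drops under 1.
random.seed(1)
mu, sigma, eps, reps = 0.0, 1.0, 0.2, 5000

probs = []
for n in (10, 40, 160):
    exceed = 0
    for _ in range(reps):
        xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if abs(xbar - mu) >= eps:
            exceed += 1
    p_hat = exceed / reps
    bound = sigma**2 / (n * eps**2)
    probs.append(p_hat)
    print(n, p_hat, bound)
```

Each fourfold increase in \(n\) halves the standard deviation of \(\bar X\), so the exceedance probability falls sharply; Chebyshev's bound is loose but still forces it to zero.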

The shear strength of each of ten test spot welds is determined, yielding the following data (psi):

\(\begin{array}{llllllllll} 392 & 376 & 401 & 367 & 389 & 362 & 409 & 415 & 358 & 375 \end{array}\)

a. Assuming that shear strength is normally distributed, estimate the true average shear strength and standard deviation of shear strength using the method of maximum likelihood.

b. Again assuming a normal distribution, estimate the strength value below which \(95\%\) of all welds will have their strengths. (Hint: What is the \(95\)th percentile in terms of \(\mu\) and \(\sigma\)? Now use the invariance principle.)

c. Suppose we decide to examine another test spot weld. Let \(X =\) shear strength of the weld. Use the given data to obtain the mle of \(P(X \le 400)\). (Hint: \(P(X \le 400) = \Phi((400 - \mu)/\sigma)\).)

A vehicle with a particular defect in its emission control system is taken to a succession of randomly selected mechanics until \(r = 3\) of them have correctly diagnosed the problem. Suppose that this requires diagnoses by \(20\) different mechanics (so there were \(17\) incorrect diagnoses). Let \(p = P(\text{correct diagnosis})\), so \(p\) is the proportion of all mechanics who would correctly diagnose the problem. What is the mle of \(p\)? Is it the same as the mle if a random sample of \(20\) mechanics results in \(3\) correct diagnoses? Explain. How does the mle compare to the estimate resulting from the use of the unbiased estimator?

Let \(X\) represent the error in making a measurement of a physical characteristic or property (e.g., the boiling point of a particular liquid). It is often reasonable to assume that \(E(X) = 0\) and that \(X\) has a normal distribution. Thus, the pdf of any particular measurement error is

\(f(x;\theta) = \frac{1}{\sqrt{2\pi\theta}}\,e^{-x^2/2\theta} \quad -\infty < x < \infty\)

(where we have used \(\theta\) in place of \(\sigma^2\)). Now suppose that \(n\) independent measurements are made, resulting in measurement errors \(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n\). Obtain the mle of \(\theta\).
