
Suppose the true average growth \(\mu\) of one type of plant during a 1-year period is identical to that of a second type, but the variance of growth for the first type is \(\sigma^2\), whereas for the second type the variance is \(4\sigma^2\). Let \(X_1, \ldots, X_m\) be \(m\) independent growth observations on the first type (so \(E(X_i) = \mu\), \(V(X_i) = \sigma^2\)), and let \(Y_1, \ldots, Y_n\) be \(n\) independent growth observations on the second type (\(E(Y_j) = \mu\), \(V(Y_j) = 4\sigma^2\)).

a. Show that the estimator\({\rm{\hat \mu = \delta \bar X + (1 - \delta )\bar Y}}\)is unbiased for\({\rm{\mu }}\)(for\({\rm{0 < \delta < 1}}\), the estimator is a weighted average of the two individual sample means).

b. For fixed\({\rm{m}}\)and\({\rm{n}}\), compute\({\rm{V(\hat \mu ),}}\)and then find the value of\({\rm{\delta }}\)that minimizes\({\rm{V(\hat \mu )}}\). (Hint: Differentiate\({\rm{V(\hat \mu )}}\)with respect to\({\rm{\delta }}{\rm{.)}}\)

Short Answer


a) \(\hat\mu\) is an unbiased estimator for \(\mu\).

b) The value \(\delta = \frac{4m}{n + 4m}\) minimizes \(V(\hat\mu)\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. The rule (the estimator), the quantity of interest (the estimand), and the result of applying the rule (the estimate) are all distinct.

02

Proving the estimator is unbiased

a)

Consider the given information,

\(\begin{aligned} E(X_i) &= \mu, & V(X_i) &= \sigma^2\\ E(Y_j) &= \mu, & V(Y_j) &= 4\sigma^2\\ \hat\mu &= \delta\bar X + (1 - \delta)\bar Y \end{aligned}\)

The mean of the sampling distribution of the sample mean is the population mean:

\(\begin{array}{c}{\rm{E(\bar X) = E}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ = \mu }}\\{\rm{E(\bar Y) = E}}\left( {{{\rm{Y}}_{\rm{i}}}} \right){\rm{ = \mu }}\end{array}\)

Determine the expected value of \({\rm{\hat \mu }}\) :

\(\begin{aligned} E(\hat \mu ) &= E(\delta \bar X + (1 - \delta )\bar Y) \\ &= \delta E(\bar X) + (1 - \delta )E(\bar Y)\\ &= \delta \mu + (1 - \delta )\mu \\ & = \delta \mu + \mu - \delta \mu \\ &= \mu \end{aligned}\)

Since the expected value of \(\hat\mu\) is \(\mu\), \(\hat\mu\) is an unbiased estimator for \(\mu\).
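As an optional illustration (not part of the original solution), a small Monte Carlo sketch can check that the long-run average of \(\hat\mu\) stays close to \(\mu\) for any fixed \(\delta\). Only the means and variances are specified by the exercise, so the normal distribution and the specific values of \(\mu\), \(\sigma\), \(m\), \(n\), and \(\delta\) below are arbitrary assumptions made for the demonstration.

```python
import numpy as np

# Arbitrary illustration values; the normal distribution is an assumption,
# since the exercise only specifies the means and variances.
mu, sigma = 10.0, 2.0
m, n = 8, 12
delta = 0.4

rng = np.random.default_rng(0)
reps = 200_000

x_bar = rng.normal(mu, sigma, size=(reps, m)).mean(axis=1)      # V(X_i) = sigma^2
y_bar = rng.normal(mu, 2 * sigma, size=(reps, n)).mean(axis=1)  # V(Y_j) = 4*sigma^2
mu_hat = delta * x_bar + (1 - delta) * y_bar

print(mu_hat.mean())  # should land near mu = 10, consistent with E(mu_hat) = mu
```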

Now, to find the value of \(\delta\) in part (b), we also need the variances of the two sample means. As established above, \(E(\bar X) = E(\bar Y) = \mu\), and the given information provides \(V(X_i) = \sigma^2\) and \(V(Y_j) = 4\sigma^2\).

The population variance divided by the sample size equals the variance of the sampling distribution of the sample mean:

\(\begin{aligned} V(\bar X) &= \frac{V(X_i)}{m} = \frac{\sigma^2}{m}\\ V(\bar Y) &= \frac{V(Y_j)}{n} = \frac{4\sigma^2}{n} \end{aligned}\)

Determine the variance of \(\hat\mu\), using the property \(V(aX + bY) = a^2V(X) + b^2V(Y)\), which holds when \(X\) and \(Y\) are independent:

03

Calculation

(b)

Consider the given information,

\(\begin{aligned} V(\hat\mu) &= V(\delta\bar X + (1 - \delta)\bar Y)\\ &= \delta^2 V(\bar X) + (1 - \delta)^2 V(\bar Y)\\ &= \delta^2\frac{\sigma^2}{m} + (1 - \delta)^2\frac{4\sigma^2}{n}\\ &= \frac{\delta^2\sigma^2}{m} + \frac{4\delta^2\sigma^2}{n} - \frac{8\delta\sigma^2}{n} + \frac{4\sigma^2}{n} \end{aligned}\)

Differentiate with respect to\({\rm{\delta }}\):

\(\begin{aligned}\frac{{\rm{d}}}{{{\rm{d\delta }}}}V(\hat \mu ) &= \frac{{\rm{d}}}{{{\rm{d\delta }}}}\left( {\frac{{{{\rm{\delta }}^{\rm{2}}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{m}}}{\rm{ + }}\frac{{{\rm{4}}{{\rm{\delta }}^{\rm{2}}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}{\rm{ - }}\frac{{{\rm{8\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}{\rm{ + }}\frac{{{\rm{4}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}} \right)\\ &= \frac{{{\rm{2\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{m}}}{\rm{ + }}\frac{{{\rm{8\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}{\rm{ - }}\frac{{{\rm{8}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}\end{aligned}\)
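As an optional symbolic check (not part of the textbook solution), the derivative just computed can be verified with sympy, which can also solve directly for the value of \(\delta\) that makes it zero; this is only a verification sketch.

```python
import sympy as sp

delta, sigma, m, n = sp.symbols('delta sigma m n', positive=True)

# V(mu_hat) = delta^2*sigma^2/m + (1 - delta)^2*4*sigma^2/n
V = delta**2 * sigma**2 / m + (1 - delta)**2 * 4 * sigma**2 / n
dV = sp.diff(V, delta)

print(sp.expand(dV))                  # expect 2*delta*sigma**2/m + 8*delta*sigma**2/n - 8*sigma**2/n
print(sp.solve(sp.Eq(dV, 0), delta))  # expect [4*m/(4*m + n)]
```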

The minimum occurs at the value of \(\delta\) for which this derivative is zero (the second derivative, \(\frac{2\sigma^2}{m} + \frac{8\sigma^2}{n}\), is positive, so the critical point is indeed a minimum):

\(\frac{{{\rm{2\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{m}}}{\rm{ + }}\frac{{{\rm{8\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}{\rm{ - }}\frac{{{\rm{8}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}{\rm{ = 0}}\)

Add \(\frac{8\sigma^2}{n}\) to each side of the equation and combine the terms on the left over the common denominator \(mn\):

\(\frac{{{\rm{2n\delta }}{{\rm{\sigma }}^{\rm{2}}}{\rm{ + 8m\delta }}{{\rm{\sigma }}^{\rm{2}}}}}{{{\rm{mn}}}}{\rm{ = }}\frac{{{\rm{8}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}\)

Multiply each side of the equation by\({\rm{mn}}\):

\({\rm{2n\delta }}{{\rm{\sigma }}^{\rm{2}}}{\rm{ + 8m\delta }}{{\rm{\sigma }}^{\rm{2}}}{\rm{ = 8}}{{\rm{\sigma }}^{\rm{2}}}{\rm{m}}\)

Factor out\({\rm{\delta }}\):

\(\left( {{\rm{2n}}{{\rm{\sigma }}^{\rm{2}}}{\rm{ + 8m}}{{\rm{\sigma }}^{\rm{2}}}} \right){\rm{\delta = 8}}{{\rm{\sigma }}^{\rm{2}}}{\rm{m}}\)

Divide each side of the equation by\({\rm{2n}}{{\rm{\sigma }}^{\rm{2}}}{\rm{ + 8m}}{{\rm{\sigma }}^{\rm{2}}}\):

\({\rm{\delta = }}\frac{{{\rm{8}}{{\rm{\sigma }}^{\rm{2}}}{\rm{m}}}}{{{\rm{2n}}{{\rm{\sigma }}^{\rm{2}}}{\rm{ + 8m}}{{\rm{\sigma }}^{\rm{2}}}}}\)

Divide the numerator and denominator by\({\rm{2}}{{\rm{\sigma }}^{\rm{2}}}\):

\({\rm{\delta = }}\frac{{{\rm{4m}}}}{{{\rm{n + 4m}}}}\)

Thus, the value \(\delta = \frac{4m}{n + 4m}\) minimizes \(V(\hat\mu)\).
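As a purely numerical illustration (again not part of the original solution), evaluating \(V(\hat\mu) = \frac{\delta^2\sigma^2}{m} + (1-\delta)^2\frac{4\sigma^2}{n}\) over a grid of \(\delta\) values shows the minimum landing at \(\frac{4m}{n+4m}\); the values of \(\sigma\), \(m\), and \(n\) below are arbitrary.

```python
import numpy as np

sigma, m, n = 2.0, 8, 12                  # arbitrary illustration values
deltas = np.linspace(0.0, 1.0, 100_001)

# V(mu_hat) as a function of delta
var_mu_hat = deltas**2 * sigma**2 / m + (1 - deltas)**2 * 4 * sigma**2 / n
grid_minimizer = deltas[np.argmin(var_mu_hat)]

print(grid_minimizer, 4 * m / (n + 4 * m))  # the two values should essentially agree
```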


Most popular questions from this chapter

Suppose a certain type of fertilizer has an expected yield per acre of \(\mu_1\) with variance \(\sigma^2\), whereas the expected yield for a second type of fertilizer is \(\mu_2\) with the same variance \(\sigma^2\). Let \(S_1^2\) and \(S_2^2\) denote the sample variances of yields based on sample sizes \(n_1\) and \(n_2\), respectively, of the two fertilizers. Show that the pooled (combined) estimator

\({{\rm{\hat \sigma }}^{\rm{2}}}{\rm{ = }}\frac{{\left( {{{\rm{n}}_{\rm{1}}}{\rm{ - 1}}} \right){\rm{S}}_{\rm{1}}^{\rm{2}}{\rm{ + }}\left( {{{\rm{n}}_{\rm{2}}}{\rm{ - 1}}} \right){\rm{S}}_{\rm{2}}^{\rm{2}}}}{{{{\rm{n}}_{\rm{1}}}{\rm{ + }}{{\rm{n}}_{\rm{2}}}{\rm{ - 2}}}}\)

is an unbiased estimator of \(\sigma^2\).
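For readers who want a quick sanity check on this claim, here is a minimal Monte Carlo sketch; the normal populations and all numerical values are arbitrary assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, sigma = 3.0, 5.0, 1.5   # arbitrary illustration values
n1, n2 = 6, 9
reps = 200_000

x = rng.normal(mu1, sigma, size=(reps, n1))
y = rng.normal(mu2, sigma, size=(reps, n2))
s1_sq = x.var(axis=1, ddof=1)     # sample variance S1^2 for each replication
s2_sq = y.var(axis=1, ddof=1)     # sample variance S2^2 for each replication

pooled = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
print(pooled.mean(), sigma**2)    # the average should sit close to sigma^2
```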

Consider randomly selecting \(n\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \(Y_1, \ldots, Y_n\). The article “A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains” (Reliability Engr. and System Safety, 2013: 270–279) proposes a linear corrosion model: \(Y_i = t_i R\), where \(t_i\) is the age of the pipe and \(R\), the corrosion rate, is exponentially distributed with parameter \(\lambda\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \(c > 0\) and \(X\) has an exponential distribution, so does \(cX\).)
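One way to carry the hint through: each \(Y_i/t_i\) behaves like an exponential(\(\lambda\)) observation, so maximizing the likelihood \(\prod_i (\lambda/t_i)\,e^{-\lambda y_i/t_i}\) leads to \(\hat\lambda = n \big/ \sum_i (y_i/t_i)\). The sketch below encodes that reasoning and checks it on data simulated from the model; the true \(\lambda\) and the pipe ages are arbitrary illustration choices, and this is one plausible route rather than a reproduction of the cited article's computation.

```python
import numpy as np

def corrosion_rate_mle(ages, losses):
    """MLE of lambda in the model Y_i = t_i * R_i, R_i ~ Exponential(lambda):
    each Y_i / t_i is an exponential(lambda) observation, so
    lambda_hat = n / sum(Y_i / t_i)."""
    ages = np.asarray(ages, dtype=float)
    losses = np.asarray(losses, dtype=float)
    return losses.size / np.sum(losses / ages)

# Check on data simulated from the model itself (lambda and the ages are arbitrary).
rng = np.random.default_rng(2)
lam_true = 0.5
ages = rng.uniform(5.0, 40.0, size=5_000)                  # hypothetical pipe ages
losses = ages * rng.exponential(1.0 / lam_true, 5_000)     # Y_i = t_i * R_i
print(corrosion_rate_mle(ages, losses))                    # should be near lam_true = 0.5
```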

At time \(t = 0\), there is one individual alive in a certain population. A pure birth process then unfolds as follows. The time until the first birth is exponentially distributed with parameter \(\lambda\). After the first birth, there are two individuals alive. The time until the first gives birth again is exponential with parameter \(\lambda\), and similarly for the second individual. Therefore, the time until the next birth is the minimum of two exponential(\(\lambda\)) variables, which is exponential with parameter \(2\lambda\). Similarly, once the second birth has occurred, there are three individuals alive, so the time until the next birth is an exponential rv with parameter \(3\lambda\), and so on (the memoryless property of the exponential distribution is being used here). Suppose the process is observed until the sixth birth has occurred and the successive birth times are \(25.2, 41.7, 51.2, 55.5, 59.5, 61.8\) (from which you should calculate the times between successive births). Derive the mle of \(\lambda\). (Hint: The likelihood is a product of exponential terms.)
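If the likelihood is written, per the hint, as a product of exponential densities for the inter-birth times with rates \(\lambda, 2\lambda, \ldots, 6\lambda\), setting the derivative of the log-likelihood to zero gives \(\hat\lambda = 6 \big/ \sum_k k\,w_k\), where \(w_k\) is the \(k\)-th inter-birth time. The sketch below carries that computation out for the birth times quoted in the exercise; treat it as one plausible route to the answer rather than a verified solution.

```python
import numpy as np

birth_times = np.array([25.2, 41.7, 51.2, 55.5, 59.5, 61.8])   # from the exercise
waits = np.diff(np.concatenate(([0.0], birth_times)))          # inter-birth times w_1..w_6

# log L(lambda) = sum_k [log(k*lambda) - k*lambda*w_k], so the maximizer is
# lambda_hat = 6 / sum_k (k * w_k)
k = np.arange(1, len(waits) + 1)
lam_hat = len(waits) / np.sum(k * waits)
print(lam_hat)
```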

The article from which the data in Exercise 1 was extracted also gave the accompanying strength observations for cylinders:

\(\begin{array}{rrrrrrrrrr} 6.1 & 5.8 & 7.8 & 7.1 & 7.2 & 9.2 & 6.6 & 8.3 & 7.0 & 8.3\\ 7.8 & 8.1 & 7.4 & 8.5 & 8.9 & 9.8 & 9.7 & 14.1 & 12.6 & 11.2 \end{array}\)

Prior to obtaining data, denote the beam strengths by \(X_1, \ldots, X_m\) and the cylinder strengths by \(Y_1, \ldots, Y_n\). Suppose that the \(X_i\)'s constitute a random sample from a distribution with mean \(\mu_1\) and standard deviation \(\sigma_1\) and that the \(Y_i\)'s form a random sample (independent of the \(X_i\)'s) from another distribution with mean \(\mu_2\) and standard deviation \(\sigma_2\).

a. Use rules of expected value to show that \({\rm{\bar X - \bar Y}}\)is an unbiased estimator of \({{\rm{\mu }}_{\rm{1}}}{\rm{ - }}{{\rm{\mu }}_{\rm{2}}}\). Calculate the estimate for the given data.

b. Use rules of variance from Chapter 5 to obtain an expression for the variance and standard deviation (standard error) of the estimator in part (a), and then compute the estimated standard error.

c. Calculate a point estimate of the ratio \({{\rm{\sigma }}_{\rm{1}}}{\rm{/}}{{\rm{\sigma }}_{\rm{2}}}\)of the two standard deviations.

d. Suppose a single beam and a single cylinder are randomly selected. Calculate a point estimate of the variance of the difference \({\rm{X - Y}}\) between beam strength and cylinder strength.
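As a sketch of how parts (a) and (b) could be computed once both samples are available, the helper below (its name is ours, not from the text) returns the point estimate \(\bar X - \bar Y\) and the estimated standard error \(\sqrt{s_1^2/m + s_2^2/n}\). The beam strengths come from Exercise 1, which is not reproduced here, so no data are filled in.

```python
import numpy as np

def diff_of_means_summary(x, y):
    """Point estimate of mu_1 - mu_2 and its estimated standard error."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    estimate = x.mean() - y.mean()                                         # part (a)
    std_error = np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)  # part (b)
    return estimate, std_error

# Usage: diff_of_means_summary(beam_strengths, cylinder_strengths),
# where cylinder_strengths holds the 20 values listed above.
```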

Consider a random sample \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) from the pdf

\(f(x;\theta) = 0.5(1 + \theta x), \qquad -1 \le x \le 1\)

where \(-1 \le \theta \le 1\) (this distribution arises in particle physics). Show that \(\hat\theta = 3\bar X\) is an unbiased estimator of \(\theta\). (Hint: First determine \(\mu = E(X) = E(\bar X)\).)
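A short symbolic check (not asked for by the exercise) supports the hint: integrating \(x \cdot 0.5(1 + \theta x)\) over \([-1, 1]\) gives \(E(X) = \theta/3\), so \(E(3\bar X) = \theta\). A minimal sympy sketch:

```python
import sympy as sp

x, theta = sp.symbols('x theta', real=True)

pdf = sp.Rational(1, 2) * (1 + theta * x)   # f(x; theta) on [-1, 1]
EX = sp.integrate(x * pdf, (x, -1, 1))

print(sp.simplify(EX))   # expect theta/3, so E(3*X_bar) = 3*E(X) = theta
```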
