
Let \(X_1, X_2, \ldots, X_n\) represent a random sample from a Rayleigh distribution with pdf

\(f(x;\theta) = \frac{x}{\theta}\,e^{-x^2/(2\theta)}, \quad x > 0\)

a. It can be shown that \(E\left(X^2\right) = 2\theta\). Use this fact to construct an unbiased estimator of \(\theta\) based on \(\sum X_i^2\) (and use rules of expected value to show that it is unbiased).

b. Estimate \(\theta\) from the following \(n = 10\) observations on vibratory stress of a turbine blade under specified conditions:

\(\begin{array}{lllll} 16.88 & 10.23 & 4.59 & 6.66 & 13.68 \\ 14.23 & 19.87 & 9.40 & 6.51 & 10.95 \end{array}\)

Short Answer


a) An unbiased estimator of \(\theta\) is \(\hat\theta = \frac{\sum X_i^2}{2n}\).

b) The estimated value is \(\hat\theta = 74.505\).

Step by step solution

01

Definition

An estimator is a rule for computing an estimate of a given quantity from observed data. The rule (the estimator), the quantity being estimated (the estimand), and the resulting value (the estimate) are all distinct.

02

Proving the estimator is unbiased

a)

This part follows almost directly from the given fact that

\(E\left(X^2\right) = 2\theta\)

Since each \(X_i^2\) has expected value \(2\theta\), dividing the sum \(\sum X_i^2\) by \(2n\) gives an unbiased estimator of the parameter \(\theta\):

\(\hat\theta = \frac{\sum X_i^2}{2n}\)

The proof of this claim follows

\(\begin{aligned}E(\hat\theta) &= E\left(\frac{\sum_{i=1}^{n} X_i^2}{2n}\right)\\ &= \frac{1}{2n}\,E\left(\sum_{i=1}^{n} X_i^2\right)\\ &= \frac{1}{2n}\sum_{i=1}^{n} E\left(X_i^2\right)\\ &= \frac{1}{2n}\sum_{i=1}^{n} 2\theta\\ &= \frac{1}{2n} \cdot 2n\theta\\ &= \theta\end{aligned}\)

which proves that the estimator is unbiased for \(\theta\).
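The algebra above can also be spot-checked numerically. Below is a minimal Monte Carlo sketch; the true parameter value \(\theta = 5\) and the number of replications are arbitrary choices for illustration. Note that NumPy's Rayleigh generator is parameterized by the scale \(\sigma\), which corresponds to \(\sigma = \sqrt{\theta}\) for the pdf used in this exercise.

```python
import numpy as np

# Rayleigh pdf in this exercise: f(x; theta) = (x/theta) * exp(-x^2 / (2*theta)),
# which matches NumPy's parameterization with scale sigma = sqrt(theta).
rng = np.random.default_rng(0)

theta = 5.0      # arbitrary "true" parameter for the experiment
n = 10           # sample size, as in part (b)
reps = 20_000    # number of simulated samples

samples = rng.rayleigh(scale=np.sqrt(theta), size=(reps, n))

# hat(theta) = sum(X_i^2) / (2n) for each simulated sample
estimates = (samples ** 2).sum(axis=1) / (2 * n)

# If the estimator is unbiased, the average estimate should be close to theta.
print(estimates.mean())
```

With 20,000 replications the average of the estimates lands within a fraction of a percent of the true \(\theta\), consistent with unbiasedness.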

03

Finding the estimate of \(\theta\)

b)

Given the \(n = 10\) observations, the sum of the squared observations is:

\(\begin{aligned}\sum_{i=1}^{10} x_i^2 &= 16.88^2 + 10.23^2 + \ldots + 10.95^2\\ &= 1490.1058\end{aligned}\)

Therefore, the estimate is \(\hat\theta = \frac{1}{2 \times 10} \times 1490.1058 = 74.505\).
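The arithmetic above can be reproduced in a few lines (a sketch; the variable names are ours):

```python
# The n = 10 vibratory-stress observations from the exercise
data = [16.88, 10.23, 4.59, 6.66, 13.68,
        14.23, 19.87, 9.40, 6.51, 10.95]

n = len(data)
sum_sq = sum(x ** 2 for x in data)   # sum of x_i^2
theta_hat = sum_sq / (2 * n)         # unbiased estimator from part (a)

print(round(sum_sq, 4))      # -> 1490.1058
print(round(theta_hat, 3))   # -> 74.505
```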


