
Let \(X\) have a Weibull distribution with parameters \(\alpha\) and \(\beta\), so

\(E(X) = \beta \cdot \Gamma\left(1 + \frac{1}{\alpha}\right), \qquad V(X) = \beta^2\left\{\Gamma\left(1 + \frac{2}{\alpha}\right) - \left[\Gamma\left(1 + \frac{1}{\alpha}\right)\right]^2\right\}\)

a. Based on a random sample \(X_1, \ldots, X_n\), write equations for the method of moments estimators of \(\beta\) and \(\alpha\). Show that, once the estimate of \(\alpha\) has been obtained, the estimate of \(\beta\) can be found from a table of the gamma function, and that the estimate of \(\alpha\) is the solution to a complicated equation involving the gamma function.

b. If \(n = 20\), \(\bar x = 28.0\), and \(\Sigma x_i^2 = 16{,}500\), compute the estimates. (Hint: \((\Gamma(1.2))^2/\Gamma(1.4) = .95\).)

Short Answer


a) The method of moments estimators solve the system \(\bar X = \hat\beta\,\Gamma(1 + 1/\hat\alpha)\), \(\frac{1}{n}\sum X_i^2 = \hat\beta^2\,\Gamma(1 + 2/\hat\alpha)\). The estimate \(\hat\alpha\) is the solution of the complicated equation \(\frac{1}{n}\sum X_i^2 / \bar X^2 = \Gamma(1 + 2/\hat\alpha)/\Gamma^2(1 + 1/\hat\alpha)\); once it is obtained, \(\hat\beta = \bar X/\Gamma(1 + 1/\hat\alpha)\) follows from a table of the gamma function.

b) The estimates are \(\hat \alpha = 5\) and \(\hat \beta = \frac{28}{\Gamma(1.2)} \approx 30.5\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data; the rule (the estimator), the quantity being estimated (the estimand), and the resulting value (the estimate) are all distinct.
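For example, the sample mean \(\bar X\) is an estimator of the population mean \(\mu\); the value it takes on a particular data set, such as \(\bar x = 28.0\) in part (b) below, is the corresponding estimate.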

02

Explanation

a)

Let the random variables \(X_1, X_2, \ldots, X_n\) be a random sample from a distribution with pmf or pdf \(f\left(x; \theta_1, \theta_2, \ldots, \theta_m\right)\), \(m \in \mathbb{N}\), where the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown.

By equating sample moments to the corresponding population moments and solving for the unknown parameters, the moment estimators \(\widehat{\theta_1}, \widehat{\theta_2}, \ldots, \widehat{\theta_m}\) are obtained.
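For example, for a random sample from an exponential distribution, where \(E(X) = 1/\lambda\), equating the first sample and population moments gives \(\bar X = 1/\hat\lambda\), and hence \(\hat\lambda = 1/\bar X\).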

The distribution specified in this exercise is the Weibull distribution with parameters \(\alpha\) and \(\beta\), for which the moment estimators must be derived.

The sample moment of first order is

\({\rm{\bar X = }}\frac{{\rm{1}}}{{\rm{n}}}\left( {{{\rm{X}}_{\rm{1}}}{\rm{ + }}{{\rm{X}}_{\rm{2}}}{\rm{ + \ldots + }}{{\rm{X}}_{\rm{n}}}} \right)\)

and the population moment of first order is

\({\rm{E(X) = \beta \times \Gamma }}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{\rm{\alpha }}}} \right)\)

The first equation in the system of equations from which the moment estimators are obtained is

\({\rm{\bar X = E(X)}}\)

The sample moment of second order is:

\(\frac{{\rm{1}}}{{\rm{n}}}\left( {{\rm{X}}_{\rm{1}}^{\rm{2}}{\rm{ + X}}_{\rm{2}}^{\rm{2}}{\rm{ + \ldots + X}}_{\rm{n}}^{\rm{2}}} \right)\)

And the population moment of second order is:

\(E\left(X^2\right) = V(X) + (E(X))^2 = \beta^2\left\{\Gamma\left(1 + \frac{2}{\alpha}\right) - \left[\Gamma\left(1 + \frac{1}{\alpha}\right)\right]^2\right\} + \left[\beta\,\Gamma\left(1 + \frac{1}{\alpha}\right)\right]^2 = \beta^2\,\Gamma\left(1 + \frac{2}{\alpha}\right)\)

The second equation in the system from which the moment estimators are derived is

\(\frac{1}{n}\sum\limits_{i=1}^{n} X_i^2 = E\left(X^2\right).\)

As a result, the system of equations that must be solved for\({\rm{\hat \alpha }}\) and \({\rm{\hat \beta }}\) is

\(\begin{array}{c}\bar X = \hat\beta \cdot \Gamma\left(1 + \frac{1}{\hat\alpha}\right), \\ \frac{1}{n}\sum\limits_{i=1}^{n} X_i^2 = \hat\beta^2\,\Gamma\left(1 + \frac{2}{\hat\alpha}\right).\end{array}\)

Hence, \(\hat\beta\) can be computed from the first equation as

\(\hat\beta = \frac{\bar X}{\Gamma\left(1 + \frac{1}{\hat\alpha}\right)}.\)

In order to compute \(\hat\beta\), \(\hat\alpha\) must first be determined and the gamma function evaluated (for example, from a table).

Squaring both sides of the first equation gives

\({{\rm{\bar X}}^{\rm{2}}}{\rm{ = }}{{\rm{\hat \beta }}^{\rm{2}}}{{\rm{\Gamma }}^{\rm{2}}}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{{\rm{\hat \alpha }}}}} \right){\rm{.}}\)

or equivalently

\({{\rm{\hat \beta }}^{\rm{2}}}{\rm{ = }}\frac{{{{{\rm{\bar X}}}^{\rm{2}}}}}{{{{\rm{\Gamma }}^{\rm{2}}}\left( {{\rm{1 + }}\frac{{\rm{1}}}{{{\rm{\hat \alpha }}}}} \right)}}{\rm{.}}\)

Now plug this into the second equation:

\(\frac{1}{n}\sum\limits_{i=1}^{n} X_i^2 = \frac{\bar X^2}{\Gamma^2\left(1 + \frac{1}{\hat\alpha}\right)} \cdot \Gamma\left(1 + \frac{2}{\hat\alpha}\right)\)

or equivalently

\(\frac{1}{n}\frac{\sum\limits_{i=1}^{n} X_i^2}{\bar X^2} = \frac{\Gamma\left(1 + \frac{2}{\hat\alpha}\right)}{\Gamma^2\left(1 + \frac{1}{\hat\alpha}\right)}.\)

The only unknown in this last equation is \(\hat\alpha\), so the moment estimator \(\hat\alpha\) is obtained by solving it; the moment estimator \(\hat\beta\) then follows from the expression above.

As the exercise states, this equation is complicated and has no closed-form solution, so it is not solved explicitly here; the goal was to derive the equations that define the moment estimators \(\hat\alpha\) and \(\hat\beta\).
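In practice, \(\hat\alpha\) would be found by solving this equation numerically. The following is a minimal sketch of one way to do that, assuming SciPy is available; the function name mom_weibull and the root-bracketing interval are choices made here for illustration, not part of the exercise.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def mom_weibull(x):
    """Method of moments estimates (alpha_hat, beta_hat) for a Weibull sample."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    ratio = np.mean(x**2) / xbar**2  # left-hand side of the last equation above

    # alpha_hat is the root of g: Gamma(1 + 2/a) / Gamma(1 + 1/a)^2 equals the ratio.
    def g(a):
        return gamma(1 + 2 / a) / gamma(1 + 1 / a) ** 2 - ratio

    alpha_hat = brentq(g, 0.2, 200.0)  # generous bracket; widen if g has no sign change
    beta_hat = xbar / gamma(1 + 1 / alpha_hat)
    return alpha_hat, beta_hat
```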

03

Explanation

b)

Consider the given information,

\(\frac{1}{n}\frac{\sum\limits_{i=1}^{n} X_i^2}{\bar X^2} = \frac{\Gamma\left(1 + \frac{2}{\hat\alpha}\right)}{\Gamma^2\left(1 + \frac{1}{\hat\alpha}\right)}\)

For the given \(n = 20\), \(\bar x = 28.0\), and \(\sum x_i^2 = 16{,}500\), the estimate \(\hat\alpha\) needs to be found.

The following is true:

\(\frac{1}{20} \times \frac{16{,}500}{28^2} \approx 1.05\)

therefore,

\(\frac{\Gamma\left(1 + \frac{2}{\hat\alpha}\right)}{\Gamma^2\left(1 + \frac{1}{\hat\alpha}\right)} = 1.05.\)

From the hint

\(\frac{{{{{\rm{(\Gamma (1}}{\rm{.2))}}}^{\rm{2}}}}}{{{\rm{\Gamma (1}}{\rm{.4)}}}}{\rm{ = 0}}{\rm{.95}}\)

or equivalently

\(\frac{\Gamma(1 + 0.4)}{\Gamma^2(1 + 0.2)} = \frac{1}{0.95} \approx 1.05\)

which means that

\(\frac{\Gamma\left(1 + \frac{2}{\hat\alpha}\right)}{\Gamma^2\left(1 + \frac{1}{\hat\alpha}\right)} = 1.05 = \frac{\Gamma(1 + 0.4)}{\Gamma^2(1 + 0.2)}\)

Therefore, because both gamma-function arguments must match simultaneously (setting \(2/\hat\alpha = 0.4\) also gives \(1/\hat\alpha = 0.2\)), the following must hold

\(\frac{{\rm{2}}}{{{\rm{\hat \alpha }}}}{\rm{ = 0}}{\rm{.4}}\)

hence,

\({\rm{\hat \alpha = 5}}{\rm{.}}\)

From the estimator

\(\hat\beta = \frac{\bar X}{\Gamma\left(1 + \frac{1}{\hat\alpha}\right)}\)

The estimate is computed as \(\hat\beta = \frac{28}{\Gamma(1.2)} \approx 30.5.\)
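As a quick numerical check of both estimates (a minimal sketch assuming SciPy's gamma function; the variable names are chosen here for illustration):

```python
from scipy.special import gamma

# Summary statistics given in part (b).
n, xbar, sum_x2 = 20, 28.0, 16_500

# Left-hand side of the equation for alpha_hat: (1/n) * sum(x_i^2) / xbar^2.
ratio = sum_x2 / (n * xbar**2)
print(round(ratio, 2))  # 1.05

alpha_hat = 5.0  # from matching the hint: 2 / alpha_hat = 0.4
beta_hat = xbar / gamma(1 + 1 / alpha_hat)
print(round(beta_hat, 2))  # 30.5, i.e. 28 / Gamma(1.2)
```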



