
Let \(X\) denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of \(X\) is

\(f(x;\theta ) = \begin{cases} (\theta + 1)x^{\theta} & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}\)

where \(-1 < \theta\). A random sample of ten students yields data \(x_1 = .92\), \(x_2 = .79\), \(x_3 = .90\), \(x_4 = .65\), \(x_5 = .86\), \(x_6 = .47\), \(x_7 = .73\), \(x_8 = .97\), \(x_9 = .94\), \(x_{10} = .77\).

a. Use the method of moments to obtain an estimator of \(\theta\), and then compute the estimate for this data.

b. Obtain the maximum likelihood estimator of \(\theta\), and then compute the estimate for the given data.

Short Answer


a) The estimator is \(\hat \theta = \frac{1}{1 - \bar X} - 2\); the estimate is \(\hat \theta = 3\).

b) The estimator is \(\hat \theta = -\frac{n}{\sum_{i=1}^{n} \ln X_i} - 1\); the estimate is \(\hat \theta = 3.12\).

Step by step solution

01

Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. The rule (the estimator), the quantity of interest (the estimand), and the result of applying the rule to the data (the estimate) are all distinct.

02

Explanation

(a)

Let random variables \(X_1, X_2, \ldots, X_n\) have the same distribution with pmf or pdf \(f\left(x; \theta_1, \theta_2, \ldots, \theta_m\right)\), \(m \in \mathbb{N}\), where the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown. The moment estimators

\(\widehat{\theta_1}, \widehat{\theta_2}, \ldots, \widehat{\theta_m}\) can be obtained by equating sample moments to the corresponding population moments and solving for the unknown parameters \(\theta_i\), \(i = 1, 2, \ldots, m\).

There is only one unknown parameter \(\theta\); therefore, solving the equation

\(\bar X = E(X)\) for \(\theta\) yields the moment estimator \(\hat \theta\). Recall that \(\bar X\) is the first-order sample moment and \(E(X)\) is the first-order population moment.

The following is true for the expected value

\(\begin{aligned} E(X) &= \int_0^1 x(\theta + 1)x^{\theta}\,dx \\ &= (\theta + 1) \times \left. \frac{x^{\theta + 2}}{\theta + 2} \right|_0^1 \\ &= \frac{\theta + 1}{\theta + 2}. \end{aligned}\)
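As a quick numerical sanity check (an illustrative addition, not part of the textbook solution), the closed form \(\frac{\theta + 1}{\theta + 2}\) can be compared with direct numerical integration; the test value \(\theta = 3\) is arbitrary, and SciPy is assumed to be available.

```python
# Sanity check: E(X) computed by numerical integration should match
# the closed form (theta + 1)/(theta + 2).
from scipy.integrate import quad

theta = 3.0  # arbitrary test value, not part of the exercise data
numeric, _ = quad(lambda x: x * (theta + 1) * x**theta, 0.0, 1.0)
closed_form = (theta + 1) / (theta + 2)
print(numeric, closed_form)  # both approximately 0.8
```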

The solution of the equation \(\bar X = E(X)\) is:

\(\begin{aligned} \bar X &= \frac{\hat \theta + 1}{\hat \theta + 2} \\ \bar X &= \frac{\hat \theta + 2}{\hat \theta + 2} - \frac{1}{\hat \theta + 2} \\ \bar X - 1 &= -\frac{1}{\hat \theta + 2} \\ \hat \theta + 2 &= \frac{1}{1 - \bar X} \end{aligned}\)

which yields the moment estimator \(\hat \theta\):

\({\rm{\hat \theta = }}\frac{{\rm{1}}}{{{\rm{1 - \bar X}}}}{\rm{ - 2}}\)

The sample mean \(\bar x\) of observations \(x_1, x_2, \ldots, x_n\) is given by:

\({\rm{\bar x = }}\frac{{{{\rm{x}}_{\rm{1}}}{\rm{ + }}{{\rm{x}}_{\rm{2}}}{\rm{ + \ldots + }}{{\rm{x}}_{\rm{n}}}}}{{\rm{n}}}{\rm{ = }}\frac{{\rm{1}}}{{\rm{n}}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} \)

Using this, the first-order sample moment is

\(\bar x = \frac{1}{10}(0.92 + 0.79 + \ldots + 0.77) = 0.8\)

Therefore, the estimate \({\rm{\hat \theta }}\)is\(\frac{{\rm{1}}}{{{\rm{1 - 0}}{\rm{.8}}}}{\rm{ - 2 = 3}}\).
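A minimal Python sketch (an illustrative addition, not part of the textbook solution) reproduces this computation from the ten observations:

```python
# Method-of-moments estimate for the ten observations in the exercise.
x = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

x_bar = sum(x) / len(x)          # first-order sample moment, approximately 0.8
theta_hat = 1 / (1 - x_bar) - 2  # moment estimate, approximately 3.0
print(x_bar, theta_hat)
```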

03

Explanation

(b)

Let random variables \(X_1, X_2, \ldots, X_n\) have joint pdf or pmf

\(f\left(x_1, x_2, \ldots, x_n; \theta_1, \theta_2, \ldots, \theta_m\right), \quad n, m \in \mathbb{N},\)

where the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown. When \(f\) is regarded as a function of the parameters \(\theta_i\), \(i = 1, 2, \ldots, m\), it is called the

likelihood function

Values \(\hat \theta_i\) that maximize the likelihood function are the maximum likelihood estimates (mle's); equivalently, they are the values \(\hat \theta_i\) for which

\(f\left(x_1, \ldots, x_n; \hat \theta_1, \ldots, \hat \theta_m\right) \ge f\left(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m\right)\)

for every \(\theta_i\), \(i = 1, 2, \ldots, m\). By substituting \(X_i\) for \(x_i\), the

maximum likelihood estimators

are obtained.

The pdf is given in the exercise. The likelihood function (assuming independence) becomes

\(\begin{aligned} f\left(x_1, x_2, \ldots, x_n; \theta\right) &= (\theta + 1)x_1^{\theta} \times (\theta + 1)x_2^{\theta} \times \ldots \times (\theta + 1)x_n^{\theta} \\ &= (\theta + 1)^n \times \left(x_1 \times x_2 \times \ldots \times x_n\right)^{\theta} \end{aligned}\)

To find the maximum, consider the log-likelihood function

\(\begin{aligned} \ln f\left(x_1, x_2, \ldots, x_n; \theta\right) &= \ln\left((\theta + 1)^n \times \left(x_1 \times x_2 \times \ldots \times x_n\right)^{\theta}\right) \\ &= n \ln(\theta + 1) + \theta \sum_{i=1}^{n} \ln x_i. \end{aligned}\)
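Before differentiating, a numerical check (an illustrative addition, not part of the textbook solution) confirms that this log-likelihood peaks where the analysis below says it should:

```python
# Grid search over theta to locate the maximum of the log-likelihood.
import math

x = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

def log_likelihood(theta):
    return len(x) * math.log(theta + 1) + theta * sum(math.log(xi) for xi in x)

# Evaluate on a 0.01-spaced grid over (-1, 10]; the maximum lands near 3.12.
best_theta = max((t * 0.01 for t in range(-99, 1001)), key=log_likelihood)
print(best_theta)  # approximately 3.12
```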

By taking the derivative of the log-likelihood function with respect to \(\theta\) and equating it to \(0\), the maximum likelihood estimator is obtained. The derivative is

\(\begin{aligned} \frac{d}{d\theta} \ln f\left(x_1, x_2, \ldots, x_n; \theta\right) &= \frac{d}{d\theta}\left(n \ln(\theta + 1) + \theta \sum_{i=1}^{n} \ln x_i\right) \\ &= n \times \frac{1}{\theta + 1} + \sum_{i=1}^{n} \ln x_i. \end{aligned}\)

Therefore, the maximum likelihood estimator is obtained by solving the equation

\({\rm{n \times }}\frac{{\rm{1}}}{{{\rm{\hat \theta + 1}}}}{\rm{ + }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ = 0}}\)

for \(\hat \theta\). The solution is

\({\rm{\hat \theta = - }}\frac{{\rm{n}}}{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{X}}_{\rm{i}}}}}{\rm{ - 1}}{\rm{.}}\)

which is the maximum likelihood estimator.

By taking \({\rm{ln}}{{\rm{x}}_{\rm{i}}}\)for every\({\rm{i = 1,2, \ldots ,10}}\), and summing the values, the maximum likelihood estimate is obtained as

\({\rm{\hat \theta = - }}\frac{{{\rm{10}}}}{{{\rm{ - 2}}{\rm{.4295}}}}{\rm{ - 1 = 3}}{\rm{.12}}\)
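The closed-form computation can be reproduced in a few lines of Python (again an illustrative addition, not part of the textbook solution):

```python
# Maximum likelihood estimate from the closed-form solution.
import math

x = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

log_sum = sum(math.log(xi) for xi in x)  # approximately -2.4295
theta_mle = -len(x) / log_sum - 1        # -10 / (-2.4295) - 1
print(log_sum, theta_mle)                # approximately -2.4295 and 3.12
```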


Most popular questions from this chapter

Consider randomly selecting \(n\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \(Y_1, \ldots, Y_n\). The article “A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains” (Reliability Engr. and System Safety, 2013: 270-279) proposes a linear corrosion model: \(Y_i = t_i R\), where \(t_i\) is the age of the pipe and \(R\), the corrosion rate, is exponentially distributed with parameter \(\lambda\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \(c > 0\) and \(X\) has an exponential distribution, so does \(cX\).)

Each of \(n\) specimens is to be weighed twice on the same scale. Let \(X_i\) and \(Y_i\) denote the two observed weights for the \(i\)th specimen. Suppose \(X_i\) and \(Y_i\) are independent of one another, each normally distributed with mean value \(\mu_i\) (the true weight of specimen \(i\)) and variance \(\sigma^2\).

a. Show that the maximum likelihood estimator of \(\sigma^2\) is \(\hat \sigma^2 = \Sigma\left(X_i - Y_i\right)^2 / (4n)\). (Hint: If \(\bar z = \left(z_1 + z_2\right)/2\), then \(\Sigma\left(z_i - \bar z\right)^2 = \left(z_1 - z_2\right)^2 / 2\).)

b. Is the mle \(\hat \sigma^2\) an unbiased estimator of \(\sigma^2\)? Find an unbiased estimator of \(\sigma^2\). (Hint: For any rv \(Z\), \(E\left(Z^2\right) = V(Z) + (E(Z))^2\). Apply this to \(Z = X_i - Y_i\).)

Consider a random sample \(X_1, \ldots, X_n\) from the pdf

\(f(x;\theta ) = .5(1 + \theta x) \quad -1 \le x \le 1\)

where \(-1 \le \theta \le 1\) (this distribution arises in particle physics). Show that \(\hat \theta = 3\bar X\) is an unbiased estimator of \(\theta\). (Hint: First determine \(\mu = E(X) = E(\bar X)\).)

The mean squared error of an estimator \(\hat \theta\) is \(MSE(\hat \theta ) = E(\hat \theta - \theta)^2\). If \(\hat \theta\) is unbiased, then \(MSE(\hat \theta ) = V(\hat \theta )\), but in general \(MSE(\hat \theta ) = V(\hat \theta ) + (\text{bias})^2\). Consider the estimator \(\hat \sigma^2 = KS^2\), where \(S^2 = \) sample variance. What value of \(K\) minimizes the mean squared error of this estimator when the population distribution is normal? (Hint: It can be shown that \(E\left(\left(S^2\right)^2\right) = (n + 1)\sigma^4 / (n - 1)\). In general, it is difficult to find \(\hat \theta\) to minimize \(MSE(\hat \theta )\), which is why we look only at unbiased estimators and minimize \(V(\hat \theta )\).)

The accompanying data on flexural strength (MPa) for concrete beams of a certain type was introduced in Example 1.2.

\(\begin{array}{*{20}{r}}{{\rm{5}}{\rm{.9}}}&{{\rm{7}}{\rm{.2}}}&{{\rm{7}}{\rm{.3}}}&{{\rm{6}}{\rm{.3}}}&{{\rm{8}}{\rm{.1}}}&{{\rm{6}}{\rm{.8}}}&{{\rm{7}}{\rm{.0}}}\\{{\rm{7}}{\rm{.6}}}&{{\rm{6}}{\rm{.8}}}&{{\rm{6}}{\rm{.5}}}&{{\rm{7}}{\rm{.0}}}&{{\rm{6}}{\rm{.3}}}&{{\rm{7}}{\rm{.9}}}&{{\rm{9}}{\rm{.0}}}\\{{\rm{3}}{\rm{.2}}}&{{\rm{8}}{\rm{.7}}}&{{\rm{7}}{\rm{.8}}}&{{\rm{9}}{\rm{.7}}}&{{\rm{7}}{\rm{.4}}}&{{\rm{7}}{\rm{.7}}}&{{\rm{9}}{\rm{.7}}}\\{{\rm{7}}{\rm{.3}}}&{{\rm{7}}{\rm{.7}}}&{{\rm{11}}{\rm{.6}}}&{{\rm{11}}{\rm{.3}}}&{{\rm{11}}{\rm{.8}}}&{{\rm{10}}{\rm{.7}}}&{}\end{array}\)

a. Calculate a point estimate of the mean value of strength for the conceptual population of all beams manufactured in this fashion, and state which estimator you used. (Hint: \(\Sigma x_i = 219.8\).)

b. Calculate a point estimate of the strength value that separates the weakest 50% of all such beams from the strongest 50%, and state which estimator you used.

c. Calculate and interpret a point estimate of the population standard deviation \(\sigma\). Which estimator did you use? (Hint: \(\Sigma x_i^2 = 1860.94\).)

d. Calculate a point estimate of the proportion of all such beams whose flexural strength exceeds \(10\) MPa. (Hint: Think of an observation as a "success" if it exceeds 10.)

e. Calculate a point estimate of the population coefficient of variation \(\sigma / \mu\), and state which estimator you used.
