
Let \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) be a random sample from a gamma distribution with parameters \({\rm{\alpha }}\) and \({\rm{\beta }}\). a. Derive the equations whose solutions yield the maximum likelihood estimators of \({\rm{\alpha }}\) and \({\rm{\beta }}\). Do you think they can be solved explicitly? b. Show that the mle of \({\rm{\mu = \alpha \beta }}\) is \(\widehat {\rm{\mu }}{\rm{ = }}\overline {\rm{X}} \).

Short Answer


(a) The equations are:

\(\begin{array}{c}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - nln\hat \beta - n}}\frac{{\rm{1}}}{{{\rm{\Gamma (\hat \alpha )}}}}{\rm{ \times }}\frac{{\rm{d}}}{{{\rm{d\hat \alpha }}}}{\rm{\Gamma (\hat \alpha ) = 0}}\\{\rm{ - n\hat \alpha }}\frac{{\rm{1}}}{{{\rm{\hat \beta }}}}{\rm{ + }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{{{{\rm{\hat \beta }}}^{\rm{2}}}}}{\rm{ = 0}}\end{array}\)

(b) Shown: \(\widehat {\rm{\mu }}{\rm{ = }}\overline {\rm{X}} \).

Step by step solution

01

Define equations

An equation is a mathematical statement asserting that two algebraic expressions are equal.

02

Explanation

(a) The probability density function (pdf) of random variable \({\rm{X}}\) with gamma distribution is,

\({\rm{f(x;\alpha ,\beta ) = }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{\rm{\alpha }}}{\rm{\Gamma (\alpha )}}}}{{\rm{x}}^{{\rm{\alpha - 1}}}}{{\rm{e}}^{{\rm{ - x/\beta }}}}{\rm{,x}} \ge {\rm{0}}\)

where the parameters \({\rm{\alpha }}\) and \({\rm{\beta }}\) satisfy \({\rm{\alpha > 0}}\) and \({\rm{\beta > 0}}\), and the pdf is zero for \({\rm{x < 0}}\).

Let the joint pdf or pmf of the random variables \({{\rm{X}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) be

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right){\rm{, n,m}} \in {\rm{N}}\)

where \({{\rm{\theta }}_{\rm{i}}}{\rm{,i = 1,2, \ldots ,m}}\) are unknown parameters. When the observed values \({{\rm{x}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}\) are held fixed and \({\rm{f}}\) is viewed as a function of the parameters \({{\rm{\theta }}_{\rm{i}}}\), it is called the likelihood function. The maximum likelihood estimates (mle's) are the values \(\widehat {{{\rm{\theta }}_{\rm{i}}}}\) that maximize the likelihood function, that is,

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{{\rm{\hat \theta }}}_{\rm{1}}}{\rm{,}}{{{\rm{\hat \theta }}}_{\rm{2}}}{\rm{, \ldots ,}}{{{\rm{\hat \theta }}}_{\rm{m}}}} \right) \ge {\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right)\)

for every \({{\rm{\theta }}_{\rm{i}}}{\rm{,i = 1,2, \ldots ,m}}\). The maximum likelihood estimators are then obtained by replacing each observed value \({{\rm{x}}_{\rm{i}}}\) with the random variable \({{\rm{X}}_{\rm{i}}}\).

Because of the independence, the likelihood function becomes,

\(\begin{array}{c}{\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;\alpha ,\beta }}} \right){\rm{ = }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{\rm{\alpha }}}{\rm{\Gamma (\alpha )}}}}{\rm{x}}_{\rm{1}}^{{\rm{\alpha - 1}}}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}_{\rm{1}}}{\rm{/\beta }}}}{\rm{ \times }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{\rm{\alpha }}}{\rm{\Gamma (\alpha )}}}}{\rm{x}}_{\rm{2}}^{{\rm{\alpha - 1}}}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}_{\rm{2}}}{\rm{/\beta }}}}{\rm{ \times \ldots \times }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{\rm{\alpha }}}{\rm{\Gamma (\alpha )}}}}{\rm{x}}_{\rm{n}}^{{\rm{\alpha - 1}}}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}_{\rm{n}}}{\rm{/\beta }}}}\\{\rm{ = }}{\left( {{{\rm{x}}_{\rm{1}}}{{\rm{x}}_{\rm{2}}}{\rm{ \times \ldots \times }}{{\rm{x}}_{\rm{n}}}} \right)^{{\rm{\alpha - 1}}}}{\rm{ \times }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{{\rm{n\alpha }}}}{{\rm{\Gamma }}^{\rm{n}}}{\rm{(\alpha )}}}}{\rm{ \times exp}}\left\{ {{\rm{ - }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} {\rm{/\beta }}} \right\}\end{array}\)

03

Evaluating the maximum likelihood estimators

Look at the log likelihood function to determine the maximum.

\(\begin{array}{c}{\rm{ln f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;\alpha ,\beta }}} \right){\rm{ = ln}}\left( {{{\left( {{{\rm{x}}_{\rm{1}}}{{\rm{x}}_{\rm{2}}}{\rm{ \times \ldots \times }}{{\rm{x}}_{\rm{n}}}} \right)}^{{\rm{\alpha - 1}}}}{\rm{ \times }}\frac{{\rm{1}}}{{{{\rm{\beta }}^{{\rm{n\alpha }}}}{{\rm{\Gamma }}^{\rm{n}}}{\rm{(\alpha )}}}}{\rm{ \times exp}}\left\{ {{\rm{ - }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} {\rm{/\beta }}} \right\}} \right)\\{\rm{ = (\alpha - 1)}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - n\alpha ln\beta - nln\Gamma (\alpha ) - }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{\rm{\beta }}}\end{array}\)
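The log likelihood above is straightforward to evaluate numerically. A minimal sketch in Python, using `math.lgamma` for \({\rm{ln\Gamma (\alpha )}}\) (the sample values below are hypothetical, chosen only for illustration):

```python
import math

def gamma_loglik(xs, alpha, beta):
    """Gamma(alpha, beta) log-likelihood, term by term as in the formula above."""
    n = len(xs)
    return ((alpha - 1) * sum(math.log(x) for x in xs)   # (alpha-1) * sum(ln x_i)
            - n * alpha * math.log(beta)                  # - n*alpha*ln(beta)
            - n * math.lgamma(alpha)                      # - n*ln(Gamma(alpha))
            - sum(xs) / beta)                             # - sum(x_i)/beta

xs = [2.1, 0.7, 3.3, 1.9, 0.5]   # hypothetical sample
print(gamma_loglik(xs, alpha=2.0, beta=1.0))
```

The function can then be handed to any generic optimizer to locate the maximum directly, as an alternative to solving the score equations.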

The maximum likelihood estimators are obtained by taking the derivatives of the log likelihood function with respect to \({\rm{\alpha }}\) and \({\rm{\beta }}\) and equating them to \({\rm{0}}\).

The derivative with respect to \({\rm{\alpha }}\) is,

\(\begin{array}{c}\frac{{\rm{d}}}{{{\rm{d\alpha }}}}{\rm{ln f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;\alpha ,\beta }}} \right){\rm{ = }}\frac{{\rm{d}}}{{{\rm{d\alpha }}}}\left( {{\rm{(\alpha - 1)}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - n\alpha ln\beta - nln\Gamma (\alpha ) - }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{\rm{\beta }}}} \right)\\{\rm{ = }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - nln\beta - n}}\frac{{\rm{1}}}{{{\rm{\Gamma (\alpha )}}}}{\rm{ \times }}\frac{{\rm{d}}}{{{\rm{d\alpha }}}}{\rm{\Gamma (\alpha )}}\end{array}\)

In the case of\({\rm{\beta }}\), the derivative is,

\(\begin{array}{c}\frac{{\rm{d}}}{{{\rm{d\beta }}}}{\rm{ln f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;\alpha ,\beta }}} \right){\rm{ = }}\frac{{\rm{d}}}{{{\rm{d\beta }}}}\left( {{\rm{(\alpha - 1)}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - n\alpha ln\beta - nln\Gamma (\alpha ) - }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{\rm{\beta }}}} \right)\\{\rm{ = - n\alpha }}\frac{{\rm{1}}}{{\rm{\beta }}}{\rm{ + }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{{{\rm{\beta }}^{\rm{2}}}}}\end{array}\)

Setting these derivatives equal to \({\rm{0}}\) yields the equations whose solutions are the maximum likelihood estimators,

\(\begin{array}{c}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}} {{\rm{x}}_{\rm{i}}}{\rm{ - nln\hat \beta - n}}\frac{{\rm{1}}}{{{\rm{\Gamma (\hat \alpha )}}}}{\rm{ \times }}\frac{{\rm{d}}}{{{\rm{d\hat \alpha }}}}{\rm{\Gamma (\hat \alpha ) = 0}}\\{\rm{ - n\hat \alpha }}\frac{{\rm{1}}}{{{\rm{\hat \beta }}}}{\rm{ + }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{{{{\rm{\hat \beta }}}^{\rm{2}}}}}{\rm{ = 0}}\end{array}\)

for \({\rm{\hat \alpha }}\) and \({\rm{\hat \beta }}\).

These equations cannot be solved explicitly: the first involves the digamma function \({\rm{\Gamma '(\hat \alpha )/\Gamma (\hat \alpha )}}\), which has no closed-form inverse, so the system must be solved numerically.
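A numerical solution is straightforward, though: the second equation gives \({\rm{\hat \beta = \bar x/\hat \alpha }}\), and substituting this into the first leaves a single monotone equation in \({\rm{\hat \alpha }}\). A sketch (not library code), approximating the digamma function by a finite difference of `math.lgamma`; the sample values are hypothetical:

```python
import math

def digamma(a, h=1e-6):
    # central difference for d/da ln Gamma(a) = Gamma'(a)/Gamma(a)
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def gamma_mle(xs, lo=1e-3, hi=1e3, iters=100):
    """Bisection for alpha_hat; beta_hat then follows from the second equation."""
    x_bar = sum(xs) / len(xs)
    mean_log = sum(math.log(x) for x in xs) / len(xs)

    # Substituting beta_hat = x_bar/alpha_hat into the first equation and
    # dividing by n leaves one strictly decreasing function of alpha:
    #   g(alpha) = mean(ln x) - ln(x_bar) + ln(alpha) - digamma(alpha)
    def g(a):
        return mean_log - math.log(x_bar) + math.log(a) - digamma(a)

    for _ in range(iters):          # bisection: g(lo) > 0 > g(hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    alpha_hat = 0.5 * (lo + hi)
    return alpha_hat, x_bar / alpha_hat

xs = [2.1, 0.7, 3.3, 1.9, 0.5]      # hypothetical sample
a_hat, b_hat = gamma_mle(xs)
print(a_hat, b_hat)                 # note a_hat * b_hat equals the sample mean
```

Bisection is used here for robustness; Newton's method on the same one-dimensional equation would converge faster.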

04

Explanation

(b) From the second equation in part (a),

\({\rm{ - n\hat \alpha }}\frac{{\rm{1}}}{{{\rm{\hat \beta }}}}{\rm{ + }}\frac{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} }}{{{{{\rm{\hat \beta }}}^{\rm{2}}}}}{\rm{ = 0}}\)

or, alternatively,

\(\frac{{\rm{1}}}{{\rm{n}}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{{\rm{x}}_{\rm{i}}}} {\rm{ = \hat \alpha \times \hat \beta }}\)

By the proposition on the gamma distribution, the expected value of a gamma random variable \({\rm{X}}\) is,

\({\rm{E(X) = \alpha \beta }}\)

By the invariance principle, the maximum likelihood estimator of \({\rm{\mu = \alpha \beta }}\) is therefore,

\({\rm{\hat \mu = \hat \alpha \hat \beta = \bar X}}\)

which was to be shown.
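As a quick check on part (b): the second likelihood equation forces \({\rm{\hat \beta = \bar x/\hat \alpha }}\), so the product \({\rm{\hat \alpha \hat \beta }}\) equals the sample mean no matter what value \({\rm{\hat \alpha }}\) takes. A minimal sketch with a hypothetical sample:

```python
# Second likelihood equation: -n*alpha/beta + sum(x)/beta**2 = 0
# => beta_hat = x_bar / alpha_hat, hence alpha_hat * beta_hat = x_bar
# regardless of the value of alpha_hat.
xs = [2.1, 0.7, 3.3, 1.9, 0.5]            # hypothetical sample
x_bar = sum(xs) / len(xs)
for alpha_hat in (0.5, 1.0, 2.7):         # any positive candidate for alpha_hat
    beta_hat = x_bar / alpha_hat          # solves the second equation
    assert abs(alpha_hat * beta_hat - x_bar) < 1e-12
print(x_bar)                              # the common product: mu_hat = x-bar
```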


Most popular questions from this chapter

An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of\({\rm{n}}\)students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of\({\rm{100}}\)cards, of which\({\rm{50}}\)are of type I and\({\rm{50}}\)are of type II.

Type I: Have you violated the honor code (yes or no)?

Type II: Is the last digit of your telephone number a\({\rm{0 , 1 , or 2}}\)(yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let\({\rm{p}}\)denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let\({\rm{\lambda = P}}\)(yes response). Then\({\rm{\lambda }}\)and\({\rm{p}}\)are related by\({\rm{\lambda = }}{\rm{.5p + (}}{\rm{.5)(}}{\rm{.3)}}\).

a. Let\({\rm{Y}}\)denote the number of yes responses, so\({\rm{Y\sim}}\)Bin\({\rm{(n,\lambda )}}\). Thus Y / n is an unbiased estimator of\({\rm{\lambda }}\). Derive an estimator for\({\rm{p}}\)based on\({\rm{Y}}\). If\({\rm{n = 80}}\)and\({\rm{y = 20}}\), what is your estimate? (Hint: Solve\({\rm{\lambda = }}{\rm{.5p + }}{\rm{.15}}\)for\({\rm{p}}\)and then substitute\({\rm{Y/n}}\)for\({\rm{\lambda }}\).)

b. Use the fact that\({\rm{E(Y/n) = \lambda }}\)to show that your estimator\({\rm{\hat p}}\)is unbiased.

c. If there were\({\rm{70}}\)type I and\({\rm{30}}\)type II cards, what would be your estimator for\({\rm{p}}\)?
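The hint in part (a) can be carried out directly. A sketch of that computation, with the values \({\rm{n = 80}}\) and \({\rm{y = 20}}\) taken from the problem statement:

```python
# lambda = 0.5*p + 0.15  =>  p = (lambda - 0.15) / 0.5
# Substitute the unbiased estimator Y/n for lambda:
n, y = 80, 20
lam_hat = y / n                      # estimate of lambda = P(yes), here 0.25
p_hat = (lam_hat - 0.15) / 0.5       # estimate of p; about 0.2 for these values
print(p_hat)
```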

At time \({\rm{t = 0}}\), there is one individual alive in a certain population. A pure birth process then unfolds as follows. The time until the first birth is exponentially distributed with parameter \({\rm{\lambda }}\). After the first birth, there are two individuals alive. The time until the first gives birth again is exponential with parameter \({\rm{\lambda }}\), and similarly for the second individual. Therefore, the time until the next birth is the minimum of two exponential (\({\rm{\lambda }}\)) variables, which is exponential with parameter \({\rm{2\lambda }}\). Similarly, once the second birth has occurred, there are three individuals alive, so the time until the next birth is an exponential \({\rm{rv}}\) with parameter \({\rm{3\lambda }}\), and so on (the memoryless property of the exponential distribution is being used here). Suppose the process is observed until the sixth birth has occurred and the successive birth times are \({\rm{25}}{\rm{.2,41}}{\rm{.7,51}}{\rm{.2,55}}{\rm{.5,59}}{\rm{.5,61}}{\rm{.8}}\) (from which you should calculate the times between successive births). Derive the mle of \({\rm{\lambda }}\). (Hint: The likelihood is a product of exponential terms.)
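One way to organize the computation the hint suggests, sketched below; the closed form for the mle in the comments is derived from the product of exponential densities, not quoted from the text:

```python
# Successive birth times (from the problem); inter-birth times are the gaps.
births = [25.2, 41.7, 51.2, 55.5, 59.5, 61.8]
gaps = [b - a for a, b in zip([0.0] + births, births)]
# With k individuals alive, the waiting time is exponential with parameter
# k*lambda, density (k*lambda)*exp(-k*lambda*t).  The log-likelihood is then
# sum(ln k) + 6*ln(lambda) - lambda*sum(k * gap_k), maximized at
# lambda_hat = 6 / sum(k * gap_k).
denom = sum((k + 1) * g for k, g in enumerate(gaps))
lambda_hat = len(births) / denom
print(lambda_hat)                    # roughly 0.0436 for these data
```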

Consider randomly selecting \({\rm{n}}\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \({{\rm{Y}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{Y}}_{\rm{n}}}\). The article "A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains" (Reliability Engr. and System Safety, \({\rm{2013:270 - 279}}\)) proposes a linear corrosion model: \({{\rm{Y}}_{\rm{i}}}{\rm{ = }}{{\rm{t}}_{\rm{i}}}{\rm{R}}\), where \({{\rm{t}}_{\rm{i}}}\) is the age of the pipe and \({\rm{R}}\), the corrosion rate, is exponentially distributed with parameter \({\rm{\lambda }}\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \({\rm{c > 0}}\) and \({\rm{X}}\) has an exponential distribution, so does \({\rm{cX}}\).)
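One route to the estimator, hedged as a sketch rather than the article's derivation: with \({\rm{\lambda }}\) as a rate parameter, \({\rm{c > 0}}\) and \({\rm{X}}\) exponential with parameter \({\rm{\lambda }}\) make \({\rm{cX}}\) exponential with parameter \({\rm{\lambda /c}}\), so each \({{\rm{Y}}_{\rm{i}}}{\rm{ = }}{{\rm{t}}_{\rm{i}}}{\rm{R}}\) is exponential with parameter \({\rm{\lambda /}}{{\rm{t}}_{\rm{i}}}\). Then

\({\rm{ln f}}\left( {{{\rm{y}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{y}}_{\rm{n}}}{\rm{;\lambda }}} \right){\rm{ = nln\lambda - }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {{\rm{ln}}{{\rm{t}}_{\rm{i}}}} {\rm{ - \lambda }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\frac{{{{\rm{y}}_{\rm{i}}}}}{{{{\rm{t}}_{\rm{i}}}}}} \)

and setting the derivative \({\rm{n/\lambda - }}\sum {{{\rm{y}}_{\rm{i}}}{\rm{/}}{{\rm{t}}_{\rm{i}}}} \) equal to zero gives \({\rm{\hat \lambda = n/}}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\left( {{{\rm{y}}_{\rm{i}}}{\rm{/}}{{\rm{t}}_{\rm{i}}}} \right)} \).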

The accompanying data on flexural strength (MPa) for concrete beams of a certain type was introduced in Example 1.2.

\(\begin{array}{*{20}{r}}{{\rm{5}}{\rm{.9}}}&{{\rm{7}}{\rm{.2}}}&{{\rm{7}}{\rm{.3}}}&{{\rm{6}}{\rm{.3}}}&{{\rm{8}}{\rm{.1}}}&{{\rm{6}}{\rm{.8}}}&{{\rm{7}}{\rm{.0}}}\\{{\rm{7}}{\rm{.6}}}&{{\rm{6}}{\rm{.8}}}&{{\rm{6}}{\rm{.5}}}&{{\rm{7}}{\rm{.0}}}&{{\rm{6}}{\rm{.3}}}&{{\rm{7}}{\rm{.9}}}&{{\rm{9}}{\rm{.0}}}\\{{\rm{8}}{\rm{.2}}}&{{\rm{8}}{\rm{.7}}}&{{\rm{7}}{\rm{.8}}}&{{\rm{9}}{\rm{.7}}}&{{\rm{7}}{\rm{.4}}}&{{\rm{7}}{\rm{.7}}}&{{\rm{9}}{\rm{.7}}}\\{{\rm{7}}{\rm{.8}}}&{{\rm{7}}{\rm{.7}}}&{{\rm{11}}{\rm{.6}}}&{{\rm{11}}{\rm{.3}}}&{{\rm{11}}{\rm{.8}}}&{{\rm{10}}{\rm{.7}}}&{}\end{array}\)

a. Calculate a point estimate of the mean value of strength for the conceptual population of all beams manufactured in this fashion, and state which estimator you used. (Hint: \({\rm{\Sigma }}{{\rm{x}}_{\rm{i}}}{\rm{ = 219}}{\rm{.8}}\).)

b. Calculate a point estimate of the strength value that separates the weakest 50% of all such beams from the strongest 50%, and state which estimator you used.

c. Calculate and interpret a point estimate of the population standard deviation \({\rm{\sigma }}\). Which estimator did you use? (Hint: \({\rm{\Sigma x}}_{\rm{i}}^{\rm{2}}{\rm{ = 1860}}{\rm{.94}}\).)

d. Calculate a point estimate of the proportion of all such beams whose flexural strength exceeds\({\rm{10MPa}}\). (Hint: Think of an observation as a "success" if it exceeds 10.)

e. Calculate a point estimate of the population coefficient of variation\({\rm{\sigma /\mu }}\), and state which estimator you used.
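A sketch of the five computations, using data values consistent with the stated sums \({\rm{\Sigma }}{{\rm{x}}_{\rm{i}}}{\rm{ = 219}}{\rm{.8}}\) and \({\rm{\Sigma x}}_{\rm{i}}^{\rm{2}}{\rm{ = 1860}}{\rm{.94}}\):

```python
import statistics

# Flexural strength data (MPa), n = 27
xs = [5.9, 7.2, 7.3, 6.3, 8.1, 6.8, 7.0,
      7.6, 6.8, 6.5, 7.0, 6.3, 7.9, 9.0,
      8.2, 8.7, 7.8, 9.7, 7.4, 7.7, 9.7,
      7.8, 7.7, 11.6, 11.3, 11.8, 10.7]
n = len(xs)
mean = sum(xs) / n                       # (a) sample mean
median = statistics.median(xs)           # (b) sample median
s = statistics.stdev(xs)                 # (c) sample std dev (n - 1 divisor)
prop = sum(x > 10 for x in xs) / n       # (d) sample proportion exceeding 10 MPa
cv = s / mean                            # (e) sample coefficient of variation
print(mean, median, s, prop, cv)
```

Each estimate pairs with the natural estimator: sample mean for \({\rm{\mu }}\), sample median for the population median, sample standard deviation for \({\rm{\sigma }}\), sample proportion for the population proportion, and \({\rm{s/\bar x}}\) for \({\rm{\sigma /\mu }}\).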

Suppose the true average growth \({\rm{\mu }}\) of one type of plant during a 1-year period is identical to that of a second type, but the variance of growth for the first type is \({{\rm{\sigma }}^{\rm{2}}}\), whereas for the second type the variance is \({\rm{4}}{{\rm{\sigma }}^{\rm{2}}}\). Let \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{m}}}\) be \({\rm{m}}\) independent growth observations on the first type (so \({\rm{E}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ = \mu ,V}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{ = }}{{\rm{\sigma }}^{\rm{2}}}\)), and let \({{\rm{Y}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{Y}}_{\rm{n}}}\) be \({\rm{n}}\) independent growth observations on the second type \(\left( {{\rm{E}}\left( {{{\rm{Y}}_{\rm{j}}}} \right){\rm{ = \mu ,V}}\left( {{{\rm{Y}}_{\rm{j}}}} \right){\rm{ = 4}}{{\rm{\sigma }}^{\rm{2}}}} \right)\).

a. Show that the estimator\({\rm{\hat \mu = \delta \bar X + (1 - \delta )\bar Y}}\)is unbiased for\({\rm{\mu }}\)(for\({\rm{0 < \delta < 1}}\), the estimator is a weighted average of the two individual sample means).

b. For fixed\({\rm{m}}\)and\({\rm{n}}\), compute\({\rm{V(\hat \mu ),}}\)and then find the value of\({\rm{\delta }}\)that minimizes\({\rm{V(\hat \mu )}}\). (Hint: Differentiate\({\rm{V(\hat \mu )}}\)with respect to\({\rm{\delta }}{\rm{.)}}\)
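Part (b) reduces to a one-line calculus step, sketched here under the stated variances: by independence of the two sample means,

\({\rm{V(\hat \mu ) = }}{{\rm{\delta }}^{\rm{2}}}\frac{{{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{m}}}{\rm{ + (1 - \delta }}{{\rm{)}}^{\rm{2}}}\frac{{{\rm{4}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}\)

and setting the derivative \({\rm{2\delta }}\frac{{{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{m}}}{\rm{ - 2(1 - \delta )}}\frac{{{\rm{4}}{{\rm{\sigma }}^{\rm{2}}}}}{{\rm{n}}}\) equal to zero gives \({\rm{\delta = }}\frac{{{\rm{4m}}}}{{{\rm{4m + n}}}}\).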
