
At time \(t = 0\), there is one individual alive in a certain population. A pure birth process then unfolds as follows. The time until the first birth is exponentially distributed with parameter \(\lambda\). After the first birth, there are two individuals alive. The time until the first gives birth again is exponential with parameter \(\lambda\), and similarly for the second individual. Therefore, the time until the next birth is the minimum of two exponential (\(\lambda\)) variables, which is exponential with parameter \(2\lambda\). Similarly, once the second birth has occurred, there are three individuals alive, so the time until the next birth is an exponential rv with parameter \(3\lambda\), and so on (the memoryless property of the exponential distribution is being used here). Suppose the process is observed until the sixth birth has occurred and the successive birth times are \(25.2,\ 41.7,\ 51.2,\ 55.5,\ 59.5,\ 61.8\) (from which you should calculate the times between successive births). Derive the mle of \(\lambda\). (Hint: The likelihood is a product of exponential terms.)

Short Answer


The value is \({\rm{\hat \lambda = 0}}{\rm{.0436}}\).

Step by step solution

01

Define exponential function

A function that increases or decays at a rate proportional to its present value is called an exponential function; the exponential distribution, defined in the next step, has a density of exactly this decaying form.

02

Explanation

A random variable \(X\) with pdf

\({{\rm{f}}_{\rm{X}}}{\rm{(x) = }}\left\{ {\begin{array}{*{20}{l}}{{\rm{\lambda }}{{\rm{e}}^{{\rm{ - \lambda x}}}}}&{{\rm{,x}} \ge {\rm{0}}}\\{\rm{0}}&{{\rm{,x < 0}}}\end{array}} \right.\)

where \(\lambda > 0\) is the parameter, is said to have an exponential distribution.

Denote by

\(X_1\) = the time until the first birth,

\(X_2\) = the time between the first and second births,

...

\(X_n\) = the time between the \((n-1)\)th and \(n\)th births.

The \(X_i\), \(i = 1, 2, \ldots, n\), are independent and exponentially distributed random variables with the appropriate parameters: \(X_1\) with parameter \(\lambda\), \(X_2\) with parameter \(2\lambda\), ..., \(X_n\) with parameter \(n\lambda\).
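This structure is easy to confirm by simulation: with \(k\) individuals alive, the time to the next birth is the minimum of \(k\) independent exponential (\(\lambda\)) lifetimes, which should behave like an exponential (\(k\lambda\)) variable. Below is a minimal Python sketch; the rate \(0.05\), \(k = 3\), and the replication count are arbitrary illustrative choices, not values from the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, k, reps = 0.05, 3, 100_000  # illustrative values only

# With k individuals alive, the time to the next birth is the
# minimum of k independent Exp(lam) lifetimes.
min_of_k = rng.exponential(scale=1 / lam, size=(reps, k)).min(axis=1)

# Theory: this minimum is Exp(k * lam), so its mean should be 1 / (k * lam).
print(min_of_k.mean())  # close to 6.67
print(1 / (k * lam))    # 6.666...
```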

Let the joint pdf (or pmf) of the random variables \(X_1, X_2, \ldots, X_n\) be

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right){\rm{, n,m}} \in {\rm{N}}\)

where \(\theta_i\), \(i = 1, 2, \ldots, m\), are unknown parameters. The likelihood function is this joint pdf regarded as a function of the parameters \(\theta_i\), with the observed data held fixed. The maximum likelihood estimates (mle's) are the values \(\hat\theta_i\) for which the likelihood function is maximized:

\({\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{{\rm{\hat \theta }}}_{\rm{1}}}{\rm{,}}{{{\rm{\hat \theta }}}_{\rm{2}}}{\rm{, \ldots ,}}{{{\rm{\hat \theta }}}_{\rm{m}}}} \right) \ge {\rm{f}}\left( {{{\rm{x}}_{\rm{1}}}{\rm{,}}{{\rm{x}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{x}}_{\rm{n}}}{\rm{;}}{{\rm{\theta }}_{\rm{1}}}{\rm{,}}{{\rm{\theta }}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{\theta }}_{\rm{m}}}} \right)\)

for every \(\theta_i\), \(i = 1, 2, \ldots, m\). The maximum likelihood estimators are obtained by replacing each \(x_i\) with \(X_i\).


03

Evaluating the maximum likelihood estimate

Because of the independence, the likelihood function becomes,

\(\begin{array}{c}f\left( x_1, x_2, \ldots, x_n; \lambda \right) = \lambda e^{-\lambda x_1} \times (2\lambda) e^{-2\lambda x_2} \times \cdots \times (n\lambda) e^{-n\lambda x_n}\\ = n \times (n-1) \times \cdots \times 1 \times \lambda^n \times \exp\left\{ -\lambda \sum\limits_{i=1}^{n} i x_i \right\}\\ = n! \times \lambda^n \times \exp\left\{ -\lambda \sum\limits_{i=1}^{n} i x_i \right\}\end{array}\)

Look at the log likelihood function to determine the maximum.

\(\begin{array}{c}\ln f\left( x_1, x_2, \ldots, x_n; \lambda \right) = \ln\left( n! \times \lambda^n \times \exp\left\{ -\lambda \sum\limits_{i=1}^{n} i x_i \right\} \right)\\ = \ln n! + n \ln \lambda - \lambda \sum\limits_{i=1}^{n} i x_i\end{array}\)
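As a numerical sanity check, this log-likelihood can be maximized directly. Here is a short Python sketch using scipy; the inter-birth times \(x_i\) are the ones computed in Step 04 below, and the constant \(\ln n!\) is dropped because it does not affect the maximizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Inter-birth times x_1, ..., x_6 (computed in Step 04 from the birth times).
x = np.array([25.2, 16.5, 9.5, 4.3, 4.0, 2.3])
i = np.arange(1, len(x) + 1)

def neg_loglik(lam):
    # Negative of n*ln(lambda) - lambda * sum(i * x_i); ln(n!) omitted.
    return -(len(x) * np.log(lam) - lam * np.sum(i * x))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 1.0), method="bounded")
print(res.x)  # approximately 0.0436
```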

The maximum likelihood estimator is obtained by taking the derivative of the log-likelihood function with respect to \(\lambda\) and setting it equal to \(0\).

As a result, the derivative,

\(\begin{array}{c}\frac{d}{d\lambda} \ln f\left( x_1, x_2, \ldots, x_n; \lambda \right) = \frac{d}{d\lambda}\left( \ln n! + n \ln \lambda - \lambda \sum\limits_{i=1}^{n} i x_i \right)\\ = \frac{n}{\lambda} - \sum\limits_{i=1}^{n} i x_i\end{array}\)

The maximum likelihood estimator \(\hat\lambda\) is obtained by solving the equation

\(\begin{array}{c}\frac{{\rm{n}}}{{\rm{\lambda }}}{\rm{ - }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\rm{i}} {{\rm{x}}_{\rm{i}}}{\rm{ = 0}}\\\frac{{\rm{n}}}{{\rm{\lambda }}}{\rm{ = }}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\rm{i}} {{\rm{x}}_{\rm{i}}}\end{array}\)

for \(\hat\lambda\). Hence, the maximum likelihood estimator of the parameter \(\lambda\) is

\({\rm{\hat \lambda = }}\frac{{\rm{n}}}{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\rm{i}} {{\rm{X}}_{\rm{i}}}}}\)
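The algebra can also be verified symbolically; here is a brief sketch with sympy, where the symbol \(S\) stands in for the data sum \(\sum_{i=1}^{n} i x_i\):

```python
import sympy as sp

lam, n, S = sp.symbols("lambda n S", positive=True)  # S plays the role of sum(i * x_i)

# Log-likelihood up to the additive constant ln(n!), which is free of lambda.
loglik = n * sp.log(lam) - lam * S

# Setting the derivative to zero recovers lambda-hat = n / S.
print(sp.solve(sp.Eq(sp.diff(loglik, lam), 0), lam))  # [n/S]
```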

04

Evaluating the value

We need the values \(x_i\), the times between successive births, which are computed from the successive birth times as follows:

\(x_1 = 25.2\), the time of the first birth;

\(x_2 = 41.7 - 25.2 = 16.5\), the time between the first and second births;

\(x_3 = 51.2 - 41.7 = 9.5\), the time between the second and third births;

\(x_4 = 55.5 - 51.2 = 4.3\), the time between the third and fourth births;

\(x_5 = 59.5 - 55.5 = 4.0\), the time between the fourth and fifth births;

\(x_6 = 61.8 - 59.5 = 2.3\), the time between the fifth and sixth births.

As a result, the total can now be calculated as follows:

\(\begin{array}{c}\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\rm{i}} {{\rm{x}}_{\rm{i}}}{\rm{ = 1 \times }}{{\rm{x}}_{\rm{1}}}{\rm{ + 2 \times }}{{\rm{x}}_{\rm{2}}}{\rm{ + \ldots + 6 \times }}{{\rm{x}}_{\rm{6}}}\\{\rm{ = 1 \times 25}}{\rm{.2 + 2 \times 16}}{\rm{.5 + \ldots + 6 \times 2}}{\rm{.3}}\\{\rm{ = 137}}{\rm{.7}}\end{array}\)

Finally, the maximum likelihood estimate of \(\lambda\) is

\(\begin{array}{c}{\rm{\hat \lambda = }}\frac{{\rm{n}}}{{\sum\limits_{{\rm{i = 1}}}^{\rm{n}} {\rm{i}} {{\rm{x}}_{\rm{i}}}}}\\{\rm{ = }}\frac{{\rm{6}}}{{{\rm{137}}{\rm{.7}}}}\\{\rm{ = 0}}{\rm{.0436}}\end{array}\)

Therefore, \({\rm{\hat \lambda = 0}}{\rm{.0436}}\).
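The whole numerical step fits in a few lines of Python; this quick sketch reproduces the values above from the raw birth times:

```python
import numpy as np

birth_times = np.array([25.2, 41.7, 51.2, 55.5, 59.5, 61.8])

# Inter-birth times: x_1 is the first birth time, x_i the gap between
# births i-1 and i (prepending 0 makes np.diff produce x_1 as well).
x = np.diff(birth_times, prepend=0.0)

# During the i-th gap, i individuals are alive, so the sum weights x_i by i.
i = np.arange(1, len(x) + 1)
S = np.sum(i * x)

print(S)           # 137.7
print(len(x) / S)  # lambda-hat = 6 / 137.7 ≈ 0.0436
```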


Most popular questions from this chapter

a. Let \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) be a random sample from a uniform distribution on \({\rm{(0,\theta )}}\). Then the mle of \({\rm{\theta }}\) is \({\rm{\hat \theta = Y = max}}\left( {{{\rm{X}}_{\rm{i}}}} \right)\). Use the fact that \({\rm{Y}} \le {\rm{y}}\) if each \({{\rm{X}}_{\rm{i}}} \le {\rm{y}}\) to derive the cdf of Y. Then show that the pdf of \({\rm{Y = max}}\left( {{{\rm{X}}_{\rm{i}}}} \right)\) is \({{\rm{f}}_{\rm{Y}}}{\rm{(y) = }}\left\{ {\begin{array}{*{20}{c}}{\frac{{{\rm{n}}{{\rm{y}}^{{\rm{n - 1}}}}}}{{{{\rm{\theta }}^{\rm{n}}}}}}&{{\rm{0}} \le {\rm{y}} \le {\rm{\theta }}}\\{\rm{0}}&{{\rm{ otherwise }}}\end{array}} \right.\)

b. Use the result of part (a) to show that the mle is biased but that \({\rm{(n + 1)}}\)\({\rm{max}}\left( {{{\rm{X}}_{\rm{i}}}} \right){\rm{/n}}\) is unbiased.

Urinary angiotensinogen (AGT) level is one quantitative indicator of kidney function. The article "Urinary Angiotensinogen as a Potential Biomarker of Chronic Kidney Diseases" (J. of the Amer. Society of Hypertension, \(2008: 349 - 354\)) describes a study in which urinary AGT level \((\mu g)\) was determined for a sample of adults with chronic kidney disease. Here is representative data (consistent with summary quantities and descriptions in the cited article):

An appropriate probability plot supports the use of the lognormal distribution (see Section \({\rm{4}}{\rm{.5}}\)) as a reasonable model for urinary AGT level (this is what the investigators did).

a. Estimate the parameters of the distribution. (Hint: Remember that \(X\) has a lognormal distribution with parameters \(\mu\) and \(\sigma^2\) if \(\ln(X)\) is normally distributed with mean \(\mu\) and variance \(\sigma^2\).)

b. Use the estimates of part (a) to calculate an estimate of the expected value of AGT level. (Hint: What is \({\rm{E(X)}}\)?)

Consider a random sample \({{\rm{X}}_{\rm{1}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\) from the pdf

\(f(x;\theta) = 0.5(1 + \theta x), \quad -1 \le x \le 1\)

where \(-1 \le \theta \le 1\) (this distribution arises in particle physics). Show that \(\hat\theta = 3\bar X\) is an unbiased estimator of \(\theta\). (Hint: First determine \(\mu = E(X) = E(\bar X)\).)

Each of \({\rm{n}}\) specimens is to be weighed twice on the same scale. Let \({{\rm{X}}_{\rm{i}}}\) and \({{\rm{Y}}_{\rm{i}}}\) denote the two observed weights for the \({\rm{ith}}\) specimen. Suppose\({{\rm{X}}_{\rm{i}}}\) and \({{\rm{Y}}_{\rm{i}}}\) are independent of one another, each normally distributed with mean value \({{\rm{\mu }}_{\rm{i}}}\) (the true weight of specimen \({\rm{i}}\)) and variance \({{\rm{\sigma }}^{\rm{2}}}\) . a. Show that the maximum likelihood estimator of \({{\rm{\sigma }}^{\rm{2}}}\) is \({\widehat {\rm{\sigma }}^{\rm{2}}}{\rm{ = \Sigma }}{\left( {{{\rm{X}}_{\rm{i}}}{\rm{ - }}{{\rm{Y}}_{\rm{i}}}} \right)^{\rm{2}}}{\rm{/(4n)}}\). (Hint: If \({\rm{\bar z = }}\left( {{{\rm{z}}_{\rm{1}}}{\rm{ + }}{{\rm{z}}_{\rm{2}}}} \right){\rm{/2}}\), then \({\rm{\Sigma }}{\left( {{{\rm{z}}_{\rm{i}}}{\rm{ - \bar z}}} \right)^{\rm{2}}}{\rm{ = }}{\left( {{{\rm{z}}_{\rm{1}}}{\rm{ - }}{{\rm{z}}_{\rm{2}}}} \right)^{\rm{2}}}{\rm{/2}}\)) b. Is the mle \({{\rm{\hat \sigma }}^{\rm{2}}}\) an unbiased estimator of \({{\rm{\sigma }}^{\rm{2}}}\)? Find an unbiased estimator of \({{\rm{\sigma }}^{\rm{2}}}\). (Hint: For any \({\rm{rv Z,E}}\left( {{{\rm{Z}}^{\rm{2}}}} \right){\rm{ = V(Z) + (E(Z)}}{{\rm{)}}^{\rm{2}}}\). Apply this to \({\rm{Z = }}{{\rm{X}}_{\rm{i}}}{\rm{ - }}{{\rm{Y}}_{\rm{i}}}\).)

An estimator \(\hat\theta\) is said to be consistent if for any \(\epsilon > 0\), \(P(|\hat\theta - \theta| \ge \epsilon) \to 0\) as \(n \to \infty\). That is, \(\hat\theta\) is consistent if, as the sample size gets larger, it is less and less likely that \(\hat\theta\) will be further than \(\epsilon\) from the true value of \(\theta\). Show that \(\bar X\) is a consistent estimator of \(\mu\) when \(\sigma^2 < \infty\), by using Chebyshev's inequality from Exercise \(44\) of Chapter \(3\). (Hint: The inequality can be rewritten in the form \(P\left( \left| Y - \mu_Y \right| \ge \epsilon \right) \le \sigma_Y^2 / \epsilon^2\). Now identify \(Y\) with \(\bar X\).)
