
Let \(X\) represent the error in making a measurement of a physical characteristic or property (e.g., the boiling point of a particular liquid). It is often reasonable to assume that \(E(X) = 0\) and that \(X\) has a normal distribution. Thus, the pdf of any particular measurement error is

\(f(x;\theta ) = \frac{1}{\sqrt{2\pi\theta}}\,e^{-x^{2}/(2\theta)}, \qquad -\infty < x < \infty \)

(where we have used \(\theta\) in place of \(\sigma^{2}\)). Now suppose that \(n\) independent measurements are made, resulting in measurement errors \(X_{1} = x_{1}, X_{2} = x_{2}, \ldots, X_{n} = x_{n}\). Obtain the mle of \(\theta\).
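As a quick numerical sanity check, here is a minimal sketch (assuming NumPy and SciPy are available; the value of \(\theta\) is illustrative) confirming that this pdf is just the normal density with mean \(0\) and standard deviation \(\sqrt{\theta}\):

```python
import numpy as np
from scipy.stats import norm

theta = 2.5                      # illustrative value of the variance parameter
x = np.linspace(-3.0, 3.0, 7)

# f(x; theta) as written above, with theta playing the role of sigma^2
f = np.exp(-x**2 / (2 * theta)) / np.sqrt(2 * np.pi * theta)
print(np.allclose(f, norm.pdf(x, loc=0.0, scale=np.sqrt(theta))))  # True
```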

Short Answer


The maximum likelihood estimator is \(\hat\theta = \frac{1}{n}\sum\limits_{i=1}^{n} X_{i}^{2}\).

Step by step solution

Step 1: Introduction

An estimator is a rule for computing an estimate of a given quantity from observed data. Three things are distinct: the rule itself (the estimator), the quantity of interest (the estimand), and the result of applying the rule to the data (the estimate).

Step 2: Explanation

Let the random variables \(X_{1}, X_{2}, \ldots, X_{n}\) have joint pdf or pmf

\(f\left( x_{1}, x_{2}, \ldots, x_{n}; \theta_{1}, \theta_{2}, \ldots, \theta_{m} \right), \qquad n, m \in \mathbb{N},\)

where the parameters \(\theta_{i}, i = 1, 2, \ldots, m\) are unknown. When \(f\) is regarded as a function of the parameters \(\theta_{i}, i = 1, 2, \ldots, m\), with the observations held fixed, it is called the likelihood function.

Values \(\hat\theta_{i}\) that maximize the likelihood function are the maximum likelihood estimates (mle's); equivalently, they are the values \(\hat\theta_{i}\) for which

\(f\left( x_{1}, \ldots, x_{n}; \hat\theta_{1}, \ldots, \hat\theta_{m} \right) \ge f\left( x_{1}, \ldots, x_{n}; \theta_{1}, \ldots, \theta_{m} \right)\)

for every \(\theta_{i}, i = 1, 2, \ldots, m\). By substituting \(x_{i}\) with \(X_{i}\), the maximum likelihood estimators are obtained.

Because the \(n\) measurements are independent, the likelihood function is the product of the individual pdfs:

\(\begin{aligned} f\left( x_{1}, x_{2}, \ldots, x_{n}; \theta \right) &= \frac{1}{\sqrt{2\pi\theta}} \exp\left\{ -\frac{x_{1}^{2}}{2\theta} \right\} \times \frac{1}{\sqrt{2\pi\theta}} \exp\left\{ -\frac{x_{2}^{2}}{2\theta} \right\} \times \cdots \times \frac{1}{\sqrt{2\pi\theta}} \exp\left\{ -\frac{x_{n}^{2}}{2\theta} \right\} \\ &= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\theta}} \exp\left\{ -\frac{x_{i}^{2}}{2\theta} \right\} \\ &= (2\pi\theta)^{-n/2} \exp\left\{ -\sum_{i=1}^{n} \frac{x_{i}^{2}}{2\theta} \right\} \end{aligned}\)

To find the maximum, consider the log-likelihood function

\(\begin{aligned} \ln f\left( x_{1}, x_{2}, \ldots, x_{n}; \theta \right) &= \ln\left( (2\pi\theta)^{-n/2} \exp\left\{ -\sum_{i=1}^{n} \frac{x_{i}^{2}}{2\theta} \right\} \right) \\ &= -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\theta - \sum_{i=1}^{n} \frac{x_{i}^{2}}{2\theta} \end{aligned}\)
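As a minimal sketch (assuming NumPy and SciPy; the function name and data values are illustrative), the log-likelihood above can be coded directly and checked against SciPy's normal log-density:

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(theta, x):
    # ln f(x1,...,xn; theta) = -(n/2) ln(2*pi) - (n/2) ln(theta) - sum(x_i^2) / (2*theta)
    n = len(x)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * n * np.log(theta)
            - np.sum(x**2) / (2 * theta))

x = np.array([0.3, -1.2, 0.7, 2.1, -0.4])   # hypothetical measurement errors
print(np.isclose(log_likelihood(1.5, x),
                 norm.logpdf(x, loc=0.0, scale=np.sqrt(1.5)).sum()))  # True
```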

By taking the derivative of the log-likelihood function with respect to \(\theta\) and equating it to \(0\), the maximum likelihood estimator is obtained. The derivative is

\(\begin{aligned} \frac{d}{d\theta} \ln f\left( x_{1}, x_{2}, \ldots, x_{n}; \theta \right) &= \frac{d}{d\theta}\left( -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\theta - \sum_{i=1}^{n} \frac{x_{i}^{2}}{2\theta} \right) \\ &= -\frac{n}{2\theta} + \sum_{i=1}^{n} \frac{x_{i}^{2}}{2\theta^{2}} \end{aligned}\)

Therefore, the maximum likelihood estimator is obtained by solving, for \(\hat\theta\), the equation

\(-\frac{n}{2\hat\theta} + \sum_{i=1}^{n} \frac{x_{i}^{2}}{2\hat\theta^{2}} = 0,\)

which, after multiplying through by \(2\hat\theta^{2}\), becomes

\(-n\hat\theta + \sum_{i=1}^{n} x_{i}^{2} = 0.\)
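A small numerical check (illustrative data, assuming NumPy) that the score above indeed vanishes at the solution \(\hat\theta = \frac{1}{n}\sum_{i=1}^{n} x_{i}^{2}\):

```python
import numpy as np

x = np.array([0.3, -1.2, 0.7, 2.1, -0.4])   # hypothetical measurement errors
n = len(x)

theta_hat = np.mean(x**2)                    # candidate MLE
score = -n / (2 * theta_hat) + np.sum(x**2) / (2 * theta_hat**2)
print(np.isclose(score, 0.0))                # True: the derivative vanishes here
```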

Therefore, the solution is \(\hat\theta = \frac{1}{n}\sum\limits_{i=1}^{n} X_{i}^{2}\), which is the maximum likelihood estimator.
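As a closing check, here is a minimal simulation sketch (assuming NumPy and SciPy; the true \(\theta\) and sample size are illustrative) comparing the closed-form estimate with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 2.0                                    # illustrative true variance
x = rng.normal(0.0, np.sqrt(theta_true), size=1000)

closed_form = np.mean(x**2)                         # the MLE derived above

def neg_log_likelihood(theta):
    # negative log-likelihood, dropping the additive constant -(n/2) ln(2*pi)
    return 0.5 * len(x) * np.log(theta) + np.sum(x**2) / (2 * theta)

numeric = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded").x
print(closed_form, numeric)   # the two estimates should agree to several decimals
```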


Most popular questions from this chapter

Each of \(n\) specimens is to be weighed twice on the same scale. Let \(X_{i}\) and \(Y_{i}\) denote the two observed weights for the \(i\)th specimen. Suppose \(X_{i}\) and \(Y_{i}\) are independent of one another, each normally distributed with mean value \(\mu_{i}\) (the true weight of specimen \(i\)) and variance \(\sigma^{2}\).

a. Show that the maximum likelihood estimator of \(\sigma^{2}\) is \(\hat\sigma^{2} = \Sigma\left( X_{i} - Y_{i} \right)^{2}/(4n)\). (Hint: If \(\bar z = \left( z_{1} + z_{2} \right)/2\), then \(\Sigma\left( z_{i} - \bar z \right)^{2} = \left( z_{1} - z_{2} \right)^{2}/2\).)

b. Is the mle \(\hat\sigma^{2}\) an unbiased estimator of \(\sigma^{2}\)? Find an unbiased estimator of \(\sigma^{2}\). (Hint: For any rv \(Z\), \(E\left( Z^{2} \right) = V(Z) + (E(Z))^{2}\). Apply this to \(Z = X_{i} - Y_{i}\).)

Consider randomly selecting \(n\) segments of pipe and determining the corrosion loss (mm) in the wall thickness for each one. Denote these corrosion losses by \(Y_{1}, \ldots, Y_{n}\). The article “A Probabilistic Model for a Gas Explosion Due to Leakages in the Grey Cast Iron Gas Mains” (Reliability Engr. and System Safety, 2013: 270-279) proposes a linear corrosion model: \(Y_{i} = t_{i}R\), where \(t_{i}\) is the age of the pipe and \(R\), the corrosion rate, is exponentially distributed with parameter \(\lambda\). Obtain the maximum likelihood estimator of the exponential parameter (the resulting mle appears in the cited article). (Hint: If \(c > 0\) and \(X\) has an exponential distribution, so does \(cX\).)

An estimator \(\hat\theta\) is said to be consistent if for any \(\varepsilon > 0\), \(P(|\hat\theta - \theta| \ge \varepsilon) \to 0\) as \(n \to \infty\). That is, \(\hat\theta\) is consistent if, as the sample size gets larger, it is less and less likely that \(\hat\theta\) will be further than \(\varepsilon\) from the true value of \(\theta\). Show that \(\bar X\) is a consistent estimator of \(\mu\) when \(\sigma^{2} < \infty\), by using Chebyshev's inequality from Exercise 44 of Chapter 3. (Hint: The inequality can be rewritten in the form \(P\left( \left| Y - \mu_{Y} \right| \ge \varepsilon \right) \le \sigma_{Y}^{2}/\varepsilon^{2}\). Now identify \(Y\) with \(\bar X\).)
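A small simulation sketch (assuming NumPy; all values are illustrative) of what consistency means for \(\bar X\): the fraction of samples whose mean deviates from \(\mu\) by at least \(\varepsilon\) shrinks as \(n\) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, eps = 5.0, 2.0, 0.25          # illustrative values
for n in (10, 100, 1000):
    # 10,000 replications of a sample of size n; estimate P(|Xbar - mu| >= eps)
    xbar = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)
    print(n, np.mean(np.abs(xbar - mu) >= eps))   # decreases toward 0
```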

An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of \(n\) students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of \(100\) cards, of which \(50\) are of type I and \(50\) are of type II.

Type I: Have you violated the honor code (yes or no)?

Type II: Is the last digit of your telephone number a \(0\), \(1\), or \(2\) (yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let \(p\) denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let \(\lambda = P\)(yes response). Then \(\lambda\) and \(p\) are related by \(\lambda = .5p + (.5)(.3)\).

a. Let \(Y\) denote the number of yes responses, so \(Y \sim \operatorname{Bin}(n,\lambda)\). Thus \(Y/n\) is an unbiased estimator of \(\lambda\). Derive an estimator for \(p\) based on \(Y\). If \(n = 80\) and \(y = 20\), what is your estimate? (Hint: Solve \(\lambda = .5p + .15\) for \(p\) and then substitute \(Y/n\) for \(\lambda\); a numeric sketch follows part (c) below.)

b. Use the fact that \(E(Y/n) = \lambda\) to show that your estimator \(\hat p\) is unbiased.

c. If there were \(70\) type I and \(30\) type II cards, what would be your estimator for \(p\)?
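Following the hint in part (a), a minimal numeric sketch with the stated values \(n = 80\) and \(y = 20\) (solving \(\lambda = .5p + .15\) for \(p\), then substituting \(y/n\) for \(\lambda\)):

```python
# Values taken directly from the problem statement
n, y = 80, 20
lam_hat = y / n                  # unbiased estimate of lambda: 0.25
p_hat = (lam_hat - 0.15) / 0.5   # equivalently 2*lam_hat - 0.3
print(p_hat)                     # 0.2
```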

The mean squared error of an estimator \(\hat\theta\) is \(\operatorname{MSE}(\hat\theta) = E(\hat\theta - \theta)^{2}\). If \(\hat\theta\) is unbiased, then \(\operatorname{MSE}(\hat\theta) = V(\hat\theta)\), but in general \(\operatorname{MSE}(\hat\theta) = V(\hat\theta) + (\text{bias})^{2}\). Consider the estimator \(\hat\sigma^{2} = KS^{2}\), where \(S^{2} =\) sample variance. What value of \(K\) minimizes the mean squared error of this estimator when the population distribution is normal? (Hint: It can be shown that \(E\left( \left( S^{2} \right)^{2} \right) = (n + 1)\sigma^{4}/(n - 1)\). In general, it is difficult to find \(\hat\theta\) to minimize \(\operatorname{MSE}(\hat\theta)\), which is why we look only at unbiased estimators and minimize \(V(\hat\theta)\).)
