
Use the data in Table 11.19 on page 762. This time, fit the model developed in Example 12.5.6. Use the prior hyperparameters \(\lambda_0 = \alpha_0 = 1\), \(\beta_0 = 0.1\), \(u_0 = 0.001\), and \(\psi_0 = 800\). Obtain a sample of 10,000 from the joint posterior distribution of the parameters. Estimate the posterior means of the three parameters \(\mu_1, \mu_2, \mu_3\).

Short Answer


\(\exp \left\{ - \frac{u_0 \left( \psi - \psi_0 \right)^2}{2} - \sum\limits_{i = 1}^p \tau_i \left( \beta_0 + \frac{n_i \left( \mu_i - \overline{y}_i \right)^2 + w_i + \lambda_0 \left( \mu_i - \psi \right)^2}{2} \right) \right\}\)

\(w_i = \sum\limits_{j = 1}^{n_i} \left( y_{ij} - \overline{y}_i \right)^2, \quad i = 1, 2, \ldots, p.\)

Step by step solution

01

Definition of a probability density function

In statistics, a probability density function (PDF) is a function whose integral is calculated to determine the probabilities associated with a continuous random variable.
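For example, if a continuous random variable \(X\) has p.d.f. \(f\), the probability that \(X\) falls in an interval \([a, b]\) is obtained by integration:

\(\Pr(a \le X \le b) = \int_a^b f(x)\,dx.\)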

Get the data from the mentioned table. There are \(n_1 = n_2 = n_3 = 6\) observations in each group, and let \(p = 3\). The following product is the joint probability density function, that is,

likelihood function \(\times\) conditional p.d.f. of \(\mu_i\) given \(\tau_i\) and \(\psi\) \(\times\) prior of \(\tau_i\) \(\times\) prior of \(\psi\)

\(\exp \left\{ - \frac{u_0 \left( \psi - \psi_0 \right)^2}{2} - \sum\limits_{i = 1}^p \tau_i \left( \beta_0 + \frac{n_i \left( \mu_i - \overline{y}_i \right)^2 + w_i + \lambda_0 \left( \mu_i - \psi \right)^2}{2} \right) \right\} \times \prod\limits_{i = 1}^p \tau_i^{\alpha_0 + \left( n_i + 1 \right)/2 - 1} \quad (1)\)

Here, \(w_i\) is defined by

\(w_i = \sum\limits_{j = 1}^{n_i} \left( y_{ij} - \overline{y}_i \right)^2, \quad i = 1, 2, \ldots, p.\)

From Eq. (1) it follows that, for fixed \(\mu_i, i = 1, 2, \ldots, p\) and \(\psi\), the function of \(\tau_i\) alone is proportional to the probability density function of the gamma distribution with parameters

\(\alpha_0 + \left( n_i + 1 \right)/2\)

and

\(\beta_0 + \frac{1}{2}\left( n_i \left( \mu_i - \overline{y}_i \right)^2 + w_i + \lambda_0 \left( \mu_i - \psi \right)^2 \right).\)

From Eq. (1) it follows that, for fixed \(\tau_i, i = 1, 2, \ldots, p\) and \(\mu_i, i = 1, 2, \ldots, p\), the function of \(\psi\) alone is proportional to the probability density function of the normal distribution with mean

\(\frac{u_0 \psi_0 + \lambda_0 \sum\nolimits_{i = 1}^p \tau_i \mu_i}{u_0 + \lambda_0 \sum\nolimits_{i = 1}^p \tau_i}\)

and precision

\(u_0 + \lambda_0 \sum\limits_{i = 1}^p \tau_i.\)

From Eq. (1) it follows that, for fixed \(\tau_i, i = 1, 2, \ldots, p\), \(\psi\), and all but one of the \(\mu_i, i = 1, 2, \ldots, p\), the function of the remaining \(\mu_i\) alone is proportional to the probability density function of the normal distribution with mean

\(\frac{n_i \overline{y}_i + \lambda_0 \psi}{n_i + \lambda_0}\)

and precision

\(\tau_i \left( n_i + \lambda_0 \right).\)

Any statistical software may be used to simulate from these distributions. Follow the Gibbs sampling algorithm and the code below to obtain the results.

02

Gibbs sampling

Gibbs sampling is a Markov chain Monte Carlo method for sampling from complex joint distributions; it iteratively draws each variable from its conditional distribution given the current values of the other variables.

a) The model is explained clearly in the mentioned example. To fit it, use the following Gibbs algorithm.

The Gibbs Sampling Algorithm: The steps of the algorithm are

\(\left(1.\right)\) Pick starting values \(x_2^{(0)}\) for \(x_2\), and let \(i = 0\).

\(\left(2.\right)\) Let \(x_1^{(i+1)}\) be a simulated value from the conditional distribution of \(x_1\) given that \(X_2 = x_2^{(i)}\).

\(\left(3.\right)\) Let \(x_2^{(i+1)}\) be a simulated value from the conditional distribution of \(x_2\) given that \(X_1 = x_1^{(i+1)}\).

\(\left(4.\right)\) Repeat steps 2. and 3. with \(i\) replaced by \(i + 1\).
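As a standalone illustration of steps (1.)-(4.) (not part of the exercise itself), the two-variable scheme can be sketched in Python for a bivariate standard normal target with correlation \(\rho\), where both conditional distributions are normal:

```python
import numpy as np

# Gibbs sampling for a bivariate standard normal with correlation rho:
# X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for X2 | X1.
rng = np.random.default_rng(1)
rho = 0.8
cond_sd = np.sqrt(1 - rho ** 2)

x1, x2 = 0.0, 0.0                        # step (1.): starting values
draws = []
for i in range(20000):
    x1 = rng.normal(rho * x2, cond_sd)   # step (2.)
    x2 = rng.normal(rho * x1, cond_sd)   # step (3.)
    draws.append((x1, x2))               # step (4.): repeat with i + 1
draws = np.array(draws)[1000:]           # discard burn-in

sample_corr = np.corrcoef(draws.T)[0, 1]
print(sample_corr)  # close to rho
```

After burn-in, the retained draws behave like (dependent) samples from the joint distribution, so the sample correlation approaches \(\rho\).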

Small = c(810, 820, 820, 835, 835, 835)
Medium = c(840, 840, 840, 845, 855, 850)
Large = c(785, 790, 785, 760, 760, 770)

# n1 = n2 = n3 = 6
n = length(Small)
# number of groups
p = 3

# initialization of the prior hyperparameters
lambda0 = 1
alpha0 = 1
beta0 = 0.1
u0 = 0.001
psi0 = 800

AvgSmall = mean(Small)
AvgMedium = mean(Medium)
AvgLarge = mean(Large)

# computation of wi
w = rep(NA, 3)
w[1] = sum((Small - AvgSmall)^2)
w[2] = sum((Medium - AvgMedium)^2)
w[3] = sum((Large - AvgLarge)^2)

# initialization for the simulation
I = 10000
mu1 = rep(NA, I); tau1 = rep(NA, I)
mu2 = rep(NA, I); tau2 = rep(NA, I)
mu3 = rep(NA, I); tau3 = rep(NA, I)
psi = rep(NA, I)
BurnIn = 100

mu1[1] = 850
mu2[1] = 850
mu3[1] = 850
psi[1] = 800

for (i in 2:I) {
  # tau_i given the rest: gamma with shape alpha0 + (n + 1)/2 and the rate below
  tau1[i] = rgamma(1, shape = alpha0 + (n + 1)/2,
                   rate = beta0 + (n*(mu1[i-1] - AvgSmall)^2 + w[1] + lambda0*(mu1[i-1] - psi[i-1])^2)/2)
  tau2[i] = rgamma(1, shape = alpha0 + (n + 1)/2,
                   rate = beta0 + (n*(mu2[i-1] - AvgMedium)^2 + w[2] + lambda0*(mu2[i-1] - psi[i-1])^2)/2)
  tau3[i] = rgamma(1, shape = alpha0 + (n + 1)/2,
                   rate = beta0 + (n*(mu3[i-1] - AvgLarge)^2 + w[3] + lambda0*(mu3[i-1] - psi[i-1])^2)/2)

  # psi given the rest: normal with the mean below and precision precPsi
  precPsi = u0 + lambda0*(tau1[i] + tau2[i] + tau3[i])
  meanPsi = (u0*psi0 + lambda0*(tau1[i]*mu1[i-1] + tau2[i]*mu2[i-1] + tau3[i]*mu3[i-1]))/precPsi
  psi[i] = rnorm(1, mean = meanPsi, sd = 1/sqrt(precPsi))

  # mu_i given the rest: normal with precision tau_i*(n + lambda0)
  mu1[i] = rnorm(1, mean = (n*AvgSmall + lambda0*psi[i])/(n + lambda0), sd = 1/sqrt(tau1[i]*(n + lambda0)))
  mu2[i] = rnorm(1, mean = (n*AvgMedium + lambda0*psi[i])/(n + lambda0), sd = 1/sqrt(tau2[i]*(n + lambda0)))
  mu3[i] = rnorm(1, mean = (n*AvgLarge + lambda0*psi[i])/(n + lambda0), sd = 1/sqrt(tau3[i]*(n + lambda0)))
}

mu1 = mu1[-(1:BurnIn)]
mu2 = mu2[-(1:BurnIn)]
mu3 = mu3[-(1:BurnIn)]
mean(mu1)
mean(mu2)
mean(mu3)
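The same sampler can also be written as a compact, self-contained sketch in Python (an illustration only; the solution itself uses R, and the names `ybar`, `w`, and `post_mu` simply denote \(\overline{y}_i\), \(w_i\), and the estimated posterior means):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data for the three groups (Table 11.19, as used above)
groups = [
    np.array([810.0, 820, 820, 835, 835, 835]),  # Small
    np.array([840.0, 840, 840, 845, 855, 850]),  # Medium
    np.array([785.0, 790, 785, 760, 760, 770]),  # Large
]
p = len(groups)
n = np.array([len(g) for g in groups])
ybar = np.array([g.mean() for g in groups])
w = np.array([((g - g.mean()) ** 2).sum() for g in groups])

# Prior hyperparameters from the exercise
lambda0, alpha0, beta0, u0, psi0 = 1.0, 1.0, 0.1, 0.001, 800.0

I, burn_in = 10000, 100
mu = np.empty((I, p)); tau = np.empty((I, p)); psi = np.empty(I)
mu[0] = 850.0
psi[0] = 800.0

for i in range(1, I):
    # tau_i | rest ~ Gamma(alpha0 + (n_i + 1)/2,
    #   rate = beta0 + [n_i(mu_i - ybar_i)^2 + w_i + lambda0(mu_i - psi)^2]/2)
    rate = beta0 + (n * (mu[i-1] - ybar) ** 2 + w
                    + lambda0 * (mu[i-1] - psi[i-1]) ** 2) / 2
    tau[i] = rng.gamma(shape=alpha0 + (n + 1) / 2, scale=1.0 / rate)
    # psi | rest ~ normal with precision u0 + lambda0 * sum(tau_i)
    prec_psi = u0 + lambda0 * tau[i].sum()
    mean_psi = (u0 * psi0 + lambda0 * (tau[i] * mu[i-1]).sum()) / prec_psi
    psi[i] = rng.normal(mean_psi, 1.0 / np.sqrt(prec_psi))
    # mu_i | rest ~ normal with precision tau_i * (n_i + lambda0)
    mean_mu = (n * ybar + lambda0 * psi[i]) / (n + lambda0)
    mu[i] = rng.normal(mean_mu, 1.0 / np.sqrt(tau[i] * (n + lambda0)))

post_mu = mu[burn_in:].mean(axis=0)  # posterior means of mu_1, mu_2, mu_3
print(post_mu)
```

With such a weak prior on \(\psi\) (precision \(u_0 = 0.001\)), the estimated posterior means land close to the sample means, shrunk slightly toward the common value \(\psi\).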

The number of simulations \(I\) must be large enough. The code above computes all required values and implements the algorithm itself.

The prior hyperparameters are

\(\lambda_0 = 1,\ \alpha_0 = 1,\ \beta_0 = 0.1,\ u_0 = 0.001,\ \psi_0 = 800.\)

The initial results obtained with the code are

\(\overline{y}_1 = 825.8,\ \overline{y}_2 = 845,\ \overline{y}_3 = 775,\)

\(w_1 = 571,\ w_2 = 200,\ w_3 = 900.\)
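These values can be verified with a few lines of Python (an illustrative check; `wi` here is just the within-group sum of squared deviations \(w_i\)):

```python
# Group data from Table 11.19 and the within-group sums of squares w_i.
small = [810, 820, 820, 835, 835, 835]
medium = [840, 840, 840, 845, 855, 850]
large = [785, 790, 785, 760, 760, 770]

def wi(y):
    """Sum of squared deviations about the group mean."""
    ybar = sum(y) / len(y)
    return sum((v - ybar) ** 2 for v in y)

for name, y in [("Small", small), ("Medium", medium), ("Large", large)]:
    print(name, round(sum(y) / len(y), 1), round(wi(y)))
# Small 825.8 571
# Medium 845.0 200
# Large 775.0 900
```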

Then, after initialization and the simulation, the estimates of the posterior means of \(\mu_1, \mu_2, \mu_3\) are obtained from mean(mu1), mean(mu2), and mean(mu3).


Most popular questions from this chapter

Let \(X_1, \ldots, X_n\) be i.i.d. with the normal distribution having mean \(\mu\) and precision \(\tau\). Gibbs sampling allows one to use a prior distribution for \(\left( \mu, \tau \right)\) in which \(\mu\) and \(\tau\) are independent. Let the prior distribution of \(\mu\) be the normal distribution with mean \(\mu_0\) and variance \(\gamma_0\), and let the prior distribution of \(\tau\) be the gamma distribution with parameters \(\alpha_0\) and \(\beta_0\).

a. Show that Table 12.8 specifies the appropriate conditional distribution for each parameter given the other.

b. Use the New Mexico nursing home data (Examples 12.5.2 and 12.5.3). Let the prior hyperparameters be \(\alpha_0 = 2, \beta_0 = 6300, \mu_0 = 200\), and \(\gamma_0 = 6.35 \times 10^{-4}\). Implement a Gibbs sampler to find the posterior distribution of \(\left( \mu, \tau \right)\). In particular, calculate an interval containing 95 percent of the posterior distribution of \(\mu\).

Test the standard normal pseudo-random number generator on your computer by generating a sample of size 10,000 and drawing a normal quantile plot. How straight does the plot appear to be?

In Example 12.6.7, let \(\left( X^*, Y^* \right)\) be a random draw from the sample distribution \(F_n\). Prove that the correlation between \(X^*\) and \(Y^*\) is \(R\) in Eq. (12.6.2).

Consider the power calculation done in Example 9.5.5.

a. Simulate \({v_0} = 1000\) i.i.d. noncentral t pseudo-random variables with 14 degrees of freedom and noncentrality parameter \(1.936.\)

b. Estimate the probability that a noncentral t random variable with 14 degrees of freedom and noncentrality parameter \(1.936\) is at least \(1.761.\) Also, compute the standard simulation error.

c. Suppose that we want our estimator of the noncentral t probability in part (b) to be closer than \(0.01\) to the true value with probability \(0.99.\) How many noncentral t random variables do we need to simulate?

Use the blood pressure data in Table 9.2 that was described in Exercise 10 of Sec. 9.6. Suppose now that we are not confident that the variances are the same for the two treatment groups. Perform a parametric bootstrap analysis of the sort done in Example 12.6.10. Use v=10,000 bootstrap simulations.

a. Estimate the probability of type I error for a two-sample t-test whose nominal level is \({\alpha _0} = 0.1.\)

b. Correct the level of the two-sample t-test by computing the appropriate quantile of the bootstrap distribution of \(\left| {{U^{(i)}}} \right|.\)

c. Compute the standard simulation error for the quantile in part (b).
