
In Example 12.5.6, we used a hierarchical model. In that model, the parameters \(\mu_1, \dots, \mu_p\) were independent random variables, with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda_0 \tau_i\), conditional on \(\psi\) and \(\tau_1, \dots, \tau_p\). To make the model more general, we could also replace \(\lambda_0\) with an unknown parameter \(\lambda\). That is, let the \(\mu_i\)'s be independent, with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda \tau_i\), conditional on \(\psi\), \(\lambda\), and \(\tau_1, \dots, \tau_p\). Let \(\lambda\) have the gamma distribution with parameters \(\gamma_0\) and \(\delta_0\), and let \(\lambda\) be independent of \(\psi\) and \(\tau_1, \dots, \tau_p\). The remaining parameters have the prior distributions stated in Example 12.5.6.

a. Write the product of the likelihood and the prior as a function of the parameters \(\mu_1, \dots, \mu_p\), \(\tau_1, \dots, \tau_p\), \(\psi\), and \(\lambda\).

b. Find the conditional distribution of each parameter given all of the others. Hint: For all the parameters besides \(\lambda\), the distributions should be almost identical to those given in Example 12.5.6. Wherever \(\lambda_0\) appears, of course, something will have to change.

c. Use a prior distribution in which \(\alpha_0 = 1\), \(\beta_0 = 0.1\), \(u_0 = 0.001\), \(\gamma_0 = \delta_0 = 1\), and \(\psi_0 = 170\). Fit the model to the hot dog calorie data from Example 11.6.2. Compute the posterior means of the four \(\mu_i\)'s and \(1/\tau_i\)'s.

Short Answer

  1. The product of the likelihood and the prior is the product of the two expressions given in Step 1; the gamma prior on \(\lambda\) contributes an extra factor.
  2. The conditional distribution of \(\lambda\) given all the other parameters is a gamma distribution.
  3. The estimated posterior means of \(\mu_i\), \(i = 1, 2, 3, 4\), are, respectively, 157, 158.6, 118.9, and 160.4.
  4. The estimated posterior means of \(1/\tau_i\), \(i = 1, 2, 3, 4\), are, respectively, 487, 598.9, 479.44, and 548.1.

Step by step solution

01

Normal distribution and gamma distribution

Recall the model stated in the example above and the probability density functions of the normal and gamma distributions. The product required in part (a) is then the product of

\(\exp \left\{ -\frac{u_0 \left( \psi - \psi_0 \right)^2}{2} - \sum\limits_{i = 1}^{p} \tau_i \left( \beta_0 + \frac{n_i \left( \mu_i - \bar{y}_i \right)^2 + w_i + \lambda \left( \mu_i - \psi \right)^2}{2} \right) \right\}\)

and the following expression

\(\lambda^{p/2 + \gamma_0 - 1} \exp \left( -\lambda \delta_0 \right) \prod\limits_{i = 1}^{p} \tau_i^{\alpha_0 + \left( n_i + 1 \right)/2 - 1},\)

where \(w_i\) is defined in the same way as in the example:

\(w_i = \sum\limits_{j = 1}^{n_i} \left( y_{ij} - \bar{y}_i \right)^2, \qquad i = 1, 2, \dots, p.\)

The second expression collects the factors contributed by the prior distributions, including the new factor coming from the gamma prior on \(\lambda\).
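
To make the product concrete, here is a minimal Python sketch (assumed, not from the text) of the unnormalized log posterior. The function name, argument layout, and the data container y (a list with one array of observations per group) are all illustrative.

import numpy as np

def log_posterior(mu, tau, psi, lam, y,
                  alpha0=1.0, beta0=0.1, u0=0.001, psi0=170.0,
                  gamma0=1.0, delta0=1.0):
    # Unnormalized log of the product above, up to an additive constant.
    # mu, tau: length-p arrays; y: list of p arrays of observations.
    # Hyperparameter defaults follow part (c); all names are illustrative.
    p = len(y)
    n = np.array([len(yi) for yi in y])
    ybar = np.array([yi.mean() for yi in y])
    w = np.array([((yi - yi.mean()) ** 2).sum() for yi in y])

    out = -u0 * (psi - psi0) ** 2 / 2                          # psi prior term
    out += (p / 2 + gamma0 - 1) * np.log(lam) - lam * delta0   # lambda factor
    out += ((alpha0 + (n + 1) / 2 - 1) * np.log(tau)).sum()    # tau_i powers
    out -= (tau * (beta0 + (n * (mu - ybar) ** 2 + w
                            + lam * (mu - psi) ** 2) / 2)).sum()
    return out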

02

Conditional distribution 

As the hint suggests, the conditional distributions stay practically the same as in the example, with \(\lambda\) taking the place of \(\lambda_0\).

Viewing the product as a function of \(\tau_i\) alone, one gets a gamma distribution with parameters

\(\alpha_0 + \frac{n_i + 1}{2}\) and \(\beta_0 + \frac{n_i \left( \mu_i - \bar{y}_i \right)^2 + w_i + \lambda \left( \mu_i - \psi \right)^2}{2}.\)

Next, viewing the product as a function of \(\psi\) and completing the square, one gets a normal distribution with mean and precision

\(\frac{u_0 \psi_0 + \lambda \sum\nolimits_{i = 1}^{p} \tau_i \mu_i}{u_0 + \lambda \sum\nolimits_{i = 1}^{p} \tau_i}\) and \(u_0 + \lambda \sum\limits_{i = 1}^{p} \tau_i.\)

Similarly, when the product is viewed as a function of \(\mu_i\) alone, it is proportional to the probability density function of a normal distribution, for the same reason as in the example. The mean and precision are given by

\(\frac{n_i \bar{y}_i + \lambda \psi}{n_i + \lambda}\) and \(\tau_i \left( n_i + \lambda \right).\)

Finally, when the product is viewed as a function of \(\lambda\), it is proportional to the probability density function of a gamma distribution with parameters

\(\frac{p}{2} + \gamma_0\) and \(\delta_0 + \frac{1}{2}\sum\limits_{i = 1}^{p} \tau_i \left( \mu_i - \psi \right)^2.\)
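
These four conditionals are exactly what a Gibbs sampler cycles through. The following is a minimal sketch, assuming the data arrive as a list y of NumPy arrays, one per hot dog type; the function name, defaults, and initialization are illustrative and are not the textbook's code.

import numpy as np

def gibbs(y, n_iter=100_000, alpha0=1.0, beta0=0.1, u0=0.001,
          psi0=170.0, gamma0=1.0, delta0=1.0, seed=0):
    # Gibbs sampler for the extended model; returns draws of mu and 1/tau.
    # Burn-in discarding is omitted for brevity.
    rng = np.random.default_rng(seed)
    p = len(y)
    n = np.array([len(yi) for yi in y])
    ybar = np.array([yi.mean() for yi in y])
    w = np.array([((yi - yi.mean()) ** 2).sum() for yi in y])

    mu, tau, psi, lam = ybar.copy(), np.ones(p), psi0, 1.0
    mu_draws = np.empty((n_iter, p))
    inv_tau_draws = np.empty((n_iter, p))

    for t in range(n_iter):
        # tau_i | rest: gamma; NumPy's gamma takes a scale, i.e. 1/rate.
        rate = beta0 + (n * (mu - ybar) ** 2 + w + lam * (mu - psi) ** 2) / 2
        tau = rng.gamma(alpha0 + (n + 1) / 2, 1.0 / rate)

        # mu_i | rest: normal with precision tau_i (n_i + lambda).
        mu = rng.normal((n * ybar + lam * psi) / (n + lam),
                        1.0 / np.sqrt(tau * (n + lam)))

        # psi | rest: normal with precision u0 + lambda * sum(tau_i).
        prec = u0 + lam * tau.sum()
        psi = rng.normal((u0 * psi0 + lam * (tau * mu).sum()) / prec,
                         1.0 / np.sqrt(prec))

        # lambda | rest: gamma(p/2 + gamma0, rate = delta0 + sum(...)/2).
        lam = rng.gamma(p / 2 + gamma0,
                        1.0 / (delta0 + 0.5 * (tau * (mu - psi) ** 2).sum()))

        mu_draws[t], inv_tau_draws[t] = mu, 1.0 / tau

    return mu_draws, inv_tau_draws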

03

Estimated posterior 

The hyperparameters to be used are given in the exercise; the sample sizes from Example 11.6.2 are

\({n_1} = 20\) for beef,

\({n_2} = 17\) for meat,

\({n_3} = 17\) for poultry,

\({n_4} = 9\) for specialty.

The means \(\mu_i\), \(i = 1, 2, 3, 4\), correspond to the same groups as the sample sizes \(n_i\), \(i = 1, 2, 3, 4\).

The simulation uses N = 20000 Markov chains with I = 100000 iterations/steps, so the results are obtained from a total of I = 100000 parameter vectors. Note that the code from the example must be modified, since the prior parameters here are different. The estimated posterior means of \(\mu_i\), \(i = 1, 2, 3, 4\), are, respectively, 157, 158.6, 118.9, and 160.4. Similarly, the estimated posterior means of \(1/\tau_i\), \(i = 1, 2, 3, 4\), are, respectively, 487, 598.9, 479.44, and 548.1.
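
As a runnable illustration of this step, the sketch from Step 2 can be called as below. The calorie counts themselves appear in Example 11.6.2 and are not reproduced here, so synthetic stand-in arrays with the same group sizes are used; substitute the real data to reproduce the estimates above.

# Synthetic stand-in data with the sample sizes above; replace these arrays
# with the actual calorie counts from Example 11.6.2.
rng = np.random.default_rng(1)
y = [rng.normal(157, 22, 20),   # beef
     rng.normal(159, 25, 17),   # meat
     rng.normal(119, 22, 17),   # poultry
     rng.normal(160, 23, 9)]    # specialty

mu_draws, inv_tau_draws = gibbs(y, n_iter=100_000)
print("posterior means of mu_i:   ", mu_draws.mean(axis=0))
print("posterior means of 1/tau_i:", inv_tau_draws.mean(axis=0))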


