
In Example 12.5.6, we modeled the parameters \({\tau _1}, \ldots ,{\tau _p}\) as i.i.d. having the gamma distribution with parameters \({\alpha _0}\) and \({\beta _0}\). We could have added a level to the hierarchical model that would allow the \({\tau _i}\)'s to come from a distribution with an unknown parameter. For example, suppose that we model the \({\tau _i}\)'s as conditionally i.i.d., having the gamma distribution with parameters \({\alpha _0}\) and \(\beta \) given \(\beta \). Let \(\beta \) be independent of \(\psi \) and \({\mu _1}, \ldots ,{\mu _p}\), with \(\beta \) having the prior distribution as specified in Example 12.5.6.

a. Write the product of the likelihood and the prior as a function of the parameters \({\mu _1}, \ldots ,{\mu _p},{\tau _1}, \ldots ,{\tau _p},\psi ,\) and \(\beta \).

b. Find the conditional distribution of each parameter given all of the others. Hint: For all the parameters besides \(\beta \), the distributions should be almost identical to those given in Example 12.5.6. Wherever \({\beta _0}\) appears, of course, something will have to change.

c. Use a prior distribution in which \({\psi _0} = 170.\) Fit the model to the hot dog calorie data from Example 11.6.2. Compute the posterior means of the four \({\mu _i}\)'s and of the \(1/{\tau _i}\)'s.

Short Answer


(a) The product of the two functions uses the parameters given in the exercise and the distribution of \(\beta \).

(b) The conditional distribution of \(\beta \) given all the other parameters is a gamma distribution.

(c) The estimated posterior means of \({\mu _i},\,i = 1,2,3,4\) are, respectively, \(156.8, 158.4, 120.3,\) and \(160.1.\)

The estimated posterior means for the \(1/{\tau _i},i = 1,2,3,4\) are, respectively, \(494.9,609.4,545.6,\) and \(570.5.\)

Step by step solution

01

(a) To find the product of the likelihood and the prior

One can get the desired product by using the conclusion of the mentioned example and the probability density functions of the normal and gamma distributions. The required product is a product of the following two functions

\(\exp \left( { - \frac{{{u_0}{{\left( {\psi - {\psi _0}} \right)}^2}}}{2} - \sum\limits_{i = 1}^p {{\tau _i}\left[ {\beta + \frac{{{n_i}{{\left( {{\mu _i} - {{\bar y}_i}} \right)}^2} + {w_i} + {\lambda _0}{{\left( {{\mu _i} - \psi } \right)}^2}}}{2}} \right]} } \right)\)

and the following expression

where \({w_i}\) is given in the same way as in the example:

\({w_i} = \sum\limits_{j = 1}^{{n_i}} {{{\left( {{y_{ij}} - {{\bar y}_i}} \right)}^2}} ,\quad i = 1,2, \ldots ,p\)
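In code, the within-group sums of squares \({w_i}\) are computed directly from each group's observations; a minimal NumPy sketch (the group values below are made-up placeholders, not the Example 11.6.2 data):

```python
import numpy as np

# Within-group sums of squares w_i = sum_j (y_ij - ybar_i)^2.
# The group data here are hypothetical placeholders, not the
# hot dog calorie data from Example 11.6.2.
groups = [
    np.array([157.0, 149.0, 163.0]),
    np.array([173.0, 191.0, 182.0]),
]

w = [float(np.sum((y - y.mean()) ** 2)) for y in groups]
```

Each entry of `w` plays the role of \({w_i}\) in the product above.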

The second term corresponds to the prior distribution.

02

(b) To find the conditional distributions of each parameter

As mentioned in the hint, the conditional distributions stay practically the same as in the example.

Viewing the product as a function of \({\tau _i}\), one gets a gamma distribution with parameters

\({\alpha _0} + \frac{{{n_i} + 1}}{2}\quad {\rm{and}}\quad \beta + \frac{{{n_i}{{\left( {{\mu _i} - {{\bar y}_i}} \right)}^2} + {w_i} + {\lambda _0}{{\left( {{\mu _i} - \psi } \right)}^2}}}{2}\)
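This gamma conditional can be sampled with NumPy. One detail worth a comment: the second parameter above is a *rate*, while NumPy's gamma sampler is parameterized by shape and *scale*, so the rate must be inverted. All numeric inputs in this sketch are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tau_i(alpha0, beta, n_i, mu_i, ybar_i, w_i, lambda0, psi):
    """Draw tau_i from its gamma full conditional.

    Shape = alpha0 + (n_i + 1)/2
    Rate  = beta + (n_i*(mu_i - ybar_i)**2 + w_i + lambda0*(mu_i - psi)**2)/2
    NumPy's gamma uses a *scale* parameter, so scale = 1/rate.
    """
    shape = alpha0 + (n_i + 1) / 2
    rate = beta + (n_i * (mu_i - ybar_i) ** 2 + w_i
                   + lambda0 * (mu_i - psi) ** 2) / 2
    return rng.gamma(shape, 1.0 / rate)

# Hypothetical values for illustration only.
tau = sample_tau_i(alpha0=1.0, beta=0.5, n_i=20, mu_i=156.0,
                   ybar_i=156.8, w_i=5000.0, lambda0=1.0, psi=170.0)
```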

Next, as a function of \(\psi \), by writing the product in a widely known form, one gets a normal distribution with parameters

Before commenting on the distribution of \(\beta \) given all the others, notice that the product above looks like the probability density function of a normal distribution when viewed as a function of \({\mu _i}\), for the same reason. The mean and precision are given by

\(\frac{{{n_i}{{\bar y}_i} + {\lambda _0}\psi }}{{{n_i} + {\lambda _0}}}\quad {\rm{and}}\quad {\tau _i}\left( {{n_i} + {\lambda _0}} \right)\)
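A matching sketch of the \({\mu _i}\) update, assuming the mean \((n_i\bar y_i + \lambda_0\psi)/(n_i+\lambda_0)\) and precision \(\tau_i(n_i+\lambda_0)\) as in Example 12.5.6; the numeric inputs are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mu_i(n_i, ybar_i, lambda0, psi, tau_i):
    """Draw mu_i from its normal full conditional.

    Mean      = (n_i*ybar_i + lambda0*psi) / (n_i + lambda0)
    Precision = tau_i * (n_i + lambda0), so sd = 1/sqrt(precision).
    """
    mean = (n_i * ybar_i + lambda0 * psi) / (n_i + lambda0)
    sd = 1.0 / np.sqrt(tau_i * (n_i + lambda0))
    return rng.normal(mean, sd)

# Hypothetical values for illustration only.
mu = sample_mu_i(n_i=20, ybar_i=156.8, lambda0=1.0, psi=170.0, tau_i=0.002)
```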

Finally, when the product is seen as a function of \(\beta \), it is proportional to the probability density function of a gamma distribution with parameters
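The parameters of \(\beta \)'s gamma conditional are not spelled out above. Assuming \(\beta \) itself has a gamma prior with hypothetical hyperparameters \(g_0\) (shape) and \(d_0\) (rate), standard gamma–gamma conjugacy for a rate parameter gives \(\beta \mid \text{rest} \sim \text{Gamma}\left(g_0 + p\,\alpha_0,\; d_0 + \sum_i \tau_i\right)\); a sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_beta(g0, d0, alpha0, tau):
    """Draw beta from its gamma full conditional, assuming a
    Gamma(g0, d0) prior on beta (hypothetical hyperparameters).

    With tau_i ~ Gamma(alpha0, beta) i.i.d. given beta, conjugacy gives
    Shape = g0 + p*alpha0,  Rate = d0 + sum(tau_i).
    """
    tau = np.asarray(tau)
    shape = g0 + tau.size * alpha0
    rate = d0 + tau.sum()
    return rng.gamma(shape, 1.0 / rate)

# Hypothetical values for illustration only.
beta = sample_beta(g0=1.0, d0=1.0, alpha0=1.0,
                   tau=[0.002, 0.0016, 0.0018, 0.0017])
```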

03

(c) To fit the model and compute the posterior means

The parameters that should be used are given in the exercise, and the others are:

\(\begin{aligned} {n_1} &= 20 \text{ for beef} \\ {n_2} &= 17 \text{ for meat} \\ {n_3} &= 17 \text{ for poultry} \\ {n_4} &= 9 \text{ for specialty.} \end{aligned}\)

The means \({\mu _i},\,i = 1,2,3,4\) correspond to the indices of \({n_i},\,i = 1,2,3,4.\)

The code used for this simulation runs a Markov chain with a burn-in of \(N = 20000\) steps followed by \(I = 100000\) iterations, so the result is obtained from a total of \(I = 100000\) parameter vectors.

Note that the given code should be changed, since the initial parameters are different. The estimated posterior means of \({\mu _i},\,i = 1,2,3,4\) are, respectively, \(156.8, 158.4, 120.3,\) and \(160.1.\) Similarly, the estimated posterior means for \(1/{\tau _i},\,i = 1,2,3,4\) are, respectively, \(494.9, 609.4, 545.6,\) and \(570.5.\)
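Putting the updates from part (b) together, the whole Gibbs sampler can be sketched as follows. This is an illustration only: the data are synthetic stand-ins generated here (not the actual hot dog calories of Example 11.6.2), the hyperparameters other than \(\psi_0 = 170\) and the gamma prior \((g_0, d_0)\) on \(\beta \) are hypothetical, the \(\psi \) conditional uses the standard form from Example 12.5.6 as an assumption, and the chain is far shorter than the \(20000/100000\) run above, so it will not reproduce the quoted posterior means.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data; the real analysis uses the hot dog
# calorie data of Example 11.6.2 (beef, meat, poultry, specialty).
groups = [
    rng.normal(157.0, 22.0, 20),  # "beef"      (n1 = 20)
    rng.normal(158.0, 25.0, 17),  # "meat"      (n2 = 17)
    rng.normal(120.0, 23.0, 17),  # "poultry"   (n3 = 17)
    rng.normal(160.0, 24.0, 9),   # "specialty" (n4 = 9)
]
p = len(groups)
n = np.array([len(y) for y in groups])
ybar = np.array([y.mean() for y in groups])
w = np.array([np.sum((y - y.mean()) ** 2) for y in groups])

# psi0 = 170 comes from the exercise; the rest are hypothetical
# placeholders, not the values the solution used.
alpha0, lambda0, u0, psi0 = 1.0, 1.0, 1.0, 170.0
g0, d0 = 1.0, 1.0             # assumed gamma prior on beta

burn_in, iters = 2000, 10000  # (the solution used 20000 / 100000)

# Initial values.
mu = ybar.copy()
tau = 1.0 / np.maximum(w / n, 1.0)
psi, beta = psi0, 1.0

mu_sum = np.zeros(p)
inv_tau_sum = np.zeros(p)

for t in range(burn_in + iters):
    # tau_i | rest ~ Gamma(alpha0 + (n_i+1)/2, rate); scale = 1/rate.
    rate = beta + (n * (mu - ybar) ** 2 + w + lambda0 * (mu - psi) ** 2) / 2
    tau = rng.gamma(alpha0 + (n + 1) / 2, 1.0 / rate)

    # mu_i | rest ~ Normal with the mean/precision derived in part (b).
    m = (n * ybar + lambda0 * psi) / (n + lambda0)
    mu = rng.normal(m, 1.0 / np.sqrt(tau * (n + lambda0)))

    # psi | rest: standard form from Example 12.5.6 (assumed here).
    prec = u0 + lambda0 * tau.sum()
    psi = rng.normal((u0 * psi0 + lambda0 * np.sum(tau * mu)) / prec,
                     1.0 / np.sqrt(prec))

    # beta | rest ~ Gamma(g0 + p*alpha0, d0 + sum(tau_i)) (assumed prior).
    beta = rng.gamma(g0 + p * alpha0, 1.0 / (d0 + tau.sum()))

    if t >= burn_in:
        mu_sum += mu
        inv_tau_sum += 1.0 / tau

mu_post = mu_sum / iters            # posterior means of the mu_i
inv_tau_post = inv_tau_sum / iters  # posterior means of the 1/tau_i
```

With the real data and the real hyperparameters plugged in, `mu_post` and `inv_tau_post` are the quantities reported in the short answer.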

