
Let \(f\left( {x_1},{x_2} \right)\) be a joint p.d.f. Suppose that \(\left( x_1^{(i)}, x_2^{(i)} \right)\) has the joint p.d.f. \(f\). Let \(\left( x_1^{(i+1)}, x_2^{(i+1)} \right)\) be the result of applying steps 2 and 3 of the Gibbs sampling algorithm on page 824. Prove that \(\left( x_1^{(i+1)}, x_2^{(i)} \right)\) and \(\left( x_1^{(i+1)}, x_2^{(i+1)} \right)\) also have the joint p.d.f. \(f\).

Short Answer


The Gibbs sampling algorithm.

\(\left( 1. \right)\) Pick a starting value \(x_2^{(0)}\) for \(x_2\), and let \(i = 0\).

\(\left( 2. \right)\) Let \(x_1^{(i+1)}\) be a simulated value from the conditional distribution of \(X_1\) given that \(X_2 = x_2^{(i)}\).

\(\left( 3. \right)\) Let \(x_2^{(i+1)}\) be a simulated value from the conditional distribution of \(X_2\) given that \(X_1 = x_1^{(i+1)}\).

Use the Gibbs sampling algorithm together with the factorization \(f\left( {x_1},{x_2} \right) = g_1\left( x_1 \mid x_2 \right) f_2\left( x_2 \right) = g_2\left( x_2 \mid x_1 \right) f_1\left( x_1 \right)\).

Step by step solution

01

Definition of Gibbs Sampling Algorithm

Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for sampling from a complex joint distribution; it iteratively draws a value for each variable from its conditional distribution given the current values of the other variables.

The Gibbs Sampling Algorithm:

The steps of the algorithm are

\(\left( 1. \right)\) Pick a starting value \(x_2^{(0)}\) for \(x_2\), and let \(i = 0\).

\(\left( 2. \right)\) Let \(x_1^{(i+1)}\) be a simulated value from the conditional distribution of \(X_1\) given that \(X_2 = x_2^{(i)}\).

\(\left( 3. \right)\) Let \(x_2^{(i+1)}\) be a simulated value from the conditional distribution of \(X_2\) given that \(X_1 = x_1^{(i+1)}\).

\(\left( 4. \right)\) Replace \(i\) by \(i+1\) and repeat steps 2 and 3; a code sketch of these steps appears after this list.
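As an illustration, here is a minimal sketch of these steps in Python, assuming for concreteness a standard bivariate normal target with correlation \(\rho\), for which both conditional distributions are themselves normal. The function name and the choice of target are illustrative assumptions, not part of the text.

import numpy as np

def gibbs_bivariate_normal(rho, n_iter=10_000, x2_start=0.0, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each conditional X_j given the other coordinate equal to x is
    Normal(rho * x, sd = sqrt(1 - rho**2)), so steps 2 and 3 of the
    algorithm reduce to single normal draws.
    """
    rng = np.random.default_rng(seed)
    cond_sd = np.sqrt(1.0 - rho ** 2)
    x2 = x2_start                              # step 1: starting value for x2, i = 0
    samples = []
    for _ in range(n_iter):
        x1 = rng.normal(rho * x2, cond_sd)     # step 2: draw x1 from f(x1 | x2)
        x2 = rng.normal(rho * x1, cond_sd)     # step 3: draw x2 from f(x2 | x1)
        samples.append((x1, x2))               # step 4: repeat with i replaced by i + 1
    return np.array(samples)

# Example use: after discarding a burn-in, the pairs behave like draws from f.
draws = gibbs_bivariate_normal(rho=0.8)[1000:]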

Let \(f\left( {x_1},{x_2} \right)\) be the joint p.d.f. of \(\left( x_1^{(i)}, x_2^{(i)} \right)\). The conditional p.d.f. of \(X_1\) given that \(X_2 = x_2\), denoted \(g_1\), is

\(g_1\left( x_1 \mid x_2 \right) = \dfrac{f\left( {x_1},{x_2} \right)}{f_2\left( x_2 \right)}\)

02

Marginal probability density function

In the case of a pair of random variables (X, Y), the density function of random variable X (or Y) considered alone is known as the marginal density function.

Here \(f_2\) is the marginal probability density function of \(x_2^{(i)}\).
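In the notation of this problem, the marginals are obtained by integrating the joint p.d.f. over the other coordinate (a standard fact, stated here for reference):

\[ f_2\left( x_2 \right) = \int_{-\infty}^{\infty} f\left( {x_1},{x_2} \right)\, dx_1, \qquad f_1\left( x_1 \right) = \int_{-\infty}^{\infty} f\left( {x_1},{x_2} \right)\, dx_2 . \]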

In step 2, \(x_1^{(i+1)}\) is a simulated value from the conditional distribution of \(X_1\) given that \(X_2 = x_2^{(i)}\). Since \(x_2^{(i)}\) has marginal p.d.f. \(f_2\), the joint p.d.f. of \(\left( x_1^{(i+1)}, x_2^{(i)} \right)\) is the product of \(g_1\) and \(f_2\):

\({g_1}\left( {{x_1}\mid {x_2}} \right)\,{f_2}\left( {{x_2}} \right) = f\left( {{x_1},{x_2}} \right).\)

Hence \(\left( x_1^{(i+1)}, x_2^{(i)} \right)\) has the joint p.d.f. \(f\), the same as \(\left( x_1^{(i)}, x_2^{(i)} \right)\). In particular, the marginal distribution of \(x_1^{(i+1)}\) must be the same as the marginal distribution of \(x_1^{(i)}\), namely \(f_1\), because integrating \(f\left( {x_1},{x_2} \right)\) over all \(x_2\) gives the same function in both cases.

Similarly, in step 3 of the algorithm, \(x_2^{(i+1)}\) is a simulated value from the conditional distribution of \(X_2\) given that \(X_1 = x_1^{(i+1)}\). Since \(x_1^{(i+1)}\) has marginal p.d.f. \(f_1\), the joint p.d.f. of \(\left( x_1^{(i+1)}, x_2^{(i+1)} \right)\) is \(g_2\left( x_2 \mid x_1 \right) f_1\left( x_1 \right) = f\left( {x_1},{x_2} \right)\), so \(\left( x_1^{(i+1)}, x_2^{(i+1)} \right)\) has the same joint distribution as \(\left( x_1^{(i)}, x_2^{(i)} \right)\), as summarized below.
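Here \(g_2\left( x_2 \mid x_1 \right) = f\left( {x_1},{x_2} \right)/f_1\left( x_1 \right)\) denotes the conditional p.d.f. of \(X_2\) given \(X_1 = x_1\), defined analogously to \(g_1\). The two factorizations used in the argument can be written side by side:

\[ \begin{aligned} \text{joint p.d.f. of } \left( x_1^{(i+1)}, x_2^{(i)} \right) &= g_1\left( x_1 \mid x_2 \right) f_2\left( x_2 \right) = f\left( {x_1},{x_2} \right), \\ \text{joint p.d.f. of } \left( x_1^{(i+1)}, x_2^{(i+1)} \right) &= g_2\left( x_2 \mid x_1 \right) f_1\left( x_1 \right) = f\left( {x_1},{x_2} \right). \end{aligned} \]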

Hence, both \(\left( x_1^{(i+1)}, x_2^{(i)} \right)\) and \(\left( x_1^{(i+1)}, x_2^{(i+1)} \right)\) have the joint p.d.f. \(f\), as required.


Most popular questions from this chapter

In Example 12.5.6, we used a hierarchical model. In that model, the parameters \(\mu_1, \ldots, \mu_P\) were independent random variables with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda_0 \tau_i\) conditional on \(\psi\) and \(\tau_1, \ldots, \tau_P\). To make the model more general, we could also replace \(\lambda_0\) with an unknown parameter \(\lambda\). That is, let the \(\mu_i\)'s be independent with \(\mu_i\) having the normal distribution with mean \(\psi\) and precision \(\lambda \tau_i\) conditional on \(\psi\), \(\lambda\), and \(\tau_1, \ldots, \tau_P\). Let \(\lambda\) have the gamma distribution with parameters \(\gamma_0\) and \(\delta_0\), and let \(\lambda\) be independent of \(\psi\) and \(\tau_1, \ldots, \tau_P\). The remaining parameters have the prior distributions stated in Example 12.5.6.

a. Write the product of the likelihood and the prior as a function of the parameters \(\mu_1, \ldots, \mu_P\), \(\tau_1, \ldots, \tau_P\), \(\psi\), and \(\lambda\).

b. Find the conditional distributions of each parameter given all of the others. Hint: For all the parameters besides \(\lambda\), the distributions should be almost identical to those given in Example 12.5.6. But wherever \(\lambda_0\) appears, of course, something will have to change.

c. Use a prior distribution in which \(\alpha_0 = 1\), \(\beta_0 = 0.1\), \(u_0 = 0.001\), \(\gamma_0 = \delta_0 = 1\), and \(\psi_0 = 170\). Fit the model to the hot dog calorie data from Example 11.6.2. Compute the posterior means of the four \(\mu_i\)'s and \(1/\tau_i\)'s.

If \(X\) has the p.d.f. \(1/x^2\) for \(x > 1\), the mean of \(X\) is infinite. What would you expect to happen if you simulated a large number of random variables with this p.d.f. and computed their average?

Suppose that we wish to approximate the integral \(\int g(x)\,dx\). Suppose that we have a p.d.f. \(f\) that we shall use as an importance function. Suppose that \(g(x)/f(x)\) is bounded. Prove that the importance sampling estimator has finite variance.

Let \(U\) have the uniform distribution on the interval \((0,1)\). Show that the random variable \(W\) defined in Eq. (12.4.6) has the p.d.f. \(h\) defined in Eq. (12.4.5).

Let X and Y be independent random variables with \(X\) having the t distribution with five degrees of freedom and Y having the t distribution with three degrees of freedom. We are interested in \(E\left( {|X - Y|} \right).\)

a. Simulate 1000 pairs of \(\left( {{X_i},{Y_i}} \right)\) each with the above joint distribution and estimate \(E\left( {|X - Y|} \right).\)

b. Use your 1000 simulated pairs to estimate the variance of \(|X - Y|\) also.

c. Based on your estimated variance, how many simulations would you need to be 99 percent confident that your estimator is within the actual mean?
