
Consider, once again, the model described in Example 7.5.10. Assume that \(n = 10\) and that the observed values of \({X_1}, \ldots ,{X_{10}}\) are

\(-0.92,\ -0.33,\ -0.09,\ 0.27,\ 0.50,\ -0.60,\ 1.66,\ -1.86,\ 3.29,\ 2.30\).

a. Fit the model to the observed data using the Gibbs sampling algorithm developed in the previous exercise. Use the following prior hyperparameters: \({\alpha _0} = 1,\ {\beta _0} = 1,\ {\mu _0} = 0,\) and \({\lambda _0} = 1\).

b. For each \(i\), estimate the posterior probability that \({X_i}\) came from the normal distribution with unknown mean and variance.

Short Answer


a) Fit the model with the Gibbs sampling algorithm, whose steps are listed in the solution below.

b) For \(i = 1,2, \ldots ,10\), the estimated posterior probabilities are

\(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)

Step by step solution

01

(a) Definition of Gibbs sampling

Gibbs sampling is a Markov chain Monte Carlo method for sampling from a complex joint distribution. It iteratively draws each variable from its conditional distribution given the current values of the other variables.

The Gibbs sampling algorithm consists of the following steps:

(1.) Pick a starting value \({x_2}^{\left( 0 \right)}\) for \({x_2}\), and let \(i = 0\).

(2.) Let \({x_1}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({X_1}\) given that \({X_2} = {x_2}^{\left( i \right)}\).

(3.) Let \({x_2}^{\left( {i + 1} \right)}\) be a simulated value from the conditional distribution of \({X_2}\) given that \({X_1} = {x_1}^{\left( {i + 1} \right)}\).

(4.) Replace \(i\) by \(i + 1\) and repeat steps 2 and 3 until convergence.

Given the data, the hyperparameters, and the Gibbs algorithm above, one runs \(10\) Markov chains with a total of \(N = 100000\) samples.

The retained draws provide the values needed to estimate the probabilities of part (b).

In practice, one may use a for loop to fit the model described in the previous exercise.
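The book leaves the code to the reader; below is a minimal Python sketch of a single (shorter) chain rather than the \(10\) chains of \(N = 100000\) draws mentioned above. It assumes the mixture form of the model: each observation comes, with probability \(1/2\) each, from a standard normal or from a normal with unknown mean \(\mu\) and precision \(\tau\), where \((\mu ,\tau )\) has the normal-gamma prior with the stated hyperparameters. The equal mixture weights, the \(N(0,1)\) component, and the conjugate normal-gamma updates are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([-0.92, -0.33, -0.09, 0.27, 0.50,
              -0.60, 1.66, -1.86, 3.29, 2.30])
alpha0, beta0, mu0, lam0 = 1.0, 1.0, 0.0, 1.0   # prior hyperparameters

def norm_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def gibbs(x, n_iter=10000, burn=1000):
    mu, tau = 0.0, 1.0                       # starting values
    z_draws = []
    for t in range(n_iter):
        # Step 1: sample the indicators z_i given (mu, tau).
        p1 = 0.5 * norm_pdf(x, mu, 1.0 / tau)    # unknown-mean component
        p0 = 0.5 * norm_pdf(x, 0.0, 1.0)         # standard normal component
        z = rng.random(x.size) < p1 / (p0 + p1)
        # Step 2: sample (mu, tau) given z from the normal-gamma posterior
        # based on the observations currently assigned to the unknown component.
        n1 = z.sum()
        xbar = x[z].mean() if n1 > 0 else 0.0
        ss = ((x[z] - xbar) ** 2).sum() if n1 > 0 else 0.0
        lam_n = lam0 + n1
        mu_n = (lam0 * mu0 + n1 * xbar) / lam_n
        alpha_n = alpha0 + n1 / 2.0
        beta_n = beta0 + ss / 2.0 + lam0 * n1 * (xbar - mu0) ** 2 / (2 * lam_n)
        tau = rng.gamma(alpha_n, 1.0 / beta_n)   # numpy gamma takes shape, scale
        mu = rng.normal(mu_n, 1.0 / np.sqrt(lam_n * tau))
        if t >= burn:
            z_draws.append(z.copy())
    # Posterior probability that each x_i came from the unknown-mean component.
    return np.mean(z_draws, axis=0)

probs = gibbs(x)
print(np.round(probs, 3))
```

With a longer run (and several chains) the averages stabilize near the probabilities reported in part (b).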

02

(b) Estimating the probabilities

For each \(i = 1,2, \ldots ,10\), one computes an estimate of the posterior probability that \({X_i}\) came from the normal distribution with unknown mean and variance.

Using the method in the previous exercise, the 10 estimated probabilities are as follows

\(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)
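Each estimate is the Monte Carlo average of the sampled component indicators over the retained Gibbs draws (the notation \(z_i^{(k)}\) for the \(k\)-th retained indicator of observation \(i\) is ours):

```latex
\Pr\left(Z_i = 1 \mid x_1, \ldots, x_{10}\right) \approx \frac{1}{N}\sum_{k=1}^{N} z_i^{(k)}
```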

Hence,

(a) Use the Gibbs sampling algorithm.

(b) \(0.286,\ 0.289,\ 0.306,\ 0.341,\ 0.365,\ 0.285,\ 0.659,\ 0.378,\ 0.951,\ 0.826\)


Most popular questions from this chapter

In Example 12.5.6, we modeled the parameters \({\tau _1}, \ldots ,{\tau _p}\) as i.i.d. having the gamma distribution with parameters \({\alpha _0}\) and \({\beta _0}\). We could have added a level to the hierarchical model that would allow the \({\tau _i}\)'s to come from a distribution with an unknown parameter. For example, suppose that we model the \({\tau _i}\)'s as conditionally independent, having the gamma distribution with parameters \({\alpha _0}\) and \(\beta \) given \(\beta \). Let \(\beta \) be independent of \(\psi \) and \({\mu _1}, \ldots ,{\mu _p}\), with \(\beta \) having the prior distribution as specified in Example 12.5.6.

a. Write the product of the likelihood and the prior as a function of the parameters \({\mu _1}, \ldots ,{\mu _p},{\tau _1}, \ldots ,{\tau _p},\psi ,\beta \).

b. Find the conditional distributions of each parameter given all of the others. Hint: For all the parameters besides \(\beta \), the distributions should be almost identical to those given in Example 12.5.6. But wherever \({\beta _0}\) appears, of course, something will have to change.

c. Use a prior distribution in which \({\psi _0} = 170\). Fit the model to the hot dog calorie data from Example 11.6.2. Compute the posterior means of the four \({\mu _i}\)'s and \(1/{\tau _i}\)'s.

The \({\chi ^2}\) goodness-of-fit test (see Chapter 10) is based on an asymptotic approximation to the distribution of the test statistic. For small to medium samples, the asymptotic approximation might not be very good. Simulation can be used to assess how good the approximation is. Simulation can also be used to estimate the power function of a goodness-of-fit test. For this exercise, assume that we are performing the test that was done in Example 10.1.6. The idea illustrated in this exercise applies to all such problems.

a. Simulate \(v = 10,000\) samples of size \(n = 23\) from the normal distribution with mean \(3.912\) and variance \(0.25\). For each sample, compute the \({\chi ^2}\) goodness-of-fit statistic \(Q\) using the same four intervals that were used in Example 10.1.6. Use the simulations to estimate the probability that \(Q\) is greater than or equal to the \(0.9\), \(0.95\), and \(0.99\) quantiles of the \({\chi ^2}\) distribution with three degrees of freedom.

b. Suppose that we are interested in the power function of a \({\chi ^2}\) goodness-of-fit test when the actual distribution of the data is the normal distribution with mean \(4.2\) and variance \(0.8\). Use simulation to estimate the power function of the level \(0.1\), \(0.05\), and \(0.01\) tests at the alternative specified.
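Part (a) can be sketched with a short simulation. The intervals below (quartiles of the \(N(3.912,\,0.25)\) distribution) stand in for the four intervals of Example 10.1.6 and are an assumption of this sketch; the \({\chi ^2}\) quantiles with three degrees of freedom are hardcoded to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
v, n = 10_000, 23
mean, sd = 3.912, 0.5                 # variance 0.25

# Assumed intervals: the quartiles of N(3.912, 0.25), each with probability 1/4.
edges = np.array([-np.inf, 3.575, 3.912, 4.249, np.inf])
expected = n * np.array([0.25, 0.25, 0.25, 0.25])

def q_stat(sample):
    counts, _ = np.histogram(sample, bins=edges)
    return ((counts - expected) ** 2 / expected).sum()

Q = np.array([q_stat(rng.normal(mean, sd, n)) for _ in range(v)])

# 0.9, 0.95, 0.99 quantiles of the chi-square distribution with 3 d.f.
quantiles = (6.251, 7.815, 11.345)
probs_q = [float(np.mean(Q >= q)) for q in quantiles]
print(probs_q)
```

If the asymptotic approximation were exact, the three estimates would be close to \(0.1\), \(0.05\), and \(0.01\); the gaps measure how good the approximation is at \(n = 23\). Part (b) only requires changing the sampling distribution to \(N(4.2,\,0.8)\) while keeping the same intervals and cutoffs.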

Consider the power calculation done in Example 9.5.5.

a. Simulate \({v_0} = 1000\) i.i.d. noncentral t pseudo-random variables with 14 degrees of freedom and noncentrality parameter \(1.936.\)

b. Estimate the probability that a noncentral t random variable with 14 degrees of freedom and noncentrality parameter \(1.936\) is at least \(1.761.\) Also, compute the standard simulation error.

c. Suppose that we want our estimator of the noncentral t probability in part (b) to be closer than \(0.01\) the true value with probability \(0.99.\) How many noncentral t random variables do we need to simulate?
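All three parts can be sketched directly from the definition of a noncentral t variable, \(T = (Z + \delta )/\sqrt {V/m} \) with \(Z\) standard normal and \(V\) chi-square with \(m\) degrees of freedom. The use of the normal approximation with the bound \(p(1 - p) \le 1/4\) in part (c) is one common approach, assumed here.

```python
import numpy as np

rng = np.random.default_rng(2)
v0, df, delta = 1000, 14, 1.936

# Part (a): build noncentral t variables from their definition.
Z = rng.normal(size=v0)
V = rng.chisquare(df, size=v0)
T = (Z + delta) / np.sqrt(V / df)

# Part (b): estimate P(T >= 1.761) and the standard simulation error.
p_hat = float(np.mean(T >= 1.761))
se = float(np.sqrt(p_hat * (1 - p_hat) / v0))
print(p_hat, se)

# Part (c): normal approximation with p(1-p) <= 1/4; 2.576 is the
# 0.995 quantile of the standard normal distribution.
v_needed = int(np.ceil((2.576 / 0.01) ** 2 * 0.25))
print(v_needed)
```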

Suppose that we wish to approximate the integral \(\int g(x)\,dx\). Suppose that we have a p.d.f. \(f\) that we shall use as an importance function. Suppose that \(g(x)/f(x)\) is bounded. Prove that the importance sampling estimator has finite variance.
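A sketch of the argument: write \(Y = g(X)/f(X)\) with \(X \sim f\), and let \(c\) denote the assumed bound on \(\left| g(x)/f(x) \right|\). Then

```latex
E\left[Y^2\right] = \int \left(\frac{g(x)}{f(x)}\right)^2 f(x)\,dx
\le c^2 \int f(x)\,dx = c^2 < \infty ,
```

so \(\operatorname{Var} (Y) \le E[{Y^2}] \le {c^2}\), and the importance sampling estimator \(\frac{1}{v}\sum\nolimits_{i = 1}^v {{Y_i}} \) has variance \(\operatorname{Var} (Y)/v \le {c^2}/v < \infty \).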

If \(X\) has the p.d.f. \(1/{x^2}\) for \(x > 1\), the mean of \(X\) is infinite. What would you expect to happen if you simulated a large number of random variables with this p.d.f. and computed their average?
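One can try it: by inversion, \(F(x) = 1 - 1/x\) for \(x > 1\), so \(X = 1/(1 - U)\) with \(U\) uniform on \((0,1)\) has this p.d.f. The running average then drifts upward rather than settling near a limit. A minimal sketch:

```python
import numpy as np

# Inverse-CDF sampling: F(x) = 1 - 1/x for x > 1, so X = 1/(1 - U)
# has p.d.f. 1/x^2 on (1, infinity) when U is uniform on (0, 1).
rng = np.random.default_rng(3)
u = rng.random(1_000_000)
x = 1.0 / (1.0 - u)

# The sample mean does not converge: rare, enormous draws keep
# pushing the running average upward as the sample grows.
for n in (10**2, 10**4, 10**6):
    print(n, x[:n].mean())
```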
