
Consider the following mixed discrete-continuous pdf for a random vector \((X, Y)\) (discussed in Casella and George, 1992): $$ f(x, y) \propto \binom{n}{x} y^{x+\alpha-1}(1-y)^{n-x+\beta-1}, \quad x = 0, 1, \ldots, n, \; 0 < y < 1, $$ where \(\alpha>0\) and \(\beta>0\). (a) Show that this function is indeed a joint, mixed discrete-continuous pdf by finding the proper constant of proportionality. (b) Determine the conditional pdfs \(f(x \mid y)\) and \(f(y \mid x)\). (c) Write the Gibbs sampler algorithm to generate random samples on \(X\) and \(Y\). (d) Determine the marginal distributions of \(X\) and \(Y\).

Short Answer

Expert verified
The constant of proportionality is \(1/B(\alpha,\beta) = \Gamma(\alpha+\beta)/[\Gamma(\alpha)\Gamma(\beta)]\), found by summing over \(x\) (binomial theorem) and integrating over \(y\). The conditionals are \(X \mid Y = y \sim b(n, y)\) and \(Y \mid X = x \sim \text{beta}(x+\alpha,\, n-x+\beta)\). The Gibbs sampler alternates draws from these two conditional pdfs. Marginally, \(Y \sim \text{beta}(\alpha, \beta)\) and \(X\) follows the beta-binomial distribution.

Step by step solution

01

Show this is a mixed discrete-continuous pdf

To show that \(f(x, y)\) is a joint, mixed discrete-continuous pdf, sum over the discrete variable \(x = 0, 1, \ldots, n\), integrate over the continuous variable \(0 < y < 1\), and choose the normalizing constant \(C\) so the total equals 1. By the binomial theorem, \(\sum_{x=0}^{n}\binom{n}{x} y^{x}(1-y)^{n-x} = 1\), so the combined sum-integral reduces to \(\int_0^1 y^{\alpha-1}(1-y)^{\beta-1}\,dy = B(\alpha, \beta)\). Hence the proper constant of proportionality is \(C = 1/B(\alpha,\beta) = \Gamma(\alpha+\beta)/[\Gamma(\alpha)\Gamma(\beta)]\).
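As a sanity check, the normalization above can be verified numerically: with \(C = 1/B(\alpha,\beta)\), summing over \(x\) and integrating over \(y\) should return 1. This sketch uses only the standard library and assumes illustrative parameter values \(n=5\), \(\alpha=2\), \(\beta=3\) (not specified in the exercise).

```python
import math

# Illustrative parameters (assumed, not from the exercise)
n, alpha, beta = 5, 2.0, 3.0

# Candidate constant: C = 1/B(alpha, beta) = Gamma(a+b) / (Gamma(a) * Gamma(b))
C = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))

def joint(x, y):
    """Normalized joint pdf: C * C(n,x) * y^(x+alpha-1) * (1-y)^(n-x+beta-1)."""
    return C * math.comb(n, x) * y**(x + alpha - 1) * (1 - y)**(n - x + beta - 1)

# Sum over x = 0..n; integrate over 0 < y < 1 with the trapezoid rule
m = 10_000
ys = [i / m for i in range(m + 1)]
total = 0.0
for x in range(n + 1):
    vals = [joint(x, y) for y in ys]
    total += sum((vals[i] + vals[i + 1]) / (2 * m) for i in range(m))

print(round(total, 4))  # should be very close to 1.0
```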
02

Compute the conditional pdfs

The conditional pdf \(f(x \mid y)\) equals the joint pdf \(f(x, y)\) divided by the marginal pdf \(f(y)\). For fixed \(y\), the joint kernel is proportional in \(x\) to \(\binom{n}{x} y^{x}(1-y)^{n-x}\), so \(X \mid Y = y \sim b(n, y)\). Likewise, \(f(y \mid x) = f(x, y) / f(x)\) is proportional in \(y\) to \(y^{x+\alpha-1}(1-y)^{n-x+\beta-1}\), so \(Y \mid X = x \sim \text{beta}(x+\alpha,\, n-x+\beta)\).
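The binomial form of \(f(x \mid y)\) can be checked directly: normalizing the joint kernel over \(x\) at a fixed \(y\) must reproduce the \(b(n, y)\) pmf, because the common factor \(y^{\alpha-1}(1-y)^{\beta-1}\) cancels. A minimal sketch, with illustrative values of \(n\), \(\alpha\), \(\beta\), and \(y\) (assumptions, not from the exercise):

```python
import math

n, alpha, beta = 5, 2.0, 3.0
y = 0.4  # any fixed value in (0, 1)

# Unnormalized joint kernel at fixed y, as a function of x
kernel = [math.comb(n, x) * y**(x + alpha - 1) * (1 - y)**(n - x + beta - 1)
          for x in range(n + 1)]
total = sum(kernel)
cond = [k / total for k in kernel]  # f(x | y) after normalizing over x

# Binomial b(n, y) pmf for comparison
binom = [math.comb(n, x) * y**x * (1 - y)**(n - x) for x in range(n + 1)]

print(all(abs(c - b) < 1e-12 for c, b in zip(cond, binom)))  # True
```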
03

Write the Gibbs sampler algorithm

The Gibbs sampler updates one variable at a time, holding the other at its current value. Initialize \(y_0 \in (0, 1)\). At iteration \(t\): generate \(x_{t+1} \mid y = y_t \sim b(n, y_t)\) using the conditional pdf \(f(x \mid y)\), then generate \(y_{t+1} \mid x = x_{t+1} \sim \text{beta}(x_{t+1}+\alpha,\, n-x_{t+1}+\beta)\) using \(f(y \mid x)\). Repeat for a large number of iterations and discard an initial burn-in; the remaining pairs \((x_t, y_t)\) are (dependent) draws from \(f(x, y)\).
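The two-step update above can be sketched in a few lines of standard-library Python. The full conditionals follow from the given joint kernel: \(X \mid y \sim b(n, y)\) and \(Y \mid x \sim \text{beta}(x+\alpha,\, n-x+\beta)\). The parameter values, iteration count, and burn-in length are illustrative assumptions.

```python
import random

random.seed(0)
n, alpha, beta = 5, 2.0, 3.0   # illustrative parameters (assumed)
iters = 20_000

def sample_binomial(n, p):
    """Draw from b(n, p) as a sum of n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

y = 0.5                         # initialize y_0 in (0, 1)
xs, ys = [], []
for _ in range(iters):
    x = sample_binomial(n, y)                        # x_{t+1} | y_t  ~ b(n, y_t)
    y = random.betavariate(x + alpha, n - x + beta)  # y_{t+1} | x_{t+1} ~ beta(x+a, n-x+b)
    xs.append(x)
    ys.append(y)

burn = 2_000  # discard warm-up draws
print(sum(ys[burn:]) / (iters - burn))  # approx. E(Y) = alpha/(alpha+beta) = 0.4
```

Sampling the binomial as a sum of Bernoulli trials keeps the sketch dependency-free; in practice a library routine for binomial draws would be used.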
04

Determine the marginal distributions

The marginal distribution of one variable is obtained by integrating or summing the joint pdf over the other. Integrating \(f(x, y)\) over \(0 < y < 1\) gives \(f(x) = \binom{n}{x} B(x+\alpha,\, n-x+\beta)/B(\alpha,\beta)\) for \(x = 0, 1, \ldots, n\), the beta-binomial distribution. Summing \(f(x, y)\) over \(x\) leaves \(f(y) = y^{\alpha-1}(1-y)^{\beta-1}/B(\alpha,\beta)\), i.e., \(Y \sim \text{beta}(\alpha, \beta)\).
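The beta-binomial marginal of \(X\) can be checked numerically: its pmf should sum to 1 and have mean \(n\alpha/(\alpha+\beta)\) (since \(E(X) = E[E(X \mid Y)] = nE(Y)\)). A short sketch under the same illustrative parameters as above:

```python
import math

n, alpha, beta = 5, 2.0, 3.0  # illustrative parameters (assumed)

def beta_fn(a, b):
    """Beta function B(a, b) via gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Beta-binomial pmf: f(x) = C(n,x) * B(x+alpha, n-x+beta) / B(alpha, beta)
pmf = [math.comb(n, x) * beta_fn(x + alpha, n - x + beta) / beta_fn(alpha, beta)
       for x in range(n + 1)]

print(round(sum(pmf), 6))  # 1.0: a valid pmf
mean = sum(x * p for x, p in enumerate(pmf))
print(round(mean, 6))      # n*alpha/(alpha+beta) = 2.0
```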

