
Consider the hierarchical Bayes model $$ \begin{aligned} Y \mid p & \sim b(n, p) \\ p \mid \theta & \sim h(p \mid \theta)=\theta p^{\theta-1}, \quad 0<p<1, \; \theta>0 \\ \theta & \sim \Gamma(1, a), \quad a>0 \text { is specified. } \end{aligned} $$ (a) Assuming squared-error loss, write the Bayes estimate of \(p\) as in expression (11.4.3). Integrate relative to \(\theta\) first. Show that both the numerator and denominator are expectations of a beta distribution with parameters \(y+1\) and \(n-y+1\). (b) Recall the discussion around expression (11.3.2). Write an explicit Monte Carlo algorithm to obtain the Bayes estimate in part (a).

Short Answer

Integrating with respect to \(\theta\) first leaves the weight function \(w(p)=\left[a p\left(\frac{1}{a}-\log p\right)^{2}\right]^{-1}\) inside both integrals of the Bayes estimate, so \[\delta(y)=\frac{E[P\, w(P)]}{E[w(P)]},\] where both expectations are taken with respect to a beta distribution with parameters \(y+1\) and \(n-y+1\). The estimate can be obtained by Monte Carlo: generate \(p_{1}, \ldots, p_{m}\) from the beta\((y+1, n-y+1)\) distribution and estimate \(\delta(y)\) by the ratio of averages \(\sum_{i=1}^{m} p_{i} w(p_{i}) / \sum_{i=1}^{m} w(p_{i})\).

Step by step solution

01

Finding the Bayes estimate of p

Under squared-error loss, the Bayes estimate of \(p\) is the posterior mean, which expression (11.4.3) writes as the ratio \[\delta(y)=\frac{\int_{0}^{1} \int_{0}^{\infty} p\, f(y \mid p)\, h(p \mid \theta)\, \psi(\theta)\, d\theta\, dp}{\int_{0}^{1} \int_{0}^{\infty} f(y \mid p)\, h(p \mid \theta)\, \psi(\theta)\, d\theta\, dp},\] where \(f(y \mid p)=\binom{n}{y} p^{y}(1-p)^{n-y}\) is the binomial pmf, \(h(p \mid \theta)=\theta p^{\theta-1}\), and \(\psi(\theta)=a^{-1} e^{-\theta / a}\) is the \(\Gamma(1, a)\) pdf. Integrating with respect to \(\theta\) first, write \(p^{\theta-1}=p^{-1} e^{\theta \log p}\); since \(\int_{0}^{\infty} \theta e^{-c \theta}\, d\theta=c^{-2}\) for \(c>0\), and \(\log p<0\) on \(0<p<1\), \[\int_{0}^{\infty} \theta p^{\theta-1} \frac{1}{a} e^{-\theta / a}\, d\theta=\frac{1}{a p} \int_{0}^{\infty} \theta e^{-\theta\left(\frac{1}{a}-\log p\right)}\, d\theta=\frac{1}{a p\left(\frac{1}{a}-\log p\right)^{2}} \equiv w(p).\] The binomial constant \(\binom{n}{y}\) cancels in the ratio, so \[\delta(y)=\frac{\int_{0}^{1} p \cdot p^{y}(1-p)^{n-y}\, w(p)\, dp}{\int_{0}^{1} p^{y}(1-p)^{n-y}\, w(p)\, dp}.\]
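As a quick numerical check of this \(\theta\)-integration (a sketch only; the values of \(a\) and \(p\) below are arbitrary illustrations), the closed form \(w(p)\) can be compared against direct quadrature with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary illustrative values: prior scale a and a point p in (0, 1).
a, p = 2.0, 0.3

# Integrate theta * p^(theta - 1) * (1/a) * exp(-theta/a) over theta in (0, inf).
numeric, _ = quad(lambda t: t * p ** (t - 1) * np.exp(-t / a) / a, 0, np.inf)

# Closed form: w(p) = 1 / (a * p * (1/a - log p)^2).
closed = 1.0 / (a * p * (1.0 / a - np.log(p)) ** 2)

print(numeric, closed)  # the two values agree
```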
02

Expressing the numerator and denominator as expectations of a beta distribution

The kernel \(p^{y}(1-p)^{n-y}\) is, up to the normalizing constant \(B(y+1, n-y+1)=\int_{0}^{1} t^{y}(1-t)^{n-y}\, dt\), the pdf of a beta distribution with parameters \(y+1\) and \(n-y+1\). Letting \(P \sim \operatorname{beta}(y+1, n-y+1)\), the numerator and denominator become \[\int_{0}^{1} p \cdot p^{y}(1-p)^{n-y}\, w(p)\, dp=B(y+1, n-y+1)\, E[P\, w(P)]\] and \[\int_{0}^{1} p^{y}(1-p)^{n-y}\, w(p)\, dp=B(y+1, n-y+1)\, E[w(P)],\] so both are expectations taken with respect to the beta\((y+1, n-y+1)\) distribution, as required. The constant \(B(y+1, n-y+1)\) cancels, and the Bayes estimate of \(p\) is \[\delta(y)=\frac{E[P\, w(P)]}{E[w(P)]}.\]
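To see the identity concretely, the denominator integral can be checked numerically against \(B(y+1, n-y+1)\, E[w(P)]\); this is a sketch with hypothetical values of \(n\), \(y\), and \(a\) chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

# Hypothetical data and prior scale, for illustration only.
n, y, a = 50, 18, 2.0

def w(p):
    # Weight left over after integrating theta out of the model.
    return 1.0 / (a * p * (1.0 / a - np.log(p)) ** 2)

# Denominator of the Bayes estimate, integrated directly.
direct, _ = quad(lambda p: p ** y * (1 - p) ** (n - y) * w(p), 0, 1)

# The same quantity written as B(y+1, n-y+1) times a beta(y+1, n-y+1) expectation.
expectation, _ = quad(lambda p: w(p) * beta_dist.pdf(p, y + 1, n - y + 1), 0, 1)

print(direct, beta_fn(y + 1, n - y + 1) * expectation)  # the two values agree
```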
03

Writing a Monte Carlo algorithm

A Monte Carlo algorithm approximates each expectation by an average over repeated random draws. Following the discussion around expression (11.3.2):
1. Generate \(p_{1}, p_{2}, \ldots, p_{m}\) independently from the beta\((y+1, n-y+1)\) distribution.
2. Compute the weights \(w(p_{i})=\left[a p_{i}\left(\frac{1}{a}-\log p_{i}\right)^{2}\right]^{-1}\).
3. Estimate the Bayes estimate by the ratio of averages \[\widehat{\delta}(y)=\frac{\sum_{i=1}^{m} p_{i}\, w(p_{i})}{\sum_{i=1}^{m} w(p_{i})}.\]
By the law of large numbers, \(\widehat{\delta}(y) \rightarrow \delta(y)\) as \(m \rightarrow \infty\). A sketch of this algorithm appears below.
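The following is a minimal Python sketch of the algorithm, assuming NumPy's beta sampler; the values of \(n\), \(y\), \(a\), the seed, and the number of draws \(m\) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(20731)

# Hypothetical inputs: y successes observed in n trials, prior scale a.
n, y, a = 50, 18, 2.0
m = 100_000  # number of Monte Carlo draws

# Step 1: draw p_1, ..., p_m from the beta(y + 1, n - y + 1) distribution.
p = rng.beta(y + 1, n - y + 1, size=m)

# Step 2: weight each draw by w(p) = [a p (1/a - log p)^2]^(-1).
w = 1.0 / (a * p * (1.0 / a - np.log(p)) ** 2)

# Step 3: the Bayes estimate is the ratio of the weighted sums.
bayes_estimate = np.sum(p * w) / np.sum(w)
print(bayes_estimate)
```

The Monte Carlo error shrinks at the usual \(O(m^{-1/2})\) rate, so \(m\) can be increased until the estimate stabilizes.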


Most popular questions from this chapter

Consider the Bayes model $$ \begin{aligned} X_{i} \mid \theta & \sim \operatorname{iid} \Gamma\left(1, \frac{1}{\theta}\right) \\ \Theta \mid \beta & \sim \Gamma(2, \beta) \end{aligned} $$ By performing the following steps, obtain the empirical Bayes estimate of \(\theta\). (a) Obtain the likelihood function $$ m(\mathbf{x} \mid \beta)=\int_{0}^{\infty} f(\mathbf{x} \mid \theta) h(\theta \mid \beta) d \theta $$ (b) Obtain the mle \(\widehat{\beta}\) of \(\beta\) for the likelihood \(m(\mathbf{x} \mid \beta)\). (c) Show that the posterior distribution of \(\Theta\) given \(\mathbf{x}\) and \(\widehat{\beta}\) is a gamma distribution. (d) Assuming squared-error loss, obtain the empirical Bayes estimator.

Let \(f(x \mid \theta), \theta \in \Omega\), be a pdf with Fisher information, \((6.2.4)\), \(I(\theta)\). Consider the Bayes model $$ \begin{aligned} X \mid \theta & \sim f(x \mid \theta), \quad \theta \in \Omega \\ \Theta & \sim h(\theta) \propto \sqrt{I(\theta)} \end{aligned} $$ (a) Suppose we are interested in a parameter \(\tau=u(\theta)\). Use the chain rule to prove that $$ \sqrt{I(\tau)}=\sqrt{I(\theta)}\left|\frac{\partial \theta}{\partial \tau}\right| . $$ (b) Show that for the Bayes model (11.2.2), the prior pdf for \(\tau\) is proportional to \(\sqrt{I(\tau)}\). The class of priors given by expression (11.2.2) is often called the class of Jeffreys' priors; see Jeffreys (1961). This exercise shows that Jeffreys' priors exhibit an invariance in that the prior of a parameter \(\tau\), which is a function of \(\theta\), is also proportional to the square root of the information for \(\tau\).

Let \(X_{1}, X_{2}, \ldots, X_{n}\) denote a random sample from a distribution that is \(N\left(\theta, \sigma^{2}\right)\), where \(-\infty<\theta<\infty\) and \(\sigma^{2}\) is a given positive number. Let \(Y=\bar{X}\) denote the mean of the random sample. Take the loss function to be \(\mathcal{L}[\theta, \delta(y)]=|\theta-\delta(y)|\). If \(\theta\) is an observed value of the random variable \(\Theta\) that is \(N\left(\mu, \tau^{2}\right)\), where \(\tau^{2}>0\) and \(\mu\) are known numbers, find the Bayes solution \(\delta(y)\) for a point estimate of \(\theta\).

Let \(X_{1}, X_{2}, \ldots, X_{n}\) denote a random sample from a Poisson distribution with mean \(\theta\), \(0<\theta<\infty\). Let \(Y=\sum_{1}^{n} X_{i}\). Use the loss function \(\mathcal{L}[\theta, \delta(y)]=[\theta-\delta(y)]^{2}\). Let \(\theta\) be an observed value of the random variable \(\Theta\). If \(\Theta\) has the prior pdf \(h(\theta)=\theta^{\alpha-1} e^{-\theta / \beta} / \Gamma(\alpha) \beta^{\alpha}\), for \(0<\theta<\infty\), zero elsewhere, where \(\alpha>0\), \(\beta>0\) are known numbers, find the Bayes solution \(\delta(y)\) for a point estimate of \(\theta\).

Consider the Bayes model \(X_{i} \mid \theta, i=1,2, \ldots, n \sim\) iid with distribution \(\Gamma(1, \theta), \theta>0\), $$ \Theta \sim h(\theta) \propto \frac{1}{\theta} $$ (a) Show that \(h(\theta)\) is in the class of Jeffreys' priors. (b) Show that the posterior pdf is $$ h(\theta \mid y) \propto\left(\frac{1}{\theta}\right)^{n+2-1} e^{-y / \theta}, $$ where \(y=\sum_{i=1}^{n} x_{i}\). (c) Show that if \(\tau=\theta^{-1}\), then the posterior \(k(\tau \mid y)\) is the pdf of a \(\Gamma(n, 1 / y)\) distribution. (d) Determine the posterior pdf of \(2 y \tau\). Use it to obtain a \((1-\alpha) 100 \%\) credible interval for \(\theta\). (e) Use the posterior pdf in part (d) to determine a Bayesian test for the hypotheses \(H_{0}: \theta \geq \theta_{0}\) versus \(H_{1}: \theta<\theta_{0}\), where \(\theta_{0}\) is specified.
