
(a) Let \(y_{1}, \ldots, y_{n}\) be a Poisson random sample with mean \(\theta\), and suppose that the prior density for \(\theta\) is gamma, $$ \pi(\theta)=g(\theta ; \alpha, \lambda)=\frac{\lambda^{\alpha} \theta^{\alpha-1}}{\Gamma(\alpha)} \exp (-\lambda \theta), \quad \theta>0, \quad \lambda, \alpha>0. $$ Show that the posterior density of \(\theta\) is \(g\left(\theta ; \alpha+\sum y_{j}, \lambda+n\right)\), and find conditions under which the posterior density remains proper as \(\alpha \downarrow 0\) even though the prior density becomes improper in the limit. (b) Show that \(\int \theta g(\theta ; \alpha, \lambda) \, d\theta=\alpha / \lambda\). Find the prior and posterior means \(\mathrm{E}(\theta)\) and \(\mathrm{E}(\theta \mid y)\), and hence give an interpretation of the prior parameters. (c) Let \(Z\) be a new Poisson variable independent of \(Y_{1}, \ldots, Y_{n}\), also with mean \(\theta\). Find its posterior predictive density. To what density does this converge as \(n \rightarrow \infty\)? Does this make sense?

Short Answer

The posterior density is gamma with parameters \( \alpha + \sum y_i \) and \( \lambda + n \), and it remains proper as \( \alpha \downarrow 0 \) provided \( \sum y_i > 0 \). The prior mean is \( \frac{\alpha}{\lambda} \); the posterior mean is \( \frac{\alpha + \sum y_i}{\lambda + n} \). The posterior predictive density is negative binomial, which converges to a Poisson density as \( n \to \infty \).

Step by step solution

01

Define Likelihood Function

The likelihood function for a Poisson random sample with mean \( \theta \) is \( L(\theta) = \prod_{i=1}^{n} \frac{\theta^{y_i} e^{-\theta}}{y_i!} = \frac{\theta^{\sum y_i} e^{-n\theta}}{\prod_{i} y_i!} \). The factorial term does not involve \( \theta \), so the likelihood is proportional to \( \theta^{\sum y_i} e^{-n\theta} \).
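This is easy to check numerically. Below is a minimal sketch (assuming numpy is available; the sample y is hypothetical) showing that the log of this kernel depends on the data only through \( \sum y_i \) and \( n \):

    import numpy as np

    def poisson_loglik_kernel(theta, y):
        # Log-likelihood up to the additive constant -sum(log(y_i!)),
        # which does not involve theta.
        y = np.asarray(y)
        return y.sum() * np.log(theta) - y.size * theta

    y = [2, 0, 3, 1, 2]  # hypothetical counts
    print(poisson_loglik_kernel(1.5, y))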
02

Define Prior Density

The prior density for \( \theta \) is given by a gamma distribution: \( \pi(\theta) = \frac{\lambda^{\alpha} \theta^{\alpha-1}}{\Gamma(\alpha)} e^{-\lambda \theta} \).
03

Compute Posterior Density

Combine the likelihood and the prior to compute the posterior density. The posterior density is proportional to the product: \( \theta^{\sum y_i} e^{-n\theta} \cdot \theta^{\alpha - 1} e^{-\lambda \theta} = \theta^{\alpha + \sum y_i - 1} e^{-(\lambda + n)\theta} \). This is the kernel of a gamma distribution with parameters \( \alpha + \sum y_i \) and \( \lambda + n \).
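As a sanity check, normalizing the kernel numerically reproduces the gamma density with the updated parameters. This is a sketch assuming scipy is available; the data and hyperparameters are illustrative, not taken from the exercise:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    alpha, lam = 2.0, 1.0                     # illustrative prior hyperparameters
    y = np.array([2, 0, 3, 1, 2])             # illustrative data
    a_post, l_post = alpha + y.sum(), lam + len(y)

    # Unnormalized posterior kernel: likelihood times prior.
    kernel = lambda t: t ** (a_post - 1) * np.exp(-l_post * t)
    const, _ = quad(kernel, 0, np.inf)

    t = 1.3
    print(kernel(t) / const)                               # normalized kernel at t
    print(stats.gamma.pdf(t, a=a_post, scale=1 / l_post))  # matches g(t; a_post, l_post)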
04

Check Posterior Properness

As \( \alpha \downarrow 0 \) the prior kernel tends to \( \theta^{-1} e^{-\lambda\theta} \), which is not integrable near zero, so the prior becomes improper. The posterior, however, is gamma with shape parameter \( \alpha + \sum y_i \), which remains positive in the limit provided \( \sum y_i > 0 \), that is, provided at least one observed count is nonzero. Under this condition the limiting posterior is the proper density \( g(\theta; \sum y_i, \lambda + n) \).
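In the limit the question is just whether the normalizing integral of the posterior kernel is finite:

$$ \int_{0}^{\infty} \theta^{\sum y_i - 1} e^{-(\lambda+n)\theta} \, d\theta = \frac{\Gamma\left(\sum y_i\right)}{(\lambda+n)^{\sum y_i}}, $$

which is finite if and only if \( \sum y_i > 0 \); when \( \sum y_i = 0 \) the integrand behaves like \( \theta^{-1} \) near zero and the integral diverges.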
05

Calculate Expectation of Gamma Distribution

The expectation of a gamma distribution \( g(\theta; \alpha, \lambda) \) is given by \( \mathrm{E}(\theta) = \frac{\alpha}{\lambda} \).
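This is the calculation required in part (b); it follows from the gamma integral together with the identity \( \Gamma(\alpha+1) = \alpha \Gamma(\alpha) \):

$$ \int_{0}^{\infty} \theta \, g(\theta; \alpha, \lambda) \, d\theta = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{\infty} \theta^{\alpha} e^{-\lambda\theta} \, d\theta = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \cdot \frac{\Gamma(\alpha+1)}{\lambda^{\alpha+1}} = \frac{\alpha}{\lambda}. $$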
06

Find Prior and Posterior Means

The prior mean is \( \mathrm{E}(\theta) = \frac{\alpha}{\lambda} \). Applying the same formula to the posterior parameters \( \alpha + \sum y_i \) and \( \lambda + n \) gives the posterior mean \( \mathrm{E}(\theta \mid y) = \frac{\alpha + \sum y_i}{\lambda + n} \). The prior therefore acts like \( \lambda \) additional observations contributing a total count of \( \alpha \): \( \lambda \) measures the weight of prior information, and \( \alpha/\lambda \) the prior guess at the mean.
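Writing the posterior mean as a weighted average makes this interpretation explicit:

$$ \mathrm{E}(\theta \mid y) = \frac{\alpha + \sum y_i}{\lambda + n} = \frac{\lambda}{\lambda + n} \cdot \frac{\alpha}{\lambda} + \frac{n}{\lambda + n} \cdot \bar{y}, $$

so the prior mean receives weight \( \lambda/(\lambda+n) \) and the sample mean weight \( n/(\lambda+n) \), which shrinks the influence of the prior as \( n \) grows.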
07

Determine Posterior Predictive Density

For a new observation \( Z \) with Poisson mean \( \theta \), the posterior predictive density \( P(Z = z \mid y) \) is obtained by averaging the Poisson mass function over the posterior; the result is a negative binomial distribution with index \( \alpha + \sum y_i \) and success probability \( \frac{\lambda + n}{\lambda + n + 1} \). As \( n \to \infty \) the posterior concentrates about the true mean, and the predictive density converges to a Poisson density whose mean is the limit of the posterior mean \( (\alpha + \sum y_i)/(\lambda + n) \), which by the law of large numbers is the true mean \( \theta \).
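Explicitly, writing \( \alpha' = \alpha + \sum y_i \) and \( \lambda' = \lambda + n \), the predictive mass function is the gamma mixture of Poisson densities:

$$ P(Z = z \mid y) = \int_{0}^{\infty} \frac{\theta^{z} e^{-\theta}}{z!} \, g(\theta; \alpha', \lambda') \, d\theta = \frac{\Gamma(\alpha' + z)}{\Gamma(\alpha')\, z!} \left( \frac{\lambda'}{\lambda' + 1} \right)^{\alpha'} \left( \frac{1}{\lambda' + 1} \right)^{z}, \quad z = 0, 1, 2, \ldots $$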
08

Interpret the Result

This convergence makes sense: as the sample size grows, the data supply essentially all of the information about \( \theta \), the posterior concentrates about the true mean, and the influence of the prior vanishes.


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Poisson Distribution
The Poisson distribution is a widely used probability distribution for modeling the number of events occurring within a fixed period of time or space. It's characterized by its parameter \( \theta \), which is the mean or expected number of events. A Poisson random variable \( Y \) represents the count of these events, and its probability mass function is given by:
  • \( P(Y = y) = \frac{\theta^y e^{-\theta}}{y!} \) for \( y = 0, 1, 2, \ldots \)
This distribution is particularly useful when dealing with rare events where the mean and variance are equal. In the context of Bayesian inference, the Poisson distribution serves as the likelihood function when modeling data believed to follow this distribution type.
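A small illustration (a sketch assuming scipy; the value of \( \theta \) is arbitrary) evaluates the pmf directly and confirms that the mean and variance coincide:

    from scipy import stats

    theta = 2.5
    pois = stats.poisson(theta)
    print(pois.pmf(3))                 # theta^3 * exp(-theta) / 3!
    print(pois.mean(), pois.var())     # both equal theta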
Gamma Distribution
The gamma distribution is a continuous probability distribution often used as a prior in Bayesian statistics, particularly for parameters that represent rates or scales, such as the mean rate in a Poisson process. Characterized by two parameters, \( \alpha \) (shape) and \( \lambda \) (rate), its probability density function is:
  • \( g(\theta ; \alpha, \lambda) = \frac{\lambda^{\alpha} \theta^{\alpha-1}}{\Gamma(\alpha)} \exp(-\lambda \theta) \), where \( \theta > 0 \)
The parameter \( \alpha \) relates to the weight of prior information, while \( \lambda \) represents the prior expected precision. In Bayesian analysis, choosing a gamma distribution as a prior for a Poisson mean (like \( \theta \)) appropriately reflects prior beliefs about the rate of events. It is flexible and computationally convenient, as it combines with a Poisson likelihood to yield another gamma distribution.
Posterior Distribution
The posterior distribution in Bayesian inference combines prior beliefs with new evidence from data. For our Poisson mean \( \theta \), given a gamma prior, the posterior distribution is also gamma due to the conjugate prior relationship. This combined expression is:
  • \( g(\theta ; \alpha + \sum y_i, \lambda + n) \)
Here, \( \alpha + \sum y_i \) and \( \lambda + n \) are the posterior shape and rate parameters, respectively. The update depends on the data only through the summary \( \sum y_i \) and the sample size \( n \). The posterior allows us to refine our belief about \( \theta \), balancing prior information against the evidence in the data, and is the basis for decisions that use both.
Predictive Density
The predictive density provides a way to forecast future observations based on the current model and data. For a new Poisson variable \( Z \) with mean \( \theta \), the predictive density incorporates both the observed data and the prior beliefs, providing a probabilistic model for new data points:
  • \( P(Z = z \mid y) = \text{NB}\left(z; \alpha + \sum y_i, \frac{\lambda + n}{\lambda + n + 1}\right) \)
Here NB denotes the negative binomial distribution, whose extra dispersion reflects the remaining uncertainty in \( \theta \). With increasing sample size \( n \), the influence of the prior diminishes, and this density converges to a Poisson density centred on the data. This convergence illustrates how more data lead to predictions grounded in empirical evidence, in line with the law of large numbers.
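The mixture representation can be checked by simulation. The sketch below (assuming numpy and scipy; the prior hyperparameters and true mean are made up) draws \( \theta \) from the posterior, then \( Z \mid \theta \) from a Poisson distribution, and compares the result with the exact negative binomial predictive and its Poisson limit:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, lam = 2.0, 1.0                    # made-up prior hyperparameters
    y = rng.poisson(2.5, size=50)            # simulated data, true theta = 2.5
    a_post, l_post = alpha + y.sum(), lam + len(y)

    # Exact predictive: negative binomial, success prob l_post / (l_post + 1).
    nb = stats.nbinom(a_post, l_post / (l_post + 1))

    # Monte Carlo predictive: theta ~ posterior gamma, then Z | theta ~ Poisson.
    thetas = rng.gamma(a_post, 1 / l_post, size=100_000)
    z = rng.poisson(thetas)
    print(nb.pmf(2), np.mean(z == 2))        # should agree closely

    # With n = 50 the predictive is already close to Poisson(posterior mean).
    print(nb.pmf(2), stats.poisson(a_post / l_post).pmf(2))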

