
Let \(f(x \mid \theta)\), \(\theta \in \Omega\), be a pdf with Fisher information \(I(\theta)\), (6.2.4). Consider the Bayes model $$ \begin{aligned} X \mid \theta & \sim f(x \mid \theta), \quad \theta \in \Omega \\ \Theta & \sim h(\theta) \propto \sqrt{I(\theta)} \end{aligned} \tag{11.2.2} $$ (a) Suppose we are interested in a parameter \(\tau=u(\theta)\). Use the chain rule to prove that $$ \sqrt{I(\tau)}=\sqrt{I(\theta)}\left|\frac{\partial \theta}{\partial \tau}\right| . $$ (b) Show that for the Bayes model (11.2.2), the prior pdf for \(\tau\) is proportional to \(\sqrt{I(\tau)}\). The class of priors given by expression (11.2.2) is often called the class of Jeffreys' priors; see Jeffreys (1961). This exercise shows that Jeffreys' priors exhibit an invariance in that the prior of a parameter \(\tau\), which is a function of \(\theta\), is also proportional to the square root of the information for \(\tau\).

Short Answer

The relationship between the Fisher information for \( \tau = u(\theta) \) and that for \( \theta \) is \( \sqrt{I(\tau)} = \sqrt{I(\theta)} \left|\frac{\partial \theta}{\partial \tau}\right| \). For the Bayes model (11.2.2), the prior pdf for \( \tau \) is therefore proportional to \( \sqrt{I(\tau)} \), showing that Jeffreys' priors preserve their form under reparametrization.

Step by step solution

01

Apply chain rule

Since \( \tau = u(\theta) \) is assumed to be a one-to-one, differentiable function, \( \theta \) can be recovered as \( \theta = u^{-1}(\tau) \), and the pdf can be viewed as a function of \( \tau \) through \( \theta \). By the inverse function rule, the derivative \( \frac{\partial \theta}{\partial \tau} \) is the reciprocal of \( \frac{\partial u}{\partial \theta} \). Differentiating the log of the pdf with respect to \( \tau \) by the chain rule therefore introduces the factor \( \frac{\partial \theta}{\partial \tau} \), whose magnitude \( \left|\frac{\partial \theta}{\partial \tau}\right| \) measures the change in \( \theta \) per unit change in \( \tau \).
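In symbols (writing the pdf in terms of \( \tau \) through \( \theta = u^{-1}(\tau) \)), the chain rule applied to the log of the pdf gives $$ \frac{\partial \log f(x \mid \theta)}{\partial \tau}=\frac{\partial \log f(x \mid \theta)}{\partial \theta} \cdot \frac{\partial \theta}{\partial \tau} . $$ This identity for the score function is what Step 2 substitutes into the expectation defining Fisher information.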
02

Apply results from Step 1 to derive Fisher's information for \( \tau \)

Fisher information is the expected square of the score function. Substituting the chain-rule identity from Step 1 into this expectation, the Jacobian factor comes out of the expectation squared: if \( \tau \) is a one-to-one function of \( \theta \), then \( I(\tau) = I(\theta)\left|\frac{\partial \theta}{\partial \tau}\right|^{2} \). Note that the information itself is not invariant under reparametrization; it transforms by the squared Jacobian. Taking square roots on both sides gives the equation required in part (a): \( \sqrt{I(\tau)} = \sqrt{I(\theta)} \left|\frac{\partial \theta}{\partial \tau}\right| \).
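Written out, using the definition of Fisher information, (6.2.4), as the expected square of the score: $$ I(\tau)=E\left[\left(\frac{\partial \log f(X \mid \theta)}{\partial \tau}\right)^{2}\right]=E\left[\left(\frac{\partial \log f(X \mid \theta)}{\partial \theta}\right)^{2}\right]\left(\frac{\partial \theta}{\partial \tau}\right)^{2}=I(\theta)\left(\frac{\partial \theta}{\partial \tau}\right)^{2} . $$ The factor \( \left(\frac{\partial \theta}{\partial \tau}\right)^{2} \) factors out of the expectation because it depends only on the parameter, not on \( X \).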
03

Prove the prior pdf for \( \tau \) is proportional to \( \sqrt{I(\tau)} \)

We are given that the prior of \( \theta \) satisfies \( h(\theta) \propto \sqrt{I(\theta)} \). Since \( \tau = u(\theta) \) is one-to-one, the transformation rule for pdfs gives the prior of \( \tau \) as \( h\left(u^{-1}(\tau)\right)\left|\frac{\partial \theta}{\partial \tau}\right| \propto \sqrt{I(\theta)}\left|\frac{\partial \theta}{\partial \tau}\right| = \sqrt{I(\tau)} \), where the last equality is part (a). Hence \( h(\tau) \propto \sqrt{I(\tau)} \): Jeffreys' priors preserve their form under reparametrization, as required in part (b).
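As a numerical sanity check (not part of the textbook solution; the Bernoulli model with \( I(\theta)=1/[\theta(1-\theta)] \) and the log-odds reparametrization \( \tau=\log[\theta/(1-\theta)] \) are illustrative choices), the following Python sketch verifies that transforming the prior \( \sqrt{I(\theta)} \) by the Jacobian reproduces \( \sqrt{I(\tau)} \):

```python
# Sanity check of Jeffreys-prior invariance for a Bernoulli model.
# Illustrative example, not from the text: I(theta) = 1/(theta*(1-theta)),
# reparametrized to the log-odds tau = log(theta/(1-theta)).
import numpy as np

def sqrt_info_theta(theta):
    """sqrt I(theta) for Bernoulli(theta)."""
    return 1.0 / np.sqrt(theta * (1.0 - theta))

def sqrt_info_tau(tau):
    """sqrt I(tau) for the log-odds: I(tau) = I(theta)*(dtheta/dtau)^2 = theta*(1-theta)."""
    theta = 1.0 / (1.0 + np.exp(-tau))
    return np.sqrt(theta * (1.0 - theta))

tau = np.linspace(-4.0, 4.0, 9)
theta = 1.0 / (1.0 + np.exp(-tau))      # inverse of the log-odds map
dtheta_dtau = theta * (1.0 - theta)     # Jacobian d(theta)/d(tau)

# Prior for tau obtained by transforming h(theta) ∝ sqrt(I(theta)):
transformed = sqrt_info_theta(theta) * np.abs(dtheta_dtau)
print(np.allclose(transformed, sqrt_info_tau(tau)))   # True
```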

Most popular questions from this chapter

Consider the following mixed discrete-continuous pdf for a random vector \((X, Y)\) (discussed in Casella and George, 1992): $$ f(x, y) \propto \binom{n}{x} y^{x+\alpha-1}(1-y)^{n-x+\beta-1}, \quad x=0,1, \ldots, n, \quad 0<y<1, $$ where \(\alpha>0\) and \(\beta>0\). (a) Show that this function is indeed a joint, mixed discrete-continuous pdf by finding the proper constant of proportionality. (b) Determine the conditional pdfs \(f(x \mid y)\) and \(f(y \mid x)\). (c) Write the Gibbs sampler algorithm to generate random samples on \(X\) and \(Y\). (d) Determine the marginal distributions of \(X\) and \(Y\).
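For part (c), a minimal Python sketch of the Gibbs sampler, assuming the conditionals from part (b) take the standard forms \( X \mid y \sim b(n, y) \) and \( Y \mid x \sim \operatorname{beta}(x+\alpha,\, n-x+\beta) \); the values of \( n \), \( \alpha \), and \( \beta \) below are illustrative, not from the text:

```python
# Gibbs sampler sketch for the mixed discrete-continuous model above.
# Assumed conditionals: X | y ~ binomial(n, y), Y | x ~ beta(x+alpha, n-x+beta).
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 16, 2.0, 4.0              # illustrative values, not from the text
y = 0.5                                    # arbitrary starting value for Y
draws_x, draws_y = [], []
for _ in range(10_000):
    x = rng.binomial(n, y)                 # draw X from its conditional given y
    y = rng.beta(x + alpha, n - x + beta)  # draw Y from its conditional given x
    draws_x.append(x)
    draws_y.append(y)

# The sample means should approximate the marginal means from part (d):
# E(X) = n*alpha/(alpha+beta) (beta-binomial), E(Y) = alpha/(alpha+beta) (beta).
print(np.mean(draws_x), n * alpha / (alpha + beta))
print(np.mean(draws_y), alpha / (alpha + beta))
```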

Let \(\mathbf{X}_{1}, \mathbf{X}_{2}, \ldots, \mathbf{X}_{n}\) be a random sample from a multivariate normal distribution with mean vector \(\boldsymbol{\mu}=\left(\mu_{1}, \mu_{2}, \ldots, \mu_{k}\right)^{\prime}\) and known positive definite covariance matrix \(\boldsymbol{\Sigma}\). Let \(\overline{\mathbf{X}}\) be the mean vector of the random sample. Suppose that \(\boldsymbol{\mu}\) has a prior multivariate normal distribution with mean \(\boldsymbol{\mu}_{0}\) and positive definite covariance matrix \(\boldsymbol{\Sigma}_{0}\). Find the posterior distribution of \(\boldsymbol{\mu}\), given \(\overline{\mathbf{X}}=\overline{\mathbf{x}}\). Then find the Bayes estimate \(E(\boldsymbol{\mu} \mid \overline{\mathbf{X}}=\overline{\mathbf{x}})\).

In Example 11.1.2, let \(n=30, \alpha=10\), and \(\beta=5\), so that \(\delta(y)=(10+y) / 45\) is the Bayes estimate of \(\theta\). (a) If \(Y\) has a binomial distribution \(b(30, \theta)\), compute the risk \(E\left\{[\theta-\delta(Y)]^{2}\right\}\). (b) Find values of \(\theta\) for which the risk of part (a) is less than \(\theta(1-\theta) / 30\), the risk associated with the maximum likelihood estimator \(Y / n\) of \(\theta\).

Example 11.4.1 dealt with a hierarchical Bayes model for a conjugate family of normal distributions. Express that model as $$ \begin{aligned} &\bar{X} \mid \Theta \sim N\left(\theta, \frac{\sigma^{2}}{n}\right), \sigma^{2} \text { is known } \\ &\Theta \mid \tau^{2} \quad \sim N\left(0, \tau^{2}\right) \end{aligned} $$ Obtain the empirical Bayes estimator of \(\theta\).

Consider the Bayes model $$ \begin{aligned} X_{i} \mid \theta & \sim \operatorname{iid} \Gamma\left(1, \frac{1}{\theta}\right) \\ \Theta \mid \beta & \sim \Gamma(2, \beta) \end{aligned} $$ By performing the following steps, obtain the empirical Bayes estimate of \(\theta\). (a) Obtain the likelihood function $$ m(\mathbf{x} \mid \beta)=\int_{0}^{\infty} f(\mathbf{x} \mid \theta) h(\theta \mid \beta) d \theta $$ (b) Obtain the mle \(\widehat{\beta}\) of \(\beta\) for the likelihood \(m(\mathbf{x} \mid \beta)\). (c) Show that the posterior distribution of \(\Theta\) given \(\mathbf{x}\) and \(\widehat{\beta}\) is a gamma distribution. (d) Assuming squared-error loss, obtain the empirical Bayes estimator.
