
Let \(Y_{n}\) be the \(n\)th order statistic of a random sample of size \(n\) from a distribution with pdf \(f(x \mid \theta)=1 / \theta, 0<x<\theta\), zero elsewhere. Take the loss function to be \(\mathcal{L}[\theta, \delta(y_{n})]=[\theta-\delta(y_{n})]^{2}\). Let \(\theta\) be an observed value of the random variable \(\Theta\), which has the pdf \(h(\theta)=\beta \alpha^{\beta} / \theta^{\beta+1}, \alpha<\theta<\infty\), zero elsewhere, where \(\alpha>0, \beta>0\). Find the Bayes solution \(\delta\left(y_{n}\right)\) for a point estimate of \(\theta\).

Short Answer

Expert verified
The Bayes point estimate of \(\theta\) is \( \delta(y_n) = \frac{\beta + n}{\beta + n - 1}\, y_n \).

Step by step solution

01

Derive the Joint PDF of (θ, Yn)

We first derive the joint pdf of \((\theta, Y_n)\). Since the sample is from the uniform distribution on \((0, \theta)\), the largest order statistic \(Y_n\) has pdf \( f(y_n \mid \theta) = n y_n^{\,n-1}/\theta^{n} \) for \( 0 < y_n < \theta \). Combining this with the prior pdf \( h(\theta) = \beta\alpha^{\beta}/\theta^{\beta+1} \), we get the joint pdf \( h(y_n, \theta) = n\beta\alpha^{\beta}\, y_n^{\,n-1} / \theta^{\,n+\beta+1} \) for \( \theta > \max(\alpha, y_n) \); the upper limit of \( \theta \) is \( \infty \).
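Written out in full (including the \( n y_n^{\,n-1} \) factor from the order-statistic pdf), the multiplication is:

```latex
h(y_n, \theta) = f(y_n \mid \theta)\, h(\theta)
= \frac{n\, y_n^{\,n-1}}{\theta^{\,n}} \cdot \frac{\beta\, \alpha^{\beta}}{\theta^{\,\beta+1}}
= \frac{n\, \beta\, \alpha^{\beta}\, y_n^{\,n-1}}{\theta^{\,n+\beta+1}},
\qquad \theta > \max(\alpha,\, y_n).
```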
02

Calculate the marginal PDF

To calculate the marginal pdf of \(Y_n\), denoted \( g(y_n) \), we integrate the joint pdf over \( \theta \). Assuming \( y_n > \alpha \), the result is \( g(y_n) = \int_{y_n}^{\infty} h(y_n, \theta)\, d\theta = \frac{n\beta\alpha^{\beta}}{(\beta+n)\, y_n^{\,\beta+1}} \).
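The integral evaluates as follows (with lower limit \( y_n \), i.e. in the case \( y_n > \alpha \); otherwise the lower limit is \( \alpha \)):

```latex
g(y_n) = \int_{y_n}^{\infty} \frac{n\,\beta\,\alpha^{\beta}\, y_n^{\,n-1}}{\theta^{\,n+\beta+1}}\, d\theta
= n\,\beta\,\alpha^{\beta}\, y_n^{\,n-1} \cdot \frac{y_n^{-(n+\beta)}}{n+\beta}
= \frac{n\,\beta\,\alpha^{\beta}}{(\beta+n)\, y_n^{\,\beta+1}}.
```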
03

Find the conditional PDF

The conditional pdf of \( \theta \) given \( Y_n = y_n \), denoted \( h(\theta \mid y_n) \), is the ratio of the joint pdf to the marginal pdf: \( h(\theta \mid y_n) = h(y_n, \theta) / g(y_n) = (\beta+n)\, y_n^{\,\beta+n}\, \theta^{-(\beta+n+1)} \) for \( \theta > y_n \).
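Dividing the joint pdf by the marginal, the factors \( n\beta\alpha^{\beta} \) and \( y_n^{\,n-1} \) cancel:

```latex
h(\theta \mid y_n) = \frac{h(y_n, \theta)}{g(y_n)}
= \frac{n\,\beta\,\alpha^{\beta}\, y_n^{\,n-1} / \theta^{\,n+\beta+1}}
       {n\,\beta\,\alpha^{\beta} / \big((\beta+n)\, y_n^{\,\beta+1}\big)}
= \frac{(\beta+n)\, y_n^{\,\beta+n}}{\theta^{\,\beta+n+1}},
\qquad \theta > y_n,
```

which is a Pareto density with scale \( y_n \) and shape \( \beta+n \).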
04

Calculate Expected Loss

The next step is to calculate the expected loss. The Bayes estimate of \( \theta \) minimizes the posterior expected loss, which in this case is \( E[\mathcal{L}(\theta, \delta(y_n)) \mid y_n] = \int_{y_n}^{\infty} \mathcal{L}(\theta, \delta(y_n))\, h(\theta \mid y_n)\, d\theta = \int_{y_n}^{\infty} [\theta - \delta(y_n)]^{2}\, h(\theta \mid y_n)\, d\theta \).
05

Find the minima

We differentiate the expected loss with respect to \( \delta \) and set the derivative to zero to find the minimizer, which is the Bayes solution; for squared-error loss this minimizer is the posterior mean. Carrying out the differentiation and solving, we obtain \( \delta(y_n) = \frac{\beta + n}{\beta + n - 1}\, y_n \).
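Equivalently, the posterior mean can be computed directly from the Pareto posterior density:

```latex
\delta(y_n) = E[\theta \mid y_n]
= \int_{y_n}^{\infty} \theta \cdot \frac{(\beta+n)\, y_n^{\,\beta+n}}{\theta^{\,\beta+n+1}}\, d\theta
= (\beta+n)\, y_n^{\,\beta+n} \cdot \frac{y_n^{-(\beta+n-1)}}{\beta+n-1}
= \frac{\beta+n}{\beta+n-1}\, y_n,
```

where convergence of the integral uses \( \beta + n > 1 \), which holds since \( \beta > 0 \) and \( n \geq 1 \).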
06

Posterior Distribution

The posterior distribution of \( \theta \) given \( Y_n = y_n \) is Pareto with scale \( y_n \) and shape \( \beta+n \), from which we obtain the minimum-expected-loss estimator \( \delta(y_n) = \frac{\beta + n}{\beta + n - 1}\, y_n \). This is the Bayes solution for a point estimate of \( \theta \) in this problem.
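As a sanity check, the whole derivation can be reproduced symbolically. The sketch below uses SymPy with the illustrative values \( n=5, \alpha=1, \beta=2 \) (chosen only for this example, and assuming \( y_n > \alpha \)); the expected answer is \( \frac{\beta+n}{\beta+n-1}\, y_n = \frac{7}{6}\, y_n \).

```python
# Symbolic check of the Bayes estimate (illustrative values: n=5, alpha=1, beta=2;
# assumes y_n > alpha so all integrals run over theta in (y_n, oo)).
import sympy as sp

theta, y = sp.symbols("theta y", positive=True)
n, alpha, beta = 5, sp.Integer(1), sp.Integer(2)

# Joint pdf of (Y_n, Theta): f(y_n | theta) * h(theta)
joint = (n * y**(n - 1) / theta**n) * (beta * alpha**beta / theta**(beta + 1))

# Marginal pdf of Y_n: integrate theta out over (y, oo)
marginal = sp.integrate(joint, (theta, y, sp.oo))

# Posterior pdf and its mean, the Bayes estimate under squared-error loss
posterior = sp.simplify(joint / marginal)
bayes_est = sp.simplify(sp.integrate(theta * posterior, (theta, y, sp.oo)))

print(bayes_est)  # → 7*y/6, i.e. (beta + n)/(beta + n - 1) * y_n
```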
