
Let \(Y\) have a binomial distribution in which \(n=20\) and \(p=\theta\). The prior probabilities on \(\theta\) are \(P(\theta=0.3)=2 / 3\) and \(P(\theta=0.5)=1 / 3\). If \(y=9\), what are the posterior probabilities for \(\theta=0.3\) and \(\theta=0.5\) ?

Short Answer

Expert verified
Applying Bayes' theorem with priors \(P(\theta=0.3)=2/3\) and \(P(\theta=0.5)=1/3\) and the binomial likelihoods for \(y=9\), then normalizing, gives \(P(\theta=0.3 \mid y=9) \approx 0.449\) and \(P(\theta=0.5 \mid y=9) \approx 0.551\).

Step by step solution

01

Calculate the likelihoods

Given that \(Y \sim \text{Bin}(n=20, \theta)\), the likelihoods for \(\theta=0.3\) and \(\theta=0.5\) when \(y=9\) are \(L(0.3 \mid y=9) = \binom{20}{9}(0.3)^9(1-0.3)^{20-9}\) and \(L(0.5 \mid y=9) = \binom{20}{9}(0.5)^9(1-0.5)^{20-9}\), respectively.
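These two likelihoods can be evaluated with a short Python sketch (the helper name `binom_pmf` is illustrative, using only the standard library):

```python
from math import comb

n, y = 20, 9

def binom_pmf(y, n, theta):
    """Binomial probability mass: C(n, y) * theta^y * (1 - theta)^(n - y)."""
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

L_03 = binom_pmf(y, n, 0.3)  # likelihood at theta = 0.3, about 0.0654
L_05 = binom_pmf(y, n, 0.5)  # likelihood at theta = 0.5, about 0.1602
```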
02

Calculate the priors

The prior probabilities are given as \(P(\theta=0.3)=2/3\) and \(P(\theta=0.5)=1/3\). These probabilities represent the belief about \(\theta\) before seeing the data.
03

Apply Bayes' theorem

The posterior probability is calculated using Bayes' theorem: \(P(\theta \mid y) = \frac{L(\theta \mid y)P(\theta)}{P(y)}\). However, since \(P(y)\) is the same constant for each \(\theta\), we can focus on the numerator \(L(\theta \mid y)P(\theta)\); note that the binomial coefficient \(\binom{20}{9}\) also appears in both numerators and therefore cancels when we normalize. For \(\theta=0.3\), the numerator is \(\frac{2}{3}\,L(0.3 \mid y=9)\), and for \(\theta=0.5\), it is \(\frac{1}{3}\,L(0.5 \mid y=9)\).
04

Normalize posterior probabilities

Finally, normalize the two quantities by dividing each by their sum so that the posterior probabilities sum to 1. This gives \(P(\theta=0.3 \mid y=9) \approx 0.449\) and \(P(\theta=0.5 \mid y=9) \approx 0.551\).
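The full prior-times-likelihood-then-normalize computation can be sketched in Python as follows (a minimal sketch; the helper name `binom_pmf` is illustrative):

```python
from math import comb

n, y = 20, 9

def binom_pmf(y, n, theta):
    """Binomial probability mass: C(n, y) * theta^y * (1 - theta)^(n - y)."""
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

# Unnormalized posteriors: prior times likelihood
num_03 = (2 / 3) * binom_pmf(y, n, 0.3)
num_05 = (1 / 3) * binom_pmf(y, n, 0.5)

# Normalize so the posterior probabilities sum to 1
total = num_03 + num_05
post_03 = num_03 / total  # approximately 0.449
post_05 = num_05 / total  # approximately 0.551
```

Because the binomial coefficient multiplies both numerators, dropping it would leave `post_03` and `post_05` unchanged.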


Most popular questions from this chapter

Consider the Bayes model \(X_{i} \mid \theta, i=1,2, \ldots, n \sim\) iid with distribution \(b(1, \theta), 0<\theta<1\) (a) Obtain the Jeffreys' prior for this model. (b) Assume squared-error loss and obtain the Bayes estimate of \(\theta\).

Consider the Bayes model \(X_{i} \mid \theta, i=1,2, \ldots, n \sim\) iid with distribution Poisson \((\theta), \theta>0\) $$ \Theta \sim h(\theta) \propto \theta^{-1 / 2} $$ (a) Show that \(h(\theta)\) is in the class of Jeffreys' priors. (b) Show that the posterior pdf of \(2 n \theta\) is the pdf of a \(\chi^{2}(2 y+1)\) distribution, where \(y=\sum_{i=1}^{n} x_{i}\) (c) Use the posterior pdf of part (b) to obtain a \((1-\alpha) 100 \%\) credible interval for \(\theta\). (d) Use the posterior pdf in part (b) to determine a Bayesian test for the hypotheses \(H_{0}: \theta \geq \theta_{0}\) versus \(H_{1}: \theta<\theta_{0}\), where \(\theta_{0}\) is specified.

Consider the Bayes model \(X_{i} \mid \theta, i=1,2, \ldots, n \sim\) iid with distribution \(\Gamma(1, \theta), \theta>0\) $$ \Theta \sim h(\theta) \propto \frac{1}{\theta} $$ (a) Show that \(h(\theta)\) is in the class of Jeffreys' priors. (b) Show that the posterior pdf is $$ h(\theta \mid y) \propto\left(\frac{1}{\theta}\right)^{n+2-1} e^{-y / \theta}, $$ where \(y=\sum_{i=1}^{n} x_{i}\) (c) Show that if \(\tau=\theta^{-1}\), then the posterior \(k(\tau \mid y)\) is the pdf of a \(\Gamma(n, 1 / y)\) distribution. (d) Determine the posterior pdf of \(2 y \tau\). Use it to obtain a \((1-\alpha) 100 \%\) credible interval for \(\theta\). (e) Use the posterior pdf in part (d) to determine a Bayesian test for the hypotheses \(H_{0}: \theta \geq \theta_{0}\) versus \(H_{1}: \theta<\theta_{0}\), where \(\theta_{0}\) is specified.

Consider the following mixed discrete-continuous pdf for a random vector \((X, Y)\) (discussed in Casella and George, 1992): $$ f(x, y) \propto \binom{n}{x} y^{x+\alpha-1}(1-y)^{n-x+\beta-1}, \quad x=0,1, \ldots, n, \; 0<y<1, $$ where \(\alpha>0\) and \(\beta>0\). (a) Show that this function is indeed a joint, mixed discrete-continuous pdf by finding the proper constant of proportionality. (b) Determine the conditional pdfs \(f(x \mid y)\) and \(f(y \mid x)\). (c) Write the Gibbs sampler algorithm to generate random samples on \(X\) and \(Y\). (d) Determine the marginal distributions of \(X\) and \(Y\).

Let \(X_{1}, X_{2}, \ldots, X_{n}\) denote a random sample from a distribution that is \(N\left(\theta, \sigma^{2}\right)\), where \(-\infty<\theta<\infty\) and \(\sigma^{2}\) is a given positive number. Let \(Y=\bar{X}\) denote the mean of the random sample. Take the loss function to be \(\mathcal{L}[\theta, \delta(y)]=|\theta-\delta(y)|\). If \(\theta\) is an observed value of the random variable \(\Theta\) that is \(N\left(\mu, \tau^{2}\right)\), where \(\tau^{2}>0\) and \(\mu\) are known numbers, find the Bayes solution \(\delta(y)\) for a point estimate \(\theta\).
