Chapter 11: Problem 4
Two independent samples \(Y_{1}, \ldots, Y_{n} \stackrel{\text { iid }}{\sim} N\left(\mu, \sigma^{2}\right)\) and \(X_{1}, \ldots, X_{m} \stackrel{\text { iid }}{\sim} N\left(\mu, c \sigma^{2}\right)\) are available, where \(c>0\) is known. Find posterior densities for \(\mu\) and \(\sigma\) based on prior \(\pi(\mu, \sigma) \propto 1 / \sigma\).
Short Answer
With prior \(\pi(\mu, \sigma) \propto 1/\sigma\), the joint posterior is \(\pi(\mu, \sigma \mid Y, X) \propto \sigma^{-(n+m+1)} \exp\left(-\frac{Q(\mu)}{2\sigma^{2}}\right)\), where \(Q(\mu)=\sum_{i=1}^{n}(Y_{i}-\mu)^{2}+\frac{1}{c}\sum_{j=1}^{m}(X_{j}-\mu)^{2}\). Marginally, \(\mu\) has a \(t_{n+m-1}\) posterior centered at the weighted mean \(\hat{\mu}=\frac{n\bar{Y}+(m/c)\bar{X}}{n+m/c}\), and \(\sigma\) satisfies \(S/\sigma^{2} \mid Y, X \sim \chi^{2}_{n+m-1}\), where \(S=Q(\hat{\mu})\).
Step by step solution
Define the Likelihoods
Since \(Y_{1}, \ldots, Y_{n} \stackrel{\text{iid}}{\sim} N(\mu, \sigma^{2})\), the likelihood of the first sample is
\[ L_{Y}(\mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\left(-\frac{(Y_{i}-\mu)^{2}}{2\sigma^{2}}\right) \propto \sigma^{-n} \exp\left(-\frac{\sum_{i=1}^{n}(Y_{i}-\mu)^{2}}{2\sigma^{2}}\right). \]
Compute the Likelihood for Second Sample
The \(X_{j}\) have variance \(c\sigma^{2}\) with \(c\) known, so
\[ L_{X}(\mu, \sigma) = \prod_{j=1}^{m} \frac{1}{\sqrt{2\pi c\sigma^{2}}} \exp\left(-\frac{(X_{j}-\mu)^{2}}{2c\sigma^{2}}\right) \propto \sigma^{-m} \exp\left(-\frac{\sum_{j=1}^{m}(X_{j}-\mu)^{2}}{2c\sigma^{2}}\right). \]
Combine Likelihood Functions
By independence the joint likelihood is the product of the two:
\[ L(\mu, \sigma) = L_{Y} L_{X} \propto \sigma^{-(n+m)} \exp\left(-\frac{Q(\mu)}{2\sigma^{2}}\right), \qquad Q(\mu) = \sum_{i=1}^{n}(Y_{i}-\mu)^{2} + \frac{1}{c}\sum_{j=1}^{m}(X_{j}-\mu)^{2}. \]
Apply the Prior
Multiplying by \(\pi(\mu, \sigma) \propto 1/\sigma\) gives the joint posterior kernel
\[ \pi(\mu, \sigma \mid Y, X) \propto \sigma^{-(n+m+1)} \exp\left(-\frac{Q(\mu)}{2\sigma^{2}}\right). \]
Determine Posterior for \(\mu\)
Completing the square in \(\mu\) gives \(Q(\mu) = \left(n + \frac{m}{c}\right)(\mu - \hat{\mu})^{2} + S\), where \(\hat{\mu} = \frac{n\bar{Y} + (m/c)\bar{X}}{n + m/c}\) and \(S = Q(\hat{\mu})\). Integrating \(\sigma\) out of the joint posterior then yields
\[ \pi(\mu \mid Y, X) \propto \left[S + \left(n + \tfrac{m}{c}\right)(\mu - \hat{\mu})^{2}\right]^{-(n+m)/2}, \]
which is a \(t_{n+m-1}\) density with location \(\hat{\mu}\) and scale \(\sqrt{S/\left[(n+m-1)(n+m/c)\right]}\).
Determine Posterior for \(\sigma\)
Integrating \(\mu\) out of the joint posterior (a normal integral in \(\mu\)) leaves
\[ \pi(\sigma \mid Y, X) \propto \sigma^{-(n+m)} \exp\left(-\frac{S}{2\sigma^{2}}\right), \]
equivalently \(S/\sigma^{2} \mid Y, X \sim \chi^{2}_{n+m-1}\).
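These marginal posteriors can be checked by Monte Carlo. The sketch below uses hypothetical data; \(\hat{\mu}\) (the precision-weighted mean of the two samples) and \(S\) (the residual sum of squares after completing the square) are the standard quantities for this model. It draws \(\sigma^2\) from its scaled inverse-\(\chi^2\) posterior and then \(\mu \mid \sigma\) from a normal, which reproduces the \(t\) marginal for \(\mu\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n draws with variance sigma^2 and m draws with
# variance c * sigma^2, sharing the same mean (here mu = 5, sigma = 1).
n, m, c = 30, 20, 2.0
Y = rng.normal(5.0, 1.0, size=n)
X = rng.normal(5.0, np.sqrt(c), size=m)

ybar, xbar = Y.mean(), X.mean()
w = n + m / c                              # precision weight for mu
mu_hat = (n * ybar + (m / c) * xbar) / w   # posterior centre for mu
# S = Q(mu_hat): residual sum of squares after completing the square
S = ((Y - mu_hat) ** 2).sum() + ((X - mu_hat) ** 2).sum() / c

# Sample the joint posterior: S / sigma^2 ~ chi^2_{n+m-1},
# then mu | sigma ~ N(mu_hat, sigma^2 / w).
draws = 200_000
sigma2 = S / rng.chisquare(n + m - 1, size=draws)
mu = rng.normal(mu_hat, np.sqrt(sigma2 / w))

# The Monte Carlo mean of mu should sit at mu_hat (the t marginal is
# symmetric about mu_hat); sigma2 should average near S / (n + m - 3).
print(mu.mean(), mu_hat)
```

Sampling \(\sigma^2\) first and then \(\mu \mid \sigma\) exploits the conditional structure of the joint posterior, so no accept-reject step is needed.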
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Posterior Distribution
Think of the posterior distribution as combining prior beliefs with new evidence. Specifically, the posterior is formed by multiplying the prior distribution and the likelihood function. In mathematical terms, the posterior distribution for parameters like \(\mu\) and \(\sigma\) can be expressed as:
- \( \pi(\mu, \sigma | Y, X) \propto \text{Prior}(\mu, \sigma) \times \text{Likelihood}(Y, X | \mu, \sigma) \)
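As a numerical illustration of this product rule (the data values and grid ranges below are purely hypothetical), one can evaluate prior times likelihood on a grid and check that the resulting posterior for \(\mu\) centres on the precision-weighted mean of the two samples:

```python
import numpy as np

rng = np.random.default_rng(2)
c = 2.0
Y = rng.normal(1.0, 0.5, size=15)               # hypothetical first sample
X = rng.normal(1.0, 0.5 * np.sqrt(c), size=10)  # second sample, variance c*sigma^2
n, m = len(Y), len(X)

# Grid over (mu, sigma); ranges are ad hoc but wide enough for this data.
mus = np.linspace(0.0, 2.0, 401)
sigmas = np.linspace(0.05, 2.0, 400)
M, Sg = np.meshgrid(mus, sigmas)

# Q(mu) = sum (Y_i - mu)^2 + (1/c) sum (X_j - mu)^2, expanded so it
# broadcasts over the whole grid.
Q = ((Y ** 2).sum() - 2 * M * Y.sum() + n * M ** 2
     + ((X ** 2).sum() - 2 * M * X.sum() + m * M ** 2) / c)

# log posterior = log prior (-log sigma) + log likelihood
log_post = -(n + m + 1) * np.log(Sg) - Q / (2 * Sg ** 2)
dens = np.exp(log_post - log_post.max())    # unnormalized, overflow-safe

# Grid-posterior mean of mu vs the precision-weighted sample mean.
mu_mean = (dens * M).sum() / dens.sum()
mu_hat = (n * Y.mean() + (m / c) * X.mean()) / (n + m / c)
print(mu_mean, mu_hat)
```

The grid normalization happens implicitly in the ratio, so the unknown proportionality constant in the posterior never has to be computed.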
Likelihood Function
In this exercise, the likelihood for each of the two independent samples follows from its normal model. For each group it is:
- For sample \(Y_1, \ldots, Y_n\): \[ L_Y(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left(-\frac{(Y_i - \mu)^2}{2\sigma^2}\right) \]
- For sample \(X_1, \ldots, X_m\): \[ L_X(\mu, \sigma^2) = \prod_{j=1}^{m} \frac{1}{\sqrt{2\pi c \sigma^2}} \exp\left(-\frac{(X_j - \mu)^2}{2c\sigma^2}\right) \]
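The two products combine into a single joint log-likelihood. A minimal sketch, with an illustrative function name and hypothetical data (constants of \(-\tfrac{1}{2}\log 2\pi\) per observation are dropped):

```python
import numpy as np

# Joint log-likelihood of the two-sample model; the additive constant
# -(n+m)/2 * log(2*pi) is omitted since it does not depend on (mu, sigma).
def log_likelihood(mu, sigma, Y, X, c):
    n, m = len(Y), len(X)
    llY = -n * np.log(sigma) - ((Y - mu) ** 2).sum() / (2 * sigma ** 2)
    llX = (-m * np.log(np.sqrt(c) * sigma)
           - ((X - mu) ** 2).sum() / (2 * c * sigma ** 2))
    return llY + llX

rng = np.random.default_rng(1)
c = 4.0
Y = rng.normal(0.0, 1.0, size=50)
X = rng.normal(0.0, 2.0, size=50)  # sd = sqrt(c) * sigma with sigma = 1

# For fixed sigma the log-likelihood is a downward parabola in mu,
# peaked exactly at the precision-weighted mean of the two samples.
mu_hat = (len(Y) * Y.mean() + (len(X) / c) * X.mean()) / (len(Y) + len(X) / c)
print(log_likelihood(mu_hat, 1.0, Y, X, c))
```

Note how the second sample's contribution is down-weighted by \(1/c\): noisier observations carry less information about \(\mu\).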
Prior Distribution
For this exercise, the given prior is:
- \( \pi(\mu, \sigma) \propto \frac{1}{\sigma} \)
By using this prior in conjunction with the likelihood function, we derive the posterior distribution, which enables us to update our beliefs about \(\mu\) and \(\sigma\) after observing the data.
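Written out for this problem, multiplying the prior by the two normal likelihoods gives the joint posterior kernel (dropping constants):
\[ \pi(\mu, \sigma \mid Y, X) \;\propto\; \underbrace{\frac{1}{\sigma}}_{\text{prior}} \times \underbrace{\sigma^{-n}\exp\!\left(-\frac{\sum_{i=1}^{n}(Y_i-\mu)^2}{2\sigma^2}\right)}_{L_Y} \times \underbrace{\sigma^{-m}\exp\!\left(-\frac{\sum_{j=1}^{m}(X_j-\mu)^2}{2c\sigma^2}\right)}_{L_X}. \]
Collecting powers of \(\sigma\) gives the \(\sigma^{-(n+m+1)}\) factor in front of the exponential; even though the prior is improper, this kernel integrates to a finite value for \(n+m \geq 2\) (with not all observations equal), so the posterior is proper.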