
If \(Z\) is standard normal, then \(Y=\exp (\mu+\sigma Z)\) is said to have the log-normal distribution. Show that \(\mathrm{E}\left(Y^{r}\right)=\exp (r \mu) M_{Z}(r \sigma)\) and hence give expressions for the mean and variance of \(Y\). Show that although all its moments are finite, \(Y\) does not have a moment-generating function.

Short Answer

The mean of \( Y \) is \( \exp(\mu + \frac{\sigma^2}{2}) \) and the variance is \( \exp(2\mu + \sigma^2)(\exp(\sigma^2) - 1) \). \( Y \) has no moment-generating function because \( \mathrm{E}(e^{tY}) \) diverges for every \( t > 0 \).

Step by step solution

01

Understand the Expression for Expectation

To find \( \mathrm{E}(Y^r) \), start from the expectation of a function of a random variable. For a function \( g(Z) \) of a standard normal variable \( Z \), \[ \mathrm{E}[g(Z)] = \int_{-\infty}^{\infty} g(z) \, f_Z(z) \, dz, \] where \( f_Z(z) \) is the standard normal probability density function. In our problem, \( Y = \exp(\mu + \sigma Z) \), so \( Y^r = \exp(r(\mu + \sigma Z)) = \exp(r\mu) \cdot \exp(r\sigma Z) \). Therefore, \[ \mathrm{E}\left( Y^r \right) = \exp(r\mu) \int_{-\infty}^{\infty} \exp(r\sigma z) \cdot f_Z(z) \, dz. \] The remaining integral is exactly the moment-generating function of \( Z \) evaluated at \( r\sigma \), so \( \mathrm{E}(Y^r) = \exp(r\mu) \, M_Z(r\sigma) \).
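As a quick numerical check of \( \mathrm{E}(Y^r) = \exp(r\mu) M_Z(r\sigma) \), the minimal sketch below compares the integral on the left with the closed form on the right; the values of \( \mu \), \( \sigma \), and \( r \) are arbitrary illustrative choices, not from the exercise.

```python
# Numerical check of E(Y^r) = exp(r*mu) * M_Z(r*sigma).
# mu, sigma, r are illustrative values.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, r = 0.5, 0.8, 3

# Left side: integrate exp(r*sigma*z) against the standard normal density.
integral, _ = quad(lambda z: np.exp(r * sigma * z) * norm.pdf(z), -np.inf, np.inf)
lhs = np.exp(r * mu) * integral

# Right side: exp(r*mu) * M_Z(r*sigma), with M_Z(t) = exp(t**2 / 2).
rhs = np.exp(r * mu) * np.exp((r * sigma) ** 2 / 2)

print(lhs, rhs)  # the two values agree to quadrature precision
```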
02

Recognize the Moment-Generating Function of Standard Normal Distribution

The moment-generating function (MGF) for a standard normal variable \( Z \), denoted \( M_Z(t) \), is given by \[ M_Z(t) = \exp\left(\frac{t^2}{2}\right). \] Thus, for \( t = r\sigma \), we have \[ M_Z(r\sigma) = \exp\left(\frac{(r\sigma)^2}{2}\right). \] Substituting this into the expression for \( \mathrm{E}[Y^r] \), we have: \[ \mathrm{E}(Y^r) = \exp(r\mu) \cdot \exp\left(\frac{(r\sigma)^2}{2}\right) = \exp\left(r\mu + \frac{r^2\sigma^2}{2}\right). \]
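Equivalently, the closed form \( \exp(r\mu + r^2\sigma^2/2) \) can be checked by simulation. A minimal Monte Carlo sketch, with arbitrary parameter values and seed:

```python
# Monte Carlo check of E(Y^r) = exp(r*mu + r**2 * sigma**2 / 2).
import numpy as np

rng = np.random.default_rng(0)          # fixed seed, arbitrary choice
mu, sigma, r = 0.5, 0.8, 2
z = rng.standard_normal(1_000_000)
y = np.exp(mu + sigma * z)

print(np.mean(y ** r))                           # simulation estimate
print(np.exp(r * mu + r ** 2 * sigma ** 2 / 2))  # closed form
```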
03

Derive the Mean and Variance of the Log-Normal Distribution

The first moment \( \mathrm{E}(Y) \) is found by setting \( r = 1 \): \[ \mathrm{E}(Y) = \exp\left(\mu + \frac{\sigma^2}{2}\right). \] The second moment \( \mathrm{E}(Y^2) \) is found by setting \( r = 2 \): \[ \mathrm{E}(Y^2) = \exp\left(2\mu + 2\sigma^2\right). \] The variance \( \mathrm{Var}(Y) = \mathrm{E}(Y^2) - \left(\mathrm{E}(Y)\right)^2 \): \[ \mathrm{Var}(Y) = \exp\left(2\mu + 2\sigma^2\right) - \left[ \exp\left(\mu + \frac{\sigma^2}{2}\right) \right]^2 = \exp\left(2\mu + 2\sigma^2\right) - \exp\left(2\mu + \sigma^2\right). \] Simplifying gives \[ \mathrm{Var}(Y) = \exp\left(2\mu + \sigma^2\right) \left( \exp(\sigma^2) - 1 \right). \]
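A simulation sketch confirming both formulas, again with illustrative parameter values:

```python
# Compare the sample mean and variance of Y = exp(mu + sigma*Z)
# with the closed forms derived above.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.2, 0.6
y = np.exp(mu + sigma * rng.standard_normal(2_000_000))

print(y.mean(), np.exp(mu + sigma ** 2 / 2))                            # mean
print(y.var(), np.exp(2 * mu + sigma ** 2) * (np.exp(sigma ** 2) - 1))  # variance
```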
04

Show the Log-Normal Distribution Lacks an MGF

Even though all of its moments are finite, the log-normal distribution does not have an MGF, because the integral \( \int_{-\infty}^{\infty} \exp\left(t e^{\mu + \sigma z}\right) f_Z(z) \, dz \) diverges for every \( t > 0 \). The logarithm of the integrand is \( t e^{\mu + \sigma z} - z^2/2 - \tfrac{1}{2}\log(2\pi) \), and the double exponential \( t e^{\sigma z} \) eventually dominates the quadratic term \( z^2/2 \), so the integrand tends to infinity as \( z \to \infty \). Hence \( \mathrm{E}(e^{tY}) = \infty \) for all \( t > 0 \), and no MGF exists on any neighborhood of zero.
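The sketch below evaluates this log-integrand at a few points; even a very small positive \( t \) eventually dominates the Gaussian term. All numerical values are illustrative.

```python
# The integrand of E(exp(t*Y)) blows up for any t > 0: its logarithm
# t*exp(mu + sigma*z) - z**2/2 - log(2*pi)/2 tends to +infinity with z,
# so the integral diverges and no MGF exists.
import numpy as np

t, mu, sigma = 0.01, 0.0, 1.0   # even a tiny positive t fails

for z in [2.0, 5.0, 10.0, 20.0, 40.0]:
    log_integrand = t * np.exp(mu + sigma * z) - z ** 2 / 2 - 0.5 * np.log(2 * np.pi)
    print(z, log_integrand)      # dips at first, then grows without bound
```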


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Moment-Generating Function
The moment-generating function (MGF) is a powerful tool in probability theory. When it exists, its derivatives at zero yield all moments of a random variable, as the sketch after this list illustrates. The MGF of a random variable X is defined as: \[ M_X(t) = \mathrm{E}\left( e^{tX} \right). \] If this expectation is finite for all \( t \) in an open interval around zero, the MGF uniquely characterizes the distribution of X.
  • For a standard normal variable Z, the MGF, \( M_Z(t) \), is \( e^{\frac{t^2}{2}} \).
  • For log-normal variables, since \( Y = \exp(\mu + \sigma Z) \), \( Y^r = \exp(r\mu) \cdot \exp(r\sigma Z) \), and its expectation can be expressed in terms of the MGF of Z.
  • However, the log-normal distribution itself lacks an MGF, because the defining integral diverges for every \( t > 0 \): it is not finite on any neighborhood of zero.
The absence of an MGF is a distinctive feature of the log-normal distribution, which appears frequently in financial modeling.
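When an MGF does exist, differentiating it at zero recovers the moments. A minimal finite-difference sketch on \( M_Z(t) = e^{t^2/2} \), recovering \( \mathrm{E}(Z) = 0 \) and \( \mathrm{E}(Z^2) = 1 \); the step size is an arbitrary choice:

```python
# Moments from MGF derivatives: M'(0) = E(Z), M''(0) = E(Z^2),
# approximated here by central finite differences.
import numpy as np

M = lambda t: np.exp(t ** 2 / 2)   # MGF of the standard normal
h = 1e-4                           # step size, arbitrary small value

print((M(h) - M(-h)) / (2 * h))            # ~ E(Z) = 0
print((M(h) - 2 * M(0) + M(-h)) / h ** 2)  # ~ E(Z^2) = 1
```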
Expectation of Random Variables
Expectation is essentially the average value or mean of a random variable. It provides insight into the variable's central tendency.
  • The expectation of a continuous random variable Y is calculated using an integral over all possible values of Y: \[ \mathrm{E}[Y] = \int_{-\infty}^{\infty} y \cdot f_Y(y) \, dy. \]
  • For a log-normal variable Y, characterized as \( Y = \exp(\mu + \sigma Z) \), the mean can be expressed as \( \mathrm{E}(Y) = \exp(\mu + \frac{\sigma^2}{2}) \).
The expectation tells us the average value that the variable is expected to take, guiding many practical applications, like calculating returns on investments.
Variance of Log-Normal Distribution
Variance measures the spread or dispersion of a set of values around the mean. For a log-normal distribution, the variance is notably different from normal distributions.
  • The variance of Y in a log-normal distribution is calculated using the formula: \[ \mathrm{Var}(Y) = \exp(2\mu + \sigma^2) (\exp(\sigma^2) - 1). \]
  • This equation arises from subtracting the square of the mean from the second moment (\( \mathrm{E}(Y^2) \)): \[ \mathrm{Var}(Y) = \mathrm{E}(Y^2) - (\mathrm{E}(Y))^2. \]
Understanding variance is crucial as it reflects the risk or volatility inherent in predicting future outcomes of random processes, such as financial forecasts.
Properties of Distribution Functions
Distribution functions reveal the behavior of random variables and their probabilities. Although the log-normal distribution does not have a moment-generating function, it possesses several noteworthy properties:
  • Skewness: Log-normal distributions are skewed to the right, meaning they have a longer right tail compared to a normal distribution.
  • Non-existence of MGF: unlike the normal distribution, a log-normal variable has no MGF, because the integral defining \( \mathrm{E}(e^{tY}) \) diverges for every \( t > 0 \).
  • Finite moments: despite the lack of an MGF, every moment \( \mathrm{E}(Y^r) \) is finite and can be calculated, as the sketch after this list cross-checks.
These properties impact various real-world applications, from assessing time-to-failure in reliability engineering to analyzing stock prices in finance.
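These quantities can be cross-checked against scipy.stats.lognorm, whose parameterization with \( s = \sigma \) and scale \( = e^{\mu} \) matches \( Y = \exp(\mu + \sigma Z) \). The skewness closed form \( (e^{\sigma^2} + 2)\sqrt{e^{\sigma^2} - 1} \) used below is the standard one, not derived in this solution; parameter values are illustrative.

```python
# Cross-check mean, variance, and (right) skewness with scipy.stats.lognorm.
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.0, 0.5
dist = lognorm(s=sigma, scale=np.exp(mu))   # matches Y = exp(mu + sigma*Z)
mean, var, skew = dist.stats(moments='mvs')

print(mean, np.exp(mu + sigma ** 2 / 2))
print(var, np.exp(2 * mu + sigma ** 2) * (np.exp(sigma ** 2) - 1))
print(skew, (np.exp(sigma ** 2) + 2) * np.sqrt(np.exp(sigma ** 2) - 1))
```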


Most popular questions from this chapter

The Cholesky decomposition of a \(p \times p\) symmetric positive definite matrix \(\Omega\) is the unique lower triangular \(p \times p\) matrix \(L\) such that \(L L^{\mathrm{T}}=\Omega\). Find the distribution of \(\mu+L Z\), where \(Z\) is a vector containing a standard normal random sample \(Z_{1}, \ldots, Z_{p}\), and hence give an algorithm to generate from the multivariate normal distribution.
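A minimal sketch of the algorithm this exercise asks for, assuming the standard result that \( \mu + LZ \sim N_p(\mu, LL^{\mathrm{T}}) = N_p(\mu, \Omega) \); the function name rmvnorm and the example \( \Omega \) are illustrative choices:

```python
# Generate from N_p(mu, omega) via the Cholesky factor: x = mu + L @ z.
import numpy as np

def rmvnorm(n, mu, omega, rng=None):
    """Draw n samples from N_p(mu, omega) using the Cholesky decomposition."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(omega)          # lower triangular, L @ L.T == omega
    z = rng.standard_normal((n, len(mu)))  # rows of independent N(0,1) draws
    return mu + z @ L.T                    # row i equals mu + L @ z_i

omega = np.array([[2.0, 0.5], [0.5, 1.0]])
x = rmvnorm(100_000, np.array([1.0, -1.0]), omega)
print(x.mean(axis=0))   # ~ mu
print(np.cov(x.T))      # ~ omega
```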

Suppose \(Y \sim N_{p}(\mu, \Omega)\) and \(a\) and \(b\) are \(p \times 1\) vectors of constants. Find the distribution of \(X_{1}=a^{\mathrm{T}} Y\) conditional on \(X_{2}=b^{\mathrm{T}} Y=x_{2} .\) Under what circumstances does this not depend on \(x_{2} ?\)

Show how to use inversion to generate Bernoulli random variables. If \(0<\pi<1\), what distribution has \(\sum_{j=1}^{m} I\left(U_{j} \leq \pi\right) ?\)
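A sketch of the inversion step for Bernoulli variables, with illustrative \( \pi \) and \( m \): each indicator \( I(U_j \leq \pi) \) equals 1 with probability \( \pi \), and the sum of \( m \) independent indicators behaves like a Binomial\((m, \pi)\) count, as the moment checks below suggest.

```python
# Inversion for Bernoulli(pi): I(U <= pi) = 1 with probability pi.
# Summing m independent indicators gives a Binomial(m, pi) count.
import numpy as np

rng = np.random.default_rng(2)
pi, m = 0.3, 50
u = rng.uniform(size=(100_000, m))
counts = (u <= pi).sum(axis=1)           # sum of m Bernoulli(pi) indicators

print(counts.mean(), m * pi)             # binomial mean
print(counts.var(), m * pi * (1 - pi))   # binomial variance
```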

If \(X\) has density \(\lambda e^{-\lambda x}, x>0\), show that \(\operatorname{Pr}(r-1 \leq X \leq r)=e^{-\lambda(r-1)}\left(1-e^{-\lambda}\right)\). If \(Y\) has geometric density \(\operatorname{Pr}(Y=r)=\pi(1-\pi)^{r-1}\), for \(r=1,2, \ldots\) and \(0<\pi<1\), show that \(Y \stackrel{D}{=}\lceil\log U / \log (1-\pi)\rceil\). Hence give an algorithm to generate geometric variables.
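A sketch of the resulting generator, assuming the identity the exercise derives; \( \pi \) and the seed are illustrative:

```python
# Geometric variables via inversion: Y = ceil(log(U) / log(1 - pi)).
import numpy as np

rng = np.random.default_rng(3)
pi = 0.25
u = rng.uniform(size=1_000_000)
y = np.ceil(np.log(u) / np.log(1 - pi)).astype(int)

print(y.mean(), 1 / pi)       # E(Y) = 1/pi for the geometric on {1, 2, ...}
print((y == 1).mean(), pi)    # Pr(Y = 1) = pi
```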

I am uncertain about what will happen when I next roll a die, about the exact amount of money at present in my bank account, about the weather tomorrow, and about what will happen when I die. Does uncertainty mean the same thing in all these contexts? For which is variation due to repeated sampling meaningful, do you think?
