
Let \(f\left( {{x_1},{x_2}} \right) = cg\left( {{x_1},{x_2}} \right)\) be a joint p.d.f. for \(\left( {{X_1},{X_2}} \right)\). For each \({x_2}\), let \({h_2}\left( {{x_1}} \right) = g\left( {{x_1},{x_2}} \right)\); that is, \({h_2}\) is what we get by considering \(g\left( {{x_1},{x_2}} \right)\) as a function of \({x_1}\) for fixed \({x_2}\). Show that there is a multiplicative factor \({c_2}\) that does not depend on \({x_1}\) such that \({c_2}{h_2}\left( {{x_1}} \right)\) is the conditional p.d.f. of \({X_1}\) given \({X_2} = {x_2}\).

Short Answer

Expert verified

For each fixed \({x_2}\), the conditional p.d.f. of \({X_1}\) given \({X_2} = {x_2}\) is

\({g_1}\left( {{x_1}\mid {x_2}} \right) = \frac{{f\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}} = \frac{{cg\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}} = {c_2}{h_2}\left( {{x_1}} \right),\)

where

\({h_2}\left( {{x_1}} \right) = g\left( {{x_1},{x_2}} \right)\quad {\rm{and}}\quad {c_2} = \frac{c}{{{f_2}\left( {{x_2}} \right)}},\)

and \({c_2}\) does not depend on \({x_1}\).

Step by step solution

01

Definition of the conditional probability density function

A random variable's probability distribution is updated by conditioning on available information, which gives rise to a conditional probability distribution. This distribution can be characterized by a conditional probability density function.

Let \({f_2}\left( {{x_2}} \right)\) denote the marginal p.d.f. of \({X_2}\). The conditional probability density function of \({X_1}\), given that \({X_2} = {x_2}\), is

\({g_1}\left( {{x_1}\mid {x_2}} \right) = \frac{{f\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}} = \frac{{cg\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}}.\)

Here \({x_2}\) is fixed, and for each such \({x_2}\),

\({h_2}\left( {{x_1}} \right) = g\left( {{x_1},{x_2}} \right),\)

so the conditional p.d.f. of \({X_1}\) given \({X_2} = {x_2}\) can be written in terms of \({h_2}\).

02

Conditional probability

Substituting \({h_2}\left( {{x_1}} \right) = g\left( {{x_1},{x_2}} \right)\) for fixed \({x_2}\), the conditional probability density function of \({X_1}\), given that \({X_2} = {x_2}\), becomes

\({g_1}\left( {{x_1}\mid {x_2}} \right) = \frac{{cg\left( {{x_1},{x_2}} \right)}}{{{f_2}\left( {{x_2}} \right)}} = {c_2}{h_2}\left( {{x_1}} \right),\)

where

\({c_2} = \frac{c}{{{f_2}\left( {{x_2}} \right)}}.\)

Since neither \(c\) nor \({f_2}\left( {{x_2}} \right)\) involves \({x_1}\), the factor \({c_2}\) does not depend on \({x_1}\).

Hence, \({c_2}{h_2}\left( {{x_1}} \right)\) is the conditional p.d.f. of \({X_1}\) given \({X_2} = {x_2}\), with

\({c_2} = \frac{c}{{{f_2}\left( {{x_2}} \right)}}.\)
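As a quick numerical check of this identity, the sketch below uses a hypothetical unnormalized density \(g\left( {{x_1},{x_2}} \right) = {e^{ - {x_1}{x_2} - {x_1} - {x_2}}}\) on \({\left( {0,\infty } \right)^2}\) (our choice for illustration, not from the text; the fixed value \({x_2} = 0.7\) is arbitrary) and verifies that \({c_2}{h_2}\) integrates to 1 in \({x_1}\):

```python
# Numerical check that c2 * h2(x1) is a proper conditional p.d.f.
# The unnormalized density g is a hypothetical example, not from the text.
import numpy as np
from scipy.integrate import quad, dblquad

def g(x1, x2):
    # Unnormalized joint density on (0, inf) x (0, inf).
    return np.exp(-x1 * x2 - x1 - x2)

# Normalizing constant c, so that f = c * g integrates to 1.
total, _ = dblquad(lambda x2, x1: g(x1, x2), 0, np.inf, 0, np.inf)
c = 1.0 / total

x2 = 0.7  # fix a value of x2

# Marginal p.d.f. of X2 at x2: f2(x2) = integral of c * g(x1, x2) dx1.
f2, _ = quad(lambda x1: c * g(x1, x2), 0, np.inf)

# c2 = c / f2(x2) normalizes h2(x1) = g(x1, x2) over x1.
c2 = c / f2
mass, _ = quad(lambda x1: c2 * g(x1, x2), 0, np.inf)
print(mass)  # ~1.0, so c2 * h2 is the conditional p.d.f. of X1 given x2
```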


Most popular questions from this chapter

Use the data in Table 10.6 on page 640. We are interested in the bias of the sample median as an estimator of the median of the distribution.

a. Use the non-parametric bootstrap to estimate this bias.

b. How many bootstrap samples does it appear that you need in order to estimate the bias to within 0.05 with probability 0.99?
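A minimal sketch of part (a), assuming a placeholder sample in place of the Table 10.6 data (which are not reproduced here; the `rng.gamma` draw merely stands in for the real values):

```python
# Nonparametric bootstrap estimate of the bias of the sample median.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder sample; replace with the actual values from Table 10.6.
data = rng.gamma(shape=2.0, scale=1.0, size=25)

n_boot = 10_000
sample_median = np.median(data)

# Draw bootstrap samples with replacement and record each median.
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(n_boot)
])

# Bootstrap bias estimate: mean of bootstrap medians minus sample median.
bias_hat = boot_medians.mean() - sample_median
print(bias_hat)
```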

In Sec. 10.2, we discussed \({\chi ^2}\) goodness-of-fit tests for composite hypotheses. These tests required computing M.L.E.'s based on the numbers of observations that fell into the different intervals used for the test. Suppose instead that we use the M.L.E.'s based on the original observations. In this case, we claimed that the asymptotic distribution of the \({\chi ^2}\) test statistic was somewhere between two different \({\chi ^2}\) distributions. We can use simulation to better approximate the distribution of the test statistic. In this exercise, assume that we are trying to test the same hypotheses as in Example 10.2.5, although the methods will apply in all such cases.

a. Simulate \(v = 1000\) samples of size \(n = 23\) from each of 10 different normal distributions. Let the normal distributions have means of \(3.8, 3.9, 4.0, 4.1,\) and \(4.2\). Let the distributions have variances of 0.25 and 0.8. Use all 10 combinations of mean and variance. For each simulated sample, compute the \({\chi ^2}\) statistic Q using the usual M.L.E.'s of \(\mu \) and \({\sigma ^2}\). For each of the 10 normal distributions, estimate the 0.9, 0.95, and 0.99 quantiles of the distribution of Q.

b. Do the quantiles change much as the distribution of the data changes?

c. Consider the test that rejects the null hypothesis if \(Q \ge 5.2.\) Use simulation to estimate the power function of this test at the following alternative: For each \(i,\left( {{X_i} - 3.912} \right)/0.5\) has the t distribution with five degrees of freedom.
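A minimal sketch of part (a), assuming hypothetical interval boundaries (the actual intervals of Example 10.2.5 are not reproduced here) and using `scipy.stats.norm` for the cell probabilities:

```python
# Simulate the chi-square statistic Q when cell probabilities come from
# the raw-data MLEs. The cut points below are hypothetical stand-ins
# for the interval boundaries of Example 10.2.5.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
cuts = np.array([3.575, 3.912, 4.249])  # hypothetical interval boundaries
n, v = 23, 1000

def chi_sq_Q(x):
    mu_hat, sigma_hat = x.mean(), x.std()  # MLEs (std uses divisor n)
    cdf = norm.cdf(cuts, loc=mu_hat, scale=sigma_hat)
    p = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # cell probabilities
    counts = np.bincount(np.searchsorted(cuts, x), minlength=cuts.size + 1)
    return np.sum((counts - n * p) ** 2 / (n * p))

for mean in (3.8, 3.9, 4.0, 4.1, 4.2):
    for var in (0.25, 0.8):
        Q = np.array([chi_sq_Q(rng.normal(mean, var ** 0.5, n))
                      for _ in range(v)])
        print(mean, var, np.quantile(Q, [0.9, 0.95, 0.99]))
```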

Let \(U\) have the uniform distribution on the interval \((0,1)\). Show that the random variable \(W\) defined in Eq. (12.4.6) has the p.d.f. \(h\) defined in Eq. (12.4.5).

The method of antithetic variates is a technique for reducing the variance of simulation estimators. Antithetic variates are negatively correlated random variables with the same mean and variance. The variance of the average of two antithetic variates is smaller than the variance of the average of two i.i.d. variables. In this exercise, we shall see how to use antithetic variates for importance sampling, but the method is very general. Suppose that we wish to compute \(\int {g\left( x \right)\,dx} \), and we wish to use the importance function \(f\). Suppose that we generate pseudo-random variables with the p.d.f. \(f\) using the probability integral transformation: for \(i = 1,2, \ldots ,\nu \), let \({X^{\left( i \right)}} = {F^{ - 1}}\left( {{U^{\left( i \right)}}} \right)\), where \({U^{\left( i \right)}}\) has the uniform distribution on the interval (0, 1) and \(F\) is the c.d.f. corresponding to the p.d.f. \(f\). For each \(i = 1,2, \ldots ,\nu \), define

\(\begin{aligned}{l}{T^{\left( i \right)}} = {F^{ - 1}}\left( {1 - {U^{\left( i \right)}}} \right),\\{W^{\left( i \right)}} = \frac{{g\left( {{X^{\left( i \right)}}} \right)}}{{f\left( {{X^{\left( i \right)}}} \right)}},\\{V^{\left( i \right)}} = \frac{{g\left( {{T^{\left( i \right)}}} \right)}}{{f\left( {{T^{\left( i \right)}}} \right)}},\\{Y^{\left( i \right)}} = 0.5\left( {{W^{\left( i \right)}} + {V^{\left( i \right)}}} \right).\end{aligned}\)

Our estimator of \(\int {g\left( x \right)\,dx} \) is then \(Z = \frac{1}{\nu }\sum\nolimits_{i = 1}^\nu {{Y^{\left( i \right)}}} \).

a. Prove that \({T^{\left( i \right)}}\) has the same distribution as \({X^{\left( i \right)}}\).

b. Prove that \(E\left( Z \right) = \int {g\left( x \right)\,dx} \).

c. If \(g\left( x \right)/f\left( x \right)\) is a monotone function, explain why we expect \({W^{\left( i \right)}}\) and \({V^{\left( i \right)}}\) to be negatively correlated.

d. If \({W^{\left( i \right)}}\) and \({V^{\left( i \right)}}\) are negatively correlated, show that \({\rm{Var}}\left( Z \right)\) is less than the variance one would get with \(2\nu \) simulations without antithetic variates.
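A minimal sketch of the construction, with illustrative choices of our own: \(g\left( x \right) = x{e^{ - x}}\) on \(\left( {0,\infty } \right)\) (true integral 1) and importance p.d.f. \(f\left( x \right) = {e^{ - x}}\), whose inverse c.d.f. is \({F^{ - 1}}\left( u \right) = - \log \left( {1 - u} \right)\). Note that \(g/f = x\) is monotone, so part (c) applies:

```python
# Antithetic importance sampling for the integral of g(x) dx.
# Example choices (ours): g(x) = x * exp(-x), f(x) = exp(-x) on (0, inf).
import numpy as np

rng = np.random.default_rng(0)
nu = 100_000

g = lambda x: x * np.exp(-x)
f = lambda x: np.exp(-x)
F_inv = lambda u: -np.log1p(-u)   # inverse exponential c.d.f.

U = rng.uniform(size=nu)
X = F_inv(U)          # X^(i) = F^{-1}(U^(i))
T = F_inv(1.0 - U)    # T^(i) = F^{-1}(1 - U^(i)), the antithetic draw

W = g(X) / f(X)
V = g(T) / f(T)
Y = 0.5 * (W + V)

Z = Y.mean()
print(Z)                          # close to the true integral, 1
print(np.corrcoef(W, V)[0, 1])    # negative: the pairing reduces variance
```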

If \(X\) has the p.d.f. \(1/{x^2}\) for \(x > 1\), the mean of \(X\) is infinite. What would you expect to happen if you simulated a large number of random variables with this p.d.f. and computed their average?
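A minimal sketch of that experiment: for this p.d.f., \(F\left( x \right) = 1 - 1/x\), so \(X = {F^{ - 1}}\left( U \right) = 1/\left( {1 - U} \right)\) by the probability integral transformation. The running average keeps jumping upward after occasional huge draws instead of converging:

```python
# Simulate from the p.d.f. 1/x^2 on (1, inf) via the inverse c.d.f.
# and watch the running average fail to settle: the mean is infinite.
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=10_000_000)
X = 1.0 / (1.0 - U)   # F(x) = 1 - 1/x  =>  F^{-1}(u) = 1/(1 - u)

running_avg = np.cumsum(X) / np.arange(1, X.size + 1)
for k in (10**3, 10**4, 10**5, 10**6, 10**7):
    print(k, running_avg[k - 1])   # drifts upward, no convergence
```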
