Chapter 10: Problem 8
Let \(X\) be a continuous random variable with cdf \(F(x)\). Suppose \(Y=X+\Delta\), where \(\Delta>0\). Show that \(Y\) is stochastically larger than \(X\).
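One way to see this, assuming the usual definition that \(Y\) is stochastically larger than \(X\) when \(P(Y>t) \geq P(X>t)\) for all \(t\), with strict inequality for some \(t\): writing \(F_{Y}\) for the cdf of \(Y\),
$$
F_{Y}(t)=P(X+\Delta \leq t)=F(t-\Delta) \leq F(t) \quad \text { for all } t,
$$
since \(F\) is nondecreasing and \(\Delta>0\). Hence \(P(Y>t)=1-F(t-\Delta) \geq 1-F(t)=P(X>t)\) for every \(t\), with strict inequality at any \(t\) for which \(F(t-\Delta)<F(t)\); such a \(t\) exists because \(F\) is continuous and increases from \(0\) to \(1\).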
Suppose \(X\) is a random variable with mean 0 and variance \(\sigma^{2}\). Recall that the function \(F_{X, \epsilon}(t)\) is the cdf of the random variable \(U=I_{1-\epsilon} X+\left[1-I_{1-\epsilon}\right] W\), where \(X\), \(I_{1-\epsilon}\), and \(W\) are independent random variables, \(X\) has cdf \(F_{X}(t)\), \(W\) has cdf \(\Delta_{x}(t)\), and \(I_{1-\epsilon}\) has a binomial \((1,1-\epsilon)\) distribution. Define the functional \(\operatorname{Var}\left(F_{X}\right)=\operatorname{Var}(X)=\sigma^{2}\). Note that the functional at the contaminated cdf \(F_{X, \epsilon}(t)\) is the variance of the random variable \(U=I_{1-\epsilon} X+\left[1-I_{1-\epsilon}\right] W\). To derive the influence function of the variance, perform the following steps:
(a) Show that \(E(U)=\epsilon x\).
(b) Show that \(\operatorname{Var}(U)=(1-\epsilon) \sigma^{2}+\epsilon x^{2}-\epsilon^{2} x^{2}\).
(c) Obtain the partial derivative of the right side of this equation with respect to \(\epsilon\). This is the influence function.
Hint: Because \(I_{1-\epsilon}\) is a Bernoulli random variable, \(I_{1-\epsilon}^{2}=I_{1-\epsilon}\). Why?
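A possible worked outline for parts (a)–(c), taking \(\Delta_{x}(t)\) to be the cdf of the point mass at \(x\) (so \(E(W)=x\) and \(E(W^{2})=x^{2}\)) and using the hint, which gives \(\left[1-I_{1-\epsilon}\right]^{2}=1-I_{1-\epsilon}\) and \(I_{1-\epsilon}\left[1-I_{1-\epsilon}\right]=0\):
$$
\begin{aligned}
E(U) &=E\left(I_{1-\epsilon}\right) E(X)+E\left(1-I_{1-\epsilon}\right) E(W)=(1-\epsilon) \cdot 0+\epsilon x=\epsilon x, \\
E\left(U^{2}\right) &=E\left(I_{1-\epsilon}\right) E\left(X^{2}\right)+E\left(1-I_{1-\epsilon}\right) E\left(W^{2}\right)=(1-\epsilon) \sigma^{2}+\epsilon x^{2}, \\
\operatorname{Var}(U) &=E\left(U^{2}\right)-[E(U)]^{2}=(1-\epsilon) \sigma^{2}+\epsilon x^{2}-\epsilon^{2} x^{2}, \\
\frac{\partial}{\partial \epsilon} \operatorname{Var}(U) &=-\sigma^{2}+x^{2}-2 \epsilon x^{2}, \quad \text { which at } \epsilon=0 \text { equals } x^{2}-\sigma^{2} .
\end{aligned}
$$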
Consider the rank correlation coefficient given by \(r_{q c}\) in part (c) of
Exercise 10.8.5. Let \(Q_{2 X}\) and \(Q_{2 Y}\) denote the medians of the samples
\(X_{1}, \ldots, X_{n}\) and \(Y_{1}, \ldots, Y_{n}\), respectively. Now consider
the four quadrants:
$$
\begin{aligned}
I &=\left\{(x, y): x>Q_{2 X},\ y>Q_{2 Y}\right\}, \\
II &=\left\{(x, y): x<Q_{2 X},\ y>Q_{2 Y}\right\}, \\
III &=\left\{(x, y): x<Q_{2 X},\ y<Q_{2 Y}\right\}, \\
IV &=\left\{(x, y): x>Q_{2 X},\ y<Q_{2 Y}\right\} .
\end{aligned}
$$
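A counting fact that may help here (assuming \(n\) is even and, as for continuous data, no observation equals its sample median): exactly \(n / 2\) of the pairs have \(x_{i}>Q_{2 X}\) and exactly \(n / 2\) have \(y_{i}>Q_{2 Y}\). So if \(n_{I}, n_{II}, n_{III}, n_{IV}\) denote the numbers of pairs \(\left(X_{i}, Y_{i}\right)\) falling in the four quadrants, then
$$
n_{I}+n_{IV}=\frac{n}{2}=n_{I}+n_{II}, \qquad n_{I}+n_{II}=\frac{n}{2}=n_{II}+n_{III},
$$
so that \(n_{II}=n_{IV}\) and \(n_{I}=n_{III}\).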
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample that follows the location model (10.2.1). In this exercise we want to compare the sign test and the \(t\)-test of the hypotheses (10.2.2); so we assume the random errors \(\varepsilon_{i}\) are symmetrically distributed about \(0\). Let \(\sigma^{2}=\operatorname{Var}\left(\varepsilon_{i}\right)\). Hence the mean and the median are the same for this location model. Assume, also, that \(\theta_{0}=0\). Consider the large sample version of the \(t\)-test, which rejects \(H_{0}\) in favor of \(H_{1}\) if \(\bar{X} /(\sigma / \sqrt{n})>z_{\alpha}\).
(a) Obtain the power function, \(\gamma_{t}(\theta)\), of the large sample version of the \(t\)-test.
(b) Show that \(\gamma_{t}(\theta)\) is nondecreasing in \(\theta\).
(c) Show that \(\gamma_{t}\left(\theta_{n}\right) \rightarrow 1-\Phi\left(z_{\alpha}-\theta^{*} / \sigma\right)\) under the sequence of local alternatives (10.2.13).
(d) Based on part (c), obtain the sample size determination for the \(t\)-test to detect \(\theta^{*}\) with approximate power \(\gamma^{*}\).
(e) Derive the \(\operatorname{ARE}(S, t)\) given in (10.2.27).
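A brief sketch of parts (a) and (d), assuming the local alternatives in (10.2.13) take the form \(\theta_{n}=\theta^{*} / \sqrt{n}\) and writing \(z_{\gamma^{*}}=\Phi^{-1}\left(1-\gamma^{*}\right)\):
$$
\begin{aligned}
\gamma_{t}(\theta) &=P_{\theta}\left(\frac{\bar{X}}{\sigma / \sqrt{n}}>z_{\alpha}\right)=P_{\theta}\left(\frac{\bar{X}-\theta}{\sigma / \sqrt{n}}>z_{\alpha}-\frac{\sqrt{n}\, \theta}{\sigma}\right) \approx 1-\Phi\left(z_{\alpha}-\frac{\sqrt{n}\, \theta}{\sigma}\right), \\
\gamma^{*} & \approx 1-\Phi\left(z_{\alpha}-\frac{\sqrt{n}\, \theta^{*}}{\sigma}\right) \Longrightarrow z_{\alpha}-\frac{\sqrt{n}\, \theta^{*}}{\sigma}=z_{\gamma^{*}} \Longrightarrow n \approx\left(\frac{\left(z_{\alpha}-z_{\gamma^{*}}\right) \sigma}{\theta^{*}}\right)^{2} .
\end{aligned}
$$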
Prove that a pdf (or pmf) \(f(x)\) is symmetric about 0 if and only if its mgf is symmetric about 0, provided the mgf exists.
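One possible outline, assuming the mgf \(M(t)=E\left(e^{t X}\right)\) exists in an open interval about \(0\): since
$$
M(-t)=E\left(e^{-t X}\right)=E\left(e^{t(-X)}\right)=M_{-X}(t),
$$
\(M\) is symmetric about \(0\) (i.e., \(M(-t)=M(t)\) for all \(t\)) if and only if \(X\) and \(-X\) have the same mgf, which by the uniqueness of mgfs holds if and only if \(X\) and \(-X\) have the same distribution, i.e., if and only if \(f\) is symmetric about \(0\).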
Suppose the random variable \(e\) has cdf \(F(t)\). Let \(\varphi(u)=\sqrt{12}[u-(1 / 2)]\), \(0<u<1\).
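For reference, this \(\varphi\) is the (standardized) Wilcoxon score function; a quick check of the standardization, in case it is needed:
$$
\int_{0}^{1} \varphi(u)\, d u=\sqrt{12} \int_{0}^{1}\left(u-\frac{1}{2}\right) d u=0, \qquad \int_{0}^{1} \varphi^{2}(u)\, d u=12 \int_{0}^{1}\left(u-\frac{1}{2}\right)^{2} d u=1 .
$$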