
Optimal signed-rank based methods also exist for the one-sample problem. In this exercise, we briefly discuss these methods. Let \(X_{1}, X_{2}, \ldots, X_{n}\) follow the location model $$ X_{i}=\theta+e_{i}, \quad(10.5.39) $$ where \(e_{1}, e_{2}, \ldots, e_{n}\) are iid with pdf \(f(x)\), which is symmetric about \(0\); i.e., \(f(-x)=f(x)\). (a) Show that under symmetry the optimal two-sample score function \((10.5.26)\) satisfies $$ \varphi_{f}(1-u)=-\varphi_{f}(u), \quad 0<u<1; $$ that is, \(\varphi_{f}(u)\) is an odd function about \(\frac{1}{2}\), and in particular \(\varphi_{f}\left(\frac{1}{2}\right)=0\). (b) For a two-sample score function \(\varphi(u)\) that is nondecreasing and odd about \(\frac{1}{2}\), define \(\varphi^{+}(u)=\varphi[(u+1) / 2]\). Show that \(\varphi^{+}(u) \geq 0\). (c) For the hypotheses \(H_{0}: \theta=0\) versus \(H_{1}: \theta>0\), consider the test statistic $$ W_{\varphi^{+}}=\sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right) a^{+}\left(R\left|X_{i}\right|\right), $$ where \(a^{+}(i)=\varphi^{+}[i /(n+1)]\) and \(R\left|X_{i}\right|\) denotes the rank of \(\left|X_{i}\right|\) among \(\left|X_{1}\right|, \ldots,\left|X_{n}\right|\). (d) Show that if \(\varphi(u)=2u-1\), then \(W_{\varphi^{+}}\) is a linear function of the signed-rank test statistic, while if \(\varphi(u)=\operatorname{sgn}(2u-1)\), then \(W_{\varphi^{+}}\) is a linear function of the sign test statistic. (e) Our decision rule for the statistic \(W_{\varphi^{+}}\) is to reject \(H_{0}\) in favor of \(H_{1}\) if \(W_{\varphi^{+}} \geq k\), for some \(k\). Write \(W_{\varphi^{+}}\) in terms of the anti-ranks, \((10.3.5)\). Show that \(W_{\varphi^{+}}\) is distribution-free under \(H_{0}\). (f) Determine the mean and variance of \(W_{\varphi^{+}}\) under \(H_{0}\). (g) Assuming that, when properly standardized, the null distribution is asymptotically normal, determine the asymptotic test.

Short Answer

The signed-rank approach is a nonparametric method for the one-sample location problem that does not require the error distribution to be known. In this exercise it is shown that, under symmetry, the optimal two-sample score function is odd about \(\frac{1}{2}\). This odd function is used to define nonnegative one-sample scores and a score statistic \(W_{\varphi^{+}}\), which reduces to the signed-rank and sign test statistics for particular score choices. Finally, the statistic is shown to be distribution-free under the null hypothesis, and its null mean, variance, and asymptotic distribution are derived.

Step by step solution

01

Proof that the optimal two-sample score function is an odd function

We have to prove that under symmetry the optimal two-sample score function satisfies \(\varphi_{f}(1-u)=-\varphi_{f}(u)\) for \(0<u<1\). The optimal score function (10.5.26) is \(\varphi_{f}(u)=-f'\left(F^{-1}(u)\right) / f\left(F^{-1}(u)\right)\), where \(F\) is the cdf associated with \(f\). Symmetry of \(f\) about \(0\) gives \(F(-x)=1-F(x)\), and hence \(F^{-1}(1-u)=-F^{-1}(u)\); it also gives \(f(-x)=f(x)\) and \(f'(-x)=-f'(x)\). Therefore $$ \varphi_{f}(1-u)=-\frac{f'\left(-F^{-1}(u)\right)}{f\left(-F^{-1}(u)\right)}=\frac{f'\left(F^{-1}(u)\right)}{f\left(F^{-1}(u)\right)}=-\varphi_{f}(u), $$ so \(\varphi_{f}(u)\) is an odd function about \(u=\frac{1}{2}\). Setting \(u=\frac{1}{2}\) in this identity gives \(\varphi_{f}\left(\frac{1}{2}\right)=-\varphi_{f}\left(\frac{1}{2}\right)\), so \(\varphi_{f}\left(\frac{1}{2}\right)=0\), as both halves of the symmetric distribution mirror each other.
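As a quick numerical sanity check, the Python sketch below verifies the odd-function identity for two symmetric error distributions, assuming NumPy and SciPy are available; the closed-form score functions used (normal scores for normal errors, Wilcoxon scores for logistic errors) are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

# Numerical check of part (a): phi_f(1 - u) = -phi_f(u) and phi_f(1/2) = 0.

def phi_f_normal(u):
    # -f'(F^{-1}(u)) / f(F^{-1}(u)) = F^{-1}(u) when f is the standard normal pdf
    return norm.ppf(u)

def phi_f_logistic(u):
    # -f'(F^{-1}(u)) / f(F^{-1}(u)) = 2u - 1 when f is the standard logistic pdf
    return 2.0 * u - 1.0

u = np.linspace(0.01, 0.99, 99)
for phi in (phi_f_normal, phi_f_logistic):
    assert np.allclose(phi(1.0 - u), -phi(u))
    assert np.isclose(phi(0.5), 0.0)
print("phi_f(1 - u) = -phi_f(u) and phi_f(1/2) = 0 hold numerically.")
```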
02

Proof of the non-negativity of the one-sample score function

We need to prove that if \(\varphi(u)\) is a nondecreasing odd function about \(\frac{1}{2}\), then \(\varphi^{+}(u)=\varphi[(u+1) / 2] \geq 0\). Since \(\varphi\) is odd about \(\frac{1}{2}\), we have \(\varphi\left(\frac{1}{2}\right)=-\varphi\left(\frac{1}{2}\right)\), so \(\varphi\left(\frac{1}{2}\right)=0\). Because \(\varphi\) is nondecreasing, \(\varphi(u) \leq 0\) for \(u \leq \frac{1}{2}\) and \(\varphi(u) \geq 0\) for \(u \geq \frac{1}{2}\). For \(0<u<1\), the argument \((u+1) / 2\) lies between \(\frac{1}{2}\) and \(1\), hence \(\varphi^{+}(u)=\varphi[(u+1) / 2] \geq 0\).
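A minimal numeric check of this inequality, again assuming NumPy/SciPy and using the Wilcoxon and normal score functions as examples:

```python
import numpy as np
from scipy.stats import norm

# Check of part (b): phi_plus(u) = phi[(u + 1)/2] >= 0 on (0, 1) for
# nondecreasing scores that are odd about 1/2.

def phi_plus(phi, u):
    return phi((u + 1.0) / 2.0)

u = np.linspace(0.001, 0.999, 999)
wilcoxon = lambda v: 2.0 * v - 1.0      # phi(u) = 2u - 1
normal_scores = norm.ppf                # phi(u) = Phi^{-1}(u)

assert np.all(phi_plus(wilcoxon, u) >= 0)
assert np.all(phi_plus(normal_scores, u) >= 0)
print("phi_plus(u) >= 0 on (0, 1) for both score functions.")
```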
03

Reduction of the score statistic to two well-known test statistics

We have to show that \(W_{\varphi^{+}}\) reduces to a linear function of the signed-rank test statistic and of the sign test statistic under two different score choices. First, let \(\varphi(u)=2u-1\) (the Wilcoxon score). Then \(\varphi^{+}(u)=\varphi[(u+1) / 2]=u\), so the scores are \(a^{+}(i)=\varphi^{+}[i /(n+1)]=i /(n+1)\) and $$ W_{\varphi^{+}}=\sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right) \frac{R\left|X_{i}\right|}{n+1}=\frac{1}{n+1} \sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right) R\left|X_{i}\right|, $$ where \(R\left|X_{i}\right|\) is the rank of \(\left|X_{i}\right|\). Since \(\sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right) R\left|X_{i}\right|=2 \sum_{X_{i}>0} R\left|X_{i}\right|-n(n+1) / 2\), \(W_{\varphi^{+}}\) is a linear function of the Wilcoxon signed-rank statistic. Now consider \(\varphi(u)=\operatorname{sgn}(2u-1)\). Then \(\varphi^{+}(u)=\operatorname{sgn}(u)=1\) for \(0<u<1\), so \(a^{+}(i)=1\) for every \(i\) and $$ W_{\varphi^{+}}=\sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right)=2 \#\left\{X_{i}>0\right\}-n, $$ which is a linear function of the sign test statistic.
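The sketch below illustrates both reductions on a small simulated sample; the data, seed, and the use of scipy.stats.rankdata are illustrative assumptions rather than part of the solution.

```python
import numpy as np
from scipy.stats import rankdata

# Illustration of part (d): Wilcoxon scores a+(i) = i/(n+1) make W_{phi+} a linear
# function of the signed-rank statistic; sign scores a+(i) = 1 give the sign statistic.

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=15)
n = len(x)

r_abs = rankdata(np.abs(x))                 # ranks of |X_i|
signs = np.sign(x)

# Wilcoxon scores: phi(u) = 2u - 1  =>  phi+(u) = u  =>  a+(i) = i/(n + 1)
w_wilcoxon = np.sum(signs * r_abs / (n + 1))
w_plus = np.sum(r_abs[x > 0])               # classical signed-rank statistic W+
assert np.isclose(w_wilcoxon, (2 * w_plus - n * (n + 1) / 2) / (n + 1))

# Sign scores: phi(u) = sgn(2u - 1)  =>  phi+(u) = 1  =>  a+(i) = 1
w_sign = np.sum(signs)
s = np.sum(x > 0)                           # sign statistic: number of positive X_i
assert np.isclose(w_sign, 2 * s - n)
print(w_wilcoxon, w_sign)
```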
04

Writing the statistic in terms of anti-ranks and proof that it is distribution-free

Expression (10.3.5) defines the anti-ranks \(D_{1}, D_{2}, \ldots, D_{n}\) as the indices that order the absolute values, i.e., \(\left|X_{D_{1}}\right| \leq\left|X_{D_{2}}\right| \leq \cdots \leq\left|X_{D_{n}}\right|\), so that \(R\left|X_{D_{j}}\right|=j\). Re-indexing the sum \(W_{\varphi^{+}}=\sum_{i=1}^{n} \operatorname{sgn}\left(X_{i}\right) a^{+}\left(R\left|X_{i}\right|\right)\) by the anti-ranks gives $$ W_{\varphi^{+}}=\sum_{j=1}^{n} a^{+}(j) \operatorname{sgn}\left(X_{D_{j}}\right). $$ Under \(H_{0}: \theta=0\) the errors are symmetric about \(0\), so \(\operatorname{sgn}\left(X_{1}\right), \ldots, \operatorname{sgn}\left(X_{n}\right)\) are iid, taking the values \(\pm 1\) with probability \(\frac{1}{2}\) each, and they are independent of \(\left|X_{1}\right|, \ldots,\left|X_{n}\right|\) and hence of the anti-ranks. Consequently the distribution of \(W_{\varphi^{+}}\) depends only on the scores \(a^{+}(1), \ldots, a^{+}(n)\) and not on the underlying pdf \(f\), so \(W_{\varphi^{+}}\) is distribution-free under \(H_{0}\).
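A small Monte Carlo sketch can illustrate the distribution-free property: the simulated null distribution of \(W_{\varphi^{+}}\) has essentially the same quantiles whether the symmetric errors are normal or Laplace. Wilcoxon scores, the sample size, seed, and replication count are all illustrative choices.

```python
import numpy as np
from scipy.stats import rankdata

# Illustration of part (e): the null distribution of W_{phi+} does not depend
# on the (symmetric) error pdf.

def w_phi_plus(x):
    n = len(x)
    return np.sum(np.sign(x) * rankdata(np.abs(x)) / (n + 1))

rng = np.random.default_rng(1)
n, reps = 10, 20000
null_normal = np.array([w_phi_plus(rng.normal(size=n)) for _ in range(reps)])
null_laplace = np.array([w_phi_plus(rng.laplace(size=n)) for _ in range(reps)])

qs = [0.05, 0.25, 0.50, 0.75, 0.95]
print(np.quantile(null_normal, qs))
print(np.quantile(null_laplace, qs))   # agrees up to Monte Carlo error
```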
05

Finding the mean and variance of the score statistic under the null hypothesis

Using the anti-rank representation, under \(H_{0}\) (where \(\theta=0\)) the variables \(\operatorname{sgn}\left(X_{D_{j}}\right)\), \(j=1, \ldots, n\), are iid with \(P\left(\operatorname{sgn}\left(X_{D_{j}}\right)=1\right)=P\left(\operatorname{sgn}\left(X_{D_{j}}\right)=-1\right)=\frac{1}{2}\), so each has mean \(0\) and variance \(1\). Therefore $$ E_{H_{0}}\left[W_{\varphi^{+}}\right]=\sum_{j=1}^{n} a^{+}(j) E\left[\operatorname{sgn}\left(X_{D_{j}}\right)\right]=0 $$ and, by the independence of the terms, $$ \operatorname{Var}_{H_{0}}\left[W_{\varphi^{+}}\right]=\sum_{j=1}^{n}\left[a^{+}(j)\right]^{2}=\sum_{j=1}^{n}\left[\varphi^{+}\left(\frac{j}{n+1}\right)\right]^{2}. $$
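These formulas can be checked by brute force for a small \(n\): the sketch below enumerates all \(2^{n}\) equally likely sign patterns, with Wilcoxon scores as an illustrative choice.

```python
import itertools
import numpy as np

# Check of part (f): under H0, E[W_{phi+}] = 0 and Var[W_{phi+}] = sum_j a+(j)^2.

n = 8
a_plus = np.arange(1, n + 1) / (n + 1)              # a+(j) = j/(n + 1), Wilcoxon scores
values = [np.dot(s, a_plus) for s in itertools.product([-1, 1], repeat=n)]

assert np.isclose(np.mean(values), 0.0)
assert np.isclose(np.var(values), np.sum(a_plus ** 2))
print(np.mean(values), np.var(values), np.sum(a_plus ** 2))
```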
06

Finding the asymptotic test

Since, under \(H_{0}\), \(W_{\varphi^{+}}\) is a sum of independent, bounded terms, a version of the Central Limit Theorem implies that the standardized statistic $$ Z=\frac{W_{\varphi^{+}}}{\sqrt{\sum_{j=1}^{n}\left[a^{+}(j)\right]^{2}}} $$ is asymptotically \(N(0,1)\) under \(H_{0}\). The asymptotic level-\(\alpha\) test therefore rejects \(H_{0}: \theta=0\) in favor of \(H_{1}: \theta>0\) when \(Z \geq z_{\alpha}\), where \(z_{\alpha}\) is the upper \(\alpha\) critical value of the standard normal distribution.
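A minimal implementation sketch of this asymptotic test, with Wilcoxon scores and a simulated sample as illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, rankdata

# Sketch of part (g): reject H0: theta = 0 in favor of H1: theta > 0 when
# Z = W_{phi+} / sqrt(sum_j a+(j)^2) >= z_alpha.

def asymptotic_signed_rank_test(x, alpha=0.05):
    n = len(x)
    a_plus = np.arange(1, n + 1) / (n + 1)                      # Wilcoxon scores
    w = np.sum(np.sign(x) * rankdata(np.abs(x)) / (n + 1))      # W_{phi+}
    z = w / np.sqrt(np.sum(a_plus ** 2))
    return z, z >= norm.ppf(1.0 - alpha)

rng = np.random.default_rng(2)
x = rng.normal(loc=0.5, size=30)
z, reject = asymptotic_signed_rank_test(x)
print(f"Z = {z:.3f}, reject H0: {reject}")
```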

