
Consider the location model (10.3.35). Assume that the pdf of the random errors, \(f(x)\), is symmetric about \(0\). Let \(\widehat{\theta}\) be a location estimator of \(\theta\). Assume that \(E\left(\widehat{\theta}^{4}\right)\) exists. (a) Show that \(\widehat{\theta}\) is an unbiased estimator of \(\theta\). Hint: Assume without loss of generality that \(\theta=0\); start with \(E(\widehat{\theta})=E\left[\widehat{\theta}\left(X_{1}, \ldots, X_{n}\right)\right]\); and use the fact that \(X_{i}\) is symmetrically distributed about \(0\). (b) As in Section 10.3.4, suppose we generate \(n_{s}\) independent samples of size \(n\) from the pdf \(f(x)\), which is symmetric about \(0\). For the \(i\)th sample, let \(\widehat{\theta}_{i}\) be the estimate of \(\theta\). Show that \(n_{s}^{-1} \sum_{i=1}^{n_{s}} \widehat{\theta}_{i}^{2} \rightarrow V(\widehat{\theta})\), in probability.

Short Answer

Expert verified
Part (a): Using the symmetry of the error distribution about \(0\) and the sign-change property of a location estimator, \(\widehat{\theta}\) is shown to be symmetrically distributed about \(\theta\), and hence an unbiased estimator of \(\theta\). Part (b): By the Law of Large Numbers, \(n_{s}^{-1} \sum_{i=1}^{n_{s}} \widehat{\theta}_{i}^{2}\) converges in probability to the variance of \(\widehat{\theta}\), \(V(\widehat{\theta})\).

Step by step solution

01

Part (a) - Step 1: Recall the Definition of an Unbiased Estimator

By definition, an estimator \(T\) is said to be an unbiased estimator of the parameter \(\theta\) if it fulfills the condition \(E(T) = \theta\). To prove \(\widehat{\theta}\) is an unbiased estimator, it will be necessary to show that \(E(\widehat{\theta}) = \theta\).
02

Part (a) - Step 2: Calculate the Expected Value

Using the hint, assume without loss of generality that \(\theta = 0\) and start with \(E(\widehat{\theta}) = E[\widehat{\theta}(X_{1}, \ldots, X_{n})]\). Because \(f(x)\) is symmetric about \(0\), the vector \((-X_{1}, \ldots, -X_{n})\) has the same distribution as \((X_{1}, \ldots, X_{n})\). A location estimator changes sign when the signs of all observations are changed, that is, \(\widehat{\theta}(-X_{1}, \ldots, -X_{n}) = -\widehat{\theta}(X_{1}, \ldots, X_{n})\). It follows that \(\widehat{\theta}\) and \(-\widehat{\theta}\) have the same distribution, so \(\widehat{\theta}\) is symmetrically distributed about \(0\); its expectation exists because \(E(\widehat{\theta}^{4})\) exists, and therefore \(E(\widehat{\theta}) = 0 = \theta\).
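Written out as a chain of equalities, the argument is the following sketch, which assumes the standard sign-change property of a location estimator, \(\widehat{\theta}(-x_{1}, \ldots, -x_{n}) = -\widehat{\theta}(x_{1}, \ldots, x_{n})\): $$ E(\widehat{\theta}) = E\left[\widehat{\theta}(X_{1}, \ldots, X_{n})\right] = E\left[\widehat{\theta}(-X_{1}, \ldots, -X_{n})\right] = E\left[-\widehat{\theta}(X_{1}, \ldots, X_{n})\right] = -E(\widehat{\theta}), $$ where the second equality uses the symmetry of the errors and the third the sign-change property; hence \(2E(\widehat{\theta}) = 0\) and \(E(\widehat{\theta}) = 0\).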
03

Part (a) - Step 3: Conclusion

Since the argument was carried out without loss of generality at \(\theta = 0\), and a location estimator shifts along with the data, \(E(\widehat{\theta}) = \theta\) holds in general; therefore \(\widehat{\theta}\) is an unbiased estimator of \(\theta\).
04

Part (b) - Step 1: Recall the Law of Large Numbers

According to the (weak) Law of Large Numbers, if \(Y_{1}, Y_{2}, \ldots, Y_{n_{s}}\) are independent and identically distributed random variables with finite mean, then the sample average \(\bar{Y} = n_{s}^{-1} \sum_{i=1}^{n_{s}} Y_{i}\) converges in probability to \(E(Y_{1})\). This is the principle used to solve part (b).
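Stated formally for the quantities used below, with \(Y_{i} = \widehat{\theta}_{i}^{2}\) denoting the squared estimates from the simulated samples, $$ \bar{Y} = n_{s}^{-1} \sum_{i=1}^{n_{s}} Y_{i} \stackrel{P}{\rightarrow} E(Y_{1}) = E(\widehat{\theta}^{2}) \quad \text{as } n_{s} \rightarrow \infty. $$ The finite-mean condition holds here because \(E(\widehat{\theta}^{4})\) exists, which guarantees that \(E(\widehat{\theta}^{2})\) is finite.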
05

Part (b) - Step 2: Apply the Law of Large Numbers

Since the \(n_{s}\) samples are generated independently from the same pdf, the squared estimates \(\widehat{\theta}_{1}^{2}, \ldots, \widehat{\theta}_{n_{s}}^{2}\) are independent and identically distributed with finite mean \(E(\widehat{\theta}^{2})\). Applying the Law of Large Numbers, \(n_{s}^{-1} \sum_{i=1}^{n_{s}} \widehat{\theta}_{i}^{2}\) converges in probability to \(E(\widehat{\theta}^{2})\). Because \(\theta = 0\) and \(\widehat{\theta}\) is unbiased by part (a), \(E(\widehat{\theta}) = 0\), so \(E(\widehat{\theta}^{2}) = V(\widehat{\theta})\). Hence the expression converges in probability to \(V(\widehat{\theta})\).
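To see this convergence concretely, here is a minimal simulation sketch, not from the text: it uses the sample median as the location estimator \(\widehat{\theta}\) and standard normal errors, so \(\theta = 0\) and the large-\(n\) approximation to \(V(\widehat{\theta})\) is \(\pi/(2n)\); the variable names and the values of \(n\) and \(n_{s}\) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 20          # size of each simulated sample
n_s = 100000    # number of independent samples (Monte Carlo replicates)

# Generate n_s independent samples of size n from a pdf symmetric about 0
# (standard normal here, so theta = 0) and compute the location estimate
# (the sample median) for each sample.
samples = rng.standard_normal((n_s, n))
theta_hat = np.median(samples, axis=1)

# By the Law of Large Numbers, the average of theta_hat_i^2 approximates
# E(theta_hat^2) = V(theta_hat), since theta = 0 and theta_hat is unbiased.
print(np.mean(theta_hat**2))   # Monte Carlo estimate of V(theta_hat)
print(np.pi / (2 * n))         # large-n approximation to V(theta_hat)

The first printed value estimates the exact finite-sample variance of the median and becomes more accurate as \(n_{s}\) grows; the second is its large-\(n\) approximation, so the two should be close for moderate \(n\).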
06

Part (b) - Step 3: Conclusion

With the help of the Law of Large Numbers, we conclude that \(n_{s}^{-1} \sum_{i=1}^{n_{s}} \widehat{\theta}_{i}^{2} \rightarrow V(\widehat{\theta})\) in probability.


