Chapter 3: Problem 13
i. Consider the simple regression model \(y=\beta_{0}+\beta_{1} x+u\) under the first four Gauss-Markov assumptions. For some function \(g(x)\), for example \(g(x)=x^{2}\) or \(g(x)=\log\left(1+x^{2}\right)\), define \(z_{i}=g\left(x_{i}\right)\). Define a slope estimator as
$$ \tilde{\beta}_{1}=\left(\sum_{i=1}^{n}\left(z_{i}-\bar{z}\right) y_{i}\right) \bigg/ \left(\sum_{i=1}^{n}\left(z_{i}-\bar{z}\right) x_{i}\right). $$
Show that \(\tilde{\beta}_{1}\) is linear and unbiased. Remember, because \(\mathrm{E}(u \mid x)=0\), you can treat both \(x_{i}\) and \(z_{i}\) as nonrandom in your derivation.

ii. Add the homoskedasticity assumption, MLR.5. Show that
$$ \operatorname{Var}\left(\tilde{\beta}_{1}\right)=\sigma^{2}\left(\sum_{i=1}^{n}\left(z_{i}-\bar{z}\right)^{2}\right) \bigg/ \left(\sum_{i=1}^{n}\left(z_{i}-\bar{z}\right) x_{i}\right)^{2}. $$

iii. Show directly that, under the Gauss-Markov assumptions, \(\operatorname{Var}\left(\hat{\beta}_{1}\right) \leq \operatorname{Var}\left(\tilde{\beta}_{1}\right)\), where \(\hat{\beta}_{1}\) is the OLS estimator. [Hint: The Cauchy-Schwarz inequality in Appendix B implies that
$$ \left(n^{-1} \sum_{i=1}^{n}\left(z_{i}-\bar{z}\right)\left(x_{i}-\bar{x}\right)\right)^{2} \leq\left(n^{-1} \sum_{i=1}^{n}\left(z_{i}-\bar{z}\right)^{2}\right)\left(n^{-1} \sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}\right); $$
notice that we can drop \(\bar{x}\) from the sample covariance.]
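The claims in parts i–iii can be checked numerically. The sketch below (not part of the original problem; the choice \(g(x)=x^{2}\), the parameter values, and the sample size are illustrative assumptions) simulates the model repeatedly with fixed regressors, computes both the OLS slope and \(\tilde{\beta}_{1}\), and compares their Monte Carlo means and variances with the formula from part ii:

```python
import numpy as np

# Monte Carlo sketch (illustrative assumptions): y = beta0 + beta1*x + u with
# homoskedastic errors, fixed x's across replications, and z_i = g(x_i) = x_i^2.
rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0
n, reps = 50, 20_000
x = rng.uniform(0.5, 2.0, size=n)   # regressors held fixed, as the hint allows
z = x ** 2                          # g(x) = x^2
xd, zd = x - x.mean(), z - z.mean()

ols_est, alt_est = [], []
for _ in range(reps):
    u = rng.normal(0.0, sigma, size=n)
    y = beta0 + beta1 * x + u
    ols_est.append(np.sum(xd * y) / np.sum(xd * x))  # OLS slope
    alt_est.append(np.sum(zd * y) / np.sum(zd * x))  # tilde-beta_1
ols_est, alt_est = np.array(ols_est), np.array(alt_est)

# Part i: both estimators should average out near the true slope beta1 = 2.
print("mean OLS:", ols_est.mean(), " mean tilde:", alt_est.mean())

# Part ii: theoretical variance of tilde-beta_1 at these fixed x's.
var_tilde = sigma**2 * np.sum(zd**2) / np.sum(zd * x)**2
print("simulated Var(tilde):", alt_est.var(), " formula:", var_tilde)

# Part iii: OLS variance sigma^2/SST_x never exceeds the formula above.
var_ols = sigma**2 / np.sum(xd**2)
print("Var(OLS) <= Var(tilde):", var_ols <= var_tilde)
```

By Cauchy-Schwarz the last comparison holds for any draw of the \(x_i\), not just this seed, which is exactly the inequality part iii asks you to establish analytically.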