
Let \(g M_{t}\) be the annual growth in the money supply and let \(unem_t\) be the unemployment rate. Assuming that \(unem_t\) follows a stable AR(1) process, explain in detail how you would test whether \(g M\) Granger causes \(unem\).

Short Answer

Test whether past values of money supply growth help forecast unemployment, using an F-test of their joint significance in an augmented AR(1) model.

Step by step solution

01

Understanding Granger Causality

Granger causality is a statistical hypothesis test of whether one time series helps forecast another. In this case, we want to see whether past values of the growth in the money supply (\(g M_t\)) help predict the unemployment rate (\(unem_t\)).
02

Specify the AR(1) Model for Unemployment

Since unem follows a stable AR(1) process, the model is specified as:\[unem_t = \alpha + \beta \cdot unem_{t-1} + \epsilon_t\]where \(\alpha\) is a constant, \(\beta\) is the autoregressive coefficient, and \(\epsilon_t\) is the error term.
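As a minimal sketch (on hypothetical simulated data, since no dataset is given in the exercise), this AR(1) model can be simulated and estimated by OLS with plain numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable AR(1): unem_t = alpha + beta * unem_{t-1} + eps_t
alpha, beta, T = 1.0, 0.7, 500          # |beta| < 1 ensures stability
unem = np.empty(T)
unem[0] = alpha / (1 - beta)            # start at the unconditional mean
for t in range(1, T):
    unem[t] = alpha + beta * unem[t - 1] + rng.normal(scale=0.5)

# Estimate (alpha, beta) by OLS of unem_t on a constant and unem_{t-1}
X = np.column_stack([np.ones(T - 1), unem[:-1]])
alpha_hat, beta_hat = np.linalg.lstsq(X, unem[1:], rcond=None)[0]
```

With a stable process and a few hundred observations, the OLS estimates should land close to the true \(\alpha\) and \(\beta\).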
03

Extend the Model to Include g M_t

To test if \(g M_t\) Granger causes unem, extend the AR(1) model by including past values of \(g M_t\):\[unem_t = \alpha + \beta \cdot unem_{t-1} + \sum_{i=1}^{p} \gamma_i \cdot g M_{t-i} + \epsilon_t\]Here, \(p\) is the number of lags for \(g M_t\) and \(\gamma_i\) are the coefficients that determine the influence of past \(g M_t\) on unem.
04

Conduct Hypothesis Testing

Test the joint significance of the coefficients \(\gamma_i\) by forming the null hypothesis \(H_0: \gamma_1 = \gamma_2 = ... = \gamma_p = 0\). This can be done using an F-test in a regression setup to determine if past values of \(g M_t\) significantly improve the forecasting of unem.
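The restricted-versus-unrestricted comparison behind this F-test can be sketched in Python. This is an illustration on simulated data; `granger_ftest` is a hypothetical helper built from plain numpy OLS, with scipy supplying the F distribution:

```python
import numpy as np
from scipy import stats

def granger_ftest(y, x, p):
    """F-test of H0: gamma_1 = ... = gamma_p = 0 in
    y_t = a + b*y_{t-1} + sum_i gamma_i * x_{t-i} + e_t."""
    T = len(y)
    yy = y[p:]                                               # y_t, t = p..T-1
    X_r = np.column_stack([np.ones(T - p), y[p - 1:T - 1]])  # restricted: AR(1) only
    # Unrestricted: add lags x_{t-1}, ..., x_{t-p}
    X_u = np.column_stack([X_r] + [x[p - i:T - i] for i in range(1, p + 1)])

    def ssr(X):
        b = np.linalg.lstsq(X, yy, rcond=None)[0]
        resid = yy - X @ b
        return resid @ resid

    df = len(yy) - X_u.shape[1]
    F = ((ssr(X_r) - ssr(X_u)) / p) / (ssr(X_u) / df)
    return F, stats.f.sf(F, p, df)                           # statistic, p-value

# Hypothetical data in which x does Granger cause y
rng = np.random.default_rng(1)
x = rng.normal(size=400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

F, pval = granger_ftest(y, x, p=2)    # reject H0 when pval is small
```

Since the simulated \(y\) really does load on lagged \(x\), the p-value here should be far below any conventional significance level.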
05

Interpret the Results

If the null hypothesis is rejected, it indicates that the past values of \(g M_t\) do have predictive power for unem, meaning \(g M\) Granger causes unem. If not rejected, there is no evidence to claim Granger causality from \(g M_t\) to unem.


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Time Series Analysis
Time series analysis is a statistical technique that deals with time-ordered data points. This type of analysis is essential when working with data that are collected over time, like monthly sales data, daily temperatures, or in this case, annual growth in the money supply and unemployment rates.
Time series analysis aims to extract meaningful statistics and characteristics of the data. Common methods include decomposition of the series, identifying trends, detecting seasonality, and more.
The goal is to use these insights for
  • modeling the underlying structures,
  • making predictions, and
  • understanding how different variables interact with each other over time.
Analyzing time series data is crucial since it can reveal patterns that are not obvious when merely looking at summary statistics or scattered data points.
Autoregressive Model
An autoregressive (AR) model is a type of statistical model used for time series data. It is based on the idea that the current value of the series is related to its past values. This type of model is particularly useful for capturing patterns and making predictions from time-dependent data.
In an AR model, the future value of a variable is assumed to be a linear function of several past observations. For instance, an AR(1) model predicts the current value based on the immediately preceding value. This can be mathematically represented as:\[unem_t = \alpha + \beta \cdot unem_{t-1} + \epsilon_t\] where:
  • \(\alpha\) is a constant term,
  • \(\beta\) represents the relationship with the past value, and
  • \(\epsilon_t\) is the error term accounting for randomness.
This model is called 'autoregressive' because it regresses against itself. It's a fundamental concept in time series analysis, helping analysts to forecast future values based on a simple linear relationship.
Hypothesis Testing
Hypothesis testing is a statistical method used to make decisions or infer conclusions about populations based on sample data. In the context of time series analysis, it helps determine if certain patterns or relationships observed in the sample data can be generalized to the larger population.
The process begins with forming two hypotheses: the null hypothesis \(H_0\) and the alternative hypothesis \(H_1\). The null hypothesis usually states that there is no effect or no relationship between variables being studied.
To test these hypotheses, statisticians use test statistics derived from the data, assessing whether observed results are compatible with the stated null hypothesis. A common threshold is the significance level, often set at 5%, which caps the probability of rejecting the null hypothesis when it is actually true (a Type I error).
If the observed test statistic would be unlikely under the null hypothesis (a p-value below the significance level), we reject \(H_0\) and infer support for the alternative hypothesis.
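As a concrete example (with hypothetical degrees of freedom and test statistic), the 5% decision rule for an F-test can be computed from the F distribution in scipy:

```python
from scipy import stats

dfn, dfd = 3, 120                    # hypothetical numerator/denominator df
crit = stats.f.ppf(0.95, dfn, dfd)   # 5% critical value

F_obs = 4.2                          # hypothetical observed test statistic
p_value = stats.f.sf(F_obs, dfn, dfd)

# Reject H0 iff F_obs exceeds the critical value,
# equivalently iff p_value falls below 0.05
reject = F_obs > crit
```

The critical-value rule and the p-value rule always yield the same decision, since both compare the observed statistic against the same null distribution.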
Forecasting
Forecasting involves predicting future values based on past and present information, and it is a vital component of time series analysis. For instance, using an autoregressive model, one may attempt to forecast future unemployment rates by utilizing past unemployment data points.
By leveraging statistical models and past trends, analysts aim to anticipate changes in key metrics, enabling businesses and policymakers to make informed decisions. Forecasting can be achieved through multiple methods, such as:
  • Exponential Smoothing
  • Moving Averages
  • Autoregressive Integrated Moving Average (ARIMA)
The accuracy of forecasts heavily depends on how well the model captures the underlying patterns and dynamics of the time series data. Therefore, selecting the right model and adequately validating it are paramount steps in the forecasting process.
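To illustrate the AR(1) case, here is a hypothetical helper that fits the model by OLS and iterates the fitted recursion forward \(h\) steps (checked below on a noiseless series where the fit is exact):

```python
import numpy as np

def ar1_forecast(y, h=1):
    """Fit y_t = alpha + beta*y_{t-1} by OLS, then iterate the fitted
    recursion h steps past the last observation."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    alpha_hat, beta_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    f = y[-1]
    for _ in range(h):
        f = alpha_hat + beta_hat * f   # one-step recursion, applied h times
    return f

# Deterministic check: noiseless AR(1) with alpha = 1, beta = 0.5
y = np.zeros(10)
for t in range(1, 10):
    y[t] = 1.0 + 0.5 * y[t - 1]
f1 = ar1_forecast(y, h=1)    # equals 1.0 + 0.5 * y[-1]
```

As \(h\) grows, the forecast reverts toward the estimated unconditional mean \(\hat\alpha/(1-\hat\beta)\) at geometric rate \(\hat\beta\).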
Statistical Hypothesis Test
A statistical hypothesis test is a formal technique that compares two hypotheses using sample data. It is central to methods like Granger causality, which tests if one time series can predict another.
Within this framework, the null hypothesis can be technically expressed. For example, suppose we want to test if the growth in money supply (\(g M_t\)) Granger causes unemployment (\(unem\)). Here, the null hypothesis might be formulated as: \(H_0: \gamma_1 = \gamma_2 = \ldots = \gamma_p = 0\). This states that past values of \(g M_t\) do not help predict future unemployment rates. Employing statistical tests like the F-test can determine whether these \(\gamma\) coefficients are jointly significant.
If the calculated value of the test statistic is beyond a critical threshold, we reject \(H_0\), providing evidence that \(g M_t\) indeed has predictive power over unem in the context of Granger causality.


Most popular questions from this chapter

Suppose that \(\{y_{t}\}\) and \(\{z_{t}\}\) are \(\mathrm{I}(1)\) series, but \(y_{t}-\beta z_{t}\) is \(\mathrm{I}(0)\) for some \(\beta \neq 0\). Show that for any \(\delta \neq \beta\), \(y_{t}-\delta z_{t}\) must be \(\mathrm{I}(1)\).

Consider the geometric distributed lag model in equation (18.8), written in estimating equation form as in equation (18.11): $$y_{t}=\alpha_{0}+\gamma z_{t}+\rho y_{t-1}+v_{t}$$ where \(v_{t}=u_{t}-\rho u_{t-1}\). i. Suppose that you are only willing to assume the sequential exogeneity assumption in (18.6). Why is \(z_{t}\) generally correlated with \(v_{t}\)? ii. Explain why estimating (18.11) by IV, using instruments \((z_{t}, z_{t-1})\), is generally inconsistent under (18.6). Using the IV estimator, can you test whether \(z_{t}\) and \(v_{t}\) are correlated? iii. Evaluate the following proposal when only (18.6) holds: estimate (18.11) by IV using instruments \((z_{t-1}, z_{t-2})\). iv. Explain what you gain by estimating (18.11) by 2SLS using instruments \((z_{t}, z_{t-1}, z_{t-2})\). v. In equation (18.16), the estimating equation for a rational distributed lag model, how would you estimate the parameters under (18.6) only? Might there be some practical problems with your approach?

Using the monthly data in VOLAT, the following model was estimated: $$ \begin{aligned} \widehat{pcip} &=1.54+.344\, pcip_{-1}+.074\, pcip_{-2}+.073\, pcip_{-3}+.031\, pcsp_{-1} \\ &(.56)(.042) \\ n &=554, R^{2}=.174, \overline{R^{2}}=.168 \end{aligned} $$ where \(pcip\) is the percentage change in monthly industrial production, at an annualized rate, and \(pcsp\) is the percentage change in the Standard \& Poor's 500 Index, also at an annualized rate. i. If the past three months of \(pcip\) are zero and \(pcsp_{-1}=0\), what is the predicted growth in industrial production for this month? Is it statistically different from zero? ii. If the past three months of \(pcip\) are zero but \(pcsp_{-1}=10\), what is the predicted growth in industrial production? iii. What do you conclude about the effects of the stock market on real economic activity?

Consider the error correction model in equation (18.37). Show that if you add another lag of the error correction term, \(y_{t-2}-\beta x_{t-2}\), the equation suffers from perfect collinearity. (Hint: Show that \(y_{t-2}-\beta x_{t-2}\) is a perfect linear function of \(y_{t-1}-\beta x_{t-1}\), \(\Delta x_{t-1}\), and \(\Delta y_{t-1}\).)

Suppose that \(y_{t}\) follows the model $$ \begin{aligned} y_{t} &=\alpha+\delta_{1} z_{t-1}+u_{t} \\ u_{t} &=\rho u_{t-1}+e_{t} \\ \mathrm{E}\left(e_{t} | I_{t-1}\right) &=0 \end{aligned} $$ where \(I_{t-1}\) contains \(y\) and \(z\) dated at \(t-1\) and earlier. i. Show that \(\mathrm{E}\left(y_{t+1} | I_{t}\right)=(1-\rho) \alpha+\rho y_{t}+\delta_{1} z_{t}-\rho \delta_{1} z_{t-1}\). (Hint: Write \(u_{t-1}=y_{t-1}-\alpha-\delta_{1} z_{t-2}\) and plug this into the second equation; then, plug the result into the first equation and take the conditional expectation.) ii. Suppose that you use \(n\) observations to estimate \(\alpha\), \(\delta_{1}\), and \(\rho\). Write the equation for forecasting \(y_{n+1}\). iii. Explain why the model with one lag of \(z\) and \(\mathrm{AR}(1)\) serial correlation is a special case of the model $$y_{t}=\alpha_{0}+\rho y_{t-1}+\gamma_{1} z_{t-1}+\gamma_{2} z_{t-2}+e_{t}$$ iv. What does part (iii) suggest about using models with \(\mathrm{AR}(1)\) serial correlation for forecasting?
