
I am currently developing a vector autoregressive (VAR) model, and I have it fully specified as follows:

$$X_t=AX_{t-1} +Z_t$$

where $X$ and $Z$ are $n \times 1$ column vectors, and $A$ is an $n\times n$ matrix.

In one dimension, I know how to construct the best linear predictor from the autocovariance function. However, it is not so clear how to do this in multiple dimensions, since there is not only an autocovariance function for each component, but also cross-covariance terms between components at different lags. I have a hunch about how this could be done (sketched below), but I would really appreciate tips or a reference to look at regarding best linear predictors.
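
To make the notation concrete, here is a minimal numpy sketch of the setup and of my hunch; the entries of $A$ and the noise covariance $\Sigma_Z$ are made up purely for illustration, and the last line is the guess I would like confirmed (the matrix analogue of the one-dimensional recipe $\gamma(1)/\gamma(0)$):

```python
import numpy as np

# Simulate a VAR(1): X_t = A X_{t-1} + Z_t.  The entries of A and Sigma_Z
# are made up purely for illustration.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])            # eigenvalues inside the unit circle
Sigma_Z = np.array([[1.0, 0.3],
                    [0.3, 1.0]])
T = 100_000
Z = rng.multivariate_normal(np.zeros(2), Sigma_Z, size=T)
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A @ X[t - 1] + Z[t]

# Sample autocovariance matrices; the off-diagonal entries are the cross terms.
Xc = X - X.mean(axis=0)
Gamma0 = Xc.T @ Xc / T                 # Gamma(0) = E[X_t X_t']
Gamma1 = Xc[1:].T @ Xc[:-1] / (T - 1)  # Gamma(1) = E[X_t X_{t-1}']

# The hunch: gamma(1)/gamma(0) becomes the matrix product Gamma(1) Gamma(0)^{-1}.
A_hat = Gamma1 @ np.linalg.inv(Gamma0)
print(A_hat)  # numerically close to A
```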

On a closely related note, I found these lecture notes: http://faculty.washington.edu/ezivot/econ584/notes/varModels.pdf

which explain how to forecast with VAR models (see Section 11.3). However, that forecast is obtained simply by an iterative procedure. So another question is: is this iterative approach equivalent to the best linear predictor method?


1 Answer


I quote from the link you have provided:

The best linear predictor, in terms of minimum mean squared error (MSE), of $\mathbf{Y}_{T+1}$, or 1-step forecast, based on information available at time $T$ is

$$\mathbf{Y}_{T+1|T}=c+\mathbf{\Pi}_1\mathbf{Y}_T+...+\mathbf{\Pi}_p\mathbf{Y}_{T-p+1}$$

This is for a VAR($p$) model; you have a VAR(1). To quote further:

Forecasts for longer horizons $h$ ($h$-step forecasts) may be obtained using the chain rule of forecasting as

$$\mathbf{Y}_{T+h|T}=c+\mathbf{\Pi}_1\mathbf{Y}_{T+h-1|T}+...+\mathbf{\Pi}_p\mathbf{Y}_{T+h-p|T}$$

This type of forecasting is the commonly accepted way of forecasting with VAR models. It is implemented, for example, in the vars R package. If you compare it to the univariate case, you will see that for an AR($p$) model the forecasts are the same. Furthermore, they are optimal in the sense that they minimise the MSE; you can consult Lütkepohl (*New Introduction to Multiple Time Series Analysis*, 2005) for more details.
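
To make the chain rule concrete, here is a minimal numpy sketch for a VAR(1), using plain matrix arithmetic rather than the vars package; the coefficient matrix $\mathbf{\Pi}$, intercept $c$ and last observation $\mathbf{Y}_T$ are made-up numbers. It also checks that with $c=0$ the recursion collapses to $\mathbf{Y}_{T+h|T}=\mathbf{\Pi}^h\mathbf{Y}_T$:

```python
import numpy as np

# Chain-rule forecasting for a VAR(1): iterate the one-step predictor,
# feeding each forecast back in.  Pi, c and Y_T are made-up numbers.
Pi = np.array([[0.5, 0.1],
               [0.2, 0.3]])
c = np.array([0.4, -0.2])
Y_T = np.array([1.0, 2.0])

def chain_rule_forecast(c, Pi, Y_T, h):
    """h-step forecast Y_{T+h|T} = c + Pi Y_{T+h-1|T}, starting from Y_T."""
    y = Y_T
    for _ in range(h):
        y = c + Pi @ y
    return y

h = 5
print(chain_rule_forecast(c, Pi, Y_T, h))

# With c = 0 the recursion is just a matrix power: Y_{T+h|T} = Pi^h Y_T.
print(chain_rule_forecast(np.zeros(2), Pi, Y_T, h))
print(np.linalg.matrix_power(Pi, h) @ Y_T)  # identical
```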

  • Thank you for noting that, as it points to my exact confusion. Let's assume for now that $p=1$, and then we simply have... Commented Jan 21, 2015 at 14:40
  • ...$Y_{T+h|T}=\Pi^h Y_T$. But I thought that in general the BLP used as many past observations as you wanted, rather than just the most recent one. So how could they be the same? Commented Jan 21, 2015 at 14:50
  • But this is a property of the AR(1) process: only the last observation counts. If more observations were useful for forecasting, then you would not have an AR(1) model (a numerical check follows these comments). Commented Jan 21, 2015 at 14:52
  • OK, I see what you mean. Perhaps the reason I thought additional observations would be useful is that I was conflating uncertainty in the parameters with the prediction problem. Perhaps I need to look at Bayesian techniques. Commented Jan 21, 2015 at 15:15
  • The classic VAR forecasts ignore uncertainty in parameters. Commented Jan 21, 2015 at 15:18
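
To back up the point made in these comments: if the data truly follow a VAR(1), a linear predictor that is also allowed to use the second lag puts (asymptotically) zero weight on it. A minimal numpy check, with a made-up coefficient matrix:

```python
import numpy as np

# Simulate a VAR(1) and regress X_t on BOTH X_{t-1} and X_{t-2}.
# The coefficient matrix A is made up purely for illustration.
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
T = 100_000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(2)

Y = X[2:]
R = np.hstack([X[1:-1], X[:-2]])        # [lag-1 block | lag-2 block]
B, *_ = np.linalg.lstsq(R, Y, rcond=None)
Phi1, Phi2 = B[:2].T, B[2:].T
print(Phi1)  # close to A
print(Phi2)  # close to the zero matrix: the extra lag carries no information
```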
