
Assume the $2\times 1$ random vector $E_t$ has a multivariate normal distribution with mean vector $0$ and covariance matrix $Q_t$. Note that $E_t$ is a time series and its covariance matrix differs across time:

$$E_t = [e_{0,t}, e_{1,t}]^T\sim MVN(0, Q_t), \quad t=0,\ldots,T$$

Now let $E$ be the matrix of all the time series observations stacked on top of each other:

$$E = \begin{pmatrix} e_{0,0} & e_{1, 0}\\ e_{0,1} & e_{1, 1}\\ \vdots & \vdots\\ e_{0,T} & e_{1,T}\end{pmatrix} $$

Lastly, let $\hat{E}$ be the vectorization of $E$:

$$\hat{E} = \mathrm{vec}(E) = \begin{pmatrix} e_{0,0} & e_{0,1} & \ldots & e_{0,T}& e_{1,0} & e_{1,1} & \ldots & e_{1,T}\end{pmatrix}^T$$

My question is how to specify the covariance matrix of the vectorized $\hat{E}$.

That is, $$\mathrm{vec}(E) \sim MVN(0, ???)$$

What I've tried:

It's clear that $E$ has a matrix normal distribution with some row and column covariance matrices $U$ and $V$, which implies $\hat{E} \sim MVN(0, V\otimes U)$. But I don't see how to make this work, since the column-wise covariance matrix is time dependent. If $Q_t = Q$ for all $t$ and the observations are IID across time, then we would have $$\hat{E}\sim MVN(0, Q\otimes I),$$ but in this case $Q_t$ is time varying and there is potentially dependence across time.
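The IID special case can be checked directly by assembling the covariance of $\mathrm{vec}(E)$ entry by entry and comparing it with the Kronecker form. A minimal numpy sketch (the values of `T` and `Q` are made up for illustration):

```python
import numpy as np

T = 3                       # time runs t = 0, ..., T
n = T + 1                   # number of observations
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # hypothetical common 2x2 covariance, Q_t = Q for all t

# E is (T+1) x 2 and vec stacks its columns, so entry (k*n + t) of
# vec(E) is e_{k,t}.  With rows IID across time,
# Cov(e_{k,t}, e_{l,s}) = Q[k, l] * 1{t == s}.
cov = np.zeros((2 * n, 2 * n))
for k in range(2):
    for l in range(2):
        for t in range(n):
            cov[k * n + t, l * n + t] = Q[k, l]

# This matches the Kronecker form Q (x) I.
print(np.allclose(cov, np.kron(Q, np.eye(n))))
```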


1 Answer

In general, you can't write the covariance matrix in Kronecker product form; that is, $E$ does not necessarily follow a matrix normal distribution, even if its vectorization is multivariate normal.

To see why, note that $$\hat{E}=\left(E^{0\top},E^{1\top}\right)^{\top}=\begin{bmatrix}E^{0}\\E^{1}\end{bmatrix},$$ where $E^{k}=\left(e_{k,0},e_{k,1},\ldots,e_{k,T}\right)^{\top}$ for $k=0,1$. Therefore, the covariance matrix is $$\mathbb{E}\left[\hat{E}\hat{E}^{\top}\right]=\begin{bmatrix}\mathbb{E}\left[E^{0}E^{0\top}\right]&\mathbb{E}\left[E^{0}E^{1\top}\right]\\\mathbb{E}\left[E^{1}E^{0\top}\right]&\mathbb{E}\left[E^{1}E^{1\top}\right]\end{bmatrix}.$$ The $(t,t)$ entries of these four blocks together make up $Q_{t}$, but you still need more information (the cross-time covariances) to recover the off-diagonal entries.

An easier way to represent the covariance matrix is to vectorize $E^{\top}$ instead of $E$ itself. Let $\tilde{E}=\mathrm{vec}\left(E^{\top}\right)$, so that $$\tilde{E}=\left(E_{0}^{\top},E_{1}^{\top},\ldots,E_{T}^{\top}\right)^{\top}=\begin{bmatrix}E_{0}\\E_{1}\\\vdots\\E_{T}\end{bmatrix}.$$ Its covariance matrix is $$\mathbb{E}\left[\tilde{E}\tilde{E}^{\top}\right]=\begin{bmatrix}\mathbb{E}\left[E_{0}E_{0}^{\top}\right]&\mathbb{E}\left[E_{0}E_{1}^{\top}\right]&\cdots&\mathbb{E}\left[E_{0}E_{T}^{\top}\right]\\\mathbb{E}\left[E_{1}E_{0}^{\top}\right]&\mathbb{E}\left[E_{1}E_{1}^{\top}\right]&\cdots&\mathbb{E}\left[E_{1}E_{T}^{\top}\right]\\\vdots&\vdots&\ddots&\vdots\\\mathbb{E}\left[E_{T}E_{0}^{\top}\right]&\mathbb{E}\left[E_{T}E_{1}^{\top}\right]&\cdots&\mathbb{E}\left[E_{T}E_{T}^{\top}\right]\end{bmatrix}=\begin{bmatrix}Q_{0,0}&Q_{0,1}&\cdots&Q_{0,T}\\Q_{1,0}&Q_{1,1}&\cdots&Q_{1,T}\\\vdots&\vdots&\ddots&\vdots\\Q_{T,0}&Q_{T,1}&\cdots&Q_{T,T}\end{bmatrix},$$ where $Q_{t,s}=\mathbb{E}\left[E_{t}E_{s}^{\top}\right]$ for $t,s=0,1,\ldots,T$.
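This block assembly is easy to verify numerically. The sketch below draws one arbitrary valid covariance matrix for $\tilde{E}$ (the specific values are made up), reads off its $2\times 2$ blocks $Q_{t,s}$, and reassembles them with `np.block`:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3
n = T + 1

# Draw a valid 2(T+1) x 2(T+1) covariance matrix for vec(E^T),
# then read off its 2x2 blocks Q_{t,s} = E[E_t E_s^T].
A = rng.standard_normal((2 * n, 2 * n))
Sigma = A @ A.T
Qblock = {(t, s): Sigma[2*t:2*t+2, 2*s:2*s+2]
          for t in range(n) for s in range(n)}

# Reassemble the covariance of vec(E^T) block by block, as in the answer.
cov = np.block([[Qblock[(t, s)] for s in range(n)] for t in range(n)])
print(np.allclose(cov, Sigma))

# Remark 1 in code: Q_{t,s} = Q_{s,t}^T.
for t in range(n):
    for s in range(n):
        assert np.allclose(Qblock[(t, s)], Qblock[(s, t)].T)
```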

Remark 0. $Q_{t,t}=Q_{t}$ as defined in the question.
Remark 1. $Q_{t,s}=Q_{s,t}^{\top}$.
Remark 2. A sufficient condition for $E$ to be matrix normal is that $\left\{E_{t}\right\}_{t=0}^{T}$ is stationary with a separable covariance structure, $Q_{t,s}=r_{t-s}\,Q$ for some autocorrelation function $r$ (this strengthens $\mathrm{corr}\left(e_{0,t},e_{0,s}\right)=\mathrm{corr}\left(e_{1,t},e_{1,s}\right)$ by requiring the cross-covariances to scale the same way). In that case $\mathbb{E}\left[\tilde{E}\tilde{E}^{\top}\right]=R\otimes Q$, with $R=\left(r_{t-s}\right)_{t,s}$ the common correlation matrix.
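The separable case in Remark 2 can also be checked directly. A sketch assuming the blocks factor as $Q_{t,s}=r_{|t-s|}\,Q$ with an AR(1)-style autocorrelation (the values of `Q` and `rho` are made up):

```python
import numpy as np

T = 3
n = T + 1

# Hypothetical common per-period covariance and AR(1)-style correlations.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
rho = 0.7
R = np.array([[rho ** abs(t - s) for s in range(n)] for t in range(n)])

# Assemble the covariance of vec(E^T) from the blocks Q_{t,s} = r_{|t-s|} Q ...
cov = np.block([[R[t, s] * Q for s in range(n)] for t in range(n)])

# ... and it equals the Kronecker product R (x) Q.
print(np.allclose(cov, np.kron(R, Q)))
```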

