
I have a computational model that involves a set of $K$ covariance matrices $\{\Sigma_1, \ldots, \Sigma_K\}$, each $\Sigma_i \in \mathbb{R}^{n \times n}$. Storing all of these full covariance matrices is memory-expensive because $n$ is on the order of thousands, so I want to store them more efficiently.

Since each $\Sigma_i$ is symmetric, the spectral theorem says it is orthogonally diagonalizable, $\Sigma_i = P_i D_i P_i^T$, so each matrix can be encoded as the $n$ eigenvectors forming the columns of $P_i$ plus the $n$ diagonal elements of $D_i$. Along these lines, I wonder whether there is a way to construct a common basis for all the matrices, so that each one can then be represented by just a set of coefficients over that basis. The basis might have to be over-complete, but storing it once plus $K$ coefficient sets could still be more efficient than storing each full covariance matrix.
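For concreteness, here is a minimal sketch of the kind of encoding I have in mind, assuming the shared basis is taken from the top $r$ eigenvectors of the average matrix (the basis choice, the rank $r$, and all variable names are illustrative, not a worked-out method):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, r = 200, 5, 20                      # r = size of the shared basis, r << n

# Toy matrices sharing common structure (stand-ins for the real Sigma_i).
A = rng.standard_normal((n, n))
common = A @ A.T / n
Sigmas = []
for _ in range(K):
    B = rng.standard_normal((n, n))
    Sigmas.append(common + 0.05 * (B @ B.T) / n)

# Shared basis Q: top-r eigenvectors of the average covariance.
eigvals, eigvecs = np.linalg.eigh(sum(Sigmas) / K)
Q = eigvecs[:, -r:]                       # n x r, orthonormal columns

# Store each matrix as an r x r coefficient block C_i = Q^T Sigma_i Q,
# i.e. K * r^2 numbers plus one n x r basis, instead of K * n^2 numbers.
Cs = [Q.T @ S @ Q for S in Sigmas]

# Reconstruction Sigma_i ~= Q C_i Q^T is exact only if Sigma_i lies in span(Q).
rel_err = np.linalg.norm(Sigmas[0] - Q @ Cs[0] @ Q.T) / np.linalg.norm(Sigmas[0])
print(f"relative Frobenius error: {rel_err:.3f}")
```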

It may be relevant to add that these matrices are only used in the model to compute quadratic forms (i.e. $x^T \Sigma_i x$), and that I expect the different $\Sigma_i$ to be somewhat similar to one another.
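If such a representation exists, the quadratic forms never require reconstructing the full matrices: with $\Sigma_i \approx Q C_i Q^T$, one has $x^T \Sigma_i x \approx (Q^T x)^T C_i (Q^T x)$, which costs $O(nr + r^2)$ per matrix instead of $O(n^2)$. A self-contained sketch (all names and the random test data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 20
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))  # shared basis, n x r orthonormal
C = rng.standard_normal((r, r))
C = C @ C.T                                       # stored r x r coefficient block

x = rng.standard_normal(n)
z = Q.T @ x            # project once: O(n r); z is reusable across all K matrices
val = z @ C @ z        # r x r quadratic form: O(r^2)
print(val)
```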

  • Some search leads to arxiv.org/pdf/2010.06305.pdf. Commented Dec 12, 2023 at 19:01
  • The answer is no unless the matrices all commute. Unless you're willing to work with approximations, probably the best you can do is to store one half of each matrix, because they are symmetric. Commented Dec 12, 2023 at 19:04
