I have a computational model that uses a set of $K$ covariance matrices, $\{\Sigma_1, \dots, \Sigma_K\}$, with each $\Sigma_i \in \mathbb{R}^{n \times n}$. Storing all of these full covariance matrices is memory-expensive because $n$ is on the order of thousands, so I want to store them more efficiently.
By the spectral theorem, each covariance matrix (being symmetric) is orthogonally diagonalizable, $\Sigma_i = P_i^T D_i P_i$, so it can be encoded as the $n$ orthonormal columns of $P_i$ together with the $n$ diagonal elements of $D_i$. In the same spirit, I wonder whether there is a way to construct a common basis shared by all the matrices, so that each $\Sigma_i$ can then be represented by just a vector of coefficients over that basis. The basis might have to be overcomplete, but storing it once plus $K$ small coefficient vectors could still be more efficient than storing each full covariance matrix.
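To make the "common basis" idea concrete, here is a minimal sketch of one possible approach: vectorize each matrix, stack the vectors into a $K \times n^2$ data matrix, and take a truncated SVD so that the top $r$ right singular vectors act as a shared (flattened) basis of matrices. The sizes `n`, `K`, `r` and the way the similar matrices are generated are purely illustrative assumptions, not part of the original question.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, r = 50, 10, 5  # toy sizes for illustration; r = number of shared basis elements

# Illustrative assumption: K "somewhat similar" covariance matrices,
# built as a common PSD base plus small PSD perturbations.
A = rng.standard_normal((n, n))
base = A @ A.T
sigmas = []
for _ in range(K):
    B = 0.1 * rng.standard_normal((n, n))
    sigmas.append(base + B @ B.T)

# Stack the vectorized matrices and take a truncated SVD:
# the top-r right singular vectors form a common (flattened) basis.
X = np.stack([S.ravel() for S in sigmas])   # shape (K, n*n)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:r]                              # r flattened basis "matrices"
coeffs = X @ basis.T                        # each Sigma_i -> r coefficients

# Reconstruction error of the low-rank representation
X_hat = coeffs @ basis
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err)
```

With this representation you store the basis once ($r \cdot n^2$ numbers) plus $K \cdot r$ coefficients, instead of $K \cdot n^2$ numbers, which pays off once $K$ is large relative to $r$. Whether a small $r$ suffices depends on how similar the $\Sigma_i$ actually are.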
It may be relevant to add that these matrices are used in the model only to compute quadratic forms (i.e., $x^T \Sigma_i x$), and that I expect the different $\Sigma_i$ to be fairly similar to one another.
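Since only quadratic forms are needed, it may be worth noting that a common basis of rank-one terms makes them cheap to evaluate: if $\Sigma_i \approx \sum_j c_{ij}\, v_j v_j^T$ for shared vectors $v_j$, then $x^T \Sigma_i x \approx \sum_j c_{ij}\, (v_j^T x)^2$, so the full matrix is never formed. The vectors `V` and coefficients `c` below are hypothetical placeholders, just to check the identity numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 3

# Hypothetical shared rank-one basis: Sigma = sum_j c_j * v_j v_j^T
V = rng.standard_normal((r, n))      # rows are the shared basis vectors v_j
c = rng.uniform(0.5, 2.0, size=r)    # positive coefficients keep Sigma PSD

Sigma = sum(c[j] * np.outer(V[j], V[j]) for j in range(r))
x = rng.standard_normal(n)

# Full quadratic form vs. the coefficient form sum_j c_j (v_j . x)^2
q_full = x @ Sigma @ x
q_coef = np.sum(c * (V @ x) ** 2)
print(np.isclose(q_full, q_coef))  # prints True: the two forms agree
```

The coefficient form costs $O(rn)$ per evaluation instead of $O(n^2)$, which is another argument for a shared low-rank representation beyond the storage savings.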