
Problem: I want to solve the eigenvalue problem $$x=Ax$$ for the eigenvalue $1$ of a large matrix (roughly $N^3\times N^3$, with $N$ ranging from 10 to 100), where $A$ is stochastic (i.e. all entries are non-negative and each row sums to 1) and sparse (at most 10 entries per row are non-zero). However, $A$ is not given explicitly; only the product $Av$ for a given vector $v$ is available, and I think it would be cumbersome and inefficient to build and store such a matrix. Therefore, I decided to use the power iteration $$x_{k+1}=Ax_k,$$ which works but converges really slowly. I read that the inverse iteration $$y_{k+1}=(A-\mu I)^{-1}y_k$$ for some $\mu$ close to $1$ could converge faster, but since I don't have $A$ itself, I don't know how to apply $(A-\mu I)^{-1}$.
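For concreteness, here is the power iteration I am using, written against a black-box `matvec` callable so that $A$ never has to be stored. This is only a minimal sketch assuming NumPy; the small explicit $3\times 3$ stochastic matrix at the bottom is a stand-in for the real matrix-free operator, not part of my actual problem.

```python
import numpy as np

def power_iteration(matvec, x0, tol=1e-10, maxiter=10_000):
    """Power iteration using only the action v -> A v.

    For a stochastic matrix the dominant eigenvalue is 1, and the
    convergence is linear with rate |lambda_2| (the second-largest
    eigenvalue modulus) -- which is why it can be painfully slow
    when |lambda_2| is close to 1.
    """
    x = x0 / np.linalg.norm(x0, 1)
    for _ in range(maxiter):
        y = matvec(x)
        y /= np.linalg.norm(y, 1)  # renormalize each step to avoid drift
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x

# Toy stand-in: a small explicit row-stochastic matrix (the real problem
# would pass its matrix-free matvec routine instead).
A = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
x = power_iteration(lambda v: A @ v, np.array([1.0, 0.0, 0.0]))
```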

I have hardly any experience with such iterative methods or large sparse-matrix techniques, which is why I'm asking:

Question 1: Is there a matrix-free inverse iteration suitable for this problem?

Question 2: Is there another trick (e.g. some preconditioning) to speed up the power iteration?

Question 3: Is there a more efficient technique for solving this problem?

Any help or comment is greatly appreciated!

  • For the inversion, just solve problems of the form $(A-\mu I) y_{k+1} = y_k$ with some iterative solver. You can also use a preconditioner for the iterative solver. In addition, look into Arnoldi or Lanczos iterations for the estimation of eigenvalues. Commented Apr 12, 2024 at 18:01
  • Maybe try solving $(A-I)x=0$ with a sparse direct solver (e.g. MUMPS) or an iterative solver (e.g. GMRES). You may not need a preconditioner if the eigenvalues are all clustered, which might be the case here? Commented Apr 13, 2024 at 19:47
  • lightxbulb and @Laurent90, thank you for your comments; I'll try the algorithms you suggested. Concerning the eigenvalues, all I can say (at least I think so...) is that $1$ is the largest eigenvalue (Perron–Frobenius). Using $\mu=1$ would cause divergence. My concern is that using $\mu<1$ could make the iteration converge to another nearby eigenvalue/eigenvector. Is this a valid concern? Moreover, I read that solving $(A-I)x=0$ could be difficult because methods tend to find the trivial solution $x=0$ (mathoverflow.net/questions/410566/…) Commented Apr 13, 2024 at 21:07
  • ARPACK is an eigenvalue solver that does not require anything other than the action of a matrix. SLEPc can also handle matrices that are only provided via their action. Commented Apr 14, 2024 at 3:37
  • You might look into LOBPCG; I have had better luck with it than Lanczos/Arnoldi methods when it comes to finding small eigenpairs / inverse iteration. I would imagine it's not too hard to incorporate some shifting. Commented May 2, 2024 at 20:52
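Tying the comments together, both suggestions (the matrix-free inner solve for Question 1 and the ARPACK route) can be expressed with SciPy's `LinearOperator`: each inverse-iteration step solves $(A-\mu I)y = x$ with GMRES, so only the action $v \mapsto Av$ is ever needed, and ARPACK's `eigs` accepts the same operator directly. A rough sketch, assuming SciPy; the function `inverse_iteration`, its parameters, and the toy $3\times 3$ matrix are illustrative, not from the thread:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres, eigs

def inverse_iteration(matvec, n, mu, x0, outer_iters=50):
    """Shifted inverse iteration in which each shifted solve
    (A - mu I) y = x is done by matrix-free GMRES, so only the
    action v -> A v is required."""
    shifted = LinearOperator((n, n), matvec=lambda v: matvec(v) - mu * v)
    x = x0 / np.linalg.norm(x0)
    for _ in range(outer_iters):
        y, _ = gmres(shifted, x)   # inner matrix-free Krylov solve
        x = y / np.linalg.norm(y)  # renormalize; direction -> eigenvector
    return x

# Toy stand-in for the matrix-free operator.
A = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
matvec = lambda v: A @ v

# Shift mu = 0.99 targets the eigenvalue nearest to 1.
v1 = inverse_iteration(matvec, 3, 0.99, np.array([1.0, 0.0, 0.0]))

# Alternatively, hand the operator to ARPACK directly, as suggested above:
val, vec = eigs(LinearOperator((3, 3), matvec=matvec), k=1, which='LM')
```

Note the per-step error reduction of inverse iteration is roughly $|\lambda_1-\mu|/|\lambda_2-\mu|$, so a shift close to $1$ (but not equal to it, to keep the shifted system nonsingular) converges far faster than the power iteration's $|\lambda_2|$ rate.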
