Problem: I want to solve the eigenvalue problem $$x=Ax$$ for the eigenvalue $1$ of a large matrix (roughly $N^3\times N^3$, with $N$ ranging from 10 to 100), where $A$ is stochastic (i.e. all entries are non-negative and each row sums to $1$) and sparse (at most 10 non-zero entries per row). However, $A$ is not given explicitly; I can only compute the product $Av$ for any given vector $v$, and I think it would be cumbersome and inefficient to build and store such a matrix.

Therefore, I decided to use the power iteration $$x_{k+1}=Ax_k$$ which works but converges really slowly. I read that the inverse iteration $$y_{k+1}=(A-\mu I)^{-1}y_k$$ for some shift $\mu$ close to $1$ could converge much faster, but since I don't have $A$ itself, I don't know how to apply $(A-\mu I)^{-1}$ to a vector.
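For reference, here is a minimal sketch of what I'm currently doing (Python/NumPy). The names `apply_A` and `power_iteration` are just for illustration: `apply_A` stands in for my actual matrix-free routine that returns $Av$, and the lazy random walk on a cycle inside it is only a self-contained toy so the snippet runs.

```python
import numpy as np

N = 50  # toy size; my real problem has roughly N^3 rows

def apply_A(v):
    # Stand-in for my matrix-free product A @ v.
    # Toy placeholder: lazy random walk on a cycle (row-stochastic, 3 non-zeros per row).
    return 0.5 * v + 0.25 * (np.roll(v, 1) + np.roll(v, -1))

def power_iteration(x0, tol=1e-10, max_iter=100_000):
    # Plain power iteration x_{k+1} = A x_k with 1-norm normalization.
    x = x0 / np.linalg.norm(x0, 1)
    for k in range(1, max_iter + 1):
        y = apply_A(x)
        y /= np.linalg.norm(y, 1)
        if np.linalg.norm(y - x, np.inf) < tol:
            return y, k
        x = y
    return x, max_iter

x, iters = power_iteration(np.random.rand(N))
print(iters, np.linalg.norm(apply_A(x) - x, np.inf))  # iteration count and residual of x = A x
```

Even this toy operator needs a few thousand iterations because its second-largest eigenvalue is close to $1$, and my real operator behaves similarly.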
I have hardly any experience with such iterative methods or large sparse-matrix techniques, which is why I'm asking:
Question 1: Is there a matrix-free inverse iteration suitable for this problem?
Question 2: Is there another trick (e.g. some form of preconditioning) to speed up the power iteration?
Question 3: Is there a more efficient technique for solving this eigenvalue problem?
Any help or comment is greatly appreciated!