I would like to use a polynomial filtering scheme for finding the eigenvalues around a target angle of a given unitary matrix U. The scheme can be found in this paper. To reiterate, one applies a polynomial transformation to the given matrix $U$, $g(U)= \sum_{m=0}^{k} e^{-i m \phi} U^{m}$, where $\phi$ is the target angle around which I want to find the eigenvalues and $k$ is chosen so as to get good convergence. Conventionally the same task is done with a shift-invert type method. Shift-invert is also easy to use in Mathematica, and I have no problem with it. But since it is desirable to go to larger system sizes, and the memory requirements of shift-invert blow up pretty fast, polynomial filtering is an efficient way to do such targeting.
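For concreteness, the filter only ever needs to act on vectors: $g(U)v$ can be accumulated Horner-style, $w \leftarrow v + e^{-i\phi}\, U w$ repeated $k$ times, using nothing but matrix-vector products. A minimal sketch of my own (applyG is a made-up name, and $k$ is assumed to be a non-negative integer):

    (* apply g(U) = Sum_{m=0}^{k} Exp[-I m phi] U^m to a vector v without
       forming g(U): iterate w -> v + Exp[-I phi] (U.w) a total of k times *)
    applyG[mat_, phi_, k_, v_] := Nest[v + Exp[-I phi] (mat . #) &, v, k]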
I was trying to implement the same in Mathematica, and this is where the problem comes in. I did the naive brute-force thing: I evaluated the matrix polynomial explicitly and then used Mathematica's Arnoldi method on it; this time no shift is needed, since the filtering already takes care of that. But there is an obvious problem with this: the method is not "matrix-free". I would not have to handle that part myself if I could hand the matrix-vector product to ARPACK, since that package already does the job. But I am rather clueless as to whether there is even a way to use ARPACK as an external package in Mathematica and do the whole computation in a matrix-free manner. I tried to look further into the matter, but as far as I can tell, Mathematica does not offer matrix-free eigensolvers at the moment. Is it still possible for me to use ARPACK (which Mathematica's Arnoldi method is based on anyway) in Mathematica, such that I give it $g(U)$ as an operator, the rest is taken care of by it, and I then extract the final values? Any help or suggestion in this regard would be really helpful. Thanks.
Edit 1:
    \[CapitalNu] = 11; nev = 50;
    U = RandomVariate[CircularUnitaryMatrixDistribution[2^\[CapitalNu]]];
    phitg = Pi/2; (*target phase*)
    k = 0.8 2^(\[CapitalNu] + 1)/Floor[2^(\[CapitalNu]/2 + 1)];

So I take a simple unitary matrix from the circular unitary ensemble (CUE). I understand it is not an optimal choice (one would need a sparse matrix to really see the efficiency), but for the time being it does the job. Then I define a function to calculate the matrix polynomial as stated above:
    gk[k_, \[Phi]_, x_] := Sum[Exp[-I m \[Phi]] MatrixPower[x, m], {m, 0, k}]

Now I find the polynomially filtered $U$:
    Upoly = gk[k, \[Pi]/2, U];

Now if you calculate the eigenvalues of "maximum absolute value" of Upoly, they should match the eigenvalues of $U$ around $\phi = \pi/2$. For $U$ you can simply do
    Eigenvalues[U, 10, Method -> {"Arnoldi", "Shift" -> Exp[I \[Pi]/2]}]

while for Upoly it is to be noted that the eigenvalues are transformed (the eigenvectors remain the same). The original eigenvalues are recovered as $\psi_{i}^{\dagger} U \psi_{i}$, where the $\psi_{i}$ are the eigenvectors of $g(U)$. You can do that in Mathematica as follows:
    num = 10;
    {eval, evec} = Eigensystem[Upoly, num, Method -> {"Arnoldi", "Criteria" -> "Magnitude"}];
    ParallelTable[Conjugate[evec[[i]]] . U . evec[[i]], {i, 1, num}] // Sort

One can easily check that both give the same values. But I noticed that this approach is completely useless, because I am storing the large dense matrix obtained after the polynomial transformation, which defeats the very purpose the method was designed for. The only way around this is a matrix-free evaluation directly with the ARPACK package. It is easier to do in Python, as far as I can see (the paper also uses it), but I would like to know/see/understand whether it is doable in Mathematica or not.
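To make "matrix-free" concrete: what I am after is an Arnoldi iteration that only ever calls a user-supplied matrix-vector product. A bare-bones toy version of my own (certainly no substitute for ARPACK's implicitly restarted, tolerance-controlled implementation; arnoldiRitz, applyA, ndim, m are made-up names, and breakdown of the Krylov basis is not handled):

    (* toy Arnoldi: build an m-step Krylov basis using only the user-supplied
       matrix-vector product applyA on vectors of length ndim, then return the
       Ritz values (approximate largest-modulus eigenvalues) of the small
       Hessenberg matrix *)
    arnoldiRitz[applyA_, ndim_, m_] := Module[{q, h, v},
      q = {Normalize[RandomComplex[{-1 - I, 1 + I}, ndim]]};
      h = ConstantArray[0. + 0. I, {m, m}];
      Do[
        v = applyA[q[[j]]];
        Do[h[[i, j]] = Conjugate[q[[i]]] . v; v -= h[[i, j]] q[[i]], {i, 1, j}];
        If[j < m, h[[j + 1, j]] = Norm[v]; AppendTo[q, v/Norm[v]]],
        {j, 1, m}];
      Eigenvalues[h]]

With the names from the code above, something like arnoldiRitz[applyG[U, phitg, Floor[k], #] &, Length[U], 60] would then approximate the largest-modulus eigenvalues of $g(U)$ without ever storing it, though with none of ARPACK's restarting and convergence control.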
Edit 2: I have been thinking about it for a while. I already have a code (where I have to explicitly store the dense matrix). Is there still a way to do better than what I already have? I verified today that with this code I can go as far as a $2^{14} \times 2^{14}$ matrix. With $2^{15} \times 2^{15}$ it is still possible to do shift-invert, but the RAM usage just went crazy (I was on a workstation with 250 GB RAM), so I quit before it hanged (the usage went up to 165 GB to calculate 100 eigenvalues). As I understand it, the point of polynomial filtering is to avoid this shift-invert step: one just applies $g(U)$ and then computes the eigenvalues with the largest modulus (this is what the transformation $g(U)$ achieves). Can anyone suggest an optimal way to calculate $g(U)$? For now I do it the brute-force way, as in the code above. Another piece of information that may be important: in the final application, $U$ will have the form $U = e^{- i H_{2}} e^{-i H_{1}}$, where $H_{2}$ is a diagonal matrix and $H_{1}$ is a highly sparse matrix for some many-body system. Does this help in any way to calculate the polynomial faster?
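If it does help, here is a hedged sketch of how I imagine that structure could be exploited. Since only products $Uv$ are ever needed, $e^{-iH_1}v$ can be computed with MatrixExp[m, v] (the action of the matrix exponential on a vector, which accepts a SparseArray directly), and $e^{-iH_2}$ is just an elementwise phase. Here h2diag (the diagonal of $H_2$ stored as a vector) and h1 (a SparseArray) are assumed names:

    (* apply U = Exp[-I H2].Exp[-I H1] to a vector without building U:
       H2 is diagonal (stored as the vector h2diag), H1 is a SparseArray *)
    applyU[h1_, h2diag_, v_] := Exp[-I h2diag] MatrixExp[-I h1, v]

    (* g(U).v0 then follows from the same Horner-style iteration, e.g.
       Nest[v0 + Exp[-I phitg] applyU[h1, h2diag, #] &, v0, Floor[k]] *)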


