I would like to evaluate the matrix polynomial $g(A)= \sum_{m=0}^{k}e^{-i m \phi} A^{m}$, where $\phi$ is an angle provided by the user and $k$ is a positive integer. So I did a very naive thing to evaluate it:
```mathematica
gkm[k_, \[Phi]_, x_] := Module[{mpow, result},
  mpow = IdentityMatrix[Length[x]];
  result = mpow;
  Do[
   mpow = mpow . x;
   result += Exp[-I m \[Phi]] mpow,
   {m, 1, k}];
  result]
```

Basically, I save the running power in each iteration so that I do not have to multiply $m$ times to get $A^{m}$. However, this is not really optimal. For example, if $x$ were a scalar rather than a matrix, the polynomial could be evaluated efficiently using Horner's method; however, as mentioned on Wikipedia, Horner's method is not optimal when $x$ is a matrix. From there I went to the method with preprocessing, but that method completely eludes me. I was wondering whether there is an efficient way to evaluate the polynomial $g(A)$ faster than what I already have. I need it for matrices of dimension $2^{16}$ and $k \sim 12$ (which I would run on the workstation that I have access to).
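For reference, one standard way to cut down the number of matrix-matrix multiplications is the Paterson-Stockmeyer scheme: precompute $I, A, \dots, A^{s-1}$ and $A^{s}$ with $s \approx \sqrt{k}$, group the coefficients into blocks $B_j = \sum_{i=0}^{s-1} c_{js+i} A^{i}$, and evaluate $\sum_j (A^{s})^{j} B_j$ Horner-style in $A^{s}$, for roughly $s + k/s \approx 2\sqrt{k}$ multiplications instead of $k$. Here is a minimal sketch (`psPoly` is my own naming, assuming a dense numeric matrix and a coefficient list `c` of length $k+1$):

```mathematica
(* Paterson-Stockmeyer sketch: evaluates Sum[c[[m + 1]] A^m, {m, 0, k}]
   with ~2 Sqrt[k] matrix-matrix multiplications instead of k. *)
psPoly[c_List, A_?MatrixQ] := Module[{k = Length[c] - 1, s, pows, As, blocks},
  s = Max[1, Ceiling[Sqrt[k]]];
  pows = NestList[# . A &, IdentityMatrix[Length[A]], s - 1]; (* {I, A, ..., A^(s-1)} *)
  As = Last[pows] . A;                                        (* A^s *)
  blocks = (# . pows) & /@ Partition[c, s, s, {1, 1}, 0];     (* B_j = Sum_i c[js+i] A^i *)
  Fold[#1 . As + #2 &, Last[blocks], Reverse[Most[blocks]]]   (* Horner in A^s *)
 ]
```

For $g(A)$ the coefficients are `Table[Exp[-I m \[Phi]], {m, 0, k}]`, and `psPoly` applied to them should agree with `gkm` up to rounding; for $k = 12$ it uses 7 matrix-matrix multiplications rather than 12.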
Edit 1: I understand that the series is a geometric series, so it can be summed exactly, as mentioned in the comment below. However, I would like to avoid computing an inverse at any cost. Another way that I already tried is Mathematica's MatrixFunction command, and weirdly it gives an incorrect result even for matrices of dimension $2^{5}$ (I have no idea why). I assume it is doing some kind of approximation scheme, but it seems to be a bad one (at least to me), though it is certainly faster than what I already have.
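For completeness, the geometric closed form can be evaluated without forming an explicit inverse by using LinearSolve. A sketch (`gkClosed` is my own naming; it assumes $I - e^{-i\phi}A$ is nonsingular, i.e. $e^{i\phi}$ is not an eigenvalue of $A$):

```mathematica
(* Geometric-sum closed form: with z = E^(-I \[Phi]),
   g(A) = (I - z A)^(-1) . (I - z^(k+1) A^(k+1)).
   LinearSolve solves (I - z A) . X = I - z^(k+1) A^(k+1) without
   computing the inverse; MatrixPower uses repeated squaring, so
   only ~Log2[k] matrix-matrix multiplications are needed. *)
gkClosed[k_, \[Phi]_, A_] :=
 Module[{z = Exp[-I \[Phi]], id = IdentityMatrix[Length[A]]},
  LinearSolve[id - z A, id - z^(k + 1) MatrixPower[A, k + 1]]]
```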
As a benchmark, the code that I wrote already does the job for a $2^{10}$-dimensional matrix with $k=4$ in $\sim 7.5\,\mathrm{s}$ (MacBook Air M1). The matrix $x$ is fully dense and complex, with floating-point entries.
Edit 2: I tried using LinearSolve for the case where I needed to calculate the inverse (the closed form of the sum). It is not bad, with slightly higher memory requirements than the gkm function that I wrote. It seems that I need to reduce the number of matrix-matrix multiplications as much as possible, so maybe Horner's method would be a way to go after all. Here is what I tried; it might do a little better than what I already have. Given a polynomial $p(x)$, Mathematica can already produce its Horner form using HornerForm. However, I was stuck on how to use the resulting form for matrices. As an example, consider $k=4, \phi= \pi/2$; then I can do the following:
```mathematica
gk[k_, \[Phi]_, x_] := Sum[Exp[-I m \[Phi]] x^m, {m, 0, k}]

HornerForm[gk[k, \[Pi]/2, x]]
(* 1 + x (-I + x (-1 + x (I + x))) *)
```

But now I want $x$ to be a matrix and $1$ to be the identity matrix, and the plain product should be changed to the dot product for matrices. Is there a way to convert the expression into a form that can be used for matrices? Yes there is, and I found the way.
```mathematica
HornerForm[gk[k, \[Pi]/2, y]] /. {Times -> Dot,
  -I -> -I IdentityMatrix[2^\[CapitalNu]],
   I -> I IdentityMatrix[2^\[CapitalNu]],
   1 -> IdentityMatrix[2^\[CapitalNu]],
  -1 -> -IdentityMatrix[2^\[CapitalNu]],
   y -> U}
```

The above substitution takes care of it, and I have checked it. The time taken is similar to that of the first code I wrote, so it seems performance is still about the same.
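Instead of pattern-replacing the HornerForm output (which silently breaks for coefficients other than exactly $\pm 1$ or $\pm i$), the same Horner recursion can be written directly with Fold. A sketch (`hornerG` is my own naming):

```mathematica
(* Horner evaluation of g(A) = Sum[E^(-I m \[Phi]) A^m, {m, 0, k}]:
   start from the leading coefficient times the identity, then fold
   "multiply by A, add next coefficient times the identity" down to
   m = 0. Uses exactly k matrix-matrix multiplications. *)
hornerG[k_, \[Phi]_, A_] := Module[{id = IdentityMatrix[Length[A]]},
  Fold[#1 . A + #2 id &, Exp[-I k \[Phi]] id,
   Table[Exp[-I m \[Phi]], {m, k - 1, 0, -1}]]]
```

For instance, `hornerG[4, \[Pi]/2, U]` should reproduce the result of the substitution approach above for any square matrix `U`.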
![Time for m matrix-vector multiplications with g[A]](https://i.sstatic.net/Fy6WepUV.png)
Comments:

- Are those exact numbers or floating-point ones?
- MatrixFunction's poor accuracy here is the infamous sign problem, for which AFAIK the only general remedy is more precision. MatrixFunction is probably using machine floats, whereas Mathematica will in other contexts generally do things exactly if it can, and can be told to use arithmetic to some specific precision otherwise.