
I am trying to find the eigenvectors and eigenvalues of very large square symmetric matrices. They are Laplacian matrices on the Vicsek fractal. The matrices have side length 5^n ⋅ 4 + 1, where I am looking at n = 6, 7, 8, ..., so on the scale of 62 501, 312 501, 1 562 501, ... .
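For reference, the quoted sizes follow directly from that formula (a quick sanity check, assuming the side length really is 5^n ⋅ 4 + 1):

```python
# Side length of the Vicsek-fractal Laplacian for n = 6, 7, 8.
sides = [5**n * 4 + 1 for n in (6, 7, 8)]
print(sides)  # [62501, 312501, 1562501]
```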

So far I have tried:

Since they are symmetric, I use numpy.linalg.eigh, which gives the following runtimes:

| n value | # of cols & rows | Time in seconds |
|---------|------------------|-----------------|
| 4       | 2 501            | ~14             |
| 5       | 12 501           | ~442            |

I have also tried numpy for n = 7, but ended up running out of memory.

Next I tried scipy.sparse.linalg.eigsh, since the Laplacian matrices are sparse (there are at most 5 non-zero entries per column), but this took ~72 seconds for n = 4 and well over 600 seconds for n = 5 (I ended up stopping the program for taking too long).

I have also tried the package mpmath, but that ended up being much slower.

I don't think it is my own code that is slowing this down, since I only call the functions from either numpy or scipy.

```python
def laplacianOperatorMatrix(allPointsDict: dict):
    # Degree of each point on the diagonal, -1 for each pair of neighbours.
    return [[len(allPointsDict[i]["neighbours"]) if i == j
             else -1 if j in allPointsDict[i]["neighbours"]
             else 0
             for j in allPointsDict]
            for i in allPointsDict]

NpEigVec, NpEigMatrix = np.linalg.eigh(laplacianOperatorMatrix(allPointsDict="some dict"))
SciEigVec, SciEigMatrix = linalg.eigsh(
    sparse.csc_matrix(laplacianOperatorMatrix(allPointsDict="some dict")),
    k=pow(5, n) * 4 + 1 - 1,
)
```
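One possible optimisation of the construction step: the Laplacian can be assembled directly in sparse COO format from the neighbour dict, so the dense N × N list of lists (which is itself enough to exhaust memory at n = 7) is never materialised, and the O(N) `j in neighbours` membership tests are avoided. This is a sketch under the assumption that `allPointsDict` maps each point to a dict with a `"neighbours"` list, as in the code above; the toy cycle graph at the bottom is hypothetical and only shows the call pattern:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def laplacian_sparse(all_points_dict):
    # Build the Laplacian directly in COO format from the neighbour dict,
    # so a dense (N x N) intermediate is never created.
    index = {node: i for i, node in enumerate(all_points_dict)}
    rows, cols, vals = [], [], []
    for node, info in all_points_dict.items():
        i = index[node]
        rows.append(i)
        cols.append(i)
        vals.append(float(len(info["neighbours"])))  # degree on the diagonal
        for nb in info["neighbours"]:
            rows.append(i)
            cols.append(index[nb])
            vals.append(-1.0)                        # -1 for each edge
    n = len(all_points_dict)
    return sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# Toy example (a 10-node cycle), just to show the call pattern:
cycle = {i: {"neighbours": [(i - 1) % 10, (i + 1) % 10]} for i in range(10)}
L = laplacian_sparse(cycle)
vals, vecs = eigsh(L, k=3, which="SM")  # 3 smallest-magnitude eigenpairs
```

Note that this only removes the construction bottleneck; eigsh with k close to N − 1 still has to compute essentially the full spectrum, which is where the real cost lies.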

Is there a way to optimize finding the eigenvectors? Or maybe some packages/methods for finding eigenvectors that I don't know of?

  • Do you need all or only some of the eigenvalues and eigenvectors? Commented Mar 20 at 16:43
  • @jared I need them all, but in scipy.sparse.linalg.eigsh I can drop one, since it never finds the eigenvalue 0. So I know which eigenvalue/eigenvector I don't have. Commented Mar 20 at 16:48
  • "The matrices have side length 5^n ⋅ 4 + 1." This is a red flag. For n=8, this means (5**8*4+1)**2=2_441_409_375_001 items, so >9000 GiB of memory to store the matrix. Then you need to compute it. The best algorithm for eigenvalues is O(n**3), where n is the side of the matrix (for square matrices). See en.wikipedia.org/wiki/Eigenvalue_algorithm . It can be faster on sparse matrices. This means the time taken is on the order of at least a few weeks (probably months in practice) on a fast CPU with a very good implementation scaling perfectly. Commented Mar 20 at 17:24
  • Fortunately, you seem to use sparse matrices. In this case, a better algorithm can run in O(dn²) time, where d is the number of nnz values. This can be done using the "Implicitly Restarted Lanczos Method", but scipy.sparse.linalg.eigsh already uses it. Note that the article mentions faster algorithms for specific problems/matrices, like "symmetric tridiagonal eigenvalue problems". Commented Mar 20 at 17:35
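The back-of-the-envelope estimate in the comment above can be checked directly (assuming 8-byte float64 entries, which is what numpy.linalg.eigh would operate on):

```python
# Dense-storage cost for the n = 8 matrix from the comment above.
side = 5**8 * 4 + 1      # 1_562_501 rows/columns
items = side**2          # 2_441_409_375_001 entries
gib = items * 8 / 2**30  # ~18_000 GiB as float64
print(side, items, gib)
```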
