For problems with exact coordinates, one can code up the definition of an eigenvector directly. The function eigV finds the eigenvalue for a given vector, in the form L == value, or returns False if there is none; the function eigQ returns True if there exists an eigenvalue for the given vector and False otherwise:
ClearAll[eigQ, eigV];
eigV[m_, v_] := Reduce @ Thread[(m - SparseArray[{i_, i_} :> L, Dimensions[m]]) . v == 0];
eigV[m_][v_] := eigV[m, v];  (* operator form *)
eigQ[m_, v_] := Resolve @ Exists[L, eigV[m, v]];
eigQ[m_][v_] := eigQ[m, v];  (* operator form *)
Examples:
eigQ[h] /@ {y, {-I (-2 + Sqrt[3]), 1 - Sqrt[3], 1}}
(* {False, True} *)

eigV[h] /@ {y, {-I (-2 + Sqrt[3]), 1 - Sqrt[3], 1}}
(* {False, L == 1 - Sqrt[3]} *)
Or simply
eigQ[h, y] (* False *)
For approximate problems, one would have to account for rounding error.
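A minimal sketch of such a tolerance-based test (the helper name approxEigQ and the default tolerance are assumptions, not part of the original answer): project m.v onto v to get the least-squares eigenvalue, then check that the residual is small relative to the scale of the problem.

```
(* Test whether v is approximately an eigenvector of m. *)
ClearAll[approxEigQ];
approxEigQ[m_?MatrixQ, v_?VectorQ, tol_ : 10^-8] :=
  Module[{r = m . v, lambda},
    (* least-squares eigenvalue: the Rayleigh quotient of v *)
    lambda = Conjugate[v] . r / (Conjugate[v] . v);
    (* accept if the residual is small relative to ||m|| ||v|| *)
    Norm[r - lambda v] <= tol Norm[m] Norm[v]
  ]
```

For the vectors above, approxEigQ[N[h], N[y]] should still return False, while a numericized true eigenvector passes the test.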
From the comments: one can also compute h.y/y; dividing by y divides element-wise, so if y is an eigenvector, every element of the resulting vector is the same (namely the eigenvalue). Alternatively, use Solve[h.y == lambda*y, lambda]; y is an eigenvector iff the solution set is nonempty.