
I have a smallish (e.g. 2x2 or 4x4, but ideally up to 10x10) non-symmetric square matrix $\mathbf{A}(x)$. I need to define a function $f(x)$ which is the sum of the eigenvalues with positive real part of $\mathbf{A}(x)$, for $x \in [x_a,x_b]$.

I then wish to use this function as a normalisation factor within a system of differential equations, $\mathbf{y}' = (\mathbf{Q}(x) - f(x) \cdot \mathbf{I}) \,\mathbf{y}$.

As a small initial example, in the 2x2 case $\mathbf{Q}=\mathbf{A}$:

A = {{0, 1}, {-1 + x^2, 0}}; Q = A; 

The eigenvalues here become purely imaginary for $|x|<1$, so I need to be able to detect those transitions cleanly.
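For this 2x2 example the transition can be confirmed symbolically (a quick check, not part of the original setup): the characteristic polynomial is $\lambda^2 = x^2 - 1$, so the eigenvalues are $\pm\sqrt{x^2-1}$, purely imaginary exactly when $|x|<1$:

Eigenvalues[{{0, 1}, {-1 + x^2, 0}}]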

Plot[Evaluate@Eigenvalues[A], {x, -4, 4}] 

*(plot of the eigenvalues of $\mathbf{A}$ for $x \in [-4, 4]$)*

xa = -4; xb = 4;
yVector = {y[1][x], y[2][x]};
ICs = {y[1][xa] == 0, y[2][xa] == 1};
Clear[eee, totalPosEigs];
eee[z_] = Eigenvalues[N[A /. x -> z]];
totalPosEigs[z_?NumericQ] := totalPosEigs[z] = (Sow[z]; Total@Select[eee[z], Re[#] > 0 &]);
yeqn = Thread[D[yVector, x] == (Q - totalPosEigs[x]*IdentityMatrix[Length[Q]]).yVector];
NDSolve[{yeqn, ICs}, Array[y, {Length[yVector]}], {x, xa, xb}]; // AbsoluteTiming
(* {0.01045, Null} *)

This works, and is fast enough for the 2x2 case. However, as the matrix $\mathbf{A}$ gets larger (here 4x4), this approach becomes too slow ($\mathbf{Q}$ is now 6x6):

A = {{0, 1, 0, 0},
     {0, 0, 1, 0},
     {0, 0, 0, 1},
     {-(1/ϵ^4), -((4 Cos[x])/ϵ^2), -((4 Sin[x])/ϵ^2), 0}} /. ϵ -> 1/10;
Q = {{0, 1, 0, 0, 0, 0},
     {0, 0, 1, 1, 0, 0},
     {-220 Cos[x], -220 Sin[x], 0, 0, 1, 0},
     {0, 0, 0, 0, 1, 0},
     {10000, 0, 0, -220 Sin[x], 0, 1},
     {0, 10000, 0, 220 Cos[x], 0, 0}};

The eigenvalue structure is also more complicated in this case:

Plot[Evaluate@Re@Eigenvalues[A], {x, xa, xb}] 

*(plot of the real parts of the eigenvalues of $\mathbf{A}$ for $x \in [x_a, x_b]$)*

xa = -4; xb = 4;
yVector = Through[Array[y, {Length[Q]}][x]];
ICs = {y[1][xa] == 0, y[2][xa] == 0, y[3][xa] == 0,
       y[4][xa] == 0, y[5][xa] == -1, y[6][xa] == 0};
Clear[eee, totalPosEigs];
eee[z_] = Eigenvalues[N[A /. x -> z]];
totalPosEigs[z_?NumericQ] := totalPosEigs[z] = (Sow[z]; Chop@Total@Select[eee[z], Re[#] > 0 &]);
yeqn = Thread[D[yVector, x] == (Q - totalPosEigs[x]*IdentityMatrix[Length[Q]]).yVector];
AbsoluteTiming[
 {sol, {pts}} = Reap[NDSolve[{yeqn, ICs}, Array[y, {Length[yVector]}], {x, xa, xb}]];]

(* {0.229799, Null} *)

Here totalPosEigs has been evaluated 567 times (even with caching); each evaluation is fast, but together they contribute most of the computation time. If I instead use, say, the sum of the absolute values of the eigenvalues, the calculation takes a quarter of the time, so the Select is a large part of the slowdown.

It's worth noting that the eigenvalues may have a nonzero imaginary part at the points where their real parts become positive, and that the matrix may contain interpolating functions of $x$.

  • To clarify: do you need the sum of the positive eigenvalues (as your text says: positive real part and zero imaginary part) or the sum of the eigenvalues with positive real part (as your code says: positive real part and arbitrary imaginary part)? Commented May 15, 2019 at 9:09
  • @Roman, sorry, positive real part yes. I'll clarify that, thanks. Commented May 15, 2019 at 10:17
  • There is no need to Sow/Reap the points at which totalPosEigs is called, as these are stored as DownValues anyway. You can extract them with pts = Cases[DownValues[totalPosEigs], RuleDelayed[_[totalPosEigs[x_]], y_] /; NumericQ[x] && NumericQ[y] :> {x, y}] after the code finishes. Commented May 15, 2019 at 12:17

1 Answer


Hmmm. I would set up this system as follows; this way, it is about five times as fast:

A = {{0, 1, 0, 0},
     {0, 0, 1, 0},
     {0, 0, 0, 1},
     {-(1/ϵ^4), -((4 Cos[x])/ϵ^2), -((4 Sin[x])/ϵ^2), 0}} /. ϵ -> 1/10;
Q = {{0, 1, 0, 0, 0, 0},
     {0, 0, 1, 1, 0, 0},
     {-220 Cos[x], -220 Sin[x], 0, 0, 1, 0},
     {0, 0, 0, 0, 1, 0},
     {10000, 0, 0, -220 Sin[x], 0, 1},
     {0, 10000, 0, 220 Cos[x], 0, 0}};
n = Length[Q];
cA = With[{code = N@A}, Compile[{{x, _Real}}, code, CompilationTarget -> "C"]];
cQ = With[{code = N@Q}, Compile[{{x, _Real}}, code, CompilationTarget -> "C"]];
sysmat[x_?NumericQ] := With[{
    λ = (1. - UnitStep[-Re[#]]).# &@Eigenvalues[cA[x]]
    },
   cQ[x] - DiagonalMatrix[ConstantArray[λ, n]]
   ];
xa = -4; xb = 4;
Y0 = {0, 0, 0, 0, -1, 0};
Ysol = NDSolveValue[{
      Y'[x] == sysmat[x].Y[x],
      Y[xa] == Y0
      }, Y, {x, xa, xb}
    ]; // AbsoluteTiming // First

0.043133

The most essential steps:

  • Compiling the definitions of A and Q.
  • Replacing Select by UnitStep and Total by Dot.
  • Performing all relevant computation in sysmat with scoped variables, so that memoization is not required.
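The UnitStep/Dot replacement can be sanity-checked against the original Select/Total on a small sample list (a quick check, not part of the original answer): 1 - UnitStep[-Re[λ]] is 1 for entries with positive real part and 0 otherwise (note UnitStep[0] == 1, so eigenvalues with zero real part get weight 0, matching the strict Re[#] > 0 test), and dotting this 0/1 vector with the eigenvalue list sums exactly the selected entries:

vals = {2. + I, -1., 0.5 - 2. I, -3. + I};
(1. - UnitStep[-Re[#]]).# &@vals == Total@Select[vals, Re[#] > 0 &]
(* True *)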

Addendum

OP asked whether this would also work with ParametricNDSolveValue if A and Q depend on a further parameter, say z. My first idea was to add z as a further argument to sysmat, but upon calling ParametricNDSolveValue, this resulted in the error message

"Dependent variables {Y,sysmat[x,z]} cannot depend on parameters {z}."

I have no idea why this happens. I consider it a bug.

One can circumvent this by also passing Y[x] as an argument to the function F on the right-hand side of the ODE:

F[x_?NumericQ, z_?NumericQ, Y_?VectorQ] := With[{
    λ = (1. - UnitStep[-Re[#]]).# &@Eigenvalues[cA[x]]
    },
   (cQ[x] - DiagonalMatrix[ConstantArray[λ, n]]).Y
   ];
Y0 = {0, 0, 0, 0, -1, 0};
Ysol = ParametricNDSolveValue[
   {Y'[x] == F[x, z, Y[x]], Y[xa] == Y0},
   Y, {x, xa, xb}, z
   ];

(I am well aware that the parameter z does not affect the result.)

  • Thank you Henrik. I don't think the compilation is making any real difference here, but the UnitStep/Dot magic instead of Select makes such a difference. Commented May 15, 2019 at 12:54
  • Actually, compiling the functions cA and cQ here takes ~0.1 seconds each, so the compilation is helping the speed (when there are no interpolation functions in the matrix; otherwise compiling needs MainEvaluate anyway) but at the trade-off of the time to compile. I can probably rewrite my code to only need to compile once for multiple parameters. Commented May 15, 2019 at 15:05
  • Yes, of course: you may add further arguments to the compiled functions to supply further parameters. Commented May 15, 2019 at 15:16
  • Thanks Henrik, I rewrote my code to only compile once for all x and another parameter, and to use the vectorized form. Commented May 16, 2019 at 18:59
  • You're welcome! Commented May 16, 2019 at 19:00
