
By default, NIntegrate works with MachinePrecision, and its PrecisionGoal is set to Automatic, which effectively corresponds to a value near 6:

In[1]:= Options[NIntegrate, {WorkingPrecision, PrecisionGoal}]

Out[1]= {WorkingPrecision -> MachinePrecision, PrecisionGoal -> Automatic}

I need substantially higher accuracy when computing integrals similar to this one:

dpdA[i_] := NIntegrate[Cos[φ] Cos[i*φ] Exp[Sum[-Cos[j*φ], {j, 11}]], {φ, 0, Pi},
  Method -> {Automatic, "SymbolicProcessing" -> None}]

The integral cannot be evaluated symbolically, so "SymbolicProcessing" is turned off.

In fact, I need to compute such integrals thousands of times during an optimization procedure, in order to find the best coefficients a[j] inside the sum:

dpdA[i_] := NIntegrate[Cos[φ] Cos[i φ] Exp[Sum[(-a[j]) Cos[j φ], {j, 11}]], {φ, 0, Pi},
  Method -> {Automatic, "SymbolicProcessing" -> None}]

Is there a way to precondition this integral in order to make integration with high WorkingPrecision faster? Perhaps using Experimental`NumericalFunction?

The problem is that when I increase PrecisionGoal to 15, and consequently WorkingPrecision to a value higher than MachinePrecision, I get very poor performance.
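For concreteness, a minimal way to reproduce the slowdown (the helper name g and the choice i = 2 are just for illustration; absolute timings are machine-dependent):

```mathematica
(* minimal reproduction of the slowdown; g and i = 2 are illustrative choices *)
g[φ_] := Cos[φ] Cos[2 φ] Exp[Sum[-Cos[j φ], {j, 11}]];

(* machine precision: fast *)
AbsoluteTiming[NIntegrate[g[φ], {φ, 0, Pi},
  Method -> {Automatic, "SymbolicProcessing" -> None}]]

(* extended precision: the same integral becomes much slower *)
AbsoluteTiming[NIntegrate[g[φ], {φ, 0, Pi}, PrecisionGoal -> 15,
  WorkingPrecision -> 30, Method -> {Automatic, "SymbolicProcessing" -> None}]]
```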

  • With NIntegrate, performance is usually dictated by the appropriateness (or otherwise) of the chosen method. Try, for example, dpdA[i_] := NIntegrate[Cos[φ] Cos[i*φ] Exp[Sum[-Cos[j*φ], {j, 11}]], {φ, 0, Pi}, WorkingPrecision -> $MachinePrecision, MinRecursion -> 3, MaxRecursion -> 5, Method -> {"GlobalAdaptive", Method -> {"GaussKronrodRule", "Points" -> 20}, "SymbolicProcessing" -> False}]. This works well for small arguments, but for larger arguments the integrand is highly oscillatory, so another method could be better. Commented Jul 28, 2012 at 0:24
  • @Oleksandr is right; a different method would be better. You say you don't want symbolic processing, so you can't use "LevinRule" as the Method (though it seemed to work nicely on your integral when I tried it, after removing "SymbolicProcessing" -> None). I would suggest trying "ClenshawCurtisOscillatoryRule" as the Method. See if it helps. Commented Jul 28, 2012 at 0:40
    Maybe you can speed up the computation by transforming your integral with the substitution u == Cos[φ], φ == ArcCos[u], dφ == -du/Sqrt[1 - u^2]. With that transformation you get rid of all the costly trigonometric function calls, so NIntegrate might give you shorter integration times. Commented Aug 10, 2012 at 18:53
    OK, I take back that suggestion. I just tested it, and it takes roughly double the integration time, because the transformed integrand has singularities at the endpoints. Commented Aug 10, 2012 at 18:59
    @Thies In fact, when confronted with an integral containing a $\sqrt{1-u^2}$ factor over the interval $(-1,1)$ to be evaluated numerically, the first instinct should be the substitution $u=\sin v$ or $u=\cos v$ to mitigate the endpoint singularities... Commented Sep 7, 2012 at 10:14

1 Answer


Maybe the solution is to forget about NIntegrate and try to do it 'by hand', e.g.

integrate[f_, nx_, prec_: MachinePrecision] :=
 Module[{xg, fg},
  xg = Table[(2 i - 1)/(2*nx) \[Pi], {i, 1, nx}];
  fg = N[f /@ xg, prec];
  \[Pi]/Sqrt[nx] First@FourierDCT[fg, 2]];

I'm assuming that the integrand has the following form

$$ f(x)=\sum_{j\geq 0}a_{j}\cos\left(j x\right) $$

then, the integral of $f(x)$ can be rewritten as

$$ \int_{0}^{\pi} f(x) dx = \int_{0}^{\pi} \sum_{j\geq 0}a_{j}\cos\left(j x\right) dx = \sum_{j\geq 0}a_{j}\int_{0}^{\pi}\cos\left(j x\right) dx $$

The last integral is $\pi\delta_{j0}$, so the final result is

$$ \int_{0}^{\pi} f(x) dx = \pi a_{0}$$

I'm using FourierDCT[] to obtain $a_{0}$; for that I need to sample $f(x)$ at a compatible set of collocation points (the factor $1/\sqrt{nx}$ comes from the normalization used by FourierDCT[]). This gives an exact result when the function is a linear combination of cosines up to $\cos((nx-1)x)$. In your case
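A quick sanity check of that claim (the test function and grid size here are my own choices): for a pure cosine polynomial of degree below nx, integrate[] should return exactly $\pi a_{0}$, up to the working precision.

```mathematica
(* sanity check: a cosine polynomial with a0 = 3 and degree 5 < nx = 16,
   so integrate[] should reproduce the exact value 3 π *)
test = Function[x, 3 + 2 Cos[x] + Cos[5 x]];
integrate[test, 16, 30]  (* should agree with N[3 π, 30] *)
```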

Do[a[j] = RandomReal[{0, 1}, WorkingPrecision -> 50], {j, 11}];
f = Function[{\[CurlyPhi], i},
  Cos[\[CurlyPhi]] Cos[i \[CurlyPhi]] Exp[Sum[(-a[j]) Cos[j \[CurlyPhi]], {j, 11}]]];

The integrate[] procedure produces the result in roughly an order of magnitude less time (using a grid of 'just' 128 points):

NIntegrate[f[x, 2], {x, 0, \[Pi]}, AccuracyGoal -> 25, PrecisionGoal -> 25,
  WorkingPrecision -> 50] // AbsoluteTiming
(* => {0.265530, -0.63130651358588386138570371796273647256851197182433} *)

integrate[f[#, 2] &, 128, 50] // AbsoluteTiming
(* => {0.030267, -0.631306513585883861385703717912986612789831494874} *)
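Assuming the definitions above, this plugs straight into the question's dpdA; a sketch (the grid size 128 and precision 50 are the same choices as above, not tuned values):

```mathematica
(* hypothetical drop-in replacement for the question's dpdA[],
   built on integrate[]; nx = 128 and prec = 50 as in the timing above *)
dpdA[i_] :=
  integrate[Cos[#] Cos[i #] Exp[Sum[(-a[j]) Cos[j #], {j, 11}]] &, 128, 50]
```

Since integrate[] only evaluates the integrand on a fixed midpoint grid, this version has a predictable, constant cost per call, which matters when it sits inside an optimization loop.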

In addition, one can check how the error changes with the number of points (nx):

convData = Table[{nx, integrate[f[#, 2] &, nx, 50]}, {nx, 2^Range[3, 8]}];
ListPlot[MapAt[Log10@Norm[# - convData[[-1, 2]]] &, convData, {All, 2}],
 Joined -> True, PlotMarkers -> Automatic, AxesLabel -> {"nx", "Log10[Error]"}]

(plot: Log10[Error] against nx)

Of course, due to the oscillatory character of your function, the number of grid points nx will need to be increased for higher $i$.
