The point here is that one can approach compiled speeds with thoughtful Mma code.
For reference:
- I used @Henrik's example with all parameters set to `1.`.
- A high-precision version `opHP` of the OP's integrals is computed.
- The OP's `modelLN2[1., 1., 1., 1., 1., 1., 1.]` took 1.79 s on my machine.
- @Henrik's `cc` (Gaussian quadrature) example took 0.00359 s on my machine.
- All timings below are under 0.002 s.
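The core trick used throughout is to replace `NIntegrate` by a fixed Gaussian rule: get the nodes and weights once, rescale them from the unit interval to the integration interval, and evaluate each integral as a dot product. A minimal sketch of the idea on a toy integrand (the `Exp[-k]` example here is just for illustration):

```mathematica
(* GaussRuleData[n, prec] returns {nodes, weights, errorWeights} on {0, 1};
   Most drops the error weights, which we don't need. *)
{pts, wts} = Most@NIntegrate`GaussRuleData[31, MachinePrecision];
k = Rescale[pts, {0, 1}, {0.0001, 4.}]; (* map nodes to the k-interval *)
w2 = (4. - 0.0001) wts;                 (* scale weights by the interval length *)

(* The integral of Exp[-k] over {k, 0.0001, 4} becomes a dot product: *)
Exp[-k] . w2
(* agrees with NIntegrate[Exp[-k], {k, 0.0001, 4}] to machine precision *)
```

Because the integrand is smooth, a 31-point rule is already far more accurate than the fit requires, and the dot product vectorizes over all 5000 time points at once.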
A high-precision version of the OP's code, used as a reference (it took almost 27 s, but its timing is irrelevant):
```mathematica
opHP = Block[{AN = 1, tN = 1, tr = 1, ALN = 1, xmf = 1, w = 1, y0 = 1},
    Table[
     AN*Exp[-(i*0.128`32 - 0.128`32)/tN] +
      Exp[-(i*0.128`32 - 0.128`32)/tr]*
       NIntegrate[
        ALN*Exp[-(1/w^2)*(Log[k/xmf])^2]*Exp[-k*(i*0.128 - 0.128)] // Rationalize,
        {k, 0.0001, 4.0},
        Method -> {"GaussKronrodRule", "Points" -> 9, "SymbolicProcessing" -> 0},
        MinRecursion -> 2, WorkingPrecision -> 32],
     {i, 1.0, 5000.0, 1.0}]]; // AbsoluteTiming
```
A fast, uncompiled version:
```mathematica
{pts, wts} = Most@NIntegrate`GaussRuleData[31, MachinePrecision];
dd = Block[{
     AN = 1., tN = 1., tr = 1., ALN = 1., xmf = 1., w = 1., y0 = 1.,
     k = Rescale[pts, {0, 1}, {0.0001, 4.}], (* nodes mapped to {0.0001, 4} *)
     wts = (4. - 0.0001) wts},               (* weights scaled by interval length *)
    Block[{iRange = Subdivide[0., -4999.*0.128, 4999]}, (* iRange = -(i - 1)*0.128 *)
     AN*Exp[iRange/tN] +
      ALN*(Exp[iRange/tr]*
         Exp[Table[(-1/w^2)*(Log[k/xmf])^2 + k*i2,
           {i2, iRange}]  (* i2 runs over the raw times, so this is Exp[-k t] *)
          ] . wts)
     ]]; // RepeatedTiming
(*  {0.00190383, Null}  *)

MinMax[(opHP - dd)/opHP] (* relative precision *)
(*  {-8.649*10^-9, 9.59972*10^-9}  *)
```
Compare with code compiled to the `"C"` and `"WVM"` (default) targets:
```mathematica
int2 = Compile[{{AN, _Real}, {tN, _Real}, {tr, _Real}, {ALN, _Real},
    {xmf, _Real}, {w, _Real}, {y0, _Real}, {k, _Real, 1}, {wts, _Real, 1}},
   Block[{ii = Table[(1 - i)*0.128, {i, 1.0, 5000.0, 1.0}]},
    AN*Exp[ii/tN] +
     ALN*(Exp[ii/tr]*
        Exp[Table[(-1/w^2)*(Log[k/xmf])^2 + k*i2, {i2, ii}]] . wts)
    ]
   (*, CompilationTarget -> "C"*)(* optional; uncomment for the "C" target *),
   RuntimeOptions -> "Speed"];

(* "C" *)
{pts, wts} = Most@NIntegrate`GaussRuleData[31, MachinePrecision];
ee = int2[1., 1., 1., 1., 1., 1., 1.,
    Rescale[pts, {0, 1}, {0.0001, 4.}], (4 - 0.0001) wts]; // RepeatedTiming
(*  {0.001502, Null}  *)

(* "WVM" *)
{pts, wts} = Most@NIntegrate`GaussRuleData[31, MachinePrecision];
ee = int2[1., 1., 1., 1., 1., 1., 1.,
    Rescale[pts, {0, 1}, {0.0001, 4.}], (4 - 0.0001) wts]; // RepeatedTiming
(*  {0.00189007, Null}  *)

MinMax[(opHP - ee)/opHP] (* relative precision *)
(*  {-8.649*10^-9, 9.59972*10^-9}  *)
```
The compiled-to-C version is significantly faster than the uncompiled one, by 20% to 25%. The WVM version is only slightly faster. (Note: both `AbsoluteTiming[]` and `RepeatedTiming[]` vary from run to run, `RepeatedTiming[]` less so. The WVM version was usually faster than the uncompiled version, so I think it is fair to call it faster.)
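When benchmarking `Compile`d code it is also worth confirming that the body compiled fully; any fallback call into the main evaluator would erase the speed advantage. A standard check (not part of the comparison above) is to inspect the bytecode:

```mathematica
Needs["CompiledFunctionTools`"];
(* Print the compiled instructions; a MainEvaluate instruction means part
   of the body falls back to the ordinary (slow) evaluator. *)
CompilePrint[int2]
(* or test programmatically: *)
StringContainsQ[CompilePrint[int2], "MainEvaluate"]
```

For `int2` above there is no `MainEvaluate`, which is why both targets stay close to the hand-vectorized uncompiled version.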
> *Comment from the OP:* `modelLN2` returns a list which is convolved with another list using `ListConvolve`. The resulting list is then compared with experimental data to make a fit. In other words, `modelLN2` is a theoretical model used to fit experimental data via `FindMinimum`; this function calls `modelLN2` many times, and the fit takes about 30 minutes. The 2.5 s is the time my computer needs to call `modelLN2` only once.
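Given that workflow, the fast `int2` can be dropped into the fit directly. A hypothetical sketch, where `kernel` and `data` stand in for the OP's actual convolution kernel and experimental data (names and starting values are assumptions, not from the question):

```mathematica
{pts, wts} = Most@NIntegrate`GaussRuleData[31, MachinePrecision];
knodes = Rescale[pts, {0, 1}, {0.0001, 4.}];
kwts = (4. - 0.0001) wts;

(* ?NumericQ guards keep FindMinimum from calling the model symbolically *)
model[AN_?NumericQ, tN_?NumericQ, tr_?NumericQ, ALN_?NumericQ,
   xmf_?NumericQ, w_?NumericQ, y0_?NumericQ] :=
  ListConvolve[kernel, int2[AN, tN, tr, ALN, xmf, w, y0, knodes, kwts]];

(* least-squares objective over the experimental data, e.g.:
   FindMinimum[Total[(model[a, b, c, d, e, f, g] - data)^2],
    {{a, 1.}, {b, 1.}, {c, 1.}, {d, 1.}, {e, 1.}, {f, 1.}, {g, 1.}}] *)
```

With the per-call cost down from seconds to milliseconds, the 30-minute fit should shrink by roughly the same factor, up to `FindMinimum`'s own overhead.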