  • `f3[x_] := f3[x - 1]^2 - f3[x - 1] + 1` requires two calls to `f3[x - 1]`, which require four calls to `f3[x - 2]`, and so on, exponentially. This is because `f3[x_]` is defined with a delayed assignment and might have side effects, so it cannot be auto-cached by the kernel. If you define `counter = 0; f3[x_] := (counter++; f3[x - 1]^2 - f3[x - 1] + 1);` then after calling `f3[25]` we have `counter` equal to $16777215=2^{24}-1$, showing the exponential calling pattern. (A runnable version of this counter experiment is sketched after these comments.) Commented Mar 29, 2022 at 10:17
  • In short, without memoization the values get recomputed. If they are defined recursively, that can be slow, in particular if there is more than one recomputation per step. (@Roman and @DanielHuber explain this quite well; I'm just summarizing. The memoization idiom itself is sketched after these comments.) Commented Mar 29, 2022 at 15:08
  • I feel this is explained in this old Q&A: mathematica.stackexchange.com/questions/2639/… Commented Mar 29, 2022 at 17:04
  • @Roman As you know, I'm trying to understand why I see a timing benefit when memoization is added to a recursion like `f[x_] := f[x - 1] + f[x - 1] + 1`, but not with `f[x_] := f[x - 1] + 1`. It sounds like you're saying that memoization speeds up the recursion in both cases, because prior values are never cached in its absence. Thus with `f[x_] := f[x - 1] + 1`, a call is required to `f[x - 1]`, which in turn requires a call to `f[x - 2]`, and so on. With memoization, by contrast, such recalculation would not be needed. Commented Mar 30, 2022 at 3:56
  • However, because in this case the number of calls does not grow exponentially the way it would with `f[x_] := f[x - 1] + f[x - 1] + 1`, the advantage of memoization is far less, which is why I don't see the timing difference. Do I have that right? If so, it seems that if `f[x]` were a temporally expensive calculation, then I should see a timing difference even with just one call, i.e., even in the absence of exponential growth in the number of required calculations. `Integrate[1/Sinh[z], {z, 1, 10}]` is relatively expensive, so I compared (a sketch of this kind of comparison appears after these comments) Commented Mar 30, 2022 at 4:07
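
A runnable version of the counter experiment from the first comment above, for reference. The base case `f3[1] = 1` is an assumption here (the comment does not give one); with it, `f3[25]` reproduces the stated count of $2^{24}-1$ non-base calls.

```mathematica
ClearAll[f3, counter];
counter = 0;
f3[1] = 1;  (* assumed base case; the original comment does not specify one *)
f3[x_] := (counter++; f3[x - 1]^2 - f3[x - 1] + 1);

f3[25];
counter  (* 16777215 = 2^24 - 1: each level doubles the number of calls *)
```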
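
For comparison, a minimal sketch of the standard Wolfram Language memoization idiom referred to in the comments: each computed value is stored as a new definition, so the second `f3m[x - 1]` at every level (and any later call) becomes a lookup, and the work grows linearly rather than exponentially. The name `f3m` and its base case are illustrative, not from the original comments.

```mathematica
ClearAll[f3m];
f3m[1] = 1;  (* assumed base case *)
f3m[x_] := f3m[x] = f3m[x - 1]^2 - f3m[x - 1] + 1;  (* cache each result as a definition *)

f3m[25]  (* fast: each value is computed once and then looked up *)
```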
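
The question raised in the last two comments (one recursive call per step, with an expensive body) can be probed along the same lines. This is only a sketch under stated assumptions: the function names and base cases are made up, and absolute timings will vary, not least because `Integrate` may do internal caching of its own.

```mathematica
ClearAll[g, gm, calls];
calls = 0;

(* one recursive call per step: the call count is linear in x *)
g[1] = 0;  (* assumed base case *)
g[x_] := (calls++; g[x - 1] + Integrate[1/Sinh[z], {z, 1, 10}]);

(* memoized variant of the same recursion *)
gm[1] = 0;
gm[x_] := gm[x] = gm[x - 1] + Integrate[1/Sinh[z], {z, 1, 10}];

AbsoluteTiming[g[50];]   (* every step evaluates the integral *)
calls                    (* 49: linear growth, not exponential *)
AbsoluteTiming[gm[50];]  (* first call: comparable cost *)
AbsoluteTiming[gm[50];]  (* repeated call: a single cached lookup *)
```

With only one recursive call per step there is no repeated work inside a single top-level evaluation, so the payoff of memoization shows up mainly when the same values are requested again, or when each step is expensive enough that avoiding any recomputation matters.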