Ray Shadow

Cause of the speed-up

This is definitely not memoization. The reason for the speed-up is that when you work with very large arrays (like 10^8 elements), memory clean-up operations start to take significant time. If you don't clean up memory, you save that time.

Here is an example:

Let's create a large array, perform a calculation on it, and let the array be discarded afterwards:

AbsoluteTiming[ Total[ConstantArray[0, 10^8]]; ] 

{0.422509, Null}

Let's now do the same thing, but keep the array in memory by assigning it to a variable:

AbsoluteTiming[ Total[garbage = ConstantArray[0, 10^8]]; ] 

{0.366755, Null}

It is noticeably faster.

Let's check how long it takes to remove a large array:

AbsoluteTiming[ Remove[garbage] ] 

{0.061982, Null}

This roughly accounts for the difference between the two timings above.

In real calculations with large arrays one needs to clean up fairly often, so avoiding garbage collection is not very practical.

Your example

In the example you provide, removing the Unitize@data[[All, n]] array from memory takes significant time. If one saves this array in a redundant variable, one avoids the memory clean-up and gains some time.

How to make the test representative?

In this case one can add an explicit clean-up step to the "optimized" code, so that both variants pay the same memory-management cost; the "optimized" version then no longer runs faster than the built-in solution.
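For instance, a minimal sketch of such a fair comparison, reusing the garbage variable from the examples above (the variable name is purely illustrative): time the "keep the array" variant together with its clean-up, so the removal cost is included in the measurement.

AbsoluteTiming[
 Total[garbage = ConstantArray[0, 10^8]];
 Remove[garbage]; (* include the clean-up cost in the timing *)
]

With the Remove included, the total time should be comparable to the first variant, where the temporary array is discarded automatically.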
