
I have two numpy arrays, X and Y, with shapes (n,d) and (m,d), respectively. Assume that we want to compute the Euclidean distances between each row of X and each row of Y and store the result in array Z with shape (n,m). I have two implementations for this. The first implementation uses two for loops as follows:

for i in range(n):
    for j in range(m):
        Z[i,j] = np.sqrt(np.sum(np.square(X[i] - Y[j])))

The second implementation uses only one loop by vectorization:

for i in range(n):
    Z[i] = np.sqrt(np.sum(np.square(X[i] - Y), axis=1))

When I run this code on a particular X and Y, the first implementation takes nearly 30 seconds while the second takes nearly 60 seconds. I expected the second implementation to be faster, since it uses vectorization. What is the reason it runs more slowly? I know that we can obtain faster implementations by fully vectorizing the code, but I don't understand why the second version (which is partially vectorized) is slower than the non-vectorized one.

Here is the complete code:

import time
import numpy as np

n, m, d = 5000, 500, 3000
X = np.random.rand(n, d)
Y = np.random.rand(m, d)
Z = np.zeros((n, m))

tic = time.time()
for i in range(n):
    for j in range(m):
        Z[i,j] = np.sqrt(np.sum(np.square(X[i] - Y[j])))
print('Elapsed time 1: ', time.time() - tic)

tic = time.time()
for i in range(n):
    Z[i] = np.sqrt(np.sum(np.square(X[i] - Y), axis=1))
print('Elapsed time 2: ', time.time() - tic)

tic = time.time()
train_squared = np.square(X).sum(axis=1).reshape((1, n))
test_squared = np.square(Y).sum(axis=1).reshape((m, 1))
test_train = -2 * np.matmul(Y, X.T)
dists = np.sqrt(test_train + train_squared + test_squared)
print('Elapsed time 3: ', time.time() - tic)

And this is the output:

Elapsed time 1:  35.659096002578735
Elapsed time 2:  65.57051086425781
Elapsed time 3:  0.3912069797515869
  • What are the typical values of n,m,d in your testing? Commented Jul 13, 2017 at 5:30
  • @Divakar n=5000, m=500, d=3000 Commented Jul 13, 2017 at 5:33
  • For me, first took 24 sec, second 20 sec. Share your benchmarking setup? Commented Jul 13, 2017 at 5:39
  • @Divakar Even with the random data, I get the same result. I updated the question to include the complete code and outputs. Commented Jul 13, 2017 at 6:16
  • @Hossein: The first version uses temporary arrays of shape (d,); the second version uses temporaries of shape (m, d), which are much bigger. I don't know what your fully vectorized solution looks like, so I don't know what size of temporaries it produces. Commented Jul 13, 2017 at 6:29

1 Answer


I pulled your code apart and reduced it to this MVCE:

for i in range(n):
    for j in range(m):
        Y[j].copy()

for i in range(n):
    Y.copy()

The copy() here is just to simulate the subtraction from X. The subtraction itself should be quite cheap.
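
For reference, here is a self-contained version of the experiment (my own timing harness, reusing the OP's n, m, d; the array contents are irrelevant, since only the memory traffic matters):

import time
import numpy as np

n, m, d = 5000, 500, 3000
Y = np.random.rand(m, d)

tic = time.time()
for i in range(n):
    for j in range(m):
        Y[j].copy()   # copies one (d,) row: 24 kB, fits in L1 cache
print('Row copies:  ', time.time() - tic)

tic = time.time()
for i in range(n):
    Y.copy()          # copies the whole (m, d) array: ~12 MB per pass
print('Array copies:', time.time() - tic)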

Here are the results on my computer:

  • The first one took 10ms.
  • The second one took 13s!

Both loops copy exactly the same amount of data. With your values n=5000, m=500, d=3000, each one moves 5000 × 500 × 3000 × 8 bytes = 60 gigabytes.

To be honest, I'm not at all surprised by the 13 seconds. That's already over 4 GB/s, roughly the maximum bandwidth between my CPU and RAM (the speed of a plain memcpy, for example).

The really surprising thing is that the first test managed to copy 60 GB in only 0.01 seconds, which translates to 6 TB/s!

I'm pretty sure this is because the data isn't actually leaving the CPU at all. It's just bouncing back and forth between the CPU and the L1 cache: an array of 3000 double-precision numbers is only 24 kB, which easily fits in a 32 KiB L1 cache.

Therefore, I deduce that the main reason your second algorithm isn't as fast as one would naively expect is that processing a whole chunk of 500 × 3000 elements per iteration is very unfriendly to the CPU cache: you basically evict the whole cache out to RAM on every iteration! In contrast, your first algorithm does take advantage of the cache to some extent, because the 3000 elements are still in cache by the time the sum is computed, so not nearly as much data moves between your CPU and RAM. (Once you have the sum, the 3000-element array is "thrown away", which means it will probably just get overwritten in cache and never make it back to the actual RAM.)
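
If this cache analysis is right, you should be able to recover most of the speed by processing Y in cache-sized blocks. Here's a minimal, untested sketch (my illustration, reusing X, Y, Z, n, m from your code; block_rows is just a hypothetical tuning knob):

# Illustrative sketch: keep the temporaries cache-sized by processing Y
# in blocks of rows instead of all m rows at once.
block_rows = 64  # hypothetical: 64 * 3000 * 8 bytes ~ 1.5 MB, cache-friendly
for i in range(n):
    for j0 in range(0, m, block_rows):
        block = Y[j0:j0 + block_rows]  # view of shape (block_rows, d)
        Z[i, j0:j0 + block_rows] = np.sqrt(
            np.sum(np.square(X[i] - block), axis=1))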


Naturally, the matrix multiplication approach is insanely faster, because your problem is essentially of the form:

C[i, j] = ∑[k] f(A[i, k], B[j, k]) 

If you replace f(x, y) with x * y, you can see it's just a variant of matrix multiplication. The operation f is not especially important here; what matters is how the indices behave in this equation, because that determines how your arrays are traversed in memory. The essence of matrix multiplication algorithms lies in their ability to cope with this kind of array access through blocking, so in principle the overall algorithm does not change dramatically even for a user-defined f. Unfortunately, in practice there are very few libraries that support user-defined operations, so you have to use the trick (X - Y)**2 = X**2 - 2 X Y + Y**2, as you have done. But it gets the job done :D
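
As a quick sanity check (my addition, with hypothetical small arrays), you can verify that this expansion matches the naive distance up to floating-point rounding:

# Sanity check: the algebraic expansion matches the direct computation.
# The maximum(..., 0) guards against tiny negative values from rounding.
import numpy as np

A = np.random.rand(4, 6)
B = np.random.rand(3, 6)

direct = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
expanded = np.sqrt(np.maximum(
    (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :], 0.0))

assert np.allclose(direct, expanded)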


Comments

Could you please share how you benchmarked that code? While your argument, that the result of the subtraction remains in the CPU cache and can be reused for the sum without using the memory bus, may apply to the OP's case, I don't see how it plays a role in your copy() example. In the first approach (the double for loop) you copy a different row on each inner iteration j, so it has to be fetched from RAM. I doubt that the Python interpreter swaps the inner and outer loops (while gcc probably would). I ran your example and got roughly the same time for both cases.
I used the same benchmarking functions as the OP, on Arch Linux with Python 3. My guess is that how much the small buffer gets reused is highly dependent on the system allocator, which is probably why there is such a huge variation in results.
Okay, so I checked it: Y[j].copy().__array_interface__["data"] gives me the same pointer every single time, so the allocator is indeed reusing the same buffer.
@Rufflewind: While I understand the reasoning about the cache size etc., I don't think it explains what's happening. numpy vectorization is only meant to move the loops from Python down into the C implementation; they're still there, and vectorization does not really remove them, it only pushes them deeper. Otherwise there would almost never be a benefit to vectorization.
@Kai I'm confused. I did not mention anything about removing loops. What I explained is that when loops traverse the same piece of memory repeatedly, it stays hot in cache, which reduces unnecessary movement between levels of the memory hierarchy.
