  • As an alternative to np.einsum, one can use np.tensordot(), which also has a very flexible notation... Commented Jul 8, 2013 at 14:10
  • Sadly, all 3 methods you suggest are slower (the deltas=... line alone takes six seconds, which is why). Commented Jul 8, 2013 at 15:09
  • Funny how memory management ruins the best-laid plans... I don't fully understand what's going on, but see my edit. You may want to try the above methods on your huge arrays to see if the timings behave differently, but there may be some margin to win with scipy. Commented Jul 8, 2013 at 16:15
  • Your fastest way is still slower than mine, but I think that's because in my case I just compute (x-x')**2+(y-y')**2. Later on I take the minimal value from it, and since taking the sqrt won't change the location of the minimum, I skip it in my calculation, while cdist (I assume) computes it too. (The one at In [15] times out at 15.7 sec, while mine times out at 11.8 sec; I assume the difference is the sqrt taken inside the cdist routine. If there were a routine like that without the sqrt, it would end up much faster.) Commented Jul 9, 2013 at 9:48
  • According to the docs, spdist.cdist(a.T, b.T, 'sqeuclidean') should do just that; I can't test it right now. Anyway, interesting to see how memory handling becomes everything when you are using a lot of it! Commented Jul 9, 2013 at 12:35
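A minimal sketch of the einsum/tensordot equivalence mentioned in the first comment, applied to the pairwise-distance setting discussed here (the array names and 2×N point layout are assumptions for illustration, not from the original post):

```python
import numpy as np

# Two point sets stored as columns (2 x N), as in the comments above.
rng = np.random.default_rng(1)
a = rng.random((2, 100))  # 100 points
b = rng.random((2, 80))   # 80 points

# The cross term p.q for every pair of points, written two ways:
# einsum sums over the shared coordinate axis k ...
cross_einsum = np.einsum('ki,kj->ij', a, b)
# ... and tensordot contracts axis 0 of a against axis 0 of b.
cross_tensordot = np.tensordot(a, b, axes=(0, 0))
assert np.allclose(cross_einsum, cross_tensordot)

# Squared distances then follow from the expansion
# |p - q|^2 = |p|^2 + |q|^2 - 2 p.q, with no large delta array.
d2 = (a**2).sum(0)[:, None] + (b**2).sum(0)[None, :] - 2 * cross_einsum
print(d2.shape)
```

Assembling the distances from the dot-product expansion avoids materializing the full (2, 100, 80) difference array, which is one way memory traffic can dominate the timings the commenters observed.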
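A quick check of the 'sqeuclidean' suggestion in the last comment, which the commenter could not test at the time. The array shapes are illustrative; the .T transposes assume points stored as columns, matching the comments:

```python
import numpy as np
from scipy.spatial import distance as spdist

rng = np.random.default_rng(0)
a = rng.random((2, 100))  # 100 points as columns
b = rng.random((2, 80))   # 80 points as columns

# 'sqeuclidean' skips the final sqrt: exactly (x-x')**2 + (y-y')**2.
d2 = spdist.cdist(a.T, b.T, 'sqeuclidean')

# Manual broadcasting version of the same squared distances.
deltas = a[:, :, None] - b[:, None, :]   # shape (2, 100, 80)
d2_manual = (deltas ** 2).sum(axis=0)
assert np.allclose(d2, d2_manual)

# As noted above, sqrt is monotonic, so the minimum lands in the
# same place whether or not the square root is taken.
d = spdist.cdist(a.T, b.T, 'euclidean')
assert np.argmin(d2) == np.argmin(d)
```

So 'sqeuclidean' does what the earlier comment asked for: the nearest-pair search gives the same answer without paying for the square root.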