
I am trying to figure out an efficient way of finding row intersections of two np.arrays.

Two arrays have the same shapes, and duplicate values in each row cannot happen.

For example:

import numpy as np

a = np.array([[2, 5, 6],
              [8, 2, 3],
              [4, 1, 5],
              [1, 7, 9]])
b = np.array([[2, 3, 4],   # one element (2) in common with a[0] -> 1
              [7, 4, 3],   # one element (3) in common with a[1] -> 1
              [5, 4, 1],   # three elements (5, 4, 1) in common with a[2] -> 3
              [7, 6, 9]])  # two elements (7, 9) in common with a[3] -> 2

My desired output is: np.array([1, 1, 3, 2])

It is easy to do this with a loop:

def get_intersect1ds(a, b):
    result = np.empty(a.shape[0], dtype=int)  # np.int is deprecated; use the builtin
    for i in range(a.shape[0]):               # xrange under Python 2
        result[i] = len(np.intersect1d(a[i], b[i]))
    return result

Result:

>>> get_intersect1ds(a, b)
array([1, 1, 3, 2])

But is there a more efficient way to do it?

  • Hmm. Can a and b have duplicated values in each row? Commented Nov 1, 2013 at 16:34
  • @MrE good point, duplicates cannot happen. Thanks. Commented Nov 1, 2013 at 16:37
  • How large do you expect the input arrays to be? Commented Nov 1, 2013 at 17:08
  • @WarrenWeckesser, 4,000,000 by 25, and I would probably have to do this operation a lot. Commented Nov 1, 2013 at 17:12

3 Answers


If you have no duplicates within a row you can try to replicate what np.intersect1d does under the hood (see the source code here):

>>> c = np.hstack((a, b))
>>> c
array([[2, 5, 6, 2, 3, 4],
       [8, 2, 3, 7, 4, 3],
       [4, 1, 5, 5, 4, 1],
       [1, 7, 9, 7, 6, 9]])
>>> c.sort(axis=1)
>>> c
array([[2, 2, 3, 4, 5, 6],
       [2, 3, 3, 4, 7, 8],
       [1, 1, 4, 4, 5, 5],
       [1, 6, 7, 7, 9, 9]])
>>> c[:, 1:] == c[:, :-1]
array([[ True, False, False, False, False],
       [False,  True, False, False, False],
       [ True, False,  True, False,  True],
       [False, False,  True, False,  True]], dtype=bool)
>>> np.sum(c[:, 1:] == c[:, :-1], axis=1)
array([1, 1, 3, 2])
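Packaged as a function, the session above might look like this (the name row_intersect_counts is my own). Because each row of a and each row of b is duplicate-free, every equal adjacent pair in the sorted concatenation must come from one element of a matching one element of b:

```python
import numpy as np

def row_intersect_counts(a, b):
    # Concatenate row-wise, sort each row, then count equal neighbours:
    # each adjacent duplicate is one value shared between a[i] and b[i].
    c = np.sort(np.hstack((a, b)), axis=1)
    return (c[:, 1:] == c[:, :-1]).sum(axis=1)

a = np.array([[2, 5, 6], [8, 2, 3], [4, 1, 5], [1, 7, 9]])
b = np.array([[2, 3, 4], [7, 4, 3], [5, 4, 1], [7, 6, 9]])
print(row_intersect_counts(a, b))  # -> [1 1 3 2]
```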

1 Comment

Can you explain the algorithm behind the line c[:, 1:] == c[:, :-1]?

This answer might not be viable, because if the input has shape (N, M), it generates an intermediate array of shape (N, M, M), but it's always fun to see what you can do with broadcasting:

In [43]: a
Out[43]:
array([[2, 5, 6],
       [8, 2, 3],
       [4, 1, 5],
       [1, 7, 9]])

In [44]: b
Out[44]:
array([[2, 3, 4],
       [7, 4, 3],
       [5, 4, 1],
       [7, 6, 9]])

In [45]: (np.expand_dims(a, -1) == np.expand_dims(b, 1)).sum(axis=-1).sum(axis=-1)
Out[45]: array([1, 1, 3, 2])

For large arrays, the method could be made more memory-friendly by doing the operation in batches.
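One possible batched variant (the helper name and batch_size parameter are my own): process a slice of rows at a time, so the equality cube is only (batch, M, M) rather than (N, M, M):

```python
import numpy as np

def batched_intersect_counts(a, b, batch_size=1024):
    # Same broadcasting trick as above, applied to a chunk of rows at a time
    # to keep the (batch, M, M) intermediate array small.
    out = np.empty(a.shape[0], dtype=np.intp)
    for start in range(0, a.shape[0], batch_size):
        sl = slice(start, start + batch_size)
        eq = a[sl, :, None] == b[sl, None, :]  # (batch, M, M) equality cube
        out[sl] = eq.sum(axis=(1, 2))          # count matches per row
    return out

a = np.array([[2, 5, 6], [8, 2, 3], [4, 1, 5], [1, 7, 9]])
b = np.array([[2, 3, 4], [7, 4, 3], [5, 4, 1], [7, 6, 9]])
print(batched_intersect_counts(a, b, batch_size=2))  # -> [1 1 3 2]
```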



I can't think of a clean pure-numpy solution, but the following suggestions should speed things up, potentially dramatically:

  1. use numba. It is as simple as decorating your get_intersect1ds function with @autojit
  2. pass assume_unique=True when you call np.intersect1d
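A sketch of the second suggestion, under the stated guarantee that rows are duplicate-free: with assume_unique=True, np.intersect1d skips its internal np.unique calls. (The function name is my own; the numba decoration from suggestion 1 would be applied on top of this.)

```python
import numpy as np

def get_intersect1ds_fast(a, b):
    # Same loop as the question's version, but np.intersect1d no longer
    # has to deduplicate its inputs first.
    result = np.empty(a.shape[0], dtype=np.intp)
    for i in range(a.shape[0]):
        result[i] = len(np.intersect1d(a[i], b[i], assume_unique=True))
    return result

a = np.array([[2, 5, 6], [8, 2, 3], [4, 1, 5], [1, 7, 9]])
b = np.array([[2, 3, 4], [7, 4, 3], [5, 4, 1], [7, 6, 9]])
print(get_intersect1ds_fast(a, b))  # -> [1 1 3 2]
```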

1 Comment

Unfortunately, I don't have access to numba, but I was thinking cython. I think it should work as well. Thanks for the suggestion.
