
I have a list that contains nested lists and I need to know the most efficient way to search within those nested lists.

e.g., if I have

[['a','b','c'], ['d','e','f']] 

and I have to search the entire list above, what is the most efficient way to find 'd'?

5 Answers

    >>> lis = [['a','b','c'], ['d','e','f']]
    >>> any('d' in x for x in lis)
    True

generator expression using any

    $ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "any('d' in x for x in lis)"
    1000000 loops, best of 3: 1.32 usec per loop

generator expression

    $ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in (y for x in lis for y in x)"
    100000 loops, best of 3: 1.56 usec per loop

list comprehension

    $ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in [y for x in lis for y in x]"
    100000 loops, best of 3: 3.23 usec per loop

How about if the item is near the end, or not present at all? any is still faster than the list comprehension:

    $ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'NOT THERE' in [y for x in lis for y in x]"
    100000 loops, best of 3: 4.4 usec per loop
    $ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "any('NOT THERE' in x for x in lis)"
    100000 loops, best of 3: 3.06 usec per loop

What if the list is 1000 times longer? any is still faster:

    $ python -m timeit -s "lis=1000*[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'NOT THERE' in [y for x in lis for y in x]"
    100 loops, best of 3: 3.74 msec per loop
    $ python -m timeit -s "lis=1000*[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "any('NOT THERE' in x for x in lis)"
    100 loops, best of 3: 2.48 msec per loop

We know that generators take a while to set up, so the best chance for the LC to win is a very short list

    $ python -m timeit -s "lis=[['a','b','c']]" "any('c' in x for x in lis)"
    1000000 loops, best of 3: 1.12 usec per loop
    $ python -m timeit -s "lis=[['a','b','c']]" "'c' in [y for x in lis for y in x]"
    1000000 loops, best of 3: 0.611 usec per loop

And any uses less memory too
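To make the short-circuiting concrete, here is a small sketch (the seen helper and the test data are mine, not part of the answer) that records how many sublists the search actually touches:

```python
lis = [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]

inspected = []

def seen(sublist):
    # record every sublist the search actually touches
    inspected.append(sublist)
    return sublist

# any() returns as soon as a sublist containing 'd' is found,
# so the third sublist is never inspected
any('d' in seen(x) for x in lis)
print(len(inspected))  # 2
```

Because the generator feeding any() is consumed lazily, at no point does a flattened copy of the data exist in memory, which is where the memory saving over the list comprehension comes from.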


6 Comments

I would be cautious generalizing about this based on searching for 'd' (ie something relatively close to the front of the list). If you look at the discussion @Ashwini and I had (below his answer) you'll see that the choice of the "target" in the list makes a significant difference. In our trials LC beat the generator with targets in the middle and at the end. Generator "won" if the target was near the front. I think in the end it's all very dependent on the specific dataset and target sought after.
@Levon, I'm well aware of the tradeoffs. However in this case, any is still faster than the LC even if the item is not present at all.
I didn't mean to imply you weren't .. and the use of any didn't even occur to me until you brought it up, so that's something for me to keep in mind for future use.
@gnibbler how do we decide which of these to use: gen, LC or any()? In this case any() was faster, but on the other link I posted the gen was faster than any(). I was completely aware of any() but didn't use it here because of the results I saw on that question, and this time any() came up with the faster solution.
@AshwiniChaudhary, I think the only way to know for sure is to try. Also, to further complicate things, the relative timings may change between different implementations/versions of Python. In general I won't use a list comprehension unless I need a list (for slicing, reversing, etc.). Generator expressions are a good first choice, although in some cases they can be beaten by using itertools etc.

Using list comprehension, given:

    mylist = [['a','b','c'], ['d','e','f']]
    'd' in [j for i in mylist for j in i]

yields:

True 

and this could also be done with a generator (as shown by @AshwiniChaudhary)

Update based on comment below:

Here is the same list comprehension, but using more descriptive variable names:

'd' in [elem for sublist in mylist for elem in sublist] 

The looping construct in the list comprehension is equivalent to

    for sublist in mylist:
        for elem in sublist:
            ...

and generates a flat list against which 'd' can be tested with the in operator.
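A related option not shown in the answer (my addition): itertools.chain.from_iterable flattens one level of nesting lazily, so like the generator expression it avoids building the flat list and stops at the first match:

```python
from itertools import chain

mylist = [['a', 'b', 'c'], ['d', 'e', 'f']]

# chain.from_iterable yields a, b, c, d, e, f one at a time;
# the in operator stops as soon as it sees 'd'
print('d' in chain.from_iterable(mylist))  # True
```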

7 Comments

Just so I can understand what is going on can you clarify what j and i are?
@fdsa I'll update my answer by adding a "verbose" version (more descriptive variable names)
Thanks for taking the time, I appreciate it
@Downvoter .. could I get an explanation please? This is a functional solution to OP's problem. A downvote without explanation helps no one (OP, SO, or me). If it's because I didn't use a generator, you can see from the discussion with Ashwini that the generator solution isn't always necessarily faster.
I expect the downvote was for generating the whole list every time. I expect that you'd have to have a fairly small list for the list comprehension to be faster or use less memory.

Use a generator expression; the whole list need not be traversed, because a generator yields results one by one and the in operator stops at the first match:

    >>> lis = [['a','b','c'], ['d','e','f']]
    >>> 'd' in (y for x in lis for y in x)
    True
    >>> gen = (y for x in lis for y in x)
    >>> 'd' in gen
    True
    >>> list(gen)
    ['e', 'f']

    ~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in (y for x in lis for y in x)"
    100000 loops, best of 3: 2.96 usec per loop
    ~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in [y for x in lis for y in x]"
    100000 loops, best of 3: 7.4 usec per loop

12 Comments

What do you mean by "the whole list will not be traversed"? .. you don't terminate early, you have to generate all of the values at some point, no? The wording seems to imply the opposite.
@Levon but it doesn't generate e and f.
So it acts like a short-circuited Boolean expression? I didn't know that .. neat. Thanks, I learned something new.
@Levon this time I searched for 11 and the LC was much faster than the generator. o.O
We got the same results this time, I searched for 10 :-) and also 18, and LC beat the generator each time, though LC was slower for 'd'.

Using recursion, we can do it as follows:

    def recursive_search(nested_list, target):
        for item in nested_list:
            if isinstance(item, list):
                if recursive_search(item, target):
                    return True
            elif item == target:
                return True
        return False

    nested_list = [1, 2, [3, 4, [5, 6]], 7]
    target = 5
    print(recursive_search(nested_list, target))  # True
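For very deeply nested data, Python's recursion limit can become an issue. A variant with an explicit stack (my sketch, not part of the answer) does the same search iteratively:

```python
def iterative_search(nested_list, target):
    # an explicit stack replaces the call stack, so arbitrarily
    # deep nesting cannot hit Python's recursion limit
    stack = [nested_list]
    while stack:
        current = stack.pop()
        for item in current:
            if isinstance(item, list):
                stack.append(item)  # descend into this sublist later
            elif item == target:
                return True
    return False

print(iterative_search([1, [2, [3, [4]]]], 4))  # True
```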

2 Comments

This is a hack, of course. It would find [ or ] in every list even if those aren't elements, and find ' and , in almost every list, even " or \ in a lot of lists that don't contain those, and would find substrings of one of the nested strings, and treat non-string elements as strings, and find 'n' in the list ['\x0a'], and...
@KarlKnechtel yes, it was hacky; I've added a recursive approach to solve it.

If your arrays are always sorted as you show, so that a[i][j] <= a[i][j+1] and a[i][-1] <= a[i+1][0] (the last element of one array is always less than or equal to the first element in the next array), then you can eliminate a lot of comparisons by doing something like:

    a = ...  # your big array
    previous = None
    for subarray in a:
        # Since the subarrays are sorted, once the current subarray starts
        # past theValue, we know it's not in the current subarray and must
        # be in the previous one (if anywhere)
        if subarray[0] > theValue:
            break
        # Otherwise, we keep track of the last subarray we looked at
        previous = subarray
    return (theValue in previous) if previous else False

This kind of optimization is only worthwhile if you have a lot of arrays and they all have a lot of elements though.
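If the data really is globally sorted like this, the linear scan over subarrays can itself be replaced by binary searches. A sketch using the standard bisect module (the function name and test data are mine):

```python
from bisect import bisect_right

def search_sorted(a, value):
    # a: list of sorted sublists, with each sublist's last element
    # <= the next sublist's first element (globally sorted)
    # Binary search on the first elements finds the only sublist
    # that could contain value
    firsts = [sub[0] for sub in a]
    i = bisect_right(firsts, value) - 1
    if i < 0:
        return False  # value is smaller than everything
    sub = a[i]
    # Binary search within that sublist as well
    j = bisect_right(sub, value) - 1
    return j >= 0 and sub[j] == value

a = [[1, 2, 4], [5, 8, 9], [12, 15, 20]]
print(search_sorted(a, 8))   # True
print(search_sorted(a, 3))   # False
```

This brings the cost down to O(log n) per lookup, at the price of requiring the sorted invariant to hold.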

3 Comments

Thank you for this - I hadn't considered sorting them but they will be used quite often so I will consider sorting
No problem - I don't know how much data you're going to have or what your exact use-case is, but if you do have a lot of data it might be worth looking into python's collections module, which has high-performance data structures suited to specific tasks.
If they are always sorted, you would be better off using the bisect module
