190

I need a fast way to count the number of set bits in an integer in Python. My current solution is

bin(n).count("1") 

but I am wondering if there is any faster way of doing this? (I need this for large numbers, 10,000 bits and more.)

  • Related: stackoverflow.com/questions/407587/… Commented Mar 22, 2012 at 20:15
  • What kind of representation are you using if your "integers" are longer than a standard python int? Does that not have its own method for calculating this? Commented Mar 22, 2012 at 21:02
  • In Python 3.6+, instead of bin(n), try f"{n:b}". It should be faster, and you don't get that pesky "0b" prefix. Also, you can do things like f"{n:032b}" to get zero-padded bitstrings of width 32. Commented Nov 3, 2020 at 17:36
  • @PM2Ring Apparently it's slower (see my benchmark). -- Also, the 0b prefix or zero-padding doesn't matter if all that matters is the number of ones in the string. Commented Nov 6, 2020 at 9:28
  • @user202729 Oh, ok. Thanks for doing the benchmark. I assumed the f-string would be faster because it avoids an explicit function call. OTOH, bin is a C function, which has less overhead than a Python function call. Sure, the 0b prefix & zero-padding are irrelevant here, I just mentioned those things for readers who may need to know it for other contexts. Commented Nov 6, 2020 at 11:47

13 Answers

187

Python 3.10 introduces int.bit_count():

>>> n = 19
>>> bin(n)
'0b10011'
>>> n.bit_count()
3
>>> (-n).bit_count()
3

This is functionally equivalent to bin(n).count("1") but should be ~6 times faster. See also Issue29882.


2 Comments

~6 times faster was for small numbers. With the question's 10000+ bit numbers, it's more like ~40 times faster. Would be nice if you could show the speedup at different magnitudes.
@KellyBundy I've timed it: it's a lot faster: graph
153

For arbitrary-length integers, bin(n).count("1") is the fastest I could find in pure Python.

I tried adapting Óscar's and Adam's solutions to process the integer in 64-bit and 32-bit chunks, respectively. Both were at least ten times slower than bin(n).count("1") (the 32-bit version took about half again as much time).

On the other hand, gmpy popcount() took about 1/20th of the time of bin(n).count("1"). So if you can install gmpy, use that.

To answer a question in the comments, for bytes I'd use a lookup table. You can generate it at runtime:

counts = bytes(bin(x).count("1") for x in range(256)) # py2: use bytearray 

Or just define it literally:

counts = (b'\x00\x01\x01\x02\x01\x02\x02\x03\x01\x02\x02\x03\x02\x03\x03\x04'
          b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
          b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
          b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
          b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
          b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
          b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
          b'\x04\x05\x05\x06\x05\x06\x06\x07\x05\x06\x06\x07\x06\x07\x07\x08')

Then it's counts[x] to get the number of 1 bits in x where 0 ≤ x ≤ 255.

9 Comments

+1! It should be stated, however, that the converse is not accurate: bin(n).count("0") is wrong because of the '0b' prefix. It would need to be bin(n)[2:].count('0') for those counting naughts....
You can't really count zero bits without knowing how many bytes you're filling, though, which is problematic with a Python long integer because it could be anything.
Although those are fast options for single integers, note that algorithms presented in other answers may be potentially vectorised, thus much faster if run across many elements of a large numpy array.
I have used bin(n).count("1"). However, it only beats 60% of Python submissions @ leetcode.
@pushpen.paul For 8-bit integers I'd just use a lookup table. Could use a bytearray for it for space efficiency.
39

You can adapt the following algorithm:

def CountBits(n):
    n = (n & 0x5555555555555555) + ((n & 0xAAAAAAAAAAAAAAAA) >> 1)
    n = (n & 0x3333333333333333) + ((n & 0xCCCCCCCCCCCCCCCC) >> 2)
    n = (n & 0x0F0F0F0F0F0F0F0F) + ((n & 0xF0F0F0F0F0F0F0F0) >> 4)
    n = (n & 0x00FF00FF00FF00FF) + ((n & 0xFF00FF00FF00FF00) >> 8)
    n = (n & 0x0000FFFF0000FFFF) + ((n & 0xFFFF0000FFFF0000) >> 16)
    n = (n & 0x00000000FFFFFFFF) + ((n & 0xFFFFFFFF00000000) >> 32)  # This last & isn't strictly necessary.
    return n

This works for 64-bit positive numbers, but it's easily extendable, and the number of operations grows with the logarithm of the argument (i.e. linearly with the bit-size of the argument).

In order to understand how this works, imagine that you divide the entire 64-bit string into 64 1-bit buckets. Each bucket's value is equal to the number of bits set in the bucket (0 if no bits are set, 1 if one bit is set). The first transformation results in an analogous state, but with 32 buckets, each 2 bits long. This is achieved by appropriately shifting the buckets and adding their values (one addition takes care of all buckets, since no carry can occur across buckets: an n-bit number is always long enough to encode the number n). Further transformations lead to states with an exponentially decreasing number of buckets of exponentially growing size, until we arrive at one 64-bit-long bucket. This gives the number of bits set in the original argument.

5 Comments

I seriously have no idea how this would work with 10,000-bit numbers though, but I do like the solution. Can you give me a hint on whether and how I can apply that to bigger numbers?
I didn't see the number of bits you're dealing with here. Have you considered writing your data handling code in a low-level language like C? Perhaps as an extension to your python code? You can certainly improve performance by using large arrays in C compared to large numerals in python. That said, you can rewrite the CountBits() to handle 10k-bit numbers by adding just 8 lines of code. But it'll become unwieldy due to huge constants.
You can write code to generate the sequence of constants, and set up a loop for the processing; see the sketch after these comments.
This answer has the great advantage that it can be vectorised for cases dealing with large numpy arrays.
"the number of operations growth with the logarithm of the argument (i.e. linearly with the bit-size of the argument). " -- This is wrong, the number of (addition, bitwise) operation is (asymptotically) log(log n) or log(number of bits). The time complexity is log(n) log(log n) or (number of bits * log(number of bits)) -- because Python's integer is arbitrary-precision (technically int data type in Python 2 is 64 bit, but the OP was using long without knowing it) -- so it would almost-likely be slower than gmpy for single numbers.
22

Here's a Python implementation of the population count algorithm, as explained in this post:

def numberOfSetBits(i):
    i = i - ((i >> 1) & 0x55555555)                 # sums of each 2-bit pair
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)  # sums of each 4-bit nibble
    return (((i + (i >> 4) & 0xF0F0F0F) * 0x1010101) & 0xffffffff) >> 24  # per-byte sums, folded into the top byte

It will work for 0 <= i < 0x100000000.

5 Comments

That's clever. Looking this up instead of shooting an answer from the hip is completely appropriate!
Did you benchmark this? On my machine using python 2.7, I found this to actually be a bit slower than bin(n).count("1").
@DavidWeldon No I didn't, could you please post your benchmarks?
%timeit numberOfSetBits(23544235423): 1000000 loops, best of 3: 818 ns per loop; %timeit bitCountStr(23544235423): 1000000 loops, best of 3: 577 ns per loop.
However, numberOfSetBits processes my 864×64 numpy.ndarray in 841 µs. With bitCountStr I have to loop explicitly, and it takes 40.7 ms, or almost 50 times longer.
15

I really like this method. It's simple and pretty fast, but also not limited in bit length, since Python has arbitrary-precision integers.

It's actually more cunning than it looks, because it avoids wasting time scanning the zeros. For example it will take the same time to count the set bits in 1000000000000000000000010100000001 as in 1111.

def get_bit_count(value):
    n = 0
    while value:
        n += 1
        value &= value - 1
    return n

6 Comments

Looks great, but it's only good for very "sparse" integers. On average it's quite slow. Still, it looks really useful in certain use cases.
I'm not quite sure what you mean by "on average it's quite slow". Quite slow compared to what? Do you mean slow compared to some other python code that you're not quoting? It's twice as fast as counting bit by bit for the average number. In fact on my macbook it counts 12.6 million bits per second which is a lot faster than I can count them. If you have another generic python algorithm that works for any length of integer and is faster than this I'd like to hear about it.
I do accept that it is actually slower than the answer by Manuel above.
"Quite slow on average" means: counting bits for 10000 numbers with 10000 digits takes 0.15 s with bin(n).count("1"), but it took 3.8 s with your function. If the numbers have very few bits set it works fast, but for any random number, on average, the function above will be orders of magnitude slower.
OK I will accept that. I guess I was just being a dick cos you're a little imprecise but you're totally right. I just hadn't tested the method using the method by Manuel above before my comment. It looks very clunky but it is actually very fast. I'm now using a version like that but with 16 values in the dictionary and that's even much faster than the one he quoted. But for the record I was using mine in an application with only a few bits that were set to 1. But for totally random bits yeah it's going to about 50:50 with a little variance decreasing with length.
11

According to this post, this seems to be one of the fastest implementations of the Hamming weight (if you don't mind using about 64 KB of memory).

#http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetTable
POPCOUNT_TABLE16 = [0] * 2**16

for index in range(len(POPCOUNT_TABLE16)):
    POPCOUNT_TABLE16[index] = (index & 1) + POPCOUNT_TABLE16[index >> 1]

def popcount32_table16(v):
    return (POPCOUNT_TABLE16[ v        & 0xffff] +
            POPCOUNT_TABLE16[(v >> 16) & 0xffff])

On Python 2.x you should replace range with xrange.

Edit

If you need better performance (and your numbers are big integers), have a look at the GMP library. It contains hand-written assembly implementations for many different architectures.

gmpy is a C-coded Python extension module that wraps the GMP library.

>>> import gmpy
>>> gmpy.popcount(2**1024-1)
1024

3 Comments

I have edited my question to make it clear I need this for large numbers (10k bits and more). Optimizing something for 32-bit integers would probably not make much of a difference, since the number of counts would have to be really big, and that is what causes the slow execution time.
But GMP is exactly for very large numbers, including numbers at and far beyond the sizes you mention.
Memory usage will be better if you use array.array for POPCOUNT_TABLE16, as then it'll be stored as an array of integers, instead of as a dynamically sized list of Python int objects.
4

You can use the algorithm for getting the binary string of an integer [1], but instead of concatenating the string, count the number of ones:

def count_ones(a):
    s = 0
    t = {'0':0, '1':1, '2':1, '3':2, '4':1, '5':2, '6':2, '7':3}
    for c in oct(a)[2:]:  # Python 3: skip the '0o' prefix; on Python 2 use [1:]
        s += t[c]
    return s

[1] https://wiki.python.org/moin/BitManipulation

2 Comments

This works fast. Watch the slicing: on Python 3, oct() returns an '0o' prefix, so you need [2:] ([1:] on Python 2). The code runs a lot faster though if you use hex() instead of oct() and make a 16-entry dictionary.
You can avoid having to chop off the first two characters by using "%o" % a instead of oct(a). ("%x" % a for the suggested hexadecimal improvement.)
3

It's possible to combine a lookup table with int.to_bytes (Python 3 only):

popcount8bit = bytes([popcount(x) for x in range(1<<8)])  # use any method to initialize this lookup table
popcount = lambda x: sum(map(popcount8bit.__getitem__,
                             x.to_bytes((x.bit_length()+7)//8, "little")))

This solution is unfortunately about 20% slower than bin(x).count('1') on Python 3, but about twice as fast on PyPy3.


This is a benchmark script that compares several of the solutions presented here, for different numbers of bits:

from __future__ import print_function  #for Python 2
import sys
from timeit import timeit
import random

def popcount(x):
    return bin(x).count("1")

version3 = sys.version.startswith("3")

for numBit in (2, 4, 8, 16, 31, 32, 63, 64, 1000, 10000):
    maximum = int((1 << numBit) - 1)  #int cast just in case it overflows to long in Python 2
    functions = [
        (popcount, "bin count"),
        (lambda x: "{:b}".format(x).count("1"), "format string count"),
    ]
    try:
        import gmpy
        functions.append((gmpy.popcount, "gmpy"))
    except ImportError:
        pass
    if sys.version.startswith("3"):
        exec('''functions.append((lambda x: f"{x:b}".count("1"), "f-string count"))''')
    if numBit <= 16:
        table1 = [popcount(x) for x in range(maximum + 1)]
        functions.append((lambda x: table1[x], "lookup list"))
        functions.append((table1.__getitem__, "lookup list without lambda"))
        table2 = "".join(map(chr, table1))
        functions.append((lambda x: ord(table2[x]), "lookup str"))
        if version3:
            table3 = bytes(table1)
            functions.append((lambda x: table3[x], "lookup bytes"))
            if numBit == 8:
                functions.append((
                    b'\x00\x01\x01\x02\x01\x02\x02\x03\x01\x02\x02\x03\x02\x03\x03\x04'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x04\x05\x05\x06\x05\x06\x06\x07\x05\x06\x06\x07\x06\x07\x07\x08'
                    .__getitem__, "lookup bytes hard coded 8 bit"))
                table_hardcoded = (
                    b'\x00\x01\x01\x02\x01\x02\x02\x03\x01\x02\x02\x03\x02\x03\x03\x04'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x01\x02\x02\x03\x02\x03\x03\x04\x02\x03\x03\x04\x03\x04\x04\x05'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x02\x03\x03\x04\x03\x04\x04\x05\x03\x04\x04\x05\x04\x05\x05\x06'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x03\x04\x04\x05\x04\x05\x05\x06\x04\x05\x05\x06\x05\x06\x06\x07'
                    b'\x04\x05\x05\x06\x05\x06\x06\x07\x05\x06\x06\x07\x06\x07\x07\x08')
                functions.append((
                    table_hardcoded.__getitem__,
                    "lookup bytes hard coded 8 bit local variable"))
            functions.append((table3.__getitem__, "lookup bytes without lambda"))
    if version3:
        popcount8bit = bytes([popcount(x) for x in range(1 << 8)])  #bytes because benchmark says that it's fastest
        functions.append((
            lambda x: sum(popcount8bit[x] for x in x.to_bytes((x.bit_length() + 7) // 8, "big")),
            "to_bytes"
        ))
        functions.append((
            lambda x: sum(map(popcount8bit.__getitem__, x.to_bytes((x.bit_length() + 7) // 8, "big"))),
            "to_bytes without list comprehension"
        ))
        functions.append((
            lambda x: sum(map(popcount8bit.__getitem__, x.to_bytes((x.bit_length() + 7) // 8, "little"))),
            "to_bytes little endian, without list comprehension"
        ))

    #for x in (2, 4, 8, 16, 32, 64):
    #    table1=[popcount(x) for x in range(1<<8)]

    print("====== numBit=", numBit)
    data = []
    numRepeat = 10**7 // (numBit + 100)
    for popcountFunction, description in functions:
        random.seed(10)  #make randint returns the same value
        data.append((
            timeit(lambda: popcountFunction(random.randint(0, maximum)), number=numRepeat),
            description
        ))
    time1, name1 = data[0]
    assert name1 == "bin count"
    data.sort()
    maxLength = 0
    for time, description in data:
        maxLength = max(maxLength, len(description))
    for time, description in data:
        print("{:{}} -> {:2f} = {} * {:2f}".format(description, maxLength + 2, time, name1, time / time1))

It works with both Python 2 and 3; however, if a solution is unavailable for Python 2, it's not measured.

Some solutions are not listed here.

Result:

  • Python 2: "lookup list without lambda" is the fastest (25% faster than "bin count", and 6% faster than "lookup list" (with lambda)) for <= 16 bits; above that, "bin count" is the fastest. (I didn't install gmpy for Python 2.)
  • Python 3: Roughly the same result.
    • "Lookup bytes without lambda" is comparable (+/-2% compared to "lookup list without lambda").
    • gmpy is faster than "bin count" in all cases, but slower than "lookup list without lambda" by about 5% with numBit <= 16.
    • "to_bytes" is comparable.
    • Using f-string is about 10% slower than "bin count".
  • PyPy3: The lambda no longer incurs much cost, and the to_bytes version becomes much faster (twice as fast as "bin count"); however, I could not get gmpy to install.

2 Comments

On second thoughts, the speeds may vary a fair bit, depending on the environment. I just tried your benchmark script in Python 3.7 on SageMathCell and f-strings were mostly faster. But I was running in Sage mode. (It throws an error in plain Python mode, but that mode isn't really pure Python and it has flaky handling of string literals, eg you have to use "\\n" to get a newline).
It'd be interesting to see times without an extra layer of function calls, where applicable. Here's a short example of timing expressions rather than functions: stackoverflow.com/a/50212230 BTW, it's a good idea to do a few (3-5) repetitions of timeit loops, and use the minimum one, as I discuss here stackoverflow.com/a/43678107/4014959
2

You said Numpy was too slow. Were you using it to store individual bits? Why not extend the idea of using ints as bit arrays but use Numpy to store those?

Store n bits as an array of ceil(n/32.) 32-bit ints. You can then work with the numpy array the same (well, similar enough) way you use ints, including using them to index another array.

The algorithm is basically to compute, in parallel, the number of bits set in each cell, and then sum up the bitcounts of all the cells.

setup = """ import numpy as np #Using Paolo Moretti's answer http://stackoverflow.com/a/9829855/2963903 POPCOUNT_TABLE16 = np.zeros(2**16, dtype=int) #has to be an array for index in range(len(POPCOUNT_TABLE16)): POPCOUNT_TABLE16[index] = (index & 1) + POPCOUNT_TABLE16[index >> 1] def popcount32_table16(v): return (POPCOUNT_TABLE16[ v & 0xffff] + POPCOUNT_TABLE16[(v >> 16) & 0xffff]) def count1s(v): return popcount32_table16(v).sum() v1 = np.arange(1000)*1234567 #numpy array v2 = sum(int(x)<<(32*i) for i, x in enumerate(v1)) #single int """ from timeit import timeit timeit("count1s(v1)", setup=setup) #49.55184188873349 timeit("bin(v2).count('1')", setup=setup) #225.1857464598633 

Though I'm surprised no one suggested you write a C module.

Comments

1
import sys

class popcount_lk:
    """
    Creates an instance for calculating the population count of a
    bitstring, based on a lookup table of 8 bits.
    """
    def __init__(self):
        """Creates a large lookup table of the Hamming weight of every 8-bit integer."""
        self.lookup_table = bytes.maketrans(
            bytes(range(1 << 8)),
            bytes(bin(i).count('1') for i in range(1 << 8)))
        self.byteorder = sys.byteorder

    def __call__(self, x):
        """
        Breaks x, which is a Python integer type, into chunks of 8 bits.
        Calls the lookup table to get the population count of each chunk
        and returns the aggregated population count.
        """
        return sum(x.to_bytes((x.bit_length() >> 3) + 1, self.byteorder)
                    .translate(self.lookup_table))

popcount = popcount_lk()  # instantiate; the original was missing the parentheses
print(popcount(56437865483765))

This should be 3 times faster than bin(56437865483765).count('1') in CPython and PyPy3.

4 Comments

As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
As far as the results are concerned, bytes.maketrans(bytes(range(1<<8)),bytes((bin(i).count('1') for i in range(1<<8)))) and bytes(bin(i).count('1') for i in range(1 << 8)) are the same.
It is the call function that speeds things.
@Ariad Yes, it is faster than bin().count('1'), so +1, I just pointed out the parts that can be modified. In addition, can you move the text description from the code comments to the body? People here seem to prefer the description in the body rather than in the code comments (my browser translation does not work with code comments either).
0

@Robotbugs' answer, wrapped in numba's njit decorator, was faster than gmpy in my case.

from numba import njit, int64, uint64

@njit(int64(uint64))
def get_bit_count(bitboard):
    n = 0
    bitboard = int64(bitboard)
    while bitboard:
        n += 1
        bitboard &= bitboard - 1
    return n

I had to set uint64 as the argument type to avoid an OverflowError.

Comments

-1
#Python prg to count set bits
#Function to count set bits
def bin(n):  # note: this shadows the built-in bin()
    count = 0
    while n >= 1:
        if n % 2 == 0:
            n = n // 2
        else:
            count += 1
            n = n // 2
    print("Count of set bits:", count)

#Fetch the input from user
num = int(input("Enter number: "))
#Output
bin(num)

Comments

-4

It turns out your starting representation is a list of lists of ints which are either 1 or 0. Simply count them in that representation.


The number of bits in an integer is constant in Python.

However, if you want to count the number of set bits, the fastest way is to create a list conforming to the following pseudocode: [numberofsetbits(n) for n in range(MAXINT)]

This will give you a constant-time lookup after you have generated the list. See @PaoloMoretti's answer for a good implementation. Of course, you don't have to keep this all in memory -- you could use some sort of persistent key-value store, or even MySQL. (Another option would be to implement your own simple disk-based storage.)

10 Comments

@StevenRumbalski How is it unhelpful?
When I read your answer it contained only your first sentence: "The number of bits in an integer is constant in python."
I already have a bit-count lookup table for all the counts that it's possible to store, but having a large list of numbers and operating on them with a[i] & a[j] makes your solution useless unless I have 10+ GB of RAM. A table for & ^ | over triples of 10000 numbers would be 3*10000^3 entries. Since I don't know what I will need, it makes more sense to just count the few thousand when I need them.
@zidarsk8 Or, you could use some kind of database or persistent key-value store.
(I know this answer is old, but) the OP was using long (as it's called in Python 2) and seems not to have been aware that this was the case. So the first sentence is wrong -- at least for this question. -- By the way, Python 2's int type maximum value is 2^63-1, so that would definitely not fit in any kind of table; and if you have to use a disk, it's already slower than the slowest algorithm you would find reasonable, since in this case the task is so simple.
