Example:
    import sys

    class Test():
        def __init__(self):
            self.a = 'a'
            self.b = 'b'
            self.c = 'c'
            self.d = 'd'
            self.e = 'e'

    if __name__ == '__main__':
        test = [Test() for i in range(100000)]
        print(sys.getsizeof(test))

In the Windows Task Manager I see a jump of ~20 MB when creating a list of 100000 objects versus a list of 10.
Using sys.getsizeof(): for a list of 100000 I get 412,236 bytes; for a list of 10 I get 100 bytes.
This seems hugely disproportionate. Why is this happening?
(As an aside, you could use a generator, or xrange in this case, to save memory.) sys.getsizeof returns a shallow size: it doesn't include the objects contained in the list, only the list structure itself, i.e. the pointers to those objects. The bulk of the ~20 MB is the 100000 Test instances themselves.
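To see the difference, you can compare the shallow size with a rough total that also counts each element. A minimal sketch under the assumption that each Test instance's memory is dominated by the instance object plus its __dict__ (exact byte counts vary by Python version and platform, and the shared string literals are not counted):

    import sys

    class Test():
        def __init__(self):
            self.a = 'a'
            self.b = 'b'
            self.c = 'c'
            self.d = 'd'
            self.e = 'e'

    test = [Test() for i in range(100000)]

    # Shallow size: only the list object and its internal pointer array.
    print(sys.getsizeof(test))

    # Adding the shallow size of each element (and its attribute dict)
    # gets much closer to what Task Manager reports.
    total = sys.getsizeof(test)
    for obj in test:
        total += sys.getsizeof(obj) + sys.getsizeof(obj.__dict__)
    print(total)

The second number is orders of magnitude larger than the first, which is exactly the disproportion observed in the question: getsizeof on the list measures only the pointers, not the objects they point to.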