In a sense, this already has an excellent answer:
> One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual.
Mental health and longevity are of course nice, but what about this individual's pride, which took another hit trying to be clever and was cruelly denied by NumPy?
Consider the following, where we start with some byte data:

```python
import numpy as np

a = np.linspace(0, 255, 6, dtype=np.uint8)
a
# array([  0,  51, 102, 153, 204, 255], dtype=uint8)
```

Let's assume we want to add something and promote the type, so it does not wrap around. With a scalar, this does not work:

```python
b = np.uint16(1)
a + b
# array([  1,  52, 103, 154, 205,   0], dtype=uint8)
```

But with an array, it does:

```python
c = np.ones(1, np.uint16)
a + c
# array([  1,  52, 103, 154, 205, 256], dtype=uint16)
```

So I thought, let's make an array. It is one, isn't it?

```python
b[...]
# array(1, dtype=uint16)
np.isscalar(b[...])
# False
```

But, alas:

```python
a + b[...]
# array([  1,  52, 103, 154, 205,   0], dtype=uint8)
```

Why does this 0-d array behave like a scalar here?
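A sketch of what is going on (my reading of NumPy's pre-2.0 value-based casting rules, not from the original thread): type promotion looks at the number of dimensions, not at `np.isscalar`. Anything with `ndim == 0` — NumPy scalars *and* 0-d arrays alike — is promoted by value, so the 0-d `uint16` array is treated exactly like the `uint16` scalar:

```python
import numpy as np

b = np.uint16(1)

# np.isscalar tells NumPy scalars and 0-d arrays apart...
print(np.isscalar(b))       # True  (b is a NumPy scalar)
print(np.isscalar(b[...]))  # False (b[...] is a 0-d ndarray)

# ...but both have zero dimensions, which is what the (pre-NumPy-2.0)
# value-based casting rules actually look at:
print(np.ndim(b))       # 0
print(np.ndim(b[...]))  # 0

# Only an array with ndim >= 1 triggers dtype-based promotion:
print(np.ndim(np.ones(1, np.uint16)))  # 1
```

This is also why the comments below recommend `np.ndim` over `np.isscalar`: it reflects the distinction the promotion machinery actually cares about.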
Comments:

- `np.isscalar(b)` with `b = np.uint16(1)` returned True for me. What version are you using? Numpy 1.15 here.
- `np.isscalar(b[...])` (make sure not to miss the Ellipsis)? False...! But the doc said you should use `ndim` almost everywhere.
- `np.isscalar(b[...])` is False on Numpy 1.16.3, as `b[...]` is indeed a numpy array. I think this answer (stackoverflow.com/a/42191121/10640534) will help.
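For completeness, a version-independent workaround (my own sketch, not from the thread): cast the byte array up explicitly with `astype`, which sidesteps value-based casting entirely and works the same on every NumPy release:

```python
import numpy as np

a = np.linspace(0, 255, 6, dtype=np.uint8)

# Explicit promotion: no value-based casting is involved,
# so the addition happens in uint16 and cannot wrap around.
result = a.astype(np.uint16) + np.uint16(1)
result
# array([  1,  52, 103, 154, 205, 256], dtype=uint16)
```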