
I checked the size of a pointer in my Python terminal (in the Enthought Canopy IDE) via

    import ctypes
    print(ctypes.sizeof(ctypes.c_voidp) * 8)
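As a cross-check (not from the original post), the pointer width can also be queried with the standard library's struct and platform modules:

    import struct
    import platform

    # Size of a C pointer ("P") in bits; 64 on a 64-bit interpreter build.
    print(struct.calcsize("P") * 8)

    # Human-readable architecture string, e.g. '64bit'.
    print(platform.architecture()[0])

Note that this reports the width of the Python interpreter itself, which is what matters for ctypes and NumPy, not the width of the OS.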

I have a 64-bit architecture, and working with numpy.float64 is fine. But I cannot use np.float128:

    np.array([1, 1, 1], dtype=np.float128)

or

    np.float128(1)

results in:

    AttributeError: 'module' object has no attribute 'float128'

I'm running the following version:

    sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)
  • @Matthias: Unless you've got a very unusual platform (e.g., IBM mainframe), NumPy almost certainly doesn't give you access to true 128-bit floats. On some platforms, NumPy supports the x87 80-bit floating-point format defined in the 1985 version of the IEEE 754 standard, and on some of those platforms, that format is reported as float128 (while on others it's reported as float96). But all that's going on there is that you have an 80-bit format with 48 bits (or 16 bits) of padding. Commented Apr 23, 2015 at 11:30
  • @PadraicCunningham np.longdouble results in np.float64 Commented Apr 23, 2015 at 11:32
  • stackoverflow.com/questions/9062562/… Commented Apr 23, 2015 at 11:33
  • @PadraicCunningham the exact size does not really matter as long as I have a higher precision than a float64 (for comparing quadrature rules) Commented Apr 23, 2015 at 11:34
  • 1
    @Matthias: Then you're probably out of luck. Are you on Windows? IIRC, the Windows platform defines long double to be the same type as double, so np.longdouble doesn't give you any extra precision. Commented Apr 23, 2015 at 11:35
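To see what np.longdouble actually provides on a given platform (a sketch, assuming only that NumPy is installed), np.finfo reports both the storage size and the significand width:

    import numpy as np

    info = np.finfo(np.longdouble)
    # bits is the storage size: 64 where long double == double (e.g. Windows/MSVC),
    # 96 or 128 where the 80-bit x87 format is padded out.
    print(info.bits, "stored bits")
    # nmant is the number of explicit significand bits: 52 for plain double,
    # 63 for x87 extended precision, 112 for a true IEEE binary128.
    print(info.nmant, "significand bits")

If nmant prints 52, np.longdouble is just an alias for double on that platform and offers no extra precision.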

2 Answers


Update: From the comments, it seems np.float128 is not a true 128-bit float even where it exists; it is the 80-bit extended type padded out to 128 bits.

I am using Anaconda on a 64-bit Ubuntu 14.04 system with

    sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)

and 128-bit floats work fine:

    import numpy
    a = numpy.float128(3)

This might be a distribution problem. Try:

EDIT: Update from the comments:

Not my downvote, but this post doesn't really answer the "why doesn't np.float128 exist on my machine" implied question. The true answer is that this is platform specific: float128 exists on some platforms but not others, and on those platforms where it does exist it's almost certainly simply the 80-bit x87 extended precision type, padded to 128 bits. – Mark Dickinson


10 Comments

That's almost certainly not a 128-bit float, at least not in the sense of the IEEE 754 binary128 format. It's an 80-bit float with 48 bits of padding.
Try doing numpy.float128(1) + numpy.float128(2**-64) - numpy.float128(1). I suspect you'll get an answer of 0.0, indicating that the float128 type contains no more than 64 bits of precision.
@CharlieParker: Yes, absolutely expected. In normal double precision, 1.0 + 2**-64 is not exactly representable (not enough significand bits), so the result of the addition is the closest double-precision float which is exactly representable, which is 1.0 again. And now of course subtracting 1.0 gives 0.0. And for regular double precision, the same is true with 1.0 + 2**-53 - 1.0 (the binary precision is 53). For extended x87-style precision, with the usual round-ties-to-even, 1.0 + 2**-64 - 1.0 will give zero, while 1.0 + 2**-63 - 1.0 will be nonzero.
@CharlieParker: Not my downvote, but this post doesn't really answer the "why doesn't np.float128 exist on my machine" implied question. The true answer is that this is platform specific: float128 exists on some platforms but not others, and on those platforms where it does exist it's almost certainly simply the 80-bit x87 extended precision type, padded to 128 bits.
@CharlieParker: Because floating-point means floating (binary) point! The ability to move the point allows representations of values at a wide range of scales, but doesn't magically give extra precision. See any of the many floating-point guides out there for more information. These comments aren't really the right place for this discussion ...
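The probe suggested in the comments generalizes: counting how many bits survive addition to 1.0 measures the significand width directly. A minimal sketch (significand_bits is a hypothetical helper for illustration, not a NumPy API):

    import numpy as np

    def significand_bits(ftype):
        """Return the largest k such that ftype(1) + 2**-k is distinguishable from 1."""
        one = ftype(1)
        k = 0
        # Stop once 1 + 2**-(k+1) rounds back to exactly 1 (round-ties-to-even).
        while one + ftype(2) ** ftype(-(k + 1)) - one != 0:
            k += 1
        return k

    print(significand_bits(np.float64))     # 52: IEEE double
    print(significand_bits(np.longdouble))  # 63 on x87 platforms; 52 where long double == double

A genuine IEEE binary128 type would report 112 here; a padded x87 "float128" reports only 63.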

For me, the issue was a Python module that has a problem in Windows (PyOpenGL for those that care). This (now-dead) site used to have Python wheels with "fixed" versions of many popular modules, to address the float128 issue. I haven't looked in a while, but there may be a viable replacement for that.

Our company started using fbs to get around this style of issue, but you should know that it costs a (very) small amount of money to legally use for business code.


Note: This question has an accepted answer. My answer is for future searchers, since this question is high in Google results for module 'numpy' has no attribute 'float128'.

2 Comments

The link in the answer no longer exists...
@FelixScheffer That is a bummer. We relied on that guy for years. I'll update my answer.
