
I know why decimal values cannot always be represented exactly in floating-point. I also know that a decimal value is rounded when it is converted to floating-point, according to IEEE 754.
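
For instance (a minimal Python sketch just to illustrate; the question itself is language-independent), constructing a Decimal from a float shows the exact value the double actually stores:

    from decimal import Decimal

    # Decimal(float) exposes the exact binary value behind 0.1,
    # not the "0.1" we typed.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625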

But the stored floating-point value is not exact. For example, decimal 0.1 is rounded to the double with sign 0, exponent 01111111011, and mantissa 1001100110011001100110011001100110011001100110011010.

Its infinite binary representation is: 0.000110011001... (1001 repeating forever)
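
That bit pattern can be verified with a small sketch (Python again, assuming a 64-bit IEEE 754 double, which is what CPython's float is on common platforms):

    import struct

    # Pack 0.1 as a big-endian IEEE 754 double and split the 64 bits
    # into sign, exponent, and mantissa fields.
    bits = ''.join(f'{b:08b}' for b in struct.pack('>d', 0.1))
    print(bits[0], bits[1:12], bits[12:])
    # 0 01111111011 1001100110011001100110011001100110011001100110011010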

When we print 0.1, the floating-point representation has to be converted back to decimal. The question is: how can this rounded floating-point representation be converted back to decimal "accurately"?

I found a similar question here, where @Guffa says:

"it's close enough so that when the least significant digits are rounded off to display the value"

But I don't understand how the rounding works when converting the floating-point value back to decimal.
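
Here is what I can observe in one language (a Python sketch; CPython's repr is documented to pick the shortest decimal string that converts back to the same double):

    # Printed with many digits, the stored value is clearly not 0.1:
    print(f'{0.1:.25f}')    # 0.1000000000000000055511151

    # repr chooses the shortest decimal string that round-trips to the
    # same double, which for this value is just "0.1":
    print(repr(0.1))                             # 0.1
    print(float('0.1') == 0.1)                   # True
    print(float('0.10000000000000001') == 0.1)   # True as well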

I do not know if I have made myself clear, but I really want to know how this works.

Thanks.

