
I have this code:

#include <stdio.h>

#define Third (1.0/3.0)
#define ThirdFloat (1.0f/3.0f)

int main()
{
    double a = 1/3;
    double b = 1.0/3.0;
    double c = 1.0f/3.0f;
    printf("a = %20.15lf, b = %20.15lf, c = %20.15lf\n", a, b, c);

    float d = 1/3;
    float e = 1.0/3.0;
    float f = 1.0f/3.0f;
    printf("d = %20.15f, e = %20.15f, f = %20.15f\n", d, e, f);

    double g = Third*3.0;
    double h = ThirdFloat*3.0;
    float i = ThirdFloat*3.0f;
    printf("(1/3)*3: g = %20.15lf; h = %20.15lf, i = %20.15f\n", g, h, i);
}

Which gives this output:

a = 0.000000000000000, b = 0.333333333333333, c = 0.333333343267441
d = 0.000000000000000, e = 0.333333343267441, f = 0.333333343267441
(1/3)*3: g = 1.000000000000000; h = 1.000000029802322, i = 1.000000000000000

I assume the output for a and d looks like this because the compiler performs integer division first and only then converts the result to a floating type. b looks good; e is wrong because of low float precision, as are c and f.
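For illustration, a minimal sketch of the integer-division effect (the cast and the variable names here are just for demonstration):

#include <stdio.h>

int main(void)
{
    double x = 1/3;          /* integer division first: 1/3 == 0, then 0 converts to 0.0 */
    double y = (double)1/3;  /* converting one operand first gives real division */
    printf("x = %f, y = %f\n", x, y);   /* x = 0.000000, y = 0.333333 */
}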

But I have no idea why g has the correct value (I thought that 1.0/3.0 = 1.0lf/3.0lf, but then I should be wrong) and why h isn't the same as i.

  • For g... didn't you notice that it's multiplied by three? Commented Oct 30, 2020 at 20:55
  • So basically you're asking why (1.0 / 3.0) * 3.0 is 1, and why (1.0f / 3.0f) * 3.0 does not output 1? Commented Oct 30, 2020 at 20:57
  • 2
    Re h versus i: Because 3.0 is a double and 3.0f is a float Commented Oct 30, 2020 at 20:57
  • 2
    Compilers tend to be very good at something called constant folding, where compile-time constant expressions are evaluated at compile-time. Also, good compilers are also getting rather good to detect cases like shown for g, h and i, where a division by 3.0 followed by a multiplication by 3.0 cancel each other out. Commented Oct 30, 2020 at 21:07
  • Regarding the constant folding and calculation, even without optimizations enabled GCC 10.2 will convert all your calculations into constant values, as seen in the assembly here (look at the constants at the end). Commented Oct 30, 2020 at 21:12

3 Answers


Let us first take a closer look: use "%.17e" (approximate decimal) and "%a" (exact).

#include <stdio.h>

#define Third (1.0/3.0)
#define ThirdFloat (1.0f/3.0f)
#define FMT "%.17e, %a"

int main(void)
{
    double a = 1/3;
    double b = 1.0/3.0;
    double c = 1.0f/3.0f;
    printf("a = " FMT "\n", a, a);
    printf("b = " FMT "\n", b, b);
    printf("c = " FMT "\n", c, c);
    puts("");

    float d = 1/3;
    float e = 1.0/3.0;
    float f = 1.0f/3.0f;
    printf("d = " FMT "\n", d, d);
    printf("e = " FMT "\n", e, e);
    printf("f = " FMT "\n", f, f);
    puts("");

    double g = Third*3.0;
    double h = ThirdFloat*3.0;
    float i = ThirdFloat*3.0f;
    printf("g = " FMT "\n", g, g);
    printf("h = " FMT "\n", h, h);
    printf("i = " FMT "\n", i, i);
}

Output

a = 0.00000000000000000e+00, 0x0p+0
b = 3.33333333333333315e-01, 0x1.5555555555555p-2
c = 3.33333343267440796e-01, 0x1.555556p-2

d = 0.00000000000000000e+00, 0x0p+0
e = 3.33333343267440796e-01, 0x1.555556p-2
f = 3.33333343267440796e-01, 0x1.555556p-2

g = 1.00000000000000000e+00, 0x1p+0
h = 1.00000002980232239e+00, 0x1.0000008p+0
i = 1.00000000000000000e+00, 0x1p+0

But I have no idea why g has the correct value

  1. (1.0/3.0)*3.0 can evaluate as a double at compile time or at run time, and the rounded result is exactly 1.0.

  2. (1.0/3.0)*3.0 can evaluate at compile time or at run time using wider-than-double math, and the rounded result is exactly 1.0. Research FLT_EVAL_METHOD; see the sketch after this list.
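A minimal sketch to check this on your implementation, assuming a hosted C99-or-later compiler (the macro comes from <float.h>):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 0: each type is evaluated in its own precision;
       1: float and double expressions are evaluated as double;
       2: all expressions are evaluated as long double;
       negative: indeterminable */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
}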

and why h isn't the same as i.

(1.0f/3.0f) can use float math to form a float quotient that is noticeably different from one-third: 0.333333343267.... A final *3.0 is then, not surprisingly, different from 1.0.

The outputs are all correct. We need to see why the expectation was amiss.


OP further asks: "Why is h (float * double) less accurate than i (float * float)?"

Both start with 0.333333343267... * 3.0, not one-third * 3.0.
float * double is more accurate. Both form a product, yet the float * float product is a float, rounded to the nearest 1 part in 2^24, whereas the more accurate float * double product is a double, rounded to the nearest 1 part in 2^53. The float * float product rounds to 1.0000000, whereas float * double rounds to 1.0000000298....
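To make that concrete, a small sketch (variable names are mine; it assumes IEEE-754 binary floats and FLT_EVAL_METHOD == 0):

#include <stdio.h>

int main(void)
{
    float third_f = 1.0f / 3.0f;            /* 0x1.555556p-2 = 0.333333343267... */
    double prod_d = (double)third_f * 3.0;  /* exact product 0x1.0000008p+0 fits in a double */
    float  prod_f = third_f * 3.0f;         /* same exact product rounded to float: 0x1p+0 */
    printf("float * double: %a\n", prod_d);
    printf("float * float:  %a\n", prod_f);
    /* 0x1.0000008p+0 is 1 + 2^-25, closer to 1.0 than to the next float 1 + 2^-23,
       so rounding the product to float lands on exactly 1.0 */
}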


3 Comments

Why is h (float * double) less accurate than i (float * float)? Is it because the compiler evaluates (1.0f/3.0f) * 3.0f as 1, the same thing as in g?
I tried double h=ThirdFloat*3.0; float i=Third*3.0f; because I thought that could lead to the same mistake (h is float * double, i is double * float), but i seems to be correct.
In other words, where does the value in h come from?

But I have no idea why g has the correct value (I thought that 1.0/3.0 = 1.0lf/3.0lf

g has exactly the value it should, based on:

#define Third (1.0/3.0)
...
double g = Third*3.0;

which is g=(1.0/3.0)*3.0;
which is 1.000000000000000 (when printed with "%20.15lf").
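For illustration (my arithmetic, assuming IEEE-754 double with round-to-nearest-even): 1.0/3.0 rounds to 0x1.5555555555555p-2, and multiplying that by 3 gives the exact value 0x1.fffffffffffff8p-1, which lies exactly halfway between two doubles; round-to-even picks 0x1p+0, i.e. exactly 1.0.

#include <stdio.h>

int main(void)
{
    double third = 1.0 / 3.0;      /* 0x1.5555555555555p-2 */
    printf("%a\n", third);
    printf("%a\n", third * 3.0);   /* 0x1p+0: the exact product is a halfway
                                      case and rounds to the even candidate */
}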

5 Comments

I thought that computers cannot use exact values like 1.0/3.0, so they approximate it as 0.3333. How is it possible for the computer to use exact values?
There are some values that can be represented in exact terms, but a better answer for that question is here.
I understand how computers see numbers; the question is about precision. I mean, is 1.0/3.0 exactly 1/3, and is that why (1.0/3.0)*3.0 = 1.000000 (the stored value is not an approximation but rather both numerator and denominator)? Or does it have very high precision but isn't exactly the same (like 1.00000000000000000000000000000135235)?
@Pulpit - Are you able to keep up with the comments, in particular Chux's, and the link I provided in the 2nd comment above? They address your questions in great detail. In the simplest terms, the precision of the variable type is the limiting factor. That is why, when looking at a float (32 bits), a value may be shown as exact, but when viewed with a double, variations will show up in the fractional part, as you have shown. 6 digits is not enough to exceed the ability of a float to show a near-exact depiction of (1.0/3.0)*3.0. As per Chux's suggestion, try it with "%.17e".
I see that the same number can be approximated with lower (float) or higher (double) precision, and the conversion from float to double shows off the float's limitations. That's understandable. But why does float * double (as in h) give lower precision than float * float (as in i)? It cannot come from the float's limitation. Is it related to the conversion process?

I think I got the answer.

#define Third (1.0/3.0)
#define ThirdFloat (1.0f/3.0f)

printf("%20.15f, %20.15lf\n", ThirdFloat*3.0, ThirdFloat*3.0);    // float*double
printf("%20.15f, %20.15lf\n", ThirdFloat*3.0f, ThirdFloat*3.0f);  // float*float
printf("%20.15f, %20.15lf\n", Third*3.0, Third*3.0);              // double*double
printf("%20.15f, %20.15lf\n\n", Third*3.0f, Third*3.0f);          // double*float

printf("%20.15f, %20.15lf\n", Third, Third);
printf("%20.15f, %20.15lf\n", ThirdFloat, ThirdFloat);
printf("%20.15f, %20.15lf\n", 3.0, 3.0);
printf("%20.15f, %20.15lf\n", 3.0f, 3.0f);

And the output:

1.000000029802322, 1.000000029802322
1.000000000000000, 1.000000000000000
1.000000000000000, 1.000000000000000
1.000000000000000, 1.000000000000000

0.333333333333333, 0.333333333333333
0.333333343267441, 0.333333343267441
3.000000000000000, 3.000000000000000
3.000000000000000, 3.000000000000000

The first line is not accurate because of the limitations of float. The constant ThirdFloat has really low precision, so when it is multiplied by a double, the compiler takes this bad approximation (0.333333343267441), converts it to a double, and multiplies it by the double 3.0, which also gives a wrong result (1.000000029802322).
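A small sketch (variable names are mine) showing that the float-to-double conversion preserves the inaccurate value exactly, so the wider product exposes the error instead of hiding it:

#include <stdio.h>

int main(void)
{
    float tf = 1.0f / 3.0f;
    double td = tf;                     /* conversion is exact: same value */
    printf("%a\n%a\n", (double)tf, td); /* both print 0x1.555556p-2 */
    printf("%a\n", td * 3.0);           /* 0x1.0000008p+0 = 1.0000000298... */
}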

But if ThirdFloat, which is a float, is multiplied by 3.0f, which is a float as well, the product is rounded to the nearest float, and that nearest float happens to be exactly 1.0; the small error is rounded away, which is why I got the exact result. (The computer never has the exact value of 1/3; the float rounding just cancels the error.)

