
This is a bit of an odd one. I'm getting different outputs for the same input, from the same bit of code, at different times.

It's a very simple calculation, just getting the radians for a given angle in degrees, in a class that handles compass-type stuff. It started off like this:

public double Radians
{
    get { return this.heading_degrees * Math.PI / 180; }
    set { this.heading_degrees = value * 180 / Math.PI; normalize(); }
}

(heading_degrees is a member variable in the Compass class)
Looks OK, right?
Except I was getting different results when 'getting' the Radians for a given angle.
So I dug deeper and changed the code; 'get' now looks like this:

get
{
    //double hd = heading_degrees;
    double hd = 180.0;
    //double pi = Math.PI;
    double pi180 = 0.01745329251; //pi / 180;
    double result = hd * pi180;
    //double result = 3.14159265359;
    return result;
    //return heading_degrees * Math.PI / 180;
}

As you can see from the commented-out lines, I've tried different things to try and get to the bottom of this.
Setting double result = 3.14159265359; did return 3.14159265359 consistently.
However, returning double result = hd * pi180; as in the above code does NOT return a consistent result. As you can see, heading degrees is exactly 180.0 now, just for testing and to prove that the input IS exactly the same. When I hit this code the first time, I get this result:
result = 3.1415926518
The second time through I get this:
result = 3.1415927410125732

I've tried this on two computers in an attempt to see whether the problem was environmental. I've not yet been able to test it on different IDEs (currently using VS Express 2012). Has anyone got any ideas as to why this could be happening? I'm not threading anywhere (and even if I was, how would that change the result in the current iteration of the code, with the input being set at 180.0?).

One little clue I seem to have found is that making little changes to the code (i.e. using Math.PI instead of 3.14159... etc.) changes the result the first time through; however, the result the second time through always seems to be 3.1415927410125732.

Apologies for the extremely long-winded post.

Other notes: the second run-through is just another place in the program that is calling this getter. It's not a difference between debug and release. I'm using .NET 4.
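
For anyone trying to reproduce this, here is a minimal standalone sketch (hypothetical names, not the actual project code) of one way to compare the two results bit for bit; printing a rounded double can hide a difference in the last few bits, so the raw 64-bit patterns are compared via BitConverter.DoubleToInt64Bits:

using System;

// Hypothetical standalone repro sketch (not the actual project code).
class PrecisionCheck
{
    static double RadiansFor(double headingDegrees)
    {
        double pi180 = 0.01745329251; // approximate pi / 180, as in the question
        return headingDegrees * pi180;
    }

    static void Main()
    {
        double first = RadiansFor(180.0);
        double second = RadiansFor(180.0);

        // DoubleToInt64Bits exposes the exact bit pattern of each result.
        Console.WriteLine("{0:R} -> 0x{1:X16}", first, BitConverter.DoubleToInt64Bits(first));
        Console.WriteLine("{0:R} -> 0x{1:X16}", second, BitConverter.DoubleToInt64Bits(second));
        Console.WriteLine(first == second ? "bit-identical" : "different");
    }
}

In a tiny repro like this the two calls may well come out bit-identical; the point is only the comparison technique, so that a stray bit in the full program shows up unambiguously.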

More tests:

if the get code is:

get{ double result = 180.0d * 0.01745329251d; return result; } 

The result is consistent, to the greater accuracy.

if the get code is:

get{ double hd = 180.0d; double result = hd * 0.01745329251d; return result; } 

The result is Not consistent.

if I do:

get{ double hd = 180.0d; double result = (float)(hd * 0.01745329251d); return result; } 

The result is consistent, but to the lower accuracy.

Note that in the above tests the variables are all local to the getter!
Also note that I only appear to be getting the inconsistency when I run the full code; is it something about how I'm storing the object that the getter belongs to that causes this?
I think I need to read Eric Lippert's reply to one of the answers again. Eric, if you write those two replies up as an answer I'll probably mark it as the answer, especially since the last example above is doing pretty much what you said with the cast.
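
For reference, a minimal sketch of what a cast-based version of the property might look like, assuming the same heading_degrees field and normalize() method as above, and casting to double rather than float so no precision is deliberately discarded; whether this is sufficient rests on the explicit-cast behaviour described in the comments below:

public double Radians
{
    // The explicit cast forces the intermediate back to true 64-bit double
    // precision (one of the truncation points described in the comments on
    // the answer), without throwing away precision the way a (float) cast does.
    get { return (double)(this.heading_degrees * Math.PI / 180.0); }
    set { this.heading_degrees = (double)(value * 180.0 / Math.PI); normalize(); }
}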

And THIS looks like gold:
Fixed point math in c#?
It appears to answer how to get out of the hole I've dug myself into, especially as I've found there are many, many functions similar to the above which are giving me the exact same headache.
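
For illustration, a rough sketch of what the fixed-point idea could look like here; the class, scale and names are hypothetical and not taken from the linked question:

// Rough fixed-point sketch (hypothetical). The heading is stored as an
// integer number of ten-thousandths of a degree, so comparisons and replay
// are exact; doubles only appear at the boundary where System.Math needs them.
public class FixedCompass
{
    private const long Scale = 10000;   // 0.0001 degree per unit
    private long headingUnits;          // e.g. 180.0 degrees == 1800000 units

    public void SetDegrees(double degrees)
    {
        // Round once at the boundary; everything downstream is exact integers.
        headingUnits = (long)Math.Round(degrees * Scale);
        long full = 360 * Scale;
        headingUnits = ((headingUnits % full) + full) % full;   // normalize to [0, 360)
    }

    public double Degrees
    {
        get { return headingUnits / (double)Scale; }
    }

    public double Radians
    {
        // Converted to double only at the edge, for callers like Math.Cos.
        get { return Degrees * Math.PI / 180.0; }
    }
}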

  • Can you show the code that is calling and displaying the results? Commented Jan 2, 2014 at 20:50
  • It may be best to point to the project in its entirety. Commented Jan 2, 2014 at 20:55
  • It may be best to point to the project in its entirety. It's a little messy currently, but: project: bitbucket.org/ekolis/freee; the above code is at line 250 of: bitbucket.org/ekolis/freee/src/… and it's getting called from line 313 of: bitbucket.org/ekolis/freee/src/… (both times; the second run-through is the 'replay' iteration of the code) Commented Jan 2, 2014 at 21:01
  • Are you interested in why these values are different, or in how to fix the bug they are introducing? Commented Jan 2, 2014 at 21:02
  • Both. I can probably truncate the result and keep enough accuracy to fix the bug that it's introducing; however, the WHY is important because it could help me find other places where this might happen. Commented Jan 2, 2014 at 21:04

1 Answer


Not sure why my mind jumped straight to release vs debug, but the hardware itself can give inconsistent results even on the same processor: Is floating-point math consistent in C#? Can it be? The short answer is that intermediates can use higher-precision values some of the time, generating different results depending on when things get truncated.

Old answer:
There are differences between release and debug; check out this question to get started.

Float/double precision in debug/release modes

If you need highly consistent results you might want decimals, not doubles.
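
For example, a minimal sketch of a decimal-based version (hypothetical class; decimal arithmetic is done in software, so it does not pick up extended register precision, but values still have to be cast to double wherever System.Math functions such as Math.Cos are called):

// Hypothetical sketch of a decimal-based compass property.
public class DecimalCompass
{
    private decimal headingDegrees;

    public decimal Degrees
    {
        get { return headingDegrees; }
        set { headingDegrees = ((value % 360m) + 360m) % 360m; }   // normalize to [0, 360)
    }

    public decimal Radians
    {
        // A decimal approximation of pi keeps the whole calculation in decimal.
        get { return headingDegrees * 3.14159265358979323846m / 180m; }
    }
}

// At the boundary where a double is required:
//   double x = Math.Cos((double)compass.Radians);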


25 Comments

Thanks for your reply; however, this is not a difference between release and debug (I've edited the OP to show that). Also, the point about decimals does not explain the inconsistency.
Your edited reply has a very useful link. I will be reading through the replies on that one, as it's the question that this problem will eventually lead to. I've not yet seen anything in that link about inconsistencies on the same processor, though. Also, one reason I'm using doubles is that most of the System.Math functions require a double as input, i.e. Math.Cos etc. I may have to rethink that and write my own math class...
@se5a: This is a frequently asked question. See for instance also stackoverflow.com/questions/8795550/…
@se5a: The official line is: the compiler can do so at its whim. In practice, the compiler truncates back to the "natural" precision when (1) there's an explicit cast, (2) the value is stored to a heap location (that is, a field of a class type, or a field of a struct where the struct is on the heap, or an array element.) Other than those situations it is free to do math in higher precision for any reason whatsoever.
@se5a: You might wonder why this oddity exists. The reason is that there are a small number of floating point registers available on the chip, and doing math in those registers is (1) faster, and (2) higher precision. But because there are a limited number of them, sometimes the values have to be "kicked out" of the registers, which truncates them. The exact details of how the registers are scheduled is implementation-defined.
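To illustrate point (2) above, here is a hedged sketch (a hypothetical helper, not from the project) of how storing an intermediate in a field of a class forces it out of the extended-precision register and back to a true 64-bit double:

// Hypothetical helper: the assignment to a heap field rounds the value to
// 64-bit double precision, so the value read back no longer depends on
// whether the JIT kept the intermediate in an extended-precision register.
class Truncator
{
    private double slot;

    public double RoundTrip(double value)
    {
        slot = value;     // store to the heap truncates extended precision
        return slot;
    }
}

// e.g. in the getter:
//   return truncator.RoundTrip(heading_degrees * Math.PI / 180.0);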