If you need precision, use decimal rather than double/float:
```csharp
var num = 39.248779999999996;  // num is a double
var dec = 39.248779999999996m; // dec is a decimal (note the m suffix)
```
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations.
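As a quick illustration of why this matters for money (a minimal sketch; the variable names and loop are just for demonstration), repeatedly adding 0.1 accumulates binary rounding error in `double` but stays exact in `decimal`:

```csharp
double dSum = 0;
decimal mSum = 0;

for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // 0.1 has no exact binary representation
    mSum += 0.1m;  // 0.1m is stored exactly as a decimal
}

Console.WriteLine(dSum == 1.0);  // False: the accumulated error never cancels out
Console.WriteLine(mSum == 1.0m); // True: ten exact additions of 0.1m give exactly 1.0
```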
Edit:
You can't represent all numbers exactly in float/double:
Binary floating-point arithmetic is fine as long as you know what's going on: don't expect values to be exactly the decimal ones you typed into your program, and don't expect calculations involving binary floating-point numbers to necessarily yield precise results. Even if two numbers are both exactly representable in the type you're using, the result of an operation on those two numbers won't necessarily be exactly representable. This is most easily seen with division (e.g. 1/10 isn't exactly representable even though both 1 and 10 are), but it can happen with any operation, even seemingly innocent ones such as addition and subtraction.
For example:

```csharp
double doubleValue = 1f / 10f;   // => 0.10000000149011612 (float division, widened to double)
decimal decimalValue = 1m / 10m; // => 0.1
```
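The same surprise shows up with addition and subtraction, as noted above. A small sketch (the 0.1 + 0.2 example is the classic illustration, not taken from the original answer):

```csharp
double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);  // False: 0.1 and 0.2 are already inexact in binary,
                              // and their sum is 0.30000000000000004...
Console.WriteLine(m == 0.3m); // True: all three values are represented exactly
```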
You can truncate the result to at most 7 fractional digits, but you can't round the value exactly:
```csharp
double value = 39.248779999999996;
double roundTo = Math.Pow(10, 7);
double result = Math.Truncate(value * roundTo) / roundTo; // result is 39.2487799
```
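With `decimal`, by contrast, rounding really does yield the exact value. A minimal sketch (reusing the literal from above; `Math.Round` has a `decimal` overload that operates on base-10 digits directly):

```csharp
decimal value = 39.248779999999996m;
decimal rounded = Math.Round(value, 7);  // round to 7 decimal places

Console.WriteLine(rounded);              // 39.2487800, stored exactly
Console.WriteLine(rounded == 39.24878m); // True: trailing zeros don't affect equality
```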
Use the decimal type if you want a precise representation.