Most programmers would be surprised to learn that COBOL got that right... the first version of COBOL had no floating point, only decimal, and that tradition continues in COBOL to this day: the first thing you reach for when declaring a number is decimal, and floating point is used only if you really need it. When C came along, for some reason there was no primitive decimal type, and in my opinion that's where all the problems started.
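
To illustrate the kind of problem being alluded to, here is a minimal C sketch (not from the original speaker) showing why binary floating point is an awkward default for decimal quantities: a value as ordinary as 0.1 has no exact binary representation, so simple arithmetic drifts away from the decimal answer.

```c
#include <stdio.h>

int main(void) {
    /* Add 0.1 ten times; in binary floating point this does not sum to 1.0,
       because 0.1 cannot be represented exactly as a double. */
    double total = 0.0;
    for (int i = 0; i < 10; i++) {
        total += 0.1;
    }

    printf("total = %.17f\n", total);                 /* slightly less than 1.0 */
    printf("total == 1.0? %s\n", total == 1.0 ? "yes" : "no");  /* prints "no" */
    return 0;
}
```

A decimal type, like the one COBOL defaults to, represents 0.1 exactly, which is why it remains the natural choice for money and other human-scale quantities.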