It's common knowledge that most floating-point numbers are not stored precisely (when the IEEE-754 format is used). So one shouldn't do this:
0.3 - 0.2 === 0.1; // very wrong

...as it will evaluate to false, unless some specific arbitrary-precision type/class is used instead (BigDecimal in Java/Ruby, BCMath in PHP, Math::BigInt/Math::BigFloat in Perl, to name a few).
Yet I wonder: when one prints the result of this expression, 0.3 - 0.2, why do scripting languages (Perl and PHP) give 0.1, while "virtual-machine" ones (Java, JavaScript and Erlang) give something more like 0.09999999999999998 instead?
And why is it also inconsistent within Ruby? Version 1.8.6 (codepad) gives 0.1, while version 1.9.3 (ideone) gives 0.0999...?