It's pretty clear why double & co. are not a good choice when it comes to handling currency. I'm wondering, though: since the issue only arises when calculations are performed on the value, am I correct in assuming that there is no problem at all in just storing a currency value in a double?
For example:

1. The value gets loaded from any given source into a double.
2. The value gets modified, directly typed in by the user.
3. The value gets stored to disk in a suitable format.
In the above example the double is just a way to store the value in memory, and thus shouldn't present any of the problems that arise if calculations are performed on the value. Is this correct?
And, if correct, wouldn't it be better to use currency-specific types only when performing calculations? Instead of loading 1000 BigDecimals from a database, one could load 1000 doubles. Then, when necessary, define a BigDecimal, do the calculations, and just keep the resulting double in memory, as in the sketch below.
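A rough sketch of the idea in Java (the class, method, and the list of prices are purely illustrative):

```java
import java.math.BigDecimal;
import java.util.List;

public class PriceTotals {

    // Illustrative: 'prices' stands in for values loaded from the
    // database as plain doubles instead of BigDecimals.
    static double total(List<Double> prices) {
        BigDecimal sum = BigDecimal.ZERO;
        for (double p : prices) {
            // Switch to BigDecimal only for the actual calculation...
            sum = sum.add(BigDecimal.valueOf(p));
        }
        // ...and keep just the resulting double in memory afterwards.
        return sum.doubleValue();
    }
}
```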
If you later convert that double to a BigDecimal, you won't get the value you expect. Storing a currency in a double makes it wrong immediately, so don't do it if you're going to be calculating with it later. Unless, of course, the error that is introduced is not of concern to the requirements of your program.

If you round when converting the double back to a BigDecimal, and you never need more than 14 significant digits, and you always know how many digits to round to, then your approach would be workable. It would still be quite fragile, though.
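For example, a quick sketch of what a naive conversion produces versus a rounded one (the 0.1 value and the scale of 2 are just for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DoubleConversion {
    public static void main(String[] args) {
        double stored = 0.1; // intended to represent exactly 0.10

        // Naive conversion exposes the binary representation error:
        System.out.println(new BigDecimal(stored));
        // 0.1000000000000000055511151231257827021181583404541015625

        // Rounding to a known number of digits recovers the intended value:
        System.out.println(new BigDecimal(stored).setScale(2, RoundingMode.HALF_UP));
        // 0.10
    }
}
```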