TL;DR
I'd trace it back to two things:
- Custom formats not returning more digits than they can guarantee to be correct, for legacy reasons.
- Floating-point inaccuracies in the string that the Dragon4 algorithm generates for the Numeric format specifier (N). This is to be expected, as there is no way to be more accurate given the precision of a float, so it is not really the algorithm's fault.
The issue goes away if you use decimal:
```csharp
decimal dec = 0.123M;
string s1 = dec.ToString("0.#########"); // "0.123"
string s2 = dec.ToString("N9");          // "0.123000000"
```
In depth
Using the ### syntax, the formatter won't let you print more decimal digits than it is sure are correct. Let's test this with a few samples:
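First, the value from the question (a sketch; I'm assuming a float initialized with 0.123):

```csharp
float flt = 0.123f;                  // nearest float is ~0.123000003397...
flt.ToString("0.#########").Dump();  // 0.123
flt.ToString("N9").Dump();           // 0.123000003
```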
Let's try with more decimal digits. Note that the Numeric format specifier lets us print more of them:
```csharp
float flt = 1f/3f;
flt.ToString("0.#########").Dump(); // 0.3333333
flt.ToString("N9").Dump();          // 0.333333343
```
We get more digits when using a more precise data type, but there is still a cap when using #:
```csharp
double dbl = 1D/3D;
dbl.ToString("0.#################").Dump(); // 0,333333333333333
dbl.ToString("N17").Dump();                 // 0,33333333333333331
```
We can trace this back to this code, which gets the precision from TNumber.MaxPrecisionCustomFormat. Its documentation says:
> SinglePrecisionCustomFormat and DoublePrecisionCustomFormat are used to ensure that custom format strings return the same string as in previous releases when the format would return x digits or less (where x is the value of the corresponding constant). In order to support more digits, we would need to update ParseFormatSpecifier to pre-parse the format and determine exactly how many digits are being requested and whether they represent "significant digits" or "digits after the decimal point".
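You can see that constant in action: adding more # placeholders beyond the float's 7 reliable digits changes nothing (a quick sketch, reusing the value from above):

```csharp
float flt = 1f/3f;
flt.ToString("0.#######").Dump();          // 0.3333333
flt.ToString("0.###############").Dump();  // 0.3333333 – extra # placeholders are ignored
```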
Concrete Answers
Why aren't these two formats equivalent?
Due to legacy reasons: the custom format will only show "known" digits, while the Numeric format shows all requested digits, even though they might be inaccurate.
Why does the "standard" format add 000003 to the end of the variable that only has 0.123?
The Numeric format adds "000003" to the end because you asked it to: you requested 9 decimal digits, so you get 9 decimal digits, even if the trailing ones are not accurate.
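Those digits are not invented by the formatter; they are part of the value the float actually stores. A quick way to see this (again assuming the question's 0.123f):

```csharp
float flt = 0.123f;          // stored as ~0.123000003397...
flt.ToString("G9").Dump();   // 0.123000003 – the same digits N9 shows after the decimal point
```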
The custom format won't add any digits after the 3, because between the 3rd and the 7th decimal digit (in your sample you added 9 #, but the last 2 are ignored) there are only 0s, and the custom format drops trailing 0s, as stated here:

> Note that this specifier never displays a zero that is not a significant digit, even if zero is the only digit in the string. It will display zero only if it is a significant digit in the number that is being displayed.
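To illustrate (a small sketch using a decimal so no float noise gets in the way): # drops zeros that are not significant, while a standard numeric format pads to the requested length.

```csharp
decimal half = 0.5m;
half.ToString("0.####").Dump(); // 0.5    – insignificant zeros are dropped
half.ToString("N4").Dump();     // 0.5000 – N pads to the requested 4 decimal places
```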