I would rephrase the description as "code which converts a type to a different representation for the purpose of doing something which could have been done just as well or better in the original representation, and then converts it back." There are many situations where converting something to a different type, acting upon it, and converting it back is entirely appropriate, and failure to do so would result in incorrect behavior.
As an example where conversion is good:
One has four float values of arbitrary signs whose magnitudes may differ by a factor of up to 1,000, and one needs to compute the sum to within 0.625 units in the last place. Converting all four values to double, computing the sum, and converting the result back to float will be much more efficient than any approach using float alone.
Floating-point values are at best accurate to 0.5 units in the last place (ULP), so this example requires that the worst-case rounding error be no more than 25% above the optimum worst-case error. Performing the sum as a double will yield a value which is accurate to within 0.5001 ULP. While a 0.625 ULP requirement might seem contrived, such requirements are often important in successive-approximation algorithms; the more tightly the error bound is specified, the lower the worst-case iteration count.
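A minimal C# sketch of that widen-sum-narrow pattern (the method name is just for illustration):

    // Each float→double conversion is exact; the three double additions
    // contribute only a tiny fraction of a float ULP of error; the single
    // rounding back to float costs at most 0.5 ULP, for a worst case of
    // roughly 0.5001 ULP overall.
    static float SumOfFour(float a, float b, float c, float d)
    {
        double sum = (double)a + b + c + d; // all arithmetic done in double
        return (float)sum;                  // one rounding back to float
    }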
As an example where conversion is bad:
One has a floating-point number, and wishes to output a string which will represent its value uniquely. One approach is to convert the number to a string with a certain number of digits, try to convert it back, and see if the result matches.
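A C# sketch of that naive approach might look like this (the method name is illustrative; nine significant digits is always enough to round-trip an IEEE-754 binary32 value):

    using System.Globalization;

    // Naive "reversible" formatting: emit digits until one particular
    // parser happens to give back the original value.
    static string NaiveShortestString(float value)
    {
        for (int digits = 1; digits <= 9; digits++)
        {
            string s = value.ToString("G" + digits, CultureInfo.InvariantCulture);
            if (float.Parse(s, CultureInfo.InvariantCulture) == value)
                return s;
        }
        return value.ToString("G9", CultureInfo.InvariantCulture);
    }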
But this is actually a poor approach. If a decimal string represents a value sitting almost precisely on the halfway point between two floating-point values, it is fairly expensive for a string-to-float method to guarantee that it will always yield the nearer float value, and many conversion methods don't uphold such a guarantee (among other things, doing so would in some cases require reading all the digits of a number, even if it were billions of digits long).
It is much cheaper for a method to guarantee that it will always return a value within 0.5625 ULP of the represented value. A robust "reversible" decimal-to-string formatting routine should compute how far the output value is from the correct value, and continue outputting digits until the result is within 0.375 ULP, if not 0.25 ULP. Otherwise, it may output a string which some conversion methods will process correctly but other conversion methods won't.
It is better to sometimes output a digit that might not be "necessary" than to output a string that might be misinterpreted. The key point is that the decision of how many digits to output should be based upon numeric calculations related to the output process, rather than upon whether one particular method happens to convert the string back to the original number.
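Here is a hedged C# sketch of that idea, under the assumption that measuring the candidate string's value in double precision is precise enough (a double-parse rounding error is about 2^-29 of a float ULP, negligible against a 0.375 ULP budget). `SafeShortestString` and `UlpOf` are hypothetical names, not a standard API, and `MathF.BitIncrement` requires .NET Core 3.0 or later:

    using System;
    using System.Globalization;

    // Writer-side check: keep emitting digits until the string's numeric
    // value is provably within 0.375 float-ULP of the value being printed,
    // instead of trusting any one string→float parser to round-trip it.
    static string SafeShortestString(float value)
    {
        double exact = value;        // float→double conversion is exact
        double ulp = UlpOf(value);   // size of one float ULP at this magnitude
        for (int digits = 1; digits <= 9; digits++)
        {
            string s = value.ToString("G" + digits, CultureInfo.InvariantCulture);
            // Measure the string's distance from the true value in double
            // precision; the parse error (~2^-29 float ULP) is negligible.
            double represented = double.Parse(s, CultureInfo.InvariantCulture);
            if (Math.Abs(represented - exact) <= 0.375 * ulp)
                return s;
        }
        return value.ToString("G9", CultureInfo.InvariantCulture); // exact enough
    }

    static double UlpOf(float f)
    {
        float a = MathF.Abs(f);     // distance to the next float above |f|
        return (double)MathF.BitIncrement(a) - a;
    }

Unlike the naive version, this decision never consults a float parser's round-trip behavior: a string within 0.375 ULP of the value leaves every other float at least 0.625 ULP away, so any conversion method accurate to within 0.5625 ULP has no choice but to return the original value.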
"Roundabout code" that accomplishes in many instructions what could be done with far fewer (eg: rounding a number by converting a decimal into a formatted string, then converting the string back into a decimal).if the situation is so that they have to be used?- what situation would that be?decimal myValue = decimal.Parse(dataReader["myColumn"].ToString())is a pet peeve of mine.