
Simple question - why does the Decimal type define these constants? Why bother?

I'm looking for a reason why this is defined by the language, not possible uses or effects on the compiler. Why put this in there in the first place? The compiler can just as easily inline 0m as it could Decimal.Zero, so I'm not buying it as a compiler shortcut.

    I do not think these answers adequately explain why these values are there. I'm hearing some effects and usage, but what I'm looking for is WHY was this designed into the language, and why it's not there in Float, for example, or Int32... Commented Apr 17, 2009 at 18:26
  • They help compilers generate more compact assemblies. Still used today. Commented Aug 18, 2020 at 21:28

3 Answers


Small clarification: they are actually static readonly values, not constants. That is a meaningful difference in .NET, because constant values are inlined by the various compilers, which makes it impossible to track their usage in a compiled assembly. Static readonly values, however, are not copied but referenced. This matters for your question because it means their use can be analyzed.
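The distinction is observable with reflection. A minimal sketch (the class and field names here are illustrative, not from the BCL): a const field carries the IL literal flag and gets inlined at every use site, while a static readonly field is a real field that consumers reference, so its usage remains trackable.

```csharp
using System;
using System.Reflection;

// Illustrative names: a user-defined const vs. static readonly pair.
class Constants
{
    public const int ConstValue = 42;              // baked into each consumer's IL as a literal
    public static readonly int ReadonlyValue = 42; // loaded through a field reference at run time
}

class Program
{
    static void Main()
    {
        FieldInfo c = typeof(Constants).GetField("ConstValue");
        FieldInfo r = typeof(Constants).GetField("ReadonlyValue");
        Console.WriteLine(c.IsLiteral);  // True  - a true IL literal, inlined at use sites
        Console.WriteLine(r.IsInitOnly); // True  - a real field, so references to it survive compilation
    }
}
```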

If you use Reflector and dig through the BCL, you'll notice that MinusOne and Zero are only used within the VB runtime. They exist primarily to serve conversions between Decimal and Boolean values. Why MinusOne is used coincidentally came up on a separate thread just today (link).

Oddly enough, if you look at the Decimal.One value you'll notice it's used nowhere.

As to why they are explicitly defined ... I doubt there is a hard and fast reason. There appears to be no specific performance benefit, and only a small convenience, attributable to their existence. My guess is that they were added by someone during the development of the BCL for their own convenience and just never removed.

EDIT

Dug into the const issue a bit more after a comment by @Paleta. The C# definition of Decimal.One uses the const modifier; however, it is emitted as a static readonly at the IL level. The C# compiler uses a couple of tricks to make this value virtually indistinguishable from a const (it inlines literals, for example). This would show up in languages that recognize this trick (VB.Net recognizes it, but F# does not).
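The same trick applies to any const decimal you declare yourself, which makes it easy to verify. A sketch (class and field names are mine): because IL has no decimal literal type, the compiler emits the field as static initonly and attaches DecimalConstantAttribute so that compilers which recognize the attribute can still treat it as a compile-time constant.

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

// Illustrative: a user-defined const decimal demonstrates the compiler trick.
class DecimalConstDemo
{
    public const decimal One = 1m; // 'const' in C# source, but IL cannot store a decimal literal

    static void Main()
    {
        FieldInfo f = typeof(DecimalConstDemo).GetField("One");
        Console.WriteLine(f.IsLiteral);  // False - not a true IL literal
        Console.WriteLine(f.IsInitOnly); // True  - emitted as a static initonly field
        // DecimalConstantAttribute records the constant value for other compilers.
        Console.WriteLine(f.IsDefined(typeof(DecimalConstantAttribute), false)); // True
    }
}
```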


7 Comments

It is incorrect that those values are readonly; looking at the .NET Framework Decimal metadata you can see the following: [DecimalConstant(0, 0, 4294967295, 4294967295, 4294967295)] public const decimal MaxValue = 79228162514264337593543950335m; [DecimalConstant(0, 128, 0, 0, 1)] public const decimal MinusOne = -1m;
@Paleta, no, they are readonly. I've verified this by looking at the metadata and the MSDN page for the values. msdn.microsoft.com/en-us/library/system.decimal.one(VS.80).aspx
I have to disagree with you and your answer, look at the reflected mscorlib.dll code here reflector.webtropy.com/default.aspx/4@0/4@0/DEVDIV_TFS/Dev10/… those are declared as constants, MSDN is incorrect
Well, I just saw that in the IL it is declared as a static initonly field. Are all consts in C# translated into static readonly fields? Thanks for the clarification.
@Paleta I shared the same confusion as you. I had to sit down for a few minutes and play around with the generated IL and C# source to understand what was going on here. As for your question: no, the majority of C# constants are emitted as .literal values. It appears that only DateTime and Decimal are emitted in this mixed manner.

Some .NET languages do not support decimal literals, and in these cases it is more convenient (and faster) to write Decimal.One instead of new Decimal(1).

Java's BigInteger class has ZERO and ONE as well, for the same reason.
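The three forms mentioned above can be compared directly in C#. A minimal sketch; all three produce the same value, but only the first relies on language-level literal syntax:

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal a = 1m;             // decimal literal - C# syntax, not available in every .NET language
        decimal b = Decimal.One;    // shared readonly value, usable from any .NET language
        decimal c = new Decimal(1); // constructor call - works everywhere, but constructs the value each time
        Console.WriteLine(a == b && b == c); // True
    }
}
```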

3 Comments

Can someone explain this to me? 6 upvotes means it's got to be a good answer - but: how does a .NET language that doesn't support decimal as a datatype benefit from having a shared read-only property that returns a decimal and is defined as part of the Decimal class?
He probably meant that if a language doesn't have decimal literals, using a constant would be more efficient than converting an int literal to a decimal. Every .NET language supports the System.Decimal datatype, it's part of the CLR.
yeah Niki that is what I wanted to say. You can of course use System.Decimal in all .NET languages, but some support it better (like C# which has a decimal keyword and decimal literals) and some worse. Sorry, English is not my native language...

My opinion on it is that they are there to help avoid magic numbers.

Magic numbers are basically anywhere in your code that you have an arbitrary number floating around. For example:

int i = 32; 

This is problematic in the sense that nobody can tell why i is getting set to 32, or what 32 signifies, or if it should be 32 at all. It's magical and mysterious.

In a similar vein, I'll often see code that does this

int i = 0;
int z = -1;

Why are they being set to 0 and -1? Is this just coincidence? Do they mean something? Who knows?

While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (maybe zero means "missing", etc.), they do tell you that the value has been deliberately set and likely has some meaning.

While not perfect, this is much better than not telling you anything at all :-)

Note: it's not for optimization. Observe this C# code:

public static Decimal d = 0M;
public static Decimal dZero = Decimal.Zero;

When looking at the generated bytecode using ildasm, both options result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.
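The value-type point can be sanity-checked without ildasm. A small sketch (field and class names are illustrative): since decimal is a value type, each field holds its own copy of the value, so neither initialization "references" a shared object and there is nothing for Decimal.Zero to save over 0M.

```csharp
using System;

// Illustrative: both fields end up holding an independent copy of the value zero.
class ValueTypeCheck
{
    public static decimal d = 0M;
    public static decimal dZero = Decimal.Zero;

    static void Main()
    {
        Console.WriteLine(typeof(decimal).IsValueType); // True - copied by value, not referenced
        Console.WriteLine(d == dZero);                  // True - same value either way
    }
}
```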

7 Comments

Your argument hurts my head. They are numbers, they only become magic numbers when you start attaching random meaning to them such as -1 means do the dishes and 1 means bake cakes. Decimal.One is just as magical as 1 but arguably harder to read (but perhaps more optimal).
my point was that if someone types Decimal.Zero, they are more likely to have done that deliberately because Zero has some meaning - rather than just arbitrarily setting it to 0
I take issue with saying assigning something to 0 is arbitrary. It makes sense for enumerations with symbolic constants being mapped to numbers but for numbers mapped to numbers seems pretty insane. Sometimes 0 is really... zero. Units on the other hand would be a nice construct. 1km != 1.
Saying Decimal.Zero is just as arbitrary as saying 0.0. If we were talking about something that changes on an operating system level, like "/" vs some library constant that describes the filesystem separator, it would make sense, but Decimal.Zero is always just 0.0.
Damn, it was just my opinion. I thought I made that clear :-(
