
I just took an IKM C# test. One of the questions was:

Which of the following IMPROVE the performance of a C# program?

  • A. Use boxing
  • B. Use unboxing
  • C. Do not use constants
  • D. Use empty destructors
  • E. Use value type instead of reference type

In the end I skipped the question; the only possible answer I can see is E. In some situations value types can provide better performance (for small types: no dereferencing required, and not allocated on the managed heap [assuming it is not a member of a reference type]), but that's certainly not always the case.

1 Comment

  • Can you post here other questions? Commented Sep 10, 2013 at 8:52

2 Answers


Focusing a bit on the wrong answers:

A boxing conversion occurs when a value type value is converted to a reference type value, an object. It involves allocating memory on the garbage-collected heap, creating an object header that identifies the object as having the value type's type, and copying the value type's bits into the object. This is the conversion that creates the type-system illusion that a value type derives from System.ValueType and System.Object. Boxing conversions were heavily used in .NET 1.x programs, since the only collection types it supported were the classes in System.Collections, collections whose elements are Object. Generics, added in .NET 2.0, made those classes instantly obsolete, since they allowed creating the classes in System.Collections.Generic, which can store a value without having to box it. So no.
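A minimal sketch of the difference (class and variable names are mine, not from the question):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // .NET 1.x style: ArrayList stores Object, so adding an int
        // boxes it, one heap allocation per element.
        var oldList = new ArrayList();
        oldList.Add(42);                          // boxing: int -> object

        // Generic collection: the int is stored inline, no boxing.
        var newList = new List<int>();
        newList.Add(42);

        Console.WriteLine(oldList[0].GetType());  // System.Int32 (a boxed int)
        Console.WriteLine(newList[0]);            // 42
    }
}
```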

Unboxing is the opposite conversion, going from a boxed object back to the value type value. It is not quite as expensive as boxing: it only involves checking that the boxed object is of the expected type and copying the value type's bits out. It requires a cast in C# and is prone to throwing exceptions when the boxed type does not match. Same no as the previous one.
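A quick sketch of both directions, including the exact-type requirement that makes unboxing casts throw (identifiers here are illustrative):

```csharp
using System;

class UnboxingDemo
{
    static void Main()
    {
        object boxed = 42;            // boxing: int -> object

        int n = (int)boxed;           // unboxing: type check + copy of the bits
        Console.WriteLine(n);         // 42

        try
        {
            long wrong = (long)boxed; // the boxed type is int, not long
            Console.WriteLine(wrong);
        }
        catch (InvalidCastException)
        {
            // You must unbox to the exact value type that was boxed.
            Console.WriteLine("InvalidCastException");
        }
    }
}
```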

Identifiers marked with the const keyword are literal values that are compiled directly into the IL the compiler generates. The alternative is the readonly keyword, which requires a memory access to load the value and is thus always slower. A const identifier should always be private or internal; public constants have a knack for breaking a program when you deploy a bug fix that alters the value but don't recompile the assemblies that use the constant. Those assemblies will keep using the old value, since it was compiled into their code, a problem that can't happen with readonly values. So no.
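The trade-off in a minimal sketch (names and values are made up for illustration):

```csharp
using System;

class Config
{
    // Baked into the IL of every call site as a literal; keep it
    // private/internal so other assemblies can't compile it in.
    private const int BufferSize = 4096;

    // Loaded from a static field at run time, so changing it only
    // requires redeploying this one assembly.
    public static readonly int Timeout = 30;

    static void Main()
    {
        Console.WriteLine(BufferSize); // emitted as the literal 4096
        Console.WriteLine(Timeout);    // compiled as a field load
    }
}
```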

A destructor (aka finalizer) considerably increases the cost of an object. The garbage collector ensures that the finalizer is called when the object is garbage collected, but to do so it has to track the object separately: such an object is put on the finalizer queue, waiting for the finalizer thread to get around to executing the finalizer. The object doesn't truly get destroyed until a later GC pass. You almost always have such a class implement IDisposable, so a program can perform the finalizer's duties early and not burden the runtime with doing it automatically; you call GC.SuppressFinalize() in your Dispose() method. Nothing is worse than a finalizer that doesn't do anything, so no.
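A sketch of that pattern (the class name and messages are mine): Dispose() does the cleanup eagerly and tells the GC the finalizer no longer needs to run.

```csharp
using System;

class ResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        if (disposed) return;
        disposed = true;
        // ... release unmanaged resources here ...
        GC.SuppressFinalize(this);  // take this object off the finalizer queue
        Console.WriteLine("disposed early, finalizer suppressed");
    }

    ~ResourceHolder()
    {
        // Only runs if the caller forgot to call Dispose().
        Console.WriteLine("finalizer ran");
    }
}

class Program
{
    static void Main()
    {
        using (var r = new ResourceHolder())
        {
        } // Dispose() runs here; the finalizer never will.
    }
}
```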

Value types exist in .NET for the express reason that they can be so much more efficient than reference types. Their values take a lot less memory than a reference type object and can be stored in CPU registers and on the CPU stack, memory locations that are highly optimized in processor designs. They burden the language design, since abstracting them as objects is a leaky abstraction that swallows CPU cycles undetectably; in particular, a struct is a difficult type with a knack for breaking programs when you try to mutate it. But they are important for avoiding the kind of performance hit that super-pure languages like Smalltalk suffer from. Smalltalk was a pioneering OOP language, in which every value is an object, that influenced a large number of subsequent OOP languages but rarely got used anywhere due to its poor performance, with no clear path for hardware engineers to make it as fast as languages, like C#, that don't abstract the processor design away. So that makes it E.
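Both halves of the claim, the efficiency and the mutation trap, fit in a short sketch (the Point types are hypothetical):

```csharp
using System;

struct PointStruct { public int X, Y; } // value type: data stored inline
class PointClass { public int X, Y; }   // reference type: header + heap object

class Program
{
    static void Main()
    {
        // One contiguous allocation holding 1000 (X, Y) pairs.
        var structs = new PointStruct[1000];

        // An array of references PLUS 1000 separate heap objects.
        var classes = new PointClass[1000];
        for (int i = 0; i < classes.Length; i++)
            classes[i] = new PointClass();

        // The struct mutation trap: this assignment copies the value,
        // so the mutation lands on the copy, not on the array element.
        PointStruct p = structs[0];
        p.X = 5;
        Console.WriteLine(structs[0].X); // 0 - the array element is unchanged
    }
}
```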


2 Comments

>>Smalltalk ... But rarely actually got used anywhere due to its poor performance without a clear path for the hardware engineers to make it as fast as languages that don't abstract the processor design away.<< Smalltalk actually got used very successfully in diverse industries from Wall Street investment banking to Container Shipping to Silicon Wafer production. IBM actually retrained thousands of consultants in Smalltalk. Smalltalk implementations were actually faster than Java language implementations -- but Java implementations were given away, free to use.
... and then of course 30 years of hardware improvements totally change the balance: Smalltalk has been fast enough for the past ten years and is still a far more productive environment.

The answer would most likely be E. In almost all cases, value types will improve performance. First, value type locals in a function live in stack space that is allocated when the call frame is set up, avoiding object allocation overhead. Second, when creating arrays of value types on the heap, you both avoid the per-element object allocation overhead and get data that tends to be more cache coherent.
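A rough way to observe the array point, as a micro-benchmark sketch (the Vec types, the element count, and Stopwatch timing are my illustration; absolute numbers will vary by machine and runtime):

```csharp
using System;
using System.Diagnostics;

struct Vec { public double X, Y; }
class VecRef { public double X, Y; }

class Program
{
    static void Main()
    {
        const int N = 1_000_000;
        double sum = 0;

        var sw = Stopwatch.StartNew();
        var a = new Vec[N];                // one allocation, contiguous data
        for (int i = 0; i < N; i++) sum += a[i].X;
        Console.WriteLine($"struct array: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        var b = new VecRef[N];
        for (int i = 0; i < N; i++)
            b[i] = new VecRef();           // N separate heap allocations
        for (int i = 0; i < N; i++)
            sum += b[i].X;                 // pointer chase per element
        Console.WriteLine($"class array: {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sum);            // use the sum so it isn't optimized away
    }
}
```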

It is true that there is potentially memory bandwidth overhead in the copying of value types, but modern memory bandwidth is so huge, the loss is usually grossly overwhelmed by the other savings. In addition, there is effectively no loss when dealing with 64-bit or smaller types.

3 Comments

I understand that value types can improve performance, but the way the question is worded suggests that you would only choose E as the answer if value types ALWAYS improved performance - in some cases reference types would provide better performance. Not to worry, just making sure I wasn't missing anything obvious.
Well, boxing, unboxing, no constants, and empty destructors are likely to make performance worse. The MSDN explicitly warns about performance costs for empty destructors: Destructors - "Empty destructors should not be used. When a class contains a destructor, an entry is created in the Finalize queue. When the destructor is called, the garbage collector is invoked to process the queue. If the destructor is empty, this just causes a needless loss of performance."
Right, hence, I said the answer would "likely" be E. As Brian pointed out, the others don't have any aspect that can improve performance - only E can potentially do so, so it's the most likely answer.
