I think the fundamental reason is that relatively few null checks are required to make a program "safe" against data corruption. If a program tries to use the contents of an array element or other storage location which is supposed to have been written with a valid reference but wasn't, the best-case outcome is for an exception to be thrown. Ideally, the exception will indicate exactly where the problem occurred, but what matters is that some kind of exception gets thrown before the null reference gets stored somewhere that could cause data corruption. Unless a method stores an object without first trying to use it in some fashion, an attempt to use an object will--in and of itself--constitute a "null check" of sorts.
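As a quick sketch of that point in C# (the Widget type here is hypothetical, purely for illustration), an ordinary member access already does the checking:

    class Widget
    {
        public void Render() { /* draw the widget */ }
    }

    class Renderer
    {
        static void Draw(Widget w)
        {
            // No explicit test needed: if w is null, this call throws
            // NullReferenceException right here, at the point of use,
            // before the null can propagate anywhere else.
            w.Render();
        }
    }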

If one wants to ensure that a null reference which appears where it shouldn't will cause a particular exception other than NullReferenceException, it will often be necessary to include null checks all over the place. On the other hand, merely ensuring that some exception will occur before a null reference can cause "damage" beyond any that has already been done will often require relatively few tests--testing would generally only be required in cases where an object would store a reference without trying to use it, and either the null reference would overwrite a valid one, or it would cause other code to misinterpret other aspects of program state. Such situations exist, but aren't all that common; most accidental null references will get caught very quickly whether one checks for them or not.
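A minimal sketch of that store-without-use case (the Cache type here is hypothetical): an explicit check at the point of storage is what keeps the eventual failure close to its cause:

    using System;
    using System.Collections.Generic;

    class Cache
    {
        private readonly Dictionary<string, object> items =
            new Dictionary<string, object>();

        public void Add(string key, object item)
        {
            // Storing a null here would cause no immediate harm, but the
            // resulting NullReferenceException would surface much later,
            // in whatever code retrieves the entry--far from the actual
            // bug. Failing fast keeps the exception near its cause.
            if (item == null)
                throw new ArgumentNullException(nameof(item));
            items[key] = item;
        }
    }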
