
  • Your results are highly suspicious. They seem to be saying that when one of the explicit null tests gives true (which short-circuits the remaining tests), the time jumps from 4ms to 36ms. This is just so counter-intuitive that it has to be nonsense; i.e. there has to be something wrong with your benchmarking methodology. Please post the code. Commented Sep 11, 2012 at 2:44
  • The most likely explanation is that the 4ms and 2ms are before the code was JIT compiled, and the 6ms is after. And that the 36ms is including the time taken to JIT compile the benchmark! Commented Sep 11, 2012 at 2:47
  • Stephen C is correct, my results were bad. After making a simple change, the average times are now 51ms for Method A, and 48ms for Method B. Exceptions are still slightly faster than the compound if. Given how many of these calls are made, I'm still going to go for an exception. Commented Sep 11, 2012 at 5:50
  • Exceptions being slightly faster is plausible if you consider what the JIT optimizer might or might not be able to do; see my answer. However, I'd still like to see your complete benchmark code to figure out what is going on. And I should note that your result only applies to your specific use-case. In many others, you will find that exceptions are significantly slower. Commented Sep 11, 2012 at 6:24
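The JIT pitfall described above (timing code before it has been compiled, or including compile time in the measurement) can be avoided by running a warm-up phase before the measured phase. The sketch below is a minimal, hypothetical version of the kind of benchmark being discussed: all names (`NullCheckBench`, `methodA`, `methodB`) are illustrative, not the original poster's code, and a real measurement should use a harness like JMH rather than hand-rolled timing.

```java
import java.util.function.IntSupplier;

// Hypothetical microbenchmark sketch: compare explicit null checks against
// catching NullPointerException, warming up the JIT before measuring.
public class NullCheckBench {

    // "Method A": explicit, short-circuiting null tests.
    static int methodA(String a, String b) {
        if (a == null || b == null) return -1;
        return a.length() + b.length();
    }

    // "Method B": no checks; rely on catching the NullPointerException.
    static int methodB(String a, String b) {
        try {
            return a.length() + b.length();
        } catch (NullPointerException e) {
            return -1;
        }
    }

    // Time `iters` calls of `body`, consuming the result so the JIT
    // cannot eliminate the loop as dead code.
    static long time(IntSupplier body, int iters) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iters; i++) sink += body.getAsInt();
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.print("");  // defeat dead-code elimination
        return elapsed;
    }

    public static void main(String[] args) {
        // Warm-up phase: give the JIT a chance to compile both methods.
        // Without this, the first timings include interpretation and
        // compilation overhead, exactly the distortion described above.
        time(() -> methodA("foo", null), 1_000_000);
        time(() -> methodB("foo", null), 1_000_000);

        // Measured phase: only these timings are reported.
        long a = time(() -> methodA("foo", null), 1_000_000);
        long b = time(() -> methodB("foo", null), 1_000_000);
        System.out.println("Method A: " + a + " ns, Method B: " + b + " ns");
    }
}
```

Even with warm-up, results like these only hold for the specific use-case benchmarked: here the exception path is exercised on every call, whereas in most real code exceptions are rare, and constructing an exception (filling in the stack trace) is usually far more expensive than a branch.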