There are many possible pitfalls when writing micro-benchmarks in Java.

First: You have to account for all sorts of events that take a more or less random amount of time: garbage collection, caching effects (of the OS for files and of the CPU for memory), I/O, etc.

Second: You cannot trust the accuracy of the measured times for very short intervals.

Third: The JVM optimizes your code while it executes, so repeated runs within the same JVM instance become faster and faster.
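To see this third effect, a minimal sketch like the following (the work() method and loop bounds are made up for illustration) typically shows the first few timings being noticeably slower than later ones, because the JIT compiles the hot method as the program runs:

```java
public class JitWarmupDemo {
    // Arbitrary workload; the method and loop bound are illustrative only.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 10; run++) {
            long start = System.nanoTime();
            long result = work();
            long elapsed = System.nanoTime() - start;
            // The later runs are usually much faster than the first ones.
            System.out.printf("run %2d: %8d ns (result %d)%n", run, elapsed, result);
        }
    }
}
```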

My recommendations:

- Make your benchmark run for several seconds; that is much more reliable than a runtime measured in milliseconds.
- Warm up the JVM first, i.e. run the benchmark at least once without measuring, so the JVM can apply its optimizations.
- Run the benchmark multiple times (say, 5 times) and take the median value.
- Run every micro-benchmark in a fresh JVM instance (start a new java process for each benchmark), otherwise optimization effects from one benchmark can influence tests that run later.
- Don't execute code that wasn't also executed in the warm-up phase, as this could trigger class loading and recompilation.
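A minimal hand-rolled harness along these lines might look like the sketch below. The task() workload, the warm-up and run counts, and the timing granularity are assumptions chosen for illustration, not a general-purpose benchmarking framework:

```java
import java.util.Arrays;

public class SimpleBenchmark {
    // The workload under test; replace with the code you actually want to measure.
    static long task() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    // Time a single execution of the workload in nanoseconds.
    static long timeOnce() {
        long start = System.nanoTime();
        task();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Warm-up: exercise the same code path that will be measured, without recording it.
        for (int i = 0; i < 5; i++) {
            timeOnce();
        }

        // Measured runs: repeat a few times and take the median.
        int runs = 5;
        long[] times = new long[runs];
        for (int i = 0; i < runs; i++) {
            times[i] = timeOnce();
        }
        Arrays.sort(times);
        long median = times[runs / 2];
        System.out.println("median: " + median / 1_000_000 + " ms");
    }
}
```

To follow the last two recommendations, each such benchmark would be started in its own java process (for example `java SimpleBenchmark`), and any code the measured runs need would also be exercised during the warm-up loop.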
