In MD simulation benchmarking, you often see performance expressed in terms of ns/day and hours/ns.
How do I translate these values into the amount of real (wall-clock) time a simulation takes? In other words, what would be a reasonable ns/day value for running standard MD simulations on GPUs?
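My current understanding of the arithmetic is sketched below (function names are my own, just for illustration): ns/day is a throughput, so dividing the target simulation length by it gives the wall-clock time, and hours/ns is simply its reciprocal scaled by 24.

```python
def wall_clock_days(sim_length_ns: float, throughput_ns_per_day: float) -> float:
    """Real days needed to simulate sim_length_ns at the given throughput."""
    return sim_length_ns / throughput_ns_per_day

def hours_per_ns(throughput_ns_per_day: float) -> float:
    """Convert a ns/day benchmark figure to its reciprocal, hours/ns."""
    return 24.0 / throughput_ns_per_day

# For example, a 500 ns run at 100 ns/day would need 5 days of wall time,
# and 100 ns/day corresponds to 0.24 hours/ns.
print(wall_clock_days(500, 100))  # 5.0
print(hours_per_ns(100))          # 0.24
```

Is this the right way to think about it, and if so, what ns/day figures are typical on current GPUs?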