The code below takes ~9 s on Unix, as reported by the `time` command.
    #include <stdio.h>

    int main() {
        double u = 0;
        double v = 0;
        double w = 0;
        int i;
        for (i = 0; i < 1000000000; ++i) {
            v *= w;
            u += v;
        }
        printf("%lf\n", u);
        return 0;
    }

I don't understand why the execution time almost doubles when I change `v *= w;` to `v *= u;`.
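For reference, the slower variant differs only in the loop body:

    for (i = 0; i < 1000000000; ++i) {
        v *= u;  /* was: v *= w; */
        u += v;
    }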
`w` is always zero. The compiler is probably smart enough not to run that loop at all in the first case, since `v` will then also always be zero, and thus `u` too. (That might be harder for it to figure out in the second case, because of the interdependency between `u` and `v`.)
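A minimal sketch of one way to test that hypothesis: declare `w` as `volatile` (my addition, not part of the original program), so the compiler can no longer assume it stays zero and cannot prove the loop is dead. If both variants then take roughly the same time, dead-code elimination was likely the cause of the difference.

    #include <stdio.h>

    int main() {
        double u = 0;
        double v = 0;
        volatile double w = 0;  /* volatile: the compiler must reload w on
                                   every iteration and cannot optimize the
                                   loop away on the assumption that w == 0 */
        int i;
        for (i = 0; i < 1000000000; ++i) {
            v *= w;
            u += v;
        }
        printf("%lf\n", u);
        return 0;
    }

Alternatively, inspecting the generated assembly (e.g., with `gcc -O2 -S`) would show directly whether the loop survives optimization in each variant.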