The value returned by clock() is of type clock_t (an implementation-defined arithmetic type). It represents "implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation" (N1570 7.27.2.1).
Given a clock_t value, you can determine the number of seconds it represents by dividing it by CLOCKS_PER_SEC, an implementation-defined macro defined in <time.h>. POSIX requires CLOCKS_PER_SEC to be one million, but it may have different values on other systems.
Note in particular that the value of CLOCKS_PER_SEC does not necessarily correspond to the actual precision of the clock() function.
Depending on the implementation, two successive calls to clock() might return the same value if the amount of CPU time consumed is less than the precision of the clock() function. On one system I tested, the resolution of the clock() function is 0.01 second; the CPU can execute a lot of instructions in that time.
Here's a test program:
#include <stdio.h>
#include <time.h>
#include <limits.h>

int main(void)
{
    long count = 0;
    clock_t c0 = clock(), c1;

    /* Spin until clock() advances, counting iterations. */
    while ((c1 = clock()) == c0) {
        count++;
    }

    printf("c0 = %ld, c1 = %ld, count = %ld\n",
           (long)c0, (long)c1, count);

    printf("clock_t is a %d-bit ", (int)sizeof (clock_t) * CHAR_BIT);
    if ((clock_t)-1 > (clock_t)0) {
        puts("unsigned integer type");
    }
    else if ((clock_t)1 / 2 == 0) {
        puts("signed integer type");
    }
    else {
        puts("floating-point type");
    }

    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}
On one system (Linux x86_64), the output is:
c0 = 831, c1 = 833, count = 0
clock_t is a 64-bit signed integer type
CLOCKS_PER_SEC = 1000000
Apparently on that system the clock() function's actual resolution is one or two microseconds, and two successive calls to clock() return distinct values.
On another system (Solaris SPARC), the output is:
c0 = 0, c1 = 10000, count = 10447
clock_t is a 32-bit signed integer type
CLOCKS_PER_SEC = 1000000
On that system, the resolution of the clock() function is 0.01 second (10,000 microseconds), and the value returned by clock() did not change for several thousand iterations.
There's (at least) one more thing to watch out for. On a system where clock_t is 32 bits, with CLOCKS_PER_SEC == 1000000, the value can wrap around after about 72 minutes of CPU time, which could be significant for long-running programs. Consult your system's documentation for the details.
A couple of additional notes. clock() returns CPU time in fairly coarse units (typically 1–100 ms increments), not CPU clock cycles; cycle-level measurement would need something like rdtsc, not clock(). Also, a clock_t value cannot portably be printed with %d: clock_t is an implementation-defined arithmetic type, so convert it to a known type first, for example

printf("%ld\n", (long)(end - start));