
When I run my code below I get a value of 0; a few times I did get a value for intAddition. I've tried many suggestions I found online but have yet to succeed. My classmate showed me how he did his, and it was very similar to mine. He was getting small values, 1 to 3, from his program.

Thanks for the help!

#include <iostream>
#include <time.h>
#include <stdio.h>

clock_t start, end;

void intAddition(int a, int b){
    start = clock();
    a + b;
    end = clock();
    printf("CPU cycles to execute integer addition operation: %d\n", end-start);
}

void intMult(int a, int b){
    start = clock();
    a * b;
    end = clock();
    printf("CPU cycles to execute integer multiplication operation: %d\n", end-start);
}

void floatAddition(float a, float b){
    start = clock();
    a + b;
    end = clock();
    printf("CPU cycles to execute float addition operation: %d\n", end-start);
}

void floatMult(float a, float b){
    start = clock();
    a * b;
    end = clock();
    printf("CPU cycles to execute float multiplication operation: %d\n", end-start);
}

int main() {
    int a,b;
    float c,d;

    a = 3, b = 6;
    c = 3.7534, d = 6.85464;

    intAddition(a,b);
    intMult(a,b);
    floatAddition(c,d);
    floatMult(c,d);

    return 0;
}
  • clock returns the CPU time in rather coarse measures (1-100ms increments, typically). Not clock-cycles of the CPU. So, yes, I expect this will be zero every time. Commented Mar 4, 2015 at 0:28
  • You should loop the operation to be timed a large number of times. But note that if you just write a+b, the compiler will probably optimize it out because it has no effect. Commented Mar 4, 2015 at 0:32
  • I just tried looping the operations, but I'm still getting 0. Commented Mar 4, 2015 at 2:50
  • @Ugluk How many iterations? Start from 1 mln. at least. To get true number of CPU cycles, you need something like rdtsc, not clock. Commented Mar 4, 2015 at 6:30
  • Don't assume that a clock_t value can be printed with %d. clock_t is an implementation-defined arithmetic type. You can convert it to a known type, for example printf("%ld\n", (long)(end - start)); Commented Mar 4, 2015 at 17:02

2 Answers


The value returned by clock() is of type clock_t (an implementation-defined arithmetic type). It represents "implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation" (N1570 7.27.2.1).

Given a clock_t value, you can determine the number of seconds it represents by dividing it by CLOCKS_PER_SEC, an implementation-defined macro defined in <time.h>. POSIX requires CLOCKS_PER_SEC to be one million, but it may have different values on different systems.

Note in particular that the value of CLOCKS_PER_SEC does not necessarily correspond to the actual precision of the clock() function.

Depending on the implementation, two successive calls to clock() might return the same value if the amount of CPU time consumed is less than the precision of the clock() function. On one system I tested, the resolution of the clock() function is 0.01 second; the CPU can execute a lot of instructions in that time.

Here's a test program:

#include <stdio.h>
#include <time.h>
#include <limits.h>

int main(void) {
    long count = 0;
    clock_t c0 = clock(), c1;
    while ((c1 = clock()) == c0) {
        count ++;
    }
    printf("c0 = %ld, c1 = %ld, count = %ld\n",
           (long)c0, (long)c1, count);
    printf("clock_t is a %d-bit ", (int)sizeof (clock_t) * CHAR_BIT);
    if ((clock_t)-1 > (clock_t)0) {
        puts("unsigned integer type");
    }
    else if ((clock_t)1 / 2 == 0) {
        puts("signed integer type");
    }
    else {
        puts("floating-point type");
    }
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}

On one system (Linux x86_64), the output is:

c0 = 831, c1 = 833, count = 0
clock_t is a 64-bit signed integer type
CLOCKS_PER_SEC = 1000000

Apparently on that system the clock() function's actual resolution is one or two microseconds, and two successive calls to clock() return distinct values.

On another system (Solaris SPARC), the output is:

c0 = 0, c1 = 10000, count = 10447
clock_t is a 32-bit signed integer type
CLOCKS_PER_SEC = 1000000

On that system, the resolution of the clock() function is 0.01 second (10,000 microseconds), and the value returned by clock() did not change for several thousand iterations.

There's (at least) one more thing to watch out for. On a system where clock_t is an unsigned 32-bit type with CLOCKS_PER_SEC == 1000000, the value can wrap around after about 72 minutes of CPU time (a signed 32-bit clock_t overflows after about 36 minutes), which could be significant for long-running programs. Consult your system's documentation for the details.



On some compilers clock() measures time in millisecond. Also the compiler is too smart for simple tests, it may skip everything because the result of those operations is not being used.

For example, this loop will probably take less than 1 millisecond (unless debugger is on or optimization is off)

int R = 1;
int a = 2;
int b = 3;

start = clock();
for( int i = 0; i < 10000000; i++ )
    R = a * b;
printf( "time passed: %ld\n", clock() - start );

R is always the same number (6), and R is not even being used afterwards. The compiler may skip all the calculations. You have to print R at the end, or do something else with it, to coax the compiler into cooperating with the test.

1 Comment

It's not necessarily in milliseconds. The conversion factor is CLOCKS_PER_SEC (which is required to be one million on POSIX systems, but may be different on non-POSIX systems). And your printf assumes (as the code in the question does) that clock_t is compatible with the format specifier, which is not guaranteed. See my answer for more information.
