
I have written the following code to measure the time taken to sort data. I am getting weird results, such as a negative time in some cases, and I am not getting consistent results for the same data set (I understand they won't be exactly the same). Please let me know what is wrong, or how I can measure the time properly.

#include <stdio.h>
#include <sys/time.h>

void bubble(int a[], int n)
{
    int i, j, flag, temp;
    for (i = 0; i < n - 1; i++) {
        flag = 0;
        for (j = 0; j < n - i - 1; j++) {
            if (a[j] > a[j + 1]) {
                flag = 1;
                a[j] = a[j] + a[j + 1];
                a[j + 1] = a[j] - a[j + 1];
                a[j] = a[j] - a[j + 1];
            }
        }
        if (flag == 0)
            break;
    }
}

int main()
{
    int n, a[100000], i;
    printf("Enter the size of array:");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
        a[i] = rand() % 100000;  /* average case */
        //a[i] = i;              /* best case */
        //a[i] = n - 1 - i;      /* worst case */
    /*
    printf("The UNsorted array is:\n");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n\n");
    */
    int st, et;
    struct timezone tz;
    struct timeval tv;
    gettimeofday(&tv, NULL);
    st = tv.tv_usec;
    bubble(a, n);
    gettimeofday(&tv, NULL);
    et = tv.tv_usec;
    printf("Sorting time: %d micro seconds\n", et - st);
    /*
    printf("\nThe sorted array is:\n");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    */
    return 0;
}
2 Comments
  • What do you think happens to the microseconds when they reach 1000000? Commented Aug 8, 2016 at 13:58
  • On what system? Please edit the question. Commented Aug 8, 2016 at 14:02

1 Answer


The struct timeval populated by gettimeofday is defined as follows:

struct timeval {
    time_t      tv_sec;   /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};

The tv_sec and tv_usec fields together contain the seconds and microseconds elapsed since the epoch. The tv_usec field holds only the fractional-second part, i.e. a value from 0 to 999999.

You need to subtract the seconds and microseconds.

struct timeval st, et;

gettimeofday(&st, NULL);
bubble(a, n);
gettimeofday(&et, NULL);

int elapsed = ((et.tv_sec - st.tv_sec) * 1000000) + (et.tv_usec - st.tv_usec);
printf("Sorting time: %d micro seconds\n", elapsed);

If the total run time is very short, you can perform multiple runs and average them out:

struct timeval st, et;
int i, num_runs = 5;

gettimeofday(&st, NULL);
for (i = 0; i < num_runs; i++) {
    bubble(a, n);
}
gettimeofday(&et, NULL);

int elapsed = (((et.tv_sec - st.tv_sec) * 1000000) + (et.tv_usec - st.tv_usec)) / num_runs;
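The subtraction above can be factored into a small helper. This is only a sketch, not part of the original answer; elapsed_usec is a hypothetical name:

```c
#include <sys/time.h>

/* Hypothetical helper (not from the answer above): total elapsed
   microseconds between two gettimeofday() samples. Combining the
   tv_sec and tv_usec fields means the wrap of tv_usec at 1000000
   can no longer produce a negative result. */
static long elapsed_usec(struct timeval st, struct timeval et)
{
    return (et.tv_sec - st.tv_sec) * 1000000L
         + (et.tv_usec - st.tv_usec);
}
```

Note that (et.tv_usec - st.tv_usec) may itself be negative; that is fine, because the seconds term compensates for it.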

4 Comments

This works well for the average-case and worst-case analysis, but I am still having trouble measuring very small times. When I run the best case of the bubble sort, I get inconsistent output: the time varies between 0 and 1000 microseconds for the same data set. I am testing with data ranging from 10000 to 90000 elements.
@NaitikParekh See my edit. Perform multiple runs and average them out.
Jitter (time variation) is due to other things happening in the system that push your process off the CPU: interrupts, I/O, and processes with higher priority. If you call printf() or access files in your algorithm, move them after the second gettimeofday(). Regarding process priority, try running your program with sudo nice -n -19, and set the affinity of the process to a specific CPU (using sched_setaffinity, for example, or the numactl tool). That will prevent cache thrashing as much as possible.
If you want to measure the best case without any system interruption, place your code in a kernel module inside a critical section: it will get the CPU undivided for the whole execution, with no interruption whatsoever. Expect consistent time measurements with this method, but also limitations, since the C standard library is not available there. There is a printf in the kernel (printk, actually).
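To illustrate the affinity suggestion from the comments: on Linux with glibc, a process can pin itself to one CPU with sched_setaffinity. This is a hedged sketch under that assumption; pin_to_cpu is a made-up helper name, and the call requires _GNU_SOURCE and is Linux-specific:

```c
#define _GNU_SOURCE
#include <sched.h>

/* Hypothetical helper (illustrating the comment above, not part of the
   answer): pin the calling process to a single CPU so the timed code is
   not migrated between cores, which reduces jitter and cache thrashing.
   Returns 0 on success, -1 on failure (errno is set). */
static int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set);  /* pid 0 = calling process */
}
```

Call pin_to_cpu(0) once before the first gettimeofday(); combined with a raised priority (e.g. sudo nice -n -19), this keeps the benchmark on one core for its whole run.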
