CIS 307: Measuring Time

Measuring time on a computer is nontrivial. Here are some of the things we may want to do with time: determine the current time, measure the duration of an activity (a time interval), and suspend the execution of a program for a given interval.

In all these cases we have a problem of Precision and of Accuracy. By Precision we mean how finely we can specify time: in seconds, milliseconds, microseconds, or even nanoseconds. By Accuracy we mean how closely the measured time represents reality. For example, we can have a clock that is precise down to the nanosecond but that gives a time that is off by 0.1 seconds, i.e. its accuracy is only 0.1 seconds. My wrist-watch is precise down to the second, but it is very inaccurate: it is usually about ten minutes off. Since the measurement of durations, i.e. of time intervals, is affected by intervening events (interrupts, context switches, availability of needed buffers, etc.), it is often inaccurate. We usually need to determine durations as the average of a number of repeated experiments.

Another problem is that "the current time" may mean a number of different things. It could be standard Greenwich time. Or it could be the time in our time zone. Or it could be the time, measured in some unit (called a tick), elapsed since a standard initial time, called the epoch, usually set at 00:00:00 on 1 Jan 1970 at Greenwich.
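
For example, the standard C library function time returns the current time as a count of seconds since this epoch, and ctime converts that count into a date string in the local time zone. The following sketch, which uses only the standard library (not the functions introduced below), prints both representations:

   /* epoch.c -- print the current time as seconds since the epoch */
   /* and as a local-time date string, using the standard library. */

   #include <stdio.h>
   #include <time.h>

   int main(void)
   {
     time_t now = time(NULL);  /* seconds since 00:00:00 on 1 Jan 1970 */

     printf("Seconds since the epoch: %ld\n", (long) now);
     printf("Local time: %s", ctime(&now));  /* ctime's string ends with '\n' */
     return 0;
   }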

We can use the time command to determine the time it takes to execute a command or program, but the precision of this command is low: depending on the version, tenths or hundredths of a second. We can request the system to suspend the execution of a program with the sleep command. Unfortunately the interval can only be specified in seconds.

We will consider some data types and commands that we can use for dealing with time. I highly recommend that you use these tools to measure accurately and precisely the time it takes to execute various system services and some of your code.

Here are some useful functions (getclock and ctime_r) and a data type (timespec) available in Unix:

   #include <sys/timers.h>

   int getclock(int clktyp, struct timespec *tp);
          clktyp: specifies the type of clock being used.
                  We use for it the value TIMEOFDAY.
          tp:     a data structure where getclock will save the
                  current time.

   struct timespec {
          time_t tv_sec;    /* number of seconds */
          long   tv_nsec;   /* number of nanoseconds */
   };

   int ctime_r(const time_t *timer, char *buffer, int len);
          timer:  number of seconds since the epoch (1 Jan 1970 at 00:00:00)
          buffer: array where ctime_r will save the representation of
                  timer as a date string
          len:    size of buffer.
          Be sure to compile programs containing ctime_r using the
          library libc_r.a, i.e. use the compilation command modifier
          -lc_r.

We can use these functions to determine the time it takes to execute a Unix command. This is done in the following program. Assuming that the corresponding executable has been placed in the file timespec, one can determine the time required to execute a shell command such as who with the call
    % timespec who
You may want to compare the values you get with timespec and time. Here is the timespec program:

   /* timespec.c -- compile with "cc timespec.c -o timespec -lc_r"*/

   #include  <sys/types.h>
   #include  <sys/timers.h>
   #include  <stdio.h>
   #include  <stdlib.h>

   #define TIMLEN 60

   void timespec2string(struct timespec *ts, char buffer[], int len);

   int main(int argc, char *argv[])
   {
     char buffer[TIMLEN] = "";
     struct timespec tspec1, tspec2;

     if (argc < 2) {
       printf("Write:  timespec shellcommand\n");
       exit(1);}
     getclock((timer_t)TIMEOFDAY, &tspec1);
     if (system(argv[1]) != 0) {
       perror("system");
       exit(1);}
     getclock((timer_t)TIMEOFDAY, &tspec2);
     timespec2string(&tspec1, buffer, TIMLEN);
     printf("Before: %s\n", buffer);
     timespec2string(&tspec2, buffer, TIMLEN);
     printf("After : %s\n", buffer);
     return 0;
   }

   void timespec2string(struct timespec *ts, char buffer[], int len)
   /* It reads the time from ts and puts it in buffer (of size len) */
   /* as a string */
   {
     ctime_r(&(ts->tv_sec), buffer, len);
     /* ctime_r terminates the date string with a newline. We overwrite it. */
     /* We report time in microseconds. That is the precision on our system.*/
     sprintf(&buffer[24], " and %6ld microseconds", (ts->tv_nsec)/1000);
    }

Notice that we have called getclock before and after the fragment we want to time, without printing any information in between, so that the measurement does not include extraneous printing activity.
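
The program above prints the two clock readings but leaves the subtraction to the reader. Here is a minimal sketch of how one might compute the elapsed interval from the two timespec values; the helper name elapsed_seconds is ours, not part of the program above:

   /* Sketch of a helper (not in the program above): returns the   */
   /* interval t2 - t1 in seconds, as a double, borrowing a second */
   /* from tv_sec when the nanosecond difference is negative.      */
   double elapsed_seconds(struct timespec *t1, struct timespec *t2)
   {
     long sec  = t2->tv_sec  - t1->tv_sec;
     long nsec = t2->tv_nsec - t1->tv_nsec;

     if (nsec < 0) {               /* borrow one second */
       sec  -= 1;
       nsec += 1000000000L;
     }
     return (double) sec + (double) nsec / 1.0e9;
   }
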
When doing measurements it is wise to repeat them a number of times and compute their average and standard deviation; this way we can eliminate some of the effects of random errors. A sketch of such a procedure follows.
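
As a sketch only (not a complete program), the following function could be added to timespec.c to run a command a number of times and report the mean and standard deviation of the measured durations. It assumes the headers already included in timespec.c plus <math.h> (so link with -lm as well as -lc_r), and it uses the elapsed_seconds helper sketched above; the names measure and NTRIALS are ours.

   #define NTRIALS 10     /* number of repeated measurements (our choice) */

   /* Sketch: run the command NTRIALS times, measure each run with */
   /* getclock, and report the mean and standard deviation of the  */
   /* measured durations in seconds.                               */
   void measure(char *command)
   {
     struct timespec t1, t2;
     double sample[NTRIALS], sum = 0.0, sumsq = 0.0, mean, stdev;
     int i;

     for (i = 0; i < NTRIALS; i++) {
       getclock((timer_t)TIMEOFDAY, &t1);
       system(command);
       getclock((timer_t)TIMEOFDAY, &t2);
       sample[i] = elapsed_seconds(&t1, &t2);
       sum += sample[i];
     }
     mean = sum / NTRIALS;
     for (i = 0; i < NTRIALS; i++)
       sumsq += (sample[i] - mean) * (sample[i] - mean);
     stdev = sqrt(sumsq / NTRIALS);
     printf("mean = %f seconds, standard deviation = %f seconds\n", mean, stdev);
   }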

ingargiola.cis.temple.edu