# Is gettimeofday() guaranteed to be of microsecond resolution?

I am porting a game that was originally written for the Win32 API to Linux (well, porting the OS X port of the Win32 port to Linux).

I have implemented QueryPerformanceCounter by giving the uSeconds since the process startup:

BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    /* currentTimeVal is scratch; startTimeVal is a global captured
       once at process startup. */
    gettimeofday(&currentTimeVal, NULL);
    performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);
    performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);

    return true;
}


This, paired with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, giving me a 64-bit variable that contains uSeconds since the program's startup.
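For completeness, the matching QueryPerformanceFrequency() shim is trivial under this scheme. This is only a sketch: the BOOL and LARGE_INTEGER typedefs below are hypothetical stand-ins for whatever definitions the port already uses.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the Win32 types; the real port
   presumably defines these elsewhere. */
typedef int BOOL;
typedef union { int64_t QuadPart; } LARGE_INTEGER;

/* Since the counter above is expressed in microseconds, the
   frequency is a constant one million ticks per second. */
BOOL QueryPerformanceFrequency(LARGE_INTEGER* frequency)
{
    frequency->QuadPart = 1000 * 1000;
    return true;
}
```

Returning a constant frequency keeps the two shims consistent: callers computing `counts / frequency` get seconds, exactly as on Windows.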

So is this portable? I don't want to discover that it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to anything other than Linux, however.

2019-05-07 00:13:13

You might want the Linux FAQ for

2019-12-05 01:23:38

Reading the RDTSC is not reliable in SMP systems, since each CPU maintains its own counter and each counter is not guaranteed to be synchronized with respect to another CPU.

I might suggest trying clock_gettime(CLOCK_REALTIME). The POSIX manual indicates that this should be implemented on all compliant systems. It can provide a nanosecond count, but you will probably want to check clock_getres(CLOCK_REALTIME) on your system to see what the actual resolution is.

2019-12-05 01:23:19

> So it says microseconds explicitly, but says the resolution of the system clock is undefined. I suppose resolution in this context means the smallest amount by which it will ever be incremented?

The data structure is defined as having microseconds as a unit of measurement, but that does not mean that the clock or operating system is actually capable of measuring that finely.

Like other people have suggested, gettimeofday() is bad because setting the time can cause clock skew and throw off your calculation. clock_gettime(CLOCK_MONOTONIC) is what you want, and clock_getres() will tell you the precision of your clock.

2019-05-09 08:17:43

Wine is actually using gettimeofday() to implement QueryPerformanceCounter(), and it is known to make many Windows games work on Linux and Mac.

2019-05-09 06:01:46

From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc.

2019-05-08 20:41:36

Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (i.e., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 µs. It can jump forward and backward in time, consequently, based on the processes running on your system. This effectively makes the answer to your question no.

You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues due to things like multi-core systems and external clock settings.

Also, look into the clock_getres() function.

2019-05-08 20:15:35

The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high-resolution timers that measure microseconds. Other hardware architectures fall back to the system's timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate.

I obtained this answer from High Resolution Time Measurement and Timers, Part I.

2019-05-08 19:45:45

High Resolution, Low Overhead Timing for Intel Processors

If you're on Intel hardware, here's how to read the CPU's real-time instruction counter. It will tell you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement.

Note that this is the number of CPU cycles. On Linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds. Converting this to a double is quite handy.

When I run this on my box, I get

11867927879484732
11867927879692217
it took this long to call printf: 207485


Here's the Intel developer's guide that gives lots of details.

#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc() {
    uint32_t lo, hi;
    __asm__ __volatile__ (
        "xorl %%eax, %%eax\n"
        "cpuid\n"              /* serialize: flush out-of-order execution */
        "rdtsc\n"              /* read time-stamp counter into edx:eax */
        : "=a" (lo), "=d" (hi)
        :
        : "%ebx", "%ecx");
    return (uint64_t)hi << 32 | lo;
}

int main(void)
{
    unsigned long long x;
    unsigned long long y;
    x = rdtsc();
    printf("%llu\n", x);
    y = rdtsc();
    printf("%llu\n", y);
    printf("it took this long to call printf: %llu\n", y - x);
    return 0;
}

2019-05-07 17:48:45

@Bernard:

> I must admit, most of your example went right over my head. It does compile, and seems to work, though. Is this safe for SMP systems or SpeedStep?

That's a good question... I think the code's ok. From a practical standpoint, we use it in my company every day, and we run on a pretty wide array of boxes, everything from 2 to 8 cores. Of course, YMMV, etc., but it seems to be a reliable and low-overhead (because it doesn't make a context switch into system-space) method of timing.

Generally how it works is:

• declare the block of code to be assembler (and volatile, so the optimizer will leave it alone).
• execute the CPUID instruction. In addition to getting some CPU information (which we don't do anything with), it synchronizes the CPU's execution buffer so that the timings aren't affected by out-of-order execution.
• execute the rdtsc (read timestamp) instruction. This fetches the number of machine cycles executed since the processor was reset. This is a 64-bit value, so with current CPU speeds it will wrap around every 194 years or so. Interestingly, the original Pentium reference notes that it wraps around every 5800 years or so.
• the last couple of lines store the values from the registers into the variables hi and lo, and put that into the 64-bit return value.

Specific notes:

• out-of-order execution can cause incorrect results, so we execute the "cpuid" instruction, which in addition to giving you some information about the cpu also synchronizes any out-of-order instruction execution.

• most OSes synchronize the counters on the CPUs when they boot, so the answer is good to within a couple of nanoseconds.

• the hibernation comment is probably true, but in practice you probably don't care about timings across hibernation boundaries.

• regarding SpeedStep: newer Intel CPUs compensate for the speed changes and return an adjusted count. I did a quick scan over some of the boxes on our network and found only one box that didn't have it: a Pentium 3 running some old database server. (These are Linux boxes, so I checked with: grep constant_tsc /proc/cpuinfo)

• I'm not sure about AMD CPUs; we're primarily an Intel shop, although I know some of our low-level systems gurus did an AMD evaluation.

Hope this satisfies your curiosity; it's an interesting and (IMHO) under-studied area of programming. You know when Jeff and Joel were talking about whether a programmer should know C? I was shouting at them, "hey, forget that high-level C stuff... assembler is what you should learn if you want to know what the computer is doing!"

2019-05-07 16:59:32