High Precision Timer in C
category: general [glöplog]
I've never tried this before, so I'm not sure what I'm doing. I usually rely on something like Allegro or SDL to handle this for me. I'm trying to write a good timer/timed loop so I can control execution speed (60 fps). Here's what I have so far.
The example just spits text out for (hopefully) 300 ms. What do you think? Is there a better way to do this? Thanks.
Code:
#include <stdio.h>
#include <sys/timeb.h>

/* Milliseconds built from _ftime(); the seconds are masked to 20 bits
   so the multiply doesn't overflow a 32-bit int. */
int GetMilliCount(){
  struct _timeb tb;
  _ftime( &tb );
  int nCount = tb.millitm + (tb.time & 0xfffff) * 1000;
  return nCount;
}

/* Milliseconds elapsed since a value returned by GetMilliCount(),
   corrected for the 20-bit wrap above. */
int GetMilliSpan( int nTimeStart ){
  int nSpan = GetMilliCount() - nTimeStart;
  if ( nSpan < 0 )
    nSpan += 0x100000 * 1000;
  return nSpan;
}

int main(){
  int startTime = GetMilliCount();    /* use the same clock for start and span */
  int x = GetMilliSpan( startTime );  /* ~0 right after the start */
  int y = 0;
  while(1){
    y = GetMilliSpan( startTime );
    if( y - x >= 100 && y - x <= 400 ){  /* print during a ~300 ms window */
      printf("%i\n", y - x);             /* elapsed milliseconds */
    }
    if( y - x > 400 )                    /* done once the window has passed */
      break;
  }
  return 0;
}
You should use gettimeofday(), but other than that... there are, to my knowledge, no better portable timers available.
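Something like this, roughly (untested sketch; the function name is just made up for the example):
Code:
#include <stdio.h>
#include <sys/time.h>

/* milliseconds since an arbitrary starting point, via gettimeofday() */
static long long millis_now(void)
{
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(void)
{
  long long start = millis_now();
  while (millis_now() - start <= 300)   /* spit output for ~300 ms */
    printf("%lld\n", millis_now() - start);
  return 0;
}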
reading the cycle counter from the CPU used to be popular on Intel platforms but now with multiple cores it's not so clean cut...
if you are on Mac, you can use mach_absolute_time().
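Roughly like this (untested sketch; the wrapper name is made up):
Code:
#include <stdint.h>
#include <mach/mach_time.h>

/* seconds since an arbitrary starting point on OS X */
static double seconds_now(void)
{
  static mach_timebase_info_data_t tb;
  if (tb.denom == 0)
    mach_timebase_info(&tb);   /* numer/denom converts ticks to nanoseconds */
  return (double)mach_absolute_time() * tb.numer / tb.denom / 1e9;
}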
I'm thinking about looking into the winapi if this doesn't work well.
in Windows the high-resolution timer (look for QueryPerformanceCounter) does the job nicely.
On Win32, you should use QueryPerformanceCounter/QueryPerformanceFrequency (or GetTickCount if you can live with the limited precision). On Unixish systems, gettimeofday is the weapon of choice.
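i.e. something along these lines (untested sketch; the wrapper name is made up):
Code:
#include <windows.h>

/* seconds since an arbitrary starting point on Win32 */
static double qpc_seconds(void)
{
  LARGE_INTEGER freq, now;
  QueryPerformanceFrequency(&freq);   /* counts per second */
  QueryPerformanceCounter(&now);      /* current count */
  return (double)now.QuadPart / (double)freq.QuadPart;
}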
Aren't you better off using the graphics API's VSync for accurate frame timing and something like Windows' GetTickCount() to control the speed of what's fed to the rendering engine?
I hereby donate to you my timer class:
Code:
#ifndef VSX_TIMER_H
#define VSX_TIMER_H

#include <time.h>        // clock(), CLOCKS_PER_SEC
#ifdef _WIN32
#include <windows.h>     // LARGE_INTEGER, QueryPerformanceCounter/Frequency
#else
#include <sys/time.h>    // gettimeofday()
#endif

class vsx_timer {
  double startt;
  double lastt;
  double dtimet;
#ifdef _WIN32
  LONGLONG init_time;
#endif
public:
  void start() {
    startt = atime();
    lastt = startt;
  }

  // delta time: seconds elapsed since the previous dtime() call
  double dtime() {
    double at = atime();
    dtimet = at - lastt;
    lastt = at;
    return dtimet;
  }

  // normal time: clock()-based, low resolution
  double ntime() {
    return (double)clock() / (double)CLOCKS_PER_SEC;
  }

  // accurate time: performance counter on Win32, gettimeofday() elsewhere
  double atime() {
#ifdef _WIN32
    LARGE_INTEGER freq, time;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&time);
    return (double)(time.QuadPart - init_time) / (double)freq.QuadPart;
#else
    struct timeval now;
    gettimeofday(&now, 0);
    return (double)now.tv_sec + 0.000001 * (double)now.tv_usec;
#endif
  }

  vsx_timer() {
#ifdef _WIN32
    LARGE_INTEGER time;
    QueryPerformanceCounter(&time);
    init_time = time.QuadPart;
#endif
  }
};

#endif
Run dtime() once per frame to get the delta time (in seconds) since the last call.
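For instance, a fixed 60-steps-per-second loop on top of it could look like this (sketch; the header name and the 600-step cutoff are just for the example):
Code:
#include <cstdio>
#include "vsx_timer.h"

int main() {
  vsx_timer timer;
  timer.start();
  double accum = 0.0;
  const double step = 1.0 / 60.0;        // 60 logic steps per second
  for (int steps = 0; steps < 600; ) {   // ~10 seconds worth of steps
    accum += timer.dtime();              // seconds since the previous call
    while (accum >= step) {              // catch up in fixed-size steps
      printf("step %d\n", steps++);      // your update/render goes here
      accum -= step;
    }
  }
  return 0;
}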
Have the issues with performance counters and speedstepping CPUs been solved, though?
Mind you, SDL_GetTicks() does end up calling gettimeofday() on most platforms. If you run Unix, setitimer() with ITIMER_REAL and hooking SIGALRM typically gives you a resolution of about 10 ms. On some systems it's more precise.
doom, not if you are a purist: the resolution of GetTickCount() is roughly 1/60th of a second, which is low enough to cause time-aliasing artefacts if you happen to render at any other rate. And 60 Hz flickers an awful lot on CRTs...
Of course, if you just can't get an accurate timer:
Code:
float hack(float t) {
  static float t2 = 0, t3 = 0, wtf = 0.1;
  t2 += (t - t2) * wtf;   // first low-pass of the incoming time
  t3 += (t2 - t3) * wtf;  // second low-pass
  return 2 * t2 - t3;     // extrapolate to reduce the lag
}
As a bonus it will cause a hell of a lot cooler artifacts when the fps changes :)
How about timeBeginPeriod(1) and timeGetTime() for a timer with 1 ms precision? Seems to work on XP and win7 at least, haven't tested on anything else.
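Something like this, roughly (sketch; needs winmm.lib, error handling left out, and the function name is made up):
Code:
#include <windows.h>
#include <mmsystem.h>

/* ~60 fps frame limiter built on timeGetTime()/Sleep() */
void run_frames(int frames)
{
  timeBeginPeriod(1);               /* ask for 1 ms scheduler granularity */
  DWORD next = timeGetTime();
  for (int i = 0; i < frames; i++) {
    /* ... update + render one frame here ... */
    next += 16;                     /* ~16 ms per frame, close to 60 fps */
    DWORD now = timeGetTime();
    if (now < next)
      Sleep(next - now);            /* sleep off the rest of the frame */
  }
  timeEndPeriod(1);
}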
216: Yeah, I did a Google and it's a confused issue apparently. As far as I can tell the resolution of GetTickCount() is 1 ms, but the accuracy varies wildly, and it can be off by 20-100 ms, depending on who you believe.
Seems like a lot of people recommend the performance counter, but then a lot of people say it's completely unreliable due to speedstepping and multi-core systems.
timeGetTime() should be good though, right?
on Mac there is also CFAbsoluteTimeGetCurrent().
GetTickCount() and timeGetTime() return the exact same value on all systems I've tried so far.
timeBeginPeriod(1) affects the scheduler granularity which makes both timers more accurate.
216: I guess you would need 30fps to use it then?
Ryg, what systems have you tested that on?
timeBeginPeriod(1) has never affected GetTickCount() in my tests, again only tested XP Pro and Win7 (RC1).
I just tested using this loop, and GetTickCount still has a precision of 16-17 msec, not 1.
Code:
timeBeginPeriod(1);
DWORD last = 0;
for(int i=0; i<1000; i++)
{
  while(timeGetTime()==last) {}  // spin until timeGetTime() ticks over
  printf("timeGetTime:%u GetTickCount:%u\n", last=timeGetTime(), GetTickCount());
}
timeEndPeriod(1);