[C++ & WIN32] Game running on a single core. (Weird timing problems)
As the title says, I want my game to run on a single core, mainly because of a weird problem I've encountered. While running my game on a dual-core AMD 4800+ (I haven't tested it on other dual cores), moving the mouse in the game window causes timing operations to screw up: my timer reads an FPS of well over 7000 with vsync enabled, when it should run at a steady 60. This only happens when the mouse is in the game window, and I've found that setting the affinity of the process manually in Task Manager corrects the issue. This wouldn't be such a big deal if the broken timing didn't cause outrageously choppy movement. If somebody knows how to fix this problem, please explain.
Quote:
This only happens when the mouse is in the game window
I really think synchronizing with the vsync is only done when you're in full screen.
I may be wrong, but...
Hope it helps,
Emmanuel
SetThreadAffinityMask(GetCurrentThread(), 0x01);
Seems to have done the trick, thank you. One thing is still a problem, though: I get the same symptoms as before when I move the mouse. The mouse can now sit in the game window, but moving it occasionally still makes the timer freak out. Not all the time, but sometimes. I guess the best thing to do is update my processor drivers; I'll get on that. Thank you nonetheless. :)
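For anyone who finds this later, a rough sketch of how that call can be scoped to just the counter read instead of pinning the whole game to one core (assuming the timing code is the only part that cares about affinity; ReadTimerTicks is just an illustrative name):

#include <windows.h>

LONGLONG ReadTimerTicks()
{
    HANDLE thread = GetCurrentThread();
    // Pin to CPU 0 so consecutive reads come from the same core's counter.
    DWORD_PTR oldMask = SetThreadAffinityMask(thread, 0x01);

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    // Restore whatever affinity the thread had before.
    SetThreadAffinityMask(thread, oldMask);
    return now.QuadPart;
}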
To be honest, I'd say that using SetThreadAffinityMask() is not a solution; the real problem is using QueryPerformanceCounter() and QueryPerformanceFrequency() in the first place. You might want to use timeGetTime() instead (and just handle the case where it rolls over).
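Handling the rollover is easy, something like this sketch (ElapsedMs is just an illustrative name). timeGetTime() returns milliseconds in a DWORD, so it wraps roughly every 49.7 days, and unsigned subtraction gives the right delta across the wrap:

#include <windows.h>
#include <mmsystem.h> // timeGetTime(); link with winmm.lib
#pragma comment(lib, "winmm.lib")

DWORD ElapsedMs(DWORD previousMs)
{
    DWORD nowMs = timeGetTime();
    // Unsigned subtraction is computed modulo 2^32, so the delta
    // is correct even after nowMs wraps past previousMs.
    return nowMs - previousMs;
}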
Quote: Original post by Colin Jeanne
To be honest, I'd say that using SetThreadAffinityMask() is not a solution; the real problem is using QueryPerformanceCounter() and QueryPerformanceFrequency() in the first place. You might want to use timeGetTime() instead (and just handle the case where it rolls over).
QueryPerformanceCounter usually has much better resolution than timeGetTime, so replacing it isn't a solution either. In the end, you have to sacrifice something either way.
But do you need that extra resolution? timeGetTime() has a resolution of approximately 10ms, which lets you draw a new frame every 10ms, or at 100 FPS. If you're only drawing a new frame on the vertical sync, that could very well be faster than you need.
But it's very inaccurate. I've seen 20ms jumps in timeGetTime() on Windows (on Linux it's very accurate). I may only be able to render 60 FPS with vsync, but in the background I could be doing a lot more, like AI for distant actors, yada yada yada. Most published games for Windows seem to use QPC/QPF, and there are reasons for that: more accurate math. When the clock jumps randomly between updates you get jerky movement in animation, etc. The more accurate you can get it, the nicer everything feels. I've always used QPC/QPF, and when I notice a jump I fall back to timeGetTime() internally for that one frame, roughly as sketched below.
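A rough sketch of that fallback (the names and the 0.2s sanity threshold are illustrative, not code from a shipped game):

#include <windows.h>
#include <mmsystem.h> // timeGetTime(); link with winmm.lib
#pragma comment(lib, "winmm.lib")

// Returns the length of the last frame in seconds. Uses QPC for
// resolution, but if the QPC delta looks implausible next to
// timeGetTime() (negative, or far ahead of the low-res clock),
// trusts timeGetTime() for that one frame instead.
double FrameSeconds()
{
    static LARGE_INTEGER freq = { 0 };
    static LARGE_INTEGER lastQpc;
    static DWORD lastMs;

    if (freq.QuadPart == 0) // first call: prime both clocks
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&lastQpc);
        lastMs = timeGetTime();
        return 0.0;
    }

    LARGE_INTEGER nowQpc;
    QueryPerformanceCounter(&nowQpc);
    DWORD nowMs = timeGetTime();

    double qpcDelta = (double)(nowQpc.QuadPart - lastQpc.QuadPart)
                      / (double)freq.QuadPart;
    double msDelta = (nowMs - lastMs) / 1000.0; // unsigned math handles wrap

    lastQpc = nowQpc;
    lastMs = nowMs;

    if (qpcDelta < 0.0 || qpcDelta > msDelta + 0.2)
        return msDelta; // QPC jumped; trust the low-res delta this frame
    return qpcDelta;
}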