Hi,
Using a "fixed" timestep is not about expecting your timer to fire every N milliseconds on the nose. Even when running a fixed-step simulation you need to measure the actual wall clock time that elapses and use that to advance your simulation. Otherwise you will get clock drift and indeterminacy over time.
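Concretely, that usually looks like the standard accumulator pattern: measure the real elapsed time each frame, then consume it in fixed-size steps and carry the remainder forward. A sketch (the 60Hz step, the `frame` name, and the dict-based state are just illustrative, not anyone's actual engine code):

```python
import time

FIXED_DT = 1.0 / 60.0  # fixed simulation step; 60 Hz chosen for illustration

def frame(state, now):
    """Advance the simulation by however much wall-clock time has passed.

    `state` carries the previous clock reading and the unconsumed remainder;
    `now` is the current wall-clock reading (e.g. time.perf_counter()).
    """
    state["accumulator"] += now - state["previous"]  # measured wall-clock delta
    state["previous"] = now
    while state["accumulator"] >= FIXED_DT:          # consume it in fixed steps
        state["sim_time"] += FIXED_DT                # your update(FIXED_DT) goes here
        state["accumulator"] -= FIXED_DT
    # anything left over (< FIXED_DT) stays in the accumulator, so no drift builds up

state = {"previous": time.perf_counter(), "accumulator": 0.0, "sim_time": 0.0}
```

The key point is that the simulation is only ever advanced by whole fixed steps, but *which* frames those steps land on is driven entirely by the measured wall clock.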
So yeah, you're basically right. I'm not sure what the question is, though, unless you're just looking to make sure you understand the situation correctly (which it seems you do).
How do you measure the actual wall clock time that elapses? I think that's exactly what I'm already doing, and it's what I found to be the problem. So there are two different deltas here: the wall clock delta and the fixed timestep delta. The problem I'm talking about is that the measured wall clock delta is never an exact multiple of the vsync interval, even though it would need to be for the rendered frames to be interpolated properly. Can you see how that would be an issue?
I don't see why you need some theoretical magic time value for anything to work correctly. Run your simulation at as close to fixed as you can, and you're done.
You will never get 100% precisely perfect timings on a consumer OS, they simply aren't designed to give you that guarantee. (Look at process timeslice quantization rules for example.) Design your code accordingly.
You say it's not possible to get perfect timing, but you also say that it isn't really needed for things to work correctly. I agree that a game engine will still work mostly correctly without what I'm talking about; every game engine I've ever seen has certainly worked fine without it. But I don't understand why people are willing to settle for mostly correct. If we could get perfect timing, that would be better, right? I mean, it's not theoretical or magical; monitors display images at a very specific vsync interval.
The whole point of what I'm trying to talk about here is that you can get perfect timing that is 100% precise, and this is how you do it. The only sacrifice is a small amount of unavoidable latency, which is a small price to pay for perfect timing and smoothness of motion. With triple buffering it doesn't really matter what the OS does or how much fluctuation there is between deltas. What about my plan makes you think it won't yield perfect timing?
My point is that your corrected timing code is fine.
You will never get guaranteed time deltas without a hard-realtime OS. The kernel is free to schedule some other process for a quantum, which means you don't get a chance to even run when your timestep "should" be firing. This is why you get timing variances like you describe, and in a multitasking OS, it's actually a very good thing.
This compensation is actually relatively common in my experience; basically you're just eating the time variance by amortizing it across the subsequent frame(s).
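A toy illustration of that amortization (the numbers are hypothetical): a hitch that eats two steps' worth of wall-clock time just gets paid back as extra fixed steps, and any leftover fraction rides along in the accumulator to be consumed on later frames.

```python
FIXED_DT = 1.0 / 60.0  # illustrative fixed step

def consume(accumulator, measured_delta):
    """Add one frame's measured wall-clock delta; return (steps_taken, leftover)."""
    accumulator += measured_delta
    steps = 0
    while accumulator >= FIXED_DT:
        accumulator -= FIXED_DT
        steps += 1
    return steps, accumulator

# A 34 ms hitch followed by two normal ~16.7 ms frames: the hitch frame runs
# two simulation steps, and the sub-millisecond remainders carry forward.
acc = 0.0
for delta in (0.034, 0.0167, 0.0167):
    steps, acc = consume(acc, delta)
```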
Hodgman -
My deltas don't vary much, generally between 16 and 17 ms or so, but they can be much less or greater. Also, when triple buffering, multiple updates must sometimes happen during the same vsync interval; otherwise you aren't really buffering anything, right? My engine uses a double for the timer, but that shouldn't affect this issue. Changing to a fixed-point solution or increasing the timer accuracy won't get rid of the fluctuation. I am curious about why your measured deltas don't vary; have you tried logging them to a file?
ApochPiQ -
Thanks, that helps a little. I like to think of it as buffering the time delta. I figured this was a common thing that I must have just never come across or seen mentioned anywhere, because it seems pretty damn important. That's why I'm asking whether I am overthinking things or maybe just using the wrong terminology.
> I am curious about why your measured deltas don't vary, have you tried logging them to a file?

Yeah, I was mistaken. When synced to the display at 60Hz, the delta log looks like:
...
0.016761
0.016591
0.016652
0.016698
0.016710
0.016666
...
Which over time seems to average out to the correct value, but yes, there is jitter.

Interesting idea. I can see how this would help make things extra smooth. I'll have to try it out some time and see what the difference is like (I'm on OS X currently, so I can't try your demo).
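Feeding those six logged deltas through a plain accumulator shows the jitter washing out in practice: the variance is absorbed by the accumulator and the simulation advances exactly six fixed steps (a sketch, assuming a 1/60 s step).

```python
FIXED_DT = 1.0 / 60.0

# The deltas from the log above, in seconds.
deltas = [0.016761, 0.016591, 0.016652, 0.016698, 0.016710, 0.016666]

acc, steps = 0.0, 0
for d in deltas:
    acc += d
    while acc >= FIXED_DT:   # consume measured time in fixed steps
        acc -= FIXED_DT
        steps += 1

# steps == 6: six fixed steps (0.1 s of simulation) for 0.100078 s of wall
# clock, with the ~78 microsecond surplus still sitting in the accumulator.
```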
Question: is there a way to support monitors with different refresh rates? Most monitors these days are 60Hz, but some displays and systems run at something "odd" (for example, some projectors may be 48 or 72Hz, since most movies are 24fps). If I'm understanding it right, you can't use this method and "properly" support monitors with different refresh rates (then again, the traditional fixed timestep doesn't either).
I'm going to have to think about this one some more...
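For what it's worth, the usual answer for arbitrary refresh rates is to keep the simulation rate display-independent and interpolate the rendered state between the two most recent fixed steps, so a 48 or 72Hz display just samples the same fixed-step timeline at different points. A sketch (the 120Hz step and the names are illustrative, not from anyone's engine here):

```python
FIXED_DT = 1.0 / 120.0  # display-independent simulation rate (illustrative)

def render_alpha(accumulator):
    """Blend factor in [0, 1): how far we are between the last two sim steps."""
    return accumulator / FIXED_DT

def interpolate(prev_pos, curr_pos, alpha):
    """Render state sampled between the previous and current simulation states."""
    return prev_pos + (curr_pos - prev_pos) * alpha
```

Whatever the display's refresh interval, each presented frame just computes `alpha` from the leftover time in the accumulator and blends; only the latency/smoothness trade-off changes with the refresh rate.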