[quote name='fholm' timestamp='1314034082' post='4852407']
But it's still sort of a "springy" feeling
What I believe most action games do (from working with them and reading about techniques over many years) is something very similar to "displayState = oldState + (newState - oldState) * (timeSinceTick / tickSize)"
This means that the physical simulation will be displayed up to one simulation frame behind real time. If you additionally have a lot of graphics command stream buffering, you'll build up a lot of latency. Thus, I recommend using only double-buffering for graphics frame buffers, and doing lock/copy tricks in the command stream to prevent the graphics card from buffering too much command data.
Assuming you have a good timer in your game loop (QueryPerformanceCounter() on Windows, for example), your loop looks something like:
lastTime = now() - simTickSize;
forever() {
    curTime = now();
    if (curTime >= lastTime + simTickSize) {
        int n = 0;
        while (curTime >= lastTime + simTickSize && n < 5) {
            oldState = curState;
            simulate_one_step();
            lastTime += simTickSize;
            n += 1;
        }
        lastTime = curTime - fmod((curTime - lastTime), simTickSize); // snap to current step (a no-op unless we hit the cap)
    }
    render_state(oldState, curState, (curTime - lastTime) / simTickSize); // lerp factor between 0 and 1
}
This loop will drop time if you fall more than 5 simulation steps behind, which will probably cause server-side corrections at that point -- choose your max N carefully. If the client machine is simply too slow to keep up the simulation, there's not much you can do, and you don't want to lock the CPU for too long just running simulation without display.
The input latency here will be at least two render frames, because of double-buffering plus the draw-behind. However, that's only about 33 milliseconds (two 60 Hz frames), which will actually feel very snappy. Most modern online games are more in the range of 5-7 frames of latency from input to display on screen. (Where "frames" means 60 Hz sim/display steps)
Another option is to use vsync, set the display frame rate to 60 Hz, and set the simulation rate to exactly 1/60 second. Consoles can do this a lot, and it would allow you to drop one frame of perceived latency. With Unity, that may be harder, though.
[/quote]
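For concreteness, here is a minimal Python sketch of the quoted loop, combining the fixed-timestep catch-up with the "oldState + (newState - oldState) * (timeSinceTick / tickSize)" interpolation. The state is a toy counter and `make_clock` is a deterministic stand-in for a real timer such as QueryPerformanceCounter(); none of these names come from a real engine.

```python
import math

SIM_TICK = 1.0 / 60.0   # fixed simulation step, in seconds
MAX_STEPS = 5           # drop time if we fall further behind than this

def run_loop(clock, frames):
    """Run `frames` render frames; `clock()` returns the current time in seconds."""
    cur_state = 0.0                  # toy state: advances 1.0 per sim tick
    old_state = cur_state
    last_time = clock() - SIM_TICK
    rendered = []
    for _ in range(frames):
        cur_time = clock()
        if cur_time >= last_time + SIM_TICK:
            n = 0
            while cur_time >= last_time + SIM_TICK and n < MAX_STEPS:
                old_state = cur_state
                cur_state += 1.0     # stand-in for simulate_one_step()
                last_time += SIM_TICK
                n += 1
            # Snap to the current step: a no-op if we caught up, but drops
            # whole ticks of time if we hit the MAX_STEPS cap.
            last_time = cur_time - math.fmod(cur_time - last_time, SIM_TICK)
        # displayState = oldState + (newState - oldState) * (timeSinceTick / tickSize)
        alpha = (cur_time - last_time) / SIM_TICK
        rendered.append(old_state + (cur_state - old_state) * alpha)
    return rendered

def make_clock(times):
    """Deterministic stand-in for now(); returns the next queued timestamp per call."""
    it = iter(times)
    return lambda: next(it)
```

Driving it with a clock that advances half a tick per render frame shows the interpolated in-between values: `run_loop(make_clock([0.0, 0.0, SIM_TICK / 2, SIM_TICK, 1.5 * SIM_TICK]), 4)` renders roughly `[0.0, 0.5, 1.0, 1.5]` -- the display trails the simulation by up to one tick, as described above.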
All good points, though I actually move the render_state call to the very front, since my renderer runs in a separate thread and I just enqueue render nodes. From that point I use the simple velocity version mentioned earlier, which for all intents and purposes should behave identically, except that it moves the lerp target into the simulation/networking code. The only purpose of the change is to allow updating the lerp target on the fly (say, when a network message arrives that says "hey dummy, you should be over here, not there") in a smooth manner, without causing discontinuities between lerps. It really is a minor detail though; both work.
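The velocity variant described here could be sketched as below -- a minimal interpretation, not code from the thread: the renderer keeps its own display value and chases a target at a velocity that is recomputed whenever the target changes, whether from a normal sim tick or a network correction. The class and method names are illustrative.

```python
class SmoothedValue:
    """Display value that chases a target; retargeting mid-lerp stays smooth."""

    def __init__(self, value, tick_size):
        self.display = value      # what the renderer draws
        self.target = value       # where the simulation says we should be
        self.velocity = 0.0
        self.tick_size = tick_size

    def set_target(self, target):
        # Called from simulation/networking code. Recomputing velocity from the
        # *current* display value (instead of snapping display to the target)
        # is what avoids a visible discontinuity when a correction arrives.
        self.target = target
        self.velocity = (target - self.display) / self.tick_size

    def update(self, dt):
        # Advance toward the target, clamping so we never overshoot.
        step = self.velocity * dt
        remaining = self.target - self.display
        if abs(step) >= abs(remaining):
            self.display = self.target
            self.velocity = 0.0
        else:
            self.display += step
        return self.display
```

If a correction arrives halfway through a lerp, set_target simply re-aims from wherever the display currently is, so the drawn value bends toward the new target instead of jumping.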