I'm implementing Valve's networking model for my simple top-down game, but I have some design problems and I just can't think of good solutions. One of the main ideas is that there is a render time separate from the simulation time. For example, I use an interpolation delay of 50ms (since my server update rate is 20Hz).
So, on the client my simulation time is 1000ms and my render time is 950ms. Here is my first problem: when the user presses a key (for example, to cast a spell), the logical choice would be to trigger it at render time (950ms), but that's impossible since my simulation is already at 1000ms. I think the solution may be to trigger the input at simulation time and apply a kind of "lag compensation": if the spell's cast time is 100ms and the user casts it while render time is at 950ms, the cast starts in the simulation at 1000ms as a 50%-completed cast. However, I'm not sure whether that's a good solution.
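To make the idea concrete, here's a rough sketch of what I mean (function and variable names are made up, and the 50ms delay is just my current setting):

```python
INTERP_MS = 50  # assumed interpolation delay: sim time - render time

def start_cast(sim_time_ms, cast_time_ms=100):
    """Start a cast at simulation time, but credit it with the
    interpolation delay, as if it had begun at render time.
    Returns (effective_start_time, fraction_already_complete)."""
    render_time = sim_time_ms - INTERP_MS
    already_elapsed = sim_time_ms - render_time  # == INTERP_MS
    return render_time, already_elapsed / cast_time_ms

# Example from the text: input arrives at sim time 1000ms, render 950ms,
# so a 100ms spell begins 50% complete.
start, fraction = start_cast(1000)
# start == 950, fraction == 0.5
```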
The second problem is about sending packets. If my solution to the first problem is good, then I will simply send the newest input from the simulation (and not bother with render time when sending packets). Is that fine, or should I timestamp packets with render time (i.e., when simulation time is 1000ms, send the packet as if it were from 950ms)?
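These are the two options as I see them, sketched side by side (the packet layout is invented, just for illustration):

```python
INTERP_MS = 50  # assumed interpolation delay

def stamp_with_sim_time(sim_time_ms, command):
    # Option A: timestamp the input with the current simulation time.
    return {"timestamp": sim_time_ms, "command": command}

def stamp_with_render_time(sim_time_ms, command):
    # Option B: timestamp the input with the render time instead,
    # i.e. shifted back by the interpolation delay.
    return {"timestamp": sim_time_ms - INTERP_MS, "command": command}

# At sim time 1000ms:
# Option A stamps 1000, Option B stamps 950.
```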
The third problem is about receiving packets. When I receive a packet while the simulation time is 1000ms, should I treat it as received at 1000ms or at 950ms? In the Valve networking model (which, as I said, I'm using), when the server receives "started casting spell" at server time 250ms, it compensates by rtt/2 AND by the client's interpolation delay... So I don't know how Valve clients handle input; it looks like they handle it at render time, which I don't understand. Anyway, since I want to use my own solution for handling input, I think I shouldn't compensate for interpolation on the server, right? Or should the server also compensate by the client's interpolation delay?
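For reference, this is my understanding of the server-side rewind in Valve's model (a sketch of the formula, not their actual code; the numbers match my example above):

```python
def lag_compensated_time(server_time_ms, rtt_ms, client_interp_ms):
    """Estimate which moment of the world the client was actually
    looking at: rewind by half the round-trip time (network delay
    of the command) plus the client's interpolation delay."""
    return server_time_ms - rtt_ms / 2 - client_interp_ms

# Command received at server time 250ms, rtt 100ms, client interp 50ms:
# the server rewinds hit checks to world state at 150ms.
rewind = lag_compensated_time(250, 100, 50)
# rewind == 150.0
```

My question boils down to whether the `client_interp_ms` term still belongs in that formula if the client has already accounted for interpolation when it triggered the input.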
I hope you can help me with these 3 questions. Thanks in advance!
Edited by NetrickPL, 03 January 2013 - 11:13 AM.