
Farkon

Member Since 17 Sep 2011
Offline Last Active Jun 08 2014 10:49 AM

Posts I've Made

In Topic: Timestamping & processing inputs

25 May 2014 - 02:12 PM

Thanks, your last two answers are what I needed!


In Topic: Timestamping & processing inputs

24 May 2014 - 11:48 AM

Thank you both for taking the time to answer. 

 

I'm finding it a bit hard to understand what your problem is, so I'll just describe a typical scenario and you can tell me which part you're deviating from or having trouble with.

  • The client (C) sends a join request to the server (S); S accepts and responds with info about the other players and spawns C into the world. C receives the response and spawns into the world along with the other players.
  • C starts moving and sends its input along with its frame ID, as you call it (assuming this is a counter for the fixed steps).
  • S receives the input and stores its own and C's frame IDs; for every packet received from then on, S can calculate the difference between the steps it took and the steps C took. S buffers the received input so that it can simulate it X steps in the future. S notifies the other clients.
  • If S receives input from C where the difference between the steps they took is larger than X, S corrects C.

That's the gist of it; what part are you having trouble with?
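A minimal sketch of the server-side bookkeeping those steps describe, assuming a fixed-step simulation; the names here (ClientState, onInputReceived, serverTick, kDelayTicks, correctClient) are hypothetical, not from the thread:

#include <cstdint>

struct ClientState {
    bool    calibrated   = false;
    int64_t idDifference = 0;  // server tick minus client frame ID at calibration
};

void correctClient(ClientState& c);  // snap the client back to the server state

// Called for every input packet. On the first one, store the difference
// between the server's tick counter and the client's frame ID; afterwards,
// any growth of that difference beyond X ticks triggers a correction.
void onInputReceived(ClientState& c, int64_t serverTick, int64_t clientFrameId) {
    if (!c.calibrated) {
        c.idDifference = serverTick - clientFrameId;
        c.calibrated   = true;
    }
    int64_t drift = (serverTick - clientFrameId) - c.idDifference;

    const int64_t kDelayTicks = 10;  // "X steps in the future" (the buffer delay)
    if (drift > kDelayTicks)
        correctClient(c);            // the last step in the list above
}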

 

 

I think I understand the steps you're showing, and I don't think I'm having trouble with what I understand from them, BUT I'm still wondering how variation in latency is handled here. I think it all still comes down to me not understanding the part of your implementation that I'm missing!?

 

My big misunderstanding comes down to this (warning: what I'm going to say is probably wrong, but that's how I understand it): the server iterates over the client frame buffer starting from the first frame it has received from the client, and processes one frame per server loop. So once the first frame is received, that frame defines the latency for the rest of the game.

 

In my mind, if a client sends the first frame at high latency, then since the server takes that frame as the first one to process (and after that, one per frame), I will always keep that lag even if my latency later drops, because all I'm doing is adding frames to the server buffer more responsively while the frames are still consumed at 60 fps, so the lag is never eliminated. That's what I don't understand.

 

If by that you mean it lags for longer than the server can compensate for, then you fail hard, meaning that the server waits for moves and, in the meantime, any client moves that are missing are "corrected" by the server state, i.e. the client is stuck in place. You could tell the server to defer simulation (in other words, not update the move ID, since it didn't actually simulate), but that would lead to the server accumulating too many moves and the client remaining "lagged" (they would not feel the lag, but their events would happen later and later). You want to drop moves as the buffer fills up and then simulate as normal.
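A small sketch of the drop-rather-than-defer policy in that quote; Move, moveBuffer, and consumeMove are hypothetical names:

#include <array>
#include <cstdint>
#include <optional>

struct Move { int64_t id = 0; /* input payload */ };

std::array<std::optional<Move>, 64> moveBuffer;  // hypothetical fixed size
size_t readIndex = 0;

// Called once per server tick: consume exactly one slot, whether or not a
// move arrived for it. A missing move is simulated as "no input" and the
// client gets corrected by the server state, instead of deferring the tick
// (which would make the buffer grow and the client lag further behind).
std::optional<Move> consumeMove() {
    std::optional<Move> m = moveBuffer[readIndex];
    moveBuffer[readIndex].reset();
    readIndex = (readIndex + 1) % moveBuffer.size();
    return m;
}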

 

That was an extreme case to help me understand what I am doing wrong here. I agree that it shouldn't happen, and it actually doesn't in my case.

 

Hmm, in which case you really have little choice at the moment. I did stumble across this: http://stackoverflow.com/questions/7728623/getting-started-with-rtmfp But otherwise you'll be sort of stuck, unless you ensure your move buffer is long enough to hide resend latency (it might be best to ask others familiar with TCP if you need to change your approach).

 

Thanks, I wasn't aware of RTMFP! Even if it seems to be for P2P connections, I guess I could make one client act as a server... that's worth digging into. I'll stick with the flash.net.Socket thingy for now, though.

 

If you simply maintain an index into the lookup buffer and increment it every time you request an item (making sure it wraps around), you will be fine. Any items that overfill the buffer are dropped. You only want to fill the buffer to a fraction of its full capacity, so that if you suddenly receive too many moves, they won't be dropped (let's say you fill it 3/4 full). That fraction must correspond to the amount of delay you want to use (my 150-200 ms). Use the stored move's ID for any ack/correction procedures. Just think of the server value as a lookup only.

 

This way, if a client tries to speed-hack, they just have moves that are dropped, because the buffer becomes saturated eventually. The server consumes moves at the permitted tick rate, which the client cannot influence.
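A sketch of the write side of the same hypothetical buffer, showing the fractional fill and the wraparound; storeMove and pendingMoves are made-up names, and the consume side would decrement pendingMoves:

#include <array>
#include <cstdint>
#include <optional>

struct Move { int64_t id = 0; };

std::array<std::optional<Move>, 64> moveBuffer;  // as in the earlier sketch
size_t pendingMoves = 0;                         // slots currently occupied

// Store a move at its dejittered slot, but only fill the buffer to a
// fraction of its capacity (3/4 here); anything beyond that is dropped.
// A speed-hacking client therefore just saturates the buffer, because the
// server still consumes one move per tick no matter how fast moves arrive.
bool storeMove(const Move& m, size_t dejitteredIndex) {
    if (pendingMoves >= (moveBuffer.size() * 3) / 4)
        return false;                                   // saturated: drop
    size_t slot = dejitteredIndex % moveBuffer.size();  // wrap around
    if (!moveBuffer[slot]) ++pendingMoves;
    moveBuffer[slot] = m;
    return true;
}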

 

Thanks for that precise explanation; that's how I understand it as well. But I'm still confused about my lag-variance issue (see the first answer in this post).


In Topic: Timestamping & processing inputs

23 May 2014 - 10:37 AM

I guess I could calculate the latency (in ticks) server-side and adjust the client frame ID (still server-side) dynamically...


In Topic: Timestamping & processing inputs

23 May 2014 - 08:26 AM

1. Yes, but provided that you are using some real measure of time. Your client should simulate n additional frames, and the server will still have moves in the buffer to take from while it waits to receive the late ones.

 

I do, but what if the client is lagging beyond the number of cached frames?

 

2. Assuming that you initially synchronised the clocks, x would represent the length of the buffer in ticks. However, this does require you to synchronise the clocks. Alternatively, you could take the first move ID and use it as the offset, such that:

// On the first move received, record the offset between the server's
// tick counter and the client's move IDs.
if (!this->calibrated)
{
    this->id_difference = this->current_id - move->id;
    this->calibrated = true;
}

// Shift the move by the calibration offset plus the buffer delay; the
// index should wrap around the buffer length, as described above.
dejittered_index = move->id + this->id_difference + this->offset_ticks;
this->move_array[dejittered_index] = move;

(I think, at least).

 

 

How do you get offset_ticks here?
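One plausible derivation, offered as an assumption rather than an answer from the thread: offset_ticks is just the desired buffer delay converted into fixed steps, e.g.:

const double tick_ms      = 1000.0 / 60.0;                   // 60 Hz fixed timestep
const double delay_ms     = 150.0;                           // the 150-200 ms mentioned above
const int    offset_ticks = (int)(delay_ms / tick_ms + 0.5); // ~ 9 ticks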

 

 

What type of game are you developing? It might be that TCP is not the best choice for your inputs. When I attempted to avoid sending previous moves by relying on reliable delivery, moves arrived too late to be simulated at the correct time, which means either lengthening the buffer past 150 ms (which is already quite a long time) or dropping moves.

 

This is action-RPG-ish, and if I had the choice I would go UDP, but I'm using Flash.

 

 

If lag happens, it means information is arriving late. You need to ensure that information always arrives early, so that "late" is still in the future, or just in time; that is the whole purpose of the buffer. It doesn't matter whether the network causes the delay or the client itself, provided that the delay doesn't exceed the length of the buffer.
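A sketch of the "always early" test that paragraph implies; kBufferTicks and arrivesInTime are hypothetical names:

#include <cstdint>

const int64_t kBufferTicks = 9;  // buffer length in ticks (hypothetical)

// A move is usable only if it lands in the window the buffer can hide:
// not in a slot the server has already simulated past (late), and not
// further ahead than the buffer is long (too early / overflowing).
bool arrivesInTime(int64_t dejitteredIndex, int64_t serverTick) {
    if (dejitteredIndex <= serverTick) return false;               // too late
    if (dejitteredIndex > serverTick + kBufferTicks) return false; // too far ahead
    return true;
}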

 

 

So you set a fixed buffer size, the same for all clients? And no matter the latency, if the client is over that size, they're not meeting the requirements to play the game? If so, how big is that buffer usually?

 

I'm still in the mist about setting the first frame ID (client to server). Let's say I have a one-second lag when setting the ID (no map loading, just bad luck). I send frame ID 0 at t1, and then the next frames, IDs 1 2 3 4 5..., all at once at t2. The server will start incrementing from ID 0, one per server step, at t1+lag; then it receives IDs 1 2 3 4 5 while it has only advanced from ID 0 to ID 1 at t2+lag, when the ID should be much further along. There's something I'm really not getting here :S


In Topic: Timestamping & processing inputs

23 May 2014 - 07:43 AM

If your physics are simple enough, you can store all your states for the amount of lag you want to be able to compensate for, rewind x steps, and then simulate forward again. The rest of the clients have to do this as well. Or you could just let the server correct the client.
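A minimal sketch of that rewind-and-replay idea, assuming a deterministic fixed step and that currentTick >= x; State, Input, simulate, and kHistory are hypothetical names:

#include <array>
#include <cstdint>

struct State { /* position, velocity, ... */ };
struct Input { /* buttons, axes, ... */ };

State simulate(const State& s, const Input& in);  // one fixed step (hypothetical)

constexpr int64_t kHistory = 128;                 // ticks of history kept
std::array<State, kHistory> stateHistory;
std::array<Input, kHistory> inputHistory;

// On receiving an authoritative state that is x ticks old, rewind to it and
// replay the stored inputs to rebuild the present state.
State rewindAndReplay(int64_t currentTick, int64_t x, const State& serverState) {
    State s = serverState;
    for (int64_t t = currentTick - x; t < currentTick; ++t) {
        s = simulate(s, inputHistory[t % kHistory]);
        stateHistory[t % kHistory] = s;           // keep the history consistent
    }
    return s;
}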

 

If I'm understanding correctly, you're describing the rewinding technique for clients; if so, that's lag compensation, while I'm talking about raw lag that *shouldn't* be there.

 

What does your fixed timestep look like? Can't you load beforehand, or send a message when you're ready to take on input? Why do you end up sending X events (and who is "you")?

 

Definitely; that's something I imagined as a workaround, though. I'm still a bit confused by the fact that I'm defining the lag via the first message I send for the rest of the game. The server is using that ID and will increment it at server steps, so if I lag more, the server will receive messages that are too late to process. I guess there is some kind of dynamic mechanism relative to the latency to build, but I'm not sure how.

 

"You" is any client in this case. The client at t1 receives the map + player-creation event; that loop takes one second, and then the client sends its first inputs. The server receives those inputs and sets the first frame ID. The client is at t2 and catches up for the laggy previous frame, hence spamming the server buffer with X inputs, which leaves the server with longer buffers than expected. I'm aware that it's a corner case and that I can fix it, but I still don't know how to set that first timestamp from which the server will iterate (if that's even how I should do it).

 

There's not much you can do about this: if the client doesn't meet the minimum required system specs, they simply can't play.

 

I admit I didn't really visualize the scenario that way. Angus pointed out the same thing. In my mind, the idea was that sometimes people lag so much that I need to break them out of a potentially infinite loop, and that those cases will happen no matter what; but I guess I can just say that if the number of loops passes a certain limit, I disconnect the player or something to that effect, which I guess is more or less what you were thinking about. This makes sense.

