Avatar management and client interpolation?

Hey, I'm in the process of deciding how to implement networking in my game. The game is a fast-paced 3D isometric-perspective coop game (the view is somewhat like Diablo). After some reading I have grown fond of the Quake3 way of handling netcode, i.e. let everything be unreliable.

In my mind I have figured that I will use a test on the server to see which entities might be visible over the next N frames (around 2-5, I guess), and send a list of these to the client. The client will be dumb, and just send its input to the server. The server will roll back the movement from the client and make sure it's in sequence. The client will perform a somewhat simple client-side prediction: it will act immediately on input for its own avatar, and later reconcile with the server's position for that avatar. (Only the player's own avatar gets predicted, I think.)

To hide the effect of lag somewhat, I figured I will always render behind what the client has actually received in terms of data, so I can interpolate up to the newest data set and, if no updates arrive from the server, extrapolate.

Does this layout seem reasonable? What are the problems I might bump into, and are there other options that I should consider?

cheers,
Sondre
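A minimal sketch of that interpolate-behind / extrapolate buffer, assuming a hypothetical Snapshot type and a fixed render delay (all names are illustrative):

#include <deque>

// Hypothetical snapshot of one entity's replicated state.
struct Snapshot { float time, x, y; };

// sample() renders 'renderDelay' seconds behind the newest server data,
// blending the two snapshots that bracket the render time; if the server
// goes quiet, it extrapolates past the newest snapshot instead.
struct InterpBuffer
{
    std::deque<Snapshot> snaps;  // oldest first
    float renderDelay = 0.1f;    // roughly 2-5 server frames

    void push(const Snapshot& s) { snaps.push_back(s); }

    Snapshot sample(float now) const
    {
        float t = now - renderDelay;
        if (snaps.size() < 2)
            return snaps.empty() ? Snapshot{ t, 0, 0 } : snaps.back();

        for (size_t i = 1; i < snaps.size(); ++i)
        {
            const Snapshot &a = snaps[i - 1], &b = snaps[i];
            if (b.time >= t)  // a..b brackets the render time
            {
                float k = (b.time > a.time) ? (t - a.time) / (b.time - a.time) : 1.0f;
                return { t, a.x + k * (b.x - a.x), a.y + k * (b.y - a.y) };
            }
        }
        // no update recent enough: extrapolate from the last two snapshots
        const Snapshot &a = snaps[snaps.size() - 2], &b = snaps.back();
        float k = (b.time > a.time) ? (t - a.time) / (b.time - a.time) : 1.0f;  // k > 1
        return { t, a.x + k * (b.x - a.x), a.y + k * (b.y - a.y) };
    }
};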
- Me
The only way you'll know if it's reasonable for your game is to implement it, and see how it works. It can probably be made to work the way you say, and you'll probably run into all kinds of issues related to networking -- you always do. Exactly what they are, you don't know until you try it.
Good luck!
enum Bool { True, False, FileNotFound };
What you describe is pretty much how quake3 / HL2 work (and what we are trying to implement ourselves).

A couple of articles:

http://www.gaffer.org/game-physics/networked-physics

http://unreal.epicgames.com/Network.htm

http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking

http://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization

http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking

Everything is better with Metal.

Some of the bumps would be bandwidth-related. The worse the connection is, the more data gets transmitted (as the diff packets grow in size), so you need to implement throttling.

Everything is better with Metal.

Well, then I will implement it and see how it goes :)


And oliii, thanks for the links; I'd read through most of them beforehand, but the Latency_Compensating_Methods one was new, and a good read :)

What problems have you (oliii) had with this kind of implementation? Anything I need to watch out for?
- Me
It can impact your design.

For example, you have to work out how to implement things like server commands / notifications, which are generally sent as a single reliable message and are straightforward to implement and understand ad hoc.

So the code takes more of a state-based approach rather than the typical event-based interactions (although you can recover local events and notifications by detecting changes in states).

For example, a red player captures a spawn point. Typically, you would send a reliable "Player XXX captured spawn point" notification to all. It gets received, you play a sound effect and show a blip on the HUD.

So how would you notify the spawn point being captured without reliable messages and just relying on states?

The simplest way would be to have a 'captured_by' variable in the spawn point class and set that variable to the player id; the change in the spawn point state will then be automatically broadcast to all players.

On the client side, you will notice the 'captured_by' variable changing from nothing to a player id, and you can generate a notification to the audio / HUD. However, imagine packet loss with the state changing very quickly (players A, B and C fighting over something): you could miss B capturing the point. Whether that is a problem depends on your design.
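A minimal sketch of that client-side change detection, assuming a hypothetical SpawnPointState and audio / HUD hooks (none of these names come from an actual engine):

void playCaptureSound();         // hypothetical audio hook
void showHudBlip(int playerId);  // hypothetical HUD hook

// Hypothetical replicated spawn point state; 'captured_by' is written by
// the server and delta-broadcast to clients like any other variable.
struct SpawnPointState
{
    int captured_by = -1;  // -1 means uncaptured
};

// Compare the newly received state against the previous one and turn the
// difference into a local notification. Note the caveat above: if two
// captures happen between received updates (e.g. under packet loss), only
// the latest owner is seen and the intermediate event is lost.
void onStateReceived(const SpawnPointState& prev, const SpawnPointState& curr)
{
    if (curr.captured_by != prev.captured_by && curr.captured_by != -1)
    {
        playCaptureSound();
        showHudBlip(curr.captured_by);
    }
}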

There are other ways, though, of course. You can implement reliable messages alongside the delta-compressed, unreliable channel, but that defeats the purpose. The main problem with mixing protocols (reliable and unreliable messages) is sync issues. That can be real bad.

You can also implement a ring buffer on the server that holds a list of notifications, and keep appending events to it. Then a client will miss events only when the buffer wraps around before it catches up (which should be very unlikely).
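A sketch of such a ring buffer, under the assumption that each event is stamped with a monotonically increasing sequence number, so a client's last acked sequence tells the server where to resume (all names hypothetical):

// Server-side ring buffer of recent events. Old events are only ever
// overwritten once the buffer wraps, so a client misses events only if
// it falls more than SIZE events behind.
struct EventRing
{
    static const int SIZE = 64;
    struct Event { int sequence; int type; int param; };

    Event events[SIZE];
    int   nextSequence = 0;

    void push(int type, int param)
    {
        events[nextSequence % SIZE] = { nextSequence, type, param };
        ++nextSequence;
    }

    // Emit everything newer than what the client last acked; anything
    // older than nextSequence - SIZE has already been overwritten.
    template <typename Fn>
    void replaySince(int clientAckedSequence, Fn emit) const
    {
        int first = nextSequence - SIZE;
        if (first < 0) first = 0;
        if (clientAckedSequence + 1 > first) first = clientAckedSequence + 1;
        for (int s = first; s < nextSequence; ++s)
            emit(events[s % SIZE]);
    }
};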

Also, everything works pretty much through rigid serialisation. You cannot really change a state's data format, but that's usually true for many things anyway. In a sense, you could design your serialisation through scripts, detailing the variables that need to be transmitted for each object type (like Unreal does).

However, we've opted for a very simple approach. We just serialise an object's variables into a raw buffer, and to signal changes between two states, we have a bitfield that tells the client which bytes of that raw buffer have changed. That basically limits the compression to 1/8 (or whatever granularity you choose) at best, since you need to transmit the bitfield in full, so the delta bitfield is then RLE-encoded to keep the size down.

It's not ideal, but it means that we don't need to worry about what gets put into the buffers. It's a pretty basic approach, but we have major time constraints.
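For illustration, the byte-granularity diff plus RLE could look something like the following sketch (not the actual implementation described above):

#include <cstddef>
#include <cstdint>
#include <vector>

// One dirty bit per byte of the serialised state: bit i set means byte i
// differs between the base state and the current state.
std::vector<uint8_t> buildDirtyBitfield(const uint8_t* base,
                                        const uint8_t* curr, size_t len)
{
    std::vector<uint8_t> bits((len + 7) / 8, 0);
    for (size_t i = 0; i < len; ++i)
        if (base[i] != curr[i])
            bits[i / 8] |= uint8_t(1u << (i % 8));
    return bits;
}

// Naive RLE over the bitfield as (count, value) pairs. Long runs of 0x00
// (nothing changed) or 0xFF (everything changed) compress well, which is
// what keeps the 1/8 bitfield overhead down in practice.
std::vector<uint8_t> rleEncode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    for (size_t i = 0; i < in.size(); )
    {
        uint8_t value = in[i];
        size_t  run = 1;
        while (i + run < in.size() && in[i + run] == value && run < 255)
            ++run;
        out.push_back(uint8_t(run));
        out.push_back(value);
        i += run;
    }
    return out;
}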

Another problem is sending the base state (or large delta packets).

The first state you send to a client as he connects could be massive, as you will be transmitting a full state (a snapshot). This could be greater than the maximum UDP packet size, so we had to implement packet fragmentation manually (besides, the 360 doesn't allow you to send packets larger than the defined MTU size).

What we do is have a 'reliable' stage, where the initial packet is sent reliably. Updates for that client are blocked until all the fragments of the full state have been acknowledged; then the client can receive deltas (which will hopefully be much smaller). That also plays a part in throttling bandwidth, and that blocking mechanism is part of our automated throttling in case a delta packet gets too big (if a client lags, you will start sending full states of some objects again, as his ack number will be too old).
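A sketch of that manual fragmentation, assuming a hypothetical unreliable send hook and a payload limit comfortably below the MTU:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical transport hook; the platform send() call would go here.
void sendUnreliable(const uint8_t* data, size_t len) { (void)data; (void)len; }

const size_t kMaxPayload = 1200;  // stay under a typical MTU

// Small header so the receiver can reassemble the snapshot (and the
// sender can block delta updates until every fragment is acked).
struct FragmentHeader
{
    uint16_t snapshotId;     // which full state this belongs to
    uint16_t fragmentIndex;  // 0 .. fragmentCount-1
    uint16_t fragmentCount;
};

void sendSnapshotFragments(uint16_t snapshotId,
                           const uint8_t* snapshot, size_t len)
{
    uint16_t count = uint16_t((len + kMaxPayload - 1) / kMaxPayload);
    uint8_t  packet[sizeof(FragmentHeader) + kMaxPayload];

    for (uint16_t i = 0; i < count; ++i)
    {
        size_t offset = size_t(i) * kMaxPayload;
        size_t chunk  = (len - offset < kMaxPayload) ? len - offset : kMaxPayload;

        FragmentHeader hdr = { snapshotId, i, count };
        std::memcpy(packet, &hdr, sizeof(hdr));
        std::memcpy(packet + sizeof(hdr), snapshot + offset, chunk);
        sendUnreliable(packet, sizeof(hdr) + chunk);
    }
}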

Also, more of a platform-specific problem, is the memory allocation. Each object maintains a stack of cached states, and the states are usually pretty small (under 100 bytes), so there will be lots of small allocations if you are not careful. The stack depth can also be variable: a pickup will change state a lot less often than a player, so there is no need to keep a large cache for it.
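One way to keep those small allocations off the general heap is a fixed-block pool per state size class, with the cache depth chosen per object type; a rough sketch (hypothetical, not the code described above):

#include <cstddef>
#include <cstdint>
#include <vector>

// Fixed-block pool for cached states: every block is the same size, so
// allocate/release are O(1) and there is no per-state heap churn.
class StatePool
{
public:
    StatePool(size_t blockSize, size_t blockCount)
        : m_storage(blockSize * blockCount)
    {
        for (size_t i = 0; i < blockCount; ++i)
            m_free.push_back(&m_storage[i * blockSize]);
    }

    uint8_t* allocate()
    {
        if (m_free.empty()) return nullptr;  // pool exhausted
        uint8_t* block = m_free.back();
        m_free.pop_back();
        return block;
    }

    void release(uint8_t* block) { m_free.push_back(block); }

private:
    std::vector<uint8_t>  m_storage;  // one contiguous slab
    std::vector<uint8_t*> m_free;     // free list of blocks
};

// Usage: a pickup might keep a cache of 4 states, a player 32, e.g.
//   StatePool playerStates(128, 32 * MAX_PLAYERS);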

Hope that helps.

Everything is better with Metal.

Quote: Original post by oliii
So the code takes more of a state-based approach rather than the typical event-based interactions (although you can recover local events and notifications by detecting changes in states).

For example, a red player captures a spawn point. Typically, you would send a reliable "Player XXX captured spawn point" notification to all. It gets received, you play a sound effect and show a blip on the HUD.

So how would you notify the spawn point being captured without reliable messages and just relying on states?



Funny that you would mention this, as this is atm the thing I'm having the most problems with,
and I just can't quite wrap my head around it and find a way to do this that I like.

What I'm aiming for now is kind of a system where I just create an event message
and send it on top of regular packets, and continue to do so until a packet that contained the event gets ack'ed.

Have you tried the state based approach?
- Me
Yeah, you can do that: piggy-back reliable messages on the packets. What we are trying to do, however, is completely remove the need for reliable messages.

We're still unsure how far we can go. I seem to recall Carmack noted similar problems, for example with chat messages. IIRC, Quake3 uses delta compression directly on the reliable buffer: messages are written into a ring buffer, and a delta of the bytes stored in that buffer is sent. The problem is that the buffer can be quite big (say, 4K).

For stuff like player requests (i.e. requesting a team change, or a new weapon set, etc.), we're going through states and a sync process as well.

We have an object that syncs client requests.

class ClientRequests
{
    int m_teamRequest;     // the team we requested
    int m_teamRequestSync; // the sqn marker for server acks

    bool serialise(Stream* stream)
    {
        stream->writeBits(bitPacker::Int(&m_teamRequest, E_TEAM_FIRST, E_TEAM_BITS));
        stream->writeBits(bitPacker::Int(&m_teamRequestSync, E_SQN_FIRST, E_SQN_BITS));
        return stream->ok();
    }

    bool deserialise(Stream* stream)
    {
        stream->readBits(bitPacker::Int(&m_teamRequest, E_TEAM_FIRST, E_TEAM_BITS));
        stream->readBits(bitPacker::Int(&m_teamRequestSync, E_SQN_FIRST, E_SQN_BITS));
        return stream->ok();
    }

    bool teamRequestPending() const
    {
        int serverAck = server()->getAckNumber(); // the latest number the server acked from us
        return (serverAck < m_teamRequestSync);   // the server hasn't processed our latest request
    }
};


Playing with ack and sqn numbers, you can detect new requests, or tell when the server acknowledged your request (accept / reject results are just a matter of comparing the requested value against the value set by the server once he acked your request).

So, for example, a player can flip between red and blue teams very quickly: it will not impact the bandwidth (we won't be spamming reliable requests) and we don't need to curb their inputs.
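For illustration, the server side of that sqn/ack exchange might look like this sketch, assuming the server keeps a per-client 'last handled' sqn (all names hypothetical):

// Hypothetical mirror of the deserialised request data on the server.
struct ClientRequestState { int teamRequest; int teamRequestSync; };

bool teamHasRoom(int) { return true; }  // hypothetical capacity check
void setPlayerTeam(int, int) {}         // hypothetical game-state write

void processClientRequests(int clientId, const ClientRequestState& req,
                           int& lastHandledSync)
{
    // Only act when the request sqn is newer than what we last handled;
    // duplicates arriving in resent states are ignored.
    if (req.teamRequestSync > lastHandledSync)
    {
        lastHandledSync = req.teamRequestSync;
        if (teamHasRoom(req.teamRequest))
            setPlayerTeam(clientId, req.teamRequest);
        // The authoritative team value is replicated back in the normal
        // state stream; once the packet-level ack passes teamRequestSync,
        // the client compares its requested value against it to read off
        // accept / reject.
    }
}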

Everything is better with Metal.

How will you handle chat with this system?
Create a 'chatmsg' object and make sure it's synced?


The system looks pretty neat, as you don't have to handle some packets differently.
In fact, it's pretty tempting to adopt this system :)

Are you using one "super class" for all events or some kind of template class
that you typedef?

like:
typedef EventObject<int> IntEvent;

class SomeClassThatNeedsEvents
{
    ...
protected:
    IntEvent m_TeamRequest;
};

// in .cpp
m_TeamRequest.SetValue( 2 ); // fire a request to change to team '2' to the server

// and have an interface like:
m_TeamRequest.Status(); // returns an enum, { EVENT_PENDING, EVENT_GRANTED, EVENT_DENIED }



Or do you just extend ClientRequests to have a lot of events that you delta compress?
- Me
Well, we're still in the prototype phase. We know the risk is low (i.e. it works); it's just a question of implementation. I'm not actually coding that part, but it looks like we'll be trying to formalise reliable requests instead of shovelling everything into one class and being primitive about it. Something along the lines of what you highlighted.
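A sketch of what such a formalised request object could look like, generalising the sqn/ack pattern from the ClientRequests example above (hypothetical, not the actual implementation):

enum EventStatus { EVENT_IDLE, EVENT_PENDING, EVENT_GRANTED, EVENT_DENIED };

// Generic synced request: the client stamps a value with a fresh sqn;
// status is derived purely from replicated state and ack numbers, so no
// reliable message is ever needed.
template <typename T>
class EventObject
{
public:
    void SetValue(const T& value, int sendSqn)
    {
        m_requested   = value;
        m_requestSync = sendSqn;  // sqn of the packet carrying the request
    }

    // serverAck: latest sqn the server acked from us; serverValue: the
    // value the server actually set after processing the request.
    EventStatus Status(int serverAck, const T& serverValue) const
    {
        if (m_requestSync == 0)          return EVENT_IDLE;
        if (serverAck < m_requestSync)   return EVENT_PENDING;
        if (serverValue == m_requested)  return EVENT_GRANTED;
        return EVENT_DENIED;
    }

private:
    T   m_requested   = T();
    int m_requestSync = 0;
};

// typedef EventObject<int> IntEvent; as suggested above.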

The problem is that we're going peer-to-peer (as you have to on consoles :/). So there will be lots of interactions, and we need a working synchronised request / command process.

For commands (for example, the server telling clients to load a game, end a game, restart, etc.) we'll need some pretty spanky synchronisation, especially for handling stuff like arbitration, posting TrueSkills, host migration, voting... things that require all participants to reach a common decision. We've done each process separately already, but it needs formalising and re-factoring.

As for chat, we don't know. If it is done, it is only for the PC version. Since chat is pretty stand-alone, I think we'll be using a standard reliable stream added to packets (like voice packets, or even their own packets, and sod the header overhead).

Everything is better with Metal.

