Question about topology and bandwidth allocation scheme for online game

5 comments, last by hplus0603 16 years, 11 months ago
Hello everybody. This is my first thread, so I would like to thank GameDev for its contribution to growing the game programming community.

I would like to create an online racing game, and I have started thinking about what kind of topology is best for that type of game. I think P2P is the best choice for low latency, but with P2P, bandwidth becomes a problem to deal with. A racing game usually needs a large state compared to an FPS. Each peer needs to send its input at a constant rate (not necessarily every frame; something between 50-100 ms is required) to every other peer. It also seems necessary to send the vehicle physics state in order to resync and handle collisions between clients. However, sending a car's physics state is bandwidth consuming, because it is not just a matter of sending a reference position, a quaternion, and so on. Furthermore, due to latency, I cannot rely on that input alone; I need some "dead reckoning" style extrapolation that may use past inputs as a hint to approximate the user's behaviour. But that may be another thread by itself ;)

I started wondering how to find my available bandwidth and how to assign the proper bitrate and update rate to the various peers. How can I estimate my available upstream bandwidth? Given N peers, how can I assign each peer a portion of that available bandwidth? How can the various flows update their maximum bitrate and update rate if congestion or changing conditions occur? TCP has its own congestion control, but it relies on ACKs and packet loss, and I do not use TCP (I have my custom UDP with a reliable service built on top). Do I need to provide a "control stream"?

I wonder how other people deal with this kind of problem. Bandwidth requirements are of course an active topic, but I see nobody talking about how to manage traffic shaping in a peer-to-peer or client/server online game. I see many articles about congestion control and dynamic bandwidth ala TCP, for example, but all of these methods are end-to-end.

I'm not very experienced with networked multiplayer games, so I would really appreciate any help or suggestions. I would be glad if someone who has already made an online racing game could share their approach; I think it would be a nice learning experience.

Thanks,
Cristian
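Just to make the numbers discussed later concrete, here is a rough sketch of what a per-tick input message and a full vehicle snapshot might look like; the names and field sizes are hypothetical, not taken from any particular engine:

#include <cstdint>

struct InputPacket {
    uint32_t tick;       // simulation tick the input belongs to
    int16_t  steering;   // analog axis, quantized
    int16_t  throttle;   // analog axis
    int16_t  brake;      // analog axis
};                       // roughly 10 bytes before protocol overhead

struct VehicleSnapshot {
    uint32_t tick;
    float    position[3];
    float    orientation[4];   // quaternion
    float    linearVel[3];
    float    angularVel[3];
    float    rpm;
    float    wheelSpin[4];
    // engine, clutch, suspension state, etc. quickly add up
};                             // on the order of a couple of hundred bytes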
How will you handle collision detection and physics updates? Fixed time step? If so, how often? How do you intend to handle state synchronization?

Collision detection and maintaining a consistent state may be much easier if done on a central server. I don't think I know of a P2P collision detection/physics system, but then again, I never looked much into P2P strategies.

To the best of my knowledge, Second Life uses the Havok physics engine; it is authoritative on the server, runs at 50 updates/second, and updates clients at half that rate. While not exactly the same, it does achieve decent results for a completely open world. The updates there are done by sending vertex transformations to clients, so that part is not applicable to your case.

But you first need to solve the authority problems, then worry about bandwidth. It may or may not be necessary to send rigid body matrices across the network; updates at 1 Hz might be enough to synchronize state, or you might need to send delta states in between, or perhaps even the complete state every few frames.

But until you know how things will work, it's hard to estimate anything.

Personally, I wouldn't touch P2P physics unless I found some good paper to back it up, because I know how many problems there are keeping clients in sync even on a central server.

Quote:Original post by Antheus
How will you handle collision detection and physics updates? Fixed time step? If so, how often? How do you intend to handle state synchronization?


The physics runs at a fixed time step (60 Hz), and so does the collision detection system, which works in sync with the physics. The game itself runs at 30 fps.
Assuming a latency between 80-120 ms is the main case, state synchronization may be done by issuing a state snapshot of each vehicle at a fixed rate, or based on the estimated difference between the computed local state and the extrapolated remote state. Once a snapshot is received, the system can recover by internally advancing the client state to the current simulation time (clocks must be synchronized) using simplified physics and collision detection.
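In rough pseudo-C++, that recovery step might look like this; VehicleState, simulatePhysicsStep() and inputForTick() are placeholder names for whatever the engine actually provides:

struct VehicleState { float pos[3], quat[4], vel[3]; /* ... rest of the ~200-byte state */ };
struct PlayerInput  { float steering, throttle, brake; };

// Provided elsewhere: one simplified physics/collision step, and the buffered
// (or extrapolated) input of a remote car for a given tick.
void simulatePhysicsStep(VehicleState& s, const PlayerInput& in, float dt);
PlayerInput inputForTick(int remoteCar, unsigned tick);

// Accept a remote snapshot, then fast-forward it to the current simulation tick.
void applyRemoteSnapshot(int remoteCar,
                         VehicleState& current,
                         const VehicleState& snapshot,
                         unsigned snapshotTick,
                         unsigned currentTick)
{
    const float kFixedDt = 1.0f / 60.0f;     // physics runs at 60 Hz
    current = snapshot;
    for (unsigned t = snapshotTick; t < currentTick; ++t)
        simulatePhysicsStep(current, inputForTick(remoteCar, t), kFixedDt);
}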

A complete vehicle state means: reference frame (quaternion/rigid body matrix, CM velocity, etc.), engine-related data (rpm, clutch, etc.), wheel status, and so on.
A rough estimate tells me the state by itself is worth nearly 200 bytes, so I cannot send a snapshot to all peers every frame.
I also need to send the input stream: 3 analog axes.
For 10 players, for example, it would require 200*30*10 = 60 KB/s, which is well beyond the limit of a common ADSL upstream (256 Kbps/640 Kbps typically).
I can send the complete state at 5-10 Hz instead, and in that case I get 20 KB/s or less, which sounds better.

If I go client/server, each client easily meets the requirement: 200*30 = 6 KB/s. (The input stream must still be sent to all peers as fast as possible, so peer to peer.)
But the server needs a very large upstream bandwidth: sending the state of every vehicle to everyone means ~N^2 bandwidth (N*(N-1)).

By sending the realignment state at 1 Hz, or on demand (e.g. on very large state incoherence), an estimate for 10 peers is:
each replica stream requires 200*1*10 = 2 KB/s,
and because the server sends replica updates to everyone but itself,
that is 2 KB/s * 9 = 18 KB/s.
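Redoing the arithmetic above in one place (all figures are the estimates from this post, not measurements):

#include <cstdio>

int main() {
    const double stateBytes = 200.0;   // one full vehicle snapshot
    const int    cars       = 10;

    // P2P: each peer pushes its own car's state to ~10 other peers.
    double p2p30Hz = stateBytes * 30 * cars;   // 60000 B/s, too much for ADSL upstream
    double p2p10Hz = stateBytes * 10 * cars;   // 20000 B/s, borderline

    // Client/server: each client uploads only its own car (200 * 30 = 6000 B/s),
    // but the server relays everyone's state to everyone else (~N*(N-1)).
    double perClient = stateBytes * 1 * cars;  // 2000 B/s per client at 1 Hz realignment
    double serverUp  = perClient * (cars - 1); // 18000 B/s of server upstream

    printf("P2P: %.0f B/s at 30 Hz, %.0f B/s at 10 Hz\n", p2p30Hz, p2p10Hz);
    printf("C/S at 1 Hz: %.0f B/s per client, %.0f B/s at the server\n",
           perClient, serverUp);
    return 0;
}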

A 1 Hz update does not seem enough to me: with dead reckoning (or something similar) driven only by the input stream, the client states diverge a lot.

If I want to send more updates or snapshots, the server upstream needs to be very high. To reduce the updates, I could use interest management: that way each peer does not need to be informed about the real state of every other peer, only of those in its proximity. But does this really help? Doesn't the simulation diverge a lot more?
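For example, one crude form of interest management would be to derive the per-peer snapshot rate from the distance between the two cars; the thresholds here are made up:

// Pick how often to send full snapshots of a remote car to a given peer,
// based on how far apart the two cars are.
float snapshotRateHz(float distanceMeters) {
    if (distanceMeters < 50.0f)  return 20.0f;  // close: likely to collide, update often
    if (distanceMeters < 200.0f) return 10.0f;
    if (distanceMeters < 500.0f) return 5.0f;
    return 1.0f;                                // far away: dead reckoning carries it
}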

Quote:Original post by Antheus
But you first need to solve the authority problems, then worry about bandwidth.


In fact, there is not much literature on this, and honestly I don't know of many games that work in a peer-to-peer fashion.

In client/server, the authority resides on the session host.
But if the host can migrate, consistency cannot be guaranteed anymore:
every peer has a slightly different view of the game, and when a new host takes control, the game view may change slightly.

However, in peer to peer, the host cannot have any consistency authority. Consistency can be achieved by majority voting. Alternatively, I could give the host the authority to invalidate a peer's status, and every other peer would need to roll back in some way. Rollback may be evil in a complex physics simulation.

I should allow each peer to have a view that differs somewhat from the others, but the core part of the simulation must be preserved. For example, race positions must be as consistent as possible, and peer-to-peer collisions must not cause the remote vehicle to interfere too much with the local player. I can allow for small collisions or something similar, but this is something I'm still thinking about.

There are many open questions I would like to ask, but I think it is better to open another thread.

Also, in client/server you still have the bandwidth assignment problem. How do you estimate your available bandwidth?
If you have, say, 16 peers, how do you give each of them a fraction of your available bitrate? Do you estimate each peer's connection quality and deliver state at a different rate based on that estimate?
GTR uses adaptive bandwidth throttling; does that mean they are able to adapt the state updates on a per-peer basis? I think many games use dynamic congestion and flow control ala TCP; this seems essential for a good Internet protocol.

Excuse me for such a long post; I have so many questions and doubts.

cheers


Personally, I find P2P to be more trouble than it's worth for most cases. However, if your game is physics heavy, and a user is hosting the game, then one user will have to play on the "server" instance, rather than run two instances on the same machine.

Regarding sync in a racing game, that's a really tricky problem. You can't really get rid of latency, and because position matters in racing, you have to forward extrapolate other entities when displaying, or delay the user's input, or both.
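For illustration only, the simplest possible form of that forward extrapolation (the names are hypothetical, and a real game would also extrapolate orientation and smooth the correction):

struct RemotePose {
    float position[3];
    float velocity[3];
};

// Project the last received pose ahead by the estimated one-way latency
// before rendering the remote car.
RemotePose extrapolate(const RemotePose& last, float latencySeconds) {
    RemotePose out = last;
    for (int i = 0; i < 3; ++i)
        out.position[i] += last.velocity[i] * latencySeconds;
    return out;
}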
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
Personally, I find P2P to be more trouble than it's worth for most cases. However, if your game is physics heavy, and a user is hosting the game, then one user will have to play on the "server" instance, rather than run two instances on the same machine.

Regarding sync in a racing game, that's a really tricky problem. You can't really get rid of latency, and because position matters in racing, you have to forward extrapolate other entities when displaying, or delay the user's input, or both.


I think P2P will really become the de facto standard for distributed multiplayer games, although consistency is hard to achieve.
On the PSP I have no problem with it, but PSP games are fairly easy ;)

With PC and console online services it is a lot more challenging :)

I think a good solution would be to apply input buffering (aka delta causality, bucket synchronization, or local lag) and then use AI to extrapolate. This is not ideal, but I'm not able to find anything better.
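A very rough sketch of the buffering part, assuming a fixed local lag of about 100 ms at a 60 Hz simulation; the container layout and names are made up:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct PlayerInput { int16_t steering, throttle, brake; };

class InputBuffer {
public:
    static const uint32_t kLocalLagTicks = 6;   // ~100 ms at 60 Hz

    // Local inputs sampled at 'now' are scheduled for now + kLocalLagTicks and
    // sent to the other peers tagged with that execution tick.
    void scheduleLocal(uint32_t now, int player, const PlayerInput& in) {
        push(now + kLocalLagTicks, player, in);
    }

    // Remote inputs arrive already tagged with the tick they must execute on.
    void push(uint32_t executionTick, int player, const PlayerInput& in) {
        buckets_[executionTick].push_back(std::make_pair(player, in));
    }

    // Inputs to execute on this tick; players missing from the bucket are the
    // ones handled by the AI/dead-reckoning fallback.
    std::vector<std::pair<int, PlayerInput>> pop(uint32_t tick) {
        std::vector<std::pair<int, PlayerInput>> out = buckets_[tick];
        buckets_.erase(tick);
        return out;
    }

private:
    std::map<uint32_t, std::vector<std::pair<int, PlayerInput>>> buckets_;
};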

Given that the interaction is limited (only collisions), it is possible to give each peer a little authority over its local game view.
In a racing game, what matters is position. Maintaining the same position across all clients may be difficult if the AI runs without "controls". However, what is really important is not the "real" position but the "relative" position between vehicles. Resyncing the position by forcing a "warp" may be a solution in some cases; it is nothing new.

Again, there is also Glenn's approach for c/s: advance each peer's clock by latency plus jitter, so that inputs arrive at the server just in time. However, while this is a benefit in terms of server computational effort (no rollback), I wonder about the price for the clients.
The server response takes roughly RTT/2 to reach the client again, and because the client is already ahead by RTT/2, a full RTT-wide interpolation is needed;
in classical c/s it is only RTT/2. So consistency has not improved on the client side.
And what if the client itself runs robust physics? Does it need to roll back or restart? I'm starting to think I'm missing something about this approach.
Can anyone explain to me what the client side really does in that case?

Cheers
Cristian
I would also like to return to the original topic question about bandwidth allocation, in order to explain my problem better.

I need a method to adapt the update rate for each client based on network conditions. I can add congestion control on a per-peer, end-to-end basis (ala TCP, for example, or by using RTT variation).

Now suppose a peer has only, say, 128 Kb/s of upstream bandwidth and 640 Kb/s downstream. Surely not all of that physical bandwidth is usable; suppose 90 Kb/s of it is.
Because of the upstream limit, I need a way to adjust my update rate so that I can send data to all peers while staying below that threshold.
How can I do that?

In client/server, I think the server would allocate fractions of its bandwidth to each client based on some criteria. I thought the criteria could be (see the sketch after this list):
1- Its own available physical (??) bandwidth
2- The maximum available end-to-end bandwidth for each peer
3- The maximum update rate
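As a first cut at criteria 1 and 3, here is a hypothetical sketch that splits the usable upstream budget evenly across the peers being sent to and derives the snapshot rate that fits in each share; weighting the shares by per-peer link quality (criterion 2) would be the next refinement. The figures in the comments are just the ones from this post:

#include <algorithm>

// Per-peer snapshot rate that fits inside an even share of the usable
// upstream budget, clamped to the maximum useful rate.
double snapshotRateFor(double usableUpstreamBytesPerSec, // e.g. 90 kbit/s ~= 11250 B/s
                       int peerCount,                    // peers we send state to
                       double snapshotBytes,             // ~200 B plus protocol overhead
                       double maxRateHz)                 // e.g. 30 Hz
{
    double perPeerBudget = usableUpstreamBytesPerSec / peerCount;
    return std::min(maxRateHz, perPeerBudget / snapshotBytes);
}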

But in peer to peer the problem seems more difficult, because data comes from multiple sources.
Any hint?

Sorry for posting it again, I hope I was able to explain it better this time.

;)
Do what TCP does.

- Keep an estimate of available bandwidth.
- Whenever you detect a lost packet, you halve the estimate.
- Whenever you get an ack for a sent packet, you add some constant number to the estimate.

You can do it trivially on a per-peer basis. However, it might be possible to also do it on a per-machine basis, so you keep both estimates of available bandwidth to each client, and total available bandwidth. That way, you may be able to prevent gyration of bandwidth allocation between different clients that each have similar characteristics, while being able to throttle a client that truly has less available bandwidth.

On the other hand, if you assume that available up bandwidth on any one client is always less than or equal to available down bandwidth on any other client, then you can get away with ONLY doing global bandwidth management -- don't track per-client, only track available bandwidth overall.
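A minimal sketch of that additive-increase / multiplicative-decrease idea, with made-up constants; one instance would be kept per peer and one for the whole machine, and the rate actually used for a peer would be the minimum of its own estimate and its share of the global one:

// AIMD bandwidth estimate, fed by the ack/loss feedback of the reliable UDP layer.
struct BandwidthEstimate {
    double bytesPerSecond = 16000.0;            // initial guess

    void onAck()  { bytesPerSecond += 250.0; }  // additive increase per acked packet
    void onLoss() { bytesPerSecond *= 0.5;  }   // multiplicative decrease on loss
};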
enum Bool { True, False, FileNotFound };

