Multiplayer Physics

Started by
17 comments, last by hplus0603 17 years, 10 months ago
Does anyone know where I could find information on this subject? I searched the article databases of Gamasutra and GameDev and found nothing. I'm looking for any information on this subject. Help would be greatly appreciated.
As far as I know there aren't any textbooks on the subject. Physics for multiplayer may or may not be the same as for single player; it really depends on what you want to do, in what field, and under what architecture.

A good rule of thumb is that the more physics-aware entities you want to handle, on a simpler architecture, the cruder your simulation has to be. Common sense, really.

There are MANY good resources on distributed simulations (which is basically what we're talking about here); hplus mentioned one in my thread on event propagation and attenuation (check my profile). Can't really help much more without knowing the problem field.


Check out the Forum FAQ, it has some pointers, and a link to the "zen of networked character physics" thread for more info.
enum Bool { True, False, FileNotFound };
Yes, I read all of that, and all of what it links to as well. It's good stuff, but I want more! Especially if there's an article that goes into detail on how it's done in a game. (Any game is fine, as long as it has physics.)
You might want to try asking makers of physics APIs this question, like Havok.
I have also been playing Urban Chaos on PS2, and that uses physics objects with ragdolls in multiplayer games. Judging by the copyright screen at the start of the game, it uses ReplicaNet ( www.replicanet.com ), so you could try asking their technical support.
I'm looking for information on this too for our next project. I have spent days searching and reading hundreds of academic papers directly or indirectly related to the subject. Unfortunately, most of them don't transfer very well to the real world, or they don't provide much beyond the basics.

There were some round-table discussions at GDC 2006 called "Next Generation Multiplayer: Networking Highly Physical Game Worlds." Unfortunately, I don't know what was discussed there; being in the middle of a crunch, there was no way to attend, and as far as I know nothing has been published.

I guess there are no really magical tricks to make it work, though. A decision has to be made about who is going to send each object. Some prioritization has to be done to prevent bandwidth overflow. The rest is just standard dead reckoning. The first two tasks can be either hard or easy, depending on how complicated you want to make them.
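The "standard dead reckoning" mentioned here is just extrapolating a remote object forward from its last received state until the next update arrives. A minimal sketch (all names are illustrative, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

// Last authoritative state received over the network for a remote object.
struct RemoteState {
    float px, py;   // position at the time of the snapshot
    float vx, vy;   // velocity at the time of the snapshot
    float time;     // local time when the snapshot applies
};

// Standard dead reckoning: extrapolate position linearly from the last
// known state. When a fresh update arrives, RemoteState is replaced and
// extrapolation restarts from there.
void deadReckon(const RemoteState& s, float now, float& outX, float& outY) {
    float dt = now - s.time;
    outX = s.px + s.vx * dt;
    outY = s.py + s.vy * dt;
}
```

Real implementations usually blend the extrapolated position toward each new update instead of snapping, to hide the correction.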

However, there's a good chance that you can just do all the physics locally for the dynamic objects. The physics will still affect the gameplay: you have to avoid hitting the objects, or you can use them to your advantage. The remote players might see a slightly different view of the world, but in most cases that won't bother them much, since they don't know exactly what the other player's world looks like. You might see a player avatar jump a bit when hitting an "invisible" object, or see him pass through an object, but depending on the game these things can be acceptable.

This is the way we did it in FlatOut, and how we are doing it in FlatOut 2. The game has thousands of physics objects.
In our current project, we use a technique that might be a bit unconventional.

Basically, only the inputs from the players get sync'd over the net, and nothing else. If the physics engine is 100% deterministic, this information is sufficient to calculate the current state of the game.
In practice, we use two instances of the physics engine. One for simulating the "official" game based on the inputs from all the players, and the other one to let the game run (i.e. extrapolate) between network updates.
When the official simulation has done an update, all positions and velocities of the physical objects are transferred to the simulation that the player sees.

The main drawback, of course, is that the physics needs at least double the CPU time. But on the other hand, there is very little network traffic.
That's known as the "lockstep" model. It has the main benefit of reducing network bandwidth, as you say, and there are several games that use it.

However, there are various things you need to take into account:

- How to get the player actions responsive, while staying consistent with the distributed simulation (the round-trip problem).
- What to do if you lose a packet; if you use UDP, that means state loss; if you use TCP, that means that you have a period of time without an update, followed by updates for steps in time you've already displayed to the user.
- How to make a heterogeneous system actually be 100% deterministic. It's hard enough across various Pentiums and AMDs on a single OS and compiler; if you want to include other platforms, it's Really Hard. Integer-only simulation might help here.
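The integer-only suggestion in the last point usually means fixed-point arithmetic: integer operations are bit-identical on every CPU, sidestepping x87/SSE precision differences. A toy 16.16 sketch (not from any shipped engine):

```cpp
#include <cassert>
#include <cstdint>

// 16.16 fixed-point: 16 integer bits, 16 fractional bits.
typedef int32_t fixed;
const fixed FIXED_ONE = 1 << 16;

fixed toFixed(int v) { return (fixed)v << 16; }

// Multiply via 64-bit intermediate to avoid overflow, then shift back.
fixed fixedMul(fixed a, fixed b) {
    return (fixed)(((int64_t)a * b) >> 16);
}

// One deterministic integration step: pos += vel * dt, all in fixed-point.
// The same inputs produce the same bits on every platform and compiler.
fixed integrate(fixed pos, fixed vel, fixed dt) {
    return pos + fixedMul(vel, dt);
}
```

The trade-off is reduced range and precision, and having to convert at the rendering boundary, but determinism comes for free.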

I think a lot of us would be interested if you could describe how you solve these three problems in your particular game.
enum Bool { True, False, FileNotFound };
Honestly, I haven't solved all of those problems yet. I've implemented the synchronization methods, but never really tested them over the net on different computers.

So all I can say at the moment is how I will approach these problems:

a)
Between the network updates, the simulation runs as in single-player mode, so the player can steer his car (it's an action racing game, by the way) without any lag. In addition, because there's so little traffic, it is possible for every client to send its input data to all the other clients, which makes things more responsive too. Quite a big problem is scripts that have an influence on the game state — for example, when a car collects a pickup goodie. That may only happen when the official simulation runs, so that the car picks up the goodie in the exact same frame on every computer.

b)
Packet loss cannot be tolerated; that would introduce an accumulating error, which is not acceptable.
If a client receives an update that is, say, 100 ms behind real time, the simulation that the player sees has to rewind to that position. Based on the inputs he made in the last 100 ms, the simulation then has to catch up to real time again (everything happening in one frame).
I know that sounds quite expensive, but I run the physics constantly at 60 fps, so in the worst case there would be 180 updates per second, which should be OK for modern CPUs.
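The rewind-and-catch-up step described above might look like the following; the state and input types are placeholders, since the post doesn't show the poster's actual code:

```cpp
#include <cassert>
#include <deque>

// Toy 1-D state so the rewind logic stays visible.
struct State {
    int frame;
    int pos;
};

// Deterministic step: apply one frame of local input.
State step(State s, int input) {
    return State{ s.frame + 1, s.pos + input };
}

// On receiving an authoritative state that is behind the displayed frame,
// rewind to it and replay the buffered local inputs (the "100 ms" of
// history) all within a single visible frame.
State rewindAndCatchUp(State authoritative,
                       const std::deque<int>& inputsSinceThen) {
    State s = authoritative;
    for (int input : inputsSinceThen)
        s = step(s, input);  // at 60 Hz, 100 ms is only ~6 replayed steps
    return s;
}
```

The inputs buffered since the authoritative frame are exactly what makes the catch-up possible, so the client must keep that history until each frame is confirmed.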

c)
The platform, compiler and OS will not change. So the question is: do CPUs (AMD and Intel) differ in their floating-point results? Rounding errors are not important, as long as they occur identically on all chips. I searched the web for this topic but couldn't find anything, so I assumed the problem doesn't exist :p. Well, what else would the IEEE standard be good for?


PS: Sorry for disturbing this thread
Quote:Do CPUs (AMD and Intel) differ in their floating point results?


Yes. By default, Intel CPUs use 80 bits of internal precision (for the FP stack) and AMD uses 64. There is also the problem of various libraries (OpenGL, DirectX, etc.) possibly changing the rounding mode and/or internal precision bits from underneath you. The best solution is to slam these bits into the control word right before you run your simulation loop, once every step.
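Pinning the floating-point state at the top of every step can be sketched portably with `<cfenv>`. This only covers the rounding mode; the x87 precision-control bits (64 vs. 80 bit) need a platform-specific call such as MSVC's `_controlfp`, which is omitted here:

```cpp
#include <cassert>
#include <cfenv>
#include <cmath>

// Re-assert the expected rounding mode before each simulation step, in
// case a driver or library changed it behind our back. NOTE: this sets
// rounding only; x87 internal precision needs a platform-specific
// control-word write (e.g. _controlfp on MSVC) and is not shown.
void pinFpuState() {
    std::fesetround(FE_TONEAREST);
}

// Demonstration that the dynamic rounding mode really changes results:
double roundWithMode(double x, int mode) {
    std::fesetround(mode);
    double r = std::nearbyint(x);   // honours the current rounding mode
    std::fesetround(FE_TONEAREST);  // restore the default
    return r;
}
```

Calling `pinFpuState()` once per step is cheap insurance; a single mismatched control word on one client is enough to desynchronize a lockstep simulation permanently.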
enum Bool { True, False, FileNotFound };

