Baiame

Synchronized input-based networking- computational load?



I've been thinking about implementing a game networking system in which every player's inputs are sent to all other players (as opposed to more concrete game state such as position and velocity), and those inputs are then applied to every player on every machine involved. What draws me to this approach is its low maintenance and bandwidth. Although it's not where I got the idea, I read The Internet Sucks, and it's helpful. But Lincroft seemed to have more problems with networking than anything else; what worries me more is the computational cost of this technique.

Since the server and client machines must maintain a synchronized game state, and there are considerable delays between the creation of input and its receipt on other machines, clients and server alike must basically go back in time and re-apply old input data to work back up to the current state. This means they may have to run up to about 10 physics updates per frame, so around a thousand per second. In my case there are several hundred rigid bodies (most of which would be inactive, however). It seems impossible that the CPU could keep up. Has anyone tried this before, or have any other ideas? Thanks.

Of course, one option would be to set up an incredibly complex system which quickly simulates rigid bodies against only the environment in cases where those bodies couldn't possibly have interacted with others, but that's beyond my abilities. I'm not sure it'd be enough of an optimization anyway.
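The rewind-and-replay loop described above can be sketched roughly like this. This is a toy 1-D model: `step` and the integer "inputs" are stand-ins for a real fixed-timestep physics update, not anything from an actual engine.

```python
# Toy sketch of rewind-and-replay input synchronization. State is a
# single 1-D position per player; step() stands in for real physics.
def step(state, inputs):
    # Apply one fixed-timestep update: each input is a position delta.
    return {p: pos + inputs.get(p, 0) for p, pos in state.items()}

def resimulate(snapshot, input_history, from_tick, to_tick):
    """Rewind to `snapshot` (taken at from_tick) and re-apply buffered
    inputs up to to_tick. A late-arriving packet means replaying
    several ticks like this every frame."""
    state = dict(snapshot)
    for tick in range(from_tick, to_tick):
        state = step(state, input_history.get(tick, {}))
    return state

# A late input for tick 0 arrives: rewind and replay 3 ticks.
history = {0: {"p1": 1}, 1: {"p1": 1}, 2: {"p1": 1}}
state = resimulate({"p1": 0}, history, 0, 3)
```

The cost concern in the post is exactly this inner loop: every replayed tick is a full physics update, and with ~10 replays per frame the work multiplies accordingly.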

An important factor is: what kind of game is it? There is no single catch-all solution that will work for every game type.

As already said, it depends on what kind of game it is. Client-server beats P2P almost every time bandwidth-wise. It relies on only one computer having solid processing power and download/upload; everyone else is free to have their own latency, packet loss, and what have you. Depends on the game, though.

Quote:
Original post by Baiame: Since the server and client machines must maintain a synchronized game state

True for both client-server and P2P.

Quote:
Original post by Baiame: client and server machines alike must basically go back in time and re-apply old input data to get all the way back to the current state; this means that they must run up to about 10 physics updates per frame, and so around a thousand per second


I don't think you understand the concept of a client-server system. Clients send input and the server sends out full state/update packets to players. Culling on the server can make this task trivial.

Quote:
Original post by Baiame: Of course one way would be to set up an incredibly complex system which quickly simulates only rigid bodies against the environment in cases where the bodies couldn't possibly have interacted with others, but that's beyond my abilities. Not sure it'd be enough of an optimization anyway.

P2P is far from being an optimization.

Put it this way: Player 1 hits a rigid-body block and Player 2 hits a rigid-body block, and the blocks move toward one another. They send their input to all other players. Timestamps will have to be used to know when things happened. If one player receives the message at 100 ms and another receives it at 250 ms, and the two blocks have already collided, well, you can see the complexity. Whose client is right after subtle errors compound? Also imagine one client lagging for 5 seconds and requiring a full state update. Who sends out the full state?

In client-server, the players send input and the server runs one authoritative simulation. Player 1 sends a packet saying it wants to move forward and Player 2 does the same. The server processes the packets, performs the physics step, and determines that the players hit the blocks. It marks their delta variables and, when the update finishes, sends out small packets telling everyone within range the positions and velocities, with a timestamp.

Which system seems more stable: everyone trying to synchronize simulations, or all the clients synchronizing to one true simulation?
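The authoritative-server flow described above might look something like this in miniature. The 1-D physics, the field names, and the packet shape are all illustrative assumptions, not any particular engine's API.

```python
# Sketch of an authoritative server tick: queued client inputs go in,
# one physics step runs, and only changed entities go out as a delta.
def server_tick(positions, velocities, pending_inputs, dt=0.05):
    """Apply queued inputs (accelerations), advance one physics step,
    and return the delta packet: {player: (position, velocity)} for
    every entity whose state changed this tick."""
    for player, accel in pending_inputs.items():
        velocities[player] = velocities.get(player, 0.0) + accel * dt
    delta = {}
    for player, vel in velocities.items():
        if vel != 0.0:
            positions[player] = positions.get(player, 0.0) + vel * dt
            delta[player] = (positions[player], vel)
    return delta
```

The point of the design is that clients never need to agree with each other, only with the server's one true simulation; the delta packet is small because untouched entities are simply absent from it.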

You don't have to send state packets from the server; you can still send an input stream. That saves a lot of bandwidth, but does cost processing -- both on the server, and on the other clients.

If you're just displaying simple dead reckoned entities, then the most simulation you have to do on the client for the entity is a little bit of testing so you don't go through the ground. If you run the full simulation (based on inputs) of entities on servers and clients, you have to run the full character physical simulation, which may be significantly more expensive.
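The cheap client-side alternative mentioned here, simple dead reckoning with a ground test, can be sketched as follows (the clamp stands in for the "little bit of testing so you don't go through the ground"; all values are illustrative).

```python
# Dead reckoning: extrapolate the last known state linearly instead of
# running the full physical simulation on the client.
def dead_reckon(pos, vel, elapsed, ground=0.0):
    """Extrapolate a 2-D position by its last known velocity, clamping
    the vertical coordinate so the entity doesn't sink into the ground."""
    x, y = pos
    vx, vy = vel
    return (x + vx * elapsed, max(y + vy * elapsed, ground))
```

Contrast this with input-based synchronization, where every remote entity must run the full character simulation every tick.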

If you want to know one way of building an entire system around input-synchronous simulation, while hiding latency wherever possible, see US Patent 6628287.

[Edited by - hplus0603 on February 1, 2008 1:39:45 PM]

Quote:
Original post by TheGilb
An important factor is - what kind of game is it? There is no single catch all solution which will work for every game type.

It's a first-person shooter with up to 64 players (no, nothing like the Battlefield games).
Quote:
Original post by Sirisian
I don't think you understand the concept of a Client-Server system. Clients send input and server sends out full state/update packets to players. Culling on the server can make this task trivial.

Not with the method I'm talking about. I guess my explanation wasn't clear enough; please glance through The Internet Sucks and you'll get the idea. Basically, nothing but inputs are sent from clients to the server, and from the server to clients. I can't do much culling, as it's a requirement that every player can potentially see every other player on the map (I could send lower-fidelity info for more distant players, though, if I went with the "server sends absolute states" approach, which seems likely at this point). EDIT: This part wasn't very clear. I meant that if I opted for the traditional approach, I couldn't do much culling; if I went ahead with what I'm planning and talking about here, I couldn't do ANY.
Quote:
Original post by Sirisian
P2P is far from being an optimization.

Put it this way. Player 1 hits a rigid body block and Player 2 hits a rigid body block and the blocks move toward one another. They send their input to all other players. Timestamps will have to be used to know when things happened. If one player receives the message at 100 ms and another receives it at 250 ms and the two blocks already hit. Well you can see the complexity. Who's client is right after subtle errors compound. Also imagine if one client lags for 5 seconds and requires a full state update. Who sends out the full state?

Read again: what I described in that paragraph was an idea for a purely machine-side physics optimization; it had nothing to do with client/server vs. P2P (it would apply either way). But what you describe wouldn't in itself be a problem. The simulation runs at a fixed timestep on the client machines and the server, so synchronization isn't conceptually difficult. Update numbers would be used to know when things happened. Both players' simulations would revert to old game states to apply the new input information, then run forward from there up to the present time; bugs notwithstanding, there are no synchronization issues.
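Keying the rollback by update number rather than wall-clock time, as described, could be organized along these lines. This is a toy sketch: `LockstepSim.step` is a placeholder for real fixed-timestep physics, and the snapshot depth is an arbitrary assumption.

```python
from collections import deque

# Rollback keyed by update number: keep recent snapshots, and when an
# input stamped with update number N arrives late, rewind to the
# snapshot at N and replay every stored update back to the present.
class LockstepSim:
    def __init__(self, state, keep=32):
        self.tick = 0
        self.state = state
        self.inputs = {}                     # tick -> {player: input}
        self.snapshots = deque(maxlen=keep)  # (tick, state) pairs

    def step(self, state, inputs):
        # Placeholder physics: each input is a 1-D position delta.
        return {p: v + inputs.get(p, 0) for p, v in state.items()}

    def advance(self, inputs):
        self.snapshots.append((self.tick, dict(self.state)))
        self.inputs[self.tick] = dict(inputs)
        self.state = self.step(self.state, inputs)
        self.tick += 1

    def late_input(self, tick, player, value):
        # Merge the late input, rewind, and replay up to the present.
        self.inputs[tick][player] = value
        for snap_tick, snap_state in self.snapshots:
            if snap_tick == tick:
                state = dict(snap_state)
                for t in range(tick, self.tick):
                    state = self.step(state, self.inputs[t])
                self.state = state
                return
```

Because every machine runs the same fixed timestep and numbers its updates identically, the replay is unambiguous; the open question from the original post is only whether the CPU can afford the replays.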

In any case I had pretty much already decided on client-server, as:
1. The bandwidth requirements for 64 players P2P are still too high with this method.
2. Late joining and player dropping (due to packets not being received in time) are much easier to handle.
Quote:
Original post by hplus0603
If you want to know one way of building an entire system around input-synchronous simulation, while hiding latency wherever possible, see US Patent 6628287.

Thanks, but the grinding repetition made it impossible for me to read all the way through. From what I saw, though, they didn't do anything that Lincroft didn't.

What really drew me to this approach in the first place was consideration of the physics system. The physics will be roughly as complex as those in Half-Life 2. Now, it's bad enough that with 64 players you need to send on the order of 64^2 * packet-size bytes each update just for the player info, but in this case you also need to send a lot of info pertaining to the rigid bodies. If, on the other hand, you only send player inputs, you don't need to send ANY info about the bodies.
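For a rough sense of that trade-off, assuming a 20 Hz tick and made-up per-player packet sizes (these figures are illustrative, not measurements):

```python
# Back-of-the-envelope server upstream for 64 players, comparing
# state-based and input-based replication. All sizes are assumptions.
TICK_RATE = 20   # updates per second
PLAYERS = 64

def state_based_upstream(bytes_per_player_state=16):
    # Server sends every player the state of every other player.
    return PLAYERS * (PLAYERS - 1) * bytes_per_player_state * TICK_RATE

def input_based_upstream(bytes_per_input=4):
    # Server relays every player's input to every other player; rigid
    # bodies cost nothing on the wire because clients derive them.
    return PLAYERS * (PLAYERS - 1) * bytes_per_input * TICK_RATE
```

Under these assumptions the input stream is smaller by exactly the ratio of the packet sizes, and, more importantly, it stays flat no matter how many rigid bodies are active, which is the whole appeal of the approach.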

So what I need is either a super-fast physics system (I estimate that 500,000 CPU cycles per ~50 ms timestep update would do it), or a way of handling all the information not directly about players well enough, or something completely different.
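As a sanity check on that budget (assuming a 3 GHz core, which is a modern-CPU assumption, and the roughly 1000 replayed updates per second mentioned in the original post):

```python
# Fraction of one CPU core consumed by physics if each replayed update
# costs the estimated 500,000 cycles. All figures are assumptions.
def cpu_fraction(cycles_per_step=500_000, steps_per_second=1000,
                 cpu_hz=3_000_000_000):
    return cycles_per_step * steps_per_second / cpu_hz
```

At these numbers physics would eat about a sixth of a core, so the estimate is tight but not obviously impossible; the hard part is making several hundred rigid bodies fit in 500,000 cycles per step.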

Quote:
please glance through The Internet Sucks


That article is 9 years old, and the lessons it draws are even older.

That doesn't make them less valid, but 12 years is a very long time. Processor computing power has increased 256 times; graphics performance probably by 1024-2048 times.

The Internet has gone from mostly dial-up PPP to broadband, where streaming 5-20 kB/s is somewhat viable, although practical stream rates remain around 3 kB/s. It's the age of Facebook, YouTube, and BitTorrent, after all.

Quote:
The physics will be roughly as complex as those in Half-Life 2. Now, it's bad enough that with 64 players you need to send 64^2*packet size bytes each update just for the player info, but you also in this case need to send a lot of info pertaining to the rigid bodies. If, on the other hand, you only send player inputs, you don't need to send ANY info about the bodies.


If it's of the same complexity, then how does HL2 do it? If they were sending 4-byte packets, that would be 280000 gigabytes of data per update. But still, their servers run just fine.

Quote:
from what I saw they didn't do anything that Lincroft didn't


Actually, the novel part of that invention is sliding the reference time frame around for different objects, and dealing with how objects interact based on their client-side perceived time frame. Whereas Tie Fighter had one database of "forward extrapolated" entities it used for rendering, that invention has the ability to continuously slide entities in relative time, and does so based on various criteria.

Anyway, I don't think CPU power has increased 256 times since 1999. I believe a top-of-the-line CPU then was something like an 800 MHz Pentium III (a year or two before the Pentium 4 came out) with 133 MHz memory. Current CPUs run at 3 GHz with two or four cores, and 1333 MHz (effective) memory. Thus you can claim a factor of 10, but not 100. Basically, once the second-generation Pentium 4 came out, Moore's law stopped translating into computer speed scaling, and we're now down to a measly 15-25% speed improvement (per single core) per year.
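Making that estimate concrete (clock rate times cores only; this deliberately ignores IPC and memory improvements, so it's only a rough bound):

```python
# hplus0603's scaling argument as arithmetic: 800 MHz single core then,
# 3 GHz with four cores now, counting nothing but clock and core count.
def speedup(old_hz=800e6, new_hz=3e9, new_cores=4):
    return new_hz * new_cores / old_hz
```

Even crediting all four cores, which lockstep physics cannot always use, that lands around a factor of 15, far short of the 256x claimed above.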

So, the result is still the same: When using input-synchronous client/server simulation with an authoritative server, you become limited on the number of entities you can deal with on the server. However, 64 entities should be no problem.

The problem with using existing physics engines like Havok or PhysX (or even ODE) is that they are not guaranteed deterministic. That means the lockstep assumption goes out the window -- the simulations will slowly diverge. Now, you may call me partial, because my company is in the business of licensing an enterprise virtual world platform based on exactly that: input-synchronous client/server simulation with an authoritative server. However, I'm just telling it like it is :-)
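The divergence described here starts from something as small as floating-point addition not being associative: two builds that merely sum contact forces in a different order produce results that differ in the last bits, and lockstep amplifies that difference every tick.

```python
# Floating-point addition is not associative: grouping the same three
# terms differently changes the last bits of the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
diverged = (a != b)   # True: the two sums differ by one ulp
```

This is why a lockstep design needs an engine that guarantees bit-identical results across machines (often via fixed-point math or strictly ordered floating-point operations).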

I believe HL2 uses entity updates for both players and physical objects. It also uses more than 5 kB/sec of bandwidth for a 16-person deathmatch with physical objects.

Things have changed a lot. With the server I've been developing (a test of bandwidth), I plan to average around 400 delta updates (give or take) on a 28K connection, i.e. 3584 bytes/s (best case, meaning the player is only playing the game and has no interruptions from their ISP).

"Havok or PhysX (or even ODE) is that they are not guaranteed deterministic."
So true. Few physics engines are. I had to engineer my own for 2D (it's rather poor, but gets the job done). Deterministic physics = easy synchronization: a lot less worrying about things like "did the grenade bounce off the wall and hit that object exactly like it did for everyone?" With timestamps you end up knowing exactly where everything should be at a given time by rolling the simulation back and forth (tricky; I still haven't gotten to that part yet).

In the whole scheme of things, I prefer trading CPU for bandwidth, since I figured out that delta and full-state updates are actually really cheap operations per player (in 2D, anyway).
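A minimal field-level delta in the spirit of what's described, diffing current entity states against the last acknowledged ones and sending only what changed (the field names are illustrative):

```python
# Delta update: compare current entity states against the previously
# acknowledged states and emit only the fields that changed.
def delta_update(previous, current):
    """Return {entity: {field: value}} for every changed field."""
    delta = {}
    for ent, fields in current.items():
        changed = {k: v for k, v in fields.items()
                   if previous.get(ent, {}).get(k) != v}
        if changed:
            delta[ent] = changed
    return delta
```

The per-player cost is one dictionary diff, which is why trading this bit of CPU for bandwidth is cheap; a full-state update is just the degenerate case where `previous` is empty.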

I wouldn't go by what an article that old says about TCP, either. In my experience TCP is perfectly fine for twitch-based games (not as good as something like reliable UDP, but it works).

Quote:
Original post by Antheus
It doesn't make them less valid - but 12 years is a very long time. Processor computing power has increased 256 times. Graphics performance probably by 1024-2048 times.

But the complexity of the physical simulation required is probably greater than 256 times, considering that proper Newtonian mechanics (with lots of collision) are required, whereas XWvTF presumably had an arcade flight model.
Quote:
Original post by Antheus
Internet has gone from mostly dial-up PPP to broadband, where streaming 5-20k is somewhat viable, although practical stream rates remain at 3k. It's the age of Facebook, YouTube and BitTorrent after all.

But if I minimize the server's upstream requirements, a hell of a lot more people can host reasonably large servers. Also, the worst case (i.e. a lot of rigid bodies AND players in the same area) is the same as the average case in terms of networking; it only requires more CPU time on all machines.
Quote:
Original post by Antheus
If they were sending 4 byte packets, that would be 280000 gigabytes of data per update.

I don't know how you figure that, but
Quote:
Original post by Antheus
If it's of same complexity, then how does HL2 do it?

no idea. It'd be good to know.

Quote:
Original post by hplus0603
Actually, the novel part of that invention is sliding the reference time frame around for different objects...

Thanks for summing it up for me.
Quote:
Original post by hplus0603
So, the result is still the same: When using input-synchronous client/server simulation with an authoritative server, you become limited on the number of entities you can deal with on the server. However, 64 entities should be no problem.

What do you mean by "entities"? I need several hundred rigid bodies.
Quote:
Original post by hplus0603
The problem with using existing physics engines like Havok or PhysX (or even ODE) is that they are not guaranteed deterministic. That means that the lockstep assumption goes out the window -- the simulations will slowly diverge.

I'm aware of that, but I believe the Newton physics engine is deterministic, and it happens to be the easiest to integrate on account of the engine I'm using (TrueVision3D 6.5).
Quote:
Original post by hplus0603
I believe HL2 uses entity updates for both players and physical objects. It also uses more than 5 kB/sec of bandwidth for a 16-person deathmatch with physical objects.

That's consistent with my experience in HL2 multiplayer mods: things get horribly laggy when a couple dozen rigid bodies are in roughly the same place.
