
# Broadcasting (Multicasting) based multiplayer game, LAN/WAN?


48 replies to this topic

### #21 Antheus  Members   -  Reputation: 2397


Posted 28 August 2009 - 06:50 AM

Quote:
 Original post by Sirisian
The waiting part in P2P is more for a lock-step kind of game. I don't see the point of waiting for the slowest player when using P2P. You could set it up so that players just stop sending packets to them. I mean, there's no rule saying you have to keep slow clients in sync.

If a client is slow, they cannot keep up with others.

If the simulation is deterministic and fair, you need to wait for the slowest peer, or kick those that are slow.

Otherwise, they will always be too slow, they will keep falling behind, and you either keep sending them baselines, increasing network load, or you disconnect them.

If the simulation is not deterministic, then you end up with an unfair gameplay model, or at least something not entirely suitable for a quick-paced FPS (60Hz, accurate collisions, accurate physics). Unlike the server-based model, where clients are dumb and just renderers (with extrapolation), every peer has to carry its weight, exactly the same as all the others.

At the very least, I see no other solution for slow peers. They are unable to keep pace with the rest of the simulation, and the others will never be able to consider their actions, since by the time they receive them, the rest have moved on. MMO models obviously exist, but those are not fully connected.

Each peer would obviously behave exactly like current servers do: keeping time in the past, compensating for latency and extrapolating. That would include tolerating slow clients or occasional lag spikes with minimal impact on each player, but a degraded experience for the slow peer.

Quote:
 Worse yet, imagine if he's telling clients A and B its position at the correct time interval, but is purposely delaying packets to client C so that he can get a quick kill. :P So many vulnerabilities I don't even know where to start.

Which is why you need to wait for the slowest peer before you advance the time step.

More accurately, the waiting doesn't need to be at that point. Each peer can advance even if they don't receive previous updates. When they do, they apply changes in the past, correct the state if needed, and so on.

But this again introduces warps and visual distortions, or causes other problems.
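
The "apply changes in the past" idea described above can be sketched as a rollback buffer. This is only an illustration: the class and the trivial `step` function are made up, and a real game would substitute its own deterministic simulation step.

```python
# Toy rollback sketch: keep a state snapshot per tick; when a late
# input arrives for an old tick, rewind to it and re-simulate forward.

def step(state, inputs):
    """Stand-in for a deterministic simulation step."""
    return state + sum(inputs)

class RollbackSim:
    def __init__(self, initial_state):
        self.history = {0: initial_state}  # tick -> state snapshot
        self.inputs = {}                   # tick -> inputs applied at that tick

    @property
    def current_tick(self):
        return max(self.history)

    def advance(self, local_inputs):
        # Advance even if some remote inputs haven't arrived yet.
        t = self.current_tick
        self.inputs.setdefault(t, []).extend(local_inputs)
        self.history[t + 1] = step(self.history[t], self.inputs[t])

    def receive_late_input(self, tick, remote_input):
        # A slow peer's input for an old tick: insert it in the past,
        # then re-simulate every tick up to the present (this correction
        # is exactly what shows up on screen as a warp).
        self.inputs.setdefault(tick, []).append(remote_input)
        for t in range(tick, self.current_tick):
            self.history[t + 1] = step(self.history[t], self.inputs.get(t, []))
```

The cost is visible in `receive_late_input`: the later the straggler's packet, the more ticks must be re-simulated, and the larger the visible state correction.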

And if any peer does cheat, an AoE-like fully deterministic model immediately has all the others break the connection. This model assumes that each peer maintains the full state of the entire world - unlike P2P MMOs, where each peer maintains the full state of a portion of the world, and acts as a dumb terminal for everything else.

[Edited by - Antheus on August 28, 2009 1:50:47 PM]

### #22 hplus0603  Moderators   -  Reputation: 5163


Posted 28 August 2009 - 07:14 AM

Actually, almost no PC game is Peer-to-peer. While some games allow users to host without a central server, that merely means that the user doing the hosting is the topological server. Starcraft, for example, is explicitly *not* a peer-to-peer game.

Quote:
 absolute optimisation and lowest possible hardware requirements

I understand engineering. What I'm trying to say is that those two requirements are actually contradictory in certain terms. If you want the lowest possible latency at all costs, then that's not optimization, because the overall total cost goes up -- you require everyone to have a great Internet connection. Engineering is the art of making smart trade-offs to deliver the most end-user value at the lowest possible cost. Low latency is "value," high bandwidth requirement is "cost." Bandwidth might also be considered a "hardware requirement," so reducing bandwidth means reduced hardware requirements.

If you're really into P2P networking for games, I suggest you check out the VAST project (a link to which is in the Forum FAQ).

### #23 Katie  Members   -  Reputation: 1313


Posted 28 August 2009 - 08:55 AM

"how does all the real-time streaming audio and video on the www work? ain't there radio and tv shows already broadcasting real-time over WWW"

This, I can help you with -- radio audio streaming works by using a combination of both a central server AND a peer-to-peer network. The users are arranged in a tree structure -- not only being consumers of the data, but also servers for nearby users. The users at the far end get the radio delayed, possibly by seconds, but no-one notices.

A similar system can be arranged to happen with live video, although the fan-out is much lower.

The streaming companies provide a backbone which guarantees that data can be delivered -- those systems step in when the user machines don't supply the data. In addition they marshal all of the various machines into the right architectures -- tell them where to go for better connections and so on.

Sorting out all of this is complicated and that's usually part of the core proprietary tech of the companies concerned.
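
The "delayed, possibly by seconds" figure above follows from the depth of the relay tree. A toy estimate (the audience size, fan-out and per-hop latency here are made-up illustrative numbers, not figures from any real streaming network):

```python
import math

def tree_relay_delay_ms(listeners, fan_out, hop_latency_ms):
    """Worst-case extra delay at the leaves of a peer-relay tree.

    Each peer re-serves the stream to `fan_out` nearby users, so the
    depth of the tree - and the delay at its far edge - grows only
    logarithmically with the audience size.
    """
    depth = math.ceil(math.log(listeners, fan_out))
    return depth * hop_latency_ms

# 100,000 listeners, each peer feeding 4 others, 150 ms per relay hop:
# the far end of the tree lags the source by roughly a second and a half.
print(tree_relay_delay_ms(100_000, 4, 150))  # 1350
```

The logarithmic depth is why this works for one-way streaming but not for a 60Hz game: a second-scale delay is invisible to a radio listener and fatal to an FPS.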

"Broadcast" packets just don't work on the wider internet because typically ISPs will just drop them at their border. Just because you say "please send this to everyone", doesn't mean they actually do.

### #24 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 12:47 PM

Quote:
Original post by stonemetal
Quote:
 Original post by gbLinux
nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10mil people who play WoW have? ...well, that hardware will do just fine. ah, forget it - just remember that clients would NOT need to do the same work as the server, much less indeed, and it would be SIMULTANEOUS, parallel.

How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it's OK not to have one, since you obviously know everyone involved.

my enquiry is theoretical; i'm interested in the theoretically FASTEST solution first. security, connection quality and other local or temporary limits are not important to me at this point. all i want is a side-by-side comparison of p2p vs server, given the same client distribution, with 4, 8, 16, 32, 100, 500.. clients. not to be "convinced" and "assured" - i want analysis and numbers.

sure, there will be problems as they scale - bandwidth becomes a problem for p2p - but as the future comes to pass, bandwidth may become cheap, allowing p2p to scale much more elegantly.

what is the average bandwidth cost with server-based games? what is the max number of clients current server-based games support? if today's servers can't do 100 clients at all, then no one should object that it would be too bandwidth-costly for p2p to do so; in my book that's still a win for p2p. if not for today, then for the happy children of the future. for now, i'm happy if p2p beats server with an 8 to 16 user setup.

### #25 Washu  Senior Moderators   -  Reputation: 4992


Posted 28 August 2009 - 01:14 PM

Quote:
 Original post by gbLinux
- what do you mean by "game is no bigger than available bandwidth"?
- what "things" did you measure if you or anyone never tried what i'm talking about?
- what is "last mile"? what is "long haul"? can that be communicated in plain english?

imagine you and your 10 friends in the same room or in the same city playing the game over the WWW while the server is in the next city. similar to your example, this can easily illustrate just how much faster P2P can be, and that's not mentioning the time the server loses on calculating all the physics for all the clients. my guess is that P2P online multiplayer games could be, ON AVERAGE, even more than twice as fast as server-based ones. it of course depends on location distribution and individual connections, but it seems only extreme cases of player/server distribution would benefit more from the server-based approach, if any.

i don't really get what you are saying there... clients should be able to communicate with each other as much and as fast as they can, on average, with the server. all the time the server wasted before being able to send packets is a pure gain here, which i believe translates into more than just better latency, but i'd like some experimental data.

You don't know what the terms "last mile" and "long haul" are, but you think you know how to do something faster networking wise? Considering those are two fundamental concepts in networking...I suggest doing a bit of reading up first.

The last mile is typically the physical connection from the CPE to the ISP; this is generally the slowest portion of an ISP's network, as it will have the largest amount of routing, QoS, filtering, and other traffic manipulation applied to it.

The long haul is when your traffic gets sent across long pipes, such as the transatlantic ones, which hence tend to have high bandwidth and high latency.

From a proper server farm (in an ISP data center) to a client, the slowest portion of the network, and the part with the highest latency, will generally be the last mile, assuming you're connecting to a geographically local server. This is because servers on that scale tend to be mounted in data centers that serve as backbone aggregates (meaning they are multihomed and often serve as routing centers for major ISPs). These places don't have "last miles" because they don't run as CPEs. Which means servers hosted off their lines have very minimal routing and QoS issues to contend with (mostly the internal networking of the local server cluster).

[Edited by - Washu on August 28, 2009 8:14:10 PM]

### #26 hplus0603  Moderators   -  Reputation: 5163


Posted 28 August 2009 - 01:58 PM

Quote:
 clients should be able to communicate with each other as much and as fast as they can do it, on average, with the server

That's simply not true, for two reasons:

1) Simple math. The overall number of connections, and thus packet overhead, grows by N-squared in a P2P system, but is static at 1 for c/s clients, and N for c/s servers (although each client sees N connections up and N down in P2P, and 1 up, 1 down in c/s). For most games, the size of the packet headers can easily be as big as or bigger than the size of the actual update data in a single "tick" update packet. I assume you're familiar with N-squared vs N vs constant requirements, and how they scale; if not, please use the appropriate Google or Bing services.

2) All consumer internet connections are asymmetric, where upload is typically 1/10th the bandwidth of the download. In a P2P set-up the bandwidth requirement is symmetric, whereas in c/s it's heavily skewed towards download for all the clients. Hence, all things being equal, c/s systems generally can allow clients to receive 10x more data than a P2P system, assuming the server has "infinite" outgoing bandwidth.
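
A rough sketch of what that asymmetry means for a P2P client. The packet size and line speeds below are illustrative assumptions, not measurements:

```python
# How many peers can one client feed on an asymmetric consumer line?
UPDATE_BYTES = 38   # assumed: 10-byte payload + 28-byte UDP/IP header
RATE_HZ = 60        # updates per second

def max_p2p_peers(upstream_bps):
    """Peers one client can send per-tick updates to on a given upstream."""
    bits_per_peer_per_sec = UPDATE_BYTES * 8 * RATE_HZ
    return upstream_bps // bits_per_peer_per_sec

# On a 1 Mbit up / 10 Mbit down ADSL line, P2P is capped by the upstream:
print(max_p2p_peers(1_000_000))  # 54 peers, even with tiny 38-byte packets
# ...whereas a c/s client uploads to exactly one endpoint and uses the
# 10x larger downstream to receive the server's aggregated state.
```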

I'm starting to not like your tone, and your lack of basic Googling before you start attacking answers people are trying to give you in this thread, though.

Having operated commercial client/server games for years, that even include voice chat, I can say that bandwidth costs are very low on our list of concerns. Ability to attract and retain people is about a 1000x bigger concern for the P&L statement (given that you want numbers). The nice thing is that more users, while needing more bandwidth, also mean more income to pay for the bandwidth, and user income grows linearly, whereas bandwidth cost grows basically logarithmically. Leasing, cooling, power and rack space for the servers are also bigger costs than bandwidth, but still smaller concerns by an order of magnitude than customer acquisition, retention and service.

[Edited by - hplus0603 on August 28, 2009 8:58:00 PM]

### #27 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 02:54 PM

Quote:

What did you mean by:
Quote:
 Original post by gbLinux
visual absurds that happen to each and every client with the server-based approach

In the server-client model the slowest player never affects the gameplay if the server is authoritative, unless the game is designed to make sure everyone is in sync and not lagging. (I've played a few games that do that).

yes, kind of.. here is the article i was referring to:

The Valve Developer Community, Source Multiplayer Networking:
http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking

Quote:
 About using P2P in a game it's possible. You might want to try it and see how well it works for you. So yeah I'd write up a test program and see if you like how it works.

so, after all... this p2p multiplayer has never been tried - is that what you're saying?
and... i'm supposed to invent this thing now? c'mon, someone must have tried something about it.

Quote:
Quote:
 Original post by gbLinux
- central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, i.e. the new positions of each entity, velocities, orientation and such, which each client then renders, perhaps utilising some extrapolation to fill in the gaps from, say, a network refresh rate of 20Hz to their screen refresh rate of 60 FPS. now, this would mean that the game can actually only run as fast as the slowest connected client can communicate with the server, right?

That doesn't even make sense. The server is never limited by the slowest connection. It's just getting input and sending out packets. At no point does it have a "wait for slowest client". If it doesn't get input it doesn't get input. The only way you could get that effect would be if you've lock-stepped input, which I've never seen done before (RTS games don't even do that as far as I know).

i was referring to what i explained later on - different client update rates make for an unrealistic simulation and cause space/time paradoxes; therefore, ideally, the game should run at the frequency at which the server can communicate with the slowest client. that is why there should be different servers for slow- and fast-connection clients.


Quote:
 I use the server-client model. It allows me to get input from clients and update physics then build individual packets with full/delta states for each client such that each packet is small and easy for even the lowest of bandwidth players. The added latency you mention isn't noticeable. You might want to make some measurements and see if it's worth it. I'd like to see some test with 8 players where the one-way latency is recorded between each and every client. Then compare it to the RTT time between the server and client model.

i'd rather read about someone else's measurements, but yeah that is exactly what i want to know.

till then i can use simple ABC example and plug some more numbers...

      B

 30km    s 17km

A           C

(A, B and C at the corners of an equilateral triangle with 30km sides; the server s at its centre, 17km from each client)

say, it takes one packet of information to communicate either plain client input or full entity state.

*** P2P, one frame loop
round-trip: 30km
packets sent: 2
total packets per client: 4

*** SERVER, one frame loop
round-trip: 34km
packets sent: 1
total packets per client: 4

this situation can change rapidly as the number of clients increases, but also depending on real packet sizes, as well as on how the upload/download speed limits imposed by the ISP affect the speed of outgoing and incoming packets. in p2p all packets are "upload" - does that mean the top speed would be limited by the client ISP's upload speed limit? this is where the server model might catch up with, if not overrun, p2p, i think... since the difference between upload and download speed is so great for most ADSL plans, as far as i know. any thoughts on this?
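
As an aside, the 17km in the sketch is just the centre distance of a 30km equilateral triangle (30/√3 ≈ 17.3km, which the example rounds down to 17). A quick check of the two path lengths:

```python
import math

side_km = 30.0                         # A, B, C: equilateral triangle sides
centroid_km = side_km / math.sqrt(3)   # vertex-to-centre distance, ~17.3 km

p2p_path_km = side_km                  # A's update travels straight to B
server_path_km = 2 * centroid_km       # A -> s -> B

print(round(centroid_km, 1))                  # 17.3
print(p2p_path_km, round(server_path_km, 1))  # 30.0 34.6
```

So for three evenly spread clients the direct peer path is indeed shorter than the via-server path (30km vs ~34.6km), before counting any server processing time - which is exactly the point being argued.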

### #28 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 03:24 PM

Quote:
Original post by hplus0603
Quote:
 clients should be able to communicate with each other as much and as fast as they can do it, on average, with the server

That's simply not true, for two reasons:

1) Simple math. The overall number of connections, and thus packet overhead, grows by N-squared in a P2P system [...]

2) All consumer internet connections are asymmetric [...]

ok, can you now please express the same thing with some real-world numbers and examples - say, something like ABC, only with 8 clients - and compare... use some average server upload bandwidth and some average client upload/download ISP limits, some average data/packet sizes... compare, and then it will be much easier to understand. can you do something like that, please?

i still believe there are lots of things which can turn out in favor of p2p, so ideally there would also be some article on the www where someone who actually tried it experimentally wrote down some numbers... practice can sometimes reveal the unexpected. after all, coincidence, luck and random chance are the main factors in the history of human inventions and discoveries.

### #29 Antheus  Members   -  Reputation: 2397


Posted 28 August 2009 - 03:36 PM

Here are some numbers: 7, 22, 45, 88, 102, 344.

Actually, the real ones are below:

Quote:
 this situation can change rapidly as number of clients increases,
The total number of packets in the network increases with n squared. So with 2 clients, there are 4 packets. With 16 clients, there are 256 packets. With 100 clients, there are 10,000 packets. Sent 60 times per second.

To be completely fair, the number is n*(n-1), but for the purpose of this discussion, this doesn't matter beyond 3 clients.

Quote:
 "upload", does that mean top speed would be limited by client ISP's upload speed limit?

Yes. Or better yet, by each user's subscription plan's limit (128kbit - several Mbit).

Quote:
 this is where server model might catch up, if not overrun p2p, i think... since the difference in upload/download speed is so great for most of the ADSL plans, as far as i know. any thoughts on this?

Due to wonders of mathematics, we can determine when this would happen. The formulas are given above. For each update (60 times per second)

Edit: The following is packet overhead, not actual usable data.

Server/client:
Server: N packets
Each of the N clients: 1 packet
Total number of packets: N + N

P2P:
Each of the N clients: N packets
Total number of packets: N * N

As far as number of packets goes:
N * N < N + N
N^2 < 2N
N < 2

In other words, P2P is only more efficient as long as there are at most two peers.

When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.

Bandwidth requirements, 64 players
P2P:
Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to 64 peers. This adds up to 38*60*64 bytes per second, or 145920 bytes per second. So each player needs to have ~1.2Mbit upstream.

S/C:
Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to server. This adds up to 38*60*1 = 2280 bytes per second, or approximately 20kbit upstream (a dial-up modem provides it).
The server, however, needs to send full updates, at maximum the same rate as P2P (often less), so ~1.2Mbit. Servers are typically hosted by operators in data centers, not on bedroom computers.

But it gets worse. For each update, all of those packets are sent over the internet. The total bandwidth required by each model is:
P2P:
Each client sends to all others, so 1.2MBit * 64 = 76.8MBit
S/C:
1.2Mbit + (64 * 0.02Mbit) = 2.48Mbit

But now let's look at a 256-player game:
P2P:
38*60*256*256 = 1.14 Gigabits
S/C:
38*60*256*2 = 8.9 Mbit

Let's go further. On a WoW server, there are about 3000 players at any given time (except Tuesdays).
P2P: 38*60*3000*3000 = 152 Gigabits (152,000 Mbits)
S/C: 38*60*3000*2 = 104 Mbits (about the capacity of a home broadband router)
And this is just for a single shard - WoW has how many, a hundred?

See the problem with P2P? A simple napkin calculation shows it simply doesn't work.

Just because The Cloud is somewhere out there doesn't mean it's a magic infinite space. The numbers listed above are the infrastructure requirements, which exceed the total internet capacity of most countries.

Edit: Software prefers bytes, but network gear prefers bits. So when talking about capacities, I multiply the bytes sent by 8 to obtain bits. Not entirely correct, but close enough - it's not what makes or breaks it.
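
For anyone who wants to check these figures, the counts can be reproduced mechanically. This sketch uses the same assumptions as the post (10-byte payload, 28-byte header, 60 updates/s) but the exact n*(n-1) count rather than the n*n rounding:

```python
PAYLOAD = 10   # bytes of game data per update
HEADER  = 28   # bytes of UDP/IP overhead per packet
RATE    = 60   # updates per second

def total_packets_per_update(n, p2p):
    """Packets per update across the whole network (exact counts)."""
    return n * (n - 1) if p2p else 2 * n   # full mesh vs 1 up + 1 down each

def per_client_p2p_upstream_bps(n):
    """Bits per second one peer must upload in an n-player full mesh."""
    return (n - 1) * (PAYLOAD + HEADER) * RATE * 8

print(total_packets_per_update(64, p2p=True))   # 4032 (rounded to 4096 above)
print(total_packets_per_update(64, p2p=False))  # 128
print(per_client_p2p_upstream_bps(64))          # 1149120, i.e. ~1.1 Mbit up
```

Plugging in 256 or 3000 players reproduces the Gigabit-scale P2P totals above (give or take the binary-vs-decimal Mbit rounding noted in the edit).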

[Edited by - Antheus on August 28, 2009 10:36:57 PM]

### #30 Matias Goldberg  Crossbones+   -  Reputation: 3137


Posted 28 August 2009 - 03:57 PM

Hi!

Quite an argument here.

I just wanted to point out something about the ABC problem. I'll change it to:

      B

     s 17km

A          C      D

AD = DC = 30km
SD = 17km

Server approach:
Info coming from B to D needs 34km

P2P approach:
Info coming from B to D needs 60km

It seems to me it gets worse as we grow.
P2P is complex, much more than my non-network-oriented head can take. But you don't need to be an expert to see that the advantage of P2P is resistance to failure, while the disadvantages are ridiculously high latencies (have you experienced Skype taking a minute to send your chat message? or searching for some terms in the Kademlia network?) and code complexity. For example, if CD is down, the BA route will be preferred, but figuring that out may take some time and cause delays (usually, a PC going down doesn't actually warn the others "hey, I'm out").
It gets worse if the data D receives from A differs from the data it received from C. The error will be spotted, but fixing it can take a lot of time (as we would have to ask B again). Ideally A may spot the problem, but it would still have to ask B again.
While the server approach has drawbacks of its own, you can spot something: IT'S SIMPLE. If the server's down, stop the game. On a data transmission error, the server resolves the differences.
Democracy (P2P) may be great, but you can't deny it's easier to be a tyrant (the Server) and make everyone do as you say. Everyone follows the server. Period. P2P is not like that.

Cheerio
My two cents
Dark Sylinc

### #31 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 04:05 PM

Quote:
Original post by hplus0603
Actually, almost no PC game is Peer-to-peer. While some games allow users to host without a central server, that merely means that the user doing the hosting is the topological server. [...]

If you're really into P2P networking for games, I suggest you check out the VAST project (a link to which is in the Forum FAQ).

ok, i think it's safe now to conclude that p2p multiplayer is non-existent or at least very rare.

yes, actually it is contradictory, i agree. so, to put it more precisely - from the algorithm and software implementation point of view, as a programmer, i'm after speed - fast code; but in terms of network architecture i'm interested in the theoretically and potentially faster solution. in theoretical terms, i expect the future to get rid of the limits current network and other information infrastructures pose... so, simply said, without the client upload speed limit i see no way the server model would outperform p2p, and even now, as it is today, p2p might very well be a suitable candidate to replace the server-based model for some specific, if not most, situations. i kind of have a feeling it's 'security' that made and keeps p2p so wildly unpopular.

thanks for that link, that's exactly what i was looking for... if anyone can find more such projects feel free to let me know.

### #32 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 04:28 PM

Quote:
Original post by Antheus
Here are some numbers: 7, 22, 45, 88, 102, 344.

Actually, the real ones are below:

[...]

beautiful!
thank you, i very much appreciate that.

i'm stupid for that kind of thing and i had to see some numbers to be able to grasp it and get some picture of the relations. now, let me chew on that for a while...

### #33 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 04:52 PM

Quote:
 When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.

that doesn't seem right, this is how i see it:

- per one frame, per one client -

*** 3 clients:
P2P total: 4
SERVER total: 4

*** 6 clients:
P2P total: 10
SERVER total: 7

*** 64 clients:
P2P total: 126
SERVER total: 65

do we understand each other?
are we talking about same stuff here?
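
These per-client figures can be reproduced if one assumes P2P counts packets both sent and received, while C/S counts one input packet up plus one state packet per connected client down. That reading is an inference from the numbers, not something either poster states explicitly:

```python
def per_client_p2p(n):
    # send to every other peer, and receive from every other peer
    return 2 * (n - 1)

def per_client_cs(n):
    # one input packet up, one state packet down per connected client
    return 1 + n

for n in (3, 6, 64):
    print(n, per_client_p2p(n), per_client_cs(n))
# 3 4 4
# 6 10 7
# 64 126 65
```

Which also shows where the disagreement lies: per client these counts both grow linearly, but summed over all n clients the P2P figure becomes 2n(n-1) - the quadratic total Antheus is talking about.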

### #34 Antheus  Members   -  Reputation: 2397


Posted 28 August 2009 - 04:54 PM

Quote:
 Original post by gbLinux
beautiful! thank you, i very much appreciate that. i'm stupid for that kind of thing and i had to see some numbers to be able to grasp it and have some picture about relations. now, let me chew on that for a while...

I edited the post - the original wording was incorrect. The numbers are overhead and wasted bandwidth, not actual requirements.

Or simply put, a user with 1Mbit upstream would not have enough spare bandwidth to send anything. If you actually send the data, things get worse.

If all clients are sent full updates, then P2P and C/S need about the same bandwidth. But P2P has so much more overhead.

A user participating in 64-player P2P game over 1Mbit upstream would waste their entire bandwidth on packet overhead, and couldn't even send any usable data.

Quote:

*Each* client. You listed one. There are 64, and each needs to send 63 packets. So each update generates 4032 packets on the internet.

Quote:

Per entire shared state. Per game. That is total across all servers and all clients.

In P2P, there are a total of 64 clients, each of which generates 63 packets sent to all the others.
In C/S, each client sends one packet to the server, and the server sends one packet to each client.

### #35 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 05:17 PM

Quote:
 *Each* client. You listed one. There are 64, and each needs to send 63 packets. So each update generates 4032 packets on the internet.

i said per ONE client, per ONE frame... why do you care how much we saturate the WWW in general? that does not influence latency, individual bandwidth or the speed of the p2p network in any way i can perceive. what does it matter? isn't that like saying that if everyone started talking faster over the phone, it would "overload" the lines and slow down the internet or something?

Quote:

Quote:

Per entire shared state. Per game. That is total across all servers and all clients.

In P2P, there are a total of 64 clients, each of them generates 63 packets it sends to all others.
In C/S, each client sends one packet to the server, and the server sends one packet to each client.

i'm afraid the information the server sends to each client about every other client is much bigger than what clients send to the server and what peers send to peers in p2p.

it's more realistic to say that in C/S each client sends one "packet" and receives as many "packets" as there are other clients, or in the worst case as many as there are dynamic entities of any kind in the client's scene.

Quote:
 If all clients are sent full updates, then P2P and C/S need about the same bandwidth. But P2P has so much more overhead. A user participating in a 64-player P2P game over 1Mbit upstream would waste their entire bandwidth on packet overhead, and couldn't even send any usable data.

well, i can't tell if you're using the right numbers and formulas again. are you saying they would use up the allotted amount of download data permitted by the ISP, or that their UPLOAD SPEED would not be sufficient?

so, i suppose server-based games host 64, maybe more, players? what is the maximum number of players allowed per server in some of the most popular games?

### #36 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 05:52 PM

Quote:
 Original post by Matias Goldberg
Hi! Quite an argument here. I just wanted to point out the ABC problem. I'll change it to:
*** Source Snippet Removed ***
AD = DC = 30km.
SD = 17km
Server approach: info coming from B to D needs 34km
P2P approach: info coming from B to D needs 60km

that seems wrong. those numbers were not coincidental; that's what you get with an equilateral triangle on a flat surface and its centre. you can't just make up numbers... or perhaps you can, but that makes for a distorted geo-surface. here is how it goes for an equilateral quadrilateral, such as a square or a rhombus.

B ---------- C
|     s      |
|   20km     |
A ---30km--- D

AB=BC=CD=DA = 30km
As=Bs=Cs=Ds = 20km

*** P2P, one frame loop, per ONE client
average one-way distance: ~33km (average of 2x 30km and 1x 40km)
packets sent: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
one-way distance via server: 40km
packets sent: 1
total packets per client: 5

it can't get better than this for the server-based model, having the server in the middle... it's only the client's upload speed limit that can make the server-based model faster, as far as i can see.
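The square example can be checked with exact coordinates. A small sketch; note that the exact centre distance for a 30km side is 30/√2 ≈ 21.2km, so the 20km above is rounded:

```python
import math

# Square ABCD with 30km sides and a server S at the centre.
SIDE = 30.0
A, B, C, D = (0.0, 0.0), (0.0, SIDE), (SIDE, SIDE), (SIDE, 0.0)
S = (SIDE / 2, SIDE / 2)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# P2P: average one-way distance from A to the other three peers.
p2p_avg = (dist(A, B) + dist(A, D) + dist(A, C)) / 3

# Server: any one-way path goes client -> centre -> client.
via_server = dist(A, S) + dist(S, C)

print(round(p2p_avg, 1))     # two 30km sides plus one ~42.4km diagonal, averaged
print(round(via_server, 1))  # the same ~42.4km for every pair of clients
```

Direct peer-to-peer paths are shorter on average, which is the geometric core of the argument; the question the rest of the thread raises is whether bandwidth overhead eats that advantage.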

### #37Codeka  Members   -  Reputation: 1157


Posted 28 August 2009 - 08:02 PM

Quote:
 Original post by gbLinux
i said per ONE client, per ONE frame... why do you care how much you saturate the internet in general? that does not influence latency, individual bandwidth or the speed of the p2p network in any way i can perceive. what does it matter? isn't that like everyone starting to talk faster over the phone? would that "overload" the lines, slow down the internet or something?
I think it's already been established that a "peer to peer" model is only useful if all the clients are geographically close (otherwise latency is unavoidable, and the latency of Sydney-LA, say, far outweighs any latency a server may introduce).

That means that if all clients are geographically close to each other, there's a fairly high chance they're all going to be using the same ISP. That means each of those clients' 63 packets per frame needs to be routed through the ISP's network. An ISP isn't going to be happy about ~120k packets per second running through its network for just one game, is it? We already saw how DOOM was banned on many corporate networks because of the excessive traffic it generated; why wouldn't an ISP also block a game that generates N² packets per second?

Also, there is a big difference between 64 small packets and one big packet with the same amount of payload. If each individual packet has a 100-byte payload (which is actually pretty big for an FPS-style game), you've then got to add the 28-byte UDP/IPv4 header (20 bytes of IP plus 8 bytes of UDP), which makes a total of 8,192 bytes. If you instead put them all into one packet, you've got 6,400 bytes of data plus a single 28-byte header, for a total of 6,428 bytes - a saving of over 20%. Then add the fact that you can compress 6,400 bytes more effectively than 100 bytes (i.e. you get a better compression ratio with more data) and the saving is even bigger.
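The batching arithmetic can be checked with a quick sketch, using the 100-byte payload figure from the post and the standard 28-byte UDP/IPv4 header (20-byte IPv4 plus 8-byte UDP):

```python
# 64 small packets vs. one batched packet carrying the same payload.
HEADER = 28    # 20-byte IPv4 header + 8-byte UDP header
PAYLOAD = 100  # bytes of game state per client (figure from the post)
CLIENTS = 64

separate = CLIENTS * (PAYLOAD + HEADER)  # one packet per client
batched = CLIENTS * PAYLOAD + HEADER     # all payloads, one header
saving = 1 - batched / separate

print(separate, batched)  # 8192 6428
print(f"{saving:.0%}")    # 22%
```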

### #38 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 08:35 PM

i just realised i was mixing two concepts.

it is important to differentiate line SPEED and BANDWIDTH. even though it is referred to as a "speed", bandwidth is actually not what matters to "PING"; it does not measure the speed with which packets travel, nor how long it takes them to arrive at the destination... for that we need the distance and the number of routers and checkpoints they must stop at along the way.

say, if the speed of water molecules in a water-pipe is what packet speed is in network lines, then bandwidth is the width of that pipe, i.e. how many packets can flow through it per second. but the speed itself is always about light speed, i suppose, minus all the time lost in routing... and that's pretty much all i know about this. what routers do and how much they slow packets down, that i don't know.
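The pipe analogy can be made concrete: propagation delay depends on distance and signal speed, not on bandwidth. A sketch, assuming the textbook figure that signals in fiber travel at roughly 2/3 of c and an illustrative 0.5ms of processing per router hop:

```python
# Propagation delay vs. routing delay: a back-of-the-envelope sketch.
SPEED_OF_LIGHT_KM_S = 300_000  # km/s in vacuum
FIBER_FACTOR = 2 / 3           # signals in fiber travel at ~2/3 c
PER_HOP_MS = 0.5               # assumed processing delay per router

def one_way_latency_ms(distance_km: float, hops: int) -> float:
    propagation_ms = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
    return propagation_ms + hops * PER_HOP_MS

# At 30km the distance contributes only ~0.15ms; routing dominates.
print(round(one_way_latency_ms(30, 10), 2))
# At Sydney-LA distances (~12,000km) the distance itself dominates.
print(round(one_way_latency_ms(12_000, 20), 2))
```

Under these assumptions, shaving kilometres off the path matters far less at LAN-like distances than the number of hops does, which is why the geometric argument only bites over long distances.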

anyhow, this is what is important:
- if the SPEED is the same for upload and download (forget bandwidth) between peers in p2p as it is between server and clients, then p2p simply must be that much faster (as explained with ABC). bandwidth only starts to matter with more players, say 16 or 32, and even then it heavily depends on just how much you can squeeze your communication info, so 32 players on p2p sounds quite doable with average ADSL connections.

the conclusion:
a p2p game with 8 or fewer players must, simply must, be faster than a server-based game, given the same hardware and some average ADSL connections. does anyone wanna challenge this conclusion?
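One way to stress-test that conclusion is to estimate the upstream each peer needs as the player count grows. All figures here are illustrative assumptions (28-byte UDP/IPv4 headers, 30-byte payloads, a 30Hz tick), not numbers from the thread:

```python
# Upstream needed per peer in a fully connected P2P game.
# Assumptions (illustrative): 28-byte headers, 30-byte payloads, 30Hz tick.
HEADER, PAYLOAD, TICK_RATE = 28, 30, 30

def upstream_kbps(players: int) -> float:
    bytes_per_tick = (players - 1) * (HEADER + PAYLOAD)
    return bytes_per_tick * TICK_RATE * 8 / 1000

for n in (8, 16, 32, 64):
    print(n, round(upstream_kbps(n)))
```

Under these assumptions 8 players needs only ~97 kbit/s of upstream, which an average ADSL line can carry, while 64 players needs ~877 kbit/s, where a 1Mbit upstream becomes the bottleneck; both sides of the argument fall out of the same formula.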

btw,
the only thing i could google about "p2p multiplayer" is that some folks are complaining about some "laggy Xbox 360 P2P multiplayer", whatever that is all about, plus some mobile games using it via bluetooth... that's pretty much all. how peculiar. and, what now... hopefully someone will refute my conclusion so i don't have to be the one to produce the first online p2p multiplayer game in the world.

[Edited by - gbLinux on August 29, 2009 2:35:08 AM]

### #39 gbLinux   Banned   -  Reputation: 100

Like
0Likes
Like

Posted 28 August 2009 - 08:47 PM

Quote:
 I think it's already been established that a "peer to peer" model is only useful if all the clients are geographically close (otherwise latency is unavoidable, and the latency of Sydney-LA, say, far outweighs any latency a server may introduce).

i'm pretty sure this has not been established; in fact most of the stuff said here is pretty vague or was refuted by subsequent posts. we didn't even know if p2p multiplayer games were very common or very rare, and even now i'm not quite sure about it... hahaa. anyhow, set up any example you want, pick any cities you want, and we will calculate the traversal for both models; i assure you latency will be LESS with p2p as long as bandwidth does not come into play.

### #40Antheus  Members   -  Reputation: 2397


Posted 29 August 2009 - 02:21 AM

The very last piece of help I can offer:
+-----------+
|   Brick   |
+-----------+

Apply liberally to forehead until enlightened.

Considering your physics background, this is equivalent to trying to design a Carnot engine with efficiency above 1.

The most fundamental principles state it cannot be done, yet many people still try to develop a perpetuum mobile, hoping for a miracle.

