# Broadcasting (Multicasting) based multiplayer game, LAN/WAN?

## Recommended Posts

hplus0603    11347
Quote:
 clients should be able to communicate with each other as much and as fast as they can do it, on average, with the server

That's simply not true, for two reasons:

1) Simple math. The overall number of connections, and thus packet overhead, grows by N-squared in a P2P system, but is static at 1 for c/s clients, and N for c/s servers (although each client sees N connections up and N down in P2P, and 1 up, 1 down in c/s). For most games, the size of the packet headers can easily be as big as or bigger than the size of the actual update data in a single "tick" update packet. I assume you're familiar with N-squared vs N vs constant requirements, and how they scale; if not, please use the appropriate Google or Bing services.

2) All consumer internet connections are asymmetric, where upload is 1/10th the bandwidth of the download. In a P2P set-up the bandwidth requirement is symmetric, whereas in c/s it's heavily skewed towards download for all the clients. Hence, all things being equal, c/s systems generally can allow clients to receive 10x more data than a P2P system, assuming the server has "infinite" outgoing bandwidth.
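The scaling in point 1 can be sketched in a few lines (a rough editorial illustration; the function names are my own, and a "packet" here is one datagram per simulation tick):

```python
def p2p_packets_per_tick(n):
    """Full-mesh P2P: each of n peers sends one packet to the other n - 1 peers."""
    return n * (n - 1)

def cs_packets_per_tick(n):
    """Client/server: each of n clients sends one packet up; the server sends one down to each."""
    return 2 * n

# P2P grows quadratically, client/server linearly:
for n in (2, 8, 64):
    print(n, p2p_packets_per_tick(n), cs_packets_per_tick(n))
```

With 64 players the mesh generates 4032 packets per tick against 128 for client/server, which is where the per-packet header overhead mentioned above starts to dominate.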

I'm starting to not like your tone, though, and your lack of basic Googling before you attack the answers people are trying to give you in this thread.

Having operated commercial client/server games for years, that even include voice chat, I can say that bandwidth costs are very low on our list of concerns. Ability to attract and retain people is about a 1000x bigger concern for the P&L statement (given that you want numbers). The nice thing is that more users, while needing more bandwidth, also mean more income to pay for the bandwidth, and user income grows linearly, whereas bandwidth cost grows basically logarithmically. Leasing, cooling, power and rack space for the servers are also bigger costs than bandwidth, but still smaller concerns by an order of magnitude than customer acquisition, retention and service.

[Edited by - hplus0603 on August 28, 2009 8:58:00 PM]

##### Share on other sites
gbLinux    100
Quote:

What did you mean by:
Quote:
 Original post by gbLinux
visual absurds that happen to each and every client with server-based approach

In the server-client model the slowest player never affects the gameplay if the server is authoritative, unless the game is designed to make sure everyone is in sync and not lagging. (I've played a few games that do that).

yes, kind of.. here is the article i was referring to:

The Valve Developer Community, Source Multiplayer Networking:
http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking

Quote:
 About using P2P in a game it's possible. You might want to try it and see how well it works for you. So yeah I'd write up a test program and see if you like how it works.

so, after all... this p2p multiplayer has never been tried, is that what you're saying?
and... i'm supposed to invent this thing now? c'mon, someone must have tried something about it.

Quote:
Quote:
 Original post by gbLinux- central server collects all the input from all the clients, calculates movement/collision then sends to each client the new machine state, ie. new positions of each entity, velocities, orientation and such, which then each client renders, perhaps utilising some extrapolation to fill in the gaps from, say network refresh rate of 20Hz to their screen refresh rate of 60 FPS. now, this would mean that game actually can only run as fast the slowest connected client can communicate with the server, right?

That doesn't even make sense. The server is never limited by the slowest connection. It's just getting input and sending out packets. At no point does it have a "wait for slowest client". If it doesn't get input it doesn't get input. The only way you could get that effect would be if you've lock-stepped input, which I've never seen done before (RTS games don't even do that as far as I know).

i was referring to what i explained later on - different client update rates make for an unrealistic simulation and cause space/time paradoxes; therefore, ideally, the game should run at the frequency at which the server can communicate with the slowest client. that is why there should be different servers for slow- and fast-connection clients.


Quote:
 I use the server-client model. It allows me to get input from clients and update physics then build individual packets with full/delta states for each client such that each packet is small and easy for even the lowest of bandwidth players. The added latency you mention isn't noticeable. You might want to make some measurements and see if it's worth it. I'd like to see some test with 8 players where the one-way latency is recorded between each and every client. Then compare it to the RTT time between the server and client model.

i'd rather read about someone else's measurements, but yeah that is exactly what i want to know.

till then i can use simple ABC example and plug some more numbers...

        B
       / \
      / s \
     A-----C

AB = BC = CA = 30km
As = Bs = Cs = 17km

say, it takes one packet of information to communicate either plain client input or full entity state.

*** P2P, one frame loop
round-trip: 30km
packets sent: 2
total packets per client: 4

*** SERVER, one frame loop
round-trip: 34km
packets sent: 1
total packets per client: 4

this situation can change rapidly as number of clients increases, but also depending on real packets size, as well as how upload/download speed limit posed by ISP affects the speed of outgoing and incoming packets. in p2p all packets are "upload", does that mean top speed would be limited by client ISP's upload speed limit? this is where server model might catch up, if not overrun p2p, i think... since the difference in upload/download speed is so great for most of the ADSL plans, as far as i know. any thoughts on this?

##### Share on other sites
gbLinux    100
Quote:
Original post by hplus0603
Quote:
 clients should be able to communicate with each other as much and as fast as they can do it, on average, with the server

That's simply not true, for two reasons: [...]

ok, can you now please express the same thing with some real-world numbers and examples, say something like ABC only with 8 clients and compare... use some average server upload bandwidth and some average client upload/download ISP limits, some average data sizes/packets... compare, and then it will be much easier to understand, can you do something like that, please?

i still believe there are lots of things which can turn out in favor of p2p, so ideally there would also be some article on the www where someone who actually tried it experimentally wrote up some numbers... practice can sometimes reveal the unexpected. after all, coincidence, luck and random chance are the main factors in the history of human inventions and discoveries.

##### Share on other sites
Antheus    2409
Here are some numbers: 7, 22, 45, 88, 102, 344.

Actually, the real ones are below:

Quote:
 this situation can change rapidly as number of clients increases,
The total number of packets in the network increases with n squared. So with 2 clients, there are 4 packets. With 16 clients, there are 256 packets. With 100 clients, there are 10,000 packets. Sent 60 times per second.

To be completely fair, the number is n*(n-1), but for the purpose of this discussion, this doesn't matter beyond 3 clients.

Quote:
 "upload", does that mean top speed would be limited by client ISP's upload speed limit?

Yes. Or better yet, by each user's subscription plan's limit (128kbit - several Mbit).

Quote:
 this is where server model might catch up, if not overrun p2p, i think... since the difference in upload/download speed is so great for most of the ADSL plans, as far as i know. any thoughts on this?

Due to the wonders of mathematics, we can determine when this would happen. The formulas are given above, for each update (60 times per second):

Edit: The following is packet overhead, not actual usable data.

Server/client:
Server: N packets
Each of N clients: 1 packet
Total number of packets: N + N

P2P:
Each of N clients: N packets
Total number of packets: N * N

As far as number of packets goes:
N * N < N + N
N^2 < 2N
N < 2

In other words, P2P is only more efficient as long as there are at most two peers.

When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.
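The crossover point derived above can be checked directly (a small sketch using the post's own n*n approximation; names are mine):

```python
def p2p_total(n):
    return n * n    # the post's approximation of n * (n - 1)

def cs_total(n):
    return 2 * n    # n packets up + n packets down

# Smallest player count where c/s generates strictly fewer packets than P2P:
crossover = next(n for n in range(1, 100) if p2p_total(n) > cs_total(n))
print(crossover)  # 3
```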

Bandwidth requirements, 64 players
P2P:
Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to 64 peers. This adds up to 38*60*64 bytes per second, or 145920 bytes per second. So each player needs to have ~1.2Mbit upstream.

S/C:
Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to server. This adds up to 38*60*1 = 2280 bytes per second, or approximately 20kbit upstream (a dial-up modem provides it).
The server, however, needs to send full updates, at maximum at the same rate as P2P (often less), so ~1.2Mbit. Servers are typically hosted by operators in data centers, not on bedroom computers.

But it gets worse. For each update, all of those packets are sent over the internet. The total bandwidth required by each model is:
P2P:
Each client sends to all others, so 1.2MBit * 64 = 76.8MBit
S/C:
1.2Mbit + (64 * 0.02Mbit) = 2.48Mbit

But now let's look at a 256-player game:
P2P:
38*60*256*256 = 1.14 Gigabits
S/C:
38*60*256*2 = 8.9 Mbit

Let's go further. On a WoW server, there are about 3000 players at any given time (except Tuesdays).
P2P: 38*60*3000*3000 = 152 Gigabits (152,000 Mbits)
S/C: 38*60*3000*2 = 104 Mbits (about the capacity of a home broadband router)
And this is just for a single shard - WoW has how many, a hundred?
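These back-of-the-envelope figures can be reproduced with a short script (using the post's assumptions: 60 updates/s, 10-byte payload, 28-byte header; byte counts only, since the post then converts to binary Mbits):

```python
TICK_HZ = 60                  # updates per second, as in the post
PAYLOAD = 10                  # bytes of game data per packet
OVERHEAD = 28                 # bytes of UDP + IPv4 headers
PACKET = PAYLOAD + OVERHEAD   # 38 bytes on the wire

def p2p_upstream_bytes(n):
    """Bytes/s one peer uploads in an n-player mesh (the post rounds n-1 up to n)."""
    return PACKET * TICK_HZ * n

def total_bytes(n, model):
    """Total bytes/s crossing the network for the whole game."""
    if model == "p2p":
        return PACKET * TICK_HZ * n * n   # every peer sends to every peer
    return PACKET * TICK_HZ * n * 2       # n packets up + n packets down

print(p2p_upstream_bytes(64))        # 145920 bytes/s, i.e. ~1.2 Mbit upstream
print(total_bytes(3000, "c/s") * 8)  # 109440000 bits/s, i.e. ~104 binary Mbits
```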

See the problem with P2P, and why just a simple napkin calculation shows it simply doesn't work?

Just because The Cloud is somewhere out there, it doesn't mean it's magic infinite space. The numbers listed above are the infrastructure requirements, which exceed total internet capacity of most countries.

Edit: Software prefers bytes, but network gear prefers bits. So when talking about capacities, I multiply the bytes sent by 8 to obtain bits. Not entirely correct, but close enough - it's not what makes or breaks it.

[Edited by - Antheus on August 28, 2009 10:36:57 PM]

##### Share on other sites
Matias Goldberg    9575
Hi!

Quite an argument here.

I just wanted to point about the ABC problem. I'll change it to:

(diagram: the ABC triangle as before, with a new peer D added next to C)

AD = DC = 30km
SD = 17km

Server approach:
Info coming from B to D needs 34km

P2P approach:
Info coming from B to D needs 60km

It seems to me it gets worse as we grow.
P2P is complex, much more than my non-network-oriented head can take. But you don't need to be an expert to see that the advantage of P2P is resistance to failure, while the disadvantages are ridiculously high latencies (have you experienced Skype taking like a minute to send your chat message? or searching some terms in the Kademlia network?) and code complexity. For example, if CD is down, the BA route will be preferred, but figuring that out may take some time and cause delays (usually, a PC going down doesn't actually warn the others "hey, I'm out").
It gets worse if the data D receives from A differs from the data received from C. The error will be spotted, but fixing it can take a lot of time (as we would have to ask B again). Ideally A may spot the problem, but it would still have to ask B again.
While the server approach has drawbacks of its own, you can spot something: IT'S SIMPLE. If the server's down, stop the game. On a data transmission error, the server resolves the differences.
Democracy (P2P) may be great, but you can't deny it's easier to be a tyrant (server) and make everyone do everything as you say. Everyone follows the server. Period. P2P is not like that.

Cheerio
My two cents
Dark Sylinc

##### Share on other sites
gbLinux    100
Quote:
Original post by hplus0603
Actually, almost no PC game is Peer-to-peer. While some games allow users to host without a central server, that merely means that the user doing the hosting is the topological server. Starcraft, for example, is explicitly *not* a peer-to-peer game.

Quote:
 absolute optimisation and lowest possible hardware requirements

I understand engineering. What I'm trying to say is that those two requirements are actually contradictory in certain terms. If you want the lowest possible latency at all costs, then that's not optimization, because the overall total cost goes up -- you require everyone to have a great Internet connection. Engineering is the art of making smart trade-offs to deliver the most end-user value at the lowest possible cost. Low latency is "value," high bandwidth requirement is "cost." Bandwidth might also be considered a "hardware requirement," so reducing bandwidth means reduced hardware requirements.

If you're really into P2P networking for games, I suggest you check out the VAST project (a link to which is in the Forum FAQ).

ok, i think it's safe now to conclude that p2p multiplayer is non-existent or at least very rare.

yes, actually it is contradictory, i agree. so, to put it more precisely - from the algorithm and software implementation point of view, as a programmer, i'm after speed - fast code, but in terms of network architecture i'm interested in the _theoretically and _potentially faster solution. in theoretical terms, i expect the future to get rid of the limits current network and other information infrastructures pose... so, simply said, without a client upload speed limit i see no way the server model would outperform p2p, and even now, as it is today, p2p might be a very suitable candidate to replace the server-based model for some specific, if not most, situations. i kind of have a feeling it's the 'security' that made and keeps p2p so wildly unpopular.

thanks for that link, that's exactly what i was looking for... if anyone can find more such projects feel free to let me know.

##### Share on other sites
gbLinux    100
Quote:
Original post by Antheus
Here are some numbers: 7, 22, 45, 88, 102, 344.

Actually, the real ones are below: [...]

beautiful!
thank you, i very much appreciate that.

i'm stupid for that kind of thing and i had to see some numbers to be able to grasp it and have some picture about relations. now, let me chew on that for a while...

##### Share on other sites
gbLinux    100
Quote:
 When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.

that doesn't seem right, this is how i see it:

- per one frame, per one client -

*** 3 clients:
P2P total: 4
SERVER total: 4

*** 6 clients:
P2P total: 10
SERVER total: 7

*** 64 clients:
P2P total: 126
SERVER total: 65

do we understand each other?
are we talking about same stuff here?

##### Share on other sites
Antheus    2409
Quote:
 Original post by gbLinux
beautiful! thank you, i very much appreciate that. i'm stupid for that kind of thing and i had to see some numbers to be able to grasp it and have some picture about relations. now, let me chew on that for a while...

I edited the post, since the original wording was incorrect. The numbers are overhead and wasted bandwidth, not actual requirements.

Or simply put, a user with 1Mbit upstream would not have enough spare bandwidth to send anything. If you actually send the data, things get worse.

If all clients are sent full updates, then P2P and C/S need about the same bandwidth. But P2P has so much more overhead.

A user participating in 64-player P2P game over 1Mbit upstream would waste their entire bandwidth on packet overhead, and couldn't even send any usable data.
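A quick check of that claim, using the post's 28-byte header and 60 Hz tick (the peer count is the 63 other players in a 64-player mesh):

```python
TICK_HZ = 60    # updates per second
HEADER = 28     # UDP + IPv4 header bytes per packet
PEERS = 63      # packets each client sends per update in a 64-player mesh

# Header bits alone, before a single byte of game data is sent:
overhead_bps = PEERS * HEADER * TICK_HZ * 8
print(overhead_bps)   # 846720 -- ~0.85 Mbit/s of a 1 Mbit/s upstream is pure overhead
```

Add the 10-byte payloads and the total climbs past 1.1 Mbit/s, which is the point being made here.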

Quote:

*Each* client. You listed one. There are 64, and each needs to send 63 packets. So each update generates 4032 packets on the internet.

Quote:

Per entire shared state. Per game. That is total across all servers and all clients.

In P2P, there are a total of 64 clients, each of them generates 63 packets it sends to all others.
In C/S, each client sends one packet to the server, and the server sends one packet to each client.

##### Share on other sites
gbLinux    100
Quote:

Quote:

*Each* client. You listed one. There are 64, and each needs to send 63 packets. So each update generates 4032 packets on the internet.

i said per ONE client, per ONE frame... why do you care how much you saturate the WWW in general? that does not influence latency, individual bandwidth or the speed of the p2p network in any way i can perceive. what does it matter? isn't that like if everyone started talking faster over the phone? would that "overload" the lines, slow down the internet or something?

Quote:

Quote:

Per entire shared state. Per game. That is total across all servers and all clients.

In P2P, there are a total of 64 clients, each of them generates 63 packets it sends to all others.
In C/S, each client sends one packet to the server, and the server sends one packet to each client.

i'm afraid the information the server sends to each client about every other client is much bigger than what clients send to the server and what peers send to peers in p2p.

it's more realistic to say that in C/S each client sends one "packet" and receives as many "packets" as there are other clients, or in the worst case as many as there are dynamic entities of any kind in the client's scene.

Quote:
 If all clients are sent full updates, then P2P and C/S need about the same bandwidth. But P2P has so much more overhead.A user participating in 64-player P2P game over 1Mbit upstream would waste their entire bandwidth on packet overhead, and couldn't even send any usable data.

well, i can't trust that you're using the right numbers and formulas again. are you saying they would use the allotted amount of download data permitted by the ISP, or that their UPLOAD SPEED would not be sufficient?

so, i suppose server-based games host 64, maybe more players? what is the maximum number of players allowed per server on some of the most popular games?

##### Share on other sites
gbLinux    100
Quote:
 Original post by Matias Goldberg
Hi!
Quite an argument here.
I just wanted to point about the ABC problem. I'll change it to:
*** Source Snippet Removed ***
AD = DC = 30km. SD = 17km
Server approach: Info coming from B to D needs 34km
P2P approach: Info coming from B to D needs 60km

that seems wrong. those numbers were not coincidental, that's what you get with an equilateral triangle on a flat surface and its centre; you can't just imagine numbers... or perhaps you can, but that makes for a distorted geo-surface. here is how it goes for an equilateral quadrilateral, such as a square or rhombus.

   B-----------C
   |           |
   |     s     |
   |           |
   A-----------D

AB=BC=CD=DA = 30km
As=Bs=Cs=Ds = 20km

*** P2P, one frame loop, per ONE client
round-trip: ~33km (2x 30km, 1x 40km)
packets sent: 3
packets received: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
round-trip: 40km
packets sent: 1
packets received: 4
total packets per client: 5

it can't get better than this for the server-based model, having the server in the middle... it's only the client's upload speed limit that can make the server-based model faster, as far as i can see.

##### Share on other sites
Codeka    1239
Quote:
 Original post by gbLinux
i said per ONE client, per ONE frame... why do you care how much you saturate WWW in general? that does not influence latency, individual bandwidth or speed of p2p network in any way i can perceive. what does it matter? isn't that like if everyone started talking faster over the phone? would that "overload" the lines, slow down internet or something?
I think it's already been established that a "peer to peer" model is only useful if all the clients are geographically close (otherwise latency is unavoidable, and the latency of Sydney-LA, say, far outweighs any latency a server may introduce).

That means that if all clients are geographically close to each other, there's a fairly high chance they're all going to be using the same ISP. That means each of those clients' 64 packets per frame needs to be routed through the ISP's network. An ISP isn't going to be happy about ~120k packets per second running through its network for just one game, is it? We already saw how DOOM was banned on many corporate networks because of the excessive traffic it generated, so why wouldn't an ISP also block a game that generates N^2 packets per second?

Also, there is a big difference between 64 small packets and one big packet with the same amount of payload. If the individual packet has a payload size of 100 bytes (which is actually pretty big for an FPS-style game) you've then got to add the 28-byte UDP/IP header, which means a total of 8,192 bytes. If you instead put them all into one packet, then you've got 6,400 bytes of data + the single 28-byte header for a total of 6,428 bytes - a saving of over 20%. Then add on the fact that you can compress 6,400 bytes more easily than 100 bytes (i.e. you'll get a bigger compression ratio with more data) and the saving is even bigger.
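The packet-coalescing arithmetic can be sketched as follows (assuming a 28-byte IPv4 + UDP header, i.e. 20 + 8 bytes; the 100-byte payload is the post's figure):

```python
HEADER = 28     # IPv4 (20) + UDP (8) header bytes per packet -- my assumption
PAYLOAD = 100   # bytes of update data per client, per the post

def separate_packets(n):
    """n updates, each sent in its own packet."""
    return n * (PAYLOAD + HEADER)

def coalesced_packet(n):
    """n updates batched into one packet with a single header."""
    return n * PAYLOAD + HEADER

saving = 1 - coalesced_packet(64) / separate_packets(64)
print(separate_packets(64), coalesced_packet(64), round(saving * 100))  # 8192 6428 22
```

The saving grows with the per-packet header size, which is why this argument hits the tiny per-tick update packets of a fast-paced game hardest.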

##### Share on other sites
gbLinux    100
i just realised i was mixing two concepts.

it is important to differentiate line Speed and Bandwidth. even tho referred to as a "speed", bandwidth is actually not what matters to "PING"; it does not measure the speed with which packets travel or influence how long it would take them to arrive at the destination... for that we need some distance and the number of routers and checkpoints they must stop and visit on the way.

say, if the speed of water molecules in some water-pipe is what packet speed is in network lines, then bandwidth is the width of that pipe, ie. how many packets can flow through it per second. but the speed, the speed is always about light speed, i suppose, minus all the time lost in routing... and that's pretty much all i know about this. what routers do and how much they slow packets down, that i don't know.

anyhow, this is what is important:
- if SPEED is the same for upload and download (forget bandwidth) between peers in p2p as it is between server and clients, then p2p simply must be that much faster (as explained with ABC). bandwidth only starts to matter more with, say, 16 or 32 players, and even then it heavily depends on just how much you can squeeze your communication info, so 32 players on p2p sounds quite doable with average ADSL connections.

the conclusion:
p2p game with 8 or less players must, simply must be faster than server-based game, given the same hardware and having some average ADSL connections. does anyone wanna challenge this conclusion?

btw,
the only thing i could google about "p2p multiplayer" is that some folks are complaining about some "laggy Xbox 360 P2P multiplayer", whatever that is all about, plus some mobile games are using it via bluetooth... that's pretty much all. how peculiar. and, what now... hopefully someone will refute my conclusion so i don't have to be the one to produce the first online p2p multiplayer game in the world.

[Edited by - gbLinux on August 29, 2009 2:35:08 AM]

##### Share on other sites
gbLinux    100
Quote:
 I think it's already been established that a "peer to peer" model is only useful if all the clients are geographically close (otherwise latency is unavoidable, and the latency of Sydney-LA, say, far outweighs any latency a server may introduce).

i'm pretty sure this has not been established; in fact most of the stuff said here is pretty vague or was refuted by subsequent posts. we didn't even know if p2p multiplayer games were very common or very rare, and even now i'm not quite sure about it... hahaa. anyhow, set any example you want, pick any cities you want, and we will calculate traversal for both models; i assure you latency will be LESS with p2p as long as bandwidth does not come into play.

##### Share on other sites
Antheus    2409
The very last piece of help I can offer:
+-----------+
|   Brick   |
+-----------+

Apply liberally to forehead until enlightened.

Considering your physics background, this is equivalent to trying to design a Carnot engine with efficiency above 1.

The most fundamental concepts state it cannot be done, yet many people still try to develop a perpetuum mobile, hoping for a miracle.

##### Share on other sites
Codeka    1239
Quote:
 Original post by gbLinux
it is important to differentiate line Speed and Bandwidth. even tho referred to as a "speed" bandwidth is actually not what matters to "PING", it does not measure the speed with which packets travel or influence how long would it take to arrive at destination... for that we need some distance and the number of routers and checkpoints it must stop and visit on the way.

say, if the speed of water molecules in some water-pipe is what packet speed is in network lines, then bandwidth is the width of that pipe, ie. how many packets can flow through it per second. but the speed, the speed is always about light speed, i suppose, minus all the time lost in routing... and that's pretty much all i know about this. what routers do and how much they slow packets down, that i don't know.
I get the impression you're only skimming people's replies and not really taking it all in. I explained the difference between latency (what you call "speed") and bandwidth way back in my second reply:
Quote:
 From my second reply:
Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is 133ms*. You cannot improve that (without violating the laws of physics).
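The Sydney-LA figure can be approximated from first principles (the distance and the in-fiber propagation factor are my assumptions, not from the post; real routes add routing and queueing delay on top):

```python
C_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER = 0.66            # signal speed in fiber, roughly 2/3 c (assumption)
SYDNEY_LA_KM = 12_000   # rough great-circle distance (assumption)

def min_one_way_ms(distance_km):
    """Physical lower bound on one-way delay, ignoring routing and queueing."""
    return distance_km / (C_KM_S * FIBER) * 1000

min_rtt = 2 * min_one_way_ms(SYDNEY_LA_KM)   # on the order of 120 ms
```

No amount of extra bandwidth changes this number, which is the latency-vs-bandwidth distinction being made in the quote.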
Quote:
The "P2P" used by most Xbox games is similar to what was described by hplus0603. That is, one of the "peers" is designated the "host" (or "server") and everybody connects to him. In reality, it's a client/server model.

I'm going to leave this discussion with one observation. It is not uncommon for a novice to a particular field to believe he's come up with a novel idea that nobody's ever thought of before. He can't see any problems with his idea, and he gets frustrated because so-called "experts" will dismiss it, almost out-of-hand. This is not because the experts lack imagination, rather it is because the experts can see the inherent flaws in the idea that a novice - from a lack of experience - will miss.

Some people believe that being a novice can be an advantage because you're not hampered with pre-conceived notions of what is and is not possible, but that is not true. Perhaps you can provide one or two examples of a novice who actually has come up with a novel idea that no "expert" would have considered, but for each of those, I can point out tens of thousands of "novice" ideas that fall down in the real world.

Do not be discouraged, however. We were all novices once! (Not that I'm an expert by any stretch of the imagination, of course!) My suggestion would be to keep your idea in the back of your mind as you learn all you can about implementing networked applications in the real world - you will be surprised at how complex it actually is.

##### Share on other sites
Matias Goldberg    9575
Quote:
 Original post by gbLinux

    B       C
        s
    A       D

AB = BC = CD = DA = 30km
As = Bs = Cs = Ds = 20km

*** P2P, one frame loop, per ONE client
round-trip: ~33km (2x 30km, 1x 40km)
packets sent: 3
packets received: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
round-trip: 40km
packets sent: 1
packets received: 4
total packets per client: 5

Round-trip for P2P should be 40km (2x 30km, 1x 60km, divided by 3), assuming what you did there was take an average.
Second, and more importantly, in computers there is no average in this stuff. The whole system works as slow as the slowest component in that system (what is called a bottleneck), not at its average.
This means the P2P will go as slow as if it were running at 60km, because A has to wait for D. There are some clever tricks (lag compensation) that help A predict what D should have done and, once the data has arrived, fix the estimations. Nevertheless, in the long term, A will eventually need to stop and wait because D can't keep up (or vice versa).
And B and C are caught in A & D's delays, so they have to wait too, to avoid getting too far ahead of A & D in the simulation.
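The average-versus-bottleneck point can be made concrete with a minimal sketch, using Matias's own link figures for client A (the 60 km diagonal is his number; gbLinux uses ~40 km):

```python
# Per-tick link distances from client A: A->B, A->D, A->C (per Matias).
links_from_a_km = [30, 30, 60]

average_km = sum(links_from_a_km) / len(links_from_a_km)  # what the averaging argument computes
bottleneck_km = max(links_from_a_km)                      # what A actually waits on each tick

print(average_km)     # 40.0 -- looks competitive with the server's 40 km...
print(bottleneck_km)  # 60   -- ...but a tick completes only when the slowest link does
```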

Cheers
Dark Sylinc

##### Share on other sites
hplus0603    11347
Quote:
 why do you care how much you saturate WWW in general?

Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.
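The scaling behind that claim is easy to sketch with pure packet counts per simulation tick (sizes aside; one state packet per link is assumed in both models): P2P traffic grows quadratically with the player count, client/server only linearly.

```python
# Total packets on the wire per tick (sketch; one packet per link assumed).
def p2p_packets(n: int) -> int:
    return n * (n - 1)      # every peer sends to every other peer

def cs_packets(n: int) -> int:
    return 2 * n            # each client: one packet up, one down

for n in (4, 32, 256):
    print(n, p2p_packets(n), cs_packets(n))
# 4    12     8
# 32   992    64
# 256  65280  512
```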

gbLinux, I really suggest you go read through the entire Forum FAQ for this forum, including following all the links. Start with question 0, make sure you internalize the science behind it, then go to question 1, make sure you internalize that, ...

Then come back, and we can hold a discussion that makes sense, and where you don't come out looking like a lazy n00b. You've made so many beginner mistakes in your analysis it's not even funny, yet you complain that the experienced answers don't make sense to you. For an example of the latest mistake: You assume that geographic distance equates to network distance. That's not true at all -- when I ping a server in San Francisco from Redwood City (a distance of about 25 miles north), the packet goes through San Mateo, Sacramento (80 miles away), San Jose (30 miles south) and from there finally to San Francisco. If you're not familiar with the SF Bay Area, look it up on a map. Geographic distance has very little to do with network distance at the regional and lower levels. Hence, why we talk about "back-haul" and "long-haul" in the discussion.

##### Share on other sites
gbLinux    100
Quote:
 I'm going to leave this discussion with one observation. It is not uncommon for a novice to a particular field to believe he's come up with a novel idea that nobody's ever thought of before. He can't see any problems with his idea, and he gets frustrated because so-called "experts" will dismiss it, almost out-of-hand. This is not because the experts lack imagination, rather it is because the experts can see the inherent flaws in the idea that a novice - from a lack of experience - will miss.

im a novice and i can not see any problems with p2p, so i came here to ask an expert (you) to explain it to me, but you only told me p2p is bad, it has problems, it's this and that, it can't work... but no explanation, no analysis, no numbers.. nothing, and now you're gonna leave? i suppose you realized you are wrong, after all, what kind of expert does it take to realize the shortest route will yield the fastest path?

Quote:
 Some people believe that being a novice can be an advantage because you're not hampered with pre-conceived notions of what is and is not possible, but that is not true. Perhaps you can provide one or two examples of a novice who actually has come up with a novel idea that no "expert" would have considered, but for each of those, I can point out tens of thousands of "novice" ideas that fall down in the real world.
Do not be discouraged, however. We were all novices once! (Not that I'm an expert by any stretch of the imagination, of course!) My suggestion would be to keep your idea in the back of your mind as you learn all you can about implementing networked applications in the real world - you will be surprised at how complex it actually is.

shortest route will yield fastest path? (YES/NO)

hahaa... bye, bye.

##### Share on other sites
gbLinux    100
Quote:
Original post by Matias Goldberg
Quote:
 Original post by gbLinux

    B       C
        s
    A       D

AB = BC = CD = DA = 30km
As = Bs = Cs = Ds = 20km

*** P2P, one frame loop, per ONE client
round-trip: ~33km (2x 30km, 1x 40km)
packets sent: 3
packets received: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
round-trip: 40km
packets sent: 1
packets received: 4
total packets per client: 5

Round-trip for P2P should be 40km (2x 30km, 1x 60km, divided by 3), assuming what you did there was take an average.

what?? no, take client A for example:

A->B = 30km
A->C = 40km
A->D = 30km

how do you keep coming up with 60km? diagonally opposite clients can talk to each other as well, this is not some RING topology or something.

Quote:
 Second, and more importantly, in computers there is no average in this stuff. The whole system works as slow as the lowest component in that system (what is called, a bottleneck), not as it's average.

no. there actually is an average here, especially if we decided to sync all the peers to some time in the past, just like servers do. this system does not work only as fast as the slowest component allows, because updates are asynchronous - there is no FAST/SLOW here, no waiting - you only have FURTHER and CLOSER. further is not SLOWER, it is only more behind in the past, but the rate of updates is NON-INTERRUPTED, a constant streaming flow.

theoretically working at a FULL 60Hz and more, where frequency only depends on upload bandwidth, size of packets and number of peers. you are describing problems the server-based approach has.
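That dependency list ("upload bandwidth, size of packets and number of peers") can be turned into an explicit formula, which shows the "60Hz and more" figure is really a claim about the uplink. All numbers below (a 1 Mbit/s consumer uplink, 32 peers, 64-byte payload plus 28-byte UDP/IP header) are assumptions for illustration:

```python
# Highest sustainable tick rate for one peer in a full-mesh P2P game
# (sketch; uplink speed, peer count, and packet size are assumed values).
UPLOAD_BITS_PER_S = 1_000_000      # a 1 Mbit/s consumer uplink
PEERS = 32
PACKET_BYTES = 64 + 28             # payload + UDP/IP headers

bits_per_tick = (PEERS - 1) * PACKET_BYTES * 8   # sent to every other peer
max_hz = UPLOAD_BITS_PER_S / bits_per_tick

print(f"{max_hz:.1f} Hz")   # ~43.8 Hz on this uplink -- short of 60 Hz
```

Doubling the peer count roughly halves the achievable rate, which is the per-client cost of the full mesh.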

Quote:
 This means the P2P will go as slow as if it were running at 60km, because A has to wait for D. There are some clever tricks (lag compensation) that help A predict what D should have done and, once the data has arrived, fix the estimations. Nevertheless, in the long term, A will eventually need to stop and wait because D can't keep up (or vice versa). And B and C are caught in A & D's delays, so they have to wait too, to avoid getting too far ahead of A & D in the simulation.

no, no waiting here.

imagine 8 people have radar devices that can read signals from similar devices and display their location. all the devices broadcast their location to all the other devices, and all the devices update the location of every other device as the signal arrives. now this signal never stops, and the latency here is directly proportional ONLY to distance.

##### Share on other sites
Antheus    2409
Quote:
 Original post by gbLinux
shortest route will yield fastest path? (YES/NO)

The shortest route between London and New York is a straight line. It would take decades or centuries to dig a tunnel through there.
Then next shortest route is over surface. It takes about 3 days of sailing.
The longest route is through the air - and only takes 6 hours or so.

Also - stuck in gridlock vs. subway+walking.

And - walking across a mountain in straight line, vs. driving all the way around.

So I'm going to go with: No.

Quote:
 imagine 8 people have radar devices that can read signals from similar devices and display their location. all the devices broadcast their location to all the other devices, and all the devices update the location of every other device as the signal arrives. now this signal never stops, and the latency here is directly proportional ONLY to distance.

This is not necessarily true. First, it implies stationary observers within the same frame of reference. This can produce different results, depending on which terminology is used.
In addition, it does not account for the medium. The speed of light inside some media is lower than in vacuum. Gravitational lensing can be used to bend an indirect path through vacuum instead of traveling a shorter path in a straight line but at a slower speed.
Things are further complicated by tunneling. Depending on the distance between observers, the shortest distance might be zero, but with low probability.
And then there's string theory....

In other words: it is not proportional to distance. The time needed to travel (average speed can be calculated from it) is the integral of velocity over the path, as physics defined long ago.

And since latency is a direct function of average speed (emphasis on average), it is independent of the topologically shortest path (geography, line of sight, network route).

[Edited by - Antheus on August 29, 2009 5:45:17 PM]

##### Share on other sites
gbLinux    100
Quote:
Original post by Antheus
Quote:
 Original post by gbLinuxshortest route will yield fastest path? (YES/NO)

The shortest route between London and New York is a straight line. It would take decades or centuries to dig a tunnel through there.
Then next shortest route is over surface. It takes about 3 days of sailing.
The longest route is through the air - and only takes 6 hours or so.

Also - stuck in gridlock vs. subway+walking.

So I'm going to go with: No.

i thought you were referring to yourself as a 'networking expert', and now instead of using the paths of the network infrastructure you would rather dig tunnels?! please, if this is your profession... don't you think it's kind of important to figure this thing out completely? or, at least, don't try to shoot it down without a good reason, thanks.

it should be perfectly clear to novices and experts alike: p2p has a shorter traversal route, so it simply has to be able to communicate faster, plus it has all the benefits of streamed, non-interrupted, parallel processing.

this will not only allow a far better frequency, but asynchronous updates will smooth out many visual glitches automatically, and the streaming nature of the incoming data would make the whole experience even more fluid.

if you think i'm wrong about something, then please point it out.

[Edited by - gbLinux on August 29, 2009 5:05:34 PM]

##### Share on other sites
gbLinux    100
Quote:
Original post by hplus0603
Quote:
 why do you care how much you saturate WWW in general?

Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

gbLinux, I really suggest you go read through the entire Forum FAQ for this forum, including following all the links. Start with question 0, make sure you internalize the science behind it, then go to question 1, make sure you internalize that, ...

Then come back, and we can hold a discussion that makes sense, and where you don't come out looking like a lazy n00b. You've made so many beginner mistakes in your analysis it's not even funny, yet you complain that the experienced answers don't make sense to you. For an example of the latest mistake: You assume that geographic distance equates to network distance. That's not true at all -- when I ping a server in San Francisco from Redwood City (a distance of about 25 miles north), the packet goes through San Mateo, Sacramento (80 miles away), San Jose (30 miles south) and from there finally to San Francisco. If you're not familiar with the SF Bay Area, look it up on a map. Geographic distance has very little to do with network distance at the regional and lower levels. Hence, why we talk about "back-haul" and "long-haul" in the discussion.

huh. why complicate?

YES/NO:
1.) does p2p have a shorter traversal path than the server-based model?

2.) would parallel computing further reduce latency by getting rid of the serial computation the server does?

3.) can p2p run at a much higher frequency (60Hz and more) due to the nature of uninterrupted, streamed, asynchronous updates?

i rest my case... and i will gladly answer any question and try to explain if there is still anyone who can not understand this.

Quote:
 Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

what are you talking about? we are not talking about 'broadcast packets' any more, i think the conclusion was that those packets would get lost on the WWW. are you asserting that all 10 million WoW clients play on one server? how many players maximum can one WoW server host?

"bandwidth of the entire Internet", does that even make sense? that has nothing to do with anything. you should only be concerned about upload/download bandwidth per client. -- take 32 clients, take some average packet size, calculate traversals and latency, then tell us what's upload/download bandwidth for p2p and server based model, can you do that? as long as every peer/client stays within it's limits, that's all what maters, and than p2p wins over server model on sheer SPEED provided by constant, uninterrupted streaming flow of asynchronous updates, isn't that so?

[Edited by - gbLinux on August 29, 2009 6:37:51 PM]

##### Share on other sites
hplus0603    11347
gbLinux,

I now realize you are a troll. However, you've done a great job of skirting the limits of what might be considered a reasonable line of questioning, so I've let the thread go on this long. As far as trolls go, you're really skilled. (Or, as far as normal social humans go, you're very unskilled -- it's hard to tell the difference online)

Because you do not take the advice that's given to you, and do not actually draw the lessons from the posts that have been made (including posts with clear numbers, statistics, and technical explanations), this discussion will go no further.