Broadcasting (Multicasting) based multiplayer game, LAN/WAN?



#1 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 11:09 AM

hi, my experience is in physics and graphics programming and i always stayed ignorant of networking, but recently i decided to write my first WWW/LAN multiplayer code. so i'm in the process of figuring out how things work today, but also how they did it in the past... what the mistakes were, and why the current most popular architecture is the way it is.

for example, i read yesterday that the game DOOM used broadcasting, which overloaded LANs since all the computers on the network received and processed the packets regardless of whether they were playing the game or just part of the network, and so sysadmins banned DOOM. but how would that apply on today's internet? can we not utilise the benefits of multicasting now, with all the firewalls and stuff? these 'unexpected' packets would be discarded by all or most of today's computers, unless they intentionally open ports to grab those packets and play the game. so the problem with DOOM would not be the same today, or on the internet, as it was on LAN, right?

in light of that, i have this idea, and i could not find any documentation about it, so can someone educate me about the most recent, most popular ways of doing things, correct my understanding, and perhaps explain why there are no games based on a broadcasting/multicasting architecture - or, if there are, direct me to some links where i can read more about them. now, the way i see it...

CURRENT, MOST POPULAR ONLINE MULTIPLAYER ARCHITECTURE:
- the central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, i.e. new positions of each entity, velocities, orientation and such, which each client then renders, perhaps utilising some extrapolation to fill in the gaps from, say, a network refresh rate of 20Hz to their screen refresh rate of 60 FPS.

now, this would mean that the game can actually only run as fast as the slowest connected client can communicate with the server, right? which in the best possible case is about 10-20 FPS of real server-computed data and input samples, considering the round-trip lag... to put it simply, games on the internet today run at only about 10 FPS, regardless of how many FPS your client renders; those extra frames are only "smooth-candy", and even worse, they may themselves be the source of another set of visual artifacts, since they can easily mismatch with the server's next update. is this the correct state of affairs in networking and multiplayer games, as of 2009?

NOW, THE IDEA:
- how about every client broadcasts its input to every other client, including the central server, instead of just talking to the server back and forth? without the return trip, this would immediately speed things up about 2x, right? if this is not a very terrible idea for some reason, then i guess the only obstacle would be security and the fear of users cheating by hacking their client software. but wait, we just got rid of the need for a central server, so we could use it instead to police the clients full-time. i'm not sure about the ways players can cheat, or why, but if you invest all the time and resources the server uses to compute all the physics, trajectories, collisions and whatnot, then it seems possible to use that time to make the server check all the clients and find a way to identify non-human-controlled input. this should not be more complicated for the server than the original task of running the complete game physics for all the clients, right?
so basically, the game would be played on the client computers, every client calculating its own physics and reporting only its input to every other client with just ONE broadcast call, while simultaneously receiving the same info from all the other clients and simply rendering it without processing. this way the server would have all the time in the world to go about its gestapo business of policing the clients, and the average update round trip would be cut in half or so. how does that sound?

[Edited by - gbLinux on August 27, 2009 9:16:24 PM]
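To make the first half of that concrete, here is a minimal sketch (in Python; all names and numbers are illustrative, not from any particular engine) of how a client might render at 60 FPS from 20Hz server snapshots by interpolating between the two most recent states it has received:

SNAPSHOT_RATE = 20.0                 # server sends state 20 times per second
RENDER_DELAY = 2.0 / SNAPSHOT_RATE   # render ~100ms in the past so two snapshots bracket us

class Snapshot:
    def __init__(self, t, positions):
        self.t = t                   # server timestamp in seconds
        self.positions = positions   # entity_id -> (x, y)

snapshots = []                       # appended by the network code, oldest first

def render_position(entity_id, now):
    """Blend between the two snapshots that bracket (now - RENDER_DELAY)."""
    t = now - RENDER_DELAY
    for older, newer in zip(snapshots, snapshots[1:]):
        if older.t <= t <= newer.t:
            a = (t - older.t) / (newer.t - older.t)
            ox, oy = older.positions[entity_id]
            nx, ny = newer.positions[entity_id]
            return (ox + a * (nx - ox), oy + a * (ny - oy))
    # nothing to blend yet: fall back to the latest known position
    return snapshots[-1].positions[entity_id] if snapshots else (0.0, 0.0)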


#2 Antheus   Members   -  Reputation: 2393


Posted 27 August 2009 - 11:30 AM

Speed of light is the ultimate limiting factor. Even without delays from switches, the round-trip latency of an EU-US connection is ~50-100ms (do the math), 150ms+ in reality.

DOOM's networked physics model does not tolerate such high latency, or even half of it.

Broadcast over WAN can be, and is, implemented using P2P. The big problem is practical - broken routers, firewalls, NAT, asymmetric connections, ....

FPS (more accurately, the physics update rate) is not limited to 10Hz. There is no reason why it couldn't run at 60 or 100Hz over WAN. The limiting factor is the peers' bandwidth. Many cannot support such rates. Broadcast could help with that, but nobody has even the slightest idea of how such routing could be performed efficiently. Multicast never took off for this very reason.

The high-latency problem has also been solved for many tasks, but a much larger class of them remains unsolved in the presence of latency (e.g. a stack of bricks with 10 users manipulating it at the same time). So as long as latency exists, these problems will either be approximated, or will always be delayed.

Broadcast (as a routing mechanism) has less impact on this. The key difference between WAN and LAN is latency. On LAN it's effectively zero; over WAN it's between tens and thousands of milliseconds. This is what causes difficulties. The factor of two added by a server does increase it, but the server also solves many other problems, which is why MMOs prefer it.

Quote:
so basically, the game would be played on the client computers, every client calculating its own physics and reporting only its input to every other client with just ONE broadcast call, while simultaneously receiving the same info from all the other clients and simply rendering it without processing.


This is actually how it's done by many multi-player games. The seminal article about a very reliable, very efficient mechanism to achieve this is the Age of Empires networking article, which was proven in practice a decade or so ago.

But again, most multiplayer games choose a server-based design for practical reasons, but still do most of the physics on the client.

#3 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 12:20 PM

hi, thanks for that. i agree.

basically, i was reading about counter-strike and quake servers, which are set by default to about a 20Hz update rate. clients can request more and some can get it, but then what? not only is that not fair, it also does not make for a realistic simulation, relatively speaking... players close to the server with a fast connection could then sample time much faster and therefore have a super-perception and super-reaction advantage compared to other players. but even worse, when you try to animate and synchronise all that across all the clients, you get all kinds of space and time distortion, like warping and players being in more than one place at the same time, being shot dead yet being alive... like in that movie where people lived in a computer-generated world.


Quote:

This is actually how it's done by many multi-player games. The seminal article about a very reliable, very efficient mechanism to achieve this is the Age of Empires networking article, which was proven in practice a decade or so ago.

But again, most multiplayer games choose a server-based design for practical reasons, but still do most of the physics on the client.


are you saying those games do most of the physics on the clients, or are you saying they actually use broadcasting, where every client talks to every other client as well? can you give some links?

was i correct, then, that broadcasting and one-to-many communication between all the clients can actually cut the update delay in half, given the same conditions? and if so, what would be the advantages and practical reasons to still run server-based games, if client-to-client broadcasting is that much faster? pros, cons?


thanks

[Edited by - gbLinux on August 27, 2009 7:20:12 PM]

#4 Antheus   Members   -  Reputation: 2393


Posted 27 August 2009 - 12:59 PM

Server-based FPS games run the simulation in the past. They collect actions for 100ms (or so), then resolve what happened and let the clients know. Clients display actions immediately, but change the relevant state only after the server confirms it. On the client, when you shoot someone, blood appears immediately, but they only lose health after the server confirms it.
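For illustration, a minimal sketch of that "display now, commit later" pattern (Python; all names here are hypothetical):

class Client:
    def __init__(self):
        self.pending_hits = {}                # shot_id -> target entity
        self.health = {}                      # entity_id -> hit points

    def on_local_shot(self, shot_id, target):
        spawn_blood_effect(target)            # cosmetic feedback, shown immediately
        self.pending_hits[shot_id] = target   # real damage waits for the server

    def on_server_confirm(self, shot_id, damage):
        target = self.pending_hits.pop(shot_id, None)
        if target is not None:
            self.health[target] = self.health.get(target, 100) - damage  # authoritative change

def spawn_blood_effect(target):
    pass                                      # purely visual; never touches game state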


As far as broadcasting goes - it's not latency it solves. The factor of 2 is a one-time, constant-factor improvement. It pales in comparison to everything else.

Broadcast on LAN saves bandwidth. One packet from each peer is delivered to all, regardless of whether there are 2 or 200. But it also delivers this packet to all, regardless of whether they are interested or not.

On WAN, this means delivering the packet to all IPs, and routing it to each computer in each private network. WoW has 10 million players; say each of them sends ten packets per second and each packet is 10 bytes long. That is ten gigabit connections, fully saturated - to each and every single computer on the internet. The backbone would probably hold, but the last mile has no hope. And that is just WoW.
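The arithmetic behind that estimate (ignoring packet headers, which would make it considerably worse):

players = 10_000_000        # WoW-scale population
packets_per_second = 10     # per player
payload_bytes = 10          # per packet; headers ignored, which flatters the result

bytes_per_second = players * packets_per_second * payload_bytes
print(bytes_per_second)                 # 1,000,000,000 B/s
print(bytes_per_second * 8 / 1e9)       # = 8 Gbit/s: roughly ten saturated
                                        # gigabit links, delivered to every
                                        # host that "hears" the broadcast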


Multicast solves the design problem, but not the technical one. It allows the creation of limited interest groups (a virtual LAN with dynamic subscriptions), but there is currently no directly usable support for it on WAN, simply due to the complexity of routing.

#5 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 02:00 PM

ok, thanks, i get you, but i'd like more details, so please let me ask more questions...

Quote:

As far as broadcasting goes - it's not latency it solves. The factor of 2 is a one-time, constant-factor improvement. It pales in comparison to everything else.

i really cannot see anything that can compare with a 2x speed increase. i always saw speed as a paramount factor when designing any component of any real-time game, and online games, since they are struggling with time-lag to begin with, should be even more concerned with any speed gain they can get on network communication. there is no narrower bottleneck than connection speed in online multiplayer games, right?

Quote:

Broadcast on LAN saves bandwidth. One packet from each peer is delivered to all, regardless of whether there are 2 or 200. But it also delivers this packet to all, regardless of whether they are interested or not.

ok, my knowledge of how all that works is minimal, so i still have my original question: would internet routers, firewalls and closed ports not automatically discard such packets? i thought there must at least be some kind of packet which automatically gets blocked and lost, and so we could design packets in such a way as 'to be only received/processed by active listeners'.

Quote:

On WAN, this means delivering the packet to all IPs, and routing it to each computer in each private network. WoW has 10 million players; say each of them sends ten packets per second and each packet is 10 bytes long. That is ten gigabit connections, fully saturated - to each and every single computer on the internet. The backbone would probably hold, but the last mile has no hope. And that is just WoW.

i really cannot visualise how much impact the sheer amount of increased information on the physical network lines can really have - is that not just like everyone talking faster over the phone? that would not overload the phone lines, or would it? please explain. what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.

Quote:

Multicast solves the design problem, but not the technical one. It allows the creation of limited interest groups (a virtual LAN with dynamic subscriptions), but there is currently no directly usable support for it on WAN, simply due to the complexity of routing.

ok, that's it. i think that fits best with what i had in mind... can you/anyone just explain what kind of support is needed, and what the complexities of routing it are?

[Edited by - gbLinux on August 27, 2009 9:00:58 PM]

#6 swiftcoder   Senior Moderators   -  Reputation: 9613


Posted 27 August 2009 - 02:44 PM

Quote:
Original post by gbLinux
what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.
If you talk too fast on the telephone, the person on the other end will either ignore you or hang up on you. Pretty much the same thing happens on the internet - routers mostly just drop broadcast/multicast packets that are intended to propagate to the internet, so at most you can flood your own LAN. If you customise your router to pass the broadcast packets onwards, chances are the next-level router (at your local ISP) will drop them anyway...

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#7 Codeka   Members   -  Reputation: 1153


Posted 27 August 2009 - 03:03 PM

The original DOOM used IPX as its networking protocol, which is a LAN-only protocol - it doesn't work over the internet (WAN) at all. IP does have the concept of broadcasting, but it works only over the local subnet - routers cannot pass those packets up-line (i.e. to the internet) - it simply doesn't work.

That means if you wanted to do what you're suggesting, you'd basically have to build a network the way P2P networks (Shareaza and friends) work - each client maintains a connection to every other client. That means, to send an outbound update, you must send one individual packet to each client.

So if your game requires 10Kb/s of bandwidth per player, that means if you've got 32 players, that's 320Kb/s of uplink - many "broadband" plans don't support that much upstream bandwidth.
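For illustration, a minimal sketch of that per-peer sending in Python (the peer addresses are documentation-only placeholders): one sendto() per connected peer, so uplink cost grows linearly with the player count:

import socket

# every other player's address; these values are placeholders
peers = [("203.0.113.5", 7777), ("198.51.100.9", 7777)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_update(payload: bytes):
    # no broadcast over the WAN: the same packet goes out once per peer,
    # so uplink use grows linearly with the player count (32 players = 32x)
    for addr in peers:
        sock.sendto(payload, addr)

send_update(b"pos=12.5,3.4;vel=0.1,0.0")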

#8 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 03:05 PM

Quote:
Original post by swiftcoder
Quote:
Original post by gbLinux
what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.
If you talk too fast on the telephone, the person on the other end will either ignore you or hang up on you. Pretty much the same thing happens on the internet - routers mostly just drop broadcast/multicast packets that are intended to propagate to the internet, so at most you can flood your own LAN. If you customise your router to pass the broadcast packets onwards, chances are the next-level router (at your local ISP) will drop them anyway...


hehe, i'll talk slower then. but seriously though, are you saying that this has actually never been tried, or that it simply would not work, hence it was never tried? ...but then, how does P2P work? how does all the real-time streaming audio and video on the www work? aren't there radio and tv shows already broadcasting in real time over the WWW, or something like that? i don't quite get it, i'm afraid. am i not being clear in my question? sorry if i'm missing some basics here.


basically, this is the question:
- has anyone ever tried some form of real-time multiplayer interaction over the WWW based on 'packet broadcasting', one-to-many from every client to every other client?

[Edited by - gbLinux on August 27, 2009 10:05:44 PM]

#9 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 03:15 PM

Quote:
Original post by Codeka
The original DOOM used IPX as its networking protocol, which is a LAN-only protocol - it doesn't work over the internet (WAN) at all. IP does have the concept of broadcasting, but it works only over the local subnet - routers cannot pass those packets up-line (i.e. to the internet) - it simply doesn't work.

That means if you wanted to do what you're suggesting, you'd basically have to build a network the way P2P networks (Shareaza and friends) work - each client maintains a connection to every other client. That means, to send an outbound update, you must send one individual packet to each client.

So if your game requires 10Kb/s of bandwidth per player, that means if you've got 32 players, that's 320Kb/s of uplink - many "broadband" plans don't support that much upstream bandwidth.


ok, thanks... i'm sorry if it took some time, but i believe i understand now.

basically, it all comes down to the ISPs, what kinds of connections are available and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with the end users, not the game developers.

everyone upgrades their computers with the latest video cards so they can smoothly render all the graphics; the same should go for network cards and ISPs. there should be separate virtual broadcasting networks for different connection speeds, just as there should be different servers working at different update rates, serving users with similar connection speeds.


so only one question remains: has it been tried, and how much faster can 'P2P online multiplayer' be, if at all?

#10 hplus0603   Moderators   -  Reputation: 4971


Posted 27 August 2009 - 04:26 PM

Quote:
i think that's where this problem should be - with the end users, not the game developers.

everyone upgrades their computers with the latest video cards so they can smoothly render all the graphics


It's clear you're not involved in the business side of game development, given that you're expressing two big game-business marketing mistakes in these two sentences!

First, it's not up to end users to upgrade just to play your game -- it's up to you to serve as many potential customers as possible, if you want to actually recoup the cost of making your game.

Second, only a fraction of PC owners (1%? 2%?) upgrade their graphics cards to "smoothly render all the graphics." If you want to sell to a small, discerning set of hobbyists, then feel free, but the problem there is that those guys already have several very good-looking games to choose from, and for you to take a slice of that pie requires a lot of investment in art (many millions of dollars), which you're not at all certain to make back.

Third, >50% of all "PCs" sold are laptops, which can't upgrade graphics at all. >50% of all graphics chips sold are Intel integrated, which generally aren't upgradeable. And, in fact, the fastest-growing market is netbooks, which, if you're lucky, come with Windows 7, an Intel GMA 950, 1 GB of RAM and a hyper-threaded, in-order Atom processor. And a 1024x600 screen.

If you have the resources, and want to build a game that requires users to obtain the best possible service and hardware (even though most people in the world can't actually get a very fast Internet connection, even if they wanted to), then you're welcome to do it. I hope you enjoy it! You will not, however, make your money back. But it's not always about the money.

If you want to spray packets to each player in your game using UDP, from each other player, then that's totally doable (assuming the game is no bigger than available bandwidth). However, when we have measured these things, it turns out that most of the latency is in the "last mile," plus the "long haul." Thus, going the "last mile" out from you, and then back-haul to me, and then the "last mile" back to me, is not much faster than going "last mile" to back-haul to a well connected server, to back-haul to my "last mile." The only case where you will measure significantly improved latency is when you and I are on one coast (or continent), and the server is on another. The reason for this is that co-lo centers have no "last mile" to worry about; they are plumbed straight into the peering fabric of the back-haul providers.



#11 Codeka   Members   -  Reputation: 1153


Posted 27 August 2009 - 04:54 PM

Quote:
Original post by gbLinux
basically, it all comes down to the ISPs, what kinds of connections are available and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with the end users, not the game developers.
It doesn't all come down to cost; network connections have fundamental limits on how much data per second they can send/receive. Dial-up typically maxes out at 56Kb/s, ADSL maxes out around 20-30Mb/s, a LAN can often do up to 1Gb/s, and so on.

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8).

Also, streaming is a totally different ballgame from games. When you're streaming video, the most important factor is bandwidth. Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain, and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end, and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre-optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is about 133ms*. You cannot improve on that (without violating the laws of physics).

So the problem with your solution is that you're trying to make an improvement which relies on very low latency to succeed. That means a person in Sydney could not play your game with a person in LA without being adversely affected - and no amount of upgrading the hardware is going to fix that problem.

You're better off living with the limitations of the network, and designing your network protocol around the fact that latency will inevitably exist.

* I might've got the math wrong there, but it's a good enough approximation :-)
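For reference, the back-of-the-envelope version of that calculation, assuming light in fibre travels at roughly two thirds of c and using a very rough Sydney-LA distance (both figures are approximations, not measurements):

distance_km = 12_000            # Sydney to LA, very roughly
fibre_speed_kms = 200_000       # light in fibre: about two thirds of c, in km/s

one_way_ms = distance_km / fibre_speed_kms * 1000
print(one_way_ms)               # ~60ms one way, so ~120ms round trip,
                                # before a single router adds its own delay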

#12 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 06:04 PM

Quote:

It's clear you're not involved in the business side of game development, given that you're expressing two big game-business marketing mistakes in these two sentences!

First, it's not up to end users to upgrade just to play your game -- it's up to you to serve as many potential customers as possible, if you want to actually recoup the cost of making your game.

Second, only a fraction of PC owners (1%? 2%?) upgrade their graphics cards to "smoothly render all the graphics." If you want to sell to a small, discerning set of hobbyists, then feel free, but the problem there is that those guys already have several very good-looking games to choose from, and for you to take a slice of that pie requires a lot of investment in art (many millions of dollars), which you're not at all certain to make back.

Third, >50% of all "PCs" sold are laptops, which can't upgrade graphics at all. >50% of all graphics chips sold are Intel integrated, which generally aren't upgradeable. And, in fact, the fastest-growing market is netbooks, which, if you're lucky, come with Windows 7, an Intel GMA 950, 1 GB of RAM and a hyper-threaded, in-order Atom processor. And a 1024x600 screen.


i disagree completely, but not passionately... so i'd rather not talk about anything like that, since the solution i'm suggesting might be twice as fast, and therefore actually more suitable for low-end clients.

Quote:

If you have the resources, and want to build a game that requires users to obtain the best possible service and hardware (even though most people in the world can't actually get a very fast Internet connection, even if they wanted to), then you're welcome to do it. I hope you enjoy it! You will not, however, make your money back. But it's not always about the money.

i have no idea what you are going on about. i work on embedded devices, and i said speed is the paramount factor for me, which means absolute optimisation and the lowest possible hardware requirements. i don't want to build a game, ask for blessings or talk about marketing and statistics; i have a simple question and i'd like an answer to it.

the question is:
- has a "P2P multiplayer" architecture ever been tried over the WWW, or anywhere else, at all?


Quote:

If you want to spray packets to each player in your game using UDP, from each other player, then that's totally doable (assuming the game is no bigger than available bandwidth). However, when we have measured these things, it turns out that most of the latency is in the "last mile," plus the "long haul." Thus, going the "last mile" out from you, and then back-haul to me, and then the "last mile" back to me, is not much faster than going "last mile" to back-haul to a well connected server, to back-haul to my "last mile." The only case where you will measure significantly improved latency is when you and I are on one coast (or continent), and the server is on another. The reason for this is that co-lo centers have no "last mile" to worry about; they are plumbed straight into the peering fabric of the back-haul providers.


- what do you mean by "game is no bigger than available bandwidth"?
- what "things" did you measure, if you or anyone else never tried what i'm talking about?
- what is the "last mile"? what is the "long haul"? can that be communicated in plain english?

imagine you and 10 of your friends in the same room, or in the same city, playing the game over the WWW while the server is in the next city. similar to your example, this can easily illustrate just how much faster P2P can be, and that's not even mentioning the time the server loses on calculating all the physics for all the clients. my guess is that P2P online multiplayer games could be, ON AVERAGE, even more than twice as fast as server-based ones. it of course depends on the location distribution and individual connections, but it seems only extreme cases of player/server distribution would benefit more from the server-based approach, if any.

i don't really get what you are saying there... clients should be able to communicate with each other as much and as fast as they can, on average, with the server. all the time the server previously spent before being able to send packets is pure gain here, which i believe translates into more than just better latency, but i'd like some experimental data.


#13 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 06:46 PM

Quote:
Original post by Codeka
Quote:
Original post by gbLinux
basically, it all comes down to the ISPs, what kinds of connections are available and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with the end users, not the game developers.
It doesn't all come down to cost; network connections have fundamental limits on how much data per second they can send/receive. Dial-up typically maxes out at 56Kb/s, ADSL maxes out around 20-30Mb/s, a LAN can often do up to 1Gb/s, and so on.

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8).

Also, streaming is a totally different ballgame from games. When you're streaming video, the most important factor is bandwidth. Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain, and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end, and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre-optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is about 133ms*. You cannot improve on that (without violating the laws of physics).

So the problem with your solution is that you're trying to make an improvement which relies on very low latency to succeed. That means a person in Sydney could not play your game with a person in LA without being adversely affected - and no amount of upgrading the hardware is going to fix that problem.

You're better off living with the limitations of the network, and designing your network protocol around the fact that latency will inevitably exist.

* I might've got the math wrong there, but it's a good enough approximation :-)


i agree. but given the same circumstances, i.e. the same spatial distribution of server and clients, i'm pretty sure that on average the same information can be passed AND calculated more quickly among the clients than through a single server, and therefore more times per second.

person A is in Sydney
person B is in New York
person C is in Tokyo
server S is in Paris

*** SERVER-BASED SIMULATION FRAME
simultaneous{
info travels A to S
info travels B to S
info travels C to S
}
server physics calculation... ms?
simultaneous{
info travels S to A
info travels S to B
info travels S to C
}

client graphics render A
client graphics render B
client graphics render C

how many ms, how many kb did we use there to accomplish this?


*** P2P-BASED SIMULATION FRAME
simultaneous{
client A physics calculation...
info travels A to B,C...
client B physics calculation...
info travels B to A,C...
client C physics calculation...
info travels C to A,B...
}

client graphics render A
client graphics render B
client graphics render C

how many ms, how many kb did we use there to accomplish this?
how long is the route the packets traversed in each case? how many packets - more, fewer, the same?


this is a kind of geometrical problem, and i have no idea how to solve it except by plugging in some real numbers and comparing, which i find bothersome to do... so can anyone do the math and explain whether there is any latency gain/loss one way or the other? also, i think the visual artifacts produced by unavoidable latency would in either case still be better distributed without the server.

and most interestingly, since every two players would have the same connection speed in relation to one another, no one player would have any advantage over another - as discussed above, when i mentioned fast time sampling and the "super-perception" and "super-reaction" advantage that players with faster connections have in the server-based approach - or so it would seem. is any experimental data available on the subject?


Quote:

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that - perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?

[Edited by - gbLinux on August 28, 2009 3:46:44 AM]

#14 Codeka   Members   -  Reputation: 1153


Posted 27 August 2009 - 07:55 PM

Quote:
Original post by gbLinux
Quote:

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that - perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?
Peer-to-peer is the correct term, and it's fairly common with real-time strategy games (e.g. StarCraft) where the number of simultaneous players is small and the bandwidth requirements aren't large (i.e. you don't need to send updates 15-20 times per second just to make it look "OK").

#15 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 09:18 PM

Quote:
Original post by Codeka
Quote:
Original post by gbLinux
Quote:

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that - perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?
Peer-to-peer is the correct term, and it's fairly common with real-time strategy games (e.g. StarCraft) where the number of simultaneous players is small and the bandwidth requirements aren't large (i.e. you don't need to send updates 15-20 times per second just to make it look "OK").


cool. now, would any of those games actually happen to use broadcasting as the way to implement outgoing packets, or do they simply have a for/next loop where packets are sent to selected/connected addresses?

so, obviously this p2p multiplayer stuff works and it's nothing new; then there must be some way to actually compare and deduce, if not simply measure, the two, and see how they perform side by side given the same task and the same client distribution.

you are saying it as if p2p multiplayer games inherently perform worse than server-based ones for some reason, since you're suggesting it's used where there is no need for high-frequency updates.

but i'm suggesting the opposite: i'm suggesting 'p2p multiplayer games' would scale much better than server-based ones, could support more players, and could actually calculate and transmit the same information faster, allowing for more rapid updates.


the above geometrical problem is all it takes to get a rough picture. just plug in some approximate numbers, or draw the triangle on a piece of paper and measure the distance one way and the other, p2p vs central server; compare the two, subtract the time the server needs to calculate the physics, consider how that scales with the number of players, and tell me what you think. how about it?


can someone point me to some article or paper about all this? any "developer notes" from the starcraft team or similar info?



#16 gbLinux   Banned   -  Reputation: 100


Posted 27 August 2009 - 10:43 PM

ok, here it is.


        B

        s
A               C


let's consider the best possible scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect... all the sides of this triangle (AB, AC, BC) are 30km, and the distance from each client to the server (As, Bs, Cs) is 17km.


*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others


*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server takes to calculate the physics, which can be what? 50, 100, 500ms?
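Taking the invitation to plug in numbers at face value, a quick sketch (Python; it assumes a signal speed of roughly 200,000 km/s in fibre and deliberately ignores all switching, queuing and last-mile delays, which dominate in practice):

SIGNAL_SPEED_KMS = 200_000.0        # rough signal speed in fibre, km/s

def travel_ms(km):
    return km / SIGNAL_SPEED_KMS * 1000

p2p_ms = travel_ms(30)              # A -> B directly
via_server_ms = travel_ms(17 + 17)  # A -> s -> B
print(p2p_ms, via_server_ms)        # ~0.150ms vs ~0.170ms

# at city scale the pure path-length saving is microseconds; real links add
# last-mile and routing delays of tens of milliseconds per hop, so geometry
# alone says little about which design feels faster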


now let's consider how the time the server needs to calculate physics for all the clients scales with the number of clients... there will always be a limit on the processing speed of the server, and no matter how fast the calculation, it would still take 'some' time, if for nothing else then just to loop through all the clients and send packets to each... and then the first client on the list has an advantage over the last, and the player with a faster connection has an advantage over the one with a slow connection, or the one merely further away from the server.

this is not the case with P2P; broadcasting would work as an asynchronous update, which means the visual artifacts would be distributed better, since there would be no wait, no need for synchronisation. stuff would simply get updated as it arrives, again regardless of the actual display frame rate and independent of the "slowest connection" syndrome. the collective experience would generally suffer only when the slow-connection player is on the screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.


P2P wins big time, if you ask me, so what in the world is going on here? where are the papers and articles about this "P2P ONLINE MULTIPLAYER", and why is it not used more, or for high-frequency-update games such as first-person shooters? why starcraft, why not quake? though i think there is not much difference as long as the action is in real time... but, but, why a server in the first place? who ever came up with the server concept, and why?


[Edited by - gbLinux on August 28, 2009 5:43:11 AM]

#17 Antheus   Members   -  Reputation: 2393


Posted 28 August 2009 - 01:32 AM

Quote:
Original post by gbLinux

let's consider the best possible scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect...

Ah, see, here's your problem right here.

All connections are not equal. Backbone connections are fast and low-latency. The one between your computer and the ISP's gateway is slow, oversubscribed, perhaps throttled, subject to QoS, bandwidth shaping or buffering. People using wireless cards also experience higher latency.

Quote:
*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others

They also need to calculate physics.


Quote:
*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server takes to calculate the physics, which can be what? 50, 100, 500ms?

That time is not part of latency, since communication is asynchronous.

As said, the server does not update physics immediately, to allow for fair play and to tolerate players with high latency. The server is running 100ms in the past.

Quote:
there will always be a limit on the processing speed of the server, and no matter how fast the calculation, it would still take 'some' time,

Yes - and the same goes for clients. In the server-based model, the server will be a 16-processor, server-grade piece of hardware. In the P2P model, each peer will be a 5-year-old notebook, running in power-saving mode (not always, but too often), with 3% of the server's processing power - but needing to do the same work as the server!!!

Quote:
if for nothing else then just to loop through all the clients and send packets to each... and then the first client on the list has an advantage over the last, and the player with a faster connection has an advantage over the one with a slow connection, or the one merely further away from the server.

Looping over clients takes effectively zero time. And again, the server simulates things in the past, so a player with high latency is not disadvantaged.

All current server-based models are 'fair'. They are designed to take latency into account (whether 1ms or 100ms) and level the playing field. The different-connection argument hasn't been valid since who knows when.
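A sketch of one common way servers 'level the playing field': keep a short history of world states and evaluate each shot against the state the shooter actually saw when firing, i.e. the rewinding style of lag compensation (Python; class and method names are illustrative, not from any engine):

import collections

HISTORY_MS = 1000                             # keep one second of past world states

class LagCompensatingServer:
    def __init__(self):
        self.history = collections.deque()    # (timestamp_ms, positions) pairs

    def record_state(self, now_ms, positions):
        self.history.append((now_ms, dict(positions)))
        while self.history and self.history[0][0] < now_ms - HISTORY_MS:
            self.history.popleft()            # drop states older than the window

    def resolve_shot(self, now_ms, shooter_latency_ms, hit_test):
        if not self.history:
            return False
        # rewind to the world the shooter was actually looking at when firing
        target_time = now_ms - shooter_latency_ms
        _, past_positions = min(self.history,
                                key=lambda s: abs(s[0] - target_time))
        return hit_test(past_positions)       # judge the shot against old state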

Quote:
stuff would simply get updated as it arrives, again regardless of the actual display frame rate and independent of the "slowest connection" syndrome,


"And then, magic happens..."

There is no slowest-connection syndrome. The slowest client affecting all others is an artefact of network models built for LAN - with no latency compensation. It has not been an issue for over a decade.

"stuff simply gets updated" is an unsolved problem right now. For example, very knowledgeable people are still exploring how to make it happen.

Without listing all the various methodologies:
- P2P can work without physics at all, or assume a non-deterministic model, synchronizing as needed, perhaps incurring high latency and stalls.
- Or it can use a fully deterministic model, and accept that the slowest client will determine the update rate (the way Age of Empires worked; see the lockstep sketch below).
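A minimal sketch of that deterministic lockstep idea (Python; every name here is illustrative): each peer runs the same simulation and exchanges only inputs, and nobody advances to turn N+1 until every peer's input for turn N has arrived:

import queue

incoming = queue.Queue()    # a network thread puts (turn, peer_id, input) here

def run_lockstep(my_id, peer_ids, send, sim_step):
    """send(turn, input) must transmit to every peer; sim_step(inputs) must be
    fully deterministic, or the peers' worlds silently drift apart."""
    turn = 0
    while True:
        my_input = sample_local_input()
        send(turn, my_input)                    # ship inputs only, never state
        inputs = {my_id: my_input}
        while len(inputs) < len(peer_ids) + 1:  # block until every peer's input
            t, peer, inp = incoming.get()       # for this turn has arrived: the
            if t == turn:                       # slowest connection sets the
                inputs[peer] = inp              # pace for everyone
        sim_step(inputs)                        # identical step on all machines
        turn += 1

def sample_local_input():
    return ()   # placeholder: read keyboard/mouse here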


Quote:
the collective experience would generally suffer only when the slow-connection player is on the screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In the P2P model, everyone would need to wait for the slowest.
In the server model, glitches are used to guarantee fair play, at the cost of making the slow player lag.

Quote:
why is it not used more, or for high-frequency-update games such as first-person shooters?

It's a conspiracy, led by the IPAP (Internet Posters Against P2P).

Quote:
why starcraft
Licensing and control.

Quote:
why not quake?
Carmack is pragmatic and gets things done by choosing the simplest and most robust solution.


Quote:
but, but, why a server in the first place? who ever came up with the server concept, and why?
A bunch of people who developed both P2P and server-based models for 10 years, and finally concluded that server-based works better.


Look - there are a lot of real-world gotchas which have shown that P2P has its own set of problems, some of which are, as of right now, unsolvable, or for which the solutions are worse than server-based implementations.

The Half-Life and Quake networking models are described in detail in the FAQ, and there are other resources and articles which cover the technical part. In the real world, other issues occur, such as users being behind NAT or corporate firewalls, using shared WiFi, using routers with broken firmware, and so on...

It's not a conspiracy of incompetent developers, but 20 or so years of experience from people who have actually had to ship such titles, and hordes of frustrated help-desk workers who had to deal with customers using them.

Broadcast, as it applies to LAN, does not work over WAN. P2P, however, does, and is sometimes used, except that Real World™ issues make it inconvenient and not viable.

#18 gbLinux   Banned   -  Reputation: 100


Posted 28 August 2009 - 04:12 AM

Quote:

Quote:

let's consider the best possible scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect...

Ah, see, here's your problem right here.

All connections are not equal. Backbone connections are fast and low-latency. The one between your computer and the ISP's gateway is slow, oversubscribed, perhaps throttled, subject to QoS, bandwidth shaping or buffering. People using wireless cards also experience higher latency.


ok, connections are not equal, but having a server makes it all worse.

Quote:
Quote:

*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others

They also need to calculate physics.


each client does it simultaneously, in parallel computation, and each does only a small portion of what the server would do in serial computation. there is a point to which a certain lag/latency can be tolerated given the desired update frequency, and that lag should not be caused by your 386 processor... i mean, if your computer can't do the math or render one frame faster than the average network update, then i don't think that computer is suitable for any kind of game, online or not. are you not aware of the advantages of parallel and distributed computing, as with dual processors, the PS3, graphics pipelines or render farms?


Quote:
Quote:
*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server takes to calculate the physics, which can be what? 50, 100, 500ms?

That time is not part of latency, since communication is asynchronous.

As said, the server does not update physics immediately, to allow for fair play and to tolerate players with high latency. The server is running 100ms in the past.


that time is very much a part of the time it takes for one complete frame cycle, call it what you want. what do you mean, "does not update physics immediately"? waiting to sync? that's even worse! none of that makes it any better or faster than p2p - worse, in fact. it is simple geometry and mathematics, the distance information has to travel - SPEED OF LIGHT - the ultimate limiting factor, remember?

given the same circumstances, the shortest distance will yield the fastest path, isn't that so? what are you trying to conclude anyway - the server-based approach is better, but why? based on what? you don't seem to have any valid data on how P2P would actually perform, so you cannot really compare the two, can you now?


huh, running in the past... all that stuff is just a way to smooth the visual artifacts caused by low network update rates, filling in the missing frames. but then you have to make some room in the time-stepping chronology to correct for possible errors caused by the first attempt at visual correction, or by plain network lag, so you run the server in the past and "adjust" for any mis-synchronisation or wrong extrapolation guess... great stuff, that.


Quote:
Quote:
there will always be a limit on the processing speed of the server, and no matter how fast the calculation, it would still take 'some' time,

Yes - and the same goes for clients. In the server-based model, the server will be a 16-processor, server-grade piece of hardware. In the P2P model, each peer will be a 5-year-old notebook, running in power-saving mode (not always, but too often), with 3% of the server's processing power - but needing to do the same work as the server!!!


nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it, just remember that clients would NOT need to do the same work as the server - much less, indeed - and it would be SIMULTANEOUS, parallel.


Quote:
Quote:
the collective experience would generally suffer only when the slow-connection player is on the screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In the P2P model, everyone would need to wait for the slowest.
In the server model, glitches are used to guarantee fair play, at the cost of making the slow player lag.


no, why would any peer in p2p have to wait for anything? waiting and syncing are more a part of the server information flow, as you explained above... working in the past, waiting to sync, eh? ...did you just say that glitches are actually good, that they "guarantee fair play"? uh huh.



Quote:
Quote:
but, but, why a server in the first place? who ever came up with the server concept, and why?
A bunch of people who developed both P2P and server-based models for 10 years, and finally concluded that server-based works better.


what? how did you come up with that? you're just saying stuff without really knowing anything about it, which is strange by itself... perhaps not, but then please prove me wrong and actually provide some references so we can confirm your statement, will you?


Quote:

Look - there are a lot of real-world gotchas which have shown that P2P has its own set of problems, some of which are, as of right now, unsolvable, or for which the solutions are worse than server-based implementations.


i've heard that one here before, and i ask again: what? what are the gotchas? who has shown them? when? how? where do you pull all that stuff from - can you reference any of it? some links, articles, papers?


Quote:

It's not a conspiracy of incompetent developers, but 20 or so years of experience from people who have actually had to ship such titles, and hordes of frustrated help-desk workers who had to deal with customers using them.


good, then someone must have written something about it, right? it could not all be just a product of your imagination, or could it?

#19 stonemetal   Members   -  Reputation: 288


Posted 28 August 2009 - 04:42 AM

Quote:
Original post by gbLinux

nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it, just remember that clients would NOT need to do the same work as the server - much less, indeed - and it would be SIMULTANEOUS, parallel.


How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it's OK not to have one, since you obviously know everyone involved.

#20 Sirisian   Crossbones+   -  Reputation: 1638


Posted 28 August 2009 - 05:31 AM

Quote:
Original post by gbLinux
Quote:
Original post by Antheus
Quote:
Original post by gbLinux
the collective experience would generally suffer only when the slow-connection player is on the screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In the P2P model, everyone would need to wait for the slowest.
In the server model, glitches are used to guarantee fair play, at the cost of making the slow player lag.


no, why would any peer in p2p have to wait for anything? waiting and syncing are more a part of the server information flow, as you explained above... working in the past, waiting to sync, eh? ...did you just say that glitches are actually good, that they "guarantee fair play"? uh huh.

The waiting part in P2P is more for a lockstep kind of game. I don't see the point of waiting for the slowest player when using P2P. You could set it up so that players just stop sending packets to them. I mean, there's no rule saying you have to keep slow clients in sync.

What did you mean by:
Quote:
Original post by gbLinux
visual absurdities that happen to each and every client with the server-based approach

In the server-client model the slowest player never affects the gameplay if the server is authoritative, unless the game is designed to make sure everyone is in sync and not lagging (I've played a few games that do that).

As for using P2P in a game - it's possible. You might want to try it and see how well it works for you. My major complaint about P2P is that it lets players know each other's IPs, which I've never really liked. Also, it doesn't scale really well and gets complex for more than a few players. Actually, my shard servers are set up in a lockstep P2P system to keep their updates in sync. Still haven't done much to see how well it works.

Also broadcasting over the internet would be lol with IPv6 :P

So yeah, I'd write up a test program and see if you like how it works. Remember to handle instances where one player is lagging. In a server-client model no one would notice, since the server is broadcasting the state changes and lag only affects the person lagging (assuming this isn't a lockstep model that waits for players). All of a sudden, in P2P you have a client that is sending everyone position updates only once every second. Worse yet, imagine he's telling clients A and B his position at the correct time interval, but purposely delaying packets to client C so that he can get a quick kill. :P So many vulnerabilities I don't even know where to start.

Quote:
Original post by gbLinux
- the central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, i.e. new positions of each entity, velocities, orientation and such, which each client then renders, perhaps utilising some extrapolation to fill in the gaps from, say, a network refresh rate of 20Hz to their screen refresh rate of 60 FPS. now, this would mean that the game can actually only run as fast as the slowest connected client can communicate with the server, right?

That doesn't even make sense. The server is never limited by the slowest connection. It's just getting input and sending out packets. At no point does it "wait for the slowest client". If it doesn't get input, it doesn't get input. The only way you could get that effect would be if you lock-stepped input, which I've never seen done before (RTS games don't even do that, as far as I know).

I use the server-client model. It allows me to get input from clients, update physics, then build individual packets with full/delta states for each client, such that each packet is small and easy to handle for even the lowest-bandwidth players. The added latency you mention isn't noticeable. You might want to make some measurements and see if it's worth it. I'd like to see a test with 8 players where the one-way latency is recorded between each and every client, then compared to the RTT in the server-client model.
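For illustration, a minimal sketch of that full/delta state idea (Python; the structure is hypothetical, not this poster's actual code): send a full snapshot only when needed, otherwise send just the entities whose state changed since the last snapshot the client acknowledged:

def build_packet(current, last_acked):
    """current / last_acked: dicts of entity_id -> state tuple."""
    if last_acked is None:
        return ("full", current)                 # new or desynced client
    changed = {eid: s for eid, s in current.items()
               if last_acked.get(eid) != s}      # only entities that changed
    removed = [eid for eid in last_acked if eid not in current]
    return ("delta", changed, removed)           # usually far smaller than full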

//edit (P2P between clients would never really work for me anyway, with 100 clients all viewing each other. Not sure why I brought it up.)

//edit might as well reply to this:
Quote:
Original post by stonemetal
Quote:
Original post by gbLinux

nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it, just remember that clients would NOT need to do the same work as the server - much less, indeed - and it would be SIMULTANEOUS, parallel.

How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it's OK not to have one, since you obviously know everyone involved.
A group of P2P clients can vote on certain state changes. End-game results can be one of them. Imagine you had a lobby game that tracked wins/losses: unless a lot of players are cheating, if all the clients at the end tell the server who won, chances are you will get the correct outcome. You did bring up another vulnerability, though. Implementing security is a lot of work.



[Edited by - Sirisian on August 28, 2009 11:31:19 AM]



