Broadcasting (Multicasting) based multiplayer game, LAN/WAN?

Started by gbLinux
47 comments, last by hplus0603 14 years, 7 months ago
hi, my experience is in physics and graphics programming, and i've always stayed ignorant of networking, but recently i decided to write my first WWW/LAN multiplayer code. so i'm in the process of figuring out how things work today, but also how they did it in the past... what the mistakes were and why the current most popular architecture is the way it is.

for example, i read yesterday that the game DOOM used broadcasting, which caused the LAN to be overloaded since all the computers on the network received and processed the packets regardless of whether they were playing the game or just part of the network, and so sysadmins banned DOOM. but how would that apply on today's internet? can we not utilise the benefits of multicasting now, with all the firewalls and such? these 'unexpected' packets would be discarded by all/most of today's computers, unless they intentionally open ports to grab those packets and play the game. so the problem with DOOM would not be the same today, or on the internet, as it was on a LAN, right?

in light of that, i have this idea, and i could not find any documentation about it, so can someone educate me about the most recent, most popular ways of doing things, correct my understanding, and perhaps explain why there are no games based on a broadcasting/multicasting architecture - or if there are, please point me to some links so i can read more about them. now, the way i see it...

CURRENT, MOST POPULAR ONLINE MULTIPLAYER ARCHITECTURE:
- a central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, i.e. new positions of each entity, velocities, orientation and such, which each client then renders, perhaps using some extrapolation to fill in the gaps from, say, a network update rate of 20Hz to their screen refresh rate of 60 FPS.

now, this would mean the game can actually only run as fast as the slowest connected client can communicate with the server, right? which in the best possible case is about 10-20 FPS of real server-computed data and input samples, considering the round-trip lag... to put it simply, games on the internet today run at only about 10 FPS regardless of how many FPS your client renders; those extra frames are only "smooth-candy", and even worse, they may themselves be the source of another set of visual artifacts, since they can easily mismatch the server's next update. is this the correct state of affairs in networking and multiplayer games, as of 2009?

NOW, THE IDEA:
- how about every client broadcasts its input to every other client, including the central server, instead of just talking to the server back and forth? without the return trip this would immediately speed things up about 2x, right? if this is not a terrible idea for some reason, then i guess the only obstacle would be security and the fear of users cheating and hacking their client software. but wait, we just got rid of the need for a central simulation server, so we could use it instead to police the clients full-time. i'm not sure about the ways players can cheat, or why, but if you invest all the time and resources the server would have used to compute all the physics, trajectories, collisions and whatnot, then it seems possible to use that time to make the server check all the clients and find a way to identify non-human-controlled input. this should not be more complicated for the server than the original task of running the complete game physics for all the clients, right?

so basically, the game would be played on client computers, every client calculating their own physics and only reporting their input to every other client with only ONE broadcast call, while simultaneously receiving the same info from all the other clients and simply rendering them without processing. this way the server would have all the time in the world to go about its gestapo business policing the clients, and the average update round-trip would be cut in half or so. how does that sound?

[Edited by - gbLinux on August 27, 2009 9:16:24 PM]
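(to make that concrete, here is a rough python sketch of what a single LAN broadcast of one input frame could look like - the port number and payload are made-up placeholders, and 255.255.255.255 only ever reaches the local subnet:)

import socket

BCAST_ADDR = ("255.255.255.255", 27960)   # hypothetical game port; limited-broadcast address

# sender: one sendto() delivers this frame's input to every machine on the subnet
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
tx.sendto(b"tick:1042;keys:W+MOUSE1", BCAST_ADDR)

# receiver: every interested peer binds the same port and sees every other peer's input
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", 27960))
data, sender = rx.recvfrom(1024)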
Speed of light is the ultimate limiting factor. Even without delays from switches, round-trip latency on an EU-US connection is ~50-100ms (do the math), 150+ in reality.
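Doing the math with rough, assumed round numbers (fiber path length and propagation speed are approximations):

path_km = 7000                  # assumed EU to US east-coast fiber distance
light_in_fiber_km_s = 200_000   # roughly 2/3 the speed of light in vacuum
one_way_ms = path_km / light_in_fiber_km_s * 1000
print(2 * one_way_ms)           # ~70 ms round trip before a single router adds any delay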

DOOM's networked physics model does not tolerate such high latency, or even half of it.

Broadcast over WAN can be, and is, implemented using P2P. The big problem is practical - broken routers, firewalls, NAT, asymmetric connections, ...

FPS (more accurately, the physics update rate) is not limited to 10Hz. There is no reason why it couldn't run at 60 or 100Hz over WAN. The limiting factor is each peer's bandwidth; many connections cannot support such rates. Broadcast could help with that, but nobody has even the slightest idea how such routing could be performed efficiently at internet scale. Multicast never took off for this very reason.

The high-latency problem has also been solved for many tasks, but a much larger class of problems remains unsolved in the presence of latency (think of a stack of bricks with 10 users manipulating one of the bricks at the same time). So as long as latency exists, these problems will either be approximations, or will always be delayed.

Broadcast (as a routing mechanism) has less impact on this. The key difference between WAN and LAN is latency: on a LAN it's effectively zero; over a WAN it ranges from tens up to thousands of milliseconds. That is what causes the difficulties. The factor of two added by a server does increase it, but the server also solves many other problems, which is why MMOs prefer it.

Quote:so basically, the game would be played on client computers, every client calculating their own physics and only reporting their input to every other client with only ONE broadcast call, while simultaneously receiving the same info from all the other clients and simply rendering them without processing.


This is actually how it's done by many multiplayer games. The seminal article about a very reliable, very efficient mechanism for achieving this is the one on Age of Empires networking, which was proven in practice a decade or so ago.

But again, most multiplayer games choose a server-based design for practical reasons, yet still do most of the physics on the client.
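A toy, single-process sketch of that lockstep idea (not the actual Age of Empires code - names and structure are invented for illustration): each peer applies the same inputs in the same order to its own deterministic simulation, so only inputs ever cross the wire.

# toy deterministic-lockstep demo: each "peer" simulates the whole world locally
# and only ever shares its input for a turn

def step(state, inputs):
    # deterministic rule: same inputs in the same order -> same state on every peer
    for pid in sorted(inputs):
        state[pid] = state.get(pid, 0) + inputs[pid]   # e.g. move player pid by its input
    return state

def run_lockstep(turn_inputs):
    # turn_inputs: one {player_id: input} dict per turn, as every peer would receive them
    peers = {pid: {} for pid in turn_inputs[0]}        # every peer starts from the same state
    for inputs in turn_inputs:
        for pid in peers:                              # each peer applies the *same* inputs
            peers[pid] = step(peers[pid], inputs)
    return peers

history = [{0: 1, 1: 2}, {0: 1, 1: 0}, {0: 3, 1: 1}]   # two players, three turns of input
print(run_lockstep(history))    # both peers end with the identical world state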
hi, thanks for that. i agree.

basically, i was reading about counter-strike and quake servers, where the default update rate is set to about 20Hz. clients can request more, and some can get it, but then what? not only is that not fair, it also does not make for a realistic simulation, relatively speaking... players close to the server with a fast connection could then sample time much faster and therefore have a super-perception and super-reaction advantage compared to other players. but even worse, when you try to animate and synchronise all that across all the clients, you get all kinds of space and time distortion, like warping and players being in more than one place at the same time, being shot dead yet being alive... like in that movie where people lived in a computer-generated world.


Quote:
This is actually how it's done by many multiplayer games. The seminal article about a very reliable, very efficient mechanism for achieving this is the one on Age of Empires networking, which was proven in practice a decade or so ago.

But again, most multiplayer games choose a server-based design for practical reasons, yet still do most of the physics on the client.


are you saying those games do most of the physics on the clients, or are you saying they actually use broadcasting where all the clients talk to every other client as well? can you give some links?

was i correct, then, that broadcasting and one-to-many communication among all the clients can actually cut the update delay in half, given the same conditions... and if so, then what could be the advantages and practical reasons to still run server-based games, if client-to-client broadcasting is that much faster? pros, cons?


thanks

[Edited by - gbLinux on August 27, 2009 7:20:12 PM]
Server-based FPS run the simulation in the past. They collect actions for 100ms (or so), then resolve what happened and let the clients know. Clients display actions immediately, but only change the relevant state after the server confirms it. On the client, when you shoot someone, blood appears immediately - but they only lose health after the server confirms the hit.
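A rough sketch of that "show it immediately, commit it on confirmation" pattern - the class and message fields here are invented for illustration, not taken from any particular engine:

class Client:
    def __init__(self):
        self.pending = {}    # shot_id -> target, awaiting the server's verdict
        self.health = {}     # authoritative health values, only ever set by the server
        self.next_id = 0

    def fire_at(self, target, send_to_server, play_blood_effect):
        shot_id = self.next_id; self.next_id += 1
        play_blood_effect(target)                          # cosmetic feedback, zero latency
        self.pending[shot_id] = target                     # remember what we claimed happened
        send_to_server({"shot": shot_id, "target": target})
        return shot_id

    def on_server_update(self, msg):
        # only the server's resolution changes real game state (health, kills, ...)
        target = self.pending.pop(msg["shot"], None)
        if target is not None and msg["hit"]:
            self.health[target] = msg["health"]            # server-authoritative value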


As far as broadcasting goes - it's not latency that it solves. The factor of 2 is a one-time, constant-factor improvement. It pales in comparison to everything else.

Broadcast on a LAN saves bandwidth. One packet from each peer is delivered to all, regardless of whether there are 2 or 200 of them. But it also delivers this packet to all, regardless of whether they are interested or not.

On a WAN, this means delivering the packet to all IPs, and routing it to every computer in every private network. WoW has 10 million players; say each of them sends ten packets per second, and each packet is 10 bytes long. That is ten gigabit connections, fully saturated - delivered to each and every computer on the internet. The backbone would probably hold, but the last mile has no hope. And that is just WoW.
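Spelling out that back-of-the-envelope number (header overhead pushes it even higher):

players         = 10_000_000
packets_per_sec = 10
payload_bytes   = 10
header_bytes    = 28   # minimum IPv4 + UDP header per packet

payload_gbit = players * packets_per_sec * payload_bytes * 8 / 1e9
on_wire_gbit = players * packets_per_sec * (payload_bytes + header_bytes) * 8 / 1e9
print(payload_gbit, on_wire_gbit)   # ~8 Gbit/s of payload, ~30 Gbit/s on the wire -
# and under true broadcast every host would have to receive all of it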


Multicast solves the design problem, but not the technical one. It allows the creation of limited interest groups (a virtual LAN with dynamic subscriptions), but there is currently no directly usable support for it on the WAN, simply due to the complexity of routing.
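For reference, this is roughly what joining an IP multicast group looks like on a LAN that supports it (the group address and port below are made up; on the open internet, routers simply won't forward it):

import socket, struct

GROUP, PORT = "239.0.0.42", 30001   # hypothetical administratively-scoped group and port

# receiver: subscribe to the group, then read whatever any member sends
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# sender: anyone can send to the group; TTL=1 keeps it on the local network
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.sendto(b"player input frame", (GROUP, PORT))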
ok, thanks, i get you, but i'd like more details, so please let me ask more questions...

Quote:
As far as broadcasting goes - it's not latency that it solves. The factor of 2 is a one-time, constant-factor improvement. It pales in comparison to everything else.

i really cannot see anything that compares with a 2x speed increase. i have always seen speed as a paramount factor when designing any component of any real-time game, and online games, since they struggle with time-lag to begin with, should be even more concerned with any speed gain they can get in network communication. there is no narrower bottleneck than connection speed in online multiplayer games, right?

Quote:
Broadcast on a LAN saves bandwidth. One packet from each peer is delivered to all, regardless of whether there are 2 or 200 of them. But it also delivers this packet to all, regardless of whether they are interested or not.

ok, my knowledge about the workings of all that is minimal, so i still have my original question: would internet routers, firewalls and closed ports not automatically discard such packets? i thought there must at least be some kind of packet that automatically gets blocked and dropped, and so we could design the packets in such a way that they are 'only received/processed by active listeners'.

Quote:
On a WAN, this means delivering the packet to all IPs, and routing it to every computer in every private network. WoW has 10 million players; say each of them sends ten packets per second, and each packet is 10 bytes long. That is ten gigabit connections, fully saturated - delivered to each and every computer on the internet. The backbone would probably hold, but the last mile has no hope. And that is just WoW.

i really cannot visualise how much impact the sheer amount of increased information on the physical network lines can really have - is that not just like everyone talking faster over the phone? that would not overload the phone lines, or would it? please explain. what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.

Quote:
Multicast solves the design problem, but not the technical one. It allows the creation of limited interest groups (a virtual LAN with dynamic subscriptions), but there is currently no directly usable support for it on the WAN, simply due to the complexity of routing.

ok, that's it. i think that fits best with what i had in mind... can you/anyone just explain what kind of support is needed, and what the complexities of routing it are?

[Edited by - gbLinux on August 27, 2009 9:00:58 PM]
Quote:Original post by gbLinux
what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.
If you talk too fast on the telephone, the person on the other end will either ignore you or hang up on you. Pretty much the same thing happens on the internet - routers mostly just drop broadcast/multicast packets that are intended to propagate to the internet, so at most you can flood your own LAN. If you customise your router to pass the broadcast packets onwards, chances are the next-level router (at your local ISP) will drop them anyway...

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

The original DOOM used IPX as its networking protocol, which is a LAN-only protocol - it doesn't work over the internet (WAN) at all. IP does have the concept of broadcasting, but it works only over the local subnet - routers cannot pass those packets up-line (i.e. to the internet) - it simply doesn't work.

That means if you wanted to do what you're suggesting, you'd basically have to build a network the way P2P networks (Shareaza and friends) work - each client maintains a connection to every other client. That means, to send an outbound update, you must send one individual packet to each client.

So if your game requires 10Kb/s of bandwidth per player, that means if you've got 32 players, that's 320Kb/s of uplink - and many "broadband" plans don't support that much upstream bandwidth.
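In other words, something like this (the addresses, port and payload are made up; the point is the per-peer loop and the linear bandwidth cost):

import socket

peers = [("203.0.113.10", 27016), ("203.0.113.11", 27016)]   # example peer addresses from a lobby
payload = b"frame:1042;input:forward+fire"                   # ~30 bytes of input state

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for addr in peers:              # no WAN broadcast: the same datagram is unicast to every peer
    sock.sendto(payload, addr)

ticks_per_second = 20
bytes_per_tick = (len(payload) + 28) * len(peers)            # payload + ~28 bytes of UDP/IP header each
print(bytes_per_tick * ticks_per_second * 8, "bits/s upstream")   # grows linearly with player count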
Quote:Original post by swiftcoder
Quote:Original post by gbLinux
what kind of effect would the WWW experience if half the computers on the internet started broadcasting? would the internet slow down? my first guess is that it would not.
If you talk too fast on the telephone, the person on the other end will either ignore you or hang up on you. Pretty much the same thing happens on the internet - routers mostly just drop broadcast/multicast packets that are intended to propagate to the internet, so at most you can flood your own LAN. If you customise your router to pass the broadcast packets onwards, chances are the next-level router (at your local ISP) will drop them anyway...


hehe, i'll talk slower then. but seriously though, are you saying that this has actually never been tried, or that it simply would not work, hence it was never tried? ...but then, how does P2P work? how does all the real-time streaming audio and video on the www work? aren't there radio and tv shows already broadcasting in real time over the WWW, or something like that? i don't quite get it, i'm afraid. am i not being clear in my question? sorry if i'm missing some basics here.


basically, this is the question:
- has anyone ever tried some form of real-time multiplayer interaction over the WWW based on 'packet broadcasting', one-to-many from every client to every other client?

[Edited by - gbLinux on August 27, 2009 10:05:44 PM]
Quote:Original post by Codeka
The original DOOM used IPX as its networking protocol, which is a LAN-only protocol - it doesn't work over the internet (WAN) at all. IP does have the concept of broadcasting, but it works only over the local subnet - routers cannot pass those packets up-line (i.e. to the internet) - it simply doesn't work.

That means if you wanted to do what you're suggesting, you'd basically have to build a network the way P2P networks (Shareaza and friends) work - each client maintains a connection to every other client. That means, to send an outbound update, you must send one individual packet to each client.

So if your game requires 10Kb/s of bandwidth per player, that means if you've got 32 players, that's 320Kb/s of uplink - and many "broadband" plans don't support that much upstream bandwidth.


ok, thanks.. i'm sorry if it took some time but i believe i understand now.

basically, it all comes down to the ISPs, what kind of connections are available, and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with end users, not the game developers.

everyone upgrades their computers with the latest video cards so they can smoothly render all the graphics; the same should go for network cards and ISPs. there should be separate virtual broadcasting networks for different connection speeds, just like there should be different servers working at different update rates, serving users with similar connection speeds.


so the only question that remains: has this been tried, and how much faster can 'P2P online multiplayer' be, if at all?
Quote:i think that's where this problem should be - with end users, not the game developers.

everyone upgrades their computers with the latest video cards so they can smoothly render all the graphics


It's clear you're not involved in the business side of game development, given that you're making two big game-business marketing mistakes in those two sentences!

First, it's not up to end users to upgrade just to play your game -- it's up to you to serve as many potential customers as possible, if you want to actually recapture the cost of making your game.

Second, only a fraction of PC owners (1%? 2%?) upgrade their graphics cards to "smoothly render all the graphics." If you want to sell to a small, discerning set of hobbyists, then feel free, but the problem there is that those guys already have several very good-looking games to choose from, and for you to take a slice of that pie requires a lot of investment in art (many millions of dollars), which you're not at all certain to make back.

Third, >50% of all "PCs" sold are laptops, which can't upgrade graphics at all. >50% of all graphics sold are Intel integrated, which generally isn't upgradeable. And, in fact, the fastest growing market is netbooks, which, if you're lucky, come with Windows 7, an Intel GMA 950, 1 GB of RAM and a hyper-threaded, in-order Atom processor. And a 1024x640 screen.

If you have the resources, and want to build a game that requires users to obtain the best possible service and hardware (even though most people in the world can't actually get a very fast Internet connection, even if they wanted to), then you're welcome to do it. I hope you enjoy it! You will not, however, make your money back. But it's not always about the money.

If you want to spray packets to each player in your game using UDP, from each other player, then that's totally doable (assuming the game's data rate is no bigger than the available bandwidth). However, when we have measured these things, it turns out that most of the latency is in the "last mile," plus the "long haul." Thus, going the "last mile" out from you, then over the back-haul to me, and then the "last mile" back to me, is not much faster than going "last mile" to back-haul to a well-connected server, to back-haul to my "last mile." The only case where you will measure significantly improved latency is when you and I are on one coast (or continent), and the server is on another. The reason for this is that co-lo centers have no "last mile" to worry about; they are plumbed straight into the peering fabric of the back-haul providers.
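To put illustrative (made-up, but plausible) numbers on that:

last_mile = 20    # assumed one-way access-link delay at each end, ms
backhaul  = 10    # assumed one-way well-peered core delay, ms

p2p_rtt    = 2 * (last_mile + backhaul + last_mile)                  # you -> me -> you
server_rtt = 2 * (last_mile + backhaul + 0 + backhaul + last_mile)   # co-lo server adds no last mile
print(p2p_rtt, server_rtt)    # 100 vs 120 ms - the extra server hop is not the dominant cost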

enum Bool { True, False, FileNotFound };
