Broadcasting (Multicasting) based multiplayer game, LAN/WAN?

Started by
47 comments, last by hplus0603 14 years, 8 months ago
Quote:Original post by gbLinux
basically, it all comes down to ISPs, what kind of connections are available and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with end users, not the game developers.
It doesn't all come down to cost; network connections have fundamental limits on how much data per second they can send/receive. Dial-up connections typically max out at 56Kb/s, ADSL maxes out around 20-30Mb/s, a LAN can often do up to 1Gb/s, and so on.

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8).

Also, streaming is a totally different ballgame from gaming. When you're streaming video, the most important factor is bandwidth. Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain, and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end, and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is 133ms*. You cannot improve that (without violating the laws of physics).

So the problem with your solution is that you're trying to make an improvement which relies on very low latency to succeed. That means a person in Sydney could not play your game with a person in LA without being adversely affected - and no amount of upgrading the hardware is going to fix their problem.

You're better off living with the limitations of the network and designing your network protocol around the fact that latency will inevitably exist.

* I might've got the math wrong there, but it's a good enough approximation :-)
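The speed-of-light numbers above are easy to check. A minimal sketch (the ~12,000 km Sydney-to-LA great-circle distance and the fibre refractive index are assumptions for illustration, not measurements):

```python
# Lower bound on one-way latency over a given distance.
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47             # typical refractive index of optical fibre (assumed)

def min_latency_ms(distance_km, refractive_index=1.0):
    """Physical lower bound on one-way latency, in milliseconds."""
    return distance_km / (C_VACUUM_KM_S / refractive_index) * 1000

sydney_la_km = 12_000          # rough great-circle distance (assumed)
vacuum_ms = min_latency_ms(sydney_la_km)               # ~40 ms in vacuum
fibre_ms = min_latency_ms(sydney_la_km, FIBRE_INDEX)   # ~59 ms in fibre
```

In fibre the one-way floor comes out near 60 ms, so the ~66 ms figure above is in the right ballpark once routing detours are included; real routes are never straight lines.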
Quote:
It's clear you're not involved in the business side of game development, given that you're making two big game-business marketing mistakes in these two sentences!

First, it's not up to end users to upgrade just to play your game -- it's up to you to serve as many customers as possible, if you want to actually recapture the cost of making your game.

Second, only a fraction of PC owners (1%? 2%?) upgrade their graphics cards to "smoothly render all the graphics." If you want to sell to a small, discerning set of hobbyists, then feel free, but the problem there is that those guys already have several very good-looking games to choose from, and for you to take a slice of that pie requires a lot of investment in art (many millions of dollars), which you're not at all certain to make back.

Third, >50% of all "PCs" sold are laptops, which can't upgrade graphics at all. >50% of all graphics sold are Intel integrated, which generally isn't upgradeable. And, in fact, the fastest growing market is netbooks, which, if you're lucky, come with Windows 7, an Intel GMA 950, 1 GB of RAM and a hyper-threaded, in-order Atom processor. And a 1024x640 screen.


i disagree completely, but not passionately... so, i'd rather not talk about anything like that, since the solution i'm suggesting might be twice as fast, and therefore actually more suitable for low-end clients.

Quote:
If you have the resources, and want to build a game to require the users to obtain the best possible service and hardware (even though most people in the world can't actually get a very fast Internet connection, even if they wanted to), then you're welcome to do it. I hope you enjoy it! You will not, however, make your money back. But it's not always about the money.

i have no idea what you're going on about. i work on embedded devices and i said speed is the paramount factor for me, which means absolute optimisation and the lowest possible hardware requirements. i don't want to build a game, ask for blessings or talk about marketing and statistics. i have a simple question and i'd like an answer to it.

the question is:
- has "P2P multiplayer" architecture ever been tried over WWW or anyhow else, at all?


Quote:
If you want to spray packets to each player in your game using UDP, from each other player, then that's totally doable (assuming the game is no bigger than available bandwidth). However, when we have measured these things, it turns out that most of the latency is in the "last mile," plus the "long haul." Thus, going the "last mile" out from you, and then back-haul to me, and then the "last mile" back to me, is not much faster than going "last mile" to back-haul to a well connected server, to back-haul to my "last mile." The only case where you will measure significantly improved latency is when you and I are on one coast (or continent), and the server is on another. The reason for this is that co-lo centers have no "last mile" to worry about; they are plumbed straight into the peering fabric of the back-haul providers.


- what do you mean by "game is no bigger than available bandwidth"?
- what "things" did you measure, if neither you nor anyone else has ever tried what i'm talking about?
- what is "last mile"? what is "long haul"? can that be communicated in plain english?

imagine you and 10 of your friends in the same room, or in the same city, playing the game over the internet while the server is in the next city. similar to your example, this easily illustrates just how much faster P2P can be, and that's not mentioning the time the server loses calculating all the physics for all the clients. my guess is that P2P online multiplayer games could be, ON AVERAGE, even more than twice as fast as server-based ones. it of course depends on location distribution and individual connections, but it seems only extreme cases of player/server distribution would benefit more from the server-based approach, if any.

i don't really get what you're saying there... clients should be able to communicate with each other as much and as fast as they could, on average, with the server. all the time the server would otherwise spend before being able to send packets is pure gain here, which i believe translates into more than just better latency - but i'd like some experimental data.
Quote:Original post by Codeka
Quote:Original post by gbLinux
basically, it all comes down to ISPs, what kind of connections are available and how much you choose to pay for your plan. that sounds great to me, i think that's where this problem should be - with end users, not the game developers.
It doesn't all come down to cost; network connections have fundamental limits on how much data per second they can send/receive. Dial-up connections typically max out at 56Kb/s, ADSL maxes out around 20-30Mb/s, a LAN can often do up to 1Gb/s, and so on.

Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8).

Also, streaming is a totally different ballgame from gaming. When you're streaming video, the most important factor is bandwidth. Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain, and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end, and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is 133ms*. You cannot improve that (without violating the laws of physics).

So the problem with your solution is that you're trying to make an improvement which relies on very low latency to succeed. That means a person in Sydney could not play your game with a person in LA without being adversely affected - and no amount of upgrading the hardware is going to fix their problem.

You're better off living with the limitations of the network and designing your network protocol around the fact that latency will inevitably exist.

* I might've got the math wrong there, but it's a good enough approximation :-)


i agree. but given the same circumstances, i.e. the same spatial distribution of server and clients, i'm pretty sure that on average the same information can be passed AND calculated more quickly among the clients than through a single server, and therefore more times per second.

person A is in Sydney
person B is in New York
person C is in Tokyo
server S is in Paris

*** SERVER-BASED SIMULATION FRAME
simultaneous {
    info travels A to S
    info travels B to S
    info travels C to S
}
server physics calculation... ms?
simultaneous {
    info travels S to A
    info travels S to B
    info travels S to C
}

client graphics render A
client graphics render B
client graphics render C

how many ms and how many kb did we use to accomplish this?


*** P2P-BASED SIMULATION FRAME
simultaneous {
    client A physics calculation...
    info travels A to B,C...
    client B physics calculation...
    info travels B to A,C...
    client C physics calculation...
    info travels C to A,B...
}

client graphics render A
client graphics render B
client graphics render C

how many ms and how many kb did we use to accomplish this?
how long is the route the packets traversed in each case? how many packets - more, fewer, the same?


this is a kind of geometrical problem, and i have no idea how to solve it other than plugging in some real numbers and comparing, which i find bothersome to do... so can anyone do the math and explain whether there is any latency gain/loss one way or another? also, i think the visual artifacts produced by unavoidable latency in either case would still be better distributed without the server.
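For what it's worth, the geometry can be plugged in numerically. A sketch with made-up 2D coordinates (straight-line distances only; real routing and per-hop delays are ignored):

```python
# Worst-case one-way information path per simulation frame, P2P mesh vs star.
from math import dist

clients = {"A": (0, 0), "B": (100, 0), "C": (50, 90)}   # hypothetical positions
server = (50, 30)                                        # hypothetical position

# P2P frame: each client hears from every other client directly.
p2p_worst = max(dist(clients[a], clients[b])
                for a in clients for b in clients if a != b)

# Server frame: input goes client -> server, state goes server -> client,
# so the worst path is the longest inbound leg plus the longest outbound leg.
longest_leg = max(dist(c, server) for c in clients.values())
server_worst = 2 * longest_leg

print(round(p2p_worst), round(server_worst))   # 103 120
```

With these particular made-up positions the relayed path is longer, but the measured reality (most delay sitting in the "last mile", as noted above) is exactly what this toy model leaves out.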

and most interestingly, since every two players would have the same connection speed in relation to one another, no one player would have an advantage over another - unlike the server-based approach, where, as discussed above when i mentioned fast time sampling, players with faster connections get a "super-perception", "super-reaction" advantage. or so it would seem. is any experimental data available on the subject?


Quote:
Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that, perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?

[Edited by - gbLinux on August 28, 2009 3:46:44 AM]
Quote:Original post by gbLinux
Quote:
Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that, perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?
Peer-to-peer is the correct term, and it's fairly common with real-time strategy games (e.g. StarCraft) where the number of simultaneous players is small and the bandwidth requirements aren't large (i.e. you don't need to send updates 15-20 times per second just to make it look "OK").
Quote:Original post by Codeka
Quote:Original post by gbLinux
Quote:
Peer-to-peer works perfectly fine when the number of peers is relatively small (for example, most RTS games run peer-to-peer, because the number of peers is usually < 8)

wait a second, so some games actually do work like that, perhaps not with broadcasting, but that's less important now. so, can you please let me know which games have all the clients talking to all the other clients? i don't think the phrase >'P2P online multiplayer game'< actually exists, so what in the world is the proper name for that kind of architecture, as opposed to server-based?
Peer-to-peer is the correct term, and it's fairly common with real-time strategy games (e.g. StarCraft) where the number of simultaneous players is small and the bandwidth requirements aren't large (i.e. you don't need to send updates 15-20 times per second just to make it look "OK").


cool. now, would any of those games actually happen to use broadcasting as a way to send outgoing packets, or do they simply have a for/next loop where packets are sent to selected/connected addresses?

so, obviously this p2p multiplayer stuff works and it's nothing new. then there must be some way to actually compare and deduce, if not simply measure, the two, and see how they perform side by side given the same task and the same client distribution.

you're saying it as if p2p multiplayer games inherently perform worse than server-based ones for some reason, since you suggest it's used where there is no need for high-frequency updates.

but i'm suggesting the opposite: 'p2p multiplayer games' would scale much better than server-based ones, could support more players and could actually calculate and transmit the same information faster, allowing for more rapid updates.


the above geometrical problem is all it takes to get a rough picture. just plug in some approximate numbers, or draw the triangle on a piece of paper and measure the distances one way and the other, p2p vs central server. compare the two, subtract the time the server needs to calculate the physics, consider how that scales with the number of players, and tell me what you think. how about it?


can someone point to some article or paper about all this? any "developer notes" from starcraft team or similar info?

ok, here it is.

          A
          s
   B             C


let's consider the best-case scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect... all the sides of this triangle (AB, AC, BC) are 30km, and the distance from each client to the server (As, Bs, Cs) is about 17km.


*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others


*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server took to calculate physics, which can be what? 50, 100, 500ms?
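The ~17km figure can be checked: for an equilateral triangle, the distance from a vertex to the centroid is side/√3. A quick sketch using the 30km side from the example:

```python
from math import sqrt

side_km = 30                       # triangle side from the example above
centre_km = side_km / sqrt(3)      # vertex-to-centroid distance, ~17.3 km

p2p_path_km = side_km              # client -> client, direct
server_path_km = 2 * centre_km     # client -> server -> client, ~34.6 km
```

So even in this best case for the server, the one-way relay path is about 15% longer than the direct peer link, before adding any server processing time.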


now let's consider how the time the server needs to calculate physics for all the clients scales with the number of clients... there will always be a limit on the processing speed of the server, and no matter how fast the calculation is, it would still take 'some' time, if for nothing else then just to loop through all the clients and send packets to each... and then the first client on the list has an advantage over the last, and the player with a faster connection has an advantage over the one with a slow connection, or merely one further away from the server.

this is not the case with P2P. broadcasting would work as an asynchronous update, which means visual artifacts would be distributed better, since there would be no waiting and no need for synchronisation. stuff would simply get updated as it arrives, regardless of the actual display frame-rate and independent of the "slowest connection" syndrome. the collective experience would generally suffer only when the slow-connection player is on screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.


P2P wins big time, if you ask me, so what in the world is going on here? where are the papers and articles about this "P2P ONLINE MULTIPLAYER", and why is it not used more, or for high-frequency update games such as first-person shooters? why starcraft, why not quake? though i think there is not much difference as long as the action is in real time... but, but, why a server in the first place? who ever came up with the server concept, and why?


[Edited by - gbLinux on August 28, 2009 5:43:11 AM]
Quote:Original post by gbLinux

let's consider the best-case scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect...

Ah, see, here's your problem right here.

All connections are not equal. Backbone connections are fast, low latency. The one between your computer and your ISP's gateway is slow, oversubscribed, perhaps throttled, and subject to QoS, bandwidth shaping or buffering. People using wireless cards also experience higher latency.

Quote:*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others

They also need to calculate physics.


Quote:*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server took to calculate physics, which can be what? 50, 100, 500ms?

That time is not part of latency, since communication is asynchronous.

As said, the server does not update physics immediately, to allow for fair play and to tolerate players with high latency. The server is running 100ms in the past.

Quote:there will always be a limit on the processing speed of the server, and no matter how fast the calculation is, it would still take 'some' time,

Yes - and the same goes for clients. In the server-based model, the server will be a 16-processor, server-grade piece of hardware. In the P2P model, each peer will be a 5-year-old notebook, running in power-saving mode (not always, but too often), with 3% of the server's processing power - but needing to do the same work as the server!

Quote:if for nothing else then just to loop through all the clients and send packets to each... and then the first client on the list has an advantage over the last, and the player with a faster connection has an advantage over the one with a slow connection, or merely one further away from the server.

Looping over the clients takes practically zero time. And again, the server simulates things in the past, so a player with high latency is not disadvantaged.

All current server-based models are 'fair'. They are designed to take into account latency (whether 1ms or 100ms) and level the playing field. The different connection argument hasn't been valid since who knows when.

Quote:stuff would simply get updated as it arrives, regardless of the actual display frame-rate and independent of the "slowest connection" syndrome,


"And then, magic happens..."

There is no slowest-connection syndrome. The slowest client affecting all others is an artefact of network models built for LAN - with no latency compensation. It has not been an issue for over a decade.

"stuff simply gets updated" is an unsolved problem right now. For example, very knowledgeable people are still exploring how to make it happen.

Without listing all the various methodologies:
- P2P can work without physics at all, or assume a non-deterministic model, synchronizing as needed, perhaps incurring high latency and stalls
- Or, it can use a fully deterministic model, and accept that the slowest client will determine the update rate (the way Age of Empires worked)
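The trade-off in the second bullet can be sketched in a few lines (the numbers are hypothetical):

```python
# In deterministic lockstep, tick N can only be simulated once inputs for
# tick N have arrived from every peer, so the slowest link sets the pace.
def lockstep_tick_ms(peer_rtts_ms, sim_cost_ms):
    """Effective interval between simulated ticks."""
    return max(peer_rtts_ms) + sim_cost_ms

# Three peers, one on a slow link: everyone runs at the slow peer's pace.
print(lockstep_tick_ms([30, 45, 180], 5))   # 185
```

This is the concrete sense in which the slowest client determines the update rate in a deterministic P2P model.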


Quote:the collective experience would generally suffer only when the slow-connection player is on screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In the P2P model, everyone would need to wait for the slowest.
In the server model, glitches are used to guarantee fair play, at the cost of making the slow player lag.

Quote:why is it not used more or for the high frequency update games such as first person shooters?

It's a conspiracy, lead by IPAP (Internet posters against P2P).

Quote:why starcraft
Licensing and control.

Quote:why not quake?
Carmack is pragmatic and gets things done by choosing the simplest, most robust solution.


Quote:but, but, why a server in the first place? who ever came up with the server concept, and why?
A bunch of people who developed both P2P and server-based models for 10 years, and finally concluded that server-based works better.


Look - there are a lot of real world gotchas which have shown that P2P has its own set of problems, some of which are, as of right now, unsolvable, or the solutions are worse than server-based implementations.

The Half-Life and Quake networking models are described in detail in the FAQ, and there are other resources and articles which cover the technical part. In the real world, other issues occur, such as users being behind NAT or corporate firewalls, using shared WiFi, using routers with broken firmware, and so on...

It's not a conspiracy of incompetent developers, but 20 or so years of experience by people who have actually had to ship such titles, and hordes of frustrated help desk workers who had to deal with customers using them.

Broadcast, as it applies to LAN, does not work over WAN. P2P however does, and is sometimes used, except that Real World(tm) issues make it inconvenient and not viable.
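To make that LAN/WAN distinction concrete: IP broadcast only reaches the local network segment, because routers do not forward datagrams addressed to the limited broadcast address. A minimal sketch (the port number is arbitrary):

```python
import socket

# UDP broadcast: every host on the local segment receives the datagram,
# but no router will carry it beyond that segment -- hence LAN-only.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
try:
    sock.sendto(b"state-update", ("255.255.255.255", 5000))
except OSError:
    pass  # no usable interface (e.g. a sandboxed environment)
sock.close()
```

Over the WAN, the equivalent is a plain unicast loop over the known peer addresses, which is what P2P games actually do.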
Quote:
Quote:
let's consider the best-case scenario for the server-based approach, where the server is located right in the middle. say all the connections are the same speed and hypothetically perfect...

Ah, see, here's your problem right here.

All connections are not equal. Backbone connections are fast, low latency. The one between your computer and your ISP's gateway is slow, oversubscribed, perhaps throttled, and subject to QoS, bandwidth shaping or buffering. People using wireless cards also experience higher latency.


ok, connections are not equal, but having a server makes it all worse.

Quote:
Quote:
*** P2P approach
- it takes 30km of packet traversal for each client to have information about all the others

They also need to calculate physics.


each client does it simultaneously, in parallel, and each does only a small portion of what a server would do serially. there is a point up to which a certain lag/latency can be tolerated given the desired update frequency, and that lag should not be caused by your 386 processor... i mean, if your computer can't do the math or render one frame faster than the average network update, then i don't think that computer is suitable for any kind of game, online or not. are you not aware of the advantages of parallel and distributed computing, as with dual processors, the PS3, graphics pipelines or render farms?


Quote:
Quote:*** SERVER approach
- it takes 34km of packet traversal for each client to have information about all the others, plus the time the server took to calculate physics, which can be what? 50, 100, 500ms?

That time is not part of latency, since communication is asynchronous.

As said, server does not update physics immediately to allow for fair-play, and tolerate players with high latency. Server is running 100ms in the past.


that time is very much a part of the time it takes for one complete frame cycle, call it what you want. what do you mean, "does not update physics immediately"? waiting to sync? that's even worse! none of that makes it any better or faster than p2p - only worse. it is simple geometry and mathematics, the distance information has to travel - THE SPEED OF LIGHT - the ultimate limiting factor, remember?

given the same circumstances, the shortest distance will yield the fastest path, isn't that so? what are you trying to conclude anyway - that the server-based approach is better? but why, based on what? you don't seem to have any valid data on how P2P would actually perform, so you can't really compare the two, can you now?


huh, running in the past... all that stuff is just a way to smooth the visual artifacts caused by low network update rates, filling in the missing frames. but then you have to make some room in the time-stepping chronology to correct for possible errors caused by the first attempt at visual correction, or by plain network lag, so you run the server in the past and "adjust" for any mis-synchronisation or wrong extrapolation guess... great stuff, that.


Quote:
Quote:there will always be a limit on the processing speed of the server, and no matter how fast the calculation is, it would still take 'some' time,

Yes - and the same goes for clients. In the server-based model, the server will be a 16-processor, server-grade piece of hardware. In the P2P model, each peer will be a 5-year-old notebook, running in power-saving mode (not always, but too often), with 3% of the server's processing power - but needing to do the same work as the server!


nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it - just remember that clients would NOT need to do the same work as the server, much less indeed, and it would be SIMULTANEOUS, parallel.


Quote:
Quote:the collective experience would generally suffer only when the slow-connection player is on screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In P2P model, everyone would need to wait for slowest.
In server model, glitches are used to guarantee fair play, but causing slow player to lag.


no, why would any peer in p2p have to wait for anything? waiting and syncing are more a part of the server information flow, as you explained above... working in the past, waiting to sync, eh? ...did you just say that glitches are actually good, that they "guarantee fair play"? uh huh.



Quote:
Quote:but, but, why a server in the first place? who ever came up with the server concept, and why?
A bunch of people who developed both P2P and server-based models for 10 years, and finally concluded that server-based works better.


what? how did you come up with that? you're just saying stuff without really knowing anything about it, which is strange by itself... perhaps not, but then please prove me wrong and actually provide some reference so we can confirm your statement, will you?


Quote:
Look - there are a lot of real world gotchas which have shown that P2P has its own set of problems, some of which are, as of right now, unsolvable, or the solutions are worse than server-based implementations.


i've heard that one here before, and i ask again: what? what are the gotchas? who has shown them? when? how? where do you pull all that stuff from - can you reference any of it? some links, articles, papers?


Quote:
It's not a conspiracy of incompetent developers, but 20 or so years of experience by people who have actually had to ship such titles, and hordes of frustrated help desk workers who had to deal with customers using them.


good, then someone must have written something about it, right? it couldn't all be just a product of your imagination, or could it?
Quote:Original post by gbLinux

nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it - just remember that clients would NOT need to do the same work as the server, much less indeed, and it would be SIMULTANEOUS, parallel.


How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it's OK not to have one, since you obviously know everyone involved.
Quote:Original post by gbLinux
Quote:Original post by Antheus
Quote:Original post by gbLinux
the collective experience would generally suffer only when the slow-connection player is on screen, which might pass as a "barely noticeable glitch" compared to the visual absurdities that happen to each and every client with the server-based approach.

In the P2P model, everyone would need to wait for the slowest.
In the server model, glitches are used to guarantee fair play, at the cost of making the slow player lag.


no, why would any peer in p2p have to wait for anything? waiting and syncing are more a part of the server information flow, as you explained above... working in the past, waiting to sync, eh? ...did you just say that glitches are actually good, that they "guarantee fair play"? uh huh.

The waiting part in P2P is more for a lock-step kind of game. I don't see the point of waiting for the slowest player when using P2P - you could set it up so that players just stop sending packets to them. I mean, there's no rule saying you have to keep slow clients in sync.

What did you mean by:
Quote:Original post by gbLinux
visual absurdities that happen to each and every client with the server-based approach

In the server-client model the slowest player never affects the gameplay if the server is authoritative, unless the game is designed to make sure everyone is in sync and not lagging. (I've played a few games that do that).

As for using P2P in a game: it's possible. You might want to try it and see how well it works for you. My major complaint about P2P is that it lets players know each other's IP addresses, which I've never really liked. Also, it doesn't scale really well and gets complex for more than a few players. Actually, my shard servers are set up in a lock-step P2P system to keep their updates in sync. Still haven't done much to see how well it works.
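The scaling complaint can be put in numbers. Per simulation tick, a client-server star moves 2n messages through the hub, while a full mesh needs n(n-1), one from each peer to every other peer:

```python
def messages_per_tick(n):
    """Messages on the wire per tick: client-server star vs full P2P mesh."""
    return {"server": 2 * n, "p2p_mesh": n * (n - 1)}

print(messages_per_tick(8))    # {'server': 16, 'p2p_mesh': 56}
print(messages_per_tick(100))  # {'server': 200, 'p2p_mesh': 9900}
```

Each peer's upstream cost also grows linearly with player count in the mesh, which is part of why P2P shows up mainly in small-player-count genres like RTS.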

Also broadcasting over the internet would be lol with IPv6 :P

So yeah, I'd write up a test program and see if you like how it works. Remember to handle instances where one player is lagging. In a server-client model no one would notice, since the server is broadcasting the state changes and lag only affects the person lagging (assuming this isn't a lock-step model that waits for players). All of a sudden, in P2P you have a client that is sending everyone position updates once a second. Worse yet, imagine if he's telling clients A and B his position at the correct interval, but purposely delaying packets to client C so that he can get a quick kill. :P So many vulnerabilities I don't even know where to start.

Quote:Original post by gbLinux
- the central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, i.e. the new positions of each entity, velocities, orientations and such, which each client then renders, perhaps using some extrapolation to fill in the gaps between, say, a network refresh rate of 20Hz and their screen refresh rate of 60 FPS. now, this would mean that the game can actually only run as fast as the slowest connected client can communicate with the server, right?

That doesn't even make sense. The server is never limited by the slowest connection. It's just getting input and sending out packets. At no point does it "wait for the slowest client". If it doesn't get input, it doesn't get input. The only way you could get that effect would be if you've lock-stepped input, which I've never seen done before (RTS games don't even do that, as far as I know).

I use the server-client model. It allows me to get input from clients, update physics, then build individual packets with full/delta states for each client, such that each packet is small and easy even for the lowest-bandwidth players. The added latency you mention isn't noticeable. You might want to make some measurements and see if it's worth it. I'd like to see a test with 8 players where the one-way latency is recorded between each and every client, then compared to the RTT between server and client in the server-client model.
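The full/delta-state idea mentioned above can be sketched like this (the field names are hypothetical, not from any particular engine):

```python
def delta_state(prev, curr):
    """Return only the fields of curr that changed since prev."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

# Hypothetical entity snapshots: only x moved since the last acknowledged state.
prev = {"x": 10, "y": 5, "hp": 100}
curr = {"x": 12, "y": 5, "hp": 100}
print(delta_state(prev, curr))   # {'x': 12}
```

The server keeps the last state each client acknowledged and sends only the diff, which is what keeps per-client packets small even on slow links.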

//edit (P2P between clients would never really work for me anyway with 100 clients all viewing each other anyway. Not sure why I brought it up.)

//edit might as well reply to this:
Quote:Original post by stonemetal
Quote:Original post by gbLinux

nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it - just remember that clients would NOT need to do the same work as the server, much less indeed, and it would be SIMULTANEOUS, parallel.

How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it's OK not to have one, since you obviously know everyone involved.
A group of P2P clients can vote on certain state changes; the end of a game can be one of them. Imagine you had a lobby game that tracked wins/losses: unless a lot of players are cheating, if all the clients at the end tell the server who won, chances are you will get the correct outcome. You did bring up another vulnerability, though. Implementing security is a lot of work.



[Edited by - Sirisian on August 28, 2009 11:31:19 AM]

