Incoming hilarity: abaraba is back, and this time he's fixated on P2P networking!



#41 Jeonjr   Members   -  Reputation: 122


Posted 05 September 2009 - 10:09 AM

Quote:
Original post by Antheus

I reject your reality, and substitute my own.

Mythbusters :D.

My current reality is that P2P allows for a higher maximum number of players than Server-Client does in the case where we don't use dedicated servers.

players=16 updates per sec=30

p2pclient 15*(28+16)*30=19800
server up 15*(28+(16*15))*30=120600

players=32 updates per sec=30

p2pclient 31*(28+16)*30=40920
server up 31*(28+(16*31))*30=487320


players=64 updates per sec=30

p2pclient 63*(28+16)*30=83160
server up 63*(28+(16*63))*30=1958040


players=128 updates per sec=30

p2pclient 127*(28+16)*30=167640
server up 127*(28+(16*127))*30=7848600

With my current internet connection, 32 players already uses almost all my upload bandwidth; however, if I were to host a server I wouldn't even be able to host a game with 16 players.
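
A minimal sketch of the arithmetic above, assuming (as the post does) a 28-byte UDP/IP header, a 16-byte per-player payload and 30 updates per second; the function names are illustrative only:

# Back-of-the-envelope upload estimate (bytes per second) for a non-dedicated
# host, using the assumptions from the post above: 28 bytes of UDP/IP header
# per packet, 16 bytes of state per player, 30 updates per second.
HEADER, PAYLOAD, RATE = 28, 16, 30

def p2p_upload(players):
    # each peer sends its own state to every other peer
    return (players - 1) * (HEADER + PAYLOAD) * RATE

def hosted_server_upload(players):
    # a player-hosted server relays every other player's state to each client
    return (players - 1) * (HEADER + PAYLOAD * (players - 1)) * RATE

for n in (16, 32, 64, 128):
    print(n, p2p_upload(n), hosted_server_upload(n))
# 16  ->  19800    120600
# 32  ->  40920    487320
# 64  ->  83160   1958040
# 128 -> 167640   7848600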




#42 Azgur   Members   -  Reputation: 325


Posted 05 September 2009 - 11:19 AM

Quote:
Original post by oliii
P2P is a pain in the ass. The extra bandwidth requirement and update rate are trivial problems when compared to host migration, and achieving consistency (with exotic NAT router settings and all), packet forwarding, etc...


This probably is the most important point of this entire discussion.
P2P really is a huge pain due to the very wide use of NAT (and its exotic differences between implementations).

Demigod is actually a great example in this case as the developers have been really open about their network implementation and the many issues they've encountered in using P2P.
Ultimately they ended up creating several workarounds involving proxies (which leads to a client-server architecture between individual peers) to combat this problem when everything else fails.

In the end it did work out for them, but it came at the cost of significant effort.
The reason for using a client/server approach often isn't related to bandwidth at all; security and complexity are huge factors in making these decisions.


Back to abaraba's comeback. I actually remember his initial rampage and it has provided me with plenty of entertainment in the past. I'm glad to see he hasn't lost his ways and still puts up one hell of a show.
I can't help but think he's just faking it. I can only hope anyways.
Remco van Oosterhout, game programmer.
My posts are my own and don't reflect the opinion of my employer.

#43 Gaffer   Members   -  Reputation: 422


Posted 05 September 2009 - 12:16 PM

Quote:
Back to abaraba's comeback. I actually remember his initial rampage and it has provided me with plenty of entertainment in the past. I'm glad to see he hasn't lost his ways and still puts up one hell of a show.
I can't help but think he's just faking it. I can only hope anyways.


Dude, if he's faking it I will personally nominate him for an academy award.

According to Downey Jr. he may have blown his chances though:

Quote:
Stiller: "It's what we do, right?"
Downey: "Everybody knows you never do a full retard."
Stiller: "What do you mean?"
Downey: "Check it out. Dustin Hoffman, 'Rain Man,' look retarded, act retarded, not retarded. Count toothpicks to your cards. Autistic, sure. Not retarded. You know Tom Hanks, 'Forrest Gump.' Slow, yes. Retarded, maybe. Braces on his legs. But he charmed the pants off Nixon and won a ping-pong competition. That ain't retarded. You went full retard, man. Never go full retard."


Zing! :)

#44 Andrew F Marcus   Banned   -  Reputation: 100


Posted 05 September 2009 - 04:51 PM

Quote:
Original post by Jeonjr
My current reality is that P2P allows for a higher maximum number of players than Server-Client does in the case where we don't use dedicated servers.

With my current internet connection, 32 players already uses almost all my upload bandwidth; however, if I were to host a server I wouldn't even be able to host a game with 16 players.


Exactly. If Age of Empires did not use P2P we would never have been able to have 8 players in a game on a 28.8 modem. If there were no P2P there would be no Stickman, a free game with 50 players in one game.



Quote:
Original post by hplus0603
That has been shown wrong by numbers in a previous thread. The overhead of a single UDP packet is 28 bytes. If all you send is client commands, then the commands data packet is on the order of 4-8 bytes. Thus, packet overhead is on the order of 3.5-7x as important as actual data.
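
For clarity, the ratio quoted there is just header size over payload size; a quick check (numbers taken from the quote, nothing measured here):

# Header-to-payload ratio for tiny command packets with a 28-byte UDP/IP header.
for payload in (4, 8):
    print(payload, 28 / payload)   # -> 7.0 and 3.5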


OK, it matters very much. It makes overall bandwidth usage much lower for the C/S model, but P2P can still support more players given the common bandwidth limits, without a dedicated server.


Quote:
Original post by hplus0603
That has not been my experience from actually talking to and in some cases working with those people. They each are very good engineers, carefully considering all the possible approaches, and choosing the approach that works best in practice.


How do you explain that they researched it if there is not a single paper, article, or any kind of written assessment or analysis to show, or even suggest, that P2P would not be suitable or would be less practical for whatever situation they required? There is no such document. All the research, theory and practice goes only to prove P2P is very useful, practical and heavily underrated. What would Quake or Counter Strike lose if they provided P2P at least as an option, and what would they gain?



Quote:
Original post by hplus0603
With dozens of millions on the line, and a studio of 150-200 people working for years on a game, do you think "being silly" is the reason why they're accomplishing something that nobody else has done before?

AFAICT, they're using client/server, and they're using user-input passing, and they're doing 256 simultaneous players. I suggest you try replicating that with P2P and see how far you get.


No problem, we already have it. Stickman uses 5KBps upload with 50 players; therefore it would need about 25KBps for 250 players. It scales as easily as that. Now, you tell me how much they would save if they used a P2P approach instead, and how many more players they could support? It costs less, yet gives you more.



Quote:
Original post by Ysaneya
Correct. There's a choice to make. In C/S (in my particular example), individual bandwidth per client is x5 times lower than for a P2P client. But you need a server with a capacity much higher than a single client.

The good news is that companies, when speaking of a number of players over the two digits, normally provide such a server; therefore players don't have to worry about this kind of concern, and can play with x5 times as many players than in the P2P model.


This is particularly about games that do not have a dedicated server, games whose dedicated servers have closed down, and all those games that simply do not have a chance of ever having a dedicated server, or that simply want to use a less expensive approach. Is there a reason not to have P2P as an option?


Quote:
Original post by Ysaneya
Roughly the same time it takes for a peer to calculate physics and collision for 50 clients.. ? The advantage being that a server can be authoritative while peers trusting each other opens the door to all sorts of hacks and cheats..


P2P can distribute the load and have each peer do the physics and collision only for itself. This is particularly about games without a dedicated server, or console games, where security risks are not an issue or are the same as with the C/S approach.

#45 KulSeran   Members   -  Reputation: 2544


Posted 05 September 2009 - 05:48 PM

Quote:

Ok, but is that argument for or against P2P?
I agree overhead makes it less efficient for smaller packets, but that still does not make a difference.


I'm not arguing either. You were saying "everyone discount X, it doesn't matter". But it does matter. And where packet loss happens matters too. Losing packets on TCP slows you down. Losing packets on UDP will go unnoticed. Losing parts of larger packets stalls both ends, but more than that, knowing that you may need to resend a packet means that the "server" side has to keep track of all the packets it has sent recently. This means your peer computers have to have more memory and better hardware to make sure the extra network stack doesn't bog them down. And it CAN bog them down; have you ever noticed how many resources uTorrent uses when you get a good download going?
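
As an illustration of that bookkeeping, here is a minimal sketch of a resend buffer for a hand-rolled reliable layer over UDP; the class and method names are hypothetical, not from any particular library:

import time

class ResendBuffer:
    """Toy reliability layer: every unacknowledged packet is kept in memory so
    it can be retransmitted later - the per-peer memory cost mentioned above."""

    def __init__(self, resend_after=0.2):
        self.resend_after = resend_after   # seconds to wait before resending
        self.next_seq = 0
        self.unacked = {}                  # seq -> (payload, last_send_time)

    def send(self, payload, transmit):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = (payload, time.monotonic())
        transmit(seq, payload)             # transmit() would wrap a UDP sendto

    def on_ack(self, seq):
        self.unacked.pop(seq, None)        # receiver confirmed it; drop our copy

    def tick(self, transmit):
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at >= self.resend_after:
                self.unacked[seq] = (payload, now)
                transmit(seq, payload)     # resend anything still outstanding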

Quote:

Quote:

Original post by Ysaneya
Roughly the same time it takes for a peer to calculate physics and collision for 50 clients.. ? The advantage being that a server can be authoritative while peers trusting each other opens the door to all sorts of hacks and cheats..



P2P can distribute the load and have each peer do the physics and collision only for itself. This is particularly about games without a dedicated server, or console games, where security risks are not an issue or are the same as with the C/S approach.

Refer to my other post (here). Your logic is flawed. As soon as your game becomes complex enough that it involves graphics, and an unknown end-client computer spec, the server wins the compute-time battle every time.

Quote:

This is particularly about games that do not have a dedicated server, games whose dedicated servers have closed down, and all those games that simply do not have a chance of ever having a dedicated server, or that simply want to use a less expensive approach. Is there a reason not to have P2P as an option?

Many games release dedicated servers. As time goes by, everyone's specs have increased. If the company shuts down their dedicated servers, you hook up the dedicated server on a hosting site OR on a good home PC, and run with it. Times change things; you have to remember that.

Quote:

With my current internet connection, 32 players already uses almost all my upload bandwidth; however, if I were to host a server I wouldn't even be able to host a game with 16 players.

OK, so I'm FINALLY beginning to understand what "BETTER" means. It is nice if people explain their ambiguous words. "Better" could have meant a LOT of things. In the face of "I, as a funded company, can choose to host servers and have gamers be clients", the client/server approach wins every time. In the face of "I'm releasing this to the people, and can't afford a server", P2P may look more attractive.

Economy of scale and all, though. You are maxing out your bandwidth at 32 players; you still didn't gain much from P2P. Consider also that consumer lines will always be smaller than business lines, so for $60+ a month you COULD be hosting a 128 or 256 player game on a server that has 10 or 100 fold the upload that any consumer will have. If you optimize for that case you can ensure a good QoS for your customers.

Besides, how often do you REALLY set up games that need more than 4 people? 8? 16? 32? 256? 32768? When I do LAN parties we are usually lucky to get 8 to 16 people, AND we are all in one place on 100baseT lines. You seem to be arguing a case that doesn't often exist anyway. You get X people, each pitches in $5, you all get a dedicated server for the month, and you all play happily ever after, and don't have to worry about firewalls, packet shapers, NATs, etc. And every player doesn't need to buy cable/fiber internet just to play.

And on THAT note:
Quote:

Does the fact that Age of Empires implements P2P on 28.8 modem with perfect synchronization, determinism and fluid animation not make all the objections against P2P invalid, especially if this is intended for console systems and games without dedicated server where the security risks are just the same as with client hosting the game?

Do YOU remember how many "CABLE ONLY!!!111eleven" games people hosted? The bandwidth usage was still out of bounds of playable if you got more than 2 people and a decent unit cap. Just because it "worked" doesn't mean it was the end-all best solution out there.

And THAT reminds me of the physics thing again. Have you played Supreme Commander? It is P2P. Everyone hosts "QuadCore ONLY!" games because it is way more physics than the clients can handle at the same time they do all the pretty graphics, networking, and task switching to my VoIP app.

#46 Gaffer   Members   -  Reputation: 422


Posted 05 September 2009 - 06:01 PM

Quote:
abaraba says
How do you explain that they researched it if there is not a single paper, article, or any kind of written assessment or analysis to show, or even suggest, that P2P would not be suitable or would be less practical for whatever situation they required? There is no such document. All the research, theory and practice goes only to prove P2P is very useful, practical and heavily underrated.


this statement is incorrect. there are many documents discussing the quake, counterstrike and unreal networking models. each of them discusses why client/server is a more attractive choice than peer-to-peer.

Quote:
What would Quake or Counter Strike lose if they provided P2P at least as an option, and what would they gain?


their overall bandwidth would be higher, they would be massively susceptible to cheating, and the players would be lagged by the most lagged player when using a lockstep deterministic P2P networking model like the one used in age of empires.

Quote:
No problem, we already have it. Stickman uses 5KBps upload with 50 players; therefore it would need about 25KBps for 250 players. It scales as easily as that.


yes, and... ?

it is well known that bandwidth per-peer scales at O(n) - so what?

the reason we don't care is because there is something better: with client/server the client bandwidth is O(1) - it does not increase as the number of players increases - it stays the same.

Quote:
Now, you tell me how much they would save if they used a P2P approach instead, and how many more players they could support? It costs less, yet gives you more.


they would save nothing. their product would not work and they would lose their jobs.

Quote:
P2P can distribute the load and have each peer do the physics and collision only for itself. This is particularly about games without a dedicated server, or console games, where security risks are not an issue or are the same as with the C/S approach.


assuming you go with a deterministic lockstep input passing based P2P sync, you cannot perform the physics and collision just for yourself. you must by definition run the exact same code for all peers. i am surprised that you don't seem to understand this, since the age of empires paper is quite explicit in describing that they use a P2P lockstep deterministic network model, like pretty much all other RTS games.

[Edited by - Gaffer on September 6, 2009 2:01:55 AM]

#47 Azgur   Members   -  Reputation: 325


Posted 05 September 2009 - 09:59 PM

Comparing bandwidth utilization between P2P and Server/Client isn't always that relevant either.
Often these models also have a very different implementation in which data is sent, resulting in very different bandwidth utilizations.

For P2P it's not uncommon to only share input between all clients and have the clients simulate the game deterministically, freezing the game if one of the clients stalls.
This results in a very low bandwidth utilization but comes at the cost of leaving several potential attack vectors for cheaters open.
Many RTS games don't owe their low bandwidth utilization to P2P; rather, they got it from the use of an input sharing model.
The input sharing model is less common in client/server models as latency is a much bigger issue here. So the choice here is often based on latency, not bandwidth utilization.

In a client/server model it's much more common to see updated data flow from the server to the clients rather than input.
This model can deal with updates at variable times and use interpolation/extrapolation to keep the appearance of a smooth simulation.
Here the simulation is often not deterministic among different clients, but close enough.
Using clever prediction these simulations also very often give a real-time feel, something that often isn't the case in input sharing P2P models.
In fact, in an input sharing model it's not uncommon to intentionally introduce artificial lag to handle network fluctuations. This is a lot less suitable for an FPS, for example.
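
To illustrate the client/server style described above, here is a minimal sketch of snapshot interpolation, where the client renders slightly in the past and blends between the two state snapshots that bracket the render time (purely illustrative, not taken from any shipped game; the input-sharing lockstep style is sketched further down the thread):

# Client/server, state-replication style: the server sends (timestamp, position)
# snapshots; the client renders a little in the past and interpolates between
# the two snapshots surrounding the render time.
def interpolated_position(snapshots, render_time):
    # snapshots: list of (timestamp, position) pairs, sorted by timestamp
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1 and t1 > t0:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    return snapshots[-1][1]   # outside the buffered range: clamp to newest

# e.g. interpolated_position([(0.0, 0.0), (0.1, 1.0)], 0.05) -> 0.5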

These differences in networking models cause the decision of P2P vs Client/Server to only partially be about bandwidth utilization. Genre is a very defining factor in this decision, and certain models are much more suited for a specific genre.

Abaraba is really naive in thinking there's a single holy grail while his arguments only hold for a tiny part of the equation even if they were correct.

Finally, there's also a commercial factor involved.
Using a client/server setup you can effectively release only a client and keep all your server software in-house to profit from.
This is, for example, very common for MMOs. But it's also seen in free-to-play shooters where the authors intend to profit from renting out game servers instead.

Basically, P2P vs client/server is a very silly discussion. There are many factors involved that will quite easily push a developer into one of the two solutions.
Bandwidth utilization often isn't one of them. Latency and genre are.
Remco van Oosterhout, game programmer.
My posts are my own and don't reflect the opinion of my employer.

#48 Andrew F Marcus   Banned   -  Reputation: 100


Posted 05 September 2009 - 10:12 PM

Quote:
Original post by KulSeran
Many games release dedicated servers. As time goes by, everyone's specs have increased. If the company shuts down their dedicated servers, you hook up the dedicated server on a hosting site OR on a good home PC, and run with it. Times change things; you have to remember that.


How many games have dedicated servers, 2%? Regardless of time P2P will always be able to make it more practical for average users to get together in greater numbers.


Quote:
Original post by Gaffer
this statement is incorrect. there are many documents discussing the quake, counterstrike and unreal networking models. each of them discusses why client/server is a more attractive choice than peer-to-peer.


There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?

#49 Azgur   Members   -  Reputation: 325


Posted 05 September 2009 - 10:19 PM

Quote:
Original post by Andrew F Marcus
There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?


Read up on it.
Many shooters (including unreal/quake) use several tricks to give the appearance of real-time behavior to the user.
These tricks only work in a client/server model.

Basically, the clients predict the player's actions ahead of time and only correct them when the local result differs too much from the result provided by the server.
This trick requires an authoritative server which makes the final decision as several clients are running around slightly out of sync.
This works well in most cases but does result in rubber banding or delayed deaths on laggy connections. But this is considered an acceptable loss as most of the time the player feels he receives instant feedback.

In a P2P model you're required to run the game deterministically (or face a really really huge mess) which means you can't hide latency with the same trick.
P2P games often can't simulate the next step without having received input from all peers.
This results in 1 client being able to freeze all other clients.
This is considered acceptable in RTS games, but completely unacceptable in FPS games.
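
A minimal sketch of that lockstep constraint, assuming a simple per-turn input exchange (structure and names are hypothetical, not taken from any shipped game):

class LockstepPeer:
    """Toy deterministic lockstep: turn N can only be simulated once input for
    turn N has arrived from every peer, so one stalled peer stalls everyone."""

    def __init__(self, peer_ids):
        self.peer_ids = set(peer_ids)
        self.turn = 0
        self.inputs = {}                    # turn -> {peer_id: command}

    def receive_input(self, turn, peer_id, command):
        self.inputs.setdefault(turn, {})[peer_id] = command

    def try_advance(self, simulate):
        turn_inputs = self.inputs.get(self.turn, {})
        if set(turn_inputs) != self.peer_ids:
            return False                    # someone's input is missing: wait
        simulate(self.turn, turn_inputs)    # identical code runs on every peer
        del self.inputs[self.turn]
        self.turn += 1
        return True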

Of course I expect you to either completely ignore this post or nitpick on irrelevant details and ignore the main subject in the process.

[Edited by - Azgur on September 6, 2009 4:19:31 AM]
Remco van Oosterhout, game programmer.
My posts are my own and don't reflect the opinion of my employer.

#50 Andrew F Marcus   Banned   -  Reputation: 100


Posted 05 September 2009 - 11:26 PM

Quote:
Original post by Azgur
Using clever prediction these simulations also very often give a real-time feel, something that often isn't the case in input sharing P2P models.

In fact, in an input sharing model it's not uncommon to intentionally introduce artificial lag to handle network fluctuations. This is a lot less suitable for an FPS, for example.


You are describing two identical techniques, equally implemented in either case. A server running 100ms in the past is the same thing as issuing commands 100ms into the future. The only difference is whether you're going to interpolate over this lag with client-side predictions or simply let the user experience the latency.


Quote:
Original post by Azgur
Abaraba is really naive in thinking there's a single holy grail while his arguments only hold for a tiny part of the equation even if they were correct.


The point is very simple and doesn't involve any grails - "Stickman" is possible: 5KBps upload with 50 players over P2P. That's all. It's a fact, so it's not incorrect. Predicting the possibility of such a game, while everyone is saying it's impossible, does not make it naive, but visionary.



Quote:
Original post by Azgur
Quote:
Original post by Andrew F Marcus
There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?


Read up on it.


Read what? There is no such document.


Quote:
Original post by Azgur
Many shooters (including unreal/quake) use several tricks to give the appearance of real-time behavior to the user.
These tricks only work in a client/server model.


Not true, all the same tricks work all the same.


Quote:
Original post by Azgur
Basically, the clients predict the player's actions ahead of time and only correct them when the local result differs too much from the result provided by the server.
This trick requires an authoritative server which makes the final decision as several clients are running around slightly out of sync.


Why do you imagine peers cannot predict actions ahead? It's nothing more than velocity extrapolation. An authoritative server is not necessary to make interaction fair.


Quote:
Original post by Azgur
In a P2P model you're required to run the game deterministically (or face a really really huge mess) which means you can't hide latency with the same trick.


How did you come up with that? If you pass only input then you likely need determinism, but if you pass all the data then you don't, same as with a server approach. Go play Stickman.


Quote:
Original post by Azgur
P2P games often can't simulate the next step without having received input from all peers.
This results in 1 client being able to freeze all other clients.
This is considered acceptable in RTS games, but completely unacceptable in FPS games.


False. It is not about P2P, it's about passing input and determinism. If you wanted to make Age of Empires with a C/S model you would need to do exactly the same thing.


Quote:
Original post by Azgur
Of course I expect you to either completely ignore this post or nitpick on irrelevant details and ignore the main subject in the process.


I expect you to give me that link you told me to read up on, so I can read it.

#51 KulSeran   Members   -  Reputation: 2544


Posted 05 September 2009 - 11:45 PM

Quote:

Quote:
Original post by Gaffer
this statement is incorrect. there are many documents discussing the quake, counterstrike and unreal networking models. each of them discusses why client/server is a more attractive choice than peer-to-peer.



There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?

And google presents (in 0.29sec no less)

how and why quake's model works:
http://www.runequake.com/hoh/Quake.pdf

discussion of many of the pitfalls of P2P quake and how to help alleviate them. But notice how they skirt around the bandwidth problem by limiting the data set. The same techniques could apply to client/server.
http://cseweb.ucsd.edu/~fuyeda/papers/iptps2007.pdf

More discussion of P2P and the extremes you have to go to in order to make it work for clients with vastly different hardware and networks.
http://prisms.cs.umass.edu/brian/pubs/stjohn.nossdav.2005.pdf

Quote:

How many games have dedicated servers, 2%? Regardless of time P2P will always be able to make it more practical for average users to get together in greater numbers.


Of the games i have:
TF2, MW4, L4D, CS Source, HL2, Quake 3, UT2K4, IL2, Descent Freespace...

Hmmm... looks like I didn't find a SINGLE game that has multiplayer, uses client/server, and doesn't provide an .exe I can run/remote host as a dedicated server. Not saying they don't exist, but that is a lot of big name games I listed, and they all provide that functionality.

RTS games are usually P2P, so they can all run without hosted servers. (except SupCom, cause they tied that to GPGNet for some stupid reason)

And again, I'm going to stress: without a dedicated server (pirate bay, battlenet, gpgnet, steam), how many people do you expect to play a game together? It isn't a magical "P2P! GO!"; people still need a place to gather to play the game. If you as a company have to provide a matchmaking server, and the gameplay model plays better (lower lag, less cheating, etc.) on C/S, why not set up that way to start with and save yourself the hassle of making P2P work? Read the papers above; P2P takes a lot more work just to get it to behave close to the QoS level C/S setups have.


Finally, from a developer standpoint: there are deadlines. Imagine 6 months to make this P2P or C/S game model work, with a publisher breathing down your neck, quoting your minimum specs, and quoting you support costs for features if they are expected to host anything (a server OR just a matchmaking server). Now, are you going to go P2P and risk all the problems it has? (Have you read how hard it is to get BitTorrent working? Network problems abound.) Or just go with client/server, pay hosting costs, but save on support line costs? Very few games (i.e. only PC, and some console live games) are willing to provide devs with support money for patches and the like. So again, something that is risky? Or something that works out of the box where you don't have to risk wasting money patching as often?
IF there were some "P2P Networking Middleware!™" company then things would be different, but there isn't one. On the other hand there are tonnes of dedicated server hosting sites a company can choose to rent from.

Quote:

Why do you imagine peers cannot predict actions ahead? It's nothing more than velocity extrapolation. An authoritative server is not necessary to make interaction fair.


Ah, but it is. Because in the face of cheaters and packet loss, not every client sees the same view of the game. You need arbitration. Either you waste bandwidth sending data so the clients can choose to agree on the world state, or you have a dedicated server that dictates the world state.

Quote:

Read what? There is no such document.

I expect you to give me that link you told me to read up on, so I can read it.

Learn to google. Tonnes of CS majors go on to grad school and have to publish papers on this type of crap all the time. Also, check out some of the networking middleware that is available (like eNet or RakNet). They also discuss quite in depth most of what you are talking about.

It is a bad attitude to demand resources that you aren't willing to look for. I have yet to see you provide any source documents of any academic quality to back up your claims. That is why you keep getting banned. Arguments aren't about "you are wrong, I'm right"; they are about "here is proof", "rebuttal proof". Scientific method and all. Everyone else is being thorough and rebutting points categorically down each of your posts. Why not follow suit? I note you picked a SINGLE point on my list to rebut/accept. Sounds antagonistic, and like you are skirting the issues instead of attempting to understand, accept, or properly attempt to educate us on how we are wrong.

Quote:

If you pass only input then you likely need determinism, but if you pass all the data then you don't,

And there's the rub. What type of game are you making? An input-based twitch game (FPS) or a turn-based/lockstep game (RTS)?

[Edited by - KulSeran on September 6, 2009 6:45:11 AM]

#52 Azgur   Members   -  Reputation: 325


Posted 05 September 2009 - 11:48 PM

Quote:
Original post by Andrew F Marcus
You are describing two identical techniques, equally implemented in either case. A server running 100ms in the past is the same thing as issuing commands 100ms into the future. The only difference is whether you're going to interpolate over this lag with client-side predictions or simply let the user experience the latency.


No, there is a huge difference.
One model provides real-time feedback to user input by predicting the actions locally.
The other model introduces a delay in the user's actions to hide network fluctuations.
The difference is in the speed of feedback from key presses.

Quote:
Original post by Andrew F Marcus
The point is very simple and doesn't involve any grails - "Stickman" is possible: 5KBps upload with 50 players over P2P. That's all. It's a fact, so it's not incorrect. Predicting the possibility of such a game, while everyone is saying it's impossible, does not make it naive, but visionary.

A single game is not representative of all games ever made.
Like I stated in my post, it depends on genre. It's all about user expectations and how much lag they're willing to tolerate.
An RTS or RPG can quite often get away with several hundred milliseconds of input lag. An FPS absolutely cannot.

Quote:
Original post by Andrew F Marcus
Read what? There is no such document.

http://unreal.epicgames.com/Network.htm
http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking
http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
A few of the many documents on networking in a real-time FPS game.
I had no trouble finding them using Google at all.

Quote:
Original post by Andrew F Marcus
Not true, all the same tricks work all the same.

From this I can only conclude you've never designed or implemented a networking model.
I have. There are very distinct differences in tricks used in C/S and P2P.
P2P with input sharing causes all clients to run a full simulation of the game.
Client/server with an authoritative server will turn clients into dumb viewers who only run a partial simulation for the local player to hide latency.


Quote:
Original post by Andrew F Marcus
Why do you imagine peers cannot predict actions ahead? It's nothing more than velocity extrapolation. An authoritative server is not necessary to make interaction fair.

No, you can't make predictions in a P2P input sharing model.
Since only input is shared, the clients need to run fully deterministically to remain synchronized.
Note the words "input sharing". There are no object positions being shared, only key presses.


Quote:
Original post by Andrew F Marcus
How did you come up with that? If you pass only input then you likely need determinism, but if you pass all the data then you don't, same as with a server approach. Go play Stickman.

A data sharing model in P2P is highly prone to hacking and cheats.
Additionally it introduces several problems with interactions and timings.
Again, I base this on real life experience. Not assumptions I made by briefly looking at various other games.

Quote:
Original post by Andrew F Marcus
False. It is not about P2P, it's about passing input and determinism. If you wanted to make Age of Empires with a C/S model you would need to do exactly the same thing.

It's funny you should mention Age of Empires. That game will freeze all clients when one of the clients freezes sufficiently.
The reason against using the input sharing model with client/server is the additional latency introduced; this is a much bigger factor than bandwidth usage.

Quote:
Original post by Andrew F Marcus
I expect you to give me that link you told me to read up on, so I can read it.

They're at the top of the post.
Please educate yourself before making these outrageous claims.
Your posts make it very clear you have no practical experience in designing or implementing network games.
Your statements and theories are out of sync with the real world.

[Edited by - Azgur on September 6, 2009 6:48:29 AM]
Remco van Oosterhout, game programmer.
My posts are my own and don't reflect the opinion of my employer.

#53 Andrew F Marcus   Banned   -  Reputation: 100


Posted 06 September 2009 - 12:32 AM

Quote:
Original post by KulSeran
And google presents (in 0.29sec no less)



Thank you, my friend. I don't think I need to say anything anymore; from now on I'll just quote these links. Are you on my side, or were you just too lazy to actually read those papers?


Scaling Peer-to-Peer Games in Low-Bandwidth Environments
http://cseweb.ucsd.edu/~fuyeda/papers/iptps2007.pdf
Jeffrey Pang, Carnegie Mellon University
Frank Uyeda, U. C. San Diego
Jacob R. Lorch, Microsoft Research



...For this reason, we are developing a new architecture, called Donnybrook, to enable large-scale P2P games even in environments with highly constrained bandwidth.

To evaluate our techniques, we modify Quake III, a popular first-person shooter (FPS) game, to run on Donnybrook.


To test these techniques, we implement them and conduct a user study that evaluates the resulting game. We find that our techniques make a game played with low bandwidth significantly more fun than existing techniques, and nearly as much fun as one played on a LAN. Thus, they enable an order of magnitude more players than existing techniques.



We present the results of a large user study, which show that Donnybrook substantially increases the enjoyment of P2P Quake III in a low-bandwidth environment.

#54 KulSeran   Members   -  Reputation: 2544


Posted 06 September 2009 - 12:37 AM

WOW. You took entirely the wrong take on those papers. I posted those to educate you on the pitfalls of P2P, and how much more work it is to get P2P working vs the standard client/server approach. They even document how much more bandwidth the P2P method takes vs C/S. I read each of those end to end before posting them, in the hopes of finding documents that would help you understand what makes P2P the worse of the two solutions.

That said, you were looking for data to prove or disprove that P2P is "better". Do your research and you will find there IS research being done in both the C/S and P2P directions. Agreed, the "Donnybrook" method can make P2P work better than P2P currently does, but it is NOT a document comparing C/S directly to P2P. It is a comparison of different P2P techniques against other P2P techniques.

Quote:

more fun than existing techniques, and nearly as much fun as one played on a LAN. Thus, they enable an order of magnitude more players than existing techniques.

They are only comparing P2P techniques vs other P2P techniques. NOT C/S vs P2P.



-- sorry for all the edits --

#55 Andrew F Marcus   Banned   -  Reputation: 100


Posted 06 September 2009 - 04:08 AM

Quote:
Original post by KulSeran
WOW. You took entirely the wrong take on those papers. I posted those to educate you on the pitfalls of P2P, and how much more work it is to get P2P working vs the standard client/server approach.


I'm sorry if you find it confusing, but there are no pitfalls in P2P other than the usual latency and sync issues, just like with the server approach. Do you realize they made their P2P by running a Quake server on each peer? There is no more work involved; it's one and the same thing.


Quote:
Original post by KulSeran
They even document how much more bandwidth the P2P method takes vs C/S.


Hey, you said they were not comparing it with C/S. You must be talking about some other paper; this one is actually about Low-Bandwidth Environments, where no server can run.


Quote:
Original post by KulSeran
I read each of those end to end before posting them, in the hopes of finding documents that would help you understand what makes P2P the worse of the two solutions.


Worse? What are you referring to, this:

Online multiplayer games have become an important part of the computing landscape. There is a growing desire to serve these games using the machines of participants themselves rather than with dedicated servers.

Using participant machines reduces subscription costs, eliminates dependency on centralized infrastructure, and automatically scales to an arbitrary number of clients.




Quote:
Original post by KulSeran
That said, you were looking for data to prove or disprove that P2P is "better". Do your research and you will find there IS research being done in both the C/S and P2P directions.

I can't find anything, so if you have more links to share, please do.

Quote:
Original post by KulSeran
Agreed, the "Donnybrook" method can make P2P work better than P2P currently does, but it is NOT a document comparing C/S directly to P2P. It is a comparison of different P2P techniques against other P2P techniques.


Are you aware they were measuring "fun"? The test subjects were all gamers, Quake players; they did not even know the game was running in P2P. So when these guys tell you the game was nearly as much fun as one played over a LAN, what do you make of it?

P2P Quake had terrible problems?
P2P Quake played worse than normal Quake?
P2P Quake played better than normal Quake?

#56 Azgur   Members   -  Reputation: 325


Posted 06 September 2009 - 04:16 AM

Andrew F Marcus, please answer me this:
Do you have any personal experience to back up these claims?
Have you prototyped different approaches and witnessed the pros and cons firsthand?

A large amount of the people providing counter arguments to your claims actually have experience in designing and implementing network models.
Articles (especially if not your own) and theories are not a substitute for real life experiences.

#57 Promit   Moderators   -  Reputation: 7213


Posted 06 September 2009 - 04:18 AM

Come on guys, the OP should have made it perfectly clear not to talk to this moron.

#58 Gaffer   Members   -  Reputation: 422


Posted 06 September 2009 - 07:41 AM

Quote:
In a P2P model you're required to run the game deterministically (or face a really really huge mess) which means you can't hide latency with the same trick.
P2P games often can't simulate the next step without having received input from all peers.
This results in 1 client being able to freeze all other clients.
This is considered acceptable in RTS games, but completely unacceptable in FPS games.


this is a very valid point.

consider what would happen if abaraba took age of empires networking model and scaled it up to 250 players...

if any one of those 250 players experienced network delays, all other players in the game would have to stop and wait for them.

as the number of players increases the probability of any player having network issues at any time increases too - eventually at some point the game would be generally unplayable as all players get stuck in a feedback loop waiting for the most lagged player.
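
a back-of-the-envelope illustration of that point: if each peer independently has some small chance of missing a turn deadline, the chance that at least one of n peers misses it (and therefore stalls the whole lockstep game) is 1 - (1 - p)^n. the 1% per-peer figure below is an arbitrary assumption, not a measurement:

def stall_probability(p_single, players):
    # chance that at least one of `players` peers misses the turn deadline,
    # assuming independent failures with per-peer probability p_single
    return 1 - (1 - p_single) ** players

for n in (8, 50, 250):
    print(n, round(stall_probability(0.01, n), 3))
# 8   -> 0.077
# 50  -> 0.395
# 250 -> 0.919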

consider also that a deterministic lockstep model would generally require all 250 players to join a lobby before starting the game. you cannot support late joins. consider how long it would take to get 250 players together before you could start the game - it takes long enough to get four together in left 4 dead!

[Edited by - Gaffer on September 6, 2009 4:41:31 PM]

#59 Gaffer   Members   -  Reputation: 422


Posted 06 September 2009 - 08:05 AM

Quote:
Original post by Promit
Come on guys, the OP should have made it perfectly clear not to talk to this moron.


absolutely.

and now, i think it's only fitting to say goodbye to abaraba - and to thank him for all the good times we've had together



so long abaraba, until next time!

#60 jsaade   Members   -  Reputation: 197


Posted 06 September 2009 - 09:10 AM

I've been away from gamedev's network section for a while now. This topic is really funny; I can't believe there is a discussion about s/c vs p2p! It is a funny 3-page read though, thanks :)



