Gaffer

Incoming hilarity: abaraba is back, and this time he's fixated on P2P networking!

82 posts in this topic

Quote:
Original post by hplus0603
If there is no per-packet overhead, that statement would not be true, because squared and times-two are different functions.


I do believe the server's upload bandwidth is O(n^2), since the server needs to send (n-1)-sized packets to n players in the worst case.

Quote:
Original post by hplus0603
Do you think that the people who wrote Counter-Strike: Source, or Quake III: Arena, or Unreal Tournament, are idiots? Do you think they do not carefully test a number of different approaches before they settle on what works best? Why do you think they are not using P2P topologies?


I think they don't use P2P because it's not very secure, which leads to a lot of cheating/hacking.

edit:
Quote:
Original post by hplus0603
I'll leave you with another question: We've all heard about MAG (Massive Actiongame) [www.gamespot.com], right? Do you think they use client/server, or peer-to-peer, for their 256-player matches? And why?

P2P only works up to the lowest upload speed among the clients, and that requirement grows at a rate of O(n).
Whereas client/server requires only O(1) upload from each client.
edit2: the packet header also limits P2P further, since it requires each of the 256 players to send 255 packets.
Client/server only needs 256 + 256 packets.


sidenote: the link has an extra ' at the end (or is missing one at the start)
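The packet-count argument above can be sketched as a quick tally per update (a sketch; it assumes one UDP packet per link per update, and that a server can merge each client's view of the world into a single packet):

```python
def packets_per_tick(n):
    """Packets sent per simulation tick for n players.

    Assumes one UDP packet per link per update, and that a server
    can merge all the state destined for one client into one packet.
    """
    p2p_per_peer = n - 1        # each peer uploads to every other peer
    p2p_total = n * (n - 1)     # O(n^2) packets in flight overall
    cs_per_client = 1           # each client uploads one packet to the server
    cs_server = n               # server uploads one merged packet per client
    return p2p_per_peer, p2p_total, cs_per_client, cs_server

print(packets_per_tick(256))  # (255, 65280, 1, 256)
```

Note the distinction this makes explicit: the O(n^2) total applies to the whole P2P mesh, while any single peer only uploads O(n) packets.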

Quote:
Original post by Andrew F Marcus
It is easy to write P2P programs and very much easier to sync them. Ask our friend "Stickman": "The multiplayer part is P2P, I've done my best on reducing latency and bandwidth, using various packing schemes and priorities. It works nicely with 30 people online, using only 5kBps upload."

http://www.gamedev.net/community/forums/topic.asp?topic_id=509402&whichpage=1

I don't see how it's "easy" to write a P2P program.

[Edited by - Jeonjr on September 5, 2009 2:58:13 AM]
Quote:
Original post by Jeonjr
P2P only works up to the lowest upload speed among the clients, and that requirement grows at a rate of O(n).
Whereas client/server requires only O(1) upload from each client.


Yes, but that advantage only applies if some company is hosting a dedicated server and paying for all the bandwidth. Still, if the game is not super successful, the servers will go offline rather quickly, leaving the O(n^2) problem to you.


Quote:
Original post by Jeonjr
edit2: the packet header also limits P2P further, since it requires each of the 256 players to send 255 packets.
Client/server only needs 256 + 256 packets.


Peer upload: 255 packets
Server upload: 256*256 = 65536 packets


Quote:
Original post by Jeonjr
I don't see how it's "easy" to write a P2P program.


I don't know what to tell you. You use exactly the same network code as in client-server approach, except that you run a server on each client, so to speak. Is there anything in particular you're worried about?
http://stickman.hu/
http://stickman.hu/page/?o=download&l=eng

Ok, this is absolutely fantastic! Stickman works wonderfully!!

There are quite a few people playing there right now. Most of them seem to be halfway across the globe, in Hungary, so latency was probably about 400-500 ms, but it played beautifully nevertheless. There are clans, top players, high scores, forums, all the stuff... this is really a quality game with many happy users to vouch for it.


It took me about two minutes to get into the game from the moment I clicked the download link. There is no need for installation or any kind of setup whatsoever; you just download a 10 MB exe and run it. Simple as that.

Just like the author says: "The portable version doesnt need to be installed, it just runs". Indeed, it just runs! Ha, great stuff. Click "connect" and there you are... running, jumping, driving vehicles, shooting Stickmen halfway across the world. It's smooth, it's fun, it's great... bravo!




Stickman on P2P:
- "Interestingly, this is a P2P online game, and the server I talked about is a simple NAT introducer. It was a great challenge to reduce bandwidth to the minimum. But I think I got it right, because it only uses 5KBps upload with 50 players, so It could even work with Dialup!"

-"I use byte packing and some priority thing (like, I send packets more often to players, who are closer to me etc.), so there's less lag than you would think, since if you're shooting with someone, you send most of your packets to each other."

-"1000 players have played with it since i started to develop it. Not Massive, eh?"




I didn't think stick figures could get any better than Xiao-Xiao, but you did it. Stickman, you rule.
Quote:
Original post by Andrew F Marcus
Peer upload: 255 packets
Server upload: 256*256 = 65536 packets


The maximum size of a packet is 1500 bytes due to the Ethernet MTU; 576 bytes is the minimum MTU every host is required to accept. Depending on the payload, the server can pack multiple payloads into the same packet.

(1500-28)/255 = 5.7. This means that if the payload is less than 6 bytes (quite possible), the server can send each client the entire state of all 255 other players in a single packet:
[UDP Header][C1][C2][C3]....[C255]

As such, server upload is 255 packets.

With 5 bytes of payload, the overhead of P2P is ~85%.
With 5 bytes of payload, the overhead of the server upload is ~2%.

For every additional ~6 bytes of payload per client, the server would need to send an extra packet per client.

This is where the difference comes from.
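These overhead figures can be checked in a few lines (a sketch; 28 bytes is the combined IPv4 + UDP header, and the 5-byte per-player payload is the thread's assumption):

```python
UDP_IP_HEADER = 28   # 20-byte IPv4 header + 8-byte UDP header
MTU = 1500           # typical Ethernet MTU

def header_overhead(payload_bytes):
    """Fraction of the wire bytes that is header rather than game data."""
    return UDP_IP_HEADER / (UDP_IP_HEADER + payload_bytes)

per_player = 5                    # bytes of state per player (assumed)
others = 255                      # state entries in the server's merged packet

merged_size = UDP_IP_HEADER + others * per_player
assert merged_size <= MTU         # 1303 bytes: the merged packet fits in one MTU

print(f"P2P packet overhead:    {header_overhead(per_player):.0%}")
print(f"Merged packet overhead: {header_overhead(others * per_player):.0%}")
```

The tiny P2P packets are ~85% header; the server's merged 1303-byte packet is only ~2% header, which is where its per-byte efficiency comes from.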
Quote:

Maximum size of a packet is 1500 bytes due to MTU, minimum is 576.


What do you mean - minimum?


Quote:

(1500-28)/255 = 5.7. This means that if the payload is less than 6 bytes (quite possible), the server can send each client the entire state of all 255 other players in a single packet:

[UDP Header][C1][C2][C3]....[C255]

As such, server upload is 255 packets.


Ok, fair enough, quite a few bytes saved there. But a packet is no longer a measure of size, so we need to plug in some real byte counts, like so...

Peer upload cost: 254 * (28+5) bytes = 8,382 bytes
Server upload cost: 255 * 1500 bytes = 382,500 bytes

The difference remains huge.
Quote:
What do you mean - minimum?


Packet fragmentation.

Quote:
Peer upload cost: 254 * (28+5) bytes = 8,382 bytes
Server upload cost: 255 * 1500 bytes = 382,500 bytes

The difference remains huge.


Yes, it does.

P2P upload: 8,382 bytes.
P2P download: 8,382 bytes.
S/C client upload: 33 bytes.
S/C client download: 1,500 bytes.

Next, remember that this is *one single update*. You need 10-30 *per second*.

Now run the numbers above, taking payload merging into consideration, for WoW, or 3,000 users per shard.

Then take into consideration that each user pays $15, and that each server sits on a backbone with multiple gigabit connections.

If all you want is a little multiplayer game for your friends, then P2P will "work". If you plan on running a business, think again.

-------------
Edit, part II, The Heavy Cannons

So far, we have only been sending payload in raw form. But we have means of reducing it via compression.

Let's have a standard zlib compressor, which reduces the size of payload by 40%. UDP header cannot be compressed.

When sending 5 bytes, we reduce that to 3.

P2P, both directions: 254 * (28+3) bytes = 7,874 bytes

What about server? Remember that we can compress *entire* payload, which in our case is all of 255 individual messages.

Server upload cost: 255 *(28+254*3) bytes = 201,450 bytes.


Now we apply this to a real case: 10 updates per second.
P2P, both directions: 7,874 bytes * 10 * 8 = 629,920 bits per second. While this is not horrible, 630 kbit/s is more upload than most users have. It definitely doesn't work over dial-up.

S/C, client upload: 31 * 10 * 8 = 2,480 bits per second.
S/C, client download: (28+254*3) * 10 * 8 = 63,200 bits per second.

In the S/C model, a user could just barely get by on a V.92 USR dial-up modem, if only the compression were slightly better (56 kbit/s).

What about server:
UL: 255*(28+254*3)*10*8 = 16,116,000 bits per second.
DL: 255 * 2480 bits per second = 632,400 bits per second

Big numbers... But do they matter?

The DL rate is the same as the DL rate of the P2P model.
The upload rate, however, is much higher; hosting it comfortably takes roughly a 20 Mbit connection.

Now let's see what we would need in each case:
P2P: 255 users with 1 Mbit upload, 1 Mbit download connections

S/C: 254 users with just a little more than dial-up
S/C: one host with 20 Mbit upload, 1 Mbit download

Experience shows that it's easier to find one player with a very good connection and 254 users with really poor connections than it is to find 255 users with decent connections.
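The compressed-rate arithmetic above can be reproduced directly (a sketch under the same assumptions: 28-byte headers, 5-byte payloads compressed to 3 bytes, 255 players, 10 updates per second):

```python
HEADER = 28      # IPv4 + UDP header bytes
PAYLOAD = 3      # 5 bytes of state, compressed ~40% (assumed)
PLAYERS = 255
RATE = 10        # updates per second

def bits_per_sec(bytes_per_update):
    """Convert a per-update byte cost into a sustained bit rate."""
    return bytes_per_update * RATE * 8

peers = PLAYERS - 1
p2p_each_way = bits_per_sec(peers * (HEADER + PAYLOAD))   # per peer, up and down
cs_client_up = bits_per_sec(HEADER + PAYLOAD)             # one packet to the server
cs_client_down = bits_per_sec(HEADER + peers * PAYLOAD)   # one merged packet back
cs_server_up = PLAYERS * cs_client_down
cs_server_down = PLAYERS * cs_client_up

print(p2p_each_way, cs_client_up, cs_client_down, cs_server_up, cs_server_down)
# 629920 2480 63200 16116000 632400
```

The server's total upload (~16 Mbit/s) matches its total download in the P2P model, but it is concentrated on one well-connected machine instead of being demanded of every player.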


And, despite the title, these aren't the biggest cannons. There is also merging of state on the server, removal of redundant messages, area-of-interest management, update prioritization, and so on.


I'm curious: does everyone in this thread have a 1 Mbit upload connection? Mine is 8/2, but various bandwidth-meter statistics say this ranks in the top 10% worldwide. Then again, I can easily get 20/20 for just a bit more, or even 100/100 fiber for the same price.

Is everyone in the same position? Can you scale up your connection to symmetric fiber at next to no cost? Can your 255 friends, too?

[Edited by - Antheus on September 5, 2009 9:22:06 AM]
Quote:
Original post by Andrew F Marcus
Ok, fair enough, quite a few bytes saved there. But a packet is no longer a measure of size, so we need to plug in some real byte counts, like so...

Peer upload cost: 254 * (28+5) bytes = 8,382 bytes
Server upload cost: 255 * 1500 bytes = 382,500 bytes

The difference remains huge.


I think the comparison is meaningless. If you care about client performance, you should compare a client in P2P mode versus a client in C/S mode. Let's assume 50 clients, a UDP header of 28 bytes, and 16 bytes of data per client; the cost per update/frame is:

P2P upload cost per client: 49 * (28 + 16) = 2,156 bytes
P2P download cost per client: 49 * (28 + 16) = 2,156 bytes
P2P global upload bandwidth usage: 50 * (49 * (28 + 16)) = 107,800 bytes
P2P global download bandwidth usage: 50 * (49 * (28 + 16)) = 107,800 bytes

CS upload cost per client: 1 * (28 + 16) = 44 bytes
CS download cost per client: 28 + (49 * 16) = 812 bytes
CS upload cost on server: 50 * (28 + (49 * 16)) = 40,600 bytes
CS download cost on server: 50 * (1 * (28 + 16)) = 2,200 bytes

Now, let's read that again: if you want to compare a client in one mode versus a client in another mode, CS is clearly the winner. If you want to compare global bandwidth usage, CS is *also* clearly the winner.

If you make a biased comparison, like comparing the P2P client to the CS server bandwidth usage, of course you'll arrive at the wrong conclusion, which is that P2P is better for the client... but you didn't compare it with the client in CS mode. That's like comparing apples to oranges.
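Ysaneya's per-update figures can be reproduced mechanically (a sketch using the header and payload sizes assumed in the post):

```python
def per_update_costs(n, header=28, payload=16):
    """Bytes per update for n players, per Ysaneya's assumptions."""
    p2p_client = (n - 1) * (header + payload)    # same for upload and download
    cs_client_up = header + payload              # one packet to the server
    cs_client_down = header + (n - 1) * payload  # one merged state packet back
    cs_server_up = n * cs_client_down
    cs_server_down = n * cs_client_up
    return p2p_client, cs_client_up, cs_client_down, cs_server_up, cs_server_down

print(per_update_costs(50))  # (2156, 44, 812, 40600, 2200)
```

Client-versus-client, CS wins on both directions; only the server pays a large bill.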

I also want to point out, since you seem to be very impressed by that Stickman game, that the optimizations the developer mentions (byte packing, lesser priorities for far away clients, etc.. ) are all equally applicable to CS and would save bandwidth in the same proportions.

Challenge: doing a comparison of client P2P versus client CS, can you find a realistic case where P2P would clearly be the winner?

Y.
Quote:

Ok, fair enough, quite a few bytes saved there. But a packet is no longer a measure of size, so we need to plug in some real byte counts, like so...

Unfortunately, this is totally incorrect. The underlying system uses packets, not raw "bytes". You have to consider how the information sits in packets.
1) The system sends a packet header that must contain data to route the payload. This is overhead that eats bandwidth.
2) Every high-level packet will consist of one or more lower-level packets. That fragmentation leads to overhead and lag.
3) The system will deliver a packet in whole, or not at all. So it doesn't matter how full the packet was; all the data and headers need to be resent. If you have high overhead from your headers, you'll waste a lot of bandwidth resending tiny amounts of data.
Quote:
Original post by Antheus
P2P, both directions: 7,874 bytes * 10 * 8 = 629920 bits per second.
C/S, server upload: 255*(28+254*3)*10*8 = 16,116,000 bits per second.


20 Mbit upload?! Fiber optics? Suppose you have it, but how would you feel about paying for 20 Mbit per second? Besides that, I think you just proved how 255 players is no problem for P2P and average DSL/ADSL. By the way, how long does it take the server to calculate physics and collision for 255 clients?


Quote:
Original post by Ysaneya
P2P upload cost per client: 49 * (28 + 16) = 2,156 bytes
P2P download cost per client: 49 * (28 + 16) = 2,156 bytes
P2P global upload bandwidth usage: 50 * (49 * (28 + 16)) = 107,800 bytes
P2P global download bandwidth usage: 50 * (49 * (28 + 16)) = 107,800 bytes

CS upload cost per client: 1 * (28 + 16) = 44 bytes
CS download cost per client: 28 + (49 * 16) = 812 bytes
CS upload cost on server: 50 * (28 + (49 * 16)) = 40,600 bytes
CS download cost on server: 50 * (1 * (28 + 16)) = 2,200 bytes

Now, let's read that again: if you want to compare client in one mode versus client in another mode, CS is clearly a winner. If you want to compare global bandwidth usage, CS is *also* clearly a winner.


Challenge: doing a comparison client P2P versus client CS, can you find a realistic case where P2P would clearly be the winner ?


I'll take on the challenge with this one. The goal is to be able to play the game, so the real winner is the number that can help you and your 49 friends do just that. Therefore, if you wanted to run the game at 30 FPS it comes down to this:

P2P upload cost per client: 30 * 2,156 bytes = 64,680 bytes/sec
C/S upload cost on server: 30 * 40,600 bytes = 1,218,000 bytes/sec


The winner is everyone who is playing, and the loser is the sucker who is hosting the game. With P2P everyone is a winner. By the way, how long does it take the server to calculate physics and collision for 50 clients?


Quote:

If you make a biased comparison, like comparing the P2P client to the CS server bandwidth usage, of course you'll arrive at the wrong conclusion, which is that P2P is better for the client... but you didn't compare it with the client in CS mode. That's like comparing apples to oranges.


I don't know what to tell you. Can you make your point other than by proving me wrong? I mean, none of your objections makes P2P any less practical. You are complaining about bandwidth, but please realize there is a point where a certain number of players can only be supported with P2P, given common bandwidth limits. To illustrate: if the maximum number of players in an average Xbox 360 game over average ADSL is 16, then P2P would be able to host 32 or more. So, don't forget, with P2P everyone is a winner.

[Edited by - Andrew F Marcus on September 5, 2009 1:06:11 PM]
Quote:
Original post by KulSeran
Quote:

Ok, fair enough, quite a few bytes saved there. But packet is not a measure of size anymore and we need to plug in some real bytes, like so...

Unfortunately, this is totally incorrect. The underlying system uses packets, not raw "bytes". You have to consider how the information sits in packets.
1) The system sends a packet header that must contain data to route the payload. This is overhead that eats bandwidth.
2) Every high-level packet will consist of one or more lower-level packets. That fragmentation leads to overhead and lag.
3) The system will deliver a packet in whole, or not at all. So it doesn't matter how full the packet was; all the data and headers need to be resent. If you have high overhead from your headers, you'll waste a lot of bandwidth resending tiny amounts of data.


Ok, but is that argument for or against P2P?

I agree the overhead makes it less efficient for smaller packets, but that still does not make a difference. Say in a 255-player game one peer has to resend (28+5) bytes, which is very wasteful and inefficient; but when the server needs to resend (28+1275) bytes, it's still more. No matter how efficient, it's still more; even when you compress, it's always more with a server.
Quote:
abaraba says:
Peer upload: 255 packets
Server upload: 256*256 = 65536 packets


Incorrect.

Peer upload: 255 packets
Client upload: 1 packet
Integrated server upload: 255 packets
Dedicated server upload: 256 packets

Why exactly do you think the server must upload 65536 packets?

Why would anybody use client/server if this was true?
Quote:
abaraba says
I don't know what to tell you. Can you make your point other than by proving me wrong? I mean, none of your objections makes P2P any less practical. You are complaining about bandwidth, but please realize there is a point where a certain number of players can only be supported with P2P, given common bandwidth limits. To illustrate: if the maximum number of players in an average Xbox 360 game over average ADSL is 16, then P2P would be able to host 32 or more. So, don't forget, with P2P everyone is a winner.


To be fair, mate, you do make it pretty difficult to make a point without proving you wrong.

If you would like people to stop proving you wrong, perhaps you should stop posting incorrect statements?

Quote:
Original post by Andrew F Marcus
Quote:

Challenge: doing a comparison client P2P versus client CS, can you find a realistic case where P2P would clearly be the winner ?


I'll take on the challenge with this one. The goal is to be able to play the game, so the real winner is the number that can help you and your 49 friends do just that. Therefore, if you wanted to run the game at 30 FPS it comes down to this:

P2P upload cost per client: 30 * 2,156 bytes = 64,680 bytes/sec
C/S upload cost on server: 30 * 40,600 bytes = 1,218,000 bytes/sec

The winner is everyone who is playing, and the loser is the sucker who is hosting the game. With P2P everyone is a winner.


Correct. There's a choice to make. In C/S (in my particular example), the individual bandwidth per client is 5x lower than for a P2P client, but you need a server with a capacity much higher than a single client's. The good news is that companies, when the player count goes beyond two digits, normally provide such a server; therefore players don't have to worry about this kind of concern, and can play with 5x as many players as in the P2P model.

Quote:
Original post by Andrew F Marcus
By the way, how long does it take for server to calculate physics and collision for 50 clients?


Roughly the same time it takes a peer to calculate physics and collision for 50 clients? The advantage is that a server can be authoritative, while peers trusting each other opens the door to all sorts of hacks and cheats.

Quote:
Original post by Andrew F Marcus
To illustrate, if a maximum number of players on average Xbox 360 game with average ADSL is 16, that means P2P would be able to host 32 or more. So, don't forget, with P2P everyone is a winner.


Unless the server is hosted in a professional datacenter, in which case C/S would be able to host many times more than P2P and elegantly solve the latency/security issues that P2P has.

Y.
It is great that you're actually discussing the replies now. Let me reply to the replies, and I hope you can answer those as well.

Quote:
Packet overhead hardly makes any difference.


That has been shown wrong by the numbers in a previous thread. The overhead of a single UDP packet is 28 bytes. If all you send is client commands, then the command payload is on the order of 4-8 bytes. Thus, packet overhead is on the order of 3.5-7x the size of the actual data.
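The ratio is straightforward to check against the 28-byte header:

```python
HEADER = 28  # IPv4 + UDP header bytes

# For tiny command payloads, the header dominates the wire cost.
for payload in (4, 8):
    ratio = HEADER / payload
    print(f"{payload}-byte command: header is {ratio:g}x the data")
# 4-byte command: header is 7x the data
# 8-byte command: header is 3.5x the data
```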

Quote:
Besides, it can always be argued P2P needs only to pass player input between peers.


Many client/server solutions only send player input between clients and servers, too. There's no savings from going P2P there.

Quote:
Quote:
Do you think that the people who wrote Counter-Strike: Source, or Quake III: Arena, or Unreal Tournament, are idiots? Do you think they do not carefully test a number of different approaches before they settle on what works best? Why do you think they are not using P2P topologies?


Yes, they likely did not think, test, nor care to consider other approaches, most of them did not even write their engines, they copied what others were doing.


That has not been my experience from actually talking to, and in some cases working with, those people. They are each very good engineers who carefully consider all the possible approaches and choose the one that works best in practice. This can be easily verified by looking at the different network architectures of the engines: they each use a very different approach, so saying "they didn't write their own" or that "they were copying what everyone else was doing" is clearly false. Because you can examine the implementation details in each of those cases, your statement of opinion about how those engines were engineered is provably false.

Quote:
Quote:
Anyway, the proof is simple: Just write a game that is P2P and works much better than current games based on Source, Unreal, Quake etc engines.


I wrote some even more incredible stuff, unfortunately no one cares and I am even giving it all away for free.


If it's very incredible, and you give it away for free, then clearly people will use it. If nobody is interested, then it's more likely that they do not share your opinion about how incredible it is.

Do you have a download link where people can make up their own minds about the incredibleness of your contribution?


Quote:
Quote:

I'll leave you with another question: We've all heard about MAG (Massive Actiongame), right? Do you think they use client/server, or peer-to-peer, for their 256-player matches? And why?


I don't know. They are being silly?


With dozens of millions on the line, and a studio of 150-200 people working for years on a game, do you think "being silly" is the reason why they're accomplishing something that nobody else has done before?

AFAICT, they're using client/server, and they're using user-input passing, and they're doing 256 simultaneous players. I suggest you try replicating that with P2P and see how far you get.


Quote:
Original post by Antheus

I reject your reality, and substitute my own.

Mythbusters :D

My current reality is that P2P allows a higher maximum number of players than client/server does when we don't use dedicated servers:

players = 16, updates per sec = 30

P2P client upload: 15 * (28+16) * 30 = 19,800 bytes/sec
Server upload: 15 * (28 + 16*15) * 30 = 120,600 bytes/sec

players = 32, updates per sec = 30

P2P client upload: 31 * (28+16) * 30 = 40,920 bytes/sec
Server upload: 31 * (28 + 16*31) * 30 = 487,320 bytes/sec

players = 64, updates per sec = 30

P2P client upload: 63 * (28+16) * 30 = 83,160 bytes/sec
Server upload: 63 * (28 + 16*63) * 30 = 1,958,040 bytes/sec

players = 128, updates per sec = 30

P2P client upload: 127 * (28+16) * 30 = 167,640 bytes/sec
Server upload: 127 * (28 + 16*127) * 30 = 7,848,600 bytes/sec

With my current connection, 32 players already uses almost all of my upload bandwidth; yet if I were to host a server, I wouldn't even be able to host a game with 16 players.
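These figures come straight from the same per-update formulas used earlier in the thread; a quick sketch to reproduce them (assuming a 28-byte header, 16 bytes of state per player, 30 updates per second, and a player-hosted server). Note the 16-player P2P figure works out to 19,800 bytes/sec:

```python
HEADER, PAYLOAD, RATE = 28, 16, 30  # header bytes, state bytes per player, updates/sec

for players in (16, 32, 64, 128):
    peers = players - 1
    # P2P: each peer sends its own state to every other peer.
    p2p_client = peers * (HEADER + PAYLOAD) * RATE
    # Player-hosted server: one merged packet to each of the other players.
    server_up = peers * (HEADER + PAYLOAD * peers) * RATE
    print(players, p2p_client, server_up)
# 16 19800 120600
# 32 40920 487320
# 64 83160 1958040
# 128 167640 7848600
```

The P2P peer's upload grows linearly with the player count, while the host's upload grows quadratically, which is exactly the trade-off the post describes.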

Quote:
Original post by oliii
P2P is a pain in the ass. The extra bandwidth requirements and update rates are trivial problems compared to host migration, achieving consistency (with exotic NAT router settings and all), packet forwarding, etc...


This probably is the most important point of this entire discussion.
P2P really is a huge pain due to the very wide use of NAT (and its exotic differences between implementations).

Demigod is actually a great example here, as the developers have been really open about their network implementation and the many issues they encountered using P2P.
Ultimately they ended up creating several workarounds involving proxies (which leads to a client-server architecture between individual peers) to combat this problem when everything else fails.

In the end it did work out for them, but it came at the cost of significant effort.
The reason for using a client/server approach often isn't related to bandwidth at all; security and complexity are huge factors in these decisions.


Back to abaraba's comeback: I actually remember his initial rampage, and it has provided me with plenty of entertainment in the past. I'm glad to see he hasn't lost his ways and still puts up one hell of a show.
I can't help but think he's just faking it. I can only hope, anyway.
Quote:
Back to abaraba's comeback: I actually remember his initial rampage, and it has provided me with plenty of entertainment in the past. I'm glad to see he hasn't lost his ways and still puts up one hell of a show.
I can't help but think he's just faking it. I can only hope, anyway.


Dude, if he's faking it I will personally nominate him for an Academy Award.

According to Downey Jr., he may have blown his chances, though:

Quote:
Stiller: "It's what we do, right?"
Downey: "Everybody knows you never do a full retard."
Stiller: "What do you mean?"
Downey: "Check it out. Dustin Hoffman, 'Rain Man,' look retarded, act retarded, not retarded. Count toothpicks to your cards. Autistic, sure. Not retarded. You know Tom Hanks, 'Forrest Gump.' Slow, yes. Retarded, maybe. Braces on his legs. But he charmed the pants off Nixon and won a ping-pong competition. That ain't retarded. You went full retard, man. Never go full retard."


Zing! :)
Quote:
Original post by Jeonjr
My current reality is that P2P allows a higher maximum number of players than client/server does when we don't use dedicated servers.

With my current connection, 32 players already uses almost all of my upload bandwidth; if I were to host a server, I wouldn't even be able to host a game with 16 players.


Exactly. If Age of Empires had not used P2P, we would never have been able to have 8 players in a game on a 28.8 modem. If there were no P2P, there would be no Stickman with 50 players in one free game.



Quote:
Original post by hplus0603
That has been shown wrong by numbers in a previous thread. The overhead of a single UDP packet is 28 bytes. If all you send is client commands, then the commands data packet is on the order of 4-8 bytes. Thus, packet overhead is on the order of 3.5-7x as important as actual data.


Ok, it matters very much. It makes overall bandwidth usage much lower for the C/S model, but P2P can still support more players given common bandwidth limits, when there is no dedicated server.


Quote:
Original post by hplus0603
That has not been my experience from actually talking to and in some cases working with those people. They each are very good engineers, carefully considering all the possible approaches, and choosing the approach that works best in practice.


How do you explain that they researched it when there is not a single paper, article, or any kind of written assessment or analysis to show, or even suggest, that P2P would not be suitable or would be less practical for whatever situation they required? There is no such document. All the research, theory, and practice goes to show that P2P is very useful, practical, and heavily underrated. What would Quake or Counter-Strike lose if they offered P2P at least as an option, and what would they gain?



Quote:
Original post by hplus0603
With dozens of millions on the line, and a studio of 150-200 people working for years on a game, do you think "being silly" is the reason why they're accomplishing something that nobody else has done before?

AFAICT, they're using client/server, and they're using user-input passing, and they're doing 256 simultaneous players. I suggest you try replicating that with P2P and see how far you get.


No problem, we already have it. Stickman uses 5 KBps of upload with 50 players, so it would need about 25 KBps for 250 players. It scales as easily as that. Now, you tell me: how much would they save if they used a P2P approach instead, and how many more players could they support? It costs less, yet gives you more.



Quote:
Original post by Ysaneya
Correct. There's a choice to make. In C/S (in my particular example), individual bandwidth per client is five times lower than for a P2P client. But you need a server with a capacity much higher than a single client's.

The good news is that companies, when speaking of player counts beyond two digits, normally provide such a server; therefore players don't have to worry about this kind of concern, and can play with five times as many players as in the P2P model.


This is specifically about games that do not have a dedicated server, games whose dedicated servers have been shut down, and all those games that simply never have a chance of getting a dedicated server, or that simply want a less expensive approach. Is there a reason not to have P2P as an option?


Quote:
Original post by Ysaneya
Roughly the same time it takes for a peer to calculate physics and collision for 50 clients.. ? The advantage being that a server can be authoritative while peers trusting each other opens the door to all sorts of hacks and cheats..


P2P can distribute the load and have each peer do the physics and collision only for itself. This applies particularly to games without a dedicated server, and to console games, where the security risks are not an issue or are the same as with the C/S approach.
Quote:

Ok, but is that argument for or against P2P?
I agree overhead makes it less efficient for smaller packets, but that still does not make a difference.


I'm not arguing either way. You were saying "everyone, discount X, it doesn't matter." But it does matter. And where packet loss happens matters too. Losing packets on TCP slows you down; losing packets on UDP will go unnoticed. Losing parts of larger packets stalls both ends. More than that, knowing that you may need to resend a packet means that the "server" side has to keep track of all the packets it has sent recently. This means your peer computers need more memory and better hardware to make sure the extra network stack doesn't bog them down. And it CAN bog them down: ever notice how many resources uTorrent uses when you get a good download going?
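The bookkeeping being described, keeping recently sent packets around in case a resend is needed, looks roughly like this; the class name and ack scheme are illustrative, not from any real networking library:

```python
# Reliable-over-UDP senders must buffer every unacknowledged packet
# so it can be resent; this buffer is the memory cost mentioned above.

class SendBuffer:
    def __init__(self):
        self.unacked = {}  # sequence number -> payload

    def send(self, seq, payload):
        self.unacked[seq] = payload  # keep a copy until acked
        return payload               # (would go on the wire here)

    def on_ack(self, seq):
        self.unacked.pop(seq, None)  # safe to forget now

    def resend_all(self):
        # On a timeout, everything still unacked goes out again.
        return sorted(self.unacked.items())

buf = SendBuffer()
buf.send(1, b"input-1")
buf.send(2, b"input-2")
buf.on_ack(1)
print(buf.resend_all())  # [(2, b'input-2')]
```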

Quote:

Quote:

Original post by Ysaneya
Roughly the same time it takes for a peer to calculate physics and collision for 50 clients.. ? The advantage being that a server can be authoritative while peers trusting each other opens the door to all sorts of hacks and cheats..



P2P can distribute the load and have each peer do the physics and collision only for itself. This applies particularly to games without a dedicated server, and to console games, where the security risks are not an issue or are the same as with the C/S approach.

Refer to my other post (here). Your logic is flawed. As soon as your game becomes complex enough that it involves graphics, and an unknown end-client computer spec, the server wins the compute-time battle every time.

Quote:

This is specifically about games that do not have a dedicated server, games whose dedicated servers have been shut down, and all those games that simply never have a chance of getting a dedicated server, or that simply want a less expensive approach. Is there a reason not to have P2P as an option?

Many games release dedicated servers, and as time goes by everyone's specs increase. If the company shuts down its dedicated servers, you put the dedicated server up on a hosting site OR on a good home PC and run with it. Times change things; you have to remember that.

Quote:

with my current internet 32 players is already using almost all my upload bandwidth, however if I were to host a server I wouldn't even be able to host a game with 16 players

Ok, so I'm FINALLY beginning to understand what "BETTER" means. It is nice when people explain their ambiguous words; "better" could have meant a LOT of things. In the case of "I, as a funded company, can choose to host servers and have gamers be clients," the client/server approach wins every time. In the case of "I'm releasing this to the people and can't afford a server," P2P may look more attractive.

Economy of scale and all, though. You are maxing out your bandwidth at 32 players, so you still didn't gain much from P2P. Consider also that consumer lines will always be smaller than business lines, so for $60+ a month you COULD be hosting a 128- or 256-player game on a server that has 10 or 100 times the upload of any consumer line. If you optimize for that case, you can ensure good QoS for your customers.

Besides, how often do you REALLY set up games that need more than 4 people? 8? 16? 32? 256? 32768? When I do LAN parties we are usually lucky to get 8 to 16 people, AND we are all in one place on 100baseT lines. You seem to be arguing a case that doesn't often exist anyway. You get X people, each pitches in $5, you all get a dedicated server for the month, and you all play happily ever after, without having to worry about firewalls, packet shapers, NATs, etc. And no player needs to buy cable/fiber internet just to play.

And on THAT note:
Quote:

Does the fact that Age of Empires implements P2P on a 28.8 modem with perfect synchronization, determinism, and fluid animation not make all the objections against P2P invalid, especially if this is intended for console systems and games without a dedicated server, where the security risks are just the same as with a client hosting the game?

Do YOU remember how many "CABLE ONLY!!!111eleven" games people hosted? The bandwidth usage was still beyond playable if you got more than 2 people and a decent unit cap. Just because it "worked" doesn't mean it was the end-all best solution out there.

And THAT reminds me of the physics thing again. Have you played Supreme Commander? It is P2P. Everyone hosts "QuadCore ONLY!" games because it is way more physics than the clients can handle while also doing all the pretty graphics, the networking, and the task switching to my VoIP app.
Quote:
abaraba says
How can you claim they researched it when there is not a single paper, article, or written assessment of any kind to show, or even suggest, that P2P would be unsuitable or less practical for whatever situation they required? No such document exists. All the research, theory, and practice only goes to prove that P2P is very useful, practical, and heavily underrated.


This statement is incorrect. There are many documents discussing the Quake, Counter-Strike, and Unreal networking models. Each of them discusses why client/server is a more attractive choice than peer-to-peer.

Quote:
What would Quake or Counter-Strike lose if they provided P2P at least as an option, and what would they gain?


Their overall bandwidth would be higher, they would be massively susceptible to cheating, and the players would be lagged by the most-lagged player when using a lockstep deterministic P2P networking model like the one used in Age of Empires.

Quote:
No problem, we already have it. Stickman uses 5KBps of upload with 50 players, therefore it would need about 25KBps for 250 players. It scales as simply as that.


Yes, and... ?

It is well known that per-peer bandwidth scales at O(n). So what?

The reason we don't care is that there is something better: with client/server, the client bandwidth is O(1). It does not increase as the number of players increases; it stays the same.
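The scaling difference is easy to make concrete; a sketch assuming each player's input stream costs a fixed number of bytes per second per recipient (the function names and the 100 B/s rate are illustrative):

```python
# Per-host upload cost as player count n grows, assuming each player's
# input stream costs `rate` bytes/sec per recipient.

def p2p_upload_per_peer(n, rate):
    # Each peer sends its own input to every other peer: O(n).
    return (n - 1) * rate

def cs_upload_per_client(rate):
    # Each client sends its input to the server only: O(1).
    return rate

def cs_upload_server(n, rate):
    # The server relays everyone's input to everyone else,
    # so the cost is concentrated there instead.
    return n * (n - 1) * rate

for n in (8, 50, 250):
    print(n, p2p_upload_per_peer(n, 100), cs_upload_per_client(100))
```

The client's upload stays flat under C/S; the O(n) growth doesn't disappear, it just moves onto the server, which is exactly the trade-off being argued about.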

Quote:
Now, you tell me how much they would save if they used the P2P approach instead, and how many more players they could support? It costs less, yet gives you more.


They would save nothing. Their product would not work, and they would lose their jobs.

Quote:
P2P can distribute the load and have each peer do the physics and collision only for itself. This applies particularly to games without a dedicated server, and to console games, where the security risks are not an issue or are the same as with the C/S approach.


Assuming you go with a deterministic lockstep, input-passing P2P sync, you cannot perform the physics and collision just for yourself. You must, by definition, run the exact same code on all peers. I am surprised that you don't seem to understand this, since the Age of Empires paper is quite explicit in describing that they use a P2P lockstep deterministic network model, like pretty much all other RTS games.
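A deterministic lockstep loop of the kind the Age of Empires paper describes can be sketched in a few lines; the tick function and input containers here are illustrative, not from any shipped engine:

```python
# Minimal lockstep: no peer may simulate tick T until it holds
# inputs from *all* peers for tick T. One slow peer stalls everyone.

def lockstep_run(inputs_per_tick, num_peers, ticks):
    """inputs_per_tick[t] is a dict peer_id -> input for tick t."""
    state = 0
    for t in range(ticks):
        inputs = inputs_per_tick[t]
        if len(inputs) < num_peers:
            # Missing input from someone: freeze until it arrives.
            return state, t  # stalled at tick t
        # Every peer applies the same inputs in the same order,
        # so the simulation stays identical everywhere.
        for peer in sorted(inputs):
            state += inputs[peer]
    return state, ticks

# Three peers, but peer 2's input for tick 1 never arrived:
full = {0: 1, 1: 1, 2: 1}
partial = {0: 1, 1: 1}
print(lockstep_run([full, partial, full], 3, 3))  # (3, 1): stalled at tick 1
```

Note that every peer runs the whole loop over everyone's inputs; that is why "physics only for yourself" is not an option under this model.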

[Edited by - Gaffer on September 6, 2009 2:01:55 AM]
Comparing bandwidth utilization between P2P and client/server isn't always that relevant either. These models often also differ in what data is sent, resulting in very different bandwidth utilization.

For P2P it's not uncommon to share only input between all clients and have the clients simulate the game deterministically, freezing the game if one of the clients stalls. This results in very low bandwidth utilization, but comes at the cost of leaving several potential attack vectors open to cheaters.
Many RTS games don't owe their low bandwidth utilization to P2P; rather, they get it from using an input-sharing model.
The input-sharing model is less common in client/server designs because latency is a much bigger issue there, so the choice is often driven by latency, not bandwidth utilization.

In a client/server model it's much more common to see updated state flow from the server to the clients rather than input.
This model can deal with updates arriving at variable times and use interpolation/extrapolation to keep up the appearance of a smooth simulation.
Here the simulation is often not deterministic among different clients, but close enough.
Using clever prediction, these simulations also very often give a real-time feel, something that often isn't the case in input-sharing P2P models.
In fact, in an input-sharing model it's not uncommon to intentionally introduce artificial lag to absorb network fluctuations. That makes it a lot less suitable for an FPS, for example.
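The interpolation half of that state-update model can be sketched as follows; the snapshot layout and the idea of rendering slightly in the past are the standard technique, but the exact numbers here are illustrative:

```python
# Render entities slightly in the past and interpolate between the two
# server snapshots that bracket the chosen render time.

def interpolate(snapshots, render_time):
    """snapshots: list of (timestamp, position), sorted by timestamp."""
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    # No bracketing pair yet: fall back to the latest known position.
    return snapshots[-1][1]

# Server snapshots every 100 ms; we render ~50 ms behind the newest one.
snaps = [(0.0, 0.0), (0.1, 10.0), (0.2, 30.0)]
print(interpolate(snaps, 0.15))  # 20.0 (halfway between 10.0 and 30.0)
```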

These differences in networking models mean the decision of P2P vs. client/server is only partially about bandwidth utilization. Genre is a very defining factor in this decision, and certain models are much better suited to a specific genre.

Abaraba is really naive in thinking there's a single holy grail, while his arguments, even if they were correct, only hold for a tiny part of the equation.

Finally, there's also a commercial factor involved.
With a client/server setup you can effectively release only a client and keep all your server software in-house to profit from.
This is very common in MMOs, for example, but it's also seen in free-to-play shooters whose authors intend to profit from renting out game servers instead.

Basically, P2P vs. client/server is a very silly discussion. There are many factors involved that will quite easily push a developer toward one of the two solutions.
Bandwidth utilization often isn't one of them; latency and genre are.
Quote:
Original post by KulSeran
Many games release dedicated servers, and as time goes by everyone's specs increase. If the company shuts down its dedicated servers, you put the dedicated server up on a hosting site OR on a good home PC and run with it. Times change things; you have to remember that.


How many games have dedicated servers, 2%? Regardless of the era, P2P will always make it more practical for average users to get together in greater numbers.


Quote:
Original post by Gaffer
This statement is incorrect. There are many documents discussing the Quake, Counter-Strike, and Unreal networking models. Each of them discusses why client/server is a more attractive choice than peer-to-peer.


There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?
Quote:
Original post by Andrew F Marcus
There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?


Read up on it.
Many shooters (including Unreal and Quake) use several tricks to give the user the appearance of real-time behavior.
These tricks only work in a client/server model.

Basically, the client predicts the player's actions ahead of time and only corrects when the local result differs too much from the result provided by the server.
This trick requires an authoritative server that makes the final decision, as the various clients are all running slightly out of sync.
It works well in most cases, but does result in rubber banding or delayed deaths on laggy connections. This is considered an acceptable loss, as most of the time the player feels he is receiving instant feedback.
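That predict-then-correct loop can be sketched like this; the snap threshold, 1D positions, and input encoding are illustrative assumptions, not any engine's actual API:

```python
# Client-side prediction with server reconciliation: apply inputs
# locally at once, then snap back only if the authoritative result
# diverges too far from the local one.

SNAP_THRESHOLD = 0.5  # illustrative divergence limit

def reconcile(predicted_pos, server_pos, pending_inputs, speed=1.0):
    """Accept the server position, then replay unacknowledged inputs."""
    if abs(predicted_pos - server_pos) <= SNAP_THRESHOLD:
        return predicted_pos  # close enough: keep the smooth local view
    # Too far off: rewind to the server's state and re-apply the
    # inputs the server hasn't processed yet.
    pos = server_pos
    for move in pending_inputs:  # e.g. +1 for forward, -1 for back
        pos += move * speed
    return pos

print(reconcile(10.2, 10.0, [1, 1]))  # 10.2 (within threshold, no snap)
print(reconcile(13.0, 10.0, [1, 1]))  # 12.0 (corrected, then replayed)
```

The visible "rubber banding" is exactly the second case: the local position jumps to the server-derived one.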

In a P2P model you're required to run the game deterministically (or face a really, really huge mess), which means you can't hide latency with the same trick.
P2P games often can't simulate the next step without having received input from all peers, so a single client can freeze all the other clients.
This is considered acceptable in RTS games, but completely unacceptable in FPS games.

Of course I expect you to either completely ignore this post or nitpick irrelevant details and ignore the main subject in the process.

[Edited by - Azgur on September 6, 2009 4:19:31 AM]
Quote:
Original post by Azgur
Using clever prediction, these simulations also very often give a real-time feel, something that often isn't the case in input-sharing P2P models.

In fact, in an input-sharing model it's not uncommon to intentionally introduce artificial lag to absorb network fluctuations. That makes it a lot less suitable for an FPS, for example.


You are describing two identical techniques, equally implementable in either case. A server running 100ms in the past is the same thing as issuing commands 100ms into the future. The only difference is whether you smooth over this lag with client-side prediction or simply let the user experience the latency.


Quote:
Original post by Azgur
Abaraba is really naive in thinking there's a single holy grail, while his arguments, even if they were correct, only hold for a tiny part of the equation.


The point is very simple and doesn't involve any grails: "Stickman" is possible, 5KBps of upload with 50 players over P2P. That's all. It's a fact, so it's not incorrect. Predicting the possibility of such a game while everyone said it was impossible does not make it naive, but visionary.



Quote:
Original post by Azgur
Quote:
Original post by Andrew F Marcus
There is no such document and I challenge you to prove me wrong. Do you accept the challenge, mate?


Read up on it.


Read what? There is no such document.


Quote:
Original post by Azgur
Many shooters (including Unreal and Quake) use several tricks to give the user the appearance of real-time behavior.
These tricks only work in a client/server model.


Not true; all the same tricks work just the same.


Quote:
Original post by Azgur
Basically, the client predicts the player's actions ahead of time and only corrects when the local result differs too much from the result provided by the server.
This trick requires an authoritative server that makes the final decision, as the various clients are all running slightly out of sync.


Why do you imagine peers cannot predict actions ahead? It's nothing more than velocity extrapolation. An authoritative server is not necessary to make interaction fair.
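The extrapolation being described here is plain dead reckoning; a minimal sketch (the function name and 1D state are illustrative):

```python
# Dead reckoning: project a remote entity forward from its last known
# state when no fresh update has arrived yet.

def extrapolate(last_pos, last_vel, last_time, now):
    """Linear extrapolation from the most recent (position, velocity) update."""
    dt = now - last_time
    return last_pos + last_vel * dt

# Last update: at t=1.0s the entity was at x=5.0 moving at 2.0 units/s.
print(extrapolate(5.0, 2.0, 1.0, 1.25))  # 5.5
```

Any host, peer or client, can run this; what it cannot do by itself is settle disagreements, which is the part the authoritative-server argument is actually about.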


Quote:
Original post by Azgur
In a P2P model you're required to run the game deterministically (or face a really, really huge mess), which means you can't hide latency with the same trick.


How did you come up with that? If you pass only input, then you likely need determinism; but if you pass all the state, then you don't, same as with a server approach. Go play Stickman.


Quote:
Original post by Azgur
P2P games often can't simulate the next step without having received input from all peers, so a single client can freeze all the other clients.
This is considered acceptable in RTS games, but completely unacceptable in FPS games.


False. It is not about P2P; it's about passing input and determinism. If you wanted to make Age of Empires with a C/S model, you would need to do exactly the same thing.


Quote:
Original post by Azgur
Of course I expect you to either completely ignore this post or nitpick irrelevant details and ignore the main subject in the process.


I expect you to give me that link you told me to read up on, so I can read it.