
Where Is The Line Drawn For Too Many Clients?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

15 replies to this topic

#1 Toothpix   Crossbones+   -  Reputation: 810

Posted 13 September 2012 - 05:01 PM

So, despite the fact that a single person could never implement this, I have a question. I recently thought of an idea (one I'm not seriously pursuing) that sparked an interesting concept in my mind. Would a 40-person co-op RPG with 20+ AI characters be too much for a server? I ask because on one hand I want to say no, it's not possible, since even games like TF2 can't do that. On the other hand, I wonder how games like WoW host thousands of players on a single server (realm). I'm not asking about feasibility, just possibility.

C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.



#2 frob   Moderators   -  Reputation: 19633

Posted 13 September 2012 - 07:54 PM

Depends on the game.

Some games are very compute intensive. Others require nearly no computation.

Some games are very communication intensive. Others require very little communication.

Some games have massive memory requirements for the servers. Others are minimal.


For the major online games, each "server" is actually a cluster of machines, potentially anywhere from two to hundreds of machines on the backend supporting that one front-end interface. There are authentication servers, lobby servers, world servers, money-handling servers, and more.
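As a toy illustration of that split (all service names below are hypothetical, not from any real game), a front-end gateway might do little more than route each request type to the right backend pool:

```python
# Toy sketch of a front-end gateway routing requests to specialized
# backend pools. All service names here are made up for illustration.
import zlib

BACKENDS = {
    "login":     ["auth-1", "auth-2"],
    "matchmake": ["lobby-1"],
    "move":      ["world-1", "world-2", "world-3"],
    "purchase":  ["billing-1"],
}

def route(request_type, session_id):
    """Pick a backend for this request, pinning each session to one
    machine via a stable hash of its id."""
    pool = BACKENDS[request_type]
    return pool[zlib.crc32(session_id.encode()) % len(pool)]

print(route("matchmake", "player-42"))  # lobby-1 (only one lobby server)
```

The stable hash means a given session keeps hitting the same world server, which is what lets each backend hold per-player state in memory.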

Edited by frob, 16 September 2012 - 08:20 PM.

Check out my personal indie blog at bryanwagstaff.com.

#3 Firestryke31   Members   -  Reputation: 350

Posted 13 September 2012 - 08:02 PM

Games like WoW usually split the players of each realm among several physical servers (i.e. Emerald Dream might have 4 physical servers), and each server then handles its own subset of the connected players. The original Planetside had a limit of around 200 players per faction (for a total of 600-ish) on an in-game continent (each of which was probably its own server), and that was a fast-action FPS from back in the mid-2000s. The AI might cause some concern, but if you implement it intelligently you should be fine.

I don't know how TF2 is typically hosted, but if it's either a pick-a-client-and-they're-host or a downloadable server thing then it has to be able to handle a wide variety of hardware capabilities. If you're controlling the servers yourself, 40 players and 20+ AI should be plenty doable as long as you design it intelligently.

#4 hplus0603   Moderators   -  Reputation: 5069

Posted 14 September 2012 - 12:00 AM

I believe that some large-scale FPSes with physics, such as Battlefield 3 or APB, run up to 128 players per server.
Also, even EverQuest could have 150 players, and another 100 NPCs, on the same physical server -- and this was in 1998!

40 players and 20 NPCs on a single server is not a problem just by itself. It mainly depends on how advanced you want the physics to be, and how advanced you want the NPC AI to be.

On another note, really large games typically split all users across a large number of physics servers (or server processes) and move players between these processes as they move through the world.
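A minimal sketch of that handoff idea (hypothetical names, in Python): pick the owning process from the player's world coordinates, and reassign when a boundary is crossed.

```python
# Sketch: map world positions to zone-server processes and hand players
# off when they cross a zone boundary. Process names are hypothetical.
ZONE_SIZE = 1000  # world units per zone along each axis

def zone_for(x, y):
    """Return the (col, row) zone key that owns this position."""
    return (int(x // ZONE_SIZE), int(y // ZONE_SIZE))

# Each zone key maps to the server process responsible for it.
zone_servers = {(0, 0): "world-proc-a", (1, 0): "world-proc-b"}

def handoff_if_needed(player, x, y):
    """Reassign the player to a new zone process if they crossed over."""
    new_zone = zone_for(x, y)
    if new_zone != player["zone"]:
        player["zone"] = new_zone
        player["server"] = zone_servers.get(new_zone, "world-proc-default")
    return player

player = {"zone": (0, 0), "server": "world-proc-a"}
handoff_if_needed(player, 1500, 200)   # crossed into zone (1, 0)
print(player["server"])  # world-proc-b
```

Real games add hysteresis and state migration around the boundary, but the core routing decision is this simple.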

enum Bool { True, False, FileNotFound };

#5 flodihn   Members   -  Reputation: 235

Posted 16 September 2012 - 01:39 PM

I made a server (on my own) that handles 12,000 simulated players, connected over TCP, each sending on average one random command every 5 seconds, with each command relayed to about 70 other players.
There was no AI and no heavy math on either the client or server side; the server did state updates for about 1/3 of all commands.
Response time was between 10 and 30 ms (the clients were connecting from another machine on the local network).

You can see the results in more detail here:
http://www.next-gen....7_39_report.txt

You can download the source of the server here if you want to play around with it, the license is BSD:
http://www.next-gen....e&id=2&Itemid=3
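A load test in the same spirit can be sketched in a few lines (hypothetical code, not flodihn's actual implementation): many simulated TCP clients, each sending one small command at an interval to a shared server that acknowledges them.

```python
# Sketch of a TCP load-test swarm. The interval is shortened so the
# example finishes quickly; a real soak test would use ~5 seconds.
import asyncio
import random

async def handle(reader, writer):
    # Trivial server: acknowledge every newline-terminated command.
    while await reader.readline():
        writer.write(b"OK\n")
        await writer.drain()
    writer.close()

async def simulated_client(port, commands=3, interval=0.01):
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    acks = 0
    for _ in range(commands):
        writer.write(b"MOVE %d\n" % random.randint(0, 359))
        await writer.drain()
        if await reader.readline() == b"OK\n":
            acks += 1
        await asyncio.sleep(interval)
    writer.close()
    return acks

async def main(n_clients=50):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    totals = await asyncio.gather(
        *(simulated_client(port) for _ in range(n_clients)))
    server.close()
    await server.wait_closed()
    return sum(totals)

print(asyncio.run(main()))  # 150: 50 clients x 3 acknowledged commands
```

Scaling the client count up (and running the swarm from a separate machine, as flodihn did) is what turns this into a meaningful benchmark.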

Edited by flodihn, 18 September 2012 - 01:33 PM.


#6 flodihn   Members   -  Reputation: 235

Posted 16 September 2012 - 01:52 PM

Also, something more relevant for an FPS server is this:
http://muchdifferent.com/1000PlayerFPS/

#7 Toothpix   Crossbones+   -  Reputation: 810

Posted 16 September 2012 - 05:28 PM

@flodihn Wow, thanks. That is really awesome. I will definitely take a good look at that. Your response was such a help!

C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.


#8 jefferytitan   Crossbones+   -  Reputation: 1929

Posted 16 September 2012 - 05:32 PM

Also recently in the news:
Just Cause 2 Multiplayer Beta

1,800 players on one server! I don't know how on Earth they achieved that, it sounds too good to be true!

#9 hplus0603   Moderators   -  Reputation: 5069

Posted 17 September 2012 - 10:25 AM

1,800 players on one server! I don't know how on Earth they achieved that, it sounds too good to be true!


Ten years ago, Planetside had 500 players per server in an FPS. Planetside 2 is coming out soon, I hear.
Five years ago, Sony/Zipper had MAG, with 256 players in a game (on a console.)
If your goal is massive player counts, you simply design the gameplay, simulation, and networking to optimize for that goal. This means there are certain kinds of things you can't do in your game, because it would break the player count target.
enum Bool { True, False, FileNotFound };

#10 jefferytitan   Crossbones+   -  Reputation: 1929

Posted 18 September 2012 - 06:56 PM


1,800 players on one server! I don't know how on Earth they achieved that, it sounds too good to be true!


Ten years ago, Planetside had 500 players per server in an FPS. Planetside 2 is coming out soon, I hear.
Five years ago, Sony/Zipper had MAG, with 256 players in a game (on a console.)
If your goal is massive player counts, you simply design the gameplay, simulation, and networking to optimize for that goal. This means there are certain kinds of things you can't do in your game, because it would break the player count target.


That's my point: the gameplay etc. was NOT designed for massively multiplayer. AFAIK, multiplayer was just a mod applied to an existing single-player game, so the odds against the existing design being compatible with what the mod achieved seem pretty staggering to me.

Out of interest, what kind of features are you thinking of that rule out a large player count?

#11 hplus0603   Moderators   -  Reputation: 5069

Posted 18 September 2012 - 07:55 PM

what kind of features are you thinking of that rule out a large player count?


Dwarf Fortress level AI for the NPCs?
Radars that track the position of all other players in real time?
Game design that creates very dense areas where all players congregate?
Any other feature that requires N-squared examination of all player pairs?
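For intuition, here is a small hypothetical Python sketch contrasting the naive all-pairs check with the usual fix, a spatial grid that only compares players in neighboring cells:

```python
# Sketch: why N-squared pair checks hurt, and the standard mitigation.
# Hypothetical example, not taken from any particular engine.
from collections import defaultdict
from itertools import combinations

RADIUS = 10  # only players within this range "see" each other

def pairs_naive(players):
    """O(N^2): examine every pair of players."""
    return [(a, b) for a, b in combinations(players, 2)
            if abs(players[a][0] - players[b][0]) <= RADIUS
            and abs(players[a][1] - players[b][1]) <= RADIUS]

def pairs_grid(players):
    """Bucket players into RADIUS-sized cells; compare only nearby cells."""
    grid = defaultdict(list)
    for name, (x, y) in players.items():
        grid[(x // RADIUS, y // RADIUS)].append(name)
    out = set()
    for (cx, cy), names in grid.items():
        nearby = [n for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for n in grid.get((cx + dx, cy + dy), [])]
        for a in names:
            for b in nearby:
                if a < b and abs(players[a][0] - players[b][0]) <= RADIUS \
                         and abs(players[a][1] - players[b][1]) <= RADIUS:
                    out.add((a, b))
    return out

players = {"a": (0, 0), "b": (5, 5), "c": (500, 500)}
print(pairs_naive(players))         # [('a', 'b')]
print(sorted(pairs_grid(players)))  # [('a', 'b')]
```

The grid version does work proportional to local density rather than total player count, which is exactly why dense congregation areas (or global radars) break large player counts.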


enum Bool { True, False, FileNotFound };

#12 riuthamus   Crossbones+   -  Reputation: 4469

Posted 25 September 2012 - 01:01 PM

I don't know, it seems doable. In the 2000s, NWN (a great game, but horribly coded for mass multiplayer) was easily hosting 250 players on the big servers, and it loaded all areas into memory and did some silly things. For an indie developer not using any of the newest technologies, you should easily be able to handle 100 players on a single server. Minecraft (which currently has shit for netcode) can host 75+ players with a decently sized world (5 GB) with little to no lag unless you are doing some seriously massive world changes. That said, with the right code I am almost certain you could triple that.

As for games like TF2, they were not designed for bigger numbers and thus they are limited. It is not that they are not capable; they just didn't want the game played past the current cap. I run, and have hosted for 4 years, one of the top TF2 servers, and we have been hosting it at 32 players ever since the crack came out to do that. We asked them for more player slots on several occasions, as our server could easily have handled 64 players, but they said it would break the concept of the game... and I can see how it would. We had to make several changes to the core systems just to make the 32-player setup work.

Anyway, all in all I think what you are trying to do is possible.

#13 Toothpix   Crossbones+   -  Reputation: 810

Posted 25 September 2012 - 03:49 PM

I'm not sure if this can even be taken seriously, but do you guys think that an Ethernet-connected Raspberry Pi could handle a multiplayer game? I mean, it does have a 700 MHz processor and 256 MB of RAM, and you can add an HDD, so it's not like it's worse than anything they had in the '90s back in the days of EverQuest.

C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.


#14 frob   Moderators   -  Reputation: 19633

Posted 25 September 2012 - 05:07 PM

I'm not sure if this can even be taken seriously, but do you guys think that an Ethernet-connected Raspberry Pi could handle a multiplayer game? I mean, it does have a 700 MHz processor and 256 MB of RAM, and you can add an HDD, so it's not like it's worse than anything they had in the '90s back in the days of EverQuest.

Again, it depends entirely on the game.

A game where the server must run all the physics and other simulations, with heavy bandwidth requirements, where the server must coordinate a large number of clients, or where the server must handle a wide range of non-gameplay tasks like matchmaking, lobby service, authentication, and possibly even financial transactions... that is a situation it would likely struggle with.

A game server where the CPU load and bandwidth are both fairly light, and where memory and performance requirements are well within the system's hardware limits: in that case a well-coded game could potentially handle a thousand or more clients.

For an old-style text-based MUD, the system could probably support tens of thousands of connections, limited by little more than your network bandwidth.

Check out my personal indie blog at bryanwagstaff.com.

#15 hplus0603   Moderators   -  Reputation: 5069

Posted 25 September 2012 - 06:01 PM

I'm not sure if this can even be taken seriously, but do you guys think that an Ethernet-connected Raspberry Pi could handle a multiplayer game? I mean, it does have a 700 MHz processor and 256 MB of RAM, and you can add an HDD, so it's not like it's worse than anything they had in the '90s back in the days of EverQuest.


People have both hosted and played Quake 3 on the RPi.

enum Bool { True, False, FileNotFound };

#16 Khatharr   Crossbones+   -  Reputation: 2918

Posted 26 September 2012 - 07:11 AM

It's an error to think of networking and computational power in terms of 'force'.

There are reasonable limitations to both, but they relate to the specific tasks that each is being asked to perform. Contrary to what may have been said in Congress, the internet is not actually a series of tubes, and data on the line is not a dump truck.

Network libraries do nothing more (hopefully) than send data down the line. If you've not worked with it before, working with a 'normal' TCP connection is something like reading from and writing to a file, except that what you write is what gets read by the machine at the other end of the connection. The main factor here is how well you design the client/server interaction. If you're concerned about how much load you can handle, then focus on designing in such a way that you never send data that doesn't need to be sent.
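To make the file analogy concrete, here is a minimal self-contained Python sketch (it uses `socket.socketpair()` to get two connected endpoints in one process, so it runs without a network):

```python
# Sketch: a TCP-style stream connection really does feel like file I/O.
import socket

server_end, client_end = socket.socketpair()

# "Writing to the file" on one end...
client_end.sendall(b"hello server\n")

# ...is "reading the file" on the other.
data = server_end.recv(1024)
print(data)  # b'hello server\n'

# You can even wrap the socket in an actual file object and use readline():
client_end.sendall(b"second line\n")
with server_end.makefile("rb") as f:
    print(f.readline())  # b'second line\n'

client_end.close()
server_end.close()
```

The only conceptual additions over file I/O are that reads can block waiting for the other side, and that a stream delivers bytes, not messages, so you frame them yourself (here, with newlines).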

In the end the only practical limitation comes from the NIC itself. I think my NIC at home can run something like 8 simultaneous full-duplex TCP connections before it starts getting tetchy. Using UDP can get past that a bit because a single UDP socket can accept datagrams from any number of sources, but you're just shifting work to the CPU in that case because you'll probably have to design a more complex application-level protocol to make up for the loss of TCP's reliability. Even in that case, if the NIC's input buffer gets filled with UDP datagrams you're going to start losing them, so there's still a limit to what the hardware can do.
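A minimal sketch of that UDP property (hypothetical payloads, loopback only): one bound socket receives datagrams from several independent senders, each tagged with its source address.

```python
# Sketch: a single UDP socket accepting datagrams from multiple sources.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = server.getsockname()

senders = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
           for _ in range(3)]
for i, s in enumerate(senders):
    s.sendto(b"update from client %d" % i, addr)

# The one server socket sees all three datagrams; recvfrom() also
# returns each sender's address, which is how you tell clients apart.
seen = set()
for _ in range(3):
    data, source = server.recvfrom(1024)
    seen.add(data)

print(sorted(seen))

for s in senders:
    s.close()
server.close()
```

On loopback this is effectively reliable, but over a real network any of those `recvfrom()` calls could wait forever for a dropped datagram, which is exactly the reliability gap the application-level protocol has to cover.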

Expensive server hardware can, of course, handle a much larger amount of data, but you don't need that hardware in order to know what kind of performance you'd get from it. It's all about the scalability of your design. Like hplus0603 mentioned earlier, N^2 complexity is going to hurt you a lot. You want to design in a way that eliminates redundant work, both for the CPU and in terms of how much data you're putting on the line. Once again, in effective design, brevity is power. The fastest and most bug-free code is the code that doesn't exist.

If you think about it, huge web servers like Google, Yahoo, and MSN handle a ridiculous number of clients at the same time. More impressively, those are TCP connections (since HTTP runs over TCP), so good design can accomplish a lot more than you'd probably think possible.

Edited by Khatharr, 26 September 2012 - 07:15 AM.

void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.



