whiterook6

client/client vs client/host/client


So I've read the FAQ and some other posts, and I've got a question. It seems to me that the arguments against each client connecting to every other client include:

1. O(n^2) connections vs. O(2n) connections (and the related bandwidth)
2. Easier cheating (no central arbitration)

and generally people say that for your run-of-the-mill multiplayer shooter, clients should go through a host. However, if I'm making a game for a course project, and thus won't be releasing it to a large market, then there is very little chance of cheating (2), and if I'm only testing this game on a LAN, then the bandwidth isn't that big a deal (1).

I agree that the server/host should be in charge of some things (such as when a round ends, joining/leaving the game, timing lag, etc.), but should all communications go through the host? In particular, I was considering sending position/orientation updates to players directly, as well as I've-hit-you-you're-dead messages. I would have figured that the reduction in trip time would be more important than efficient bandwidth.

Also, on a related question: is there a limit to the number of sockets I can open? How about opening both a TCP and a UDP socket to the same client? Thanks.

If you don't care about cheating and you'll only play on a LAN... pretty much anything works. Don't worry about reductions in trip time or bandwidth - just do what you find easiest. A server/client setup is easier to reason about, at least in my humble opinion.

I would generally discourage using a P2P networking model for games because it is, in my experience, harder to implement effectively, and the risk of cheating is greater. Having clients exchange position information is a bad idea because an attacker could send fake messages to other clients, rendering him invisible and impossible to hit. The same thing applies to "I killed you" messages. While it might not matter since it's a course project, if I were your teacher I'd give you bonus points for making your game somewhat secure.

The way you describe your game (LAN only, lots of bandwidth available), it sounds like it might be a good idea to apply the networking principles described in the Book of Hook, which covers the excellent Quake III networking model. Carmack took the interesting and unusual approach of not using reliable communication at all, and instead chose to exploit the high bandwidth of local area networks.

Quote:
Original post by whiterook6
Also, on a related question, is there a limit to the number of sockets I can open? How about opening both a TCP and a UDP socket to the same client?


Under Linux there's a limit to the number of file descriptors (sockets) you can have open at any one time. This can be changed through a sysctl value, though. I'm not aware of any such limitation on Windows. However, there is a limit to the number of sockets that can be in a single set of file descriptors (FD_SET), which is used in conjunction with the select() function.

It has been suggested here that TCP and UDP might not interoperate well. Note that this study is old, so I don't know how relevant it is.


Quote:
Original post by whiterook6
I agree that the server/host should be in charge of some things (such as when a round ends, joining/leaving the game, timing lag, etc.), but should all communications go through the host? In particular, I was considering sending position/orientation updates to players directly, as well as I've-hit-you-you're-dead messages. I would have figured that the reduction in trip time would be more important than efficient bandwidth.
It looks like you are confusing the network topology with the role of each machine.

Your game has a virtual network across the Internet. The topology of your network is how the packets move from computer to computer.

If everybody is talking to a central machine and that central hub redirects the packets to their targets, that is a star topology. If every machine can talk with every other machine directly, you have a fully connected mesh. If machines have ad-hoc connectivity and packets can be passed along from machine to machine until they reach their destination, that's a partial mesh.

Exactly how you manage your virtual network is up to you. A partial mesh is the most work but the most flexible in the real world. It allows for limited connectivity and can be written to reduce overall bandwidth requirements. A star is a medium amount of work because you still must look at the headers and redirect as necessary, but it doesn't handle the possibility that one or more machines cannot connect to the hub, and it also places higher bandwidth requirements on the central machine. A fully connected mesh is the least work to use because you have direct communications and it minimizes bandwidth needs, but it is the highest risk in a real-world network environment if any connectivity fails.



Regardless of how your packets flow between machines, any machine can act as the authoritative simulation.

You generally want the authoritative machine to also be the best connected machine in the network. In a star, the best connected machine is the hub. In a mesh or partial mesh, the best connected machine is the one with the most connections and also the highest bandwidth.


Quote:
Also, on a related question, is there a limit to the number of sockets I can open?
Yes, but you are unlikely to reach it. It varies based on the platform. On a *nix box, the number is in /proc/sys/fs/file-max, and it is most likely in the hundreds of thousands, or even in the millions. (My server box says almost 1.6 million.)
Quote:
How about opening both a TCP and a UDP socket to the same client?
Not an issue. They are different protocols. They compete for the same network resources, but otherwise they can live together just fine.

Note: for Windows there are several limits that you can run into. The first limit is a compile-time constant, which is 64 by default. You can redefine it to more, but after that you need to start checking what the system will support dynamically.

Quote:
Original post by stonemetal
Note: for Windows there are several limits that you can run into. The first limit is a compile-time constant, which is 64 by default. You can redefine it to more, but after that you need to start checking what the system will support dynamically.

I believe you are referring to the FD_SET size. That limit applies only if you choose to use select() and related functions to wait on the sockets.

You can certainly use select() for small programs where you only have a small number of connections. If you advance beyond that very small number, which most game servers will do, then you need to use different methods that are more flexible.

Way back in the days of WinNT we could host around 5,000 connections before starting to hit performance issues. The issue there was memory and bandwidth, not the number of sockets.

The maximum number of sockets in an operating system is 65536 (I think it is limited by the TCP/IP stack).

A socket is treated as a file descriptor, and most OSes limit the maximum number of open files per user.
This can be changed permanently in Linux by editing /etc/security/limits.conf.
It is also possible to change in Windows by right clicking on something.
If you google "max open files linux/windows" you can probably figure it out.

I have experienced that Ubuntu and Windows Vista limit max open files to 1024 by default.

I have been doing mostly TCP, but I don't think UDP uses sockets; you just send data to a host/IP and port and hope it arrives.

[Edited by - flodihn on March 8, 2010 9:46:35 AM]

Quote:
Original post by flodihn
Max number of sockets in an operating system is 65536 (I think it is limited by the TCP/IP stack).

There is no real reason why an OS couldn't handle an arbitrary number of sockets. It is also possible to have multiple network interfaces on the same OS.

There is historical precedent for this in the form of eDonkey servers, which had no problem handling hundreds of thousands (~300k+) of TCP connections on single-CPU machines as far back as ~2004, with the only limit being memory. Sadly, given the controversial nature of these servers, the articles describing how it was done are long gone.

Quote:
I have experienced that Ubuntu and Windows Vista limits max open files to 1024 by default.

Per process? Per thread? Per shell? Globally?

As far as sockets go, under Windows the number is IIRC limited by the size of the non-paged pool, which even on a 32-bit machine should allow 100k+. On x64 there should be no realistic upper limit on the number of connections.

In practice, Windows might put an artificial limit on different editions. Consumer versions tend (or at least used) to be fairly restricted in this regard.

On Linux, upping the number of open files is a fairly trivial task, and 100k+ is typically not a problem. A bigger problem on Linux might be the socket API used to process all those sockets.

This is different from the FD_SET structure, which typically has a 64 or 1024 connection limit, but that style of socket handling isn't suitable for large numbers of connections anyway.

Quote:
but I don't think UDP uses sockets

What does the SOCK in SOCK_DGRAM stand for :p

UDP usually uses a single socket for all communication and delegates remote address/port pair management to the user.
TCP handles those internally and exposes them to the user via file descriptors/sockets.

From a single local IP address, a connection is distinguished by a 32-bit remote IP address, a 16-bit outgoing port, and a 16-bit incoming port. That's 64 bits' worth of unique connection tuples - about 18 quintillion unique values.

A single IP address connecting to a different single IP address on a single target port has only about 16 bits of outgoing ports that it can assign -- roughly 64K simultaneous connections between the two machines on a single destination port.

Factor in that a single machine can support an arbitrary number of IP addresses on a card, and an arbitrary number of cards. The *maximum* is extremely high.



The bottleneck is the resources on the computer, not the number of logical ports offered by the protocol. Operating systems just don't allocate resources by default for more than a few thousand or tens of thousands of sockets.

If you want to get into the hundreds of thousands or millions of connections you will have to do some actual work. You will need to tell your system to allocate the resources for all those connections, and you will need to work with the system in managing those resources effectively.


As long as you stay below a thousand or so, it is fairly easy to work with. Some of the most trivial functions (like select() ) are not usable, but there are much more flexible listening techniques available.

One other question -- can I open two sockets to the same other computer? I think so, from what I'm hearing, but I want someone to confirm that as long as each goes to a different port, I'm okay -- and that thus I can have a UDP and a TCP open to a single remote machine. Is that right?

There are no "sockets to other computers."
A socket is something local to your machine. No other machine knows about the *socket*.
A port is an address on a specific machine.
Sockets are bound to ports on the local machine -- either a well-known port ("80") for listeners, or an ephemeral port (5000 and up, randomly selected, say) for client connections.
A TCP connection is a four-tuple of {source IP, source port, destination IP, destination port}. As long as any one of those quantities varies, then it's a different connection.
A socket is generally tied to a connection after it is bound to a port. (connect() on a client will auto-bind before establishing the connection)

So, the short answer to your question is "yes," but I'll let you read through the above information to actually understand why that is so.

Quote:

Quote:

Max number of sockets in an operating system is 65536 (I think it is limited by the TCP/IP stack).

There is no real reason why an OS couldn't handle an arbitrary number of sockets. It is also possible to have multiple network interfaces on the same OS.

My bad, it is really 65536 per IP address. And as frob says, you can add many IP addresses to one network card.

In Linux this is pretty easy:

ifconfig eth0:my_second_ip 192.168.0.102





Quote:

Quote:
I have experienced that Ubuntu and Windows Vista limits max open files to 1024 by default.

Per process? Per thread? Per shell? Globally?


Per user.

Quote:

Quote:
but I don't think UDP uses sockets

What does the SOCK in SOCK_DGRAM stand for :p

UDP usually uses a single socket for all communication and delegates remote address/port pair management to the user.
TCP handles those internally and exposes them to the user via file descriptors/sockets.

That is true. What I meant was that I don't think the programmer ever uses sockets when sending data with UDP. As I stated before, I have not worked much with UDP; someone with actual experience might want to reveal the truth.

[Edited by - flodihn on March 10, 2010 5:50:58 AM]

Quote:
Original post by flodihn

Per user.

I don't think that is accurate. There might be limits in some language runtime, but the OS as such shouldn't have a problem with more files.

#include <cstdio>
#include <iostream>
#include <string>
#include "boost/filesystem.hpp"
#include <windows.h>

namespace fs = boost::filesystem;

int count = 0;
int offs = 0;

// Recursively walk a directory tree, opening every regular file and
// deliberately keeping the handles open to count how many can be held at once.
void print(const fs::path & p) {
    if (fs::is_directory(p)) {
        fs::directory_iterator end_iter;
        for (fs::directory_iterator dir_itr(p); dir_itr != end_iter; ++dir_itr) {
            try {
                if (fs::is_directory(dir_itr->status())) {
                    print(dir_itr->path());
                } else if (fs::is_regular_file(dir_itr->status())) {
                    HANDLE h = CreateFile(dir_itr->string().c_str(),
                                          GENERIC_READ, 0, NULL, OPEN_EXISTING,
                                          FILE_ATTRIBUTE_NORMAL, NULL);
                    if (h != INVALID_HANDLE_VALUE) { count++; }  // handle left open on purpose
                }
            }
            catch (const std::exception & ex) {
                printf("\n%s %s\n", dir_itr->filename().c_str(), ex.what());
            }
        }
        // Print a running total, eight columns per row.
        if (offs++ > 7) { printf("\n"); offs = 0; }
        printf("%8d", count);
    }
}

int main(int argc, char** argv)
{
    print("c:\\");
}

This simple test has no problem going well over 100k, and opens just as many handles. Win7.

You might be thinking of some other limit.

Quote:
That is true, what I meant was, I don't think the programmer ever use sockets when sending data with UDP


Usually there is one single socket through which reading and writing is done, same as with TCP. It serves as application's API to networking stack.

Quote:
Original post by Antheus
Quote:
Original post by flodihn
That is true, what I meant was, I don't think the programmer ever use sockets when sending data with UDP
Usually there is one single socket through which reading and writing is done, same as with TCP. It serves as application's API to networking stack.
The issue is that to handle 1,000 clients in a TCP server, the server must allocate 1,000 TCP connection sockets (plus a single server socket to accept connections in the first place).

For 1,000 clients over UDP, you can use a single UDP socket, with the caveat that you must handle any concept of a connection yourself.

Quote:
Original post by Antheus
Quote:
Original post by flodihn

Per user.

I don't think that is accurate. There might be limits in some language run-time, but OS as such shouldn't have a problem with more files.

*** Source Snippet Removed ***
This simple test has no problem going well over 100k, and opens just as many handles. Win7.

You might be thinking of some other limit.

Quote:
That is true, what I meant was, I don't think the programmer ever use sockets when sending data with UDP


Usually there is one single socket through which reading and writing is done, same as with TCP. It serves as application's API to networking stack.


I was talking about the default maximum number of open files per user allowed by the OS. It is easy to change on your own computer/server, but not something you would want players to have to change.

Quote:
Original post by swiftcoder
Quote:
Original post by Antheus
Quote:
Original post by flodihn
That is true, what I meant was, I don't think the programmer ever use sockets when sending data with UDP
Usually there is one single socket through which reading and writing is done, same as with TCP. It serves as application's API to networking stack.
The issue is that to handle 1,000 clients in a TCP server, the server must allocate 1,000 TCP connection sockets (plus a single server socket to accept connections in the first place).

For 1,000 clients over UDP, you can use a single UDP socket, with the caveat that you must handle any concept of a connection yourself.


What would be the issue? I have no problems handling about 20,000 connections on my TCP server. The bottleneck is the traffic, not the number of connections.

I was about to give a link to Project DarkStar, where someone managed to establish about 65,000 active connections sending simple pings to the MMO server. It seems the project has been cancelled by Sun and forked by the community here:
http://www.reddwarfserver.org/

Quote:
Original post by flodihn

I was talking about the default max open of files per user allowed by the OS. It is easy to change on your own computer/server but not something you would want players have to change.


The test done above was with default settings - I didn't modify anything at all.

Quote:
Original post by Antheus
Quote:
Original post by flodihn

I was talking about the default max open of files per user allowed by the OS. It is easy to change on your own computer/server but not something you would want players have to change.


The test done above was with default settings - I didn't modify anything at all.


OK, but I tested with Ubuntu and Windows Vista, and you did the test on Windows 7; it seems Microsoft increased the limit there.

[Edited by - flodihn on March 10, 2010 6:06:58 AM]

Quote:
Original post by flodihn
What would be the issue? I have no problems handling about 20 000 connections on my tcp server. The bottleneck is the traffic, not the amount of connections.
Certainly there shouldn't be a problem with low connection counts (i.e. anything much under 64,000). However, as you scale beyond that, TCP should engender significantly higher in-kernel overheads than UDP.

TCP requires per-connection state (remote address/port, sequence #, receive window size, retransmit timer, etc.) and a certain minimum number of buffers assigned to each connection, whereas a single UDP socket needs only one set of state and can pool a smaller number of buffers.

This is a largely illusory benefit, as the application must still maintain some or all of the state otherwise maintained by TCP, but it does allow overheads to be strictly controlled (and minimised) by the application.

And of course, if you need TCP, you need TCP, but it should (at least in theory) be a little easier to scale a UDP server to a million connections. Then again, in the real world, it might not be as simple as it sounds.
Quote:
Original post by hplus0603
And someone created a 1,000,000 connection server on a single machine
That is a very interesting read.

