client/client vs client/host/client

Started by
20 comments, last by hplus0603 14 years, 2 months ago
So I've read the FAQ and some other posts and I've got a question. It seems to me that the arguments against each client connecting to each other client include: 1. O(n^2) connections vs. O(2n) connections (and the related bandwidth), and 2. easier cheating (no central arbitration). Generally people say that for your run-of-the-mill multiplayer shooter, people should go through a host. However, if I'm making a game for a course project, and thus won't be releasing it to a large market, then there would be very little chance of cheating (2), and if I'm on a LAN only to test this game then the bandwidth isn't that big a deal (1). I agree that the server/host should be in charge of some things (such as when a round ends, joining/leaving the game, timing lag, etc.), but should all communications go through the host? In particular, I was considering sending position/orientation updates to players directly, as well as I've-Hit-You-You're-Dead messages. I would've figured that the reduction in trip time would be more important than efficient bandwidth. Also, on a related question, is there a limit to the number of sockets I can open? How about opening both a TCP and a UDP socket to the same client? Thanks.
If you don't care about cheating and you'll only play on a LAN, pretty much anything works; don't worry about reductions in trip time or bandwidth, just do what you find easiest. Server/client is easier to reason about, at least in my humble opinion.
I would generally discourage using a P2P networking model for games because it is, in my experience, harder to implement effectively, and the risk of cheating is greater. Having clients exchange position information is a bad idea because an attacker could send fake messages to other clients, rendering him invisible and impossible to hit. The same thing applies to "I killed you" messages. While it might not matter since it's a course project, if I were your teacher I'd give you bonus points for making your game somewhat secure.

The way you describe your game (LAN only, lots of bandwidth available), it sounds like it might be a good idea to apply the networking principles described in the Book of Hook, which covers the excellent Quake III networking model. Carmack took the interesting and unusual approach of not using reliable communication at all, and instead chose to exploit the high bandwidth of local area networks.

Quote:Original post by whiterook6
Also, on a related question, is there a limit to the number of sockets I can open? How about opening both a TCP and a UDP socket to the same client?


Under Linux there's a limit to the number of file descriptors (sockets) you can have open at any one time. This can be changed through a sysctl value, though. I'm not aware of any such limitation on Windows. However, there is a limit to the number of sockets that can be in a single set of file descriptors (FD_SET), which is used in conjunction with the select() function.
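To make the FD_SET/select() relationship concrete, here's a minimal sketch of select()-based readiness checking. Python is used for brevity (the C API is analogous, with select() operating on fd_set structures); the helper name wait_readable is mine, and socketpair() is Unix-only.

```python
import select
import socket

def wait_readable(socks, timeout=1.0):
    """Return the subset of socks that is readable, waiting up to timeout seconds."""
    readable, _, _ = select.select(socks, [], [], timeout)
    return readable

# Demonstrate with a connected pair of sockets (Unix-only).
a, b = socket.socketpair()
b.send(b"ping")               # make one end readable
ready = wait_readable([a])    # select() reports it immediately
```

In C, the set passed to select() is bounded by the compile-time FD_SETSIZE constant, which is the limit being discussed here.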

It has been suggested here that TCP and UDP might not interoperate well. Note that this study is old, so I don't know how relevant it is.


Quote:Original post by whiterook6
I agree that the server/host should be in charge of some things (such as when a round ends, joining/leaving the game, timing lag, etc.), but should all communications go through the host? In particular, I was considering sending position/orientation updates to players directly, as well as I've-Hit-You-You're-Dead messages. I would've figured that the reduction in trip time would be more important than efficient bandwidth.
It looks like you are confusing the network topology with the role of each machine.

Your game has a virtual network across the Internet. The topology of your network is how the packets move from computer to computer.

If everybody is talking to a central machine and that central hub redirects the packets to their targets, that is a star topology. If every machine can talk with every other machine directly, you have a fully connected mesh. If machines can talk to each other with ad-hoc connectivity and packets can be passed along from machine to machine until they reach their destination, that's a partial mesh.

Exactly how you manage your virtual network is up to you. A partial mesh is the most work but the most flexible in the real world: it allows for limited connectivity and can be written to reduce overall bandwidth requirements. A star is a medium amount of work, because the hub must still look at the headers and redirect as necessary, but it doesn't handle the possibility that one or more machines cannot connect to the hub, and it also puts higher bandwidth requirements on the central machine. A fully connected mesh is the least work to use because you have direct communications and it minimizes bandwidth needs, but it is the highest risk in a real-world network environment if any connectivity fails.
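For concreteness, the link counts behind the original poster's "O(n^2) vs. O(2n)" comparison can be sketched like this (Python; the function names are mine, purely for illustration):

```python
def star_links(n):
    """Star: each of n clients holds exactly one connection to the hub."""
    return n

def mesh_links(n):
    """Fully connected mesh: every pair of machines is directly linked,
    i.e. n choose 2. (Counting each direction separately doubles this,
    which is where O(n^2) traffic estimates come from.)"""
    return n * (n - 1) // 2

counts = {n: (star_links(n), mesh_links(n)) for n in (4, 8, 16)}
```

For 16 players that's 16 hub links versus 120 direct links, which is why the bandwidth argument favors a star as player counts grow.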



Regardless of how your packets are flowing between machines, you can make any machine act as the authoritative simulation.

You generally want the authoritative machine to also be the best connected machine in the network. In a star, the best connected machine is the hub. In a mesh or partial mesh, the best connected machine is the one with the most connections and also the highest bandwidth.


Quote:Also, on a related question, is there a limit to the number of sockets I can open?
Yes, but you are unlikely to reach it. It varies based on the platform. On a *nix box, the number is in /proc/sys/fs/file-max, and it is most likely in the hundreds of thousands, or even in the millions. (My server box says almost 1.6 million.)
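On a Unix box you can check both the per-process descriptor limit and that system-wide figure programmatically. A small sketch (Python; the resource module and /proc are Unix/Linux-only assumptions):

```python
import resource

# Per-process soft/hard limits on open file descriptors;
# every socket counts against these.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# System-wide ceiling on Linux; guarded because /proc may be absent elsewhere.
try:
    with open("/proc/sys/fs/file-max") as f:
        file_max = int(f.read().split()[0])
except OSError:
    file_max = None
```

The per-process soft limit is typically far below the system-wide number, which is why raising it is the first step for a high-connection server.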
Quote:How about opening both a TCP and a UDP socket to the same client?
Not an issue. They are different protocols. They compete for the same network resources, but otherwise they can live together just fine.
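A quick way to convince yourself of this: the same process can even bind a TCP and a UDP socket to the very same port without conflict, because the two protocols have separate port namespaces. A Python sketch:

```python
import socket

# Bind a TCP socket to an OS-chosen free port.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))        # port 0: let the OS pick
port = tcp.getsockname()[1]
tcp.listen(1)

# Bind a UDP socket to the SAME port number: no conflict,
# since TCP and UDP ports are independent namespaces.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", port))
```

Many games exploit exactly this, running reliable control traffic over TCP and fast state updates over UDP on a matching port.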
Thanks guys--for the quick replies AND for not treating me like the nub I am.
Note: on Windows there are several limits that you can run into. The first limit is a compile-time constant, 64 by default. You can redefine it to something larger, but beyond that you need to start checking what the system will support dynamically.
Quote:Original post by stonemetal
Note: on Windows there are several limits that you can run into. The first limit is a compile-time constant, 64 by default. You can redefine it to something larger, but beyond that you need to start checking what the system will support dynamically.

I believe you are referring to the FD_SET size. That limit is only if you choose to use select() and related functions to wait for the sockets.

You can certainly use select() for small programs where you only have a small number of connections. If you advance beyond that very small number, which most game servers will, then you need to use different methods that are more flexible.

Way back in the days of WinNT we could host around 5,000 connections before starting to hit performance issues. The issue there was memory and bandwidth, not the number of sockets.
Max number of sockets in an operating system is 65536 (I think it is limited by the TCP/IP stack).

A socket is treated as a file descriptor, and most OSes limit the max number of open files per user.
On Linux this can be changed permanently by editing /etc/security/limits.conf.
It is also possible to change on Windows by right-clicking on something.
If you google "max open files" for Linux or Windows you'll probably figure it out.
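For reference, raising the per-user descriptor limit via limits.conf looks something like this (the values shown are purely illustrative):

```
# /etc/security/limits.conf -- example entries, values illustrative
*    soft    nofile    65536
*    hard    nofile    65536
```

The soft limit is what a process starts with; the hard limit is the ceiling it may raise itself to without privileges.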

I have found that Ubuntu and Windows Vista limit max open files to 1024 by default.

I have been doing mostly TCP, but I don't think UDP uses sockets; you just send data to a host/IP and port and hope it arrives.

[Edited by - flodihn on March 8, 2010 9:46:35 AM]
Quote:Original post by flodihn
Max number of sockets in an operating system is 65536 (I think it is limited by the TCP/IP stack).

There is no real reason why an OS couldn't handle an arbitrary number of sockets. It is also possible to have multiple network interfaces on the same OS.

There is historical precedent for this in the form of eDonkey servers, which had no problem handling hundreds of thousands (~300k+) of TCP connections on single-CPU machines as far back as ~2004, with the only limit being memory. Sadly, given the controversial nature of those servers, the articles describing how it was done are long gone.

Quote:I have experienced that Ubuntu and Windows Vista limits max open files to 1024 by default.

Per process? Per thread? Per shell? Globally?

As far as sockets go, under Windows the number is IIRC limited by the size of the non-paged pool, which even on a 32-bit machine should allow 100k+. On x64 there should be no realistic upper limit on the number of connections.

In practice, Windows might put an artificial limit on different editions. Consumer versions tend (or at least used) to be fairly restricted in this regard.

On Linux, upping the number of open files is a fairly trivial task, and 100k+ is typically not a problem. A bigger problem on Linux might be the socket API used to process these sockets.

This is different from the FD_SET structure, which typically has a 64 or 1024 connection limit, but that style of socket handling isn't suitable for large numbers of connections anyway.

Quote:but I don't think UDP uses sockets

What does the SOCK in SOCK_DGRAM stand for :p

UDP usually uses a single socket to do all the communication, and delegates remote address/port pair management to the user.
TCP handles those internally and exposes them to the user via file descriptors/sockets.
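That single-socket UDP pattern can be sketched like so (Python; recvfrom() hands back each sender's address, which the application then uses as its per-peer key):

```python
import socket

# One UDP socket serves ALL peers; there is no per-peer socket as with TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2.0)               # don't block forever in a demo
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)

# recvfrom() returns (payload, sender_address); the application keys its
# per-peer state (position, sequence numbers, ...) on that address tuple.
data, peer = server.recvfrom(1024)
```

A game server would typically keep a dict mapping each peer address to that player's state, which is exactly the bookkeeping TCP does for you internally.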
A single source IP address gives you a 32-bit remote IP address, a 16-bit outgoing port, and a 16-bit incoming port. That's 64 bits' worth of unique connection identifiers from a single IP address -- about 18 quintillion unique values.

A single IP address connecting to a different single IP address on a single target port only has about 16 bits of outgoing ports it can assign -- roughly 64K connections between the two machines on a single port.

Factor in that a single machine can support an arbitrary number of IP addresses on a card, and an arbitrary number of cards, and the *maximum* is extremely high.
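The arithmetic above, spelled out (Python; variable names are mine):

```python
# Identifier space visible from one source IP address:
remote_ips     = 2 ** 32   # 32-bit remote IPv4 address
outgoing_ports = 2 ** 16   # 16-bit source port
incoming_ports = 2 ** 16   # 16-bit destination port
unique_tuples  = remote_ips * outgoing_ports * incoming_ports  # 2**64

# Between one fixed pair of hosts on a single destination port,
# only the 16-bit source port can vary:
pairwise_limit = 2 ** 16   # 65536
```

2**64 is roughly 1.8 * 10**19, i.e. about 18 quintillion, so the protocol's logical address space is never the practical bottleneck.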



The bottleneck is the resources on the computer, not the number of logical ports offered by the protocol. Operating systems just don't allocate resources by default for more than a few thousand or tens of thousands of connections.

If you want to get into the hundreds of thousands or millions of connections you will have to do some actual work. You will need to tell your system to allocate the resources for all those connections, and you will need to work with the system in managing those resources effectively.


As long as you stay below a thousand or so connections, it is fairly easy to work with. Some of the most trivial functions (like select()) are not usable beyond that, but there are much more flexible listening techniques available.
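One such flexible technique on Linux is epoll, whose interest list is not capped the way select()'s FD_SET is. A minimal sketch (Python; select.epoll is Linux-only, and socketpair() stands in for real network sockets):

```python
import select
import socket

# epoll registers descriptors once and then polls for events;
# unlike select(), the set size is not bounded by FD_SETSIZE.
a, b = socket.socketpair()
ep = select.epoll()
ep.register(a.fileno(), select.EPOLLIN)

b.send(b"x")                          # make `a` readable
events = ep.poll(timeout=1.0)         # list of (fd, event_mask) pairs
got_event = any(fd == a.fileno() for fd, _ in events)

ep.close(); a.close(); b.close()
```

The equivalent on Windows would be I/O completion ports, and kqueue on the BSDs; all three avoid re-scanning the full descriptor set on every call.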

