concurrent TCP/IP connections?



Hi,

Does anyone know if there are limits on the number of simultaneously open TCP/IP connections for, say, a system running RH 9.0? I know the listen() backlog argument (default 5?) limits the number of pending connections a system can take, but is there also a limit once a connection is established? If so, what does it depend on, and is there a ballpark figure for how high it goes?

I'm writing a client-server app with the client running on Windows and the server on Linux (RH 9.0). Once a client establishes a connection it never disconnects, so the number of open connections only ever goes up. That's why I'm wondering. Any help, links, tips, etc. would be appreciated.

Thanks, San.

Guest Anonymous Poster
The number of sockets a program can have open is a function of the limits placed on the process, the limitations of the select() function, and the system-wide limit on file descriptors. Post-2.2.something Linux kernels have effectively no fd limit. If you're trying to select() on them you'll run into FD_SETSIZE, which is 1024 if I recall correctly; I believe the actual syscall has no limit, but a fixed size is required so the fd_set can be completely stack-allocated. Just use poll() or some other socket-readiness-notification mechanism instead if that matters. The per-process file-descriptor limit can be found (and raised by root) with "ulimit -n".
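To make the select()-vs-poll() point concrete, here's a minimal sketch in Python (whose select module wraps the same syscalls Perl and C use). It's a toy on a throwaway localhost port, not the thread's actual server; the point is just that select.poll() registers fds individually and so isn't bound by FD_SETSIZE:

```python
import select
import socket

# select() is capped by FD_SETSIZE (commonly 1024); poll() has no such
# compile-time limit, so it scales to far more open sockets.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # throwaway port for this demo
srv.listen(5)
srv_fd = srv.fileno()

poller = select.poll()
poller.register(srv_fd, select.POLLIN)

# A client connecting makes the listening socket readable.
cli = socket.create_connection(srv.getsockname())
events = poller.poll(1000)   # timeout in milliseconds
readable = [fd for fd, ev in events if ev & select.POLLIN]

conn, addr = srv.accept()
conn.close()
cli.close()
srv.close()
```

The same pattern works with thousands of registered fds, as long as "ulimit -n" allows that many to be open.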

Limits are as follows:

1. listen() backlog - this only affects pending, not-yet-accepted connections. The OS may silently clamp or ignore the value anyway.

2. Per-process file descriptor limit - set with ulimit or sysctl - the default is 1024 - but of course not all of those can be used for sockets, since some will be taken by other things (stdin/stdout/stderr, open files, etc.).

Root can raise this with ulimit - I'm not sure whether "su" resets it - read the ulimit documentation; there is probably some way of changing the default for particular processes.

3. System-wide FD limit - this can be set with sysctl in /proc/sys/fs/file-max

On mine it seems to be 26208

4. Using select() limits you to FD_SETSIZE. select() is not the preferred way of waiting on a large number of file descriptors anyway; try poll() (or better still epoll) instead - neither has a theoretical limit.

5. Any hard-coded limits - I don't think so. Ultimately, however, you will be limited by the amount of kernel memory each socket takes up - each has send and receive buffers, perhaps as much as 64k per socket. This memory is not swappable AFAIK, so physical RAM is the real ceiling.

System performance will probably degrade badly before you reach the memory limit; consider getting more RAM if you need that many connections.

10k doesn't sound like a problematic number; 1 million might be pushing it.
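The epoll suggestion in point 4 can be sketched like this (Python again, on a throwaway local socket; epoll is Linux-only, which matches the RH 9.0 server in the question):

```python
import select
import socket

# epoll keeps the interest set inside the kernel, so a readiness check
# doesn't rescan every fd the way select()/poll() do - that is what
# makes it practical for ~10k open connections.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(128)
srv_fd = srv.fileno()

ep = select.epoll()
ep.register(srv_fd, select.EPOLLIN)

cli = socket.create_connection(srv.getsockname())
ready = ep.poll(1.0)           # timeout in seconds
accepted = 0
for fd, event in ready:
    if fd == srv_fd and event & select.EPOLLIN:
        conn, _ = srv.accept()
        conn.close()
        accepted += 1
cli.close()
ep.close()
srv.close()
```

In a real server each accepted connection would also be registered with the same epoll object instead of being closed immediately.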


Thank you for the reply. I'm using a Perl script on the server side and checking for ready connections with the IO::Select module:

my ( $pending ) = IO::Select->select( $handles, undef, undef, 60 );

So this is effectively using select(). Would you suggest I change my code, or can I just raise FD_SETSIZE to a larger value if need be? My actual need is only around 10k. I guess I'll go with poll/epoll if there is a big speed difference between them.

I also tried raising the limit with the ulimit command, and now I have the following:

[root@sls-cb12p2 server]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 900000
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited

[root@sls-cb12p2 server]# cat /proc/sys/fs/file-max

Also, I haven't done any stress testing of this myself. I couldn't find any tools for it, so what I have in mind is to write another simple Perl script that forks 10,000 connections to this server. Is this a good idea?
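For what it's worth, a stress test along those lines can be sketched without forking at all - one process can hold thousands of sockets, which is far cheaper than 10,000 forked children. This Python sketch points at a throwaway local listener; a real run would use the actual server's host and port (hypothetical parameters here), and a count of 10,000 rather than the small demo value:

```python
import socket

def open_connections(host, port, count):
    # Open `count` concurrent TCP connections and hold them all open.
    # Stops early if we run out of fds, ephemeral ports, or backlog.
    conns = []
    try:
        for _ in range(count):
            s = socket.create_connection((host, port), timeout=5)
            conns.append(s)
    except OSError:
        pass
    return conns

# Demo target: a local listener that never accepts; connections still
# complete the handshake and sit in the accept queue (up to the backlog).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(128)
host, port = srv.getsockname()

conns = open_connections(host, port, 50)
for s in conns:
    s.close()
srv.close()
```

Scaling this to 10k requires "ulimit -n" to be above 10k on the client side too, since the test process holds one fd per connection.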

Thanks again for the help,
