boost::asio allowing multiple connections to a single server socket?

3 comments, last by hplus0603 13 years, 5 months ago
I've been experimenting with boost::asio as the basis for my networking code, and in doing unit testing I've found something that doesn't make sense to me.

In my very simple test, I open a listening server with async_accept() in my server object, and then in my client object I call async_connect(). This works fine, the connect_handler() gets called and everything appears good.

Then, as another test, I had two client objects both try to connect to the server, assuming that one would fail since the server object currently only has a single socket and only makes one call to async_accept(). However, this test succeeded. Even if I never call io_service::run() on the server to process incoming connections, both clients succeed, with both connect_handler() functions being called.

This confuses me. In the asio examples on the boost website, every time the server accepts a connection, it calls async_accept() again on a new socket to accept the next connection. But here I've apparently got two connections being made to the same server socket, at least as far as the clients are concerned. Both connect_handler() calls give no error, saying the operation completed successfully.

What is going on here? I can only assume that the client connections are not fully being made and that it only thinks it succeeded, but I can't find anything in the documentation to indicate that this can happen. I've tested this on both Windows and Linux, and get the same result.
Quote:Original post by Nairou
But here I've apparently got two connections being made to the same server socket
Server sockets != data sockets.

In TCP your server opens a single server socket, which is bound to a specific port, and can accept any number of connection requests. For *each* connection request, you then spawn a new data socket, which is used to communicate only with that particular client.

Boost::asio abstracts some of this away for you. I am assuming you are referring to the asynchronous TCP daytime server example?

A tcp::acceptor is itself the server socket, and in the start_accept() method you create an empty connection, and ask the acceptor to fill it with the data socket for the next incoming connection request.

The reason you can accept more than one connection is that an incoming connection request will trigger the handle_accept() handler, which itself calls start_accept() again to prepare for the next connection request.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

So, if I'm understanding you right, this means that the "listening" functionality of async_accept() happens on the first call to it and lasts for the lifetime of the server, and will accept every incoming connection and call accept_handler() for each. But the example calls it multiple times so as to provide a unique socket for the next accept_handler() to use, to actually communicate with the new connection.

Do I have that right? This makes a lot more sense now.

Does this mean that async_accept() is calling listen() on the first call? I've always wondered what listen() was for, since I apparently never need to call it myself.

Thank you for your help swiftcoder!
Quote:Original post by Nairou
So, if I'm understanding you right, this means that the "listening" functionality of async_accept() happens on the first call to it and lasts for the lifetime of the server, and will accept every incoming connection and call accept_handler() for each. But the example calls it multiple times so as to provide a unique socket for the next accept_handler() to use, to actually communicate with the new connection.

Do I have that right? This makes a lot more sense now.


I believe that when you set up the acceptor and listen() is called, the function is passed a backlog parameter: the maximum number of pending connections.

This maximum is the size of the "backlog queue". The backlog queue temporarily "accepts" connections from clients so that they can then be handed off to your accept handler.

In theory, your code should be fast enough that this backlog queue never fills up; however, let's say the value is set to 5 and 100 clients attempt to connect at precisely the same moment. It is quite possible that some of those connections will be rejected because the backlog fills up too quickly.

Prior to using any of the boost APIs, when I relied solely on the winsock API directly, I always followed the premise that whatever my accept code does should be lean and quick: get the handler started for the new connection, so that the listener can process the next accept() in the queue.
Back in the day of scarce memory, listen() pre-allocated some number of TCP control blocks for incoming connections, so that the kernel could answer "SYN/ACK" to some number of incoming packets and start receiving data, without having to do a round-trip up to the application. When you ran out of control blocks, the kernel wouldn't respond, and the client would time out (3 second SYN time-out) and try again.

These days, not only is memory plentiful (you can have thousands of pending SYN connections without much impact), but you can even use SYN cookies to allocate zero memory when you receive a SYN, yet still know what to do when the first "ACK" comes in. listen() is basically a legacy function; as soon as a socket is bound to a port using "bind()," it should be listening.

Also, don't confuse the listening socket with the connected socket. On a TCP server, the listening socket never transfers any data; it is just there to represent the port you're listening on. Each new client (each call to accept(), basically) generates a new socket. In boost::asio, I believe this is represented by the socket object you pre-allocate and pass into async_accept(); it gets "filled in" with the data of the incoming accepted connection and handed back to your callback/handler.

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
