
# It's been a tough week

Yep, companions, it's been a tough week. I'm not speaking of work (that, too, was a tough week, mostly because bash 2.3 and bash 4.1 do not work the same way, for unknown reasons (apparently, introducing breaking changes that deny myriads of old Makefiles the right to exist is now called "bug fixing")).

I'm speaking of the next iteration of [ekogen] (the library name is still evolving; see, I added brackets!) and its upcoming addition: the network library. In my previous entry, I showed how Asio (and consequently Boost.Asio) falls short when it comes to addressing some particular issues. The library has some design limitations that I find weird and inconvenient for many uses. So my goal is to address these limitations - and this cannot be done by pushing the proposed modifications into Boost.Asio, as such modifications would break existing code (hey, bash team: existing code is precious). There are other C++ network libraries out there, but none of them provide the level of abstraction that Asio provides, so there is room for yet another network library that does things correctly. This is the intent behind [ekogen]'s network component.

The technicalities of providing both synchronous and asynchronous operations are of limited interest to me (they have been well known since Stevens's book on network programming in the Unix environment). Sure, there are some platform-specific issues to address (Windows Sockets API version 2 does asynchronous operations differently from POSIX systems), but in the end there's nothing complicated here.

On the architecture side, things are more complicated. Choices that seem unnatural have to be made (do I provide a tcp_server class? Should sockets be copyable or movable? And so on). Here are my observations:

* copying sockets is of limited use; most of the time, a socket is copied to transfer its ownership to another thread - or at least to another context of execution. It is far better to give sockets move semantics, as moving is the natural way to transfer an object to another context. C++0x makes move constructors and move assignment operators explicit (through rvalue references). C++98 is awkward with respect to move semantics: the only movable object in the current standard (std::auto_ptr) is to be deprecated in the next one. But anyway, this can be emulated easily. It also gives a release() function for free (although I'm not sure this should be in the public interface). When real copying is needed, a weak_socket can be constructed from the socket.

* both TCP servers and UDP servers are just wrappers around other, simple-to-use classes. A TCP server is made of a polling mechanism and a bound listening socket (the polling can be asynchronous, allowing the implementation of high-performance servers in a single thread). This is simple to implement if you have access to both mechanisms. It can be a little more cumbersome if you hide all that stuff in a tcp_server class (as Asio and most other network libraries do).

* naming is essential to the success of a library: I don't want to hide the fact that I'm handling sockets, but I don't want to accept() incoming connections either, as such an accept() would necessarily imply an operation that differs from the original accept() in the BSD socket API. The same goes for listen(), select() (which might not even be implemented using select()) and so on. There is some research to do to find clever equivalents that are concise, clear and different from the original verbs.

* synchronous sockets are not asynchronous sockets: depending on the OS, they are not created using the same system calls, and their use is vastly different. A distinction needs to be made between UDP and TCP sockets as well, as UDP sockets give me more information than TCP sockets. It is possible to create a synchronous socket from an asynchronous one - the inverse is not always true. I should end up with something along the lines of:

```cpp
class stream_socket
{
    stream_socket(const stream_socket&);             // declared only (non-copyable)
    stream_socket& operator=(const stream_socket&);  // declared only

public:
    typedef implementation-defined native_socket_type;
    typedef implementation-defined boolean_type;

    stream_socket();
    stream_socket(stream_socket&) throw();            // move ctor
    stream_socket& operator=(stream_socket&) throw(); // move operator=
    ~stream_socket();                                 // does nothing if released

    // properties and query
    operator boolean_type();        // boolean_type converts to bool
    weak_socket weak() const;       // throws if (!s)

    // operations
    void read(...) const;           // std::vector ?
    void write(...) const;          // std::vector ?
    void shutdown_write() const;    // = shutdown(SHUT_WR) (man 2 shutdown for further info)
    void shutdown_read() const;     // = shutdown(SHUT_RD)
    void shutdown() const;          // = shutdown(SHUT_RDWR)

private:
    native_socket_type release() throw();
};

class async_stream_socket
{
    async_stream_socket(const async_stream_socket&);             // declared only (non-copyable)
    async_stream_socket& operator=(const async_stream_socket&);  // declared only

public:
    typedef implementation-defined native_socket_type;
    typedef implementation-defined boolean_type;

    async_stream_socket();
    async_stream_socket(async_stream_socket&) throw();            // move ctor
    async_stream_socket& operator=(async_stream_socket&) throw(); // move operator=
    ~async_stream_socket();                                       // throws if pending() ?

    // properties and query
    operator boolean_type();        // boolean_type converts to bool
    weak_async_socket weak() const; // throws if (!s)
    bool pending();                 // socket is used in an async operation. Useful?

    // operations
    void read(...) const;           // std::vector ?
    void write(...) const;          // std::vector ?
    void shutdown_write() const;    // = shutdown(SHUT_WR)
    void shutdown_read() const;     // = shutdown(SHUT_RD)
    void shutdown() const;          // = shutdown(SHUT_RDWR)

    weak_socket weak_sync();        // throws if (!s); returns a normal weak_socket

private:
    native_socket_type release() throw();
};
```

This is a lot of code to provide only two operations (read and write, which are only partly defined here: some buffer has to be provided, and I still have to decide whether a std::vector<> would do the trick - in which case these functions shall be template functions).

The same goes for UDP sockets (datagram_socket and async_datagram_socket). Now, how do I want to create them? Using endpoints (endpoints and resolvers are two Asio features that I believe are very well designed - at least from a conceptual point of view; I haven't checked the full implementation yet), I can tie an IP address to a port number. Using a listener (name to be changed), I can bind this endpoint to a logical port and create an internal socket. Using a poller, I can use this listener to get new socket instances. So in the end, I never really create them - I get them from the poller (for a UDP server, there is no need to listen() on the socket, but a poller can still be of interest).

Polling also poses a (slightly limited) software design problem. I have no guarantee that polling will return only one event, so I must find a way to handle possibly multiple incoming events. Moreover, polling gives me two pieces of information: the socket which is to be processed and the event to process (is the socket to be read? written?). This information is available from all the pollers available on any platform (*nix has epoll (Linux), kqueue (BSD, OS X), select (all) and /dev/poll (Solaris; I can ignore that one - not my target); Windows has select and WSAAsyncSelect). How do I handle these events? I see two ways:

1/ feed a vector of (socket, event) pairs to the poller. This is quite ugly, might need loads of reallocations if the vector grows, and so on. If I can avoid that, I won't be very sad.

2/ give the poller a task to execute on the reception of an event. Asio provided the idea of strands, which are (as their name says) "mini-threads". This is not really true. In fact, strands are deferred tasks that are run when a promise(*) is set (the promise itself being the result of the operation, be it a synchronous or an asynchronous one). If you already have a good knowledge of the upcoming C++0x standard, you can view them as generalized futures: futures only give you back the promise, while strands allow you to do some processing on the promise. Word play aside, this is a promising solution.

Since (1) looks bad and (2) looks far better, the obvious choice is to implement (2). For a synchronous poll, this is easy: I just wait for the poll to complete and then execute my function (which is likely to be a std::function - or std::tr1::function if no C++0x features are available). For asynchronous operations, this is trickier. Windows has completion ports, but those only allow me to execute a free function or a static member function, and the select operation is not implemented this way (keep away from WSAAsyncSelect - this is a WTF function that sends a message to a HWND when the operation completes). *nix has no system-wide primitive that looks like completion ports, so we have to fake them by creating a thread, waiting for the asynchronous function to end, then executing the operation.

In short, asynchronous poll mandates the creation of a short-lived thread in this design. The upcoming standard thread library offers std::thread, which provides exactly what we need (I have re-implemented the basic mechanism of std::thread in [ekogen] to compile the library on platforms where it is not yet available, such as g++ prior to version 4.4 and all versions of Visual Studio; std::thread is not part of the tiny subset of C++0x that has been implemented in Visual C++ 2010 - we're looking forward to the next version...).

With such a design, the code for a TCP server should look like this (remember, some names are to be changed):

```cpp
using namespace ekogen::network;

struct event_receiver
{
    listener& m_l;

    event_receiver(listener& l) : m_l(l) { }

    // not sure of the prototype yet
    void operator()(poller& p, const socket& s, const event_type& event)
    {
        if (m_l.get_socket() == s)
        {
            // add the new socket to the poller list
            p.add(m_l.incoming().get_socket(), event_type::read | event_type::write);
        }
        else
        {
            if (event & event_type::read)
                s.read(/* ... */);
            else if (event & event_type::write)
                s.write(/* ... */);
        }
    }
};

int main()
{
    listener my_listener(endpoint(ipv6_address::loopback, port::any));
    poller my_poller;

    my_poller.add(my_listener.get_socket(), event_type::read);
    my_poller.wait_for_events(event_receiver(my_listener));
}
```

A TCP server class would just encapsulate a small part of this code. It can still be of interest for simple applications - but then a tcp_async_server shall be provided as well, plus a mechanism to filter incoming connections, and so on.

The code for an async TCP server is quite similar - and the code for the UDP server is not very far from that (UDP servers do not need to listen() for and accept() incoming connections; you just need to poll the bound socket (and even polling is not really needed, as recvfrom() is a blocking call)).

--
(*) don't worry if you're lost with these promises, futures and strands; I will publish something about this very subject in the coming weeks.

> I was under the impression that a destructor should never, ever throw (`~async_stream_socket() // comment...`).

True. My mistake. It should call std::terminate() instead.
