Protecting single threaded TCP server from malicious packets

At present I'm trying to write a fairly simple, single-threaded server application in C++ that uses SDL_net's TCP sockets as its primary communication channel. Unfortunately, this means dealing with blocking on a single thread, which is a headache, but not nearly so much of one as concurrent processing. Packets of information sent on the TCP connection have the (presumably common) binary format of 'total_size:header_size:header:data1_size:data1:data2_size: ... :data#_size:data#', and processing of a packet is essentially (in terrible 6:00 AM pseudocode):
// Note: all sizes are of type Uint16 ("short")
Uint16 size = 0;
TCP_recv(&size, 2);                          // read the 16-bit packet size
if (size > 0) {
    std::vector<char> buffer(size);
    TCP_recv(buffer.data(), size);           // read the whole packet into a buffer. Since it's
                                             // TCP, I KNOW I will receive this much data.. and
                                             // this promise is the problem
    Uint16 pos = 0;
    while (pos + sizeof(Uint16) < size) {    // cycle through all parts of the buffer
        Uint16 data_size;
        memcpy(&data_size, &buffer[pos], 2); // get the next size out of the buffer
        pos += 2;
        if (pos + data_size < size) {        // prevent accidental/malicious overrun of total size
            char* data = &buffer[pos];       // data_size bytes of payload
            // Handle the data
        }
        pos += data_size;                    // update the position
    }
}
The problem I see is the blocking behaviour of SDLNet's sockets combined with the client's ability to promise data. A client could, theoretically, send just a packet in the above format like "9999999:", in other words promise a huge chunk of data and never actually send it, and the server would naively wait until Ragnarök for 9999999 bytes of data to arrive when it calls TCP_recv(9999999). Even if SDLNet provided some sort of timeout feature (it doesn't), a client could spam such packets unless the server had many layers of protection written over it (which is bad for a server dealing with more than one client) and stall it repeatedly for that timeout value. Sure, I can set a limit on how much a packet can promise, but even if it promises just 1 byte (e.g. TCP_recv(1)) and never actually sends it, the same problem exists.

Since I can't get around SDLNet_TCP's blocking nature (I can't find an asynchronous setting) and I am specifically and intentionally avoiding all concurrency (no threads, fork()s or otherwise), I either need to change the packet format or abandon TCP altogether. I was wondering if there is a third way to avoid both of those alternatives without resorting to threads.

PS> I have always and will always hate blocking I/O. I've never actually experienced any of the benefits, but man has it made my life painful.
As you already noticed, the rule of thumb here is to never allow a client to make the server block. You could try moving the socket reading to another thread, which blocks and waits for new messages to appear. Or, you could do nonblocking receives directly with TCP (set O_NONBLOCK on the socket - does SDLNet give you the socket descriptor?). This approach is typical if your game runs a realtime loop over and over again, because you can simply add a nonblocking check each frame to see whether you have received any data.
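On POSIX, setting that flag is a one-liner with fcntl(). A minimal sketch, assuming you could somehow reach the raw descriptor (SDL_net's public API treats TCPsocket as an opaque pointer, so this would likely mean patching the library or using its internal headers; set_nonblocking and fd are made-up names):

#include <fcntl.h>

// Hypothetical helper: 'fd' would be the raw descriptor behind the TCPsocket.
bool set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return false;
    // With O_NONBLOCK set, recv() returns -1 with errno EAGAIN/EWOULDBLOCK
    // instead of blocking when no data is waiting.
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}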

I don't understand -- why are you not just sanity checking the packet sizes?


I'm very, very adamant about keeping this single-threaded (no concurrency). I appreciate the suggestion, and I realize that threading would solve this particular issue, but it would also introduce a number of problems that I specifically sought to avoid when I initially set out to create this project. Personally I love the concept of scalability via threading (and multiprocessing for resource-intensive jobs), but I've made the decision, and will stick by it, not to introduce the complexity of concurrency and all of its related issues for this project.

I'm not sure if SDL_net exposes that functionality. Most of my Google/GameDev searches have strongly pointed towards no, there is no asynchronous TCP for SDL_net. Quite explicitly, actually. But that may be wrong. If anyone can confirm or, better yet, refute this, I would appreciate it greatly.

I don't understand -- why are you not just sanity checking the packet sizes?
The trouble is that this whole "packet" thing is merely something I've layered on top of TCP, unless I'm grossly misunderstanding things. UDP naturally allows for such a sanity check, because it's a packet-based protocol, but TCP delivers data in chunks of who knows what size and stuffs them into a stream buffer. TCP_recv just reads out of the stream. The "packet" is merely a portion of the whole stream that contains related data whose format I control. So far as I'm aware, SDL_net does not provide a means of ascertaining how much data the stream currently holds, useful though that would be. If it did, then I could easily verify first that the stream contains enough for a length indicator, and then that the stream is as long as (or longer than) that indicator says. Something like:

Check if sizeof(stream) >= 2
Read 2 bytes into "size"
Check if sizeof(stream) >= size
Read size bytes and continue if true
Else deal with malicious client

If you know of a way in SDL_Net to see the current amount of data stored in the stream prior to a TCP_recv call, then you've just made me very happy. Unfortunately, looking over SDL_Net's documentation suggests that no such feature exists. Or I'm overlooking it.

Thank you for the input, though. It's all very much appreciated.

Edit: Something has occurred to me. SDLNet_TCP_Recv() blocks until data is found to receive. I interpreted this as meaning it will block until it receives as much data as was passed as an argument (maxlen), which would open the above vulnerability. I may have just grossly misinterpreted this, though. Will it, instead, read as much as is in the stream (up to maxlen), returning a successful result if it reaches maxlen and a failure if it gets less? That makes far more sense to me. Perhaps this is where the problem is - my misconceptions! I'd be happy if that were the case.

Also, apparently I've had this problem before. After ignoring a nagging voice in my head, I checked my 3 year old posts. Turns out I ran into this before, but decided to go with threading to resolve the issue. That was a mistake, and that project failed. The problem persists, however.
You can use SDL_Net's socket set and SDLNet_CheckSockets to avoid blocking reads. You still could block during writes though.
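A minimal sketch of that approach, assuming a single already-connected TCPsocket; the socket set would be allocated once at startup with SDLNet_AllocSocketSet() and the client added with SDLNet_TCP_AddSocket():

#include "SDL_net.h"

// Call once per game-loop iteration.
void poll_client(SDLNet_SocketSet set, TCPsocket client)
{
    // A timeout of 0 makes CheckSockets poll instead of block.
    if (SDLNet_CheckSockets(set, 0) > 0 && SDLNet_SocketReady(client)) {
        char chunk[512];
        int got = SDLNet_TCP_Recv(client, chunk, (int)sizeof(chunk));
        if (got <= 0) {
            // error or disconnect -- drop the client
        } else {
            // append 'got' bytes to an application-side buffer and parse from there
        }
    }
}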

You have the option of building a custom version of SDL_Net which includes a function (or flag during socket creation) to set the socket into non-blocking mode.

I don't understand -- why are you not just sanity checking the packet sizes?



The packet size is not the problem. Not sending the promised data is the problem. A client could send "1" for the size, and then not send a single byte of payload, and the server would still hang. Or the client could send one of the two size bytes, and not the second, and the server would hang.

In a regular sockets application, this is solved with select(). You wait on all sockets, or for a timeout, and then only call recv() or send() on sockets that are ready. recv() and send() are guaranteed not to block if select() says they're ready -- but they may return fewer than the requested number of bytes (greater than 0, though).
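In rough outline, with plain BSD sockets and a single client socket (a real server would put every socket in the fd_set and check each one):

#include <sys/select.h>
#include <sys/socket.h>

void poll_one_socket(int sock)
{
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(sock, &readable);

    struct timeval timeout = { 0, 0 };   // zero timeout: poll, don't wait
    if (select(sock + 1, &readable, NULL, NULL, &timeout) > 0 &&
        FD_ISSET(sock, &readable)) {
        char buf[512];
        // select() reported the socket ready, so this should not block,
        // though it may return fewer bytes than requested.
        ssize_t got = recv(sock, buf, sizeof(buf), 0);
        if (got == 0) {
            // orderly disconnect
        } else if (got < 0) {
            // error
        }
    }
}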
enum Bool { True, False, FileNotFound };
Admittedly I don't know how SDL sockets work, but most likely they work exactly like "normal" BSD sockets.

That is, they will block if zero is available, but as long as at least 1 byte is available, they will not block.

While I don't know a truly good way of dealing with malicious clients for TCP, there is a simple trick that I use with UDP: make the buffer a byte larger than the largest message you expect and request that much. If you actually get that much, the message cannot be legitimate (since you asked for max_size + 1), and if you get less, check what you have and stuff it into the state machine. The nice thing with UDP is that the network stack will kindly throw away the rest. TCP will of course still buffer everything.
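A sketch of that trick with a plain BSD UDP socket -- the MAX_MESSAGE limit and the function name are made up for illustration:

#include <sys/socket.h>

enum { MAX_MESSAGE = 512 };   // largest legitimate message

void read_datagram(int udp_sock)
{
    char buf[MAX_MESSAGE + 1];   // one byte more than any legal message
    ssize_t got = recvfrom(udp_sock, buf, sizeof(buf), 0, NULL, NULL);
    if (got >= (ssize_t)sizeof(buf)) {
        // No legitimate message is this big -- drop it. The stack has
        // already discarded whatever didn't fit in the buffer.
        return;
    }
    if (got > 0) {
        // 'got' bytes form one complete datagram; feed them to the state machine.
    }
}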

While I don't know a truly good way of dealing with malicious clients for TCP


Typically, you re-buffer into the application. Keep an input buffer. When receiving, append to that buffer. Then pass the buffer to a function that does something like:


if (select_said_there_is_no_data) {
    return;
}
// append whatever has arrived to the application-side input buffer
if ((nrec = ::recv(socket, buffer + size, buffer_size - size, 0)) <= 0) {
    user_disconnected();
    return;
}
size += nrec;
// parse as many complete packets as the buffer now contains
while (size >= 2) {
    sz = peek_size_2_bytes(buffer);
    if (sz > maximum_packet_size) {
        disconnect_user(); // corrupt data
        return;
    }
    if (size < 2 + sz) {
        break; // header arrived but the body isn't complete yet -- wait for more data
    }
    dispatch_packet(buffer + 2, sz);
    remove_from_front(2 + sz, buffer);
    size -= 2 + sz;
}


The "peek" and "remove" operations can be made very efficient, not having to memcpy data around most of the time, by using a cyclic buffer.
enum Bool { True, False, FileNotFound };

