Protecting single threaded TCP server from malicious packets

Zouflain    548
At present I'm trying to write a fairly simple, single-threaded server application in C++ that uses SDL_net's TCP sockets for all of its communication. Unfortunately, this means dealing with blocking on a single thread, which is a headache, but not nearly so much of one as concurrent processing. Packets of information sent on the TCP connection have the (presumably common) binary format 'total_size:header_size:header:data1_size:data1:data2_size: ... :data#_size:data#', and processing a packet is essentially (in terrible 6:00 AM pseudocode):
[code]// Note: all sizes are of type Uint16 ("short"); needs <vector>, <cstring>, SDL_net.h
Uint16 size = 0;
SDLNet_TCP_Recv(sock, &size, sizeof(size)); // read the 16-bit packet size
if (size > 0) {
    std::vector<char> buffer(size);
    SDLNet_TCP_Recv(sock, buffer.data(), size); // read the whole packet into a buffer. Since it's TCP, I KNOW I will receive this much data... and this promise is the problem
    size_t pos = 0;
    while (pos + sizeof(Uint16) < size) { // cycle through all parts of the buffer
        Uint16 data_size = 0;
        std::memcpy(&data_size, &buffer[pos], sizeof(data_size)); // get the next size out of the buffer
        pos += sizeof(data_size);
        if (pos + data_size <= size) { // prevent accidental/malicious overrun of total size
            const char* data = &buffer[pos]; // data_size bytes of payload
            // Handle the data
            pos += data_size; // update the position
        } else break;
    }
}
[/code]The problem I see is the combination of SDLNet's blocking sockets and the client's ability to promise data. A client could, theoretically, send just a packet in the above format like "9999999:", in other words promise a huge chunk of data and never actually send it, and the server would naively wait until Ragnarok for 9999999 bytes of data to arrive when it calls TCP_recv(9999999). Even if SDLNet provided some sort of timeout feature (it doesn't), a client could spam those packets if the server didn't have many layers of protection written over it (which is bad for a server dealing with more than one client) and stall it repeatedly for that timeout value. Sure, I can set a limit on how much a packet can promise, but even if it promises 1 byte (e.g., TCP_recv(1)) and never actually sends it, the same problem exists.
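To make the attack concrete, here is roughly all a hostile client would need; the host and port are placeholders, but the SDL_net calls are the ordinary ones:

[code]
// Hypothetical malicious client: promise a payload, never deliver it.
IPaddress ip;
SDLNet_ResolveHost(&ip, "server.example.com", 4000); // placeholder host/port
TCPsocket sock = SDLNet_TCP_Open(&ip);

Uint16 promised = 9999;
SDLNet_TCP_Send(sock, &promised, sizeof(promised)); // send only the size header...
while (true) {
    SDL_Delay(1000); // ...then hold the connection open and send nothing
}
[/code]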

Since I can't get around SDLNet_TCP's blocking nature (I can't find an asynchronous setting) and I am specifically and intentionally avoiding all concurrency (no threads, no fork(), or otherwise), I either need to change the packet format or abandon TCP altogether. I was wondering if there is a third way that avoids both alternatives without resorting to threads.

PS> I have always and will always hate blocking I/O. I've never actually experienced any of the benefits, but man has it made my life painful.

clb    2147
As you already noticed, the rule of thumb here is to never allow a client to make the server block. You could try moving the socket reading to another thread, which blocks and waits for new messages to appear. Or, you could try doing nonblocking receives directly with TCP (set O_NONBLOCK on the socket; does SDLNet give you the socket descriptor?). This approach is typical if your game runs a realtime loop over and over again, because you can simply add a nonblocking check each frame to see whether you have received any data.
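SDL_net's public API keeps TCPsocket opaque, so this is strictly a hack, but a POSIX-only sketch of the O_NONBLOCK idea might look like the following. The struct mirrors SDL_net's internal _TCPsocket layout, which is an implementation detail and may change between versions:

[code]
#include <fcntl.h>

// Hypothetical peek at SDL_net's internal _TCPsocket layout; not public API.
struct TCPsocketPeek { int ready; int channel; };

bool set_nonblocking(TCPsocket sock) {
    int fd = reinterpret_cast<TCPsocketPeek*>(sock)->channel;
    int flags = fcntl(fd, F_GETFL, 0);
    return flags != -1 && fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}
[/code]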

Zouflain    548
I'm very, very adamant about keeping this single-threaded (no concurrency). I appreciate the suggestion, and I realize that threading would solve this particular issue, but it would also introduce a number of problems that, when I initially set out on this project, I specifically sought to avoid. Personally, I love the concept of scalability via threading (and multiprocessing for resource-intensive jobs), but I've made the decision not to introduce the complexity of concurrency and all of its related issues into this project, and I'll stick by it.

I'm not sure if SDL_net exposes that functionality. Most of my Google/GameDev searches have strongly pointed towards no, there is no asynchronous TCP for SDL_net. Quite explicitly, actually. But that may be wrong. If anyone can confirm, or better yet refute, this, I would appreciate it greatly.
[quote name='Katie' timestamp='1313936556' post='4851903']
I don't understand -- why are you not just sanity checking the packet sizes?
[/quote]The trouble is that this whole "packet" thing is merely something I've layered on top of TCP, unless I'm grossly misunderstanding things. UDP naturally allows for such a sanity check, because it's a packet-based protocol, but TCP delivers a byte stream in chunks of arbitrary size and stuffs them into a stream buffer; TCP_recv just reads out of the stream. The "packet" is merely a portion of the whole stream that contains related data whose format I control. So far as I'm aware, SDL_net does not provide a means of ascertaining how much data is currently buffered in the stream, useful though that would be. If it did, I could easily verify first that the stream contains enough bytes for a length indicator, and then that the stream holds at least as many bytes as that indicator says. Something like:

Check if sizeof(stream) > 2
Read 2 bytes into "size"
Check if sizeof(stream) > size
Read size bytes and continue if true
Else deal with malicious client

If you know of a way in SDL_Net to see the current amount of data stored in the stream prior to a TCP_recv call, then you've just made me very happy. Unfortunately, looking over SDL_Net's documentation suggests that no such feature exists. Or I'm overlooking it.
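For what it's worth, the BSD socket layer underneath can report this via the FIONREAD ioctl, though that means going behind SDL_net's back to the raw descriptor (the same hack mentioned earlier in the thread); a POSIX-only sketch, where fd is assumed to be the extracted descriptor:

[code]
#include <sys/ioctl.h>

// Returns the number of bytes currently buffered on the socket, or -1 on error.
int bytes_available(int fd) {
    int avail = 0;
    if (ioctl(fd, FIONREAD, &avail) == -1)
        return -1;
    return avail;
}

// The check above then becomes:
//   if (bytes_available(fd) >= 2)    read the 2-byte size
//   if (bytes_available(fd) >= size) read the packet, else wait or disconnect
[/code]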

Thank you for the input, though. It's all very much appreciated.

Edit: Something has occurred to me. SDLNet_TCP_Recv() blocks until data is found to receive. I interpreted this as meaning it will block until it receives as much data as was passed as an argument (maxlen), which would open the above vulnerability. I may have grossly misinterpreted this, though. Will it instead read as much as is in the stream (up to maxlen), returning a successful result if it reaches maxlen and a failure if it's less? That makes far more sense to me. Perhaps this is where the problem is: my misconceptions! I'd be happy if that were the case.

Also, apparently I've had this problem before. After ignoring a nagging voice in my head, I checked my three-year-old posts. It turns out I ran into this before but decided to go with threading to resolve the issue. That was a mistake, and that project failed. The problem persists, however.

rip-off    10979
You can use [url="http://jcatki.no-ip.org:8080/SDL_net/SDL_net.html#SEC41"]SDL_net's socket sets and SDLNet_CheckSockets to avoid blocking reads[/url]. You could still block during writes, though.
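A minimal sketch of that polling pattern, assuming client is an already-connected TCPsocket and eliding the application-side buffering:

[code]
SDLNet_SocketSet set = SDLNet_AllocSocketSet(1);
SDLNet_TCP_AddSocket(set, client);

// Each frame: poll with a zero timeout so the game loop never stalls.
if (SDLNet_CheckSockets(set, 0) > 0 && SDLNet_SocketReady(client)) {
    char chunk[512];
    int received = SDLNet_TCP_Recv(client, chunk, sizeof(chunk));
    if (received <= 0) {
        // Disconnect or error: drop the client.
    } else {
        // Append 'received' bytes to the application-side input buffer.
    }
}
[/code]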

You have the option of building a custom version of SDL_Net which includes a function (or flag during socket creation) to set the socket into non-blocking mode.

hplus0603    11356
[quote name='Katie' timestamp='1313936556' post='4851903']
I don't understand -- why are you not just sanity checking the packet sizes?
[/quote]


The packet size is not the problem. Not sending the intended size is the problem. A client could send "1" for size, and then not send a byte, and the server would still hang. Or the client could send one of the two size bytes, and not the second, and the server would hang.

In a regular sockets application, this is solved with select(). You wait on all sockets, or for a timeout, and then only call recv() or send() on sockets that are ready. recv() and send() are guaranteed not to block if select() says they're ready, but they may return less than the requested number of bytes (greater than 0, though).
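For example, a poll-style check over a raw BSD socket; client_fd is assumed to be a connected socket, and error handling is elided:

[code]
#include <sys/select.h>
#include <sys/socket.h>

fd_set readable;
FD_ZERO(&readable);
FD_SET(client_fd, &readable);

timeval timeout = {0, 0}; // zero timeout: return immediately instead of waiting
if (select(client_fd + 1, &readable, NULL, NULL, &timeout) > 0 &&
    FD_ISSET(client_fd, &readable)) {
    char chunk[512];
    ssize_t n = recv(client_fd, chunk, sizeof(chunk), 0);
    if (n <= 0) { /* disconnect or error */ }
    else        { /* append n bytes to the input buffer */ }
}
[/code]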

samoth
Admittedly, I don't know how SDL's sockets work, but most likely they work exactly like "normal" BSD sockets.

That is, they will block if [i]zero[/i] bytes are available, but as long as at least one byte is available, they will not block.

While I don't know a truly good way of dealing with malicious clients over TCP, there is a simple trick that I use with UDP: make the buffer one byte larger than the largest message you expect and request that much. If you actually receive that much, the message is invalid (since a legitimate message can never be max_size + 1 bytes), and if you get less, check what you have and stuff it into the state machine. The nice thing with UDP is that the network stack will kindly throw away the rest. TCP, of course, will still buffer everything.
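A sketch of that trick with SDL_net's UDP API; MAX_MESSAGE and the bound udp_socket are assumptions:

[code]
// Assumed protocol limit; allocate one extra byte to detect oversized messages.
const int MAX_MESSAGE = 512;
UDPpacket* packet = SDLNet_AllocPacket(MAX_MESSAGE + 1);

if (SDLNet_UDP_Recv(udp_socket, packet) == 1) {
    if (packet->len > MAX_MESSAGE) {
        // Oversized datagram: malformed or malicious, so drop it.
    } else {
        // Feed packet->data (packet->len bytes) into the state machine.
    }
}
SDLNet_FreePacket(packet);
[/code]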

hplus0603    11356
[quote name='samoth' timestamp='1313963906' post='4852024']
While I don't know a truly good way of dealing with malicious clients for TCP
[/quote]

Typically, you re-buffer into the application. Keep an input buffer. When receiving, append to that buffer. Then, pass the buffer to a function that does something like:

[code]
if (select_said_there_is_no_data) {
    return;
}
// Note the parentheses: '<=' binds tighter than '=', so the assignment must be wrapped.
if ((nrec = ::recv(sock, buffer + size, buffer_size - size, 0)) <= 0) {
    user_disconnected();
    return;
}
size += nrec;
while (size >= 2) {
    sz = peek_size_2_bytes(buffer);
    if (sz > maximum_packet_size) {
        disconnect_user(); // corrupt data
        return;
    }
    if (size < 2 + sz) {
        break; // only part of this packet has arrived; wait for more data
    }
    dispatch_packet(buffer + 2, sz);
    remove_from_front(2 + sz, buffer);
    size -= 2 + sz;
}
[/code]

The "peek" and "remove" operations can be made very efficient, not having to memcpy data around most of the time, by using a cyclic buffer.
