TCP Layer problems

11 comments, last by jsaade 16 years, 5 months ago
I have fully implemented a TCP layer that should have worked in a general network engine. As TCP is stream based, I am dividing the received bytes into packets which are checked for correctness. This works. I tried a simple game where the server has around 40 network objects, and the client can control 4 of these objects with his keyboard and mouse. The current implementation is:
- client sends mouse + keyboard input
- client receives update messages for each object
- server has a virtual keyboard + mouse per client
- server runs all game logic
- server transmits object update packets
When there are 40 objects, the size of the pending data on the TCP layer is very large and most of the time new[] fails to allocate memory. I am having the server send all the network object states every game loop. I re-tried the same test using UDP and it worked like a charm. Is it possible that TCP is this slow even on a LAN? I would really like an experienced user's opinion, because I think I might be doing something wrong.
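For reference, a minimal sketch of the kind of length-prefixed framing described above; the struct and field names are hypothetical, not taken from the poster's code:

// Hypothetical wire header used to frame messages on the TCP stream.
// Every message on the wire is [MsgHeader][payload of header.size bytes].
#include <cstdint>

#pragma pack(push, 1)
struct MsgHeader {
    uint16_t type;   // which kind of message (input, object update, ...)
    uint16_t size;   // payload length in bytes, not counting the header
};
#pragma pack(pop)

// The receiver accumulates stream bytes until it holds sizeof(MsgHeader) +
// header.size bytes, hands one complete packet to the game layer, and repeats.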
I'm not sure how your implementation is set up, but with UDP you're basically stateless, so it sounds like maybe you have some I/O blocking issues that are slowing you down with your TCP setup? Not quite sure why you would have malloc errors, though; I'll have to think about that one. Any code snippets you can post?
Check out my current project on Kickstarter: Genegrafter
Quote: When there are 40 objects, the size of the pending data on the TCP layer is very large and most of the time new[] fails to allocate memory. I am having the server send all the network object states every game loop.

I re-tried the same test using UDP, it worked like a charm.


Are the messages you send over UDP reliable?

Unlike TCP, which will recover missing traffic, UDP won't.

This would explain the congestion with TCP - there's too much pending data waiting for ack, resulting in saturation of network buffers. With UDP, if the buffer is full, incoming messages will be discarded.

My guess would be you're not processing fast enough for the traffic you're receiving, resulting in congestion.

I don't know why new[] would fail though. Network stack should have fixed buffers, and any traffic above that will be discarded. New failing would indicate other memory problems.

How much traffic do 40 objects generate? How often are you updating? While 100 Mbit is more than enough for this size of world, your encoding might be saturating something. For example, sending every mouse move can generate 50-200+ messages per second. And while this isn't breaking anything, if your server isn't able to handle this at the same rate, you will run into problems.
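As a rough back-of-the-envelope check, here is a hypothetical estimate assuming 32 bytes of state per object (a number not given in the thread):

// Hypothetical traffic estimate: 40 objects, 60 updates/s, 32 bytes each.
#include <cstdio>

int main() {
    const int objects       = 40;
    const int updatesPerSec = 60;
    const int bytesPerState = 32;   // assumed payload size per object
    const int bytesPerSec   = objects * updatesPerSec * bytesPerState;
    std::printf("%d bytes/s (~%.1f KB/s) per client\n",
                bytesPerSec, bytesPerSec / 1024.0);
    // Prints 76800 bytes/s (~75.0 KB/s) per client: tiny for a LAN, so raw
    // bandwidth is unlikely to be the problem unless states are far larger
    // or the receiver stops draining the socket.
    return 0;
}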
Thanks for the info.
It seems that with input updates and many clients in a session, a server using TCP will be very slow. If every object needs to synchronize, that means 40 object messages; multiply that by the number of connected clients, and that is per second.
One solution is to have one update message for all the objects (compressed, of course) and send that over TCP. The second solution (this is what I am doing right now) is to send game state server messages over UDP; it is not as reliable, but with 60 messages per second for every object it works fine.
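A minimal sketch of that first option, batching every object's state into one message per tick; the types and the buildWorldUpdate name are assumptions, not the engine's actual API:

// Hypothetical per-tick batching: one TCP message carrying every object's state.
#include <cstdint>
#include <cstring>
#include <vector>

struct ObjectState {      // assumed fixed-size state per object
    uint32_t id;
    float    x, y, rot;
};

std::vector<uint8_t> buildWorldUpdate(const std::vector<ObjectState>& states) {
    std::vector<uint8_t> msg;
    const uint16_t count = static_cast<uint16_t>(states.size());
    msg.resize(sizeof(count) + count * sizeof(ObjectState));
    std::memcpy(msg.data(), &count, sizeof(count));
    std::memcpy(msg.data() + sizeof(count), states.data(),
                count * sizeof(ObjectState));
    return msg;   // prepend the usual message header and send once per tick
}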
First: synchronization in network games is typically done at rates between 1 Hz (one update a second per object) and 30 Hz (thirty updates a second per object). Updates are not full state dumps, but instead only deltas, or maybe even only input needed to co-simulate the object on the client.

Second: Are you using blocking sockets? Are you using select()? How do you deal with partial writes to the TCP socket for updates, if the outgoing buffer is full?

Third: You must manage the outgoing bandwidth. That's something every networked game does. Select what to update, each time you update; don't send everything every time.

Fourth: If new[] fails, then you have a bug. Unless your objects have a state size of a megabyte each...

Fifth: Have you turned on the TCP_NODELAY option?
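On the second and fifth points, here is a minimal sketch of handling a partial non-blocking send and enabling TCP_NODELAY. POSIX-style calls are shown for brevity (Winsock needs char* casts and WSAGetLastError), and the per-client pending buffer is an assumption about how the engine might queue outgoing data:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstdint>
#include <vector>

// Disable Nagle's algorithm so small updates go out immediately.
void setNoDelay(int sock) {
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}

// Non-blocking send: whatever doesn't fit in the kernel buffer stays in a
// per-client pending queue and is retried next tick, instead of growing
// without bound or blocking the game loop.
void sendOrQueue(int sock, std::vector<uint8_t>& pending,
                 const uint8_t* data, size_t len) {
    pending.insert(pending.end(), data, data + len);
    ssize_t sent = send(sock, pending.data(), pending.size(), 0);
    if (sent > 0) {
        pending.erase(pending.begin(), pending.begin() + sent);
    } else if (sent < 0 && errno != EWOULDBLOCK && errno != EAGAIN) {
        // real error: disconnect this client
    }
}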
enum Bool { True, False, FileNotFound };
Sorry it took me a while to write back, but I have been really busy.
Anyway, here are the answers:
For a start, I am doing a stress test; this means I am deliberately setting up the worst-case scenario to test the base network engine.
First: synchronization is done at 60 Hz, and the server sends full object states.
The client sends full input states.
Of course these could be deltas, but as I said, I am trying to use as much bandwidth as possible in order to test the engine.

Second: I am using non-blocking sockets in combination with another thread polling the sockets (using select()). If there is data on a socket and it is UDP, read the data and enqueue it (the message queue is managed on the main thread to minimize wait time on the networking thread). If the socket is TCP, read the incoming buffer. Each packet has its header; if a packet is less than the header size, it is incomplete, so just read the remainder of the packet and continue. (A sketch of this polling loop is at the end of this post.)

Third: Already answered in 1st.

Fourth: new has been fixed; it was a TCP send/receive error. I am usually sending a structure which is the header, followed by the data in the same message. The problem was when the received TCP stream was even less than the header size; this has been fixed.

Fifth: TCP_NODELAY is on

I also have another layer implemented above this for managing the network in the game, i.e. client-side prediction, deciding what to send and what not to send, and a minimum update rate from the server side.

My big problem is the following:
I am testing this scenario:
- The server sends 60 updates per second for each object.
- The client sends 60 input updates per second.
I am having 2 planes fly around and shoot. The bullets are dynamic network objects which are updated by the server (created, destroyed, moved); the client just interpolates. So at any time I can have around 400-600 sprites managed by the server. This is a lot of messages, and of course the simulation would be slow. But using UDP everything is working. If I use TCP for sending the updates, after a while the network thread will just lock up. I am still trying to debug the error, but it is a bit hard using pthreads on Windows.
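Here is a minimal sketch of the polling thread described in the second answer above; the names are hypothetical and the calls are POSIX-style (Winsock differs slightly), with error handling omitted:

// Hypothetical network-thread iteration: poll all sockets with select(),
// read whatever is ready, and push data towards a queue the main thread drains.
#include <sys/select.h>
#include <sys/socket.h>
#include <vector>

void pollOnce(const std::vector<int>& sockets) {
    fd_set readSet;
    FD_ZERO(&readSet);
    int maxFd = -1;
    for (int s : sockets) {
        FD_SET(s, &readSet);
        if (s > maxFd) maxFd = s;
    }
    timeval timeout = {0, 1000};   // 1 ms, so the thread never blocks for long
    if (select(maxFd + 1, &readSet, nullptr, nullptr, &timeout) <= 0)
        return;
    for (int s : sockets) {
        if (!FD_ISSET(s, &readSet)) continue;
        char buf[4096];
        ssize_t n = recv(s, buf, sizeof(buf), 0);  // UDP: one datagram; TCP: a stream chunk
        if (n > 0) {
            // append to this connection's receive buffer, or enqueue the datagram
        }
    }
}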
Quote: if a packet is less than the header size, it is incomplete, so just read the remainder of the packet and continue


There is no guarantee that the remainder of the packet will be there yet. Someone who opened a connection to your socket, and sent a single byte, and then left it open, would lock up your server if you just "read the rest."
enum Bool { True, False, FileNotFound };
If I understand what you are saying, this can happen at any time...
I read your implementation at http://www.mindcontrol.org/~hplus/stream-messages.html
I do not think it solves the issue you discussed; any solution?
I am thinking of re-writing the TCP receive in a more general way (the TCP receive is actually written by another guy on the team, and I am just patching it up, so if the patching becomes messier I guess I might have to re-write it).
If you look at the code later in the article, it buffers data until it has enough to dequeue a header, which means it can tell how much data it really has to have in the buffer.
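In sketch form, that buffering approach looks roughly like the following; this is a paraphrase under the same header assumptions as earlier, not the article's actual code:

// Buffer incoming TCP bytes and only dequeue a packet once both the header
// and the full payload it announces have arrived.
#include <cstdint>
#include <cstring>
#include <vector>

struct MsgHeader { uint16_t type; uint16_t size; };

bool tryDequeue(std::vector<uint8_t>& buf, MsgHeader& hdr,
                std::vector<uint8_t>& payload) {
    if (buf.size() < sizeof(MsgHeader))
        return false;                              // not even a full header yet
    std::memcpy(&hdr, buf.data(), sizeof(hdr));
    if (buf.size() < sizeof(hdr) + hdr.size)
        return false;                              // payload still incomplete
    payload.assign(buf.begin() + sizeof(hdr),
                   buf.begin() + sizeof(hdr) + hdr.size);
    buf.erase(buf.begin(), buf.begin() + sizeof(hdr) + hdr.size);  // consume exactly one packet
    return true;
}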

A working implementation of the same can be found in the Etwork library: http://www.mindcontrol.org/~hplus/etwork/

enum Bool { True, False, FileNotFound };
Oh, thanks a lot, this really helped, but I was concerned about this function:

if( gotHdr_ && buf_.size() >= packet_.h.size ) {
    packet_.data = buf_.data();
    return &packet_;
}


It returns the packet, but buf_.poll() could have easily read more than the packet size. This can be easily managed, though, by getting the true size from the header.
I have already looked at Etwork and I find that it does its job in a nice and really organized manner. If I were able to include an external library in this project, my choice would be to use yours. Thanks for your help.

