UDP socket buffer

Hey, I've recently created a server and a client that communicate with each other using the UDP protocol. The server loop looks like this:

  recvmessages()
  execute()
  simulate()
  foreachclient_sendback()

The client is just a dumb client: it checks input, sends data, receives data and finally renders the game.

Now if I run the server with Sleep(1), the server is updating every position etc. correctly, but the clients process "old" data. I'm wondering if this is because the socket in the client piles up data in its buffer, meaning that, for instance, by the time the client has called recvfrom() and processed one message, 5-6 new messages are already in the buffer, so the next message the client reads is an "old" one. Is that how it works? If so, how does one go about solving this? I'm asking mostly because rendering times in the client's while loop will be different on every computer. Can you reset the buffer or something?

[Edited by - rd000 on November 24, 2008 12:33:49 PM]
You should define a packet send rate. For example, 10 times a second, or 20 times a second, you send packets from the server. This will allow you to define how much bandwidth is used by the server (and each client).
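As a rough illustration, a send-rate gate can be as simple as a timer check in the main loop. This is only a sketch: the chrono timer is standard C++, but server_loop, receive_messages, execute, simulate and send_state_to_all_clients are placeholder names, not anything from this thread.

  #include <chrono>

  void server_loop()
  {
      using clock = std::chrono::steady_clock;
      const auto sendInterval = std::chrono::milliseconds(50);  // 20 sends per second
      auto nextSend = clock::now();

      for (;;) {
          // receive_messages(); execute(); simulate();   // per-frame work, as in the original loop
          if (clock::now() >= nextSend) {
              // send_state_to_all_clients();             // one batched update per client
              nextSend += sendInterval;
          }
      }
  }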

Also, on the client, you should receive, decode and process all packets that are waiting in the network buffer before you allow another frame to be rendered. The loop will look something like:

  while (receive_packet()) {
      process_packet();
  }
  render_one_frame();
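Since the original post mentions recvfrom() and Sleep(), here is a rough Winsock-flavoured sketch of that drain loop. It assumes the UDP socket has already been created, bound and switched to non-blocking mode (e.g. via ioctlsocket(sock, FIONBIO, &mode)); process_packet is a placeholder for your own message handling.

  #include <winsock2.h>

  // Drain everything the OS has queued on this UDP socket before rendering.
  void drain_socket(SOCKET sock)
  {
      char buf[512];

      for (;;) {
          sockaddr_in from;
          int fromLen = sizeof(from);
          int bytes = recvfrom(sock, buf, sizeof(buf), 0,
                               (sockaddr*)&from, &fromLen);
          if (bytes == SOCKET_ERROR) {
              // On a non-blocking socket, WSAEWOULDBLOCK just means "buffer is empty".
              if (WSAGetLastError() != WSAEWOULDBLOCK) {
                  // real error; log or handle it here
              }
              break;
          }
          // process_packet(buf, bytes, from);   // hypothetical message handler
      }
  }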

enum Bool { True, False, FileNotFound };
Ah! That was the thing I had missed; I'm going to try to whip something up. Thanks for your fast answer!
A quick question concerning the sending routine on the server side (I hope this isn't off-topic):

Is the main reason for using an outgoing queue and doing all the sending in one go efficiency and bandwidth control? I assume that since network I/O is expensive it is better to handle it only every so often instead of interrupting the main server execution with it; is that correct?
Quote:Original post by jmp97

Is the main reason for using an outgoing queue and doing all the sending in one go efficiency and bandwidth control? I assume that since network I/O is expensive it is better to handle it only every so often instead of interrupting the main server execution with it; is that correct?


Sending is typically cheap CPU-wise. It's much more likely that you will fill the network buffer before overloading the CPU. Some operations may be expensive, such as compression, encryption or even serialization itself, but typically not unless you're on a gigabit connection. State management may be expensive as well. It's also more likely that simulation-related logic will dominate CPU usage, making all of the above take just a fraction of the total time.

For sending, you have two basic approaches. One is to control sending from the simulation: whenever something changes, store that change in a network queue. The network handler then pulls data off this queue and sends it, up to the available bandwidth.

The other approach is to have the network handler periodically check whether anything has changed, or to let it know that something has changed. At that point, the networking code polls what has changed and builds the packets/messages.

The first approach is closer to an event notification model, where you "fire" an event regardless of whether anyone is listening, or whether the listeners are capable of receiving. It may be simpler to design. The downside is that if a client is overloaded, the networking code will need to discard some of these changes.

With the second approach, the simulation isn't concerned with networking at all; it simply simulates from one state to the next. When activated, the networking code scans the most recent state and tries to send as much as the client and the network stack can accept at that moment. This approach may scale better once clients get overloaded, since data which cannot be sent doesn't get processed in the first place.

Representation of state may be an issue. With the first approach you can get by with a single state (and possibly a partial temporary one). For the second approach, you either need to maintain a full per-client state, or keep track of the last n states (where n is large enough for the slowest client to still be able to receive enough data to maintain a valid state).
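A rough sketch of the first approach, just to make it concrete: the simulation pushes serialized changes into a queue, and the network handler later drains that queue up to a per-send byte budget. All names here are placeholders, not from the thread.

  #include <cstddef>
  #include <deque>
  #include <vector>

  // One serialized state change, pushed by the simulation.
  struct StateChange {
      std::vector<char> payload;
  };

  std::deque<StateChange> outgoing;   // the "network queue" of the first approach

  // Called by the network handler: send queued changes until the budget for
  // this tick is used up; anything left over waits (or gets discarded).
  void flush_outgoing(std::size_t byteBudget)
  {
      std::size_t sentBytes = 0;
      while (!outgoing.empty()) {
          const StateChange& change = outgoing.front();
          if (sentBytes + change.payload.size() > byteBudget)
              break;
          // send_packet(change.payload);   // hypothetical wrapper around sendto()
          sentBytes += change.payload.size();
          outgoing.pop_front();
      }
  }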

The math (let's say network packets are 500 bytes):
- A 100 Mbit network can send ~21000 such packets per second, i.e. ~0.05 ms per packet.

If your sending routine takes longer than that to generate a single packet, you will not be able to utilize the full bandwidth.

- The network buffer can be up to 64k, or ~130 packets x 0.05 ms = 6.5 ms.

You can send ~130 such packets before you fill up the buffer. You need to send at least once per 6.5 ms, but no more often than once per 0.05 ms, to fully utilize a 100 Mbit network.

Serialization in the trivial case is just a memcpy, and you can assume a 1Gb/sec rate there, so it should take a fraction of 0.05 ms. Determining state changes may be more expensive.
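For reference, the arithmetic behind those figures, written out as a sketch (payload bytes only; the ~21000 packets/sec above is lower than the 25000 here because it also accounts for per-packet framing overhead):

  constexpr double bytesPerSecond  = 100e6 / 8.0;                   // 100 Mbit/s = 12.5 MB/s
  constexpr double packetsPerSec   = bytesPerSecond / 500.0;        // 25000 packets/s (payload only)
  constexpr double msPerPacket     = 1000.0 / packetsPerSec;        // 0.04 ms per packet
  constexpr double packetsInBuffer = 65536.0 / 500.0;               // ~131 packets fit in a 64k buffer
  constexpr double bufferDrainMs   = packetsInBuffer * msPerPacket; // ~5.2 ms (~6.5 ms at 0.05 ms/packet)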

PS: those approximations are just that. YMMV.
I see, thanks for that. Actually, for my purposes the simulation is mostly a reflection of client data; the world itself is rather static. For this reason I currently do work on the server only when client state changes, i.e. when a client sends an action to the server. In such a system I have no periodic interval at which to send queued outgoing data, and I was thus wondering whether I should just send data like this:

  if (client has moved) {
      for all clients in range {
          send update
      }
  }


or like this

  if (client has moved) {
      for all clients in range {
          store update message to client's outgoing buffer
      }
  }
  ...
  for all clients with outgoing messages {
      send messages in client's outgoing buffer
  }


I think in the long run I would rather have a loop-based simulation, because some things like cleanup of dropped items need to occur periodically anyway, and then I would opt to group the sends instead of emitting them all over the place.
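A minimal sketch of that buffered variant, with placeholder names (queue_update would be called from the movement handling, flush_all once per server tick):

  #include <map>
  #include <vector>

  using ClientId = int;                            // placeholder client handle

  std::map<ClientId, std::vector<char>> pending;   // per-client outgoing bytes

  // Queue a serialized update for one client instead of calling sendto() immediately.
  void queue_update(ClientId client, const std::vector<char>& msg)
  {
      std::vector<char>& buf = pending[client];
      buf.insert(buf.end(), msg.begin(), msg.end());   // several updates coalesce into one packet
  }

  // Once per tick: one send per client that has pending data.
  void flush_all()
  {
      for (auto& entry : pending) {
          if (!entry.second.empty()) {
              // send_packet(entry.first, entry.second);   // hypothetical wrapper around sendto()
              entry.second.clear();
          }
      }
  }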

Ok, I didn't mean to change the subject of the thread that much, apologies for that.

