Richard3d

Basic Multi-threaded networking question


Recommended Posts

Hello,

I have a newbie question about multithreaded networking. I have an application that uses eNet to pass messages between a client and a server; in this simple implementation I only need the server to send data to the client. Certain messages arrive in different packets, but the client needs to "handle" them at the same time to keep gameplay objects in sync (this is a constraint of the project and cannot be changed). My desired design is the following:

-the client spawns a new thread which reads network packets off the UDP stream (this way network traffic is always being serviced regardless of framerate, so you can't get false timeouts from another piece of code)

-the packets are stored into a dynamic array (STL vector)

-the application (main process/thread) processes all the packets in the array when it is time for an update

My questions are the following:

What is the best strategy to make the vector thread-safe (since I can't have the network thread adding packets, while the main thread tries to read simultaneously)?

Or maybe the better question is what is the best strategy to let the main thread know when the data is ready for processing/consumption and vice-versa (main thread tells network thread to collect more packets from the stream again)?

The only problem with the second idea is that I don't want the network thread to hang waiting to collect packets while the main thread does the processing. Should I use two vectors (one local to the network thread that is always filling, and one shared between the two threads) and then use an event? Thanks in advance for helping a beginner - R

I see two basic options that would work well for your situation.

One is to simplify everything drastically by not multithreading at all. Simply check the socket via select() or a similar mechanism when you have time to process network events in the main game loop. There's really no benefit in buffering a bunch of network events if you don't have the CPU time to process them anyway, and the OS will typically buffer things satisfactorily in any case, so adding your own buffering mechanism is just another point of failure.

The other option would be a simple double-buffering scheme. Maintain two vectors, and a pointer to each vector. When your network logic receives an event, store it in the (arbitrarily designated) "back" buffer. When the game loop becomes available to process network events, swap the two pointers, effectively making the "back" buffer become the "front" buffer. From then on, the game loop does all of its processing on the "front" vector. This ensures that your two threads never stomp on each other's vectors. Note that you will need to either provide an atomic exchange of the pointers (usually via OS or CPU intrinsic functions) or protect the buffer swap (on the game loop thread) and all accesses to the back buffer in the network thread with a critical section.


[quote]
...
There's really no benefit in buffering a bunch of network events to process if you don't have the CPU time to do it anyways, and the OS will typically buffer things satisfactorily in any case,
[/quote]
The days when a standard PC had only a single core to process events of any sort are gone. And with UDP sockets, the OS will definitely throw away packets when its buffer overflows. That is no problem from the OS's perspective: UDP makes no guarantee that packets get delivered, so the application is expected to handle such data loss anyway.


[quote]
so adding your own buffering mechanism is just another point of failure.
[/quote]
You are absolutely right here. But any added code adds a point of failure.

Having a thread to process incoming network events adds the benefit of validating and pre-processing the events for the game thread. This lowers the complexity of the game thread: if you "know" that all incoming data is valid, you can remove a lot of validation code from the game loop.

Another good reason to process the network events yourself is that you can define your own rules to detect and handle flooding situations.

How does a multicore environment affect my argument?

My point is this: suppose we have N cores. N-1 cores are dedicated to "game logic", and 1 core is dedicated to simply buffering (and possibly doing some validation for) network traffic.

If the N-1 "game logic cores" are all busy to the point that they cannot handle additional network event simulation, having a spare core buffering the data does nothing. It actually can compound the problem, because you can't guarantee that the workload will ever decrease to the point where the simulation cores can catch up with the backlog.

My proposal is to have N game logic cores instead, where each can synchronize access to the network stream and pull events directly from the OS buffers as desired, and process them. Of course, if you already are using N game logic cores where N > 1, you probably have a very sophisticated threading model to deal with, so the question rapidly becomes one of how you distribute simulation work across cores in the first place. Typically an N-core simulation has a synchronization phase somewhere in each tick (beginning or end typically) and that synchronized time span is perfect for harvesting network traffic and distributing it across cores for simulation purposes. No need to inject another thread into the mix.

Most realtime networking models are designed to discard events that can't be processed, and rely on more up-to-date information from the network to resume simulation. This is standard practice in any network model that relies on UDP; you don't send transactional commands, but rather state snapshots. If snapshot S can't be processed or is lost on the network, you wait for snapshot S+1 and move along as before.

Unless you're trying to write a very high scalability IOCP-based server, adding this kind of threading to network logic is unnecessary complication and has many difficult pitfalls. The OP is by his own words a "newbie" in multithreaded netcode, and therefore probably not equipped to safely architect a more complex solution. I'm simply offering alternatives that minimize the chances of something getting overly messy.


[quote]
Most realtime networking models are designed to discard events that can't be processed, and rely on more up-to-date information from the network to resume simulation.
[/quote]
The problem is that the OS network buffer does the opposite: it drops recent packets in favour of older ones. By pulling them from the OS as soon as possible, the application gets to choose which events to process or drop.

That said, I wouldn't necessarily use this argument to decide on a multi-threaded solution. In any case, such a networking thread could be added later if it turns out to be necessary; there is no need to optimise this prematurely. A newbie is unlikely to ever need this optimisation.

So, everybody talked about the threading, and nobody answered the question? :-)
Yes, a thread for networking generally just means you're doing it for the first time, but assuming that's easiest for the OP, here's what I would do in that case:
I would use a CRITICAL_SECTION on Windows, or a pthread_mutex_t on Linux, to guard access to the vector. Lock it to push_back a packet. Lock it to check the size of the vector (note: size() is *not* thread-safe), and if the size is what you want, swap its contents with an empty vector to "dequeue" all items at once. Then work on the dequeued vector.
The time spent holding the lock is so small, and so infrequent (at most, one packet per player per tick), that it really doesn't matter, compared to all the other issues you'll have to work out.
