Concurrency - heap vs. shared memory segment

Started by dangerdaveCS
6 comments, last by dangerdaveCS 14 years, 2 months ago
Hey all. I've recently been on a mission to tidy up my custom event manager code. I've been looking at Boost's boost::interprocess::managed_memory_segment. My problem is that I'm just not understanding why it would be advantageous to go through the rigmarole of using that method of shared memory, when I could just store my shared data on the heap...? [As a bit of background: my plan is to use boost::interprocess::message_queue to communicate between threads. The message will simply be a tuple of an event ID plus an (optional) void* pointer to additional data. Then I can use a switch/case block on the event ID to cast the void* pointer to the correct type and handle the message. Maybe there is a better design pattern I haven't thought of?]
Dave.
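For reference, the design the original post describes might look something like this minimal sketch. The event names and payload types here are invented for illustration; the original doesn't specify them.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical event IDs and payload types -- illustrative names only.
enum EventId : std::uint32_t { EVT_PLAYER_DIED, EVT_SCORE_CHANGED };

struct ScorePayload { int newScore; };

// The "tuple of event ID plus optional void*" message from the post.
struct EventMessage {
    EventId id;
    void*   payload; // cast based on id in the handler
};

int handle(const EventMessage& msg) {
    switch (msg.id) {
    case EVT_SCORE_CHANGED: {
        auto* p = static_cast<ScorePayload*>(msg.payload);
        return p->newScore;
    }
    case EVT_PLAYER_DIED:
    default:
        return -1;
    }
}
```

The drawback, as the replies below point out, is that every new event type grows the switch and every handler must get the cast right by hand.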
Shared memory means two separate processes can share memory, which they can't do with the heap. [edit oops didn't realize interprocess was the name of the lib]
Quote:Original post by dangerdaveCS
Hey all. I've recently been on a mission to tidy up my custom event manager code. I've been looking at Boost's boost::interprocess::managed_memory_segment. My problem is that I'm just not understanding why it would be advantageous to go through the rigmarole of using that method of shared memory, when I could just store my shared data on the heap...?


Because shm is shared between processes (aka applications), while heap is not.


Quote:[As a bit of background: my plan is to use boost::interprocess::message_queue to communicate between threads.
Wrong tool. Read the namespace again. It is for communication between processes.


Quote:The message will simply be a tuple of event ID plus a (optional) void* pointer to additional data. Then I can use a switch/case block on the event ID to cast the void* pointer to the correct type then handle the message. Maybe there is a better design pattern I haven't thought of?]


boost::asio::io_service::post/dispatch. Typesafe, cross-thread, function invocation.
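To see why post/dispatch is type-safe where the void*+switch design isn't: a posted handler carries its typed arguments inside the closure, so no cast is ever needed. This is a toy plain-C++ sketch of that idea, not the actual asio API (asio adds the threading and I/O machinery on top).

```cpp
#include <cassert>
#include <functional>
#include <queue>

// A toy single-thread "post" queue illustrating type-safe deferred calls.
// boost::asio's post/dispatch do this (plus thread dispatch); this is
// only the core idea.
std::queue<std::function<void()>> pending;

void post(std::function<void()> fn) { pending.push(std::move(fn)); }

void run() {
    while (!pending.empty()) {
        auto fn = std::move(pending.front());
        pending.pop();
        fn(); // typed arguments travel inside the closure -- no void* casts
    }
}
```

Compare with the switch/case design: here the compiler checks the handler's argument types at the call site, instead of you casting a void* at the receiving end.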
Aha! Feeling a bit more enlightened now. I thought process==thread, but process==application. That clears things up a little.

Weird, the first library I looked at was boost::asio, but I thought that was meant for networks, rather than threads?
Dave.
Quote:Original post by dangerdaveCS
Aha! Feeling a bit more enlightened now. I thought process==thread, but process==application. That clears things up a little.

Weird, the first library I looked at was boost::asio, but I thought that was meant for networks, rather than threads?


It's a completely general framework for performing things asynchronously. Yes, it comes out-of-the-box with plugin classes that allow you to do asynchronous network i/o, and if you're on windows it also comes out-of-the-box with a plugin class that lets you do asynchronous disk i/o. But it is very easy to extend boost::asio to perform arbitrary asynchronous work.

For example, in a commercial application I work on, we send massive amounts of data over a network. First we read the data from disk, then we compress and/or encrypt it, then we send over a network.

1) I use boost::asio to issue a bunch of reads from the disk.
2) Whenever it completes, the framework calls back some function I've supplied.
3) I then issue a request to a thread pool (with as many threads as the machine has cores) to perform encryption / compression.
4) Whenever it completes, the framework calls back some function I've supplied.
5) I then issue a request to send the data over the network.
6) When it completes, the framework calls back some function I've supplied.
7) Finally, I issue more reads.


Note that every single step described above runs on ONE THREAD. This is in sharp contrast to most people's intuition when they think of how to do asynchronous work. Usually they think "ok have thread A do this and have it tell thread B when something happens, and then have thread B tell thread A when something else happens". This is a poor way of writing asynchronous code and it won't scale.

Anyway, back to the multi-threaded thing, if I were to insert a step 3.5, where the data is actually being encrypted or compressed, that happens on any one of a number of equivalent threads. So basically I have n+1 threads total, where n is the number of hardware threads (often, but not always equal to the number of cores) on my machine.
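The "n worker threads, where n is the hardware thread count" idea above can be sketched with std::thread::hardware_concurrency. This is an illustration, not the poster's actual code; run_tasks and the counter are invented for the sketch.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Size a worker pool to the hardware thread count, as described above.
unsigned worker_count() {
    unsigned n = std::thread::hardware_concurrency();
    return n ? n : 1; // hardware_concurrency() may return 0 if unknown
}

// Run tasks_per_worker trivial tasks on each worker; returns total completed.
int run_tasks(int tasks_per_worker) {
    std::atomic<int> done{0};
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < worker_count(); ++i)
        pool.emplace_back([&] {
            for (int t = 0; t < tasks_per_worker; ++t)
                done.fetch_add(1); // stand-in for compression/encryption work
        });
    for (auto& th : pool) th.join();
    return done.load();
}
```

Note that hardware_concurrency may differ from the core count (e.g. with hyper-threading), which is exactly the hardware-threads-vs-cores distinction made above.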

Notice then that in order to get from 3 -> 3.5, the framework is automatically handling for me the issue of cross-thread function invocation, and likewise from 3.5 -> 4.


That being said, boost::asio has a high learning curve. It's significantly easier to write boost::asio code if your compiler supports lambdas.


That being said, if you don't actually have the need to do things *asynchronously* (i.e. if you don't need to issue a request, have it return immediately, and have it invoke some callback for you at a later date when it's complete) then asio might not be the right tool for the job. You'd probably be better served with task-level parallelism such as that provided by Intel TBB or Microsoft ConcRT.
Wow thanks a lot for your reply. That's definitely helped a lot. From your description it sounds like boost::asio is a much better fit. Time to work my way through the tutorials.
Dave.
Quote:Original post by dangerdaveCS
Wow thanks a lot for your reply. That's definitely helped a lot. From your description it sounds like boost::asio is a much better fit. Time to work my way through the tutorials.



Be aware that it can be difficult to redesign large parts of your application around a single-threaded async model like this. Think about it carefully and you can see the problem - you have *one thread*, and everything happens asynchronously on that thread. When something completes, you get notified back on that thread. How does that work?

Basically, it requires a large part of your app to be designed with it in mind. It's good for things like pipelines, where each stage of the pipeline is a relatively small well-defined independent task.

Deep down in the framework, it's executing something like this:

repeat
{
    block_until(work_queue_has_items)
    {
        item = work_queue.pop();
        execute(item);
    }
}


In the example I gave above, the work items would be things like "read a piece of data from the disk", "post a message to another thread", "write some data to the network".
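That pseudocode loop can be made concrete. Below is a runnable sketch in plain C++ (the WorkQueue name and sentinel-based shutdown are my own choices, not anything from asio): one worker thread blocks until the queue has items, then pops and executes them, so every handler runs on that single thread.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A runnable version of the loop above: block until the queue has items,
// pop one, execute it. All handlers run on whichever thread calls run().
class WorkQueue {
public:
    void push(std::function<void()> item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();
    }
    void stop() { push({}); } // empty function acts as a shutdown sentinel
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); }); // block_until(...)
            auto item = std::move(q_.front());            // item = pop()
            q_.pop();
            lk.unlock();
            if (!item) break; // sentinel: exit the loop
            item();           // execute(item)
        }
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
};
```

The work items here are std::function closures, but in the disk/network example above they would be the read, compress, and send requests.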

But every task that your thread is performing has to fit into this little model. If your entire game update is sequential and then all of a sudden you decide "I need to send object X a message and I need to know when it's complete" it's going to be hard to make this work.

Parallelism in general can often be difficult to retrofit into an application.
Just thought I would pop back after I'd started converting my code to use boost::asio. I have to say it's a revelation!

My event handling code is now a fraction of what it was originally with my custom code, and also a fraction of what it was becoming with my attempted switch to boost::interprocess. The ability to call functions through another thread, with the option of either asynchronous or blocking, is just brilliant, exactly what I needed.

Thanks for all your help guys, especially cache_hit.
Dave.

This topic is closed to new replies.
