C++ Concurrency Library

Started by
6 comments, last by Khatharr 11 years, 1 month ago

I was looking over Microsoft's Parallel Patterns Library and saw parallel_for_each(), which is something that I had been thinking would be a nice feature for some time now. I poked around Boost and didn't see anything similar. Does anyone know if there are any plans to make a more portable implementation of this kind of thing?

I don't really need any big concurrency frameworks, just something like the parallel_for_each() that opens the possibility of using multiple cores to work on large blocks of data. (My texture hue-change routine would LOVE this...)
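For reference, something in that spirit can be sketched portably on top of the standard library alone: split the range into one chunk per hardware thread and run each chunk through std::async. This is only a minimal sketch (the function name mirrors the PPL one, but this is not the PPL implementation), with none of the work-stealing or grain-size tuning a real library does.

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <iterator>
#include <thread>
#include <vector>

// Minimal portable parallel_for_each sketch: one chunk per hardware thread,
// each chunk processed by std::for_each on its own std::async task.
template <typename Iter, typename Func>
void parallel_for_each(Iter first, Iter last, Func f)
{
    const std::size_t items   = std::distance(first, last);
    const std::size_t workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t perWorker = std::max<std::size_t>(1, items / workers);

    std::vector<std::future<void>> futures;
    while (first != last)
    {
        const std::size_t remaining = std::distance(first, last);
        Iter next = std::next(first, std::min(perWorker, remaining));
        futures.push_back(std::async(std::launch::async,
            [first, next, f]() { std::for_each(first, next, f); }));
        first = next;
    }
    for (auto& fut : futures)
        fut.get(); // wait for completion and propagate any exceptions
}
```

A hue-change routine could then be a one-liner over the pixel buffer, e.g. `parallel_for_each(pixels.begin(), pixels.end(), shiftHue);`.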

Thank you for any replies or info.

void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

TBB seems to be a very common choice.

Provided you've had practice with memory re-ordering issues and esoteric C language rules, writing your own isn't that hard.
I like to use a SPMD style of programming quite a bit these days. What I do is:

  • Create a bunch of threads -- typically 1 per core.
  • Assign each thread a number from 0 to numThreads-1. Store this number using TLS so it can be retrieved like a global variable at any time.
  • Write a pure function that can distribute a range of a problem between threads, e.g.

inline void DistributeTask( uint workerIndex, uint numWorkers, uint items, uint& begin, uint& end )
{
	uint perWorker = items / numWorkers;
	begin = perWorker * workerIndex;
	end = (workerIndex==numWorkers-1)
		? items                      //ensure last thread covers whole range
		: begin + perWorker;
	begin = perWorker ? begin : min(workerIndex,   items); //special case:
	end   = perWorker ? end   : min(workerIndex+1, items); //fewer items than workers
}
  • Run the same main loop on all threads. Upon reaching a processing loop, use this function to determine the range of the loop that should be processed.
  • If the results of one loop become the inputs to another loop, then make sure you synchronize the workers appropriately. e.g. increment an atomic counter as you leave the first loop, and before the second loop, wait until counter == numThreads.
  • In my implementation, I combine this SPMD model with a job-based model, where while any thread is busy-waiting on a condition such as the above, they continually try to pop 'job packets' from a shared queue for processing, to make good use of this waiting time.
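The steps above can be sketched as follows. This is only an illustrative version (the function name and the doubled-then-incremented workload are made up for the example), using a busy-wait on an atomic counter as the barrier between two dependent loops, as described:

```cpp
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

typedef unsigned int uint;

// The range-splitting helper from the post.
inline void DistributeTask( uint workerIndex, uint numWorkers, uint items, uint& begin, uint& end )
{
	uint perWorker = items / numWorkers;
	begin = perWorker * workerIndex;
	end = (workerIndex==numWorkers-1) ? items : begin + perWorker;
	begin = perWorker ? begin : std::min(workerIndex,   items); //special case:
	end   = perWorker ? end   : std::min(workerIndex+1, items); //fewer items than workers
}

// SPMD sketch (hypothetical name): every thread runs the same function,
// picks its own slice via DistributeTask, and an atomic counter acts as a
// barrier between two loops where the second depends on the first.
void spmdDoubleThenIncrement(std::vector<int>& data, uint numWorkers)
{
	std::atomic<uint> loop1Done(0);
	std::vector<std::thread> threads;
	for (uint w = 0; w < numWorkers; ++w)
	{
		threads.emplace_back([&, w]() {
			uint begin, end;
			DistributeTask(w, numWorkers, (uint)data.size(), begin, end);
			for (uint i = begin; i < end; ++i) data[i] *= 2; // loop 1
			++loop1Done;                                     // leaving loop 1
			while (loop1Done.load() < numWorkers)            // wait for all workers
				std::this_thread::yield();
			for (uint i = begin; i < end; ++i) data[i] += 1; // loop 2 uses loop 1 results
		});
	}
	for (auto& t : threads) t.join();
}
```

In a real SPMD setup the threads would be created once and reused across many such loops, rather than spawned per call as here.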

There was talk on the Boost mailing list about a GPGPU library, though I'm not sure of its state at the moment.

'SPMD'

Ah, I used to do something like that in Ruby for accelerating large HTTP downloads.

I'll take a look at TBB.

Thanking you.

Would OpenMP do what you need? If your compiler supports it, you don't need a library at all, and parallelizing loops is the kind of thing that is really easy to do with it.
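For instance, parallelizing a per-pixel loop is a single pragma (function name and workload here are just an illustration). If the compiler supports OpenMP (e.g. /openmp on MSVC, -fopenmp on GCC) the iterations are split across cores; without support, the pragma is simply ignored and the loop runs serially:

```cpp
#include <vector>

// Scale every pixel value by a factor; the pragma distributes the
// iterations across cores when OpenMP is enabled. The signed loop index
// is required by OpenMP 2.0 (the version MSVC supports).
void scalePixels(std::vector<float>& pixels, float factor)
{
    #pragma omp parallel for
    for (long i = 0; i < (long)pixels.size(); ++i)
        pixels[i] *= factor;
}
```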
Have you tried C++'s standard library already? <future> and async in particular?

It doesn't give you tools to automatically parallelize a loop like OpenMP does, but it does look extremely convenient for pushing some kinds of work onto their own threads.
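As a small sketch of that (the function name and the split-in-half workload are invented for the example): hand one half of a computation to std::async and do the other half on the calling thread, then block on the future when the result is needed.

```cpp
#include <future>
#include <numeric>
#include <vector>

// Sum a vector using two threads: the lower half runs on its own
// std::async task while the upper half runs on the calling thread.
long long parallelSum(const std::vector<int>& v)
{
    auto mid = v.begin() + v.size() / 2;
    auto lower = std::async(std::launch::async, [&v, mid] {
        return std::accumulate(v.begin(), mid, 0LL);
    });
    long long upper = std::accumulate(mid, v.end(), 0LL); // work done concurrently
    return lower.get() + upper; // get() blocks until the task finishes
}
```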

There's also Thrust, a nice parallelization framework. It can use the GPU, so it's even more powerful for some solutions.

I poked around in the async stuff for a little while, but really I'm only interested in the parallel_for_each() at the moment. Honestly my only problem with the MS impl is that it's platform specific. I'm really quite interested in the C++11 async stuff, but going to school for programming is heavily interfering with programming time. -.-

I've added OpenMP and Thrust to the research list as well. Thanks, guys.


This topic is closed to new replies.
