Accessing C++ containers from multiple threads


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 PolyVox   Members   -  Reputation: 708


Posted 12 February 2010 - 11:49 PM

Hi guys, sorry for the slightly noobish question, but I'm not that experienced with threading. In my engine I have a thread pool and am able to send it tasks derived from a 'Runnable' base class. I would like my main thread to retrieve the results from these tasks, so my idea was to have the tasks write their results into a global(ish) queue. The main thread would then retrieve results from this queue at its convenience. This means I need a queue which I can read and write safely from multiple threads, and my plan is initially to write a class which wraps std::queue and a mutex. My wrapper will expose push(), pop(), front(), etc., but will make sure the mutex is obtained before calling through to the std::queue implementations.

So firstly, I would like to know if this is a sensible and conventional approach, or whether there is a better way of doing things (given that I'm trying to keep it relatively simple). Secondly, I would like to know for which std::queue functions I actually need to lock the mutex before calling through. For example, can I safely call size() without locking the mutex? What about front()? Obviously push() and pop() are out.

More generally, where can I find this information for all the containers? Is it specified in the C++ standard? Thanks for any input.


#2 ComicSansMS   Members   -  Reputation: 138


Posted 13 February 2010 - 12:02 AM

The problem is, the C++ standard does not address multi-threading issues at all.
Instead, you will have to consult the documentation of your STL implementation, which is of course sub-optimal, since it breaks the cross-platform compatibility of your code.

I guess the best thing you can do is wrap the usual suspects (i.e. all non-const members) with mutexes and hope for the best [wink]

#3 Hodgman   Moderators   -  Reputation: 27614


Posted 13 February 2010 - 12:15 AM

If you never write to the queue, then all of the const functions are thread-safe.

If you do ever write to the queue at all, from any thread, then NONE of the functions are thread safe (and you'll have to acquire a lock before calling ANY member).

An alternative is to use a specially designed "lock free" queue, which is thread-safe by design (i.e. not std::queue).
OR
Carefully schedule your program into read and write sections. For example, I use std::vector a lot in my multi-threaded game. During one "phase" of a frame, a single thread writes to a vector and no other thread reads it; then during a second "phase" (which cannot overlap with the first), every thread reads from the vector concurrently.

#4 Antheus   Members   -  Reputation: 2393


Posted 13 February 2010 - 12:16 AM

Quote:
Original post by PolyVox

Secondly, I would like to know for which std::queue functions I actually need to lock the mutex before calling through.


Each of them.

The STL implementation is a black box, and each function may depend on some opaque shared state.

Quote:
More generally, where can I find this information for all the containers? Is it specified in the C++ standard?


The STL does not address threading issues and is not thread-safe. The only guarantee it can give is const-ness, which the implementation may or may not take into consideration.

Quote:
For example, can I safely call size() without locking the mutex?

Probably not. push() could be implemented as:
void push(const T& x) {
    if (_capacity <= _size) {
        resize(2 * _capacity);
    }
    _data[_size] = x;
    _size++;
}
In this case, calling size() to read _size could return an invalid value if called while push() is executing.

Quote:
What about front()?
No. The container might be in the middle of resizing its internal storage, which might affect what front() returns.

#5 PolyVox   Members   -  Reputation: 708


Posted 13 February 2010 - 02:10 AM

Thanks all, that's great information. For my first implementation I will simply make sure that I lock before any operation is performed on the std::queue, but longer term it seems like lock-free data structures and algorithms are the way forward. Now that I know what I'm looking for, I'm sure I can find some good information and implementations. Thanks again!

#6 Antheus   Members   -  Reputation: 2393


Posted 13 February 2010 - 02:36 AM

Quote:
Original post by PolyVox
Thanks all, that's great information. For my first implementation I will simply make sure that I lock before any operation is performed on the std::queue


You need to lock around each of *your* operations. Consider:
void push_message(...) {
    lock(queue) {
        if (queue.size() > 100) return;
    }
    // another thread can change the queue here!
    lock(queue) {
        queue.push_back(...);
    }
}
Atomicity is not enough; even if each container operation were atomic, this code would still be broken. What you need to ensure are transactional semantics: each composite operation performed on the container needs to be a transaction.

Quote:
but longer term it seems like lock-free data structures and algorithms are the way forward.

IMHO, not really.

First, they are incredibly difficult to implement correctly, and many data structures still don't exist in lock-free form. Implementations that rely on spin locks might also do more harm than good. And finally, they often solve the wrong problem.

For inter-thread queueing, there are better ways. For example, two queues:
queue local;
while (foo) {
    ...
    local.push(message);
}
// when done with current update
other_thread.add(local);


Such an approach can almost eliminate the cost of data sharing, since all operations are single-threaded and only the queue flips need to be synchronized, perhaps once per update/frame/tick.

The other option is to define the data exchange in such a way that each thread can read shared state without locking and without affecting other state.


#7 PolyVox   Members   -  Reputation: 708


Posted 13 February 2010 - 05:31 AM

Quote:
Original post by Antheus
For inter-thread queueing, there are better ways. For example, two queues:
queue local;
while (foo) {
    ...
    local.push(message);
}
// when done with current update
other_thread.add(local);

Such an approach can almost eliminate the cost of data sharing, since all operations are single-threaded and only the queue flips need to be synchronized, perhaps once per update/frame/tick.


OK, thanks, I think this is applicable to my scenario. I expect the data to take several frames to compute, so it is no problem to wait until the end of a frame before synchronizing. Well, I'll give it a shot and see.

#8 iMalc   Crossbones+   -  Reputation: 2259


Posted 13 February 2010 - 08:23 AM

Quote:
Original post by Antheus
Quote:
Original post by PolyVox
Thanks all, that's great information. For my first implementation I will simply make sure that I lock before any operation is performed on the std::queue


You need to lock around each of *your* operations. Consider:
void push_message(...) {
    lock(queue) {
        if (queue.size() > 100) return;
    }
    // another thread can change the queue here!
    lock(queue) {
        queue.push_back(...);
    }
}
Atomicity is not enough; even if each container operation were atomic, this code would still be broken. What you need to ensure are transactional semantics: each composite operation performed on the container needs to be a transaction.

Quote:
but longer term it seems like lock-free data structures and algorithms are the way forward.

IMHO, not really.

First, they are incredibly difficult to implement correctly, and many data structures still don't exist in lock-free form. Implementations that rely on spin locks might also do more harm than good. And finally, they often solve the wrong problem.

For inter-thread queueing, there are better ways. For example, two queues:
queue local;
while (foo) {
    ...
    local.push(message);
}
// when done with current update
other_thread.add(local);

Such an approach can almost eliminate the cost of data sharing, since all operations are single-threaded and only the queue flips need to be synchronized, perhaps once per update/frame/tick.

The other option is to define the data exchange in such a way that each thread can read shared state without locking and without affecting other state.
QFE!

Couldn't have said it better myself. I fully agree with all of that, and have myself used "batching" of messages to/from the global queue, which works great.

