CrazyCamel

Sharing a std::vector between threads


I'm a bit new to multithreading and have read a little about mutexes, critical sections, and other interesting ways to share information between threads. Which is the best way to share a std::vector between 2 threads? Or is there a best way? I don't want to keep copies of it, and I need read/write access from both threads (e.g. so I can use the [] operator and the push_back() function). Is this possible?

You need exactly one mutex (or critical section, in Microsoft speak). Every time one of your threads wants to do anything with the vector (add elements to it, read the nth element, remove an element, etc.), it first needs to acquire the mutex. When it's done fiddling with the vector, it must release the mutex. That's all there is to it, really.
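
In code, that pattern looks something like this (just a rough sketch using the Win32 CRITICAL_SECTION API that comes up later in the thread; the names g_vec, g_cs, AddValue and GetValue are made up for illustration):

#include <windows.h>
#include <vector>

std::vector<int> g_vec;   // the shared resource
CRITICAL_SECTION g_cs;    // the one lock that protects it

void InitShared()    { InitializeCriticalSection(&g_cs); }
void DestroyShared() { DeleteCriticalSection(&g_cs); }

void AddValue(int v)
{
    EnterCriticalSection(&g_cs);   // acquire before touching the vector
    g_vec.push_back(v);
    LeaveCriticalSection(&g_cs);   // release as soon as you're done
}

int GetValue(size_t i)
{
    EnterCriticalSection(&g_cs);
    int v = (i < g_vec.size()) ? g_vec[i] : -1;   // -1 is just a placeholder "not found" value
    LeaveCriticalSection(&g_cs);
    return v;
}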

Guest Anonymous Poster
Yes. Just make sure that every time a thread reads from or writes to the vector, it owns the mutex that locks it.

Okay, I'm not quite sure if I understand precisely the implementation of a critical section; is it just there as a sort of flag? Do I use it to know if I can access the data? Or does it hold the data itself?

A critical section is just some sort of lock. You need one of these to protect each resource contended between 2 or more threads. Your vector is such a contended resource. The mutex (i.e. the critical section) ensures that only ONE thread at a time can access the vector, because each thread that wants to access the vector first has to acquire the mutex. While that thread owns the mutex, all other threads trying to acquire it are blocked (and hence cannot mess with the vector). Then, as soon as the thread currently owning the mutex releases it, one of the waiting threads acquires it and can subsequently access the vector. Does that make sense?
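
Here's a rough sketch of what that contention looks like with two threads, using CreateThread (the Worker function and the loop counts are made up for illustration; error checking omitted):

#include <windows.h>
#include <vector>

std::vector<int> g_vec;
CRITICAL_SECTION g_cs;

// Both threads run this. Whichever one owns g_cs blocks the other
// until it calls LeaveCriticalSection.
DWORD WINAPI Worker(LPVOID param)
{
    int id = (int)(INT_PTR)param;
    for (int i = 0; i < 1000; ++i)
    {
        EnterCriticalSection(&g_cs);   // blocks here if the other thread owns the lock
        g_vec.push_back(id);
        LeaveCriticalSection(&g_cs);
    }
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_cs);

    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, Worker, (LPVOID)1, 0, NULL);
    threads[1] = CreateThread(NULL, 0, Worker, (LPVOID)2, 0, NULL);

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    CloseHandle(threads[0]);
    CloseHandle(threads[1]);

    DeleteCriticalSection(&g_cs);
    return 0;   // g_vec now holds 2000 elements, never corrupted mid-push_back
}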

Share this post


Link to post
Share on other sites
Guest Anonymous Poster
Quote:
Original post by CrazyCamel
Okay, I'm not quite sure if I understand precisely the implementation of a critical section; is it just there as a sort of flag? Do I use it to know if I can access the data? Or does it hold the data itself?


You're not supposed to implement a "critical section", just use the ones provided by the operating system.

Think of a mutex (critical section) this way:

In order to enter the public rest-room (the public rest room is the vector) at the gas station, you need to have the key (the key is the mutex). You walk up to the attendant (the attendant is the operating system), and ask for it (at this point, you are a thread asking for the mutex). The attendant tells you that someone already has the key, so you will have to wait (another thread has the mutex).

After a while, the person who had the key comes back out of the rest room and hands the key to the attendant (the thread that was using the vector is done and releases the mutex). You take the key from the attendant and have access to the rest room (you are now a thread that owns the mutex, and can use the vector).

I think everyone here is clear on this, but just in case... a critical section and a mutex are not the same thing in Windows terminology. :)

A mutex is a kernel-level lock (and partly because of this it is visible operating-system wide).

A critical section is a user-land construct that doesn't require a kernel-level transition, so it's faster than a mutex, but it's only visible within a process boundary.
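
For comparison, here's roughly what the same lock/unlock step looks like with each primitive (just a sketch; in real code you'd create the lock once at startup rather than inside the function, and you'd check return values):

#include <windows.h>

void UseCriticalSection()
{
    // Process-local lock; no kernel transition on the uncontended path.
    CRITICAL_SECTION cs;
    InitializeCriticalSection(&cs);

    EnterCriticalSection(&cs);
    // ... touch the shared data ...
    LeaveCriticalSection(&cs);

    DeleteCriticalSection(&cs);
}

void UseKernelMutex()
{
    // Kernel object; pass a name instead of NULL to share it across processes.
    HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);

    WaitForSingleObject(hMutex, INFINITE);   // acquire
    // ... touch the shared data ...
    ReleaseMutex(hMutex);                    // release

    CloseHandle(hMutex);
}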

Linky

I'd encourage you to use Boost.Threads mutexes and locks instead of the Windows mutexes. IMHO, I find them easier to work with. :)
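
For what it's worth, a minimal Boost.Thread version looks something like this (assuming boost::mutex and its scoped_lock; g_vec and g_mutex are just illustrative names):

#include <boost/thread/mutex.hpp>
#include <vector>

std::vector<int> g_vec;
boost::mutex     g_mutex;

void AddValue(int v)
{
    // scoped_lock acquires g_mutex here and releases it automatically
    // when 'lock' goes out of scope, even if push_back throws.
    boost::mutex::scoped_lock lock(g_mutex);
    g_vec.push_back(v);
}

The nice part is that you can't forget to release the lock.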

Cheers
Chris

A mutex is anything that supports the concept of mutual exclusion. I wouldn't necessarily go with the Microsoft usage of the terms "critical section" and "mutex". Really, what MS calls a critical section is a mutex, whereas the section of code it can be used to protect is the critical section. And what MS calls a mutex is also a mutex, just one that also works across processes.

Quote:
Original post by Red Ant
A mutex is anything that supports the concept of mutual exclusion. I wouldn't necessarily go with the Microsoft usage of the terms "critical section" and "mutex". Really, what MS calls a critical section is a mutex, whereas the section of code it can be used to protect is the critical section. And what MS calls a mutex is also a mutex, just one that also works across processes.


Fair enough. :) I updated my post to reflect that! Someone had brought up Windows, so I thought I'd clarify for those people who may not be familiar with Windows threading terminology. :)

Cheers
Chris

Thanks.
Lots of replies... such devoted people =)

Anyway, one last question (not as important): will I take much of a performance hit if I call EnterCriticalSection() and LeaveCriticalSection() an extra 5-10 times or so per 10 frames (approximately 50-100 extra calls per second; fps was generally around 100 in my last release tests)? I ask because in a specific function I call another function, and both access the critical section. To speed it up I suppose I could condense it and put the whole thing in the first function, but that seems less structured. It doesn't seem like calling EnterCriticalSection() and LeaveCriticalSection() would cause much delay, but I'm curious what y'all think (since you seem to take much interest in mutexes anyway =) ).
So: speed (however little it may be), or (from my point of view) a more structured approach?

Edit - Not sure if this little code snippet (roughly showing the two ways) will help:
//some "//do stuff" sections removed =)

//more structured (imo)
int VWGref::ProcCom(int olv) {
    EnterCriticalSection(&cs);
    //do stuff
    LeaveCriticalSection(&cs);
    return 0;
}

int VWGref::BuildCom(int olv) {
    int e = 0;
    if (e = ProcCom(olv)) return e;
    //it should be noted that sometimes ProcCom is called
    //multiple times via a loop
    EnterCriticalSection(&cs);
    //do stuff
    LeaveCriticalSection(&cs);
    return 0;
}

//faster(?)
int VWGref::BuildCom(int olv) {
    EnterCriticalSection(&cs);
    //do stuff normally in ProcCom()
    //do stuff
    LeaveCriticalSection(&cs);
    return 0;
}




The rate you're talking about (50-100 per second) should be fine. You can always measure the difference pretty easily, of course. However, you should be aware that locks don't scale as you add processors, and they introduce problems like deadlocks and priority inversion. If all you need to do is read or change a value, there are atomic operations you can use to construct lock-free algorithms that accomplish this; they are much faster and scale more freely.
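
For example, if the shared state is just a single counter rather than a whole std::vector, a Win32 interlocked operation sidesteps the lock entirely (just a sketch; this only works for simple word-sized values, it won't protect something like push_back on a vector):

#include <windows.h>

volatile LONG g_counter = 0;   // simple shared value, no lock needed

DWORD WINAPI Worker(LPVOID)
{
    for (int i = 0; i < 1000; ++i)
        InterlockedIncrement(&g_counter);   // atomic read-modify-write
    return 0;
}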

