std::Queue and multithreading

Started by
11 comments, last by Hodgman 11 years, 4 months ago
Hello,
I have a problem with multithreading in C++, since this is the first time I'm using it. I have a std::queue, which thread 1 pushes objects onto at random times. Thread 2 then reads the objects at the other end of the queue and pops them off.
But since the queue is shared between the threads, the program spits out memory errors from time to time.

How can I make sure that the queue is not accessed by thread 2 while thread 1 is writing? What I found on the web wasn't really helpful, so I hope you can help me out here!

Thanks a bunch.
You could try using concurrent_queue if it's available on your compiler: http://msdn.microsoft.com/en-us/library/ee355358.aspx

Of course, you could use a lock (like the one in C++11).
Normally, if you don't have access to specialised structures like the concurrent_queue Zaoshi mentioned, you have to enforce mutual exclusion on your data so it can be accessed safely.

An often-used solution for this is, as Alvaro mentioned, a lock.
Be very careful when using synchronisation mechanisms like these, though; careless usage may lead to some really nasty issues (deadlock, anyone?) which will guarantee some massive headaches.
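To make the lock approach concrete, here is a minimal sketch of wrapping std::queue with a std::mutex. The class and method names are illustrative, not from any library, and std::optional requires C++17 (with only C++11 you could return a bool and an out-parameter instead):

```cpp
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// Illustrative sketch: every access to the underlying std::queue
// happens while holding the mutex, so the two threads never touch
// the container concurrently.
template <typename T>
class LockedQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(mutex_);  // exclusive access while writing
        queue_.push(std::move(value));
    }

    // Returns std::nullopt when the queue is empty instead of blocking.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(mutex_);  // exclusive access while reading
        if (queue_.empty())
            return std::nullopt;
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::mutex mutex_;
    std::queue<T> queue_;
};
```

Thread 1 calls push() and thread 2 polls try_pop(); since both methods take the same lock, the "memory errors from time to time" from unsynchronised access go away.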


If you don't have access to C++11, then you can also download TBB and use their concurrent_queue.
The easy solution is to use a lock or someone else's thread safe queue as has already been suggested. If you're just trying to solve a problem, this is definitely the way to go.

If you're interested in really digging into concurrent programming for the purpose of learning, you may want to investigate writing your own lock-free queue where exactly one thread writes and exactly one thread reads. It doesn't require a ton of code to do something simple, and it's well worth the learning experience. The need to pass data continuously from thread A to thread B in this way is extremely common. An easy implementation is to have a "read pointer" and a "write pointer" that wrap around a circular buffer; just be careful of the order of operations, and make the read and write pointers std::atomic (plain volatile is not enough to guarantee ordering between threads in C++). The "pointer" can be an index into the fixed-size circular buffer.
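As an illustration (a sketch, not the poster's exact design), a single-producer/single-consumer ring buffer along these lines might look like the following. Note that in C++11 the indices should be std::atomic; volatile alone does not give the cross-thread ordering guarantees you need:

```cpp
#include <atomic>
#include <cstddef>

// Illustrative single-producer, single-consumer ring buffer.
// One slot is always kept empty to distinguish "full" from "empty",
// so the usable capacity is N - 1.
template <typename T, std::size_t N>
class SpscQueue {
public:
    bool push(const T& value) {  // called by exactly one writer thread
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % N;
        if (next == read_.load(std::memory_order_acquire))
            return false;        // full
        buffer_[w] = value;
        write_.store(next, std::memory_order_release);  // publish the slot
        return true;
    }

    bool pop(T& out) {           // called by exactly one reader thread
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;        // empty
        out = buffer_[r];
        read_.store((r + 1) % N, std::memory_order_release);  // free the slot
        return true;
    }

private:
    T buffer_[N];
    std::atomic<std::size_t> read_{0};
    std::atomic<std::size_t> write_{0};
};
```

The release/acquire pairs ensure the reader never observes an index advance before the corresponding element write is visible. This only works with exactly one reader and one writer; with more threads on either end you need a different design.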

Using a queue like this is great for passing data to be processed. It can also proxy a function call to another thread by using the "command" pattern. You can even pass exceptions this way.
I would recommend a std::mutex with a std::condition_variable. See Stack Overflow for some explanation. That is what I use for my thread pool and queue of jobs. You'll find an example (using a queue) in the documentation of the condition variable.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
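A minimal producer/consumer sketch of the std::mutex + std::condition_variable pairing described above might look like this (the names jobs, done, consumed_sum and so on are illustrative, not from any real job system):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> jobs;        // shared work queue, guarded by m
std::mutex m;
std::condition_variable cv;
bool done = false;           // producer sets this when it is finished
int consumed_sum = 0;        // written by the consumer, read after join()

void producer() {
    for (int i = 0; i < 5; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);
            jobs.push(i);
        }
        cv.notify_one();     // wake the consumer if it is sleeping
    }
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
}

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        // Sleeps (no busy-wait) until there is work or the producer is done;
        // the predicate also guards against spurious wakeups.
        cv.wait(lock, [] { return !jobs.empty() || done; });
        while (!jobs.empty()) {
            consumed_sum += jobs.front();
            jobs.pop();
        }
        if (done)
            return;
    }
}
```

The key property is that cv.wait() releases the mutex and puts the consumer to sleep until it is notified, so the consumer burns no CPU while the queue is empty.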


Lockless programming is a black art. I strongly suggest not messing with it until you are highly proficient with concurrent systems and understand them deeply. Moreover, it requires a very rich comprehension of how the OS and even hardware work to do lockless correctly.

IMHO, suggesting lockless programming to someone who is not already very experienced with concurrency is utterly bad advice.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

I recently saw an implementation of a lock-free ring buffer that would fit your problem. Unfortunately it is written in Go, but it is a very small amount of code, so if you are interested you could take a look:

https://github.com/textnode/gringo/blob/master/gringo.go
I am not sure why you recommend a lock-free ring buffer; it will use a busy-wait, won't it? A busy-wait is seldom a good idea.

The Go example is short and nice, and it will always yield the CPU to another process if there is one that wants to execute. But if there isn't, it will busy-wait.

This topic is closed to new replies.
