Lock-free priority queue

Hi. Does anyone know how to implement a lock-free priority queue without using a randomized scheme? I know there's an implementation that uses random skiplists, but I'm looking for something more deterministic.
AFAIK, lock-free and deterministic rarely go well together.

What is your actual problem? Maybe there is a way to remove the lock-free requirement.
Quote:Original post by Hodgman
AFAIK, lock-free and deterministic rarely go well together.


What I mean by deterministic is that it doesn't make use of any random variables. For example, skiplists (random levels) vs. a binary heap (always insert at the last position).

I understand that with multithreading the data structure ends up with a "random" structure anyway, i.e., the elements might be inserted in a different order.

Quote:Original post by Hodgman
What is your actual problem? Maybe there is a way to remove the lock-free requirement.


I want to implement an "almost" lock-free task scheduler with dependencies between tasks. The dependencies ensure that the tasks themselves won't need any locks. The only allowed locking would be when there are no more tasks to run (to put the worker threads to sleep instead of destroying them).

The scheduler and the dependency handling (avoiding cycles, etc.) are all done, more or less, without locking.

All that is left to do is the lock-free priority queue. In this case, the priority is the number of dependencies (other tasks) that a task has. The starting tasks are the ones that don't depend on any other task. When a task finishes, it inserts its dependents into the queue, provided those dependent tasks have no remaining dependencies (put another way, a dependent task is inserted once all of its dependencies have finished).
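As a rough illustration of that bookkeeping, here is a minimal sketch; the names (Task, remaining, dependents, enqueue) are placeholders for the example, not code from any actual implementation:

#include <atomic>
#include <vector>

struct Task {
    std::atomic<int> remaining{0};   // dependencies that haven't finished yet
    std::vector<Task*> dependents;   // tasks that are waiting on this one
    int priority = 0;                // e.g. the number of dependents
};

void enqueue(Task* t);               // push into the shared (priority) queue

// Called by a worker when it has finished running 't'.
void on_task_finished(Task* t) {
    for (Task* d : t->dependents) {
        // fetch_sub returns the previous value, so the worker that drops the
        // counter from 1 to 0 is the only one that enqueues 'd'.
        if (d->remaining.fetch_sub(1, std::memory_order_acq_rel) == 1)
            enqueue(d);
    }
}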

If I really can't find any non-blocking algorithm for the queue, then I'll go for the skiplist. I thought that someone might know a different approach to it.
A lock-free priority queue is quite simple to implement in the N-publishers, 1-subscriber case. Just use a standard N-publishers, 1-subscriber lock-free queue, but have the receiving thread keep an internal priority queue that is not shared. When the subscriber needs to pop an item, it first takes all new items from the lock-free queue and inserts them into the priority queue it owns. Then it returns the item at the front of that priority queue.
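A minimal sketch of that scheme, assuming a simple "grab the whole list" style of N-producer/1-consumer queue; the WorkItem type and its priority field are illustrative assumptions, not code from this thread:

#include <atomic>
#include <queue>
#include <vector>

struct WorkItem {
    int priority = 0;            // assumed: higher value = more urgent
    WorkItem* next = nullptr;    // intrusive link used by the shared list
    // ... task payload ...
};

struct ByPriority {
    bool operator()(const WorkItem* a, const WorkItem* b) const {
        return a->priority < b->priority;   // max-heap on priority
    }
};

class SharedItemList {           // lock-free for N publishers, 1 subscriber
    std::atomic<WorkItem*> head{nullptr};
public:
    void push(WorkItem* item) {  // called by any publisher thread
        WorkItem* old = head.load(std::memory_order_relaxed);
        do {
            item->next = old;
        } while (!head.compare_exchange_weak(old, item,
                     std::memory_order_release, std::memory_order_relaxed));
    }
    WorkItem* take_all() {       // called only by the single subscriber
        return head.exchange(nullptr, std::memory_order_acquire);
    }
};

// Subscriber side: drain the shared list into a private (unshared) priority
// queue, then pop the highest-priority item.
WorkItem* pop_highest(SharedItemList& shared,
                      std::priority_queue<WorkItem*, std::vector<WorkItem*>, ByPriority>& mine) {
    for (WorkItem* it = shared.take_all(); it; ) {
        WorkItem* next = it->next;   // list order doesn't matter, the heap sorts it
        mine.push(it);
        it = next;
    }
    if (mine.empty()) return nullptr;
    WorkItem* top = mine.top();
    mine.pop();
    return top;
}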

In the general N-publishers, M-subscribers case, I don't know.

Well, I suppose the above method can be extended to an N-publishers, M-subscribers lock-free priority queue by the following method:

- There are N publisher threads that produce work items to be placed into a standard N-publishers, 1-subscriber lock-free (non-priority) queue.
- There are M worker threads. Each of them has a lockless mailslot that works as follows: the mailslot can contain a single work item. If the worker thread is working on an item, the mailslot is marked full. If the worker thread is idle, the mailslot is marked empty.
- There is a mediator thread that is the subscriber to the lock-free queue. The mediator owns an internal, non-shared priority queue.

The process goes as follows:

1. The N publishers produce work items and place them into the lock-free queue.
2. The mediator thread watches over this queue, removing all items from it and placing them into its own internal priority queue as soon as possible.
3. The mediator thread also watches over all of the mailslots of the M worker threads. Whenever a mailslot becomes empty, the mediator thread gets notified and places the front item of its own priority queue into the mailslot for that worker to consume.
4. The workers constantly listen to their mailslots and immediately consume any item they receive. When the task is finished, the mailslot is marked empty again.

So there you have it, a lock-free priority queue mechanism. The drawback is that the whole architecture combines both push and pull models to pass data forward, and all data goes through a single mediator thread. The mediator thread must register to get notifications of different events (mailslot empty, new work item produced). Those don't strictly count as locks if you can do an efficient WaitForMultipleObjects kind of thing, but still, I've no idea how this scheme performs in practice.
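For concreteness, one way such a mailslot could look is sketched below. This is only a reconstruction of the idea with made-up names, and as a simplification the slot becomes free as soon as the worker takes the item rather than when the task completes:

#include <atomic>

struct WorkItem;                            // whatever the tasks are

// One slot per worker. The mediator fills it, the worker empties it.
class Mailslot {
    std::atomic<WorkItem*> slot{nullptr};   // nullptr means "empty"
public:
    // Mediator side: hand the worker an item; fails if the slot is still full.
    bool try_put(WorkItem* item) {
        WorkItem* expected = nullptr;
        return slot.compare_exchange_strong(expected, item,
                   std::memory_order_release, std::memory_order_relaxed);
    }
    // Worker side: take the current item (if any), leaving the slot empty.
    WorkItem* take() {
        return slot.exchange(nullptr, std::memory_order_acquire);
    }
};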

I haven't seen this described anywhere; I just came up with it... The method I described in the previous reply I have used on several occasions, and it works very well.

Try it at your own risk, no guarantees.
Quote:Original post by clb
Well, I suppose the above method can be extended to an N-publishers, M-subscribers lock-free priority queue by the following method


I actually thought about that one. The main thread (scheduler) would serve as a listener for those events instead of being almost completely idle until all work is done. The problem is that the workers will have to block, or use a lock, every time they finish a single task.

My current approach is: the worker pushes all the dependent tasks it can into the priority queue and then fetches the next task from it. If one exists, it processes it and pushes its dependents in turn, and so on until there are no more tasks in the queue. At that point it goes to sleep and waits for the scheduler to wake it up with new tasks.
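Roughly, that loop looks something like the following; try_pop, run, wait_for_work and on_task_finished are assumed helpers standing in for the real scheduler code:

struct Task;
Task* try_pop();                 // next best task, or nullptr if the queue is empty
void run(Task* t);               // execute the task
void on_task_finished(Task* t);  // enqueue dependents whose last dependency just finished
void wait_for_work();            // block until the scheduler signals new tasks

void worker_loop() {
    for (;;) {
        if (Task* t = try_pop()) {
            run(t);                  // process it...
            on_task_finished(t);     // ...then push any dependents that became ready
        } else {
            wait_for_work();         // nothing left: sleep instead of spinning
        }
    }
}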

My priority queue is based on a binary heap, but instead of an array (implicit pointers) I'm using nodes (explicit pointers), and I'm trying to make it lock-free.

I'm still testing with single dependencies, i.e., a task depends on either zero or one other task, but a task can have more than one dependent. This way the workers never block until all tasks are processed, since all of their dependents get queued.

The same doesn't hold if a task depends on more than one other task, because a finishing task may not enqueue anything and the queue might stay empty until some other worker (one that shares dependents) finishes. In this case your option seems good. (But I'm kind of stubborn =D, and will only use it if there's absolutely no other way to implement the lock-free priority queue.)

Although it doesn't prevent workers from blocking while there's still work to do, with a (lock-free or otherwise) queue they can try to pull another task before going to sleep, minimizing the workers' idle time.

I also thought about Fibonacci heaps, since they can be merged, and they seem more relaxed about merging than binomial heaps. In this case each thread creates its own private priority queue and then merges it with the public priority queue, but it's still hard to make that lock-free.

A relaxation of the problem is that it isn't necessary for all dependents to be queued at once. All that matters is that a worker can dequeue the next best task.

