Synchronization in DX12


Hello,

I am working on a DX12 renderer which is utilizing multiple threads to build command lists. I am currently trying to figure out a proper way to handle fencing and synchronization between the threads.

My problem is that for some reason my waits seem to be passing through even if the fence hasn't yet been signaled.

This is my structure:

Render thread: Supplies render data from the scene and launches several render tasks on worker threads. Finally, it uses GPU and CPU waits to check that all tasks are finished before executing Present.

Worker threads: Build the command list for the specified task, using CPU waits if there are dependencies, then queue the list onto the queue thread for GPU submission.

Queue thread: Runs continuously, checking whether any command list has been queued. Inserts GPU waits if needed, executes the command list, and finally signals a fence (a rough sketch of this loop follows below).
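In simplified form, the submission step looks something like this; the QueuedTask struct and the names in it are just placeholders for illustration, not the actual code:

struct QueuedTask
{
    ID3D12CommandList* commandList;
    ID3D12Fence*       dependencyFence;  // fence of the task we depend on (may be null)
    UINT64             dependencyValue;  // value that fence must reach before we run
    ID3D12Fence*       completionFence;  // fence this task signals when the GPU is done
    UINT64             completionValue;  // value to signal with
};

void SubmitTask(ID3D12CommandQueue* queue, const QueuedTask& task)
{
    // GPU-timeline wait: the queue stalls until the dependency fence reaches
    // the requested value; the CPU is not blocked here.
    if (task.dependencyFence)
        queue->Wait(task.dependencyFence, task.dependencyValue);

    ID3D12CommandList* lists[] = { task.commandList };
    queue->ExecuteCommandLists(1, lists);

    // GPU-timeline signal: the fence reaches completionValue once the GPU has
    // finished everything submitted above.
    queue->Signal(task.completionFence, task.completionValue);
}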

As noted, the fences seem to behave as if they are already signaled when the render thread reaches Present.

Am I missing something trivial?

/Baemz


Which APIs are you using for interacting with the fences? Using APIs on the fence itself does CPU-timeline signals/waits, whereas queue waits are on the GPU timeline. That's the only trivial thing that comes to mind.

34 minutes ago, SoldierOfLight said:

Which APIs are you using for interacting with the fences? Using APIs on the fence itself does CPU-timeline signals/waits, whereas queue waits are on the GPU timeline. That's the only trivial thing that comes to mind.

I'm using ID3D12CommandQueue::Wait for GPU waits, and SetEventOnCompletion to bind the fence to an event handle for later use with WaitForSingleObject.

For CPU-side synchronization I'm simply using Windows events, e.g. SetEvent and ResetEvent, together with WaitForSingleObject.
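To clarify which is which, the two kinds of waits I'm using look roughly like this (queue, fence and fenceValue are placeholders):

// GPU-timeline wait: recorded on the queue, returns immediately on the CPU;
// the GPU stalls until the fence reaches fenceValue.
queue->Wait(fence, fenceValue);

// CPU-timeline wait: blocks the calling thread until the fence reaches fenceValue.
HANDLE event = CreateEvent(nullptr, FALSE, FALSE, nullptr);
fence->SetEventOnCompletion(fenceValue, event);
WaitForSingleObject(event, INFINITE);
CloseHandle(event);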

The one thing that has come to mind is whether or not it is okay to call several Signals on the command queue in between Executes. I know I read somewhere that you shouldn't expect a signal to be triggered more than once per Execute, but I'm not sure in what context that was.

Yes, you can signal the same fence multiple times on the same queue, or multiple fences back-to-back; there are no issues there.
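For example, something like this is legal, as long as the values keep increasing (sketch; queue, fence, fenceValue and the list arrays are placeholders):

queue->ExecuteCommandLists(1, listsA);
queue->Signal(fence, ++fenceValue);   // first signal
queue->Signal(fence, ++fenceValue);   // second signal, no Execute in between - fine
queue->ExecuteCommandLists(1, listsB);
queue->Signal(fence, ++fenceValue);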

Can you be more specific about how you're using the various APIs? Some pseudocode for what each thread is doing, including how many queues/fences/events you have and where fence values are incremented and stored, would be helpful.

Apparently I seem to have fixed the issue.
Initially I was creating both my CPU event and my GPU event when constructing my waitable class.
I used a reset function to reset the event and increment my fence value, and at the end of the reset I called SetEventOnCompletion.
Somehow, when calling ResetEvent on this event, it got out of sync with my waits and the event remained signaled forever.

What I do now is: whenever a call is made to wait for the GPU, I check whether the fence's completed value is still lower than the value I'm waiting for, and if so, I create a temporary event, call SetEventOnCompletion with that event and the target fence value, and then wait on the event.
This path worked fine for me and it's now running smoothly.
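In rough code, the wait path now looks something like this (fence and targetValue stand in for whatever the waitable object actually stores):

void WaitForGpu(ID3D12Fence* fence, UINT64 targetValue)
{
    // Only block if the GPU hasn't reached the target value yet.
    if (fence->GetCompletedValue() < targetValue)
    {
        HANDLE tempEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(targetValue, tempEvent);
        WaitForSingleObject(tempEvent, INFINITE);
        CloseHandle(tempEvent);
    }
}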

Thanks to @SoldierOfLight for taking time to answer my questions.

