Software fallbacks on NV with multiple threads?

Has anybody noticed that if you run threads that create and destroy render contexts, eventually nVidia's OGL driver will use the software fallbacks for all GL calls? E.g.:

ThreadProc { GetDC, SetPixelFormat, CreateContext, MakeCurrent, MakeNotCurrent, DeleteContext, ReleaseDC }
Main { CreateThread, WaitForDeath, CreateThread, WaitForDeath, ... }

The ~11th thread gets no hardware acceleration on NVPerfKit 2.2's instrumented drivers and on the 94.24 ForceWare drivers. Unfortunately I don't have an ATI card to test on, and I was wondering if there's some extra context management I'm missing when multi-threading? Thanks!
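In plain Win32/WGL terms, the loop boils down to something like this (a rough sketch only; g_hwnd stands in for whatever window the view actually uses, and the GL_RENDERER print is just there to spot when the fallback kicks in). Link against opengl32.lib and gdi32.lib:

```cpp
#include <windows.h>
#include <GL/gl.h>

extern HWND g_hwnd;   // assumption: some window created elsewhere by the application

static DWORD WINAPI ThreadProc(LPVOID)
{
    HDC dc = GetDC(g_hwnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    if (GetPixelFormat(dc) == 0)   // a window's pixel format can only be set once
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);

    // The hardware path reports the card's renderer string; once the driver
    // falls back, this reads "GDI Generic" (Microsoft's software renderer).
    OutputDebugStringA((const char*)glGetString(GL_RENDERER));
    OutputDebugStringA("\n");

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(g_hwnd, dc);
    return 0;
}

void RunRepro()
{
    for (int i = 0; i < 20; ++i)   // around the 11th iteration the fallback shows up
    {
        HANDLE t = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
        WaitForSingleObject(t, INFINITE);
        CloseHandle(t);
    }
}
```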
Strange. Make sure that your thread actually destroys the context before exiting, and that you don't have some kind of weird race condition that somehow leads to context (or DC) leakage. There might be an internal limit on the number of open contexts that can be hardware accelerated.

What you're trying to do there is a very uncommon (and extremely inefficient) usage scenario. So I wouldn't be surprised if NVidia didn't test their drivers in such a situation.

The usual way to handle this scenario would be through a permanent worker thread that is assigned a single GL context. If you don't need the thread, put it to sleep, but don't destroy its context. Once you need it again, wake it up, let it do its work, and let it sleep again. Constantly creating and destroying threads is already bad enough; doing the same with GL contexts is even worse from a performance point of view.
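Something along these lines (a rough sketch; the event/flag names and DoRenderWork are just placeholders, and the pixel format setup is the usual boilerplate):

```cpp
#include <windows.h>
#include <GL/gl.h>

extern HWND  g_hwnd;                 // assumption: window created by the application
static HANDLE g_workEvent = NULL;    // signalled whenever the worker has something to do
static volatile LONG g_quit = 0;     // set to 1 (e.g. via InterlockedExchange) at shutdown

static void DoRenderWork(HDC dc)     // placeholder for the actual drawing
{
    glClear(GL_COLOR_BUFFER_BIT);
    SwapBuffers(dc);
}

static DWORD WINAPI WorkerProc(LPVOID)
{
    HDC dc = GetDC(g_hwnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    if (GetPixelFormat(dc) == 0)
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    // One context for the lifetime of the worker: created once, destroyed once,
    // never churned per request.
    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);

    while (!g_quit)
    {
        WaitForSingleObject(g_workEvent, INFINITE);  // sleep until asked to render
        if (g_quit) break;
        DoRenderWork(dc);
    }

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(g_hwnd, dc);
    return 0;
}

HANDLE StartWorker()
{
    g_workEvent = CreateEvent(NULL, FALSE, FALSE, NULL);   // auto-reset event
    return CreateThread(NULL, 0, WorkerProc, NULL, 0, NULL);
}

// The application calls SetEvent(g_workEvent) whenever a view needs redrawing,
// and at shutdown sets g_quit, signals the event once more, and waits on the
// thread handle returned by StartWorker.
```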
I've tested on an ATI card and it works fine.

This isn't a performance code path; it's a visualization toolkit, and I have no control over how many views are required or over the life cycle of a worksheet.
Sleeping is not an option; resources must be released.

I guess I'll have to wait till my developer account on nvidia gets approved, sigh.

