'Threaded optimization'

What is the right process to get compatibility with, or at least a workaround for, the 'Threaded optimization' feature of the NVIDIA drivers?

It's peculiar that this issue is not well understood on the NVIDIA forums or on individual projects' forums.

For example, the well-known ioquake3 project, based on id Tech 3, requires forcing 'Threaded optimization' off in the NVIDIA settings, or else it suffers severe FPS drops.

Does anyone know what a programmer has to do to be compatible with the feature, or at least how to avoid problems with it (e.g. by turning it off explicitly from the application or by other means)?

--

The main concern is that such projects run into these issues by default, since not all users know they have to turn the feature off explicitly.
Are you getting performance issues with your own program, or just asking?

If you're having issues, try to compose a minimal test case and see which operations cause the problems to appear. One possible cause is unnecessary readback of rendering API state (i.e. various getXXX() calls); see the sketch below.
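As an illustration of the kind of readback pattern that can hurt: the OpenGL calls below (glIsEnabled, glEnable, glGetError) are standard, but the frame-loop code is a made-up sketch, not taken from ioquake3. The idea is that querying state back from the driver forces its worker thread to synchronize with the application thread, which is one way 'Threaded optimization' can backfire.

```c
/* Illustrative only -- not ioquake3 code. */
#include <GL/gl.h>

void draw_frame_slow(void)
{
    /* Readback: the driver must sync its command queue to answer. */
    if (!glIsEnabled(GL_BLEND))
        glEnable(GL_BLEND);

    /* ... submit draw calls ... */

    /* Polling for errors every frame synchronizes in the same way. */
    while (glGetError() != GL_NO_ERROR)
        ;
}

/* Preferred: shadow the state on the CPU so no readback is needed. */
static GLboolean blend_cached = GL_FALSE;

void draw_frame_fast(void)
{
    if (!blend_cached) {
        glEnable(GL_BLEND);
        blend_cached = GL_TRUE;
    }

    /* ... submit draw calls ... */
}
```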

Users definitely shouldn't have to turn anything off, i.e. the aim should be a program that works either way.
Normally you could do this with the NVIDIA Control Panel API, but it seems somewhat outdated, and I didn't see any references to newer tech. Perhaps the newer NVIDIA API (NVAPI) would be more helpful; something along the lines of the sketch below.
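For what it's worth, NVAPI ships a driver-settings (DRS) interface that can toggle exactly this setting. Here's a sketch, assuming the NVAPI SDK headers (nvapi.h, NvApiDriverSettings.h) and their OGL_THREAD_CONTROL_* constants; I haven't verified it against a specific driver release:

```c
/* Sketch: disable "Threaded optimization" via NVAPI's driver-settings
 * (DRS) interface.  All NvAPI_* calls and the OGL_THREAD_CONTROL_*
 * constants come from the NVAPI SDK headers. */
#include "nvapi.h"
#include "NvApiDriverSettings.h"

int disable_threaded_optimization(void)
{
    NvDRSSessionHandle session;
    NvDRSProfileHandle profile;
    NVDRS_SETTING setting = {0};

    if (NvAPI_Initialize() != NVAPI_OK)
        return -1;                      /* no NVIDIA driver present */

    if (NvAPI_DRS_CreateSession(&session) != NVAPI_OK)
        return -1;
    if (NvAPI_DRS_LoadSettings(session) != NVAPI_OK)
        goto fail;

    /* The base profile applies when no per-application profile matches. */
    if (NvAPI_DRS_GetBaseProfile(session, &profile) != NVAPI_OK)
        goto fail;

    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = OGL_THREAD_CONTROL_ID;
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = OGL_THREAD_CONTROL_DISABLE;

    if (NvAPI_DRS_SetSetting(session, profile, &setting) != NVAPI_OK)
        goto fail;
    if (NvAPI_DRS_SaveSettings(session) != NVAPI_OK)
        goto fail;

    NvAPI_DRS_DestroySession(session);
    return 0;

fail:
    NvAPI_DRS_DestroySession(session);
    return -1;
}
```

Keep in mind that DRS writes persist in the driver's profile store, so they affect the whole machine, not just the running process. A shipping application would be better off creating or locating its own application profile (e.g. via NvAPI_DRS_CreateProfile / NvAPI_DRS_FindApplicationByName) rather than touching the base profile as this sketch does.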

On a related note, do you have more info about how Threaded Optimization is affecting your performance? We are also getting some seriously weird behavior, only on NVIDIA, under Vista/Win7.
No, but you can investigate the ioquake3 project in parallel; it's open source. (I suspect the issue dates back to the original GPL release of id Tech 3.)
Some insight here: http://gamedev.stackexchange.com/questions/3254/what-is-the-right-process-to-get-compatibility-or-at-least-a-workaround-for-the-
