ma_hty

OpenGL How to disable threaded optimization?

This topic is 3689 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

I recently figured out that the "threaded optimization" option was causing my program problems updating the screen in Windows Vista. When "threaded optimization" is at its default state (auto), my OpenGL program written in MFC does not update the screen for some events (e.g. window-size-change). I never encountered such a problem in Windows XP. However, when I disabled "threaded optimization" manually in the NVIDIA Control Panel, my program behaved correctly again, as it did in Windows XP. Is there any way to disable "threaded optimization" within my program instead of doing it manually? And why is "threaded optimization" causing my program such a problem? Thanks in advance.

Quote:
Original post by ma_hty
Is there any way to disable "threaded optimization" within my program instead of doing it manually?

No.

Quote:

And why is "threaded optimization" causing my program such a problem?

In all cases I have encountered with threaded optimization problems (on XP and Vista), they were exclusively due to problems in the user application code. In other words, if your code is perfectly well behaved and doesn't do things it isn't supposed to do, you will have no problems with the threaded optimization option (at least not with recent drivers). In fact, enabling the option can significantly increase rendering speed.

That's an interesting topic. I must certainly fall into the "do things it isn't supposed to do" category, because if I don't disable threaded optimization, even drawing a simple skybox slows down to 10 fps.

Is there a list somewhere of things to avoid when threaded optimization is enabled, one that could explain the framerate going from 1000 fps to 10 fps?

Y.

Quote:
Original post by Ysaneya
Is there a list somewhere of things to avoid when threaded optimization is enabled, one that could explain the framerate going from 1000 fps to 10 fps?

Hmm. Difficult to say; it seems to be very picky. However, I noticed that each time I had trouble with threaded optimization (usually slowdowns, although not as severe as yours), it turned out to be my code in the end.

I had a lot of problems with accurate frame timing when the optimization was on, but this was mainly due to a slightly different form of Windows message pump I was using (because it ran in a separate thread). I realised that what I was doing wasn't explicitly supported by MS, so I reverted to a standard pump. This resolved the timing issues.

I also got slowdowns while having a heavy physics simulation running on 3 cores, while the fourth core ran the 3D engine. Every few seconds, the framerate would drop, and then go up again. It didn't happen with threaded optimization switched off. I traced it down to a license check thread that was waking up every few seconds to check a USB dongle. For whatever reason (I didn't write that part of the code), that thread had a slightly elevated priority. I put it back to normal, and everything was fine.

Finally, I had another very weird effect on an older driver. Even relatively few faces (100k or so) in the view would grind an 8800 almost to a halt. After a few hundred slow frames, the FPS would suddenly shoot up and then stay high until the engine was restarted. I wasn't able to track down the source of that one (except for disabling threaded optimization), but it magically went away with a driver update and never came back.

From what I can say, threaded optimization works very well. However, it is quite sensitive to things you do with the Windows message pump. I don't know why; it must use it for thread sync or something. And it is extremely sensitive to how you manage multi-threading in your application, especially on multicore CPUs. Messing around with priorities seems to have an effect. Having threads spinlock for a long time also seems to get it off track (but you shouldn't do that anyway). It also doesn't like you manually messing around with thread affinity - but again, you shouldn't do that.

From what I gathered from NV's side, the optimization uses some internal heuristics to determine how it will multithread the parts of the driver used by your app. In order to do so, some standard usage practices have apparently been assumed by the driver devs. If you do something 'strange', it might interfere with these assumptions.

But you're right, NV should be more open about how the feature works and how you can optimize for it and avoid pitfalls.

Quote:
Original post by Yann L
... In other words, if your code is perfectly well behaved and doesn't do things it isn't supposed to do, you will have no problems with the threaded optimization option (at least not with recent drivers). ...


I didn't do anything strange. Actually, I'm just using OpenGL honestly: no multi-threading, no fancy stuff.

Mmm... my program is written with the MFC MDI framework of Visual Studio 6. Would that have something to do with this problem?

Quote:
Original post by ma_hty
I didn't do anything strange. Actually, I'm just using OpenGL honestly: no multi-threading, no fancy stuff.

Mmm... my program is written with the MFC MDI framework of Visual Studio 6. Would that have something to do with this problem?
Maybe - are you calling any OpenGL functions in response to Windows/MFC events?

It's possible that these events are arriving on a separate thread, which means you could potentially have two threads calling OpenGL functions at the same time.
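One cheap way to catch this kind of bug (a defensive sketch, not something from the thread; the class name is made up) is to record which thread owns the GL context and check it before issuing GL calls:

```cpp
#include <cassert>
#include <thread>

// Records the thread that owns the rendering context; code that issues
// GL commands can assert it is running on that same thread.
// Name and design are illustrative only.
class GlThreadGuard
{
public:
    // Call once from the thread that made the GL context current
    // (e.g. right after wglMakeCurrent succeeds).
    void BindToCurrentThread() { owner_ = std::this_thread::get_id(); }

    // Call at the top of every function that issues GL commands.
    bool OnOwnerThread() const { return std::this_thread::get_id() == owner_; }

private:
    std::thread::id owner_{};
};
```

If an MFC event handler fires on an unexpected thread and touches OpenGL, `OnOwnerThread()` returns false there, which makes the cross-thread call easy to spot in a debug build.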

Quote:
Original post by Hodgman
It's possible that these events are arriving on a separate thread, which means you could potentially have two threads calling OpenGL functions at the same time.


My program handles quite a lot of MFC events. More than likely, some of those events call OpenGL functions (e.g. WM_PAINT).

To be frank, I'm not very good at MFC. Are there any obvious guidelines when it comes to OpenGL and MFC MDI?

[Edited by - ma_hty on November 5, 2008 3:19:14 AM]

I have made my program work correctly with "threaded optimization" enabled.

It was done in a somewhat dirty way, though. (And I feel rather dirty now.)

Thanks everyone.

Thanks for the hints, Yann. Yesterday evening I tested my program again with threaded optimisation on. I didn't get the incredible framerate hit that I had noticed a year ago, but I still get a slowdown from ~500 to 200 fps (which of course doesn't mean that much at such high framerates; I will have to run more serious tests under heavier loads).

Quote:
Original post by Yann L
I had a lot of problems with accurate frame timing when the optimization was on, but this was mainly due to a slightly different form of Windows message pump I was using.


No problem here; I'm using the standard Windows message queue.

Quote:
Original post by Yann L
I also got slowdowns while having a heavy physics simulation running on 3 cores, while the fourth core ran the 3D engine.


While my code is heavily multithreaded in general, the case I was speaking of (skybox running at 10 fps) used only one core.

Quote:
Original post by Yann L
For whatever reason (I didn't write that part of the code) that thread had slightly elevated priority. I put it back to normal, and everything was fine.


I seem to remember that I have a task scheduler (which is in charge of managing jobs and synchronizing them) that runs at a higher priority. However, it doesn't do any heavy work, as it just wakes up other threads and goes to sleep immediately. Still an area to investigate.
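That wake-and-go-back-to-sleep pattern can be sketched with standard C++ primitives, with no busy-waiting and no need for elevated priority. All names here are illustrative, not from the poster's engine:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal sketch of a job queue: the scheduler pushes a job and
// returns (or sleeps) immediately; workers block on the condition
// variable instead of spinning.
class JobQueue
{
public:
    // Scheduler side: hand a job to a worker and wake exactly one.
    void Push(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one(); // wake one worker; caller goes back to sleep
    }

    // Worker side: blocks (no busy-waiting) until a job is available.
    std::function<void()> Pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !jobs_.empty(); });
        auto job = std::move(jobs_.front());
        jobs_.pop();
        return job;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
};
```

Because the waiting is done inside the OS primitive rather than by spinning, a scheduler built this way should not need a priority boost to stay responsive.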

Quote:
Original post by Yann L
Having threads spinlock for a long time also seems to get it off track (but you shouldn't do that anyway). It also doesn't like you manually messing around with thread affinity - but again, you shouldn't do that.


I'm guilty of that last one too. I'm forcing the thread affinity of my main rendering thread to CPU #0 because, on old AMD Athlon X2 machines, each core has an independent clock, and calls to QueryPerformanceCounter returned incoherent results depending on which core was active when the call was made.

There's a Microsoft hotfix for that, if I remember correctly, but it's pretty hard to make users understand that the OS is at fault and not your program, especially when your program is the only one suffering from the problem.

Y.
