Memory leaks when Windows shuts down the display


I posted this in another thread but I thought it needed a thread on its own.

When Windows' power-saving settings turn off the display, my application's resources stop being released the way they normally are. This means that if the user goes AFK and the default 20 minutes pass, all my GPU memory allocations start to pile up. After a while, ID3D11Device::CreateBuffer returns E_OUTOFMEMORY.

I'm not sure if it is the COM smart pointers (CComPtr&lt;ID3D11Buffer&gt;) that stop releasing, or if it happens further down in WDDM or the GPU driver.

As you can see in the screenshot below: the program runs stably, then when Windows turns off the display the GPU memory usage spikes to its maximum. On the CPU side, Commit Charge builds up, and once it hits 100%, ID3D11Device::CreateBuffer returns E_OUTOFMEMORY.

Any help on how to alleviate or circumvent this is much appreciated.

[sharedmedia=gallery:images:6197]


Are you checking return codes of every single DirectX call e.g. with if(FAILED())?

If not, do this and try again, just to make sure there is nothing you have forgotten to check on your end that lets a failing call be made over and over.

You can also intercept WM_SYSCOMMAND in your message loop to watch for the screensaver coming on or power saving kicking in, and pause your game loop at that point, which will also help. Personally, in my own game I return 0 here so that power saving and the screensaver are disabled while my game is active; that would work around your problem.


Are you checking return codes of every single DirectX call e.g. with if(FAILED())?

Yes, everything gets checked and I throw if failed.


You can also intercept WM_SYSCOMMAND in your message loop to watch for the screensaver coming on or power saving kicking in, and pause your game loop at that point, which will also help. Personally, in my own game I return 0 here so that power saving and the screensaver are disabled while my game is active; that would work around your problem.

Is it the SC_MONITORPOWER wParam you return 0 for? So returning 0 for this intercepts the message and stops the monitor from shutting down, whereas passing it on to DefWindowProc sends it back to Windows and lets the shutdown happen?


Is it the SC_MONITORPOWER wParam you return 0 for? So returning 0 for this intercepts the message and stops the monitor from shutting down, whereas passing it on to DefWindowProc sends it back to Windows and lets the shutdown happen?

Yeah, that's the one. In the event this is a driver bug, it would be an effective workaround, if slightly impolite to the user, since it overrides their power-saving settings :D
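
For reference, here is a minimal sketch of that interception, assuming a standard Win32 window procedure (the names are placeholders, not from the code in this thread):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_SYSCOMMAND)
    {
        // The four low-order bits of wParam are used internally by Windows,
        // so mask them off before comparing.
        switch (wParam & 0xFFF0)
        {
        case SC_MONITORPOWER:   // the display is about to be powered down
        case SC_SCREENSAVE:     // the screensaver is about to start
            return 0;           // swallow the message: the display stays on
        }
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}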

The intercept works in the Release build, but not in the Debug build -.- I guess this can work, but I'd rather solve the underlying issue, which is the leak.


all my GPU memory allocations start to pile up.

When the app is running "normally," what are you allocating (and apparently later deallocating), and what specific conditions trigger the deallocation?


Skinning data is calculated each frame and sent to the GPU, and the next frame all of that data is released. Most games probably have a fixed number of allocations at start-up and reuse them from there, so they don't see this problem. But if you create new buffers while the monitor is shut off, none of that gets deallocated, even if you release on the CPU side.


But if you create new buffers while the monitor is shut off, none of that gets deallocated

Maybe this is because the OS has physically powered off the PCI-e device, so the allocations go nowhere?

If this is the case and it turns out to be a documented issue, you could use the SC_MONITORPOWER message to pause your game while the monitor is off and stop allocating buffers. That would be correct behaviour anyhow: why allocate buffers for a device that can't be seen? You'd just be wasting CPU time...

You shouldn't be creating new buffers every frame! I know you are hesitant to move away from that approach, but it is quite simple to use UpdateSubresource and achieve the same effect (see the sketch below). If nothing else, it is worth trying so you can see whether the issue still affects your application when you aren't creating buffers every frame.
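
Roughly something like this (a sketch only, assuming the skinning data fits in one buffer of a known worst-case size; the names skinningBuffer, kMaxSkinningBytes and UploadSkinningData are placeholders, not taken from your code):

#include <d3d11.h>
#include <atlbase.h>   // CComPtr
#include <stdexcept>

// A single buffer created once at start-up, sized for the worst case.
static const UINT kMaxSkinningBytes = 64 * 1024;
CComPtr<ID3D11Buffer> skinningBuffer;

void CreateSkinningBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = kMaxSkinningBytes;
    desc.Usage     = D3D11_USAGE_DEFAULT;        // updatable in place via UpdateSubresource
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER; // or whatever bind flags you actually need
    if (FAILED(device->CreateBuffer(&desc, nullptr, &skinningBuffer)))
        throw std::runtime_error("CreateBuffer failed");
}

void UploadSkinningData(ID3D11DeviceContext* context, const void* skinningData)
{
    // Overwrites the contents of the existing buffer; no new allocation per frame.
    context->UpdateSubresource(skinningBuffer, 0, nullptr, skinningData, 0, 0);
}

A D3D11_USAGE_DYNAMIC buffer updated with Map and D3D11_MAP_WRITE_DISCARD is the other common option for data that changes every frame.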

I have seen something similar to this before using the Memory Diagnostic feature of VS2015. Every time you create a resource, your process's private bytes increase, and they don't immediately go back down right afterwards. This is probably because the runtime/driver doesn't immediately release the buffer memory, even though from the application's side it has been released. I haven't confirmed this suspicion officially yet, but it would make sense in your case too.

Are you getting any diagnostic messages at application exit? If you are leaking resources, you will get a live device object report in the output window (see the sketch below for how to trigger that report explicitly). As long as you don't get that message, your app is not the source of the additional memory.
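
If you want to trigger that report yourself, here is a rough sketch (it assumes the device was created with D3D11_CREATE_DEVICE_DEBUG in its creation flags, which you would normally only do in debug builds):

#include <d3d11.h>
#include <atlbase.h>   // CComPtr

void ReportLiveObjects(ID3D11Device* device)
{
    CComPtr<ID3D11Debug> debug;
    if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                         reinterpret_cast<void**>(&debug))))
    {
        // Prints any still-alive D3D11 objects to the debugger output window.
        debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
    }
}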

If all else fails, you can prevent the computer from turning off the display with SetThreadExecutionState(): https://msdn.microsoft.com/en-us/library/windows/desktop/aa373208%28v=vs.85%29.aspx
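
A minimal sketch of that approach (the function names here are just placeholders):

#include <windows.h>

void DisableDisplaySleep()
{
    // ES_CONTINUOUS keeps the requirement in effect until it is explicitly cleared.
    SetThreadExecutionState(ES_CONTINUOUS | ES_DISPLAY_REQUIRED);
}

void RestoreDisplaySleep()
{
    // Clear the display requirement so normal power management resumes.
    SetThreadExecutionState(ES_CONTINUOUS);
}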

