ATI and memory leaks

Started by JB
5 comments, last by alek314 9 years, 12 months ago

Hi guys

So a while back I was trying to find out why some users of a DX11 module I created were experiencing severe memory leaks, with textures not being released, while I and many other users had no such problems.

I have narrowed it down to one common denominator. They are all running ATI video cards.

So my question is: is there anything special that can be done when setting up the DX device, etc., to play nice with ATI?

Thanks

JB


The only thing that springs to mind is that all of the D3D11 methods that bind objects to the pipeline hold references to those objects, so unless you're unbinding them before destruction, you're going to leak those references. Maybe other drivers detect this happening and silently release them behind your back, whereas the ATI drivers are less forgiving?

The simplest way to unbind everything is to call ID3D11DeviceContext::ClearState in the appropriate place (e.g. before shutdown or when loading a new map), so if you're not already doing that, I'd suggest doing it first and seeing if the problem still reproduces.
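In case it's useful, here's a minimal sketch of that teardown order, assuming a plain D3D11 setup (the context/rtv/texture names are placeholders, not anything from your module):

#include <d3d11.h>

void ReleaseRenderTargets(ID3D11DeviceContext* context,
                          ID3D11RenderTargetView* rtv,
                          ID3D11Texture2D* texture)
{
    // Unbind everything so the context drops its internal references
    // to these objects.
    context->ClearState();

    // Push any queued commands through so deferred destruction isn't
    // held up by work still in flight.
    context->Flush();

    // Only now can Release actually take the ref count to zero.
    if (rtv)     rtv->Release();
    if (texture) texture->Release();
}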


How are you diagnosing/detecting the leaks in the first place?

My DX11 modules inject graphics into a DX9 program. They only do so on command, not every frame (it's a render-to-texture cockpit lighting system). The users experiencing this problem are all ATI users, and they have given me video and screenshot evidence of the program's VAS usage climbing as they change light intensity settings. Release is called on the resources within a few seconds of stopping at a particular setting, since the program is 32-bit and running close to the limits of 32-bit VAS space. ClearState has been tried, to no avail.

None of the NVIDIA users have this problem. I ran my own tests, which showed a momentary spike in VAS within the limits of what I would expect given the amount of resources I am handling, but it drops back down almost immediately after the unloading is complete.

I found through Google that this occurs in several DX11 games on ATI cards as well, so it's definitely not an isolated issue.

If the reference count on your textures is 1 and you call Release on your COM pointer, then that's the best you can do. If there is an internal driver bug that keeps the memory floating around, then that has to be handled and corrected by AMD. Have you reported the behavior? Or have you tried to capture some details with the graphics debugger / PIX? You will have to provide a repro case to them for there to be any chance of getting the issue fixed.
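If you create the device with the debug layer, a hypothetical helper like this will dump every still-live D3D11 object, with ref counts, to the debugger output, which makes leaked references easy to spot (the function name is made up; ID3D11Debug and ReportLiveDeviceObjects are the real API):

#include <d3d11.h>
#include <d3d11sdklayers.h>

void DumpLiveObjects(ID3D11Device* device)
{
    ID3D11Debug* debug = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                         reinterpret_cast<void**>(&debug))))
    {
        // Lists every live object the device still knows about,
        // including its current ref count.
        debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
        debug->Release();
    }
}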

One other point - just because it works on NVidia cards doesn't mean that you are in the clear. Drivers can sometimes be more lenient than they are supposed to be, so it is possible that you are actually doing something incorrectly but the NV drivers happen to handle it in a desired manner.

Even so, if you are using shared resources between D3D9 and D3D11, then my guess is that you are probably right that there is a driver issue in handling the memory.

It happens with non-shared resources as well. They are actually the worst offenders, because they take the form of several textures per render target (which get blended together). Releasing them doesn't free the memory on the ATI side.

Our project uses multi-threaded rendering with DX9. When threading was not handled properly, DX9 would sometimes give us an E_OUTOFMEMORY error, or it would just work, or the computer would crash and reboot. The debug runtime would always print warning messages when these things happened. BTW, we use an AMD card.
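For the D3D11 module, the equivalent of that debug output is the debug layer, enabled when the device is created. A rough sketch, assuming you control device creation (the helper name is invented; the flag and the D3D11CreateDevice call are the standard API):

#include <d3d11.h>

HRESULT CreateDeviceWithDebugLayer(ID3D11Device** device,
                                   ID3D11DeviceContext** context)
{
    UINT flags = 0;
#if defined(_DEBUG)
    // Makes the runtime print warnings about unreleased objects,
    // invalid bindings, etc. to the debugger output.
    flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

    D3D_FEATURE_LEVEL level;
    return D3D11CreateDevice(nullptr,                   // default adapter
                             D3D_DRIVER_TYPE_HARDWARE,
                             nullptr,                   // no software module
                             flags,
                             nullptr, 0,                // default feature levels
                             D3D11_SDK_VERSION,
                             device, &level, context);
}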

