
DX11 D3D11 WARP without swap-chain - LEAK?


pcmaster    982

Hi all!

 

We've had rendering running just fine on servers without GPUs (and without monitors), using the DX11.0 WARP driver. Now that our loads have increased, I've noticed we run out of memory. I'm releasing all resources (mainly textures and buffers), but since we don't have any swap chain (and hence no Present), I can't reach any conclusion other than that the resources are never actually released and linger in RAM until we run out of memory. My theory is supported by similar behaviour observed with GPUs (from all vendors), where the RAM is actually freed only a couple of frames after release, and by captures taken with Windows Performance Recorder/Analyzer while running under WARP.
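For context, a headless WARP device of the kind described above is created without any swap chain, so there is never a Present. A minimal sketch (error handling omitted):

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a headless D3D11 device on the WARP software rasterizer.
// No DXGI swap chain is involved, hence no Present call ever happens.
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL level;

HRESULT hr = D3D11CreateDevice(
    nullptr,                 // default adapter
    D3D_DRIVER_TYPE_WARP,    // software rasterizer
    nullptr, 0,
    nullptr, 0,              // default feature levels
    D3D11_SDK_VERSION,
    &device, &level, &context);
```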

 

I tried ID3D11DeviceContext::Flush and ClearState, both to no avail (as expected).

 

Can anyone comment on using WARP without a swap chain? Can I render "forever" without running out of memory, given that I never present?

 

Thanks for any ideas!

 

EDIT: Here I see that calling Flush should help, but I just don't believe it - it doesn't help in my case and my RAM usage never goes down.


Andy Glaister    136

In D3D11, almost anything you delete is not actually destroyed until you flush or present. The reason is that the 'command buffer' being filled with commands may reference those resources, so anything deleted during command-buffer recording is kept alive until the GPU has executed that command buffer. Resources are also kept alive while they are bound to the pipeline: even if you release them and the ref-count says zero, they remain alive as long as they are bound - ClearState solves this. So ClearState + Flush should make sure that any resources you believe you have deleted are actually destroyed. If you still find resources alive, it might be worth using the debug layers to see whether there are objects alive at that point that you didn't expect.
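The release pattern described here might look like the following sketch, where `pTexture` and `context` stand in for any resource and immediate context (assumed names, not from the thread):

```cpp
// Release our reference, then make sure nothing in the pipeline or the
// pending command buffer keeps the resource alive.
pTexture->Release();
pTexture = nullptr;

context->ClearState();  // drop pipeline bindings holding hidden references
context->Flush();       // submit the command buffer so deferred deletions run
```

Note that ClearState also unbinds state you may still need (render targets, shaders, viewports), so you would typically re-bind afterwards.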

 

WARP does use additional memory to optimize its rendering: we typically hold onto surfaces if we think you are going to reuse them in the future (particularly resources in use that you call Map/Discard on), as this is faster than releasing them and then asking the OS for them again (and getting zero-filled memory back). We do limit this memory, and it will 'decay' over time, so it should not use too much and it should certainly not leak.

 

Andy

pcmaster    982

Hi Andy!

 

I'm very well aware - I've implemented multiple-command-buffer submission and resource tracking on one of the consoles. Indeed, there we only finally release deleted resources after a few frames, once we're sure the GPU is done with them, as that's left to us rather than to the driver.

 

We don't have much Map/Write/Discard, certainly not an amount that would account for 10 GB over 1200 draw calls. That was exactly what I feared would be double-buffered by the driver, but that doesn't seem to be the case. We do have plenty of Map/Write/NoOverwrite, though.
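For clarity, the Map/Write/NoOverwrite pattern mentioned above appends into a dynamic buffer without forcing the driver to rename (double-buffer) the allocation, unlike Discard. A sketch, where `vb`, `offset`, `srcData` and `srcBytes` are assumed names:

```cpp
// Append vertex data into a dynamic buffer. NO_OVERWRITE promises the
// driver we won't touch regions the GPU may still be reading, so no
// renamed (double-buffered) allocation is needed - unlike DISCARD.
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(vb, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &mapped)))
{
    memcpy(static_cast<char*>(mapped.pData) + offset, srcData, srcBytes);
    context->Unmap(vb, 0);
    offset += srcBytes;  // next append goes past the region just written
}
```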

 

I added ID3D11Debug::ReportLiveDeviceObjects before and after releasing the textures, and it shows exactly the expected difference: after Release + Flush + ClearState they are gone from the ReportLiveDeviceObjects listing - however, not from RAM (WPA and even Task Manager confirm that).
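For reference, dumping the live-object report as done above can be sketched like this (the device must have been created with the D3D11_CREATE_DEVICE_DEBUG flag):

```cpp
#include <d3d11sdklayers.h>

// Query the debug interface from the device and list every live object
// with its reference counts; output goes to the debugger output window.
ID3D11Debug* debug = nullptr;
if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                     reinterpret_cast<void**>(&debug))))
{
    debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
    debug->Release();
}
```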

 

So Flush + ClearState to no avail so far, sadly.


pcmaster    982

Okay, I've reduced it as much as I could. Now I only do some rendering with immutable resources (and constant buffers) into a few render targets, then a few Map/NoOverwrite updates for a fullscreen quad, and finally read the render targets back on the CPU. At that point (the first Map for reading), all the massive allocations happen. No Map/Discard at all.
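A CPU readback of a render target, as in the repro above, typically goes through a staging copy. A sketch, where `rtTexture` stands in for the render-target texture (assumed name):

```cpp
// Copy the GPU render target into a CPU-readable staging texture, then
// map it for reading. The first Map(READ) forces the driver to finish
// all pending rendering before returning.
D3D11_TEXTURE2D_DESC desc;
rtTexture->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);
context->CopyResource(staging, rtTexture);

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // mapped.pData / mapped.RowPitch now describe the pixel data.
    context->Unmap(staging, 0);
}
staging->Release();
```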


pcmaster    982

Hehe, my epic fail. There was indeed a leak on my side - a forgotten Release on a bunch of textures. :) So now I believe WARP is fine and actually does release the memory. :)


