thmfrnk

DX11 Reduce Overdraw on Particles using RWTexture2D


Hey,

I just had the idea to use a RWTexture2D within the pixel shader of my particles to try to reduce overdraw/fillrate. I created an R32_Float texture with a UAV and bound it together with the RenderTarget. In my pixel shader I just add a constant value to the pixel of the current fragment, while checking for a maximum at the beginning. However, it does not work. It seems that the texture is not getting written. What am I doing wrong? Or is it not possible to read and write at the same time in the pixel shader?

Thx,
Thomas


The same subresource(s) cannot be simultaneously bound to the pipeline for write at multiple bind points (the actual restriction is even stronger: if a resource is bound for write it cannot be bound for read either).

See:

https://msdn.microsoft.com/en-us/library/windows/desktop/ff476517(v=vs.85).aspx

"The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity"

If you're running on D3D10 or 11 you should get an error/warning reported by the debug runtime.  With D3D12 and bindless UAVs the conflict is slightly harder to detect.


Yes, sure, you can't use an SRV of the same resource where you have bound the RTV... but in my case I have a second texture with a UAV, which is bound in parallel with the main RTV. Using a UAV it should be possible to read/write a texture within the PS. I am binding the UAV together with the current RTV and DSV using ID3D11DeviceContext::OMSetRenderTargetsAndUnorderedAccessViews,

and I don't get any error message.

 

  1. Do you have the Debug Layer turned on?
  2. What does your call to OMSetRenderTargetsAndUnorderedAccessViews look like?

And 3, I fail to see how this is going to produce desirable results. The execution order of reads/writes to the UAV is inherently 'Unordered' (it's in the name), and so trying to do any sort of "Read-Modify-Write" on a floating-point UAV texture in the pixel shader every time it gets invoked is going to be non-deterministic.

How does this reduce overdraw? It sounds like you're still binding a Render Target and writing something to it?

Could you go into a little more detail how it is you think this technique will a) Be deterministic and b) Perform better than what you were doing before?

19 hours ago, ajmiles said:
  1. Do you have the Debug Layer turned on?
  2. What does your call to OMSetRenderTargetsAndUnorderedAccessViews look like?

And 3, I fail to see how this is going to produce desirable results. The execution order of reads/writes to the UAV is inherently 'Unordered' (it's in the name), and so trying to do any sort of "Read-Modify-Write" on a floating-point UAV texture in the pixel shader every time it gets invoked is going to be non-deterministic.

How does this reduce overdraw? It sounds like you're still binding a Render Target and writing something to it?

Could you go into a little more detail how it is you think this technique will a) Be deterministic and b) Perform better than what you were doing before?

1. Yes, of course the Debug Layer is on. No warnings or errors.
2. I am using SlimDX, so I don't call that method natively, but it looks like this:

   Dim UAVs() As UnorderedAccessView = {Overdraw.UAV}
   Dim RTVs() As RenderTargetView = {Path.HDR_Buffer.RTV}
   ' The 1 here is the UAV start slot: UAVs are bound after the RTV slots
   C.OutputMerger.SetTargets(View.DepthStencilView, 1, UAVs, RTVs)

 

About my idea:
I thought about having a "coverage texture" parallel to the RenderTarget, to check how much alpha has already been drawn to the current fragment, in order to discard any further draws once a specific value is reached.

To simply test whether a UAV can be written to and read from in the PS, I just tried something like this:

RWTexture2D<uint> Overdraw; // only R32_UINT is supported..

float4 main(PixelShaderInput input, float4 coord : SV_POSITION) : SV_TARGET
{
    ...
    uint2 uv = (uint2) coord.xy;

    if (Overdraw[uv] > 0) discard;
    ...
    Overdraw[uv] = 3;
    ...
}

But nothing gets discarded.

About "unordered".. yes you are right, but I thought also the execution order for all fragements in PS is also "unordered"
 


Binding UAVs to the Pixel Shader stage has a slight quirk in that the UAV indices don't start at 0, but rather at index 1 if you have 1 render target (or 2 if you have 2 RTV outputs).

I've not looked at the internals of SlimDX, but it needs to make sure that it starts binding UAVs from N (where N = NumRTVs) rather than from 0.
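
For example, with a single render target bound, the HLSL-side declaration has to target u1 rather than the default u0 (a minimal sketch, borrowing the Overdraw name from the snippet above):

RWTexture2D<uint> Overdraw : register(u1); // slot 0 is taken by the RTV, so the first UAV slot is 1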

Pixel Shader output is strictly ordered by triangle index, so no, it's not unordered. Also worth noting is that killing/discarding pixels is unlikely to save you anything on fill-rate as the waves have already been launched. At least on hardware I'm familiar with, fill-rate is not saved by killing pixels.


Yeah, in the HLSL I posted I forgot the : register(u1)... so I was aware of the indices.

In my case the shading of each particle is very complex (lots of lighting), so I was hoping this could be an easy way to reduce the calculations for already-covered particles.

 

Finally, he also did something similar, I think:

 


Nothing about what you're doing (read + write to a UAV from a pixel shader) is inherently not going to work (it should); it's just your expectation around the ordering.

At the very least you'll need to use an atomic operation (with return) to keep the per-pixel count deterministic. If you don't, you'll likely end up with a non-deterministic number of particles affecting each pixel.
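
A minimal sketch of what that could look like, reusing the names from the earlier snippets (the threshold of 3 and the shading placeholder are illustrative):

RWTexture2D<uint> Overdraw : register(u1); // one RTV bound, so the UAV sits in slot 1

float4 main(PixelShaderInput input, float4 coord : SV_POSITION) : SV_TARGET
{
    uint2 uv = (uint2) coord.xy;

    // Atomically increment the per-pixel counter and read back the value it
    // had before, so every invocation sees a consistent count regardless of
    // the order in which the hardware happens to execute them.
    uint previous;
    InterlockedAdd(Overdraw[uv], 1, previous);

    // Once enough particle layers have accumulated, skip the expensive shading.
    if (previous >= 3)
        discard;

    // ... expensive per-particle lighting for the surviving fragments ...
    return float4(1.0, 1.0, 1.0, 1.0);
}

Note that even with the atomic, which particular particles survive at each pixel still depends on execution order; only the count is bounded.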

If you're still having problems, you might try a tool like RenderDoc, VSGD, Nsight, GPUPerfStudio or Intel GPA to debug it.

