pcmaster

DX11 - Pixel Shader 5 vs. Group Shared Memory and Atomic operations

Recommended Posts

Greetings community,

We all know that SM5 brought the ability to scatter writes from pixel shaders too, not only from compute shaders. MSDN is rather brief on this topic. I can only see that the Interlocked*() instructions can be used in both PS and CS, presumably on UAVs. DeviceMemoryBarrier() seems to work in both PS and CS, and it seems to be the only barrier instruction usable in a PS. My question is whether it is fundamentally impossible to [b]take advantage of the group shared memory in a PS[/b] too. I don't see an API for that, and maybe that makes sense. In GL 4.2 I noticed they released the GL_ARB_shader_image_load_store extension, which apparently supports the same functionality, but there is still nothing for manipulating the scarce but fast shared memory :( I have implemented various parallel algorithms in OpenCL, so although I might seem a little confused here, I'm very much aware of which memory is which and what it's good for in GPGPU via CUDA/OpenCL/DX11 CS.
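Just to be explicit about what I mean, here is a minimal sketch of what SM 5.0 does seem to allow in a pixel shader, as far as I can tell (the UAV slot, buffer layout and byte offsets are purely illustrative): a UAV bound next to the render target, Interlocked*() operations on it, and DeviceMemoryBarrier() as the only barrier, while a groupshared declaration is simply not legal outside a compute shader.

[code]
// Illustrative UAV, bound alongside the render target
// (e.g. via OMSetRenderTargetsAndUnorderedAccessViews, starting at slot u1).
RWByteAddressBuffer gStatsBuffer : register(u1);

float4 PSMain(float4 pos : SV_Position) : SV_Target
{
    uint depthBits = asuint(pos.z);

    // Scatter from the pixel shader: atomics on device memory work.
    uint oldValue;
    gStatsBuffer.InterlockedAdd(0, 1, oldValue);            // counter at byte offset 0
    gStatsBuffer.InterlockedMax(4, depthBits, oldValue);    // running max at byte offset 4

    // The only barrier instruction available to a pixel shader:
    DeviceMemoryBarrier();

    // groupshared uint sTile[256];   // <-- not legal in a pixel shader

    return float4(1, 1, 1, 1);
}
[/code]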

Also, I see virtually nobody discussing the use of atomic instructions outside compute shaders, and I wonder why. I see some OIT and bokeh implementations around that use append buffers. But I have a scenario where I need to rasterise normal geometry with a lot of textures, and where I might benefit from reducing a lot of information directly in the pixel shader using atomic operations on global (device) buffers, instead of writing out huge amounts of texture data and reducing it in parallel afterwards. I'm not going to elaborate on my scenario further; suffice it to say that I'll need to analyse what has been rasterised. I don't know how badly performance will suffer if all units (fragments) try to write to the same memory location using InterlockedMax() or similar :(
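To make that concrete, this is roughly the pattern I have in mind (the buffer name, slot and the per-fragment value are invented for the example): every rasterised fragment folds its result into one element of a global UAV, so the "reduction" happens during rasterisation rather than in a separate pass. My worry is precisely the contention on that single element.

[code]
// Single-element UAV holding the running maximum (name and slot are placeholders).
RWStructuredBuffer<uint> gReduceTarget : register(u1);

float4 PSAnalyse(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Stand-in for whatever per-fragment quantity actually needs analysing.
    uint value = (uint)(uv.x * 1024.0f) | ((uint)(uv.y * 1024.0f) << 16);

    // Every fragment on screen contends for the same element.
    InterlockedMax(gReduceTarget[0], value);

    return float4(0, 0, 0, 0);
}
[/code]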

Any thoughts on pixel shaders (not compute shaders!) and shared memory / atomics in DX11?

There's no way to access shared memory at all in pixel shaders. I would assume that the GPU is already using shared memory to coordinate pixel shader execution, but even if that's not the case, the API provides no means of using it. So you're out of luck on that one.

I really haven't played around too much with using UAVs in pixel shaders, aside from using an append buffer for bokeh (I wrote that sample you're talking about). I'd imagine it's pretty slow using device-wide interlocked operations, due to the kind of synchronization required for that sort of operation. Even interlocked adds on shared memory are pretty slow... if you look at any fast parallel reductions for compute shaders or CUDA, you'll find that they all avoid atomics. But it would definitely be better to profile than to assume, so if you do try any experiments I'd love to know how they turn out.
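For comparison, the atomic-free reductions mentioned above usually look something like this in a compute shader (a minimal sketch; the group size, buffer names and the max operation are only illustrative): each thread loads one value into groupshared memory, and the group then folds it down with barriers rather than with Interlocked*().

[code]
// Atomic-free max reduction: one result per 256-element chunk (illustrative sizes/names).
StructuredBuffer<uint>   gInput  : register(t0);
RWStructuredBuffer<uint> gOutput : register(u0);   // one element per thread group

groupshared uint sData[256];

[numthreads(256, 1, 1)]
void CSReduceMax(uint3 dtid : SV_DispatchThreadID,
                 uint  gi   : SV_GroupIndex,
                 uint3 gid  : SV_GroupID)
{
    sData[gi] = gInput[dtid.x];
    GroupMemoryBarrierWithGroupSync();

    // Tree reduction in shared memory -- no atomics involved.
    for (uint s = 128; s > 0; s >>= 1)
    {
        if (gi < s)
            sData[gi] = max(sData[gi], sData[gi + s]);
        GroupMemoryBarrierWithGroupSync();
    }

    if (gi == 0)
        gOutput[gid.x] = sData[0];
}
[/code]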

That is my understanding too: there is magic going on behind the scenes when you compile a pixel shader that converts the pixel shader code into low-level GPU instructions that use shared memory and the like (basically everything you have to do yourself when you write CS or CUDA code).

I have used DeviceMemoryBarrier() in a pixel shader; the documentation is VERY sketchy. As I understand it, this is basically a hint telling the compiler that all the GPU threads in the current block should finish accessing global memory before continuing. Used correctly, this should reduce the memory access overhead associated with different threads accessing global memory. But without a coherent description of exactly what this means in the context of a pixel shader, it's difficult to know whether I'm using it correctly. Does anyone know of a good description of what this function means in the context of a pixel shader?
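For what it's worth, the way I've been placing it looks roughly like this (the UAV, its slot and the hard-coded width are made up for the example); whether the barrier actually guarantees anything about other pixels in flight, rather than just ordering this pixel's own accesses, is exactly what I can't tell from the documentation.

[code]
// Illustrative only: write a UAV, barrier, then read it back in the same pixel shader.
RWStructuredBuffer<uint> gScratch : register(u1);

float4 PSBarrierTest(float4 pos : SV_Position) : SV_Target
{
    uint idx = (uint)pos.y * 1920 + (uint)pos.x;   // assumes a 1920-pixel-wide target

    gScratch[idx] = asuint(pos.z);

    // Wait for outstanding device memory accesses before continuing.
    DeviceMemoryBarrier();

    float f = asfloat(gScratch[idx]);
    return float4(f, f, f, 1.0f);
}
[/code]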

Building on what the others have said, there is no access to group shared memory in pixel shaders. If you consider for a moment how it is used in compute shaders, I think it will be clear why. In a compute shader you specify how large your thread groups are and how many of them are executed in a particular dispatch, and part of the thread group declaration is how much shared memory it will be using. This gives very fine control over how many threads will need to access that memory, and you can design your algorithm very precisely to coordinate access to it.
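For instance, the group size, the shared allocation and the synchronization all sit right next to each other in the shader (the sizes below are arbitrary), which is exactly the information the rasterization pipeline never exposes to you:

[code]
// Group size and groupshared allocation are declared side by side, so the
// compiler and driver know precisely how many threads share this memory.
groupshared float4 sTile[8 * 8];

[numthreads(8, 8, 1)]
void CSExample(uint3 gtid : SV_GroupThreadID, uint3 dtid : SV_DispatchThreadID)
{
    uint gi = gtid.y * 8 + gtid.x;
    sTile[gi] = float4(dtid, 0);        // each thread fills its own slot

    GroupMemoryBarrierWithGroupSync();  // the whole group waits here

    // ...any thread may now safely read its neighbours' entries from sTile...
}
[/code]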

In a pixel shader, on the other hand, there is currently no concept of a thread group. Instead, it is up to the vendor to determine the optimal split size to use when rasterizing a primitive, and that happens more or less behind the scenes. This makes it impossible for a developer to write a pixel shader with a coherent access strategy for shared memory.

Who knows what will be coming in the next versions of D3D, but this seems like a logical extension of the possibilities. People have been talking about programmable rasterization for a while too, so perhaps sometime down the road there could be selectable group sizes for rasterization... That is just pure speculation though - I would be happy with a programmable rasterizer, but I don't know if one would ever come around and/or be useful...


