  • Similar Content

    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water-wave normals in my pixel shader. I know it's not efficient and that the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals and doesn't look too repetitive? (See the sketch after this list.)
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer throws errors saying this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space position's Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader (see the sketch after this list). I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
    • By NikiTo
      Some people say "discard" has not a positive effect on optimization. Other people say it will at least spare the fetches of textures.
       
      if (color.A < 0.1f)
      {
          //discard;
          clip(-1);
      }
      // tons of reads of textures following here
      // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluates all the statements after the "discard" instruction.

      MSDN:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? This is crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, in the sense that they contain a lot of code. Many of my pixels should be completely discarded. I could use a comparison and a discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until its long, huge neighbor shaders complete.
      Initially I wanted to use the stencil buffer to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding the stall in the pixel shader execution flow. I had assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render, say, a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the Output Merger anyway? Handling of the stencil buffer is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will the GPU already start using the freed-up resources to render another pixel?!

    • By Axiverse
      I'm wondering when upload buffers are copied to the GPU. Basically, I want to pool buffers, and I want to know when I can reuse them and write new data into them.
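
For the water-normal question from lubbe75 above, a minimal quick-and-dirty HLSL sketch is shown below: a cheap hash-based value noise sampled as a height field and converted to a normal with finite differences. All names and constants here (hash magic numbers, scales, scroll speeds) are illustrative assumptions, not code from the original post.

    // Cheap pseudo-random value in [0, 1) derived from a 2D coordinate.
    float Hash21(float2 p)
    {
        p = frac(p * float2(123.34, 456.21));
        p += dot(p, p + 45.32);
        return frac(p.x * p.y);
    }

    // Bilinearly interpolated value noise built on the hash above.
    float ValueNoise(float2 p)
    {
        float2 i = floor(p);
        float2 f = frac(p);
        float2 u = f * f * (3.0 - 2.0 * f); // smoothstep weights
        float a = Hash21(i);
        float b = Hash21(i + float2(1, 0));
        float c = Hash21(i + float2(0, 1));
        float d = Hash21(i + float2(1, 1));
        return lerp(lerp(a, b, u.x), lerp(c, d, u.x), u.y);
    }

    // Two scrolling octaves of noise as a height field y = h(x, z),
    // turned into a normal with forward differences.
    float3 FakeWaterNormal(float2 worldXZ, float time)
    {
        float2 uv = worldXZ * 0.15 + float2(time * 0.05, time * 0.03);
        const float eps = 0.05;

        float h  = ValueNoise(uv)                  + 0.5 * ValueNoise(uv * 2.7 - time * 0.07);
        float hx = ValueNoise(uv + float2(eps, 0)) + 0.5 * ValueNoise((uv + float2(eps, 0)) * 2.7 - time * 0.07);
        float hz = ValueNoise(uv + float2(0, eps)) + 0.5 * ValueNoise((uv + float2(0, eps)) * 2.7 - time * 0.07);

        return normalize(float3(-(hx - h) / eps, 1.0, -(hz - h) / eps));
    }

Summing octaves at different scales and scroll directions hides most of the repetition, but as the post notes, a pre-calculated noise or normal texture is still the cheaper option outside quick tests.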
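
For the Vulkan clip-space question from turanszkij above, the "flip Y in the vertex shader" option looks roughly like the hedged HLSL sketch below; the cbuffer layout, register slot and semantics are placeholders rather than code from that engine. (The VK_KHR_maintenance1 extension, core in Vulkan 1.1, is the usual "easy extension" that makes a negative-height viewport legal instead, but support should be checked rather than assumed.)

    cbuffer Camera : register(b0)
    {
        float4x4 gWorldViewProj;
    };

    struct VSOut
    {
        float4 pos : SV_Position;
        float2 uv  : TEXCOORD0;
    };

    VSOut VSMain(float3 positionOS : POSITION, float2 uv : TEXCOORD0)
    {
        VSOut o;
        o.pos = mul(float4(positionOS, 1.0), gWorldViewProj);
        o.pos.y = -o.pos.y; // compensate for Vulkan's Y-down clip space
        o.uv = uv;
        return o;
    }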

[DX12] Compute shader will not compile with MinMax sampler

Recommended Posts

Hello guys,

I would like to use MinMax filtering (D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT) in a compute shader.

I am trying to compile the compute shader (cs_5_0) and encounter an error: "error X4532: cannot map expression to cs_5_0 instruction set".

I tried to compile the shader in cs_6_0 mode and got "unrecognized compiler target cs_6_0". I do not really understand the error as cs_6_0 is supposed to be supported.

According to MSDN, D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT should "Fetch the same set of texels as D3D12_FILTER_MIN_MAG_MIP_POINT and instead of filtering them return the maximum of the texels. Texels that are weighted 0 during filtering aren't counted towards the maximum. You can query support for this filter type from the MinMaxFiltering member in the D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure".

Not sure if this is valid documentation as it is talking about Direct3D 11. D3D12_FEATURE_DATA_D3D12_OPTIONS does not seem to provide this kind of check.

The Direct3D device is created with feature level D3D_FEATURE_LEVEL_12_0, and I am using VS 2015.

Thanks!


Shader Model 6 is a whole new compiler that generates DXIL bytecode instead of DXBC; it is on GitHub, and you also need to turn on the experimental shader models feature, on Creators Update or newer Windows.

I don't think the sampler is the problem. By the way, are you passing it in a descriptor table or as a static sampler?

Your real issue is that you call Sample instead of SampleLevel or SampleGrad; Sample is invalid in a compute shader because it cannot produce the implicit derivatives.
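
A minimal sketch of that fix, assuming a cs_5_0 compute shader sampling a single-channel texture through the MAXIMUM_MIN_MAG_MIP_POINT sampler; the resource names, register slots and thread-group size are placeholders, not the original poster's code.

    Texture2D<float>   gInput      : register(t0);
    RWTexture2D<float> gOutput     : register(u0);
    SamplerState       gMaxSampler : register(s0); // created with D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT

    [numthreads(8, 8, 1)]
    void CSMain(uint3 dtid : SV_DispatchThreadID)
    {
        uint width, height;
        gOutput.GetDimensions(width, height);
        if (dtid.x >= width || dtid.y >= height)
            return;

        float2 uv = (float2(dtid.xy) + 0.5) / float2(width, height);

        // Sample() would need implicit derivatives, which only exist in pixel
        // shaders; SampleLevel with an explicit mip level compiles in compute.
        gOutput[dtid.xy] = gInput.SampleLevel(gMaxSampler, uv, 0.0);
    }

In compute (and vertex) shaders everything has to go through SampleLevel, SampleGrad or Load, since there is no pixel quad to derive a mip level from.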


@galop1n Yep, you are right. I was using Sample instead of SampleLevel. The issue has been solved.

Do you know if MinMax filtering is supported by default in D3D12? If not, how do you check whether it is supported?

Thanks!



In D3D11, Min/Max filtering modes were optional and had a dedicated cap bit in D3D11_FEATURE_DATA_D3D11_OPTIONS1 that you could check for support. However, the docs also stated that Min/Max filtering modes were tied to Tier 2 tiled resource functionality. D3D12 doesn't seem to have a dedicated caps bit, and the docs for D3D12_TILED_RESOURCES_TIER don't mention Min/Max filtering at all. A cap bit is mentioned in the docs for D3D12_FILTER, but unfortunately it seems to be partially copy/pasted from the D3D11 docs, since it links to the docs for the older D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure. So unless someone from Microsoft can clarify (or the validation layer complains at you), I would probably assume that in D3D12 Min/Max filtering is still tied to D3D12_TILED_RESOURCES_TIER_2.

FYI D3D_FEATURE_LEVEL_12_0 implies support for D3D12_TILED_RESOURCES_TIER_2, so you should be okay using Min/Max filtering on your hardware.

