
[DX12] Compute shader will not compile with MinMax sampler


_void_    864

Hello guys,

I would like to use MinMax filtering (D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT) in a compute shader.

I am trying to compile the compute shader (cs_5_0) and encounter an error: "error X4532: cannot map expression to cs_5_0 instruction set".
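
Here is a simplified sketch of what I am compiling (the resource names, register slots, and texture size are just placeholders):

```hlsl
Texture2D<float>   DepthTexture : register(t0);
SamplerState       MaxSampler   : register(s0); // created with D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT
RWTexture2D<float> Output       : register(u0);

[numthreads(8, 8, 1)]
void Main(uint3 dtid : SV_DispatchThreadID)
{
    float2 uv = (float2(dtid.xy) + 0.5f) / 1024.0f;
    // This is the line that triggers error X4532
    Output[dtid.xy] = DepthTexture.Sample(MaxSampler, uv);
}
```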

I tried to compile the shader in cs_6_0 mode and got "unrecognized compiler target cs_6_0". I do not really understand the error as cs_6_0 is supposed to be supported.

According to MSDN, D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT should "Fetch the same set of texels as D3D12_FILTER_MIN_MAG_MIP_POINT and instead of filtering them return the maximum of the texels. Texels that are weighted 0 during filtering aren't counted towards the maximum. You can query support for this filter type from the MinMaxFiltering member in the D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure".

I am not sure this documentation applies here, as it is talking about Direct3D 11. D3D12_FEATURE_DATA_D3D12_OPTIONS does not seem to provide this kind of check.

The Direct3D device is created with feature level D3D_FEATURE_LEVEL_12_0, and I am using VS 2015.

Thanks!

galop1n    977

Shader Model 6 uses a whole new compiler that generates DXIL bytecode instead of DXBC; it is on GitHub. You also need to turn on the experimental shader models feature, which requires the Creators Update or a newer version of Windows.
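
From memory, enabling it looks roughly like this, before you create the device (untested sketch; developer mode has to be on):

```cpp
#include <d3d12.h>

// Opt in to experimental shader models (SM6 / DXIL). Must be called before
// creating the device, and the OS needs developer mode enabled.
// D3D12ExperimentalShaderModels is the feature UUID declared in d3d12.h.
bool EnableShaderModel6()
{
    HRESULT hr = D3D12EnableExperimentalFeatures(
        1, &D3D12ExperimentalShaderModels, nullptr, nullptr);
    return SUCCEEDED(hr);
}
```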

I don't think the sampler is the problem. By the way, are you passing it in a descriptor table or as a static sampler?

Your real issue is that you call Sample instead of SampleLevel or SampleGrad. Sample is invalid in a compute shader because it cannot produce the derivatives it needs.
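
Something along these lines should compile for cs_5_0 (just a sketch, your resource names will differ):

```hlsl
Texture2D<float>   DepthTexture : register(t0);
SamplerState       MaxSampler   : register(s0);
RWTexture2D<float> Output       : register(u0);

[numthreads(8, 8, 1)]
void Main(uint3 dtid : SV_DispatchThreadID)
{
    float2 uv = (float2(dtid.xy) + 0.5f) / 1024.0f;
    // SampleLevel takes an explicit mip level, so no derivatives are needed
    // and the max-reduction sampler still applies to the fetched texels.
    Output[dtid.xy] = DepthTexture.SampleLevel(MaxSampler, uv, 0.0f);
}
```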

_void_    864

@galop1n Yep, you are right. I was using Sample instead of SampleLevel. The issue has been solved.

Do you know if MinMax filtering is supported by default in D3D12? If not, how do you check whether it is supported?

Thanks!


MJP    19788

In D3D11, Min/Max filtering modes were optional and had a dedicated cap bit in D3D11_FEATURE_DATA_D3D11_OPTIONS1 that you could check for support. However, the docs also stated that Min/Max filtering modes were tied to Tier 2 tiled resource functionality. D3D12 doesn't seem to have a dedicated caps bit, and the docs for D3D12_TILED_RESOURCES_TIER don't mention Min/Max filtering at all. A cap bit is mentioned in the docs for D3D12_FILTER, but unfortunately it seems to be partially copy/pasted from the D3D11 docs, since it links to the docs for the older D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure. So unless someone from Microsoft can clarify (or the validation layer complains at you), I would probably assume that in D3D12 Min/Max filtering is still tied to D3D12_TILED_RESOURCES_TIER_2.

FYI D3D_FEATURE_LEVEL_12_0 implies support for D3D12_TILED_RESOURCES_TIER_2, so you should be okay using Min/Max filtering on your hardware.
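
If you want to be defensive about it, this is the check I would make (a sketch, under the assumption that Min/Max filtering really is tied to tiled resources Tier 2):

```cpp
#include <d3d12.h>

// Assumes Min/Max filtering support follows D3D12_TILED_RESOURCES_TIER_2,
// since D3D12 has no dedicated caps bit for it.
bool SupportsMinMaxFiltering(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    return options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;
}
```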


