DX11 Feature level shaders...


I was recently thinking of adding feature level support to my DX11 engine and was wondering whether feature level 9_3 supports shader model 2_a or 2_b. I see that it doesn't support 3_0, but I couldn't find info on which shader model 2_x profiles are supported. Also, do I write the shaders the same way as shader model 5_0 shaders as far as the data sent to them goes, and just keep them within the limits from my DX9 engine (which supports 2_b and 3_0)? Thanks for any help in this matter - it would just be nice to combine both my engines if possible.


Always read the footnotes: http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876%28v=vs.85%29.aspx

 

 

 


Also, do I write the shaders the same way as shader model 5_0 shaders as far as the data sent to them goes, and just keep them within the limits from my DX9 engine (which supports 2_b and 3_0)?

You don't have to implement any such limiting yourself. The DX11 API enforces the feature levels for you: for example, if you try to set a shader that is not supported by the feature level in use, the call simply won't set it. You can use the debug layer to check whether the shader was set successfully, or whether there were compatibility errors. As for the shader input data, you'll get errors if you try to use a shader-input format that doesn't match the input format of the compiled shader. The shader compiler will also give you errors if you use features (such as data formats) that aren't compatible with the shader model specified during compilation.
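
For reference, a rough sketch (the fallback list and names are just illustrative, not from your engine) of asking D3D11 for several feature levels and checking which one you actually got - the debug-layer flag is what gives you the compatibility messages mentioned above:

    #include <d3d11.h>

    // Ask for the highest feature level the hardware supports, falling back to 9_3.
    // D3D11_CREATE_DEVICE_DEBUG enables the debug layer (requires the SDK layers to be installed).
    HRESULT CreateDeviceWithFallback(ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
    {
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
            D3D_FEATURE_LEVEL_9_3
        };
        D3D_FEATURE_LEVEL obtained = D3D_FEATURE_LEVEL_9_1;

        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
            D3D11_CREATE_DEVICE_DEBUG,
            requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
            outDevice, &obtained, outContext);

        if (SUCCEEDED(hr) && obtained < D3D_FEATURE_LEVEL_11_0)
        {
            // Running on the 9_3 (or 10.x) path: select the matching set of shaders here.
        }
        return hr;
    }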

 

If you want to support multiple feature levels and use the shader features of the newer shader models on the newer feature levels, you'll have to write (or #ifdef extensively) and compile the shaders separately for each shader model. And if all your shaders only use the shader models supported by the lowest feature level, then you can still use them with the newer feature levels, but in that case I think there's no point in asking for the higher feature levels - they are only useful if you actually use the new shader features.

 

So no, you can't just write your shaders with SM5.0 features and compile them for lower SM targets (unless you can #ifdef-out the use of the SM5.0 features).
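
If it helps, here's a sketch of compiling one HLSL source for two targets - the USE_SM5_FEATURES macro, file name and entry point are made up for the example; they would be whatever your #ifdefs and files actually use:

    #include <d3dcompiler.h> // link against d3dcompiler.lib

    // Compile the same source twice: once for SM5.0 with the extra code paths enabled
    // via a hypothetical USE_SM5_FEATURES macro, and once for the feature level 9_3 target.
    HRESULT CompileBothVariants(ID3DBlob** sm5Blob, ID3DBlob** fl93Blob)
    {
        const D3D_SHADER_MACRO sm5Defines[] = { { "USE_SM5_FEATURES", "1" }, { nullptr, nullptr } };
        ID3DBlob* errors = nullptr; // holds compiler messages on failure

        HRESULT hr = D3DCompileFromFile(L"lighting.hlsl", sm5Defines, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                        "PSMain", "ps_5_0", 0, 0, sm5Blob, &errors);
        if (FAILED(hr)) return hr;

        // Same entry point, SM5-only code #ifdef'd out, compiled against the 9_3 profile.
        hr = D3DCompileFromFile(L"lighting.hlsl", nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                "PSMain", "ps_4_0_level_9_3", 0, 0, fl93Blob, &errors);
        return hr;
    }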



I probably didn't explain myself that well. I didn't mean I would use my SM 5_0 shaders directly (I have already compiled them all); I just wanted to add a new rendering pipeline that would use the DX9 features instead.

What I meant was: since I have a list of shader model 2_x shaders, could I just add them to the DX11 engine and run them instead? My SM 5 shaders use MRTs, but of course I can't do that in DX9, so I just wanted to use the shaders from my older engine in my new DX11 engine as a separate pipeline for when DX11 isn't supported. I was wondering whether they still need to be in the .fx shader format, or in the new HLSL style where vertex and pixel shaders are separate files and use cbuffers etc. Compiling a separate shader framework is fine with me; I just wondered if there was any voodoo I had to perform in porting them - that is, do I write the shaders like the .fx files, or the same as HLSL 5_0 with separate VS and PS files, if that makes sense. Thanks for the link. I'm assuming 2_x is the equivalent of 2_a/2_b, based on dx9_b and dx9_c.


The Effects framework is no longer part of the DirectX SDK (since DX10, even).

 

You can also still use Effects in DirectX 11, since Microsoft provides it as source code, but they have also said that it will be removed completely in a future DirectX version, along with the rest of the D3DX functions (they will probably not be supported starting with DX12).

 

The HLSL format itself hasn't changed, though... You just can't use Effects techniques and shader passes in your shaders anymore. You don't have to remove the technique blocks, because the compiler will just ignore them, but you still can't use them without the Effects framework.

You now have to compile and load each shader separately, and set constant buffers and sampler states from the API (if you used to set them from HLSL before) - this is the same as not using the Effects framework in DX9.
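
A minimal sketch of that per-shader setup (the constant buffer layout and register slots are just placeholders for whatever your shaders declare):

    #include <d3d11.h>
    #include <DirectXMath.h>

    // What Effects used to do for you: create each shader from its compiled blob,
    // create the constant buffer and sampler state yourself, and bind them explicitly.
    struct PerObjectConstants { DirectX::XMFLOAT4X4 worldViewProj; }; // illustrative layout

    void SetupAndBind(ID3D11Device* device, ID3D11DeviceContext* ctx,
                      ID3DBlob* vsBlob, ID3DBlob* psBlob)
    {
        ID3D11VertexShader* vs = nullptr;
        ID3D11PixelShader*  ps = nullptr;
        device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &vs);
        device->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &ps);

        D3D11_BUFFER_DESC cbDesc = {};
        cbDesc.ByteWidth = sizeof(PerObjectConstants);
        cbDesc.Usage = D3D11_USAGE_DEFAULT;
        cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        ID3D11Buffer* cb = nullptr;
        device->CreateBuffer(&cbDesc, nullptr, &cb);

        D3D11_SAMPLER_DESC sampDesc = {};
        sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
        sampDesc.AddressU = sampDesc.AddressV = sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
        ID3D11SamplerState* sampler = nullptr;
        device->CreateSamplerState(&sampDesc, &sampler);

        ctx->VSSetShader(vs, nullptr, 0);
        ctx->PSSetShader(ps, nullptr, 0);
        ctx->VSSetConstantBuffers(0, 1, &cb);   // register(b0) in the shader
        ctx->PSSetSamplers(0, 1, &sampler);     // register(s0) in the shader
    }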

 

And sadly, you also can't use preshaders anymore: http://msdn.microsoft.com/en-us/library/windows/desktop/bb206299%28v=vs.85%29.aspx#PreShaders_Improve_Performance So you have to be a lot more careful about which calculations you put in your shaders - if they are better done on the CPU, you now have to move them out of your shaders manually if you want that optimization.
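
For example, the kind of per-draw math a preshader used to hoist out now looks something like this on the CPU side (again just a sketch, reusing the placeholder PerObjectConstants layout from the previous snippet):

    // Combine the matrices once per draw on the CPU instead of per vertex in the shader.
    void UpdatePerObjectCB(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                           DirectX::FXMMATRIX world, DirectX::CXMMATRIX view, DirectX::CXMMATRIX proj)
    {
        PerObjectConstants data;
        // HLSL packs matrices column-major by default, so transpose before uploading.
        DirectX::XMStoreFloat4x4(&data.worldViewProj,
                                 DirectX::XMMatrixTranspose(world * view * proj));
        ctx->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
    }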

 

But I think this is exactly what you wanted (no Effects), right?

 

2_x refers to all of the shader model 2 profiles (2_a, 2_b, etc.) except 2_0.

 

I also found another footnote, which may be of interest to you:

(from http://msdn.microsoft.com/en-us/library/windows/desktop/jj215820%28v=vs.85%29.aspx#direct3d_9.1__9.2__and_9.3_feature_levels)

 

Feature level 9.3 effectively requires hardware that complies with the requirements for legacy Direct3D 9 shader model 3.0, but this feature level does not make use of vs_3_0 or ps_3_0 targets
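
In practice that just means you hand the compiler the 9_x compatibility targets instead of vs_3_0/ps_3_0. A quick sketch of picking the pixel shader target per feature level (the vs_* targets follow the same naming pattern):

    // Pixel shader compile target for each feature level (per the MSDN target list).
    const char* PixelShaderTarget(D3D_FEATURE_LEVEL fl)
    {
        switch (fl)
        {
        case D3D_FEATURE_LEVEL_11_0: return "ps_5_0";
        case D3D_FEATURE_LEVEL_10_1: return "ps_4_1";
        case D3D_FEATURE_LEVEL_10_0: return "ps_4_0";
        case D3D_FEATURE_LEVEL_9_3:  return "ps_4_0_level_9_3";
        default:                     return "ps_4_0_level_9_1"; // covers 9_1 and 9_2
        }
    }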


  • Similar Content

    • By AxeGuywithanAxe
      I wanted to see how others are currently handling descriptor heap updates and management.
      I've read a few articles, and there tend to be three major strategies:
      1) You split up descriptor heaps per shader stage (i.e. one for vertex shader, pixel, hull, etc.)
      2) You have one descriptor heap for an entire pipeline
      3) You split up descriptor heaps per update frequency (i.e. EResourceSet_PerInstance, EResourceSet_PerPass, EResourceSet_PerMaterial, etc.)
      The benefit of the first two approaches is that they make it easier to port current code, and descriptor / resource-descriptor management and updating tends to be easier, but they seem to be not as efficient.
      The benefit of the third approach seems to be that it's the most efficient, because you only manage and update objects when they change. (A rough sketch follows below.)
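      A rough sketch of how the third approach could be set up (the sizes and range split are made-up numbers): one shader-visible heap carved into ranges per update frequency, so only the ranges that actually changed get re-copied each frame.

      #include <d3d12.h>
      #include <wrl/client.h>

      // One shader-visible CBV/SRV/UAV heap, split into per-frequency ranges; descriptors
      // are staged in non-shader-visible heaps and copied in with CopyDescriptorsSimple
      // only when their update frequency says they changed.
      Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> CreateFrequencyRangedHeap(ID3D12Device* device)
      {
          D3D12_DESCRIPTOR_HEAP_DESC desc = {};
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
          desc.NumDescriptors = 4096; // e.g. [0..63] per-pass, [64..1023] per-material, rest per-instance
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

          Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> heap;
          device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
          return heap;
      }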
    • By evelyn4you
      hi,
      Until now I have used the typical vertex shader approach for skinning, with a constant buffer containing the transform matrices for the bones and a vertex buffer containing the bone indices and bone weights.
      Now I have implemented realtime environment probe cubemapping, so I have to render my scene from many points of view, and the skinning takes too long because it is recalculated for every side of the cubemap.
      For info: I am working on Win7 and therefore use shader model 5.0, not 5.x, which has more options - or is there a way to use 5.x on Win7?
      My graphics card is a DirectX 12 compatible NVidia GTX 960.
      The member turanszkij has posted a compute shader that I can understand. (For info: in his engine he uses an optimized version of it.)
      https://turanszkij.wordpress.com/2017/09/09/skinning-in-compute-shader/
      Now my questions:
      Is it possible to feed the compute shader with my original vertex buffer, or do I have to copy it into several ByteAddressBuffers as implemented in the following code?
      The same question applies to the constant buffer with the matrices.
      My more urgent question is how I feed my normal pipeline with the result of the compute shader, which is 2 RWByteAddressBuffers that contain position and normal.
      For example, I could use 2 vertex buffer bindings:
      1. containing only the UV coordinates
      2. containing position and normal
      How do I copy from the RWByteAddressBuffers to the vertex buffer? (One possible setup is sketched after the shader code below.)
       
      (Code from turanszkij )
      Here is my shader implementation for skinning a mesh in a compute shader:
      struct Bone
      {
          float4x4 pose;
      };
      StructuredBuffer<Bone> boneBuffer;

      ByteAddressBuffer vertexBuffer_POS; // T-Pose pos
      ByteAddressBuffer vertexBuffer_NOR; // T-Pose normal
      ByteAddressBuffer vertexBuffer_WEI; // bone weights
      ByteAddressBuffer vertexBuffer_BON; // bone indices

      RWByteAddressBuffer streamoutBuffer_POS; // skinned pos
      RWByteAddressBuffer streamoutBuffer_NOR; // skinned normal
      RWByteAddressBuffer streamoutBuffer_PRE; // previous frame skinned pos

      inline void Skinning(inout float4 pos, inout float4 nor, in float4 inBon, in float4 inWei)
      {
          float4 p = 0, pp = 0;
          float3 n = 0;
          float4x4 m;
          float3x3 m3;
          float weisum = 0;

          // force loop to reduce register pressure
          // though this way we can not interleave TEX - ALU operations
          [loop]
          for (uint i = 0; ((i < 4) && (weisum < 1.0f)); ++i)
          {
              m = boneBuffer[(uint)inBon[i]].pose;
              m3 = (float3x3)m;

              p += mul(float4(pos.xyz, 1), m) * inWei[i];
              n += mul(nor.xyz, m3) * inWei[i];

              weisum += inWei[i];
          }

          bool w = any(inWei);
          pos.xyz = w ? p.xyz : pos.xyz;
          nor.xyz = w ? n : nor.xyz;
      }

      [numthreads(1024, 1, 1)]
      void main(uint3 DTid : SV_DispatchThreadID)
      {
          const uint fetchAddress = DTid.x * 16; // stride is 16 bytes for each vertex buffer now...

          uint4 pos_u = vertexBuffer_POS.Load4(fetchAddress);
          uint4 nor_u = vertexBuffer_NOR.Load4(fetchAddress);
          uint4 wei_u = vertexBuffer_WEI.Load4(fetchAddress);
          uint4 bon_u = vertexBuffer_BON.Load4(fetchAddress);

          float4 pos = asfloat(pos_u);
          float4 nor = asfloat(nor_u);
          float4 wei = asfloat(wei_u);
          float4 bon = asfloat(bon_u);

          Skinning(pos, nor, bon, wei);

          pos_u = asuint(pos);
          nor_u = asuint(nor);

          // copy prev frame current pos to current frame prev pos
          streamoutBuffer_PRE.Store4(fetchAddress, streamoutBuffer_POS.Load4(fetchAddress));
          // write out skinned props:
          streamoutBuffer_POS.Store4(fetchAddress, pos_u);
          streamoutBuffer_NOR.Store4(fetchAddress, nor_u);
      }
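
      One possible way (an assumption about a setup for feature level 11_0, not something from the original post) to get the compute shader output back into the normal pipeline: create the skinned-output buffer with both the UAV and vertex-buffer bind flags, so the compute shader writes it as a raw UAV and the input assembler reads the very same buffer afterwards - or, alternatively, keep a plain vertex buffer and CopyResource into it.

      #include <d3d11.h>

      // Skinned-output buffer usable both as a raw UAV (written by the CS) and as a
      // vertex buffer (read by the IA). Stride matches the 16 bytes per vertex above.
      HRESULT CreateSkinnedVB(ID3D11Device* device, UINT vertexCount,
                              ID3D11Buffer** outBuffer, ID3D11UnorderedAccessView** outUAV)
      {
          D3D11_BUFFER_DESC desc = {};
          desc.ByteWidth = vertexCount * 16;
          desc.Usage = D3D11_USAGE_DEFAULT;
          desc.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_UNORDERED_ACCESS;
          desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;
          HRESULT hr = device->CreateBuffer(&desc, nullptr, outBuffer);
          if (FAILED(hr)) return hr;

          D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
          uavDesc.Format = DXGI_FORMAT_R32_TYPELESS;    // raw view
          uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
          uavDesc.Buffer.NumElements = vertexCount * 4; // four 32-bit values per vertex
          uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_RAW;
          return device->CreateUnorderedAccessView(*outBuffer, &uavDesc, outUAV);
      }

      // Per frame: unbind the buffer from the IA, bind the UAV and dispatch the skinning CS,
      // then unbind the UAV and bind the buffer with IASetVertexBuffers for normal rendering.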
    • By mister345
      Hi, can someone please explain why this is giving an assertion EyePosition!=0 exception?
       
      _lightBufferVS->viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&_lightBufferVS->position), XMLoadFloat3(&_lookAt), XMLoadFloat3(&up));
      It looks like DirectX doesn't want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. (Here is the version of the code that worked - note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime - I debugged it - but it throws no exceptions.)
          m_viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&m_position), XMLoadFloat3(&m_lookAt), XMLoadFloat3(&up));
      Here is the repo for the broken code (see LightClass): https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/LightClass.cpp
      and here is the repo with the alternative version of the code that is working with a value of (0,0,0) for the second parameter.
      https://github.com/mister51213/DX11Port_SoftShadows/blob/master/Engine/lightclass.cpp
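
      A guess, sketched under the assumption that the assert is on the eye-to-focus direction rather than on the eye position itself: a (0,0,0) look-at target is fine as long as the eye is not also at the origin, because the direction the matrix is built from is focus minus eye.

      #include <DirectXMath.h>
      using namespace DirectX;

      void LookAtExample() // hypothetical values, only to show when the assert fires
      {
          XMVECTOR eye    = XMVectorSet(0.f, 2.f, -5.f, 1.f);
          XMVECTOR lookAt = XMVectorZero();                    // focus at the origin is fine here
          XMVECTOR up     = XMVectorSet(0.f, 1.f, 0.f, 0.f);
          XMMATRIX view   = XMMatrixLookAtLH(eye, lookAt, up); // ok: eye != focus

          // XMMatrixLookAtLH(XMVectorZero(), XMVectorZero(), up) would trip the assert,
          // because the eye-to-focus direction becomes a zero vector.
      }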
    • By mister345
      Hi, can somebody please tell me, in clear, simple steps, how to debug and step through an HLSL shader file?
      I already did Debug > Start Graphics Debugging > then captured some frames from Visual Studio and
      double clicked on the frame to open it, but no idea where to go from there.
       
      I've been searching for hours and there's no information on this, not even on the Microsoft Website!
      They say "open the  Graphics Pixel History window" but there is no such window!
      Then they say, in the "Pipeline Stages choose Start Debugging"  but the Start Debugging option is nowhere to be found in the whole interface.
      Also, how do I even open the hlsl file that I want to set a break point in from inside the Graphics Debugger?
       
      All I want to do is set a break point in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated
      and Microsoft's instructions are horrible! Somebody please, please help.
       
       
       

    • By mister345
      I finally ported Rastertek's tutorial #42 on soft shadows and blur shading. This tutorial has a ton of really useful effects, and there's no working version anywhere online.
      Unfortunately, it just draws a black screen, and I'm not sure what's causing it. I'm guessing the camera or ortho matrix transforms are wrong, the light directions, or maybe texture resources not being properly initialized. I didn't change any of the variables though; I only upgraded all the types and functions from DirectX3DVector3 to XMFLOAT3 and used DirectXTK for texture loading. If anyone is willing to take a look at what might be causing the black screen - maybe something pops out to you - let me know, thanks.
      https://github.com/mister51213/DX11Port_SoftShadows
       
      Also, for reference, here's tutorial #40 which has normal shadows but no blur, which I also ported, and it works perfectly.
      https://github.com/mister51213/DX11Port_ShadowMapping
       