DavidGallagher1

DX11 Feature level shaders...


I was recently thinking of adding feature level support to my DX11 engine and was wondering whether feature level 9_3 supports shader model 2_a or 2_b. I can see that it doesn't support 3_0, but I couldn't find any info on which shader model 2_x shaders are supported. Also, do I write the shaders just like shader model 5_0 as far as the data sent to the shaders goes, and just limit them to the limits from my DX9 engine (which supports 2_b and 3_0)? Thanks for any help in this matter; it would be nice to combine both my engines if possible.


Always read the footnotes: http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876%28v=vs.85%29.aspx

 

 

 


Also, do I write the shaders just like shader model 5_0 as far as the data sent to the shaders goes, and just limit them to the limits from my DX9 engine (which supports 2_b and 3_0)?

You don't have to implement any such limiting. The DX11 API enforces the feature levels for you: for example, if you try to set a shader that is not supported by the feature level in use, it simply won't be set. You can use the debug layer to check whether the shader was set successfully, or whether there were compatibility errors. As for the shader input data, you'll get errors if you try to use a shader-input format that doesn't match the input format of the compiled shader. The shader compiler will also give you errors if you use features (like data formats) that aren't compatible with the shader model specified during compilation.
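If it helps, here's a rough, untested sketch of what requesting several feature levels looks like at device creation; the names (g_device and so on) are just for illustration. The runtime gives you the highest level in the array that the hardware supports, and the debug flag (which needs the SDK layers installed) is what makes it report shaders and states rejected by the feature level:

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

ID3D11Device*        g_device  = nullptr;
ID3D11DeviceContext* g_context = nullptr;
D3D_FEATURE_LEVEL    g_level   = D3D_FEATURE_LEVEL_9_1;

HRESULT CreateDeviceForBestFeatureLevel()
{
    // Listed highest-first; the runtime picks the first one the hardware supports.
    const D3D_FEATURE_LEVEL requested[] =
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3
    };

    return D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
        D3D11_CREATE_DEVICE_DEBUG,   // debug layer: reports shaders/states rejected by the feature level
        requested, UINT(sizeof(requested) / sizeof(requested[0])),
        D3D11_SDK_VERSION,
        &g_device, &g_level, &g_context);
    // g_level ends up holding the feature level you actually got (e.g. D3D_FEATURE_LEVEL_9_3).
}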

 

If you want to support multiple feature levels and take advantage of the shader features of the newer shader models on the newer feature levels, you'll have to write (or #ifdef extensively) and compile the shaders separately for each shader model. If, on the other hand, all your shaders only use the shader models supported by the lowest feature level, you can still use them with the newer feature levels, but then there's little point in requesting the higher feature levels: they are only useful if you actually use the new shader features.

 

So no, you can't just write your shaders with SM5.0 features and compile them for lower SM targets (unless you can #ifdef-out the use of the SM5.0 features).
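As a sketch of the per-shader-model compilation (the file name, entry point and macro here are made up for the example), you could compile the same source twice with D3DCompileFromFile and let a define drive the #ifdefs:

#include <windows.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Hypothetical helper: compiles "shader.hlsl" for the given target,
// defining USE_SM5_FEATURES so the SM5-only code can be #ifdef'd in the HLSL.
ID3DBlob* CompilePixelShader(const char* target, bool sm5)
{
    const D3D_SHADER_MACRO defines[] =
    {
        { "USE_SM5_FEATURES", sm5 ? "1" : "0" },
        { nullptr, nullptr }
    };

    ID3DBlob* code = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompileFromFile(L"shader.hlsl", defines, nullptr,
                                    "PSMain", target, 0, 0, &code, &errors);
    if (FAILED(hr) && errors)
        OutputDebugStringA((const char*)errors->GetBufferPointer());
    return code;
}

Calling CompilePixelShader("ps_4_0_level_9_3", false) and CompilePixelShader("ps_5_0", true) then gives you two blobs from the same file; in the HLSL, the SM5-only bits sit inside #if USE_SM5_FEATURES ... #endif.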

Edited by tonemgub


I probably didn't explain myself that well. I didn't mean using my SM 5_0 shaders directly (I have already compiled them all); I just wanted to add a new rendering pipeline that would use the DX9 features instead.

What I meant was: since I have a set of shader model 2_x shaders, could I just add them to the DX11 engine and run those instead? My SM 5 shaders use MRTs, but of course I can't do that in DX9, so I'd like to use the shaders from my older engine in my new DX11 engine as a separate pipeline when DX11 isn't supported. I was wondering whether they still need to be in the .fx (Effects) format, or in the newer HLSL style where vertex and pixel shaders are separate and use cbuffers, etc. Compiling a separate shader framework is fine with me; I just wondered whether there was any voodoo I'd have to perform in porting them, i.e. do I write the shaders like the .fx files, or the same as HLSL 5_0 with separate VS and PS files, if that makes sense. Thanks for the link. I'm assuming 2_x is the equivalent of 2_a/2_b, based on DX9_b and DX9_c.


The Effects framework is no longer part of the DirectX SDK (since DX10, even).

 

You can still use Effects in DirectX 11, as Microsoft provides it as source code, but they have also said it will be removed completely in a future DirectX version, along with the rest of the D3DX functions (they will probably not be supported starting with DX12).

 

The HLSL format itself hasn't changed, though; you just can't use Effects techniques and shader passes in your shaders anymore. You don't have to remove the techniques, because the compiler will simply ignore them, but you can't use them without the Effects framework.

You now have to compile and load each shader separately, and set constant buffers and sampler states yourself (if you used to set them from HLSL before); this is the same as not using the Effects framework in DX9.
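A rough sketch of what that boils down to, assuming you already have a device/context, the compiled vsBlob/psBlob, and a sampler state of your own (all the names here are invented for the example):

#include <d3d11.h>
#include <DirectXMath.h>

// Must match the cbuffer layout in the HLSL (16-byte aligned).
struct PerObject { DirectX::XMFLOAT4X4 worldViewProj; };

void SetupWithoutEffects(ID3D11Device* device, ID3D11DeviceContext* context,
                         ID3DBlob* vsBlob, ID3DBlob* psBlob,
                         ID3D11SamplerState* samplerState, const PerObject& data)
{
    // Create the shader objects from the blobs you compiled yourself
    ID3D11VertexShader* vs = nullptr;
    ID3D11PixelShader*  ps = nullptr;
    device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &vs);
    device->CreatePixelShader (psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &ps);

    // A constant buffer replacing what the .fx technique used to manage for you
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = sizeof(PerObject);
    bd.Usage     = D3D11_USAGE_DEFAULT;
    bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    ID3D11Buffer* cb = nullptr;
    device->CreateBuffer(&bd, nullptr, &cb);

    // Per draw, everything is explicit now:
    context->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
    context->VSSetShader(vs, nullptr, 0);
    context->PSSetShader(ps, nullptr, 0);
    context->VSSetConstantBuffers(0, 1, &cb);     // register(b0) in the shader
    context->PSSetSamplers(0, 1, &samplerState);  // sampler states are plain ID3D11SamplerState objects now
}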

 

And sadly, you also cannot use preshaders anymore (see http://msdn.microsoft.com/en-us/library/windows/desktop/bb206299%28v=vs.85%29.aspx#PreShaders_Improve_Performance), so you have to be a lot more careful about what calculations you put in your shaders: if they are better done on the CPU, you now have to move them out of your shaders manually if you want to improve performance.
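For example, where a preshader would have folded World*View*Projection into a single constant for you, you now do that multiplication once per draw on the CPU and upload only the result, instead of doing it per-vertex in the shader. A sketch using DirectXMath (cb is the constant buffer from the previous snippet):

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

void UpdateWorldViewProj(ID3D11DeviceContext* context, ID3D11Buffer* cb,
                         FXMMATRIX world, CXMMATRIX view, CXMMATRIX proj)
{
    // Combine the matrices once on the CPU...
    XMMATRIX wvp = XMMatrixMultiply(XMMatrixMultiply(world, view), proj);

    // ...and upload only the result (transposed for HLSL's default column-major packing).
    XMFLOAT4X4 data;
    XMStoreFloat4x4(&data, XMMatrixTranspose(wvp));
    context->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
}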

 

But I think this is exactly what you wanted (no Effects), right?

 

2_x refers to all of the shader model 2 shaders (2_a, 2_b, etc.) except 2_0.

 

I also found another footnote, which may be of interest to you:

(from http://msdn.microsoft.com/en-us/library/windows/desktop/jj215820%28v=vs.85%29.aspx#direct3d_9.1__9.2__and_9.3_feature_levels)

 

Feature level 9.3 effectively requires hardware that complies with the requirements for legacy Direct3D 9 shader model 3.0, but this feature level does not make use of vs_3_0 or ps_3_0 targets
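In other words, a 9_3 path compiles against the *_4_0_level_9_3 profiles rather than vs_3_0/ps_3_0. A small sketch (the function is just my own illustration) of picking the pixel-shader compile target from the feature level you got back at device creation:

// Maps the feature level obtained from D3D11CreateDevice to a pixel-shader compile target.
const char* PixelShaderTarget(D3D_FEATURE_LEVEL fl)
{
    switch (fl)
    {
    case D3D_FEATURE_LEVEL_11_0: return "ps_5_0";
    case D3D_FEATURE_LEVEL_10_1: return "ps_4_1";
    case D3D_FEATURE_LEVEL_10_0: return "ps_4_0";
    case D3D_FEATURE_LEVEL_9_3:  return "ps_4_0_level_9_3";
    default:                     return "ps_4_0_level_9_1"; // feature levels 9_1 and 9_2
    }
}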

Edited by tonemgub


