DX11 .fx effect - disable shaders [SlimDX, C#]

Hi, I've now hit another problem in my engine. I have a number of effect files, and in them I compile the shaders like this:
SetVertexShader(CompileShader(vs_5_0, VS()));
SetHullShader(CompileShader(hs_5_0, HS()));
SetDomainShader(CompileShader(ds_5_0, DS()));
SetGeometryShader(CompileShader(gs_5_0, GS()));
SetPixelShader(CompileShader(ps_5_0, PS()));
In some effects I do not use all of the shaders. For the z-pass, for example, I do not need the pixel shader, so I disable it by simply leaving it out:
SetVertexShader(CompileShader(vs_5_0, VS()));
SetHullShader(CompileShader(hs_5_0, HS()));
SetDomainShader(CompileShader(ds_5_0, DS()));
SetGeometryShader(CompileShader(gs_5_0, GS()));
Now I have a problem while rendering my GUI. The GUI does not need the hull/domain/geometry shaders, so I want to disable them too. If I do not disable them and just say:
SetVertexShader(CompileShader(vs_5_0, VS()));
then somehow the old hull/domain/geometry shaders are still active. I tried this:
SetVertexShader(CompileShader(vs_5_0, VS()));
SetPixelShader(CompileShader(ps_5_0, PS()));
But then I get an error saying:
System.Exception: Error occured while creating shader: Managed shader: Resources/Shader/gui.fx ---> SlimDX.Direct3D11.Direct3D11Exception: E_FAIL: An undetermined error occurred (-2147467259)
   at SlimDX.Result.Throw[T](Object dataKey, Object dataValue)
   at SlimDX.Result.Record[T](Int32 hr, Boolean failed, Object dataKey, Object dataValue)
   at SlimDX.Direct3D11.Effect..ctor(Device device, ShaderBytecode data, EffectFlags effectFlags)
Does anyone know how I can disable shaders from within the effect files? Or do I have to disable them manually in my C# code, for example by setting the DeviceContext's vertex shader to NULL? Is there another way to quickly(!) reset the shaders on a device? Or is this maybe a bug? ^^
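For reference, here is roughly what that manual fallback would look like in C# with SlimDX (just a minimal sketch on my part, assuming the immediate DeviceContext and the per-stage wrappers SlimDX exposes on it):

using SlimDX.Direct3D11;

static class ShaderStageReset
{
    // Passing null unbinds the shader from its stage, so tessellation or
    // geometry state left over from an earlier pass cannot leak into the
    // GUI draw that follows.
    public static void PrepareGuiPass(DeviceContext context)
    {
        context.HullShader.Set(null);
        context.DomainShader.Set(null);
        context.GeometryShader.Set(null);
    }
}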

OK, I figured out that the problem must be a SlimDX or DX bug. The fxc.exe compiler from the DX SDK compiles the shaders successfully. I will post this issue on the SlimDX project page. Maybe they can help. :)

Have you tried using the overloaded Effect ctor with the out errors parameter? It might give you some more information on why SlimDX is having problems with the file (although it uses fxc.exe, so it shouldn't).
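Something along these lines (a rough sketch; the exact CompileFromFile overload list varies between SlimDX releases, so treat the parameters here as an assumption):

using SlimDX.D3DCompiler;
using SlimDX.Direct3D11;

// Compile the effect file yourself and surface the compiler diagnostics
// instead of letting the Effect constructor swallow them.
string errors;
ShaderBytecode bytecode = ShaderBytecode.CompileFromFile(
    "Resources/Shader/gui.fx",   // path from the exception above
    null,                        // no entry point for fx_* profiles
    "fx_5_0",
    ShaderFlags.None, EffectFlags.None,
    null, null,                  // no macros, no include handler
    out errors);
if (!string.IsNullOrEmpty(errors))
    System.Console.WriteLine(errors); // warnings land here even on success
Effect effect = new Effect(device, bytecode, EffectFlags.None);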

I'm assuming you're using the latest SlimDX release too? (Feb 2010)

Yes, I used the February 2010 release on my first try. I'm currently using the SVN version, but it still has the same problem.

I tried the constructor with the out parameter, but I can't get the error string, because the call itself throws the exception mentioned above.

I've posted the issue here:


  • Similar Content

    • By evelyn4you
      I have read a lot about binding a constant buffer to a shader, but something is still unclear to me.
      For example, when performing vertexshader.setConstantbuffer(buffer, slot), is the buffer bound
      a. to the vertex shader stage, or
      b. to the vertex shader that is currently set as the active vertex shader?
      Is it possible to bind a constant buffer to a particular vertex shader, e.g. VS_A, and keep this binding even after the active vertex shader has changed?
      I mean, I want to bind constantbuffer_A to VS_A and Constantbuffer_B to VS_B, and then only use updateSubresource, without issuing a setConstantBuffer call every time.

      Look at this example:
      SetVertexShader(VS_A)
      vertexshader.setConstantbuffer(buffer_A, slot_A)
      perform draw call    (buffer_A is used)

      SetVertexShader(VS_B)
      vertexshader.setConstantbuffer(buffer_B, slot_A)
      perform draw call    (buffer_B is used)

      SetVertexShader(VS_A)
      perform draw call    (now which buffer is used???)
      I ask this question because I have written a custom render engine and want to keep the number of updateSubresource and setConstantbuffer calls to a minimum.
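      In C#/SlimDX-style code, the sequence above behaves like this (a sketch; vsA, vsB, bufferA, bufferB and vertexCount are the hypothetical objects from the post). In D3D11 the constant-buffer binding is per-stage state on the device context, not per-shader state, so it survives shader changes:

      int slotA = 0; // hypothetical slot
      context.VertexShader.Set(vsA);
      context.VertexShader.SetConstantBuffer(bufferA, slotA);
      context.Draw(vertexCount, 0);  // uses buffer_A
      context.VertexShader.Set(vsB);
      context.VertexShader.SetConstantBuffer(bufferB, slotA);
      context.Draw(vertexCount, 0);  // uses buffer_B
      context.VertexShader.Set(vsA);
      context.Draw(vertexCount, 0);  // still uses buffer_B: the binding
                                     // belongs to the stage, not to VS_A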
    • By noodleBowl
      I have a quick question about buffers in DirectX 11. If I bind a buffer using a command like:
      IASetVertexBuffers
      IASetIndexBuffer
      VSSetConstantBuffers
      PSSetConstantBuffers
      and then later on update that bound buffer's data using Map/Unmap or one of the other update commands,
      do I need to rebind the buffer in order for my update to take effect? If I don't rebind, is that really bad, as in a performance hit? My thinking is that if the buffer is already bound, why would I need to rebind it? I'm using the same buffer; it just contains different data.
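      A minimal SlimDX sketch of the case in question (vertexBuffer and newVertices are hypothetical, the buffer is dynamic and already bound, and the exact MapSubresource overload varies by release): updating the contents does not require rebinding, because the binding references the buffer object itself, not a snapshot of its data.

      DataBox box = context.MapSubresource(vertexBuffer, MapMode.WriteDiscard, MapFlags.None);
      box.Data.WriteRange(newVertices);          // write this frame's data
      context.UnmapSubresource(vertexBuffer, 0);
      context.Draw(vertexCount, 0);              // sees the new data; no
                                                 // IASetVertexBuffers needed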
    • By Rockmover
      I am really stuck with something that should be very simple in DirectX 11. 
      1. I can draw lines using PC (position, color) vertices and a simple shader just fine.
      2. I can draw 3D triangles using PCN (position, color, normal) vertices just fine (even with transparency and Blinn-Phong specular shaders).
      However, if I'm using my 3D shader and I want to draw my PC lines in the same scene, how can I do that?
      If I change my lines to PCN and pass them to the 3D shader with my triangles, the lighting screws them all up. I only want the lighting for the 3D triangles, with no specular/lighting for the lines (just PC).
      I am sure this is because, once the lines become PCN, there is no really correct "normal" for them.
      I assume I somehow need to draw the 3D triangles with one shader, then "switch" to another shader and draw the lines? But I have no clue how to use two different shaders in the same scene. And are the lines then just drawn on top of the triangles, or vice versa (maybe draw-order dependent)?
      I must be missing something really basic, so if anyone can just point me in the right direction (or link to an example showing the use of multiple shaders), that would be REALLY appreciated.
      I'm also more than happy to post my simple test code if that helps as well!
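      A minimal sketch of the shader switch being asked about (all objects here are hypothetical, in C#/SlimDX terms): pipeline state sticks until it is changed, so you simply rebind shaders, input layout and topology between the two draw calls. Whether the lines end up on top is then a matter of depth testing and draw order.

      // lit triangles: PCN layout + Blinn-Phong shaders
      context.InputAssembler.InputLayout = pcnLayout;
      context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
      context.VertexShader.Set(litVs);
      context.PixelShader.Set(blinnPhongPs);
      context.Draw(triangleVertexCount, 0);

      // plain lines: PC layout + pass-through color shaders, no lighting
      context.InputAssembler.InputLayout = pcLayout;
      context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
      context.VertexShader.Set(lineVs);
      context.PixelShader.Set(colorOnlyPs);
      context.Draw(lineVertexCount, 0);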
    • By Reitano
      I am writing a linear allocator of per-frame constants using the DirectX 11.1 API. My plan is to replace the traditional constant allocation strategy, where most of the work is done by the driver behind my back, with a manual one inspired by the DirectX 12 and Vulkan APIs.
      In brief, the allocator maintains a list of 64K pages, each page owns a constant buffer managed as a ring buffer. Each page has a history of the N previous frames. At the beginning of a new frame, the allocator retires the frames that have been processed by the GPU and frees up the corresponding space in each page. I use DirectX 11 queries for detecting when a frame is complete and the ID3D11DeviceContext1::VS/PSSetConstantBuffers1 methods for binding constant buffers with an offset.
      The new allocator appears to be working, but I am not 100% confident it is actually correct. In particular:
      1) It relies on queries, which I am not too familiar with. Are they 100% reliable?
      2) It maps/unmaps the constant buffer of each page at the beginning of a new frame and then writes the mapped memory as the frame is built. In pseudocode:
      data = device.Map(page.buffer)
      Alloc(size, initData)
          memcpy(data + page.start, initData, size)
      Alloc(size, initData)
          memcpy(data + page.start, initData, size)
      (Note: deferring the Unmap to the end of the frame would leave the constant buffers mapped while bound, which triggers an error in the debug layer.)
      Is this valid?
      3) I don't fully understand how many frames I should keep in the history. My intuition says it should equal the maximum latency reported by IDXGIDevice1::GetMaximumFrameLatency, which is 3 on my machine. That value works fine in a unit test, but in a more complex demo I need to manually raise it to 5, otherwise the allocator starts overwriting previous frames that have not completed yet. Shouldn't the swap chain's Present method block the CPU in this case?
      4) Should I expect this approach to be more efficient than the driver-managed one? I don't have meaningful profiling data yet.
      Is anybody familiar with the approach described above who can answer these questions and discuss the pros and cons of this technique based on their experience?
      For reference, I've uploaded the (WIP) allocator code at . Feel free to adapt it in your engine, and please let me know if you spot any mistakes.
      Stefano Lanza
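      For question 1), a sketch of the retire step with D3D11 event queries in C#/SlimDX terms (FrameRecord, pending and allocator.FreeFrame are hypothetical, and the GetData wrapper differs between releases): an event query issued with End() only reports data once the GPU has passed that point, which is exactly what the retire loop needs.

      // each FrameRecord owns an event query, created once, e.g.:
      //   new Query(device, new QueryDescription { Type = QueryType.Event })

      // end of frame N: drop a fence behind all of frame N's GPU work
      context.End(frame.FenceQuery);
      pending.Enqueue(frame);

      // start of a new frame: retire every frame the GPU has finished
      while (pending.Count > 0 &&
             context.GetData<int>(pending.Peek().FenceQuery) != 0)
      {
          FrameRecord done = pending.Dequeue();
          allocator.FreeFrame(done.FrameIndex); // reclaim ring-buffer space
      }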
    • By Matt Barr
      Hey all. I've been working with compute shaders lately and was hoping to build out some libraries of reusable code. As a prerequisite for my current project, I needed to sort a big array of data in my compute shader, so I was going to implement quicksort as a library function. My implementation was going to use an inout array to apply the changes to the referenced array.

      I spent half of yesterday debugging in Visual Studio before I realized that my solution, while it worked INSIDE the function, reverted to the original state after returning from the function.

      My hack fix was just to inline the code, but that's not a great solution going forward. Any ideas? I've considered just returning an array of ints that represents the sorted indices.