MoeTM

DX11 different results from VS debugger with debug device and normal device


Hi,

 

I have a serious problem with my engine, but I don't know where to locate it. When I create a DX11 device context normally, my font rendering is messed up and looks ugly (see attached pictures). But when I run my engine under the VS graphics debugger, the engine is abnormally slow (in the past it was not this slow while debugging; now it's about 100 times slower) yet shows the correct results (see attached pictures). I'm a bit lost about how to find the cause of this behaviour, since the debug results look correct. Does anyone have a hint where to look?

 

And I should mention that the graphics debugger does not output any warnings or errors; everything runs just fine, like the other effects in the engine.

 

Thanks, Moritz

 

Attachments: worse.jpg, good.jpg


Although it's very rare, VS is known to occasionally not behave the same in debug as in release mode. I first got bit by this some time back in the early to mid '90s. As a result, I now develop exclusively in release mode. I hear of someone with this issue about once every five years or so, so like I said, it's pretty rare. If you can find no other explanation, then this might be the cause. But odds are it's your code, not debug mode messing up again.


 

Yes, I think the debugger adds a few more safety layers than the release/normal mode. The thing is, I changed some things around the font rendering, e.g. texture loading, graphics context creation, etc., but the rendering itself is unchanged and worked some time ago in release mode as well. So I think I might have corrupted some memory, or I have some state corruption, but I don't know how to find out where. :( I have already simplified my code a lot, but the problem remains. Are there debug features in DX which are normally not activated, like a safety mode or something?
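For the record, the D3D11 debug layer can be enabled on a normal run, outside the graphics debugger, and it will report API misuse and state problems to the debug output. A minimal sketch of a typical device-creation call with the flag (variable names are placeholders):

// Minimal sketch: enabling the D3D11 debug layer on a normal run.
// Requires the SDK layers to be installed; device creation fails
// with this flag otherwise.
#include <d3d11.h>

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; // API validation + live-object reporting
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel;
HRESULT hr = D3D11CreateDevice(
    nullptr,                  // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                  // no software rasterizer module
    flags,
    nullptr, 0,               // default feature levels
    D3D11_SDK_VERSION,
    &device, &featureLevel, &context);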

 

Also, you can see strong artifacts which appear only in the y direction; this looks really strange to me.


That looks to me like anti-aliasing is disabled. That would make sense, since the graphics debugger probably disables AA in order to be able to read contents more easily.

 

Actually, I just realized your problem is when running without the debugger, so maybe it's not that...

 

Have you tried using a different GPU? Maybe WARP for example?
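A minimal sketch of what that looks like, assuming the usual device-creation variables (device, context, featureLevel are placeholders):

// Minimal sketch: creating a WARP (software rasterizer) device to rule
// out driver-specific behaviour.
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel;
HRESULT hr = D3D11CreateDevice(
    nullptr,                 // no adapter: WARP supplies its own
    D3D_DRIVER_TYPE_WARP,    // Microsoft's software rasterizer
    nullptr, 0,
    nullptr, 0,              // default feature levels
    D3D11_SDK_VERSION,
    &device, &featureLevel, &context);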


Okay, I tried it on my onboard Intel chip and it works :D. Now the question is what causes it not to work on the NVIDIA card :D. Also, my shaders are all compiled with these flags: D3DCOMPILE_IEEE_STRICTNESS | D3DCOMPILE_ENABLE_STRICTNESS | D3DCOMPILE_WARNINGS_ARE_ERRORS | D3DCOMPILE_DEBUG

 

I did some tests, and I think this behaviour is somehow caused by the ddx, ddy, and fwidth functions. Could that be possible?

 

Okay, I solved the problem myself. Somehow ddx and ddy internally used (I guess) ddx_coarse and ddy_coarse; when I use ddx_fine and ddy_fine instead, it works fine :D. Can I somehow force the shader compiler to pick ddx_fine and ddy_fine?
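As far as I can tell there is no compiler flag that forces the fine variants; calling the shader model 5 intrinsics explicitly is the usual way. A minimal HLSL sketch of spelling out fwidth (which is abs(ddx(x)) + abs(ddy(x)) with coarse derivatives) using the fine variants; the texture, sampler, and threshold values are placeholders:

// Minimal sketch (HLSL, ps_5_0): explicit fine derivatives for a
// distance-field style font shader. Names are placeholders.
Texture2D    FontTexture : register(t0);
SamplerState FontSampler : register(s0);

float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float dist = FontTexture.Sample(FontSampler, uv).r;
    // fwidth spelled out with the _fine variants:
    float w = abs(ddx_fine(dist)) + abs(ddy_fine(dist));
    float alpha = smoothstep(0.5f - w, 0.5f + w, dist);
    return float4(1.0f, 1.0f, 1.0f, alpha);
}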


  • Similar Content

    • By evelyn4you
      hi,
      I have read a lot about binding a constant buffer to a shader, but something is still unclear to me.
      E.g. when performing:   vertexshader.setConstantbuffer ( buffer,  slot )
      is the buffer bound
      a.  to the vertex shader stage
      or
      b. to the vertex shader that is currently set as the active vertex shader?
      Is it possible to bind a constant buffer to a vertex shader, e.g. VS_A, and keep this binding even after the active vertex shader has changed?
      I mean, I want to bind constantbuffer_A to VS_A and constantbuffer_B to VS_B, and only use updateSubresource without calling setConstantBuffer every time.

      Look at this example:
      SetVertexShader ( VS_A )
      updateSubresource(buffer_A)
      vertexshader.setConstantbuffer ( buffer_A,  slot_A )
      perform drawcall       ( buffer_A is used )

      SetVertexShader ( VS_B )
      updateSubresource(buffer_B)
      vertexshader.setConstantbuffer ( buffer_B,  slot_A )
      perform drawcall   ( buffer_B is used )
      SetVertexShader ( VS_A )
      perform drawcall   (now which buffer is used ??? )
       
      I ask this question because I have made a custom render engine and want to keep the updateSubresource and setConstantbuffer calls to a minimum.
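      For what it's worth, at the raw D3D11 level constant buffer bindings are state of the pipeline stage on the device context, not of the shader object, so a binding survives shader changes; in the example above, buffer_B would still be bound at slot_A for the last draw call. A minimal sketch (context, bufferA, vsA, vsB, vertexCount are placeholders):

      // Minimal sketch: constant buffer bindings are per-stage context state.
      // The binding below survives both VSSetShader calls.
      context->VSSetConstantBuffers(0, 1, &bufferA); // bind once to the VS stage, slot 0

      context->VSSetShader(vsA, nullptr, 0);
      context->Draw(vertexCount, 0);                 // vsA reads bufferA from slot 0

      context->VSSetShader(vsB, nullptr, 0);
      context->Draw(vertexCount, 0);                 // vsB also reads bufferA from slot 0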
       
       
       
       
       
    • By noodleBowl
      I got a quick question about buffers in DirectX 11. If I bind a buffer using a command like:
      IASetVertexBuffers, IASetIndexBuffer, VSSetConstantBuffers, PSSetConstantBuffers
      and then later on I update that bound buffer's data using commands like Map/Unmap or any of the other update commands,
      do I need to rebind the buffer in order for my update to take effect? If I don't rebind, is that really bad, as in a performance hit? My thought process behind this is that if the buffer is already bound, why do I need to rebind it? I'm using the same buffer, it is just different data.
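      For reference, a minimal sketch of that update path, assuming the buffer is dynamic (cb, context, newData, vertexCount are placeholders); binding and contents are independent, so no rebind is needed:

      // Minimal sketch: updating an already-bound dynamic buffer without rebinding.
      // Assumes cb was created with D3D11_USAGE_DYNAMIC / D3D11_CPU_ACCESS_WRITE
      // and is already bound via VSSetConstantBuffers.
      D3D11_MAPPED_SUBRESOURCE mapped;
      if (SUCCEEDED(context->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
      {
          memcpy(mapped.pData, &newData, sizeof(newData));
          context->Unmap(cb, 0);
      }
      context->Draw(vertexCount, 0); // the draw sees the new contents; no rebind required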
       
    • By Rockmover
      I am really stuck with something that should be very simple in DirectX 11. 
      1. I can draw lines using PC (position, color) vertices and a simple shader just fine.
      2. I can draw 3D triangles using PCN (position, color, normal) vertices just fine (even transparency and SpecularBlinnPhong shaders).
       
      However, if I'm using my 3D shader and I want to draw my PC lines in the same scene, how can I do that?
       
      If I change my lines to PCN and pass them to the 3D shader with my triangles, then the lighting screws them all up. I only want the lighting for the 3D triangles, but no SpecularBlinnPhong/lighting for the lines (just PC).
      I am sure this is because, if I change the lines to PCN, there is not really a correct "normal" for the lines.
      I assume I somehow need to draw the 3D triangles using one shader, and then "switch" to another shader and draw the lines? But I have no clue how to use two different shaders in the same scene (see the sketch below). And then are the lines just drawn on top of the triangles, or vice versa (maybe draw-order dependent)?
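      A minimal sketch of per-draw shader switching. All names (vsLit, psLit, vsColor, psColor, the input layouts, and the vertex counts) are placeholders, and vertex/index buffer binding is omitted. With a depth buffer bound, the draw order of the two opaque passes does not matter:

      // Minimal sketch: two shader sets in one scene. Bind the lit shaders,
      // draw the triangles, then bind the unlit shaders and draw the lines.
      context->IASetInputLayout(layoutPCN);
      context->VSSetShader(vsLit, nullptr, 0);
      context->PSSetShader(psLit, nullptr, 0);
      context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
      context->Draw(triangleVertexCount, 0);

      context->IASetInputLayout(layoutPC);
      context->VSSetShader(vsColor, nullptr, 0);
      context->PSSetShader(psColor, nullptr, 0);
      context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
      context->Draw(lineVertexCount, 0);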
      I must be missing something really basic, so if anyone can just point me in the right direction (or link to an example showing the implementation of multiple shaders) that would be REALLY appreciated.
       
      I'm also more than happy to post my simple test code if that helps as well!
       
      THANKS SO MUCH IN ADVANCE!!!
    • By Reitano
      Hi,
      I am writing a linear allocator of per-frame constants using the DirectX 11.1 API. My plan is to replace the traditional constant allocation strategy, where most of the work is done by the driver behind my back, with a manual one inspired by the DirectX 12 and Vulkan APIs.
      In brief, the allocator maintains a list of 64K pages, each page owns a constant buffer managed as a ring buffer. Each page has a history of the N previous frames. At the beginning of a new frame, the allocator retires the frames that have been processed by the GPU and frees up the corresponding space in each page. I use DirectX 11 queries for detecting when a frame is complete and the ID3D11DeviceContext1::VS/PSSetConstantBuffers1 methods for binding constant buffers with an offset (see the sketch after the questions below).
      The new allocator appears to be working but I am not 100% confident it is actually correct. In particular:
      1) It relies on queries, which I am not too familiar with. Are they 100% reliable?
      2) it maps/unmaps the constant buffer of each page at the beginning of a new frame and then writes the mapped memory as the frame is built. In pseudo code:
      BeginFrame:
          page.data = device.Map(page.buffer)
          device.Unmap(page.buffer)
      RenderFrame
          Alloc(size, initData)
              ...
              memcpy(page.data + page.start, initData, size)
          Alloc(size, initData)
              ...
              memcpy(page.data + page.start, initData, size)
      (Note: calling Unmap at the end of a frame prevents binding the mapped constant buffers and triggers an error in the debug layer)
      Is this valid?
      3) I don't fully understand how many frames I should keep in the history. My intuition says it should be equal to the maximum latency reported by IDXGIDevice1::GetMaximumFrameLatency, which is 3 on my machine. But, while this value works fine in a unit test, on a more complex demo I need to manually set it to 5, otherwise the allocator starts overwriting previous frames that have not completed yet. Shouldn't the swap chain Present method block the CPU in this case?
      4) Should I expect this approach to be more efficient than the one managed by the driver? I don't have meaningful profile data yet.
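      A minimal sketch of the offset binding mentioned above, with placeholder names. Offsets and counts for these methods are expressed in shader constants of 16 bytes each, must be multiples of 16 (i.e. 256-byte aligned), and a range must span at least 16 constants:

      // Minimal sketch: binding a 256-byte-aligned sub-range of a page's
      // constant buffer with the D3D11.1 API. context1 is an
      // ID3D11DeviceContext1*; pageBuffer, allocOffsetInBytes, and
      // allocSizeInBytes are placeholders.
      UINT firstConstant = allocOffsetInBytes / 16;               // multiple of 16
      UINT numConstants  = 16 * ((allocSizeInBytes + 255) / 256); // at least 16
      context1->VSSetConstantBuffers1(0, 1, &pageBuffer, &firstConstant, &numConstants);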
      Is anybody familiar with the approach described above who can answer my questions and discuss the pros and cons of this technique based on their experience?
      For reference, I've uploaded the (WIP) allocator code at https://paste.ofcode.org/Bq98ujP6zaAuKyjv4X7HSv. Feel free to adapt it to your engine, and please let me know if you spot any mistakes.
      Thanks
      Stefano Lanza
       
    • By Matt Barr
      Hey all. I've been working with compute shaders lately, and was hoping to build out some libraries to reuse code. As a prerequisite for my current project, I needed to sort a big array of data in my compute shader, so I was going to implement quicksort as a library function. My implementation was going to use an inout array to apply the changes to the referenced array.

      I spent half the day yesterday debugging in Visual Studio before I realized that the sorted result, while it was correct INSIDE the function, reverted to the original state after returning from the function.

      My hack fix was just to inline the code, but this is not a great solution for the future. Any ideas? I've considered just returning an array of ints that represents the sorted indices.
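      For what it's worth, HLSL passes function parameters by copy-in/copy-out rather than by reference, and that path can be fragile for large array parameters, which may be what you are hitting. One workaround sketch that sidesteps parameter copying entirely (names and sizes are placeholders): keep the data in a groupshared array and let the helper index into it directly.

      // Minimal sketch (HLSL): operate on a groupshared array directly
      // instead of passing it inout. N and gData are placeholders.
      #define N 256
      groupshared float gData[N];

      void SwapInPlace(uint i, uint j)
      {
          float t = gData[i];   // writes go to the shared array itself,
          gData[i] = gData[j];  // so they persist after the call returns
          gData[j] = t;
      }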