About piluve

  1. Hi, I'll double-check the barriers, thanks for the tip! Hi, that may be useful, I'll bookmark it. Thanks!
  2. Hello! Jumping from one bug to the next. I wonder, is sharing the same descriptor table between graphics and compute a bad idea, or should it be avoided? In my rendering code I'm only using one (graphics) command list; depending on whether the current call is a Draw* or a Dispatch, I set the descriptor table with SetComputeRootDescriptorTable or with SetGraphicsRootDescriptorTable. I'm asking because I ran into a bug where, if I perform a draw call in between some compute work, the following compute work is all messed up and doesn't run properly. This problem may not be related to the main question at all (I think there is something going on with my depth buffer). Thanks!
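    The pattern described above could be sketched like this (a minimal, hypothetical fragment; `cmdList`, the root signatures and handles are assumed to exist already). Note that D3D12 tracks graphics and compute root bindings separately, so a Draw between dispatches should not clobber the compute table by itself:

    ```cpp
    // Bind the shader-visible heap once for the command list.
    ID3D12DescriptorHeap* heaps[] = { srvUavHeap };
    cmdList->SetDescriptorHeaps(1, heaps);

    // Graphics path: graphics root signature + graphics table.
    cmdList->SetGraphicsRootSignature(gfxRootSig);
    cmdList->SetGraphicsRootDescriptorTable(0, tableGpuHandle);
    cmdList->DrawInstanced(3, 1, 0, 0);

    // Compute path: compute bindings live in their own state; the same
    // GPU descriptor handle can be bound through the compute slot too.
    cmdList->SetComputeRootSignature(csRootSig);
    cmdList->SetComputeRootDescriptorTable(0, tableGpuHandle);
    cmdList->Dispatch(groupsX, groupsY, 1);
    ```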
  3. I just wanted to give an update on this problem. I finally solved the issue. First, as I said in my previous comment, I removed all the remaining DX11On12 code, which was causing PIX and Nsight to crash. After that I did some debugging and realised that our rendering API can expect a NULL resource to be set (and I was ignoring that), which caused many problems with the bindings etc. Right now, instead of using null descriptors, I'm setting a dummy texture of size 1 (which seems to work fine). I also found another bug that was affecting the whole thing: during root signature creation the resources are added to the signature in order, but during binding I was sometimes setting them in the wrong order. And that was it for this bug. Thanks!
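    As an aside to the dummy-texture workaround above: D3D12 also supports "null descriptors", created by passing a null resource to CreateShaderResourceView together with a fully filled-out view description (reads from such a view return zero). A hypothetical sketch, with `device` and `cpuHandle` assumed:

    ```cpp
    // Null SRV descriptor: null resource, but the desc must be complete.
    D3D12_SHADER_RESOURCE_VIEW_DESC nullDesc = {};
    nullDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    nullDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
    nullDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    nullDesc.Texture2D.MipLevels = 1;
    device->CreateShaderResourceView(nullptr, &nullDesc, cpuHandle);
    ```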
  4. Yeah, that is strange; the spec says: "OutputWindow — Type: HWND — An HWND handle to the output window. This member must not be NULL." Maybe that swap chain is invalid, but then how are you able to get the surface...
  5. It's strange that there is only one window. If you check this, it looks like the local camera is in a second window...
  6. You could also try using ID3D11DeviceContext::OMGetRenderTargets() and then getting the resource from the returned view.
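    A minimal sketch of that suggestion (D3D11, `context` assumed to be the immediate context; remember the Get call AddRef()s what it returns):

    ```cpp
    // Grab the currently bound render target and walk back to its resource.
    ID3D11RenderTargetView* rtv = nullptr;
    context->OMGetRenderTargets(1, &rtv, nullptr);
    if (rtv)
    {
        ID3D11Resource* resource = nullptr;
        rtv->GetResource(&resource);
        // ... copy or map 'resource' as needed ...
        resource->Release();
        rtv->Release(); // OMGetRenderTargets AddRef'd it
    }
    ```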
  7. Hi! Doesn't Skype have a floating window to show the remote camera? Maybe they are rendering to two different windows and you would have to do some compositing.
  8. I did some cleanup of the rendering code, as I had a lot of leftover DX11On12 code, and now I'm able to debug using PIX and NVIDIA Nsight. As far as I can see, the compute shaders are in place as they should be, but it looks like one SRV is invalid. I have to play around with PIX because I've never used it; so far it looks much better than the VS Graphics Debugger.
  9. 2D Best API for 2D animations

    Hi, I think you could go for DirectX (11) or OpenGL; I would stick with the latter as it supports multiple platforms. You should be able to reach 60 FPS with both APIs; it will only depend on the amount of work you push to the GPU and how well everything is programmed. Same for pixel-perfect rendering: it should be doable in both.
  10. I've been investigating this issue again and I think I'm missing some kind of synchronisation. If I leave the first shader as it is and replace the second with a dummy one that just outputs a colour, I see that the second Dispatch now has the appropriate compute shader. Maybe I need some kind of GPU-GPU synchronisation? Does the UAV->SRV barrier stall the whole compute dispatch or only access to the resource? I'm running out of ideas. EDIT: Maybe this pic helps: I added a Wait/Signal block to rule out any issues related to synchronisation.
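    For reference, GPU-GPU ordering between two dependent dispatches on the same queue is usually expressed with a UAV barrier; it orders accesses to UAVs rather than stalling the whole pipeline. A hypothetical sketch (`cmdList` and `outputTexture` assumed):

    ```cpp
    // UAV barrier: the first dispatch's UAV writes must complete before
    // the second dispatch's UAV accesses to the same resource begin.
    D3D12_RESOURCE_BARRIER uavBarrier = {};
    uavBarrier.Type = D3D12_RESOURCE_BARRIER_TYPE_UAV;
    uavBarrier.UAV.pResource = outputTexture; // or nullptr for "all UAVs"

    cmdList->Dispatch(groupsX, groupsY, 1);
    cmdList->ResourceBarrier(1, &uavBarrier);
    cmdList->Dispatch(groupsX, groupsY, 1);
    ```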
  11. Hi @ajmiles, yeah, I have it enabled and it's quiet at the moment.
  12. Hi @MJP, I'm using the VS debugger because it is the only one that seems to work fine with DX11On12. I'll check out the new release of RenderDoc (I think I have an older version). Thanks for clarifying the UAV barriers, the doc is a bit confusing in that aspect :c
  13. Hello once again. I'm performing a bunch of Dispatch() calls but I'm not getting the correct result out of them. I used the VS Graphics Debugger, and for some reason the compute shader of each Dispatch call is the same! So if I have:

        // mat1, mat2 and mat3 have different PSOs and root signatures
        renderPlatform->ApplyMaterial(mat1);
        renderPlatform->Dispatch(...);
        renderPlatform->ApplyMaterial(mat2);
        renderPlatform->Dispatch(...); // <- using mat1's compute shader!
        renderPlatform->ApplyMaterial(mat3);
        renderPlatform->Dispatch(...); // <- using mat1's compute shader!

    Could anyone shed some light on this problem? I want to add a question that may be related to it. I'm using these "problematic dispatches" to write to a resource. Each Dispatch takes a few SRVs as inputs and writes to a UAV, and the following dispatches take the resulting UAV as input (SRV):

        Texture FooTex1, FooTex2;
        Texture Mat1Out, Mat2Out;
        // mat1  Inputs: (SRV) FooTex1, (SRV) FooTex2                  Outputs: (UAV) Mat1Out
        // mat2  Inputs: (SRV) FooTex1, (SRV) FooTex2, (SRV) Mat1Out   Outputs: (UAV) Mat2Out
        // etc.

    Currently I'm adding (transition) barriers between UAV<->SRV. Should I also add (UAV) barriers to make sure writing to the UAV is done? As far as I understand from the MSDN doc, "all unordered access view (UAV) accesses (reads or writes) must complete before any future UAV accesses (read or write) can begin", so I don't need one, as I'm not doing further UAV reads/writes. Thanks
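    The UAV->SRV chain described above could be sketched as follows (hypothetical fragment; `cmdList`, `mat1Out` and the `renderPlatform` helper names follow the post and are assumed). A transition barrier out of the UNORDERED_ACCESS state already implies the UAV writes are finished, so an extra UAV barrier is only needed between back-to-back UAV accesses in the same state:

    ```cpp
    // mat1 writes Mat1Out as a UAV...
    renderPlatform->ApplyMaterial(mat1);
    cmdList->Dispatch(x, y, 1);

    // ...then Mat1Out is transitioned so mat2 can read it as an SRV.
    D3D12_RESOURCE_BARRIER toSrv = {};
    toSrv.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    toSrv.Transition.pResource = mat1Out;
    toSrv.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    toSrv.Transition.StateBefore = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
    toSrv.Transition.StateAfter = D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &toSrv);

    renderPlatform->ApplyMaterial(mat2);
    cmdList->Dispatch(x, y, 1);
    ```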
  14. Hi, why can't you use texture.Sample()? I think you should be able to create a DXGI_FORMAT_R32G32_SINT texture and sample it.
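    One caveat on the suggestion above: integer formats like DXGI_FORMAT_R32G32_SINT are generally not filterable, so if Sample() turns out not to be supported for that format, Load() with integer texel coordinates is an alternative. A hypothetical HLSL sketch (names are illustrative):

    ```hlsl
    Texture2D<int2> gIntTex : register(t0);

    int2 ReadTexel(uint2 pixel)
    {
        // Load takes int3(x, y, mip) -- no sampler or filtering involved.
        return gIntTex.Load(int3(pixel, 0));
    }
    ```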
  15. DX12 Reading from the CPU

    @SoldierOfLight I'm using Windows 10 Pro (with the latest patch, I guess). I tried mapping the ReadBack buffers once but got the same results :C. I'll dig into the one that is failing and try to find out why.
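    For reference, the usual readback sequence looks roughly like this (a hypothetical sketch; `cmdList`, `readbackBuffer` in a READBACK heap, fence handling and sizes are all assumed). A readback buffer can stay persistently mapped, as long as nothing reads it before the fence has signalled:

    ```cpp
    // GPU side: copy the result into the readback buffer.
    cmdList->CopyResource(readbackBuffer, gpuBuffer);
    // ... ExecuteCommandLists, queue->Signal(fence, value), wait on fence ...

    // CPU side: map and read once the fence has signalled.
    void* data = nullptr;
    D3D12_RANGE readRange = { 0, bufferSize }; // range we intend to read
    readbackBuffer->Map(0, &readRange, &data);
    memcpy(cpuCopy, data, bufferSize);
    D3D12_RANGE emptyRange = { 0, 0 }; // we wrote nothing back
    readbackBuffer->Unmap(0, &emptyRange);
    ```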