karnaltaB

Member
  • Content Count

    27

Community Reputation

120 Neutral

About karnaltaB

  • Rank
    Member

Personal Information

  • Interests
    Programming

  1. Hi, I am currently hosting my DX12 renderer in a WPF application by putting a System.Windows.Forms.Panel inside a WindowsFormsHost (WindowsFormsIntegration). It works, but it feels like a dirty trick, and I am not sure whether I will run into problems this way. I can't find anyone, or any sample on the internet, showing DX12 hosted in WPF. I know WPFDXInterop can do it with DX11, but it doesn't seem to be updated to support DX12 anymore. Has anyone achieved it? Thank you.
  2. karnaltaB

    VS output problem

     Thank you very much. I got it.
  3. karnaltaB

    VS output problem

     I finally found my error! By default, when initializing a D3D12_INPUT_ELEMENT_DESC structure, SharpDX sets the input classification to D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, while in my native renderer I was initializing my input elements with D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA. So my vertex IDs were entering the VS in the wrong order... I don't know exactly what input classification does, but it was my issue (see the sketch below).
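     For reference, a minimal sketch of the corrected layout in native D3D12. This is not my actual vertex format; the semantics, formats and offsets are illustrative assumptions:

     // Per-vertex input layout: InputSlotClass must match how the data is
     // actually fed, i.e. D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA here
     // rather than D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA.
     const D3D12_INPUT_ELEMENT_DESC inputLayout[] =
     {
         { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
         { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
         { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
     };
     // Note: the last field (InstanceDataStepRate) must be 0 for per-vertex elements.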
  4. Hello, I have been stuck on a problem for 24 hours and it's driving me crazy. I have my original DX12 renderer, which is written with SharpDX, and I have now ported it to C++/CLI. But my draw calls with the C++ renderer don't produce anything at the VS stage (at least in the depth pass). Here are two screenshots from PIX: OK.jpg is the VS stage output of my working .NET renderer, and NOK.jpg is what I get with the native one. The depth map is left empty, as the geometry seems very far away or bugged. I have compared everything in PIX and I cannot find the difference. Vertex buffer, index buffer, constant buffer data, and all states (rasterizer, depth, blend, ...) are identical. It looks like a simple projection problem, but how is that possible with the same matrices, the same vertices and the same vertex shader? I attach two PIX frames if someone can help me out. Just look at the first draw call, which renders a shadow map from a spot light. This is an ultra-minimalist scene, to reduce things to their simplest state. Thanks a lot if someone can tell me what I am doing wrong. OK.pix3 NOK.pix3
  5. karnaltaB

    PCSS Shadow Sample ?

     Thank you, I will have a look at it. I solved some of my issues though: I now render my shadow map with back-face culling, which avoids the self-shadowing issue I was encountering (sketched below).
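     A minimal sketch of that rasterizer state in native D3D12, assuming everything other than the cull mode is left at typical defaults:

     D3D12_RASTERIZER_DESC shadowRasterizer = {};
     shadowRasterizer.FillMode = D3D12_FILL_MODE_SOLID;
     shadowRasterizer.CullMode = D3D12_CULL_MODE_BACK;   // cull back faces during the shadow pass
     shadowRasterizer.DepthClipEnable = TRUE;
     // Assigned to the shadow pass PSO: psoDesc.RasterizerState = shadowRasterizer;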
  6. Hi, I am looking for an HLSL shader sample of PCSS shadowing. My current implementation is quite buggy and doesn't produce a soft shadow. I really have a hard time with shader programming. If someone could point me to a working sample that I could try to implement in my learning 3D engine, that would be really cool. Here is my current shader, just for info:

     //
     // Compute PCSS shadow.
     //
     float PCSS_Shadow(float3 uvd, Texture2D shadowMap, float pixelSize, float lightSize)
     {
         // Search for blockers: average the depth of samples closer to the light than this pixel.
         float avgBlockerDepth = 0;
         float blockerCount = 0;
         for (int i = 0; i < SEARCH_POISSON_COUNT; ++i)
         {
             float2 offset = SEARCH_POISSON[i];
             float sampleDepth = shadowMap.SampleLevel(PointWrapSampler, uvd.xy + offset, 0).r;
             float isBlocker = (uvd.z <= sampleDepth) ? 0.0 : 1.0;
             blockerCount += isBlocker;
             avgBlockerDepth += sampleDepth * isBlocker;
         }

         // Check if we can early out: fully lit or fully blocked.
         if (blockerCount <= 0.0)
             return 1.0;
         else if (blockerCount == SEARCH_POISSON_COUNT)
             return 0.0;

         // Penumbra width calculation.
         avgBlockerDepth /= blockerCount;
         float fRatio = 1 + (((uvd.z - avgBlockerDepth) * lightSize) / avgBlockerDepth);
         fRatio *= fRatio;

         // Apply the PCF filter, widened by the penumbra ratio.
         float att = 0;
         for (int i = 0; i < PCF_POISSON_COUNT; i++)
         {
             float2 offset = fRatio * pixelSize.xx * PCF_POISSON[i];
             att += shadowMap.SampleCmpLevelZero(PCFSampler, uvd.xy + offset, uvd.z);
         }

         // Divide to normalize.
         return att / PCF_POISSON_COUNT;
     }

     //
     // PCSS shadow for a spot light.
     //
     float SpotShadowPCSS(float4 position, float4x4 lightViewProj, Texture2D shadowMap, float pixelSize, float lightSize)
     {
         // Transform the world position to shadow projected space.
         float4 posShadowMap = mul(position, lightViewProj);

         // Transform the position to shadow clip space.
         float3 uvd = posShadowMap.xyz / posShadowMap.w;

         // Convert to shadow map UV values.
         uvd.xy = 0.5 * uvd.xy + 0.5;
         uvd.y = 1.0 - uvd.y;

         return PCSS_Shadow(uvd, shadowMap, pixelSize, lightSize);
     }
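     One note for anyone comparing against their own setup: SampleCmpLevelZero requires PCFSampler to be a comparison sampler. A minimal sketch of how that static sampler might be declared on the root signature side (the register and visibility are assumptions):

     D3D12_STATIC_SAMPLER_DESC pcfSampler = {};
     pcfSampler.Filter = D3D12_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;   // hardware 2x2 PCF
     pcfSampler.AddressU = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
     pcfSampler.AddressV = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
     pcfSampler.AddressW = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
     pcfSampler.ComparisonFunc = D3D12_COMPARISON_FUNC_LESS_EQUAL;   // compares uvd.z against the stored depth
     pcfSampler.MaxLOD = D3D12_FLOAT32_MAX;
     pcfSampler.ShaderRegister = 0;                                  // assumed: s0 in the shader
     pcfSampler.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;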
  7. With the WARP renderer, no. But I have tested it on two machines with updated drivers, and neither works. I will certainly find out what I am doing wrong as I progress with my learning engine.
  8. I load 6 textures (simple solid-color 512x512) with a root descriptor table in the range t0-t5 for the pixel shader. I can't tell you why they are not visible in PIX; I am quite new to it. The OffsetInDescriptorsFromTableStart was a mistake: I used int.MinValue to set it, which is -2147483648. I corrected that and now use zero, but unfortunately it doesn't solve my issue. (A sketch of the table setup follows below.)
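     For reference, a minimal sketch of such a t0-t5 table in native D3D12, assuming a single pixel-shader-visible SRV range:

     D3D12_DESCRIPTOR_RANGE srvRange = {};
     srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
     srvRange.NumDescriptors = 6;                       // t0..t5
     srvRange.BaseShaderRegister = 0;                   // starts at t0
     srvRange.RegisterSpace = 0;
     srvRange.OffsetInDescriptorsFromTableStart = 0;    // 0 or D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND, not int.MinValue

     D3D12_ROOT_PARAMETER tableParam = {};
     tableParam.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
     tableParam.DescriptorTable.NumDescriptorRanges = 1;
     tableParam.DescriptorTable.pDescriptorRanges = &srvRange;
     tableParam.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;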
  9. karnaltaB

    One DescriptorHeap per frame buffer ?

     Thank you for your reply. At the moment my heaps are shader-visible, so having one per frame buffer is the way to go. If I understand you right, if I only wanted common heaps, I would need one CPU-only heap that every frame can access, and at the beginning of each frame's rendering I would copy its descriptors into another, GPU-visible heap. Is that the way ring buffers work? And for textures, when it's time to destroy their resources, I will have to wait a few frames to be certain another frame in flight isn't still using them?
  10. Hello, by digging a bit more into my DX12 learning, I am hitting a problem, and I am not certain whether my way of handling it is correct. I will try to explain myself as best I can. Basic DX12 samples and tutorials generally use two descriptor heaps (CBV / Sampler), and constant buffer resources are generally created once per back buffer. So when double buffering, I have one descriptor heap (CBV) allocating two slots for each constant buffer (to avoid conflicts when modifying data). But now I'd like to play around with root signature descriptor tables, so I need my CB heap slots to be contiguous; however, if my descriptor heap allocates two slots each time I initialize a CB, I end up with non-contiguous CBs. To avoid this problem, I was thinking of having one CBV/SRV/UAV descriptor heap per back buffer and one common Sampler descriptor heap, so that when I allocate several CBs they are all contiguous in their own heap. Is that a common practice? Additionally, for things like textures, I don't want to create two committed resources, because that would mean uploading them twice to GPU memory while they are "read only" resources (most of the time). So with my multiple-heap system, multiple heaps would point at the same resource. Is that a problem? Hope I've been understandable. Thank you.
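     To make the copy-per-frame idea from the reply above concrete, here is a minimal native D3D12 sketch of what I have in mind (the heap capacity, frame count and variable names are assumptions):

     #include <d3d12.h>
     #include <wrl/client.h>
     using Microsoft::WRL::ComPtr;

     static const UINT kFrameCount = 2;         // assumed: double buffering
     static const UINT kDescriptorCount = 64;   // assumed heap capacity

     ComPtr<ID3D12DescriptorHeap> gStagingHeap;               // CPU-only, holds the canonical descriptors
     ComPtr<ID3D12DescriptorHeap> gFrameHeaps[kFrameCount];   // shader-visible, one per frame

     void CreateHeaps(ID3D12Device* device)
     {
         D3D12_DESCRIPTOR_HEAP_DESC desc = {};
         desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
         desc.NumDescriptors = kDescriptorCount;
         desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;        // CPU-only staging heap
         device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&gStagingHeap));

         desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
         for (UINT i = 0; i < kFrameCount; ++i)
             device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&gFrameHeaps[i]));
     }

     // At the start of each frame: copy the contiguous descriptor block so the
     // descriptor-table offsets stay valid, then bind that frame's heap.
     void BeginFrame(ID3D12Device* device, ID3D12GraphicsCommandList* cmdList, UINT frame)
     {
         device->CopyDescriptorsSimple(
             kDescriptorCount,
             gFrameHeaps[frame]->GetCPUDescriptorHandleForHeapStart(),
             gStagingHeap->GetCPUDescriptorHandleForHeapStart(),
             D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

         ID3D12DescriptorHeap* heaps[] = { gFrameHeaps[frame].Get() };
         cmdList->SetDescriptorHeaps(1, heaps);
     }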
  11. OK, here is how my sample is supposed to work: I have a rotating cube and 6 textures loaded in the shader; every x seconds, a 32-bit root constant (an integer) is updated in the pixel shader and the texture is switched (1-6). If the value falls to zero, I just return black as the pixel shader color. When I pass my int value with a traditional constant buffer view, it works fine. But when I use a 32-bit root constant, after a few seconds of running the value falls to zero. Here are two PIX captures, one where it works and one where the value has fallen to zero. PS: I am using C# with SharpDX, so maybe a pointer lifecycle issue? Although in debug mode the pointer seems OK even after the root constant has stopped updating correctly. Thanks in advance. GPU 1 - OK.pix3 GPU 3 - NOK.pix3
  12. Yes, I haven't figured it out yet. I keep learning other aspects; maybe I will find what's wrong once I get a better understanding of D3D12. I understand it's quite hard for you to help me with this bug, but pasting code is impossible: even my small learning engine has dozens of classes.
  13. Thanks all. I have installed PIX; it will probably save me a lot of time later. But in my specific case (a really simple app), things happen as I thought: the constant switches between the values 5 and 6 (which is intended) while my app is working, and falls to 0 when all my cubes go black. So, as I expected, the root constant stops being updated in the middle of my rendering loop after a few seconds.
  14. Not yet. The shader is really minimalist. When my cube turns black, it's because the root constant doesn't have the expected value, so I know the problem comes from that constant. Can I watch the root constant's lifetime with PIX?
  15. Hi, I am trying to learn D3D12 programming by writing a small engine, but I am seeing a strange behavior that I cannot understand. Basically, I draw 200 cubes each frame, and before each draw call I use a 32-bit root constant to set the cube color in my shader. It works fine for a couple of seconds, then the root constant seems to stop being updated, so my shader can't determine the color anymore. I don't know why it works and then suddenly stops, without any debug error. Here is the pseudocode of what I am doing (a native sketch follows below):

     Init()
     {
         ...
         - Define a float value
         - Create a pointer to this float value
         - Add a root constant to my root signature parameters
         ...
     }

     RenderLoop()
     {
         ...
         For 0 to 200
         {
             - Update my float value
             - CommandList->SetGraphicsRoot32BitConstants(floatPtr)
             - Draw cube
         }
     }

     Thanks for the help.
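     For comparison, a minimal sketch of that loop in native D3D12 (the root parameter index, shader register and cube mesh are assumptions, and ComputeCubeColor is a hypothetical stand-in for the color logic). One detail that matters: SetGraphicsRoot32BitConstants copies the value into the command list at call time, so each draw gets its own constant, and later changes to the source variable cannot affect draws that were already recorded.

     #include <d3d12.h>

     float ComputeCubeColor(int cube);   // hypothetical helper standing in for the app's logic

     // Root signature side: one root constant at b0, visible to the pixel shader.
     D3D12_ROOT_PARAMETER MakeColorRootParameter()
     {
         D3D12_ROOT_PARAMETER colorParam = {};
         colorParam.ParameterType = D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS;
         colorParam.Constants.ShaderRegister = 0;   // assumed: b0 in HLSL
         colorParam.Constants.RegisterSpace = 0;
         colorParam.Constants.Num32BitValues = 1;
         colorParam.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;
         return colorParam;
     }

     // Render loop side: the constant is recorded into the command list per draw.
     void DrawCubes(ID3D12GraphicsCommandList* cmdList, UINT rootParamIndex)
     {
         for (int cube = 0; cube < 200; ++cube)
         {
             float colorValue = ComputeCubeColor(cube);
             cmdList->SetGraphicsRoot32BitConstants(rootParamIndex, 1, &colorValue, 0);
             cmdList->DrawIndexedInstanced(36, 1, 0, 0, 0);   // assumed: indexed cube mesh
         }
     }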