karnaltaB

Member

  1. With the WARP renderer, no. But I have tested it on two machines with updated drivers, and neither works. I will certainly find out what I am doing wrong as I progress with my learning engine.
  2. I load 6 textures (simple solid colors, 512x512) through a root descriptor table covering the range t0-t5 in the pixel shader. I can't tell you why they are not visible in PIX; I am quite new to it. As for OffsetInDescriptorsFromTableStart, it was a mistake: I had set it with int.MinValue, which is -2147483648. I corrected that and now use zero, but unfortunately it doesn't solve my issue.
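
A sketch of the descriptor-handle arithmetic behind that offset may help show why int.MinValue breaks everything (the heap start, increment size, and slot numbers below are illustrative, not real values; the special sentinel D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND is 0xffffffff, not int.MinValue):

```python
# Sketch of descriptor addressing for a root descriptor table.
# Assumption: illustrative numbers only; real values come from
# ID3D12Device::GetDescriptorHandleIncrementSize and the heap's
# GetGPUDescriptorHandleForHeapStart.

HEAP_START = 0x1000   # GPU handle of the heap's first descriptor
INCREMENT  = 32       # per-descriptor stride, device-specific

def descriptor_handle(table_base_slot, offset_from_table_start, i):
    """GPU address of descriptor i of a range inside a descriptor table."""
    slot = table_base_slot + offset_from_table_start + i
    return HEAP_START + slot * INCREMENT

# With OffsetInDescriptorsFromTableStart = 0, the six SRVs t0-t5 land on
# six consecutive descriptors after the table's base slot:
print([hex(descriptor_handle(4, 0, i)) for i in range(6)])

# With int.MinValue (-2147483648), the computed slot is wildly out of
# range, so the shader reads garbage descriptors:
print(descriptor_handle(4, -2147483648, 0))
```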
  3. karnaltaB

    One DescriptorHeap per frame buffer ?

    Thank you for your reply. At the moment my heaps are shader-visible, so having one per frame buffer is the way to go. If I understand you right: if I wanted a single shared heap instead, I would need one CPU-only (non-shader-visible) heap that every frame can write to, and at the start of each frame's rendering I would copy its descriptors into a GPU-visible heap. Is that how ring buffers work? And for textures, when it's time to destroy their resources, I will have to wait a few frames to be certain another in-flight frame isn't still using them?
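
The deferred-destruction idea from this exchange can be sketched as follows (plain Python, not SharpDX; `FrameRing` and the fence bookkeeping are illustrative names). A resource freed at fence N may still be referenced by every frame in flight, so it is only released once the GPU has passed N + FRAME_COUNT - 1:

```python
# Sketch: defer resource destruction until the GPU has passed the fence
# value of every in-flight frame that might still reference the resource.

from collections import deque

FRAME_COUNT = 2  # double buffering, as in the posts above

class FrameRing:
    def __init__(self):
        self.completed_fence = 0   # last fence value the GPU has finished
        self.pending = deque()     # (safe_fence_value, resource_name)

    def defer_destroy(self, resource, current_fence):
        # Any of the FRAME_COUNT frames in flight may still use it.
        self.pending.append((current_fence + FRAME_COUNT - 1, resource))

    def collect(self):
        """Release everything whose safe fence value has been reached."""
        freed = []
        while self.pending and self.pending[0][0] <= self.completed_fence:
            freed.append(self.pending.popleft()[1])
        return freed

ring = FrameRing()
ring.defer_destroy("brick_texture", current_fence=10)
ring.completed_fence = 10
print(ring.collect())   # [] - frame 11 may still be using the texture
ring.completed_fence = 11
print(ring.collect())   # ['brick_texture'] - now safe to release
```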
  4. Hello,

     By digging a bit deeper into my DX12 learning, I am hitting a problem, and I am not certain my way of handling it is correct. I will try to explain myself as best I can.

     Basic DX12 samples and tutorials generally use two descriptor heaps (CBV and Sampler), and every constant buffer resource is generally multiplied by the number of back buffers. So when double buffering, I have one descriptor heap (CBV) allocating two slots for each constant buffer (to avoid conflicts when modifying data). But now I'd like to play around with root signature descriptor tables, so my CB heap slots need to be contiguous; however, if my descriptor heap allocates two slots every time I initialize a CB, I end up with non-contiguous CBs.

     To avoid this problem, I was thinking of having one CBV/SRV/UAV descriptor heap per back buffer and one shared Sampler descriptor heap, so that when I allocate several CBs they are all contiguous in their own heap. Is that a common practice?

     Additionally, for things like textures, I don't want to create two committed resources, because that would mean uploading them to GPU memory twice while they are (most of the time) read-only resources. So with my multiple-heap system, multiple heaps would point to the same resource. Is that a problem?

     Hope I've been understandable. Thank you.
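
The per-back-buffer heap layout described in this post can be sketched with simple slot counters (illustrative Python, not SharpDX; a real heap hands out CPU/GPU handles rather than integer slots):

```python
# Sketch of "one CBV/SRV/UAV heap per back buffer": each frame's heap
# hands out consecutive slots, so a descriptor table can reference a
# contiguous run regardless of how many frames are buffered.

FRAME_COUNT = 2

class PerFrameHeapAllocator:
    def __init__(self):
        # one independent slot counter per back buffer's heap
        self.next_slot = [0] * FRAME_COUNT

    def allocate_cbv(self, frame_index):
        slot = self.next_slot[frame_index]
        self.next_slot[frame_index] += 1
        return slot

alloc = PerFrameHeapAllocator()
# Creating three constant buffers yields contiguous slots 0,1,2 in EACH
# frame's heap, instead of interleaved slots 0,2,4 / 1,3,5 in one heap
# that allocates two slots per buffer:
slots = [(alloc.allocate_cbv(0), alloc.allocate_cbv(1)) for _ in range(3)]
print(slots)   # [(0, 0), (1, 1), (2, 2)]
```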
  5. Ok, here is how my sample is supposed to work: I have a rotating cube and 6 textures loaded in the shader. Every x seconds, a 32-bit root constant (an integer) is updated in the pixel shader and the texture is switched (1-6). If my value falls to zero, I just return black as the pixel shader color. When I pass my int value through a traditional constant buffer view, it works fine. But when I use a 32-bit root constant, after a few seconds of running the value falls to zero. Here are two PIX captures, one where it's working and another where the value has fallen to zero. PS: I am using C# with SharpDX; maybe a pointer lifecycle issue? Although in debug mode the pointer seems OK even after the root constant has stopped updating correctly. Thanks in advance. GPU 1 - OK.pix3 GPU 3 - NOK.pix3
  6. Yes, I haven't figured it out yet. I keep learning other aspects; maybe I will find what's wrong when I get a better understanding of D3D12. I understand that it's quite hard for you to help me with that bug, but pasting code is impossible; even my small learning engine has dozens of classes.
  7. Thanks all. I have installed PIX; it will probably save me a lot of time later. But in my specific case (a really simple app), things happen as I thought: the constant switches between the values 5 and 6 (which is intended) while my app is working, and falls to 0 when all my cubes go black. So, as I expected, the root constant stops being updated in the middle of my rendering loop after a few seconds.
  8. Not yet. The shader is really minimalist. When my cube turns black, it's because the root constant doesn't have the expected value, so I know the problem comes from that constant. Can I inspect the root constant's lifetime with PIX?
  9. Hi, I am trying to learn D3D12 programming by writing a small engine, but I have a strange behavior that I cannot understand. Basically, I am drawing 200 cubes each frame, and between each draw call I use a 32-bit root constant to determine the cube color in my shader. It works fine for a couple of seconds, then the root constant seems to stop being updated, so my shader can't determine the color anymore. I don't know why it works and then suddenly stops without any debug error. Here is the pseudo code of what I am doing:

     Init() {
         ...
         - Define a float value
         - Create a pointer to this float value
         - Add a root constant to my root signature parameters
         ...
     }

     RenderLoop() {
         ...
         For 0 to 200 {
             - Update my float value
             - CommandList->SetGraphicsRoot32BitConstants(floatPtr)
             - Draw cube
         }
     }

     Thanks for the help.
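
One plausible reading of the symptom: SetGraphicsRoot32BitConstants copies the constant data into the command list at record time, so each draw should see its own value; the failure mode looks like the constant being read through a pointer whose target has gone stale (in C#/SharpDX, e.g. managed memory moved or collected). The timing difference can be sketched in plain Python (`ValueHolder`, `record_by_*` are illustrative names, not SharpDX API):

```python
# Sketch: why a root constant must be captured by VALUE at record time,
# not read through a shared pointer at execute time.

class ValueHolder:
    """Stands in for the memory a native pointer would reference."""
    def __init__(self, value):
        self.value = value

def record_by_snapshot(holder, draws):
    """Correct: the constant is copied into the command list per call."""
    commands = []
    for i in range(draws):
        holder.value = i               # "update my float value"
        commands.append(holder.value)  # value copied at record time
    return commands

def record_by_reference(holder, draws):
    """Buggy: every command reads the SAME memory when it finally runs."""
    commands = []
    for i in range(draws):
        holder.value = i
        commands.append(lambda: holder.value)  # dereferenced later
    return [read() for read in commands]       # "GPU executes" afterwards

holder = ValueHolder(0)
print(record_by_snapshot(holder, 3))   # [0, 1, 2] - each draw sees its value
print(record_by_reference(holder, 3))  # [2, 2, 2] - all draws see the last value
```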
  10. Hi all, according to Microsoft's MSDN, it is better practice to draw with Direct2D using an ID2D1DeviceContext (and a DXGI swap chain) than using an ID2D1RenderTarget: you get access to some caching methods, can switch render targets at any time, and so on. So I am trying to adapt my application that way (with SharpDX), but I can't find the constructor to create a SolidColorBrush (or gradient, ...) from a DeviceContext; they all require a RenderTarget as a parameter. I have found code samples in SharpDX where people construct a new brush with a DeviceContext as the parameter, so this constructor existed at some point. Where has it gone? How do I create my brushes now if I draw through the DeviceContext? Thanks a lot for the help; it's really driving me mad :)
  11. Hi, I am playing around with PCSS shadows in my small 3D engine (for learning purposes), but there are two things I don't understand. Probably beginner problems...

     - Reading from my shadow map requires a UV coordinate, and I add Poisson-disk offsets to it to read surrounding pixels and try to find blockers (nVidia's PCSS method). What I don't get is how it works when UV + offset is greater than 1; does it go outside the texture then? Example: UV = (0.54, 0.74) + offset = (0.84, 0.44). I would expect Poisson-disk offsets to be really small values, like 0.00xx, so as to read nearby pixels and not random pixels across the whole map. I probably don't get how texture reads work...

     - My other problem is that the depth map I generate from my light's point of view has too small a range, so I don't find many blockers (and the shadows stay hard). When visualizing my shadow map in the VS debugger, all my values are between 0.998 and 1.0 (sometimes even between 1 and 1 :s), even with my clip space as tight as possible. In one particular case my near plane is 0.1f and my far plane is 20.0f; I would expect the depth values to be spread a bit more, like 0.97 - 1.0. What could influence the depth distribution when rendering my shadow map (a 32-bit floating-point texture)? Due to that tight depth range, my PCSS offset value after finding blockers is really small, so I don't really get soft shadows. When debugging nVidia's sample code, I noticed that the main difference is that their depth values are more spread out, so their offset value is something like 8-9 while mine is something like 0.025.

     Thanks a lot if someone can enlighten me ;)
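
Both questions have a numeric side that can be sketched. Assumptions: the Poisson samples are unit-disk points that must be scaled into UV space before being added to the UV (any result still outside [0,1] is then handled by the sampler's address mode such as CLAMP or BORDER, not by this math), and the shadow map uses the standard D3D perspective depth mapping d = far/(far-near) * (1 - near/z):

```python
# Sketch: (1) scaling Poisson-disk samples into tiny UV offsets, and
# (2) why near=0.1, far=20 squeezes most depth values toward 1.0.

def scale_poisson(sample, filter_radius_texels, map_size):
    """Scale a unit-disk Poisson sample into a small UV-space offset."""
    return (sample[0] * filter_radius_texels / map_size,
            sample[1] * filter_radius_texels / map_size)

def ndc_depth(z, near, far):
    """D3D-style perspective depth: far/(far-near) * (1 - near/z)."""
    return far / (far - near) * (1.0 - near / z)

# A "big" sample like (0.84, 0.44) becomes a tiny UV offset once scaled
# for, say, a 5-texel search radius on a 2048x2048 map:
print(scale_poisson((0.84, 0.44), 5.0, 2048))

# With near=0.1 and far=20 (the values from the post), most of the scene
# lands in the top of the depth range:
for z in (1.0, 5.0, 10.0, 19.0):
    print(f"z={z:5.1f}  depth={ndc_depth(z, 0.1, 20.0):.5f}")

# Pushing the near plane OUT spreads depth far more than pulling far in:
for z in (1.0, 5.0, 10.0, 19.0):
    print(f"z={z:5.1f}  depth={ndc_depth(z, 1.0, 20.0):.5f}")
```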
  12. karnaltaB

    Problem when resizing Render Targets

    I have partially identified the problem. It looks like a far-clip-plane issue: if I zoom in on my scene, I can see it rendered correctly. It's as if there were a clip plane a few units from the camera. But I am on DirectX 11 and I have not set any clip plane; if I am right, in DX11 this isn't automatic? You have to set up the clip plane via a shader?

    EDIT: OK, it's solved; it was indeed a far-clip-plane problem, and it is still automatic in DX11, my fault. I was using a 10000-unit far plane in the engine initialisation, but after a resize I was using a 500-unit far plane that I was setting in another place in my code.
  13. karnaltaB

    Problem when resizing Render Targets

    Yes, the debug layer is activated. There are no messages.
  14. Hi, when I resize my render frame, I resize the swap chain and recreate all my render targets. But as soon as I do it, my rendering doesn't work anymore; I just get my clear color displayed. I can't really provide a code sample because this is part of a larger project, but here is how I proceed:

     - Resize the swap chain
     - Recreate the viewport
     - For each render target:
        + Dispose the Texture2D and recreate it with the new size
        + Dispose and recreate the ShaderResourceView from the Texture2D
        + Dispose and recreate the RenderTargetView from the Texture2D
     - For each depth stencil:
        + Dispose the Texture2D and recreate it with the new size
        + Dispose and recreate the ShaderResourceView from the Texture2D
        + Dispose and recreate the DepthStencilView from the Texture2D

     As soon as I do that, my draws aren't displayed anymore. If I debug the frame, I can see that all my draw calls occur correctly and pass through the vertex shader, but they stop there; the pixel shader is not executed (though one is bound correctly). In the pixel history, I just see:

     - Initial
     - ClearRenderTargetView
     - Final

     No pixel shader stage or depth test stage... I am probably missing something simple, but I don't know what. Thanks if someone can help me.
  15. Thanks, it looks like it was a normal-precision problem and not a depth problem. I switched my normal-storage buffer from R11G11B10_Float to R16G16B16A16_Float, removed my normal transformation (normal.xyz * 0.5 + 0.5), and now everything renders smoothly. You put me in the right direction ;)
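
The precision gap behind this fix can be sketched numerically. Assumptions: R11G11B10_FLOAT stores unsigned floats with a 6-bit mantissa in R/G (5-bit in B) and no sign bit, which is why normals needed the *0.5+0.5 remap, while R16G16B16A16_FLOAT is a signed half with a 10-bit mantissa, so the remap can be dropped:

```python
# Quantization sketch: a remapped normal component stored in an 11-bit
# float (6-bit mantissa) versus a 16-bit half float (10-bit mantissa).

import math

def quantize_mantissa(x, mantissa_bits):
    """Round positive x to the nearest float with the given mantissa width."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e, with 0.5 <= m < 1
    scale = 2.0 ** (mantissa_bits + 1)   # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

n = 0.5 + 0.5 * 0.123456      # a normal component remapped into [0, 1]
err11 = abs(quantize_mantissa(n, 6) - n)    # R/G channel of R11G11B10_FLOAT
err16 = abs(quantize_mantissa(n, 10) - n)   # channel of R16G16B16A16_FLOAT

print(err11)   # steps of up to ~1/2^7 near 1.0 -> visible lighting bands
print(err16)   # worst-case step is 16x finer -> smooth shading
```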