About lubbe75

  1. I think it's better to reduce the precision in my case, since reading from another constant buffer would probably take too long. What is the equivalent of R8G8B8A8_UInt or R8G8B8A8_UNorm in HLSL? I can only see 16-bit components mentioned in the documentation. No 8-bit types.
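    A minimal sketch of how this usually works (SharpDX; the semantic names and offsets below are assumptions, not taken from the original code): HLSL has no 8-bit scalar types, so the 8-bit format lives only in the input layout, and the input assembler unpacks R8G8B8A8_UNorm into floats in [0, 1] before the vertex shader sees them.

    ```csharp
    // Input layout sketch (SharpDX) -- element names and offsets are assumed:
    var elements = new[]
    {
        new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0),
        // 4 bytes per vertex; unpacked to float4 in [0, 1] by the input assembler
        new InputElement("COLOR", 0, Format.R8G8B8A8_UNorm, 12, 0),
    };
    ```

    ```hlsl
    // Shader side: declare float4 as usual; no 8-bit type is needed.
    struct VSInput
    {
        float3 position : POSITION;
        float4 color    : COLOR;   // receives the unpacked UNorm bytes
    };
    ```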
  2. I do have hundreds of thousands of vertices (thousands of meshes), so I would save quite a bit of data. Constant buffer? I suppose it's the best way. I am drawing everything in bundles so I suppose I would need to store all the colors in a long vector (one value per mesh), and then index that vector correctly when recording my draw calls. Makes sense.
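    One hedged way to sketch the "long vector of colors" idea (all names here are illustrative, not from the original project): keep the per-mesh colors in a single constant buffer and select the right entry with a 32-bit root constant recorded before each draw.

    ```hlsl
    // One color per mesh; 256 is an arbitrary illustrative cap.
    cbuffer MeshColors : register(b1)
    {
        float4 colors[256];
    };

    cbuffer PerDraw : register(b2)
    {
        uint meshIndex;   // set via SetGraphicsRoot32BitConstant before each draw
    };

    float4 PSMain(PSInput input) : SV_TARGET
    {
        return colors[meshIndex];
    }
    ```

    On the C# side this pairs with one `commandList.SetGraphicsRoot32BitConstant(rootParamIndex, meshIndex, 0)` per draw call; inside bundles the index is baked in at record time, which fits the "index that vector correctly when recording my draw calls" plan above.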
  3. What is the best practice when you want to draw a surface (for instance a triangle strip) with a uniform color? At the moment I send vertices to the shader, where each vertex has both position and color information. Since all vertices for that triangle strip have the same color, I thought I could reduce memory use by sending the color separately somehow. A vertex could then be represented by three floats instead of seven (xyz instead of xyz + rgba). Does it make sense? What's the best practice?
  4. DX12 Overlay in DX12?

    My first thought was naturally to draw normal System.Drawing.Graphics elements on top of the 3D rendering. This would be done right after presenting the swap chain and waiting for the GPU to catch up. Unfortunately it didn't work. Nothing gets drawn, at least not on top. If you have any method that works, please let me know. The area that I'm drawing onto is a SharpDX.Windows.RenderForm (which inherits from System.Windows.Forms.Form). So I guess it's the long route of drawing more 3D polygons instead.
  5. Simple question: what is best practice for drawing overlay graphics in DirectX 12? For now, all I want to do is to draw a semi-transparent rectangle in the upper left corner of my view. Is there a shortcut, or do I need to set up more shaders, vertex buffers, constant buffers, root signatures etc.? Since we are talking about DX12, I guess it's the latter. Any small example project out there?
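    There is no shortcut in raw DX12, but the extra state is smaller than it sounds: a second PSO that shares the existing root signature and shaders (or trivial passthrough ones) and differs only in its blend state, plus a four-vertex quad drawn after the scene. A sketch of the blend state (SharpDX; `overlayPsoDesc` is an assumed copy of the main PSO description):

    ```csharp
    // Standard "source over" alpha blending for the overlay quad.
    var blend = BlendStateDescription.Default();
    blend.RenderTarget[0].IsBlendEnabled = true;
    blend.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
    blend.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
    blend.RenderTarget[0].BlendOperation = BlendOperation.Add;
    blend.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
    blend.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
    blend.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
    overlayPsoDesc.BlendState = blend;
    ```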
  6. I think I finally got it working. My mistake lay somewhere in step (5). I always mess up the waiting-for-the-fence part. Now I went back to a simple single allocator (per window) instead of two, following the hello world examples... and it works. Goes to show that I still don't fully understand the routine for working with two allocators. Anyway, I still wonder what makes most sense when rendering to multiple windows. Should each window have its own allocator, queue and command list, or should these things be central to the application? When porting a hello world example to multiple windows, I guess it's easier to keep all those things per view.
  7. I am having a problem rendering to multiple (two) windows, using SharpDX and DX12. I am setting up two swap chains etc., and it's almost working. The effect is that both windows show sporadic flickering. It looks as though one window's transformation leaks into the other window every now and then (it goes both ways, but not necessarily at the same time). The rendering is triggered by user input, not a render loop, and the flickering happens mostly when there is lots of input (like panning quickly). When I deliberately slow things down (for instance by printing debug info to Output) it looks fine.

    At the moment I have this level of separation:

    The application has: one device, bundles, and all the resources like vertex, texture and constant buffers.

    Each view has: a swap chain, render target, viewport, fence, command list, command allocator and command queue. (I have also tried to use a single command list, allocator, queue and fence, but it doesn't make any difference regarding my flicker problem.)

    The rendering process is quite straightforward. One of the windows requests to be re-rendered, and the other window's requests will be ignored until the first is done. Then:

    1. Update the transformation matrix, based on the window's parameters (size, center position etc.). The matrix is written to the constant buffer:

           IntPtr pointer = constantBuffer.Map(0);
           Utilities.Write<Transform>(pointer, ref transform);
           constantBuffer.Unmap(0);

    2. Reset, populate and close the window's command list (populating here means setting its render target & viewport, executing bundles etc.).
    3. Execute the window's command list on its command queue.
    4. Present the window's swap chain.
    5. Wait for the window's command queue to finish rendering (using its fence).
    6. Reset the window's command allocator.

    I really believe that since both windows use the same constant buffer to write and read the transformation, sometimes the first window's transformation is still there when the other window is rendering. But I don't understand why. The writing to the buffer really happens when the code in step (1) is executed... Right? And the reading of the buffer really happens when the command list is executed in step (3)... Right? At the very least it must be read before reaching step (6). Am I right? Then how can it sometimes read the wrong transformation? Has anyone tripped over a similar problem before? What was the cause, and how did you fix it?
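    One plausible explanation, and a hedged sketch of a fix (variable names assumed): the Map/Write/Unmap happens on the CPU immediately, but the GPU reads the buffer only when the command list actually executes, so any gap in the fence wait lets one window's CPU write land before the other window's GPU read. Giving each window its own 256-byte-aligned slice of the buffer removes the shared location entirely:

    ```csharp
    // Each window writes to, and binds, its own slice of the upload buffer.
    const int TransformSlotSize = 256;    // CBV GPU addresses must be 256-byte aligned
    int offset = windowIndex * TransformSlotSize;

    IntPtr pointer = constantBuffer.Map(0);
    Utilities.Write(IntPtr.Add(pointer, offset), ref transform);
    constantBuffer.Unmap(0);

    // When recording this window's command list, point its root CBV at its slice:
    commandList.SetGraphicsRootConstantBufferView(
        0, constantBuffer.GPUVirtualAddress + offset);
    ```

    Note that if the draw calls live in pre-recorded bundles, any CBV binding inside them would also need to be per-window (or moved out of the bundles), since a bundle records one fixed GPU address.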
  8. OK. I thought that the min filter would sample the texture at 4 points (bi-linear) for every pixel... and that by using mipmaps you could avoid those four samples and replace them with just one lookup. So how does min filtering work? What does it mean when I specify that the min filter should be linear? Even now when I read the description of the min filter it's not entirely clear. Does the min filter sample from different mipmap levels? Then what does the mipmap filter specify? Is that just for how the mipmap should be generated?
  9. I am trying to set up my sampler correctly so that textures are filtered the way I want. I want to use linear filtering for both min and mag, and I don't want to use any mipmaps at all. To make sure that mipmapping is turned off I set MipLevels to 1 for my textures. For the sampler filter I have tried all kinds of combinations, but somehow the mag filter works fine while the min filter doesn't seem to work at all. As I zoom out there seems to be a nearest-point filter. Is there a catch in DX12 that makes my min filter not work? Do I need to filter manually in my shader? I don't think so, since the mag filter works correctly. My pixel shader is just a simple texture lookup:

        textureMap.Sample(g_sampler, input.uv);

    My sampler setup looks like this (SharpDX):

        sampler = new StaticSamplerDescription()
        {
            Filter = Filter.MinMagLinearMipPoint,
            AddressU = TextureAddressMode.Wrap,
            AddressV = TextureAddressMode.Wrap,
            AddressW = TextureAddressMode.Wrap,
            ComparisonFunc = Comparison.Never,
            BorderColor = StaticBorderColor.TransparentBlack,
            ShaderRegister = 0,
            RegisterSpace = 0,
            ShaderVisibility = ShaderVisibility.Pixel,
        };
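    A note on how the Filter enum reads, plus a hedged sketch: the name encodes min/mag/mip in that order, so MinMagLinearMipPoint already requests linear minification. What looks like a nearest-point min filter when zooming out is most likely plain minification aliasing: linear filtering averages only 4 texels, so once several texels map to one pixel the result shimmers no matter which min filter is set. Mipmaps are the standard cure; if they are acceptable, the sampler would look like this (same structure as above, only the filter changes):

    ```csharp
    sampler = new StaticSamplerDescription()
    {
        Filter = Filter.MinMagMipLinear,   // min: linear, mag: linear, mip: linear
        AddressU = TextureAddressMode.Wrap,
        AddressV = TextureAddressMode.Wrap,
        AddressW = TextureAddressMode.Wrap,
        ComparisonFunc = Comparison.Never,
        BorderColor = StaticBorderColor.TransparentBlack,
        ShaderRegister = 0,
        RegisterSpace = 0,
        ShaderVisibility = ShaderVisibility.Pixel,
    };
    ```

    With MipLevels = 1 the mip part of the name never comes into play, so MinMagLinearMipPoint and MinMagMipLinear behave identically there.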
  10. DX12 MSAA in DX12?

    Oh, forgot to ask a question... How do I switch MSAA on / off at run-time? The thing is that when I want MSAA I need to specify sample description count > 1 already when creating the pipeline state object. This number cannot be changed later on, can it? Is the only solution to create another pipeline state object (with its own command lists etc) and then switch between these two sets at run-time?
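    A hedged sketch of the two-PSO approach suggested above (SharpDX; `psoDesc` is the existing pipeline description): SampleDescription is indeed baked into a PSO at creation, but both PSOs can share the same root signature, shaders and command lists, so the switch is just a bind.

    ```csharp
    psoDesc.SampleDescription = new SampleDescription(1, 0);
    var psoNoMsaa = device.CreateGraphicsPipelineState(psoDesc);

    psoDesc.SampleDescription = new SampleDescription(4, 0);  // assumes 4x is supported
    var psoMsaa = device.CreateGraphicsPipelineState(psoDesc);

    // At run-time -- no extra command lists needed, just bind the right PSO:
    commandList.PipelineState = useMsaa ? psoMsaa : psoNoMsaa;
    ```

    The render target has to match as well: with MSAA on, rendering goes into a multisampled target that is later resolved, so the toggle also selects which render target view gets bound.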
  11. DX12 MSAA in DX12?

    Ok, got it! Going the ResolveSubresource way was really a lot easier. It's good enough for what I am looking for. With this method there is no need to have extra pipeline states, root signatures or shaders. Just the extra render target and depth targets will do. Thanks Vilem Otte and ajmiles for standing by!
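    For reference, a sketch of what the ResolveSubresource path described above typically looks like per frame (SharpDX; resource names are assumed):

    ```csharp
    // Scene was rendered into msaaRenderTarget (sample count > 1).
    commandList.ResourceBarrierTransition(msaaRenderTarget,
        ResourceStates.RenderTarget, ResourceStates.ResolveSource);
    commandList.ResourceBarrierTransition(backBuffer,
        ResourceStates.Present, ResourceStates.ResolveDestination);

    // Collapse the samples into the single-sampled back buffer.
    commandList.ResolveSubresource(backBuffer, 0, msaaRenderTarget, 0,
        Format.R8G8B8A8_UNorm);   // must match the render target format

    commandList.ResourceBarrierTransition(backBuffer,
        ResourceStates.ResolveDestination, ResourceStates.Present);
    ```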
  12. DX12 MSAA in DX12?

    Actually it does. Almost everything is set up in the LoadRenderTargetData function. Compare it with the non-post-processing version in https://github.com/RobyDX/SharpDX_D3D12HelloWorld/blob/master/D3D12HelloMesh/HelloMesh.cs and it's clear what has been added. Anyway, I'll continue trying today. Finding more than just code fragments for MSAA in DX12 seems impossible. ajmiles, thanks for the tip. I will use it when I get that far.
  13. DX12 MSAA in DX12?

    Thanks for the encouragement. I guess I need even more details (that's where the devil is). Following your example I am at creating another pipeline state, more buffers, shaders etc., but so far I haven't gotten all the parameters right. There is always another clash of illegal parameter combinations, it seems. For instance, how do you match Texture2DMS<float4, SamplesMSAA> msaaTexture : register(t0); with an input structure? And what does the corresponding shader resource view description look like? Do I need to create new resource heaps for the new render target and depth targets (I guess adding to the already existing heaps is fine)? What parameters do you give when creating your resource descriptions? Maybe you are not so keen on reading DX12 + SharpDX code, but if you are, here is an interesting example I found where they render to an off-screen buffer (for post-processing effects). I understand the code, and it's working, but they don't do MSAA. Can you or someone else help me figure out what would be needed in order to add multisampling here? I have a feeling there's not much missing. https://github.com/RobyDX/SharpDX_D3D12HelloWorld/blob/master/D3D12HelloRenderTarget/HelloRenderTarget.cs
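    On the two specific sub-questions above, a hedged sketch (SharpDX; names assumed): a Texture2DMS is never matched to an input structure or sampled with UVs; it is read with Load, which takes integer pixel coordinates and a sample index. The SRV description just declares the multisampled dimension.

    ```csharp
    var srvDesc = new ShaderResourceViewDescription
    {
        Format = Format.R8G8B8A8_UNorm,   // assumed to match the MSAA target
        Dimension = ShaderResourceViewDimension.Texture2DMultisampled,
        Shader4ComponentMapping = 5768,   // D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING
    };
    device.CreateShaderResourceView(msaaRenderTarget, srvDesc, heapHandle);
    ```

    ```hlsl
    Texture2DMS<float4, SamplesMSAA> msaaTexture : register(t0);

    float4 PSMain(PSInput input) : SV_TARGET
    {
        // input.position is SV_Position (pixel centre); Load wants integer coords.
        float4 sum = 0;
        for (uint s = 0; s < SamplesMSAA; s++)
            sum += msaaTexture.Load(int2(input.position.xy), s);
        return sum / SamplesMSAA;
    }
    ```

    Adding the new render target and depth target descriptors to the already existing heaps is fine, as guessed; heaps only need enough free slots.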
  14. Does anyone have a working example of how to implement MSAA in DX12? I have read short descriptions and I have seen code fragments on how to do it with DirectX Tool Kit. I get the idea, but with all the pipeline states, root descriptions etc I somehow get lost on the way. Could someone help me with a link pointing to a small implementation in DirectX 12 (or SharpDX with DX12)?
  15. DX12 DX12 and threading

    OK. So if I always get into the wait section it means that the GPU is doing the lengthy work compared to the CPU. Would I gain anything here by adding more allocators? I'm already at good speed, but I'm aiming for all the low-hanging fruit here. So, without doing multithreading, does it mean that I'm only using one GPU, even if the hardware has more than one? Does the DX12 driver ever utilise multiple GPUs without me telling it to do so?
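    On the first question: the two-allocator pattern only buys anything when the CPU is the bottleneck; if the code always reaches the fence wait, the GPU is pacing the frame and a second allocator just sits idle. A hedged sketch of the pattern anyway (names assumed), since the bookkeeping is easy to get wrong: the fence guards each allocator's reuse, not the whole frame.

    ```csharp
    int index = frameCount % 2;               // which allocator this frame uses

    // Wait only if the GPU hasn't finished the frame that last used this allocator.
    if (fence.CompletedValue < allocatorFenceValues[index])
    {
        fence.SetEventOnCompletion(allocatorFenceValues[index],
            fenceEvent.SafeWaitHandle.DangerousGetHandle());
        fenceEvent.WaitOne();
    }

    allocators[index].Reset();
    commandList.Reset(allocators[index], pipelineState);
    // ... record, close, execute, present ...

    commandQueue.Signal(fence, ++currentFenceValue);
    allocatorFenceValues[index] = currentFenceValue;   // remember for the reuse check
    frameCount++;
    ```

    On the second question: D3D12 does not spread work across GPUs on its own; multi-adapter use is explicit, so without extra code only the adapter the device was created on does the rendering.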