


About GMCommand

  1. Of course — I have version 1809, build 17763.55. Thanks!
  2. I didn't know about PIX until recently, but I tried it to solve my problem and it seems to have some nice features; I'll probably switch to it now :) I'll also have a look at RenderDoc, thanks :) As for my std::queue problem, it was only a problem with the union and constructors... sorry!
  3. 😮 Okay, if I don't create zero-sized heaps, the problem is solved. I had just "finished" my ring buffer, and since I didn't use the Sampler and DSV heaps to copy descriptors yet, it didn't even cross my mind that this could be the problem... I would have expected CreateDescriptorHeap to return an invalid-parameter error code, or the debug layer to tell me it was illegal, though. Can you tell me why it crashes only with the graphics debugger? And any idea about the std::queues? Or should I make another topic for that? (To be more precise, it works fine when I push elements, but crashes when I use front()/back(), and I double-checked it wasn't empty.) I'll try to see if I can reproduce it. Anyway, thank you very much for your help, ajmiles; it's the second time you've helped me.
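The fix described above (skipping zero-sized heaps instead of creating them) can be sketched as a guard in the ring buffer's init loop. This is a hypothetical, self-contained model, not the poster's actual code: `InitNonEmptyHeaps` and the commented-out `CreateHeapStub` stand in for the real `Heap::Init` / `ID3D12Device::CreateDescriptorHeap` calls.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mirrors D3D12_DESCRIPTOR_HEAP_TYPE_NUM_TYPES (CBV_SRV_UAV, Sampler, RTV, DSV).
constexpr int kNumHeapTypes = 4;

// Return the heap-type indices we actually created, skipping zero-sized ones.
// Creating a heap with NumDescriptors == 0 is what triggered the crash under
// the graphics debugger, so those entries are simply left uncreated.
std::vector<int> InitNonEmptyHeaps(const uint32_t (&heap_sizes)[kNumHeapTypes])
{
    std::vector<int> created;
    for (int i = 0; i < kNumHeapTypes; ++i)
    {
        if (heap_sizes[i] == 0)
            continue; // no Sampler/DSV heap rather than a zero-sized one
        // CreateHeapStub(device, heap_sizes[i], i); // real creation would go here
        created.push_back(i);
    }
    return created;
}
```

With the sizes from the post (`{ 250, 0, 20, 0 }`), only the CBV_SRV_UAV and RTV heaps would be created.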
  4. I tried to make a small code sample to reproduce the behaviour, but it worked fine, so I'm probably doing something wrong with the descriptor heaps... So I tried to narrow down the problem: it seems to happen after the creation of my ring buffer. The ring buffer has 4 heaps, and the next descriptor heap I create after the ring buffer's initialization has an invalid CPU handle, whether it's for depth or anything else.

```cpp
void RingBuffer::Initialize()
{
    UINT heap_sizes[] = { 250, 0, 20, 0 };
    for (int i = 0; i < D3D12_DESCRIPTOR_HEAP_TYPE_NUM_TYPES; ++i)
    {
        m_HeapList[i].Init(m_pDevice, heap_sizes[i], D3D12_DESCRIPTOR_HEAP_TYPE(i));
    }
}

void Heap::Init(ID3D12Device* device, UINT num_descriptors, D3D12_DESCRIPTOR_HEAP_TYPE type)
{
    D3D12_DESCRIPTOR_HEAP_DESC heap_desc = {};
    heap_desc.NumDescriptors = num_descriptors;
    heap_desc.Type = type;
    if (type == D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV)
        heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
    else
        heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    ThrowIfFailed(device->CreateDescriptorHeap(&heap_desc, IID_PPV_ARGS(&DescriptorHeap)));

    Markers = new std::queue<SectorMarker>();
    Markers->push({ D3D12Descriptor(DescriptorHeap->GetCPUDescriptorHandleForHeapStart(),
                                    DescriptorHeap->GetGPUDescriptorHandleForHeapStart()),
                    0, 0 });

    AvailableMemory = Size = num_descriptors;
    DescriptorIncrementSize = device->GetDescriptorHandleIncrementSize(type);
}
```

The crash happens when I create descriptor heaps after calling RingBuffer::Initialize().

By the way, I'll take this opportunity to ask another, unrelated question: I know it looks stupid to dynamically allocate the std::queues here, but when I allocate them as plain members, the program crashes when I use the queues, giving me this error: The std::queue is a member of struct Heap, and the Heaps are stored in a union inside the ring buffer class, which is dynamically allocated.

```cpp
struct SectorMarker
{
    D3D12Descriptor Descriptor;
    UINT64 FenceValue;
    UINT Tail;
};

struct Heap
{
    ...
    std::queue<SectorMarker>* Markers;
};

class RingBuffer
{
    ...
private:
    union
    {
        struct
        {
            Heap m_SRVHeap;
            Heap m_SamplerHeap;
            Heap m_RTVHeap;
            Heap m_DSVHeap;
        };
        Heap m_HeapList[D3D12_DESCRIPTOR_HEAP_TYPE_NUM_TYPES];
    };
};
```

Any idea why?
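One likely explanation for the crash described above: union members with non-trivial types such as std::queue are never constructed or destroyed implicitly, so the enclosing class must placement-new the active member itself; using the queue without that is undefined behaviour (consistent with push appearing to work but front()/back() crashing). A minimal, self-contained sketch under that assumption — `Holder` is illustrative, not the poster's class:

```cpp
#include <cassert>
#include <new>
#include <queue>

struct Holder
{
    using IntQueue = std::queue<int>;

    union
    {
        IntQueue q; // non-trivial member: suppresses the union's implicit ctor/dtor
    };

    Holder()  { new (&q) IntQueue(); } // explicitly construct the active member
    ~Holder() { q.~IntQueue(); }       // and explicitly destroy it
};
```

Without the placement new in the constructor, `h.q.push(...)` would operate on an unconstructed object; with it, the queue behaves normally.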
  5. I only have one D3D12-compatible GPU (a GTX 770), but my drivers are up to date; I updated them a few days ago after updating to Windows 10 1809. I also tried it on an old laptop with a GeForce G102M, but it crashed all the same with the graphics debugger. When I run the application with the graphics debugger I have 2 adapters, "Capture Adapter" and "Microsoft Basic Render Driver", but both produce the same crash :S
  6. Hi everybody! I have a problem with the VS Graphics Debugger and a D3D12 desktop application. When I create my depth-stencil resource, I create two descriptor heaps, one with HEAP_TYPE_DSV and one with HEAP_TYPE_CBV_SRV_UAV (to rebuild position from the depth map); both calls to CreateDescriptorHeap succeed. Then I create a shader-resource view and a depth-stencil view. Things go well both with and without the regular debugger, but when I use the graphics debugger, the application crashes, stating that the CPU descriptor handle does not refer to a location in a descriptor heap when I call either CreateDepthStencilView or CreateShaderResourceView. Here is my code:

```cpp
{
    D3D12_DESCRIPTOR_HEAP_DESC heap_desc = {};
    heap_desc.NumDescriptors = 1;
    heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    ThrowIfFailed(device->CreateDescriptorHeap(&heap_desc, IID_PPV_ARGS(&m_SRVDescriptorHeap)));

    heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
    ThrowIfFailed(device->CreateDescriptorHeap(&heap_desc, IID_PPV_ARGS(&m_DSVDescriptorHeap)));
}

DXGI_FORMAT depth_format     = DXGI_FORMAT_D24_UNORM_S8_UINT;
DXGI_FORMAT depth_format_SRV = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;

D3D12_CLEAR_VALUE depth_clear = {};
depth_clear.Format = depth_format;
depth_clear.DepthStencil.Depth = 1.0f;
depth_clear.DepthStencil.Stencil = 0;

ThrowIfFailed(device->CreateCommittedResource(
    &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT),
    D3D12_HEAP_FLAG_NONE,
    &CD3DX12_RESOURCE_DESC::Tex2D(depth_format, width, height, 1, 0, 1, 0,
                                  D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL),
    D3D12_RESOURCE_STATE_DEPTH_WRITE,
    &depth_clear,
    IID_PPV_ARGS(&m_Resource)));

{
    D3D12_DEPTH_STENCIL_VIEW_DESC dstencil_desc = {};
    dstencil_desc.Format = depth_format;
    dstencil_desc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
    dstencil_desc.Flags = D3D12_DSV_FLAG_NONE;
    // this call causes a crash
    device->CreateDepthStencilView(m_Resource, &dstencil_desc,
        m_DSVDescriptorHeap->GetCPUDescriptorHandleForHeapStart());
}

{
    D3D12_SHADER_RESOURCE_VIEW_DESC srv_desc = {};
    srv_desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srv_desc.Format = depth_format_SRV;
    srv_desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
    srv_desc.Texture2D.MipLevels = 1;
    // this call causes a crash
    device->CreateShaderResourceView(m_Resource, &srv_desc,
        m_SRVDescriptorHeap->GetCPUDescriptorHandleForHeapStart());
}
```

This is the message I get from the debug layer:

D3D12 ERROR: ID3D12Device::CreateShaderResourceView: Specified CPU descriptor handle ptr=0x0000000000008781 does not refer to a location in a descriptor heap. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]

I get no more information with GPU-based validation. The only thing different I noticed with the graphics debugger is that the handle has a much smaller ptr (and always the same one) than without it, but I suppose that's only because the application runs in some kind of environment under the graphics debugger, right? PIX gives me the same error (E_PIX_INVALID_DESCRIPTOR_HANDLE). Can anyone help me understand, or give me a clue why I have this problem, please :| ? Many thanks!
  7. GMCommand

    Direct2D for 2D games

    Hi, I think you have the answer on MSDN: https://docs.microsoft.com/en-us/windows/uwp/gaming/working-with-2d-graphics-in-your-directx-game. You can also use these (they use Direct3D): https://github.com/Microsoft/DirectXTK/wiki/Sprites-and-textures and https://github.com/Microsoft/DirectXTK/wiki/SpriteBatch. I would personally use Direct3D to draw the sprites and, depending on the time I want to spend on the game, Direct3D or Direct2D/DirectWrite to draw text and interfaces. I've only used Direct2D/DirectWrite to draw interfaces and text, but I think you'll get more flexibility with Direct3D for sprites. Hope this helps!
  8. GMCommand

    SSAO running slow

    Yes, that's what I thought when I realized it! By the way, do I have to close the thread? I don't see any close-thread button.
  9. GMCommand

    SSAO running slow

    I didn't know that tool! So I downloaded it to try, and when I clicked the start-analysis button, a message box popped up telling me "Hey, you're stupid!". Seriously, it said "This capture was created on a different GPU (Microsoft Basic Render Driver...". I hadn't noticed it, but I was asking for D3D_FEATURE_LEVEL_11_1, and the GTX 770 only supports up to D3D_FEATURE_LEVEL_11_0 -.- So thank you very much, ajmiles, you solved my issue!! Thanks for your answer, JoeJ! I tried to specify [roll], but it did not change the render time. Setting [unroll], on the other hand, reduced it to around 73 ms! Sorry I wasted your time on something so stupid... Thanks a lot to both of you!
  10. Hi everyone :) I'm trying to implement SSAO with D3D12 (using the implementation found on learnopengl.com: https://learnopengl.com/Advanced-Lighting/SSAO), but I seem to have a performance problem... Here is part of the SSAO pixel shader:

```hlsl
Texture2D PositionMap : register(t0);
Texture2D NormalMap   : register(t1);
Texture2D NoiseMap    : register(t2);
SamplerState s1 : register(s0);

// I hard-coded the variables just for the test
const static int    kernel_size = 64;
const static float2 noise_scale = float2(632.0 / 4.0, 449.0 / 4.0);
const static float  radius = 0.5;
const static float  bias   = 0.025;

cbuffer ssao_cbuf : register(b0)
{
    float4x4 gProjectionMatrix;
    float3   SSAO_SampleKernel[64];
}

float main(VS_OUTPUT input) : SV_TARGET
{
    [....]
    float occlusion = 0.0;
    for (int i = 0; i < kernel_size; i++)
    {
        float3 ksample = mul(TBN, SSAO_SampleKernel[i]);
        ksample = pos + ksample * radius;

        float4 offset = float4(ksample, 1.0);
        offset = mul(gProjectionMatrix, offset);
        offset.xyz /= offset.w;
        offset.xyz = offset.xyz * 0.5 + 0.5;

        float sampleDepth = PositionMap.Sample(s1, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(pos.z - sampleDepth));
        occlusion += (sampleDepth >= ksample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    [....]
}
```

The problem is this for loop. When I run it, it takes around 140 ms to draw the frame (a simple torus knot...) on a GTX 770. Without this loop, it's 5 ms. Running it without the PositionMap sampling and the matrix multiplication takes around 25 ms. I understand that matrix multiplication and sampling are "expensive", but I don't think that's enough to justify the sluggish drawing time. I assume the shader code from the tutorial works, so unless I've done something terribly stupid that I don't see, I suppose my problem comes from something I did wrong with D3D12 that I'm not aware of (I just started learning D3D12).
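For context, the SSAO_SampleKernel the shader above indexes is built on the CPU. A hedged host-side sketch of that generation, following the learnopengl.com approach the post links to (random points in a z >= 0 hemisphere, biased toward the origin); the names `Float3` and `BuildSSAOKernel` are illustrative, not the poster's:

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Float3 { float x, y, z; };

// Generate kernel_size sample points inside the unit hemisphere (z >= 0),
// with an i-dependent scale so more samples cluster near the origin.
std::vector<Float3> BuildSSAOKernel(int kernel_size)
{
    std::mt19937 rng(1234); // fixed seed for reproducibility in this sketch
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Float3> kernel;
    kernel.reserve(kernel_size);
    for (int i = 0; i < kernel_size; ++i)
    {
        Float3 s{ dist(rng) * 2.0f - 1.0f,   // x in [-1, 1]
                  dist(rng) * 2.0f - 1.0f,   // y in [-1, 1]
                  dist(rng) };               // z in [0, 1]: hemisphere only
        float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        if (len > 0.0f) { s.x /= len; s.y /= len; s.z /= len; }
        float r = dist(rng);                         // random length in [0, 1]
        float scale = float(i) / float(kernel_size);
        scale = 0.1f + scale * scale * 0.9f;         // lerp(0.1, 1.0, scale^2)
        s.x *= r * scale; s.y *= r * scale; s.z *= r * scale;
        kernel.push_back(s);
    }
    return kernel;
}
```

The resulting 64 vectors are what gets uploaded to the `ssao_cbuf` constant buffer.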
    Both PositionMap and NormalMap are render targets from the g-buffer; for each one I created two descriptor heaps, one of type D3D12_DESCRIPTOR_HEAP_TYPE_RTV and one of type D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV, and called both CreateRenderTargetView and CreateShaderResourceView. The NoiseMap only has one descriptor heap, of type D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV. Before calling DrawIndexedInstanced for the SSAO pass, I copy the relevant descriptors to a descriptor heap that I then bind, like so:

```cpp
CD3DX12_CPU_DESCRIPTOR_HANDLE ssao_heap_hdl(_pSSAOPassDesciptorHeap->GetCPUDescriptorHandleForHeapStart());

device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _gBuffer.PositionMap().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
ssao_heap_hdl.Offset(CBV_descriptor_inc_size);

device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _gBuffer.NormalMap().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
ssao_heap_hdl.Offset(CBV_descriptor_inc_size);

device->CopyDescriptorsSimple(1, ssao_heap_hdl,
    _ssaoPass.GetNoiseTexture().GetDescriptorHeap()->GetCPUDescriptorHandleForHeapStart(),
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

ID3D12DescriptorHeap* descriptor_heaps[] = { _pSSAOPassDesciptorHeap };
pCommandList->SetDescriptorHeaps(1, descriptor_heaps);
pCommandList->SetGraphicsRootDescriptorTable(0, _pSSAOPassDesciptorHeap->GetGPUDescriptorHandleForHeapStart());
pCommandList->SetGraphicsRootConstantBufferView(1, _cBuffSamplesKernel[0].GetVirtualAddress());
```

Debug and Release builds give me the same results, as do shader compilation flags with and without optimisation. So, does anyone see something weird in my code that would cause the slowness?
    By the way, when I run the pixel shader in the graphics debugger, this line: offset.xyz /= offset.w; does not seem to produce the expected results. The two rows below are the values shown in the debugger before and after the execution of that line:

    offset (before): x = -1.631761000, y = 1.522913000, z = 2.634875000, w = 2.634875000
    offset (after):  x = -0.619293700, y = 0.577983000, z = 2.634875000, w = 2.634875000

So x and y are okay, but not z. Please tell me if you need more info/code. Thank you for your help!
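A quick host-side check of the divide above, using the exact values from the debugger table: since z equals w (2.634875), z/w should come out as 1.0, and x should match the debugger's -0.6192937. The arithmetic itself is fine, which suggests the unchanged z in the debugger view is a display issue rather than what the shader computes. `Float4` and `PerspectiveDivide` are illustrative names for this sketch:

```cpp
#include <cmath>

struct Float4 { float x, y, z, w; };

// Same operation as the HLSL line "offset.xyz /= offset.w":
// divide x, y, z by w and leave w untouched.
Float4 PerspectiveDivide(Float4 v)
{
    v.x /= v.w;
    v.y /= v.w;
    v.z /= v.w;
    return v;
}
```

Feeding in (-1.631761, 1.522913, 2.634875, 2.634875) yields x and y matching the debugger's "after" row and z = 1.0.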
