
kretash

Member · Content Count: 18 · Community Reputation: 196 Neutral
  1. kretash

    [D3D12] Enabling MSAA

    Thanks!   I was wondering if that would be a solution.
  2. Hello,

     I'm having issues enabling MSAA in D3D12 and I'm not sure what I'm missing.

     I have changed the PSO to have a sample count of 4 and set the quality to the standard pattern (not sure if that is necessary):

        psoDesc.SampleDesc.Count = 4;
        psoDesc.SampleDesc.Quality = DXGI_STANDARD_MULTISAMPLE_QUALITY_PATTERN;

     I have also set those two values for the depth stencil.

     When running the application like this I get the following error:

     "the render target sample desc in slot 0 does not match that specified by the current pipeline state.(pipeline state = count 4 quality -1, render target view = count 1 quality 0"

     Since I'm rendering directly to the back buffer, I tried changing the sample count of the swap chain, but that crashes.

     Also, I'm trying to query for multisample support and getting back an empty struct. I don't know what the issue is there either.

     Does anybody know where I'm going wrong?

     Thanks,
  3. kretash

    IASetIndexBuffer Error

    The cause might have to do with the Nvidia drivers; a bug report was sent.

    https://github.com/Microsoft/DirectX-Graphics-Samples/issues/76#issuecomment-170174144
  4. kretash

    IASetIndexBuffer Error

      Yes, the error appeared in my code and it was also present in a sample that was working well and had no error messages. I'm gonna open a ticket.

      It just came out of nowhere in both my code and the unmodified sample code, so I guessed that it must have come with the new drivers. Not sure, just a guess.

      The index buffer is live. I'm updating it in between frames; I don't know if that could be part of the issue.
  5. Hello,

     I started getting the following error in my project:

     D3D12 ERROR: ID3D12CommandList::IASetIndexBuffer: pDesc->BufferLocation + SizeInBytes - 1 (0x000000020514a4cb) exceeds end of the virtual address range of Resource (0x0000000000000000, Debug Name: '(nullptr)', GPU VA Range: 0x0000000204e5d000 - 0x0000000204e6cfff). This is OK as out of bounds access is guarded by the GPU (writes are discarded and reads return 0). However the developer probably did not intend to make use of this behavior.  [ STATE_CREATION ERROR #725: SET_INDEX_BUFFER_INVALID]

     The error came out of nowhere, as I wasn't touching that part of the project. I decided to execute one of the sample projects (D3D12Multithreading) and it was getting the same error. Both projects work well. I'm guessing this was part of the driver update? What does it mean?

     Thanks.
  6. That looks a lot better, gonna give it a try. Thanks!
  7. I'll look into that. The reason I have the view and the projection matrices in the buffer is that it has to be a multiple of 256. They are not even being used in the shader; they are just padding.
  8. Ok, I just figured out how to do it in a single call. I'm not sure if this is the correct way.

     The first step is to create a single buffer whose size is the size of the struct in the shader times the number of instances that will access it:

        HRESULT result;
        CD3DX12_HEAP_PROPERTIES heapProperties = CD3DX12_HEAP_PROPERTIES( D3D12_HEAP_TYPE_UPLOAD );
        CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Buffer(
            sizeof( uber_buffer ) * k_engine->get_total_drawables() );
        result = k_engine->get_device()->CreateCommittedResource( &heapProperties,
            D3D12_HEAP_FLAG_NONE, &resourceDesc, D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr, IID_PPV_ARGS( &r->m_uber_buffer ) );
        assert( result == S_OK && "CREATING THE CONSTANT BUFFER FAILED" );
        r->m_uber_buffer->SetName( L"UBER BUFFER" );

     Then, when creating the view for the constant buffer, I set a "buffer_offset" that is equal to the distance between the start of the buffer and the element I want to access in the shader. "buffer_size" is the size of a single struct. I then store that in the descriptor heap:

        const UINT buffer_size = ( sizeof( uber_buffer ) + 255 ) & ~255;
        r->m_uber_buffer_desc = {};
        D3D12_GPU_VIRTUAL_ADDRESS addr = r->m_uber_buffer->GetGPUVirtualAddress();
        r->m_uber_buffer_desc.BufferLocation = addr + buffer_offset;
        r->m_uber_buffer_desc.SizeInBytes = buffer_size;
        CD3DX12_CPU_DESCRIPTOR_HANDLE cbvSrvHandle(
            r->m_cbv_srv_heap->GetCPUDescriptorHandleForHeapStart(),
            offset, r->m_cbv_srv_descriptor_size );
        k_engine->get_device()->CreateConstantBufferView( &r->m_uber_buffer_desc, cbvSrvHandle );

     I then Map the whole array into the array of constant buffers:

        HRESULT result = r->m_uber_buffer->Map( 0, nullptr,
            reinterpret_cast< void** >( &r->m_uber_buffer_WO ) );
        assert( result == S_OK && "MAPPING THE CONSTANT BUFFER FAILED" );
        memcpy( r->m_uber_buffer_WO, ub, sizeof( uber_buffer ) * k_engine->get_total_drawables() );
        r->m_uber_buffer->Unmap( 0, nullptr );

     Then I have the constant buffer defined in the shader. This constant buffer has to be a multiple of 256 bytes:

        cbuffer uber_buffer : register ( b0 )
        {
          float4x4 mvp;
          float4x4 model;
          float4x4 view;
          float4x4 projection;
        }

     The only step missing is binding the CBV in the render function.

     The number of calls got reduced from 14000 to 1, but there has only been a slight improvement in performance. I'm gonna clean up the code and see if I can find why.
  9. Thanks for the reply, that's exactly what I was looking for.   I'm gonna give it a try!
  10. Hello,

     I'm rendering a scene with a total of 14400 entities. It runs ok, but I saw that the biggest bottleneck was copying all the constant buffers individually, one for every single entity.

     One thing that I'm doing wrong is that I'm updating buffers that will not be rendered because they are outside the frustum. But I went ahead and tried to create what I called an uber buffer that is updated every frame with all the matrices inside, to reduce it to one call. Then I have a single static buffer for every instance that holds an instance_id; this buffer is only updated when created. It looks something like this:

        cbuffer constant_buffer : register ( b0 )
        {
          int instance_id;
        }

        #define MAX_INSTANCES 14428
        cbuffer uber_buffer : register ( b1 )
        {
          float4x4 mvp[MAX_INSTANCES];
        }

     This does not work, as the maximum allowed size of a constant buffer is 4096 entries, which I think means 4096 float4's, or 1024 float4x4's.

     Are there any ways of speeding this up apart from just avoiding updating the buffers that will not be used?
  11. kretash

    [D3D12] Uploading Textures

    Ok, just realized reading the code for the sample that it's recorded inside a command list and then executed.
  12. Hello,

     I have been trying to upload a texture using DirectX 12 and I can't figure out what I'm missing. I have almost exactly the same code as the "D3D12DynamicIndexing" sample, but the texture appears completely black. I have checked that the data I'm trying to upload is not null, so I'm guessing that I'm messing up somewhere else.

     This is the code that I'm using to upload the texture. Does anybody know where I'm messing up?

        //texture
        int x = 512, y = 512, n = 4;
        unsigned char *data = stbi_load( "image.png", &x, &y, &n, 4 );

        D3D12_RESOURCE_DESC textureDesc{};
        textureDesc.MipLevels = 1;
        textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        textureDesc.Width = x;
        textureDesc.Height = y;
        textureDesc.Flags = D3D12_RESOURCE_FLAG_NONE;
        textureDesc.DepthOrArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;

        result = m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES( D3D12_HEAP_TYPE_DEFAULT ),
            D3D12_HEAP_FLAG_NONE,
            &textureDesc,
            D3D12_RESOURCE_STATE_COPY_DEST,
            nullptr,
            IID_PPV_ARGS( &m_texture ) );
        assert( result == S_OK && "CreateCommittedResource FAILED" );

        const UINT subresourceCount = textureDesc.DepthOrArraySize * textureDesc.MipLevels;
        const UINT64 uploadBufferSize = GetRequiredIntermediateSize( m_texture.Get(), 0, subresourceCount );

        result = m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES( D3D12_HEAP_TYPE_UPLOAD ),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Buffer( uploadBufferSize ),
            D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr,
            IID_PPV_ARGS( &m_texture_upload ) );
        assert( result == S_OK && "CreateCommittedResource FAILED" );

        D3D12_SUBRESOURCE_DATA textureData = {};
        textureData.pData = data;
        textureData.RowPitch = static_cast< LONG_PTR >( 4 * x );
        textureData.SlicePitch = textureData.RowPitch * y;

        UpdateSubresources( m_commandList.Get(), m_texture.Get(), m_texture_upload.Get(),
            0, 0, subresourceCount, &textureData );
        m_commandList->ResourceBarrier( 1, &CD3DX12_RESOURCE_BARRIER::Transition( m_texture.Get(),
            D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE ) );

        D3D12_SAMPLER_DESC samplerDesc = {};
        samplerDesc.Filter = D3D12_FILTER_MIN_MAG_MIP_LINEAR;
        samplerDesc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        samplerDesc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        samplerDesc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        samplerDesc.MinLOD = 0;
        samplerDesc.MaxLOD = D3D12_FLOAT32_MAX;
        samplerDesc.MipLODBias = 0.0f;
        samplerDesc.MaxAnisotropy = 1;
        samplerDesc.ComparisonFunc = D3D12_COMPARISON_FUNC_ALWAYS;
        m_device->CreateSampler( &samplerDesc, m_samplerHeap->GetCPUDescriptorHandleForHeapStart() );

        D3D12_SHADER_RESOURCE_VIEW_DESC diffuseSrvDesc = {};
        diffuseSrvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
        diffuseSrvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        diffuseSrvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
        diffuseSrvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView( m_texture.Get(), &diffuseSrvDesc, cbvSrvHandle );
        cbvSrvHandle.Offset( m_cbvSrvDescriptorSize );

     Thanks
  13. Yes, you were right. I just saw your post.
  14. I was missing this:

        CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle( m_rtvHeap->GetCPUDescriptorHandleForHeapStart(),
            m_frameIndex, m_rtvDescriptorSize );
        CD3DX12_CPU_DESCRIPTOR_HANDLE dsvHandle( m_dsvHeap->GetCPUDescriptorHandleForHeapStart() );
        m_commandList->OMSetRenderTargets( 1, &rtvHandle, FALSE, &dsvHandle );

     Now it works!
  15. I have added a depth buffer but it is not rendering to it.

        D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
        dsvHeapDesc.NumDescriptors = 1 + FrameCount * 1;
        dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
        dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
        result = m_device->CreateDescriptorHeap( &dsvHeapDesc, IID_PPV_ARGS( &m_dsvHeap ) );
        assert( result == S_OK && "ERROR CREATING THE DSV HEAP" );

        { //Creating the depth texture
          CD3DX12_RESOURCE_DESC depth_texture( D3D12_RESOURCE_DIMENSION_TEXTURE2D, 0,
              static_cast< UINT >( m_viewport.Width ), static_cast< UINT >( m_viewport.Height ), 1, 1,
              DXGI_FORMAT_D32_FLOAT, 1, 0, D3D12_TEXTURE_LAYOUT_UNKNOWN,
              D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL | D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE );

          D3D12_CLEAR_VALUE clear_value;
          clear_value.Format = DXGI_FORMAT_D32_FLOAT;
          clear_value.DepthStencil.Depth = 1.0f;
          clear_value.DepthStencil.Stencil = 0;

          result = m_device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES( D3D12_HEAP_TYPE_DEFAULT ),
              D3D12_HEAP_FLAG_NONE, &depth_texture, D3D12_RESOURCE_STATE_DEPTH_WRITE, &clear_value,
              IID_PPV_ARGS( &m_depthStencil ) );
          assert( result == S_OK && "CREATING THE DEPTH STENCIL FAILED" );

          m_device->CreateDepthStencilView( m_depthStencil.Get(), nullptr,
              m_dsvHeap->GetCPUDescriptorHandleForHeapStart() );
        }

        m_commandList->ClearDepthStencilView( m_dsvHeap->GetCPUDescriptorHandleForHeapStart(),
            D3D12_CLEAR_FLAG_DEPTH, 1.0f, 0, 0, nullptr );

     The only thing that I think is missing is that the resource is not bound, but I don't know how to bind it.