I have a compute shader that writes to an RWStructuredBuffer, and later a pixel shader reads from it as an SRV (StructuredBuffer).
For this I insert a barrier to transition the buffer to the SRV state, and back to the UAV state after the pixel shader is done. I am trying to minimize the number of barriers, so one option is to use RWStructuredBuffer in the pixel shader as well, even though that shader only reads from it.
So my question is: does reading through an RWStructuredBuffer come with a hidden cost greater than the cost of the two barriers?
I wanted to gauge how everyone is handling resource descriptors and bind slots in the new APIs. Under DirectX 11 I would let the compiler generate its own bind slots and look them up at runtime. I wanted to see whether people using DirectX 12/Vulkan have switched to explicit slots for easier descriptor set management, or are still using the prior approach and burning through descriptors.
By Inline Engine
Our hobby team of experienced professionals (5 active members) is looking for volunteers to take a seat in the development of a next-generation C++ 3D game engine.
The minimum requirement is a passion for games and game engine programming.
- Cross-platform & clean code design (for now we are only targeting PC & consoles)
- Fully customizable graph-based DirectX 12 graphics engine (PBR is in progress)
- Own Editor + UI system + Math library
- (Will be) multithreaded with a job-based system like in Uncharted 4's engine
Roles we are looking for now (more later):
- Editor & UI & Tools Programmer
- Generalist Programmer
Picture from the current early state of the editor:
Source code: https://github.com/petiaccja/Inline-Engine
If you have the passion to build game engines / games, write an e-mail to: InlineEngine@gmail.com
I am working on making a DX12/Vulkan framework run on the CPU and GPU in parallel.
I decided to finish the Vulkan implementation before DX12. (Eat the veggies before having the steak XDD)
I have a few questions about the usage of ID3D12CommandAllocator:
- "Different sized command lists should use different allocators so the allocators don't grow to worst-case size." Does this mean I need to know the size of a command list before calling CreateCommandList, so I can pass the appropriate allocator?
- "Try to keep the number of allocators to a minimum." What are the pitfalls if I create one command allocator per list? That way each allocator never grows larger than its list needs, and there is no need for synchronization.
Most of the examples I have seen just use a pool of allocators with fence-based synchronization. I could modify that to also take command list size into account, but before doing so, any advice would really help me understand the internal workings of ID3D12CommandAllocator better.
I would like to use MinMax filtering (D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT) in compute shader.
I am trying to compile a compute shader (cs_5_0) and encounter an error: "error X4532: cannot map expression to cs_5_0 instruction set".
I tried compiling the shader with the cs_6_0 target and got "unrecognized compiler target cs_6_0". I don't really understand this error, as cs_6_0 is supposed to be supported.
According to MSDN, D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT should "Fetch the same set of texels as D3D12_FILTER_MIN_MAG_MIP_POINT and instead of filtering them return the maximum of the texels. Texels that are weighted 0 during filtering aren't counted towards the maximum. You can query support for this filter type from the MinMaxFiltering member in the D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure".
I'm not sure that documentation applies here, since it talks about Direct3D 11; D3D12_FEATURE_DATA_D3D12_OPTIONS does not seem to provide this kind of check.
The Direct3D device is created with feature level D3D_FEATURE_LEVEL_12_0, and I am using VS 2015.