D3D12 atrocious performance

20 comments, last by mlfarrell00 8 years, 8 months ago

I'm getting about 10 FPS with only about 20 draw calls of very small meshes.

Clearly I'm doing something very very wrong.

Is there an issue with populating a single command list to draw 20 items? I figured even if that isn't optimal, it shouldn't be THIS terrible. At this point my iPad running OpenGL ES is outperforming my d3d12 engine by a massive margin.

The following code runs before most of my draw calls to set up necessary state:

Sorry for the messed up tab spacings


    void CoreStateMachine::prepareToDraw()
    {
        auto cl = continueRenderingCommands();
        cl->SetGraphicsRootSignature(rootSignature.Get());

        //prepare any buffers required by current shader program
        auto prog = currentProgram;

        if(prog)
        {
            if(!prog->globalCBuffer)
            {
                prog->globalCBuffer = make_shared<BufferArray>();
                prog->globalCBufferDirty = true;
            }
            else
            {
                //this should be cleaned up at some point
                preserveResourceUntilRenderComplete(prog->globalCBuffer->uploadBuffers[0]);
                preserveResourceUntilRenderComplete(prog->globalCBuffer->buffers[0]);
                prog->globalCBuffer = make_shared<BufferArray>();
                prog->globalCBufferDirty = true;
            }

            if(prog->globalCBufferDirty)
            {
                prog->globalCBuffer->provideData(0, prog->globalCBufferSize, prog->globalCBufferData, BufferArray::UT_DYNAMIC);
                prog->globalCBufferDirty = false;
            }

            cl->SetGraphicsRootConstantBufferView(2, prog->globalCBuffer->buffers[0]->GetGPUVirtualAddress());
        }

        auto currentVA = VertexArray::current();
        assert(currentVA != nullptr);
        currentVA->prepareForDraw();

        device->CopyDescriptorsSimple(textureTableSize, cbSrvHeaps[descriptorHeapIndex]->hCPU(textureTableIndex), cpuCbSrvHeap->hCPU(0), D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
        device->CopyDescriptorsSimple(textureTableSize, samplerHeaps[descriptorHeapIndex]->hCPU(textureTableIndex), cpuSamplerHeap->hCPU(0), D3D12_DESCRIPTOR_HEAP_TYPE_SAMPLER);

        if(descriptorHeapsChanged)
        {
            ID3D12DescriptorHeap *descHeaps[] = { cbSrvHeaps[descriptorHeapIndex]->get(), samplerHeaps[descriptorHeapIndex]->get() };
            cl->SetDescriptorHeaps(ARRAYSIZE(descHeaps), descHeaps);
            descriptorHeapsChanged = false;
        }

        //might be smarter to set this up earlier if I can.. not sure what the tradeoff is here
        if(pipelineState)
        {
            preserveResourceUntilRenderComplete(pipelineState);
            pipelineState = nullptr;
        }
        ThrowIfFailed(device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pipelineState)));

        cl->SetPipelineState(pipelineState.Get());

        if(descriptorTablesChanged)
        {
            cl->SetGraphicsRootDescriptorTable(0, cbSrvHeaps[descriptorHeapIndex]->hGPU(textureTableIndex));
            cl->SetGraphicsRootDescriptorTable(1, samplerHeaps[descriptorHeapIndex]->hGPU(textureTableIndex));
            descriptorTablesChanged = false;
        }
    }

Unless my reading of your code is wrong, you appear to be creating a PSO before every draw. If that's the case, that's the reason for your bad performance. PSOs should be created well in advance of a draw call and cached for future use.
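
For illustration, a minimal sketch of that kind of PSO cache, keyed on a hash of the pipeline state (hashPsoDesc and ThrowIfFailed are assumed helpers, not code from this thread):

    #include <cstdint>
    #include <unordered_map>
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Hypothetical helpers, not shown: ThrowIfFailed is the same error-check helper the
    // original code uses; hashPsoDesc would hash shaders, blend/depth/raster state, formats, etc.
    void ThrowIfFailed(HRESULT hr);
    uint64_t hashPsoDesc(const D3D12_GRAPHICS_PIPELINE_STATE_DESC &desc);

    // One cache for the lifetime of the device; each PSO is created once per unique
    // state combination (ideally at load time) and reused for every draw after that.
    std::unordered_map<uint64_t, ComPtr<ID3D12PipelineState>> psoCache;

    ID3D12PipelineState *getOrCreatePSO(ID3D12Device *device,
                                        const D3D12_GRAPHICS_PIPELINE_STATE_DESC &desc)
    {
        const uint64_t key = hashPsoDesc(desc);
        auto it = psoCache.find(key);
        if(it != psoCache.end())
            return it->second.Get();   // cache hit: no CreateGraphicsPipelineState call

        ComPtr<ID3D12PipelineState> pso;
        ThrowIfFailed(device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso)));
        return psoCache.emplace(key, pso).first->second.Get();
    }

prepareToDraw() could then call getOrCreatePSO(device, psoDesc) (or, better still, resolve the PSO once when the shader/material combination is created) instead of calling CreateGraphicsPipelineState before every draw.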

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Thanks. You're so right. I just came to this realization. My architecture isn't set up well for caching PSOs, but after refactoring some stuff, fingers crossed, I'll get some good draw-call performance.

In general, the rule of thumb consensus seems to be this:

In D3D11 and prior, *late decisions* were good because you had the most information with which to batch effectively to minimize state changes.

In D3D12 (and other new APIs) that's turned on its head -- you want to set down decisions as soon as you can practically make them, and re-use them as much as possible.

This especially means that naively porting D3D11-optimized patterns is a recipe for disaster on D3D12. Here's a presentation from SIGGRAPH 2015 about Unity's experience bringing their engine forward from D3D11/Classic-GL style towards D3D12-style: http://aras-p.info/texts/files/201508-SIGGRAPH-PortingUnityToNewAPIs.pdf

throw table_exception("(╯°□°)╯︵ ┻━┻");

omfg. here's another tip. DON'T test performance with debug builds. I forgot how bad the performance of MSVC-generated debug builds is. Doing a release build shot the performance of the whole system up to max FPS.

Spent the last hour chasing around a phantom


I wrote a massive nest of inter-related entities involving thousands of STL containers and a half-assed messaging/scheduling system for school once. The time to simulate a single day in debug was about 2 minutes. The time in release was about 10 seconds.

void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

It may be time to give up on D3D12. My OpenGL system massively outperforms it with a drastically simpler architecture, even on a freaking iPad. I thought I'd be able to get performance gains without dicking around too much - that was the appeal of D3D12 and the coming Vulkan to me. But man... 7 days in a row of staying up 'til 2 AM... and I still don't have it. This just plainly is not worth it for me.

My system changes "uniform" global constants very often and at unpredictable times. At a high level, the kind of "precomputation" that D3D12 would need to be fast just isn't there. It feels like instanced rendering all over again. I can't seem to find a way to efficiently copy the needed cbuffer data before draw calls in a way that doesn't hurt performance.

I know my code is suboptimal, but I expected to outperform opengl at the very least with this kind of baseline.

Very disappointing...

Without trying to be too dismissive of your technical abilities, I feel pretty confident in saying that the fault probably lies at your door rather than with any shortcomings in the API.

The ability to rapidly "map" and set new constants is one of the key areas that has become a lot faster in D3D12. The simplest scheme for updating constants efficiently is to allocate two chunks of memory, each large enough to hold all the constants that need to be written for a single frame. You can persistently map a buffer for the lifetime of the application and need only memcpy your constants to a monotonically increasing address during the frame, then switch to the other of the two buffers for the following frame.
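
As a rough sketch of that scheme (the buffer size, names, and helpers here are assumptions, not anything from this thread):

    #include <cstdint>
    #include <cstring>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include "d3dx12.h"   // CD3DX12_* helpers from the D3D12 samples

    using Microsoft::WRL::ComPtr;

    void ThrowIfFailed(HRESULT hr);   // same error-check helper as in the original code

    // Two persistently-mapped upload buffers, one per frame in flight; the size is
    // an assumed budget, not a number from the thread.
    static const size_t     FRAME_BYTES = 16 * 1024 * 1024;
    ComPtr<ID3D12Resource>  frameBuffers[2];
    uint8_t                *mappedPtr[2] = {};
    size_t                  frameOffset  = 0;
    uint32_t                frameIndex   = 0;

    void createFrameBuffers(ID3D12Device *device)
    {
        CD3DX12_HEAP_PROPERTIES heapProps(D3D12_HEAP_TYPE_UPLOAD);
        CD3DX12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Buffer(FRAME_BYTES);
        for(int i = 0; i < 2; i++)
        {
            ThrowIfFailed(device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&frameBuffers[i])));
            // Upload heaps may stay mapped for the lifetime of the application.
            ThrowIfFailed(frameBuffers[i]->Map(0, nullptr, reinterpret_cast<void **>(&mappedPtr[i])));
        }
    }

    // memcpy one cbuffer's worth of data into the current frame's buffer and
    // return the GPU virtual address to bind as a root CBV.
    D3D12_GPU_VIRTUAL_ADDRESS pushConstants(const void *data, size_t size)
    {
        frameOffset = (frameOffset + 255) & ~size_t(255);   // root CBVs need 256-byte alignment
        memcpy(mappedPtr[frameIndex] + frameOffset, data, size);
        D3D12_GPU_VIRTUAL_ADDRESS gpuVA = frameBuffers[frameIndex]->GetGPUVirtualAddress() + frameOffset;
        frameOffset += size;
        return gpuVA;
    }

    // Once the fence shows the GPU has finished the previous frame, flip buffers.
    void nextFrame() { frameIndex ^= 1; frameOffset = 0; }

The per-draw cost then becomes a memcpy plus cl->SetGraphicsRootConstantBufferView(2, pushConstants(data, size)), with no buffer creation or descriptor work at draw time.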

Perhaps you could explain how you've implemented constant updates?

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

My system changes "uniform" global constants very often and at unpredictable times. At a high level, the kind of "precomputation" that D3D12 would need to be fast just isn't there. It feels like instanced rendering all over again. I can't seem to find a way to efficiently copy the needed cbuffer data before draw calls in a way that doesn't hurt performance.

I just use a per-frame stack allocator.
When the high-level code asks to create a cbuffer, I just malloc some regular memory for it.
When the high-level code asks to bind a cbuffer, I memcpy its contents (from my own malloc'ed RAM) into my D3D12 stack, and then give that stack pointer to D3D as the cbuffer address.
D3D requires that all the rendering state (shader programs, blend, depth, stencil, raster, etc.) is precomputed, but it still allows you to bind resources (such as cbuffers) at the last minute.
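
A rough sketch of what that bind path could look like, building on a per-frame allocator like the earlier sketch (all names here are illustrative, not the poster's actual code):

    #include <cstdint>
    #include <vector>
    #include <d3d12.h>

    // Hypothetical per-frame allocator, as in the earlier sketch: copies the bytes into
    // persistently-mapped upload memory and returns the GPU virtual address of the copy.
    D3D12_GPU_VIRTUAL_ADDRESS pushConstants(const void *data, size_t size);

    // The high-level "cbuffer" is just CPU memory owned by the engine.
    struct CpuCBuffer
    {
        std::vector<uint8_t> data;
        explicit CpuCBuffer(size_t size) : data(size) {}
    };

    // Binding copies the current contents onto the per-frame stack and hands D3D12 the
    // resulting address; root parameter 2 matches the root CBV slot in the original code.
    void bindCBuffer(ID3D12GraphicsCommandList *cl, const CpuCBuffer &cb, UINT rootParamIndex = 2)
    {
        D3D12_GPU_VIRTUAL_ADDRESS gpuVA = pushConstants(cb.data.data(), cb.data.size());
        cl->SetGraphicsRootConstantBufferView(rootParamIndex, gpuVA);
    }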

D3D12 / Vulkan have gotten rid of a lot of niceties in order to make the API as simple as possible. This means that lots of things that your graphics driver used to implement for you are now your responsibility. They're definitely going to be harder to use than GL/D3D11.

But why not both? You shouldn't be using D3D12 only, as it's only usable by your Win10 customers. Keep your GL version too.


It may be time to give up on D3D12. My OpenGL system massively outperforms it with a drastically simpler architecture, even on a freaking iPad. I thought I'd be able to get performance gains without dicking around too much - that was the appeal of D3D12 and the coming Vulkan to me.

This is very important right here. D3D12, Vulkan (and Mantle) do not exist to make a programmer's life easier; they exist to get the most out of your hardware when you need it, at the cost of the developer having to take on a lot more work to get the rendering side of your application into a stable state.

If you can comfortably do whatever your application needs to do with OpenGL/D3D11, then by all means keep using those APIs; the new generation of APIs is not meant as a replacement for these, but as an alternative.

If you feel that your application can't get to the point where you want it to be because the runtime itself is holding you back, then you'll want to spend your time getting intimate with D3D12 or Vulkan.

Remember that these new APIs are not a magic wand you can wave at your application to suddenly make things run faster (even though Microsoft's marketing department would like you to believe that), nor do they magically push your GPU into overdrive so it can suddenly process a lot more data. These APIs were designed by and for those people who needed to push things to the limit and those people who felt confident enough in their knowledge of the inner workings of both graphics hardware and modern rendering engines to be able to make better low level decisions than any D3D11/OpenGL driver ever could.

I gets all your texture budgets!

