jdub

Moving Data from CPU to a Structured Buffer


I am building a ray tracer.  I have a structured buffer that holds elements describing the geometry/materials of my scene.  I want to supply this geometry from the CPU to my compute shader (not through a constant buffer, because there is too much geometry data).

 

The way that immediately comes to mind is to create my structured buffer as a dynamic buffer and use Map()/Unmap() to write data to it.  However, apparently dynamic resources cannot be directly bound to the pipeline as shader resources.  

What is a good way to do this?



However, apparently dynamic resources cannot be directly bound to the pipeline as shader resources.  

 

What makes you say that? You can certainly do this, I've done it myself many times. See the docs here for confirmation (scroll down to the table at the bottom).
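A per-frame CPU update of such a dynamic buffer would then go through Map/Unmap. A minimal sketch (the `Geometry` type and helper name are illustrative, assuming the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE):

```cpp
#include <d3d11.h>
#include <cstring>

// Illustrative element type; substitute your own geometry struct.
struct Geometry { float data[16]; };

// Copies 'count' elements from CPU memory into a DYNAMIC structured buffer.
void UpdateGeometryBuffer(ID3D11DeviceContext* context,
                          ID3D11Buffer* buffer,   // created with D3D11_USAGE_DYNAMIC
                          const Geometry* data, UINT count)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    // WRITE_DISCARD hands back a fresh memory region, so the GPU can keep
    // reading the previous contents without stalling the pipeline.
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, data, sizeof(Geometry) * count);
        context->Unmap(buffer, 0);
    }
}
```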


Hmm... it appears I misread the error message, though the result is still a little cryptic to me.  Here is the code I'm calling:

HRESULT CreateStructuredBuffer(
		ID3D11Device *device,
		UINT element_size,
		UINT count,
		void *initial_data,
		ID3D11Buffer **out)
	{
		*out = NULL;
		D3D11_BUFFER_DESC desc;
		ZeroMemory(&desc, sizeof(D3D11_BUFFER_DESC));
		desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
		desc.ByteWidth = element_size * count;
		desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
		desc.StructureByteStride = element_size;
		desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
		desc.Usage = D3D11_USAGE_DYNAMIC;
		
		if (initial_data)
		{
			D3D11_SUBRESOURCE_DATA subresource_data;
			ZeroMemory(&subresource_data, sizeof(subresource_data));
			subresource_data.pSysMem = initial_data; // the pointer itself, not its address
			return device->CreateBuffer(&desc, &subresource_data, out);
		}
		else
		{
			return device->CreateBuffer(&desc, NULL, out); 
		}
	}

And here is the error message I get when I try to call CreateBuffer:

D3D11 ERROR: ID3D11Device::CreateBuffer: A D3D11_USAGE_DYNAMIC Resource cannot be bound to certain parts of the graphics pipeline, but must have at least one BindFlags bit set. The BindFlags bits (0x88) have the following settings: D3D11_BIND_STREAM_OUTPUT (0), ...


You can't use D3D11_BIND_UNORDERED_ACCESS, since that implies that the GPU can write to the buffer.
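In other words, the helper above should work once the UAV flag is dropped. A hedged sketch of the corrected version (GPU writes would instead require D3D11_USAGE_DEFAULT plus the UAV flag):

```cpp
#include <d3d11.h>

// Creates a CPU-writable structured buffer that can be bound as an SRV.
// D3D11_BIND_UNORDERED_ACCESS is deliberately absent: it is not allowed
// in combination with D3D11_USAGE_DYNAMIC.
HRESULT CreateDynamicStructuredBuffer(ID3D11Device* device,
                                      UINT element_size, UINT count,
                                      const void* initial_data,
                                      ID3D11Buffer** out)
{
    *out = NULL;
    D3D11_BUFFER_DESC desc = {};
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;   // SRV only
    desc.ByteWidth = element_size * count;
    desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = element_size;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    desc.Usage = D3D11_USAGE_DYNAMIC;

    D3D11_SUBRESOURCE_DATA sd = {};
    sd.pSysMem = initial_data;                     // the pointer, not its address
    return device->CreateBuffer(&desc, initial_data ? &sd : NULL, out);
}
```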


I figured out that my GPU (GTX 690) can create new ID3D11Buffers so quickly that, instead of updating an old buffer with new data using ID3D11DeviceContext::UpdateSubresource, I can just call ID3D11Device::CreateBuffer with initial data. Since I only touch ID3D11Device and not ID3D11DeviceContext, the whole operation can be done on another thread.

void DXDevice::SetMatrices(std::vector<DirectX::XMFLOAT4X4A> &Matrices)
{
	auto pBuf = CreateStructuredBufferResource(Matrices.data(), Matrices.size() * sizeof(DirectX::XMFLOAT4X4A));
	if (pBuf == nullptr)
		throw std::exception("CreateStructuredBufferResource failed");
	
	m_pImmediateContext->VSSetShaderResources(0, 1, &pBuf.p); // don't thread this
}
CComPtr<ID3D11ShaderResourceView> DXDevice::CreateStructuredBufferResource(const void* pDataSrc, UINT BufferSize)
{
	CComPtr<ID3D11ShaderResourceView> pShaderResourceView{ nullptr };
	CComPtr<ID3D11Buffer> pBuffer = CreateBufferResource(pDataSrc, BufferSize, D3D11_BIND_SHADER_RESOURCE, D3D11_USAGE_DEFAULT, D3D11_RESOURCE_MISC_BUFFER_STRUCTURED);

	if (pBuffer == nullptr)
		return nullptr;

	try
	{
		D3D11_SHADER_RESOURCE_VIEW_DESC rd;
		ZeroMemory(&rd, sizeof(rd));
		rd.ViewDimension = D3D11_SRV_DIMENSION_BUFFEREX;
		rd.BufferEx.NumElements = BufferSize / sizeof(DirectX::XMFLOAT4X4A);

		HR(m_pDevice->CreateShaderResourceView(pBuffer, &rd, &pShaderResourceView));
	}
	catch (std::exception &e)
	{
		WriteFile("error.log", e.what());
		return nullptr;
	}

	return pShaderResourceView;
}
CComPtr<ID3D11Buffer> DXDevice::CreateBufferResource(const void* pDataSrc, UINT BufferSize, UINT BindFlags, D3D11_USAGE Usage, UINT MiscFlags)
{
	CComPtr<ID3D11Buffer> pBuffer = nullptr;

	try
	{
		if (BufferSize == 0)
			throw std::exception("The requested buffer resource is of size 0");

		D3D11_SUBRESOURCE_DATA sd;
		ZeroMemory(&sd, sizeof(sd));
		sd.pSysMem = pDataSrc;

		D3D11_BUFFER_DESC bd;
		ZeroMemory(&bd, sizeof(bd));
		bd.Usage = Usage;
		bd.ByteWidth = BufferSize;
		bd.BindFlags = BindFlags;
		bd.MiscFlags = MiscFlags;
		if (MiscFlags == D3D11_RESOURCE_MISC_BUFFER_STRUCTURED)
			bd.StructureByteStride = sizeof(DirectX::XMFLOAT4X4A);

		HR(m_pDevice->CreateBuffer(&bd, pDataSrc ? &sd : nullptr, &pBuffer));
	}
	catch (std::exception &e)
	{
		WriteFile("error.log", e.what());
		return nullptr;
	}

	return pBuffer;
}
And the matching HLSL declaration:

StructuredBuffer<float4x4> Matrices : register(t0);



I figured out that my GPU (GTX 690) can create new ID3D11Buffers so quickly that, instead of updating an old buffer with new data using ID3D11DeviceContext::UpdateSubresource, I can just call ID3D11Device::CreateBuffer with initial data. Since I only touch ID3D11Device and not ID3D11DeviceContext, the whole operation can be done on another thread.

That is generally not a good idea, as you risk fragmenting your video memory pool.  It might not be an issue with your current video card, but doing things like this usually leads to degraded performance over time.  If you have ever had to track down a bug like this, you will know it is no fun!
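One way to avoid the per-frame allocations is to reuse a single buffer. A sketch, assuming a DEFAULT-usage buffer (m_pMatrixBuffer, with a matching SRV m_pMatrixSRV; both names are hypothetical) is created once at startup and sized for the maximum matrix count:

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

// Reuses one pre-created DEFAULT-usage buffer instead of calling
// CreateBuffer every frame, so the video-memory allocator is not churned.
void DXDevice::SetMatrices(std::vector<DirectX::XMFLOAT4X4A>& Matrices)
{
    // Restrict the copy to the bytes actually supplied this frame.
    D3D11_BOX box = {};
    box.right  = static_cast<UINT>(Matrices.size() * sizeof(DirectX::XMFLOAT4X4A));
    box.bottom = 1;
    box.back   = 1;

    // UpdateSubresource copies the CPU data into the existing GPU buffer.
    m_pImmediateContext->UpdateSubresource(
        m_pMatrixBuffer, 0, &box, Matrices.data(), 0, 0);

    m_pImmediateContext->VSSetShaderResources(0, 1, &m_pMatrixSRV.p);
}
```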

 

Have you left your application running for a long period of time?


Have you left your application running for a long period of time?

 

I ran a 2-hour demo in windowed mode with stable FPS and stable memory usage on both CPU and GPU. But as soon as Windows shuts off the display (power-saving setting), it takes just a couple of minutes to leak all memory and the application crashes (CreateStructuredBufferResource returns nullptr). I guess the memory is not getting released once the displays are turned off.

Edited by Tispe


On that note, does anyone know why ID3D11Device::CreateBuffer returns E_OUTOFMEMORY after a while when the monitor shuts off? It might be that the CComPtr<ID3D11Buffer> or CComPtr<ID3D11ShaderResourceView> objects are not being released while the monitor is off for some reason... 
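One way to check whether objects really are being kept alive is the debug layer's live-object report. A sketch, assuming the device was created with the D3D11_CREATE_DEVICE_DEBUG flag (the report lands in the debugger output window):

```cpp
#include <d3d11.h>

// Lists every live (unreleased) object on the device, with reference counts,
// which helps confirm whether buffers or SRVs are leaking over time.
void ReportLiveObjects(ID3D11Device* device)
{
    ID3D11Debug* debug = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                         reinterpret_cast<void**>(&debug))))
    {
        debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
        debug->Release();
    }
}
```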

Edited by Tispe
