pseudomarvin

CreateBuffer throws _com_error exception and I can't catch it


I have an application (running in VS2015) that procedurally creates 3D meshes and then renders them. I set it to run with a lot of different settings in a loop doing this:

 

1. Create a mesh on the CPU

2. Upload it to the GPU and release it on the CPU

3. Render it and take a screenshot

4. Deallocate it from the GPU

 

When I run it in debug mode it seems to run fine. But when I run it in release mode, after about 200 iterations (although this varies), I get a _com_error exception when calling ID3D11Device::CreateBuffer, and VS breaks on the exception. It might be a GPU memory problem, because it often happens for large models (200 MB), but I am not sure. I have used a process monitor to watch GPU memory usage while the app is running, and it seems to behave correctly (there are no memory leaks and usage never goes above 300 MB).

 

My problem is that I can't get to the error message from the exception. I have written a try/catch block, but the program completely ignores it:

	try
	{
		device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &model->vertexBuffer);
	}
	catch (_com_error &e)
	{
		std::string description = e.Description();
		DebugOutput("_com_error exception thrown: \n %s\n", description.c_str());
		LPCTSTR errMsg = e.ErrorMessage();
		HRESULT h = e.Error();
		DebugOutput("%d\n", h);
		DebugOutput("%s\n", errMsg);
	}

Nothing is ever printed out, and if I set a breakpoint in the catch block it is never hit. When I let VS continue after breaking, it skips the catch block and goes on, and the NULL vertexBuffer then causes a crash.

 

Am I handling the exception correctly?

Edited by pseudomarvin

Thanks, I have deleted the try/catch block and inserted a macro for checking the HRESULT values, and it displays the error message correctly (something about driver failure). It seems that you are right, Happy SDE: there probably isn't a contiguous chunk of memory this large on the GPU.

 

Frankly, I hadn't thought of the approach you are suggesting (one big mesh allocated once); that's a pretty good idea.

 

I've tried implementing the update using UpdateSubresource (I will change it to Map/Unmap later since it's more appropriate in this case), but I keep getting access violations. I am probably not passing the parameters correctly, although I've read the MSDN documentation for this function. The access violation occurs because UpdateSubresource reads more from the newMesh->vertices array than I expect it to.

 

In this code, model->vertexBuffer (an ID3D11Buffer*) has been initialized with a large amount of empty memory, and now I am trying to update it with an actual mesh (newMesh contains the data). My reasoning for the parameters: the box is NULL (no offset), SrcRowPitch = the total amount of data passed, SrcDepthPitch = 1 since there is just one array of data. Could someone correct me, please?

UINT vertexDataSize = newMesh->vertexCount * newMesh->vertexStride;
renderer->deviceContext->UpdateSubresource(model->vertexBuffer, 0, NULL, newMesh->vertices, vertexDataSize, 1);

 

Edited by pseudomarvin


1. I have this code for updating a texture:

  d3dContext->UpdateSubresource( tex, 0, nullptr, bitData, static_cast<UINT>(rowBytes), static_cast<UINT>(numBytes) );

It seems that your last parameter (the 1) is not correct.

 

 

2. Here is an example of an update via Map/Unmap:

Microsoft::WRL::ComPtr<ID3D11Buffer>         m_lineVertices;

std::vector<XMFLOAT4> lines;

//Update buffer
D3D11_MAPPED_SUBRESOURCE res;
_check_hr(m_context->Map(m_lineVertices.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &res));

XMFLOAT4* pData = reinterpret_cast<XMFLOAT4*>(res.pData);
memcpy(pData, lines.data(), lines.size() * sizeof(XMFLOAT4));

m_context->Unmap(m_lineVertices.Get(), 0);

And here is a suggestion: when you run your application in Debug mode with VS attached, and the device was created with the D3D11_CREATE_DEVICE_DEBUG flag, the DirectX runtime will output all errors to your VS output window.

 

It's pretty helpful sometimes.

 

BTW, sometimes if your app is 32-bit, there may be no contiguous block of process memory left (for a vector, for example) that can be allocated via alloc/new.

Edited by Happy SDE


My reasoning for the parameters: box is NULL (no offset), SrcRowPitch = the total amount of data passed, SrcDepthPitch = 1 since there is just one array of data.


Depth pitch is not the number of depth slices, but rather the number of bytes between each depth slice.  In the MSDN samples, when updating a non-volume resource, depth pitch is set to 0, so you should follow that example and set it to 0 yourself.  The documentation could, of course, have been clearer about that.

