CreateBuffer throws _com_error exception and I can't catch it

6 comments, last by 21st Century Moose 7 years, 12 months ago

I have an application (running in VS2015) that procedurally creates 3D meshes and then renders them. I set it to run with a lot of different settings in a loop doing this:

1. Create a mesh on the CPU

2. Upload it to the GPU and release it on the CPU

3. Render it and take a screenshot

4. Deallocate it from the GPU

When I run it in debug mode it seems to run fine. But when I run it in release mode, after about 200 iterations (although this varies) I get a _com_error exception when calling ID3D11Device::CreateBuffer and VS breaks on the exception. It might be a GPU memory problem, because it often happens for large models (200MB), but I am not sure. I have used a process monitor to watch GPU memory usage while the app is running and it seems to behave correctly (there are no memory leaks and the usage never goes above 300MB).

My problem is that I can't get to the error message from the exception. I have written a try/catch block but the program completely ignores it:


	try
	{
		device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &model->vertexBuffer);
	}
	catch (_com_error &e)
	{
		std::string description = e.Description();
		DebugOutput("_com_error exception thrown: \n %s\n", description.c_str());
		LPCTSTR errMsg = e.ErrorMessage();
		HRESULT h = e.Error();
		DebugOutput("%d\n", h);
		DebugOutput("%s\n", errMsg);
	}

Nothing is ever printed out, and if I set a breakpoint in the catch block it is never reached. When I let VS continue after breaking, it skips the catch block and goes on to where the NULL vertexBuffer causes a crash.

Am I handling the exception correctly?


You shouldn't be handling exceptions from this. At all.

ID3D11Device::CreateBuffer returns a HRESULT and the returned values are well-defined in the documentation. The correct way to handle D3D errors is the documented way: check the HRESULT returned by the call.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Probably you are trying to allocate one big contiguous chunk of memory, and there is no piece of memory that large available on your video card.

Probably your exception is coming from somewhere else.

If you think that it is a _com_error, you can set an exception filter in VS: Debug => Windows => Exception Settings.

Check the box for C++ Exceptions => _com_error.

In release mode hit F5 (run with the debugger attached).

When this type of exception is thrown, the debugger will stop execution at the exact place.

If it is not a _com_error, just enable filtering for all exceptions.

If it is not an exception at all, look at the HRESULT returned by CreateBuffer().

If DebugOutput does not print anything in Release mode (because it is a macro, or is optimized out), you can try outputting your text with MessageBox().

The other thought: why don't you allocate one buffer big enough for any mesh once, and reuse it without deallocating?

For what it's worth, D3D uses _com_errors sometimes internally. You're just seeing a first-chance exception, which is handled and converted into an HRESULT before returning from the API.

Thanks, I have deleted the try/catch block and inserted a macro for checking the HRESULT values, and it displays the error message correctly (something about driver failure). It seems that you are right, Happy SDE, there probably isn't a contiguous chunk of memory this large on the GPU.

Frankly, I hadn't thought of the approach you are suggesting (one big buffer allocated once); that's a pretty good idea.

I've tried implementing it using UpdateSubresource (I will change it to Map/Unmap later since that's more appropriate in this case) but I keep getting access violations. I am probably not passing the parameters correctly, although I've read the MSDN documentation for this function. The access violation occurs because UpdateSubresource reads more from the newMesh->vertices array than I expect it to.

In this code: model->vertexBuffer (*ID3D11Buffer) has been initialized with a large amount of empty memory and now I am trying to update it with an actual mesh (newMesh contains the data). My reasoning for the parameters: box is NULL (no offset), SrcRowPitch = the total amount of data passed, SrcDepthPitch = 1 since there is just one array of data. Could someone correct me please?


UINT vertexDataSize = newMesh->vertexCount * newMesh->vertexStride;
renderer->deviceContext->UpdateSubresource(model->vertexBuffer, 0, NULL, newMesh->vertices, vertexDataSize, 1);

1. I have this code for updating a texture:


  d3dContext->UpdateSubresource( tex, 0, nullptr, bitData, static_cast<UINT>(rowBytes), static_cast<UINT>(numBytes) );

It seems that your last parameter (1) is not correct.

2. Here is an example on update via map/unmap:


// The buffer must be created with D3D11_USAGE_DYNAMIC and
// D3D11_CPU_ACCESS_WRITE for D3D11_MAP_WRITE_DISCARD to work.
Microsoft::WRL::ComPtr<ID3D11Buffer> m_lineVertices;

std::vector<XMFLOAT4> lines;

// Update buffer: DISCARD hands back a fresh region, so rewrite it fully.
D3D11_MAPPED_SUBRESOURCE res;
_check_hr(m_context->Map(m_lineVertices.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &res));

XMFLOAT4* pData = reinterpret_cast<XMFLOAT4*>(res.pData);
memcpy(pData, lines.data(), lines.size() * sizeof(XMFLOAT4));

m_context->Unmap(m_lineVertices.Get(), 0);

And here is a suggestion: when you run your application in Debug mode with VS attached, and the device was created with the D3D11_CREATE_DEVICE_DEBUG flag,

the DirectX runtime will output all errors to your VS output window.

It's pretty helpful sometimes.

BTW, if your app is 32-bit, there sometimes may be no contiguous chunk of process memory (for a vector, for example) large enough to be allocated via malloc/new.

Thanks a lot, Happy SDE, I got it working after all using your Map/Unmap approach. I am already using the debug layer; it is pretty useful indeed. And thanks for the tip regarding process memory, I had not thought of that before.

My reasoning for the parameters: box is NULL (no offset), SrcRowPitch = the total amount of data passed, SrcDepthPitch = 1 since there is just one array of data.


Depth pitch is not the number of depth slices, but rather the number of bytes between each depth slice. In the MSDN samples, when updating a non-volume resource, depth pitch is set to 0, so you should follow that example and set it to 0 yourself. The documentation could, of course, have been clearer about that.


This topic is closed to new replies.
