DX9 to DX11 conversion, getting a warning: Index buffer has not enough space!

Hello,

I have been converting a rendering system from using DX9 to DX11. At this point, everything I am currently testing is visually working. Unfortunately I am getting a DX API warning: "D3D11: WARNING: ID3D11DeviceContext::DrawIndexed: Index buffer has not enough space! [ EXECUTION WARNING #359: DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL ]"

I have found that getting this warning depends on which meshes I load and the order they are loaded in. If I load meshes A, B, and C in that order, I get the issue when I render B or C. Mesh A appears to be the cause, as anything loaded after it will cause a call to DrawIndexed to give that warning. From the warning I would think I am somehow requesting to draw with more indices than are present, but this does not appear to be the case. All three meshes use the same index and vertex buffers. If I load mesh A's vertices but not its indices the issue remains, but if I load mesh A's indices but not its vertices the issue goes away.

I am setting the vertex and index buffers to be used for rendering per shader, then per object calling DrawIndexed with a start vertex and start index. Here are some snippets of the rendering code:

Per shader -
...

    UINT stride = sizeof(VERTEX_POSNORMTANTEX);
    UINT offset = 0;
    ID3D11Buffer *const vertexBufferPtr =
        VertexBufferManager::GetReference().GetPosNormTanTexBuffer().GetVertexBuffer();
    Renderer::theContextPtr->IASetVertexBuffers(0, 1, &vertexBufferPtr, &stride, &offset);
    Renderer::theContextPtr->IASetIndexBuffer(IndexBuffer::GetReference().GetIndices(), DXGI_FORMAT_R32_UINT, 0);
...
Per Object -
// Assuming we are using triangle lists for now
Renderer::theContextPtr->DrawIndexed(mesh.GetPrimitiveCount() * 3, mesh.GetStartIndex(), mesh.GetStartVertex());

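For what it's worth, my understanding of the check the debug layer performs is roughly the following (just a sketch, using ID3D11Buffer::GetDesc and the same mesh accessors as above; the math assumes DXGI_FORMAT_R32_UINT indices and the zero offset passed to IASetIndexBuffer):

// Sketch of the bounds check the warning implies: the last index read by
// DrawIndexed must fall inside the bound index buffer.
D3D11_BUFFER_DESC ibDesc;
IndexBuffer::GetReference().GetIndices()->GetDesc(&ibDesc);

UINT indexCount   = mesh.GetPrimitiveCount() * 3;
UINT lastByteRead = (mesh.GetStartIndex() + indexCount) * sizeof(UINT);

// If this fires, DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL would be expected.
assert(lastByteRead <= ibDesc.ByteWidth);
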
I have found a workaround, but it does not seem like a good way to do things as it requires setting the index buffer per object. The following code gets rid of the issue:

Per Object -
...
Renderer::theContextPtr->IASetIndexBuffer(IndexBuffer::GetReference().GetIndices(), DXGI_FORMAT_R32_UINT, mesh.GetStartIndex() * 4);
...
// Assuming we are using triangle lists for now
Renderer::theContextPtr->DrawIndexed(mesh.GetPrimitiveCount() * 3, 0, mesh.GetStartVertex());
...
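
(As far as I know, the third argument to IASetIndexBuffer is a byte offset into the buffer, which is why the start index is multiplied by 4 here, the size of one DXGI_FORMAT_R32_UINT index.)
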
Mesh A is some test geometry that I have been using for years with OGL, DX9, and now DX11 projects, so I feel it is unlikely, but not impossible, that there is something wrong with the data. I can recreate the issue with some other meshes that worked with the DX9 version as well. All my vertices and indices are copied into vector containers until I am done loading, then I create the index and vertex buffers from this data.

I have discovered that I can hard-code mesh.GetPrimitiveCount() to return 0, so nothing is drawn, and I still get the warning.

Here are the snippets of how I create my index buffer:

For each load -

UINT IndexBuffer::AddIndices(UINT startVert, const UINT *_indices, UINT _numIndices)
{
    // Test if this buffer has already been finalized
    assert(!indexBufferPtr);
    size_t ret = indices.size();
    // Implement a solution for the Renderer Lab
    for(size_t i = 0; i < _numIndices; ++i)
        indices.push_back(_indices[i]);// + startVert);
    return (UINT)ret;
}
The return value is the StartIndexLocation for this particular mesh.
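
For context, the loading code uses that return value roughly like this (just a sketch; LoadMeshIndices, SetStartIndex, and SetPrimitiveCount are stand-ins for whatever the real loader and mesh setters look like):

// Rough sketch of the load flow: append this mesh's indices to the shared
// buffer and remember where they start for the later DrawIndexed call.
void LoadMeshIndices(Mesh &mesh, const UINT *meshIndices, UINT numIndices)
{
    UINT startIndex = IndexBuffer::GetReference().AddIndices(
        mesh.GetStartVertex(), meshIndices, numIndices);

    mesh.SetStartIndex(startIndex);          // hypothetical setter
    mesh.SetPrimitiveCount(numIndices / 3);  // triangle lists assumed
}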

When I am done loading assets -

void IndexBuffer::Finalize()
{
    D3D11_BUFFER_DESC ibd;
    ibd.Usage = D3D11_USAGE_IMMUTABLE;
    ibd.ByteWidth = sizeof(UINT) * (UINT)indices.size();
    ibd.BindFlags = D3D11_BIND_INDEX_BUFFER;
    ibd.CPUAccessFlags = 0;
    ibd.MiscFlags = 0;
    ibd.StructureByteStride = 0;

    D3D11_SUBRESOURCE_DATA iinitData;
    iinitData.pSysMem = &indices[0];
    HR(Renderer::theDevicePtr->CreateBuffer(&ibd, &iinitData, &indexBufferPtr));

    // Do not need to keep a local copy of indices
    testSize = indices.size();
    indices.clear();
}
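
To rule out the buffer being created smaller than I expect, I can dump the finalized size and compare it by hand (a throwaway sketch; DumpSize is not part of the real class):

// Debug-only sketch: compare the GPU buffer's ByteWidth against the
// CPU-side index count that was uploaded in Finalize().
void IndexBuffer::DumpSize() const
{
    D3D11_BUFFER_DESC desc;
    indexBufferPtr->GetDesc(&desc);

    char msg[128];
    sprintf_s(msg, "IB ByteWidth = %u, expected = %u\n",
              desc.ByteWidth, (UINT)(testSize * sizeof(UINT)));
    OutputDebugStringA(msg);
}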

I have a vertex buffer class that works the same way, except it is a template so it can handle different input layouts. An instance of the vertex buffer class is made for each unique vertex layout. In my example, meshes A, B, and C all use the same vertex buffer.
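
Roughly, the vertex buffer side looks like this (a trimmed-down sketch with abbreviated names; the real class also has a Finalize like the index buffer's):

// Trimmed-down sketch of the templated vertex buffer; one instance exists
// per unique vertex layout (e.g. VERTEX_POSNORMTANTEX).
template <typename VertexT>
class VertexBufferT
{
    std::vector<VertexT> verts;              // CPU-side copy kept until Finalize()
    ID3D11Buffer *vertexBufferPtr = nullptr;

public:
    // Returns the StartVertexLocation (BaseVertexLocation) for this mesh,
    // mirroring what AddIndices does for indices.
    UINT AddVerts(const VertexT *_verts, UINT _numVerts)
    {
        assert(!vertexBufferPtr);            // must not be finalized yet
        size_t ret = verts.size();
        for (size_t i = 0; i < _numVerts; ++i)
            verts.push_back(_verts[i]);
        return (UINT)ret;
    }

    ID3D11Buffer *GetVertexBuffer() const { return vertexBufferPtr; }
};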
