Buffer creation

I am having a problem with buffer creation, in this case a vertex buffer. As far as I can see the code I have is correct:

void createBuffer( unsigned int bufferSize, void* data, bool dynamic, unsigned int bindFlag )
{
    m_enabled = true;

    m_bufferDescriptor.Usage = D3D11_USAGE_DEFAULT;
    m_bufferDescriptor.ByteWidth = bufferSize;
    m_bufferDescriptor.BindFlags = bindFlag;
    m_bufferDescriptor.CPUAccessFlags = 0;

    m_initData.pSysMem = data;
    m_initData.SysMemPitch = 0;
    m_initData.SysMemSlicePitch = 0;

    HRESULT hr = DeviceManager::getInstance().getDevice()->CreateBuffer( &m_bufferDescriptor, &m_initData, &m_buffer );
    if (FAILED(hr))
    {
        std::cout << "Failed to create a D3D11 Buffer object with code: " << hr << std::endl;
    }
}

//This function is called from a derived class
void VertexBuffer::createBufferAndLayout( unsigned int bufferSize, void* data, bool dynamic, bool position, bool normal, const std::vector<unsigned int>& textureCoordinateDimensions )
{
    Buffer::createBuffer(bufferSize, data, dynamic, D3D11_BIND_VERTEX_BUFFER);
}


The device is valid and created, running on an AMD Radeon HD 4850, and the device is created at feature level 10_1, which is the maximum this card supports. The device is created with the D3D11_CREATE_DEVICE_SINGLETHREADED | D3D11_CREATE_DEVICE_DEBUG flags, and the application has been added to the debug list in the DirectX control panel.
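
The device creation boils down to something like this (a sketch, not the actual code; variable names are illustrative):

D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_10_1;
D3D_FEATURE_LEVEL obtained;
ID3D11Device* device = 0;
ID3D11DeviceContext* context = 0;
HRESULT hr = D3D11CreateDevice(
    0,                                  //default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    0,                                  //no software rasterizer module
    D3D11_CREATE_DEVICE_SINGLETHREADED | D3D11_CREATE_DEVICE_DEBUG,
    &requested, 1,                      //only ask for feature level 10_1
    D3D11_SDK_VERSION,
    &device, &obtained, &context );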

This call is fine when the buffer size is 128, which happens to be 16-byte aligned. If I however call this with a buffer size of 20000, it crashes with this call stack:

atidxx32.dll!6d1d775e()
[Frames below may be incorrect and/or missing, no symbols loaded for atidxx32.dll]
atidxx32.dll!6d1d7809()
atidxx32.dll!6ce91874()
atidxx32.dll!6ce90ada()
atidxx32.dll!6ce903b3()
atidxx32.dll!6ce548b6()
atidxx32.dll!6ce57ba3()
atidxx32.dll!6ce5be26()
atidxx32.dll!6ce5b77c()
atidxx32.dll!6ce53bbb()
atidxx32.dll!6ce68471()
atiuxpag.dll!6da31d60()
d3d11.dll!CResource<ID3D11Texture3D>::CLS::FinalConstruct() + 0x18e bytes
d3d11.dll!CTexture1D::CLS::FinalConstruct() + 0x35 bytes
d3d11.dll!TCLSWrappers<CBuffer>::CLSFinalConstructFn() + 0x13 bytes
d3d11.dll!CLayeredObjectWithCLS<CUnorderedAccessView>::FinalConstruct() + 0x61 bytes
d3d11.dll!CLayeredObjectWithCLS<CBuffer>::CreateInstance() + 0x68 bytes
d3d11.dll!CDevice::CreateLayeredChild() + 0x98 bytes
d3d11.dll!CBridgeImpl<ID3D11LayeredDevice,ID3D11LayeredDevice,CLayeredObject<CDevice> >::CreateLayeredChild() + 0x22 bytes
d3d11.dll!CD3D11LayeredChild<ID3D11DeviceChild,NDXGI::CDevice,64>::FinalConstruct() + 0x2a bytes
d3d11.dll!NDXGI::CDeviceChild<IDXGISurface>::FinalConstruct() + 0x1b bytes
d3d11.dll!NDXGI::CResource::FinalConstruct() + 0x23 bytes
d3d11.dll!CLayeredObject<NDXGI::CResource>::CreateInstance() + 0x68 bytes
d3d11.dll!NDXGI::CDevice::CreateLayeredChild() + 0x135 bytes
d3d11.dll!CBridgeImpl<ID3D11LayeredDevice,ID3D11LayeredDevice,CLayeredObject<NDXGI::CDevice> >::CreateLayeredChild() + 0x22 bytes
D3D11SDKLayers.dll!CD3D11LayeredChild<ID3D11DepthStencilState,NDebug::CDevice,32>::FinalConstruct() + 0x2d bytes
D3D11SDKLayers.dll!NDebug::CDeviceChild<ID3D11Counter>::FinalConstruct() + 0x50 bytes
D3D11SDKLayers.dll!NDebug::CResource<ID3D11Buffer>::FinalConstruct() + 0x1d bytes
D3D11SDKLayers.dll!NDebug::CBuffer::FinalConstruct() + 0x15 bytes
D3D11SDKLayers.dll!CLayeredObject<NDebug::CBuffer>::CreateInstance() + 0x53 bytes
D3D11SDKLayers.dll!NDebug::CDevice::CreateLayeredChild() + 0x90 bytes
D3D11SDKLayers.dll!CBridgeImpl<ID3D11LayeredDevice,ID3D11LayeredDevice,CLayeredObject<NDebug::CDevice> >::CreateLayeredChild() + 0x22 bytes
d3d11.dll!NOutermost::CDeviceChild::FinalConstruct() + 0x29 bytes
d3d11.dll!CUseCountedObject<NOutermost::CDeviceChild>::CUseCountedObject<NOutermost::CDeviceChild>() + 0x48 bytes
d3d11.dll!CUseCountedObject<NOutermost::CDeviceChild>::CreateInstance() + 0x6e bytes
d3d11.dll!NOutermost::CDevice::CreateLayeredChild() + 0xd0 bytes
d3d11.dll!CDevice::CreateBuffer_Worker() + 0xde bytes
d3d11.dll!CDevice::CreateBuffer() + 0x18 bytes
D3D11SDKLayers.dll!NDebug::CDevice::CreateBuffer() + 0x105 bytes
> SpaceSim.exe!Buffer::createBuffer(unsigned int bufferSize=20000, void * data=0x020dca50, bool dynamic=false, unsigned int bindFlag=1) Line 46 + 0x3a bytes C++



The vertex format being used is a 3-float position, a 3-float normal, and a 2-float texcoord. In the case of 128 bytes I am only passing 4 vertices to the buffer; in the case of 20000 I am passing 625 vertices to the buffer.
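
For reference, that layout works out to a 32-byte vertex, which matches both sizes (the struct is just a sketch of the layout, not my actual code):

struct Vertex
{
    float position[3]; //12 bytes
    float normal[3];   //12 bytes
    float texcoord[2]; // 8 bytes
};                     //32 bytes per vertex

//  128 bytes / 32 bytes =   4 vertices
//20000 bytes / 32 bytes = 625 vertices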

It's still passing a 16-byte aligned pointer, but as you can see it is failing somewhere in the driver, and since I have no symbols for the driver I am really wondering what's going on. Because this is dying in the driver I am not getting any really useful information from the D3D debug runtime. I also tried creating the buffer with 225 verts and that also fails with the same call stack.

Side question:
I am new to D3D11, but I know D3D9 quite well, so most concepts probably carry over between the two. Am I right in thinking that once the buffer is created, D3D11 has made a copy of the vertex data onto the GPU, as in D3D9, and so it is safe to delete the CPU-side buffer?


What about the rest of the members of D3D11_BUFFER_DESC? Do you set them somewhere else in your code? If not, they will be uninitialized and contain garbage values. Also, if you are creating a static vertex buffer that the GPU will not write to, you should use D3D11_USAGE_IMMUTABLE instead of D3D11_USAGE_DEFAULT.

When you create a buffer, memory is allocated either in GPU memory or in system memory. If you provide initialization data, that data is copied into the newly-allocated memory, so you don't need to hang onto your own copy of the data.
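
For example, something along these lines for a static vertex buffer, with every member of the descriptor set explicitly (a sketch, reusing the parameter names from your snippet; device stands for the ID3D11Device created earlier):

D3D11_BUFFER_DESC desc;
desc.ByteWidth = bufferSize;
desc.Usage = D3D11_USAGE_IMMUTABLE; //GPU read-only, requires initial data
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
desc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA initData;
initData.pSysMem = data; //the runtime/driver copies from here at creation time
initData.SysMemPitch = 0;
initData.SysMemSlicePitch = 0;

ID3D11Buffer* buffer = 0;
HRESULT hr = device->CreateBuffer( &desc, &initData, &buffer );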



Here is the whole buffer class.

class Buffer
{
public:
    Buffer(void) : m_buffer(0), m_enabled(false)
    {
        ZeroMemory(&m_bufferDescriptor, sizeof(D3D11_BUFFER_DESC));
        ZeroMemory(&m_initData, sizeof(D3D11_SUBRESOURCE_DATA));
    }

    ~Buffer(void)
    {
        cleanup();
    }

    void cleanup()
    {
        ZeroMemory(&m_bufferDescriptor, sizeof(D3D11_BUFFER_DESC));
        ZeroMemory(&m_initData, sizeof(D3D11_SUBRESOURCE_DATA));
        m_enabled = false;

        if (m_buffer)
        {
            m_buffer->Release();
            m_buffer = 0; //avoid a double Release if cleanup() is called twice
        }
    }

    void createBuffer( unsigned int bufferSize, void* data, bool dynamic, unsigned int bindFlag )
    {
        m_enabled = true;

        m_bufferDescriptor.Usage = D3D11_USAGE_IMMUTABLE;
        m_bufferDescriptor.ByteWidth = bufferSize;
        m_bufferDescriptor.BindFlags = bindFlag;
        m_bufferDescriptor.CPUAccessFlags = 0;
        m_bufferDescriptor.MiscFlags = 0;

        m_initData.pSysMem = data;
        m_initData.SysMemPitch = 0;
        m_initData.SysMemSlicePitch = 0;

        HRESULT hr = DeviceManager::getInstance().getDevice()->CreateBuffer( &m_bufferDescriptor, &m_initData, &m_buffer );
        if (FAILED(hr))
        {
            std::cout << "Failed to create a D3D11 Buffer object with code: " << hr << std::endl;
        }
    }

    //Buffer update constructs, not implemented yet
    void map()
    {
    }

    void unmap()
    {
    }

    bool isEnabled() const { return m_enabled; }

protected:
    D3D11_BUFFER_DESC m_bufferDescriptor;
    D3D11_SUBRESOURCE_DATA m_initData;
    ID3D11Buffer* m_buffer;
    bool m_enabled;
};

Even with the immutable flag it fails in the same way, sadly enough. I'll give installing the newest drivers a go tomorrow or the day after and see if that fixes it. It's really annoying that it happens in the driver and not in D3D, as then I would at least be able to debug it.

Ah, cheers for clearing that memory copy up; it wasn't completely clear to me from the docs that D3D11 did that. In D3D9 it's a bit more explicit, as you memcpy the data onto the pointer you get back from Lock.
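
For comparison, the D3D9 path I was used to looks like this (a sketch; vb is an IDirect3DVertexBuffer9* and data/byteSize are assumed):

void* locked = 0;
vb->Lock( 0, byteSize, &locked, 0 );
memcpy( locked, data, byteSize );
vb->Unlock();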


Are you sure that the data pointer is legit and points to the right amount of data? As a quick experiment I tried something similar to your setup except that I passed a bogus pointer, and it also crashes in the ATI DLL.


That's a good one, I'll double-check that. But this line:

SpaceSim.exe!Buffer::createBuffer(unsigned int bufferSize=20000, void * data=0x020dca50, bool dynamic=false, unsigned int bindFlag=1)

seems to show the pointer is legit and even 4-byte aligned.

And it is. However, I was passing a pointer to the end of my data block and not the beginning, so when the driver tried to copy the data it failed miserably, as it was reading off the end of a buffer. Fixed it by passing a pointer to the beginning of the buffer.
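
In other words, it boiled down to something like this (a sketch; Vertex is the struct sketched earlier and buildMesh() is hypothetical):

std::vector<Vertex> vertices = buildMesh();
unsigned int byteSize = vertices.size() * sizeof(Vertex);
Buffer buffer;

//Wrong: a pointer one past the last element; the driver copies byteSize
//bytes starting there and reads off the end of the allocation.
buffer.createBuffer( byteSize, vertices.data() + vertices.size(), false, D3D11_BIND_VERTEX_BUFFER );

//Right: a pointer to the first element.
buffer.createBuffer( byteSize, vertices.data(), false, D3D11_BIND_VERTEX_BUFFER );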


