# [DX11] Update StructuredBuffer


## Recommended Posts

I have a (RW)StructuredBuffer initialized to e.g. 50 MB. In each frame I need to fill it with different data, anywhere from 10 to 50 MB.

If I use

```cpp
deviceContext->UpdateSubresource( gpuBuffer->buffer, 0, NULL, data, 0, 0 );
```

it works fine as long as `data` is the full 50 MB.

But if the data is smaller, the application crashes on this line, with no DX debug output, nothing.

So for smaller data I used a D3D11_BOX and filled the buffer this way:

```cpp
D3D11_BOX destRegion;
destRegion.left   = 0;
destRegion.right  = dataSize;  // box coordinates are in bytes for a buffer resource
destRegion.top    = 0;
destRegion.bottom = 1;
destRegion.front  = 0;
destRegion.back   = 1;

deviceContext->UpdateSubresource( gpuBuffer->buffer, 0, &destRegion, data, 0, 0 );
```

I hoped the data would be written from byte 0 up to dataSize... BUT... the application doesn't crash this time, however the buffer has incorrect content. I get "blank" results if I visualize the content of the buffer.

MSDN says nothing about using D3D11_BOX with StructuredBuffers, so I really don't know how to solve this.

##### Share on other sites
That sounds like a pointer error to me. Are you sure you aren't exceeding the size of the buffer?

Also, you could try using Map/Unmap instead of UpdateSubresource and see whether you hit a similar issue. If you do, that would indicate a general problem with the overall setup; if it works, it would mean the issue is specific to UpdateSubresource.
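For reference, a minimal sketch of that Map/Unmap path; it assumes the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE, and reuses the `data` / `dataSize` names from the post above:

```cpp
// Sketch only: requires a buffer created with D3D11_USAGE_DYNAMIC
// and CPUAccessFlags = D3D11_CPU_ACCESS_WRITE.
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = deviceContext->Map( gpuBuffer->buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped );
if (SUCCEEDED(hr))
{
    memcpy( mapped.pData, data, dataSize );   // copy only the bytes produced this frame
    deviceContext->Unmap( gpuBuffer->buffer, 0 );
}
```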

##### Share on other sites
The size of the buffer is not exceeded; the data is always smaller than or the same size as what the buffer was initialized with.

I have tried Map / Unmap, but it requires
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE,
which fails for a StructuredBuffer with UAV | SRV bind flags.

I initialized the buffer this way:

```cpp
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bufferDesc.ByteWidth = gpuBuffer.elementSize * elementsCount;
bufferDesc.StructureByteStride = gpuBuffer.elementSize;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;

// Create the structured buffer so it can be accessed from within this class.
HRESULT hr = this->device->CreateBuffer(&bufferDesc, NULL, &gpuBuffer.buffer);
if (FAILED(hr))
{
    MyUtils::Logger::LogError("Failed to create buffer %s", bufferName.GetConstString());
    return;
}
```
Then I create the SRV and UAV for it.
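The post doesn't show that step, so purely as a sketch of what SRV/UAV creation for a structured buffer usually looks like (the `srv`/`uav` member names are assumptions, not from the original code):

```cpp
// Sketch only: typical view creation for a structured buffer.
// DXGI_FORMAT_UNKNOWN is required because the element layout comes
// from StructureByteStride, not from a DXGI format.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_UNKNOWN;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
srvDesc.Buffer.FirstElement = 0;
srvDesc.Buffer.NumElements = elementsCount;
device->CreateShaderResourceView(gpuBuffer.buffer, &srvDesc, &gpuBuffer.srv);

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format = DXGI_FORMAT_UNKNOWN;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements = elementsCount;
uavDesc.Buffer.Flags = 0;
device->CreateUnorderedAccessView(gpuBuffer.buffer, &uavDesc, &gpuBuffer.uav);
```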

##### Share on other sites
In that case you will need to use two buffers: one with staging usage and the other as you have already created it. You write to the staging buffer, then use CopyResource to move the data into the default-usage buffer.

It isn't ideal to do this extra copy, but in my experience it is pretty fast since both resources are already on the GPU. In any case, you are only trying to isolate the problem, so this might still be helpful.
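A rough sketch of that approach, assuming `uploadBuffer` is a second buffer of the same ByteWidth created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_WRITE (the name is a placeholder); it uses CopySubresourceRegion rather than CopyResource so that only the bytes written this frame are copied:

```cpp
// Sketch only: uploadBuffer is a hypothetical staging buffer with
// D3D11_USAGE_STAGING, CPUAccessFlags = D3D11_CPU_ACCESS_WRITE,
// and the same ByteWidth as gpuBuffer->buffer.
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(deviceContext->Map(uploadBuffer, 0, D3D11_MAP_WRITE, 0, &mapped)))
{
    memcpy(mapped.pData, data, dataSize);
    deviceContext->Unmap(uploadBuffer, 0);

    // Copy just the region that was written (box coordinates are in bytes for buffers).
    D3D11_BOX box = { 0, 0, 0, dataSize, 1, 1 };   // left, top, front, right, bottom, back
    deviceContext->CopySubresourceRegion(gpuBuffer->buffer, 0, 0, 0, 0, uploadBuffer, 0, &box);
}
```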

##### Share on other sites
I have found the bug...

If the data being uploaded comes from a plain array, it crashes.
If the data is stored in a vector and passed as &data[0], it works...

But I don't quite understand this behaviour.

##### Share on other sites
That would indicate it was an issue with the array size, at least I think so anyway... At least you have a working solution now.

Did you try checking your vector's size at runtime to confirm that it is in fact smaller than your array?

##### Share on other sites
I wonder if this is a data alignment issue. Try aligning your array to a 16-byte boundary.
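A quick, hedged way to test that theory on the CPU side (ELEMENT_COUNT is a placeholder for the array size, and `data` is the pointer passed to UpdateSubresource):

```cpp
// Sketch only: two quick checks for the alignment theory.
#include <cstdint>

alignas(16) static int alignedData[ELEMENT_COUNT];   // force a 16-byte aligned source array

bool isAligned16 = (reinterpret_cast<std::uintptr_t>(data) % 16) == 0;  // verify the current pointer
```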

##### Share on other sites
I changed the data representation from a uint8 array to an int array (inside the shader I work with an int buffer; the uint8 representation was in byte form) and now it works fine.
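That fits a simple size mismatch, although the thread doesn't spell it out: if ByteWidth and StructureByteStride are computed from the int element size while the CPU-side array holds uint8 elements, UpdateSubresource reads four times more bytes than the array actually owns. A hedged illustration (elementsCount reused from the creation code above):

```cpp
// Sketch only: illustrates the likely mismatch, not code from the thread.
#include <vector>

std::vector<int> cpuData(elementsCount);   // elementsCount * sizeof(int) bytes -> matches ByteWidth
// uint8_t cpuBytes[elementsCount];        // only elementsCount bytes -> UpdateSubresource would read past the end

deviceContext->UpdateSubresource(gpuBuffer->buffer, 0, NULL, cpuData.data(), 0, 0);
```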

Thank you
