# [SlimDX] DX11 Structured buffer creation error

## Recommended Posts

Hi, first of all I'd like to say thanks for all the hard work and support you guys are giving here. I really appreciate all the work you've put into SlimDX, keep it up. I've also learned a lot from this site simply by searching and reading the forums and articles. Cheers!

Now, I'm a beginner in 3D programming (I started learning not that long ago), but I'm not new to programming in general. I put together a basic DX10 rendering engine in C# to get familiar with these things; it will be used for my project later. This evening I decided to play around with compute shaders (I was always excited about this DX11 feature) and try to add some basic GPU compute support to my engine. My starting point was the BasicCompute11 sample from the latest DX SDK (I'm porting it to SlimDX with C#), which hasn't worked out as I expected: I'm getting an E_INVALIDARG error when I try to create a structured buffer. After some more testing and debugging I'm really out of ideas, but I hope you guys will be able to tell me what I'm missing here.

Here's the code relevant to my problem. The struct element that will fill the buffer:
```csharp
struct test_buff
{
    public int ii;
    public float ff;
}
```


And the part where I create the buffer:
```csharp
BufferDescription b_desc = new BufferDescription();
b_desc.BindFlags = (BindFlags)0x80L | BindFlags.ShaderResource; // 0x80 = D3D11_BIND_UNORDERED_ACCESS
b_desc.SizeInBytes = Marshal.SizeOf(typeof(test_buff)) * 1024;
b_desc.OptionFlags = ResourceOptionFlags.StructuredBuffer;
b_desc.StructureByteStride = Marshal.SizeOf(typeof(test_buff));
b_desc.Usage = ResourceUsage.Default;
b_desc.CpuAccessFlags = CpuAccessFlags.None;

Buffer buf1 = new Buffer(device, b_desc);
```


After enabling the debug layer I'm getting this error information:

```
D3D11: ERROR: ID3D11Device::CreateBuffer: When creating a buffer with the MiscFlag D3D11_RESOURCE_MISC_BUFFER_STRUCTURED specified, the StructureByteStride must be greater than zero, no greater than 2048, and a multiple of 4. [ STATE_CREATION ERROR #2097340: CREATEBUFFER_INVALIDSTRUCTURESTRIDE ]
First-chance exception at 0x74e8b727 in Demo.exe: Microsoft C++ exception: _com_error at memory location 0x0032ed3c.
D3D11: ERROR: ID3D11Device::CreateBuffer: CreateBuffer returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #69: CREATEBUFFER_INVALIDARG_RETURN ]
```


It seems like my b_desc.StructureByteStride is invalid or something... although I've checked: Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case (which is indeed a multiple of 4). I've even tried setting StructureByteStride to other values, which didn't change anything. Any help or tips would be great :) If you need any more info, just ask. Thanks for your time. [Edited by - Sieras on November 6, 2009 4:53:03 PM]

##### Share on other sites
You've said it yourself:
Quote:
Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case
So, 8 * 1024 = 8192, which is certainly above the upper allowed limit of 2048 stated in the error message. Looks like your buffer is simply too large.

##### Share on other sites
Quote:
Original post by sirob
You've said it yourself:
Quote:
Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case
So, 8 * 1024 = 8192, which is certainly above the upper allowed limit of 2048 stated in the error message. Looks like your buffer is simply too large.

Well, I thought so too at first, and I tried changing that value so that the whole buffer size would be smaller than 2048, and it still gives me the same error. In fact, it works fine in the BasicCompute11 SDK sample with 1024 multiplied by 8 (the struct size), so that shouldn't be the problem here.

It seems like the debug layer complains about the StructureByteStride value and not the whole buffer size.

Any more ideas? :)

##### Share on other sites
Hi Sieras

I've also translated the BasicCompute11 sample from the DirectX SDK to C# / SlimDX to get familiar with GPU computing.

There were a few errors in SlimDX which were closed in revision r1237 (StructureByteStride not used) and r1242 (BufferEx properties missing).

So make sure you have the latest SVN trunk of SlimDX, especially when you are using DirectX 11 features.

##### Share on other sites
Quote:
Original post by Dysprosium88
Hi Sieras, I've also translated the BasicCompute11 sample from the DirectX SDK to C# / SlimDX to get familiar with GPU computing. There were a few errors in SlimDX which were closed in revision r1237 (StructureByteStride not used) and r1242 (BufferEx properties missing). So make sure you have the latest SVN trunk of SlimDX, especially when you are using DirectX 11 features.

Thanks for your reply. I've downloaded and compiled the latest HEAD revision from SVN and now it's working as it's supposed to. Now I can get back to work :)

Thanks again. I'll try to keep SlimDX up to date from now on.
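For later readers, here is a minimal sketch of the same creation path against a build that honours StructureByteStride, using the named BindFlags.UnorderedAccess value instead of the raw 0x80 cast. The [StructLayout] attribute and the helper wrapper are my additions, not part of the original post:

```csharp
using System.Runtime.InteropServices;
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

[StructLayout(LayoutKind.Sequential)]
struct test_buff
{
    public int ii;
    public float ff;
}

static class StructuredBufferHelper
{
    // Same description as in the original post, with the named flag spelled out.
    public static Buffer Create(Device device, int elementCount)
    {
        int stride = Marshal.SizeOf(typeof(test_buff)); // 8 bytes: > 0, <= 2048, multiple of 4
        var desc = new BufferDescription
        {
            BindFlags = BindFlags.UnorderedAccess | BindFlags.ShaderResource,
            SizeInBytes = stride * elementCount,
            OptionFlags = ResourceOptionFlags.StructuredBuffer,
            StructureByteStride = stride,
            Usage = ResourceUsage.Default,
            CpuAccessFlags = CpuAccessFlags.None
        };
        return new Buffer(device, desc);
    }
}
```

A view for the compute shader can then be created with new UnorderedAccessView(device, buffer). On builds older than r1237 the stride never reached the native CreateBuffer call, which is exactly the CREATEBUFFER_INVALIDSTRUCTURESTRIDE symptom above.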

## Similar Content

• hi,
I have read a lot about binding a constant buffer to a shader, but something is still unclear to me.
E.g. when performing: vertexshader.setConstantbuffer(buffer, slot)
is the buffer bound
a. to the given slot on the device, regardless of which VertexShader is currently active,
or
b. to the VertexShader that is currently set as the active VertexShader?
Is it possible to bind a constant buffer to a VertexShader, e.g. VS_A, and keep this binding even after the active VertexShader has changed?
I mean, I want to bind constantbuffer_A to VS_A and Constantbuffer_B to VS_B, and only use updateSubresource without calling setConstantBuffer every time.

Look at this example:
perform drawcall (buffer_A is used)
perform drawcall (buffer_B is used)
perform drawcall (now which buffer is used???)

I ask this question because I have made a custom render engine and want to minimize the number of updateSubresource and setConstantbuffer calls.
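For what it's worth, here is the scenario spelled out as a sketch in C# with SlimDX (vsA, vsB, bufferA, bufferB and vertexCount are placeholders for resources created elsewhere). In D3D11 a constant buffer binding belongs to the context's vertex-shader stage, not to the shader object, so it survives shader changes:

```csharp
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

static void DrawScenario(Device device, VertexShader vsA, VertexShader vsB,
                         Buffer bufferA, Buffer bufferB, int vertexCount)
{
    DeviceContext context = device.ImmediateContext;

    context.VertexShader.Set(vsA);
    context.VertexShader.SetConstantBuffer(bufferA, 0); // binds to slot 0 of the VS *stage*
    context.Draw(vertexCount, 0);                       // uses buffer_A

    context.VertexShader.Set(vsB);                      // shader changed...
    context.VertexShader.SetConstantBuffer(bufferB, 0);
    context.Draw(vertexCount, 0);                       // uses buffer_B

    context.VertexShader.Set(vsA);                      // back to VS_A...
    context.Draw(vertexCount, 0);                       // ...but slot 0 still holds buffer_B:
                                                        // the binding did not follow VS_A
}
```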

• I got a quick question about buffers in DirectX 11. Suppose I bind a buffer using a command like:
IASetVertexBuffers, IASetIndexBuffer, VSSetConstantBuffers, PSSetConstantBuffers
and then later on I update that bound buffer's data using Map/Unmap or any of the other update commands.
Do I need to rebind the buffer again in order for my update to take effect? If I don't rebind, is that really bad, as in I take a performance hit? My thinking is that if the buffer is already bound, why would I need to rebind it? I'm using that same buffer, it just holds different data.
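A small sketch of that pattern in C#/SlimDX (the RefreshConstants helper and its names are mine, and the exact SlimDX overloads are worth double-checking against your build): the bind points hold a reference to the buffer object, not a snapshot of its data, so updating the contents takes effect on the next draw without rebinding.

```csharp
using System.Runtime.InteropServices;
using SlimDX;
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

static class ConstantBufferUpdater
{
    // Hypothetical helper: refresh an already-bound constant buffer in place.
    // Assumes the buffer was created with ResourceUsage.Default, which is what
    // UpdateSubresource requires (Map/Unmap would need ResourceUsage.Dynamic).
    public static void RefreshConstants<T>(DeviceContext context, Buffer constantBuffer, T newData)
        where T : struct
    {
        int size = Marshal.SizeOf(typeof(T));
        using (var stream = new DataStream(size, true, true))
        {
            stream.Write(newData);
            stream.Position = 0;
            // The existing VS/PS bind points still reference this buffer object,
            // so the next draw call sees the new contents without a rebind.
            context.UpdateSubresource(new DataBox(0, 0, stream), constantBuffer, 0);
        }
    }
}
```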

• I am really stuck on something that should be very simple in DirectX 11.
1. I can draw lines using PC (position, color) vertices and a simple shader just fine.
2. I can draw 3D triangles using PCN (position, color, normal) vertices just fine (even with transparency and SpecularBlinnPhong shaders).

However, if I'm using my 3D shader and I want to draw my PC lines in the same scene, how can I do that?

If I change my lines to PCN and pass them to the 3D shader with my triangles, then the lighting screws them all up. I only want the lighting for the 3D triangles, with no SpecularBlinnPhong/lighting for the lines (just PC).
I am sure this is because if I change the lines to PCN there is not really a correct "normal" for the lines.
I assume I somehow need to draw the 3D triangles using one shader, and then "switch" to another shader and draw the lines? But I have no clue how to use two different shaders in the same scene. And then are the lines just drawn on top of the triangles, or vice versa (maybe draw-order dependent)?
I must be missing something really basic, so if anyone can just point me in the right direction (or link to an example showing the implementation of multiple shaders) that would be REALLY appreciated.

I'm also more than happy to post my simple test code if that helps as well!
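For reference, switching shaders mid-frame is just a matter of setting a different shader (and matching input layout) on the context between draw calls; with a depth buffer bound, depth testing rather than draw order decides what ends up in front. A hypothetical sketch in C#/SlimDX, where all the shader, layout and buffer variables are placeholders for resources created elsewhere:

```csharp
using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

static void DrawScene(DeviceContext context,
                      InputLayout pcnLayout, VertexShader litVs, PixelShader litPs,
                      Buffer triangleVb, int pcnStride, int triangleVertexCount,
                      InputLayout pcLayout, VertexShader plainVs, PixelShader plainPs,
                      Buffer lineVb, int pcStride, int lineVertexCount)
{
    // Lit PCN triangles with the Blinn-Phong shader pair.
    context.InputAssembler.InputLayout = pcnLayout;
    context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
    context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(triangleVb, pcnStride, 0));
    context.VertexShader.Set(litVs);
    context.PixelShader.Set(litPs);
    context.Draw(triangleVertexCount, 0);

    // Switch to the unlit PC shader pair and draw the lines in the same frame.
    context.InputAssembler.InputLayout = pcLayout;
    context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
    context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(lineVb, pcStride, 0));
    context.VertexShader.Set(plainVs);
    context.PixelShader.Set(plainPs);
    context.Draw(lineVertexCount, 0);
}
```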

• By Reitano
Hi,
I am writing a linear allocator of per-frame constants using the DirectX 11.1 API. My plan is to replace the traditional constant allocation strategy, where most of the work is done by the driver behind my back, with a manual one inspired by the DirectX 12 and Vulkan APIs.
In brief, the allocator maintains a list of 64K pages, each page owns a constant buffer managed as a ring buffer. Each page has a history of the N previous frames. At the beginning of a new frame, the allocator retires the frames that have been processed by the GPU and frees up the corresponding space in each page. I use DirectX 11 queries for detecting when a frame is complete and the ID3D11DeviceContext1::VS/PSSetConstantBuffers1 methods for binding constant buffers with an offset.
The new allocator appears to be working but I am not 100% confident it is actually correct. In particular:
1) it relies on queries which I am not too familiar with. Are they 100% reliable ?
2) it maps/unmaps the constant buffer of each page at the beginning of a new frame and then writes the mapped memory as the frame is built. In pseudo code:
```
BeginFrame:
    page.data = device.Map(page.buffer)
    device.Unmap(page.buffer)
RenderFrame:
    Alloc(size, initData)
        ...
        memcpy(page.data + page.start, initData, size)
    Alloc(size, initData)
        ...
        memcpy(page.data + page.start, initData, size)
```

(Note: calling Unmap at the end of a frame instead prevents binding the mapped constant buffers and triggers an error in the debug layer.)
Is this valid ?
3) I don't fully understand how many frames I should keep in the history. My intuition says it should equal the maximum latency reported by IDXGIDevice1::GetMaximumFrameLatency, which is 3 on my machine. But while this value works fine in a unit test, in a more complex demo I need to manually set it to 5, otherwise the allocator starts overwriting previous frames that have not completed yet. Shouldn't the swap chain's Present method block the CPU in this case?
4) Should I expect this approach to be more efficient than the one managed by the driver ? I don't have meaningful profile data yet.
Is anybody familiar with the approach described above, and can you answer my questions and discuss the pros and cons of this technique based on your experience?
For reference, I've uploaded the (WIP) allocator code at https://paste.ofcode.org/Bq98ujP6zaAuKyjv4X7HSv. Feel free to adapt it to your engine, and please let me know if you spot any mistakes.
Thanks
Stefano Lanza
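On question 1: event queries are the standard D3D11 mechanism for detecting that the GPU has finished a frame's work, and polling them only after End() has been issued is the reliable pattern. A hypothetical per-frame fence in C#/SlimDX (the FrameFence class is my naming, and the exact SlimDX signatures are worth verifying against your build):

```csharp
using SlimDX.Direct3D11;

// Hypothetical per-frame fence built on a D3D11 event query.
class FrameFence
{
    readonly Query query;

    public FrameFence(Device device)
    {
        query = new Query(device, new QueryDescription { Type = QueryType.Event });
    }

    // Call right after submitting a frame's command stream.
    public void Signal(DeviceContext context)
    {
        context.End(query);
    }

    // True once the GPU has consumed everything issued before Signal();
    // poll this at BeginFrame before recycling the frame's pages.
    public bool IsComplete(DeviceContext context)
    {
        return context.IsDataAvailable(query);
    }
}
```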

• Hey all. I've been working with compute shaders lately and was hoping to build out some libraries to reuse code. As a prerequisite for my current project, I needed to sort a big array of data in my compute shader, so I was going to implement quicksort as a library function. My implementation was going to use an inout array parameter to apply the changes to the referenced array.

I spent half the day yesterday debugging in Visual Studio before I realized that the solution, while it worked INSIDE the function, reverted to the original state after returning from the function.

My hack fix was just to inline the code, but that is not a great solution going forward. Any ideas? I've considered just returning an array of ints that represents the sorted indices.
