DirectCompute Buffer


Hm, when using a buffer of type int4 as the input to my compute shader, everything works fine. The shader resource view has its format set to R32G32B32A32_SInt.

But when I try to create a buffer of type int3 (R32G32B32_SInt), I get this error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."

 

So, is the int3 value somehow padded up to 16 bytes?

Or is there some other cause behind this?


Yep, each element is separately padded up to 16 bytes for constant buffers.
What kind of buffer are you using (cbuffer, tbuffer, structured buffer)? The padding rule only applies to constant buffers.
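
For illustration (a sketch of mine, not from the posts here): if the data were an array of int3 in a constant buffer, the CPU-side element would need explicit padding to match HLSL's 16-byte packing, e.g.

      //=============================================================================
      // using System.Runtime.InteropServices;
      // Hypothetical CPU-side element for a cbuffer array of int3: each array
      // element in a cbuffer starts on a 16-byte boundary, so 4 bytes of padding
      // follow the payload.
      [StructLayout(LayoutKind.Sequential)]
      struct Int3Padded
      {
        public int X, Y, Z; // the int3 payload (12 bytes)
        private int _pad;   // pads the element out to 16 bytes
      }
      //=============================================================================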



Hm, a tbuffer, I think:
Buffer<int3> input : register(t0);


No, that isn't a tbuffer - it looks like a structured buffer to me. If it is a structured buffer, then it isn't 16-byte aligned. Can you post the code for the buffer's description when you create it? That will clarify the type of the buffer, and also its size, which you can compare to the size of the CPU data you are trying to load into it.

 

Also, do you know what operation you are performing when you get the error message?


      //=============================================================================
      // Buffer<int3> input : register(t0);

      inputBuffer = new Buffer(device, new BufferDescription()
      {
        StructureByteStride = Int3.SizeInBytes,
        SizeInBytes = N * Int3.SizeInBytes,
        Usage = ResourceUsage.Dynamic,  // the cpu can write to it.
        BindFlags = BindFlags.ShaderResource, // will be read-only by the GPU.
        CpuAccessFlags = CpuAccessFlags.Write,
        OptionFlags = ResourceOptionFlags.None,
      });

      // SRV for our input buffer
      var srViewDesc = new ShaderResourceViewDescription();
      srViewDesc.Format = Format.R32G32B32_SInt;
      srViewDesc.Dimension = ShaderResourceViewDimension.Buffer;
      srViewDesc.Buffer.FirstElement = 0;
      srViewDesc.Buffer.ElementCount = N;
      ShaderResourceView srView = new ShaderResourceView(device, inputBuffer, srViewDesc);
      //=============================================================================

 

 

The error appears the second time I call UnmapSubresource.

(I have a makeshift 'loop', using a goto statement)

      //=============================================================================

      DataStream dataStream;
      context.MapSubresource(inputBuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out dataStream);

      for(int i = 0; i < N; i++)
        dataStream.Write(input[i]);

      context.UnmapSubresource(inputBuffer, 0);
      dataStream.Dispose();

      //=============================================================================

 

It works with one, two, and four integers per element. But with three, it crashes on the second attempt to update the buffer.
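
For reference, one workaround would be to pad each element to 16 bytes on the CPU side and use an int4 buffer (R32G32B32A32_SInt), ignoring the fourth component in the shader. A sketch, assuming the buffer is created with SizeInBytes = N * Int4.SizeInBytes:

      //=============================================================================
      // Write each int3 followed by 4 bytes of padding, giving the buffer the
      // well-supported 16-byte R32G32B32A32_SInt element layout.
      DataStream dataStream;
      context.MapSubresource(inputBuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out dataStream);

      for (int i = 0; i < N; i++)
      {
        dataStream.Write(input[i]); // 12 bytes of payload
        dataStream.Write(0);        // 4 bytes of padding
      }

      context.UnmapSubresource(inputBuffer, 0);
      dataStream.Dispose();
      //=============================================================================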

 

P.S. Yeah, this is in SharpDX, on a D3D10 card.


Ok, I did some tests.

So, there are two ways to define a "typed" buffer.

 

Buffer of a custom structure type:

C#

bufferDesc.OptionFlags = ResourceOptionFlags.BufferStructured;

srViewDesc.Format = Format.Unknown;

 

HLSL

StructuredBuffer<int3> input : register(t0);

 

Primitive type Buffer:

C#

bufferDesc.OptionFlags = ResourceOptionFlags.None;

srViewDesc.Format = Format.R32G32B32_SInt;

 

HLSL

Buffer<int3> input : register(t0);

 

The structured buffer works for every type I tried.

Using a primitive-type buffer works for int, int2, and int4, but not int3.

 

So my question is, why?

Also, is there any difference between the two types of buffers, other than the fact that the structured buffer lets you use custom types?
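
For completeness, here is the structured-buffer variant of my creation code from earlier (the combination that worked for int3):

      //=============================================================================
      // StructuredBuffer<int3> input : register(t0);

      inputBuffer = new Buffer(device, new BufferDescription()
      {
        StructureByteStride = Int3.SizeInBytes,
        SizeInBytes = N * Int3.SizeInBytes,
        Usage = ResourceUsage.Dynamic,  // the CPU can write to it.
        BindFlags = BindFlags.ShaderResource, // will be read-only by the GPU.
        CpuAccessFlags = CpuAccessFlags.Write,
        OptionFlags = ResourceOptionFlags.BufferStructured, // structured, not typed
      });

      // SRV for our input buffer
      var srViewDesc = new ShaderResourceViewDescription();
      srViewDesc.Format = Format.Unknown; // structured buffers use Format.Unknown
      srViewDesc.Dimension = ShaderResourceViewDimension.Buffer;
      srViewDesc.Buffer.FirstElement = 0;
      srViewDesc.Buffer.ElementCount = N;
      ShaderResourceView srView = new ShaderResourceView(device, inputBuffer, srViewDesc);
      //=============================================================================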



"So my question is, why?"

It is a good question! I recently searched the net for a similar problem. The only rationale I found that seemed fairly robust involves the texturing hardware: apparently, buffers are really meant to be "texture-like", and historically, three-component texture formats never really made it into silicon.

I'm only reporting what I've read, but it does make some sense.
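
For what it's worth, buffer SRV support for the R32G32B32 formats is optional in Direct3D 11, so a device is allowed to reject them for typed buffers even though the four-component variants are mandatory. You can query this at runtime; a sketch using SharpDX's CheckFormatSupport:

      //=============================================================================
      // Ask the device whether R32G32B32_SInt can be used in a buffer SRV at all.
      FormatSupport support = device.CheckFormatSupport(Format.R32G32B32_SInt);
      bool usableInBuffer = (support & FormatSupport.Buffer) != 0;
      //=============================================================================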

