

DirectCompute Buffer


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

#1 gfxCahd   Members   -  Reputation: 204


Posted 15 March 2014 - 04:05 AM

Hm, when using a buffer of type int4 for my input to the compute shader, everything works fine. The shader resource view has its format set to R32G32B32A32_SInt.

But when I try to create a buffer of type int3 (R32G32B32_Sint), I get this error "Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.".

 

So, is the int3 value somehow padded up to 16 bytes?

Or is there some other cause behind this?


Edited by gfxCahd, 15 March 2014 - 04:05 AM.



#2 imoogiBG   Members   -  Reputation: 1247


Posted 15 March 2014 - 08:30 AM

Yep, each element is padded up to 16 bytes, but that padding rule only applies to constant buffers.
What kind of buffer are you using (cbuffer, tbuffer, structured buffer)?
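To illustrate the padding rule for constant buffers, here is a hypothetical SharpDX sketch (not from the thread): D3D11 requires a cbuffer's total size to be a multiple of 16 bytes, so a 12-byte int3 payload gets rounded up. The `device` variable and the round-up helper are assumptions for illustration.

```csharp
// An int3 is 12 bytes raw; a D3D11 constant buffer's size must be a
// multiple of 16 bytes, so the allocation is padded up.
int rawSize = 3 * sizeof(int);          // 12 bytes
int paddedSize = (rawSize + 15) & ~15;  // rounds up to 16

// Hedged sketch of the matching SharpDX description; 'device' is an
// existing SharpDX.Direct3D11.Device.
var cbDesc = new BufferDescription
{
    SizeInBytes = paddedSize,
    Usage = ResourceUsage.Dynamic,
    BindFlags = BindFlags.ConstantBuffer,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None,
};
var constantBuffer = new Buffer(device, cbDesc);
```

Shader-resource buffers (tbuffers, typed buffers, structured buffers) are not subject to this rounding, which is why the distinction imoogiBG asks about matters here.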


Edited by imoogiBG, 15 March 2014 - 08:30 AM.


#3 gfxCahd   Members   -  Reputation: 204


Posted 15 March 2014 - 09:27 AM

Yep, each element is padded up to 16 bytes, but that padding rule only applies to constant buffers.
What kind of buffer are you using (cbuffer, tbuffer, structured buffer)?

Hm, a tbuffer, I think:
Buffer<int3> input : register(t0);



#4 Jason Z   Crossbones+   -  Reputation: 5399


Posted 15 March 2014 - 01:12 PM

No, that isn't a tbuffer - it looks like a typed buffer to me, and if so it isn't 16-byte aligned. Can you post the code for the buffer's description when you create it? That will clarify the type of the buffer, and also its size, which you can compare to the size of the CPU data you are trying to load into it.

 

Also, do you know what operation you are performing when you get the error message?



#5 gfxCahd   Members   -  Reputation: 204


Posted 16 March 2014 - 06:44 AM

      //=============================================================================
      // Buffer<int3> input : register(t0);

      inputBuffer = new Buffer(device, new BufferDescription()
      {
        StructureByteStride = Int3.SizeInBytes,
        SizeInBytes = N * Int3.SizeInBytes,
        Usage = ResourceUsage.Dynamic,  // the cpu can write to it.
        BindFlags = BindFlags.ShaderResource, // will be read-only by the GPU.
        CpuAccessFlags = CpuAccessFlags.Write,
        OptionFlags = ResourceOptionFlags.None,
      });

      // SRV to our input buffer
      var srViewDesc = new ShaderResourceViewDescription();
      srViewDesc.Format = Format.R32G32B32_SInt;
      srViewDesc.Dimension = ShaderResourceViewDimension.Buffer;
      srViewDesc.Buffer.FirstElement = 0;
      srViewDesc.Buffer.ElementCount = N;
      ShaderResourceView srView = new ShaderResourceView(device, inputBuffer, srViewDesc);
      //=============================================================================

 

 

The error appears the second time I call UnmapSubresource.

(I have a makeshift 'loop', using a goto statement)

      //=============================================================================

      DataStream dataStream;
      context.MapSubresource(inputBuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out dataStream);

      for(int i = 0; i < N; i++)
        dataStream.Write(input[i]);

      context.UnmapSubresource(inputBuffer, 0);
      dataStream.Dispose();

      //=============================================================================

 

It works with one integer, two integers, and four (int, int2, int4). But with int3, it crashes on the second attempt to update the buffer.

 

P.S. Yeah, this is in SharpDX, on a D3D10 card.


Edited by gfxCahd, 16 March 2014 - 06:54 AM.


#6 gfxCahd   Members   -  Reputation: 204


Posted 16 March 2014 - 07:42 AM

Ok, I did some tests.

So, there are two ways to define a "typed" buffer.

 

Buffer of a custom structure type:

C#

bufferDesc.OptionFlags = ResourceOptionFlags.BufferStructured;

srViewDesc.Format = Format.Unknown;

 

HLSL

StructuredBuffer<int3> input : register(t0);

 

Primitive type Buffer:

C#

bufferDesc.OptionFlags = ResourceOptionFlags.BufferStructured... wait, ResourceOptionFlags.None;

srViewDesc.Format = Format.R32G32B32_SInt;

 

HLSL

Buffer<int3> input : register(t0);

 

The structured buffer works for every type I tried.

Using a primitive type buffer works for int, int2, int4, but not int3.

 

So my question is, why?

Also, is there any difference in the two types of buffers, other than the fact that the structured buffer lets you use custom types?
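For reference, the working structured-buffer variant described above could be assembled like this (a sketch pieced together from the fragments in this thread; `device`, `N`, and `Int3.SizeInBytes` are assumed from the earlier posts):

```csharp
// Matching HLSL declaration:
//   StructuredBuffer<int3> input : register(t0);

var inputBuffer = new Buffer(device, new BufferDescription
{
    StructureByteStride = Int3.SizeInBytes,              // 12 bytes, no padding
    SizeInBytes = N * Int3.SizeInBytes,
    Usage = ResourceUsage.Dynamic,                       // CPU can write to it
    BindFlags = BindFlags.ShaderResource,                // read-only on the GPU
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.BufferStructured,  // the key difference
});

var srViewDesc = new ShaderResourceViewDescription
{
    // Structured buffers always use Format.Unknown; the element layout
    // comes from StructureByteStride instead of a DXGI format.
    Format = Format.Unknown,
    Dimension = ShaderResourceViewDimension.Buffer,
};
srViewDesc.Buffer.FirstElement = 0;
srViewDesc.Buffer.ElementCount = N;
var srView = new ShaderResourceView(device, inputBuffer, srViewDesc);
```

Besides allowing custom element types, structured buffers sidestep the DXGI format-support question entirely, which is why the int3 case works here but not through a typed `Buffer<int3>`.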


Edited by gfxCahd, 16 March 2014 - 07:53 AM.


#7 Krohm   Crossbones+   -  Reputation: 3261


Posted 17 March 2014 - 02:03 AM


So my question is, why?
It is a good question! I recently searched the net for a similar problem. The only rationale I found that seemed fairly robust involved interactions with the texturing hardware: apparently, buffers are really meant to be "texture-like", and historically three-component texture formats never really made it into silicon.

I'm only reporting what I've read, but it does make some sense.
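One way to check this on a given GPU is to ask the runtime directly whether a format can back a buffer SRV. A hedged SharpDX sketch (`device` is an existing `SharpDX.Direct3D11.Device`; the result is hardware-dependent, so no particular output is guaranteed):

```csharp
// Query the D3D11 runtime for what R32G32B32_SInt can be used for.
// On most hardware the three-component 32-bit formats support textures
// but not typed buffers, which would match the int3 failure in this thread.
FormatSupport support = device.CheckFormatSupport(Format.R32G32B32_SInt);
bool usableAsBuffer = (support & FormatSupport.Buffer) != 0;

Console.WriteLine("R32G32B32_SInt buffer SRV support: " + usableAsBuffer);
```

If the flag comes back false, creating the typed-buffer SRV is simply unsupported on that card, and the structured-buffer route from post #6 is the right workaround.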





