Tordin

[DX11] Compute Shader questions


Hey! I'm doing some compute shader stuff and got lost, and given the lack of (in my opinion) good tutorials, I need some answers.

Q1 :

//C++
// Describing a structured buffer that can be bound both as an SRV and a UAV.
csDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
csDesc.CPUAccessFlags = 0;
csDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
csDesc.StructureByteStride = sizeof(data);             // size of one element
csDesc.ByteWidth = sizeof(data) * t_Size_X * t_Size_Y; // total buffer size in bytes
csDesc.Usage = D3D11_USAGE_DEFAULT;



The code above creates a buffer, and that is straightforward.
But the one parameter value I find odd is ByteWidth.

Data in this case is a float4.
Why do I do sizeof(data) * size_X * size_Y?
To specify that I want to make a 2D array of floats?

Q2 :
In the HLSL code, where should I put the output?

Q3 :
(This question is related to question one.)
If I specify a 2D array of floats, how do I know which element I am calculating right now?
My thought was that the SV_GroupThreadID semantic would take care of that.

Q4 :
How do I read the output values in my C++ code?
(Is this possible?)


If you have any good tutorials or book tips, let me know!
And thanks for all the help.


I'll try to give some insight on these topics:

Q1: The ByteWidth is the size of the buffer in bytes. So you specify the size of one element, multiplied by the number of elements.
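For illustration, a minimal sketch of the full buffer creation under those settings (the names 'Data', 'elementCount', and 'device' are assumptions, not from the thread):

struct Data { float x, y, z, w; };   // one element; a float4 on the HLSL side

const UINT elementCount = 16 * 16;   // e.g. a 16x16 grid stored linearly

D3D11_BUFFER_DESC desc = {};
desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(Data);                 // size of one element
desc.ByteWidth           = sizeof(Data) * elementCount;  // total size in bytes
desc.Usage               = D3D11_USAGE_DEFAULT;

ID3D11Buffer* buffer = nullptr;
HRESULT hr = device->CreateBuffer(&desc, nullptr, &buffer);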

Q2: The output from the compute shader is done through an unordered access view (UAV). This is a big difference from the other stages - the compute shader handles its own output, allowing for fairly flexible setups.
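To make that concrete, here is a hedged sketch of creating a UAV for the buffer above and binding it to the compute shader stage (reusing the assumed 'device', 'context', 'buffer', and 'elementCount' names):

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format              = DXGI_FORMAT_UNKNOWN;        // required for structured buffers
uavDesc.ViewDimension       = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements  = elementCount;

ID3D11UnorderedAccessView* uav = nullptr;
device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);  // slot 0 -> register u0 in HLSL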

Q3: That's right, but only if you have a single thread group. If you have more than one group, use the dispatch thread ID (SV_DispatchThreadID) instead; with only one group, the dispatch and group thread IDs are identical. And since a 2D grid is represented by a 1D buffer, you need to build a 1D index out of your 2D ID information. The Water Simulation demo in my engine (link is in my signature) has just such a setup to hold the state of the water, if you are looking for an example.
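A minimal HLSL sketch of that flattening (the grid width SIZE_X and the other names are illustrative, not taken from the thread):

#define SIZE_X 16   // grid width; dispatching 4x4 groups of 4x4 threads covers 16x16

struct BufferStruct { float4 color; };
RWStructuredBuffer<BufferStruct> g_OutBuff;

[numthreads(4, 4, 1)]
void mainCS( uint3 id : SV_DispatchThreadID )
{
    // Turn the 2D thread position into a 1D element index.
    uint index = id.x + id.y * SIZE_X;
    g_OutBuff[index].color = float4(1.0f, 0.0f, 0.0f, 1.0f);
}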

Q4: You need to create a staging buffer (a secondary buffer with staging usage), copy the results to it, then map that buffer and read the data out on the CPU side.

Q1 :
So there is no need to have sizeof * x * y; I could just create an array and take the size of that instead? Or I could make a really big struct with different kinds of data types (if I only wanted to make one calculation)?

What I understood from the tutorial (and its purpose) was that the author wanted to create a buffer big enough for a 16x16 float4 array, so he could later use it as an image.
And since images make the most sense in 2D, he used sizeX * sizeY.

Q2 :

That I know; to make the question more specific, where do I output it in the HLSL code?

struct BufferStruct   // added: the element type the tutorial assumes
{
    float4 color;
};

RWStructuredBuffer<BufferStruct> g_OutBuff;

[numthreads( 4, 4, 1 )]
void mainCS( uint3 threadIDInGroup : SV_GroupThreadID, uint3 groupID : SV_GroupID )
{
    // The scalar product is implicitly broadcast to all four components.
    float4 color = threadIDInGroup.x * threadIDInGroup.y * threadIDInGroup.z;
    g_OutBuff[ 0 ].color = color;
}



Here I am sending the data to g_OutBuff[0].color?
I just took this plain and simple from the tutorial, and I think I understand it so that g_OutBuff is the compute shader's answer to the pixel shader's "return color;".

Q3 :
Well, that was close to what I thought.
So the buffer is never a "2D buffer", it is always a 1D buffer?
And therefore I have to divide the size, (xNum * xElementSize) / (yNum * yElementSize)?
And that index I also use for g_OutBuff[index], instead of putting 0 there?
And I will have a look at your stuff!

Q4 :
Ah, so when I map the buffer, instead of writing to it, I just read from it.
That makes sense!

Thanks a lot, Jason!

Quote:
Original post by Tordin
Q2 :

That I know; to make the question more specific, where do I output it in the HLSL code?
*** Source Snippet Removed ***

Here I am sending the data to g_OutBuff[0].color?
I just took this plain and simple from the tutorial, and I think I understand it so that g_OutBuff is the compute shader's answer to the pixel shader's "return color;".


Compute shaders don't really "return" anything; you declare one or more output buffers, and you can write to them at any point during the execution of the shader, much like a normal C/C++ function writing to an array.

Quote:
Original post by Tordin
Q1 :
So there is no need to have sizeof * x * y; I could just create an array and take the size of that instead? Or I could make a really big struct with different kinds of data types (if I only wanted to make one calculation)?

What I understood from the tutorial (and its purpose) was that the author wanted to create a buffer big enough for a 16x16 float4 array, so he could later use it as an image.
And since images make the most sense in 2D, he used sizeX * sizeY.

The sizeof * x * y is just to allocate enough memory to hold his 2D grid of points (I assume). Buffers are always 1D, while texture resources can be 1D, 2D, or 3D. Technically you could use a Texture2D to achieve the same result, so you just need to choose the resource type that allows for the most coherent memory access with a minimal amount of address calculations.
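For comparison, a hedged sketch of the same kind of output written through a Texture2D UAV instead of a structured buffer (illustrative names only, not code from the thread):

// With a RWTexture2D the index is naturally 2D, so no flattening
// arithmetic is needed.
RWTexture2D<float4> g_OutTex;

[numthreads(4, 4, 1)]
void mainCS( uint3 id : SV_DispatchThreadID )
{
    g_OutTex[id.xy] = float4(1.0f, 0.0f, 0.0f, 1.0f);
}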
Quote:
Original post by Tordin
Q2 :

That I know; to make the question more specific, where do I output it in the HLSL code?
*** Source Snippet Removed ***

Here I am sending the data to g_OutBuff[0].color?
I just took this plain and simple from the tutorial, and I think I understand it so that g_OutBuff is the compute shader's answer to the pixel shader's "return color;".

This depends on the type of 'resource object' you declare in your shader, which is the interface through which you work with your resource. When you bind a resource to the compute shader through either an SRV or a UAV, on the HLSL side you must declare an object that represents it. In your case, it is a RWStructuredBuffer<BufferStruct> (which is a clever name, by the way :P ). This object allows for array-like access to its contents (which are, again, only 1D). Other objects like an append/consume buffer have different access mechanisms.
Quote:
Original post by Tordin
Q3 :
Well, that was close to what I thought.
So the buffer is never a "2D buffer", it is always a 1D buffer?
And therefore I have to divide the size, (xNum * xElementSize) / (yNum * yElementSize)?
And that index I also use for g_OutBuff[index], instead of putting 0 there?
And I will have a look at your stuff!

I don't clearly understand the equation you show above, but in general, if you have the 2D location in the grid that you want, you can find the index with: index = location.x + size_x * location.y. For example, in a 16-wide grid, element (3, 2) maps to index 3 + 16 * 2 = 35. You are just making a linear index out of a 2D one, like you would in the same situation in C++.
Quote:
Original post by Tordin
Q4 :
Ah, so when I map the buffer, instead of writing to it, I just read from it.
That makes sense!

Thanks a lot, Jason!

Keep in mind that you have to provide the correct arguments to the Map() function in order to be able to read the data, and that the staging buffer must be created with the proper CPU access flags as well!

Q1 :
Yes, well, now I understand it perfectly :)

Q2 :
Hmm, alright... This was a bit confusing though, so I think I have to read up on that subject!
(And I couldn't find a way to see your water shader demo.)

Q3 :
Haha, no, that equation was wrong, but what I was trying to say is what you just said :P

Q4 :
What bind flags should I use for the map (staging) buffer?
I just started to test this with the following code, but since I didn't know which bind flag to use, I just used the shader-resource one.
Like this:

cbDesc.Usage = D3D11_USAGE_STAGING;
cbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;


Since you won't be binding the staging resource to the pipeline at all, it doesn't matter what bind flag you use (you should probably just set it to 0). The key is in the ID3D11DeviceContext::Map() function - you need to pass the D3D11_MAP_READ flag to read the contents of the buffer.
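Putting that together, a hedged sketch of the whole read-back path (the function and variable names are assumptions for illustration, not from the thread):

#include <d3d11.h>
#include <vector>
#include <cstring>

// Sketch only: copies a DEFAULT-usage buffer's contents back to the CPU.
bool ReadBack(ID3D11Device* device, ID3D11DeviceContext* context,
              ID3D11Buffer* source, UINT byteWidth, std::vector<unsigned char>& out)
{
    D3D11_BUFFER_DESC desc = {};
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;                       // never bound to the pipeline
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.ByteWidth      = byteWidth;               // must match the source buffer

    ID3D11Buffer* staging = nullptr;
    if (FAILED(device->CreateBuffer(&desc, nullptr, &staging)))
        return false;

    context->CopyResource(staging, source);        // GPU -> staging copy

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        staging->Release();
        return false;
    }

    // Copy the contents out while the resource is mapped;
    // mapped.pData is invalid after Unmap.
    out.resize(byteWidth);
    std::memcpy(out.data(), mapped.pData, byteWidth);

    context->Unmap(staging, 0);
    staging->Release();
    return true;
}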

I managed to create the buffer and all that, but that brings me to another question: how do I know where in the buffer to look for my data?

This is my shader so far:

struct BufferStruct
{
    float4 color;
};

RWStructuredBuffer<BufferStruct> g_OutBuff;

[numthreads( 4, 1, 1 )]
void mainCS( uint3 threadID : SV_GroupThreadID, uint3 groupID : SV_GroupID )
{
    g_OutBuff[threadID.x].color = float4(10+10, 0.0f, 0.0f, 0.0f);
}


I'm just trying to experiment with one calculation of a float4, so I specified numthreads as (4, 1, 1), since I only want 4 threads in the x dimension and no more, one per float4 element. (I think I got that correct.)
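For reference, the total thread count is numthreads multiplied by the group counts passed to Dispatch; a minimal sketch of launching the shader above ('context', 'computeShader', and 'uav' are assumed names, not code from this post):

// One group of 4x1x1 threads = 4 threads total, one per buffer element.
context->CSSetShader(computeShader, nullptr, 0);
context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
context->Dispatch(1, 1, 1);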

Now I am outputting the value to g_OutBuff[threadID.x].

And in my map code I am just taking a pointer to the buffer data from the mapped resource.
Like this:

m_pD3DContex->CopyResource(MapBuffer, m_pComputeShaderBuffer);

hr = m_pD3DContex->Map(MapBuffer, 0, D3D11_MAP_READ, 0, &cbMapped);
if (ChekReturnError(hr))
    return false;
Buffer = (MORN_VARIABLE_BUFFER_COMPUTE*)cbMapped.pData;
// Note: the data must be read or copied out here, while the buffer is
// still mapped; the pointer is invalid after Unmap.
m_pD3DContex->Unmap(MapBuffer, 0);



In my head this is correct, but my values are not.
Please point me in the right direction on this.

Cheers!

I just solved it; I had managed to create the destination buffer a bit too small :)

Thanks a lot, guys, for all the help, this has been very interesting and educational in all ways :)

(This post might even get stickied, because it contains lots of questions about compute shaders.)

