simple compute shader question

Started by
3 comments, last by CryZe 11 years, 3 months ago
Hi,
I've recently started using compute shaders and I understand the basics. I wrote code for blurring a 1D texture and it works. The problem is that now I want this texture to be 2K in size. With 1K it was simple: Dispatch(1, 1, 1), [numthreads(1024, 1, 1)], and SV_GroupThreadID for indexing. The maximum number of threads per group is 1024, so I have to use two groups, but I can't synchronize across thread groups. Can someone explain how an algorithm for blurring a 1D texture with 2 (or more) groups should look in a compute shader? Two passes?

TIA
Przemek
Hi,

You can use groupshared variables to make them synchronized.

Example: groupshared float4 gCache[2048];

And if you need to work with specific data computed by another thread, just call GroupMemoryBarrierWithGroupSync(); so the data for the group is synced.

You can imagine the (1, 1, 1) as 3 axes (or a 3D array). If you have 1024, 1, 1 that's 1024*1*1. If you need 2048 you can, for example, use Dispatch(1024, 2, 1) or whatever combination you want.
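A minimal sketch of the groupshared-plus-barrier pattern described here (resource names like gInput/gOutput are my own for illustration; note the barrier only synchronizes threads within one group):

```hlsl
Texture1D<float4>   gInput  : register(t0);
RWTexture1D<float4> gOutput : register(u0);

groupshared float4 gCache[1024];

[numthreads(1024, 1, 1)]
void CS(uint3 gtid : SV_GroupThreadID)
{
    gCache[gtid.x] = gInput[gtid.x];   // each thread fetches one texel
    GroupMemoryBarrierWithGroupSync(); // wait until the whole group has written
    // ...from here on, any thread in this group may safely read
    // gCache entries written by other threads of the same group...
    gOutput[gtid.x] = gCache[gtid.x];
}
```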

Regards

from time to time i find time


> You can use groupshared variables to make them syncronized


No, he can't. The keyword groupshared allows multiple threads inside a thread group to share data, not different thread groups to share data with each other. It's this way because the thread groups might get executed by different streaming multiprocessors, while a single thread group is executed on a single streaming multiprocessor, where all of its threads can use specialized on-chip memory to efficiently share data.

It actually depends on your implementation. A straightforward blur shader doesn't really require synchronization between the individual threads, since every thread just gathers its own data. A better implementation might use groupshared memory as a cache for the row of the texture, so that every thread only needs to perform a single texture fetch. Since groupshared memory is only visible inside a thread group, you can only use up to 1024 threads that way. The solution is to make your algorithm more iterative: work on 2 pixels per thread and use a groupshared array with 2048 elements.

This works and is not even slower, since a thread group is not what the hardware actually executes in parallel. The driver splits your thread group into units of 32 or 64 threads called warps or wavefronts that get executed iteratively (to clarify: all the threads of a warp execute in parallel, but the different warps execute iteratively). So 2 thread groups of 1024 threads would be executed as 32 wavefronts or 64 warps in an iterative manner. My solution of a single thread group with 1024 threads gets converted into just 16 wavefronts or 32 warps, but each does twice the amount of work. So in the end your algorithm is just as parallel as it would be with 2 thread groups. As long as a thread group consists of at least 8 warps (NVIDIA's recommendation), you can always remove some of your parallelization without a decrease in performance. As far as I understand NVIDIA's Kepler architecture, it now begins to execute multiple warps in parallel (6, if I'm correct), so this solution might not be the best for the future. But it's as good as it gets with DirectX 11, unfortunately.
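A hedged sketch of this "2 pixels per thread" approach: one group of 1024 threads covers a 2048-texel row, so the host side issues Dispatch(1, 1, 1). The names (gInput, gOutput, RADIUS) and the simple box filter are my assumptions for illustration:

```hlsl
#define RADIUS 10

Texture1D<float4>   gInput  : register(t0);
RWTexture1D<float4> gOutput : register(u0);

groupshared float4 gCache[2048];

[numthreads(1024, 1, 1)]
void BlurCS(uint3 gtid : SV_GroupThreadID)
{
    // Each thread loads two texels into the shared cache.
    gCache[gtid.x]        = gInput[gtid.x];
    gCache[gtid.x + 1024] = gInput[gtid.x + 1024];
    GroupMemoryBarrierWithGroupSync();

    // Each thread then blurs its two texels, reading neighbours
    // from the cache instead of doing extra texture fetches.
    [unroll]
    for (uint p = 0; p < 2; ++p)
    {
        int center = int(gtid.x) + int(p) * 1024;
        float4 sum = 0;
        for (int i = -RADIUS; i <= RADIUS; ++i)
            sum += gCache[clamp(center + i, 0, 2047)];
        gOutput[center] = sum / (2 * RADIUS + 1);
    }
}
```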

If you actually wanted to work on more than 2048 pixels, you would need to use register memory to cache your pixels, since you would need more than the maximum of 32 KB of groupshared memory. Let's say you wanted to work with 4096 pixels. You could store 4 pixels per thread in its registers and always expose 2 of them in groupshared memory. You just need to synchronize the threads and always expose the pixels you want to access from other threads. Groupshared memory is just a way to share data; register memory is much larger than 32 KB.
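A very rough sketch of that register-caching idea, purely to show the shape of it (names are my assumptions, the actual blur and the seam between the two halves are elided):

```hlsl
Texture1D<float4>   gInput  : register(t0);
RWTexture1D<float4> gOutput : register(u0);

groupshared float4 gCache[2048];

[numthreads(1024, 1, 1)]
void CS(uint3 gtid : SV_GroupThreadID)
{
    // 4 texels per thread, kept in registers; laid out so that each
    // exposed half of the row is contiguous in the shared cache.
    float4 pix[4];
    [unroll]
    for (uint i = 0; i < 4; ++i)
        pix[i] = gInput[gtid.x + i * 1024];

    // Phase 1: expose texels 0..2047 and process them.
    gCache[gtid.x]        = pix[0];
    gCache[gtid.x + 1024] = pix[1];
    GroupMemoryBarrierWithGroupSync();
    // ...blur the first half using gCache here...

    // Phase 2: re-use the same shared memory for texels 2048..4095.
    GroupMemoryBarrierWithGroupSync(); // phase-1 reads must be finished
    gCache[gtid.x]        = pix[2];
    gCache[gtid.x + 1024] = pix[3];
    GroupMemoryBarrierWithGroupSync();
    // ...blur the second half here...
}
```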
Assuming you're using shared memory to cache values from nearby texels, you only need synchronization/sharing between threads that are N texels apart (where N is the radius of your blur kernel). So for instance, if your blur samples 10 texels to the left or right, then a given thread only needs to sync/communicate with the 10 threads to its left and the 10 threads to its right. This means that your thread group doesn't need to span the entire texture; it just needs to be large enough for any one thread to access its neighbors in shared memory.

For instance, if your thread group is 256 threads, thread 128 can safely access the data of threads 118-127 and 129-138 using shared memory and thread group synchronization. The only issue comes at the edge of your thread group. Take thread 255: it can access 245-254 just fine, but it can't communicate with the threads to its right because they're in a different thread group. There are 2 common ways of solving this problem:

1. Add "dummy" threads to your thread group that don't actually output anything, but instead just sample the N texels past the edge of your thread group (these texels are often referred to as an "apron")

2. Have the N threads at the start of the group sample both their own texel as well as the N texels to the left, then do the same for N threads at the end of the group
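A hedged sketch of option 1 ("apron" threads): the group runs 256 + 2*RADIUS threads, every thread loads one texel into the cache, and only the inner 256 threads write an output texel. Names and the 2048-texel row size are my assumptions:

```hlsl
#define GROUP_SIZE 256
#define RADIUS     10

Texture1D<float4>   gInput  : register(t0);
RWTexture1D<float4> gOutput : register(u0);

groupshared float4 gCache[GROUP_SIZE + 2 * RADIUS];

[numthreads(GROUP_SIZE + 2 * RADIUS, 1, 1)]
void BlurCS(uint3 gid : SV_GroupID, uint3 gtid : SV_GroupThreadID)
{
    // Shift left by RADIUS so the first and last RADIUS threads
    // load the apron texels outside the group's output range.
    int texel = int(gid.x) * GROUP_SIZE + int(gtid.x) - RADIUS;
    gCache[gtid.x] = gInput[clamp(texel, 0, 2047)];
    GroupMemoryBarrierWithGroupSync();

    // Only the inner threads produce output; the apron threads
    // exist purely to fill the cache.
    if (gtid.x >= RADIUS && gtid.x < GROUP_SIZE + RADIUS)
    {
        float4 sum = 0;
        for (int i = -RADIUS; i <= RADIUS; ++i)
            sum += gCache[int(gtid.x) + i];
        gOutput[texel] = sum / (2 * RADIUS + 1);
    }
}
```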

You might want to have a read through this paper as well as this presentation for more info.
I would only use the apron solution for non-separable kernels or kernels that are too large to fit into groupshared memory, since the apron pixels get sampled multiple times by the different thread groups. You lose performance there, which is not the case in the solution where every thread simply calculates multiple pixels.

This topic is closed to new replies.
