A compute shader is a programmable shader stage that expands Microsoft Direct3D 11 beyond graphics programming. The compute shader technology is also known as the DirectCompute technology. Like other programmable shaders (vertex and geometry shaders for example), a compute shader is designed and implemented with HLSL but that is just about where the similarity ends. A compute shader provides high-speed general purpose computing and takes advantage of the large numbers of parallel processors on the graphics processing unit (GPU). The compute shader provides memory sharing and thread synchronization features to allow more effective parallel programming methods.
In this demo, we are going to render a desert scene to an off-screen texture. This texture will be the input to our blurring algorithm, which executes on the compute shader. After the texture is blurred, we will draw a full-screen quad to the back buffer with the blurred texture applied, so that we can see the result and verify our blur implementation.
We assume that the blur is separable, so we break it down into two 1D blurs (Rolling Box Blur): a horizontal one and a vertical one. Implementing this requires two textures that we can both read from and write to; therefore, we need a shader resource view (SRV) and an unordered access view (UAV) for each texture. Let us call one of the textures A and the other texture B. The blurring algorithm proceeds as follows:
- Bind the SRV to A as an input to the compute shader (this is the input image that will be horizontally blurred).
- Bind the UAV to B as an output to the compute shader (this is the output image that will store the horizontally blurred result).
- Dispatch the thread groups to perform the horizontal blur operation.
- Bind the SRV to B as an input to the compute shader (this is the horizontally blurred image that will next be vertically blurred).
- Bind the UAV to A as an output to the compute shader (this is the output image that will store the final blurred result).
- Dispatch the thread groups to perform the vertical blur operation.