
Started to Dive into the Compute Shader!

Paul C Skertich


I admit that when I first heard about compute shaders, I thought they were an overly complex ordeal. They're actually fascinating because of the parallel possibilities: it's like turbocharging your game, letting you compute box blurs or other post-processing effects, or even run particle physics, on the GPU.

I created an Unordered Access View, and from it I was able to save the image you see below. It's a simple example, nothing over the top, but it gave me the basic idea of how a compute shader works. The texture I created for the UAV and SRV is 1024×1024, with one thread per pixel filling the image; the gradient ramp you see is the result of the XY thread IDs.
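The gradient fill described above can be sketched as a tiny HLSL compute shader. The entry-point name, register slots, and 8×8 group size are my assumptions, not the post's actual code:

```hlsl
// Fill a 1024x1024 UAV with a gradient driven by the dispatch thread ID.
RWTexture2D<float4> outputTexture : register(u0);

[numthreads(8, 8, 1)]   // 8x8 = 64 threads per group
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Normalize the XY thread coordinates into a 0..1 ramp.
    float2 uv = id.xy / 1024.0;
    outputTexture[id.xy] = float4(uv.x, uv.y, 0.0, 1.0);
}
```

With 8×8 groups, covering all 1024×1024 pixels takes `Dispatch(128, 128, 1)` from the application side, since 128 × 8 = 1024 threads per axis.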


I'm digging compute shaders and I'll keep exploring the possibilities with them. I understand they won't solve everything, though they can help hide some stalling. I'm glad I looked more in depth into them.

This is just me ranting out of excitement!

Edit: A newly attached photo shows the scene rendered to a render target and then run through the compute shader; it is not a post-process effect done in the compute shader.


There's more to learn from this great experience I must admit! I look forward to it!


Recommended Comments

I remember doing exactly the same thing when I started: "Huh, what does all this SV_GroupThreadWhatever do?"


Yeah, I know, right! It was all like, what in God's name! Last night I grabbed a render target SRV, put it through the compute shader, and saved the result as a JPEG. I also came to realize why creating a render target as R32G32B32A32_FLOAT is sometimes heavy on memory: a render target of R8G8B8A8_UNORM does just fine at a quarter of the bytes per pixel. One thing I still have to do in my engine is enumerate all the display modes, save them to a configuration file, and then have the program pick a suitable one. I'm not sure dispatching (64, 21, 1) threads just to fill the render target is any good, because in the shader 32 apparently seemed to be the max. After reading MSDN, the maximum dispatchable thread counts are higher than 64, but I'm not sure how well that will do for performance. I'll have to use a high-resolution timer to see the difference.


Not sure I follow; you're probably confusing something here. The 32 "limit" (or 64 for AMD) is the so-called warp size (aka wavefront), and to max out the GPU's performance a group size should be a multiple of it (i.e. the product x*y*z of [tt][numthreads(x,y,z)][/tt]).
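A minimal sketch of that sizing advice (the entry-point name is assumed). Note also that the group size declared in the shader is separate from the group *count* passed to Dispatch from the application:

```hlsl
// 8 * 8 * 1 = 64 threads per group: a multiple of both the NVIDIA
// warp size (32) and the AMD wavefront size (64).
[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // ... per-thread work indexed by id.xy ...
}
```

Covering, say, a 1024×1024 texture with these 8×8 groups then means calling `Dispatch(1024 / 8, 1024 / 8, 1)`, i.e. `Dispatch(128, 128, 1)`, on the device context.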


Apart from that, don't bother too much just yet. I consider the switch from normal shaders to compute at least as difficult to understand as the switch from regular programming to graphics shaders. Best to start playing with available source (e.g. a prefix sum).


Yeah, the whole Dispatch(x, y, z) and [numthreads(x, y, z)] business was tripping me up a bit! It's confusing to get a handle on because you're dealing with individual x, y, z threads to fill up a texture, whereas a pixel shader just outputs pixels to the render target or back buffer. The shader code reads differently too: instead of returning a float4 as in a pixel shader, you write bufferOut[dispatchThread.xy] = data[dispatchThread.xy].
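That contrast can be shown side by side in HLSL. This is a sketch with assumed names (`PSMain`, `CSMain`, `sourceData`), not anyone's actual code from the thread:

```hlsl
// Pixel shader: you RETURN a color, and the output merger writes it
// to the bound render target for you.
float4 PSMain(float4 pos : SV_Position) : SV_Target
{
    return float4(1.0, 0.0, 0.0, 1.0);
}

// Compute shader: you index into the output resource YOURSELF,
// using the dispatch thread ID as the pixel coordinate.
Texture2D<float4>   sourceData : register(t0);
RWTexture2D<float4> bufferOut  : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 dispatchThread : SV_DispatchThreadID)
{
    bufferOut[dispatchThread.xy] = sourceData[dispatchThread.xy];
}
```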

