
What to do with compute shaders?!


Hi,

 

I was just looking into compute shaders. They seem really cool and all, but I'm not sure what to do with them.

 

What do you do with compute shaders? I am not sure what I will be using them for.

 

Greetz Jaap

 

 


Some heavy-duty calculations? Post-processing effects? Light culling? You name it!

 

Cheers!


Wouldn't it be faster to do post-processing in a fragment shader? When you work in a compute shader you can't output to the framebuffer directly, and therefore you need to render a fullscreen quad to display the result.
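For context, a compute pass writes its result into a texture through image load/store rather than into the framebuffer, and the host side that goes with it is small. A rough sketch, assuming a GL 4.3 context and an already compiled compute program with a 16x16 local size; the function and texture names here are made up for illustration:

#include <GL/glew.h>  // or any other GL function loader

// Runs a compute program that reads srcTex and writes dstTex via image load/store.
// The result never touches the framebuffer; you sample dstTex from a fullscreen
// triangle afterwards, or feed it straight into the next pass.
void dispatchFullscreenCompute(GLuint program, GLuint srcTex, GLuint dstTex,
                               int width, int height)
{
    glUseProgram(program);

    // Binding points 0 and 1 must match the layout(binding = ...) in the shader.
    glBindImageTexture(0, srcTex, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA8);
    glBindImageTexture(1, dstTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);

    // One invocation per pixel, rounded up to whole 16x16 work groups.
    glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);

    // Make the image writes visible to whoever reads dstTex next.
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);
}

Displaying the result is then just a trivial fullscreen triangle that samples dstTex (or a glBlitFramebuffer if dstTex is attached to an FBO), which is usually negligible next to the pass itself.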


Of course it is better to use the standard vertex/tessellation/fragment shaders whenever you can. I have deliberately skipped geometry shaders for a reason. On the other hand, there are tasks that require GPU-based computation and cannot easily be mapped onto those shader stages. They are usually programmed in CUDA or OpenCL. Both APIs require separate contexts and introduce unwanted delays in interoperability with OpenGL. That's why compute shaders were introduced. They are not as powerful as CUDA/OpenCL, but they serve the purpose most of the time with much lower overhead.


Actually, some effects such as blur are more efficient when implemented with a compute shader. You'll also do lots of processing with render targets etc., so it isn't really an issue that you can't write to the back buffer directly. The usage patterns allow you to reduce memory accesses and thus save bandwidth, which is always good.
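To make the shared-memory point concrete, here is a rough sketch of a horizontal box blur as a GLSL compute shader, kept in a C++ raw string; the radius, bindings, image format and work-group size are arbitrary choices for the example:

// GLSL source for one horizontal blur pass (a vertical pass would mirror it).
static const char* kBlurCS = R"GLSL(
#version 430
layout(local_size_x = 256, local_size_y = 1) in;

layout(binding = 0, rgba8) readonly  uniform image2D srcImage;
layout(binding = 1, rgba8) writeonly uniform image2D dstImage;

const int RADIUS = 4;

// Each 256-wide work group loads its row segment plus the apron into shared
// memory once, so every texel is fetched from the image exactly once.
shared vec4 tile[256 + 2 * RADIUS];

void main()
{
    ivec2 size  = imageSize(srcImage);
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    int   lid   = int(gl_LocalInvocationID.x);

    // Centre texel for this invocation...
    tile[lid + RADIUS] = imageLoad(srcImage, clamp(coord, ivec2(0), size - 1));

    // ...and the left/right aprons, loaded by the first RADIUS threads.
    if (lid < RADIUS) {
        tile[lid] = imageLoad(srcImage,
            clamp(coord - ivec2(RADIUS, 0), ivec2(0), size - 1));
        tile[lid + 256 + RADIUS] = imageLoad(srcImage,
            clamp(coord + ivec2(256, 0), ivec2(0), size - 1));
    }

    // Make the shared-memory writes visible and wait for the whole group.
    memoryBarrierShared();
    barrier();

    if (coord.x >= size.x || coord.y >= size.y) return;

    vec4 sum = vec4(0.0);
    for (int i = -RADIUS; i <= RADIUS; ++i)
        sum += tile[lid + RADIUS + i];

    imageStore(dstImage, coord, sum / float(2 * RADIUS + 1));
}
)GLSL";

The filtering itself runs entirely out of shared memory, which is where the bandwidth saving over a plain fragment-shader blur comes from.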

 

Of course, some other algorithms may not see much of an improvement with a CS, but culling 1000 light sources, for example, becomes pretty trivial with a CS; if you do it with a PS you'll be repeating much of the calculation for no gain.

 

Aks9 - Geometry shaders are useful too! No need to skip them. Just know where to use them.

 

Cheers!


 

Aks9 - Geometry shaders are useful too! No need to skip them. Just know where to use them.

 

 

I know. ;)

I'm sorry if my previous post caused confusion. I skipped them in that list because of their general performance, not because of functionality.

Maybe downscaling of images could be done faster using compute shaders, since it only requires access to two textures instead of a full render pass.
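A 2x2 box downsample really is only a few lines; a sketch (bindings, format and work-group size are again arbitrary, and the destination is assumed to be exactly half the source size):

// GLSL source for a single 2x2 box downsample step.
static const char* kDownsampleCS = R"GLSL(
#version 430
layout(local_size_x = 8, local_size_y = 8) in;

layout(binding = 0, rgba8) readonly  uniform image2D srcImage;  // W x H
layout(binding = 1, rgba8) writeonly uniform image2D dstImage;  // W/2 x H/2

void main()
{
    ivec2 dst = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(dst, imageSize(dstImage)))) return;

    // Average the 2x2 block of source texels that maps to this destination texel.
    ivec2 src = dst * 2;
    vec4 c = imageLoad(srcImage, src)
           + imageLoad(srcImage, src + ivec2(1, 0))
           + imageLoad(srcImage, src + ivec2(0, 1))
           + imageLoad(srcImage, src + ivec2(1, 1));
    imageStore(dstImage, dst, c * 0.25);
}
)GLSL";

If you chain this to build a full mip pyramid, put a glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT) between the dispatches so each level sees the previous level's writes.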


Compute shaders provide shared memory and don't use graphics pipeline functionality like the ROPs. Thus they can be faster than a fullscreen pixel shader implementation if you make use of shared memory (when the algorithm has some spatial locality) or if the pass is ROP bound (which is very rare).

Another use case, not supported by current APIs but by the hardware, is concurrent tasks: current GPUs can run a graphics workload and (several) compute workloads at the same time.

This makes sense if you have one algorithm that heavily uses the graphics units and another that is ALU bound or bandwidth bound. A typical use case is shadow mapping, which is rasterizer bound, overlapped with some fullscreen effect.

Mantle exposes this ability, and it is used in Sniper Elite 3 (it renders ambient obscurance and the shadow map concurrently).


 

Shaders can be used to manipulate vertices or fragments. You can create all kinds of effects. See http://talkera.org/opengl/ for some examples.

 

This isn't about shaders in general; it's about compute shaders. It's clear that the OP knows what shaders are and what they do.

 

 

Exactly, I'm just wondering what others are doing with them.


@bioglaze: I could try that; it would be cool to be able to have 1000+ light sources. But I would settle for 100+ too :P


I'm surprised no one has mentioned particle simulation yet; NVIDIA has a nice demo available as part of GameWorks (there is also a compute-based water simulation in the samples collection). Compute is also great for parallel sorting, though for a game this is a little less applicable unless you sort and then use the data purely on the GPU.
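The update pass for a GPU particle system is a good fit because every particle is independent. A minimal sketch; the particle layout, bindings, gravity constant and respawn rule are all made up for the example:

// GLSL source for one particle update step over an SSBO of particles.
static const char* kParticleUpdateCS = R"GLSL(
#version 430
layout(local_size_x = 128) in;

struct Particle {
    vec4 positionLife;   // xyz = position, w = remaining lifetime in seconds
    vec4 velocity;       // xyz = velocity, w = padding
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float dt;

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;

    Particle p = particles[i];
    p.velocity.xyz     += vec3(0.0, -9.81, 0.0) * dt;   // gravity
    p.positionLife.xyz += p.velocity.xyz * dt;
    p.positionLife.w   -= dt;

    // Expired particles respawn at the emitter (the origin here, for simplicity).
    if (p.positionLife.w <= 0.0) {
        p = Particle(vec4(0.0, 0.0, 0.0, 5.0), vec4(0.0));
    }

    particles[i] = p;
}
)GLSL";

Since the buffer never leaves the GPU, the same buffer object can be bound as the vertex source when drawing the particles, so there is no CPU round trip.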


@bioglaze, JvdWulp: my clustered deferred implementation goes up to 60K lights at a smooth 60 fps :D

 

I've been implementing an LBVH for culling lights and use it along with my tiled/clustered deferred renderer.

Tree construction (which includes sorting the lights by Morton code) and traversal for culling lights per cluster can all be done pretty fast with compute shaders.
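For anyone curious, the Morton code itself is just bit interleaving, and it looks the same whether you compute it on the CPU or in a compute shader. The usual 30-bit variant in C++, assuming the light positions have already been normalized to [0, 1] within the scene bounds:

#include <algorithm>
#include <cstdint>

// Spreads the lower 10 bits of x so there are two zero bits between each bit.
static uint32_t expandBits(uint32_t x)
{
    x = (x * 0x00010001u) & 0xFF0000FFu;
    x = (x * 0x00000101u) & 0x0F00F00Fu;
    x = (x * 0x00000011u) & 0xC30C30C3u;
    x = (x * 0x00000005u) & 0x49249249u;
    return x;
}

// 30-bit Morton code for a point with coordinates in [0, 1]:
// quantize each axis to 10 bits, then interleave the bits (x highest in each triplet).
uint32_t morton3D(float x, float y, float z)
{
    x = std::min(std::max(x * 1024.0f, 0.0f), 1023.0f);
    y = std::min(std::max(y * 1024.0f, 0.0f), 1023.0f);
    z = std::min(std::max(z * 1024.0f, 0.0f), 1023.0f);
    uint32_t xx = expandBits(static_cast<uint32_t>(x));
    uint32_t yy = expandBits(static_cast<uint32_t>(y));
    uint32_t zz = expandBits(static_cast<uint32_t>(z));
    return (xx << 2) | (yy << 1) | zz;
}

Sorting the lights by this key is what groups spatially close lights next to each other for the LBVH build.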


@bioglaze, JvdWulp: my clustered deferred implementation goes up to 60K lights at a smooth 60 fps :D

That's impressive! My Tiled Forward implementation only goes up to ~2000 point lights at 1080p / 60 Hz; however, I use quite a heavy Cook-Torrance BRDF everywhere and haven't really profiled it yet.


Thanks guys, I got some ideas. I think I will try to get a GPU particle system working first, and maybe later turn the particles into light sources. Maybe I can come up with a more original use.
