GPGPU and Compute

3 comments, last by WhiskyJoe 10 years, 6 months ago

Hey all,

I've been looking around a bit on the topic of parallel programming and I was wondering if there are any (good) resources around. I have done an SPH implementation in OpenCL and have read up a bit here and there about CUDA, but apart from GPU Computing Gems, Programming Massively Parallel Processors: A Hands-on Approach, and some articles/tutorials online, I haven't been able to find much. These resources also tend to focus on CUDA/OpenCL rather than on compute shaders.

I have read a bit about the differences in an older thread here on GameDev, and I more or less get the distinction between CUDA/OpenCL and compute shaders, but I would love more insight into the differences (if there is more to say than what is already stated in that thread), and I would also like more information about compute shaders in general (I don't mind whether it's OpenGL or DirectX compute shaders). Online resources are fine, but books are preferred.

So anything on either of these topics is very welcome! :)



I've been looking around a bit on the topic of parallel programming and I was wondering if there are any (good) resources around.

I'm not sure my input will be useful, but I have been programming GPUs for 6 months now (mainly machine learning problems). I started with kernel development myself, but then found ArrayFire, and it is much faster than what I was writing. Now I use ArrayFire for almost everything I do and supplement it with kernel code only when I have to.

You might be looking to just go low-level for the exercise of it. But if you're trying to get work done and want the best speed possible, I don't think anyone does it better than the ArrayFire guys.

-Tom
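For a feel of what that high-level style looks like, here is a minimal, hypothetical ArrayFire sketch (not from Tom's post; it just shows whole-array operations that the library turns into GPU kernels for you):

#include <arrayfire.h>
#include <cstdio>

int main()
{
    // 10,000 uniform random floats, allocated on the GPU.
    af::array a = af::randu(10000);

    // Element-wise math; ArrayFire generates and launches the kernels.
    af::array b = af::sin(a) * 2.0f + 1.0f;

    // Reduction back to a single host value.
    float total = af::sum<float>(b);
    std::printf("sum = %f\n", total);
    return 0;
}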

Practical Rendering and Computation with Direct3D 11 discusses DirectCompute in several chapters.

http://www.amazon.com/Practical-Rendering-Computation-Direct3D-11/dp/1568817207

This might sound counter-intuitive, but in general compute shaders in D3D are very focused on graphics. They use the same language and API constructs as the graphics-oriented shader types, and it's very simple to integrate them with normal rendering. I would be more inclined to call them a natural extension of D3D's rendering capabilities than a solid framework for GPGPU computing. CUDA, on the other hand, is very much geared towards non-graphics work. It has an extensive standard library, tons of online resources for parallel computing, an active community, and a language that's more flexible than HLSL and behaves a lot more like C++, and it is simpler to integrate with CPU-side processing.
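To illustrate the "integrates with normal rendering" point: a compute shader is dispatched through the same ID3D11DeviceContext you already draw with. A rough C++ sketch, where the function and resource names (RunParticleUpdate, particleCS, particleUAV) are made up for illustration:

#include <d3d11.h>

// Assumes the compute shader and UAV were created elsewhere with the usual D3D11 calls.
void RunParticleUpdate(ID3D11DeviceContext* context,
                       ID3D11ComputeShader* particleCS,
                       ID3D11UnorderedAccessView* particleUAV,
                       UINT particleCount)
{
    // Bind the compute shader and its output buffer, just like any other shader stage.
    context->CSSetShader(particleCS, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &particleUAV, nullptr);

    // One thread per particle; 256 must match [numthreads(256, 1, 1)] in the HLSL.
    context->Dispatch((particleCount + 255) / 256, 1, 1);

    // Unbind the UAV so the same buffer can be consumed by the rendering pipeline next.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}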

Another possibility is C++ AMP. It's basically a high-level framework built on top of compute shaders that provides a much more C++-like environment for GPGPU.
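For comparison, a minimal C++ AMP sketch (a generic example using Visual C++'s <amp.h>; the restrict(amp) lambda below is compiled down to a compute shader behind the scenes):

#include <amp.h>
#include <vector>

int main()
{
    std::vector<float> data(1024, 1.0f);

    // array_view wraps the host data and manages the GPU copy implicitly.
    concurrency::array_view<float, 1> view(static_cast<int>(data.size()), data);

    // Runs one GPU thread per element.
    concurrency::parallel_for_each(view.extent,
        [=](concurrency::index<1> idx) restrict(amp)
        {
            view[idx] = view[idx] * 2.0f + 1.0f;
        });

    view.synchronize(); // copy the results back into 'data'
    return 0;
}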


You might be looking to just go low-level for the exercise of it. But if you're trying to get work done and want the best speed possible, I don't think anyone does it better than the ArrayFire guys.

-Tom

Thanks! While it certainly looks useful, I'm not searching around on this subject purely out of personal interest: I'm also looking for a topic to use for my graduation project, and while external libraries aren't necessarily prohibited there, their use is discouraged for the sake of learning.

This might sound counter-intuitive, but in general compute shaders in D3D are very focused on graphics. [...] CUDA, on the other hand, is very much geared towards non-graphics work.

Is this the same for compute shaders in OpenGL? Are compute shaders generally used only in relation to graphics, with OpenCL/CUDA used more for general-purpose heavy computation? I'm wondering because the thread I mentioned more or less states that both are used for general-purpose work, since under the hood they're essentially identical in how they use the hardware.

Another question comes to mind: what is used more often in relation to game development (in all aspects, like rendering, physics, etc.), if anything at all? I know certain libraries like PhysX use CUDA when they can, but I haven't heard much about compute shaders in relation to game development (or outside of it, to be honest). The only concrete thing I know is that the PlayStation 4 supports compute shaders, but that's about it; I haven't really read or heard anything about their use in actual games. Perhaps that will change once the console is actually out.

Thanks for the input so far! :)

