Posted 10 July 2011 - 11:42 PM
GPUs are not good at branching. They process fragments in groups, and if one fragment in a group needs to take a different branch, the hardware has to execute both sides of the branch for the whole group, masking out the unused results. SM 3 GPUs process groups of 4 (2x2 fragments). SM 4 GPUs (I don't know if it is all of them) process groups of 16 (4x4 fragments).
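To illustrate (a hypothetical fragment shader; expensivePathA() and expensivePathB() are stand-ins for whatever real work your shader does): neighbouring fragments land in the same group, so a condition that flips between adjacent pixels forces every group through both paths.

    // Checkerboard condition: adjacent fragments in the same 2x2 quad
    // disagree on the test, so the group pays for both branches.
    float checker = mod(gl_FragCoord.x + gl_FragCoord.y, 2.0);
    if (checker < 1.0)
        gl_FragColor = expensivePathA(); // hypothetical helper
    else
        gl_FragColor = expensivePathB(); // hypothetical helper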
Branch prediction? There is no such thing in GLSL.
I do remember reading a lot of articles about Intel and branch prediction in their CPUs, and how AMD designed a better CPU with their brand-new Athlon.
"Lots of articles talking about SIMD programming is about CUDA, can these information about CUDA be applied on shaders such as GLSL."
If the articles are about GPU hardware, then yes. GLSL is just a shading language, and there isn't anything about GPU design in the GLSL specification.
As for your code, you can get rid of the "if"/"else" with the mix() function. Look at your GLSL manual.
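For example (a hypothetical snippet; "value", "colorA", and "colorB" stand in for whatever your shader actually computes):

    // Branching version:
    vec3 color;
    if (value > 0.5)
        color = colorA;
    else
        color = colorB;

    // Branch-free version: step(0.5, value) returns 0.0 when
    // value < 0.5 and 1.0 otherwise, so mix() selects colorB
    // or colorA without any branch.
    vec3 color = mix(colorB, colorA, step(0.5, value));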
If you are interested in using the GPU as a general-purpose processor, read about CUDA or OpenCL (Open Computing Language).
Posted 11 July 2011 - 08:39 AM