I have GLSL code that does something like the simplified code below
uniform vec2 inc;
uniform sampler2D myTex;
varying vec2 uv;    // texture coordinate from the vertex shader

vec3 tColor = vec3(0.0, 0.0, 0.0);
vec3 one = vec3(1.0, 1.0, 1.0);
for (int i = 0; i < 512; ++i) {
    uv += inc.xy;
    vec3 texCol = texture2D(myTex, uv).rgb;
    tColor += texCol;
    if (dot(one, tColor) > 1.0) {
        break;
    }
}
// Do some other processing with tColor
gl_FragColor.rgb = tColor;
This is not the exact code, but the code itself is not the important bit. Basically there are:
1. A break statement which cannot be predicted at compile time
2. Even after the break statement, some processing is done on the vector whose value depends on when the break occurred.
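Whether a dynamic break like this saves time depends on how the GPU schedules fragments. GPUs execute fragments in lockstep groups (warps on NVIDIA, wavefronts on ATI/AMD), so a group keeps looping until every fragment in it has taken the break. The toy Python sketch below models this (the group size and behaviour are simplified illustrations, not actual hardware values):

```python
# Toy model of lockstep (SIMT) execution: a group of fragments shares one
# program counter, so the loop cost for the group is the MAXIMUM number of
# iterations any fragment in the group needs before its break fires.
# Illustrative only; real group sizes are e.g. 32 (NVIDIA) or 64 (ATI/AMD).

GROUP_SIZE = 4
MAX_ITERS = 512  # the shader's loop bound

def group_iterations(break_points):
    """Iterations the whole group runs: the latest break point, capped
    at the loop bound (fragments that never break run all 512)."""
    return min(max(break_points), MAX_ITERS)

# Coherent case: neighbouring fragments break at nearly the same iteration,
# so the early-out pays off for the whole group.
coherent = [10, 11, 10, 12]

# Divergent case: one fragment never satisfies the break condition,
# so the group gains nothing from the early-out.
divergent = [10, 11, 10, 512]

print(group_iterations(coherent))   # -> 12
print(group_iterations(divergent))  # -> 512
```

This is consistent with the ~30% speedup observed: nearby fragments usually sample similar texels, so their break conditions tend to fire at similar iterations, and whole groups exit early together.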
The shader works as expected, and the output shows that the break statement fires at the correct time.
Up until now, I was under the impression that dynamic branching of this sort was not supported and the shader would actually end up being costlier. This information has mainly come from old posts on gamedev.net forums. After running some tests on a fairly new ATI and an NVidia card I found that the frame rate actually improved considerably (by about 30%) with the break statement in there.
Dynamic branching seems to work well on the newer generation of graphics cards, and I have clearly missed this development. Can someone point me to some technical documents or other information that explains when and how this changed, in a bit more detail?
Regards,
- Sid