redeemer90

Dynamic branching in GLSL


Hello,

I have GLSL code that does something like the simplified code below



uniform vec2 inc;
uniform sampler2D myTex;
varying vec2 texCoord; // starting texture coordinate

void main()
{
    vec2 uv = texCoord;
    vec3 tColor = vec3(0.0);
    vec3 one = vec3(1.0);

    for (int i = 0; i < 512; ++i)
    {
        uv += inc.xy;
        vec3 texCol = texture2D(myTex, uv).rgb;
        tColor += texCol;

        // Data-dependent exit: cannot be predicted at compile time
        if (dot(one, tColor) > 1.0)
            break;
    }

    // Do some other processing with tColor
    gl_FragColor.rgb = tColor;
}


This is not the exact code, but the details are not the important bit. The essentials are:
1. A break statement whose condition cannot be predicted at compile time.
2. After the loop, further processing of a vector whose value depends on when the break was taken.

The shader works as expected, and the output shows that the break was taken at the correct time.

Up until now I was under the impression that dynamic branching of this sort was not supported, and that the shader would actually end up being costlier. This information has mainly come from old posts on the gamedev.net forums. After running some tests on a fairly new ATI card and an NVidia card, I found that the frame rate actually improved considerably (by about 30%) with the break statement in place.
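That outcome matches how modern GPUs execute branches: fragments run in groups, and a data-dependent break is cheap when neighbouring fragments exit at roughly the same iteration. A minimal sketch contrasting the two branch kinds (the `uv` varying and `threshold` uniform here are illustrative, not from the original shader):

```glsl
uniform sampler2D myTex;
uniform float threshold;   // illustrative uniform
varying vec2 uv;           // illustrative varying

void main()
{
    vec3 col = texture2D(myTex, uv).rgb;

    // Uniform branch: every fragment takes the same path,
    // so the GPU only pays for one side.
    if (threshold > 0.5)
        col *= 2.0;

    // Data-dependent branch: cost depends on coherence.
    // If nearby fragments disagree, both paths may execute.
    if (col.r > 0.5)
        col = vec3(1.0) - col;

    gl_FragColor = vec4(col, 1.0);
}
```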

This seems to be a feature of the newer generation of graphics cards that I have clearly missed. Can someone point me to some technical documents or other information that explains when and how this came about in a bit more detail?

Regards,

- Sid

True dynamic branching arrived with Shader Model 3 hardware, which added some extra registers to support it (a loop counter register and a predicate register). This is definitely a case where knowledge of D3D can improve your OpenGL wisdom: the DirectX SDK documents this fully, and since the underlying hardware standards are the same, only the API differs.
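For context on why branching used to look costlier: without those registers, a compiler typically had to flatten the branch, evaluating both sides and selecting one result, so every fragment paid for every path. A hedged sketch of that flattening expressed in GLSL terms (illustrative names, not taken from any SDK):

```glsl
// Branch form, as written (needs true dynamic branching):
//     if (x > 0.5) col = a; else col = b;

// Flattened form a pre-SM3 compiler would emit instead;
// both a and b are computed, then one is selected:
float mask = step(0.5, x);   // 1.0 when x > 0.5, else 0.0
vec3 col = mix(b, a, mask);  // selects a when mask == 1.0
```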
