jcabeleira

Shader branching ruins performance


Recommended Posts

- don't use doubles, use floats
- don't index into interpolators or local variables
- avoid branching


If you break those rules, you can easily get 30 times slower. Why? Simply because the hardware runs 32 threads through the same code path (NVIDIA calls them warps). Doing something the hardware isn't optimized for will probably make it about 32 times slower (1/32nd of the performance).

The latter two points can be fixed by unrolling. If you assume your compiler doesn't unroll and you don't want to unroll it by hand, make the inner loop a function and pass the iteration parameter.
Once you've done that, just call the function 8 times instead of doing it in a loop, passing the iteration parameter each time. This way it stays maintainable and will be "unrolled".
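
A minimal sketch of the trick (the sphere layout and occlusion term are placeholders, not your actual code):

// Extracting the loop body into a function and calling it with literal
// indices; after inlining, every spheres[i] access has a constant index.
uniform vec4 spheres[8];   // xyz = center, w = radius (assumed layout)
varying vec3 position;     // from the vertex shader
varying vec3 normal;

float occlusionStep(int i, vec3 p, vec3 n)
{
    vec3  sphereVector = spheres[i].xyz - p;
    float dist         = length(sphereVector);
    // placeholder occlusion term, not a real ray-sphere intersection
    return max(dot(n, sphereVector / dist), 0.0) * (spheres[i].w / dist);
}

void main()
{
    vec3 n = normalize(normal);
    // eight explicit calls instead of "for (int i = 0; i < 8; i++)"
    float occlusion = occlusionStep(0, position, n)
                    + occlusionStep(1, position, n)
                    + occlusionStep(2, position, n)
                    + occlusionStep(3, position, n)
                    + occlusionStep(4, position, n)
                    + occlusionStep(5, position, n)
                    + occlusionStep(6, position, n)
                    + occlusionStep(7, position, n);
    gl_FragColor = vec4(1.0 - occlusion / 8.0);
}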

Quote:
Original post by ZenoGD
This may be a dumb idea, but what happens if you allocate all of your local variables (like sphereVector, nearestPoint, and maybe even the loop counters) outside the loop?

I'm thinking maybe it's unrolling your loop internally, but doing so requires it to create more temporary variables than it has room for?


I tried that, but got no improvement.

Quote:

Would you mind showing a screenshot of your result? How does the AO look with only 8 samples?


Looks pretty good, actually. In fact, it looks just like SSAO. The only difference is that it calculates occlusion from objects outside the view frustum too.

Quote:

- don't use doubles, use floats
- don't index into interpolators or local variables
- avoid branching


If you break those rules, you can easily get 30 times slower. Why? Simply because the hardware runs 32 threads through the same code path (NVIDIA calls them warps). Doing something the hardware isn't optimized for will probably make it about 32 times slower (1/32nd of the performance).

The latter two points can be fixed by unrolling. If you assume your compiler doesn't unroll and you don't want to unroll it by hand, make the inner loop a function and pass the iteration parameter.
Once you've done that, just call the function 8 times instead of doing it in a loop, passing the iteration parameter each time. This way it stays maintainable and will be "unrolled".


Yes, I'm aware of all that. First, I'm using floats only.
Second, I could unroll the shader, but I wanted to keep it flexible enough to handle an arbitrary number of spheres.
Third, even if I used a fixed number of spheres, say 8, that still leaves 64 loop iterations to unroll (8 rays x 8 spheres), which is not maintainable.
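
For reference, the loop structure is roughly this (simplified placeholder code, not the exact shader):

// 8 rays x 8 spheres = 64 iterations per pixel
uniform vec4 spheres[8];  // xyz = center, w = radius (assumed layout)
uniform vec3 rays[8];     // normalized sampling directions (assumed)

float computeOcclusion(vec3 point)
{
    float occlusion = 0.0;
    for (int r = 0; r < 8; r++)          // outer loop over rays
    {
        for (int s = 0; s < 8; s++)      // inner loop over spheres
        {
            // closest-approach test, standing in for the real intersection code
            vec3  sphereVector = spheres[s].xyz - point;
            float along        = dot(sphereVector, rays[r]);
            vec3  nearestPoint = point + rays[r] * max(along, 0.0);
            if (distance(nearestPoint, spheres[s].xyz) < spheres[s].w)
            {
                occlusion += 1.0 / 8.0;  // this ray is blocked
                break;                   // no need to test more spheres
            }
        }
    }
    return occlusion;
}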

Quote:
Original post by jcabeleira
First, I'm using floats only.

occlusion /= 8.0;
gl_FragColor = vec4(1.0 - occlusion);

Those are doubles. NVIDIA's GTX cards are capable of processing them; they run at 1/8th of the float performance. I can't guarantee that NVIDIA's GLSL compiler really treats them as doubles, but it's possible it does.

Quote:
Original post by Krypt0n
Quote:
Original post by jcabeleira
First, I'm using floats only.

occlusion /= 8.0;
gl_FragColor = vec4(1.0 - occlusion);

Those are doubles. NVIDIA's GTX cards are capable of processing them; they run at 1/8th of the float performance. I can't guarantee that NVIDIA's GLSL compiler really treats them as doubles, but it's possible it does.


I think you're mixing this up with C++, which takes "1.0" as a double and "1.0f" as a float. As far as I know, all shader languages treat a real number without the "f" suffix as a float, not a double.

But even if what you're saying were true, those calculations are performed once per pixel, which is negligible compared to the 64 iterations of the ray-sphere intersection code.

[EDIT]: I added the "f" suffix to those values as you suggested, but got no performance improvement.

Quote:
Original post by jcabeleira
I think you're mixing this up with C++, which takes "1.0" as a double and "1.0f" as a float. As far as I know, all shader languages treat a real number without the "f" suffix as a float, not a double.

But even if what you're saying were true, those calculations are performed once per pixel, which is negligible compared to the 64 iterations of the ray-sphere intersection code.

[EDIT]: I added the "f" suffix to those values as you suggested, but got no performance improvement.


I wasn't sure about the way GLSL handles it, but there are occasional awkward cases where optimizers mess things up. Usually, with constants for the loop start and end, everything should compile down to one simple shader with no branching at all.
You could maybe use ATI's RenderMonkey to check your GLSL code. (I'm not sure it runs on NVIDIA cards, and NVIDIA's FX Composer didn't support GLSL last time I checked, but it may not be too hard to convert your GLSL shader to HLSL for FX Composer; there you could inspect the shader assembly output, including a hardware-specific static performance analysis.)

Sorry for not being much help with that unrolling thing, but I'm pretty sure it's not the dynamic branching that hurts you; it's rather the usage of variables, either because of too many temporaries or because of indexing.
One old "trick" was to move those indexable things into textures. Sampling floats isn't that fast either, but it won't make the framerate drop from 30 to 2.

RenderMonkey does support NVIDIA cards.
FX Composer supports ATI cards, but it does not support GLSL, so it cannot be used here.
To disassemble your shader, use a utility called NVemulate (search for it on Google). Run it, select "Write shader assembly", then check your program's working directory; the disassembly should all be there. It's in the NVvp/NVfp assembly language.

Quote:
Original post by Momoko_Fan
RenderMonkey does support NVIDIA cards.
FX Composer supports ATI cards, but it does not support GLSL, so it cannot be used here.
To disassemble your shader, use a utility called NVemulate (search for it on Google). Run it, select "Write shader assembly", then check your program's working directory; the disassembly should all be there. It's in the NVvp/NVfp assembly language.


I took your advice and looked at the disassembled shader.
There's nothing particularly surprising in it. As expected, the shader performs the loops instead of unrolling them; it contains a couple of nested REP/ENDREP instructions that do it.
The only thing that looked a little weird was that the shader's variables are all initialized with MOVs. Since I have 32 rays declared and 10 spheres, that results in a lot of moves.

Now here is the funny part:
Of the 32 declared rays I was only using 8, so I removed the unnecessary rays from the array declaration, and the frame rate rose from 2 fps to 4 fps. When I checked the disassembled code, I realized the compiler had decided to unroll the outer loop and eliminate the array of rays, which was no longer necessary. The unrolled outer loop explains the odd increase in frame rate.
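
In other words, the change was just this (the actual ray values are elided here):

// before: vec3 rays[32]; with only 8 entries ever read, every element
// was MOV-initialized and the outer loop stayed rolled
// after: trimming the declaration let the compiler unroll the outer
// loop and eliminate the array entirely
vec3 rays[8];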

Wow! I can't believe what my eyes are seeing:
I was playing with NVemulate and set the GLSL compilation profile to force NV40 compatibility. Since NV40 has no support for dynamic branching (I suppose), the compiler had to unroll all the loops.
This way, my shader runs at 30 fps instead of 2 fps!!!

So, now that it's proved that the slowness comes from the loops and that unrolling them solves the issue, I only need a way to force the loop unrolling in code. But let me guess, there's no way to do it, right?
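
One vendor-specific option worth trying: NVIDIA's GLSL compiler accepts a pragma that requests unrolling, though whether a given driver honors it is another question:

// NVIDIA-specific compiler hint; other vendors' compilers will simply
// ignore or reject it
#pragma optionNV(unroll all)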

If you're using the latest drivers, I would suggest rolling back to previous drivers and seeing if it makes a difference. Your problem is really starting to look like a driver problem to me.

Other than that, maybe you could try storing your data in 1D textures instead of an array of constants. I've seen strange behavior when accessing an array of constant uniforms in the past, although that would be surprising on a GTX 260.

Then your last hope would be to make a minimal program that reproduces the problem and send it to NVIDIA, and hope they take a look at it.

Y.

Quote:
Original post by jcabeleira
Since NV40 has no support for dynamic branching (I suppose), the compiler had to unroll all the loops.

As a side note, I'm pretty sure PS 3.0 has full branching support...
EDIT: Anyway, this is just ugly.

