cippyboy

OpenGL GL_ARB_separate_shader_objects Performance?


Recommended Posts

Hi,

As soon as I learned about GL_ARB_separate_shader_objects, I imagined I would get huge performance benefits, because with it OpenGL would look and feel more like DirectX 9+ and my cross-API code could stay similar while maintaining high performance.

To my surprise, I implemented GL_ARB_separate_shader_objects only to find that my performance is halved and my GPU usage dropped from ~95% to ~45%. So basically, a monolithic program is twice as fast as separate programs. This is on an AMD HD 7850 under Windows 8 with an OpenGL 4.2 core profile context.

I originally imagined that this extension was created to boost performance by separating constant buffers and shader stages, but it seems it may have been created for people who want to port DirectX shaders more directly, regardless of any performance hit.

So my question is: if you have implemented this feature in a reasonable scene, what is your performance difference compared to monolithic programs?
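
For context, here is roughly how the two paths compare (a minimal sketch with placeholder shader sources, not my actual engine code, and it assumes a GL 4.2 core context with a loader such as GLEW or GLAD): the monolithic path links the vertex and fragment shaders into a single program, while the SSO path builds one separable program per stage and combines them through a program pipeline object.

```cpp
// Monolithic path: vertex + fragment shader linked into one program.
GLuint LinkMonolithic(const char* vsSrc, const char* fsSrc)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);    // the linker sees both stages at once and can optimize across them
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;            // bound with glUseProgram(prog)
}

// Separate path: one separable program per stage, mixed via a pipeline object.
GLuint BuildPipeline(const char* vsSrc, const char* fsSrc)
{
    GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSrc);
    GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSrc);

    GLuint pipeline = 0;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vsProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
    return pipeline;        // bound with glBindProgramPipeline(pipeline)
}
```

(Note that a program bound with glUseProgram overrides the pipeline, so glUseProgram(0) has to be current for the pipeline binding to take effect.)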


Updating my post because I have been wondering for a long time how Nvidia handles separate shader objects.

Turns out, not that great. I have a GTX 650 running the latest 335 drivers and get 290 FPS with SSO on and 308 FPS without, so there is still a noticeable performance penalty. And I'm using one pipeline object per shader pair.
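
By "a pipeline object per shader pair" I mean something like the small cache below (a rough sketch, and the names are made up): each vertex/fragment program combination gets exactly one pipeline object, created the first time the pair is used and reused on every draw afterwards, so no pipeline objects are created at draw time.

```cpp
#include <map>
#include <utility>

// One pipeline object per (vertex program, fragment program) pair,
// created lazily and reused afterwards.
static std::map<std::pair<GLuint, GLuint>, GLuint> sPipelineCache;

GLuint GetPipeline(GLuint vsProg, GLuint fsProg)
{
    const auto key = std::make_pair(vsProg, fsProg);
    const auto it  = sPipelineCache.find(key);
    if (it != sPipelineCache.end())
        return it->second;                 // already built, nothing to do at draw time

    GLuint pipeline = 0;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vsProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
    sPipelineCache[key] = pipeline;
    return pipeline;
}

// Per draw: glBindProgramPipeline(GetPipeline(vsProg, fsProg));
```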

 

And I'm also seeing some weird behavior with objects that don't use textures.


That's stunning. I've always abstained from using that extension (without even trying it once!) because I had the weird "feeling" that it would probably run much slower, since the shader compiler is necessarily not able to optimize as aggressively as it could otherwise.

 

I remember a statement from an nVidia guy years ago that basically said, "Yeah, but not much of that is happening anyway, and the hardware works in a mix-and-match way in the sense of ARB_separate_shader_objects; it has been working like that for many years."

 

Now you've done back-to-back comparisons on two major vendors and found that, while nVidia is at least in the same ballpark, both major IHVs are noticeably slower with separate shader objects. Bummer.

 

One should not forget that 50% of 300 fps is still 150 fps, which is easily good enough (and many people will argue that in practice, with VSync, there is absolutely no difference).

But of course, only utilizing 50% of the GPU means the customer who has to meet your minimum hardware requirements could just as well have spent half as much money on hardware... or they could instead run your game with 2x or 4x supersampling at the same frame rate, on the hardware they have.


When using this, make absolutely sure that your vertex shader outputs and your fragment shader inputs match perfectly: the same variables, the same semantics, the same order, and so on. Otherwise the driver will have to patch the fragment program to fetch the correct interpolators.
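
To illustrate (the variable names here are just examples, not taken from anyone's actual code): with separable programs, one robust way to keep the interfaces matched is to give every interstage variable an explicit location qualifier, so both sides rendezvous by location rather than by name and declaration order, and to redeclare gl_PerVertex in a separable vertex shader that writes gl_Position.

```cpp
// GLSL sources shown as C++ string literals; both assume #version 420 core.
static const char* kExampleVS = R"(#version 420 core
out gl_PerVertex { vec4 gl_Position; };    // redeclared for separable programs

layout(location = 0) in  vec3 inPosition;

layout(location = 0) out vec2 vTexCoord;   // interstage outputs at explicit locations...
layout(location = 1) out vec3 vNormal;

void main()
{
    vTexCoord   = inPosition.xy;            // placeholder values
    vNormal     = vec3(0.0, 0.0, 1.0);
    gl_Position = vec4(inPosition, 1.0);
}
)";

static const char* kExampleFS = R"(#version 420 core
layout(location = 0) in vec2 vTexCoord;    // ...matched by the same locations and types here
layout(location = 1) in vec3 vNormal;

layout(location = 0) out vec4 outColor;

void main()
{
    outColor = vec4(vNormal * 0.5 + 0.5, 1.0);
}
)";
```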

