GL_ARB_separate_shader_objects Performance?


Hi,

As soon as I learned about GL_ARB_separate_shader_objects, I expected huge performance benefits: with it, OpenGL would look and feel more like DirectX 9+, so my cross-API code could stay similar while maintaining high performance.

To my surprise, I implemented GL_ARB_separate_shader_objects only to find that my performance is halved and my GPU usage dropped from ~95% to ~45%. So basically, a monolithic program is twice as fast as separate programs. This is on an AMD HD 7850 under Windows 8 with an OpenGL 4.2 core profile.
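Roughly, the two paths I'm comparing look like this (a minimal sketch, not my exact code; vsSrc and fsSrc stand in for the GLSL source strings):

// Monolithic path: both stages attached to one program object and linked together.
GLuint prog = glCreateProgram();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSrc, NULL);
glCompileShader(vs);
glAttachShader(prog, vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSrc, NULL);
glCompileShader(fs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);

// Separate path (GL_ARB_separate_shader_objects): each stage is its own
// single-stage program, mixed and matched through a program pipeline object.
GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vsSrc);
GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSrc);
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProg);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
glUseProgram(0);                  // a bound program would override the pipeline
glBindProgramPipeline(pipeline);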

I originally imagined that this extension was created to boost performance by separating constant buffers and shader stages, but it seems it may have been created for people who want to port DirectX shaders more directly, with little regard for the performance cost.

So my question is: if you have implemented this feature in a reasonable scene, what performance difference do you see compared to monolithic programs?

Relative Games - My apps


Updating my post because I have been wondering for a long time how Nvidia fares with separate shader objects.

Turns out, not that great. On a GTX 650 with the latest 335 drivers I get 290 FPS with SSO on and 308 FPS without, so there is still a measurable performance penalty. And I'm using one pipeline object per shader pair.
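By "a pipeline object per shader pair" I mean something along these lines (a rough sketch, not my exact code; the cache key and names are illustrative):

#include <cstdint>
#include <unordered_map>

// One pipeline object cached per (vertex program, fragment program) pair,
// so stages are not re-attached to a shared pipeline on every draw call.
static std::unordered_map<uint64_t, GLuint> g_pipelineCache;

GLuint GetPipeline(GLuint vsProg, GLuint fsProg)
{
    uint64_t key = ((uint64_t)vsProg << 32) | fsProg;
    auto it = g_pipelineCache.find(key);
    if (it != g_pipelineCache.end())
        return it->second;

    GLuint pipeline;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
    g_pipelineCache[key] = pipeline;
    return pipeline;
}

// At draw time: glBindProgramPipeline(GetPipeline(vsProg, fsProg));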

And I'm also seeing some weird behavior with objects that don't use textures.

Relative Games - My apps

That's stunning. I've always abstained from using that extension (without even trying it once!) because I had a weird "feeling" that it would probably run much slower, since the shader compiler is necessarily unable to optimize as aggressively as it otherwise could.

I remember a statement from an nVidia guy years ago that basically said, "Yeah, but not much of that is happening anyway, and the hardware works in a mix-and-match way in the sense of ARB_separate_shader_objects; it has been working like that for many years."

Now you've done back-to-back comparisons on the two major vendors and seen that, while nVidia is at least in the same ballpark, both major IHVs are noticeably slower with separate shader objects. Bummer.

One should not forget that getting 50% of 300fps is still 150fps, which is way good enough (and many people will argue that practically, with VSync, there is absolutely no difference).

But of course, utilizing only 50% of the GPU means that a customer who just meets your minimum hardware requirements could as well have spent half as much money on hardware... or, with the hardware they have, they could run your game with 2x or 4x supersampling at the same frame rate.

When using this, make absolutely sure that your vertex shader outputs and fragment shader inputs match perfectly: the same variables, same semantics, same order, etc. Otherwise the driver will have to patch the fragment program to fetch the correct interpolators.
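A rough sketch of what "match perfectly" can look like in GLSL (names and locations are illustrative): give each interpolator an explicit location and declare it identically in both stages, so the driver has nothing to patch.

// Vertex shader: explicit locations on the outputs that feed the fragment stage.
const char* vsSrc = R"(
#version 420 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;
layout(location = 0) out vec2 vTexCoord;   // interpolator 0
out gl_PerVertex { vec4 gl_Position; };    // redeclared; some GLSL/driver combinations require this for separable programs
void main() {
    vTexCoord   = inTexCoord;
    gl_Position = vec4(inPosition, 1.0);
}
)";

// Fragment shader: the input mirrors the vertex output exactly
// (same location, same type, same name).
const char* fsSrc = R"(
#version 420 core
layout(location = 0) in vec2 vTexCoord;
layout(location = 0) out vec4 outColor;
void main() {
    outColor = vec4(vTexCoord, 0.0, 1.0);
}
)";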

This topic is closed to new replies.
