nickyc95

OpenGL Shader Permutations



Hi there,

 

So I have been building a material / shader system which will allow me to compile permutations depending on a shader's options.

 

When you create a material, you provide it with:

  • A shader path
  • A list of defines that it uses
  • A list of passes
  • Texture paths
  • Material-specific uniforms
  • State (e.g. backface culling)
  • Custom flags

 

Now when rendering, I build a bitmask from the options defined in the material and use it to look up the permutation that supports the material's features.
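Roughly something like this (the option names and the compilePermutation helper are just placeholders for illustration):

    // Each material option maps to one bit; the combined mask picks the
    // pre-compiled program from a small cache.
    // Assumes an OpenGL loader header (glad/GLEW) is included for GLuint.
    #include <cstdint>
    #include <unordered_map>

    enum ShaderOption : uint32_t {
        OPT_NORMAL_MAP = 1u << 0,
        OPT_ALPHA_TEST = 1u << 1,
        OPT_SKINNING   = 1u << 2,
    };

    struct ShaderPermutations {
        std::unordered_map<uint32_t, GLuint> programs; // bitmask -> linked program

        GLuint get(uint32_t mask) {
            auto it = programs.find(mask);
            if (it != programs.end())
                return it->second;
            GLuint program = compilePermutation(mask); // adds one #define per set bit
            programs[mask] = program;
            return program;
        }

        GLuint compilePermutation(uint32_t mask); // placeholder for the actual compile step
    };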

 

This all works pretty well, but I am having some issues with uniforms (where to add them, etc.)...

 

 

How do you guys handle uniforms when supporting multiple permutations of a shader?

 

Do you store the uniform locations for each permutation? Or some other method I haven't thought of?
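For example, a location cache per compiled permutation would look something like this (just a sketch; the struct and names are hypothetical):

    // Sketch: each permutation caches its own locations, because the same uniform
    // name can get a different location (or be optimized away) in every program.
    // Assumes an OpenGL loader header is included.
    #include <string>
    #include <unordered_map>

    struct PermutationUniforms {
        GLuint program = 0;
        std::unordered_map<std::string, GLint> locations;

        void cache(const char* name) {
            locations[name] = glGetUniformLocation(program, name); // -1 if unused
        }

        void setFloat(const char* name, float value) {
            auto it = locations.find(name);
            if (it != locations.end() && it->second != -1)
                glUniform1f(it->second, value); // program must be bound via glUseProgram
        }
    };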

 

 

 

(Using OpenGL 3.3+ as the basis, but I will be adding support for 4.5 in the future to get access to compute, etc.)

 

 

Also if anyone knows of a good implementation that I could look at (for additional ideas), please let me know :)

 

 

Thanks 

Edited by nickyc95


If you care that much about performance then it may be easier to learn descriptor sets in Vulkan, which are closer to what I tried in DirectX. Uploading each individual uniform separately takes a lot of time, but if you pack them together in groups with the correct alignment for the GPU, then not using all of them is fine because it is not a lot of data. If they are not aligned you can get memory corruption, which is why the OpenGL way is said to be easier.


 

Isn't this basically UBOs (uniform buffer objects), which I could use if I moved up to 4.5?

 

The thing that I am trying to find out is: what is the best way to associate these uniforms (whether they are single uniforms or packed UBOs) with a specific instance of a shader / a shader permutation?

 

Thanks 

Share this post


Link to post
Share on other sites

I have not tried OpenGL 4.5 yet but if you go with uniform buffers or descriptor sets then you will probably apply the same principles as DirectX has used for a long time.

Large things like light sources do not have to change too often, so we keep them in a buffer that rarely needs to update.

Then you have data associated with instances in one buffer, and optionally another buffer per material for the most frequently changed settings (there is a rough sketch of this below).

 

* A set of variables that update once per frame/pass.

    Light sources[]

    Camera settings

    Fog color

* A set of variables that update once per instance

    Transform

    Color

* A set of variables that update once per material within the instance (optional)

    Specular

    Gloss

 

If something like bone data takes extra space then additional buffers can be used only when needed.
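As a rough sketch of that grouping with uniform buffers (the block names and binding points are only examples):

    // GLSL side, std140 so the CPU-side struct offsets are predictable:
    //   layout(std140) uniform PerFrame    { mat4 viewProj; vec4 fogColor; };
    //   layout(std140) uniform PerInstance { mat4 model;    vec4 color;    };
    //   layout(std140) uniform PerMaterial { float specular; float gloss;  };

    // C++ side: bind the blocks of every permutation to fixed binding points once,
    // then all permutations can read the same buffers.
    void bindCommonBlocks(GLuint program) {
        const char* blockNames[] = { "PerFrame", "PerInstance", "PerMaterial" };
        for (GLuint binding = 0; binding < 3; ++binding) {
            GLuint index = glGetUniformBlockIndex(program, blockNames[binding]);
            if (index != GL_INVALID_INDEX)
                glUniformBlockBinding(program, index, binding);
        }
    }
    // Attach the buffers whenever their data changes, e.g.:
    //   glBindBufferBase(GL_UNIFORM_BUFFER, 0, perFrameUbo);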

Edited by Dawoodoz



Yeah, I get that, but do you associate a buffer PER SHADER INSTANCE?


Then you would not be able to reuse the same shader with multiple materials, if I understand correctly this time.

 

So how would I get around this?

 

 

Thanks for the input, btw :)


Just ignore that arguments are given to the shaders in OpenGL. That is a poor legacy design, just like the coupling of textures and sampler states that got stuck because old hardware refused to be compatible with DirectX. Upload the per-instance uniforms each time to make sure that no garbage is left over from the previous use, or you can get highly unpredictable behaviour that is very hard to debug because the global state causes flickering colors.
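For example, something along these lines (the names are only placeholders):

    // Set every per-instance uniform right before the draw so nothing is
    // inherited from whatever was rendered last with this program.
    void drawInstance(GLuint program, GLint modelLoc, GLint colorLoc,
                      const float model[16], const float color[4]) {
        glUseProgram(program);
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, model);
        glUniform4fv(colorLoc, 1, color);
        // ...bind the vertex array and textures, then issue the draw call
    }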

 

If you have a lot of data to share between shaders and performance is no problem then you can also upload a texture in advance encoded with general information. I made a text editor by storing character codes in a texture and rendering them using a font atlas.

Generating lots of alternative shaders can be messy if you don't have dynamic shader linking, but if it is only for enabling/disabling normal mapping and such, then consider making your own GLSL precompiler, or simply a function taking boolean feature flags and returning a shader string, since the shader compiler in OpenGL is highly unpredictable across graphics drivers. For example, the token concatenation operator ## is sometimes not available on certain phone models, and integer math operations may sometimes generate random noise.
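A minimal sketch of the flag-based approach (the flag and define names are only examples):

    // Build the #define header from boolean feature flags and prepend it to the
    // shader body, so no driver-specific preprocessor tricks are needed.
    #include <string>

    std::string buildShaderSource(const std::string& body,
                                  bool normalMapping,
                                  bool alphaTest) {
        std::string source = "#version 330 core\n";
        if (normalMapping) source += "#define USE_NORMAL_MAP 1\n";
        if (alphaTest)     source += "#define USE_ALPHA_TEST 1\n";
        return source + body;
    }
    // The shader body then just wraps optional code in #ifdef USE_NORMAL_MAP ... #endif.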

Some engines even have their own shading languages, but that is overkill when you can now use SPIR-V and get consistent offline compilation for Vulkan instead of guessing what works everywhere.


Just ignore that arguments are given to the shaders in OpenGL

 

How can you do this?

 

When you set the uniforms you have to specify a shader program.
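For example (as far as I understand it), a value has to be set on every permutation that uses it; the u_time uniform here is just an example:

    // Uniform values live in the program object, so every permutation needs its
    // own glGetUniformLocation + glUniform call, and the location can differ.
    #include <vector>

    void setTimeOnAll(const std::vector<GLuint>& permutations, float timeSeconds) {
        for (GLuint program : permutations) {
            GLint loc = glGetUniformLocation(program, "u_time");
            if (loc != -1) {
                glUseProgram(program);
                glUniform1f(loc, timeSeconds);
            }
        }
    }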

Share this post


Link to post
Share on other sites

There is not much you can do with an old OpenGL version so you should really consider upgrading for both performance and sanity.

As an ugly and slow workaround, you can wrap the shader in a class and let the class pull what the shader needs from all the data, but the dynamic call overhead from class inheritance will stall branch prediction for out-of-order execution. Class objects will also ruin cache locality and fill cache lines with garbage even if you have an arena allocator. Even a static function call to a getter or setter will usually fail to inline and generate 16 assembly instructions of bloat just to set up all the registers before performing the one instruction you actually wanted. The API overhead from redundant safety checks when not using pipeline objects in Vulkan adds even more wasted time to your render loop.
