Combining Shaders

So I've seen tutorials for individual shader effects: here's normal mapping, here's per-pixel lighting, and so on.

Now I'm working on having my game actually support all the different effects. I've done some reading about uber shaders and I'm still not 100% clear on everything. Also I'm going to be using GLSL.

I'm having my game use a simple material system similar to Doom 3, where I can specify that a material has a certain diffuse map, specular map, normal map, glow map, and so on... I want to be able to turn these on and off on a per-material basis. There would also be additional parameters like blend mode, depth mask, billboarding, things like that...

On top of that, I also need to have skeletal animations and particle effects working with shaders on a per-object basis, meaning I need to run additional vertex shader work only if the object being drawn needs it.

So basically I might create a material for a brick wall using a diffuse map, normal map, and specular map, and use it to texture some wall.
Then I might create another material for a shiny metal with a specular map, some diffuse map, and a glow map for some glowy bits.
Does this require different permutations of shaders to be compiled?
Let's say I also decide to use the brick material to texture an animated skeletal model. Does that same brick material now need two shaders: one that renders the brick normally, and one combined with the vertex shader used when rendering the model?
Would this be where an uber shader comes into play, with ifdefs enabling/disabling features? I can't find any actual concrete tutorials on GLSL and uber shaders, so I can only guess.

I'm also a bit confused about texture units. Let's say I only have 4 texture units available on the card, and I want to pass diffuse, normal, specular, height, and glow maps. That's 5 textures. Does that mean one shader can't do it? Would I have to have multiple shaders render the image in multiple passes and then overlay the results to get the final image?

Also let's say I added Screen Space Ambient Occlusion. As I understand it, something like SSAO requires a separate pass where you only render the depth data. Does this mean effects like SSAO that need separate passes have separate shaders per pass, per material? So I might potentially have 5 fragment shaders for a brick wall if I have to do 5 passes of some sort?



Heh, also on a side note I'm trying to figure out how I will support specular maps. I'm thinking RGB will be the specular color and the alpha component will be the gloss. Is this a good idea? This way I combine the components into one texture instead of having one texture for specular and one for gloss. Is there ever a time specular maps actually need alpha for transparency?

Same with glow maps. Does it make sense for a glow map to have an alpha channel for transparency? As far as I understand, glow works like additive blending: black is no glow, which would act like transparency.

I'm having my game use a simple material system similar to Doom 3, where I can specify that a material has a certain diffuse map, specular map, normal map, glow map, and so on... I want to be able to turn these on and off on a per-material basis. There would also be additional parameters like blend mode, depth mask, billboarding, things like that...
[/quote]

You just pass the extra data into the shader as, say, texcoord6 or position3, or whatever slot you have free.


On top of that, I also need to have skeletal animations and particle effects working with shaders on a per-object basis, meaning I need to run additional vertex shader work only if the object being drawn needs it.
[/quote]

If you can wrap the extra vertex shader work in an if clause, or find some other way to have it no-op when you don't need it, that's best. If you mix-n-match different vertex shaders you're just going to end up with separate draw calls, which is usually best avoided.
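For example, in GLSL you might guard the skinning with a uniform flag so the same vertex shader works for both static and animated meshes. This is just a sketch; the uniform and attribute names (u_skinned, u_bones, a_boneIndices, a_boneWeights) are made up, and the bone array size is arbitrary:

uniform bool u_skinned;
uniform mat4 u_modelViewProjection;
uniform mat4 u_bones[32]; // bone palette; 32 is an arbitrary example size

attribute vec4 a_position;
attribute vec4 a_boneIndices; // up to 4 bone indices per vertex
attribute vec4 a_boneWeights; // matching weights, summing to 1

void main()
{
    vec4 position = a_position;
    if (u_skinned)
    {
        // Blend up to four bone transforms; when u_skinned is false
        // this whole block no-ops and the static path is used.
        position = (u_bones[int(a_boneIndices.x)] * a_position) * a_boneWeights.x
                 + (u_bones[int(a_boneIndices.y)] * a_position) * a_boneWeights.y
                 + (u_bones[int(a_boneIndices.z)] * a_position) * a_boneWeights.z
                 + (u_bones[int(a_boneIndices.w)] * a_position) * a_boneWeights.w;
    }
    gl_Position = u_modelViewProjection * position;
}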


So basically I might create a material for a brick wall using a diffuse map, normal map, and specular map, and use it to texture some wall.
Then I might create another material for a shiny metal with a specular map, some diffuse map, and a glow map for some glowy bits.
Does this require different permutations of shaders to be compiled?
[/quote]

Not necessarily. For monolithic shaders the idea is usually something like this (in HLSL pseudocode):


float4 pixelShader()
{
    // Zero-initialize so a disabled feature contributes nothing.
    float4 glowColor = float4(0, 0, 0, 0);
    if (GLOW)
    {
        glowColor = ...; // sample/compute the glow contribution
    }

    float4 diffuseColor = float4(0, 0, 0, 0);
    if (DIFFUSE)
    {
        diffuseColor = ...; // sample/compute the diffuse contribution
    }

    float4 finalColor = glowColor + diffuseColor;

    return finalColor;
}


You don't automatically have to use ifs though. You can do something like this instead:

float4 pixelShader()
{
    float4 glowColor = ...;
    float4 diffuseColor = ...;
    float4 finalColor = glowColor * GLOW + diffuseColor * DIFFUSE;

    return finalColor;
}


That is, always compute everything as if it's turned on, and just multiply each component by 0 or 1 at the very end to turn that feature on or off.
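Since you're using GLSL, the same trick looks something like this (a sketch; the uniform names are made up, and each flag would be set to 0.0 or 1.0 from the material):

uniform sampler2D u_diffuseMap;
uniform sampler2D u_glowMap;
uniform float u_diffuseEnabled; // 0.0 or 1.0
uniform float u_glowEnabled;    // 0.0 or 1.0

varying vec2 v_texCoord;

void main()
{
    // Always sample everything; the flags zero out disabled features.
    vec4 diffuseColor = texture2D(u_diffuseMap, v_texCoord) * u_diffuseEnabled;
    vec4 glowColor = texture2D(u_glowMap, v_texCoord) * u_glowEnabled;
    gl_FragColor = diffuseColor + glowColor;
}

Note that disabled features still pay their full cost in texture samples and instructions; you're trading that for a single shader and fewer state changes.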


Let's say I also decide to use the brick material to texture an animated skeletal model. Does that same brick material now need two shaders: one that renders the brick normally, and one combined with the vertex shader used when rendering the model?
[/quote]

Not necessarily. Ideally your shader could handle both cases, but you might not have enough instruction count available for a shader that big.


Would this be where an uber shader comes into play, with ifdefs enabling/disabling features? I can't find any actual concrete tutorials on GLSL and uber shaders, so I can only guess.
[/quote]

You can use ifdefs, but then each combination of features is its own separate shader, and therefore a separate draw call, which defeats part of the purpose of an uber shader. Each variant does use less instruction count, though.


I'm also a bit confused about texture units. Let's say I only have 4 texture units available on the card, and I want to pass diffuse, normal, specular, height, and glow maps. That's 5 textures. Does that mean one shader can't do it? Would I have to have multiple shaders render the image in multiple passes and then overlay the results to get the final image?
[/quote]

Yeah, you can only address up to 4 textures in a single pass/draw call if you only have 4 texture units on the card. You could either split your shader into a few different passes (with one draw call per pass), or pack your 5 textures into 4 or fewer textures. E.g., maybe your diffuse texture is 4 times larger than the others; then just combine the 4 smaller textures into a single texture, and you are only using 2 texture units.
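A sketch of that atlas idea in GLSL, assuming the four smaller maps are packed into the quadrants of one texture (the layout and names are made up; note that packing like this breaks GL_REPEAT tiling, since the texcoords have to stay inside each quadrant):

uniform sampler2D u_diffuseMap; // full-size diffuse: texture unit 0
uniform sampler2D u_packedMaps; // normal/specular/height/glow atlas: unit 1

varying vec2 v_texCoord; // assumed to be in [0, 1]

void main()
{
    vec2 uv = v_texCoord * 0.5; // each quadrant is half-size on each axis
    vec4 normalSample   = texture2D(u_packedMaps, uv);                  // bottom-left
    vec4 specularSample = texture2D(u_packedMaps, uv + vec2(0.5, 0.0)); // bottom-right
    vec4 heightSample   = texture2D(u_packedMaps, uv + vec2(0.0, 0.5)); // top-left
    vec4 glowSample     = texture2D(u_packedMaps, uv + vec2(0.5, 0.5)); // top-right

    vec4 diffuseColor = texture2D(u_diffuseMap, v_texCoord);
    // ...lighting math using the unpacked samples goes here...
    gl_FragColor = diffuseColor + glowSample;
}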


Also let's say I added Screen Space Ambient Occlusion. As I understand it, something like SSAO requires a separate pass where you only render the depth data. Does this mean effects like SSAO that need separate passes have separate shaders per pass, per material? So I might potentially have 5 fragment shaders for a brick wall if I have to do 5 passes of some sort?
[/quote]

Each pass can have its own vertex and pixel (fragment) shaders. But each pass is another draw call, so you should combine passes if at all possible. Whether you need a separate pass for SSAO or not depends on how clever you are with it. I haven't implemented SSAO before, so I don't know exactly what data it needs or why it might need a separate pass. But oftentimes you can steal the alpha channel of your render target to store extra data that would otherwise require another pass.
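For example, if your color pass doesn't need destination alpha for blending, you could write normalized depth into it and let a later screen-space pass read it back (a GLSL sketch; the names are made up):

uniform float u_farPlaneDistance;

varying vec3 v_color;      // whatever color the pass normally outputs
varying float v_viewDepth; // view-space depth passed down from the vertex shader

void main()
{
    // RGB is the normal color output; alpha smuggles 0..1 depth along.
    gl_FragColor = vec4(v_color, v_viewDepth / u_farPlaneDistance);
}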


Heh, also on a side note I'm trying to figure out how I will support specular maps. I'm thinking RGB will be the specular color and the alpha component will be the gloss. Is this a good idea? This way I combine the components into one texture instead of having one texture for specular and one for gloss. Is there ever a time specular maps actually need alpha for transparency?
[/quote]

That's good thinking, yeah. Whether you need the alpha on specular or not depends on what you're doing in your shader. Does your shader even look at the alpha? If not, you don't need it.
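With that packing, reading both in one fetch looks like this (a GLSL sketch; the names and the shininess remapping are illustrative):

uniform sampler2D u_specularMap; // RGB = specular color, A = gloss

varying vec2 v_texCoord;
varying vec3 v_normal;
varying vec3 v_halfVector;

void main()
{
    vec4 specSample = texture2D(u_specularMap, v_texCoord);

    // Remap gloss from 0..1 to a shininess exponent; 1..128 is an arbitrary range.
    float shininess = 1.0 + specSample.a * 127.0;
    float intensity = pow(max(dot(normalize(v_normal), normalize(v_halfVector)), 0.0), shininess);

    gl_FragColor = vec4(specSample.rgb * intensity, 1.0);
}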

The only tricky bit is that you'll need a tool-side step that combines them. Don't expect artists to author the textures like that for you: it's annoying for artists, and it means you can't easily undo it if you decide specular maps need alpha for something else in the future.


Same with glow maps. Does it make sense for a glow map to have an alpha channel for transparency? As far as I understand, glow works like additive blending: black is no glow, which would act like transparency.
[/quote]

Again, look at your shader.

While there are some commonly accepted norms for how glow, diffuse, etc. work, keep in mind that they aren't set in stone. With modern shaders you can invent any wacky system you want, as long as it works for artists, and you can spit out a final RGB(A) from your fragment shader. Don't feel hamstrung trying to implement someone else's graphics model; you have huge freedom with programmable shaders.

Generally, I'd start from the specific graphical effects you want and work backwards: from the shader, back through the toolchain, to what artists actually produce.
Darwinbots - Artificial life simulation
I've been researching this too, and you're better off with the uber shader approach. Instead of using if-else statements, use #ifdef/#else/#endif preprocessor directives in your shader. This way you can write just one shader with all the effects you want to support, but recompile that shader different ways depending on what you define ahead of time.

For example:

#define LIGHTING

...


#ifdef LIGHTING
// lighting code goes here

#ifdef SPECULAR_LIGHTING
// process specular lighting
#endif
#else
// non-lighting code goes here
#endif


Notice that all the code inside the LIGHTING ifdef block will be compiled, except for the SPECULAR_LIGHTING block, because we did not define that. What you can do is write your shader with the ifdef blocks but WITHOUT any #defines at all. Then you can add flags to your shader loader class that insert the proper #defines at the top of the shader source to compile it the way you want.
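In OpenGL this works nicely because glShaderSource accepts an array of strings, so the loader can pass its #define block and the unmodified shader file as two separate strings. Conceptually, a "lit + specular" variant then compiles as if the source were the following (just the #ifdef skeleton shown; if the shader has a #version directive, the defines have to land after it):

// --- inserted by the shader loader from the material's flags ---
#define LIGHTING
#define SPECULAR_LIGHTING
// --- original shader source follows, unchanged ---

#ifdef LIGHTING
// lighting code goes here

#ifdef SPECULAR_LIGHTING
// process specular lighting
#endif
#else
// non-lighting code goes here
#endif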

If you haven't checked this out, download the source to dEngine from this website. Look at the uber shaders Fabien wrote for that engine and how they're used in his code. I've used this for reference, and I'm convinced that this is the way I need to go.
@Vincent: but then each combination of features is its own separate shader, and so ends up being a different draw call. Whether that matters or not depends on what you're doing, of course, but it's not a scalable development paradigm. I'd only do it that way if/when your uber shader gets so big you hit the instruction count limit.
Darwinbots - Artificial life simulation
So I got a nice shader system working for my deferred shading engine. The nice thing about deferred shading is that I can actually keep things fairly simple.

I basically just have a bitmask of shader features. I have a resource manager for various things like textures, materials, and sounds that are externally configured, and the resources can be retrieved by name. When it comes to shaders, they are retrieved by a bitmask of features. So I have a shader manager and a shader program manager; that way compiled shaders can be reused by multiple shader programs.

When it comes to deferred shading, I have a set of features in the G-buffer stage, then a set of features in the lighting stage, then a set of features for the post-processing stage, which I haven't done yet.

For the G-buffer stage I just have diffuse, normal, specular, emissive (coming soon), and height (coming soon, hoping to have parallax mapping).

For the lighting stage I have things like point light, directional, spot, and projected texture (coming up after I do more research).

No idea what goes into the post-processing stage yet, but I'll get to that later.

Point is, my shader manager system works very nicely with feature bitmasks and compiles a new shader only if one with that set of features doesn't exist yet. E.g., most shaders need normal, specular, and diffuse; some don't need diffuse, so a new shader with only normal and specular has to be made. Then different attributes like textures, colors, and light positions can be passed in on a per-object basis right before rendering that object in whatever stage of the renderer I am at.
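For the G-buffer stage, a permutation with diffuse + normal + specular ends up looking roughly like this (a GLSL 1.20-style sketch using gl_FragData for multiple render targets; the names are made up, and the diffuse-less variant would simply omit that block):

uniform sampler2D u_diffuseMap;
uniform sampler2D u_specularMap;

varying vec2 v_texCoord;
varying vec3 v_normal; // view-space normal

void main()
{
    gl_FragData[0] = texture2D(u_diffuseMap, v_texCoord);        // albedo
    gl_FragData[1] = vec4(normalize(v_normal) * 0.5 + 0.5, 0.0); // packed normal
    gl_FragData[2] = texture2D(u_specularMap, v_texCoord);       // specular
}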

I'm thinking of sorting things by material in the different stages so I switch shaders and textures less often and can render a massive number of triangles per material type.

Later I think I will need forward rendering for the transparent objects. Haven't thought much about that yet but that'll probably need different shaders...
I write shaders at a lower level, which enables me to chain them together really easily. It's probably possible to do that in HLSL too, but I don't know how.

So when I create a material I can add in shaders in any order I like and give each one its own blend mode.

i.e.

var defaultMaterial:MaterialBase = new MaterialBase();
defaultMaterial.addTechnique(new TextureTechnique(texture));
defaultMaterial.addTechnique(new DiffuseTechnique(0x000000), Blendmode.MULTIPLY);
defaultMaterial.addTechnique(new SpecularTechnique(0xFFFFFF), Blendmode.ADD);
defaultMaterial.addTechnique(new EnvironmentTechnique(envTexture), Blendmode.MULTIPLY);


So each technique sends its output (blended with the input if there is one) to the next.
Eventually I might have a node-based material system like in Unreal. I think I understand how they would do something like that now. Each node could be a function in the shader, and the functions are put together in the right order. A shader is compiled per material, so each crazy material created with the node-based material editor creates its own massive shader program.
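E.g., the generated fragment shader might just be a pile of small functions, one per node, with a main() that wires them up in graph order (a GLSL sketch with made-up node names):

uniform sampler2D u_texture0;

varying vec2 v_texCoord;

// One function per node in the material graph.
vec4 node_texture(vec2 uv) { return texture2D(u_texture0, uv); }
vec4 node_constant() { return vec4(0.5, 0.5, 0.5, 1.0); }
vec4 node_multiply(vec4 a, vec4 b) { return a * b; }

void main()
{
    // Generated wiring: texture * constant -> output.
    gl_FragColor = node_multiply(node_texture(v_texCoord), node_constant());
}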

Of course it'd be nice to have about 95% of the materials be simple common stuff and only have a small percentage be the crazy stuff that requires custom shaders to be compiled. Most would be the standard specular, diffuse, normal, glow stuff...
