Multiple shaders
How does using multiple shaders work? Say you have all of these different things you want to do to a model - light from the sun shining on it, colored light from a spell, normal mapping, bump mapping, etc. Do you have to create one shader to do all of this at once? Or is there a way to incorporate many shaders to work on one model? Would you have to draw the model several times to 'layer' the shader effects?
There is no single correct way of doing what you want; balancing shader effects against performance is a fine art - often requiring much trial and error and general experience [smile]
Quote:Original post by TWild
Do you have to create one shader to do all of this at once?

Yes, you can do this - it's often referred to as an "ubershader" (aka "super-shader"). It works well, but for SM2 you can easily exceed the instruction limits (64 arithmetic + 32 texture instructions for a pixel shader); for SM3 it's easier, but performance can still suffer - dynamic flow control varies greatly between IHVs (ATI's X1k series seems to be much better than NV's 7x00 series).
Quote:Original post by TWild
Or is there a way to incorporate many shaders to work on one model?

You could look into the "fragment linker" technology for D3D9, but bear in mind it's being replaced (or has been) for D3D10. Other tricks using C-style #define and #include as well as C++-style uniform variables can allow you to write a single "ubershader" and compile it into many highly-specific individual shaders.
Quote:Original post by TWild
Would you have to draw the model several times to 'layer' the shader effect?

You can do this as well - you'll incur multiple transforms of the geometry, though (bad for expensive vertex shaders). It's often referred to as "multi-pass rendering".
With reference to my opening statement - there is no one correct answer, but more specifically you might want to use several of the aforementioned techniques as well as hybrids of those... All depends what you're trying to achieve and on what level of hardware!
hth
Jack
Quote:Original post by jollyjeffers
Quote:Original post by TWild
Do you have to create one shader to do all of this at once?

Yes, you can do this - it's often referred to as an "ubershader" (aka "super-shader"). It works well, but for SM2 you can easily exceed the instruction limits (64 arithmetic + 32 texture instructions for a pixel shader); for SM3 it's easier, but performance can still suffer - dynamic flow control varies greatly between IHVs (ATI's X1k series seems to be much better than NV's 7x00 series).
I use super-shaders in my current project on SM2 hardware, with a good bit of success. For me, instruction limits have not been an issue - I can do 5 per-vertex directional lights plus 5 per-vertex point lights per pass with no problems (3 of each for per-pixel). For reference, these are the possible 'components' you can choose from:
- Emissive color (ambient)
- Diffuse color
- Specular color
- Specular power
- Diffuse map
- Normal map
- Specular map
- Ambient occlusion map
- Light map
- Any combination of lights
The hardest part for me was keeping the code clean. With so many #ifdef's and #define's, it gets messy fast.
I used many small snippets of shader asm and strcat'd them together. Not quite a real strcat, as I had special characters to refer to different constant offsets. For example %L1% would point to the second constant of whichever light was being strcat'd.
The various snippets could transform the vertex position, normal, and/or tangent by a world matrix, or by 1, 2, 3, or 4 bone matrices. Custom position code could be injected for wind sway, etc. Next, height fog and depth fog were computed, followed by texcoord generation for reflections, texture coordinate transforms, lighting (directional diffuse, directional diffuse/specular, point diffuse, point diffuse/specular, ambient, emissive, two-sided), and falloff values for seamless LOD changes.
It worked well. Because of limits in the fragment linker, I anticipate writing a version to stitch HLSL snippets together in the near future.