DirectX 9.0 shader questions.

Started by grill8 · 7 comments, last by janta 17 years, 6 months ago
Hello,

OK, so I am new to DirectX 9.0 shaders and HLSL and have a few questions. I have read a lot about using them, but I am new to implementing them, so things are not completely in place yet as far as implementation goes.

It is my understanding that effect files can drive fixed-pipeline functionality as well as vertex and pixel shaders, and that shaders can be used to implement fixed-pipeline functionality (or any other functionality you desire). What I am still trying to get ingrained is how to modularize shaders and effect files. For example, my older graphics module has a whole bunch of short functions that alter render states, such as:

EnableAlphaBlending(...);
EnableBiFilter(...);
AddVertexFog(...);
etc.

So, as I understand it, I would need to create shaders and/or effect files to emulate these features. My main question is: how do you activate combinations of features simultaneously using shaders and/or effect files?

As an example, say I have one shader that I use to render object O1, a different shader for object O2, another for O3, and so on. Then suddenly I decide that some levels require motion blur. How do I add motion blur without having to write a new shader for EVERY object, such as:

O1_with_motion_blur
O2_with_motion_blur
O3_with_motion_blur
O4_with_motion_blur
etc.

It seems like a serious waste of code and time to write a new shader for every object whenever a new feature is desired. All of the shaders above are identical to what they were before, except that they now have motion blur; in essence they all reuse the same motion blur code. It would be much easier to keep the original shaders and simply activate the motion blur feature alongside them, without writing a new shader for every object.

As another example, say I sometimes want features from my previous fixed-pipeline graphics module, but in different combinations at different times. One render might use alpha blending and bilinear filtering but NOT vertex fog; the next object might use alpha blending and vertex fog but NOT bilinear filtering; and so on. There are so many possible combinations that this quickly gets out of hand if you have to write an entire shader for every permutation. Do I need to create a different shader for every possible permutation of features? If not, how do I combine such shaders?

What it really should be like is to write each shader once, in one place only, and then blend them together somehow, such as:

EnableShader(shadBiFilter);
EnableShader(shadMotionBlur);
EnableShader(shadVertexFog);

Then, when you render your object(s), all of the active shaders are used simultaneously and, more importantly, each one is implemented only once and in one place.

In summary, I don't know how to combine shader/effect file features. I could write a shader that does vertex fog, or one that does cartoon shading, and many others, but I don't know how to combine them (vertex fog with cartoon shading) without writing an entirely new shader that implements both. This quickly gets out of hand when there are many features (the permutations grow very large very fast) and wastes a lot of code that could be reused.

Sorry if this sounds extremely newbie, but the truth is I am fairly new to shaders and could really use some help.

Oh, also: what is the best way you know of to organize your shaders? My initial thought was a shader manager with a static public interface for acquiring shaders (and setting their parameters, etc.). That way the methods would be globally accessible without cluttering the global data space and without having to pass the manager around by parameter, since it would be used in many places.

Thank you for your help,
Your friend,
Jeremy (grill8)
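P.S. For reference, this is roughly what I mean by doing fixed-pipeline work from an effect file: a pass can set the same render and sampler states my helper functions were setting through the device. This is just a sketch; the state names come from the D3DX9 effect-state set and the values are placeholders:

technique FixedFunctionExample
{
    pass P0
    {
        // No shaders bound: the fixed pipeline does the work.
        VertexShader     = NULL;
        PixelShader      = NULL;

        // Roughly EnableAlphaBlending(...):
        AlphaBlendEnable = TRUE;
        SrcBlend         = SRCALPHA;
        DestBlend        = INVSRCALPHA;

        // Roughly EnableBiFilter(...): bilinear filtering on sampler 0.
        MinFilter[0]     = LINEAR;
        MagFilter[0]     = LINEAR;

        // Roughly AddVertexFog(...): fixed-function vertex fog
        // (fog color/range would be set here too, or from the application).
        FogEnable        = TRUE;
        FogVertexMode    = LINEAR;
    }
}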
I've been wondering about how to approach this problem too. Don't have any answers, but I know that when compiling your shader, you can #include other files, so with a bit of imagination (and carefully named functions), you could swap functionality in and out of your shaders just by #including different files.
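To make that concrete, here's a minimal sketch of the #include idea (file and function names are made up). Every variant of the header defines the same function signature, so the shader body never changes and you pick the behaviour by picking the include:

// Each variant of the header defines the same ApplyFog() signature, e.g.
//   fog_none.fxh   : float3 ApplyFog(float3 c, float f) { return c; }
//   fog_linear.fxh : float3 ApplyFog(float3 c, float f) { return lerp(c, g_FogColor, saturate(f)); }
#include "fog_linear.fxh"   // swap the include to swap the behaviour

float4 MainPS(float3 color : COLOR0, float fogFactor : TEXCOORD0) : COLOR
{
    return float4(ApplyFog(color, fogFactor), 1.0f);
}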
I have a system that combines different chunks of shaders into a complete shader. This is similar to an approach I took for dealing with VU1 code on the PS2 (which is more or less the same as a vertex shader).

You can more or less just have code that patches together text from multiple shader chunks. The primary issue is dealing with how the different "sub-stages" of the shader communicate with each other: if you enable a stage that requires an intermediate output from another stage, the mashed-together shader will fail to compile, and probably not in a way that makes it obvious what went wrong.
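One way to keep the stitched chunks honest about those intermediates (a sketch, with made-up names) is to route everything through a single interpolator struct that the generator owns, so the contract between vertex-side and pixel-side chunks is explicit in one place:

// The generator owns this struct and only emits a member (and its TEXCOORD
// slot) when an enabled vertex chunk writes it and an enabled pixel chunk
// reads it; a mismatch then fails in one obvious place instead of deep
// inside the stitched code.
struct Interpolators
{
    float4 Position  : POSITION;
    float2 UV        : TEXCOORD0;
    float  FogFactor : TEXCOORD1;  // written by the fog vertex chunk
    float3 WorldNrm  : TEXCOORD2;  // written by the lighting vertex chunk
};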
Thank you for your help.

Does anyone have any more information on this? In short, I know how to write a shader from scratch (basically) and how to use shaders with DirectX 9 (basically), but I have not been able to figure out a way to reuse parts of shaders. Say I have 10 features:

Feature1 (fog or something)
Feature2 (Depth of Field or something)
Feature3 (Motion blur or something)
Feature4
Feature5
.
.
.
etc.

I could write these shaders just fine, but then, say you want to enable Features 1,4,7,8 but none of the others. You could write a complete shader that handles that permutation just fine but likely it would never be re-used and you would have to write VERY similar code for the next permutation. The number of complete shaders you write would quickly get out of hand if you had to write a new shader for every permutation.

So, does anyone know how to re-use shader components without having to write a complete shader for every possible permutation of features? Can you enable multiple shaders somehow?

The only way I know how to do this is to write a single VERY large vertex and pixel shader whose features you can turn on/off by setting internally used parameters. That does not seem very efficient and is not very modular. I believe there must be a better way.
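Roughly what I mean (all names made up); the flags would be ordinary effect parameters set from the application with ID3DXEffect::SetBool():

// Sketch of the "one big shader with switches" approach.
bool   g_EnableToon;
bool   g_EnableFog;
float3 g_FogColor;

float4 UberPS(float3 color : COLOR0, float fogFactor : TEXCOORD0) : COLOR
{
    float3 c = color;
    if (g_EnableToon)
        c = floor(c * 4.0f) / 4.0f;                    // crude banded shading
    if (g_EnableFog)
        c = lerp(c, g_FogColor, saturate(fogFactor));  // simple fog blend
    return float4(c, 1.0f);
}

On ps_2_0-class hardware those branches generally get flattened (both sides evaluated, results selected), which is part of why this feels wasteful.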

Does anybody know? Any thoughts?
Thank you,
Jeremy (grill8)
Quote:
The only way I know how to do this is to write a single VERY large vertex and pixel shader whose features you can turn on/off by setting internally used parameters. That does not seem very efficient and is not very modular. I believe there must be a better way.

There's no reason why it shouldn't be efficient. Keep in mind that you can assign constant values before compiling the shader; that way you can disable the parts you don't need at compile time, and they'll be eliminated from the compiled shader code. But I agree, it's still not an ideal solution. :)
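One way to read that suggestion (a sketch, names made up): supply the constants as preprocessor defines when you compile (D3DX lets you pass a macro list to the shader/effect compiler), so the disabled paths never reach the compiled code at all:

// ENABLE_FOG can be forced to 0 or 1 per compile, e.g. via the macro list
// the application hands to the D3DX compiler.
#ifndef ENABLE_FOG
#define ENABLE_FOG 0
#endif

float3 g_FogColor;

float4 MainPS(float3 color : COLOR0, float fogFactor : TEXCOORD0) : COLOR
{
    float3 c = color;
#if ENABLE_FOG
    c = lerp(c, g_FogColor, saturate(fogFactor));  // only present when ENABLE_FOG != 0
#endif
    return float4(c, 1.0f);
}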
Firstly I'd point out that there isn't necessarily a single correct way of doing this. Ultimately it'll get as complex as you want it to be [wink]

I can't think of any specific examples, but if you search around (forums, magazines/books and so on...) you'll find this sort of thing discussed quite a bit. I'd highly recommend digging into the DirectXDev mailing list archives - I'm sure this sort of thing has been discussed numerous times over the years [smile]

Personally I don't like the use of C-style #define, but the use of #include can be quite nifty.

Primarily I drive combinations via techniques and uniform parameters. I can then write several "uber" functions in HLSL and have the compiler split/combine them as appropriate. It means you have to manage a lot of techniques, but clever use of annotations and compile-time discovery makes this fairly easy.
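As a rough illustration of that (names, features, and the annotation scheme are all made up): one "uber" function with uniform bool parameters, specialised per technique by passing literal arguments at compile time, plus an annotation the application can query to pick a technique per object:

float3 g_FogColor;

// One "uber" pixel function; the uniform bools become compile-time constants
// when a technique passes literals, so the unused branches are stripped.
float4 UberPS(float3 color     : COLOR0,
              float  fogFactor : TEXCOORD0,
              uniform bool useFog,
              uniform bool useToon) : COLOR
{
    float3 c = color;
    if (useToon) c = floor(c * 4.0f) / 4.0f;
    if (useFog)  c = lerp(c, g_FogColor, saturate(fogFactor));
    return float4(c, 1.0f);
}

// The string annotation is just a convention the application reads back
// (through the ID3DXEffect annotation API) to select a technique.
technique Plain   < string Features = "";         > { pass P0 { PixelShader = compile ps_2_0 UberPS(false, false); } }
technique Toon    < string Features = "toon";     > { pass P0 { PixelShader = compile ps_2_0 UberPS(false, true ); } }
technique FogToon < string Features = "fog toon"; > { pass P0 { PixelShader = compile ps_2_0 UberPS(true,  true ); } }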

I wrote about annotation-based discovery in my journal: Useful tricks with annotations.

It's not a perfect solution, and it does still require a bit of potentially hairy effect-file editing when adding new stuff, but it's a lot more manageable than a more trivial approach. You could even split individual HLSL functions out into separate files that are #include'd as appropriate.

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Thanks for the tips.

Does anyone else have any thoughts/opinions/ideas?

Jeremy (grill8)
Quote: Original post by grill8
I could write these shaders just fine, but then, say you want to enable Features 1,4,7,8 but none of the others. You could write a complete shader that handles that permutation just fine but likely it would never be re-used and you would have to write VERY similar code for the next permutation. The number of complete shaders you write would quickly get out of hand if you had to write a new shader for every permutation.

So, does anyone know how to re-use shader components without having to write a complete shader for every possible permutation of features? Can you enable multiple shaders somehow?


The whole point of the system is that I write the component parts, and then the composite shaders are all machine generated.

This isn't a particularly odd approach - if memory serves, Half Life 2 had quite a large number of target variants, but they approached it more with conditionals than with shader chunks.
Hi

You might want to take a look at the "FragmentLinker" sample in the DirectX documentation for C++.

- JA
