Use fragment linker?

Started by cppcdr, 4 comments, last by S1CA 18 years ago
I was wondering what is better: using full HLSL shaders with techniques and passes, or using the fragment linker to link different parts?
Quote:Original post by cppcdr
I was wondering what is better: using full HLSL shaders with techniques and passes, or using the fragment linker to link different parts?

It depends on the scale of your project. If you have to support many different models with arbitrary material configurations, it may be better to check out the fragment linker. Personally, I made a custom fragment linker, as the D3DX one isn't so hot. It's quite easy to do using little tokens that you place as comments in your HLSL. For example:

...
// EvaluateDirLight param1 param2 param3
// EvaluatePointLight param1 param2 param3 param4
// EvaluateSpotLight param1 param2 param3 param4 param5
...


would ultimately be expanded to function calls to the lighting library. The linker can then duplicate these calls as necessary, depending on the number of lights.
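For illustration, after expansion those tokens end up as plain function calls into the lighting library, something like this (the parameter names and the accumulation variable here are just placeholders, not the real ones):

// Expanded output (placeholder parameter names and accumulator)
float3 lighting = 0;
lighting += EvaluateDirLight( param1, param2, param3 );
lighting += EvaluatePointLight( param1, param2, param3, param4 );
lighting += EvaluateSpotLight( param1, param2, param3, param4, param5 );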
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
I'm still not clear on whether I should use it...

I'm making an engine, and I want it to be flexible, easy to use, and fast.

1) Is there a speed difference between the fragment linker and normal effects?
2) Is the fragment linker easy to use?
3) Which is more flexible?
Quote:Original post by cppcdr
I was wondering what is better: using full HLSL shaders with techniques and passes, or using the fragment linker to link different parts?



Neither is 'better'; it's a choice you have to make based on your particular application and usage scenario(s).

Asking some questions about your particular usage may help decide which is more 'appropriate' for your application:


1) Is the number of effects your application uses manageable?

Once you have more than, say, 30 separate effects, they can get a bit unwieldy - particularly when you have a change that you want to apply across a large number of them.

Once the numbers get unwieldy but a lot of the subroutine functionality is the same across different shaders (transformation and simple lighting at the very least), linking becomes more attractive (though there are other alternatives, such as using #include in effect/HLSL files to keep common code in one place).
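For example, shared helpers can live in one .hlsl file that every effect #include's (the file, function and variable names below are just examples):

// common.hlsl - shared between effects; names are examples only
float4x4 g_mWorldViewProj;

float4 TransformToProj( float4 vPosObj )
{
    return mul( vPosObj, g_mWorldViewProj );
}

// myEffect.fx
#include "common.hlsl"

float4 BasicVS( float4 vPos : POSITION ) : POSITION
{
    return TransformToProj( vPos );
}

technique Basic
{
    pass P0
    {
        VertexShader = compile vs_1_1 BasicVS();
    }
}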


2) Do many of your shaders do the same thing, just with different variable counts for things like lights & bones (e.g. skinned_22_bones_1_light, skinned_10_bones_1_light, skinned_22_bones_2_lights, skinned_10_bones_2_lights, skinned_10_bones_3_lights, etc)?

Do the shader versions you're targeting support flow control? (i.e. loops)

If you're targeting shader versions that have flow control, then you can get away with far fewer shaders, using loops and branching to handle the different permutations (e.g. a single 'skinned_N_bones_M_lights' shader - see the sketch at the end of this question).

However, if you're targeting older shader versions, then you'll need a new shader for each combination (skinned_22_bones_2_lights). That's manageable and simpler when you only have a few shaders (see #1), but it can make linking more attractive when you have a lot (though intelligent use of uniform parameters can help reduce the number you actually have in code).
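As a rough sketch of the flow-control approach (names, register counts and the lighting model are all illustrative, and the bone skinning is left out for brevity): one compiled shader loops over however many lights the app has set, instead of one compiled shader per light count.

// Single 'uber' vertex shader for a flow-control profile (e.g. vs_3_0).
// Everything here is illustrative - not a drop-in implementation.
static const int MAX_LIGHTS = 8;

float4x4 g_mWorldViewProj;
float3   g_vLightDir[MAX_LIGHTS];
float3   g_vLightColor[MAX_LIGHTS];
int      g_nNumLights;               // set by the app per draw call

void UberVS( float4 vPos : POSITION, float3 vNormal : NORMAL,
             out float4 oPos : POSITION, out float4 oDiffuse : COLOR0 )
{
    float3 diffuse = 0;

    // dynamic loop: the same compiled shader handles 1..MAX_LIGHTS lights
    for( int i = 0; i < g_nNumLights; ++i )
        diffuse += saturate( dot( vNormal, -g_vLightDir[i] ) ) * g_vLightColor[i];

    oDiffuse = float4( diffuse, 1 );
    oPos     = mul( vPos, g_mWorldViewProj );
}

On profiles without flow control the loop has to be unrolled against a count known at compile time, which is where per-permutation shaders (or uniform parameters) come back in.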


3) Do you ever need to build shaders based on dynamic events (e.g. your app is an editor program and the user adds a light to the scene, so you find yourself needing a skinned_10_bones_4_lights)?

Storing every possible permutation of shader (e.g. skinned_2_bones_20_lights) that you think you might ever need can lead to either a) having too many shaders to manage (see #1) and/or b) a stupid number of pre-compiled shaders to load when your application starts -- it's flexible, but I've seen 40MB pre-compiled shader package files in the past - not big, not clever.

Dynamically changing requirements can mean dynamic fragment linking becomes a whole lot more attractive - though always remember that underneath all the Effects/Techniques/fragment linkers etc., it all boils down to an IDirect3DDevice9::CreateVertexShader or IDirect3DDevice9::CreatePixelShader call somewhere. By the nature of the translation from D3D shader bytecode to native GPU code/combiner setup, those calls are going to be slow, so you don't want too many of them happening "in frame".



For the shader-based applications I develop (not just PC), for "next-gen", I'd personally favour uber-shaders wrapped in FX files and clever use of uniform parameters to reduce the count further. #include'ing *.hlsl code is a trick that helps a lot with maintenance.

For older platforms that don't support flow control - arguably, the creation cost, memory overhead and shader switch cost make having too many shaders around a bad idea anyway. A recent Xbox1 game I shipped kept the numbers down to around the levels mentioned in #1 (though contractually I can't go into specific details).

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

I'm making an engine (as I already said). It has no limit on the number of shaders. In that case, I guess I should use the fragment linker, right?
Quote:Original post by cppcdr
I'm making an engine (as I already said). It has no limit on the number of shaders. In that case, I guess I should use the fragment linker, right?


Your mention of making an engine was made at the same time as I was typing a reply to your original post. Typing in a reply window doesn't mean I'm also refreshing the thread to see what changes are there...

TBH, the more open-ended your engine is, the less any one technology is going to be a "best fit". Surely when you designed/planned your engine, you had an *average* target platform and *average* use in mind. You can still apply the leading questions I posted to that average.


Quote:1) Is there a speed difference between the fragment linker and normal effects?


Both end up calling IDirect3DDevice*::CreateVertexShader and IDirect3DDevice*::CreatePixelShader with a complete shader somewhere along the line. Those calls will take the same time for both.

If you need to build shaders dynamically, then there will be a difference - how much depends on the target shader version.


Quote:2)Is the fragment linker easy to use?


'Easy' is completely subjective. Take a look at the FragmentLinker sample in the SDK to see how it compares to effects and techniques for you. For me personally both are about the same level of ease-of-use.
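From memory, the HLSL side of the D3DX linker looks something like this - small functions get wrapped up as named fragments, and the C++ side gathers and links them (the names below are made up; see the sample for the exact syntax):

// A vertex shader fragment for the D3DX fragment linker - roughly the style
// used in the SDK's FragmentLinker sample; the names here are invented.
float4x4 g_mWorldViewProj;

void ProjectPosition( float4 vPosObj : POSITION,
                      out float4 vPosProj : POSITION )
{
    vPosProj = mul( vPosObj, g_mWorldViewProj );
}

vertexfragment ProjectionFragment = compile_fragment vs_1_1 ProjectPosition();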

For 1.x shaders with permutation/combination issues, making multiple shaders with effects and techniques requires use of uniform parameters in your HLSL code, which can get quite ugly. For 1.x shaders with lots of combinations, fragment linking is easier.
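As a sketch of the uniform parameter approach (all names and the lighting model invented here): the light count is baked in at compile time, so each technique produces its own unrolled 1.x shader from the one function.

// One function, multiple compiled permutations via a uniform parameter.
// Names are illustrative only.
float4x4 g_mWorldViewProj;
float3   g_vLightDir[4];
float3   g_vLightColor[4];

void DiffuseVS( float4 vPos : POSITION, float3 vNormal : NORMAL,
                uniform int NumLights,
                out float4 oPos : POSITION, out float4 oDiffuse : COLOR0 )
{
    float3 diffuse = 0;

    // NumLights is a compile-time constant, so this unrolls even on vs_1_1
    for( int i = 0; i < NumLights; ++i )
        diffuse += saturate( dot( vNormal, -g_vLightDir[i] ) ) * g_vLightColor[i];

    oDiffuse = float4( diffuse, 1 );
    oPos     = mul( vPos, g_mWorldViewProj );
}

technique OneLight  { pass P0 { VertexShader = compile vs_1_1 DiffuseVS( 1 ); } }
technique TwoLights { pass P0 { VertexShader = compile vs_1_1 DiffuseVS( 2 ); } }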


Quote:3)Which is more flexible?


Effects and Techniques allow the most run time scriptability (at the cost of some run time performance if you use that scriptability).

Fragment linking allows you to link all your shaders at load time, which will have no run time performance cost, but will have a sizable memory and load-time cost.


Spanner in the works for the future: looking at the CTP, D3D10 doesn't do fragment linking.


Sorry if some of my answers come across as being harsh, but to all of these questions there really aren't simple yes/no answers. How long is a piece of string?

[Edited by - S1CA on March 26, 2006 6:05:27 AM]

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

