Fragment Linker

Started by RenderCache
4 comments, last by RenderCache 19 years, 6 months ago
Hihi! This sample in the SDK looks damn cool. Does it mean that we could process different forms of fragments in one pass? Thx!
Quote:Original post by RenderCache
Hihi!
This sample in the SDK looks damn cool. Does it mean that we could process different forms of fragments in one pass?

Thx!
Yes, ID3DXFragmentLinker is quite a useful interface that can really boost productivity if used correctly. I'm not sure exactly what your question means, though... can you explain it a little more?

From the FragmentLinker whitepaper:
Quote:Large-scale Direct3D applications commonly employ a large set of redundant shaders covering every supported fallback case for a given technique (often generated by uniform parameters) from which the appropriate shader for the current graphics hardware is selected at run time. This approach can result in a large amount of compiled shaders being included with the application, only a small fraction of which are ever used on a given machine. Using shader fragments, the desired shader for the current graphics card can be built at run time.


The purpose of fragment linking is to minimize the amount of code you have to rewrite for each shader. Using it, you can create a shader on the fly, based on the current graphics card. This is useful because you only have to write each piece of code once, not once per shader model.
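To give a rough idea of how it's driven from the application side, here is an untested sketch of the load-time setup: create the linker, compile every fragment in the effect file into a buffer, and register them with the linker. The file name and cache size are placeholder values, and the exact parameter order is worth verifying against d3dx9shader.h:

#include <d3dx9.h>

ID3DXFragmentLinker* g_pFragmentLinker = NULL;

HRESULT InitFragmentLinker( IDirect3DDevice9* pDevice )
{
    ID3DXBuffer* pCompiledFragments = NULL;

    // Create the linker (the 256-instruction shader cache size is an arbitrary example)
    HRESULT hr = D3DXCreateFragmentLinker( pDevice, 256, &g_pFragmentLinker );
    if( FAILED( hr ) )
        return hr;

    // Compile every fragment in the effect file into a single buffer...
    hr = D3DXGatherFragmentsFromFile( TEXT("FragmentLinker.fx"), NULL, NULL, 0,
                                      &pCompiledFragments, NULL );
    if( FAILED( hr ) )
        return hr;

    // ...and hand them all to the linker so they can be combined on demand
    hr = g_pFragmentLinker->AddFragments( (DWORD*)pCompiledFragments->GetBufferPointer() );

    pCompiledFragments->Release();
    return hr;
}

After that, the linker knows about every fragment and can stitch them together whenever you ask.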
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
But, of course, you could already do the same thing with strcat()...
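(Only half joking - the manual version is just pasting HLSL source strings together and compiling the result with D3DXCompileShader. A rough, untested sketch, with made-up shader snippets purely for illustration:)

#include <d3dx9.h>
#include <string>

// Build a vertex shader by concatenating source strings instead of linking fragments.
// The HLSL snippets are made-up placeholders.
IDirect3DVertexShader9* BuildShaderTheOldWay( IDirect3DDevice9* pDevice, bool bAnimate )
{
    std::string src = "float4x4 g_mWorldViewProjection;\n"
                      "float g_fTime;\n"
                      "float4 main( float4 vPos : POSITION ) : POSITION\n"
                      "{\n";
    if( bAnimate )
        src += "    vPos.x *= (1 + sin( g_fTime )/2);\n";   // the optional "fragment"
    src += "    return mul( vPos, g_mWorldViewProjection );\n"
           "}\n";

    ID3DXBuffer* pCode = NULL;
    IDirect3DVertexShader9* pShader = NULL;

    if( SUCCEEDED( D3DXCompileShader( src.c_str(), (UINT)src.size(), NULL, NULL,
                                      "main", "vs_1_1", 0, &pCode, NULL, NULL ) ) )
    {
        pDevice->CreateVertexShader( (DWORD*)pCode->GetBufferPointer(), &pShader );
        pCode->Release();
    }
    return pShader;
}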
enum Bool { True, False, FileNotFound };
With reference to the sample: suppose only the vertex-animation and ambient vertex shader fragments are to be combined. I noticed that what would otherwise be two vertex shaders (somewhat multipassed) can be done in one draw call.

Is there a fallacy in my observation with regard to performance?

thx!
RC
Quote:Original post by RenderCache
With reference to the sample: suppose only the vertex-animation and ambient vertex shader fragments are to be combined. I noticed that what would otherwise be two vertex shaders (somewhat multipassed) can be done in one draw call.

Is there a fallacy in my observation with regard to performance?

thx!
RC
It's not that multiple vertex shaders are being used; it's that multiple vertex shader fragments are being linked together into a single shader.

For example (this is taken from the SDK sample):

// Projection VS Fragment
void Projection( float4 vPosObject: POSITION,
                 float3 vNormalObject: NORMAL,
                 float2 vTexCoordIn: TEXCOORD0,
                 out float4 vPosWorld: r_PosWorld,
                 out float3 vNormalWorld: r_NormalWorld,
                 out float4 vPosProj: POSITION,
                 out float2 vTexCoordOut: TEXCOORD0,
                 uniform bool bAnimate
               )
{
    // Optional vertex animation
    if( bAnimate )
        vPosObject.x *= (1 + sin( g_fTime )/2);

    // Transform the position into world space for lighting, and projected space
    // for display
    vPosWorld = mul( vPosObject, g_mWorld );
    vPosProj = mul( vPosObject, g_mWorldViewProjection );

    // Transform the normal into world space for lighting
    vNormalWorld = mul( vNormalObject, (float3x3)g_mWorld );

    // Pass the texture coordinate
    vTexCoordOut = vTexCoordIn;
}

vertexfragment ProjectionFragment_Animated = compile_fragment vs_1_1 Projection( true );
vertexfragment ProjectionFragment_Static = compile_fragment vs_1_1 Projection( false );

// Ambient VS Fragment
void Ambient( out float4 vColor: COLOR0 )
{
    // Compute the ambient component of illumination
    vColor = g_vLightColor * g_vMaterialAmbient;
}

vertexfragment AmbientFragment = compile_fragment vs_1_1 Ambient();


If this were compiled using the Projection and Ambient fragments, the resulting code would be:

// The compiled vertex shader
void MyVertexShader( float4 vPosObject: POSITION,
                     float3 vNormalObject: NORMAL,
                     float2 vTexCoordIn: TEXCOORD0,
                     out float4 vPosWorld: r_PosWorld,
                     out float3 vNormalWorld: r_NormalWorld,
                     out float4 vPosProj: POSITION,
                     out float2 vTexCoordOut: TEXCOORD0,
                     uniform bool bAnimate,
                     out float4 vColor: COLOR0
                   )
{
    /* FROM THE PROJECTION FRAGMENT */

    // Optional vertex animation
    if( bAnimate )
        vPosObject.x *= (1 + sin( g_fTime )/2);

    // Transform the position into world space for lighting, and projected space
    // for display
    vPosWorld = mul( vPosObject, g_mWorld );
    vPosProj = mul( vPosObject, g_mWorldViewProjection );

    // Transform the normal into world space for lighting
    vNormalWorld = mul( vNormalObject, (float3x3)g_mWorld );

    // Pass the texture coordinate
    vTexCoordOut = vTexCoordIn;

    /* FROM THE AMBIENT FRAGMENT */

    // Compute the ambient component of illumination
    vColor = g_vLightColor * g_vMaterialAmbient;
}


As you can see, with fragment linking, multiple pieces of code are combined and then compiled. You aren't actually using multiple vertex shaders - you're just piecing the fragments together into one.
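At run time, you then ask the linker for just the fragments you want and link them into a single vertex shader for that one draw call. Another untested sketch, continuing from the setup code earlier in the thread - I'm assuming the handles are looked up by the vertexfragment variable names (check the SDK sample for the exact lookup it does), and the LinkVertexShader parameter order is worth verifying against the headers:

IDirect3DVertexShader9* LinkAnimatedAmbientShader( IDirect3DDevice9* pDevice )
{
    // Pick the fragments to combine
    D3DXHANDLE hFragments[2];
    hFragments[0] = g_pFragmentLinker->GetFragmentHandleByName( "ProjectionFragment_Animated" );
    hFragments[1] = g_pFragmentLinker->GetFragmentHandleByName( "AmbientFragment" );

    // Link them into one vertex shader for the target profile
    IDirect3DVertexShader9* pVertexShader = NULL;
    if( SUCCEEDED( g_pFragmentLinker->LinkVertexShader( "vs_1_1", 0, hFragments, 2,
                                                        &pVertexShader, NULL ) ) )
    {
        pDevice->SetVertexShader( pVertexShader );   // one shader, one draw call
    }
    return pVertexShader;
}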
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Ahh, I see!

