[HLSL/DX9] Managing multipass rendering data

2 comments, last by MJP 13 years, 8 months ago
Let's assume I have an effect with a single technique that contains two or more passes. The passes are rendered one after another. Is it possible to somehow supply the second pass with the output data from the first pass?

All I'm trying to do at the moment is blend the results of multiple shaders. I can imagine using additional render targets to store color values in the first pixel shader pass and reusing them in the second one. But how could I achieve something similar with vertex shaders?
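For the pixel shader case, the render-target idea could look like the following sketch. It assumes the application renders the first pass into a texture and binds that texture to the sampler below; g_prevPass and the texture-coordinate input are hypothetical names, not part of the original effect:

sampler2D g_prevPass; // bound by the app to the first pass's render target

float4 mainPS2(float2 uv : TEXCOORD0) : COLOR {
	float4 c = tex2D(g_prevPass, uv); // the first pass's output for this pixel
	c.g = 1; c.b = 0;
	return c;
}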

I've run a very simple test with a single white triangle in the scene. The shader code is included below (with a description of what I expected):

//this changes color to violet
float4 mainPS() : COLOR {
	return float4(1.0, 0, 1.0, 1.0);
}

//I'd expect the 'c' variable to be the output of the previous pass (violet).
//If so, the 'c' returned here should be yellow (r = 1, g = 1, b = 0).
//It's unfortunately NOT like that!
float4 mainPS2(float4 c : COLOR) : COLOR {
	c.g = 1; c.b = 0;
	return c;
}

technique technique0 {
	pass p0 {
		CullMode = None;
		VertexShader = compile vs_3_0 mainVS();
		PixelShader = compile ps_3_0 mainPS();
	}

	pass p1 {
		CullMode = None;
		VertexShader = compile vs_3_0 mainVS2();
		PixelShader = compile ps_3_0 mainPS2();
	}
}


I know about Supershaders, but that's a rather unsatisfactory solution. The fragment linker is deprecated and no longer supported in DX10. Has anything changed about this in DX10 or DX11? Is it any easier there than in DX9?
The pass notation in the effects framework doesn't actually do anything on its own. All it does is say "hey, you've got multiple passes defined in here" and gives you an interface for rendering a pass. Feeding outputs from one pass to another or anything like that has to be totally handled by you in your shader code or in your application code.

In your case, probably the simplest thing to do would be to have your mainPS2 function call the mainPS function to obtain the output. Otherwise you would have to either render the first pass to a render target and sample that in your second pass, or you'd have to use the fixed-function blending states.
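The first suggestion could look like this minimal sketch, where the second pass's shader obtains the "previous" color by calling the first function directly instead of expecting the runtime to forward it:

float4 mainPS() : COLOR {
	return float4(1.0, 0, 1.0, 1.0); // violet
}

float4 mainPS2() : COLOR {
	float4 c = mainPS(); // the "output of the previous pass"
	c.g = 1; c.b = 0;
	return c; // yellow
}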
Thanks for your response.

OK, so that only confirms my assumption that I can't do what I want ;]

But you say mainPS2 could call mainPS. What if I had a custom-made system that dynamically built a new pixel shader from such chained calls: mainPS1, mainPS2, ..., mainPSn? The output of each would feed the next one, etc. Do you think that would do the job well?
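A generated shader for such a chain might look like this sketch (mainPS1 and mainPS2 are hypothetical stage functions that each take the previous stage's color and return a modified one):

// Hypothetical stage functions emitted or selected by the build system.
float4 mainPS1(float4 c) { c.r = 1; return c; }
float4 mainPS2(float4 c) { c.g = 1; c.b = 0; return c; }

// Generated entry point chaining the stages in order.
float4 chainedPS(float4 c : COLOR) : COLOR {
	c = mainPS1(c);
	c = mainPS2(c);
	return c;
}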

One thing that seems poor is the need to recompile the effect whenever the sequence of pixel shaders changes. But as far as I can tell, that is also the case with super-shaders and similar approaches.
Yeah, that doesn't sound unreasonable. It's true that you would have to compile many permutations, but this is very common in commercial games. The alternative to compiling many permutations is to use runtime dynamic branching, which has performance implications.
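For comparison, the dynamic-branching alternative could be sketched as a single "uber" pixel shader that selects stages at runtime from uniform bools (g_useStage1 and g_useStage2 are hypothetical flags the application would set through the effect), trading recompilation for branch overhead:

bool g_useStage1; // set by the app, e.g. via ID3DXEffect::SetBool
bool g_useStage2;

float4 uberPS(float4 c : COLOR) : COLOR {
	if (g_useStage1) { c.r = 1; }       // stage 1
	if (g_useStage2) { c.g = 1; c.b = 0; } // stage 2
	return c;
}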

This topic is closed to new replies.
