different VS and PS versus branching?

Hi,
I know I should probably profile first, but what do you think would perform better?
The situation is that I need two 'paths' in my shader: one with normal mapping and one without.

Scenarios:

 

1. Different PS and VS for with and without normal mapping, use 2 different Techniques

2. Send a bool or something to the shader, and branch (one VS and PS, and one technique)

(i.e. if a normal map is present, do some extra work in both the VS and PS)

 

The actual differing code for WITH normal mapping is as follows:

// VS
	// Worldspace to Tangent space for normalMapping
	Out.WorldToTangent[0] = mul(normalize(input.Tangent), World);
	Out.WorldToTangent[1] = mul(normalize(input.Binormal), World);
	Out.WorldToTangent[2] = mul(normalize(input.Normal), World);

// PS
	float3 normalMap = normalize(2.0f * tex2D(normalMapSampler, input.TexCoord).xyz - 1.0f);
	normalMap = mul(normalMap, input.WorldToTangent);

// instead of
//	float3 normalMap = normalize(input.Normal);

I'm using FX files and could also decide to have 2 separate effects/FX files, but that sounds like a waste to me.

Any thoughts?


A 3rd option is to use the same code for both objects, but use a flat "blue" normal map for the objects that don't require normal mapping ;)

As long as it's a small texture (e.g. 1x1 pixels) then those texture fetches will be quite fast due to every pixel fetching the same texel, meaning it will be very cache friendly.
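As a sketch of that idea in plain C++ (the helper names here are made up, and real code would upload the texel through your texture-creation API), a 1x1 "flat blue" texel and the matching decode look like:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical helper, not an actual D3D call: one RGBA8 texel for a
// flat normal map. (128, 128, 255) decodes to roughly (0, 0, 1).
std::array<std::uint8_t, 4> FlatNormalTexel()
{
    return { 128, 128, 255, 255 };
}

// Mirrors the shader's decode:  n = 2 * (c / 255) - 1
float DecodeChannel(std::uint8_t c)
{
    return 2.0f * (c / 255.0f) - 1.0f;
}
```

Uploading that single texel as a 1x1 texture and binding it for materials without a normal map gives exactly this "always sample" behaviour.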

 

But yes, as with any optimization question, you've got to test and profile the different options B)

 

As a guess, I would say that if you're drawing a large number of pixels with each shader (>1000 normal-mapped, then >1000 non-normal-mapped pixels), then it will be faster to switch shaders.

If most pixels are normal-mapped, and only a few are non-normal-mapped, then I would go with option #3 (just always pay the price of normal mapping).

If most are non-normal-mapped, and only a few are normal-mapped, then I would try the branching method, but also profile it against the shader swap.

Thanks. I'll go for implementing option 3 for now. In my loading code for diffuse textures I'll check whether a texture is available with the same filename plus NRM before the extension. If not, I'll load the default/blue normal map. This gives nice flexibility, and for now it assumes that there are more materials with a normal map than without. I can always change it later on.
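A minimal sketch of that lookup with `std::filesystem` (C++17; the function names and fallback path are made up for illustration):

```cpp
#include <cassert>
#include <filesystem>

namespace fs = std::filesystem;

// Hypothetical helper: build "wallNRM.dds" from "wall.dds".
fs::path NormalMapName(const fs::path& diffuse)
{
    fs::path p = diffuse;
    p.replace_filename(diffuse.stem().string() + "NRM" +
                       diffuse.extension().string());
    return p;
}

// If the NRM file exists next to the diffuse texture, use it;
// otherwise fall back to the default flat 'blue' normal map.
fs::path NormalMapFor(const fs::path& diffuse,
                      const fs::path& fallback = "textures/defaultNRM.dds")
{
    fs::path candidate = NormalMapName(diffuse);
    return fs::exists(candidate) ? candidate : fallback;
}
```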

Just to be sure:
if a pixel in the normal map has R 255, G 255 and B 255, would that mean +1 for each normal component, so 128/128/128 would be 'neutral'? Is that correct? (Or 0.5/0.5/0.5? I haven't made many normal maps yet :))


OK, this was definitely a thinking mistake (or whatever you'd call it :))

 

The 'default' should probably have the X and Y components set to zero (byte value 128), which are the R and G channels.

And B set to 1 (byte value 255). That means the normal points 'outwards', with no perturbation in X and Y (U/V).
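As a quick check of those byte values (the helper name is made up), encoding with the inverse of the shader's n = 2 * c/255 - 1 decode gives exactly 128 for a zero component and 255 for +1:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical helper: map a normal component in [-1, 1] to a byte;
// the inverse of the shader's decode  n = 2 * (c / 255) - 1.
std::uint8_t EncodeChannel(float n)
{
    return static_cast<std::uint8_t>(
        std::lround((n + 1.0f) * 0.5f * 255.0f));
}
```

So the flat/default normal (0, 0, 1) encodes to (128, 128, 255), the uniform "blue" of a neutral normal map.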

Edited by cozzie


Implemented and up & running.

No performance issues so far, but that might be because only 2 of the 120 unique materials in the scene have a normal map yet :)

The rest use the default 4x4-pixel 'blue map'.

 

Now that I think of it, would you handle alpha maps the same way?

In my scenes/situations I don't think that would be a good idea, because only a few materials are 'blended'/have alpha maps.

Currently my scenegraph sorts renderables that are 'blended'; I might change this to sort by material instead of by renderable (mesh instance).
