I had a long discussion with a classmate today about how to properly normalize normals, and we found out that both of us are rather confused about the subject.
I always assumed that the proper place to normalize is the vertex shader:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // Rotate the normal into world space and normalize it once per vertex.
    // (The float3x3 cast avoids the implicit truncation warning; this assumes
    // World has no non-uniform scale.)
    output.Normal = normalize(mul(input.Normal, (float3x3)World));

    return output;
}
I noticed that a lot of people only apply normalization in the pixel shader:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // Pass the world-space normal through without normalizing here.
    output.Normal = mul(input.Normal, (float3x3)World);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Renormalize after interpolation.
    float3 normal = normalize(input.Normal);
    ...
}
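As far as I understand it, the reason for normalizing in the pixel shader at all is that the rasterizer interpolates the per-vertex normals component-wise, and a linear blend of two unit vectors is generally shorter than unit length. A quick CPU-side sketch of that in Python (made-up example vectors, just to illustrate):

```python
import math

def lerp(a, b, t):
    # Component-wise linear interpolation, which is effectively what the
    # rasterizer does to the varying between the vertices.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def length(v):
    return math.sqrt(sum(x * x for x in v))

# Two unit normals 90 degrees apart (made-up example values).
n0 = (1.0, 0.0, 0.0)
n1 = (0.0, 1.0, 0.0)

mid = lerp(n0, n1, 0.5)
print(length(mid))  # ~0.707, noticeably shorter than unit length
```

So even if the vertex shader writes out perfectly normalized normals, the interpolated value the pixel shader receives can be shorter than 1.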
In fact, I have even seen people normalize in both the vertex and the pixel shader.
I'm asking because I'm working on a deferred renderer right now, and I'm trying to compact my normal data into two channels instead of three, as explained here:
http://aras-p.info/texts/CompactNormalStorage.html
Basically, I used a setup similar to the one on that page to see which method would produce the best result for my engine.
I noticed that with the CryEngine 3 method I ended up with no quality loss at all, which seemed unrealistic.
It turns out this happened because I normalize my normals in the pixel shader; if I normalize them in the vertex shader instead, I get exactly the same results as Aras does.
So, which is the correct way of doing this?
Assuming that the vertex shader is the correct place to do it, I ran a few additional tests using the CryEngine 3 method:
float2 encodeCry3(float3 normal)
{
    // Store the xy direction, with the vector's length carrying z,
    // remapped from [-1, 1] to [0, 1] for storage.
    float2 encoded = normalize(normal.xy) * sqrt(-normal.z * 0.5 + 0.5);
    return encoded * 0.5 + 0.5;
}

float3 decodeCry3(float2 encoded)
{
    float3 decoded;
    float2 temp = encoded * 2 - 1;
    decoded.z = -(dot(temp, temp) * 2 - 1);
    decoded.xy = normalize(temp) * sqrt(1 - decoded.z * decoded.z);
    return decoded;
}
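To be able to test the math outside the shader, I also made a CPU-side Python port of the two functions (my own translation; plain tuples and the math module, no GPU involved). For a properly normalized input the round trip is essentially lossless:

```python
import math

def encode_cry3(n):
    # Port of the HLSL encodeCry3 above.
    l = math.hypot(n[0], n[1])
    s = math.sqrt(-n[2] * 0.5 + 0.5)
    return (n[0] / l * s * 0.5 + 0.5, n[1] / l * s * 0.5 + 0.5)

def decode_cry3(e):
    # Port of the HLSL decodeCry3 above.
    tx, ty = e[0] * 2 - 1, e[1] * 2 - 1
    z = -((tx * tx + ty * ty) * 2 - 1)
    s = math.sqrt(1 - z * z) / math.hypot(tx, ty)
    return (tx * s, ty * s, z)

n = (0.267261, 0.534522, 0.801784)  # roughly normalize((1, 2, 3))
d = decode_cry3(encode_cry3(n))
print(max(abs(a - b) for a, b in zip(n, d)))  # tiny (float rounding only)
```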
This is the initial normal data (normalized in the vertex shader only).
In other words, this is my "original" data, which I'll compare my other tests against.
I am going to use
abs(original-decodeCry3(encodeCry3(original)))*10
to calculate the error.
I multiply by 10 to make the error easier to see visually; so even though the errors will look quite high, they're actually ten times smaller.
This is the difference (or quality loss, if you want) between the original picture posted above and encoding + decoding with the CryEngine 3 method:
float3 original = input.Normal;
output.error = float4(abs(original-decodeCry3(encodeCry3(original)))*10,1);
As you can see, this causes an artifact, in fact exactly the same one as in Aras' test.
However, if I also normalize in the pixel shader before encoding + decoding, I get this:
float3 original = input.Normal;
float3 normalized = normalize(input.Normal);
output.error = float4(abs(original-decodeCry3(encodeCry3(normalized)))*10,1);
As you can see, this gets rid of the artifact, and the overall quality loss is quite a lot smaller.
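To convince myself that the missing renormalization really is the cause, here's a small CPU-side Python reproduction of the same encode/decode math; the 0.9 scale factor is a made-up stand-in for an interpolation-shrunk normal:

```python
import math

def encode_cry3(n):
    # Same math as the HLSL encodeCry3 above, ported to Python.
    l = math.hypot(n[0], n[1])
    s = math.sqrt(-n[2] * 0.5 + 0.5)
    return (n[0] / l * s * 0.5 + 0.5, n[1] / l * s * 0.5 + 0.5)

def decode_cry3(e):
    tx, ty = e[0] * 2 - 1, e[1] * 2 - 1
    z = -((tx * tx + ty * ty) * 2 - 1)
    s = math.sqrt(1 - z * z) / math.hypot(tx, ty)
    return (tx * s, ty * s, z)

unit = (0.267261, 0.534522, 0.801784)  # roughly normalize((1, 2, 3))
short = tuple(0.9 * c for c in unit)   # stand-in for a lerp-shrunk normal

def err(n):
    # Maximum per-component error against the true unit normal.
    d = decode_cry3(encode_cry3(n))
    return max(abs(a - b) for a, b in zip(unit, d))

print(err(short))  # large (~0.08): the raw z no longer matches the xy length
nrm = math.sqrt(sum(c * c for c in short))
print(err(tuple(c / nrm for c in short)))  # tiny again after renormalizing
```

The xy part is normalized inside the encoder anyway, so it's only the raw z that poisons the reconstruction, which matches the kind of artifact I'm seeing.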
So, do I assume correctly that in my situation (because I'm compacting the normals), I'm better off normalizing my normals both in the vertex and in the pixel shader?
Thanks,
Hyu