Confused about normalizing Normals and compacting Normals

Started by Hyunkel; 1 comment, last by Hyunkel 14 years, 1 month ago
I had a long discussion with one of my classmates today about how to properly normalize normals, and we found out that both of us are rather confused about the subject. I always assumed that the proper place to normalize would be in the vertex shader:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    
    output.Normal = normalize(mul(input.Normal, (float3x3)World)); // rotate by World's upper 3x3, then normalize

    return output;
}
I noticed that a lot of people only apply normalization in the pixel shader:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    
    output.Normal = mul(input.Normal, (float3x3)World); // rotated, but NOT normalized here

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
	float3 normal = normalize(input.Normal);
	...
}
And in fact I have even seen people normalize them both in the vertex and in the pixel shader.

I'm asking because I'm working on a deferred renderer right now, and I'm trying to compact my normal data into 2 channels instead of 3, as explained here: http://aras-p.info/texts/CompactNormalStorage.html

Basically, I tried using a setup similar to the one on that page to see which method would produce the best result for my engine. I noticed that with the CryEngine 3 method I ended up with no quality loss at all, which seemed unrealistic. It turns out this happened because I normalized my normals in the pixel shader; if I normalize them in the vertex shader instead, I end up with exactly the same results as Aras does.

So, which is the correct way of doing this?

Assuming that doing it in the vertex shader is the correct way, I made a few additional tests using the CryEngine 3 method:
// Spheremap-style encoding (CryEngine 3 variant): stores a unit normal in 2 channels.
float2 encodeCry3(float3 normal)
{
    float2 encoded = normalize(normal.xy) * sqrt(-normal.z*0.5+0.5);
    return encoded*0.5+0.5; // remap from [-1,1] to [0,1] for storage
}

float3 decodeCry3(float2 encoded)
{
    float3 decoded;
    float2 temp = encoded*2-1; // remap back to [-1,1]
    decoded.z = -(dot(temp,temp)*2-1);
    decoded.xy = normalize(temp) * sqrt(1-decoded.z*decoded.z);
    return decoded;
}
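For context, here is a minimal sketch of how this encode/decode pair would sit in a deferred pipeline. This is my own illustration, not code from the thread: the GBufferOutput struct, the NormalBuffer sampler, and the FetchNormal helper are placeholders, and encodeCry3/decodeCry3 are assumed to be in scope.

// Hypothetical G-buffer write: pack the 2-channel normal into one render target.
struct GBufferOutput
{
    float4 NormalDepth : COLOR0; // xy = encoded normal, zw free for e.g. depth
};

GBufferOutput GBufferPS(VertexShaderOutput input)
{
    GBufferOutput output;
    float3 n = normalize(input.Normal);    // re-normalize after interpolation
    output.NormalDepth.xy = encodeCry3(n); // 2 channels instead of 3
    output.NormalDepth.zw = 0;             // placeholder for other data
    return output;
}

// Hypothetical lighting pass: recover the world-space normal from the G-buffer.
sampler NormalBuffer : register(s0);

float3 FetchNormal(float2 texCoord)
{
    return decodeCry3(tex2D(NormalBuffer, texCoord).xy);
}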
This is the initial normal data (normalized in the vertex shader only). In other words, this is my "original" data, and I'll compare my other tests against it. I am going to use
abs(original-decodeCry3(encodeCry3(original)))*10
to calculate the error. I multiply by 10 to make the error easier to see, so even though the errors will look quite high, they're actually ten times smaller. This is the difference (or quality loss, if you want) between the original picture posted above and encoding+decoding with the CryEngine 3 method:
	float3 original = input.Normal;
	output.error = float4(abs(original-decodeCry3(encodeCry3(original)))*10,1);
As you can see, this causes an artifact, in fact exactly the same one as in Aras' test. However, if I also normalize in the pixel shader before encoding+decoding, I get this:
	float3 original = input.Normal;
	float3 normalized = normalize(input.Normal);
	output.error = float4(abs(original-decodeCry3(encodeCry3(normalized)))*10,1);
As you can see, this gets rid of the artifact, and the overall quality loss is quite a lot smaller.

So, do I assume correctly that in my situation (because I compact the normals), I'm better off normalizing my normals both in the vertex and in the pixel shader?

Thanks,
Hyu
If you interpolate between two normalized vectors, the length of the interpolated vector in between is not necessarily 1. So you might need to normalize in the pixel shader too for perfect results.
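For instance (my own illustration, not from the reply):

// Interpolating two unit vectors yields a shorter vector.
float NotUnitLength()
{
    float3 a   = float3(1, 0, 0); // length 1
    float3 b   = float3(0, 1, 0); // length 1
    float3 mid = lerp(a, b, 0.5); // (0.5, 0.5, 0)
    return length(mid);           // ~0.707, not 1 -> re-normalize per pixel
}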

So, do I understand correctly that in order to get "perfect" results, I need to normalize in both the vertex and the pixel shader?

Simplified:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.Normal = normalize(mul(input.Normal, (float3x3)World));

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 normal = normalize(input.Normal);
    return float4(normal, 1);
}


So this would produce "perfect" normals, and I can use it as a reference for my tests?
Because for some reason, in this exact situation, the CryEngine 3 normal-compacting method produces a "perfect" result:

float2 encodeCry3(float3 normal)
{
    float2 encoded = normalize(normal.xy) * sqrt(-normal.z*0.5+0.5);
    return encoded*0.5+0.5;
}

float3 decodeCry3(float2 encoded)
{
    float3 decoded;
    float2 temp = encoded*2-1;
    decoded.z = -(dot(temp,temp)*2-1);
    decoded.xy = normalize(temp) * sqrt(1-decoded.z*decoded.z);
    return decoded;
}

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.Normal = normalize(mul(input.Normal, (float3x3)World));

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 normal = normalize(input.Normal);
    return float4(abs(normal - decodeCry3(encodeCry3(normal))) * 10, 1);
}


This creates a perfectly black texture.
Does this indicate that the guys over at Crytek have figured out how to compact 3 normal channels into 2 without losing any quality at all, provided that you normalize your normals in both the vertex and the pixel shader?
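For what it's worth, here is a quick algebra check (my own, not from the thread) suggesting why the round trip can come out exact for unit-length inputs, ignoring the [0,1] remap, which decode undoes:

// Round trip for the unit normal n = (0.6, 0.0, -0.8):
//   encode: normalize(n.xy)      = (1, 0)
//           sqrt(-n.z*0.5 + 0.5) = sqrt(0.9)
//           encoded              = (sqrt(0.9), 0)
//   decode: temp                 = (sqrt(0.9), 0)
//           dot(temp, temp)      = 0.9
//           decoded.z            = -(0.9*2 - 1)            = -0.8     (matches n.z)
//           decoded.xy           = (1, 0) * sqrt(1 - 0.64) = (0.6, 0) (matches n.xy)
// The transform itself is exact for unit-length normals; quality loss only
// appears once the encoded value is quantized when stored in the G-buffer.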

