Reason for Normal Calculation


I often see this calculation in deferred lighting tutorials when decoding the normals in the light's pixel shader:


float3 normal = 2.0f * normalData.xyz - 1.0f;


And it says "Transform normal back into [-1,1] range".
I don't get why this transformation is needed when receiving the normals. Can someone tell me the reason behind this?
I've tried it in my shader, and the only visual difference I see is that the scene gets brighter/darker with or without the * 2.0f - 1.0f part. So how do I know which version is "correct"?

Texture values are in the [0,1] range, which means they go from 0 to 1.

Each channel represents one axis of a normal, which is in [-1,1], i.e. it goes from -1 to 1.

So when writing a normal into a texture, you need to scale and translate it from [-1,1] to [0,1], and when reading it back in you need to scale and translate it back.
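
For example, here's a minimal HLSL sketch of the round trip (assuming a float3 normal called "normal" and an 8-bit-per-channel render target; the names are mine, not from any particular tutorial):

// G-buffer pass: pack the [-1,1] normal into the [0,1] texture range
float3 encoded = normal * 0.5f + 0.5f;

// Lighting pass: unpack the sampled value back into [-1,1]
float3 decoded = normalData.xyz * 2.0f - 1.0f;

// Renormalize, since 8-bit quantization shortens the vector slightly
decoded = normalize(decoded);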

I see, so when encoding to store them inside a 2D texture I transform my normals into the [0,1] range, and when decoding I transform them back into the [-1,1] space.
Thanks for the explanation.


Texture values are in the [0,1] range, which means they go from 0 to 1.


This is only the case for a certain class of texture formats, namely unsigned normalized integer formats. These formats store values as integers, which the pixel shader reinterprets as [0,1] floating point, where the integer 0 maps to 0.0 and 2^(numbits) - 1 maps to 1.0. Your typical 8-bit color texture formats fall under this category. If you use a signed normalized integer format you can store [-1,1] values, which means that you don't need to do the range expansion. This is also the case if you use a floating point format that has a sign bit.
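
To illustrate (a sketch using D3D format names; pick the equivalents for your API):

// With an unsigned normalized target, e.g. DXGI_FORMAT_R8G8B8A8_UNORM,
// the sampled value is in [0,1], so expand it:
float3 n_unorm = normalData.xyz * 2.0f - 1.0f;

// With a signed normalized or floating-point target,
// e.g. DXGI_FORMAT_R8G8B8A8_SNORM or DXGI_FORMAT_R16G16B16A16_FLOAT,
// the sampled value is already signed, so no expansion is needed:
float3 n_snorm = normalData.xyz;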

Coming from Flash (where optimization counts big time :( )

I use:
normal = (nrm - 0.5) / 0.5;

to expand the normal, as one less constant needs to be uploaded :) (the 0.5 is uploaded only once and reused).

Am I being a melon, or will this sort of optimization add up to some sort of saving?


am I being a melon or will this sort of optimization add up to some sort of saving?


Multiplication is supposed to be faster than division (though I've never checked this), so the multiplication by 2.0 is probably better than the division by 0.5.

Multiplication is usually faster, but I think the saving would come from sending only one constant to the GPU (the 0.5) as opposed to two (1.0 and 2.0), because as far as I am aware any literal numbers used in a shader have to be uploaded to the GPU, and CPU -> GPU communication is a large bottleneck.

In tests I have done, uploading constants is definitely a slow process! That might only be a problem on the Flash platform, or maybe I am doing something wrong.

A decent compiler should be able to see that you're dividing by the constant 0.5 and convert it to a multiplication by 2.0. I've already confirmed that the HLSL compiler produces the same assembly for both. I don't know the Flash platform so I can't comment there (though I am puzzled about why it needs to upload constants for this kind of thing), but in the general case either (a) it shouldn't matter at all, or (b) just do the multiplication in your code. I favour (b) because it makes what you're doing more explicit to anyone reading the code.
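
For reference, the two forms side by side in HLSL (I'd expect both to compile to the same scale-and-bias, per the above, but the second doesn't rely on the compiler to do it):

float3 a = (normalData.xyz - 0.5f) / 0.5f; // divide by a constant
float3 b = normalData.xyz * 2.0f - 1.0f;   // explicit multiply, same result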

At the moment we can only write shaders in AGAL, which is fairly low level, i.e. mov, div, m44, sin, slt, etc.,
so no higher-level language is available unless you write a converter.

So you can't simply write
normal = (nrm - 0.5) / 0.5;

it would be something more like
tex ft0, v0, fs0 <2d, clamp, linear>
sub ft0, ft0, fc0
div ft0, ft0, fc0

where
ft0 = a temp register to store the normal in
v0 = the UV coordinates
fs0 = the normal map sampler
fc0 = the 0.5 constant uploaded to the GPU

So as you can see, every arbitrary number used in the shader must be uploaded as a constant, which isn't the fastest process in the world (even for such a small amount of information).
We are also limited to only 28 constants in the fragment shader, so every saving helps there too.


At the moment we can only write shaders in AGAL ....

Sounds painful.

I'm assuming that the OP is using either HLSL or Cg (from the use of float3; it would be vec3 in GLSL), where this overhead doesn't exist.

For actual numeric constants, those are baked into the shader microcode, so that's irrelevant. However, the more important thing is that when doing a scale + bias operation, you should always try to do the scale operation first, e.g. (nrm * 2) - 1. The reason is that GPU instruction sets include an instruction known as 'mad' (multiply-add), which performs (a*b + c). There is no instruction for doing add-then-multiply, (a + b) * c. If you're using all constants, the compiler might be able to rewrite your expression to produce a mad operation, but it's better to be safe...
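
In HLSL terms, a quick sketch of the two orderings (nrm here is just a sampled [0,1] value):

float3 a = nrm * 2.0f - 1.0f;    // a*b + c shape: fits a single mad per component
float3 b = (nrm - 0.5f) * 2.0f;  // (a+b)*c shape: needs a separate add and multiply,
                                 // unless the compiler rewrites it into the form above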
