Vince B

Normalizing Normal Maps


I thought I understood normal maps, although some recent calculations make me less sure of that. AFAIK the red, green, and blue channels of a normal map encode the scalar (dot) product of the local surface normal and a unit vector aligned with the X, Y, or Z axis respectively. The X and Y angles cover the range from -pi to +pi, while the Z angle spans -pi/2 to +pi/2, since surfaces with normals pointing away are not of interest for this. If this is so, then since the scalar product of two unit vectors is the cosine of the angle between them, I can usefully think of a normal map as encoding the direction cosines of the local surface normal for each pixel.

If that is true, then since the sum of the squares of the direction cosines equals 1.0, I should be able to check whether a normal-map pixel is normalized by calculating that sum. The equation I used is:

sum = [(R-N)/N]^2 + [(G-N)/N]^2 + [B/255]^2

where R = red value, G = green value, B = blue value, and N is a number between 127 and 128 (I'm not sure which value is best).

The problem is that for some normal maps the sum is always 1.00 within rounding error, but for others it is far from 1.0. I took a normal map from Ben Cloward's collection and processed it with nVidia's Texture Tools 2, using the -norm function. I calculated the sum using the formula above and got a value of about 0.59! The processed normal map looked dark, too. I didn't think I was playing in the mud, but I'm not sure any more. Any constructive comments would be welcome.
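The sum-of-squares check described in the post can be sketched as follows. This is a minimal illustration of the poster's own formula, assuming 8-bit channels with signed X/Y, unsigned Z, and a midpoint N of 127.5 (a guess at the "between 127 and 128" value):

```python
def length_squared(r, g, b, n=127.5):
    """Decode an 8-bit normal-map pixel and return x^2 + y^2 + z^2."""
    x = (r - n) / n        # maps 0..255 to roughly -1..+1
    y = (g - n) / n
    z = b / 255.0          # Z treated as unsigned, per the post
    return x * x + y * y + z * z

# A "flat" pixel should come out very close to 1.0:
print(length_squared(127, 127, 255))
```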

There are more ways to store normals in a texture, but I don't know of any that uses a dot product (wouldn't that make it 4-component?). The simplest way is to store a unit vector in the RGB channels, then use the formula normalVector = textureRGBPixel * 2.0f - 1.0f to get the normal (since the texture is probably in the 0.0f-1.0f range).
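The decode step above can be sketched outside a shader; this just remaps texture channels from [0, 1] back to a vector in [-1, 1]:

```python
def decode_normal(rgb):
    """Map an (r, g, b) sample in [0, 1] to a normal in [-1, 1]."""
    return tuple(c * 2.0 - 1.0 for c in rgb)

# A flat "blue" normal-map pixel decodes to the +Z axis:
print(decode_normal((0.5, 0.5, 1.0)))  # → (0.0, 0.0, 1.0)
```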

Usually, normal maps just represent normal vectors. Typically red is the x, green the y, and blue the z component of the normal vector. That's it, no dot products involved at all. You normalize them the same way you normalize any vector, i.e. V.xyz / V.length.

Other ways to encode normal maps exist as well, for example polar maps, which only use two components.
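The normalization described above (V.xyz / V.length) is just the standard vector normalization, sketched here:

```python
import math

def normalize(v):
    """Scale a vector to unit length by dividing by its magnitude."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

n = normalize((3.0, 0.0, 4.0))
print(n)  # → (0.6, 0.0, 0.8)
```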

n3Xus and Yann L, thanks for replying.

It seems that both of you are saying that the colors just store the X, Y and Z components of the surface normal. But to calculate a component, you need to calculate its projection along the axis in question--which is proportional to the cosine of the angle between them. Isn't that identical to the scalar (dot) product of the surface normal and a unit vector along the axis?

Contrary to appearances, I'm not trying to start an argument. However, I am trying to gain as precise a knowledge about this as I can. Recently I converted a quarter-million-poly mesh into a two-poly mesh using textures! Although I can't count how many meshes I've mapped, somehow this hit me as just how powerful textures can be. Although my natural tendency is to continue modding (for Oblivion), this convinced me to stop for a while and dig deeper into textures. Now that the newest Blender supports graphical node-based procedural textures, I hope to transfer what I've learned with AOI's procedural generator to Blender. I also want to learn as much as I can about tweaking normal maps--hence my original post. So, please bear with me. :)

Quote:
Original post by Vince B
It seems that both of you are saying that the colors just store the X, Y and Z components of the surface normal. But to calculate a component, you need to calculate its projection along the axis in question--which is proportional to the cosine of the angle between them. Isn't that identical to the scalar (dot) product of the surface normal and a unit vector along the axis?
How on earth are you storing your vectors? A 3-dimensional vector is typically represented as 3 floating point numbers, named x, y and z (or i, j and k in mathematical circles). In your normal map, you literally just store those 3 numbers.

swiftcoder, other than the normalizing necessary to fit the signed, floating-point numbers into 8-bit integers, you are, of course, absolutely correct. And, no, I don't know any other reasonable way of storing the vector. Still, if the stored components represent a surface normal of unit length, I believe the sum of the squares of the X, Y, and Z components should equal 1.0. However, I'll just chill out on this for a while. Thanks for responding.

I hope you "decompress" them by * 2 - 1.
Some error can arise when you interpolate them with texture samplers; that's why you usually normalize them in shaders, after sampling.
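The interpolation error mentioned above is easy to demonstrate: linearly blending two unit normals, as a bilinear texture sampler does, produces a vector shorter than unit length, which is why a renormalize after sampling is needed. A minimal sketch:

```python
import math

def lerp(a, b, t):
    """Linearly interpolate between two vectors, like a texture sampler."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def length(v):
    return math.sqrt(sum(c * c for c in v))

n0 = (1.0, 0.0, 0.0)          # unit normal pointing along +X
n1 = (0.0, 0.0, 1.0)          # unit normal pointing along +Z
mid = lerp(n0, n1, 0.5)       # what the sampler hands the shader

print(length(mid))            # ~0.707: no longer unit length

renorm = tuple(c / length(mid) for c in mid)
print(length(renorm))         # 1.0 again after renormalizing
```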

Quote:
Original post by Vince B
swiftcoder, other than the normalizing necessary to fit the signed, floating-point numbers into 8-bit integers, you are, of course, absolutely correct. And, no, I don't know any other reasonable way of storing the vector. Still, if the stored components represent a surface normal of unit length, I believe the sum of the squares of the X, Y, and Z components should equal 1.0. However, I'll just chill out on this for a while. Thanks for responding.
Sure, you should normalise the vector before storing it [i.e. V / len(V)], so that you can use the full range of the 8-bit format. You still need to normalise them again in the pixel shader, however, because the texture samplers interpolate across several pixels of the normal map.
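The full roundtrip described above (normalize, pack into 8-bit, decode, renormalize) can be sketched like this. The round-to-nearest quantization used here is one common convention, not the only one:

```python
import math

def encode(v):
    """Normalize a vector and pack each component into an 8-bit channel."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(int(round((c / length * 0.5 + 0.5) * 255)) for c in v)

def decode(rgb):
    """Unpack 8-bit channels back to [-1, 1] and renormalize."""
    v = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)   # renormalize after decoding

pixel = encode((0.2, 0.3, 0.93))
restored = decode(pixel)
print(pixel, restored)   # restored is unit length, close to the input
```

The decoded vector is not bit-identical to the input because of the 8-bit quantization, which is exactly why shaders renormalize after sampling.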

Quote:
Original post by Pragma
Did you try treating B the same as the other components?

i.e. [(B-N)/N]^2

No, I did not. The equation works for X (Red) and Y (Green) since all angles are possible (from -pi to +pi), so those components need to be signed. For Z (Blue), angles greater than +/- pi/2 correspond to the normal pointing away from the viewer, so the Z component is unsigned. That's why the normal map for a perfectly flat surface is all (127, 127, 255) and not (127, 127, 127).
