
Normalizing Normal Maps


I thought I understood normal maps, although some recent calculations make me less sure of that. AFAIK the red, green, and blue channels of a normal map encode the scalar (dot) product of the local surface normal and a unit vector aligned with the X, Y, or Z axis respectively. The X and Y values cover the range from -pi to +pi, while the Z value encompasses the range from -pi/2 to +pi/2, since surfaces with normals pointing away are not of interest for this. If this is so, since the scalar product of two unit vectors is the cosine of the angle between them, I can usefully think of a normal map as encoding the direction cosines of the local surface normal for each pixel.

If that is true, since the sum of the squares of the direction cosines equals 1.0, I should be able to check whether a normal map pixel is normalized by calculating the sum of the squares. The equation I used is: sum = [(R-N)/N]^2 + [(G-N)/N]^2 + [B/255]^2, where R = red value, G = green value, B = blue value, and N is a number between 127 and 128 (I'm not sure what value is best).

The problem is, for some normal maps the sum is always 1.00 within rounding error, but for others, the value is far from 1.0. I took a normal map from Ben Cloward's collection and processed it with nVidia's Texture Tools 2, using the -norm function. I calculated the sum using the formula above and got a value of about 0.59! The processed normal map looked dark too. I didn't think I was playing in the mud, but I'm not sure any more. Any constructive comments would be welcome.
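
Here's the check written out as a quick Python sketch, in case my formula is unclear (the pixel values are made up, and N is the midpoint I'm unsure about):

    N = 127.5   # midpoint of the 8-bit range; somewhere between 127 and 128

    def length_squared(r, g, b):
        # R and G are treated as signed around N, B as unsigned,
        # exactly as in the formula above
        x = (r - N) / N
        y = (g - N) / N
        z = b / 255.0
        return x*x + y*y + z*z

    # hypothetical pixel values from a tangent-space normal map
    print(length_squared(132, 121, 250))   # should be close to 1.0 if the pixel is normalized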

There are several ways to store normals in a texture, but I don't know of any that uses a dot product (wouldn't that make it 4-component?). The simplest way is to store a unit vector in the RGB channels, then use the formula normalVector = textureRGBPixel * 2.0f - 1.0f to get the normal (since the texture is probably in the 0.0f-1.0f range).
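
For illustration, here's that decode as a minimal Python sketch (the input values are just an example; in a shader it's the one-liner above):

    def decode_normal(rgb):
        # map each channel from the 0..1 texture range back to -1..1
        return tuple(c * 2.0 - 1.0 for c in rgb)

    print(decode_normal((0.5, 0.5, 1.0)))   # (0.0, 0.0, 1.0): a flat surface facing the viewer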

Usually, normal maps just represent normal vectors. Typically red is the x, green the y, and blue the z component of the normal vector. That's it, no dot products involved at all. You normalize them the same way you normalize any vector, i.e. V.xyz / V.length.
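
For example, a minimal normalization sketch (Python, purely illustrative):

    import math

    def normalize(x, y, z):
        # divide each component by the vector's length, i.e. V.xyz / V.length
        length = math.sqrt(x*x + y*y + z*z)
        return (x / length, y / length, z / length)

    print(normalize(0.2, 0.3, 0.9))   # the same direction, rescaled to unit length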

Other ways to encode normal maps are polar maps, for example, which only use two components.

n3Xus and Yann L, thanks for replying.

It seems that both of you are saying that the colors just store the X, Y and Z components of the surface normal. But to calculate a component, you need to calculate its projection along the axis in question--which is proportional to the cosine of the angle between them. Isn't that identical to the scalar (dot) product of the surface normal and a unit vector along the axis?
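
Maybe a concrete example of what I mean (a throwaway Python check with an arbitrary unit normal):

    n = (0.6, 0.0, 0.8)          # an arbitrary unit-length normal
    x_axis = (1.0, 0.0, 0.0)
    dot = sum(a * b for a, b in zip(n, x_axis))
    print(dot, n[0])             # both 0.6, i.e. the direction cosine along X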

Contrary to appearances, I'm not trying to start an argument. However, I am trying to gain as precise a knowledge about this as I can. Recently I converted a quarter-million-poly mesh into a two-poly mesh using textures! Although I can't count how many meshes I've mapped, somehow this drove home just how powerful textures can be. Although my natural tendency is to continue modding (for Oblivion), this convinced me to stop for a while and dig deeper into textures. Now that the newest Blender supports graphical node-based procedural textures, I hope to transfer what I've learned with AOI's procedural generator to Blender. I also want to learn as much as I can about tweaking normal maps--hence my original post. So, please bear with me. :)

Quote:
Original post by Vince B
It seems that both of you are saying that the colors just store the X, Y and Z components of the surface normal. But to calculate a component, you need to calculate its projection along the axis in question--which is proportional to the cosine of the angle between them. Isn't that identical to the scalar (dot) product of the surface normal and a unit vector along the axis?
How on earth are you storing your vectors? A 3-dimensional vector is typically represented as 3 floating point numbers, named x, y and z (or i, j and k in mathematical circles). In your normal map, you literally just store those 3 numbers.

swiftcoder, other than the normalizing necessary to fit the signed floating-point numbers into 8-bit integers, you are, of course, absolutely correct. And, no, I don't know any other reasonable way of storing the vector. Still, if the stored components represent a surface normal of unit length, I believe the sum of the squares of the X, Y, and Z components should equal 1.0. However, I'll just chill out on this for a while. Thanks for responding.

Quote:
Original post by Vince B
swiftcoder, other than the normalizing necessary to fit the signed floating-point numbers into 8-bit integers, you are, of course, absolutely correct. And, no, I don't know any other reasonable way of storing the vector. Still, if the stored components represent a surface normal of unit length, I believe the sum of the squares of the X, Y, and Z components should equal 1.0. However, I'll just chill out on this for a while. Thanks for responding.
Sure, you should normalise the vector before storing it [i.e. V / len(V)], so that you can use the full range of the 8-bit format. You still need to normalise them again in the pixel shader, however, because the texture samplers interpolate across several pixels of the normal map.

Quote:
Original post by Pragma
Did you try treating B the same as the other components?

i.e. [(B-N)/N]^2

No, I did not. The equation works for X (Red) and Y (Green) since all angles are possible (from -pi to +pi), so those components need to be signed. For Z (Blue), angles greater than +/- pi/2 correspond to the normal pointing away from the viewer, so the Z component is unsigned. That's why the normal map for a perfectly flat surface is all 127, 127, 255 and not 127, 127, 127.

Pragma, OK, I tried it. In several maps, what you suggest gave a reasonable answer, closer than the formula in my original post. In other maps, it was off by a good bit. One problem, I've discovered, is that a lot of normal maps are not normalized but still give good visual results. Ben Cloward's famous cobblestone texture looks great, but checking with either formula, it's not close to normalized.

So, I decided to make a normal map of a sphere in Blender. See:
Normal Sphere
To the best of my knowledge, normal maps created this way should be correctly normalized. Checking half a dozen pixels, my original formula was close (typically 1.00 +/- 2% or less). Trying what you suggested for the Z value didn't work well. Sums were off by as much as 30%.

Since the Z value is unsigned, subtracting half the range doesn't make sense to me. However, I could be missing something and I'm here to learn. :)

Quote:
Original post by Vince B
Since the Z value is unsigned, subtracting half the range doesn't make sense to me. However, I could be missing something and I'm here to learn. :)
I think the point Pragma and I are trying to explain is that it depends what type of normal maps you are using. For tangent-space normal maps, the z-value can never be less than zero, so your technique is an acceptable means of increasing the normal map precision.

However, you only need tangent-space normal maps if you are animating your model, and you can't trivially apply a tangent-space normal map to certain shapes (a sphere is the obvious example). Therefore in many cases you will use object space normal maps instead, and in this case, normals can point in any direction, and thus you must use a signed z-component.

Further, pretty much all tools I have used have expected a normal map with signed-z, *whether or not* they were using tangent-space normal maps. I guess it isn't generally considered worth the extra code to differentiate the tangent space maps.

It's possible to store a non-normalized vector in RGB, which is maybe what you're seeing.

Some applications don't normalize the value before saving the texture.

Usually you just normalize the vector in your shader. Of course, it might be useful to have non-normalized vectors. In this way you can store a heightmap along with the normal vector (perhaps useful for parallax mapping).

Just because you think of the values as a normalised vector doesn't make it a requirement. If you think of the vector as a direction, rather than a direction with a given length, you'll see that you can encode more directions in a texture with a non-normalised vector than with a normalised one. There are just more directions to choose from.
This means that if you can find a good way to encode the directions as non-normalised vectors, your error rate would go down.
This is why you see some people using non-normalised vectors in normal maps. The error introduced in the quantisation is less than I can notice, so I don't usually bother with this.

RDragon1 and MortenB, I understand that you are saying that the surface normal doesn't have to be a unit length vector. Certainly, based on the normal maps I've examined over the past few days, that is often the case. I found a useful discussion about normal maps by Jonathan Kreuzer, see: Object Space Normal Mapping. In it he states that the map should store the surface normal as a unit vector since, if you don't, it takes some additional calculation time, however "... the newer graphics cards provide a normalization function on the GPU that is reasonably fast ...". So, I'm not going to worry about normalization further.

Quote:
Original post by Vince B
I found a useful discussion about normal maps by Jonathan Kreuzer, see: Object Space Normal Mapping. In it he states that the map should store the surface normal as a unit vector since, if you don't, it takes some additional calculation time, however "... the newer graphics cards provide a normalization function on the GPU that is reasonably fast ...". So, I'm not going to worry about normalization further.
This is a little misleading, since even if the normal vectors are stored as unit length, you will almost always need to re-normalise them after sampling the texture in a shader.

This is because normal map textures are typically used with bilinear filtering (and even mipmapping), which means that the sampler interpolates between several adjacent pixels to ensure smooth output, and this interpolation often doesn't result in unit-length normals.
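
A quick numerical illustration (Python, with two arbitrary unit normals standing in for adjacent texels):

    import math

    # average two unit-length normals, roughly what a bilinear filter does
    a = (0.0, 0.0, 1.0)
    b = (1.0, 0.0, 0.0)
    mid = tuple((p + q) * 0.5 for p, q in zip(a, b))    # (0.5, 0.0, 0.5)

    length = math.sqrt(sum(c * c for c in mid))
    print(length)                                       # ~0.707, no longer unit length

    renormalized = tuple(c / length for c in mid)       # hence the fix-up in the shader
    print(renormalized)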

Swiftcoder, I understand your point about object-space normal maps, although my work (modding the GameBryo-engined Oblivion) involves only tangent-space normal maps. However, I can't reconcile what you say about the tools you use with my own experience. I've created normal maps with Blender, Modo, ShaderMap Pro, and nVidia's Texture Tools 2. For a smooth surface facing the observer, they all create a normal map of 127, 127, 255. Since the blue (Z) value is 255, doesn't that mean that it's encoded as an unsigned integer? I have not used the ATI software and have used Melody and a demo of CrazyBump very little. Do any of those produce 127, 127, 127 for a flat surface facing the observer?

Quote:
Original post by Vince B
Since the blue (Z) value is 255, doesn't that mean that it's encoded as an unsigned integer?
Integer texture formats are always unsigned, so when you store a normalised floating point vector in a texture, you use the following formula: (vec * 0.5 + 0.5) * 255

Note that for a float input of 1.0, it returns 255, for a float input of -1.0, it returns 0, and for a float input of 0.0 it returns 127.
Quote:
Do any of those produce 127, 127, 127 for a plane surface facing the observer?
Of course not: (127, 127, 127) is equivalent to the normalised floating point vector (0, 0, 0) - or in other words, a degenerate normal.

The key thing to take away from this is that there is no such thing as a signed integer texture format - we only emulate the sign.
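
A roundtrip sketch of that encoding, in Python for illustration (the truncation to an integer is why a component of 0.0 lands on 127 rather than 128):

    def encode(v):
        # (vec * 0.5 + 0.5) * 255, truncated to an integer per channel
        return tuple(int((c * 0.5 + 0.5) * 255) for c in v)

    def decode(rgb):
        return tuple((c / 255.0) * 2.0 - 1.0 for c in rgb)

    print(encode((0.0, 0.0, 1.0)))   # (127, 127, 255): a flat surface facing the viewer
    print(decode((127, 127, 255)))   # roughly (0, 0, 1)
    print(decode((127, 127, 127)))   # roughly (0, 0, 0), the degenerate normal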
