Trying to understand normalize() and tex2D() with normal maps

7 comments, last by digs 8 years ago

Hello

I'm learning the normal mapping process in HLSL and am a little confused by an example in the book I'm learning from ("Introduction to Shader Programming" by Pope Kim).

What confuses me is:

float3 tangentNormal = tex2D(NormalSampler, Input.mUV).xyz;
tangentNormal = normalize(tangentNormal * 2 - 1);
I don't know why we are normalizing the tangentNormal... I thought that normalize() "converted" units into a value within a 0 to 1 range? Now that I read that back it sounds wildly inaccurate...
I'm also wondering if anyone can describe what exactly tex2D(sampler, uv) is doing? (the more detail the better)
- all I know right now is that I can supply tex2D with an image and it will apply that image across a 0 to 1 texture coordinate range
Thanks for any help or clarification
tex2D(NormalSampler, input.mUV) samples a color from the normal map texture. NormalSampler is the sampler associated with the texture; it provides the sampling state. input.mUV provides the texture coordinates to sample from.

A normal stored in a normal map must be "decoded" before it can be used. The R and G channels encode the X and Y values of the normal (tangent to the face), while the blue channel encodes the Z value and is typically just set to the same value (255) everywhere for simplicity, hence the nice cool blue color of a normal map texture.
A color channel holds values from 0 to 1, but the X and Y components of the normal can be values from -1 to 1, hence the need to decode. Inside the call to normalize, the normal sample is multiplied by 2 and then 1 is subtracted from it, which converts the channels from the range 0..1 to -1..1.

Since most normal-map generation software uses a constant color for the blue channel, the resulting vector is not of unit length. A proper normal needs to be unit length (meaning that sqrt(x^2 + y^2 + z^2) = 1). That is what normalize() does: it takes a vector of arbitrary length and converts it to a vector of unit length.
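Written out by hand, the decode-plus-normalize step looks something like this (a small sketch reusing the NormalSampler and Input.mUV names from the snippet above; the divide-by-length is what normalize() does for you):

float3 tangentNormal = tex2D(NormalSampler, Input.mUV).xyz; // sampled channels are in the 0..1 range
tangentNormal = tangentNormal * 2 - 1;                      // remap each channel to the -1..1 range
float len = sqrt(dot(tangentNormal, tangentNormal));        // length = sqrt(x^2 + y^2 + z^2)
tangentNormal = tangentNormal / len;                        // same result as normalize(tangentNormal)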

JTippetts explained the tex2D part and, as he said, normalize() converts a vector into a unit-length vector. Note that if a vector is already unit length, the result after normalizing should be exactly the same vector (in practice, give or take a few bits due to floating point precision issues).

In a perfect world, the normalize wouldn't be needed. However it is needed because:

  • There's no guarantee the normal map contains unit-length data. For example, if the texture is white, the vector after decoding the tex2D result will be (1, 1, 1). The length of that vector is 1.7320508, so it's invalid for our needs. After normalization it becomes (0.57735, 0.57735, 0.57735), which points in the same direction but has a length of 1.
  • If the fetch uses bilinear, trilinear or anisotropic filtering, the result will likely not be unit length. For example, fetching right in the middle between ( -0.7071, 0.7071, 0 ) and ( 0.7071, 0.7071, 0 ), which are both unit-length vectors, will result in the interpolated vector ( 0, 0.7071, 0 ), which is not unit length. After normalization it becomes ( 0, 1, 0 ), which is the correct vector.
  • 8-bit precision issues. The vector ( 0.7071, 0.7071, 0 ) translates to the colour (218, 218, 128), since 218 is the closest integer to 217.8. When converted back to floating point it comes out as roughly ( 0.70866, 0.70866, 0 ), which is slightly off. That may not sound like much, but it can create very annoying artifacts; normalization helps in this case (a rough sketch of this round trip follows below).
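Here is that 8-bit round trip written out as HLSL. The exact numbers depend on the rounding and decode convention the authoring tool uses, so treat them as approximate:

// Encode (done by the authoring tool): remap -1..1 to 0..1, then quantize to a byte.
//   0.7071 * 0.5 + 0.5 = 0.85355  ->  0.85355 * 255 = 217.8  ->  stored as 218
// Decode (done in the shader): tex2D returns byte/255, then the *2 - 1 remap is applied.
float3 decoded = float3(218, 218, 128) / 255.0 * 2.0 - 1.0; // roughly (0.71, 0.71, 0.004), slightly off
float3 n = normalize(decoded);                              // pulls the vector back to unit length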
Thanks! I think I understand that better now
float3 tangentNormal = tex2D(NormalSampler, Input.mUV).xyz;
tangentNormal = normalize(tangentNormal * 2 - 1);
tex2D() samples an RGB color value from the NormalSampler at the given texture coordinate ...
This means that if that texture were a 10px by 10px image, at some point tex2D() would return the color value at texture coordinate (0.7, 0.4) -- roughly the texel in column 7, row 4 -- and we store that RGB value in the vector because it encodes the normal direction we want
Matias Goldberg had said:
"fetching right in the middle between ( -0.7071, 0.7071, 0 ) and ( 0.7071, 0.7071, 0 ) which are both unit length vectors will result in the interpolated vector ( 0, 0.7071, 0 ); which is not unit length"
why is ( 0, 0.7071, 0 ) not unit length? Is it because this value is < 1?

"Unit length" means x * x + y * y + z * z = 1. It might help if you try to visualize this as points on a sphere of radius 1. When you do linear interpolation between 2 such points you're actually taking a straight line between them, not a curve, and hence the distance from the center to the interpolated point is no longer 1.


why is ( 0, 0.7071, 0 ) not unit length? Is it because this value is < 1?


Linearly interpolating unit-length vectors (which is what the pixel color interpolation is doing) rarely results in a unit-length vector. Consider the following image:

[Image: two unit-length normals (green and blue) drawn from the center of a circle, with a straight white line connecting their tips and a cyan vector pointing to the midpoint of that line.]

Say you have the normals Green and Blue, and you want to interpolate the normal halfway between them. Linear interpolation walks along the straight white line between the endpoints of the two vectors, so the result is the portion of the cyan vector that ends at the white line. In order to obtain a unit-length vector, you have to re-normalize it, which restores it to the full unit length it needs to be for a proper lighting calculation.
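In shader terms this is the same situation as the filtering example above; a small sketch using those same numbers:

float3 a = float3(-0.7071, 0.7071, 0); // unit length
float3 b = float3( 0.7071, 0.7071, 0); // unit length
float3 mid = lerp(a, b, 0.5);          // ( 0, 0.7071, 0 ) -- length 0.7071, not 1
float3 n = normalize(mid);             // ( 0, 1, 0 ) -- back to unit length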

Thanks again for the help and explanations. I'm going to run through the math Matias had linked to and bookmark this page.

I still don't understand why it's necessary to normalize, but that's probably because I'm missing some basic theory perhaps?

For example, I don't understand why a normal's length is relevant when calculating lighting. To me it makes sense that the position and direction of that normal are the important factors

Also, can anyone provide an example of a time when you might not want to normalize something?

I still don't understand why it's necessary to normalize, but that's probably because I'm missing some basic theory perhaps?

For example, I don't understand why a normal's length is relevant when calculating lighting. To me it makes sense that the position and direction of that normal are the important factors

A normal is a direction only.

Therefore the length isn't relevant, which is the reason why you normalize it.

If you didn't normalize it, the length of the vector would make calculations with it trickier.

The main reason I think is the dot product.

If you have vectors A and B, with length |A| and |B|, the dot product is equivalent to |A|*|B|*cos(angle).

So if both vectors have length one, the dot product is 1*1*cos(angle) = cos(angle).

This means if you want to use the dot product to find the angle between two vectors, you have to use normalized vectors.

In lighting, cos(angle) is also a perfect number for deciding how bright something is depending on the angle of the incoming light (that's the L dot N term).
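A minimal sketch of that L dot N idea in HLSL (worldNormal and lightDir are placeholder names, not from the book or this thread):

float3 N = normalize(worldNormal);   // surface normal, forced to unit length
float3 L = normalize(lightDir);      // direction toward the light, unit length
float diffuse = saturate(dot(N, L)); // equals cos(angle) because both are unit length, clamped to 0..1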

Right! OK, so... cos() is more expensive than dot(); we normalize the vectors so we can use dot() instead of the more expensive cos() operation (if I'm remembering correctly)

One of the reasons I think I'm getting so mixed up here is that I often see variables being normalized in the vertex shader before being sent to the pixel shader, yet other times they are normalized in the pixel shader... However, now that I think about it more, the only time I normalize a value in the vertex shader (or pixel shader) is when that variable is being used in a calculation (like dot)!

I think another thing that's tripping me up is that I'm not at all sure what's happening when my data is sent to the rasterizer... I know that it is linearly interpolated across the triangle, but I really don't know what that means (or perhaps more accurately, I can't picture what's happening with the data during this stage)

Thanks for all the help everyone, it has cleared some roadblocks I hit; maybe I should research what happens during the rasterization stage

This topic is closed to new replies.
