Help, understanding Tangent Space (normal mapping)


Hey guys, you're all talking about this but nothing makes sense to me atm.

When a Normal Map is created for use in Tangent space, the normal stored in the map is different from this "built in" case. The normal stored in the Tangent Space Normal Map is the offset to apply to the normal passed from the vertex shader?

Yes. The tangent space normal map normals point in the correct direction when the matching texture is facing you on your screen. If you wrap that texture around an object, the normals you pull out of the normal map in the shader are still in that local "facing you" space, so you have to orient them to the surface normal at that pixel.

So in the vertex shader, you'll need to pass the vertex normal, the normal map normal and the tangent/bitangent vectors to the pixel shader. The tangent/bitangent vectors coupled with the vertex normal are enough to orient the normal map normal [technical details skipped] to the correct vector. You can then use this in your lighting calculation.
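Roughly, the vertex shader output might look something like this (a quick HLSL sketch; the struct and semantic names are just made up):

struct VS_OUTPUT
{
    float4 position  : SV_POSITION; // clip-space position
    float2 uv        : TEXCOORD0;   // normal map texture coordinate
    float3 normal    : TEXCOORD1;   // vertex normal
    float3 tangent   : TEXCOORD2;   // tangent
    float3 bitangent : TEXCOORD3;   // bitangent
};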

One thing I was wondering actually, if the normals in the normal map always point generally 'out' of the screen, why do you need both the tangent and the bitangent? You should be able to calculate the bitangent from the vertex normal and the tangent, I mean if you always assume the tangent is 'up'...
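Something like this (my untested sketch; storing a handedness sign in the tangent's w component is an assumption on my part):

// Rebuild the bitangent instead of storing it per vertex.
float3 RebuildBitangent(float3 normal, float4 tangent)
{
    // tangent.w is assumed to carry a handedness sign (+1 or -1),
    // so mirrored UVs still get a correctly oriented bitangent.
    return cross(normal, tangent.xyz) * tangent.w;
}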

Wait, so the Vertex Shader samples the Normal Map? That can't be right.

Is this correct:
1. From the vertex shader 3 vertices (with normals) are outputted.
2. In the pixel shader, an interpolated normal (N1) from these 3 vertex normals is passed to the PS.
3. A sampled Normal (N2) from the normal map is retrieved.
4. Using (N1) and (N2) we create a transform matrix (TBN).
5. We then use the (TBN⁻¹) to transform the light vector in world space (LW) to Tangent space (LT).
6. We then use either (N1) or (N2) or a calculation of both to DOT it with (LT).
7. Use result to shade the pixel.
What is wrong here?
An optimization you can make when doing forward rendering is to calculate the TBN matrix in the vertex shader and use it to pre-transform the light vector into tangent space before sending the transformed light vector to the pixel shader. Then in the pixel shader, you read the normal from the normal map and perform the dot with the incoming light vector. Since the incoming light vector is already pre-transformed into tangent space you don't need to modify it or do anything weird with the normal from the normal map. You'll need to normalize the incoming light vector in the pixel shader, since interpolation might cause some shortening.
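In HLSL, that vertex shader could look roughly like this (a sketch only; the constant buffer layout, the names and the point light are made up for illustration):

cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj; // model -> clip space
    float4x4 world;         // model -> world space
    float3   lightPosWorld; // world-space light position (point light assumed)
};

struct VS_INPUT
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float3 tangent  : TANGENT;
    float2 uv       : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 position     : SV_POSITION;
    float2 uv           : TEXCOORD0;
    float3 lightTangent : TEXCOORD1; // light vector, pre-transformed to tangent space
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.position = mul(float4(input.position, 1.0f), worldViewProj);
    output.uv = input.uv;

    // Build an orthonormal TBN basis in world space.
    float3 N = normalize(mul(input.normal,  (float3x3)world));
    float3 T = normalize(mul(input.tangent, (float3x3)world));
    float3 B = cross(N, T); // bitangent (handedness sign omitted for simplicity)

    // World-space light vector for this vertex.
    float3 posWorld = mul(float4(input.position, 1.0f), world).xyz;
    float3 L = lightPosWorld - posWorld;

    // For an orthonormal basis the inverse is the transpose, so
    // transforming into tangent space is just a dot with each axis.
    output.lightTangent = float3(dot(L, T), dot(L, B), dot(L, N));
    return output;
}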

And no, you don't sample the normal map in the vertex shader.

Regarding your posted list of steps:

1. From the vertex shader 3 vertices (with normals) are outputted.

No. A vertex shader operates only on a single vertex at a time. Values for each vertex given to the shader are calculated and output to an intermediate assembly stage. For a triangle, once the 3 vertices of the triangle have been processed by the shader, then this intermediate stage of the driver calculates interpolated versions of these three sets of data, and hands these interpolated values to the pixel shader as the set of input data for the current pixel.

2. In the pixel shader, an interpolated normal (N1) from these 3 vertex normals is passed to the PS.

Basically, yes.

3. A sampled Normal (N2) from the normal map is retrieved.

Yes. Note that the normal stored in a tangent-space normal map is encoded, so this step will need to include steps to decode the normal. In particular, the x and y components need to be multiplied by 2 and then have 1 subtracted, to map them from the [0,1] texture range back to the [-1,1] range. The z component, or blue channel, is typically stored at or near 1, so once the x/y components are decoded the whole vector needs to be normalized to unit length. This decoded normal represents the surface normal of the fragment in tangent space.
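As a small HLSL sketch (the function name is mine):

// Decode a tangent-space normal sampled from a normal map.
// 'encoded' is the raw RGB texel value in [0,1].
float3 DecodeNormal(float3 encoded)
{
    float3 n = encoded * 2.0f - 1.0f; // remap [0,1] -> [-1,1]
    return normalize(n);              // restore unit length after decoding
}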

4. Using (N1) and (N2) we create a transform matrix (TBN).

No, the TBN matrix is constructed from the vertex normal, the tangent and the bitangent. These three vectors form a "miniature" 3D coordinate space, if you will, where the normal corresponds to the local Z axis of the space, and the tangent and bitangent correspond to the X and Y axes. These 3 vectors need to be perpendicular (orthogonal) to one another, just as the global X, Y and Z axes are perpendicular to one another. Typically, a tangent vector for each vertex is calculated in a pre-process pass (during model export at asset creation, or when the model is loaded into the game) and passed to the vertex shader as an attribute along with the normal and vertex position. The bitangent can then be calculated in the shader by taking the cross product of the normal and the tangent.
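In HLSL the construction might look like this (a sketch; the Gram-Schmidt line is just a common safety step that re-orthogonalizes the tangent in case it isn't exactly perpendicular to the normal):

// Build the TBN matrix from the normal and tangent.
float3x3 BuildTBN(float3 normal, float3 tangent)
{
    float3 N = normalize(normal);
    // Gram-Schmidt: remove any component of the tangent along the normal.
    float3 T = normalize(tangent - N * dot(N, tangent));
    float3 B = cross(N, T);   // bitangent (handedness sign omitted for simplicity)
    return float3x3(T, B, N); // rows: X = tangent, Y = bitangent, Z = normal
}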

Once the TBN matrix is calculated, it is used to transform the light vector, which is originally in world space. After the transformation, the light vector will be pointing relative to this miniature coordinate space (called tangent space). Since the decoded normal is also expressed relative to tangent space, a dot product between the light vector and the decoded normal gives the correct shading for the pixel. This shading is then applied to the diffuse color to get the final diffuse value for the fragment.
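Put together, the shading could be something like this (sketch; assumes the TBN rows are tangent, bitangent and normal as above, so multiplying by the matrix performs the transpose, i.e. the inverse for an orthonormal basis):

// Transform the world-space light vector into tangent space and shade.
float3 ShadeDiffuse(float3x3 tbn, float3 lightWorld,
                    float3 decodedNormal, float3 diffuseColor)
{
    // mul(tbn, v) dots v with each row (T, B, N), which takes v
    // from world space into tangent space for an orthonormal basis.
    float3 lightTangent = normalize(mul(tbn, lightWorld));
    float ndotl = saturate(dot(decodedNormal, lightTangent));
    return diffuseColor * ndotl; // shaded diffuse for this fragment
}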

Ok, let's see if I got it:

1. In the Vertex Shader, the vertex is skinned to bones and transformed, and along with it its Normal is also transformed from model space to world space. Since several faces can share the same Vertex, that Vertex normal is an average of the neighbouring face normals. This means that the local "miniature" 3D coordinate space (TBN) at each vertex is not perpendicular to the faces, but gradually changes as you move along the face. The Vertex Normal, which is an average of the neighbouring face normals and transformed to world space, makes up the 'N' component of the TBN matrix.

2. The tangent (T) and bitangent (B) are calculated using the Vertex 'N' component and the Vertex UV coordinate.

3. Still in the Vertex Shader, the Light vector in world space (LW) is transformed with (TBN⁻¹) to a Light vector in tangent space (LT), and then passed to the Pixel Shader to be interpolated over the face.

4. Since the TBN changes for each vertex, the Light vector (LT), interpolated before it reaches the Pixel Shader, will change direction from fragment to fragment as we move along the face?

5. In the Pixel Shader the transformed and interpolated Light vector (LT) gets DOTed with the sampled (decoded) Normal from the Normal Map.

6. The result is used to shade the pixel/fragment.
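In code, I imagine the Pixel Shader side of steps 4-6 would be something like this (my untested sketch, names made up):

Texture2D    normalMap     : register(t0);
SamplerState linearSampler : register(s0);

struct PS_INPUT
{
    float4 position     : SV_POSITION;
    float2 uv           : TEXCOORD0;
    float3 lightTangent : TEXCOORD1; // interpolated tangent-space light vector
};

float4 main(PS_INPUT input) : SV_TARGET
{
    // Step 5: sample and decode the normal, re-normalize the interpolated
    // light vector (interpolation can shorten it), then DOT them.
    float3 n = normalize(normalMap.Sample(linearSampler, input.uv).xyz * 2.0f - 1.0f);
    float3 l = normalize(input.lightTangent);
    float ndotl = saturate(dot(n, l));

    // Step 6: use the result to shade the fragment (white diffuse assumed).
    return float4(ndotl.xxx, 1.0f);
}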

Did I get it right?

