Tangent, Normal, Binormal: Per Triangle or per Vertex?

Of course they are in fact computed per triangle, but which way is best?

1. T, N and B are computed per triangle and vertices are not shared (the same vertex in a different triangle has different vectors).
2. T, N and B are computed per triangle, and each vertex's T, N and B are the average of those of the triangles it belongs to.

I think the first option is the most accurate, but it doesn't allow the use of indexed primitives.
Computer Programming is Magic! Hold the Power!
This depends on what you are looking for:
The second case is easier to implement; the first way would be more correct, because it lets you handle so-called "smoothing groups", but it also needs more memory.

DJSnow
---
this post is manually created and therefore legally valid without a signature
There are many different ways to compute the vertex TNB vectors during a preprocess. But while rendering in realtime, you will always use option 2.

Option 1 will give you horrible visual artifacts: you'll get discontinuities along the polygon edges (assuming you use the TNB for bumpmapping, Phong, EMBM, etc.). A mesh model is an approximation per se (unlike, for example, a procedural shape or object), and your per-vertex surface vectors need to follow that approximation. The only scenario where you can use truly accurate normals and tangents is with analytically defined objects (for example a torus or a sphere).

Of course, you won't want to share the TNBs along 'hard' edges, for example along the outer edges of a cube.
Yes, you'll want indexed primitives for sure and, as ALX says, you'll get ugly lighting discontinuities if you use option 1.

I've found there's a little bit of voodoo involved when it comes to sharing tangent bases. I've heard arguments for and against using orthogonal tangent bases on the GDAlgorithms list. I tried both and for my stuff the non-orthogonal approach looks best.

What I do is...
- Calculate vertex normals as usual (I use the angle-weighted average of the neighbouring tris in the smoothing group).
- Go through all triangles in the mesh and calculate the unit tangent and unit binormal vectors using the standard texture coordinates procedure (this ignores the vertex normal and leaves you with unit T, N, B that are not necessarily orthogonal to each other); a sketch of this step follows the list.
- Associated with each vertex *position* is a list of tangent bases (which is initially empty). If there only ends up being one basis in this list, then all triangles surrounding that vertex position share the basis at that position. If there's more than one, then you get vertex splitting (similar to vertex splitting across texture discontinuities and smoothing groups).
- For each smoothing group go through its triangles and for each triangle go through its vertices and for each triangle vertex...
{
- Copy the current triangle's T and B into temporary structure (call it reference).
- Take the reference T and B and make them orthogonal to the *vertex* normal N and also make them unit length (note that T is still not necessarily orthogonal to B). Call this TNB our reference basis.
- Get the tangent basis list corresponding to the position of the current vertex.
- See if any tangent bases exist in the list. If there are, iterate through the list and compare each basis with our reference basis.
- If none match, or the list is empty, then use the reference basis (and also add the reference basis to the tangent basis list for the current vertex position).
- If one matches then update the matching basis using our reference basis... listBasis.T += refBasis.T; listBasis.B += refBasis.B; (don't renormalise) and point the current vertex of the current triangle to 'listBasis'. Note that 'listBasis' can still be updated in the future (which is why a pointer is used instead of a copy).
}
- A final pass iterates over all the vertex tangent bases and makes T and B unit length.
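For concreteness, here is a minimal sketch of that per-triangle step, i.e. the standard texture-coordinate procedure mentioned in the list (the Vec3 type and function names are illustrative, not from anyone's actual code):

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot3(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalise(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot3(a, a))); }

// p0..p2: triangle positions; (u0,v0)..(u2,v2): the matching texture coordinates.
// Solves  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B.
static void computeTriangleTB(Vec3 p0, Vec3 p1, Vec3 p2,
                              float u0, float v0, float u1, float v1, float u2, float v2,
                              Vec3& T, Vec3& B)
{
    Vec3  e1  = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    float r   = 1.0f / (du1*dv2 - du2*dv1);   // degenerate UV mappings make this blow up
    T = normalise(scale(sub(scale(e1, dv2), scale(e2, dv1)), r));   // direction of increasing u
    B = normalise(scale(sub(scale(e2, du1), scale(e1, du2)), r));   // direction of increasing v
}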

All tangent bases in each vertex position list have the same normal so the matching function only compares the T,B. The matching function is...
dot(refBasis.T, listBasis.T / listBasis.T.length()) > 0.3f &&
dot(refBasis.B, listBasis.B / listBasis.B.length()) > 0.3f
...which I got out of the ATI NormalMapper source code; it basically tests that the vectors are close enough. The value of 0.3 corresponds to about 72 degrees, which is quite a large margin (it's the value ATI chose, so I just use it), and hence permits a lot of sharing of tangent bases and hence good indexed primitive performance.

Note that the tangent bases used are not orthogonal (dot(T,B) is not necessarily zero, although dot(N,T)==dot(N,B)==0). And note that there is no sharing across smoothing groups.
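In code, the matching-and-merging step for one vertex position might look roughly like this (a sketch under the assumptions above; the struct and function names are made up, and an index is used where the post uses a pointer):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length3(Vec3 a)      { return std::sqrt(dot3(a, a)); }

// T and B are running sums until the final pass; all bases in one position list share N.
struct TangentBasis { Vec3 T, B, N; };

// Same test as above: ref.T/ref.B are unit, the list entries are unnormalised sums.
static bool basesMatch(const TangentBasis& ref, const TangentBasis& acc)
{
    return dot3(ref.T, acc.T) / length3(acc.T) > 0.3f &&
           dot3(ref.B, acc.B) / length3(acc.B) > 0.3f;
}

// Returns the index of the basis the current triangle vertex should reference.
static size_t addOrMergeBasis(std::vector<TangentBasis>& basesAtPosition, const TangentBasis& ref)
{
    for (size_t i = 0; i < basesAtPosition.size(); ++i) {
        if (basesMatch(ref, basesAtPosition[i])) {
            TangentBasis& b = basesAtPosition[i];         // accumulate, don't renormalise yet
            b.T.x += ref.T.x; b.T.y += ref.T.y; b.T.z += ref.T.z;
            b.B.x += ref.B.x; b.B.y += ref.B.y; b.B.z += ref.B.z;
            return i;
        }
    }
    basesAtPosition.push_back(ref);                       // no match (or empty list): start a new basis
    return basesAtPosition.size() - 1;
}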


[edited by - soiled on March 22, 2004 2:00:35 AM]
Soiled, so you are basically computing the TNB per face (based on the local UV gradients and the geometric face normal), and then extrapolating the TNB vectors at each vertex by weighting?

That's the way I do it; in fact that is the approach recommended by an older NVIDIA paper. Just treat face TNB to vertex TNB like you would treat face to vertex normals: compute at each face, weight onto each vertex from the neighbouring faces. Looks good to me.

I never had the idea of computing the tangent and binormal directly at each vertex, relative to the vertex normal. How do you get the differential UV gradients there, i.e. how do you orient your TB basis around the normal?
Usually, you calculate a tangent and binormal based on the direction of the U and V gradients at the vertex. You can calculate this for each outgoing edge from the vertex and average. For extra credit, you can detect if there's too much of a difference between the different outgoing edges (they should all give "about" the same basis) and flag the vertex for the artists to fix up.

If you have a texture edge, you have to split the vert, of course.
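A rough sketch of that consistency test, assuming one candidate unit tangent has already been computed per outgoing edge (all names here are hypothetical):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// candidates: one unit tangent per outgoing edge of the vertex.
// Returns false (flag the vertex for the artist) if any pair disagrees by more than maxAngle.
static bool averageIfConsistent(const std::vector<Vec3>& candidates, float maxAngleRadians, Vec3& outAverage)
{
    float minCos = std::cos(maxAngleRadians);
    Vec3 sum = {0, 0, 0};
    for (size_t i = 0; i < candidates.size(); ++i) {
        for (size_t j = i + 1; j < candidates.size(); ++j)
            if (dot3(candidates[i], candidates[j]) < minCos)
                return false;                              // bases disagree too much
        sum.x += candidates[i].x; sum.y += candidates[i].y; sum.z += candidates[i].z;
    }
    float len = std::sqrt(dot3(sum, sum));
    outAverage = { sum.x / len, sum.y / len, sum.z / len };
    return true;
}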
enum Bool { True, False, FileNotFound };
Actually, one thing I forgot to mention (and I've edited my post above to reflect this) is that instead of using the matching basis directly, I add the reference T and B to it. The post above explains it in greater detail.

quote:Original post by ALX
Soiled, so you are basically computing the TNB per face (based on the local UV gradients and the geometric face normal),

Yes, T and B are calculated using local UV gradients and N is the angle-weighted average of neighbouring triangle normals.
quote:
and then extrapolating the TNB vectors at each vertex by weighting?

In my case the T and B weights are 1 because I'm simply averaging the T and B vectors. The normal is independent of the T and B vectors.
quote:
That's the way I do it; in fact that is the approach recommended by an older NVIDIA paper. Just treat face TNB to vertex TNB like you would treat face to vertex normals: compute at each face, weight onto each vertex from the neighbouring faces. Looks good to me.

My case may differ from yours in that it's possible to get multiple tangent bases belonging to a single smoothing group at a single vertex position.
quote:
I never had the idea of computing the tangent and binormal directly at each vertex, relative to the vertex normal. How do you get the differential UV gradients there, i.e. how do you orient your TB basis around the normal?

I use the differential UV gradients only for calculating the per-triangle tangent basis. The TB vectors at the *vertex* are effectively the average of the TB vectors of the neighbouring triangles, projected onto the plane of the vertex normal. I.e., for the binormal we have...
vertex.B = normalise( sum over t of
    normalise( triangle[t].B - dot(triangle[t].B, vertex.N) * vertex.N )
)
The extra caveat is that I don't average over all neighbouring triangles, only those that are in the same smoothing group *and* whose projected TB vectors are close enough to each other. This way I avoid averaging together tangent bases that shouldn't be averaged. An example of when this happens is when the artist mirrors texture coordinates for a mesh like a car that has symmetry (where the artist creates the texture for one half of the car and maps it to both halves). In this case vertices along the plane of symmetry don't necessarily have a texture discontinuity, but you need a tangent basis discontinuity since the tangents line up while the binormals point in opposite directions (or vice versa). Perhaps the NVIDIA method deals with this in a different way.
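Spelled out as a small helper, that binormal accumulation might look like this (a sketch only; the names are illustrative and the smoothing-group / "close enough" filtering is assumed to have happened already):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalise(Vec3 a)    { float l = std::sqrt(dot3(a, a)); return {a.x/l, a.y/l, a.z/l}; }

// triangleB: unit binormals of the neighbouring triangles that passed the filtering above.
// vertexN: the (unit) vertex normal.
static Vec3 vertexBinormal(const std::vector<Vec3>& triangleB, Vec3 vertexN)
{
    Vec3 sum = {0, 0, 0};
    for (Vec3 b : triangleB) {
        float d = dot3(b, vertexN);                                           // project onto the plane of N
        Vec3 p  = normalise({ b.x - d*vertexN.x, b.y - d*vertexN.y, b.z - d*vertexN.z });
        sum.x += p.x; sum.y += p.y; sum.z += p.z;
    }
    return normalise(sum);
}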


[edited by - soiled on March 22, 2004 2:35:33 AM]
What if I just compute and save the tangent vector, and in the shader I generate the binormal with the cross product of the tangent and the normal?

Or is it best to feed the shader all 3 vectors? That's gonna need more bandwidth, but less computation.

[edited by - MigPosada on March 22, 2004 12:58:34 PM]
Computer Programming is Magic! Hold the Power!
That seems to be quite a popular method, so I'd say it's a win (especially since the cross product is 2 vertex shader assembly instructions).
You also need to store the handedness (-1 or +1) of the basis (in, say, tangent.w) and calculate the cross product as...

binormal.xyz = (N x T) * tangent.w
tangent.xyzw comes from vertex data

...this is because the basis can be either right-handed or left-handed.
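A sketch of how that handedness value could be computed on the CPU when the vertex data is built (the function name is made up; the vertex shader math stays as above):

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  cross3(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot3(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns +1 or -1 to store in tangent.w: the sign tells the vertex shader whether
// (T, B, N) is right-handed or left-handed, so B can be rebuilt as (N x T) * tangent.w.
static float tangentHandedness(Vec3 N, Vec3 T, Vec3 B)
{
    return dot3(cross3(N, T), B) < 0.0f ? -1.0f : 1.0f;
}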

You could also try packing tangent into byte4 instead of float3.

In my case dot(T,B)!=0 so I can't use the cross-product trick.

[edited by - soiled on March 22, 2004 9:09:00 PM]

