# tangent space is too hard to understand


## Recommended Posts

I am learning bump mapping. Tangent space is too hard to understand. I read some articles which said we use tangent space just to avoid transforming the vertex normals to world space. I don't understand that. If the goal is only to avoid transforming normals to world space, we could transform the light from world space to object space and calculate lighting in object space. I think that would be enough. So why do we need to transform the light into tangent space? Why can't we compute bump mapping in world space or in object space? This problem has confused me for a long time.

##### Share on other sites
Because the normals stored in a tangent-space texture aren't in world or object space, so either you need to convert the light to tangent space to do lighting, or convert the normal from the texture from tangent space into whatever space the light is in.
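To make those two options concrete, here is a minimal sketch in plain C++ (no D3DX; `Vec3`, `tangentToWorld` and `worldToTangent` are names of my own invention), assuming the per-vertex tangent/bitangent/normal basis is orthonormal:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector; the struct and function names below are my own, not D3DX.
struct Vec3 { float x, y, z; };

static Vec3 operator*(float s, const Vec3& v) { return { s*v.x, s*v.y, s*v.z }; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Option 1: rotate a normal fetched from a tangent-space normal map out into
// the space the tangent (T), bitangent (B) and normal (N) are expressed in.
Vec3 tangentToWorld(const Vec3& n_ts, const Vec3& T, const Vec3& B, const Vec3& N) {
    return n_ts.x * T + n_ts.y * B + n_ts.z * N;
}

// Option 2: rotate the light the other way. For an orthonormal TBN basis the
// inverse rotation is the transpose, i.e. three dot products.
Vec3 worldToTangent(const Vec3& v, const Vec3& T, const Vec3& B, const Vec3& N) {
    return { dot(v, T), dot(v, B), dot(v, N) };
}

// Example: a surface facing +X, with tangent +Y and bitangent +Z. A "flat"
// normal-map texel (0,0,1) comes out along the surface normal (1,0,0).
```

Both rotations use the same three per-vertex vectors; which direction you pick decides whether the work happens once per light vector (option 2) or once per texel (option 1).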

##### Share on other sites
You can also do the lighting in object space, but you'll lose the ability to do deformations (like for characters), or to share UVs or use symmetry in the texture.

Y.

##### Share on other sites
Quote:
 Original post by RDragon1
 Because the normals stored in a tangent-space texture aren't in world or object space, so either you need to convert the light to tangent space to do lighting, or convert the normal from the texture from tangent space into whatever space the light is in.

I am still confused.
I am reading the D3D example "emboss".
There is a function in this example:

```cpp
D3DXVECTOR3 ComputeTangentVector( EMBOSSVERTEX pVtxA, EMBOSSVERTEX pVtxB, EMBOSSVERTEX pVtxC )
{
    D3DXVECTOR3 vAB = pVtxB.p - pVtxA.p;
    D3DXVECTOR3 vAC = pVtxC.p - pVtxA.p;
    D3DXVECTOR3 n   = pVtxA.n;

    // Components of vectors to neighboring vertices that are orthogonal to the
    // vertex normal
    D3DXVECTOR3 vProjAB = vAB - ( D3DXVec3Dot( &n, &vAB ) * n );
    D3DXVECTOR3 vProjAC = vAC - ( D3DXVec3Dot( &n, &vAC ) * n );

    // tu and tv texture coordinate differences
    FLOAT duAB = pVtxB.tu - pVtxA.tu;
    FLOAT duAC = pVtxC.tu - pVtxA.tu;
    FLOAT dvAB = pVtxB.tv - pVtxA.tv;
    FLOAT dvAC = pVtxC.tv - pVtxA.tv;

    // Flip the deltas when the UV winding is mirrored, so the tangent still
    // follows the texture's u direction
    if( duAC*dvAB > duAB*dvAC )
    {
        duAC = -duAC;
        duAB = -duAB;
    }

    D3DXVECTOR3 vTangent = duAC*vProjAB - duAB*vProjAC;
    D3DXVec3Normalize( &vTangent, &vTangent );
    return vTangent;
}
```

I think the code "D3DXVECTOR3 n = pVtxA.n" uses normals in object space.
Could you tell me a little more about why "the normals stored in a tangent-space texture aren't in world or object space"?

##### Share on other sites
Quote:
 Original post by Ysaneya
 You can also do the lighting in object space, but you'll lose the ability to do deformations (like for characters), or to share UVs or use symmetry in the texture.

 Y.

I don't understand.
If I deform the object, I can recalculate the normals and recalculate lighting in object space. Why do I need to transform the light to tangent space? Is it necessary?

I also don't understand why I would lose the ability to share UVs or use symmetry in the texture.

##### Share on other sites
TANGENT SPACE

Tangent space is like describing a 3D plane in space that the texture occupies.

Your texture is 2D, right... but when it's placed on a model it now occupies a curving 3D surface (a curving plane) on top of your geometry.

At the start you just have normals, but tangent space is more than just having normals: you really have to describe the "up", "right" AND "normal" (which points outwards of the plane you want to bump map) of the curving plane that the texture occupies.

That way, when you put the up, right and front into a 3x3 matrix and transform the light vector by it, you'll produce a light vector that points outwards of the 2D normal map WITH THE CORRECT ORIENTATION.

Can you understand that? Actually calculating tangent space is a little complex, but if you understand the basic idea first then you'll be more ready to take it in.

So, the "up" is the u direction of the texture (place it in 3D so it points in the u direction of the texture, which is sideways) as it points in 3D space.
And the "right" direction is the v direction as it points in 3D space. You can see that if you get the up, right and front of the 3D surface, you can then convert to 2D and light as if you were on the 3D surface.

Did you know you can also call tangent space "derivative" space? But that's heading into calculus. If you think of the area you're bump mapping as a 3D graph, then the derivative (the "direction" of the 3D function) will curve along the surface, with the "front" vector pointing outwards of the graph and the tangent and bitangent lying along the surface of the 3D function.

Pretty full on, eh? :)

I know of an easy way to calculate tangent space; I've written it last.

But let me tell you one thing: you don't NEED tangent space to do bump mapping. There's such a thing as "object space normal mapping", where the texels literally point in all directions in the texture so you needn't curve it over the surface. But this only works if the texture is custom made for the geometry it sits on (like a Doom 3 model). I prefer object space any day; tangent space is too confusing for me too. :)

Also, if you don't calculate tangent space and you simply light using the normal map, it's as if the normal map is sitting on a flat plane pointing in the z+ direction; tangent space simply curves it away from there.

IMPORTANT ->
One thing to watch out for: if the tangent space is too coarse a grid, the normals won't interpolate properly and you'll end up with a "dull/dirty" output, simply because the shader linearly interpolates the direction coordinates, which isn't actually correct for normals. So tangent space works best with a tighter grid with not as much angle change between the tangent space samples.
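To see why the output goes dull: linearly interpolating two unit normals gives a vector shorter than unit length, which scales every dot product down; renormalizing per pixel restores it. A tiny sketch (plain C++; the struct and function names are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

static Vec3 normalize(const Vec3& v) {
    float len = length(v);
    return { v.x/len, v.y/len, v.z/len };
}

// What the rasterizer does with per-vertex directions across a triangle.
static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t*(b.x-a.x), a.y + t*(b.y-a.y), a.z + t*(b.z-a.z) };
}

// Two unit normals 90 degrees apart (a very coarse grid): halfway between
// them the interpolated vector has length ~0.707, so a dot product against
// it loses roughly 30% brightness unless the shader renormalizes it.
```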

You can calculate tangent space in an easy way, once you know what it is you're trying to make (a 3D surface with 3 axes pointing to the derivative/direction/curve of it).

Take the normal of the triangle and use it as an axis of rotation to calculate the u coordinate (general axis rotation).
You see, all you have to do is find the u direction in 3D space, and you've got it; then you just cross to get the v.
Point a vector along the first leg of the triangle; this is the vector you're going to rotate to make the u coordinate.
Look at texture space, record the angle of difference between the first leg of the triangle and straight across, and this is how much you rotate to make the u direction in 3D space.

Of course, there is a better, more algebraic way of calculating it, and that's what you'll generally read on the net, because it's more efficient, but perhaps a little more complex to understand.
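The algebraic version solves for the 3D directions that correspond to +u and +v from one triangle's positions and UVs. A sketch under that standard derivation (plain C++, no D3DX; the struct and function names are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 operator*(float s, const Vec3& v) { return { s*v.x, s*v.y, s*v.z }; }

// The position edges are linear combinations of the (unknown) tangent T and
// bitangent B:  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B.  Solving that
// 2x2 system gives T and B directly.
void computeTangentBasis(const Vec3 p[3], const Vec2 uv[3], Vec3& T, Vec3& B) {
    Vec3 e1 = p[1] - p[0], e2 = p[2] - p[0];
    float du1 = uv[1].u - uv[0].u, dv1 = uv[1].v - uv[0].v;
    float du2 = uv[2].u - uv[0].u, dv2 = uv[2].v - uv[0].v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);   // assumes non-degenerate UVs
    T = r * (dv2 * e1 - dv1 * e2);              // 3D direction of +u
    B = r * (du1 * e2 - du2 * e1);              // 3D direction of +v
}
```

For a triangle whose UVs match its XY positions, this returns T along +X and B along +Y, as you'd expect.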

Hope this helped. If you didn't get it, just say so with a few questions and I'll see if I can clarify.

My pleasure. :)

I could go on all day about this; wait till I bring up what an anomaly is, it's really spinny. hehe (It's why you must calculate your tangent space in distinct separate parts: there have to be splits in the mesh or it doesn't work.)

[Edited by - rouncED on November 3, 2009 5:23:16 AM]

##### Share on other sites
Quote:
 Original post by pqmagic68
 if i deform object, i can recalculate the normals and recalculate lighting in object space.

Ok, so let's start by *NOT* using tangent space. Imagine a simple quad with a 64x64 normal map applied, where all the normals are stored in world space. When we rotate the 4 points, we also have to rotate the 64x64 world-space normals stored in the texture. That's 4096 normals for a single quad!

The normal vectors on the quad will be redundant - because we will use the normals from the texture. This also means we can't re-use normal maps, because a quad facing upwards (normal={0,1,0}) will have different normals to one facing down (normal={0,-1,0}).

So, we can easily calculate the lighting by taking the dot product of the world-space light vector and the vector stored in the normal map.

So far so good. Now let's deform the mesh and recalculate the normals. We would have to find a way to recalculate the normals stored in the texture. This would only work for deformations that can be described with a single matrix (e.g. skinning). For non-linear deformations (e.g. blend shapes, FFDs) we would have no way to compute those textures.

If we were to store the normals in object space, we'd be able to rotate/translate/scale the mesh, but we would still not be able to deform it without requiring us to re-calc the normal maps.

Quote:
 Original post by pqmagic68
 Why do i need transform light to tangent space. is it necessary?

The above way of doing it is highly inefficient when deforming the surface. So instead, let's generate an array of normals for each vertex. In addition, let's look at what the U and V texture directions are at that vertex (the tangents and binormals).

We now have 3 vectors per vertex, which should be roughly orthogonal. From those 3 vectors, we can now construct a rotation matrix to rotate a vector from object space into texture space.

So, for each triangle, we rotate 3 light vectors (one for each vertex), interpolate the result across the triangle, and dot product with all the texels in that triangle (the normal vectors). We don't ever need to modify the (hundreds of thousands of) normals stored in the texture, and we can re-use the texture as many times as we want. Win win....
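As a sketch of that split (plain C++; this mirrors what the vertex and pixel shaders would each do, with names of my own invention, assuming a roughly orthonormal per-vertex basis):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Vertex-shader side: rotate the object-space light vector into tangent space
// using the per-vertex tangent/binormal/normal basis (three dot products,
// i.e. multiplication by the transpose of the TBN rotation).
Vec3 lightToTangentSpace(const Vec3& L_obj, const Vec3& T, const Vec3& B, const Vec3& N) {
    return { dot(L_obj, T), dot(L_obj, B), dot(L_obj, N) };
}

// Pixel-shader side: the Lambert term is just a dot product with the texel
// fetched from the normal map -- the texture itself is never modified.
float lambert(const Vec3& L_ts, const Vec3& normalMapTexel) {
    return std::max(0.0f, dot(L_ts, normalMapTexel));
}
```

A light shining straight down the surface normal against a "flat" texel (0,0,1) gives full brightness, exactly as unbumped lighting would.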

##### Share on other sites
Thanks all of you.

Now I understand why I need tangent space.

But I've run into a new problem.
People use a height map or a normal map to bump an object.
I can use a height map to generate a normal map,
so I think a height map is equivalent to a normal map.
My new problem is: if I use a height map, do I need tangent space?

There are some bump examples in the D3D SDK,
but only "emboss" uses tangent space.
"bumpearth" uses a height map and doesn't use tangent space.
If a height map is equivalent to a normal map, why doesn't "bumpearth" use tangent space?
Is tangent space only used in "emboss" but not in other bump techniques?
Is that right?
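For reference, converting a height map into a tangent-space normal map is usually done with finite differences of neighbouring heights; a minimal sketch (plain C++; the wrap-at-edges behaviour, `strength` factor and function name are my own choices):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Build one tangent-space normal from a height map using central differences.
// 'strength' exaggerates or flattens the bumps; a flat height field gives the
// straight-out normal (0,0,1).
Vec3 heightToNormal(const std::vector<std::vector<float>>& h, int x, int y, float strength) {
    int w = (int)h[0].size(), ht = (int)h.size();
    float dhdx = h[y][(x+1)%w] - h[y][(x-1+w)%w];     // slope along u (wrap at edges)
    float dhdy = h[(y+1)%ht][x] - h[(y-1+ht)%ht][x];  // slope along v
    return normalize({ -strength * dhdx, -strength * dhdy, 1.0f });
}
```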


##### Share on other sites
Hi, I am back. :)

I am trying to think about tangent space in another way.
I feel this way is easier for me to understand,
but I don't know whether it is right or not.
I am writing a program to test my idea.
I'll write it here,
and I hope you can check my idea and correct it.

These are my steps:
1) Transform the light into object space.
2) For every triangle (???) in the object, transform it
so that its normal aligns with the z-axis of object space.
Of course, the light is also transformed simultaneously.
3) After step 2, I think the light is in texture space (tangent space???).
4) For every pixel in this polygon:
for example, a pixel uses texture coordinate (u,v).
I look up coordinate (u,v) in the normal map,
so I get the normal stored at (u,v).
5) Light the pixel using the normal from step 4.
6) Go to step 2.

I don't know whether it works or not,
but I found it familiar, just like transforming an object
from object space to world space, then to camera space, and finally lighting it.
