# tangent space is too hard to understand


## Recommended Posts

I am learning bump mapping, and tangent space is too hard to understand. I read some articles which said we use tangent space just to avoid transforming the vertex normal to world space. I don't understand that. If it is only to avoid transforming normals to world space, we could transform the light from world space into object space and calculate lighting in object space; I think that would be enough. So why do we need to transform the light into tangent space? Why not compute bump mapping in world space or in object space? This problem has confused me for a long time.

##### Share on other sites
Because the normals stored in a tangent-space texture aren't in world or object space, so either you need to convert the light to tangent space to do lighting, or convert the normal from the texture from tangent space into whatever space the light is in.

##### Share on other sites
You can also do the lighting in object space, but you'll lose the ability to do deformations (like for characters), or to share UVs or use symmetry in the texture.

Y.

##### Share on other sites
Quote:
 Original post by RDragon1
Because the normals stored in a tangent-space texture aren't in world or object space, so either you need to convert the light to tangent space to do lighting, or convert the normal from the texture from tangent space into whatever space the light is in.

I am still confused.
I am reading the D3D example "emboss".
There is a function in this example:

    D3DXVECTOR3 ComputeTangentVector( EMBOSSVERTEX pVtxA, EMBOSSVERTEX pVtxB, EMBOSSVERTEX pVtxC )
    {
        D3DXVECTOR3 vAB = pVtxB.p - pVtxA.p;
        D3DXVECTOR3 vAC = pVtxC.p - pVtxA.p;
        D3DXVECTOR3 n   = pVtxA.n;

        // Components of vectors to neighboring vertices that are orthogonal to the
        // vertex normal
        D3DXVECTOR3 vProjAB = vAB - ( D3DXVec3Dot( &n, &vAB ) * n );
        D3DXVECTOR3 vProjAC = vAC - ( D3DXVec3Dot( &n, &vAC ) * n );

        // tu and tv texture coordinate differences
        FLOAT duAB = pVtxB.tu - pVtxA.tu;
        FLOAT duAC = pVtxC.tu - pVtxA.tu;
        FLOAT dvAB = pVtxB.tv - pVtxA.tv;
        FLOAT dvAC = pVtxC.tv - pVtxA.tv;

        if( duAC*dvAB > duAB*dvAC )
        {
            duAC = -duAC;
            duAB = -duAB;
        }

        D3DXVECTOR3 vTangent = duAC*vProjAB - duAB*vProjAC;
        D3DXVec3Normalize( &vTangent, &vTangent );
        return vTangent;
    }

I think the code "D3DXVECTOR3 n = pVtxA.n" uses normals in object space.
Could you tell me a little more about why "the normals stored in a tangent-space texture aren't in world or object space"?

##### Share on other sites
Quote:
 Original post by Ysaneya
You can also do the lighting in object space, but you'll lose the ability to do deformations (like for characters), or to share UVs or use symmetry in the texture.

I don't understand.
If I deform an object, I can recalculate the normals and recalculate lighting in object space. Why do I need to transform the light to tangent space? Is it necessary?

I also don't understand what you mean by losing the ability to share UVs or use symmetry in the texture.

##### Share on other sites
TANGENT SPACE

Tangent space is a way of describing the 3D plane in space that the texture occupies.

Your texture is 2D, right? But when it's placed on a model, it now occupies a curving 3D surface (a curving plane) on top of your geometry.

At the start you just have normals, but tangent space is more than just having normals: you really have to describe the "up", "right" AND "normal" (which points outwards of the plane you want to bump map) of the curving plane that the texture occupies.

That way, when you put the up, right and front into a 3x3 matrix and transform the light vector by it, you'll produce a light vector that points outwards of the 2D normal map WITH THE CORRECT ORIENTATION.

Can you understand that? Actually calculating tangent space is a little complex, but if you understand the basic idea first, you'll be more ready to take it in.

So, the "up" is the u direction of the texture as it points in 3D space (place it in 3D so it points in the u direction of the texture, which is sideways).
And the "right" direction is the v direction as it points in 3D space. You can see that if you get the up, right and front of the 3D surface, you can then convert to 2D and light as if you were on the 3D surface.

Did you know you can also call tangent space "derivative" space? But that's heading into calculus. If you think of the area you're bump mapping as a 3D graph, then the derivative (the "direction" of the 3D function) will curve along the surface, with the "front" vector pointing outwards of the graph and the tangent and bitangent lying along the surface of the 3D function.

Pretty full on, eh? :)

I know of an easy way to calculate tangent space; I've written it last.

But let me tell you one thing: you don't NEED tangent space to do bump mapping. There's such a thing as "object space normal mapping", where the texture normals literally point in all directions, so you needn't curve them over the surface. But this only works if the texture is custom made for the geometry it sits on (like a Doom 3 model). I prefer object space any day; tangent space is too confusing for me too. :)

Also, if you don't calculate tangent space and you simply light using the normal map, it's as if the normal map is sitting on a flat plane pointing in the z+ direction; tangent space simply curves it away from there.
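For reference, the normals in that flat z+ normal map are stored as colours. A minimal sketch of decoding one texel, using a plain struct rather than D3DX types (the helper name is my own, not from any SDK):

```cpp
struct Vec3 { float x, y, z; };

// Unpack an 8-bit-per-channel normal map texel into a vector.
// Each channel stores [-1,1] remapped to [0,255], so the "flat"
// colour (128,128,255) decodes to roughly (0,0,1): pointing straight
// out of the texture plane, exactly the z+ default described above.
Vec3 DecodeNormal(unsigned char r, unsigned char g, unsigned char b)
{
    return Vec3{ r / 255.0f * 2.0f - 1.0f,
                 g / 255.0f * 2.0f - 1.0f,
                 b / 255.0f * 2.0f - 1.0f };
}
```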

IMPORTANT ->
One thing to watch out for: if the tangent space is too coarse a grid, the normals won't interpolate properly and you'll end up with a "dull/dirty" output, simply because the shader linearly interpolates the direction coordinates, which isn't actually correct for normals. So tangent space works best with a tighter grid, without as much angle change between the tangent space samples.

You can calculate tangent space in an easy way, once you know what it is you're trying to make (a 3D surface with 3 axes pointing to the derivative/direction/curve of it).

Take the normal of the triangle and use it as an axis of rotation to calculate the u coordinate (a general axis rotation).
You see, all you have to do is find the u direction in 3D space, and you've got it; you just cross to get the v.
Point a vector along the first leg of the triangle; this is the vector you're going to rotate to make the u coordinate.
Look at texture space, record the angle of difference between the first leg of the triangle and straight across, and this is how much you rotate to make the u direction in 3D space.

Of course, there is a better, more algebraic way of calculating it, and that's what you'll generally read on the net, because it's more efficient, but perhaps a little more complex to understand.

Hope this helped. If you didn't get it, just say so with a few questions and I'll see if I can clarify.

My pleasure. :)

I could go on all day about this; wait till I bring up what an anomaly is, it's really spinny. hehe (It's why you must calculate your tangent space in distinct separate parts; there have to be splits in the mesh or it doesn't work.)

[Edited by - rouncED on November 3, 2009 5:23:16 AM]

##### Share on other sites
Quote:
 Original post by pqmagic68
If I deform an object, I can recalculate the normals and recalculate lighting in object space.

Ok, so let's start by *NOT* using tangent space. Imagine a simple quad with a 64x64 normal map applied, where all the normals are stored in world space. When we rotate the 4 points, we will also have to rotate the 64x64 world-space normals stored in the texture. That's 4096 normals for a single quad!

The normal vectors on the quad will be redundant - because we will use the normals from the texture. This also means we can't re-use normal maps, because a quad facing upwards (normal={0,1,0}) will have different normals to one facing down (normal={0,-1,0}).

So, we can easily calculate the lighting by doing a dot product of the world space light vector, and the vector stored in the normal map.

So far so good. So, let's deform the mesh and recalculate the normals. We would have to find a way to recalculate the normals stored in the texture. This would only work for deformations that can be described with a single matrix (e.g. skinning). For non-linear deformations (e.g. blend shapes, FFDs) we would have no way to compute those textures.

If we were to store the normals in object space, we'd be able to rotate/translate/scale the mesh, but we would still not be able to deform it without requiring us to re-calc the normal maps.

Quote:
 Original post by pqmagic68
Why do I need to transform the light to tangent space? Is it necessary?

The above way of doing it is highly inefficient when deforming the surface. So instead, let's generate an array of normals, one per vertex. In addition, let's look at what the U and V texture directions are at each vertex (the tangents and binormals).

We now have 3 vectors per vertex, which should be roughly orthogonal. From those 3 vectors, we can construct a rotation matrix to rotate a vector from object space into texture space.

So, for each triangle, we rotate 3 light vectors (one for each vertex), interpolate the result across the triangle, and dot-product it with all the texels in that triangle (the normal vectors). We don't ever need to modify the (hundreds of thousands of) normals stored in the texture, and we can re-use the texture as many times as we want. Win win...
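The per-vertex scheme described above can be sketched like this, with plain structs instead of engine types (the function and member names are my own illustration):

```cpp
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

struct Vertex { Vec3 normal, tangent, binormal; };

// "Vertex stage" of the scheme: project the object-space light
// direction onto the vertex's tangent, binormal and normal axes,
// i.e. rotate it into tangent space. The rasterizer would then
// interpolate this result across the triangle.
Vec3 LightToTangentSpace(const Vertex& v, const Vec3& lightObjSpace)
{
    return Vec3{ Dot(v.tangent,  lightObjSpace),
                 Dot(v.binormal, lightObjSpace),
                 Dot(v.normal,   lightObjSpace) };
}

// "Pixel stage": Lambert term against the sampled tangent-space
// normal-map texel, clamped at zero for back-facing light.
float Lambert(const Vec3& interpolatedLight, const Vec3& texelNormal)
{
    float d = Dot(interpolatedLight, texelNormal);
    return d > 0.0f ? d : 0.0f;
}
```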

##### Share on other sites
Thanks all of you.

Now I understand why I need tangent space.

But I have run into a new problem.
People use a height map or a normal map to bump an object.
I can use a height map to generate a normal map,
so I think a height map is equivalent to a normal map.
My new problem is: if I use a height map, do I need tangent space?

There are some bump examples in the D3D SDK,
but only "emboss" uses tangent space.
"bumpearth" uses a height map and doesn't use tangent space.
If a height map is equivalent to a normal map, why does "bumpearth" not use tangent space?
Is tangent space only used in "emboss" but not in other bump techniques?
Is that right?
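On the height map point: a height map stores one scalar per texel, and a normal map can be derived from it by taking slopes, so they carry similar information in different forms. A minimal sketch of that conversion (the clamped forward differences and the `scale` parameter are my own illustrative choices, not from any SDK sample):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Derive a tangent-space normal from a height field: the differences
// in u and v give the surface gradient, and the normal tilts away
// from z+ against that gradient. 'scale' exaggerates or flattens bumps.
Vec3 NormalFromHeights(const std::vector<std::vector<float>>& h,
                       int x, int y, float scale)
{
    int w  = (int)h[0].size();
    int ht = (int)h.size();
    // Forward differences, clamped at the border.
    float du = h[y][x + 1 < w ? x + 1 : x] - h[y][x];
    float dv = h[y + 1 < ht ? y + 1 : y][x] - h[y][x];
    Vec3 n{ -du * scale, -dv * scale, 1.0f };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return Vec3{ n.x/len, n.y/len, n.z/len };
}
```

Once you have per-texel normals, however you obtained them, the tangent-space question is the same as for a painted normal map.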


##### Share on other sites
Hi, I am back. :)

I am trying to think about tangent space in another way.
I feel this is easier for me to understand,
but I don't know whether it is right or not.
I am writing a program to test my idea.
I'll write my idea here,
and I hope you can check it and correct it.

These are my steps:
1) Transform the light into object space.
2) For every triangle (???) in the object, transform it
so that its normal aligns with the z-axis of object space.
Of course, the light is also transformed simultaneously.
3) After step 2, I think the light is in texture space (tangent space???).
4) For every pixel in this polygon:
for example, a pixel uses texture coordinate (u,v).
I look up coordinate (u,v) in the normal map,
so I get the normal saved at (u,v).
5) Light the pixel using the normal obtained in step 4.
6) Go to step 2.

I don't know whether it works or not,
but I find it familiar, just like transforming an object
from object space to world space, then to camera space, and finally lighting it.

##### Share on other sites
Quote:
 Original post by pqmagic68
(the step-by-step approach posted above)

One thing I have done to get a better (visual!) grip on tangent space (because back then I had a logical bug in my shader that I found hard to track down) was to display normals (or the normalized light direction), in whatever space, as RGB colors in the pixel shader. Of course, because of clamping, you might also want to have a look at -normal...

Alex

##### Share on other sites
Look, don't think I can't code it.

OWL BOY, why don't you explain it better?

[Edited by - rouncED on November 5, 2009 8:03:45 AM]

##### Share on other sites
I've also been quite confused by tangent space previously, but the concepts are starting to become much clearer, mostly from the responses to this thread. One thing that I'm still not clear on: in order to do correct tangent-space normal mapping for an object, what needs to be stored with each vertex?

Correct me if I'm wrong, but for normal mapping alone (ignoring diffuse, specular, etc.), what you need per vertex is obviously an object-space position vector, a pair of UV texture coordinates for the normal map, a tangent vector and a bitangent vector (the derivatives in the u and v directions respectively). I've heard mention of using a pre-computed 3x3 matrix per vertex, but how would that work if you don't get the normal component until you extract it from the normal map (e.g. in a fragment or pixel shader)?

##### Share on other sites
Look, it's easy; I don't know why I go and be a friendly person and try to explain things from the beginning.

Just get a u vector in 3D space, and a v vector in 3D space, plus a normal, and I don't see why you can't do it.

##### Share on other sites
Quote:
 Original post by RobMaddison
(the question above about what needs to be stored with each vertex)

The pre-computed 3x3 matrix uses the tangent, the bitangent and the vertex normal.

The normal you get from the normal map is used for NdotL, while the vertex normal, tangent and bitangent vectors for that vertex are used to create a matrix (usually called TBN, or tangentToWorld) that can transform the light direction vector from tangent space to world space. For the reverse effect, if you want to transform the light direction vector from world space to tangent space (the usual method), you just transpose the tangentToWorld matrix and you get a worldToTangent matrix.
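That transpose trick can be sketched with plain structs (the type and function names here are my own, not from any particular engine). It works because for an orthonormal T/B/N basis the matrix is a rotation, and a rotation's inverse is its transpose:

```cpp
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // m[row][col]

// Build tangentToWorld with T, B, N as columns: a tangent-space
// vector (x,y,z) maps to x*T + y*B + z*N in world space.
Mat3 TangentToWorld(const Vec3& t, const Vec3& b, const Vec3& n)
{
    return Mat3{{{ t.x, b.x, n.x },
                 { t.y, b.y, n.y },
                 { t.z, b.z, n.z }}};
}

// For an orthonormal basis the inverse equals the transpose,
// so this yields the worldToTangent matrix.
Mat3 Transpose(const Mat3& a)
{
    Mat3 r;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];
    return r;
}

Vec3 Mul(const Mat3& a, const Vec3& v)
{
    return Vec3{ a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
                 a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
                 a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}
```

Transforming a vector out with `TangentToWorld` and back in with its transpose round-trips exactly, which is why the one matrix serves both directions.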

##### Share on other sites
I think other people in this thread have covered tangent space well. I just wanted to add that it isn't too hard to understand, you can do it!

In my opinion, one of the most important things to remember is that tangent space is just like any other subspace. It is just a different set of basis vectors; it is not some kind of magic. It is important that you do not think of "tangent space", "world space" and "object space" as separate concepts, because they are not at all.

It is not easy, but you must not think that it is "too hard". It requires learning and dedication to understand Linear Algebra concepts, but it is very rewarding and will greatly advance your programming. A good place to start is: http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/CourseHome/index.htm

##### Share on other sites
Tangent space is just another basis in a vector space, with respect to which you are working with vectors. Actually, it is called a "space" because the basis generates a whole vector space, I guess. Although vector spaces are presented in linear algebra courses, they are really founded on set theory :D.

##### Share on other sites
Quote:
 Original post by rouncED
Look, it's easy; I don't know why I go and be a friendly person and try to explain things from the beginning.
Just get a u vector in 3D space, and a v vector in 3D space, plus a normal, and I don't see why you can't do it.

Thank you, rouncED.
You are a very friendly person.

I am not a professional game developer; programming games is my hobby.
I am not clever. :(
But I try my best to do better.
I try one thing in several different ways, just to make myself understand it completely.
Maybe some of them are wrong, but it makes me think about what the problem is and how to solve it.
So I try to solve this problem based on my own understanding.
Of course I will try your method later. :D

Finally, thanks to all of you for helping me.

##### Share on other sites
I too am struggling to learn the concept of tangent space, but more so how to implement it.

I read and am using the code provided here: http://www.terathon.com/code/tangent.html

My only question is: what normal do we use for the Gram-Schmidt orthogonalization? Do we use the vertex normal, which is in object space?

The whole idea of making sure you are in the right space is confusing me.

Say I do use the vertex normal in object space to compute the tangent and bitangent (the bitangent is just cross( normal, tangent ) * handedness).

So do I multiply the vertex normal in object space by the TBN matrix so I have a normal in tangent space to do the lighting calculations? (I would also multiply the light direction by the TBN matrix so the light direction is in tangent space.)

Or do I just use the normal sampled from the normal map (the normals in the normal map are in tangent space, I believe) with no transformation, and only multiply the light direction by the TBN matrix for the NdotL term? (I hope this is what I do, because it makes the most sense to me.)

I feel I am really close to having these space concepts click. : /
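For what it's worth, the Gram-Schmidt step being asked about can be sketched as follows: it subtracts out the component of the per-vertex tangent that lies along the object-space vertex normal, so the two end up perpendicular (the helper names here are my own, not from the Terathon page):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}
static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return Vec3{ a.y*b.z - a.z*b.y,
                 a.z*b.x - a.x*b.z,
                 a.x*b.y - a.y*b.x };
}
static Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

// Gram-Schmidt: remove the part of the accumulated tangent that lies
// along the vertex normal, leaving a tangent truly perpendicular to it.
Vec3 OrthogonalizeTangent(const Vec3& n, const Vec3& t)
{
    float d = Dot(n, t);
    return Normalize(Vec3{ t.x - n.x*d, t.y - n.y*d, t.z - n.z*d });
}

// Bitangent from the orthogonalized pair; 'handedness' is +1 or -1
// depending on whether the UV mapping is mirrored.
Vec3 Bitangent(const Vec3& n, const Vec3& t, float handedness)
{
    Vec3 c = Cross(n, t);
    return Vec3{ c.x*handedness, c.y*handedness, c.z*handedness };
}
```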

[Edited by - link3978 on November 18, 2009 11:27:38 AM]

##### Share on other sites
First of all, this may be a slightly poor description. I have yet to find a really good way of describing this (which may mean I don't know it as well as I should).

When you sample the normal map, you are sampling a vector which is in tangent space. In order to use this vector for lighting (obviously there are other uses, but let's say lighting) you must either transform the light vector from world/object space into tangent space, or transform the normal into world/object space. Once your two vectors are in the same space, you may do dot(N, L) and get your Lambertian term.

So, when you are constructing your tangent-space matrix, you use the object-space normal. What that says is "hey, now this direction is up". So now it should start to make a little more sense: if your normal map is that nice periwinkle color, it is an up vector, but "up" isn't world-space up, or even object-space up. The desired "up" for the normal map is actually the normal of the surface. Filling in the basis vectors is like saying, "this direction is right, this direction is up, and this direction is forward."
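A tiny sketch of that "filling in the basis vectors" idea (minimal types of my own): transforming the flat tangent-space normal (0,0,1) through the basis returns exactly the surface normal, which is the "up" described above.

```cpp
struct Vec3 { float x, y, z; };

// Columns of the tangent-space basis: "right" (tangent), "up"
// (bitangent), "forward" (surface normal). Transforming a
// tangent-space vector takes a linear combination of these columns.
Vec3 TangentToObject(const Vec3& t, const Vec3& b, const Vec3& n, const Vec3& v)
{
    return Vec3{ t.x*v.x + b.x*v.y + n.x*v.z,
                 t.y*v.x + b.y*v.y + n.y*v.z,
                 t.z*v.x + b.z*v.y + n.z*v.z };
}
```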

I hope that helps more than it confuses.