
Tangent space basis creation


Hi all, I am taking my first steps in per-pixel lighting at the moment. I am using DirectX 8 / VS / PS for that. I read an interesting paper on MSDN which pretty much explained everything I need to get started with per-pixel lighting techniques.

Per-pixel lighting requires a tangent space basis for each triangle. As far as I understand it, that is a vector-space basis where the third axis (w) is the vertex normal. My problem is getting the other two vectors, namely the tangents. If just one triangle were connected to a vertex, that would be pretty obvious: I would just take u = p1 - p0 and v = u cross w. However, there are usually multiple triangles connected to a vertex, and that is my problem. How do I calculate the tangent space basis in that case?

Maybe someone could help me with that? Thanks in advance,
Alex

A brute-force approach is of course to calculate a tangent vector for EACH triangle attached to a given vertex and then average them.
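A sketch of that in C++ (the vertex layout and helper names here are illustrative assumptions, not from any particular engine). Note that for normal mapping you generally want the tangent to follow the texture u direction, so the per-face tangent is derived from the UV deltas rather than a raw edge like p1 - p0:

// Brute-force tangent averaging (a sketch, not MeshMender's code).
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3   { float x, y, z; };
struct Vertex { Vec3 pos; float u, v; Vec3 tangent; };

static Vec3 Add(Vec3 a, Vec3 b)  { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3 Sub(Vec3 a, Vec3 b)  { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 Mul(Vec3 a, float s) { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }

static Vec3 Normalize(Vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return len > 1e-6f ? Mul(a, 1.0f / len) : a;
}

// For each triangle, solve for the object-space direction in which the
// texture u coordinate grows, then average that direction over every
// triangle sharing a vertex.
void ComputeTangents(std::vector<Vertex>& verts,
                     const std::vector<unsigned>& indices)
{
    for (std::size_t i = 0; i < verts.size(); ++i)
    {
        Vec3 zero = { 0, 0, 0 };
        verts[i].tangent = zero;
    }

    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        Vertex& v0 = verts[indices[i + 0]];
        Vertex& v1 = verts[indices[i + 1]];
        Vertex& v2 = verts[indices[i + 2]];

        Vec3  e1  = Sub(v1.pos, v0.pos);
        Vec3  e2  = Sub(v2.pos, v0.pos);
        float du1 = v1.u - v0.u, dv1 = v1.v - v0.v;
        float du2 = v2.u - v0.u, dv2 = v2.v - v0.v;

        float det = du1 * dv2 - du2 * dv1;
        if (std::fabs(det) < 1e-6f)
            continue; // degenerate UV mapping, skip this face

        // Face tangent: direction of increasing u in object space.
        Vec3 t = Mul(Sub(Mul(e1, dv2), Mul(e2, dv1)), 1.0f / det);

        v0.tangent = Add(v0.tangent, t);
        v1.tangent = Add(v1.tangent, t);
        v2.tangent = Add(v2.tangent, t);
    }

    for (std::size_t i = 0; i < verts.size(); ++i)
        verts[i].tangent = Normalize(verts[i].tangent);
}

The binormal then comes from cross(normal, tangent) per vertex. Be aware that this naive averaging breaks down at mirrored UV seams, which is exactly what tools like MeshMender (mentioned below) try to handle.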

Guest Anonymous Poster
Here's a topic on the subject from opengl.org:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=011349
It also has sample code.

Okay, I got the basic idea. However, for the moment I am using that funny NVidia class MeshMender until I get my own version right; that simply eliminates one source of bugs until things are working. I found it just an hour ago and it looks pretty useful. It does normal generation for you as well.
However, I now get strange results from my shaders. I am using Cg.

To keep things short I'll only post the parts that matter for lighting...

The vertex shader:

VS_OUT main(VS_IN IN)
{
    VS_OUT OUT;

    // Transform normal & tangent from object to world space
    float3 N = mul(TRANSFORM_WORLD_3x3, IN.normal);
    float3 T = mul(TRANSFORM_WORLD_3x3, IN.tangent);

    // Generate the binormal
    float3 B = cross(IN.normal, IN.tangent);

    // Store the normal, tangent and binormal
    OUT.normal   = N;
    OUT.tangent  = T;
    OUT.binormal = B;

    // Generate a 3x3 transformation matrix for tangent space
    float3x3 TRANSFORM_TANGENT_3x3;
    TRANSFORM_TANGENT_3x3[0] = N;
    TRANSFORM_TANGENT_3x3[1] = T;
    TRANSFORM_TANGENT_3x3[2] = B;

    float3 L = normalize(LIGHT0.pos.xyz - world_pos.xyz);

    // Transform the light vector to tangent space
    L = mul(TRANSFORM_TANGENT_3x3, L);

    // Store half angle light vector (this just looks wrong?)
    OUT.light0.xyz = L * 0.5 + 0.5;

    // Done
    return OUT;
}


And the pixel shader:

PS_OUT main(PS_IN IN)
{
    PS_OUT OUT;

    // For simplicity, no normal map at the moment
    float3 normal = float3(0, 1, 0);

    float light0 = saturate(dot(normal, IN.light0));
    OUT.color.rgb = DIFFUSE_COLOR * light0;

    return OUT;
}


And it looks plain wrong. Any ideas?

Thanks,
Alex

I think the problem is "OUT.light0.xyz" & "IN.light0.xyz".

Is this a texture coordinate or a diffuse/specular color iterator?

If it's a color, you need the * 0.5 + 0.5 to compress the [-1,1] range into [0,1], because colors can't be negative.

If you do that, then you need to expand it again with * 2.0 - 1.0 in the pixel shader, from [0,1] back to [-1,1].

If the light vector is a texture coordinate, those can be negative without a problem, so just take out the * 0.5 + 0.5 from the vertex shader, don't add the * 2 - 1 to the pixel shader, and it should work.
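In Cg terms, the color path might look like this (a sketch, reusing the L and normal names from the shaders above):

// Vertex shader: compress L from [-1,1] to [0,1] before writing it
// to a color interpolator, because colors are clamped to [0,1].
OUT.light0.xyz = L * 0.5 + 0.5;

// Pixel shader: expand back to [-1,1] before lighting.
float3 L = IN.light0.xyz * 2.0 - 1.0;
float  diffuse = saturate(dot(normal, L));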

Glad you are getting use out of the MeshMender...

Thanks for your reply. MeshMender is just like a tool should be: small, non-intrusive and easy to use. Great thing, really.

Yeah, from what I understood, that whole * 0.5 + 0.5 story looked somewhat wrong.
For the moment I'll be perfectly happy with per-pixel diffuse lighting, as that's a good basis to start from.

So what I want to do is simply pass the tangent-space normal along with the light vector to the pixel shader and do my dot3 there. Sounds simple enough, but I am having heavy difficulties getting it to work.

I made a screenshot, which can be found at:
http://www.dajudge.com/shot1.jpg


It's a sphere with normals, lit by a point light source to its left. You can see that the leftmost part of the sphere is already lit correctly, but that's only the case for this specific rotation...


My current shader code looks like this:

Vertex shader:

/////////////////////////////////////////////////////
// >> OUTPUT SEMANTICS <<
/////////////////////////////////////////////////////
struct VS_OUT
{
    float4 pos      : POSITION;
    float3 tex0     : COLOR0;
    float3 normal   : TEXCOORD1;
    float3 tangent  : TEXCOORD2;
    float3 binormal : TEXCOORD3;
    float3 light0   : TEXCOORD0;
};


/////////////////////////////////////////////////////
// >> CODE SNIPPET <<
/////////////////////////////////////////////////////

///////////////////////////////////////////////
// LIGHTING PREPARATION
///////////////////////////////////////////////
// Transform normal & tangent from object to world space
float3 N = mul(TRANSFORM_WORLD_3x3, IN.normal);
float3 T = mul(TRANSFORM_WORLD_3x3, IN.tangent);

// Generate the binormal
float3 B = cross(IN.normal, IN.tangent);

// Generate a 3x3 transformation matrix for tangent space
float3x3 TRANSFORM_TANGENT_3x3;
TRANSFORM_TANGENT_3x3[0] = T;
TRANSFORM_TANGENT_3x3[1] = B;
TRANSFORM_TANGENT_3x3[2] = N;

///////////////////////////////////////////////
// PER LIGHT CALCULATIONS
///////////////////////////////////////////////
// Light 0
float3 L = normalize(LIGHT0.pos.xyz - world_pos.xyz);

// Transform the light vector to tangent space
L = mul(TRANSFORM_TANGENT_3x3, L);

// Store the tangent-space light vector and normal
OUT.light0.xyz = L;
OUT.normal = mul(TRANSFORM_TANGENT_3x3, N);


and the pixel shader:


PS_OUT main(PS_IN IN)
{
    PS_OUT OUT;

    float light0 = dot(IN.normal.xyz, IN.light0);
    OUT.color.rgb = CAR_COLOR.rgb * light0;
    OUT.color.a = 1;

    return OUT;
}


Thanks in advance!

Ah yeah, does anyone know how to get the Cg compiler to produce DX8-conformant pixel shader asm? Most of the time it messes up some swizzling (that is: it's not wrong, DX just doesn't assemble it, saying "invalid swizzle"), and I have to modify the asm code by hand (which kind of defeats the whole point of using Cg in the first place).


The tangent basis is designed to move vectors from object space into tangent space. You are using it to move from object space to world space.

Instead, take your light vector, calculated in world space, move it through the tangent space matrix, and pass that to the pixel shader.
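In Cg, that flow might look like this (a sketch, reusing the names from the snippets above; note the binormal here is built from the world-space N and T, unlike the posted code, which crossed the object-space inputs):

// Build the basis in world space so the matrix maps world -> tangent.
float3 N = normalize(mul(TRANSFORM_WORLD_3x3, IN.normal));
float3 T = normalize(mul(TRANSFORM_WORLD_3x3, IN.tangent));
float3 B = cross(N, T);

float3x3 TBN;   // rows are tangent, binormal, normal
TBN[0] = T;
TBN[1] = B;
TBN[2] = N;

// World-space light vector, then into tangent space for the pixel shader.
float3 L = normalize(LIGHT0.pos.xyz - world_pos.xyz);
OUT.light0.xyz = mul(TBN, L);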

Okay, let's make this somewhat simpler. To begin with, I'd be happy if I could move the final dot product of the lighting calculation from the vertex shader to the pixel shader (I know it doesn't make much sense, but let's take small steps for starters...).

So for testing I wrote a vertex shader that outputs the per-vertex dot product of the light vector and the normal, plus (for doing the same thing in the pixel shader) the normal and the light vector.
All lighting calculations are done in world space (which should be no problem at all, since we're only doing diffuse shading without texture lookups here).

vertex shader:

///////////////////////////////////////////////
// Output semantics
///////////////////////////////////////////////

struct VS_OUT
{
    float4 pos    : POSITION;
    float3 color  : COLOR0;
    float3 normal : TEXCOORD1;
    float3 light0 : TEXCOORD0;
};
///////////////////////////////////////////////
// Code snippet:
///////////////////////////////////////////////

///////////////////////////////////////////////
// PER LIGHT CALCULATIONS
///////////////////////////////////////////////
// Light 0 (LIGHT0.pos.xyz is already world space)
float3 L = normalize(LIGHT0.pos.xyz - world_pos.xyz);

// Store L,N (both world space) and sat(L dot N)
OUT.light0 = L;
OUT.normal = N;
OUT.color = max(0, dot(L,N));


and now the pixel shader (this time in asm), in two versions: one simply sends the vertex-shader-calculated color to the screen, the other does the dot product on its own:


//////// VERSION ONE: ///////////////
// Calc sat(N dot L) and output that
ps.1.1
def c4, 0.000000, 0.000000, 0.000000, 0.000000

texcoord t0 // light0
texcoord t1 // normal
mov r1, c4 // set r1 to 0
dp3_sat r1.xyz, t0, t1 // r1.xyz = max(0,dot(n,l))
mov r0, r1 // output


//////// VERSION TWO: ///////////////
// Simply output color from VS
ps.1.1
mov r0, v0 // output


However, switching between the two pixel shaders gives different results: the vertex-shader dot product produces the correct result, while the one calculated in the pixel shader does not. Can someone please explain that to me?

To me it looks like the texture coordinates are clamped after leaving the vertex shader. But why would that happen? Can't texture coordinates be negative?

Thanks in advance,
Alex



When using texcoord, the value is clamped to [0,1].

In your vertex shader, use a mad (* 0.5 + 0.5) to convert to the [0,1] range.
In your pixel shader, use _bx2 to convert back to the [-1,1] range.
If it makes you feel better, bitterly swear under your breath at this design decision.
Again, if it helps, keep complaining when you realize ps.1.1 constant registers have the same boneheaded limitation, where it makes even less sense.

It will be a glorious day when everyone has ps.2.0 capable hardware and we can forget ps.1.1 exists.
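Applied to the Version One shader above, this might look like the following (a sketch; _bx2 is a source modifier that expands a [0,1] value back to [-1,1]):

// Vertex shader writes both vectors compressed to [0,1]:
//   OUT.light0.xyz = L * 0.5 + 0.5;
//   OUT.normal     = N * 0.5 + 0.5;

ps.1.1
def c4, 0.0, 0.0, 0.0, 1.0

texcoord t0                    // light0, compressed to [0,1]
texcoord t1                    // normal, compressed to [0,1]
dp3_sat r0.rgb, t0_bx2, t1_bx2 // expand both to [-1,1], then N dot L
+ mov r0.a, c4.a               // co-issued alpha: opaque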

Yeah, ps.1.x is not exactly what one would really call "programmable". Too many limitations.

Your tip with the [0,1] range worked perfectly! I'm pretty close to per-pixel diffuse + specular lighting... Thanks!

Now the only question I have left is: how do I get Cg to compile working DX8 pixel shaders? I have the latest version, which was recently released on the NVidia site, but most of the time it uses swizzles the DirectX runtime doesn't like. Here's an example:


...
mul r0.rgb, c3, t0
+ mov r0.a, c4.b


D3DXAssembleShader() fails with the error message "Invalid swizzle..." at this line. What concerns me is that the Cg compiler is meant for exactly this purpose, and yet it generates code the DirectX runtime doesn't accept. Strange...

Any ideas? Thanks in advance,

Alex

Okay, after messing around with this for almost two days I decided to check some of the assumptions I based my work on. One of them was that after tangent space creation the normal and the tangent at a vertex should be orthogonal. Am I right there? If so, dot(N, T) should always be 0, right? If I compute that in my vertex shader (simply IN.normal dot IN.tangent) while transforming the vertices of a sphere, and pass the value straight through the pixel shader, I should see only a black sphere, right? Unfortunately that is not the case: one hemisphere is black and the other is white. Is something wrong with my tangent space generation (which I do with NVidia's MeshMender utility)?
I just can't get it. Can per-pixel light calculation really be that hard, or is it just me?
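For reference, the test boils down to something like this in the vertex shader (a sketch; since a color iterator clamps negative values to black, passing the absolute value avoids the sign hiding half the picture):

// Visualize |dot(N, T)| per vertex; an orthogonal basis should
// render near-black everywhere.
float d = abs(dot(normalize(IN.normal), normalize(IN.tangent)));
OUT.color = float3(d, d, d);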

Cheers,
Alex

I am very interested to hear how NVMeshMender fights the mirroring of tangent spaces in neighbouring triangles.

Yes, the more recent versions of MeshMender, including the one on the www.nvidia.com/developer website, do handle mirrored UVs and cylindrical wrapping.

We've done a bunch of testing, and I'm using it at home with good results.

Your tangent basis is not necessarily orthogonal, but it usually is, or is close to it. On a flat surface it usually is, unless the texture is stretched in some strange way.
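If you do need an exactly orthogonal basis (for instance, so the transpose of the TBN matrix is its inverse), a common fix, independent of MeshMender, is to Gram-Schmidt the tangent against the normal per vertex (a Cg sketch):

// Remove the component of T that lies along N, so N and T are exactly
// orthogonal even when the UV mapping is sheared.
float3 N = normalize(IN.normal);
float3 T = normalize(IN.tangent - N * dot(N, IN.tangent));
float3 B = cross(N, T);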

We recommend using HLSL for DirectX projects, and Cg or GLSL for OpenGL projects; HLSL does a better job of compiling things down to ps.1.1.

That said, I use ps.1.1 assembly directly. I find you need to know the assembly anyway to get things from high-level to asm on the ps.1.x side, so it's easier to cut out the middleman.

For ps.2.0+, I recommend using a high-level language instead of asm.

quote:
Original post by SimmerD
Yes, the more recent versions of MeshMender, including the one on the www.nvidia.com/developer website, do handle mirrored UVs and cylindrical wrapping.

We've done a bunch of testing, and I'm using it at home with good results.


I read some of the source, but I still can't get it.
It tears the two tangent spaces apart at an edge between them if they are too different, and the edge vertices get duplicated for one of the faces.
But if one of those vertices is shared by 3 faces and it gets duplicated for the second face (because of the difference between the 1st and the 2nd), what about the 3rd face? It will end up sharing a vertex only with the 1st; and what if the smooth transition should be between the 2nd and the 3rd? The per-edge logic will break that...

The latest version of MeshMender uses the concept of smoothing groups. I'd have to look at the code to check what you're describing.

It basically works by identifying all the faces neighbouring a vertex, then walking from face to face trying to find similar faces. Every time a face is too different (say, in the binormal), it starts a new smoothing group. Each face gets a chance to join a neighbouring smoothing group or start its own.

This process is run independently for normals, binormals, and tangents.

quote:
Original post by SimmerD


I see. This is very good.
However, when mirroring is there, say in the binormal, the algorithm will tear the binormals at the edge between the mirrored faces...? So we won't get a smooth transition there, and we should (at least it is desirable). Of course this won't show up when both faces lie in the same plane, but when the angle between them is relatively small it will still be visible, imo...

