Tangent space matrix from Normal

Started by
5 comments, last by maxest 12 years, 8 months ago
Hi! I'm writing a shader for my water and now I'm stuck. I want to transform the normals from the normal map into tangent space. The problem is that I do all of the calculations in a post-process shader. In this shader I have: the normal of the wave (calculated from a heightmap), the UV of the bump map, and the viewing vector (from the camera to the pixel). Now I need to compute a matrix from these parameters. I know it should somehow work, but I just have no idea how.

[attached image: probleml.jpg]

That's what I want to achieve: the red vector is the normal of the water, which I computed from the heightmap. The green one is the normal I read from the bump map. Now I want to align this green normal to the red one, so I get nice waves.
Moving to Graphics Programming & Theory.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Please? No one got an idea?
I'm really stuck if I don't get this to work.
Someone correct me if I'm wrong, but in order to create tangent space, you need a tangent and binormal vector. The only way I know how to get those is to precalculate them by running over the geometry and the UV coordinates to construct these extra vectors (or let a modeling program export them for you, in the case of a model). If you know the layout of your geometry ahead of time, maybe you could do it with just the UV and normal, but I can't help you beyond that. Maybe you should look up some normal mapping tutorials.
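For reference, the usual precalculation solves the two triangle edges as linear combinations of the UV deltas. A minimal CPU-side sketch (plain C++, with hypothetical `Vec2`/`Vec3` structs and a `triangle_tangent` helper that are not from this thread), assuming non-degenerate UVs:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Solve  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B,
// where e1/e2 are triangle edges and du/dv the matching UV deltas.
void triangle_tangent(Vec3 p0, Vec3 p1, Vec3 p2,
                      Vec2 uv0, Vec2 uv1, Vec2 uv2,
                      Vec3& T, Vec3& B)
{
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = uv1.x - uv0.x, dv1 = uv1.y - uv0.y;
    float du2 = uv2.x - uv0.x, dv2 = uv2.y - uv0.y;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);   // assumes non-degenerate UVs
    T = { r * (dv2 * e1.x - dv1 * e2.x),
          r * (dv2 * e1.y - dv1 * e2.y),
          r * (dv2 * e1.z - dv1 * e2.z) };
    B = { r * (du1 * e2.x - du2 * e1.x),
          r * (du1 * e2.y - du2 * e1.y),
          r * (du1 * e2.z - du2 * e1.z) };
}
```

In practice you'd accumulate these per triangle into per-vertex tangents, then normalize and orthogonalize against the vertex normal.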
OK, I now found some HLSL code, but I'm not sure if I converted it right to GLSL.

[hlsl]
float3x3 compute_tangent_frame(float3 N, float3 P, float2 UV)
{
    float3 dp1 = ddx(P);
    float3 dp2 = ddy(P);
    float2 duv1 = ddx(UV);
    float2 duv2 = ddy(UV);

    // M has dp1, dp2 and their cross product as ROWS
    float3x3 M = float3x3(dp1, dp2, cross(dp1, dp2));
    // rows of the (unscaled) inverse of M
    float2x3 inverseM = float2x3(cross(M[1], M[2]), cross(M[2], M[0]));
    float3 T = mul(float2(duv1.x, duv2.x), inverseM);
    float3 B = mul(float2(duv1.y, duv2.y), inverseM);

    return float3x3(normalize(T), normalize(B), N);
}

float3x3 tangentFrame = compute_tangent_frame(myNormal, eyeVecNorm, texCoord);
// .xyz added: tex2D returns a float4, but mul with a float3x3 needs a float3
float3 normal0a = normalize(mul(2.0f * tex2D(normalMap, texCoord).xyz - 1.0f, tangentFrame));
[/hlsl]

[glsl]
mat3 compute_tangent_frame(vec3 Normal, vec3 View, vec2 UV)
{
    vec3 dp1 = dFdx(View);
    vec3 dp2 = dFdy(View);
    vec2 duv1 = dFdx(UV);
    vec2 duv2 = dFdy(UV);

    // mat3(a, b, c) fills COLUMNS, so M's columns match the HLSL rows
    mat3 M = mat3(dp1, dp2, cross(dp1, dp2));
    mat3 inverseM = mat3(cross(M[1], M[2]), cross(M[2], M[0]), vec3(0.0));
    // HLSL's mul(rowVector, matrix) becomes matrix * columnVector here
    vec3 T = inverseM * vec3(duv1.x, duv2.x, 0.0);
    vec3 B = inverseM * vec3(duv1.y, duv2.y, 0.0);

    return mat3(normalize(T), normalize(B), Normal);
}

mat3 tangentFrame = compute_tangent_frame(myNormal, eyeVecNorm, vec2(1,1)*S.xz * 1.6 + wind * apptime * 0.00016);
vec3 normal0a = normalize(tangentFrame * (2.0 * texture2D(texture8, texCoord).xzy - 1.0));
[/glsl]

I kinda hope that I converted it wrong, because otherwise I still got it wrong :-/
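The usual trap in HLSL-to-GLSL ports is that `float3x3(a, b, c)` fills rows while `mat3(a, b, c)` fills columns, so HLSL's `mul(v, M)` corresponds to GLSL's `M * v`, not `v * M`. A small CPU sketch (plain C++, with hypothetical `tbn_transform`/`tbn_wrong` helpers, not from the shader above) of what each order actually computes:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

static V3 scale(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static V3 add(V3 a, V3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float dot(V3 a, V3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// HLSL: mul(v, float3x3(T, B, N))  -- T, B, N are ROWS
// GLSL: mat3(T, B, N) * v          -- T, B, N are COLUMNS
// Both compute the same tangent-to-world combination:
V3 tbn_transform(V3 v, V3 T, V3 B, V3 N) {
    return add(add(scale(T, v.x), scale(B, v.y)), scale(N, v.z));
}

// GLSL: v * mat3(T, B, N) computes dot products instead (the inverse
// transform for an orthonormal frame) -- the wrong direction here:
V3 tbn_wrong(V3 v, V3 T, V3 B, V3 N) {
    return { dot(v, T), dot(v, B), dot(v, N) };
}
```

With a non-orthonormal frame the two give different results, which is why the straight port looked plausible but shaded wrong.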

Assuming the sea is rippling upon a flat horizontal plane, and all the normals responsible for the rippling come from a normal map, a per-vertex tangent matrix is not required. All that is needed is to rotate the normal stored in the map 90 degrees about the x axis so that it points up. Basically: pull the normal from the map as a vec3 (OpenGL) / float3 (HLSL) called norm, then rotate it into a new vec3 called n as below.

n.x=norm.x; n.y=norm.z; n.z=-norm.y;

Think of it like this: the blue normal map produced in Photoshop is said to be in tangent space. But obviously Photoshop has no knowledge of the model the map will be applied to, therefore it has no idea of the model's vertices or how those vertices will be UV mapped. Basically, Photoshop has no knowledge of the tangent space it is supposedly defining normals in.

Tangent-space normal maps are not mapped in tangent space; they are mapped in eye space. They assume (pretend) that all of the model's vertex normals point (0,0,1) up the z axis (OpenGL), and since they encode the z value in the blue channel of the map, the maps are mostly blue; the red and green channels encode x and y offsets from the z axis, i.e. the perturbations. Given that eye coordinates place the camera at (0,0,0) looking down the negative z axis (OpenGL), a normal map is like a south-facing wall: the normals point out of the wall toward the camera.

So what we need for the sea is to rotate the map 90 degrees so the z normals point up. As you can see from the code above, the y axis (standard base frame in eye space) now takes the z component, meaning the y axis is scaled by the amount of z; the x axis remains unchanged because the normal is being rotated about this axis. If you rotate the axis frame 90 degrees about the x axis, the former y axis will now point down the negative z axis (OpenGL), so n.z is assigned the negative of the y value, which is to say the z axis is scaled by the negative of the amount of y.
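If it helps, the whole decode-and-rotate step can be sketched on the CPU (plain C++; the `decode_and_rotate` helper and the 8-bit RGB encoding are assumptions for illustration, not from the thread):

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Decode an RGB8 normal-map texel to [-1, 1], then rotate 90 degrees
// about the x axis so the map's +z ("blue", toward the viewer) becomes
// +y ("up") for a horizontal water plane, and renormalize.
V3 decode_and_rotate(unsigned char r, unsigned char g, unsigned char b) {
    V3 norm = { r / 255.0f * 2.0f - 1.0f,
                g / 255.0f * 2.0f - 1.0f,
                b / 255.0f * 2.0f - 1.0f };
    V3 n = { norm.x, norm.z, -norm.y };   // the rotation described above
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

A mostly-blue texel like (128, 128, 255) comes out pointing essentially straight up, which is exactly the "flat water" case.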
If you're computing your waves with procedural functions, take a look here http://developer.nvidia.com/node/110

This topic is closed to new replies.
