Normal mapping space and tangents


Hi all,

now that I’ve managed to get my normal mapping working, I’m at the point of making a “clean” implementation.

Would you know if there’s a general consensus on the following 2 things:

1. Normal mapping in world space vs tangent space (multiply T, B and N in PS versus converting light direction to tangent space in VS, pass to PS)

2. Providing tangents and bitangents to the GPU in the vertex buffer versus calculating them in the shader

What I’ve learned so far is that performing it in tangent space is preferred (convert the light dir in the VS and pass that to the PS), combined with calculating the bitangent (and perhaps also the tangent) in the VS, per vertex.

I know the answer could be profiling, but that’s not something I’m doing at this stage. Just want to decide my 1st approach.

Any input is appreciated.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

  1. Converting light directions to tangent space in the VS isn't practical for handling arbitrary numbers of lights, since you're very limited in the number of interpolants that you can pass between the VS and PS (this is especially true from a performance POV). There's also any number of other things that might happen in the PS that need the final surface normal, such as cubemap reflections or SH ambient lighting. I would just convert the normalmap normal to world space and work from there (see the first sketch after this list). It will make your code cleaner, and you'll be in a better spot if you start adding more advanced techniques.
  2. To compute per-vertex tangents you really need more global info about the triangle and its neighbors. This is why it's very common to compute tangents in an offline pass during mesh import/processing. It is possible to use pixel shader derivatives to compute a tangent frame on the fly (see the second sketch below), but you may run into issues if the derivatives are inaccurate.
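
For point 1, here's a minimal sketch of what the pixel shader side can look like once a world-space TBN is interpolated down from the VS (as in the code cozzie posts later in this thread). The texture, sampler and struct names here are illustrative assumptions, not code from the thread:

	Texture2D gNormalMap : register(t0);
	SamplerState gLinearSampler : register(s0);

	struct PSIn
	{
		float4 PosH       : SV_POSITION;
		float3 TangentW   : TANGENT;
		float3 BitangentW : BITANGENT;
		float3 NormalW    : NORMAL;
		float2 Tex        : TEXCOORD0;
	};

	// Sample the normal map and bring the normal into world space, so lighting,
	// cubemap reflections, SH ambient etc. can all work in one common space.
	float3 GetWorldSpaceNormal(PSIn pin)
	{
		// Decode from the [0,1] texture range to [-1,1]
		float3 nTS = gNormalMap.Sample(gLinearSampler, pin.Tex).xyz * 2.0f - 1.0f;

		// Re-normalize the interpolated frame (it denormalizes across the triangle)
		float3 T = normalize(pin.TangentW);
		float3 B = normalize(pin.BitangentW);
		float3 N = normalize(pin.NormalW);

		// The rows of the 3x3 matrix are the tangent-frame axes in world space,
		// so a row-vector multiply takes the normal from tangent to world space
		return normalize(mul(nTS, float3x3(T, B, N)));
	}

From there the lighting code just uses the returned normal; nothing downstream needs to know a normal map was involved.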
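And for point 2, a sketch of the derivative-based approach, following Christian Schüler's well-known cotangent-frame construction. Treat it as illustrative rather than production code, since it inherits the derivative-accuracy issues mentioned above:

	// Build a tangent frame per pixel from screen-space derivatives.
	// N = interpolated world-space normal, p = world-space position, uv = texcoords.
	float3x3 CotangentFrame(float3 N, float3 p, float2 uv)
	{
		// Edge vectors of the current triangle in world space and in UV space
		float3 dp1  = ddx(p);
		float3 dp2  = ddy(p);
		float2 duv1 = ddx(uv);
		float2 duv2 = ddy(uv);

		// Solve for the (unnormalized) tangent and bitangent
		float3 dp2perp = cross(dp2, N);
		float3 dp1perp = cross(N, dp1);
		float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
		float3 B = dp2perp * duv1.y + dp1perp * duv2.y;

		// Scale-invariant normalization
		float invmax = rsqrt(max(dot(T, T), dot(B, B)));
		return float3x3(T * invmax, B * invmax, N);
	}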


Hello,

I assume the normal, tangent and binormal in model space are already in place, i.e. computed when you load your mesh and stored in your vertex buffer, since most 3D modeling tools are capable of including this in the mesh info. If not, MJP's reply above is helpful.


"1. Normal mapping in world space vs tangent space (multiply T, B and N in PS versus converting light direction to tangent space in VS, pass to PS)"

Since the VS (vertex shader) is geometry-oriented and operates on each vertex, I would only calculate things related to the final vertex output there; for normal mapping that means transforming your T, B and N by your entity's world matrix.

"convert light dir in VS and pass that to the PS"

The PS (pixel shader), as the name implies, operates on the final color per pixel. IMO this is where the lighting computations should happen, because they affect the final color of each pixel.


My rule of thumb is: calculate anything related to vertex manipulation in the VS, and anything related to the final color output in the PS ^_^y

(outdated, see my next reply)

Thanks, this helps. I'll go for providing the normal and tangent through the vertex buffer (so calculated offline) and just calculate the bitangent in the VS with one cross product. I'll do the normal mapping in world space (pass the TBN in world space to the PS).

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

Actually, I found some mistakes in my code. Now, after correcting both the fragment shader and the pre-calculations, the two methods look identical on flat surfaces. However, on curved surfaces they're slightly different, just as calculating flat normals on e.g. a sphere would differ from averaging vertex normals.

If you're just slapping a generic bump texture on your object, you won't notice much difference (unless you have triangles with degenerate texture coordinates; those will look atrocious with derivatives because you end up normalizing a zero vector).

But if the normal map was baked for that specific mesh, there will be noticeable discontinuities. The true way of fixing that is to export tangent maps, but since most tools have adopted mikktspace, you can use that to generate them yourself. However, you need to follow the algorithm closely or else it defeats the entire purpose. Basically, check the notes on this page: https://wiki.blender.org/index.php/Dev:Shading/Tangent_Space_Normal_Maps

By the way, if you pre-calculate tangents, don't forget the direction flag on them (+1 or -1 in the W coordinate), or it won't handle flipped normal maps correctly. The same goes for the derivatives method: you can't just do B = cross(N, T); it has to be multiplied by the sign of the texture "area".
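
For reference, a minimal sketch of that pattern with a float4 tangent; the struct layout and helper name are illustrative, not code from this thread:

	struct VertexIn
	{
		float3 PosL     : POSITION;
		float3 NormalL  : NORMAL;
		float4 TangentL : TANGENT;   // xyz = tangent, w = handedness sign (+1 or -1)
		float2 Tex      : TEXCOORD0;
	};

	// Reconstruct the bitangent in the VS, flipping it for mirrored UV islands
	float3 ComputeBitangent(VertexIn vin)
	{
		return cross(vin.NormalL, vin.TangentL.xyz) * vin.TangentL.w;
	}

Note that the cross-product order (N x T versus T x N) is just a convention; it has to match whatever the tangent baker assumed, which is part of why following mikktspace exactly matters.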

@d07RiV thanks for digging it up :)

Articles and papers are not very clear or consistent on if/when to use the sign for the bitangent. My use case is that I provide the normal and tangent to the GPU (via the vertex buffer), and then in the VS I calculate the bitangent with a simple cross:


	if(gPerMaterial.Material.UseNormalMap == 1)
	{
		float3 tBitangent = cross(vin.Tangent, vin.NormalL);
		
		// TBN for normal mapping
		vout.TangentW	= mul(vin.Tangent, (float3x3)gPerObject.WorldInvTranspose);
		vout.BitangentW	= mul(tBitangent, (float3x3)gPerObject.WorldInvTranspose);
		vout.NormalW	= mul(vin.NormalL, (float3x3)gPerObject.WorldInvTranspose);
	}
	else 
		vout.NormalW = mul(vin.NormalL, (float3x3)gPerObject.WorldInvTranspose);

Now the question is: is this the way to go (independent of mirrored/flipped situations), or do I have to do something like this:

- check if the untransformed normal.z > 0 or < 0; if > 0 then sign = +, if < 0 then sign = -
- apply that sign to all 3 transformed vectors (T, B and N)? Or before transforming?

Note that the vectors are float3's, so no W component for storing anything :)
Note 2: this is the original vertex normal, no normal map involved yet

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

This topic is closed to new replies.
