
Normal mapping? Tangents? Binormals?



#1 Nairou   Members   -  Reputation: 418


Posted 22 August 2011 - 09:54 PM

I am just starting to get into normal/bump mapping, and am curious what the "current modern method" is for doing normal mapping. I have a lot of older 3D books which make passing references to computing tangents and normals. I currently include normals in my geometry vertex format, but haven't yet seen anything that exports tangents with the geometry.


Aside from not yet knowing anything about tangents (or binormals, or bitangents, ...), I'm also using OpenGL 3.2 and trying to avoid any old deprecated methodologies. I can only assume that tangents are still needed for normal mapping (I haven't read anything to say otherwise), but is it still best to precompute them and add them to the vertex format? Or are they computed as-needed in shaders these days?

What would be a good (modern/shader) reference for learning tangents (if needed) and the basics of normal mapping?



#2 Digitalfragment   Members   -  Reputation: 738


Posted 22 August 2011 - 10:04 PM

Aside from not yet knowing anything about tangents (or binormals, or bitangents, ...), I'm also using OpenGL 3.2 and trying to avoid any old deprecated methodologies. I can only assume that tangents are still needed for normal mapping (I haven't read anything to say otherwise), but is it still best to precompute them and add them to the vertex format? Or are they computed as-needed in shaders these days?


Tangents and binormals (/bitangents) are perpendicular vectors that run along the surface of the model and describe the directions that two channels of the normal map point in, for any point on the model's surface. They're exported as part of the DAE and FBX formats, as well as a few others. You can calculate them yourself, using the texture coordinates and vertex positions of each vertex per triangle, then average them out a la smooth normals. Of course, because you can do this, you can also generate them in shaders, but the cost is usually fairly prohibitive.
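For readers who want to precompute them at asset-build time, here is a minimal sketch of the per-triangle step (C++ with GLM; the function name and the degenerate-UV guard are my own additions, not from any particular exporter):

#include <glm/glm.hpp>

// Per-triangle tangent/bitangent from positions and texture coordinates.
// p0..p2 are the triangle's vertex positions, uv0..uv2 their UVs.
void triangleTangentBitangent(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                              const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
                              glm::vec3& tangent, glm::vec3& bitangent)
{
    glm::vec3 e1 = p1 - p0;          // edge vectors in object space
    glm::vec3 e2 = p2 - p0;
    glm::vec2 d1 = uv1 - uv0;        // matching deltas in UV space
    glm::vec2 d2 = uv2 - uv0;

    float det = d1.x * d2.y - d2.x * d1.y;
    float r = (det != 0.0f) ? 1.0f / det : 0.0f;   // guard against degenerate UVs

    tangent   = (e1 * d2.y - e2 * d1.y) * r;       // direction of increasing u
    bitangent = (e2 * d1.x - e1 * d2.x) * r;       // direction of increasing v
}

Accumulate these over every triangle that shares a vertex, then normalize, exactly the way smooth normals are averaged.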

#3 MJP   Moderators   -  Reputation: 10041


Posted 22 August 2011 - 11:10 PM

Of course, because you can do this, you can also generate them in shaders but the cost is usually fairly prohibitive.


Ehh, depends on the platform. A bit of extra ALU isn't going to slow down any modern GPU. :P

#4 Digitalfragment   Members   -  Reputation: 738


Posted 23 August 2011 - 12:13 AM

Ehh, depends on the platform. A bit of extra ALU isn't going to slow down any modern GPU. :P


Haha, this is true. My judgement gets skewed because I'm used to dealing with the RSX.

#5 Krohm   Crossbones+   -  Reputation: 2916


Posted 23 August 2011 - 01:26 AM

Naughty Dog proposed a method based on derivative functions to reconstruct the perturbed normal from the height map alone. The performance of those functions has been a bit of a pain in the past. Hopefully they perform better now, but I'm not going to use this for a while, considering I know a lot of people are still running Shader Model 2.

#6 Eric Lengyel   Crossbones+   -  Reputation: 2155


Posted 23 August 2011 - 01:47 AM

Note that using the derivative instructions to get a screen-space basis results in a severe quality reduction. It's much better to use precomputed per-vertex tangents.

Here are some comparison images. In each one, the region left of the white line uses the derivative instructions as described by the paper above. The region right of the white line uses the conventional method in which tangents are precomputed per-vertex and bitangents are generated in the vertex shader.

[Six comparison screenshots: left of the white line, bump mapping via the derivative instructions; right of the white line, precomputed per-vertex tangents with bitangents generated in the vertex shader.]

#7 MJP   Moderators   -  Reputation: 10041


Posted 23 August 2011 - 02:18 AM

Note that using the derivative instructions to generate tangents and bitangents results in a severe quality reduction. It's much better to use precomputed per-vertex tangents.

Here are some comparison images. In each one, the region left of the white line uses the derivative instructions as described by the paper above. The region right of the white line uses the conventional method in which tangents are precomputed per-vertex and bitangents are generated in the vertex shader.


I'm sure you know this, but the paper referenced above does not generate tangents and bitangents for normal mapping; the methods that do generate them in the fragment shader don't produce such terrible results (especially for simple tiled cases like the ones you posted).

#8 PaloDeQueso   Members   -  Reputation: 283


Posted 23 August 2011 - 09:59 AM

You are definitely going to want to generate the tangents and bitangents (and maybe the normals too) yourself. Doing it in a vertex or geometry shader, for instance, will only give you per-poly normals. I suppose that, given a big enough patch size, you could calculate accurate per-vertex TBNs in a tessellation shader, but it will cost you. I have a TriangleMesh class in my engine that I feed triangles with just vertices and texcoords; from there I calculate a face normal, and using that and the texcoords I calculate the face tangents and bitangents. Then you go through the TriangleMesh's vertices and, for each one, find the surrounding triangles and average their face normals, tangents, and bitangents to get the per-vertex values. It's kind of a pain, but the results are a big payoff!
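A rough sketch of that pass, assuming simple indexed triangle data (C++ with GLM; the Vertex/Triangle types and function name are hypothetical, not the actual TriangleMesh class described above):

#include <glm/glm.hpp>
#include <vector>

struct Vertex {
    glm::vec3 position;
    glm::vec2 uv;
    glm::vec3 normal    = glm::vec3(0.0f);   // accumulated below, then normalized
    glm::vec3 tangent   = glm::vec3(0.0f);
    glm::vec3 bitangent = glm::vec3(0.0f);
};

struct Triangle { unsigned i0, i1, i2; };

void buildVertexFrames(std::vector<Vertex>& verts, const std::vector<Triangle>& tris)
{
    for (const Triangle& tri : tris) {
        Vertex& a = verts[tri.i0];
        Vertex& b = verts[tri.i1];
        Vertex& c = verts[tri.i2];

        glm::vec3 e1 = b.position - a.position;
        glm::vec3 e2 = c.position - a.position;
        glm::vec2 d1 = b.uv - a.uv;
        glm::vec2 d2 = c.uv - a.uv;

        glm::vec3 faceN = glm::normalize(glm::cross(e1, e2));   // face normal

        float det = d1.x * d2.y - d2.x * d1.y;
        float r = (det != 0.0f) ? 1.0f / det : 0.0f;             // degenerate UV guard
        glm::vec3 faceT = (e1 * d2.y - e2 * d1.y) * r;           // face tangent
        glm::vec3 faceB = (e2 * d1.x - e1 * d2.x) * r;           // face bitangent

        Vertex* corners[3] = { &a, &b, &c };
        for (Vertex* v : corners) {        // unweighted average; area/angle weights also work
            v->normal    += faceN;
            v->tangent   += faceT;
            v->bitangent += faceB;
        }
    }
    for (Vertex& v : verts) {
        v.normal    = glm::normalize(v.normal);
        v.tangent   = glm::normalize(v.tangent);
        v.bitangent = glm::normalize(v.bitangent);
    }
}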
Douglas Eugene Reisinger II
Projects/Profile Site

#9 Nairou   Members   -  Reputation: 418


Posted 23 August 2011 - 07:42 PM

Does anyone have any references on why tangents and bitangents and such are used in normal mapping?

I understand that tangent space is a plane that sits on the surface, perpendicular to the normal, and I see lots of documents that give the math for tangent space and for calculating the tangents of a vertex. But I haven't yet found a document which explains why tangents and tangent space are important.

The best article I've found on tangents so far is this one, but even it assumes that I already want a tangent space and want it aligned a certain way.

I'd really like to understand how normal mapping works, rather than just memorize a set of required steps.

#10 Hodgman   Moderators   -  Reputation: 26967


Posted 23 August 2011 - 08:31 PM

Does anyone have any references on why tangents and bitangents and such are used in normal mapping?

I understand that tangent space is a plane that sits on the surface, perpendicular to the normal, and I see lots of documents that give the math for tangent space and for calculating the tangents of a vertex. But I haven't yet found a document which explains why tangents and tangent space are important.

The normal, tangent and binormal can be put together into a 3x3 matrix which describes 'tangent space'.

In short, normal-map data is in 'tangent space' (relative to the surface orientation), but the normals we use in lighting calculations must be in a different space (usually world or view space). So, we must know the orientation of the surface, so we can transform this data into the correct space (i.e. relative to the world, or to the view, instead of relative to the surface).


To step back for a moment -- objects in your world are described in 'world space', where (0,0,0) describes some specific point in your world (probably the center). We say x points one way in the world (maybe 'right'), z points another way (maybe 'forward'), and y points up (or maybe z points up for you, etc...).
z = "forward" = (0,0,1)
x = "right" = (1,0,0)
y = "up" = (0,1,0)

Each object's position/orientation in the world can be described with a point (which is an offset from the world's origin) and 3 directions that describe its rotation -- forward, right and up.
E.g. an object that is 10m above the origin, and is rotated to face the right (i.e. facing down +x, or rotated 90º around y, etc), could be described as:
pos = (0,10,0), forward = (1,0,0), right=(0,0,-1), up=(0,1,0)

We also have other 'spaces' than 'world space' though. Some things might be defined relative to this object, in which case, their description would be given in that object's local space.
e.g. The world-space position of an object that was "2m in front of the object" would be:
relativePos = (0,0,2);
worldPos = object.position + object.right * relativePos.x + object.up * relativePos.y + object.forward * relativePos.z;
or
worldPos = (0,10,0) + (0,0,-1) * 0 + (0,1,0) * 0 + (1,0,0) * 2;
worldPos = (2,10,0)


What we've just done there is actually equivalent to a matrix multiplication! If you encoded the above math into a matrix, you would have a transformation matrix which converts data from the object's local space, into world space. I'd call this the local-to-world matrix.
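A tiny concrete version of that, just to show the equivalence (C++ with GLM; the numbers are the ones from the example above):

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    glm::vec3 pos(0.0f, 10.0f, 0.0f);
    glm::vec3 right(0.0f, 0.0f, -1.0f);
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    glm::vec3 forward(1.0f, 0.0f, 0.0f);

    // The columns of the local-to-world rotation are the object's basis vectors.
    glm::mat3 localToWorld(right, up, forward);

    glm::vec3 relativePos(0.0f, 0.0f, 2.0f);               // "2m in front of the object"
    glm::vec3 worldPos = pos + localToWorld * relativePos;  // same math as above

    std::printf("%g %g %g\n", worldPos.x, worldPos.y, worldPos.z);  // prints: 2 10 0
    return 0;
}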

Going back to the normal/tangent/binormal -- these are the forward/right/up directions that describe the surface of the model, relative to the object. i.e. they form a surface-to-local matrix, AKA the tangent-space matrix, or tangent-to-object transform.

The data stored in normal-maps is in tangent space (i.e. relative to the surface). This matrix (i.e. these 3 directions) let us transform the texture data from being relative to the surface, to being relative to the object.
Once it's relative to the object, we can also transform it to be in world-space, or to be relative to the camera (which are the spaces that lighting calculations are often done in).
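To make that concrete, here is roughly what the per-pixel step looks like. It is written as C++/GLM for illustration (in practice this lives in the fragment shader), and the function and variable names are mine:

#include <glm/glm.hpp>

// Turn a tangent-space normal-map sample into a world-space normal.
// n, t, b are the interpolated per-vertex normal, tangent and bitangent,
// already in world space; 'texel' is the RGB normal-map value in [0,1].
glm::vec3 normalMapToWorld(const glm::vec3& texel,
                           const glm::vec3& n, const glm::vec3& t, const glm::vec3& b)
{
    glm::vec3 tangentSpaceNormal = texel * 2.0f - 1.0f;    // unpack [0,1] -> [-1,1]

    glm::mat3 tbn(glm::normalize(t),                        // columns are the tangent-space axes
                  glm::normalize(b),
                  glm::normalize(n));

    return glm::normalize(tbn * tangentSpaceNormal);        // tangent space -> world space
}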

#11 Eric Lengyel   Crossbones+   -  Reputation: 2155


Posted 23 August 2011 - 11:20 PM

The best article I've found on tangents so far is this one, but even it assumes that I already want a tangent space and want it aligned a certain way.


I wrote that. As Hodgman describes, the tangent frame provides a matrix that allows you to transform light and view directions from object or world space into the local axis-aligned coordinate system of the normal map, or vice-versa. Basically, the tangent and bitangent at each vertex point in the directions that the x and y axes of the normal map would point at those locations if you were to pull the texture off the mesh and project it onto the tangent plane.

Btw, the term binormal is still used in a lot of places, but it's not correct in the context of normal mapping. The proper term for the direction perpendicular to both the normal and tangent is the bitangent.
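The opposite direction (bringing a light or view vector into tangent space so it can be compared against the raw normal-map data) uses the transpose of the same matrix, which equals the inverse as long as the frame is orthonormal. A C++/GLM sketch with my own naming:

#include <glm/glm.hpp>

// Transform a world-space direction (e.g. the light direction) into tangent space.
// For an orthonormal TBN frame, transpose(tbn) is the inverse of tbn.
glm::vec3 worldToTangent(const glm::vec3& dirWorld,
                         const glm::vec3& n, const glm::vec3& t, const glm::vec3& b)
{
    glm::mat3 tbn(glm::normalize(t), glm::normalize(b), glm::normalize(n));
    return glm::transpose(tbn) * dirWorld;
}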

#12 Nairou   Members   -  Reputation: 418


Posted 24 August 2011 - 08:59 AM

Hodgman:
Awesome explanation! That makes sense, I think I get it now! Thank you!

Eric:
Thanks for the article! I'll have to go back and read it again now that I understand better the purpose of what is going on. :)

#13 synulation   Members   -  Reputation: 271


Posted 24 August 2011 - 01:24 PM

FWIW - Mikkelsen posted a follow up (http://mmikkelsen3d.blogspot.com/) to that unparametrized bump mapping paper, making use of a precomputed derivative map to increase visual quality.

#14 mmikkelsen   Members   -  Reputation: 236


Posted 25 August 2011 - 01:08 PM

FWIW - Mikkelsen posted a follow up (http://mmikkelsen3d.blogspot.com/) to that unparametrized bump mapping paper, making use of a precomputed derivative map to increase visual quality.


These public test results were produced by a member of the Blender community, Sean Olson:

http://jbit.net/~spa..._deriv/compare/

At a moderate distance it's difficult to tell the difference between the listing 2 method in the paper, which operates off a height map, and the method from the blog, which uses a derivative map. But up close, as you see in these shots, you can see the difference. The derivative operator essentially deletes one order of smoothness; this is why filtering derivatives looks smoother than taking the derivative of the filtered height signal.

However, both listing 2 and the derivative-map method from the blog produce results without the aid of vertex-level tangent space, and the results are good. Using a derivative map vs. the height map is a trade-off between quality during texture magnification and memory budget, and imo it's a call that should be made by the artist on a case-by-case basis.


Blender is using the method from the paper and the blog, btw, so you can check it out there, though you'd have to use a build from graphicall.org or get code from trunk to see the derivative mapping. It's easy to drop into your own shader anyway. One thing I forgot to mention in the paper: you're supposed to scale the bump derivatives in listing 2 (and the blog), dBs and dBt, by an adjustable user-scale.

Also, surf_pos and surf_norm must be in the same space, i.e. object/world/view.
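For readers following along, here is the general shape of that perturbation, paraphrased rather than copied from the paper's listing 2 (C++/GLM purely for illustration; in a real fragment shader dpdx/dpdy would come from dFdx/dFdy of the surface position, dBs/dBt from the sampled height or derivative map, and bumpScale is the adjustable user-scale mentioned above):

#include <glm/glm.hpp>
#include <cmath>

// Perturb a surface normal from screen-space derivatives, with no
// per-vertex tangent space. surfNorm is the normalized interpolated normal;
// dpdx/dpdy are screen-space derivatives of the surface position;
// dBs/dBt are the bump derivatives along those same directions.
glm::vec3 perturbNormal(const glm::vec3& surfNorm,
                        const glm::vec3& dpdx, const glm::vec3& dpdy,
                        float dBs, float dBt, float bumpScale)
{
    glm::vec3 r1 = glm::cross(dpdy, surfNorm);
    glm::vec3 r2 = glm::cross(surfNorm, dpdx);
    float det = glm::dot(dpdx, r1);

    glm::vec3 surfGrad = (det < 0.0f ? -1.0f : 1.0f) * bumpScale * (dBs * r1 + dBt * r2);
    return glm::normalize(std::fabs(det) * surfNorm - surfGrad);
}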


To anyone out there who doesn't know how to generate derivative maps, there's a freely available tool here --> http://jbit.net/~sparky/makedudv/ which can make them from height maps in virtually any image format. I recommend using a minimum of 16-bit heights, and preferably float32. Once the derivative map has been made you can compress it into BC5 (8 bits per texel).

ZBrush can export such float32 height maps, and so can Blender, for those looking for free options. Mudbox and Blender both allow you to paint bump maps in float32 as well, and will show you the lit result as you paint the bumps. If anyone is feeling adventurous, there's also CinePaint, which was used on various movie productions such as "Harry Potter and the Philosopher's Stone" and "The Last Samurai" (among others) to paint HDR textures.

Cheers,

Morten.

#15 es   Members   -  Reputation: 100


Posted 31 August 2011 - 06:58 AM

FWIW - Mikkelsen posted a follow up (http://mmikkelsen3d.blogspot.com/) to that unparametrized bump mapping paper, making use of a precomputed derivative map to increase visual quality.

I've implemented this approach using OpenGL and got -11% on a 460gtx in comparison with the tangent-space approach.



I am the God of War! None shall defy me!

#16 ProfL   Members   -  Reputation: 541


Posted 31 August 2011 - 09:19 AM

Tangents and binormals (/bitangents) are perpendicular vectors....


Is that really the case?


The space that is created is usually based on some smoothed normal, and this is not perpendicular to the tangent and bitangent. I know there is an orthogonalization formula, but after all the space is not orthogonal; the tangent-space matrix needs to stretch and shear (in addition to just rotating, like an orthogonalized matrix does). So, can someone resolve the mystery I've stumbled into? Am I missing an important bit, or is the orthogonalization mathematically wrong but somehow just not visible?




While I'm hijacking this thread anyway, the second thing that confuses me: is it really valid to average the tangent matrices at vertices? They usually represent individual spaces for one triangle(-edge); a neighboring triangle might have a completely unrelated space, which may even be flipped due to UV mirroring or something. I think that if the orthogonalization removed everything but rotation, this could work as long as you propagate the flipping bit per triangle, but that leads back to my first question.




btw, the normal perturbation gave me the same bad results Eric got: in minification I see bad aliasing (especially in specular), and with magnification you notice the linearly interpolated heightmaps.

#17 mmikkelsen   Members   -  Reputation: 236


Posted 01 September 2011 - 06:21 AM


FWIW - Mikkelsen posted a follow up (http://mmikkelsen3d.blogspot.com/) to that unparametrized bump mapping paper, making use of a precomputed derivative map to increase visual quality.

I've implemented this approach using OpenGL and got on 460gtx -11% in comparison with tangent space approach




Hey es,

Very interesting. Just to clarify, could you please elaborate on what you mean by 11%? Also, could you show your pixel shader? And are you using the same texture format in both cases?

Has anyone else here tried some different configurations?

Cheers,

Morten.

#18 Digitalfragment   Members   -  Reputation: 738


Posted 02 September 2011 - 01:11 AM


Tangents and binormals (/bitangents) are perpendicular vectors....

Is that really the case?


You're right, it's not really the case - it's a generalisation. We have assets here which completely break that rule. The orthonormalisation just makes it easier to conceptualise the individual channels of a normal map, and how they relate on the surface, especially when generating normal maps from height maps.

For the most part, you want the tangent space orthonormal to make the most out of the normal map; otherwise your space ends up heavily compressed in one axis.
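For reference, the orthonormalisation being discussed is usually a Gram-Schmidt step against the smoothed normal, plus a handedness sign so mirrored UVs still work. A C++/GLM sketch (names mine):

#include <glm/glm.hpp>

// Orthonormalise an averaged tangent against the smoothed vertex normal and
// compute a handedness sign, so the bitangent can be rebuilt in the vertex
// shader as: bitangent = handedness * cross(normal, tangent).
glm::vec4 orthonormalizeTangent(const glm::vec3& n,
                                const glm::vec3& averagedT,
                                const glm::vec3& averagedB)
{
    glm::vec3 t = glm::normalize(averagedT - n * glm::dot(n, averagedT));
    float handedness = (glm::dot(glm::cross(n, t), averagedB) < 0.0f) ? -1.0f : 1.0f;
    return glm::vec4(t, handedness);
}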

#19 Codarki   Members   -  Reputation: 462


Posted 02 September 2011 - 07:03 AM

while I'm hijacking this thread anyway, 2nd thing that confuses me, is it really valid to average the tangents matrices on vertices? they usually represent individual spaces for one triangle(-edge) a neighboring triangle might have a completely unrelated space which is even flipped due to UV mirroring or something. I think if the orthogonalization would remove anything but rotation, this could work if you propagate per triangle the flipping bit, but that leads to my first question.

Seams in UV mapping, or hard edges for normal smoothing, cause split vertices. In those cases you can't average the tangents across the edge, and the tangent spaces of the neighboring triangles are not continuous.

#20 ProfL   Members   -  Reputation: 541


Posted 02 September 2011 - 08:37 AM


while I'm hijacking this thread anyway, 2nd thing that confuses me, is it really valid to average the tangents matrices on vertices? they usually represent individual spaces for one triangle(-edge) a neighboring triangle might have a completely unrelated space which is even flipped due to UV mirroring or something. I think if the orthogonalization would remove anything but rotation, this could work if you propagate per triangle the flipping bit, but that leads to my first question.

Seams in UV mapping, or hard edges for normal smoothing, causes split vertices. In those cases you can't average the tangents for the edge, and the tangent spaces for the neighboring triangles are not continuous.

that's kind of obvious :D

I'm talking, of course, about shared edges/vertices. Without orthogonalization you clearly don't have to average the normal, but the tangent and bitangent kinda would need that for smoothing.





@Digitalfragment, thank you for confirming my worries. Any more takers with an opinion? I wonder how you guys are dealing with those issues; obviously no game stores a full tangent matrix per vertex, and everyone seems to run with orthogonalization, mostly because the inverse in the shader is then just a simple transpose, not an expensive full 3x3 inverse (although it could merely be normalized, which would at least save calculating the determinant and the division, I assume).
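A tiny self-contained illustration of that transpose point (C++/GLM, purely to show the property, not engine code): for an orthonormal frame the transpose really is the inverse, while a sheared (non-orthogonal) frame needs the full 3x3 inverse.

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    // Orthonormal frame: transpose == inverse.
    glm::mat3 ortho(glm::vec3(1.0f, 0.0f, 0.0f),
                    glm::vec3(0.0f, 0.0f, 1.0f),
                    glm::vec3(0.0f, -1.0f, 0.0f));

    // Sheared frame (e.g. tangent and bitangent not perpendicular): transpose != inverse.
    glm::mat3 sheared(glm::vec3(1.0f, 0.0f, 0.0f),
                      glm::vec3(0.5f, 1.0f, 0.0f),
                      glm::vec3(0.0f, 0.0f, 1.0f));

    glm::mat3 dOrtho   = glm::transpose(ortho)   - glm::inverse(ortho);
    glm::mat3 dSheared = glm::transpose(sheared) - glm::inverse(sheared);

    std::printf("orthonormal difference: %g\n", dOrtho[1][0]);    // 0
    std::printf("sheared difference:     %g\n", dSheared[1][0]);  // 0.5
    return 0;
}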











