Started 04 February 2011 - 04:43 PM

11 replies to this topic

Posted 04 February 2011 - 04:43 PM

Hi, I have a normal map which I use to replace the per-pixel normals in my HLSL pixel shader. The result looks good. What's the point of tangents and bitangents? I don't use them, and the result still looks pretty good!

Posted 04 February 2011 - 04:48 PM

Tangent/bitangent/normal form an orthonormal basis (usually given per vertex, in the object space of the model), which is used to transform all the vectors you use in your computations into the tangent space of a texture (the normal map). This way, your vectors and the vectors you sample from the texture reside in the same space, so the computations come out correct. This is needed for normal maps that store their vectors in tangent space (such normal maps have a bluish tint).
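As a rough sketch of what that basis change does (plain Python rather than HLSL, just to illustrate the math): transforming a vector into tangent space amounts to projecting it onto the tangent, bitangent, and normal.

```python
# Transforming a vector into the tangent-space basis (T, B, N):
# each tangent-space component is the projection of v onto one basis vector,
# i.e. multiplication by the matrix whose rows are T, B, N.
def to_tangent_space(v, tangent, bitangent, normal):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# With the identity basis, the vector is unchanged:
print(to_tangent_space((1.0, 2.0, 3.0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
# → (1.0, 2.0, 3.0)
```

In HLSL this is the usual `mul(v, TBN)` with the TBN matrix built in the vertex shader.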

Another possibility is to store the normals in the normal map in object space (or any other space you wish). In that case you do not need tangents and bitangents, since transforming the normals sampled from the normal map into world space, for example, only requires the world transform.

If you want a solid reference, Eric Lengyel's "Mathematics for 3D Game Programming and Computer Graphics" gives a very nice discussion of this topic.

Posted 04 February 2011 - 04:55 PM

A similar reference to the one mentioned above can be found here as well: http://www.terathon.com/code/tangent.html
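A plain-Python sketch of the per-triangle tangent/bitangent computation from that article (the per-vertex averaging, Gram-Schmidt orthonormalization against the normal, and handedness storage it also covers are omitted here):

```python
# Per-triangle tangent and bitangent from positions and texture coordinates,
# following the edge/UV-delta system of equations in Lengyel's article.
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    e1, e2 = sub(p1, p0), sub(p2, p0)               # position edges
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]     # UV deltas along e1
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]     # UV deltas along e2
    r = 1.0 / (du1 * dv2 - du2 * dv1)               # inverse UV-matrix determinant
    tangent   = tuple((e1[i] * dv2 - e2[i] * dv1) * r for i in range(3))
    bitangent = tuple((e2[i] * du1 - e1[i] * du2) * r for i in range(3))
    return tangent, bitangent

# Axis-aligned triangle with matching UVs: tangent follows u, bitangent follows v.
t, b = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
```

In practice the per-triangle results are accumulated per vertex and orthonormalized, as the article describes; note the `r` division blows up for degenerate UV mappings, which real code must guard against.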

Posted 04 February 2011 - 05:01 PM

What do object space normal maps look like?

Wouldn't it be easier to convert a normal map into object space rather than converting each vertex into tangent space?

Posted 04 February 2011 - 05:48 PM

Of course, working with object-space normal maps is much easier. But... imagine you have a wall texture that appears on many walls. For every orientation of a wall using that texture you would need a separate normal map. Having normal maps in tangent space, and doing the computations in that space, is thus much more universal and much less memory-consuming. Characters are a good counter-example: a character's texture will probably be used only on that particular character, so an object-space normal map makes sense there. Moreover, you avoid many problems associated with tangent-space computations, like sheared texture coordinates, a non-orthogonal tangent basis, and so on. So a good approach seems to be object-space normal maps for unique objects and tangent space for "general" textures (like walls, decals, floors). Note, however, that this requires you to keep two slightly distinct shader paths, one for each type of mesh.

Object-space normal map: http://www.3dkingdoms.com/tut2.jpg

Posted 04 February 2011 - 07:07 PM

I am generating a normal map per-pixel from a different height map texture for every object anyway. Unfortunately, the way I'm generating it places it in tangent space. I was trying to avoid sending more data with my vertices. Is there anything I can do on a per-pixel or per-vertex basis (in the HLSL shader)?
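For reference, one common way to derive a tangent-space normal from a height map is central differencing of neighbouring heights; a plain-Python sketch of the math (the poster's generator may differ, and `strength` is a made-up bump-scale parameter):

```python
# Hypothetical sketch: tangent-space normal from a height map via central
# differences. The height field is indexed height[row][col] and wraps at edges.
def height_to_normal(height, x, y, strength=1.0):
    h = lambda i, j: height[j % len(height)][i % len(height[0])]
    dx = (h(x + 1, y) - h(x - 1, y)) * strength   # slope along u
    dy = (h(x, y + 1) - h(x, y - 1)) * strength   # slope along v
    n = (-dx, -dy, 2.0)                           # 2.0: central diff spans 2 texels
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

flat = [[0.0] * 4 for _ in range(4)]
print(height_to_normal(flat, 1, 1))   # flat surface → straight-up normal
```

The same differencing translates directly to a pixel shader by sampling the height texture at offset texel coordinates.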

Posted 04 February 2011 - 07:48 PM

In my engine I send a normal and a tangent, leaving the bitangent to be computed as the cross product of the normal and the tangent. Of course, there are two unit vectors perpendicular to both the normal and the tangent, and we must choose the proper one. The idea I use is to send a normal, a tangent, and the determinant of the matrix formed by (tangent, bitangent, normal). The determinant gives the handedness of the basis. So in my vertex shader I do something like:

bitangent = determinant * cross(normal, tangent)

Passing a normal, a tangent, and a determinant takes 7 floats, whereas passing the whole basis takes 9 floats. Two floats may not be much, but it's a good start. Maybe someone has a better idea on how to avoid sending too much data to the vertex shader?
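In plain Python, the determinant trick looks roughly like this (a sketch of the math, not the HLSL itself):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def handedness(tangent, bitangent, normal):
    # det([T B N]) equals dot(cross(T, B), N): +1 right-handed, -1 left-handed.
    # This is what the mesh preprocessing step stores per vertex.
    c = cross(tangent, bitangent)
    return 1.0 if sum(x * y for x, y in zip(c, normal)) > 0.0 else -1.0

def reconstruct_bitangent(normal, tangent, det):
    # What the vertex shader does: bitangent = det * cross(normal, tangent)
    return tuple(det * c for c in cross(normal, tangent))
```

Note there is a sign-convention choice here: whether the stored determinant pairs with `cross(normal, tangent)` or `cross(tangent, normal)` has to match whatever the preprocessing step assumed.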

Posted 04 February 2011 - 07:55 PM

Well, rather than restructuring my engine to use vertex declarations (instead of FVF), I decided to go a different route:

http://www.gamedev.net/topic/594781-hlsl-height-map-to-normal-algorithm/

Posted 05 February 2011 - 10:03 AM

Personally, I send normals, tangents, and bitangents to the vertex shader. I suppose I'd rather compute all that stuff once per mesh and just construct the tangent-space matrix in the vertex shader, instead of having it compute anything more. Not really sure if that's more efficient or not in the long run.

Posted 05 February 2011 - 10:50 AM

I'm currently doing the same, as I'm running into shader complexity issues (PS 2.0 / running out of instruction space). Efficiency one way or the other will depend on many factors, such as scene complexity and geometry complexity, and will probably need to be looked at on a per-project basis.

Posted 05 February 2011 - 11:24 AM

I also used to send tangent/bitangent/normal, but sending tangent/normal/determinant is lighter not only on memory but also on computation in the vertex shader. For instance, with skeletal animation you have a lot of matrix multiplications, and you would have to transform all of tangent/bitangent/normal. With the latter approach you transform only the tangent and normal, and once all the computations are done, you simply compute the bitangent with the cross product. In general, this saves 1/3 of the matrix transformations (since you avoid dealing with the bitangent), assuming the cross product has negligible cost.
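A quick sanity check of that claim (plain Python): for a rotation, transforming only the normal and tangent and then taking the cross product gives the same bitangent as transforming the bitangent directly.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def transform(m, v):
    # 3x3 matrix times column vector
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

rot_z = ((0, -1, 0), (1, 0, 0), (0, 0, 1))   # 90-degree rotation about Z
normal, tangent = (0, 0, 1), (1, 0, 0)
bitangent = cross(normal, tangent)           # (0, 1, 0)

# Transforming N and T only, then re-crossing, matches transforming B directly:
assert transform(rot_z, bitangent) == cross(transform(rot_z, normal),
                                            transform(rot_z, tangent))
```

This equality holds for rotations (determinant +1); a mirrored transform flips the sign, which is exactly what the stored per-vertex determinant accounts for.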

Posted 05 February 2011 - 12:30 PM

For those interested in a thorough explanation of normal mapping and tangent spaces, I suggest you take a look at this -> http://image.diku.dk...ikkelsen.08.pdf

It's a big file, 50 megs, but at least the server is good.

A clear, order-independent way to define which triangles are to share (averaged) tangents is given at the end of page 47. Order independence is important because otherwise you risk different tool pipelines creating different tangent spaces for what is essentially the same mesh but with triangles or vertices in a different order. It's also important for getting perfectly mirrored tangent spaces for mirrored meshes.

Adjacent triangles must share tangent space if and only if four criteria are met. The first three are trivial: the triangles must share position, normal, and texture coordinate at both end-points of the shared edge. The fourth rule means that the winding (CW/CCW) of the texture coordinates must be the same on both triangles. The fourth rule is important in the general case, but it also allows us to handle mirroring correctly (a split will be created).
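The fourth criterion can be checked with the sign of the triangle's signed area in UV space; a small Python sketch:

```python
# UV winding of a triangle: the sign of twice its signed area in texture space.
def uv_winding(uv0, uv1, uv2):
    area2 = ((uv1[0] - uv0[0]) * (uv2[1] - uv0[1])
             - (uv2[0] - uv0[0]) * (uv1[1] - uv0[1]))
    return 1 if area2 > 0 else -1   # +1 = CCW, -1 = CW (e.g. mirrored UVs)
```

Two triangles sharing an edge get averaged tangents only if this sign agrees (along with the shared position/normal/UV criteria); a disagreement, as across a mirror seam, forces a vertex split.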

The thesis also shows how most commercial products out there have many problems.

Check pages: 44, 45, 52-56

Using the strategy of the thesis gives a perfect result, as shown on page 68 (see the tangent spaces on page 67).

Page 7 gives an explanation of the relation between bump mapping and normal mapping.

Hope someone finds this useful.

Cheers,

Morten
