Normal Mapping - The Facts

Blue*Omega    150
I've been messing around with normal maps in HLSL lately, and I'm having a hard time finding good, solid facts on the subject. There are plenty of places willing to give you example code, but everyone does it quite differently, and many "example" pieces of code "cheat" in some way (for example, a lot of them pass in the light direction instead of a position), which makes them useless in a general environment. On top of that, the different forms of normal maps add to the confusion. I spent a good couple of hours one day trying to figure out why my map seemed to be inverting itself when rotated, until I realized I was using an object-space map as if it were tangent space. GRRRRR!

So I'm looking for facts! Specifically:

1) What is the definition of, and the difference between, tangent space, object space, and world space?

2) In every example I've seen, the value from the normal map is multiplied by 2 and has 0.5 subtracted from it, but no one explains why:

float4 cBump = (2 * tex2D(BumpMap, nTex) - 0.5);

3) I've seen a lot of examples (but not all) multiply the light vector by the inverse world matrix. Once again, why?

4) I've seen several different methods of constructing a tangent-space matrix. Which one is right?

Any help is appreciated!

alexmac    122
1) Well, object space is the space you create an object's mesh in, usually centered around the origin. In your game the object gets translated to some position and maybe rotated, which puts it into world space. Tangent space is the coordinate space defined by a surface's normal, tangent, and binormal vectors.

2) Because the normal map is a texture, its color values range from 0 to 1, but normal components have the range -1 to 1. To fit them into the texture they are 'range compressed' by dividing by 2 and adding 0.5. To undo the range compression you must therefore multiply by 2 and subtract 1.

3) I got confused by all this crap with the light direction vector too. If we're talking GLSL here, there's no easy way to get the object-space light position in the shader, so you've got to calculate it yourself in your app and pass it in as a uniform, then multiply it by your TBN matrix to get it into tangent space.

4) I haven't got it with me, but Eric Lengyel's book has a nice explanation of how you generate the tangent and binormal vectors.
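
For reference, the per-triangle part of that calculation works out to something like the sketch below. It's written in HLSL syntax to match the rest of the thread, but you'd normally run it on the CPU for each triangle when loading the mesh and then accumulate and orthogonalize the results per vertex; the function name and parameter names are just illustrative.

float3 ComputeTriangleTangent(float3 p0, float3 p1, float3 p2,
                              float2 uv0, float2 uv1, float2 uv2)
{
    // Triangle edge vectors in object space
    float3 e1 = p1 - p0;
    float3 e2 = p2 - p0;

    // Matching edge deltas in texture (u, v) space
    float2 duv1 = uv1 - uv0;
    float2 duv2 = uv2 - uv0;

    // Solve the 2x2 system that maps the texture-space edges onto the
    // object-space edges; the direction of increasing u is the tangent
    float r = 1.0 / (duv1.x * duv2.y - duv2.x * duv1.y);
    float3 tangent = (e1 * duv2.y - e2 * duv1.y) * r;
    return normalize(tangent);
}

The binormal comes out of the same system using the v deltas instead, or from cross(normal, tangent) once the tangent has been orthogonalized against the vertex normal.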

okonomiyaki    548
Quote:
Original post by Blue*Omega

2) In every example I've seen, the value from the normal map is multiplied by 2 and has 0.5 subtracted from it, but no one explains why:
float4 cBump = (2 * tex2D(BumpMap, nTex) - 0.5);


I can answer this one. Think of the values a texture can hold: only 0.0 to 1.0 (or 0 to 255). If we are going to store a vector at each pixel, we need to translate (x,y,z) to (r,g,b). We are using normalized vectors of course, so the domain of each vector component is -1 to 1, and we need to map that to 0 to 1. This is simply done by dividing each component by 2 and adding 0.5 (compressing it and shifting it up into the 0 to 1 range). Of course you need to extract it correctly on the other end, so you multiply by 2 and subtract 1, which decompresses it back to -1 to 1. (The example you quoted subtracts 0.5, which is actually wrong; that gives you -0.5 to 1.5.)
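
In HLSL the decode step looks something like this (just a minimal sketch; BumpMap and nTex are the sampler and texture coordinate from the code in the original post):

// Fetch the encoded normal; each channel is in the 0..1 range
float3 nEncoded = tex2D(BumpMap, nTex).rgb;

// Expand back to -1..1 and renormalize, since filtering can shorten the vector
float3 nTangent = normalize(nEncoded * 2.0 - 1.0);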

I'm not the best one to answer the other questions; though I know the answers vaguely, I would probably just confuse you.

edit: dang, beaten

_the_phantom_    11250
1) Each describes a space relative to a set of axes.
World space is of course relative to the global axes, and is what you use when positioning objects in the world.
Object space is local to the object, so when you draw an object you give the coordinates of each vertex in that local space.
Tangent space is relative to the current vertex (when used in normal mapping), with the TBN matrix being used to transform from object space to tangent space (there is another name for it as well, which some find more intuitive when relating it to the space it defines).

2) IIRC it's to do with rescaling the normals stored in the normal map, since texture values can't be negative (see the explanations above).

3) In OpenGL, at least, the light position is in eye space (it's automatically transformed by the modelview matrix when submitted), so it must be transformed back to object space by multiplying with the inverse modelview matrix. So I'm guessing that in the D3D world the light ends up in world space, and you have to multiply it by the inverse world matrix to get it back into object space. Once in object space it can be put into tangent space by multiplying by the TBN matrix (see the sketch at the end of this post).

4) I would guess they all are; I personally use the one on Eric Lengyel's page (which I don't have a link to handy, but it can be found in some of the other normal mapping / tangent vector threads we've had).
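
To tie 3) and 4) together, here's a rough vertex shader sketch of the D3D/HLSL route described above. It assumes the mesh already carries per-vertex normal, tangent and binormal attributes, and that the application supplies the light position already transformed into object space (the world-space light position multiplied by the inverse world matrix); all of the names are just placeholders, not any particular engine's.

float4x4 WorldViewProj;   // set by the app
float3   LightPosObj;     // light position in object space: world light pos * inverse(World), done by the app

struct VS_IN
{
    float3 Pos      : POSITION;
    float3 Normal   : NORMAL;
    float3 Tangent  : TANGENT;
    float3 Binormal : BINORMAL;
    float2 Tex      : TEXCOORD0;
};

struct VS_OUT
{
    float4 Pos      : POSITION;
    float2 Tex      : TEXCOORD0;
    float3 LightDir : TEXCOORD1;   // light vector in tangent space
};

VS_OUT main(VS_IN vin)
{
    VS_OUT vout;
    vout.Pos = mul(float4(vin.Pos, 1.0), WorldViewProj);
    vout.Tex = vin.Tex;

    // Rows of the TBN matrix; this is the object-space -> tangent-space transform
    float3x3 tbn = float3x3(vin.Tangent, vin.Binormal, vin.Normal);

    // Object-space vector from the vertex to the light, pushed into tangent space
    float3 lightVecObj = LightPosObj - vin.Pos;
    vout.LightDir = mul(tbn, lightVecObj);

    return vout;
}

In the pixel shader you normalize LightDir, decode the normal from the map as shown earlier in the thread, and take the dot product of the two for the diffuse term.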

