

This topic is now archived and is closed to further replies.


Help with Bump Mapping concepts



Howdy all, I've been trying to get DOT3 bump mapping working lately and have been running into nothing but problems. What I've got so far:

1. Calculated the TBN matrix for each vertex
2. Created/loaded the normal map
3. Created a normalization cube map

Now, what I'm having trouble with is what comes next. From what I've read:

1) I'm supposed to create a half vector between the vertex and the light.
2) Bind texture 0 to the normal map, and texture 1 to the cube map.
3) Set up the DOT3 extension via texture functions (register combiners / glTexEnvi).
4) When rendering the object, use the standard texture coords for texture 0, and the binormal texture coords for texture 1 (which are 3D).
5) Render.

For some reason, no matter what, I just get black screens, or no results whatsoever. What am I missing?

~Main == Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

You got the whole thing a little mixed up.

The steps are like this:

1) compute light vector per vertex, normalize it

2) transform that vector to texture space, by multiplying it by the per-vertex TBN matrix (you can do that in a vertex shader). The matrix is made of the vertex normal, the binormal, and the tangent vector (you can drop one, and recreate it on the fly with a cross product).

3) you now have a light vector that is 'compatible' (i.e., in the same coordinate frame) with the local normal map. Supply it as texture coordinates to unit 1 (cubemap)

4) the hardware will interpolate the light vector over the polygon, but it will denormalize due to the linear interpolation. The lookup in the normalization cubemap fixes that.

5) you get the per-pixel light vector from the cubemap on unit 1. You have the per-pixel normal from the normal map on unit 0. DOT3 both together.

6) modulate with the base texture, colour, whatever. done.

7) if you want, you can take care of self-shadowing.

You don't need the half-angle vector, unless you want per-pixel specular bumpmapping. In that case, H is processed analogously to the light vector.
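On the CPU (or in a vertex shader), steps 1 and 2 above boil down to a normalize followed by a 3x3 matrix multiply. A minimal C++ sketch of that math; the vector type and function names are illustrative, not from the post:

```cpp
#include <cmath>

// Minimal 3-component vector math for the per-vertex setup.
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot3(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Steps 1-2: per-vertex light vector, rotated into tangent (texture) space.
// tangent, binormal and normal are the TBN basis at this vertex.
Vec3 toTangentSpace(Vec3 lightPos, Vec3 vertexPos,
                    Vec3 tangent, Vec3 binormal, Vec3 normal) {
    Vec3 L = normalize(sub(lightPos, vertexPos));
    // Dotting against each basis vector is the 3x3 TBN multiply.
    return { dot3(L, tangent), dot3(L, binormal), dot3(L, normal) };
}
```

With an orthonormal T, B, N, the three dot products are exactly the multiplication by the TBN matrix; the result is what gets fed to unit 1 as texture coordinates.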

Did you read the nVidia papers about bump mapping?

[edited by - Yann L on March 23, 2003 11:10:01 PM]

Yann, thanks for clearing that up.
I've read the NVidia papers; however, they covered more concept than application, and unfortunately I'm not well versed in register combiners or vertex shaders at this point. (In fact, bump mapping was my attempt to learn both of those.) I've looked over loads of demos with DOT3 bump mapping, but everyone does it a different way, and when trying to duplicate their code, I just wouldn't get any results.

A couple of mathematical questions:
1) lightV = vertex - light.pos ?
2) The result of the TBN·light multiplication is a 3x1 matrix, correct?
3) So our cube map is in texture unit 1, and we pass the result of (2) in as texture coords? (glMultiTexCoord3f?) Or do we get the result via some texture lookup?
5) Do we DOT the result of those 2 via register combiners? Or is there actually a math operation we want to do?

Thanks yann.


1) Light_vector = normalize( Light.pos - vertex.pos )

2) The TBN matrix is a 3x3 matrix (a 4x4 is not needed, since we are dealing with direction vectors only). The result of a matrix3x3 * vector3 multiplication is a vector3 that has been mapped to a different coordinate frame. It's just a standard vector with x, y and z components (it lacks the w component, which is assumed 0).

3) The interpolated result of (2) is already the per-pixel light vector we need, but it is denormalized. So you have to pass it as an index into the cubemap, which will normalize it through a lookup. Pass the result of (2) as the s, t, r coordinates into the cubemap on unit 1. The result is the normalized per-pixel light vector.

5) You simply have to compute the dot product between the texture results of unit 0 and unit 1. That's: c(final) = ctex0 DOT ctex1. The result is a greyscale value representing the bump illumination of the fragment. If you are not comfortable with register combiners, you can use ARB_texture_env_dot3 as a starting point.
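Points (3) and (5) can be mirrored on the CPU to sanity-check expected values: the cubemap lookup amounts to a renormalization, and the DOT3 stage is a clamped dot product. A hedged C++ sketch; the names are made up for illustration:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3f { float x, y, z; };

// What the normalization cubemap lookup effectively does: return the
// input direction renormalized to unit length.
Vec3f renormalize(Vec3f v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Per-fragment bump intensity: n is the unit normal from the normal map,
// interpolatedL is the (possibly shortened) interpolated light vector.
float dot3Intensity(Vec3f n, Vec3f interpolatedL) {
    Vec3f l = renormalize(interpolatedL);
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return std::max(d, 0.0f);   // clamp: back-facing bumps go to black
}
```

Note that the interpolated vector only loses length, not direction, so the renormalized lookup result still points the right way.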

According to the nVidia Cg examples, you can also get the light vector like this:
D3DXVECTOR3 lightPos(LightPos[i][0], LightPos[i][1], LightPos[i][2]);
D3DXVECTOR3 lightDirection; // in world space
D3DXVec3Normalize(&lightDirection, &lightPos);

D3DXMATRIX world2obj;
D3DXMatrixInverse(&world2obj, 0, &matWorld);
D3DXVECTOR3 lightDirectionInObjectSpace;
D3DXVec3TransformNormal(&lightDirectionInObjectSpace, &lightDirection, &world2obj);

Yann, thanks a lot for making things clearer for me. I have 2 questions:

1. How can I do simple bumpmapping without lighting?
2. I read several articles about DOT3 lighting, and I didn't understand two things. First, what's a cube map? What does it do, and how do I generate it? Second, how can I get the sTangent and tTangent coords for a model?


The S/T tangents are simply vectors specifying the direction of increase of the S/T texture coordinates across the surface. Think of them as Delta(S) and Delta(T) with a Z coordinate (given by the polygon). NOTE: the T tangent is often called the "binormal", while the S tangent is called the "tangent". These, along with the vertex normal, make up the TBN matrix (Tangent, Binormal, Normal).
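One common way to derive those tangents for an arbitrary mesh (not necessarily the method either poster used) is to solve, per triangle, the 2x2 system relating position deltas to texture-coordinate deltas. A hedged C++ sketch:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

// Computes the S tangent and T tangent (binormal) for one triangle from
// its positions and UVs. Assumes the UV triangle is non-degenerate.
void triangleTangents(V3 p0, V3 p1, V3 p2,
                      float u0, float v0, float u1, float v1,
                      float u2, float v2,
                      V3& tangent, V3& binormal) {
    V3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    V3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    tangent  = { (e1.x * dv2 - e2.x * dv1) * r,
                 (e1.y * dv2 - e2.y * dv1) * r,
                 (e1.z * dv2 - e2.z * dv1) * r };
    binormal = { (e2.x * du1 - e1.x * du2) * r,
                 (e2.y * du1 - e1.y * du2) * r,
                 (e2.z * du1 - e1.z * du2) * r };
}
```

For smooth per-vertex tangents you would then average these across the triangles sharing each vertex, and typically orthogonalize against the vertex normal.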

A normalisation cube map simply means that, instead of storing colours in a cube map, you store vectors, just like in your normal map. Imagine having an array of vectors specifying every possible direction on a sphere around the origin. Now, rather than storing those vectors as an array, store them in the pixels of a texture map, with each vector normalized and its components remapped from [-1, 1] into the [0, 255] range of the texel channels. Do that for all 6 faces of a cube, and that's your normalisation cube map.
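Building such a map on the CPU is straightforward. A hedged sketch of filling the +Z face; face orientation conventions vary between APIs, so treat the axis mapping as illustrative:

```cpp
#include <cmath>
#include <cstdint>

// Encode a direction as an RGB texel: normalize, then remap each
// component from [-1, 1] to [0, 255].
void encodeTexel(float x, float y, float z, std::uint8_t rgb[3]) {
    float len = std::sqrt(x * x + y * y + z * z);
    x /= len; y /= len; z /= len;
    rgb[0] = static_cast<std::uint8_t>((x * 0.5f + 0.5f) * 255.0f);
    rgb[1] = static_cast<std::uint8_t>((y * 0.5f + 0.5f) * 255.0f);
    rgb[2] = static_cast<std::uint8_t>((z * 0.5f + 0.5f) * 255.0f);
}

// Fill one face (+Z here): each texel stores the unit direction that
// points from the origin through that texel on the cube.
void fillPosZFace(int size, std::uint8_t* pixels /* size*size*3 bytes */) {
    for (int t = 0; t < size; ++t) {
        for (int s = 0; s < size; ++s) {
            float x = 2.0f * (s + 0.5f) / size - 1.0f;  // [-1,1] across face
            float y = 2.0f * (t + 0.5f) / size - 1.0f;
            encodeTexel(x, y, 1.0f, &pixels[(t * size + s) * 3]);
        }
    }
}
```

Repeating this for all six faces (with the fixed axis and signs swapped appropriately) gives the complete normalisation cube map.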

As far as bump mapping without a light source goes... bump mapping itself is considered a "lighting" method more than anything else, i.e. it doesn't exist without a light source.

A couple of other questions I had:
1) Does it matter what internal format you load your normal map in? All the demos I see use GL_RGB8; does it matter if I'm using a different format?
2) Would the following be sufficient as the glTexEnvi DOT3 setup? I'm getting a pretty hard result, just white and black, without any bumps showing... wonder what I'm missing. (Couldn't find a manual on it.)

//Set up texture environment to do (tex0 dot tex1)*color




//Draw object
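For reference, GL_DOT3_RGB (from ARB_texture_env_dot3) computes 4 * dot(t0 - 0.5, t1 - 0.5) and clamps the result to [0, 1]. A hedged CPU sketch of that formula, with illustrative values; this is the spec arithmetic, not the poster's lost code:

```cpp
#include <algorithm>
#include <cmath>

// The GL_DOT3_RGB combine function: both operands are texel colours with
// channels in [0,1], treated as range-compressed vectors (0.5 = zero).
float dot3TexEnv(const float t0[3], const float t1[3]) {
    float d = 4.0f * ((t0[0] - 0.5f) * (t1[0] - 0.5f) +
                      (t0[1] - 0.5f) * (t1[1] - 0.5f) +
                      (t0[2] - 0.5f) * (t1[2] - 0.5f));
    return std::min(std::max(d, 0.0f), 1.0f);   // output saturates to [0,1]
}
```

Because the output saturates, feeding it inputs that were not range-compressed around 0.5 (e.g. a normal map stored as raw signed values, or the wrong texture bound to a unit) can produce exactly the hard black-and-white result described above.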

3) Just to clear this up again: the TBN matrix is in fact set up that way? That is, first column = Tangent, second = Binormal, third = Normal?
4) Are we supposed to use the binormal and tangent vectors for anything else?



Colt "MainRoach" McAnlis

[edited by - duhroach on March 24, 2003 10:30:41 AM]



