DOT3 normal mapping without pixel shaders

Started by Jimfing
13 comments, last by Jimfing 19 years, 10 months ago
Hi, I'm trying to get DOT3 normal mapping working on GeForce 2 class hardware (since it does support the DOT3 colour operation). It kind of works, but I'm getting odd artifacts that I think may be a result of how I put the light vector (which I put into texture space) into the diffuse colour component in the vertex shader. Remember that the colours are clamped to [0..1] and the vector can be [-1..1], so I do:

mad oD0.xyz, r4, c8, c8

where c8 is (0.5, 0.5, 0.5, 0.5) and r4 is the light vector in texture space. This should put it into the range [0..1]. However, when using a pixel shader you would then convert it *back* into the range [-1..1], surely? You can't do that with the fixed-function pixel pipeline.

By the way: I have the colour operation set up to do the DOT3 operation using the diffuse colour (which is actually the light vector) as the first argument, and the normal map as the second argument.

Maybe I need to normalise the transformed light vector before clamping it into the range [0..1]?

Thanks for any help.
Jim.
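For reference, the fixed-function setup I'm describing boils down to something like this (a rough sketch; PackLightVector and SetupDot3Stage are just made-up helper names, and the packing function is the CPU equivalent of the mad instruction above):

#include <d3d8.h>

// Pack a (normalised) tangent-space light vector into a vertex colour:
// [-1..1] -> [0..1].  Same mapping as "mad oD0.xyz, r4, c8, c8" above.
D3DCOLOR PackLightVector(float x, float y, float z)
{
    return D3DCOLOR_XRGB((int)((x * 0.5f + 0.5f) * 255.0f),
                         (int)((y * 0.5f + 0.5f) * 255.0f),
                         (int)((z * 0.5f + 0.5f) * 255.0f));
}

// Stage 0: dot3(diffuse, normal map), with the diffuse colour carrying
// the packed light vector and the normal map bound as the texture.
void SetupDot3Stage(IDirect3DDevice8* dev, IDirect3DTexture8* normalMap)
{
    dev->SetTexture(0, normalMap);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_DIFFUSE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_DISABLE);
}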
The DX8 SDK had a sample on this, I believe?
Yeah, I've worked it out now - it was the way I was generating the tangent vectors that was wrong.

I hate it when a problem causes you to look in the wrong place :-\

Cheers guys.
quote:Original post by Jimfing
Hi, I'm trying to get DOT3 normal mapping on GeForce 2 style hardware (since it does support the DOT3 colour operation).


I've got a GF2 MX400 or so but it DOES support the DOT3 colour operation. Maybe you don't have the latest Detonator drivers? Check your list of extensions. DOT3 should be listed.

Jimfing,

Maybe you can help me out? I'm using DX7 and transforming/lighting my own verts (not using TnL). I believe I'm computing tangent spaces correctly, as I have used two different functions to compute them and get the same results.

Basically, in the sample I'm working on everything looks right except that my point light appears to be rotating. In my program I have a vase with a normal map texture on it. The vase rotates in place and the point light is off to the right of the vase. What I expect is that the side away from the light will be dark and the side close to the light will be lit. What actually happens is that only one side ever gets lit, and it appears that the light is rotating with the vase.

This is what I'm doing... I take the vase's matrix (the one that causes the vase to rotate) and get its inverse. I then convert the point light position into model space by multiplying it by the inverse vase matrix. I subtract the model-space vase vertex from the model-space light position and transform the result with the tangent-space matrix... the result is the tangent-space light vector, which I store in the color value of the vertex...
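In code, that boils down to roughly this (a rough sketch using D3DX math types; TangentSpaceLight and the variable names are just placeholders for what my code actually uses):

#include <d3dx9.h>

// Per-vertex tangent-space light vector for a point light, computed on
// the CPU (sketch only).
D3DCOLOR TangentSpaceLight(const D3DXMATRIX& vaseWorld,    // the vase's matrix
                           const D3DXVECTOR3& lightWorld,  // light position (world space)
                           const D3DXVECTOR3& vertexPos,   // vertex position (model space)
                           const D3DXVECTOR3& T,           // tangent
                           const D3DXVECTOR3& B,           // binormal
                           const D3DXVECTOR3& N)           // normal
{
    // Bring the light position into model space with the inverse vase matrix.
    D3DXMATRIX invWorld;
    D3DXMatrixInverse(&invWorld, NULL, &vaseWorld);
    D3DXVECTOR3 lightModel;
    D3DXVec3TransformCoord(&lightModel, &lightWorld, &invWorld);

    // Vector from the vertex towards the light, normalised.
    D3DXVECTOR3 L = lightModel - vertexPos;
    D3DXVec3Normalize(&L, &L);

    // Rotate into tangent space: one dot product per basis vector.
    D3DXVECTOR3 Lt(D3DXVec3Dot(&L, &T), D3DXVec3Dot(&L, &B), D3DXVec3Dot(&L, &N));

    // Pack [-1..1] into [0..1] and store it in the vertex color.
    return D3DCOLOR_XRGB((int)((Lt.x * 0.5f + 0.5f) * 255.0f),
                         (int)((Lt.y * 0.5f + 0.5f) * 255.0f),
                         (int)((Lt.z * 0.5f + 0.5f) * 255.0f));
}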

I think my problem has to do with the light vector generation...

Now my vase's matrix has scaling and positioning info as well as rotation... could this be the problem? The sample code (from the PowerVR website) I'm basing this on only had rotation...

If you can give some insight I would be very grateful (I'll try to see if I can get hold of the DX8 SDK; I have DX7 and DX9 loaded right now...)

wxb1 - I'm only using directional lights at the moment.

The idea is to use the normal, tangent and binormal of each vertex to transform the light-vector (which is in world space) into texture space (which is what the tangent and binormal describe).

So I do this in the vertex shader:

1. Transform the normal, tangent and binormal by the world matrix (remember the world matrix takes the mesh's vertices from object space into world space - we need to do the same to the normals and tangents).

2. Transform the light from world space into texture space using the transformed normal, tangent and binormal.
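In plain code (rather than shader assembly) those two steps come out to roughly this for a directional light - just a sketch with D3DX types, and LightToTextureSpace is a made-up helper name:

#include <d3dx9.h>

// CPU-side version of the two steps above for a directional light.
D3DXVECTOR3 LightToTextureSpace(const D3DXMATRIX& world,
                                const D3DXVECTOR3& lightDirWorld, // direction towards the light
                                D3DXVECTOR3 N, D3DXVECTOR3 T, D3DXVECTOR3 B)
{
    // Step 1: take the basis vectors from object space into world space
    // (TransformNormal ignores the translation part of the matrix).
    D3DXVec3TransformNormal(&N, &N, &world);
    D3DXVec3TransformNormal(&T, &T, &world);
    D3DXVec3TransformNormal(&B, &B, &world);

    // Step 2: project the world-space light vector onto that basis,
    // giving the light vector in texture (tangent) space.
    return D3DXVECTOR3(D3DXVec3Dot(&lightDirWorld, &T),
                       D3DXVec3Dot(&lightDirWorld, &B),
                       D3DXVec3Dot(&lightDirWorld, &N));
}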

If you think your tangents are wrong, then create the simplest meshes you can, such as a box, a sphere, or even just a couple of polygons. Then just draw the normal, tangent and binormal as 3 separate coloured lines. It will become immediately apparent whether or not it is the tangent space vectors.
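Something along these lines is enough for drawing them (a rough sketch assuming a DX9-style device and D3DX; DrawBasisVectors is just a made-up helper name):

#include <d3dx9.h>

struct LineVertex { D3DXVECTOR3 pos; D3DCOLOR colour; };
#define LINE_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

// Draw the normal (blue), tangent (red) and binormal (green) as short
// coloured lines from a vertex, so bad tangent-space vectors stand out.
void DrawBasisVectors(IDirect3DDevice9* dev, const D3DXVECTOR3& p,
                      const D3DXVECTOR3& n, const D3DXVECTOR3& t,
                      const D3DXVECTOR3& b, float len)
{
    LineVertex v[6] =
    {
        { p, D3DCOLOR_XRGB(0, 0, 255) }, { p + n * len, D3DCOLOR_XRGB(0, 0, 255) },
        { p, D3DCOLOR_XRGB(255, 0, 0) }, { p + t * len, D3DCOLOR_XRGB(255, 0, 0) },
        { p, D3DCOLOR_XRGB(0, 255, 0) }, { p + b * len, D3DCOLOR_XRGB(0, 255, 0) },
    };
    dev->SetTexture(0, NULL);                    // plain coloured lines
    dev->SetRenderState(D3DRS_LIGHTING, FALSE);
    dev->SetFVF(LINE_FVF);
    dev->DrawPrimitiveUP(D3DPT_LINELIST, 3, v, sizeof(LineVertex));
}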

Not sure what else to suggest - let me know how you get on.
quote:Original post by Jimfing
However, when using a pixel shader you would then convert it *back* into the range [-1..1] surely?

You can't do that with the fixed-function pixel pipeline.

Yes, if you wrote a pixel shader you'd have to do a _bx2 to put it back into the -1 to 1 range. However, if you read the DOT3 docs for the fixed pipeline you'd see they do it for you, so yes, you can do it with the fixed pipeline.
It sounds like you're doing it a little differently than I am... you're doing everything in world space... I'm doing it in model space... But I think your suggestion about using a simpler model is right... I had thought about that... it's a very good idea... I will also try to see if I can display the tangents etc... thanks for the reply...
Forgot to mention that I invert the world matrix beforehand.

Namethatnobodyelsetook - thanks for clearing that up. Usual case of me not RTFM

Cheers
Jim.
OK... if you're using the inverse world matrix then you're moving stuff into model space... one last question... I create my model-space light vector by subtracting my model-space light position from my model-space vertex position... In my code I know the light's position in world space, but in other samples I've seen the view vector is used to compute the light vector... how are you computing the model-space light vector?

(I don't see anything wrong with my code based on what I've read and the samples I've seen... but I'm going to start with a simple model this week and try to draw some visual cues on-screen...)


