Dot3 and the normalization cube map



I implemented basic dot3 lighting (no pixel shader, just vs) and decided to simply skip the cube map. (The normals are interpolated across the surface.) It looks pretty good as-is, and I'm happy with the results...compared to vertex lighting, at least. Now, I've seen Yann mention several times that this looks 'cheesy' due to the lack of renormalization. My question is, just how important is the renormalization in the pixel stage? How denormalized are these normals getting, anyway? Will I see significant visual improvement by using the cubemap? (Don't tell me to use PS. It's not an option.)

Share on other sites
It depends on your model. Any swift change in the normal across a face will cause it to look bad. If your model is all flat edges like cubes, or if it's highly tessellated so the normals don't change much per face, the error won't be very noticeable. If the normal changes by a large amount across a face, you'll introduce a lot of error.
[ASCII diagram: the dots trace the unit arc the normal should sweep through; the x's trace the straight chord that linear interpolation actually produces, sagging below the arc.]
Here, if we change the normal by 90 degrees across the face, the dots show the curve the normal should take, and the x's show the linear interpolation of it. At the worst point (the midpoint) the interpolated normal is (0.5, 0.5), a vector of length about 0.707, when it should be (0.707, 0.707), a unit vector, so your diffuse term comes out at roughly 71% of the correct brightness.
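To put numbers on the midpoint case, here's a small Python sketch (plain math, not shader code) of the interpolated-normal error:

```python
import math

# Two unit normals 90 degrees apart, as in the diagram above.
n0 = (1.0, 0.0)
n1 = (0.0, 1.0)

# Linear interpolation at the midpoint of the face:
mid = ((n0[0] + n1[0]) / 2.0, (n0[1] + n1[1]) / 2.0)   # (0.5, 0.5)
length = math.hypot(mid[0], mid[1])                    # ~0.707, not 1.0

# A light shining straight down the correct mid normal (0.707, 0.707):
light = (math.sqrt(0.5), math.sqrt(0.5))
unrenormalized = mid[0] * light[0] + mid[1] * light[1]
renormalized = (mid[0] / length) * light[0] + (mid[1] / length) * light[1]

print(round(length, 3), round(unrenormalized, 3), round(renormalized, 3))
# -> 0.707 0.707 1.0
```

The interpolated normal's length is the exact factor the diffuse term is dimmed by, which is what renormalization (cubemap or math) puts back.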

Share on other sites
Just another note: you don't need a pixel shader to get normalization. Output the tangent-space light vector from the vertex shader, with your UVs as a second output. Bind a normalization cubemap to stage 0, and your normal map to stage 1. Read the cubemap (selectarg1, texture), dot3 it with the normal map (dot3, texture, current), then mix with your real texture in stage 2, or in a second pass if you don't have more than two stages available.
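In case it helps to see what those stages actually compute, here's a plain-Python sketch of the per-pixel arithmetic (the function names are mine, not D3D's, and a real normalization cubemap also quantizes its output to 8 bits per channel):

```python
import math

def sample_normalization_cubemap(v):
    # A normalization cubemap looked up with direction v returns v / |v|.
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot3(a, b):
    # What the dot3 texture op does with signed vectors, clamped at zero.
    return max(0.0, sum(x * y for x, y in zip(a, b)))

# Tangent-space light vector arriving at some pixel, shortened to half
# length by interpolation across the triangle:
light = (0.0, 0.3, 0.4)

# Stage 0 renormalizes via the cubemap; stage 1 dot3s with the normal map.
normal = (0.0, 0.0, 1.0)          # a flat normal-map texel, for example
diffuse = dot3(sample_normalization_cubemap(light), normal)
print(round(diffuse, 3))          # 0.8, instead of the 0.4 you'd get raw
```

The raw dot would have been 0.4 because the interpolated light vector only has length 0.5; the cubemap lookup restores it to unit length before the dot.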

Share on other sites
I use normalization cubemaps when a light is very close to a wall with medium-sized polygons. This is equivalent to a light further from large polygons - it's all relative.

It helps with specular especially. Diffuse can usually get away without it, although your bumps may become washed out...
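A quick back-of-the-envelope in Python shows why specular suffers so much more than diffuse: the length error gets raised to the specular power (the 90% length and shininess of 32 below are just illustrative numbers):

```python
# Suppose interpolation shrinks the half-angle vector to 90% of unit length.
length_error = 0.9
diffuse_scale = length_error            # diffuse only dims by 10%...
specular_scale = length_error ** 32     # ...but a shininess-32 highlight
                                        # drops to ~3% of its brightness
print(round(diffuse_scale, 2), round(specular_scale, 2))
# -> 0.9 0.03
```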

Share on other sites
Quote:
Original post by Promit
I implemented basic dot3 lighting (no pixel shader, just vs) and decided to simply skip the cube map. (The normals are interpolated across the surface.) It looks pretty good as-is, and I'm happy with the results...compared to vertex lighting, at least. Now, I've seen Yann mention several times that this looks 'cheesy' due to the lack of renormalization. My question is, just how important is the renormalization in the pixel stage? How denormalized are these normals getting, anyway? Will I see significant visual improvement by using the cubemap? (Don't tell me to use PS. It's not an option.)

If you are using diffuse lighting (not raising the result to a power, which I assume you are doing since you are not using a PS), then it is INCORRECT to normalize the vector. You should keep it denormalized. The reason is that the dot product is linear (it distributes over weighted sums), so keeping it denormalized is more correct (save the clamp to zero). That is,

a*dot(BumpMapSamp1, LightDir) + b*dot(BumpMapSamp2, LightDir)

is equivalent to:

dot(BumpMapSamp1*a + BumpMapSamp2*b, LightDir)

which is the equivalent of keeping the normal denormalized, and it still allows the filtering to work. Normalizing it is wrong.
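The property being invoked here is linearity: the dot product distributes over the weighted average a texture filter takes, so "filter then light" equals "light then filter". It's easy to check numerically (the sample texels below are made up, but unit length):

```python
# Two hypothetical unit-length normal-map texels and bilinear weights.
s1 = (0.36, 0.48, 0.80)
s2 = (-0.48, 0.36, 0.80)
light = (0.0, 0.6, 0.8)
a, b = 0.25, 0.75                 # filter weights, a + b = 1

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Filter first, then light:
blended = tuple(a * x + b * y for x, y in zip(s1, s2))
lhs = dot(blended, light)

# Light first, then filter:
rhs = a * dot(s1, light) + b * dot(s2, light)

print(abs(lhs - rhs) < 1e-12)     # True: the two orders agree
```

Note the blended normal is shorter than unit length whenever s1 and s2 disagree; normalizing it afterwards would break this equivalence, which is exactly the argument for leaving diffuse denormalized.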

Share on other sites
Quote:
 If you are using diffuse lighting (not raising the result to a power, which I assume you are doing since you are not using a PS), then it is INCORRECT to normalize the vector. You should keep it denormalized. The reason is that the dot product is linear, so keeping it denormalized is more correct (save the clamp to zero). That is

The vectors in the dot product should be normalized. What we care about is the angle between the light vector and the normal, and we only get that if both have unit length. Otherwise the result depends on the lengths of the vectors, which is wrong. I don't know where you heard that normalizing is wrong, but it's not true.

Share on other sites
Texture filtering will denormalize vectors in a normal map, but this is not necessarily bad, as it reduces aliasing. Anyway, this is not the issue here.

Interpolating the light vector across a triangle will denormalize it, and this is always wrong, but it won't always be noticeable. Using a cubemap is more correct, and renormalizing with math ops (pixel shader) gives better accuracy still and is often faster on modern hardware.

Share on other sites
Normalizing a light vector in the vertex shader is wrong, because what you want to do is interpolate the light position across the polygon. Positions are linear, normalized vectors are not.

Only per-pixel do you need to normalize - either with a cubemap or math.

That said, normalizing the vector in the vertex shader is often almost right, and serves as a form of compression if you need to get the L vector squeezed into a diffuse or specular interpolator.
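The position-vs-direction point is easy to demonstrate in Python (the light and vertex positions below are made up): with a light close to a big polygon, normalizing per vertex and then interpolating lands on the wrong *direction*, which no amount of per-pixel renormalization can fix, while interpolating the raw position-like vector and normalizing per pixel gives the right one.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def lerp(u, v, t):
    return tuple((1 - t) * a + t * b for a, b in zip(u, v))

light_pos = (2.0, 1.0, 0.0)                  # light hovering near the surface
v0, v1 = (-4.0, 0.0, 0.0), (4.0, 0.0, 0.0)   # one edge of a large polygon

L0 = tuple(l - p for l, p in zip(light_pos, v0))   # (6, 1, 0)
L1 = tuple(l - p for l, p in zip(light_pos, v1))   # (-2, 1, 0)
t = 0.5                                            # pixel at the edge midpoint

# Right: interpolate the unnormalized (position-like) vector, then
# normalize per pixel.
correct = normalize(lerp(L0, L1, t))

# Wrong: normalize per vertex, interpolate, then renormalize per pixel.
vs_normalized = normalize(lerp(normalize(L0), normalize(L1), t))

print(tuple(round(c, 3) for c in correct))        # (0.894, 0.447, 0.0)
print(tuple(round(c, 3) for c in vs_normalized))  # (0.149, 0.989, 0.0)
```

The two directions are wildly different here because the light is close relative to the polygon size; with a distant light the two L vectors are nearly parallel and the per-vertex normalization is indeed "almost right", as noted above.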

Share on other sites
You should never have to normalize a vector in the vertex shader. All input vectors should already be unit length, and the "un-normalization" comes in from interpolating the vectors as Namethatnobodyelsetook explained before.

Besides:

Quote:
 Original post by Promit...(Don't tell me to use PS. It's not an option.)

The OP said that he couldn't use a PS anyway - just fixed function. So Namethatnobodyelsetook provided the only option in his note.

Promit, you will see the faceted appearance on your model if normalization is an issue. It all depends on the screen space size of the polygons and the change in normal from vertex to vertex.

I would recommend trying it to see how much of a change it produces - I notice a very big difference using the normalize function in the PS, but it really does depend on the model you are rendering.