Easy Per Pixel Lighting

Started by pierceblaylock
10 comments, last by pierceblaylock 16 years, 2 months ago
I'm doing some experimenting with per pixel lighting using normal maps and I seem to have come up with a really easy implementation that doesn't require tangents or binormals. Unless I'm doing something really wrong?

I noticed that all the implementations I found on the net so far use normals, tangents and binormals. This is what I'm confused about. Why are the tangent and binormal needed for each vertex? Why can't per pixel lighting be done with just the vertex normal and the normal map?

I tried implementing it myself. In my first attempt I followed the approach everyone else uses and used tangents and binormals. It appears as though the tangent and binormal are used to transform everything (i.e. normal, light ray, etc...) into tangent space to perform the lighting calculation (i.e. the dot product). What I don't understand is, why does everything need to be transformed into tangent space? In my second attempt I stripped out the tangent and binormal stuff and the final rendered result is exactly the same. The only difference is, the shader is shorter and runs faster! Why is everyone using expensive tangents and binormals?

Here is what I did in a nutshell:

1. The vertex normal is passed to the shader as a normalized 3D vector.
2. The position of the light source is passed in world space.
3. The light position along with the vertex position in world space is used to compute the to_light vector (i.e. the direction to the light), which is normalized.
4. Then in the pixel shader, a normal is read from the normal map using the u,v coords for the pixel, which is also normalized.
5. This pixel normal is then combined with the vertex normal and normalized.
6. Finally a dot product is done between the combined normal and the direction to the light vector.
7. The result of the dot product is multiplied with the pixel's diffuse color to give the final pixel color.

I'll point out that there appears to be a lot of normalizing going on in this process. However, in reality there isn't very much at all. Most of the vectors (such as the vertex and pixel normals) are normalized to begin with.

I'm not very mathematically inclined, so I don't know if my theory is correct. But all I can say is, on the screen the results look the same (well, to my untrained eye anyway). Does anyone care to explain why everyone uses tangents and binormals?
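For reference, here is a minimal GLSL sketch of roughly what those steps describe; the uniform and varying names are illustrative, not the actual shader.

// Vertex shader -- sketch of steps 1-3 (names are illustrative)
uniform mat4 u_world;           // object-to-world matrix
uniform mat3 u_worldRot;        // rotation part of u_world (assumes uniform scale)
uniform vec3 u_lightPosWorld;   // light position, already in world space (step 2)

varying vec3 v_normal;          // vertex normal in world space (step 1)
varying vec3 v_toLight;         // direction to the light in world space (step 3)
varying vec2 v_uv;

void main()
{
    vec4 worldPos = u_world * gl_Vertex;
    v_normal  = normalize(u_worldRot * gl_Normal);
    v_toLight = u_lightPosWorld - worldPos.xyz;     // normalized per pixel below
    v_uv      = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Pixel shader -- sketch of steps 4-7
uniform sampler2D u_normalMap;
uniform sampler2D u_diffuseMap;

varying vec3 v_normal;
varying vec3 v_toLight;
varying vec2 v_uv;

void main()
{
    // step 4: read and unpack the stored normal (assuming it is stored in [0,1])
    vec3 texNormal = normalize(texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0);
    // step 5: "combine" = add the vertex normal and the pixel normal, renormalize
    vec3 n = normalize(normalize(v_normal) + texNormal);
    // steps 6-7: N dot L, modulate the diffuse color
    float ndotl = max(dot(n, normalize(v_toLight)), 0.0);
    gl_FragColor = texture2D(u_diffuseMap, v_uv) * ndotl;
}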
I too have done this for deferred shading. But the problem I had was having to either pass in the model matrix for each "model" or pre-transform each vertex. Are you doing either of these things? If so, are you doing that for the normal, or are you multiplying it by gl_NormalMatrix? Please give a little more detail as to what "combined" means in step 5. Thanks!
Douglas Eugene Reisinger II
Projects/Profile Site
Both the vertex position and the vertex normal are transformed into world space in the vertex shader. This transformation is essentially free since it has to be done anyway in order to render the model in the correct position.

The transformed vertex position is used to work out the direction to the light source. The light source position is already in world space when it is passed to the shader.

Combining the vertex normal and the pixel normal simply means adding them together (i.e. vn + pn) and then normalizing the result.
Can you please provide some comparison screenshots?

Thanks :-)
Quote:Original post by pierceblaylock
I'm doing some experimenting with per pixel lighting using normal maps and I seem to have come up with a really easy implementation that doesn't require tangents or binormals. Unless I'm doing something really wrong?

Sounds like you've discovered object-space normal mapping. Normal maps can be either in tangent space (i.e. relative to the triangle surface) or object space (i.e. relative to the object's orientation). Object space means you don't have to do the tangent/binormal stuff because that work has already been applied and burnt into the texture.

There are several downsides, the main one being that you can't share normal maps any more since they're specific to a particular model. This might be acceptable for characters but probably not for environments.
Step 5 is imo just a hack and not correct normal mapping. It can look ok, but did you also try to rotate your objects?

Post vs/ps code and/or screenshots so we can see if the method really works or not.

EDIT: OrangyTang brought up another possibility. What type of normal map are you actually using?
The real question here is: in which space were your normals (coming from the normal map) computed?

If they are in world space, you cannot apply a rotation to your model.

If they are in object space, then your object can be translated and rotated, but it must be rigid (no deformation). For example, that's no good for characters.

Tangent space is the most flexible. It allows deformations and any position/rotation of your model.

The trick is, to do normal mapping you need to compute (N dot L). It doesn't matter in which space N and L are, as long as they are in the same space. So, both N and L can be in world space, and you'd get the same visual result as if N and L were in tangent space.

I'm a bit confused by your procedure though. The light pos is directly passed in world space to the vertex shader, and the light direction is computed by L = LightPos - VertexPosWorld with VertexPosWorld being the object-space vertex multiplied by the object-to-world matrix. The pixel shader should normalize this vector, because if it's normalized in the vertex shader only, the interpolation that happens after will generate incorrect results.

The real question is, how do you get N into world space? If your normal map already contains N in world space, then you just have to multiply by 2, subtract 1 (a single mad by (2,-1)) and normalize (the bilinear interpolation due to texture fetching will also cause the normal to not have a length of 1.0).

If N is in object space, you need to apply the inverse transformation matrix to go from object space to world space. That involves (in addition to the mad/normalization) a matrix transformation in the pixel shader.

If N is in tangent space, it's similar: you need to apply the matrix going from tangent space to object space, then from object space to world space.

All of this is going backwards: transforming the normal into the space the light is in. But in "standard" normal mapping, you're actually transforming the light into the space the normal is in.

The two standard approaches are:

1. Object space normal mapping:
- N is fetched from the normal map and is already in object space.
- the LightPos is passed to the vertex shader and the world-to-object matrix is applied to it. Then L is computed as LightPosObj - vertex and passed to the pixel shader that will perform N dot L directly.

This is the fastest/simplest technique, but it requires your object to be rigid, and the normal map to be computed in object space. As you might notice, it requires no tangent space (so no binormal/tangent craziness).
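A minimal GLSL sketch of this object-space variant might look like the following; the uniform and varying names are illustrative, and it assumes a point light whose position is supplied in world space.

// Vertex shader -- object-space normal mapping (names are illustrative)
uniform mat4 u_worldToObject;   // inverse of the object-to-world matrix
uniform vec3 u_lightPosWorld;   // point light position in world space

varying vec3 v_toLight;         // direction to the light, in object space
varying vec2 v_uv;

void main()
{
    vec3 lightPosObj = (u_worldToObject * vec4(u_lightPosWorld, 1.0)).xyz;
    v_toLight = lightPosObj - gl_Vertex.xyz;
    v_uv = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Pixel shader
uniform sampler2D u_normalMap;  // object-space normal map

varying vec3 v_toLight;
varying vec2 v_uv;

void main()
{
    vec3 n = normalize(texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0);
    float ndotl = max(dot(n, normalize(v_toLight)), 0.0);
    gl_FragColor = gl_FrontMaterial.diffuse * ndotl;
}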

2. Tangent space normal mapping:
- the binormal/tangent is passed to the vertex shader and the tangent space matrix (matrix going from object space to tangent space) is computed with a cross product.
- the LightPos is passed to the vertex shader and the world-to-object matrix is applied to it. The light direction is computed in object space (LDir = LightPosObj - vertex). Then L is computed by transforming LDir with the object-to-tangent space matrix and passed to the pixel shader.
- N is fetched from the normal map in the pixel shader and (N dot L) is computed (after the usual normalizations of both N and L).

This is still relatively fast, because there's no matrix multiplications involved in the pixel shader, only renormalizations. The drawback is, you have to compute the binormal/tangent on the cpu and pass it as an additional vertex component along with the normal.
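A corresponding sketch for the tangent-space path (again, names are illustrative; the tangent is assumed to be supplied as a per-vertex attribute computed on the CPU):

// Vertex shader -- tangent-space normal mapping (names are illustrative)
attribute vec3 a_tangent;       // per-vertex tangent computed on the CPU
uniform mat4 u_worldToObject;   // inverse of the object-to-world matrix
uniform vec3 u_lightPosWorld;   // point light position in world space

varying vec3 v_lightTS;         // light direction in tangent space
varying vec2 v_uv;

void main()
{
    // build the tangent-space basis; the binormal comes from a cross product
    vec3 n = normalize(gl_Normal);
    vec3 t = normalize(a_tangent);
    vec3 b = cross(n, t);

    // light direction in object space
    vec3 lightPosObj = (u_worldToObject * vec4(u_lightPosWorld, 1.0)).xyz;
    vec3 lDirObj = lightPosObj - gl_Vertex.xyz;

    // rotate it into tangent space (rows of the object-to-tangent matrix are t, b, n)
    v_lightTS = vec3(dot(lDirObj, t), dot(lDirObj, b), dot(lDirObj, n));

    v_uv = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Pixel shader -- no matrix work here, only renormalizations
uniform sampler2D u_normalMap;  // tangent-space normal map

varying vec3 v_lightTS;
varying vec2 v_uv;

void main()
{
    vec3 n = normalize(texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0);
    float ndotl = max(dot(n, normalize(v_lightTS)), 0.0);
    gl_FragColor = gl_FrontMaterial.diffuse * ndotl;
}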

Now about your procedure:

Quote:3. The light position along with the vertex position in world space is used to compute the to_light vector (i.e. the direction to the light), which is normalized.


The vertex position is originally in object space, so you need to apply the object-to-world matrix in the vertex shader to compute the vertex position in world space (just making that clear). After this step, your to_light vector is in world space. It should be normalized in the pixel shader, not in the vertex shader.

Quote:4. Then in the pixel shader, a normal is read from the normal map using the u,v coords for the pixel, which is also normalized.


The real question here is, in which space is the normal map stored?

Also note that you need to transform your normal from the [0;1] range to [-1;+1], so you need to multiply it by 2 and subtract 1, and only then normalize.
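In GLSL that unpack step looks something like this (sampler and varying names are illustrative):

vec3 n = texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0;  // remap [0,1] to [-1,1]
n = normalize(n);                                       // renormalize after bilinear filtering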

Quote:5. This pixel normal is then combined with the vertex normal and normalized.


What do you mean by "combined"? I also must point out that the vertex normal is in object space, so if your pixel normal isn't in object space, you're definitely doing something wrong.

Quote:6. Finally a dot product is done between the combined normal and the direction to the light vector.


.. which only makes sense if both the light vector and the normal are in the same space (in your example, world space).

Since I highly doubt your normal map is in world space, I think I can safely say that you're doing something seriously wrong.

Y.
Ysaneya, your post was extremely helpful. Thank you.

I've realised now why my method works even though it is wrong. Let me explain.

Firstly, the normal map is in object space. The vertex normal is in object space, but the direction to the light is in world space. Which is obviously wrong. But it all works because the shader is being used to texture a terrain, which by its nature is static (i.e. doesn't move). In addition, since the terrain is at the origin (i.e. identity world matrix), then the terrain exists in both world space and object space at the same time (i.e. the positions match). In fact, so does the light vector and the normals. Therefore it appears as though I can make the "incorrect" calculations in this situation only and still have them work.

I'm going to make a few minor changes based on your suggestions to improve it, but I'll stick with the object space method for the terrain as it works well and is fast. For my dynamic objects, I guess I'll need to use tangent space.

Once again, thanks Ysaneya for clearing this up. Makes perfect sense now.
Hi Pierce.

I have also used a similar approach for shading my terrain, for the reasons mentioned above (the terrain is not subject to any transformations).

1) Compute world-space normals from the geometry for each triangle.
2) Store these normals in a texture (a Luminance-Alpha texture, to save storage) and reconstruct Z with sqrt(1 - lum^2 - alpha^2).
3) Sample the normal map in the fragment shader.
4) In the fragment shader the world-space light direction (passed in via gl_LightSource[0].position; the light is directional) is normalized (lightDir).
5) NdotL = max(dot(normal, lightDir), 0.0).
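A fragment shader sketch of that reconstruction might look like this (sampler and varying names are illustrative; it assumes the X/Y components are remapped into [0,1] in the texture):

// Fragment shader -- sketch of steps 3-5
uniform sampler2D u_normalMap;  // luminance = X, alpha = Y, both stored in [0,1]

varying vec2 v_uv;

void main()
{
    vec2 xy = texture2D(u_normalMap, v_uv).ra * 2.0 - 1.0;  // unpack X and Y
    float z = sqrt(max(1.0 - dot(xy, xy), 0.0));            // Z = sqrt(1 - x^2 - y^2), assumes an upward-facing heightfield
    vec3 normal = vec3(xy, z);

    vec3 lightDir = normalize(gl_LightSource[0].position.xyz); // directional light
    float ndotl = max(dot(normal, lightDir), 0.0);
    gl_FragColor = gl_FrontMaterial.diffuse * ndotl;
}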

But I don't understand why you have to pass the normals to the vertex shader and combine them with the normal map. That is, do these normals contain any information that is not already encoded in your normal map?

kind regards,
Nicolai Brøgger
Hi Nicolai,

You are just encoding your vertex normals into a texture. This is fine if all you want is a vertex lit terrain. In my situation, I use per pixel lighting with a texture normal map.

So for example, if there is a section of the terrain that has a rock texture on it, then there will also be a normal map texture for the rock texture. Then I add the vertex normal to the pixel normal (which I get through a u,v sample into the normal map texture).

Hope that makes sense?

