Phong Shading Clarifications

In order to do proper Phong shading, I have to linearly interpolate the vertex normals across the face of the triangle and then run the illumination model once per pixel. How do you interpolate normal vectors? Do you do a "lerp" or a "slerp"? (standard linear as opposed to spherical interpolation). How do you determine the initial colour of the pixel? (With Gouraud shading the initial pixel colour is interpolated from the vertex colours.)
If the triangulation is fine enough, it is sufficient to use lerp (since then the length of the normal vector will stay approximately unity). I do not know how sensitive this method is to not-so-fine tessellation. The normal vector becomes shorter, so the light gets darker; for example, lerping halfway between two unit normals 90 degrees apart gives a vector of length sqrt(2)/2 ≈ 0.707. IIRC, lerp is much faster than slerp, so if your tris are fine, try it. If your tessellation is not so fine, you could also normalize the normal per pixel (argh, a square root...), but I think you don't need slerp.

Do you mean the glColor stuff by initial colour?
The colours are either specified by texture (best, since it is per-pixel) or by vertex (bad: you have to interpolate the colour, giving you the nasty Gouraud shading artefacts).

I would do the following per pixel (see the sketch below):
- Take the texture colour
- Do the lighting
- Well, that's it, sounds easy?
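A minimal, diffuse-only sketch of that per-pixel step in C++ (Vec3 and shadePixel are illustrative names, not from the original post; the texture fetch and normal interpolation are assumed to happen outside):

```cpp
#include <algorithm> // std::max

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// One pixel of the loop above: take the texture colour, do the lighting.
// 'texel' is the colour fetched from the texture, 'n' the renormalized
// per-pixel normal, 'l' the unit direction towards the light.
Vec3 shadePixel(const Vec3& texel, const Vec3& n, const Vec3& l)
{
    float diffuse = std::max(0.0f, dot(n, l));  // do the lighting
    return { texel.x * diffuse,                 // modulate the texture colour
             texel.y * diffuse,
             texel.z * diffuse };
}
```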
quote:Original post by bah
How do you interpolate normal vectors? Do you do a "lerp" or a "slerp"? (standard linear as opposed to spherical interpolation).

Are you writing a software renderer? If yes, then simply use lerp for better performance (be careful about perspective correction, though!). If not, then the hardware will implicitly do it for you, via the standard face interpolators (taking perspective correction into account). Supply the normal as either compressed RGB colours or as texcoords. The hardware will then interpolate them over the face, and give you back a pixel normal at each pixel.
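For instance, compressing a unit normal into an RGB colour is just a scale and bias from [-1, 1] into [0, 1]. A sketch of that packing (the function names are hypothetical):

```cpp
struct Vec3 { float x, y, z; };

// Map each component of a unit normal from [-1, 1] into [0, 1],
// so it can ride through the colour interpolators.
Vec3 packNormal(const Vec3& n)
{
    return { n.x * 0.5f + 0.5f,
             n.y * 0.5f + 0.5f,
             n.z * 0.5f + 0.5f };
}

// Invert the mapping after interpolation (then renormalize).
Vec3 unpackNormal(const Vec3& c)
{
    return { c.x * 2.0f - 1.0f,
             c.y * 2.0f - 1.0f,
             c.z * 2.0f - 1.0f };
}
```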

Note that you must normalize that pixel normal, otherwise your Phong highlights will look like shit (or not even exist). In software, that's an fsqrt and an fdiv. In hardware, you can either do it with a per-pixel RSQ (or the HLSL/GLSL/Cg equivalent), or by using a normalization cubemap.
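In software, that renormalization might look like the following sketch (one square root and one divide, exactly as noted above):

```cpp
#include <cmath> // std::sqrt

struct Vec3 { float x, y, z; };

// Renormalize an interpolated normal: one square root, one divide.
// Real code should guard against a zero-length vector.
Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    float inv = 1.0f / len;
    return { v.x*inv, v.y*inv, v.z*inv };
}
```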

quote:Original post by bah
How do you determine the initial colour of the pixel? (With gouraud shading the initial pixel colour is interpolated from the vertex colours.)

a) You don't use any initial colour at all, and do the whole lighting in a vertex or pixel shader. You can supply face materials as constant values.

or

b) You still supply a per-vertex colour, by using an additional interpolator.

or

c) You could just use a constant ambient term, and do the diffuse and specular lighting per pixel.
ALX,

I don't know if "software renderer" is the correct definition. This basic set of functions that I've implemented takes a collection of vertices in object space, then transforms, clips, lights and rasterizes them into a buffer which is then passed on to OpenGL.

Currently, I light every vertex using the Phong illumination model and then Gouraud-shade the rest of the triangle's surface. I want to do Phong shading, but I don't really know how to interpolate the normals.

Let's say, for example, that you have two vertices, each with a normalized normal. If I linearly interpolate between those two, what will the result be? A normalized vector? Should I re-normalize it? What's the difference between standard and spherical interpolation as far as normals are concerned?

Also, in Gouraud shading the colour of a pixel is bilinearly interpolated from the triangle's vertices. THAT is the initial and final colour of the pixel. I could light the pixel by feeding its position and colour into the lighting module.

What is the starting colour of the pixel using Phong shading? I must have one in order to pass it on to my lighting module.
quote:Original post by bah
I don't know if "software renderer" is the correct definition. This basic set of functions that I've implemented take a collection of vertices in object space, transform, clip, light and rasterize them in a buffer which is then passed on to OpenGL.

Yes, that's a software renderer (i.e. a renderer where you don't use any hardware acceleration through OpenGL or D3D, and do the whole job on the CPU).

quote:
Currently, I light every vertex using the Phong illumination model and then Gouraud-shade the rest of the triangle's surface. I want to do Phong shading, but I don't really know how to interpolate the normals.

OK. It's very simple, really. You linearly interpolate each normal component separately over the triangle. You can just treat the normal vector as if it were a colour (substituting RGB with XYZ components). Just make sure there is enough precision available. You're just Gouraud-interpolating three independent quantities that happen to form a vector when put together.
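In code, that could look something like this sketch (Vec3 and lerpNormal are illustrative names), with t running from 0 to 1 along an edge or span:

```cpp
struct Vec3 { float x, y, z; };

// Linearly interpolate each normal component independently,
// exactly as you would interpolate R, G and B.
Vec3 lerpNormal(const Vec3& n0, const Vec3& n1, float t)
{
    return { n0.x + (n1.x - n0.x) * t,
             n0.y + (n1.y - n0.y) * t,
             n0.z + (n1.z - n0.z) * t };
}
```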

quote:
Let's say, for example, that you have two vertices, each with a normalized normal. If I linearly interpolate between those two, what will the result be? A normalized vector? Should I re-normalize it?

It will be denormalized. Its direction will be correct, but not its length. You have to renormalize it.

quote:
What's the difference between standard and spherical interpolation as far as normals are concerned?

Spherical interpolation should give you normalized interpolated normals (within the range of interpolator accuracy). You don't need to normalize in this case, but the interpolation is more CPU intensive. On hardware, this option is not yet available, so you have to normalize there (although normalizing usually doesn't take a big hit on today's GPUs; it does, however, on a CPU).
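For reference, a textbook slerp between two unit normals might look like this sketch (it assumes the two vectors are neither identical nor opposite, so sin(theta) is non-zero; real code should also clamp the dot product into [-1, 1] before acos):

```cpp
#include <cmath> // std::acos, std::sin

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Spherical interpolation between two *unit* vectors: the result
// stays (approximately) unit length, but costs an acos and three sins.
Vec3 slerp(const Vec3& n0, const Vec3& n1, float t)
{
    float theta = std::acos(dot(n0, n1));    // angle between the normals
    float s     = std::sin(theta);
    float w0    = std::sin((1.0f - t) * theta) / s;
    float w1    = std::sin(t * theta) / s;
    return { w0*n0.x + w1*n1.x,
             w0*n0.y + w1*n1.y,
             w0*n0.z + w1*n1.z };
}
```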

Don't forget that you need to do the same thing for the light vector and the half-angle vector.

quote:
Also, in Gouraud shading the colour of a pixel is bilinearly interpolated from the triangle's vertices. THAT is the initial and final colour of the pixel. I could light the pixel by feeding its position and colour into the lighting module.

What is the starting colour of the pixel using Phong shading? I must have one in order to pass it on to my lighting module.

There is none. The Phong lighting model (ambient + diffuse + specular) doesn't take a variable initial lighting (except perhaps ambient). If you do Phong lighting per pixel, that means that you compute the *entire* lighting for each pixel:

ambient  = constant
diffuse  = N DOT L
specular = (N DOT H) ^ n
colour   = ambient + diffuse + specular

This equation defines your lighting; there is no need for some 'initial' lighting. You can still add an initial ambient term (perhaps from a GI solution); it would then be passed as a per-pixel ambient term.
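A sketch of that per-pixel evaluation, assuming a single light and scalar material coefficients ka, kd, ks with shininess exponent n (all names illustrative):

```cpp
#include <algorithm> // std::max
#include <cmath>     // std::pow

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Full Phong lighting for one pixel. N, L and H must already be
// interpolated and renormalized; ka/kd/ks/n come from the material.
float phong(const Vec3& N, const Vec3& L, const Vec3& H,
            float ka, float kd, float ks, float n)
{
    float ambient  = ka;
    float diffuse  = kd * std::max(0.0f, dot(N, L));
    float specular = ks * std::pow(std::max(0.0f, dot(N, H)), n);
    return ambient + diffuse + specular;
}
```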


I just took another look at the lighting module and realized I don't pass an initial vertex colour. (Doh!) I use the front and back material properties instead.

quote:
Spherical interpolation should give you normalized interpolated normals (within the range of interpolator accuracy). You don't need to normalize in this case, but the interpolation is more CPU intensive.


That's what I thought. I'll use the slerp equation with a static array of precalculated sines to index into at every interpolation. But why is it computationally more expensive than a standard linear interpolation?
Have you compared the lerp and slerp equations?
quote:Original post by bah
That's what I thought. I'll use the slerp equation with a static array of precalculated sines to index into at every interpolation. But why is it computationally more expensive than a standard linear interpolation?

lerp operates on single scalar quantities. A slerp only works on directions (e.g. quaternions), not on scalars. Even if you optimize with table lookups, there will be many more multiplications and memory accesses for a slerp than for a lerp. A lerp doesn't need lookup tables at all. It can basically be done with a couple of SSE instructions for all three vector components: it's only one subtraction and one multiply-add per component.
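To make the cost concrete, the entire lerp is shown below; per component it really is just one subtract and one multiply-add, with no tables or trigonometry (compare with the acos and three sins the slerp above needs):

```cpp
// Linear interpolation of one scalar: one subtract, one multiply-add.
// Call it once per normal component (x, y, z).
float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}
```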
