I want to ask something about shading in the fixed-function pipeline.
The first question is about D3DSHADE_GOURAUD as the shade model.
As I understand it, shading is done in vertex processing, before rasterization.
I checked this with PIX, and it turned out that no normal is passed on to pixel processing, which seems to confirm my point.
The diffuse and specular colors are interpolated during rasterization.
So there is no such thing as per-pixel lighting in the fixed pipeline at all.
Is that correct?
The second question really confused me.
If color is computed in vertex processing, how does sphere mapping work in the fixed pipeline?
No normal or any other direction is passed on to pixel processing.
From the result, I can tell that it's not per-pixel reflection.
So how does the sampling work? Is it done in vertex processing?
1) Correct, the FF pipeline assigns colours (for lighting etc.) per vertex, whereupon each fragment receives a colour interpolated from the three vertices that make up the fragment's parent triangle.
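A minimal sketch of that interpolation (Python, with made-up colour values; the rasterizer effectively does this per fragment using the fragment's barycentric weights within the triangle):

```python
# Gouraud shading: lighting is evaluated once per vertex, then the
# rasterizer linearly interpolates the resulting colours across the triangle.
def interpolate_colour(c0, c1, c2, b0, b1, b2):
    """Barycentric interpolation of three vertex colours (RGB tuples).
    b0 + b1 + b2 is assumed to equal 1."""
    return tuple(b0 * x + b1 * y + b2 * z for x, y, z in zip(c0, c1, c2))

# Three vertices whose colours were already computed by the lighting stage.
red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# A fragment at the triangle's centre gets an equal blend of all three.
centre = interpolate_colour(red, green, blue, 1/3, 1/3, 1/3)
```

At a vertex itself (weights 1, 0, 0) the fragment simply receives that vertex's lit colour; everywhere else it is a blend, which is exactly why no normals need to reach pixel processing.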
2) The fidelity of the shading is coupled to the tessellation of the object being rendered. For a sphere, like any geometry, the higher the tessellation, the higher the fidelity of the lighting.
Of course, if the tessellation is low enough then lighting details will be missed completely or, perhaps even worse, rendered completely wrong. Imagine a large quad with a light near one of its vertices that attenuates such that the other vertices are well out of its reach. The two triangles that make up the quad will have that one vertex lit, but because shading occurs at the vertex level, the lighting will be completely off: fragment colours will be interpolated between the lit vertex and the unlit vertices, producing a shaded colour disproportionately biased towards the lit vertex, even for fragments far outside the light's radius. Per-pixel lighting (or light-map texturing) avoids these interpolation errors.
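The quad artefact can be shown numerically; the linear `attenuation` falloff below is a made-up model, chosen only to make the interpolation error obvious:

```python
import math

def attenuation(light_pos, p, radius):
    """Hypothetical linear falloff: full intensity at the light, zero at radius."""
    d = math.dist(light_pos, p)
    return max(0.0, 1.0 - d / radius)

# One triangle of a large quad (2D positions); the light sits near v0
# with a radius far smaller than the triangle.
v0, v1, v2 = (0.0, 0.0), (10.0, 0.0), (0.0, 10.0)
light, radius = (0.5, 0.5), 2.0

# Per-vertex evaluation: only v0 is inside the light's radius.
i0, i1, i2 = (attenuation(light, v, radius) for v in (v0, v1, v2))

# A fragment at the triangle centre is ~4 units from the light (outside
# the radius), yet Gouraud interpolation still gives it light from v0.
centre = ((v0[0] + v1[0] + v2[0]) / 3, (v0[1] + v1[1] + v2[1]) / 3)
interpolated = (i0 + i1 + i2) / 3              # per-vertex + interpolation
per_pixel = attenuation(light, centre, radius)  # evaluated at the fragment: 0.0
```

The per-pixel result at the centre is exactly zero, while the interpolated result is still about a third of v0's intensity, which is the "disproportionally biased" error described above.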
Fixed pipeline refers to a system where the vertex transformation, lighting and raster output computation are implemented as fixed (as opposed to programmable) hardware units for performance.
You can't run programmable logic in the "pixel shader" stage of a fixed system (the texture blending cascade in D3D), so you can't evaluate lighting approximations very effectively after vertex processing. D3D9-era hardware does, however, support a dot-product operation in the cascade, so you can emulate simple per-pixel lighting using normal-map textures.
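For reference, a rough emulation of what that dot-product stage (DOT3 in D3D terms) computes, assuming both arguments encode unit vectors biased into the [0, 1] colour range, as normal maps do:

```python
# DOT3-style texture stage: unbias both RGB arguments from [0,1] to [-1,1],
# take their dot product, saturate, and replicate the scalar to all channels.
def dot3(arg1, arg2):
    a = [2.0 * c - 1.0 for c in arg1]   # unbias the normal-map texel
    b = [2.0 * c - 1.0 for c in arg2]   # unbias the light-vector colour
    d = sum(x * y for x, y in zip(a, b))
    s = min(max(d, 0.0), 1.0)           # the cascade saturates stage results
    return (s, s, s)

# A normal facing straight along +Z, lit by a light along +Z,
# both encoded as colours: (0.5, 0.5, 1.0) encodes the vector (0, 0, 1).
lit = dot3((0.5, 0.5, 1.0), (0.5, 0.5, 1.0))
```

Feeding the per-vertex (interpolated) light direction in as a colour and the normal map as a texture gives the simple per-pixel diffuse term described above.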
As for your second question, the fixed pipeline has configurable (again, as opposed to fully programmable) texture-coordinate generation logic that can generate texture coordinates for sphere mapping from the incoming normals.
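A sketch of the classic sphere-map texgen math: this is the OpenGL-style formula, but D3D's sphere-map texcoord generation works in the same spirit, deriving the coordinates from the view-space normal per vertex rather than per pixel:

```python
import math

def sphere_map_uv(normal, view_dir=(0.0, 0.0, -1.0)):
    """Sphere-map texcoords from a view-space unit normal.
    view_dir is the eye-to-surface direction in view space."""
    nx, ny, nz = normal
    ux, uy, uz = view_dir
    d = nx * ux + ny * uy + nz * uz
    # Reflect the view direction about the normal: r = u - 2(n.u)n
    rx, ry, rz = ux - 2.0 * d * nx, uy - 2.0 * d * ny, uz - 2.0 * d * nz
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)

# A vertex whose normal faces the viewer samples the centre of the map.
u, v = sphere_map_uv((0.0, 0.0, 1.0))
```

Since this runs per vertex, the sampled texture coordinates, like colours, are simply interpolated across the triangle, which matches your observation that the result is not per-pixel reflection.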
Remember that vertex formats were also somewhat fixed back then, and the hardware "knew" what the purpose of the normal element in the vertices was. Today's hardware doesn't care at all about most of the semantics of a vertex structure; it is the responsibility of the shader author to give meaning to all of the data. In modern hardware (or more specifically, drivers), the fixed pipeline is internally implemented as programmable shaders that happen to have the same inputs and outputs that the fixed hardware used to have.