Texture coordinate interpolation

Hi there, I am using the vs_2_0 and ps_2_0 vertex and pixel shaders heavily. Usually I normalize my vectors (L, N, etc.) for per-pixel lighting in the pixel shader. The normalization is necessary because the transform in the vertex shader and the interpolation in the texture coordinate registers de-normalize the vectors. I am currently thinking about this texture coordinate interpolation process. How exactly does it work? Thanks for your help, - Wolf [edited by - wolf on November 15, 2003 3:18:04 PM]
Your L and N should be interpolated in screen space such that they are correct in world space (i.e., perspective-correct). So you should just need to normalise them per-pixel (using the pixel shader or a cube texture).
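
To see why the per-pixel normalise is needed: linearly interpolating two unit vectors gives a vector shorter than unit length, so its length has to be restored before lighting. A minimal C++ sketch (the Vec3 type and the example normals are made up for illustration):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Linear interpolation, as the interpolator effectively does per component.
Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

Vec3 normalize(const Vec3& v) {
    float len = length(v);
    return { v.x / len, v.y / len, v.z / len };
}

int main() {
    // Two unit-length normals, as two vertices might output.
    Vec3 n0 = { 1.0f, 0.0f, 0.0f };
    Vec3 n1 = { 0.0f, 1.0f, 0.0f };

    // Halfway between them the interpolated vector is de-normalised:
    Vec3 mid = lerp(n0, n1, 0.5f);
    std::printf("interpolated length = %f\n", length(mid)); // ~0.707, not 1

    // The per-pixel normalise restores unit length before lighting.
    std::printf("normalised length   = %f\n", length(normalize(mid))); // 1.0
    return 0;
}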

I think texture coordinate interpolation works like this...
For input texture coords [s, t, r, q]:
- [s/w, t/w, r/w, q/w] is calculated per-vertex after the vertex shader (where w is the 4th component of the transformed vertex position),
- these 4 values are linearly interpolated in screen space (as the triangle is rasterized) to get [s', t', r', q'] per-pixel,
- [s'/q', t'/q', r'/q'] is calculated per-pixel and passed to the pixel shader.
...there are other combinations, such as dividing by r instead of q, ignoring q, etc., depending on the API you're using (and even the driver), but that's where I start getting confused.
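
Here is a minimal C++ sketch of those three steps for the common q = 1 case. The vertices, the barycentric weights, and the interpTexcoord helper are hypothetical, standing in for what the rasterizer does in fixed function:

#include <cstdio>

// Per-vertex data after the vertex shader: one texcoord s and clip-space w.
struct Vertex { float s; float w; };

// Perspective-correct interpolation of s at a pixel whose screen-space
// barycentric weights are (b0, b1, b2), assuming q = 1 at every vertex.
float interpTexcoord(Vertex v0, Vertex v1, Vertex v2,
                     float b0, float b1, float b2) {
    // Step 1: divide by w per-vertex.
    float s0 = v0.s / v0.w, s1 = v1.s / v1.w, s2 = v2.s / v2.w;
    float q0 = 1.0f / v0.w, q1 = 1.0f / v1.w, q2 = 1.0f / v2.w; // q/w with q = 1

    // Step 2: linear interpolation in screen space.
    float sPrime = b0 * s0 + b1 * s1 + b2 * s2;
    float qPrime = b0 * q0 + b1 * q1 + b2 * q2;

    // Step 3: per-pixel divide recovers the perspective-correct coordinate.
    return sPrime / qPrime;
}

int main() {
    // Hypothetical triangle: v0 and v2 near the camera, v1 far away.
    Vertex v0 = { 0.0f, 1.0f };
    Vertex v1 = { 1.0f, 4.0f };
    Vertex v2 = { 0.0f, 1.0f };

    // Sample at the screen-space midpoint of the v0-v1 edge.
    float s = interpTexcoord(v0, v1, v2, 0.5f, 0.5f, 0.0f);

    // A naive screen-space lerp of s would give 0.5; the perspective-correct
    // result is pulled toward the nearer vertex.
    std::printf("perspective-correct s = %f\n", s); // 0.2
    return 0;
}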


[edited by - soiled on November 16, 2003 4:18:16 AM]
