Archived

This topic is now archived and is closed to further replies.

Linearly Interpolated And Perspective-Correct Interpolated




I have a problem. In the vertex shader, the output texture coordinate registers are perspective-correct interpolated (1/w, u/w, v/w). I have two textures: one is the diffuse map (oT0), the other a reflection map (oT1, obtained with render-to-texture).

    m4x4 r0, v0, c0
    rcp  r1.w, r0.w
    mul  r0, r0, r1.w
    mov  oPos, r0
    .....
    mov  oT1.xy, r2.xy
    mov  oT0.xy, r3.xy

When I divide oPos by w like this, I get the correct linear interpolation on T1, but T0 becomes linearly interpolated too.

    m4x4 oPos, v0, c0
    .....
    mov  oT1.xy, r2.xy
    mov  oT0.xy, r3.xy

With this version, T0 and T1 are both perspective-correct interpolated. How can I get perspective-correct interpolation on T0 and linear interpolation on T1 at the same time? (oD0 and oD1 are already used in my shader.)

Hmm, I don't really see what you're trying to do. The texture interpolators are always perspective correct; that behaviour can only be manually tweaked in a pixel shader. The vertex shader has no influence over it. You can of course play with the homogeneous texture coordinate, and that's basically what you are doing in your shader: your little rcp/mul/mov manipulation of oPos's w coordinate will in fact totally remove the perspective, not only from the texture coordinates, but also from the vertex positions. You will essentially revert to an orthographic projection. In that case the interpolation behaviour of the texcoord interpolators is linear, simply because the screenspace projection is. They still operate with 1/w internally.
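To see why a constant w makes the interpolators behave linearly, here is a small numerical sketch (Python, my own illustration rather than anything the hardware actually runs) of the interpolator's math:

```python
# Sketch of the interpolator's math: texcoords are carried as u/w
# together with 1/w, lerped across a screen-space span, then divided
# per pixel to recover u.

def perspective_correct(u0, w0, u1, w1, t):
    """Perspective-correct interpolation of u: lerp(u/w) / lerp(1/w)."""
    num = (1 - t) * (u0 / w0) + t * (u1 / w1)
    den = (1 - t) * (1 / w0) + t * (1 / w1)
    return num / den

def linear(u0, u1, t):
    """Plain screen-space lerp, no perspective correction."""
    return (1 - t) * u0 + t * u1

# With a real perspective w (the two endpoints have different depths),
# the two results differ:
print(perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2
print(linear(0.0, 1.0, 0.5))                          # 0.5

# After the rcp/mul trick every vertex reaches the rasterizer with
# w = 1, and the 'correction' collapses to a plain lerp:
print(perspective_correct(0.0, 1.0, 1.0, 1.0, 0.5))  # 0.5
```

The gap between 0.2 and 0.5 at the midpoint is exactly the distortion that perspective correction exists to produce.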

If I got you right, then you have a normal diffuse texture on unit 0 and a projective texture on unit 1. You need 'linear' screenspace texcoords for unit 1. OK.

The generally accepted way of doing this is to use projective coordinates for unit 1: you transform your vertex positions with "m4x4 oPos,v0,c0" and don't touch anything else in oPos. For unit 0, you supply standard texcoords. For unit 1, you supply projective 3D texcoords that will eventually map back to 2D screenspace, just as the vertices did. To do that, you supply the vertex positions as the initial texcoords and apply the same transformations to them as you did for the vertices. You'll have to take the different ranges into account (clip space covers [-1,1], texture space [0,1]). It's not that easy to explain; you should read up on projective texturing. Look into this forum's FAQ, there are a couple of links to papers about that topic.
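To make "take the different ranges into account" concrete, here is a Python sketch of the conventional D3D scale/bias matrix for render-to-texture (the same matrix values the question below uses); applied in homogeneous coordinates before the divide, it performs the [-1,1] to [0,1] remap with v flipped:

```python
# Sketch of the clip-space -> texture-space remap. Row-vector
# convention, scale/bias matrix:
#   | 0.5   0    0  0 |
#   | 0   -0.5   0  0 |
#   | 0     0    1  0 |
#   | 0.5  0.5   0  1 |

def apply_remap(x, y, z, w):
    """Multiply a homogeneous clip-space position by the scale/bias matrix."""
    tx = 0.5 * x + 0.5 * w
    ty = -0.5 * y + 0.5 * w
    return tx, ty, z, w

# After the projective divide, this is exactly u = x/w * 0.5 + 0.5 and
# v = -y/w * 0.5 + 0.5 (the D3D texture-space remap):
x, y, z, w = 2.0, -1.0, 5.0, 4.0          # some clip-space position
tx, ty, tz, tw = apply_remap(x, y, z, w)
u, v = tx / tw, ty / tw
assert abs(u - ((x / w) * 0.5 + 0.5)) < 1e-9
assert abs(v - ((-y / w) * 0.5 + 0.5)) < 1e-9
print(u, v)   # 0.75 0.625
```

Because the scale and bias are folded into the matrix before the divide, the remap survives the perspective division, which is why it can be concatenated onto the projection matrix in the vertex shader.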

Thanks for your reply :-)
Do you mean using _dw in the pixel shader?

vs:
    // c0 - c3: MVP
    m4x4 oPos, v0, c0
    // base texture
    mov oT0, v1
    // projected texture
    // c4 - c7: World*View*Proj*Modify
    // the Modify is _11=0.5, _22=-0.5, _41=0.5, _42=0.5, _33=_44=1.0
    m4x4 oT1, v0, c4
    ...........

ps:
    texld r0, t0
    texld r1, t1_dw.xyw

But I don't understand: _dw is applied after interpolation, so is the result still correct?

Yes, I mean _dw (I think). That sounds awfully D3D-ish, right? Well, if _dw is the D3D counterpart of OpenGL's TXP, then that is what I mean. Sorry if I use OpenGL terminology in this post; when I refer to the 'q' coordinate, I mean the homogeneous texture coordinate. I'm not sure what it is called in D3D.

OK, I should perhaps explain the whole process of texel fetching in some greater detail.

When you supply texture coordinates to the system, either streamed in or generated in a vertex shader, the GPU will interpolate those coordinates over each triangle. In order to do that correctly, the GPU has to take the current view projection into account; otherwise the texture will show severe distortion (remember those old software 3D games without perspective correction? Horrible...).

This correction is applied by the interpolator itself, and you have no way to modify it. You don't even get to see it; it's a black box and might be implementation dependent. The only thing that matters is that the texel fetch circuitry (or your pixel shader) is supplied with texture coordinates that have been adjusted to match the currently active view projection. Typically this correction is done by dividing the texcoords by the vertex w in the interpolator, but as I mentioned, you shouldn't worry about the details. So you are supplied with texture coords that have either been perspective corrected (in the case of a perspective w, i.e. a perspective view matrix) or linearly interpolated (in the case of a constant w, i.e. an orthographic view matrix).

You now get back control over the coords. In your case you of course want a perspective camera view; it's a 3D game, after all. So no matter what you do, your texel coordinates are going to be perspective corrected. The only way to avoid that is to supply a constant w, i.e. switch back to a non-perspective view mode. That's what you did with your rcp/mul/mov sequence above, but that reverts everything to orthographic, and that's not what you want.

OK, so what do we do? Simple: we cancel out the perspective correction applied to the texcoords in the interpolators by applying another perspective correction! In normal texture mapping (the TEX shader opcode in OGL), the q coordinate is ignored, i.e. the (corrected) texcoords are used directly to index the image. But in projective mode (the TXP opcode), the coordinates are divided by q before being used, and that is the trick that 'cancels out' the correction. That's a great simplification; in practice there are a lot of mathematical subtleties in the process, but you don't really have to care about them. Fact is, what you want to do can be achieved through projective texture mapping. Just read a few papers about that technique; it's very useful for such tricks.
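If you want to convince yourself that the divide by q really cancels the correction, here is a Python sketch (assuming the usual hardware model of perspective-correct interpolation: lerp attr/w and 1/w across the span, then divide):

```python
# Sketch of the 'cancel out' trick: the projective texcoord is the clip
# position itself, the interpolator perspective-corrects it, and the
# projective fetch (TXP / _dw) divides by q again. The net result is a
# plain linear interpolation in screen space.

def lerp(a, b, t):
    return (1 - t) * a + t * b

def perspective_correct(a0, a1, w0, w1, t):
    """What the interpolator delivers: lerp(attr/w) / lerp(1/w)."""
    return lerp(a0 / w0, a1 / w1, t) / lerp(1 / w0, 1 / w1, t)

# Two vertices; the projective texcoord is the clip position (x, w).
x0, w0 = -0.5, 1.0
x1, w1 = 0.5, 4.0
t = 0.3   # screen-space interpolation parameter

# The interpolator perspective-corrects both the coordinate and q:
s = perspective_correct(x0, x1, w0, w1, t)   # corrected texcoord
q = perspective_correct(w0, w1, w0, w1, t)   # corrected homogeneous coord

# The projective fetch divides by q, cancelling the correction: the
# result equals a plain lerp of the screen-space positions x/w.
projected = s / q
screen_linear = lerp(x0 / w0, x1 / w1, t)
assert abs(projected - screen_linear) < 1e-9
print(projected)
```

Algebraically, s/q = (lerp(x/w) / lerp(1/w)) / (1 / lerp(1/w)) = lerp(x/w), and x/w is the screen-space position, which is by definition linear in screen space.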
