plane reflection - ugly interpolation

I want to do a simple plane reflection by rendering the world upside down to a texture and sampling that texture in the plane's fragment shader (Cg). In order to sample it I need screen-space texture coordinates, which I calculate in the plane's vertex shader:

OUT.pos = mul(modelViewProjMatrix, IN.pos);

//position in screen space
half2 screenPos = OUT.pos.xy/OUT.pos.w;
//move from range [-1,1] to [0,1] (texture coords)
OUT.tCoordsProj = 0.5 * screenPos + 0.5;

Then I use tCoordsProj in the fragment shader to sample my texture. The problem is that I get ugly interpolation artefacts, even with a fairly tessellated plane. How can I fix this? Should I calculate tCoordsProj in the fragment shader somehow?
Try calculating your screen-space coordinates in the vertex shader and passing them to the pixel shader as a texture coordinate; that way they get interpolated properly down the line. Keep in mind that you still want to do the w divide inside the pixel shader: do the mul with the transformation matrix in the vertex shader, pass the result down as a texture coordinate, and divide by w when you use it.
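A minimal Cg sketch of that approach (reflectVS, reflectPS and reflectionTex are placeholder names, not code from this thread):

// Vertex shader: transform to clip space and pass the full clip-space
// position down as a texture coordinate; do NOT divide by w here.
void reflectVS(float4 pos : POSITION,
               uniform float4x4 modelViewProjMatrix,
               out float4 oPos : POSITION,
               out float4 oTCoordsProj : TEXCOORD0)
{
    oPos = mul(modelViewProjMatrix, pos);
    oTCoordsProj = oPos;
}

// Fragment shader: the perspective divide happens per pixel, after
// interpolation, so the projected coordinates stay correct.
float4 reflectPS(float4 tCoordsProj : TEXCOORD0,
                 uniform sampler2D reflectionTex) : COLOR
{
    float2 screenPos = tCoordsProj.xy / tCoordsProj.w;
    float2 uv = 0.5 * screenPos + 0.5;   // remap [-1,1] to [0,1]
    return tex2D(reflectionTex, uv);
}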
ah, doing the division by w in the pixel shader solved the problem!
btw, is there a way to fetch the position in the fragment shader in Cg? So I don't have to send the xy and w of the vertex's position from the vertex shader?
Quote:Original post by Jaurang
btw, is there a way to fetch the position in the fragment shader in Cg? So I don't have to send the xy and w of the vertex's position from the vertex shader?


No, fragment shaders do not have direct access to the vertex buffer data. All you can do is pass the position to the fragment shader as a texture coordinate.
Actually, it is possible, but only in PS3.0 in HLSL (Direct3D). There is a VPOS semantic that you can use on input data to get the position of the pixel. I don't know much about it (I have a Radeon X600 at work, so even though my home video card supports it, I wanted something that works on both computers, so I haven't used it yet). I assume it gives the screen-space coordinates in pixels, which means you'd have to divide by the screen size to get proper texture coordinates. Again, that's only on PS3.0 (I dunno if GLSL or Cg supports it or not).
hmm, I do have the WPOS semantic in Cg, which is supposed to give me the pixel's window position, but I can't get it to work. I have used the z component of the WPOS parameter before (it holds the depth value), but using the xy components for this task doesn't work.
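For reference, sampling via WPOS would look roughly like this in Cg on profiles that actually expose it; reflectWposPS is a placeholder name and screenSize is an assumed uniform holding the viewport dimensions in pixels:

// Hypothetical fragment shader using WPOS (SM3.0-class profiles only).
// WPOS.xy is the pixel's window position in pixels, so it has to be
// divided by the viewport size to get [0,1] texture coordinates.
float4 reflectWposPS(float4 wpos : WPOS,
                     uniform float2 screenSize,
                     uniform sampler2D reflectionTex) : COLOR
{
    return tex2D(reflectionTex, wpos.xy / screenSize);
}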
I assume that's PS3.0 specific as well. Does your hardware actually support PS3.0? If it doesn't, I assume Cg just ignores the semantic (or something; I haven't worked with Cg before): rather than erroring, it probably treats that value as a regular input and assumes it gets the data from the vertex program. I think the best (and most compatible) way is just to transform in the vertex shader, pass the result down as a texture coordinate, then divide by w in the pixel shader; voilà, you have screen-space coordinates. This is what I've settled on, since only PS3.0 hardware supports the position value.
I've got a GeForce 6600 GT, so I suppose it supports SM3(?). Anyway, since I can use the z value of the WPOS parameter, I thought it worked.

But the method I'm using now (the same as you described) works fine, so there's no point in messing around with WPOS.
Actually, if I recall correctly, the _POS parameter can only be accessed via the .xy binding, so it's only a 2D screen coordinate of the pixel. This means it doesn't actually store the depth value.

