
disabling perspective correction?


ganchmaster
I have a situation where I want to interpolate a value from an HLSL vertex shader linearly in screen space, not in perspective-correct world space as is done by default. I'm rendering a 3D mesh in world space as you normally would, but I want to pass the pixel shader some values that depend only on the screen-space (post-projection) position, and I want those values interpolated linearly in screen space, not world space. Right now the interpolation uses the w from the real vertex position to do perspective-correct world-space interpolation, which is wrong for something that should be linear in screen space. It's correct at the vertices but wrong in the interiors, which isn't good enough for my situation, since I have very large tris.

I tried doing the w division myself in the vertex shader, which makes everything work the way I want (since the output w becomes 1), except that the hardware near-plane clipping then stops working correctly (triangles pop out as soon as they touch the plane). I could get what I want by using the VPOS register in the pixel shader, but I'm trying to save some cycles by doing some computations in the vertex shader and passing the result to the pixel shader.

Any ideas on how I can change the nature of the interpolation? I'm using HLSL and DX9.0c.
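The manual divide I tried looks essentially like this (simplified sketch, with illustrative names):

// Forcing the output w to 1 makes every interpolant linear in screen space,
// but the clipper loses the w it needs, so near-plane clipping breaks.
float4 vClip  = mul( float4(In.vPosition, 1.0f), g_matWorldViewProj );
Out.vPosition = vClip / vClip.w; // output position now has w == 1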

Guest Anonymous Poster
Quote:
ganchmaster
I could do what I want by using the VPOS register in the pixel shader, but I'm trying to save some cycles by doing some computations in the vertex shader and passing the result to the pixel shader.

I take it from this that you're trying to get the position of a pixel in screen space. In your vertex shader, you'll need to compute which pixel corresponds to a particular vertex. You can do this by first applying the world-view-projection matrix. This will give you a vertex in the range [-1,-1,0]..[1,1,1]. Since all you care about is where this lands on your 2D screen, you can ignore the z component. Add 1.0f to the (x,y) position, divide by 2.0f, and then multiply by your screen's width and height. This will give you the (x,y) location of the vertex. You can then send this value to the pixel shader as a texture coordinate and it will be linearly interpolated across the screen, giving your pixel shader the (x,y) location of each pixel.

The code to do this would look like this:

float4x4 g_matWorldViewProj;
float3x3 g_matNormal;
float    g_fScreenWidth;
float    g_fScreenHeight;

struct VSINPUT
{
    float3 vPosition : POSITION;
    float3 vNormal   : NORMAL;
    float2 vTexCoord : TEXCOORD0;
};

struct VSOUTPUT
{
    float4 vPosition  : POSITION;
    float3 vNormal    : TEXCOORD2; // NORMAL isn't a valid VS output semantic; pass it in a texcoord
    float2 vTexCoord0 : TEXCOORD0;
    float2 vTexCoord1 : TEXCOORD1;
};

VSOUTPUT mainVS( VSINPUT In )
{
    VSOUTPUT Out;
    Out.vPosition  = mul( float4(In.vPosition, 1.0f), g_matWorldViewProj );
    Out.vNormal    = mul( In.vNormal, g_matNormal ); // However you transform your normals.
    Out.vTexCoord0 = In.vTexCoord;                   // Typical texture coordinates for your model
    // Flip y (screen origin is top-left), remap [-1,1] to [0,1], then scale to pixels.
    Out.vTexCoord1 = ( (Out.vPosition.xy * float2(1.0f, -1.0f) + 1.0f) / 2.0f )
                     * float2(g_fScreenWidth, g_fScreenHeight);

    return Out;
}


The last assignment computes the screen coordinate of the vertex. It takes the (x,y) position of the vertex in perspective space, where the values run from -1 to 1. First we flip the y coordinate, because screen pixels put (0,0) at the top-left corner of the image and (width,height) at the bottom right, while perspective space puts +y up. Adding 1.0f then shifts the range to 0..2, dividing by 2 brings it to 0..1, and multiplying by the size of the screen gives (x,y) coordinates in the range 0..width and 0..height.

Hope all of this helps,
neneboricua

ganchmaster
Quote:
Original post by Anonymous Poster
You can then send this value to the pixel shader as a texture coordinate and it will be linearly interpolated across the screen, giving your pixel shader the (x,y) location of each pixel.


That's the part that doesn't work. I understand the math of how to compute a pixel location on the screen (although in my case I only actually need the [-1,1] position). But it's not linearly interpolated across the screen as you state; it's perspective-corrected interpolation in world space, according to the w values of the vertices. This produces a non-linear distribution of values in screen space. I tried to explain this in the OP, but I think I didn't do a very good job.

To make your sample work correctly, you would have to pass the w value to the pixel shader and divide per pixel. I was trying to avoid that operation (and the subsequent operations it would have made necessary).
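In other words, something like this (untested sketch, with made-up names): pass the undivided clip-space x, y, and w through a texcoord; since the rasterizer perspective-corrects both, dividing them in the pixel shader cancels the correction:

struct VSOUTPUT2
{
    float4 vPosition : POSITION;
    float3 vClipPos  : TEXCOORD0; // clip-space x, y, and w, NOT divided by w
};

VSOUTPUT2 mainVS2( VSINPUT In )
{
    VSOUTPUT2 Out;
    Out.vPosition = mul( float4(In.vPosition, 1.0f), g_matWorldViewProj );
    Out.vClipPos  = Out.vPosition.xyw; // stash w in the texcoord's z
    return Out;
}

float4 mainPS2( float3 vClipPos : TEXCOORD0 ) : COLOR
{
    // xy and z (= clip w) are each perspective-correct, so their ratio is the
    // [-1,1] post-projection position, interpolated linearly in screen space.
    float2 vScreenPos = vClipPos.xy / vClipPos.z;
    return float4( vScreenPos * 0.5f + 0.5f, 0.0f, 1.0f ); // visualize it
}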

Quote:
Original post by Anonymous Poster
It first takes the (x,y) position of the vertex in perspective space. These values are between -1 and 1.


This is not correct. The values are between -1 and 1 only after the homogeneous division by W.
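That is, the vertex shader outputs clip-space coordinates, where x and y lie in [-w, w]; it's the fixed-function divide by w, which happens after the vertex shader runs, that brings them into [-1, 1]. Illustratively:

float4 vClip = mul( float4(vPos, 1.0f), g_matWorldViewProj ); // clip space: x,y in [-w, w]
float3 vNDC  = vClip.xyz / vClip.w;                           // NDC: x,y in [-1, 1]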

neneboricua19
Quote:
Original post by ganchmaster
That's the part that doesn't work. I understand the math of how to compute a pixel location on the screen (although in my case I only actually need the [-1,1] position). But it's not linearly interpolated across the screen as you state. It's perspective corrected interpolation in world space, according to the W values of the vertices. This produces a non-linear distribution of values in screen space. I tried to explain this in the OP but I think I didn't do a very good job.

Ok, now I see what you mean. I originally thought that texture coordinates were linearly interpolated, but after looking up the register definitions in the SDK, I found this quote about the texture coordinate registers on the pixel shader side:

"They contain high precision, high dynamic range data values interpolated from the vertex data. Values are generated with perspective-correct interpolation. Data is floating-point precision, and is signed."
Quote:
Original post by ganchmaster
The values are between -1 and 1 only after the homogeneous division by W.

D'oh. You're right on this. I forgot that the perspective divide comes after the vertex shader is run.

As far as changing the nature of the interpolation, as you asked in your original post: there's no nice way to do it. One way to get the result you want is to tessellate your large surfaces into smaller ones so the error introduced by the hardware interpolation is not noticeable.

I ran into something similar when implementing dual-paraboloid shadow mapping. The projection matrix I needed to use performed a quadratic projection instead of a linear one, and the hardware was not able to perform the kind of interpolation I wanted. To get around this, it was necessary to tessellate large surfaces into smaller ones so that the quadratic interpolation I wanted and the interpolation done by the hardware were closer to being equal. In my case, this meant tessellating only the floors and ceilings in the world, since most meshes were already finely tessellated enough.

neneboricua

