# [SOLVED] Deferred shading: position distortion with non-square window size


## Recommended Posts

I'm currently implementing my first deferred renderer. I'm using the hardware depth buffer for view-space position reconstruction, as described in MJP's tutorial (computing linear depth and multiplying it by the view direction).
The problem I'm having is that the reconstructed position gets distorted in relation to the width:height ratio of the window.
If the ratio is 1, there's no distortion.
The texture lookup is correct, because this distortion doesn't occur for the other data I'm looking up, e.g. normals.

Here you can see the result of that distortion. (I've chosen a spotlight image because it makes the effect easier to see, but if I only draw the position vectors there's pretty much the same distortion.)

If I rotate the camera with a non-square window size, the spotlight circle also moves around. I think this happens because the other values are not affected by this distortion.

I'm not entirely sure if I implemented MJP's reconstruction code correctly.

```hlsl
// Calculate the view space vertex position
output.PositionVS = mul(input.PositionOS, WorldViewMatrix);
```
and in his fragment shader he does:
```hlsl
// Clamp the view space position to the plane at Z = 1
float3 viewRay = float3(input.PositionVS.xy / input.PositionVS.z, 1.0f);
```

For now I only want to render a fullscreen quad for each light, so I pass the vertices of a quad from (-1, -1, 0) to (1, 1, 0), use the unprojected vertices directly for gl_Position, and for my viewRay I use their xy values with -1 as the z value (-1 instead of 1, because his tutorial is for Direct3D).
Then in my fragment shader I reconstruct the view-space position like this:
```glsl
float depth = texture2D(texDepth, vPos2D).x;
float linearDepth = projectionConstants.y / (depth - projectionConstants.x);
vec3 position = vViewRay * linearDepth;
```

Has anyone had the same problem, or does anyone have an idea what's going wrong here?

##### Share on other sites

> Has anyone had the same problem, or does anyone have an idea what's going wrong here?

Something wrong with your projection matrices, perhaps? Is the scene geometry also distorted when you draw it normally, or is it just the lights?

[EDIT] I read your post more carefully, and it seems that the geometry/normals do render correctly.

You draw your light with a full-screen quad, so are your frustum corner vectors calculated based on your wide field of view?

Best regards

##### Share on other sites

> Something wrong with your projection matrices, perhaps? Is the scene geometry also distorted when you draw it normally, or is it just the lights?
>
> [EDIT] I read your post more carefully, and it seems that the geometry/normals do render correctly.
>
> You draw your light with a full-screen quad, so are your frustum corner vectors calculated based on your wide field of view?

Hi there! Thanks for the fast reply :3
I don't use the frustum corners.

For my fragment shader I calculate the projectionConstants as follows:

```glsl
projectionConstants.x = FarClipDistance / (FarClipDistance - NearClipDistance);
projectionConstants.y = (-FarClipDistance * NearClipDistance) / (FarClipDistance - NearClipDistance);
```
With that I'm trying to reconstruct the z-value of the position through

```glsl
float z = projectionConstants.y / (depth - projectionConstants.x);
```

and then multiply the view direction by that z-value for the actual view-space position.
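As a sanity check, the linearization above can be verified offline. Here's a minimal Python sketch (near/far values are hypothetical) showing that these constants exactly invert the [0,1] window depth that an OpenGL-style projection writes to the depth buffer:

```python
import math

# Hypothetical clip planes for illustration.
near, far = 0.1, 100.0

# The projection constants from the post.
proj_a = far / (far - near)
proj_b = (-far * near) / (far - near)

def stored_depth(linear_z):
    """[0,1] window depth that an OpenGL-style projection produces
    for a point at positive view-space distance linear_z."""
    ndc_z = ((far + near) / (far - near)
             - (2.0 * far * near) / ((far - near) * linear_z))
    return 0.5 * ndc_z + 0.5  # default glDepthRange maps [-1,1] -> [0,1]

def linearize(depth):
    """The reconstruction from the post: proj_b / (depth - proj_a)."""
    return proj_b / (depth - proj_a)

# Round trip: stored depth -> linear view-space distance.
for z in (0.5, 1.0, 10.0, 99.0):
    assert abs(linearize(stored_depth(z)) - z) < 1e-6
```

Note how the nonlinear 0.5*ndc+0.5 mapping and the projection's z row cancel out so that the stored depth is exactly `proj_a + proj_b / z`, which is why the simple division recovers linear depth.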

But you're right, this doesn't seem to take the field of view into account.

But how is it done in MJP's third example in that tutorial?
I'll think about it more later, I need to go to a lecture right now, but thanks a lot for helping!

##### Share on other sites
Yay, solved it!
It was related to the FOV indeed.

My perspective projection matrix is calculated like this:

```glsl
vFov = 1.0 / tan(vFovDegree * (PI / 360.0));
hFov = vFov / ratio;
p1 = (farPlaneDistance + nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);
p2 = (2.0 * farPlaneDistance * nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);

// resulting matrix:
// hFov    0     0     0
//    0  vFov    0     0
//    0     0    p1    p2
//    0     0    -1    0
```
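To make the aspect-ratio effect concrete, here's a small Python sketch of that same matrix (the FOV, ratio, and clip-plane values are just example inputs): the x column is scaled by `hFov = vFov / ratio`, so the same view-space point lands at a different NDC x for different window ratios.

```python
import math

def perspective(v_fov_degrees, ratio, near, far):
    """Build the 4x4 perspective matrix from the post, row by row."""
    v_fov = 1.0 / math.tan(math.radians(v_fov_degrees) / 2.0)  # = PI/360 per degree
    h_fov = v_fov / ratio
    p1 = (far + near) / (near - far)
    p2 = (2.0 * far * near) / (near - far)
    return [[h_fov, 0.0,   0.0,  0.0],
            [0.0,   v_fov, 0.0,  0.0],
            [0.0,   0.0,   p1,   p2],
            [0.0,   0.0,  -1.0,  0.0]]

def project(m, p):
    """Transform a view-space point and apply the perspective divide."""
    x, y, z = p
    clip = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = clip[3]  # equals -z because of the -1 in the last row
    return [clip[0] / w, clip[1] / w, clip[2] / w]

point = (1.0, 1.0, -5.0)  # a view-space point 5 units in front of the camera
ndc_square = project(perspective(60.0, 1.0, 0.1, 100.0), point)
ndc_wide = project(perspective(60.0, 16 / 9, 0.1, 100.0), point)

# Same point, same y in NDC, but a smaller x for the wider window:
assert abs(ndc_square[1] - ndc_wide[1]) < 1e-9
assert ndc_wide[0] < ndc_square[0]
```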
The screen coordinates onto which the geometry gets projected always go from (-1, -1) to (1, 1).
If I scale the window horizontally, there are more pixels under the same screen area.
So my projection matrix adjusts the basic FOV by the window-size ratio, so that more of the scene gets projected onto this screen area. This makes sense, because there are also more pixels under the same screen area, so we're preventing the scene from getting stretched.

In my lighting shader, I'm rendering a quad covering the whole screen, (-1, -1) to (1, 1).
When I reconstructed the position from depth, I directly used those coords as the direction and scaled them with the reconstructed z-value.
This didn't work, because the coords are always between (-1, -1) and (1, 1), but the geometry that got projected onto this area differs depending on the resulting FOV, and therefore on the ratio.

So what I did to fix this was passing hFov and vFov to the fragment shader and dividing my reconstructed position's xy-coords by them.

Thanks for your help again, it was the push I needed to finally understand the perspective projection completely.

Edit: I just found a better way to apply this solution. Instead of dividing the reconstructed position's xy-coords in the fragment shader, I can simply divide the viewDirection in the vertex shader, which then gets used to reconstruct the position in the fragment shader. That saves a lot of divisions if we just render a quad.
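The fix described above can be checked numerically. Here's a Python sketch (camera parameters are hypothetical) that projects a view-space point, then reconstructs it from its NDC coordinates and linear depth, dividing the view ray by hFov/vFov as in the solution:

```python
import math

# Hypothetical camera parameters for illustration.
v_fov_deg, ratio = 60.0, 16 / 9
v_fov = 1.0 / math.tan(math.radians(v_fov_deg) / 2.0)
h_fov = v_fov / ratio

def to_ndc(p):
    """Project a view-space point with the matrix from the post
    (w = -z, so xy are divided by the distance in front of the camera)."""
    x, y, z = p
    return (h_fov * x / -z, v_fov * y / -z)

def reconstruct(ndc_xy, linear_depth):
    """The fix: divide the full-screen-quad coords by hFov/vFov to get
    the view ray, then scale it by the linearized depth."""
    view_ray = (ndc_xy[0] / h_fov, ndc_xy[1] / v_fov, -1.0)
    return tuple(c * linear_depth for c in view_ray)

p = (2.0, -1.5, -7.0)                # original view-space position
rec = reconstruct(to_ndc(p), -p[2])  # linear depth = distance along -z
assert all(abs(a - b) < 1e-6 for a, b in zip(p, rec))
```

Without the divisions by h_fov and v_fov, the reconstructed x would be off by exactly a factor of h_fov and the y by v_fov, which is the ratio-dependent stretch seen in the screenshots.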
