Deferred lighting, lit value influenced by camera


Hey,

I'm having a problem calculating the light value when rendering a full-screen quad in the light pass of my deferred renderer. The calculated light value changes when the camera's position changes. There's a pattern to the error: the lit value is larger than expected between the camera's position and the origin (0, 0, 0). Here are some images to show what I mean (far first, near second):
[Images: far.png, near.png]


On the left, behind the bright, specular-looking bulb, is the (0, 0, 0) position. On the right, the houses seem to be lit correctly, but the wall behind them now suffers from the same problem.

Some details about my setup:
- 32-bit hardware depth buffer
- RGB32F texture for normals
- Wrap s/t: clamp to edge
- Min/mag filter: linear
- Row-major matrices
- View = inverse of the camera's world matrix
- WorldView = World * View
- WorldViewProj = World * View * Proj
- ProjInverse = inverse of Proj
- Vertex-provided texture coordinates, top-left = (0, 0)

Geometry pass shader:
[source]
Pass p1 : PREPASS1
{
    VertexShader
    {
        #version 110
        uniform mat4 WorldViewProj;
        uniform mat4 WorldView;
        varying vec4 normal;

        void main()
        {
            gl_Position = WorldViewProj * gl_Vertex;
            // View-space normal (w = 0 so translation is ignored)
            normal = normalize( WorldView * vec4( gl_Normal, 0.0 ) );
        }
    }
    FragmentShader
    {
        #version 110
        varying vec4 normal;

        void main()
        {
            // Pack the [-1,1] normal into the [0,1] color range
            gl_FragData[ 0 ] = normal * 0.5 + 0.5;
        }
    }
}
[/source]

Light pass shader:
[source]
Pass p0
{
    VertexShader
    {
        #version 110
        uniform ivec2 ScreenSize;
        varying vec2 texCoord;

        void main()
        {
            // Map pixel coordinates to clip space for the full-screen quad
            vec4 vPos;
            vPos.x = gl_Vertex.x / ( float( ScreenSize.x ) / 2.0 ) - 1.0;
            vPos.y = 1.0 - gl_Vertex.y / ( float( ScreenSize.y ) / 2.0 );
            vPos.z = 0.0;
            vPos.w = 1.0;
            gl_Position = vPos;
            texCoord = vec2( gl_MultiTexCoord0.x, 1.0 - gl_MultiTexCoord0.y );
        }
    }
    FragmentShader
    {
        #version 110
        uniform sampler2D Texture0; // normals
        uniform sampler2D Texture1; // depth
        uniform mat4 View;
        uniform mat4 ProjInverse;
        varying vec2 texCoord;

        void main()
        {
            float depth = texture2D( Texture1, texCoord ).r;
            // Unpack the normal from [0,1] back to [-1,1]
            vec3 normal = ( texture2D( Texture0, texCoord ).rgb - 0.5 ) * 2.0;
            // Rebuild the clip-space position and unproject to view space
            vec4 projectedPos = vec4( 1.0 );
            projectedPos.x = texCoord.x * 2.0 - 1.0;
            projectedPos.y = texCoord.y * 2.0 - 1.0;
            projectedPos.z = depth;
            vec4 posVS4d = ProjInverse * projectedPos;
            vec3 posVS3d = posVS4d.xyz / posVS4d.w;
            // Light hardcoded at the world-space origin, moved to view space
            vec4 lightPos = View * vec4( 0.0, 0.0, 0.0, 1.0 );
            vec3 toLight = normalize( lightPos.xyz - posVS3d );
            float lit = max( 0.0, dot( toLight, normal ) );
            if( depth > 0.9999 )
                gl_FragData[ 0 ] = vec4( 0.0, 1.0, 1.0, 1.0 ); // no geometry here
            else
                gl_FragData[ 0 ] = vec4( lit );
        }
    }
}
[/source]

For now I'm using a light hardcoded at position (0, 0, 0), and I'm doing the calculations in view space (I also tried world space, without luck). I've followed some tutorials, but none seem to have this particular problem. I've also seen that there is a different approach that uses a view ray extracted from the frustum, but I'd like to get this working with unprojection first.

My question: can someone spot an error in what I'm doing?
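
One thing worth checking in a setup like this (an observation about the unprojection in general, not a confirmed diagnosis): a hardware depth buffer stores depth in [0, 1], while the inverse projection expects an NDC z in [-1, 1], just like x and y. A minimal sketch of the remap, assuming the default glDepthRange of [0, 1]:

[source]
FragmentShader
{
    #version 110
    uniform sampler2D Texture1; // depth
    uniform mat4 ProjInverse;
    varying vec2 texCoord;

    void main()
    {
        float depth = texture2D( Texture1, texCoord ).r; // [0,1] from the depth buffer
        vec4 projectedPos;
        projectedPos.x = texCoord.x * 2.0 - 1.0;
        projectedPos.y = texCoord.y * 2.0 - 1.0;
        projectedPos.z = depth * 2.0 - 1.0; // remap depth to NDC, like x and y
        projectedPos.w = 1.0;
        vec4 posVS4d = ProjInverse * projectedPos;
        vec3 posVS3d = posVS4d.xyz / posVS4d.w;
        gl_FragData[ 0 ] = vec4( posVS3d, 1.0 ); // reconstructed view-space position
    }
}
[/source]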

I've had similar problems before. The best way to solve it (i.e. learning at the same time) is to output values at selected places in your rendering pipeline and compare them to your expectations. Binary-search through your rendering pipeline for the problem :D

For example, output the raw world-space (or view-space) position from your G-buffer construction shader, and compare it with the position you reconstruct in your lighting shader.

By the way, if you're outputting debug values from your G-buffer construction shader, make sure you're using a format that can represent [-inf, inf] (not unorm textures), or scale appropriately. That tripped me up for some time when I was debugging...
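
For instance, a comparison pass along those lines might look like this (a sketch only; Texture2 is a hypothetical float render target holding the raw view-space position written by the geometry pass):

[source]
FragmentShader
{
    #version 110
    uniform sampler2D Texture1; // depth
    uniform sampler2D Texture2; // hypothetical: raw view-space position (float texture)
    uniform mat4 ProjInverse;
    varying vec2 texCoord;

    void main()
    {
        // Reconstruct the view-space position from depth, as in the light pass
        float depth = texture2D( Texture1, texCoord ).r;
        vec4 projectedPos = vec4( texCoord * 2.0 - 1.0, depth, 1.0 );
        vec4 posVS4d = ProjInverse * projectedPos;
        vec3 reconstructed = posVS4d.xyz / posVS4d.w;

        // Compare with the position stored during the geometry pass;
        // anything visibly non-black marks a reconstruction error
        vec3 stored = texture2D( Texture2, texCoord ).xyz;
        gl_FragData[ 0 ] = vec4( abs( reconstructed - stored ) * 0.1, 1.0 );
    }
}
[/source]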

It looks like I'm failing at reconstructing the view-space position:


[Image: viewpos.png]


The window shows the actual view-space position rendered by the models; the output on the bottom left is the reconstructed view-space position. Could this be caused by non-linear depth precision (near = 1.0, far = 500.0, average distance in the image ≈ 40.0)? I've also tried rendering my own linear depth (z / farClip), but I'm not sure how to use that to reconstruct the view-space position. Do I just skip the inverse projection and the w divide?
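
For reference, with a linear depth like that the usual approach is not to run the inverse projection at all, but to scale an interpolated view ray instead. A rough sketch, assuming the linear depth is stored as -viewPos.z / farClip (positive and in [0, 1], since view space looks down -z):

[source]
VertexShader
{
    #version 110
    uniform ivec2 ScreenSize;
    uniform mat4 ProjInverse;
    varying vec2 texCoord;
    varying vec3 farRay;

    void main()
    {
        // Full-screen quad, positioned as in the light pass
        vec4 vPos;
        vPos.x = gl_Vertex.x / ( float( ScreenSize.x ) / 2.0 ) - 1.0;
        vPos.y = 1.0 - gl_Vertex.y / ( float( ScreenSize.y ) / 2.0 );
        vPos.z = 0.0;
        vPos.w = 1.0;
        gl_Position = vPos;
        texCoord = vec2( gl_MultiTexCoord0.x, 1.0 - gl_MultiTexCoord0.y );

        // Unproject this corner onto the far plane (NDC z = 1);
        // interpolation then yields the correct far-plane point per fragment
        vec4 corner = ProjInverse * vec4( vPos.xy, 1.0, 1.0 );
        farRay = corner.xyz / corner.w;
    }
}
FragmentShader
{
    #version 110
    uniform sampler2D Texture1; // assumed: linear depth, -viewPos.z / farClip
    varying vec2 texCoord;
    varying vec3 farRay;

    void main()
    {
        float linDepth = texture2D( Texture1, texCoord ).r;
        // No inverse projection and no w divide: just scale the ray
        vec3 posVS = farRay * linDepth;
        gl_FragData[ 0 ] = vec4( posVS, 1.0 );
    }
}
[/source]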

Before proceeding, make sure that the failure does indeed occur at the reconstruction. Shade using your raw view-space positions and see if that works.

By the way, your view-space position seems quite devoid of reds (+x) and greens (+y).

No-geometry areas are shaded pure red... unless you cleared to red, does this make sense?
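
A sketch of that check, assuming the raw view-space position is available in a float texture (Texture2 here is hypothetical):

[source]
FragmentShader
{
    #version 110
    uniform sampler2D Texture0; // normals
    uniform sampler2D Texture2; // hypothetical: raw view-space position
    uniform mat4 View;
    varying vec2 texCoord;

    void main()
    {
        // Light from the stored position, bypassing reconstruction entirely;
        // if this is stable under camera movement, the bug is in the reconstruction
        vec3 posVS = texture2D( Texture2, texCoord ).xyz;
        vec3 normal = ( texture2D( Texture0, texCoord ).rgb - 0.5 ) * 2.0;
        vec3 lightVS = ( View * vec4( 0.0, 0.0, 0.0, 1.0 ) ).xyz; // light at world origin
        float lit = max( 0.0, dot( normalize( lightVS - posVS ), normal ) );
        gl_FragData[ 0 ] = vec4( lit );
    }
}
[/source]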

My window is cleared to red, and I'm testing for depth to make the light buffer red as well (the red areas are where there currently is no geometry at all). This is how I'm outputting the view-space position to the window:
[source]
Pass p2 : PREPASS2
{
    VertexShader
    {
        #version 110
        uniform mat4 WorldViewProj;
        uniform mat4 WorldView;
        varying vec4 viewPos;

        void main()
        {
            gl_Position = WorldViewProj * gl_Vertex;
            viewPos = WorldView * gl_Vertex; // view-space position
        }
    }
    FragmentShader
    {
        #version 110
        varying vec4 viewPos;

        void main()
        {
            // Scale into a visible color range for debugging
            gl_FragData[ 0 ] = vec4( viewPos.x / 300.0, viewPos.y / 100.0, viewPos.z / 300.0, 1.0 );
        }
    }
}
[/source]

And in the reconstruction shader:
[source]
FragmentShader
{
    #version 110
    uniform sampler2D Texture1; // depth
    uniform mat4 ProjInverse;
    varying vec2 texCoord;

    void main()
    {
        float depth = texture2D( Texture1, texCoord ).r;
        vec4 projectedPos = vec4( 1.0 );
        projectedPos.x = texCoord.x * 2.0 - 1.0;
        projectedPos.y = texCoord.y * 2.0 - 1.0;
        projectedPos.z = depth;
        vec4 posVS4d = ProjInverse * projectedPos;
        vec3 posVS3d = posVS4d.xyz / posVS4d.w;
        if( depth > 0.9999 )
            gl_FragData[ 0 ] = vec4( 1.0, 0.0, 0.0, 1.0 ); // no geometry here
        else
            gl_FragData[ 0 ] = vec4( posVS3d.x / 300.0, posVS3d.y / 100.0, posVS3d.z / 300.0, 1.0 );
    }
}
[/source]

I'm using the divides to bring the values into a visible color range, because otherwise I get this:

[Image: nodivide.png]
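
As an aside, one way to keep detail visible without hand-picking per-axis divisors is to wrap the values instead of scaling them; replacing the final output line of the two debug shaders with something like the line below (a debugging trick, not part of the original setup) makes mismatches show up as shifted bands:

[source]
// Wrap each axis into repeating [0,1) bands roughly every 10 units; matching
// positions produce matching band patterns, mismatches shift the bands
// (use viewPos.xyz in PREPASS2, posVS3d in the reconstruction shader)
gl_FragData[ 0 ] = vec4( fract( posVS3d * 0.1 ), 1.0 );
[/source]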
