I'm having a problem calculating the light value when rendering a full-screen quad in the deferred lighting light pass: the calculated light value changes when the camera's position changes. I'm seeing a pattern in the error: a larger lit value than expected between the camera's position and the (0, 0, 0) position. Here are some images to show what I mean (far first, near second):

On the left, behind the bright specular-ish looking bulb, is the (0, 0, 0) position. On the right the houses seem to be lit correctly, but the wall behind them now suffers from the same problem.

Some details about my setup:

-32-bit hardware depth buffer

-RGB32F texture for normals

-Wrap s/t: clamp to edge

-Min/mag filter: linear

-Row-major matrices

-View = inverted camera world matrix

-WorldView = World * View

-WorldViewProj = World * View * Proj

-ProjInverse = inverted Proj

-Vertex-provided texture coordinates, top-left = (0, 0)
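To make the matrix conventions above concrete, here is a small Python sketch (names and values are illustrative, not from my engine, and it uses the column-vector convention for brevity): View is the inverse of the camera's world transform, so the camera's world position should map to the view-space origin, and a world-space light at (0, 0, 0) lands at the negated camera position.

```python
# Illustrative sketch: View = inverse of the camera's world matrix.
# 4x4 matrices as rows of rows; points as column vectors.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# Camera placed at (3, 2, 5) in world space (rotation omitted for brevity).
cam_world = translation(3, 2, 5)

# For a pure translation, the inverse is the opposite translation.
view = translation(-3, -2, -5)

# The camera's own position must land at the view-space origin ...
origin = mat_vec(view, [3, 2, 5, 1])

# ... and a world-space light at (0, 0, 0) becomes (-3, -2, -5) in view
# space, which is what `lightPos = View * lightPos` computes in the light pass.
light_vs = mat_vec(view, [0, 0, 0, 1])
print(origin, light_vs)
```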

Geometry pass shader:

Pass p1 : PREPASS1
{
    VertexShader
    {
        #version 110

        uniform mat4 WorldViewProj;
        uniform mat4 WorldView;

        varying vec4 normal;

        void main()
        {
            gl_Position = WorldViewProj * gl_Vertex;
            normal = normalize( WorldView * vec4( gl_Normal, 0 ) );
        }
    }
    FragmentShader
    {
        #version 110

        varying vec4 normal;

        void main()
        {
            gl_FragData[ 0 ] = normal * 0.5 + 0.5;
        }
    }
}
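The normal packing in this fragment shader and the unpacking in the light pass below should be exact inverses; a quick sketch (pure Python, illustrative values) of that round trip:

```python
def encode(n):
    """Pack a [-1, 1] normal into [0, 1], as in `normal * 0.5 + 0.5`."""
    return [c * 0.5 + 0.5 for c in n]

def decode(t):
    """Unpack, as in `(texture2D(...).rgb - 0.5) * 2.0` in the light pass."""
    return [(c - 0.5) * 2.0 for c in t]

# A unit normal; with an RGB32F target the stored values are not quantized,
# so the round trip should recover it (up to float rounding).
n = [0.0, 0.70710678, -0.70710678]
roundtrip = decode(encode(n))
print(roundtrip)
```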

Light pass shader:

Pass p0
{
    VertexShader
    {
        #version 110

        uniform ivec2 ScreenSize;

        varying vec2 texCoord;

        void main()
        {
            vec4 vPos;
            vPos.x = gl_Vertex.x / ( float( ScreenSize.x ) / 2.0 ) - 1.0;
            vPos.y = 1.0 - gl_Vertex.y / ( float( ScreenSize.y ) / 2.0 );
            vPos.z = 0.0;
            vPos.w = 1.0;
            gl_Position = vPos;
            texCoord = vec2( gl_MultiTexCoord0.x, 1.0 - gl_MultiTexCoord0.y );
        }
    }
    FragmentShader
    {
        #version 110

        uniform sampler2D Texture0; // normal
        uniform sampler2D Texture1; // depth
        uniform mat4 View;
        uniform mat4 ProjInverse;

        varying vec2 texCoord;

        void main()
        {
            float depth = texture2D( Texture1, texCoord ).r;
            vec3 normal = ( texture2D( Texture0, texCoord ).rgb - 0.5 ) * 2.0;

            vec4 projectedPos = vec4( 1.0 );
            projectedPos.x = texCoord.x * 2.0 - 1.0;
            projectedPos.y = texCoord.y * 2.0 - 1.0;
            projectedPos.z = depth;

            vec4 posVS4d = ProjInverse * projectedPos;
            vec3 posVS3d = posVS4d.xyz / posVS4d.w;

            vec4 lightPos = vec4( 0.0, 0.0, 0.0, 1.0 );
            lightPos = View * lightPos;

            vec3 toLight = normalize( lightPos.xyz - posVS3d );
            float lit = max( 0.0, dot( toLight, normal ) );

            if( depth > 0.9999 )
                gl_FragData[ 0 ] = vec4( 0.0, 1.0, 1.0, 1.0 );
            else
                gl_FragData[ 0 ] = vec4( lit );
        }
    }
}

For now I'm using a light hard-coded at position (0, 0, 0). I'm doing the calculations in view space (I also tried world space, without luck). I've followed some tutorials, but none seem to have this particular problem. I've also seen that there is a different approach that uses a view ray extracted from the frustum, but I'd like to get this working with unprojection first.
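As a sanity check on the unprojection math, here is a small Python sketch (the projection parameters are made up) that projects a view-space point with a standard GL perspective matrix and then unprojects it with the analytic inverse, the same two steps the fragment shader performs. The round trip recovers the original point when all three NDC coordinates fed into the inverse projection are in [-1, 1].

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fovy_deg, aspect, zn, zf):
    """Standard OpenGL perspective matrix (column-vector convention)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (zf + zn) / (zn - zf), 2.0 * zf * zn / (zn - zf)],
            [0, 0, -1, 0]]

def perspective_inverse(fovy_deg, aspect, zn, zf):
    """Analytic inverse of the matrix above."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    c = (zf + zn) / (zn - zf)
    d = 2.0 * zf * zn / (zn - zf)
    return [[aspect / f, 0, 0, 0],
            [0, 1.0 / f, 0, 0],
            [0, 0, 0, -1],
            [0, 0, 1.0 / d, c / d]]

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
proj_inv = perspective_inverse(60.0, 16.0 / 9.0, 0.1, 100.0)

# A view-space point in front of the camera (negative z in GL view space).
pos_vs = [1.0, 2.0, -10.0, 1.0]

# Project: clip space, then perspective divide to NDC.
clip = mat_vec(proj, pos_vs)
ndc = [clip[i] / clip[3] for i in range(3)]

# Unproject: inverse projection, then divide by w -- same as the shader's
# `posVS4d = ProjInverse * projectedPos; posVS3d = posVS4d.xyz / posVS4d.w`.
back = mat_vec(proj_inv, [ndc[0], ndc[1], ndc[2], 1.0])
recovered = [back[i] / back[3] for i in range(3)]
print(recovered)  # should be close to (1, 2, -10)
```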

My question: can anyone spot an error in what I'm doing?