
# Deferred lighting, lit value influenced by camera

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

### #1 Murdocki (Members, Reputation: 274)


Posted 30 October 2011 - 09:42 AM

Hey,

I'm having a problem calculating the light value when rendering a full-screen quad in the deferred lighting light pass. The problem is that the calculated light value changes when the camera's position changes. I'm seeing a pattern in the error: the lit value is larger than expected between the camera's position and the (0, 0, 0) position. Here are some images to show what I mean (far first, near second):

On the left, behind the bright specular-looking bulb, is the (0, 0, 0) position. On the right the houses seem to be lit correctly, but the wall behind them now suffers from the same problem.

My setup:

- 32-bit hardware depth buffer
- RGB32F texture for normals
- Wrap s/t: clamp to edge
- Min/mag filter: linear
- Row-major matrices
- View = inverse of the camera's world matrix
- WorldView = World * View
- WorldViewProj = World * View * Proj
- ProjInverse = inverse of Proj
- Vertex-provided texture coordinates, top-left = (0, 0)
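As a sanity check of these conventions, here is a minimal pure-Python sketch (hypothetical helpers, not the engine's actual API) showing that with row-major matrices and row-vector transforms, View = inverse of the camera's world matrix maps the camera's position to the view-space origin:

```python
def translation(tx, ty, tz):
    # Row-major translation matrix for row-vector math (v' = v * M):
    # the translation lives in the last row.
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def transform(v, m):
    # v' = v * M (row vector times row-major matrix)
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def mul(a, b):
    # C = A * B, so v * C == (v * A) * B
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Camera placed at (10, 5, -3); for a pure translation the inverse is
# simply the opposite translation.
cam_world = translation(10, 5, -3)
view = translation(-10, -5, 3)            # inverse of cam_world

# The camera's own position must map to the view-space origin.
print(transform([10, 5, -3, 1], view))    # -> [0, 0, 0, 1]
```

A rotation would require a full matrix inverse, but the translation-only case is enough to check the multiplication order.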

Pass p1 : PREPASS1
{
    {
        #version 110
        uniform mat4 WorldViewProj;
        uniform mat4 WorldView;
        varying vec4 normal;

        void main()
        {
            gl_Position = WorldViewProj * gl_Vertex;
            // w = 0.0: transform the normal as a direction, not a point.
            normal = normalize( WorldView * vec4( gl_Normal, 0.0 ) );
        }
    }
    {
        #version 110
        varying vec4 normal;

        void main()
        {
            // Pack the [-1,1] normal into the [0,1] color range.
            gl_FragData[ 0 ] = normal * 0.5 + 0.5;
        }
    }
}

Pass p0
{
    {
        #version 110
        uniform ivec2 ScreenSize;
        varying vec2 texCoord;

        void main()
        {
            // Map the full-screen quad's pixel-space vertices to clip space.
            vec4 vPos;
            vPos.x = gl_Vertex.x / ( float( ScreenSize.x ) / 2.0 ) - 1.0;
            vPos.y = 1.0 - gl_Vertex.y / ( float( ScreenSize.y ) / 2.0 );
            vPos.z = 0.0;
            vPos.w = 1.0;
            gl_Position = vPos;
            texCoord = vec2( gl_MultiTexCoord0.x, 1.0 - gl_MultiTexCoord0.y );
        }
    }
    {
        #version 110
        uniform sampler2D Texture0; // normal
        uniform sampler2D Texture1; // depth
        uniform mat4 View;
        uniform mat4 ProjInverse;
        varying vec2 texCoord;

        void main()
        {
            float depth = texture2D( Texture1, texCoord ).r;
            // Unpack the [0,1] normal back to [-1,1]. (GLSL 1.10 has no
            // implicit int->float conversion, so literals must be floats.)
            vec3 normal = ( texture2D( Texture0, texCoord ).rgb - 0.5 ) * 2.0;

            // Reconstruct the view-space position by unprojecting.
            vec4 projectedPos = vec4( 1.0 );
            projectedPos.x = texCoord.x * 2.0 - 1.0;
            projectedPos.y = texCoord.y * 2.0 - 1.0;
            projectedPos.z = depth;
            vec4 posVS4d = ProjInverse * projectedPos;
            vec3 posVS3d = posVS4d.xyz / posVS4d.w;

            // Light hardcoded at the world origin, moved into view space.
            vec4 lightPos = vec4( 0.0, 0.0, 0.0, 1.0 );
            lightPos = View * lightPos;
            vec3 toLight = normalize( lightPos.xyz - posVS3d );
            float lit = max( 0.0, dot( toLight, normal ) );

            if( depth > 0.9999 )
                gl_FragData[ 0 ] = vec4( 0.0, 1.0, 1.0, 1.0 );
            else
                gl_FragData[ 0 ] = vec4( lit );
        }
    }
}

For now I'm using a light hardcoded at position (0, 0, 0). I'm trying to do the calculations in view space (I also tried world space, without luck). I've followed some tutorials, but none seem to have this particular problem. I've also seen a different approach that uses a view ray extracted from the frustum, but I'd like to get this working with unprojection first.

My question: can anyone spot an error in what I'm doing?
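One thing worth double-checking in an unprojection path like this: an OpenGL depth texture stores values in [0, 1], while the inverse projection expects NDC z in [-1, 1]. This numeric sketch (plain Python, near/far taken from the post, the helper names are made up) reduces the projection to its z/w component and shows how skipping that remap distorts the reconstructed depth:

```python
# Near/far planes as given later in the thread.
n, f = 1.0, 500.0

# An OpenGL perspective projection maps view-space z (negative in front
# of the camera) to NDC z via: ndc = -A - B / z, where:
A = -(f + n) / (f - n)
B = -2.0 * f * n / (f - n)

def view_z_to_depth(z):
    ndc = -A - B / z              # forward projection (clip z / clip w)
    return ndc * 0.5 + 0.5        # what the depth buffer actually stores

def depth_to_view_z(depth, remap=True):
    # Invert the mapping above; optionally remap [0,1] depth back to NDC.
    ndc = depth * 2.0 - 1.0 if remap else depth
    return -B / (ndc + A)

z = -40.0                          # roughly the average scene distance
d = view_z_to_depth(z)

print(depth_to_view_z(d, remap=True))    # recovers approximately -40.0
print(depth_to_view_z(d, remap=False))   # noticeably wrong without remap
```

The error grows with distance from the near plane, which is consistent with a lighting artifact that shifts as the camera moves.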

### #2 jameszhao00 (Members, Reputation: 271)


Posted 30 October 2011 - 11:39 AM

I've had similar problems before. The best way to solve it (and learn at the same time) is to output values at selected places in your rendering pipeline and compare them to your expectations; in effect, binary-search through your rendering pipeline for the problem.

For example, output the raw worldspace (or viewspace) position from your GBuffer construction shader, and compare it with the position you reconstructed in your lighting shader.

By the way, if you're outputting debug values from your G-buffer construction shader, make sure you're using a format that can represent [-inf, inf] (not unorm textures), or scale appropriately. That tripped me up for some time when I was debugging...
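The unorm pitfall mentioned above is easy to illustrate (plain Python stand-ins for what the hardware does): an unsigned-normalized render target silently clamps writes to [0, 1], so negative debug values vanish unless you rescale them first.

```python
def store_unorm(v):
    # What a unorm render target does on write: clamp to [0, 1].
    return min(max(v, 0.0), 1.0)

def encode(v):
    # Scale a [-1, 1] value into [0, 1] before storing.
    return store_unorm(v * 0.5 + 0.5)

def decode(stored):
    # Undo the scaling when reading the debug output back.
    return stored * 2.0 - 1.0

print(store_unorm(-0.3))      # -> 0.0, the sign is silently lost
print(decode(encode(-0.3)))   # ~ -0.3, survives the round trip
```

This is the same 0.5/2.0 packing the normal buffer above already uses; the point is that debug outputs need it too.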

### #3 Murdocki (Members, Reputation: 274)


Posted 31 October 2011 - 02:49 PM

Looks like I'm failing at reconstructing the view-space position:

The window contains the actual view-space position rendered by the models; the output on the bottom left is the reconstructed view position. Could this be caused by non-linear depth precision (near = 1.0, far = 500.0, average distance in the image = 40.0)? I've tried rendering my own linear depth (z / farClip), but I'm not sure how to use it to reconstruct the view-space position. Do I just skip the inverse projection and the w divide?
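For the linear-depth question: one common scheme (a sketch under assumed values, not the poster's engine code) does indeed skip the inverse projection. If depth is stored as -z_view / far, you interpolate a view-space ray to the far plane per pixel and scale it by the stored depth:

```python
import math

far = 500.0                          # far plane from the post
fov_y = math.radians(60.0)           # assumed FOV, not given in the thread
aspect = 16.0 / 9.0                  # assumed aspect ratio

def far_plane_ray(u, v):
    # (u, v) in [-1, 1]: the view-space position on the far plane for
    # this pixel. In a shader this would be interpolated from the quad's
    # four corner rays.
    half_h = far * math.tan(fov_y / 2.0)
    half_w = half_h * aspect
    return (u * half_w, v * half_h, -far)

# Ground truth: a view-space point a quarter of the way along the ray.
ray = far_plane_ray(0.25, -0.5)
t = 0.25
point = tuple(c * t for c in ray)

linear_depth = -point[2] / far       # what the linear depth pass writes
rebuilt = tuple(c * linear_depth for c in ray)
print(rebuilt == point)              # -> True
```

This works because every point along the pixel's ray is a scalar multiple of the far-plane position, and -z / far is exactly that scalar. No perspective divide is needed.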

### #4 jameszhao00 (Members, Reputation: 271)


Posted 31 October 2011 - 05:05 PM

Before proceeding, make sure the failure really does occur at the reconstruction. Shade using your raw view-space positions and see if that works.

By the way, your view-space position seems quite devoid of reds (+x) and greens (+y).

Areas with no geometry are shaded pure red... unless you cleared to red, does that make sense?

### #5 Murdocki (Members, Reputation: 274)


Posted 01 November 2011 - 01:44 AM

My window is cleared to red, and I'm testing for depth to make the light buffer red as well (the red areas are where there is currently no geometry at all). This is how I'm outputting the view position to the window:
Pass p2 : PREPASS2
{
    {
        #version 110
        uniform mat4 WorldViewProj;
        uniform mat4 WorldView;
        varying vec4 viewPos;

        void main()
        {
            gl_Position = WorldViewProj * gl_Vertex;
            viewPos = WorldView * gl_Vertex;
        }
    }
    {
        #version 110
        varying vec4 viewPos;

        void main()
        {
            // Scale the view-space position into a visible color range.
            gl_FragData[ 0 ] = vec4( viewPos.x / 300.0, viewPos.y / 100.0, viewPos.z / 300.0, 1.0 );
        }
    }
}

FragmentShader
{
    #version 110
    uniform sampler2D Texture1; // depth
    uniform mat4 ProjInverse;
    varying vec2 texCoord;

    void main()
    {
        float depth = texture2D( Texture1, texCoord ).r;

        // Reconstruct the view-space position from the depth buffer.
        vec4 projectedPos = vec4( 1.0 );
        projectedPos.x = texCoord.x * 2.0 - 1.0;
        projectedPos.y = texCoord.y * 2.0 - 1.0;
        projectedPos.z = depth;
        vec4 posVS4d = ProjInverse * projectedPos;
        vec3 posVS3d = posVS4d.xyz / posVS4d.w;

        if( depth > 0.9999 )
            gl_FragData[ 0 ] = vec4( 1.0, 0.0, 0.0, 1.0 );
        else
            gl_FragData[ 0 ] = vec4( posVS3d.x / 300.0, posVS3d.y / 100.0, posVS3d.z / 300.0, 1.0 );
    }
}

I'm using the divides to get a better range of possible color values, because otherwise I get this:

### #6 Murdocki (Members, Reputation: 274)


Posted 05 November 2011 - 03:43 AM

For anyone stumbling on this same issue: it appears to be solved by the solution presented here.
