Deferred shading - light accumulation issue

bantherewind    104
I'm working on deferred shading and all has gone well until it comes time to render multiple lights. The first shader pass renders my scene to a G-buffer with color (albedo), position, normal, and material (specular, etc.) data written to separate color attachments. The next passes create and blur an SSAO texture from the normal data and a randomizer, and it looks nice. Then I do the final lighting pass. I started experimenting by rendering a sphere to represent a large point light (see fig. 1), and it looks mostly like what I would expect. So far so good.
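
For reference, the G-buffer pass renders to several color attachments at once. A trimmed-down sketch of that setup is below; gBufferFbo and the *Tex handles are placeholder names, not my actual variables:

[CODE]
// Sketch of binding multiple color attachments for the G-buffer pass.
// gBufferFbo, albedoTex, positionTex, normalTex, materialTex are placeholder handles.
glBindFramebuffer( GL_FRAMEBUFFER, gBufferFbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, albedoTex,   0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, positionTex, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, normalTex,   0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, materialTex, 0 );

const GLenum buffers[] = {
    GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
    GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3
};
glDrawBuffers( 4, buffers ); // fragment shader writes to gl_FragData[0..3]
[/CODE]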

My goal is to draw a ton of lights in my scene. When I add a second light, it occludes the first (see fig. 2). I've tried using [font=courier new,courier,monospace]glAccum()[/font] to no avail. When I render a second light that overlaps the first in screen space (see fig. 3 and fig. 4), the overlapping pixels are simply overwritten. I'm having trouble understanding how to create and display a light accumulation buffer in a single shader pass. I could ping-pong FBOs between lights, but that seems like it would be really slow, and it hasn't been suggested in any material I've read.

The vertex shader (see fig. 5) and fragment shader (see fig. 6) for my lighting pass are below.

Fig. 1. One light
[img]http://bantherewind.com/uploads/accum_01_sm.jpg[/img]

Fig. 2. Two lights, occluding instead of accumulating
[img]http://bantherewind.com/uploads/accum_02_sm.jpg[/img]

Fig. 3. Spheres representing lights
[img]http://bantherewind.com/uploads/accum_04_sm.jpg[/img]

Fig. 4. Zoomed-out view of fig. 3
[img]http://bantherewind.com/uploads/accum_03_sm.jpg[/img]

Fig. 5. Lighting pass vertex shader
[CODE]
varying vec4 lightPosition;

void main( void )
{
    lightPosition = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = lightPosition;
}
[/CODE]

Fig. 6. Lighting pass fragment shader
[CODE]
uniform vec3 eyePoint; // Camera eye point
uniform vec4 lightAmbient; // Light ambient color
uniform vec3 lightCenter; // Center of light shape
uniform vec4 lightDiffuse; // Light diffuse color
uniform float lightRadius; // Size of light
uniform vec4 lightSpecular; // Light specular color
uniform vec2 pixel; // To convert pixel coords to [0,0]-[1,1]
uniform sampler2D texAlbedo; // Color data
uniform sampler2D texMaterial; // R=spec level, G=spec power, B=emissive
uniform sampler2D texNormal; // Normal-depth map
uniform sampler2D texPosition; // Position data
uniform sampler2D texSsao; // SSAO

varying vec4 lightPosition; // Position of current vertex in light

void main()
{
    // Get screen space coordinate
    vec2 uv = gl_FragCoord.xy * pixel;

    // Sample G-buffer
    vec4 albedo = texture2D( texAlbedo, uv );
    vec4 material = texture2D( texMaterial, uv );
    vec4 normal = texture2D( texNormal, uv );
    vec4 position = texture2D( texPosition, uv );
    vec4 ssao = texture2D( texSsao, uv );

    // Calculate reflection
    vec3 eye = normalize( -eyePoint );
    vec3 light = normalize( lightCenter.xyz - position.xyz );
    vec3 reflection = normalize( -reflect( light, normal.xyz ) );

    // Calculate light values
    vec4 ambient = lightAmbient;
    vec4 diffuse = clamp( lightDiffuse * max( dot( normal.xyz, light ), 0.0 ), 0.0, 1.0 );
    vec4 specular = clamp( material.r * lightSpecular * pow( max( dot( reflection, eye ), 0.0 ), material.g ), 0.0, 1.0 );
    vec4 emissive = vec4( material.b ); // material.b is a float, so construct a vec4 explicitly

    // Combine color and light values
    vec4 color = albedo;
    color += ambient + diffuse + specular + emissive;

    // Apply light falloff
    float falloff = 1.0 - distance( lightPosition, position ) / lightRadius;
    color *= falloff;

    // Apply SSAO
    color -= vec4( 1.0 ) * ( 1.0 - ssao.r );

    // Set final color
    gl_FragColor = color;
}
[/CODE]

johnchapman    601
Typically when accumulating lights you'll need to enable blending, so that the result of the fragment shader gets combined with what's already in the buffer (i.e. the result from previous lights):

[CODE]
glClear(GL_COLOR_BUFFER_BIT); // clear light accumulation buffer to black here

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive blending

// render lights

glDisable(GL_BLEND);
[/CODE]
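
Roughly, the whole pass then looks something like this. This is just a sketch: accumFbo, lightProgram, setLightUniforms() and drawLightSphere() are placeholder names, not anything from your code.

[CODE]
// Sketch of a full light accumulation pass (placeholder names throughout).
glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);   // render into the accumulation buffer

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);                  // start from black

glDepthMask(GL_FALSE);                         // light volumes shouldn't write depth
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                   // additive: each light adds its contribution

glUseProgram(lightProgram);
for (const Light& light : lights) {
    setLightUniforms(lightProgram, light);     // center, radius, colors, etc.
    drawLightSphere(light);                    // draw the light's bounding sphere geometry
}

glDisable(GL_BLEND);
glDepthMask(GL_TRUE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/CODE]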

bantherewind    104
Got it! Needed to disable depth testing.

The lights are blending, but now it seems I can only see from "inside" the lights, or close to it. If I zoom out, I would expect the scene to be illuminated, but smaller. Instead it goes black, as if the only visible area is inside the sphere.

gfxgangsta    806
[quote name='bantherewind' timestamp='1352323238' post='4998586']
Got it! Needed to disable depth testing.

The lights are blending, but now it seems I can only see from "inside" the lights, or close to it. If I zoom out, I would expect the scene to be illuminated, but smaller. Instead it goes black, as if the only visible area is inside the sphere.
[/quote]

Do you have backface culling enabled? If so, I think you need to flip the cull face when the camera "enters"/"exits" a light sphere.

From http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/point-lights/ :

"After setting the parameters, we draw the sphere model, using the effect file. But before we do this, we must set the desired culling mode. If we are outside the sphere, we want to draw the exterior of the sphere. Otherwise, when the camera in inside the light volume, we need to draw the inner side of the sphere. Using CullMode.None would apply the lighting calculations twice when the camera is outside the viewing volume, which is not desirable. By switching the culling mode, we make sure that the light is always applied once."

bantherewind    104
So here's the new issue. The lights are blending into each other better (see fig. 1), but I have to be inside (or almost inside) the sphere. Everything goes black if I back out (see fig. 2 and fig. 3). The [font=courier new,courier,monospace]falloff[/font] variable in my fragment shader controls how much to mix the color. It should be determined by how far the current light vertex is from the position sampled from the G-buffer. Both are calculated using [font=courier new,courier,monospace]gl_ModelViewProjectionMatrix * gl_Vertex[/font], so the values should remain relative. I'm not sure if the camera's position is somehow reducing this distance, or if there is something inherent in drawing these spheres that is blocking the view of what's inside.

Fig. 1. Lights are now blending into each other
[img]http://bantherewind.com/uploads/accum_05_sm.jpg[/img]

Fig. 2. Image gets darker from outside
[img]http://bantherewind.com/uploads/accum_06_sm.jpg[/img]

Fig. 3. Light position when the image gets too dark to see
[img]http://bantherewind.com/uploads/accum_07_sm.jpg[/img]

bantherewind    104
Well, now I feel like an idiot. I had a typo in the C++ where I set the uniform for my position texture (it was spelled "texPositon"). My position was always (0, 0, 0), which gave me a bad falloff value. Thanks for the help!
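
Note for anyone else who hits this: checking the location returned by [font=courier new,courier,monospace]glGetUniformLocation()[/font] would have caught the typo right away. A small sketch (lightProgram is a placeholder for my shader handle):

[CODE]
// glGetUniformLocation() returns -1 when the name doesn't match any active uniform
// (misspelled, or optimized out because it's unused). glUniform* on -1 is silently ignored.
GLint loc = glGetUniformLocation(lightProgram, "texPosition");
if (loc == -1) {
    printf("Warning: uniform \"texPosition\" not found\n");
}
glUniform1i(loc, 3); // e.g. the position texture is bound to texture unit 3
[/CODE]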

It's subtle, but this has two point lights and a directional light blending nicely. And I can see it from far away.

[img]http://bantherewind.com/uploads/accum_08_sm.jpg[/img]

[img]http://bantherewind.com/uploads/accum_09_sm.jpg[/img]
