Lighting with deferred rendering

Started by
4 comments, last by FTLRalph 9 years, 1 month ago

Hey all, I'm working on a 2D-deferred-rendering-with-lighting implementation and I have a question.

So for those of you familiar with deferred rendering, I already have my geometry FBO populated with a diffuse buffer and a normal buffer.

I'm now currently at the stage where I need to add some lights to a "light accumulation buffer" and I have no idea what that means.

I take it I need a new FBO to render in - okay I got that. I also have an array full of light data objects (x, y, rgb, intensity, radius) - only dealing with point lights at the moment.

I'm assuming I need to for-loop through my lights, set the appropriate uniforms of a shader per light, draw the diffuseBuffer texture to the new light FBO while the shader does its lighting magic, and repeat.

However, I don't understand how that would work. Wouldn't I just overwrite the previous light after I draw over it in the next light pass? And the next one, and so on? What if lights overlap?

I believe looking at a standard normal-lighting GLSL shader for this type of implementation would really help. Or any advice, really.

Appreciate any help!


You can draw the lights one by one to the backbuffer using additive blending. Each light pass uses the light data (uniforms) and the G-buffer data as input and outputs radiance. As an optimization, you can draw each light using its bounding area (circle/quad) instead of a fullscreen quad.
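To illustrate, here's a rough sketch of a per-light fragment shader for a 2D point light, drawn with additive blending so each pass adds its own contribution. All the uniform and sampler names are assumptions, as is the normal encoding (stored 0..1 in the normal buffer) and the linear falloff:

```glsl
#version 330 core

uniform sampler2D uDiffuse;    // G-buffer albedo
uniform sampler2D uNormals;    // G-buffer normals, stored in 0..1
uniform vec2  uLightPos;       // light position in screen space (pixels)
uniform vec3  uLightColor;
uniform float uIntensity;
uniform float uRadius;
uniform vec2  uScreenSize;

in  vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec3 albedo = texture(uDiffuse, vTexCoord).rgb;
    vec3 normal = normalize(texture(uNormals, vTexCoord).rgb * 2.0 - 1.0);

    // Vector from this pixel to the light, in screen space.
    vec2 fragPos = vTexCoord * uScreenSize;
    vec2 toLight = uLightPos - fragPos;
    float dist   = length(toLight);

    // Simple linear falloff; pixels past the radius contribute nothing.
    float atten = max(0.0, 1.0 - dist / uRadius);

    // Treat the light as sitting slightly above the 2D plane so the
    // normal map has a z component to react to.
    vec3 lightDir = normalize(vec3(toLight, 50.0));
    float ndotl   = max(0.0, dot(normal, lightDir));

    fragColor = vec4(albedo * uLightColor * uIntensity * atten * ndotl, 1.0);
}
```

With `glBlendFunc(GL_ONE, GL_ONE)` enabled, overlapping lights simply sum their contributions instead of overwriting each other.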


Alright, I think I'm grasping the idea a little better.

If I'm understanding correctly, I would need to first populate my lit backbuffer with the G-buffer diffuse data completely, and then loop through my lights and add them one by one (bounding the draw area like you said), right?

And additive blending, from what I'm reading, it seems I should be using (GL_ONE, GL_ONE), right?

The only issue I've seen from this approach is if I re-draw a piece of the screen over and over on top of itself with this blending, it seems to become bright/over-exposed and obviously doesn't look right. How can this be avoided?

First you populate the G-buffer: separate render targets for diffuse, normals, and maybe roughness + specular intensity.

Then you render light passes that read from the G-buffer and write to the backbuffer.

Over-bright images can be combatted with HDR rendering and tone mapping, but first make the initial system work.
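When you get to that stage, a minimal tone-mapping pass over an HDR accumulation buffer could look like this sketch (Reinhard operator; sampler name assumed):

```glsl
#version 330 core

uniform sampler2D uHdrBuffer;  // light accumulation buffer (e.g. RGB16F)

in  vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec3 hdr = texture(uHdrBuffer, vTexCoord).rgb;
    // Reinhard tone mapping: maps [0, inf) into [0, 1) smoothly, so
    // overlapping lights get brighter without clipping to pure white.
    vec3 mapped = hdr / (hdr + vec3(1.0));
    fragColor = vec4(mapped, 1.0);
}
```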

The way I'm currently doing it is:

G Buffer fill Pass

- Write albedo to an RGBA8 buffer; the A component is used for the specular map.

- Write encoded normals to an RG16F buffer.

- Write the shininess/glossiness factor to a component in another buffer because I ran out of components in the other 2 >_<
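A G-buffer fill pass along those lines might be sketched like this (MRT fragment shader; all material sampler/uniform names and the normal encoding are assumptions):

```glsl
#version 330 core

uniform sampler2D uAlbedoMap;
uniform sampler2D uSpecularMap;
uniform float uShininess;

in  vec2 vTexCoord;
in  vec3 vNormal;   // view-space normal from the vertex shader

layout(location = 0) out vec4 gAlbedoSpec;  // RGBA8: rgb = albedo, a = specular
layout(location = 1) out vec2 gNormal;      // RG16F: encoded normal
layout(location = 2) out vec4 gParams;      // x = shininess, rest unused

// Toy encoding: store x/y of the unit normal, reconstruct z in the
// lighting pass (assumes the normal faces the camera).
vec2 encodeNormal(vec3 n) { return n.xy * 0.5 + 0.5; }

void main() {
    gAlbedoSpec.rgb = texture(uAlbedoMap, vTexCoord).rgb;
    gAlbedoSpec.a   = texture(uSpecularMap, vTexCoord).r;
    gNormal         = encodeNormal(normalize(vNormal));
    gParams         = vec4(uShininess, 0.0, 0.0, 0.0);
}
```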

Lighting pass

- Sample albedo, specular and shininess.

- Decode normals.

- Reconstruct position from depth.

- Compute lighting (albedo * diffuse + specular) and output to light accumulation buffer.
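The lighting-pass steps above, as a rough fragment-shader sketch (point light with Blinn-Phong specular; uniform/sampler names, the normal encoding, and the falloff are all assumptions, and position reconstruction assumes standard OpenGL depth conventions):

```glsl
#version 330 core

uniform sampler2D uAlbedoSpec;   // rgb = albedo, a = specular intensity
uniform sampler2D uNormals;      // encoded view-space normals
uniform sampler2D uDepth;        // depth buffer
uniform mat4  uInvProjection;
uniform vec3  uLightPos;         // view space
uniform vec3  uLightColor;
uniform float uRadius;
uniform float uShininess;

in  vec2 vTexCoord;
out vec4 fragColor;              // additively blended into the accumulation buffer

vec3 decodeNormal(vec2 enc) {
    vec3 n;
    n.xy = enc * 2.0 - 1.0;
    n.z  = sqrt(max(0.0, 1.0 - dot(n.xy, n.xy)));
    return n;
}

// Reconstruct view-space position from the depth buffer.
vec3 reconstructPosition(vec2 uv, float depth) {
    vec4 clip = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);
    vec4 view = uInvProjection * clip;
    return view.xyz / view.w;
}

void main() {
    vec4 albedoSpec = texture(uAlbedoSpec, vTexCoord);
    vec3 normal     = decodeNormal(texture(uNormals, vTexCoord).rg);
    vec3 position   = reconstructPosition(vTexCoord, texture(uDepth, vTexCoord).r);

    vec3 toLight  = uLightPos - position;
    float dist    = length(toLight);
    vec3 lightDir = toLight / dist;
    float atten   = max(0.0, 1.0 - dist / uRadius);

    float ndotl  = max(0.0, dot(normal, lightDir));
    vec3 diffuse = albedoSpec.rgb * uLightColor * ndotl;

    // Blinn-Phong specular; view direction is -position in view space.
    vec3 viewDir = normalize(-position);
    vec3 halfVec = normalize(lightDir + viewDir);
    float spec   = pow(max(0.0, dot(normal, halfVec)), uShininess) * albedoSpec.a;

    fragColor = vec4((diffuse + uLightColor * spec) * atten, 1.0);
}
```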

That accumulation buffer is usually high range (RGB16F, R11_G11_B10F, etc), and lighting passes are done with additive blending. After that you're free to do post process effects with the high range buffer (bloom, tone mapping, motion blur, etc).

Keep in mind that this is pretty much using the same FBO with multiple attachments. You'll need more FBOs if you want more depth buffers (for some reason, say, shadowing) or if you run out of attachment slots for a single FBO (unlikely; D3D10+ class hardware supports at least 8).

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Alright guys, thanks for the input. Just hearing it put in a different way helps me to make sense of it better.

This topic is closed to new replies.
