Light Pre-Pass Question

Started by
9 comments, last by diginerd 14 years, 5 months ago
Hey guys, I'm implementing Wolfgang Engel's Light Pre-Pass renderer and I have a question. I've done the g-buffer pass storing the normal and depth, I did the light-buffer pass, accumulating all my lighting, and here's where I'm confused. I render the geometry again and use the light buffer, but it doesn't look right at all. If I render the final geometry to a texture and then use the light buffer with it, it works fine. My question is, what exactly do you do for the final geometry pass? Thanks
What does it look like?

In its simplest form, in the 2nd geometry pass you return

DiffuseLightBuffer.rgb * ColorTexture.rgb
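As a sketch, that second geometry pass fragment shader might look like the following (the sampler names and the `screenUV` varying are placeholders; the important assumption is that `screenUV` is this fragment's position in screen space, since the light buffer is a screen-sized texture, not a per-model texture):

```
uniform sampler2D lightBufferTex; // diffuse light accumulated in the light pass
uniform sampler2D colorTex;       // the model's diffuse/albedo texture

varying vec2 screenUV;            // this fragment's screen-space UV

void main(void)
{
    vec3 light = texture2D(lightBufferTex, screenUV).rgb;
    vec3 color = texture2D(colorTex, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(light * color, 1.0);
}
```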
It looks like the RenderMonkey RenderToTexture example. Every object I'm rendering just looks like it has the entire light buffer mapped on it.


http://www.freeimagehosting.net/image.php?2afa3e7b7b.jpg
I don't understand what I'm looking at...

Are you multiplying the value of the light buffer by the color texture?
I have a bunch of spheres that I want rendered. That's what those are in the picture. They have the light-buffer mapped onto them. You can see 3-4 color blobs grouped together on each sphere. I'm not sure what you mean by multiplying the light buffer by the color texture. Here is the final geometry shader:

vert:

void main()
{
    // Pass through the model's texture coordinates and transform the vertex
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

frag:

uniform sampler2D lightBufferTex;

void main(void)
{
    vec4 lightAtPixel = texture2D(lightBufferTex, gl_TexCoord[0].st);
    gl_FragColor = lightAtPixel * gl_FrontMaterial.diffuse;
}
It looks like what you are doing is texture mapping each sphere with the light map.

What I think you need to do is render a fullscreen quad textured with the light map instead.

I haven't implemented this before, but that's what it looks/sounds like.
You're using the light buffer as if it's a normal diffuse texture for each model, and using their texture coordinates. 'lightAtPixel' isn't actually getting the light at the current pixel as I assume gl_MultiTexCoord0 isn't being set to any special texture coordinates in your code.
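One common way to get a per-pixel light-buffer coordinate is to derive it from `gl_FragCoord`. A minimal sketch of the fixed fragment shader (the `screenSize` uniform is a placeholder, and this assumes the light buffer matches the framebuffer resolution):

```
uniform sampler2D lightBufferTex;
uniform vec2 screenSize; // framebuffer width/height in pixels

void main(void)
{
    // gl_FragCoord.xy is this fragment's window position in pixels, so
    // dividing by the framebuffer size gives a [0,1] UV into the light buffer
    vec2 lightUV = gl_FragCoord.xy / screenSize;
    vec4 lightAtPixel = texture2D(lightBufferTex, lightUV);
    gl_FragColor = lightAtPixel * gl_FrontMaterial.diffuse;
}
```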

You have two options:

1. Render the scene normally (with normal materials) to the main rendertarget, then draw a fullscreen quad of the light buffer with blending turned on (a multiply blend).

2. Render the scene normally to another render target, then later on draw another fullscreen quad that blends both render targets together.
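For the second option, the combine shader drawn over the fullscreen quad could be as simple as this sketch (sampler names are placeholders; it assumes both render targets are screen-sized and share the quad's texture coordinates):

```
uniform sampler2D sceneTex; // scene rendered with normal materials
uniform sampler2D lightTex; // accumulated light buffer

void main(void)
{
    vec3 scene = texture2D(sceneTex, gl_TexCoord[0].st).rgb;
    vec3 light = texture2D(lightTex, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(scene * light, 1.0); // multiply blend
}
```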

Hope this helps!

edit: You could probably also reconstruct your position using the depth buffer and then multiply the pixel based on that. But that'd probably be overly complex just to avoid an extra full screen quad ;)
Yeah, I just render the scene into a new texture and do a blend with the light texture. So here's another question: how would I implement parallax mapping using light pre-pass? I figure the light buffer pass won't change at all. All of the parallax work should be done inside the final geometry pass when I'm rendering to the texture, right? Then I can still just apply the lighting as a blend afterwards?
Actually, I will have to modify my light buffer pass to get the light direction in tangent space. Other than that I should be fine.
If you did that, you wouldn't have the TBN information to transform the light into tangent space during the light pass, unless you used more render targets. While I haven't actually done this yet, I believe the easiest route is to bring the normal back into view space (or whatever space you're lighting in) during the depth normal pass. You'll be rendering geometry and have access to the TBN, so pass it to the shader, lookup the normal, transform it back into view space and then write it. Then the light pass won't need to change at all.
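A sketch of that depth/normal pass fragment shader, bringing the normal-mapped normal back into view space before writing it (the uniform and varying names are placeholders; this assumes the vertex shader transformed the tangent, bitangent, and normal into view space and passed them down):

```
uniform sampler2D normalMapTex;

varying vec3 viewTangent;   // TBN basis in view space, interpolated
varying vec3 viewBitangent; // from the vertex shader
varying vec3 viewNormal;

void main(void)
{
    // Fetch the tangent-space normal and expand it from [0,1] to [-1,1]
    vec3 tsNormal = texture2D(normalMapTex, gl_TexCoord[0].st).xyz * 2.0 - 1.0;

    // Rotate it into view space with the interpolated TBN basis
    mat3 tbn = mat3(normalize(viewTangent),
                    normalize(viewBitangent),
                    normalize(viewNormal));
    vec3 n = normalize(tbn * tsNormal);

    // Pack back into [0,1] for storage in the g-buffer
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```

This way the light pass keeps reading view-space normals as before and needs no changes.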
-- gekko

This topic is closed to new replies.
