Deferred Lighting: Materials using Light Buffer Post-Processing

Started by n00body · 8 comments, last by n00body 14 years ago
Recently, I have been re-examining the shading possibilities offered by Deferred Lighting/Pre-Lighting/Light Prepass. Specifically, I have heard that a variety of effects (e.g. skin) can be achieved by post-processing the contents of the light buffer before combining it with materials in the second pass.

So I was wondering: has anyone tried this kind of trick, and would you be willing to share your experiences with it? Any comments on quality or limitations of the technique? Have you found any other interesting effects from post-processing the light buffer? Any input would be most appreciated. ;)

EDIT: I'm asking about ways to post-process the lighting data for different material effects. This pertains only to Deferred Lighting/Pre-Lighting/Light Prepass, which uses a minimal g-buffer and light buffer(s), plus a second geometry pass to combine lighting with materials. This second pass is where these tricks would occur.

[Edited by - n00body on April 6, 2010 12:28:13 AM]

[Hardware:] Falcon Northwest Tiki, Windows 7, Nvidia Geforce GTX 970

[Websites:] Development Blog | LinkedIn
[Unity3D :] Alloy Physical Shader Framework

Hmmm, let me think about what I'm doing with it. The buffers you produce (albedo, specular, normals, depth, ...) can be used for:
- SSAO
- Depth of Field
- Motion blur
- ...
And many more things. The lighting results could perhaps be used for faking some (realtime) ambient lighting; I don't know if anyone has had real success with that though. Some other stuff:
- Scene info / tone mapping (getting the average brightness)
- Contrast filtering to extract the highlights for effects such as bloom or god rays (blurred light streaks)
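To illustrate the tone-mapping point above, here is a minimal sketch of extracting log-luminance from the lighting results for exposure control. The sampler name and the epsilon are illustrative assumptions, not from the thread:

```hlsl
// Hypothetical sketch: log-luminance of the light buffer for tone mapping.
// Repeatedly downsampling this target and taking exp() of the final texel
// gives the scene's log-average brightness.
sampler2D LightBufferSampler;

float4 PS_Luminance(float2 uv : TEXCOORD0) : COLOR0
{
    float3 color = tex2D(LightBufferSampler, uv).rgb;
    // Rec. 709 luma weights
    float lum = dot(color, float3(0.2126, 0.7152, 0.0722));
    // Small epsilon avoids log(0) on black pixels
    return float4(log(lum + 0.0001), 0, 0, 1);
}
```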

There is a variant on Deferred Lighting that separates the lighting and material combination as well: Inferred Lighting. The paper on this page might give you some more ideas:
http://graphics.cs.uiuc.edu/~kircher/publications.html

Last but not least, you could maybe use it for AI. If you need to know whether something is lit or not, you can query the lighting results (as a scaled-down texture) for info.

I think most of the techniques here do not necessarily require deferred lighting though. When doing forward rendering, you can also capture the results and do similar things.

Well, I use the following "tricks" in my deferred rendering engine:
- SSAO (pretty standard :) )
- decals applied directly to g-buffer
- projective textures, similar to decals
- expensive light shader (for large light radius)
- particle lights applied directly to g-buffer (for small light radius)
- multilayer material system (to fake skin-effects)


I believe that you guys have misunderstood my original question. It pertains to a specific type of deferred rendering where you have a buffer that stores nothing but unmodified lighting data, followed by a material pass where objects are drawn a second time, sampling the light buffer(s) to light their materials.

I am asking about modifying this sampled lighting inside each object's shader during the second pass. In particular, I was wondering if anyone has used this for skin effects, since you could possibly blur the sampled lighting data. Though I am also interested in hearing about other, similar uses of this lighting data.

Implementation details, or sources of more information on these approaches would be appreciated.
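For concreteness, here is a minimal sketch of the kind of second-pass material shader I mean, assuming the light buffer stores diffuse lighting in RGB and a monochrome specular term in alpha. All sampler and constant names here are illustrative assumptions:

```hlsl
// Hypothetical light-prepass material pass. The light buffer is sampled
// at the pixel's screen position; the sampled lighting could be modified
// per-material here before being combined with the albedo.
sampler2D LightBufferSampler;
sampler2D AlbedoSampler;
float3    SpecularColor;

float4 PS_Material(float4 screenPos : TEXCOORD0,
                   float2 uv        : TEXCOORD1) : COLOR0
{
    // Projected position -> screen-space UV (D3D9-style, sign flip omitted)
    float2 screenUV = screenPos.xy / screenPos.w * 0.5 + 0.5;
    float4 light    = tex2D(LightBufferSampler, screenUV);
    float3 albedo   = tex2D(AlbedoSampler, uv).rgb;

    // <-- this is where per-material post-processing of the sampled
    //     lighting (tinting, wrapping, blurring, ...) would happen
    return float4(light.rgb * albedo + light.a * SpecularColor, 1);
}
```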


Well, splitting up is what happens in Inferred Lighting; see the earlier link. I'm supposing you mean only the incoming light is stored in that buffer
( dot( normal, lightDirection ) * lightColor + specular? ).

The technique itself isn't doing post effects with the light data, but by separating the passes it can add transparency information. I forget the precise details though.


Not sure if blurring over multiple pixels would be useful though. I'm no expert on skin effects, but I think they usually sample glossy reflections from a blurred cube map / BRDF, and/or add a glow with some shader (sheen, velvet, rim lighting). I don't think you can get the same type of info from sampling neighbouring pixels' light data.

Maybe it is useful for materials that have a high degree of internal reflection (glass, crystal?). But I can't really think of other (lighting) effects that make good use of a separate buffer, except for the things I stated earlier...

Inferred lighting is a specialization of pre-pass lighting which expands the data stored in the G-buffer to allow for transparency etc. Besides this, it is performed exactly like pre-pass lighting.

n00body:
I have poked at a few effects using post-processed light buffers, but I haven't come up with anything I'm ready to really write about yet. One of the main issues with post-processing the light buffer alone is that you have no albedo, and this can drastically change the values for which one would use the light buffer. So it really depends on what you are doing, because you may simply not have the data you need.

Sorry that doesn't really answer the question.
@Pat:
Nah, it was useful. Ultimately, the most surefire way I can think of is to read the lighting into an unwrapped mesh and do the light-blurring trick. Though I am hoping that a method exists that doesn't require two passes per-mesh like that.

Anyone else want to chime in?


Yeah my skin shader computes the lighting in texture-space (i.e. onto an unwrapped mesh) and does the blurring there.

Doing the blurring in the LBuffer does sound like an interesting idea, but isn't as flexible. For example, I use NVidia's skin trick of multiplying with sqrt(albedo) before *and* after blurring, to simulate the albedo having an effect as the light both enters and leaves the skin. As patw says, you wouldn't have albedo data available to do this.

Actually... when you do the blurring on the LBuffer though, you just want to blur the regions where the skin geo is -- you can do this by rendering the skin geo only (when doing the blur passes), which would also allow you to access the skin's albedo texture! So maybe this *would* work, and would remove the need for doing texture-space lighting for skin...
// LBuffer generation
Render all geo to GBuffer          - compute depth/normal
Render all lights to LBuffer       - samples GBuffer, writes to LBuffer
// Skin scattered lighting
Render skin geo to LBuffer         - write LBuffer * sqrt(albedo)
Render skin geo to LBuffer         - write blurHorizontal(LBuffer)
Render skin geo to LBuffer         - write blurVertical(LBuffer)
// Final material pass
Render non-skin geo to FinalBuffer - write LBuffer * albedo
Render skin geo to FinalBuffer     - write LBuffer * sqrt(albedo)
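One of those skin blur passes might look roughly like the sketch below: re-render only the skin geometry and blur the light buffer horizontally under it. The sampler names, the texel-size constant, and the 5-tap kernel weights are all illustrative assumptions:

```hlsl
// Hypothetical horizontal blur pass over the LBuffer, drawn with the
// skin geometry only so nothing outside the skin regions is touched.
sampler2D LightBufferSampler;
float2    TexelSize;  // 1 / light-buffer resolution

float4 PS_SkinBlurH(float4 screenPos : TEXCOORD0) : COLOR0
{
    float2 uv = screenPos.xy / screenPos.w * 0.5 + 0.5;

    // Small Gaussian-ish kernel; weights sum to 1
    const float weights[5] = { 0.06, 0.24, 0.40, 0.24, 0.06 };

    float3 sum = 0;
    [unroll]
    for (int i = -2; i <= 2; ++i)
        sum += weights[i + 2] *
               tex2D(LightBufferSampler, uv + float2(i * TexelSize.x, 0)).rgb;
    return float4(sum, 1);
}
```

A second pass with the offset on the y axis would complete the separable blur, matching the blurHorizontal/blurVertical steps in the pass list.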
Hmm... gonna have to think about different uses of the LBuffer some more. Thanks for the idea n00body =D
Different materials are possible with light prepass. First, you can encode a material index into your buffer and apply material-specific calculations using branching in the shader. Or you can do it as Wolf showed in his presentation:
http://www.bungie.net/images/Inside/publications/siggraph/Engel/LightPrePass.ppt
check slide #17

It is possible to blur a portion of the light buffer; you just need the already-mentioned material index to restrict the blur to that portion of the buffer.
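The material-index branching could be sketched like this, assuming the index is packed into a G-buffer channel; the packing, sampler names, and the choice of a blurred-light lookup for skin are all illustrative:

```hlsl
// Hypothetical sketch: branch on a per-pixel material ID stored in the
// G-buffer to pick material-specific handling of the sampled lighting.
sampler2D LightBufferSampler;
sampler2D BlurredLightSampler;  // pre-blurred copy of the light buffer
sampler2D GBufferSampler;

float4 PS_Combine(float2 uv : TEXCOORD0) : COLOR0
{
    float4 light = tex2D(LightBufferSampler, uv);
    float  matId = tex2D(GBufferSampler, uv).a;  // illustrative packing

    float3 result;
    if (matId < 0.5)   // default material: unmodified lighting
        result = light.rgb;
    else               // e.g. skin: substitute blurred lighting
        result = tex2D(BlurredLightSampler, uv).rgb;
    return float4(result, 1);
}
```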
@Hodgman:
Thanks for all the ideas and suggestions. Truth be told, I need to study more of Nvidia's skin tricks, since they have funneled so much work into them. Though for the sake of my project, I am trying to keep it simple.

The idea just came to me when I heard about how Uncharted 2 & Crysis 2 are both using Deferred Lighting, and both have SSS shaders. Though they are probably using the unwrap trick, part of me wonders if they were applying post-processing to the diffuse light-buffer.

My original concept was that I could use something like a bilateral filter (à la SSAO) on the light-buffer contents to ensure that blurring doesn't go over edges. As you said, it would be using the mesh geometry to keep from going over the boundaries of the mesh. It would also allow the use of textures mapped to the mesh, such as one that controls the amount of blurring over each region of the mesh surface.
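A depth-aware weighting along those lines might look like the sketch below. The samplers, texel size, and falloff constant are illustrative assumptions; a real bilateral filter might also compare normals:

```hlsl
// Hypothetical depth-aware (bilateral) horizontal blur of the light
// buffer, so scattered lighting does not bleed across depth edges.
sampler2D LightBufferSampler;
sampler2D DepthSampler;
float2    TexelSize;     // 1 / light-buffer resolution
float     DepthFalloff;  // larger = stricter edge rejection

float4 PS_BilateralBlurH(float2 uv : TEXCOORD0) : COLOR0
{
    float  centerDepth = tex2D(DepthSampler, uv).r;
    float3 sum   = tex2D(LightBufferSampler, uv).rgb;
    float  total = 1.0;

    [unroll]
    for (int i = 1; i <= 3; ++i)
    {
        [unroll]
        for (int s = 0; s < 2; ++s)
        {
            float2 tap = uv + float2((s == 0 ? i : -i) * TexelSize.x, 0);
            float  d   = tex2D(DepthSampler, tap).r;
            // Taps across a depth discontinuity get near-zero weight
            float  w   = exp(-abs(d - centerDepth) * DepthFalloff);
            sum   += w * tex2D(LightBufferSampler, tap).rgb;
            total += w;
        }
    }
    return float4(sum / total, 1);
}
```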

Does that sound feasible?


