Light pre-pass HDR

Started by n00body
28 comments, last by eq 15 years, 4 months ago
I have a few questions on implementation details of a light pre-pass renderer. Specifically, I'm wondering how one would handle HDR with this setup? Because the alpha channel is in use for specular light properties, I can't use an RGBA8 target with an alternate color-space. I know an RGBA16F target would work, but that seems like overkill, and would reduce the number of platforms I could support. Also, it'll cost me MSAA, and might force me to do something like Quincunx AA. I suppose I could use MRTs, but that wouldn't be much better and I was hoping to avoid it if I could. Can anyone help me with this conundrum?

[Hardware:] Falcon Northwest Tiki, Windows 7, Nvidia Geforce GTX 970

[Websites:] Development Blog | LinkedIn
[Unity3D :] Alloy Physical Shader Framework

You have several options here :-)
1. The light buffer just holds a bunch of vectors with the diffuse light color. If you can live with an 8:8:8:8 render target here, I would just keep it :-)
2. You compete with Crysis and can afford a 16:16:16:16 render target.
3. Use the LUV color space:
L = N.L * attenuation (or spotlight factor)
u and v go in the next two color channels, and specular goes in alpha.

The advantage of the LUV color space is that the light colors are better preserved ... just looks better than the RGB model here. Pat Wilson wrote a ShaderX7 article about this.
I'm not especially familiar with the LUV color space. Can it handle HDR values, even if the target is RGBA8? Or are you saying I'll only get HDR if I use an RGBA16 target?

On that note, do I want to store the values in the LUV color space, or convert the final result to LUV to extract the luminance? If I'm having to store it in LUV, can I additively blend lights in this color space?

Can you point me to any resources that clearly explain the equations/implementation of the LUV conversion?


LUV seems to produce better results than RGB. The only reason I can think of for why this is the case is that luminance is more important than the RGB color values.
The reason why I do not use it is that it cannot simply be blended additively, and on one of my target platforms alpha blending is next to free.
On the other platform I will consider the LUV conversion in the near future. Although you only have three color channels for LUV, it seems like the luminance differences are better preserved.
I saw Pat Wilson's screenshots, and as the number of overlapping lights increases you can see that the colors are better preserved.
LUV<->RGB conversions are pretty fast in a pixel shader. You can look up the conversion via Google.
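For reference, here's a minimal sketch of one common RGB <-> Luv mapping, written in plain Python rather than shader code (function names are mine): linear sRGB to CIE XYZ (D65), then luminance plus the CIE 1976 u'v' chromaticity pair. The exact variant, and the scale/bias used to pack u' and v' into color channels, will depend on your implementation — Pat Wilson's ShaderX7 article covers the real details.

```python
def rgb_to_luv(r, g, b):
    # Linear sRGB -> CIE XYZ (D65 primaries)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    d = x + 15.0 * y + 3.0 * z
    if d == 0.0:
        return 0.0, 0.0, 0.0
    # Luminance plus CIE 1976 u', v' chromaticity
    return y, 4.0 * x / d, 9.0 * y / d

def luv_to_rgb(lum, u, v):
    if v == 0.0:
        return 0.0, 0.0, 0.0
    # Recover XYZ from luminance and chromaticity
    x = lum * 9.0 * u / (4.0 * v)
    y = lum
    z = lum * (12.0 - 3.0 * u - 20.0 * v) / (4.0 * v)
    # XYZ -> linear sRGB
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

# Round-trip check: encode a color and recover it
lum, u, v = rgb_to_luv(0.2, 0.5, 0.8)
r, g, b = luv_to_rgb(lum, u, v)
```

Note that only chromaticity (u', v') survives blending in a meaningful way — this is exactly why the luminance channel is the thing you'd accumulate per light.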
One suggestion that was given to me (compliments of reltham) which works great (and yes, this is within the context of light pre-pass rendering):

Instead of storing the calculated light value in the buffer, store 2^(-lightValue).

When decoding, then, you can simply use -log2(lightValue) to get the result.

For blending (and I'm doing this bit from memory, so forgive me if I'm wrong - I'll check my FX file when I'm at home and correct this if I'm incorrect):

SrcBlend = DestColor
DestBlend = Zero

(that is, it's pure multiplicative blending)

...and you'll want to clear the light buffer to WHITE instead of black, because multiplying with black is not terribly useful :)

In my case, it made a fairly big difference in quality (as the brightness of the light value was no longer bound by the diffuse color).

Here's a picture (sorry, no thumbnail). The left half uses the log-based encoding, the right half does not. Note that when the light gets bright, the right half washes out because the diffuse material color caps the brightness; on the left, the lighting can get brighter than 1.

It doesn't give you total HDR, it's more of a "medium dynamic range," but it seems to work pretty well for me.
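The identity behind the trick can be sketched in a few lines — plain Python standing in for the shader and blend unit, and the function names are mine, not from any engine:

```python
import math

def encode(light_value):
    # What the light shader writes out instead of the raw value
    return 2.0 ** (-light_value)

def blend_multiplicative(dest, src):
    # What the blend unit does with SrcBlend = DestColor, DestBlend = Zero
    return dest * src

def decode(buffer_value):
    # Applied once at resolve time when sampling the light buffer
    return -math.log2(buffer_value)

lights = [0.4, 1.3, 2.5]   # per-light N.L * attenuation contributions
buf = 1.0                  # clear to WHITE, not black!
for lv in lights:
    buf = blend_multiplicative(buf, encode(lv))

total = decode(buf)        # recovers 0.4 + 1.3 + 2.5 = 4.2
```

Since 2^(-a) * 2^(-b) = 2^(-(a+b)), the multiplicative blend accumulates the sum of the light values in the exponent, and a single -log2() recovers it.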
Cool trick, thanks for the tip. ;)

Thanks to both of you for the info so far. Curious to see if anyone else chimes in. ;)

EDIT:
Okay, Drilian, I need some clarification on how to integrate that trick. So I would convert the light value to 2^(-lightValue), and multiply the result with diffuse colors. Then, when I'm reading the lights, I do the luminance extraction trick. Now, I would do -log2(lightValue) with the extracted value?

EDIT: Is this trick MSAA safe?

[Edited by - n00body on November 12, 2008 8:45:40 PM]


So, just as a basic run down of the technique:


Step 1. Render all objects' normal/depth to a buffer.

Step 2. Use that buffer to render the (pure diffuse/specular) light to a buffer (with additive blending to combine the lights).

Step 3. Render each object, using the output of step 2 (the light buffer) instead of doing the lighting calculation.

So step 1 remains unchanged. You render normal/depth to a buffer.

In step 2, you calculate whatever lightValue you would be rendering. Rather than writing it (additively) to the light buffer like you normally would, you instead write out 2^(-lightValue) multiplicatively (SrcBlend = DestColor, DestBlend = Zero).

Multiplicative makes sense when you think about it: 2^a * 2^b = 2^(a+b). Since the exponent is what you care about, multiplicative blending is just adding the exponents together!

Just remember, before you start your light pass, to clear the buffer to white instead of clearing to black like you had been (I made that mistake the first time and couldn't figure out what I'd done wrong).

Once that is complete, what you have is a buffer that contains 2^(-totalLightValue) for each pixel.

Then, for step 3, when you sample that texture (tex2D, perhaps), you use -log2(sampledValue) to get the actual light value that's stored at that pixel. Then you proceed as you would have before.

As to whether it's MSAA safe: no less so, in my opinion, than light pre-pass is in general. Light pre-pass rendering (abbreviated as LPPR from now on) suffers from the same edge artifacts as deferred shading does (because the depth value written in step 1 is an average of surrounding depths, you end up with a value that's generally neither on the object in front nor on the object behind it). I don't consider LPPR to be MSAA-safe at all, personally.
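As a rough numeric illustration of the "medium dynamic range" point (a sketch of mine, assuming straight linear quantization to 8 bits — a real render target may round differently): the smallest nonzero texel an 8-bit channel can hold is 1/255, so the largest light value the encoding can recover is about -log2(1/255), roughly 8.

```python
import math

def store_8bit(encoded):
    # Quantize [0,1] to 8 bits, as an RGBA8 light buffer would
    return round(encoded * 255.0) / 255.0

def decode(stored):
    return -math.log2(stored) if stored > 0.0 else float('inf')

# Brightest total light value representable before the texel clamps to 0
max_light = decode(1.0 / 255.0)          # ~= 7.99

# A mid-range total of 3.0 survives quantization with little error
mid = decode(store_8bit(2.0 ** (-3.0)))  # ~= 2.99
```

So you get headroom well beyond 1.0, but nothing like the range of a float target — consistent with Drilian's "medium dynamic range" description. Precision also gets coarser as the light value grows, since equal steps in the stored texel cover larger steps in the exponent.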
Thanks for going to the extra effort with the visual aids, and the breakdown of the steps.

One last clarification. When you say "lightValue", you mean (NdotL * lightColor), right?


Exactly. It's the NdotL * lightColor value that gets calculated during step 2.
Quote:Original post by wolf
I saw Pat Wilson's screenshots and when the number of overlapping lights increase you can see that the colors are better preserved.

Any chance to get/see these screenshots without buying the book? =)

--

