jstroh

Deferred Rendering and HDR


So I'm thinking the best way to do the light accumulation is in an FP16 render target, and then do all the tone mapping on this buffer before it's combined with everything else. Yes?

Sure, that's the most straightforward way of doing it. You pretty much have all the same options available to you that you'd have with forward-rendering (fp16, alternate color-spaces, tone-mapping directly in the shader), so you can really do whatever you want. In my old deferred renderer I just output to an fp16 target and then tone-mapped. It was convenient, but required a GPU with fp16 blending support (you can get around that limitation by manually implementing alpha blending in the pixel shader, which is what you'd have to do for alternate color-space representations anyway).
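A minimal sketch of that accumulate-then-tone-map pipeline, with Python standing in for the shader math (the function names and the Reinhard operator here are my own illustration, not anyone's actual renderer code):

```python
def accumulate_lights(light_contributions):
    """Additively blend per-light contributions, as fp16 hardware blending
    (or a manual ping-pong in the pixel shader) would do per pixel during
    the light accumulation pass."""
    r = g = b = 0.0
    for lr, lg, lb in light_contributions:
        r += lr
        g += lg
        b += lb
    return (r, g, b)

def reinhard_tonemap(rgb):
    """Simple Reinhard operator x / (1 + x), applied per channel: maps the
    accumulated HDR range [0, inf) into [0, 1) for a non-float target."""
    return tuple(c / (1.0 + c) for c in rgb)

# Two lights hitting the same pixel; the sum exceeds 1.0 (HDR)...
hdr = accumulate_lights([(2.0, 0.5, 0.125), (1.0, 0.25, 0.125)])
# ...but after tone mapping every channel is back in displayable range.
ldr = reinhard_tonemap(hdr)
```

The same curve would run in a full-screen pass over the fp16 buffer before compositing.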

Cool. Yeah I've been reading about people saying "oh you can only do it if the device supports fp16 texture blending" but it's pretty simple to just add to a texture in a pixel shader lol. Ok cool thanks, much appreciated as always.

Quote:
Original post by jstroh
Yeah I've been reading about people saying "oh you can only do it if the device supports fp16 texture blending" but it's pretty simple to just add to a texture in a pixel shader lol. Ok cool thanks, much appreciated as always.

Except that it means that you need to ping-pong your texture for every light that you render... and you can't batch up lights into one DP... and you need to worry about copying at least the affected part of the texture into the second ping-ponged texture every time you render a light...

In short it's rather impractical to do it manually for a scene with complex lighting. If you mean to target GPUs without fp16 blending, you're going to want to look at an alternate HDR representation, such as Valve's post-tone-mapped RT stuff (sacrifices a lot of flexibility in your tone mapping unfortunately), or potentially some of the alternate colour space stuff like LogLUV or similar.
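For reference, here's a sketch of one such alternate representation: a shared-exponent RGBE encoding, Radiance-style, which is in the same family of ideas as LogLUV. This is my own simplified Python illustration (a real implementation would bias the stored exponent and round rather than truncate):

```python
import math

def rgbe_encode(r, g, b):
    """Pack a non-negative HDR color into three 8-bit mantissas plus one
    shared exponent, taken from the largest channel."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)            # v = m * 2**e, with m in [0.5, 1)
    scale = m * 256.0 / v           # maps the max channel to just under 256
    return (int(r * scale), int(g * scale), int(b * scale), e)

def rgbe_decode(ri, gi, bi, e):
    """Recover an approximation of the original HDR color."""
    if e == 0 and ri == gi == bi == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e) / 256.0  # 2**e / 256
    return (ri * f, gi * f, bi * f)
```

Note that hardware blending is meaningless on values encoded like this, which is exactly why these representations go hand in hand with doing the blending manually in the shader.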

Quote:
Original post by AndyTX
Quote:
Original post by jstroh
Yeah I've been reading about people saying "oh you can only do it if the device supports fp16 texture blending" but it's pretty simple to just add to a texture in a pixel shader lol. Ok cool thanks, much appreciated as always.

Except that it means that you need to ping-pong your texture for every light that you render... and you can't batch up lights into one DP... and you need to worry about copying at least the affected part of the texture into the second ping-ponged texture every time you render a light...

In short it's rather impractical to do it manually for a scene with complex lighting. If you mean to target GPUs without fp16 blending, you're going to want to look at an alternate HDR representation, such as Valve's post-tone-mapped RT stuff (sacrifices a lot of flexibility in your tone mapping unfortunately), or potentially some of the alternate colour space stuff like LogLUV or similar.


ew lol didn't think it all the way through :(

Ok I'll check out Valve's tricks, thanks.

EDIT: Any links to explanations of valve's technique or alternatives would be appreciated.

Quote:
Original post by jstroh
EDIT: Any links to explanations of valve's technique or alternatives would be appreciated.


There's been some discussion of it here before and at beyond3d but I can't seem to find the threads... anyway the basic idea is that at the end of your shader, rather than spitting out the raw HDR value, you apply your tone-mapping operator right then and there and output the tone-mapped value to a regular non-fp surface. The hard part is coming up with a good average luminance value to feed into your tone-mapping operator, since with this technique you don't have a full-screen HDR buffer to sample and then downscale. I'm not really sure what method Valve uses to come up with a luminance value, but I'd imagine it's a pretty rough approximation given the sometimes sub-par results their engine produces.
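In Python pseudocode the idea looks something like this (names and the specific curve are my illustration, not Valve's actual code; the average luminance arrives as a shader constant from somewhere else):

```python
def shade_and_tonemap(hdr_rgb, avg_luminance, key=0.18):
    """End-of-shader tone mapping: derive an exposure from an externally
    estimated average luminance, apply a Reinhard-style curve, and quantize
    straight to an 8-bit LDR surface -- no HDR buffer ever exists."""
    exposure = key / max(avg_luminance, 1e-4)   # guard against divide-by-zero
    out = []
    for c in hdr_rgb:
        scaled = c * exposure
        mapped = scaled / (1.0 + scaled)        # into [0, 1)
        out.append(int(mapped * 255.0 + 0.5))   # round to 8-bit
    return tuple(out)
```

The curve never exceeds 255, so arbitrarily bright inputs still land in the LDR surface; the whole difficulty is getting a sensible `avg_luminance` without a full-screen HDR buffer to reduce.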

Quote:
Original post by MJP
I'm not really sure method Valve uses to come up with a luminance value, but I'd imagine it's a pretty rough approximation given the sometimes sub-par results their engine produces.

I don't remember precisely, but I recall something akin to doing a lazy luminance histogram update (say, update one bin per frame via an occlusion query) and smoothing the luminance towards the "ideal" average frame by frame. The advantage is that it does something vaguely reminiscent of how our eyes incrementally adjust pupil dilation, with some fairly obvious disadvantages in that you never have a fully up-to-date luminance histogram of the scene. It's also pretty impractical to do any sort of local tone mapping in such a system, but that's probably overkill for the average game for at least a few more years.
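The frame-by-frame smoothing part can be sketched like this (the exponential form and the rate constant are my illustration, not the exact Source-engine math):

```python
import math

def adapt_luminance(adapted, target, dt, rate=1.0):
    """Move the adapted average luminance toward the newly measured target,
    mimicking gradual pupil adjustment: an exponential ease toward the
    target each frame rather than an instant jump."""
    return adapted + (target - adapted) * (1.0 - math.exp(-rate * dt))

# Walking out of a dark room into daylight: the adapted value climbs
# over many frames instead of snapping to the new scene luminance.
lum = 0.05
for _ in range(60):                 # 60 frames at ~16 ms each
    lum = adapt_luminance(lum, 1.0, dt=0.016)
```

Each frame the value closes a fixed fraction of the remaining gap, so the response is fast at first and tails off as it converges.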

On the bright side though (no pun intended ;)), doing tone mapping at the end of the shader works on pretty much all hardware and works properly with MSAA, etc. Blending is maybe questionable depending on what you're doing, but it's probably sufficient for stuff like foliage. However, you're going to have to accept that any post-process stuff that really should be done pre-tone-mapping (read: bloom!) is gonna have to be hacked...

lol that's some hack tastic HDR there valve :P

Damn XNA+XBOX 360 not having FP alpha!!!

EDIT: It does support 10.10.10.2 with alpha though. I was looking through HDR_The_Bungie_Way.ppt and they found that the 360's 10.10.10.2, which is 7e3, doesn't give enough exposure:

Quote:

10-bit floating point formats.

The XBOX 360 has full support for the 7e3 format (7 bits mantissa, 3 bits exponent), which is the dark purple curve.
7e3 has more than enough precision, but only gives you about 3 stops of exposure range – it’s well into the banding region with the 5 stop headroom here.

Another alternative, not supported by XBOX 360 but possibly showing up on PC hardware soon, is 6e4.
This format has just enough precision, as it is barely poking above the visual curve. But it gives you a full 10 stops of exposure range.

DX10 will support 6e5 I believe, with 5e5 for the blue channel. It is difficult to say if 5 bits is enough for the blue channel – this visual curve I measured was monochrome values, for which 5 bits will exhibit some banding. The blue channel, however, might require less precision.


I might just try it for the hell of it to see if my scene has banding.
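To see where that banding would come from, here's a decode sketch of the 7e3 layout (assuming the usual bias-3 encoding with denormals at exponent 0, which I'm inferring from the commonly cited [0, 31.875] range; this is my reading of the slides, not code from them):

```python
def decode_7e3(e, m):
    """Decode a 3-bit exponent / 7-bit mantissa unsigned float, bias 3.
    e == 0 is the denormal range; otherwise there's an implicit leading 1."""
    assert 0 <= e <= 7 and 0 <= m <= 127
    if e == 0:
        return (m / 128.0) * 2.0 ** (1 - 3)
    return (1.0 + m / 128.0) * 2.0 ** (e - 3)

max_val = decode_7e3(7, 127)                           # 31.875
one = decode_7e3(3, 0)                                 # 1.0 at e == bias
step_at_top = decode_7e3(7, 1) - decode_7e3(7, 0)      # 0.125 per code
```

Near the top of the range each mantissa step is 0.125 in linear light, so once you push the exposure up the coarse quantization shows as banding, which fits the slide's point that the format runs out of usable stops well before its numeric maximum.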

EDIT AGAIN:

Oh and I was looking into the other techniques where you store a brightness in the alpha channel: http://www.daionet.gr.jp/~masa/archives/GDC2003_DSTEAL.ppt

but of course you can't alpha blend that :P

So it seems like 10.10.10.2 is the only option I can find for deferred with tons of lights on the 360.

[Edited by - jstroh on June 6, 2008 2:54:15 PM]

Quote:
Original post by jstroh
Damn XNA+XBOX 360 not having FP alpha!!!

EDIT: It does support 10.10.10.2 with alpha though. I was looking through HDR_The_Bungie_Way.ppt and they found that the 360's 10.10.10.2, which is 7e3, doesn't give enough exposure:


Be careful: you're talking about 2 different surface formats here. XNA supports regular A2R10G10B10, which is a fixed-point format with some extra precision (everything will still be normalized to the range [0,1]). The format Bungie is talking about, which uses floating-point components, is not accessible through XNA. The reason is that this format is unique to the 360, and has some interesting quirks about its usage (the format only exists in the 360's eDRAM backbuffer; when it's resolved, it's copied out to main memory as full fp16).

Because of that, the HDR situation on 360 through XNA basically sucks. You can use fp16, but you get two wonderful problems along with it:

-Backbuffer size is doubled, which means you have to tile even without MSAA
-No alpha-blending

So you're stuck with those problems, doing something with alternate color spaces, or using the Valve method. Not great options, IMO. I've been wrestling with these issues myself over the past few weeks, and I think I'm just going to bite the bullet and go with fp16 on both 360 and PC. I'm using a forward renderer so at least I don't have to worry about batching lights, but any alpha-blended geometry is going to be a pain.

I actually brought up the issue over at Connect, and they basically said they're not looking into giving access to the fp10 format. I understand it creates some inconsistency they don't want across the PC and 360 platforms, but it's really frustrating not having access to one of the 360's unique advantages.

ugh that sucks to hear :(

/cry for deferred

EDIT:

Guess I'll head back to Forward land with a camera depth buffer to try out SSAO :(

EDIT AGAIN:

Do you think removing light batching would eliminate the speed benefit of deferred? hmmm
