HDRR Luminance Chain Questions

4 comments, last by wolf 15 years, 10 months ago
I have a couple of questions about obtaining average luminance from a render target through recursive downsampling. If anyone has any ideas, I would appreciate it!

My first question: since the original render target is a rectangle, it seems it must either be clipped or stretched into a square at some point in order to downsample it to a 1x1 luminance composite. Rendering (stretching) it into a square keeps the whole scene, is easy, and sounds best, but wouldn't it throw off the final luminance? If so, is there a way to compensate? Clipping to just the center would miss much of the scene, especially at widescreen aspect ratios, and sounds harder to do.

My second question: should I reduce the original render target before downsampling into the luminance targets? This downsampling for average luminance seems to be the major workload of the whole HDRR process, with tons of texture lookups. Obviously it would be most accurate to start from the full-size render target, but is that usual?

Thanks for your time / help!
I used to always just scale things to square size, and downscale from there. Sure I'd miss a few pixels, but never really had any scenes where missing a few pixels would throw off the final luminance calculation in any significant way. I think you're just going to have to strike a balance between performance and quality that you're comfortable with.

As for downsampling the original RT, you could do that if you've already done it for a bloom pass (or are going to use it in a bloom pass).
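For reference, each step of such a chain can be a simple box-filter pixel shader. This is just a sketch with made-up names (g_SrcLum, g_SrcTexelSize), assuming each pass renders into a target one quarter the size of its source and averages a 4x4 block of source texels:

sampler2D g_SrcLum : register(s0);
float2 g_SrcTexelSize;   // 1.0 / source width, 1.0 / source height

float4 DownsampleLum4x4PS(float2 uv : TEXCOORD0) : COLOR0
{
    // uv is the centre of a destination texel, which maps to the centre of a
    // 4x4 block of source texels; average all sixteen of them.
    float sum = 0.0;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            sum += tex2D(g_SrcLum, uv + (float2(x, y) - 1.5) * g_SrcTexelSize).r;
    return float4(sum / 16.0, 0.0, 0.0, 1.0);
}

Run that repeatedly (e.g. 64x64 -> 16x16 -> 4x4 -> 1x1) and the red channel of the final 1x1 target holds the average of whatever the first pass wrote.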
Thank you MJP. I appreciate your help.

Since you can adjust the key or exposure, depending on the tone mapping algorithm you are using, perhaps it's more important to accurately detect changes in scene luminance than it is to get the "actual" luminance level?
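For example, with the usual Reinhard-style scaling (just a sketch; the constant names g_Key and g_AvgLum are mine), the measured average only ever appears divided into the key, so a consistent bias in the measurement can simply be folded into the key you choose:

float g_Key;      // the "key" / exposure control, e.g. 0.18
float g_AvgLum;   // value read from the 1x1 end of the luminance chain

float3 ToneMap(float3 colour)
{
    float lum    = dot(colour, float3(0.2126, 0.7152, 0.0722));
    float scaled = g_Key * lum / max(g_AvgLum, 0.0001);   // L' = key * Lw / Lavg
    float mapped = scaled / (1.0 + scaled);               // simple Reinhard curve
    return colour * (mapped / max(lum, 0.0001));
}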
you can also just over-sample stuff ... when you down-sample from a rectangle to a square with a big enough filter kernel, some of the rectangle's pixels simply get sampled more than once. That should not make a huge difference in the big scheme of things.
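Something along those lines, as a rough sketch (the names are made up): the first pass samples the rectangular scene target into a small square, taking a 3x3 neighbourhood per destination texel so scene pixels that fall between the samples still contribute, and stores log-luminance (a common choice) so the later averages behave like a geometric mean:

sampler2D g_Scene : register(s0);
float2 g_SceneTexelSize;   // 1.0 / scene width, 1.0 / scene height

float4 InitialLumPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Oversampling kernel: 3x3 taps around the mapped position, so some scene
    // pixels are counted twice rather than being skipped entirely.
    float3 sum = 0.0;
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            sum += tex2D(g_Scene, uv + float2(x, y) * g_SceneTexelSize).rgb;
    float3 colour = sum / 9.0;

    // Store log-luminance so the rest of the chain averages the logs.
    float lum = dot(colour, float3(0.2126, 0.7152, 0.0722));
    return float4(log(lum + 0.0001), 0.0, 0.0, 1.0);
}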
Thanks Wolf, great idea.

It led me to this idea - downsample as a rectangle until it gets very small, say 5x3, then sample that into the 1x1. Except that not all resolutions will reduce evenly by sampling with a factor of 2, 3 or 4. For instance:

720/4 = 180
180/4 = 45
45/4 = 11.25
11.25/4 = 2.8125

So I will probably just stretch the rectangle into a 512x512 quad with:
uv.x = uv.x * original.width/512
uv.y = uv.y * original.height/512

Even though it is stretched more in one axis than the other, the stretch is uniform across the whole image, so every source pixel should still get equal weight in the average. So it should still work for sampling luminance correctly... right?
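If the chain carries log-luminance the whole way down (as in the sketch earlier in the thread), the very last pass into the 1x1 target would also undo the log. Roughly, assuming the last-but-one level is 4x4 (again just a sketch with made-up names; adjust the tap count to whatever size the chain actually ends on):

sampler2D g_Lum4x4 : register(s0);   // the 4x4 log-luminance target
float2 g_Lum4x4TexelSize;            // = float2(0.25, 0.25)

float4 FinalLumPS(float2 uv : TEXCOORD0) : COLOR0
{
    // uv is unused; the whole 4x4 source is read explicitly.
    float sum = 0.0;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            sum += tex2D(g_Lum4x4, (float2(x, y) + 0.5) * g_Lum4x4TexelSize).r;
    // exp() of the averaged logs gives the geometric mean luminance of the scene.
    return float4(exp(sum / 16.0), 0.0, 0.0, 1.0);
}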

Thanks.

you might bump into some problems with this :-) ... hardware likes the 64x64 -> 16x16 -> 4x4 -> 1x1 chain. I predict that some drivers will choke on anything else :-)

