I'm gathering all the information I can get my hands on in preparation for implementing a simple spectral ray tracer. Currently I'm stuck on HDR - I understand the concept, but I'm having trouble grasping the theory. At the moment I'm working on Reinhard's operator.
As I understand it, HDR images cannot be displayed directly on LDR monitors because the luminance range in the image exceeds what the monitor can reproduce - so the luminance has to be scaled somehow. This is done by taking the per-pixel luminance values in the image and processing them to compress the HDR image's luminance range in a way that's perceptually acceptable.
I have done some tests with the algorithm below (a rough C++ sketch of my implementation follows the steps), and it seems to be the real deal since it produces fairly nice tone-mapped images, but some things are still unclear:
1. calculate the maximum luminance in the image, using the standard relative luminance (CIE Y) weights for Rec. 709 / sRGB primaries: [eqn]L = 0.2126R + 0.7152G + 0.0722B[/eqn]
2. for each pixel, scale its luminance [eqn]L[/eqn] as:
[eqn]\displaystyle L_{\mathrm{scaled}} = L \left [ \frac{1 + \frac{L}{L^2_{\mathrm{max}}} }{1 + L} \right ][/eqn]
3. then scale the pixel's RGB components by multiplying each of them by the luminance ratio: [eqn]\displaystyle k_L = \frac{1 + \frac{L}{L^2_{\mathrm{max}}} }{1 + L}[/eqn]
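For concreteness, here is a minimal sketch of those three steps in C++ (the Pixel struct and function names are just placeholders of my own, not taken from any reference):

[code]
#include <algorithm>
#include <vector>

struct Pixel { float r, g, b; };  // linear RGB, placeholder for the renderer's own type

// Step 1: relative luminance from linear RGB (Rec. 709 / sRGB weights).
float luminance(const Pixel& p)
{
    return 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
}

void tonemapReinhard(std::vector<Pixel>& image)
{
    // Find the maximum luminance in the image (my current choice for L_max).
    float Lmax = 0.0f;
    for (const Pixel& p : image)
        Lmax = std::max(Lmax, luminance(p));
    if (Lmax <= 0.0f)
        return;  // completely black image, nothing to tone-map

    for (Pixel& p : image) {
        const float L = luminance(p);
        if (L <= 0.0f) continue;  // nothing to scale for black pixels

        // Steps 2 and 3: k_L = (1 + L / Lmax^2) / (1 + L), applied to every
        // channel so hue and saturation are preserved while luminance is compressed.
        const float kL = (1.0f + L / (Lmax * Lmax)) / (1.0f + L);
        p.r *= kL;
        p.g *= kL;
        p.b *= kL;
    }
}
[/code]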
First, should the maximum luminance in the image be used for [eqn]L_\mathrm{max}[/eqn]? In this reference they call it [eqn]L_\mathrm{white}[/eqn] and state that it should be the luminance that will be mapped to an LDR luminance of 1 (i.e. the maximum displayable value). This suggests it could be anything, depending on the exposure we want to obtain - so should the maximum or the average be used? Or perhaps the median? The maximum seems to produce the nicest results on most of the HDR photos I tested, but is sometimes too dark. The average is often far too bright, but is sometimes spot on. What gives?
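For reference, these are the candidates I've been plugging in for [eqn]L_\mathrm{max}[/eqn] (again just a sketch; it assumes the per-pixel luminances have already been collected into a flat vector):

[code]
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Maximum per-pixel luminance (assumes a non-empty vector).
float maxLuminance(const std::vector<float>& lum)
{
    return *std::max_element(lum.begin(), lum.end());
}

// Arithmetic mean of the per-pixel luminances.
float meanLuminance(const std::vector<float>& lum)
{
    return std::accumulate(lum.begin(), lum.end(), 0.0f)
           / static_cast<float>(lum.size());
}

// Median (takes a copy because nth_element reorders the data;
// for an even count this returns the upper of the two middle values).
float medianLuminance(std::vector<float> lum)
{
    const std::size_t mid = lum.size() / 2;
    std::nth_element(lum.begin(), lum.begin() + mid, lum.end());
    return lum[mid];
}
[/code]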
Thanks!