I'm gathering all the information I can in preparation for implementing a simple spectral ray tracer. Currently I'm stuck on HDR: I understand the concept, but I have trouble grasping the theory. I'm working on Reinhard's operator at the moment.
As I understand it, HDR images cannot be displayed on LDR monitors because the luminance range in the image exceeds what the monitor can display, so the luminance has to be scaled somehow. This is done by considering the per-pixel luminance in the image and processing those values so as to compress the HDR image's luminance range in a way that's perceptually acceptable.
I have done some tests with the following algorithm, which seems to be the real deal since it produces fairly nice tone-mapped images, but some things are still unclear to me:
1. calculate the maximum luminance Lwhite in the image, using the standard CIE luminance curve
2. for each pixel, scale its luminance as: Ld = L * (1 + L / Lwhite^2) / (1 + L), where L is the pixel's original luminance
3. then scale the pixel's RGB components by multiplying them with the luminance coefficient Ld / L
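The steps above can be sketched roughly as follows (a minimal illustration, not a reference implementation; I'm assuming a floating-point RGB image in a NumPy array and Rec. 709 luminance weights):

```python
import numpy as np

def reinhard_tonemap(rgb, l_white=None):
    """Tone-map a float HxWx3 RGB image with Reinhard's extended operator."""
    # Per-pixel luminance using the standard Rec. 709 / CIE weights.
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Step 1: by default, take the maximum luminance as the white point.
    if l_white is None:
        l_white = lum.max()
    # Step 2: compress luminance; a pixel at l_white maps exactly to 1.
    lum_d = lum * (1.0 + lum / (l_white ** 2)) / (1.0 + lum)
    # Step 3: scale RGB by the ratio of new to old luminance
    # (guarding against division by zero on black pixels).
    scale = np.where(lum > 0.0, lum_d / np.maximum(lum, 1e-12), 0.0)
    return rgb * scale[..., None]
```

Note that a pixel whose luminance equals Lwhite comes out with luminance exactly 1, which matches the "mapped to an LDR luminance of 1" behaviour described below.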
First, should the maximum luminance in the image be used for Lwhite? In this reference they call it Lwhite and state it should be the luminance that will be mapped to an LDR luminance of 1 (i.e. the maximum). This suggests that it could be anything, depending on the exposure we want to obtain, so should the maximum or the average luminance be used? Or perhaps the median? The maximum seems to produce the nicest results on most of the HDR photos I've tested, but is sometimes too dark. The average is often far too bright, but is sometimes spot on. What gives?
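For what it's worth, Reinhard's paper does not use the arithmetic mean or the maximum for exposure; it uses the log-average (geometric mean) of the luminance to estimate the scene's "key" and scale luminance before the compression curve. A hedged sketch of that step (the `key` default of 0.18, middle gray, is from the paper; the function names are my own):

```python
import numpy as np

def log_average_luminance(lum, delta=1e-6):
    """Geometric mean of luminance; delta avoids log(0) on black pixels."""
    return np.exp(np.mean(np.log(delta + lum)))

def scale_to_key(lum, key=0.18):
    """Scale luminance so the log-average maps to the chosen key value."""
    return (key / log_average_luminance(lum)) * lum
```

Using the log-average rather than the mean makes the exposure estimate much less sensitive to a few extremely bright pixels, which may explain why the plain average looked far too bright on some of your test images.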