Adding lights to a scene

There are a lot of interesting related topics ongoing, but I couldn't find what I am looking for. The question is how to compute the effect of adding white light to a scene. What I have is the diffuse data and a measurement of the light intensity.

The first test was to simply multiply the diffuse value (each of the RGB channels) by the intensity. This works well until saturation, but it is not good enough, as it should still be possible to see a difference for at least a couple of light sources. I can scale down the final value by a substantial factor, but that would make the "normal" picture too dark. I could also use an adaptive transformation that depends on the actual light intensity in the picture, but that would add substantial computation time.

Using tone mapping, for example x/(1+x), solves the problem of saturation. However, if it is applied to each channel independently, there may be a color shift for some materials where one of the RGB components is different enough from the others.

My next experiment was to sum all white light contributions into one buffer and apply tone mapping to it. That way, it ranged from 0 to 1. However, multiplying an RGB value by something from 0 to 1 can only darken the scene, and it all turned out quite gloomy. I can scale by a multiplicative constant, for example 1.5, but that will again push some RGB colors outside the desired maximum of 1.

As a next step, I started to look into transforming the RGB into another representation, for example Hue-Saturation-Value. But instead of doing lots of testing, I hope to find advice here. It looks like a transformation to a format that has a luminance value (luma) would be promising. The luma would then be scaled with the light intensity, and the result transformed back to RGB again. Is this a good way to go?

Or maybe there is a simple way to do this that I overlooked?
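
To make the two alternatives above concrete, here is a minimal C++ sketch (the function names are placeholders of mine, not from any particular engine): applying x/(1+x) to each channel directly, versus tone mapping the luminance and rescaling the RGB triple.

```cpp
struct RGB { float r, g, b; };

// Applying x/(1+x) to each channel independently; this can shift the hue
// when one channel is much larger than the others.
RGB toneMapPerChannel(RGB c) {
    return { c.r / (1.0f + c.r), c.g / (1.0f + c.g), c.b / (1.0f + c.b) };
}

// Tone map the luminance only and rescale the RGB triple, which preserves hue.
RGB toneMapViaLuminance(RGB c) {
    float L  = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b; // Rec. 709 luma weights
    float Ld = L / (1.0f + L);                                 // tone-mapped luminance
    float s  = (L > 0.0f) ? Ld / L : 0.0f;
    return { c.r * s, c.g * s, c.b * s };
}
```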
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
I'm not sure I understand your question, but when considering light calculation you should remember the following (a minimal sketch of these steps follows the list):
1. Light is additive, therefore use a large enough value range (HDR) to get your light represented properly in linear color space.
2. Diffuse surfaces are like color filters, that is, they absorb light, always reducing the incoming intensity. Therefore multiplying the incoming light by the diffuse map is a good (standard) approximation.
3. Standard monitors can't display HDR images, therefore you always need to map the HDR range down to [0...1].
4. The human eye adjusts to the surrounding light level; consider this when using a tone mapper.
5. Prevent saturation of single channels to avoid a color shift (=> tone mapper).
6. Remember to do proper gamma correction.
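
For illustration, here is a bare-bones C++ sketch of those steps for a single pixel, assuming the scene has already been accumulated into a linear floating-point HDR buffer; the exposure value and the 1/2.2 gamma approximation are placeholders, not anything specific to a particular engine.

```cpp
#include <cmath>

struct RGB { float r, g, b; };

// Turn one linear HDR pixel into a displayable [0..1] value:
// exposure -> tone map -> gamma. All inputs are assumed to be in linear space.
RGB hdrToDisplay(RGB hdr, float exposure) {
    // 1. Exposure: scale the linear HDR value into the range we care about.
    RGB c = { hdr.r * exposure, hdr.g * exposure, hdr.b * exposure };

    // 2. Tone map on luminance so no single channel saturates on its own.
    float L = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    float s = (L > 0.0f) ? (L / (1.0f + L)) / L : 0.0f;
    c = { c.r * s, c.g * s, c.b * s };

    // 3. Gamma correction (simple 1/2.2 approximation of the sRGB curve).
    const float invGamma = 1.0f / 2.2f;
    return { std::pow(c.r, invGamma), std::pow(c.g, invGamma), std::pow(c.b, invGamma) };
}
```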

Here's an overview of some tone mapping operators.
A tone mapping curve like x / (1 + x) is not enough on its own to turn HDR values into something suitable for a display. You also need to calibrate the image, so that the "important" range of intensities ends up visible and not crushed or clipped. Typically this is done with some sort of simple exposure simulation, which is usually just a scale value that you multiply your pixel value by before applying tone mapping. A lower exposure value lets you see details in higher intensities, while a higher exposure value lets you see details in lower intensities. It's directly analogous to manipulating the settings on a camera that adjust exposure, like shutter speed and f-stop. It's also analogous to how your eye adapts to different lighting conditions by dilating or constricting the iris.

As you've already experienced, you can't just use one exposure value for all scenes. At the very least, exposure needs to be adjustable so that you can change it depending on how bright or dark the scene is. Even better is to have some sort of "auto exposure" routine, where you examine the current distribution of intensities that were rendered and automatically compute an exposure value. Such a routine was proposed by Reinhard in his original paper, and you'll see it in a lot of HDR tutorials and samples.
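
As a concrete illustration of that auto-exposure idea, here is a sketch following Reinhard's log-average luminance calibration; the key value of 0.18 and the function name are my own choices, not code from this thread.

```cpp
#include <cmath>
#include <vector>

struct RGB { float r, g, b; };

// Reinhard-style auto exposure: compute the log-average (geometric mean)
// luminance of the frame and scale so it maps to a chosen "key" value.
float computeAutoExposure(const std::vector<RGB>& frame, float key = 0.18f) {
    if (frame.empty()) return 1.0f;

    const float delta = 1e-4f;           // avoids log(0) on black pixels
    double sumLogL = 0.0;
    for (const RGB& p : frame) {
        float L = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
        sumLogL += std::log(delta + L);
    }
    float logAvg = static_cast<float>(std::exp(sumLogL / frame.size()));
    return key / logAvg;                 // multiply each pixel by this before tone mapping
}
```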
I got it working now, with just the effects I wanted, based on the information in this thread and some other related concurrent threads. The design I use is to compute [eqn]L = 0.2126R + 0.7152G + 0.0722B[/eqn], and then scale each color channel by [eqn]\displaystyle \frac{1 + \frac{L}{L^2_{\mathrm{white}}} }{1 + L}[/eqn]. This is done in linear color space.

I don't use an automatic algorithm for [eqn]L_{\mathrm{white}}[/eqn], but rather calibrated it manually. That way, I got a "whiteout" at a point that gives a fitting appearance.
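
In code form, the operator described above looks roughly like this (a sketch, with Lwhite passed in as the hand-calibrated constant):

```cpp
struct RGB { float r, g, b; };

// Extended Reinhard on luminance: scale = (1 + L/Lwhite^2) / (1 + L).
// A luminance of Lwhite maps to exactly 1.0, so anything at or above it
// "whites out", as described in the post.
RGB toneMapWhitePoint(RGB c, float Lwhite) {
    float L = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    float scale = (1.0f + L / (Lwhite * Lwhite)) / (1.0f + L);
    return { c.r * scale, c.g * scale, c.b * scale };
}
```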
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

[quote]5. Prevent saturation of single channels to avoid a color shift (=> tone mapper)[/quote]

That's what I thought as well. I then realized that the Uncharted 2 tone mapper operates on the individual color channels, just like my tone mapping operator. I reworked my tone mapping implementation after realizing that it might be better to tone map the luminance instead, but it turned out that the resulting image looked so much worse that I immediately switched back to the old implementation. Just look at Hable's blog entry: all of his tone mapping operator implementations operate on the individual color channels as well. And when you think about it, the cones on the human retina independently convert the light into nerve impulses. That's why I think it actually makes sense to perform tone mapping on a per-channel basis.
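
For reference, here is the Uncharted 2 curve being discussed, applied per channel, with the constants as given in Hable's blog post (transcribed into C++ here, so double-check against the original before relying on the exact numbers):

```cpp
// Hable's Uncharted 2 filmic curve, applied to each channel independently.
static float hableCurve(float x) {
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float tonemapUncharted2(float channel, float exposureBias = 2.0f) {
    const float W = 11.2f;                 // linear white point
    float mapped = hableCurve(channel * exposureBias);
    return mapped / hableCurve(W);         // normalize so W maps to 1.0
}
```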

Also, just take a look at this bright light:

[attached image: light.jpg]

The color shifts to white in the center. This has nothing to do with the white point. It's actually a color shift.
Yeah I wouldn't get too caught up on "color shift". When you tone map you're really taking data representing one thing (the physical amount of light that hits the eye/sensor/film) and transforming it to something else entirely (the color to give to the display in order to approximate the eye's actual perception of that particular spectrum of visible light). A camera will absolutely "color shift", since it will use response curves that vary along different wavelengths of visible light.
