Tone Mapping: Clamping Exposure

Started by Quat. 8 comments, last by InvalidPointer 11 years, 6 months ago
I am close to getting tone mapping working in my engine. There is one remaining issue which isn't a showstopper, but I would like to see if I can do better. I think the problem stems from using a global tone mapping operator as opposed to a local one.

The problem is that in my outdoor scene, in areas with the same "key" value, the average luminance can still vary quite a bit. From some viewpoints the "exposure" decreases and things shift darker, but from other viewpoints the "exposure" increases and things shift brighter.

In one particular set of viewpoints the "exposure" darkens things a bit, and the artist is complaining because he does not like the darkening shift from this viewpoint and wants more vibrant color, since it is sunny outdoors. So the key value is increased to compensate, but now, from other viewpoints where exposure > 1, the scene gets overbrightened. We are using exposure clamps to minimize the effect, but I am wondering if there is more I can do?
-----Quat
You could try using a histogram for better control over the tone mapping. There is a nice presentation from Valve about their histogram-based approach: http://www.valvesoft...heOrangeBox.pdf
I clamp the exposure too, but in the other direction, to avoid brightening dark scenes too much.
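For what it's worth, both clamps can live in one helper. A minimal sketch, assuming exposure is computed as keyValue / avgLuminance (all names here are hypothetical):


// Hypothetical auto-exposure helper with a two-sided clamp.
// avgLuminance would come from your luminance reduction pass.
float ComputeExposure(float keyValue, float avgLuminance,
                      float minExposure, float maxExposure)
{
    // Reinhard-style auto exposure: scale so the average
    // luminance maps to the chosen key value.
    float exposure = keyValue / max(avgLuminance, 0.001f);

    // Clamp from above so bright viewpoints don't overbrighten,
    // and from below so dark scenes aren't lifted too much.
    return clamp(exposure, minExposure, maxExposure);
}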


[quote]wants a more vibrant color because it is sunny outdoors.[/quote]

What tone mapper are you using (Reinhard?)? Try a filmic tone mapper for artist-friendly, vibrant colors.
One thing I don't understand about the filmic tonemapper:


static const float A = 0.15; // Shoulder strength
static const float B = 0.50; // Linear strength
static const float C = 0.10; // Linear angle
static const float D = 0.20; // Toe strength
static const float E = 0.02; // Toe numerator
static const float F = 0.30; // Toe denominator
static const float W = 11.2; // Linear white point value

float3 Uncharted2Tonemap(float3 x)
{
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}

float4 ps_main( float2 texCoord : TEXCOORD0 ) : COLOR
{
    float3 texColor = tex2D(Texture0, texCoord).rgb;
    texColor *= 16; // Hardcoded exposure adjustment

    float ExposureBias = 2.0f;
    float3 curr = Uncharted2Tonemap(ExposureBias*texColor);

    // Normalize so that the white point maps to 1.0
    float3 whiteScale = 1.0f/Uncharted2Tonemap(W);
    float3 color = curr*whiteScale;

    // Gamma correction
    float3 retColor = pow(color,1/2.2);
    return float4(retColor,1);
}

Where does he use the average luminance?

Even in his Reinhard implementation, he doesn't use the average luminance...


float4 ps_main( float2 texCoord : TEXCOORD0 ) : COLOR
{
    float3 texColor = tex2D(Texture0, texCoord).rgb;
    texColor *= 16; // Hardcoded exposure adjustment
    texColor = texColor/(1+texColor); // Reinhard curve
    float3 retColor = pow(texColor,1/2.2); // Gamma correction
    return float4(retColor,1);
}
-----Quat
You can apply the tone mapping operator to the individual color channels, instead of to an achromatic version with just the luminance. Some color shift might occur this way, but that color shift is actually more realistic than an achromatic implementation, which keeps your colors unchanged.
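To illustrate the difference, here is a minimal sketch of both variants using a simple Reinhard curve (the function names are hypothetical):


// Luminance-based: tone map the luminance only, then rescale the
// color by the luminance ratio. Hue and saturation are preserved.
float3 TonemapLuminance(float3 color)
{
    float lum = dot(color, float3(0.2126f, 0.7152f, 0.0722f));
    float mapped = lum / (1.0f + lum);
    return color * (mapped / max(lum, 0.0001f));
}

// Per-channel: each channel is compressed independently, so bright
// saturated colors desaturate toward white, much like film does.
float3 TonemapPerChannel(float3 color)
{
    return color / (1.0f + color);
}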

[quote]You can apply the tone mapping operator to the individual color channels, instead of to an achromatic version with just the luminance. Some color shift might occur this way, but that color shift is actually more realistic than an achromatic implementation, which keeps your colors unchanged.[/quote]

Thanks for your reply. I understand you can work with the color channels directly. But doesn't the tone mapping still need to find an average then? Instead of dividing by the average luminance as in the Reinhard paper, do you divide by an average color or something?
-----Quat
The average luminance is used before applying the tone mapping curve. Some people refer to this step as "calibration"; I refer to it as "applying exposure". The idea is that you have HDR values of widely varying intensity, and you want to pick some single scalar value that will scale your HDR values in such a way that the "important" part of your visible scene ends up being in the range that's suitable for your tone mapping curve to produce acceptable results with visible contrast. So if you have a really bright scene, you would use some value < 1.0 (negative exposure) to reduce the intensity of all of your pixels so that the bright pixels end up being in the visible range. Or if you have a dark scene, you do the opposite and apply a value > 1.0 (positive exposure).

The "average luminance" bit from Reinhard's paper is part of an algorithm for automatically determining a good exposure value, sort of like the auto-exposure feature on a camera. Basically you just divide some "key value" representing the "key" or "mood" of the scene by the average luminance, and you get an exposure value that you can multiply with your HDR color.

After this you apply the tone mapping curve itself, which may work in terms of luminance (Reinhard) or in terms of RGB values (filmic). If you want to use an auto-exposure setup with a filmic curve, you wouldn't compute average RGB values; you would compute the average luminance the same way you would with Reinhard, and then apply the resulting exposure value to your HDR color before applying the filmic tone mapping curve. In the demo code John Hable provided he wasn't using auto-exposure; he just had a manual exposure value picked by hand.
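Put together, that pipeline might look something like the sketch below, assuming avgLuminance comes from your luminance measurement pass and reusing the Uncharted2Tonemap function and W constant posted earlier in the thread:


// Sketch of auto-exposure feeding a filmic curve.
// avgLuminance and keyValue are assumed inputs.
float3 ToneMapFilmicAuto(float3 hdrColor, float avgLuminance, float keyValue)
{
    // Reinhard-style auto exposure: key value / average luminance.
    float exposure = keyValue / max(avgLuminance, 0.001f);
    float3 exposedColor = hdrColor * exposure;

    // The filmic curve operates on the exposed RGB values,
    // normalized so that the linear white point maps to 1.0.
    float3 whiteScale = 1.0f / Uncharted2Tonemap(W);
    return Uncharted2Tonemap(exposedColor) * whiteScale;
}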

Anyway, filmic tone mapping can definitely produce better results, especially if you're going for a "vibrant" look with more saturated, contrasty colors. But it may not solve your problem with picking a key value, since the key still needs to be chosen depending on whether you have a low-key or high-key scene. I've seen some algorithms for trying to estimate a good key value, but haven't had much success myself going down that route. If you're interested, this paper suggests a simple estimator based on the average luminance, although in my experience it doesn't produce great results. There's a more complex estimator that I saw somewhere (I can't remember where), but it required a more detailed analysis of the distribution of luminance in your scene. At the time I couldn't make it work well without a full histogram, and generating a histogram with enough buckets for a scene with a large dynamic range is neither easy nor cheap.
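For reference, the simple average-luminance-based estimator (I believe it comes from Krawczyk et al.) looks roughly like this; treat the constants as tunable rather than definitive:


// Key value estimated from the average luminance: higher average
// luminance yields a lower key. Based, I believe, on Krawczyk et al.
float EstimateKey(float avgLuminance)
{
    return 1.03f - 2.0f / (2.0f + log10(avgLuminance + 1.0f));
}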

[quote]After this you apply the tone mapping curve itself, which may work in terms of luminance (Reinhard) or in terms of RGB values (filmic). If you want to use an auto-exposure setup with a filmic curve, you wouldn't compute average RGB values; you would compute the average luminance the same way you would with Reinhard, and then apply the resulting exposure value to your HDR color before applying the filmic tone mapping curve. In the demo code John Hable provided he wasn't using auto-exposure; he just had a manual exposure value picked by hand.[/quote]

OK. I thought the exposure adjustment, which maps HDR --> LDR, was called "tone mapping". I basically implement Equation (4) in:

http://www.cis.rit.edu/jaf/publications/sig02_paper.pdf

in the way the DX11 SDK sample does:


float4 PSFinalPass( QuadVS_Output Input ) : SV_TARGET
{
    float4 vColor = tex.Sample( PointSampler, Input.Tex );
    float fLum = lum[0]*g_param.x;
    float3 vBloom = bloom.Sample( LinearSampler, Input.Tex ).rgb;

    // Tone mapping
    vColor.rgb *= MIDDLE_GRAY / (fLum + 0.001f);
    vColor.rgb *= (1.0f + vColor.rgb/LUM_WHITE);
    vColor.rgb /= (1.0f + vColor.rgb);

    vColor.rgb += 0.6f * vBloom;
    vColor.a = 1.0f;
    return vColor;
}


So is the SDK sample doing "exposure" only, with no tone mapping curve?

If I want to apply the filmic tone mapping functions, do I apply them after I compute


// Tone mapping
vColor.rgb *= MIDDLE_GRAY / (fLum + 0.001f);
vColor.rgb *= (1.0f + vColor.rgb/LUM_WHITE);
vColor.rgb /= (1.0f + vColor.rgb);


but before bloom?
-----Quat
This part:


vColor.rgb *= (1.0f + vColor.rgb/LUM_WHITE);
vColor.rgb /= (1.0f + vColor.rgb);


is equation (4) from Reinhard's paper, L_d = L * (1 + L/Lwhite^2) / (1 + L). It is the modified form of Reinhard's tone mapping curve, except applied to RGB values instead of luminance. If you were to use a filmic curve, you would apply it here, after multiplying by middleGrey / avgLuminance.
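As a sketch of that substitution, reusing Uncharted2Tonemap and W from the code posted earlier (only the curve lines change; the exposure line stays):


// Exposure, unchanged from the SDK sample
vColor.rgb *= MIDDLE_GRAY / (fLum + 0.001f);

// Filmic curve in place of the modified Reinhard lines
float3 whiteScale = 1.0f / Uncharted2Tonemap(W);
vColor.rgb = Uncharted2Tonemap(vColor.rgb) * whiteScale;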
It's helpful to think about what a camera does re: exposure-- you're controlling how much light hits the film by changing how long the shutter stays open. From a ten-thousand-foot view, you're really just scaling the color arbitrarily, with some additional concerns in the real world due to how photosensitive paper interacts with light, objects moving around in the scene causing more pronounced motion blur at long exposure times, etc. You don't need to consider this for most renderers, but there are solutions capable of modeling it.

On another brief tangent, you can think of the back buffer as a description of how much light will come in and hit the film along a given direction per unit of time, a subtle but useful distinction. Tying this in with exposure/artistic effect, we're trying to come up with a shutter time/color scale so that the result looks subjectively 'good,' here defined as 'colors other than all black or all white.' In real-world photography, scenes are darkened by reducing exposure and brightened by increasing exposure-- this should make intuitive sense: if you know how much light arrives in one unit of time, leaving the shutter open for half that time will chop the amount of light in half, and vice versa. Photographers will usually do this manually, adjusting the shutter time until they get something that looks nice/the way they want it.

So anyway, the tl;dr edition: we want to come up with some arbitrary scale value. A good place to start is to darken the scene if most of it is very bright, and brighten it if most of it is very dark. Reinhard et al.'s exposure function above behaves exactly that way-- plotting it out, as the average luminance goes up the final exposure goes down (division by a larger number leaves you with a smaller one, etc.), and vice versa. You can do this however you want, though.

Re: bloom. You probably want to add it into the scene pre-exposure and pre-tone-mapping unless you're going for a specific effect. Physically speaking, bloom comes about due to sensor overload and lens effects, and if you think of tone mapping as modeling the light response of the film in question, the bloom light has already entered the picture mathematically by that point.
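A sketch of that ordering, with hypothetical names (bloomColor would be your blurred bright-pass result) and reusing Uncharted2Tonemap and W from earlier in the thread:


// Bloom composited in HDR, before exposure and tone mapping.
float3 CombineAndToneMap(float3 sceneColor, float3 bloomColor, float exposure)
{
    float3 hdr = (sceneColor + bloomColor) * exposure; // bloom added in linear HDR
    float3 whiteScale = 1.0f / Uncharted2Tonemap(W);
    return Uncharted2Tonemap(hdr) * whiteScale;        // curve applied last
}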

EDIT: This came across a little more wall-of-text-y than I intended when I started typing it out. Standard offer of explanation/alternate breakdown applies if this is a little too opaque.
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

