

Filmic Tone Mapping Questions



#1 CDProp   Members   -  Reputation: 1033


Posted 06 January 2014 - 07:42 PM

Greetings, all. 

 

I am currently trying to implement Hable's filmic tone mapping as described in this blog post. There are a lot of parameters to tweak, though, and I'm having trouble getting it right. But first, would anyone mind taking a look at my algorithm and seeing if I'm basically doing things right? I'm using OpenSceneGraph for this project, but I'll describe everything in terms of the underlying OpenGL.

 

Right now, my scene is just a skybox using SilverLining. According to the SilverLining docs, everything is rendered in units of kilocandelas per square meter. Here is what it looks like with SilverLining's built-in tone mapping:

 

[Screenshot: the scene rendered with SilverLining's built-in tone mapping]

 

Which, in my opinion, looks very nice. Sundog's tone mapping operator looks very complicated, though, and difficult to reason about. It looks very different than the tone mapping operators that I have seen used here and elsewhere on game development blogs. But...luminance is luminance, right? I feel like I should be able to get good results with, say, Hable's filmic tone mapping operator.

 

So here is the algorithm I'm using.

 

1. Render everything to a floating point texture.

 

I've created an FBO to render everything to. It has a 16-bit floating point color attachment (GL_RGBA16F) and a depth-stencil attachment (GL_DEPTH24_STENCIL8). I am rendering the scene into this FBO. The dimensions of each are 1920x1080.

 

2. Calculate the average luminance of the scene.

 

For this, I'm binding the aforementioned floating point texture, and rendering it into another floating point texture. This one is 512x512. However, I am not storing the color values in this texture, but rather the log of the luminance. Here is my fragment shader code:

uniform sampler2D baseTexture;

varying vec2 texcoord;

void main()
{
    vec4 color = textureLod(baseTexture, texcoord, 0.0);
    float L = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
    float r = log(L + 0.001);
    gl_FragColor = vec4(r,r,r,1);
}

I then call glGenerateMipmap to handle the averaging. Incidentally, I don't see how this algorithm matches what is called for in equation (1) of Reinhard's paper. It seems to be more similar to this (I'm including the exponentiation, which takes place in the next step):

 

[Image: equation — exponentiating the average of the log-luminance values]

 

I admittedly have not done the algebra yet, but this does not appear to be the same as what Reinhard suggested, even though it seems to be a more proper geometric mean than his formula.
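If it helps to check the math, here is a minimal Python sketch of what the log-then-mipmap reduction computes — the geometric mean of (L + δ) over all pixels, with δ = 0.001 as in the shader. (This is an illustration of the averaging step, not shader code.)

```python
import math

DELTA = 0.001  # same small offset the shader adds before taking the log

def log_average_luminance(luminances):
    """Average the logs, then exponentiate -- what the mip chain
    effectively computes down to its 1x1 level."""
    mean_log = sum(math.log(L + DELTA) for L in luminances) / len(luminances)
    return math.exp(mean_log)

# This is the geometric mean of (L + delta): for two pixels it equals
# sqrt((L0 + delta) * (L1 + delta)), not the arithmetic mean.
print(log_average_luminance([1.0, 4.0]))  # ~2.0012, vs. arithmetic mean 2.5
```

The geometric mean is preferred here because scene luminances span several orders of magnitude, and a few very bright pixels would otherwise dominate an arithmetic average.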

 

3. Exposure Control and Tone Mapping

 

I'm including these as one step because I'm doing them both in the same shader. For exposure control, I'm using equation (2) from Reinhard's paper. To get the average luminance, I sample the texture from step 2, using the second-to-last mipmap level (the 2x2 level). This gives the tone mapping some semblance of locality, although I suppose such flourishes can wait until I have the algorithm working properly.

 

To do the tone mapping step, I am using Hable's filmic curve. I pulled the meaning of the constants from his presentation. Here is the fragment shader code:

uniform sampler2D baseTexture;
uniform sampler2D avgLumTexture;

varying vec2 texcoord;

uniform float A;
uniform float B;
uniform float C;
uniform float D;
uniform float E;
uniform float F;
uniform float W;

uniform float key;

vec3 Uncharted2Tonemap(vec3 x)
{
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}

void main()
{
    vec3 texColor = texture2D(baseTexture, texcoord).rgb;
    float avgL = exp(textureLod(avgLumTexture, texcoord, 8.0).r);
    float L = 0.2126 * texColor.r + 0.7152 * texColor.g + 0.0722 * texColor.b;
    float exposure =  key * L / avgL;
    texColor = texColor*exposure;

    vec3 curr       = Uncharted2Tonemap(texColor.rgb*2.0);
    vec3 whiteScale = 1.0/Uncharted2Tonemap(vec3(W));
    vec3 color = curr*whiteScale;
    vec3 retColor = pow(color, vec3(0.45));
    gl_FragColor = vec4(retColor, 1.0);
}
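For reference, here is the same curve and white-point normalization as scalar Python — a sketch on a single channel (the shader applies it per RGB component), using the constants hard-coded in Hable's blog sample:

```python
def uncharted2_tonemap(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    """Hable's filmic curve (constants from his blog sample)."""
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def tonemap_pixel(v, exposure_bias=2.0, W=11.2):
    """Curve the biased value, normalize so W maps to 1.0, then gamma-encode."""
    curr = uncharted2_tonemap(exposure_bias * v)
    white_scale = 1.0 / uncharted2_tonemap(W)
    return (curr * white_scale) ** 0.45

# With the 2.0 bias folded in, an input of W/2 lands at ~1.0 (the white point):
print(tonemap_pixel(5.6))
```

Note that the white scale only normalizes the curve; inputs above the effective white point come out slightly above 1.0 unless clamped afterwards.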

A quick note about gamma correction: according to the SilverLining documentation, disabling their tone mapping (which is what allows me to use my own) also disables gamma correction. So, all of the values I get from them should be linear. 

 

Here are the results, using the values for A, B, C, D, E, F, W that are hard-coded in the code sample on Hable's blog (which I did not expect to work for my own scene, but I use them as a starting point):

 

[Screenshot: the scene tone-mapped with Hable's default coefficients; average-luminance debug texture in the upper right, coefficient values and curve graph in the bottom left]

 

In the upper right corner, you can see my average luminance texture. It's using the 2x2 mipmap level, as you can see. When displaying that little debug square, I am exponentiating my sample so that it's back in linear space. I am also multiplying by 0.1 because most of the luminance values in this scene are between 0 and 10 and so it gives me a good idea of what I should be seeing.

 

In the bottom left corner, you can see the values I'm using for Hable's coefficients, as well as a graph of the curve. I should probably put axes and guidelines on that graph so I get more information than just the shape, but this is what I have for now. In any case, I can edit these values on the fly and play with things until they look right. The "key" value (variable a in Reinhard's equation) can also be modified on the fly.

 

My problem is, those cloud bottoms are extremely dark (about 0.4 kilocandelas per square meter, according to gDEBugger) while the cloud tops are extremely bright (about 5.5 kilocandelas per square meter). Any attempt I make to lighten those blacks seems to wash everything out and desaturate the whole scene. If I then modify another parameter (say, the white point) to make things less washed-out, it invariably has the effect of darkening those cloud bottoms again.

 

So I guess I have a few questions:

 

a) Do I appear to be doing this all correctly? I guess there is no sense in sitting here, tweaking the numbers, if I don't have the basics right.

b) Does anyone know of any parameters I can use that should be reasonable for outdoor scenes, especially for those using SilverLining as a skybox?

c) If not (b), does anyone have a better intuition for how these parameters work so that I can find the right combination? Will I need an entirely different combination for nighttime scenes?

 




#2 nfactorial   Members   -  Reputation: 731


Posted 07 January 2014 - 01:36 AM

Hey,

I use Hable's tone-map operator in my own engine.

 

Just to begin with, I'd suggest using a constant luminance rather than a calculated luminance. You can add the calculated luminance back in once you're happy with the results of the tone-map. This will just help you concentrate on the tone-mapping without the luminance confusing things further.

 

Looking at your shader, I'm curious as to the line:

vec3 curr = Uncharted2Tonemap(texColor.rgb*2.0);

I'm wondering why you're multiplying your colour by 2? It seems odd, especially as this is after you have calculated the pixel luminance and applied the exposure, but maybe there is a reason I'm missing.

 

Also, the constants in your luminance calculation seem incorrect; in my code it looks like:

float L = max( dot( texColor, float3( 0.299f, 0.587f, 0.114f ) ), 0.0001f );

Those are the major differences I see, but they could be the result of something else your renderer is doing that I don't know about.

 

n!


Edited by nfactorial, 07 January 2014 - 01:43 AM.


#3 TiagoCosta   Crossbones+   -  Reputation: 2347


Posted 07 January 2014 - 07:05 AM

c) If not (b), does anyone have a better intuition for how these parameters work so that I can find the right combination?

 

In this presentation (slide 142) you can see the name of each parameter. Then watch this video for more info on how to set the parameters. 

 

You can also download MJP's tonemapping sample and play with the parameters in real-time.



#4 AliasBinman   Members   -  Reputation: 431


Posted 07 January 2014 - 09:52 AM

This bit looks unneeded and should be removed

 

float exposure = key * L / avgL;

 

change it to 

 

float exposure = key / avgL;

 

I don't see why you need the multiplication by the pixel luminance.
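A quick numeric sketch of why the extra L factor distorts things (the luminance values here are just illustrative): since the exposure multiplies the color, which already carries the pixel's luminance, keeping L in the exposure term effectively squares it.

```python
key, avg_L = 0.18, 2.0
pixels = [1.0, 4.0]  # hypothetical pixel luminances; scene contrast 4:1

with_L  = [L * (key * L / avg_L) for L in pixels]  # exposure = key*L/avgL
without = [L * (key / avg_L) for L in pixels]      # exposure = key/avgL

print(with_L[1] / with_L[0])    # 16.0 -- the 4:1 contrast has been squared
print(without[1] / without[0])  # 4.0  -- original contrast preserved
```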



#5 mark ds   Members   -  Reputation: 1401


Posted 07 January 2014 - 10:28 AM


Also, the constants in your luminance calculation seem incorrect; in my code it looks like:
float L = max( dot( texColor, float3( 0.299f, 0.587f, 0.114f ) ), 0.0001f );
Those are the major differences I see, but they could be the result of something else your renderer is doing that I don't know about.

 

No, he's correct. The oft quoted [0.299, 0.587, 0.114] are for the NTSC/PAL television broadcast colour space. For sRGB you need to use [0.212656, 0.715158, 0.072186] for correct luminance/greyscale conversions.
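To make the difference between the two weight sets concrete, here is a small Python check (the test colors are just illustrative):

```python
REC601 = (0.299, 0.587, 0.114)     # NTSC/PAL broadcast weights
REC709 = (0.2126, 0.7152, 0.0722)  # sRGB / Rec. 709 weights

def luminance(rgb, weights):
    """Weighted sum of linear RGB channels."""
    return sum(c * w for c, w in zip(rgb, weights))

# Both weight sets sum to 1, so neutral greys come out the same either way,
# but saturated colours diverge noticeably:
print(luminance((0.0, 1.0, 0.0), REC601))  # 0.587
print(luminance((0.0, 1.0, 0.0), REC709))  # 0.7152
```

Using the wrong set won't break anything visibly on greys, which is why the mistake is easy to miss; it shows up as a biased luminance estimate on colourful scenes.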


Edited by mark ds, 07 January 2014 - 10:31 AM.


#6 CDProp   Members   -  Reputation: 1033


Posted 07 January 2014 - 02:58 PM

It just occurred to me that SilverLining outputs everything, as I mentioned, in units of kilocandelas per square meter, which is already a unit of luminance rather than radiance, so it might already be weighted by the luminance function. I probably don't need to be doing that again.

As n! suggested, I will remove the automatic exposure control for now. The video that TiagoCosta posted suggested dialing in the exposure first, such that the mid values look right, then working on the toe and shoulder after that.

Alias, you might be right. By multiplying the pixel luminance in with my exposure, and then multiplying my final exposure value by the original color from which I first calculated the luminance, I might be erroneously compounding the luminance.

n!, the 2.0 factor that you asked about is the ExposureBias that Hable uses. In the comments, he says it's just a magic number. In the video TiagoCosta posted, he has a separate ExposureBias slider. He says he likes to have Exposure and ExposureBias as separate parameters, presumably because they affect different parts of his algorithm. However, in this particular shader, it appears to be superfluous.

So, I have a lot to try when I get home from school. Thanks, everyone.

#7 MJP   Moderators   -  Reputation: 11620


Posted 07 January 2014 - 03:48 PM

Have you validated that things look mostly okay without using automatic exposure and a tone mapping curve? If you remove these things by picking a fixed exposure value and not applying a curve, the result you get should still basically look "correct". It probably won't look awesome, since a linear tone mapping curve won't enhance contrast and will result in harsh clamping at the high end, but it shouldn't look "wrong", if that makes sense. If it does look like something is off, then you may want to double-check that you're handling gamma correction correctly throughout your pipeline.
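As a sketch, that sanity check amounts to a fixed exposure, a clamp, and a gamma-2.2 approximation of sRGB encoding (the exposure value here is arbitrary):

```python
def linear_tonemap(rgb, exposure):
    """Fixed exposure, no curve: scale, clamp to [0, 1], gamma-encode."""
    return [min(c * exposure, 1.0) ** (1.0 / 2.2) for c in rgb]

# A white patch at linear 1.0 with exposure 0.5 lands at linear 0.5,
# roughly 0.73 after gamma encoding:
print(linear_tonemap([1.0, 1.0, 1.0], 0.5))
```

If mid-tones already look plausible with this, the remaining tuning belongs to the curve; if not, the problem is upstream (units, exposure, or gamma handling).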



#8 Hodgman   Moderators   -  Reputation: 31224


Posted 07 January 2014 - 04:07 PM

As above, get it working first with just a hand-tweakable exposure to begin with and no curve (a linear tone-mapper, if you like).

After that, it should still look ok when you add Hable's curve, just with "better" contrast now. Tweaking Hable's values should only be necessary for some fine adjustments.

 

When I was playing with the values for Hable's curve, I made a second version of my tone-mapper that would draw a graph of the curve at the top of the screen.

This made it much easier to develop an intuition about what those parameters actually do as I tweaked them.

e.g.

[Images: tone-mapping curve graphs drawn at the top of the screen]

	float3 color = (float3)(input.uv.x * maxLumValue);
	color = ToneMap(color, ...);
	color = ColorCorrection(color, ...);
	float3 r = (1+ddx(color)*50)*0.02;//line width parameters
	float4 line  = float4(   smoothstep( input.uv.yyy-r, input.uv.yyy,   color ), 1 );//fade-in/fade-out vertical gradient
	       line *= float4( 1-smoothstep( input.uv.yyy,   input.uv.yyy+r, color ), 1 );
	return line;


#9 MJP   Moderators   -  Reputation: 11620


Posted 07 January 2014 - 09:34 PM

Yeah, a graph is really nice to have for this. I also recommend using a log2 scale for your graph.



#10 CDProp   Members   -  Reputation: 1033


Posted 08 January 2014 - 05:20 PM

Thanks, everyone.

 

Indeed the graph has been very helpful. I have one (see my second screenshot in the OP), but it uses a linear scale. Do you mean that I should graph it with a log2 scale on the y-axis? Won't that look weird for values between 0-1?

 

I tried this again, removing the automatic exposure adjustment and using just a hand-tweakable exposure parameter. I also followed AliasBinman's advice and removed the extra pixel luminance term from the exposure formula. This seemed to help a little. I was able to dial in an exposure that made the mid-tones look alright, but I'm afraid that at the end of the day, the cloud bottoms were just way too dark. The luminance values for the cloud bottoms are very low (about 5 kcd/m²) while those of the cloud tops are very high (about 30 kcd/m²). Yet, in SilverLining's own tone mapping (see my first screenshot), the cloud bottoms have an RGB value above middle-gray (0.6, 0.6, 0.7). So, more than half of the RGB range is being squeezed into the first 15% of luminance values, which seems to argue for a curve that is nearly all shoulder, with an immense linear mid-tone slope and a minuscule toe. No matter how I played with the numbers, it seemed I had a choice between clouds with ink-dark bottoms, or a scene that was washed out and desaturated.

 

So, I perused the SilverLining documentation, and it turns out that they do provide a way to lighten the cloud bottoms. So, I decided to give that a try, and I am much happier with my results:

 

[Screenshot: the scene after lightening the cloud bottoms via SilverLining's own setting]

 

It could use some more work, but I like it. I'm slightly uneasy with the fact that I wasn't able to come up with good parameters using the same settings as before, because those are the settings that are used by SilverLining's own tone mapping operator. However, I just could not see how the range of luminance values I was getting could possibly play nice with the tone mapping operator I'm attempting to use.

 

So, I've put the automatic exposure control back in, and it works alright, but it seems too over-active at the moment. I'll have a scene that appears to be lit well, and I'll zoom in on a dark spot (like the unlit side of an object), and the automatic exposure adjustment will light that thing up like it's high noon. I don't know that I want the exposure to change that much just because a dark (or light) object moved into view. Is there a standard way of limiting the effect of the exposure control, or should I just get creative and fiddle with the formula a bit?

 

This also comes into play with time-of-day changes (my app has dynamic time of day changes, on a full 24 hour cycle). I already expected that I might have to change the key value at night, but it looks like I might have to change the white point as well, and maybe do it on a sliding scale. Maybe I should look into Valve's histogram method? 


Edited by CDProp, 08 January 2014 - 05:21 PM.


#11 Hodgman   Moderators   -  Reputation: 31224


Posted 08 January 2014 - 06:35 PM

Indeed the graph has been very helpful. I have one (see my second screenshot in the OP), but it uses a linear scale. Do you mean that I should graph it with a log2 scale on the y-axis? Won't that look weird for values between 0-1?

The y-axis is from 0-1 and the x-axis is from 0-maxLuminance. I think MJP is suggesting to change the x-axis to be from 0-log2(maxLuminance).

e.g. here's my tone-mapper with luminance on x and final pixel value on y:

(left is 0-1, middle is 0-10, right is 0-1024 input range)

[Images: three linear-scale graphs of the tone-mapper at the three input ranges]

Note how I need to zoom into the graph in order to see the detail at the bottom-end, and zoom out to see the detail at the top-end.

 

And here's the same function graphed with the x-axis changed to be logarithmic (still 0-1024 input range)

[Image: the same curve graphed with a logarithmic x-axis]

As you can see, the small bottom-end toe of the graph is now easily seen, along with the linear section and the top-end shoulder. It gives a better view on how the tone-mapper treats all inputs.

 

I don't know that I want the exposure to change that much just because a dark (or light) object moved into view. Is there a standard way of limiting the effect of the exposure control, or should I just get creative and fiddle with the formula a bit?

I'm not sure what the standard solution is to prevent your camera from having infinite sensitivity (a real camera has limits on exposure settings, or at least side-effects -- e.g. long-exposure photos introduce motion blur, or high-ISO settings increase image noise, or wide-apertures reduce the amount of depth that's in focus, etc). My solution was to give artists control over a minimum and maximum average luminosity value, which were used to clamp the actual average lum.

e.g. if the artists set the minimum-avg-lum to 0.5, but the actual measured avg-lum is 0.001, then the tone-mapper uses 0.5 as the avg-lum value.
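As a sketch of that clamping scheme (the min/max values here are hypothetical, artist-tuned numbers):

```python
def clamped_exposure(measured_avg_lum, key, min_avg=0.5, max_avg=60.0):
    """Clamp the measured average luminance to an artist-set range
    before deriving exposure, limiting how far auto-exposure can swing."""
    avg = min(max(measured_avg_lum, min_avg), max_avg)
    return key / avg

# A tiny measured average (e.g. a dark corner fills the view) no longer
# blows the exposure out:
print(clamped_exposure(0.001, key=0.18))  # 0.18 / 0.5 = 0.36, not 180.0
```

The same clamp bounds the bright end, so panning across the sun doesn't crush the rest of the scene to black.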





