
OpenGL Filmic Tone Mapping Questions


CDProp    1451

Greetings, all. 

 

I am currently trying to implement Hable's filmic tone mapping as described in this blog post. There are a lot of parameters to tweak, though, and I'm having trouble getting it right. But first, would anyone mind taking a look at my algorithm to see if I'm basically doing things right? I'm using OpenSceneGraph for this project, but I'll describe everything in terms of the underlying OpenGL.

 

Right now, my scene is just a skybox using SilverLining. According to the SilverLining docs, everything is rendered in units of kilocandelas per square meter. Here is what it looks like with SilverLining's built-in tone mapping:

 

[Screenshot: the SilverLining skybox with its built-in tone mapping]

 

Which, in my opinion, looks very nice. Sundog's tone mapping operator looks very complicated, though, and difficult to reason about. It looks very different from the tone mapping operators that I have seen used here and elsewhere on game development blogs. But... luminance is luminance, right? I feel like I should be able to get good results with, say, Hable's filmic tone mapping operator.

 

So here is the algorithm I'm using.

 

1. Render everything to a floating point texture.

 

I've created an FBO to render everything to. It has a 16-bit floating point color attachment (GL_RGBA16F) and a depth-stencil attachment (GL_DEPTH24_STENCIL8). I am rendering the scene into this FBO. The dimensions of each are 1920x1080.
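In raw GL calls, the setup is roughly the following (a minimal sketch; OpenSceneGraph normally manages this through osg::Camera, and the GLEW include/loader is my own assumption):

#include <GL/glew.h>  // assumes a loader like GLEW is initialized
#include <cassert>

GLuint fbo = 0, hdrColorTex = 0, depthStencilRbo = 0;

void CreateHdrFbo(int width, int height) // e.g. 1920, 1080
{
    // 16-bit floating point color attachment.
    glGenTextures(1, &hdrColorTex);
    glBindTexture(GL_TEXTURE_2D, hdrColorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Packed 24-bit depth / 8-bit stencil attachment.
    glGenRenderbuffers(1, &depthStencilRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, hdrColorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, depthStencilRbo);

    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}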

 

2. Calculate the average luminance of the scene.

 

For this, I'm binding the aforementioned floating point texture, and rendering it into another floating point texture. This one is 512x512. However, I am not storing the color values in this texture, but rather the log of the luminance. Here is my fragment shader code:

uniform sampler2D baseTexture;

varying vec2 texcoord;

void main()
{
    // Sample the full-resolution HDR scene texture (explicit LOD 0).
    vec4 color = textureLod(baseTexture, texcoord, 0.0);
    // Rec. 709 / sRGB luminance weights.
    float L = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
    // Store log luminance; the small delta guards against log(0) on black pixels.
    float r = log(L + 0.001);
    gl_FragColor = vec4(r, r, r, 1.0);
}

I then call glGenerateMipmap to handle the averaging. Incidentally, I don't see how this algorithm matches what is called for in equation (1) of Reinhard's paper. It seems to be more similar to this (I'm including the exponentiation, which takes place in the next step):

 

\bar{L}_w = \exp\left( \frac{1}{N} \sum_{x,y} \log\left( \delta + L_w(x, y) \right) \right)

 

Although I admittedly have not done the algebra yet, this does not seem to be the same as what Reinhard suggested, although it does seem to be a more proper geometric mean.
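For clarity, the host-side flow for this step looks roughly like this (a sketch; logLumFbo, logLumTex, logLumProgram, and DrawFullscreenQuad are placeholder names for my own objects):

// Render the full-res HDR scene into the 512x512 log-luminance target
// using the fragment shader above, then build the mip chain.
glBindFramebuffer(GL_FRAMEBUFFER, logLumFbo);
glViewport(0, 0, 512, 512);
glUseProgram(logLumProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, hdrColorTex);
DrawFullscreenQuad(); // placeholder: any screen-covering quad or triangle

// Each mip level averages the level below it, so the coarse levels hold
// the mean of log(L + 0.001); exponentiating that mean later yields the
// geometric mean of the luminances.
glBindTexture(GL_TEXTURE_2D, logLumTex);
glGenerateMipmap(GL_TEXTURE_2D);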

 

3. Exposure Control and Tone Mapping

 

I'm including these as one step because I'm doing them both in the same shader. For exposure control, I'm using equation (2) from Reinhard's paper. To get the average luminance, I sample the texture from step 2 at its second-to-last mipmap level (the 2x2 level). This gives the tone mapping some semblance of locality, although such flourishes should probably wait until I have the algorithm working properly.
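Concretely, Reinhard's equation (2) is just a linear scaling of each pixel's world luminance by the key over the average:

L(x, y) = \frac{a}{\bar{L}_w} \, L_w(x, y)

where a is the key and \bar{L}_w is the average luminance sampled from the step 2 texture.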

 

To do the tone mapping step, I am using Hable's filmic curve. I pulled the meaning of the constants from his presentation. Here is the fragment shader code:

uniform sampler2D baseTexture;
uniform sampler2D avgLumTexture;

varying vec2 texcoord;

// Hable's curve constants (per his presentation): A = shoulder strength,
// B = linear strength, C = linear angle, D = toe strength, E = toe numerator,
// F = toe denominator, W = linear white point.
uniform float A;
uniform float B;
uniform float C;
uniform float D;
uniform float E;
uniform float F;
uniform float W;

uniform float key;

vec3 Uncharted2Tonemap(vec3 x)
{
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}

void main()
{
    vec3 texColor = texture2D(baseTexture, texcoord).rgb;
    // Mip level 8 of the 512x512 log-luminance texture is the 2x2 level;
    // exp() undoes the log stored in step 2.
    float avgL = exp(textureLod(avgLumTexture, texcoord, 8.0).r);
    float L = 0.2126 * texColor.r + 0.7152 * texColor.g + 0.0722 * texColor.b;
    float exposure = key * L / avgL;
    texColor = texColor * exposure;

    vec3 curr       = Uncharted2Tonemap(texColor.rgb * 2.0); // 2.0 is Hable's ExposureBias
    vec3 whiteScale = 1.0 / Uncharted2Tonemap(vec3(W));      // W promoted to vec3; GLSL won't convert implicitly
    vec3 color      = curr * whiteScale;
    vec3 retColor   = pow(color, vec3(0.45));                // approximate 1/2.2 gamma; pow needs a vec3 exponent
    gl_FragColor = vec4(retColor, 1.0);
}
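Written out, the curve that Uncharted2Tonemap evaluates is

f(x) = \frac{x \left( A x + C B \right) + D E}{x \left( A x + B \right) + D F} - \frac{E}{F}

and the final color before gamma is f(2 \cdot x_{exposed}) / f(W), so the curve is normalized such that the white point W maps to 1.0.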

A quick note about gamma correction: according to the SilverLining documentation, disabling their tone mapping (which is what allows me to use my own) also disables gamma correction. So, all of the values I get from them should be linear. 

 

Here are the results, using the values for A, B, C, D, E, F, W that are hard-coded in the code sample on Hable's blog (which I did not expect to work for my own scene, but they make a good starting point):

 

[Screenshot: the scene tone mapped with Hable's default coefficients; average-luminance debug square in the upper right, coefficient values and curve graph in the bottom left]

 

In the upper right corner, you can see my average luminance texture, displayed at its 2x2 mipmap level. When drawing that little debug square, I exponentiate my sample so that it's back in linear space, and I multiply by 0.1 because most of the luminance values in this scene are between 0 and 10; this gives me a good idea of what I should be seeing.

 

In the bottom left corner, you can see the values I'm using for Hable's coefficients, as well as a graph of the curve. I should probably put axes and guidelines on that graph so I get more information than just the shape, but this is what I have for now. In any case, I can edit these values on the fly and play with things until they look right. The "key" value (variable a in Reinhard's equation) can also be modified on the fly.

 

My problem is, those cloud bottoms are extremely dark (about 0.4 kilocandelas per square meter, according to gDEBugger) while the cloud tops are extremely bright (about 5.5 kilocandelas per square meter). Any attempt I make to lighten those blacks seems to wash everything out and desaturate the whole scene. If I then modify another parameter (say, the white point) to make things less washed-out, it invariably has the effect of darkening those cloud bottoms again.

 

So I guess I have a few questions:

 

a) Do I appear to be doing this all correctly? I guess there is no sense in sitting here, tweaking the numbers, if I don't have the basics right.

b) Does anyone know of any parameters I can use that should be reasonable for outdoor scenes, especially for those using SilverLining as a skybox?

c) If not (b), does anyone have a better intuition for how these parameters work so that I can find the right combination? Will I need an entirely different combination for nighttime scenes?

 

Share this post


Link to post
Share on other sites
nfactorial    735

Hey,

  I use Hable's tone-map operator in my own engine.

 

Just to begin with, I'd suggest using a constant luminance rather than a calculated luminance. You can add the calculated luminance back in once you're happy with the results of the tone-map. This will just help you concentrate on the tone-mapping without the luminance confusing things further.

 

Looking at your shader, I'm curious as to the line:

vec3 curr = Uncharted2Tonemap(texColor.rgb*2.0);

I'm wondering why you're multiplying your colour by 2? It seems odd, especially as it comes after you have calculated the pixel luminance and applied the exposure, but maybe there is a reason I'm missing.

 

Also, the constants in your luminance calculation seem incorrect; in my code it looks like:

float L = max( dot( texColor, float3( 0.299f, 0.587f, 0.114f ) ), 0.0001f );

Those are the major differences I see, but they could be the result of something else your renderer is doing that I don't know about.

 

n!

Aqua Costa    3691

c) If not (b), does anyone have a better intuition for how these parameters work so that I can find the right combination?

 

In this presentation (slide 142) you can see the name of each parameter. Then watch this video for more info on how to set the parameters. 

 

You can also download MJP's tonemapping sample and play with the parameters in real-time.

AliasBinman    855

This bit looks unneeded and should be removed:

float exposure = key * L / avgL;

Change it to:

float exposure = key / avgL;

I don't see why you need the multiplication by the pixel luminance.

mark ds    1786

Also, the constants in your luminance calculation seem incorrect; in my code it looks like:
float L = max( dot( texColor, float3( 0.299f, 0.587f, 0.114f ) ), 0.0001f );
Those are the major differences I see, but they could be the result of something else your renderer is doing that I don't know about.

 

No, he's correct. The oft-quoted [0.299, 0.587, 0.114] weights are for the NTSC/PAL television broadcast colour space. For sRGB you need [0.212656, 0.715158, 0.072186] to get correct luminance/greyscale conversions.

CDProp    1451
It just occurred to me that SilverLining outputs everything, as I mentioned, in kilocandelas per square meter, which is already a unit of luminance rather than radiance, so it might already be weighted by the luminance function. I probably don't need to apply that weighting again.

As n! suggested, I will remove the automatic exposure control for now. The video that TiagoCosta posted suggested dialing in the exposure first, such that the mid values look right, then working on the toe and shoulder after that.

Alias, you might be right. By multiplying the pixel luminance in with my exposure, and then multiplying my final exposure value by the original color from which I first calculated the luminance, I might be erroneously compounding the luminance.

n!, the 2.0 factor that you asked about is the ExposureBias that Hable uses. In the comments, he says it's just a magic number. In the video TiagoCosta posted, he has a separate ExposureBias slider. He says he likes to have Exposure and ExposureBias as separate parameters, presumably because they affect different parts of his algorithm. However, in this particular shader, it appears to be superfluous.

So, I have a lot to try when I get home from school. Thanks, everyone.

MJP    19754

Have you validated that things look mostly okay without automatic exposure and a tone mapping curve? If you remove those by picking a fixed exposure value and not applying a curve, the result should still look basically "correct". It probably won't look awesome, since a linear tone mapping curve won't enhance contrast and will clamp harshly at the high end, but it shouldn't look "wrong", if that makes sense. If something does look off, you may want to double-check that you're handling gamma correction correctly throughout your pipeline.

Hodgman    51223

As above, get it working first with just a hand-tweakable exposure to begin with and no curve (a linear tone-mapper, if you like).

After that, it should still look ok when you add Hable's curve, just with "better" contrast now. Tweaking Hable's values should only be necessary for some fine adjustments.

 

When I was playing with the values for Hable's curve, I made a second version of my tone-mapper that would draw a graph of the curve at the top of the screen.

This meant that as you tweaked any of the parameters, it was much easier to actually develop an intuition about what those parameters do.

e.g.

[Screenshots: the tone-mapper with the curve graph drawn at the top of the screen]

	// Map screen-space x to an input luminance, then run it through the full pipeline.
	float3 color = (float3)(input.uv.x * maxLumValue);
	color = ToneMap(color, ...);
	color = ColorCorrection(color, ...);
	float3 r = (1 + ddx(color) * 50) * 0.02; // line-width parameters
	// Fade-in/fade-out vertical gradient around the curve ("line" is reserved in HLSL, so use another name).
	float4 result  = float4(     smoothstep( input.uv.yyy - r, input.uv.yyy,     color ), 1 );
	       result *= float4( 1 - smoothstep( input.uv.yyy,     input.uv.yyy + r, color ), 1 );
	return result;

CDProp    1451

Thanks, everyone.

 

Indeed the graph has been very helpful. I have one (see my second screenshot in the OP), but it uses a linear scale. Do you mean that I should graph it with a log2 scale on the y-axis? Won't that look weird for values between 0-1?

 

I tried this again, removing the automatic exposure adjustment and using just a hand-tweakable exposure parameter. I also followed AliasBinman's advice and removed the extra pixel luminance term from the exposure formula. This seemed to help a little. I was able to dial in an exposure that made the mid-tones look alright, but at the end of the day the cloud bottoms were just way too dark. The luminance values for the cloud bottoms are very low (about 5 kcd/m²) while those of the cloud tops are very high (about 30 kcd/m²). Yet, in SilverLining's own tone mapping (see my first screenshot), the cloud bottoms have an RGB value above middle gray (0.6, 0.6, 0.7). So, more than half of the RGB range is being squeezed into the first 15% of luminance values, which seems to argue for a curve that is nearly all shoulder, with an immense linear mid-tone slope and a minuscule toe. No matter how I played with the numbers, it seemed I had a choice between clouds with ink-dark bottoms, or a scene that was washed out and desaturated.

 

So, I perused the SilverLining documentation, and it turns out that they do provide a way to lighten the cloud bottoms. So, I decided to give that a try, and I am much happier with my results:

 

[Screenshot: the scene with SilverLining's cloud-bottom lightening enabled]

 

It could use some more work, but I like it. I'm slightly uneasy with the fact that I wasn't able to come up with good parameters using the same settings as before, because those are the settings that are used by SilverLining's own tone mapping operator. However, I just could not see how the range of luminance values I was getting could possibly play nice with the tone mapping operator I'm attempting to use.

 

So, I've put the automatic exposure control back in, and it works alright, but it seems too over-active at the moment. I'll have a scene that appears to be lit well, and I'll zoom in on a dark spot (like the unlit side of an object), and the automatic exposure adjustment will light that thing up like it's high noon. I don't know that I want the exposure to change that much just because a dark (or light) object moved into view. Is there a standard way of limiting the effect of the exposure control, or should I just get creative and fiddle with the formula a bit?

 

This also comes into play with time-of-day changes (my app has dynamic time of day changes, on a full 24 hour cycle). I already expected that I might have to change the key value at night, but it looks like I might have to change the white point as well, and maybe do it on a sliding scale. Maybe I should look into Valve's histogram method? 

Hodgman    51223

Indeed the graph has been very helpful. I have one (see my second screenshot in the OP), but it uses a linear scale. Do you mean that I should graph it with a log2 scale on the y-axis? Won't that look weird for values between 0-1?

The y-axis is from 0-1 and the x-axis is from 0-maxLuminance. I think MJP is suggesting to change the x-axis to be from 0-log2(maxLuminance).

e.g. here's my tone-mapper with luminance on x and final pixel value on y:

(left is 0-1, middle is 0-10, right is 0-1024 input range)

[Animated graphs: the curve on a linear x-axis with 0-1, 0-10, and 0-1024 input ranges]

Note how I need to zoom into the graph in order to see the detail at the bottom-end, and zoom out to see the detail at the top-end.

 

And here's the same function graphed with the x-axis changed to be logarithmic (still 0-1024 input range)

[Animated graph: the same curve with a logarithmic x-axis, 0-1024 input range]

As you can see, the small bottom-end toe of the graph is now easily seen, along with the linear section and the top-end shoulder. It gives a better view of how the tone-mapper treats all inputs.

 

I don't know that I want the exposure to change that much just because a dark (or light) object moved into view. Is there a standard way of limiting the effect of the exposure control, or should I just get creative and fiddle with the formula a bit?

I'm not sure what the standard solution is for preventing your camera from having infinite sensitivity. A real camera has limits on its exposure settings, or at least side-effects: long-exposure photos introduce motion blur, high ISO settings increase image noise, wide apertures reduce the amount of depth that's in focus, and so on. My solution was to give artists control over a minimum and maximum average luminosity value, which were used to clamp the actual average luminance.

e.g. if the artists set the minimum-avg-lum to 0.5, but the actual measured avg-lum is 0.001, then the tone-mapper uses 0.5 as the avg-lum value.
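In code, that clamp is all there is to it (a sketch with illustrative names; the same one-liner works in the shader if the average luminance never leaves the GPU):

#include <algorithm>

// Clamp the measured average luminance to artist-set bounds before it
// feeds the exposure calculation.
float EffectiveAvgLum(float measuredAvgLum, float minAvgLum, float maxAvgLum)
{
    return std::max(minAvgLum, std::min(measuredAvgLum, maxAvgLum));
}

// e.g. EffectiveAvgLum(0.001f, 0.5f, 100.0f) returns 0.5f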

