Some HDR / Tone Mapping questions



#1 CDProp   Members   -  Reputation: 955


Posted 03 March 2012 - 12:56 AM

Greetings,

I've recently begun looking into tone mapping, and I have several questions. I am hoping that someone with more experience can give me some advice on these topics.

1. Is using a 16-bit floating point texture still the norm?

2. Does this technique play well with deferred shading or deferred lighting?

3. Is the RADIANCE .hdr file format a good choice for textures?

4. Is it necessary to have all textures in HDR format? Or is it okay to have a few key textures (skybox, for example) in HDR, and some HDR values for the lights, but keep the typical object textures in an LDR format?


Thanks.


#2 MJP   Moderators   -  Reputation: 11306


Posted 03 March 2012 - 01:26 AM

1. That depends on the hardware. For modern PC hardware, FP16 is pretty common. On consoles you have encoded formats like RGBM being pretty popular on PS3, while the 360 supports a special FP10 format. If you're using DX10 or DX11 you also have a R11G11B10_FLOAT format available.
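
A minimal sketch of the kind of RGBM scheme mentioned above. The range multiplier, quantization, and storage details vary per engine; `RANGE = 6.0` is just an assumed example value here, not anything canonical:

```python
# RGBM: store HDR color as (rgb / (M * RANGE), M), with M = max(rgb) / RANGE.
# M goes in the 8-bit alpha channel, so we simulate its quantization below.

RANGE = 6.0  # assumed maximum HDR multiplier; real engines tune this

def rgbm_encode(r, g, b, rng=RANGE):
    m = max(r, g, b) / rng
    m = min(max(m, 1.0 / 255.0), 1.0)   # avoid divide-by-zero, clamp to [0,1]
    m = round(m * 255.0) / 255.0        # simulate 8-bit storage of M
    return (r / (m * rng), g / (m * rng), b / (m * rng), m)

def rgbm_decode(r, g, b, m, rng=RANGE):
    return (r * m * rng, g * m * rng, b * m * rng)
```

Since the multiplier lives in the alpha channel, hardware alpha blending can't produce correct results, which is why MJP notes that encoded formats rule out additive light blending.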

2. Definitely. If you're additively blending your lights then you need blending support for your HDR format. On DX10 hardware every floating point format supports blending, so you don't have anything to worry about. However if you want to use an encoded format like RGBM then hardware blending isn't an option.

3. It's okay for source assets, but like anything else you'll probably want to convert to something very close to your runtime format so that loading is quick and so you can have mips pre-generated. But in general the RGBE encoding used by .HDR is a good representation for HDR images. If you're targeting DX11 hardware then there's a compressed floating point format (BC6H) which is pretty cool, and you definitely want to compress to that format ahead of time since it's really slow to do so.
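
For reference, the RGBE encoding that .HDR uses stores three 8-bit mantissas plus one shared 8-bit exponent. A rough sketch of the idea, simplified from the canonical Radiance routines (no run-length encoding, no denormal handling beyond the zero case):

```python
import math

def rgbe_encode(r, g, b):
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(m)          # m = mant * 2**exp, mant in [0.5, 1)
    scale = mant * 256.0 / m           # maps the max channel near 255
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_decode(r, g, b, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)   # 2**(e - 136)
    return (r * f, g * f, b * f)
```

Because the exponent is shared, precision suffers when one channel is much brighter than the others, which is part of why you'd convert to a proper runtime format (FP16 or BC6H) rather than sampling RGBE directly.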

4. No. In general you only need HDR textures for things that are emissive, which means that they emit light. Skyboxes certainly fit that category. You can also use an HDR texture for emissive/glow maps on regular geometry, however in a lot of cases it's suitable to use an LDR color map combined with a floating point intensity value that gets multiplied with the texture at runtime in the shader.

Anything that's an albedo map definitely doesn't need to be HDR. For instance, a diffuse albedo map contains values representing the ratio of light reflected from a surface relative to the amount of light hitting it. Since it doesn't make sense for a surface to reflect more light than what hits it, albedo maps are restricted to the [0,1] range and can be suitably represented by an LDR texture. Some things like a gloss/roughness map may not be restricted to [0,1] conceptually, but in practice the map can still be LDR because you don't need more than that level of precision. So for instance you could multiply your gloss map value by 255 or 1024 to get a specular power that you use for Blinn-Phong.

#3 CDProp   Members   -  Reputation: 955


Posted 03 March 2012 - 05:58 PM

MJP, thanks very much for your help. This is becoming much clearer to me now; in particular, thinking of diffuse textures as "albedo maps" is very helpful (and gives me an extra Google search term to boot). I was going to ask you a question about the tone mapping operator I was using (Reinhard), but then I read your blog post about crappy tone-mapping operators. Hahaha!!

Just kidding. I'll ask away, and then if you think I should abandon Reinhard for something else I will certainly appreciate the advice. Basically, I created a quick program that simply displays a full-screen quad with a texture on it. The texture is a 256x256 floating point texture with a gradient from 0.0 to 255.0. Here is how it looks with no tone mapping (other than to divide by 255.0 to map it to the 0-1 range):

[screenshot: the gradient with no tone mapping, just divided by 255]

Now, I want to apply the Reinhard operator to this, to see what I get. I calculated the average luminance to be about 90.19 using this formula:

[equation image: average-luminance formula]

(basically, the average luminance of a single column in the image, because I figure each column will be the same)

Note that I'm not currently converting back and forth from Yxy color space at this point. I have the code to do so, but I figure since my texture is grayscale, I might as well comment it out until I'm sure that the rest of the shader is correct (to eliminate a potential source of error and frustration).

I'm using a key/middleGray value of 0.18, as this seems to be fairly standard from what I am reading. I'm using a white point of 255.0, since this is the value that I want to show up as white.

When I run the program, this is what I get:

[screenshot: Reinhard result with key = 0.18]

Hmmm, something seems wrong. I wasn't necessarily expecting a perfect match to the first image; I was expecting that, perhaps, the gradient would be non-linear or something. However, I was at least expecting the top row of pixels to be black (as they are) and the bottom row to be white, since I used 255.0 as my white point. Here is my shader code:

uniform sampler2D hdrTex;
uniform float avgLuminance;
uniform float key;
uniform float white;
varying vec2 texCoord;

void main(void)
{
    vec4 texColor = texture2D(hdrTex, texCoord);

    float Lp = texColor.r * key / avgLuminance;
    texColor.r = (Lp * (1.0 + Lp / (white * white))) / (1.0 + Lp);

    gl_FragColor = vec4(texColor.r, texColor.r, texColor.r, 1.0);
}

This URL shows an interesting way to calculate the middle gray value, so I gave it a try, using my average luminance of 90.19, and obtained a result of 0.723. Here is what the image looks like with this value:

[screenshot: result with key = 0.723]

Definitely lighter! But still not quite what I (think) I am looking for.

Another quick question: when it comes time to simulate an aperture (i.e., calculate the average luminance of the scene, adapt the exposure accordingly, perhaps with some time lag), which parameter(s) do I tweak? Do I keep the key/middleGray parameter constant, and just change the average luminance?

Thanks again!

#4 MJP   Moderators   -  Reputation: 11306


Posted 03 March 2012 - 06:11 PM

Well before we work out the gradient issues, are you properly handling sRGB conversion after tone mapping? If you don't convert from linear to sRGB, then pretty much all bets are off. Looking at your pictures it looks like you are doing this correctly, and it's just the expected results from your parameters. For a key value of .18 and an average luminance of 90.19 your exposure value is ~0.002, which is lower than the ~0.004 you were using when you were dividing by 255.
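
The arithmetic behind those two numbers is easy to check:

```python
key = 0.18
avg_luminance = 90.19

# exposure the Reinhard shader applies: key / avgLuminance
exposure = key / avg_luminance       # ~0.002

# the earlier "no tone mapping" image just divided by 255
naive_scale = 1.0 / 255.0            # ~0.004
```

So the tone mapped image scales the input roughly half as much as the plain divide-by-255 did, which is why it comes out darker overall.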

As for Reinhard, I would definitely recommend using a filmic curve instead. The default values for Hable's curve produce a strong toe, which can really saturate colors and crush blacks, so you might want to tweak it to be a bit more neutral if that's not the desired look. But either way it has a really nice, natural response.

As for your last question, typically you tweak the key value for different scenes. You can think of the key value as a "mood" slider: higher values for brighter scenes (such as sunny outdoors) with a more uniform distribution of luminosity, and lower values for scenes that are darker and moodier, with lots of dark areas and a few bright hotspots (think an underground tunnel with sparse, artificial lighting). These concepts correspond pretty well to low-key and high-key lighting from photography. At work we also allow the artists to specify min and max exposure values to clamp the results from auto exposure, so that they can keep the exposure from varying too much in response to small bright/dark areas. Either way it's very much a subjective/artistic thing. You can try to use methods like the one you linked to estimate a decent key value (although to do a decent job of it you really need to generate a histogram and calculate the 2% min and max of the scene), but in the end I've found you always want an artist to pick something that gives them the look they want.

Also, btw... you don't need to convert to Yxy to apply the scaling value. You can just calculate luminance using a weighted average of your RGB values (dot(color, float3(0.299f, 0.587f, 0.114f))) and then compute your RGB scale as a ratio of luminances.
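
A sketch of that luminance-ratio approach in plain Python (keeping the extended-Reinhard curve and parameter names already used in this thread):

```python
def tonemap_rgb(r, g, b, key, avg_lum, white):
    # luminance from the Rec. 601 weights in MJP's dot product
    L = 0.299 * r + 0.587 * g + 0.114 * b
    Lp = L * key / avg_lum
    # extended Reinhard applied to luminance only
    nL = (Lp * (1.0 + Lp / (white * white))) / (1.0 + Lp)
    # scale all three channels by the ratio of new to old luminance,
    # so hue and saturation are preserved
    s = nL / L
    return (r * s, g * s, b * s)
```

Because every channel is multiplied by the same ratio, the color direction is unchanged; only its brightness moves through the curve.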

#5 CDProp   Members   -  Reputation: 955


Posted 03 March 2012 - 07:46 PM

Concerning conversion to sRGB: are you referring to gamma correction? If so, I am not. I am somewhat new to this color-space concept. Is it a simple matter of raising the color I'm about to output to the power of 1/2.2? Or is there something more to it? Also, do I do the same thing for the alpha channel? I have no alpha blending in my simple example, but in raising the other channels to 1/2.2 (keeping the key and average luminance at 0.18 and 90.19, respectively) here is what I get:

[screenshot: result with gamma correction applied]

Here is my new shader code:

uniform sampler2D hdrTex;
uniform float avgLuminance;
uniform float key;
uniform float white;
varying vec2 texCoord;
void main(void)
{
    vec4 texColor = texture2D(hdrTex, texCoord);
    float Lp = texColor.r * key / avgLuminance;
    texColor.r = (Lp * (1.0 + Lp / (white * white))) / (1.0 + Lp);
    float outputColor = pow(texColor.r, 0.45);
    gl_FragColor = vec4(outputColor, outputColor, outputColor, 1.0);
}


So, if I understand you correctly about the key value, it should stay relatively constant as long as the overall makeup of the scene doesn't change? And is the exposure level calculated by dividing the key value by the average luminance? If so, would changing the exposure level simply involve changing the average luminance? (which, of course, could be calculated from the scene itself, or set manually to give more control over the 'aperture')

One last question: I can see how that dot product would give me the luminance of the pixel, but once I've calculated that luminance and modified it, how do I use the modified luminance to alter the original RGB colors? In other words, what do you mean by computing the RGB scale as a ratio of luminances?

Again, this is immensely appreciated. I've been doing straight 1998-style LDR, non-gamma-corrected rendering up until now, with very little concept of color spaces, radiometry, etc. Doing the research has been fun, but slow due to all of the questions raised, and so I really appreciate the one-on-one help you've given me.

#6 CDProp   Members   -  Reputation: 955


Posted 03 March 2012 - 10:23 PM

This blog post has some good information, I believe, on computing the RGB scale as a ratio of luminances:

http://imdoingitwrong.wordpress.com/2010/08/19/why-reinhard-desaturates-my-blacks-3/

(not that MJP needs it, but I like to answer my own questions sometimes in case anyone else needs the information)

#7 CDProp   Members   -  Reputation: 955


Posted 03 March 2012 - 11:30 PM

Ah, looking at the actual equation, I wondered if the white point shouldn't be equal to the lowest scaled luminance that should be mapped to white (that is, the luminance value after it is multiplied by key/avgLuminance). So, if the highest luminance in my texture is 255, and I'm using a key of 0.18 and the average luminance is 90.19, then my white point should be 255*(0.18/90.19) = 0.51. When I try this, I get a much more sensible result:

[screenshot: result with the scaled white point of 0.51]

And here's the latest GLSL:

uniform sampler2D hdrTex;
uniform float avgLuminance;
uniform float key;
uniform float white;
varying vec2 texCoord;


void main(void)
{
    vec4 texColor = texture2D(hdrTex, texCoord);

    float L = 0.299 * texColor.r + 0.587 * texColor.g + 0.114 * texColor.b;
    float Lp = L * key / avgLuminance;
    float nL = (Lp * (1.0 + Lp / (white * white))) / (1.0 + Lp);
    texColor.rgb *= nL / L;

    texColor.rgb = pow(texColor.rgb, vec3(0.45));

    gl_FragColor = vec4(texColor.rgb, 1.0);
}

Of course, if I want a luminance of 255 to come out white, then it has to end up scaled by 1/255, which means nL has to be 1. And for nL to be 1, Lp has to equal the white point. If, instead, I choose a white point of 255 (the unscaled luminance), then I have no hope of ever getting a white pixel. This experience will teach me to read the math expressions more carefully. =P
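
That identity is easy to confirm: when Lp equals the white point w, the numerator Lp*(1 + Lp/w^2) collapses to w + 1, which cancels the denominator 1 + w exactly:

```python
key, avg_lum = 0.18, 90.19
white = 255.0 * key / avg_lum        # ~0.51, the scaled white point

def reinhard_extended(Lp, white):
    return (Lp * (1.0 + Lp / (white * white))) / (1.0 + Lp)

# feeding the white point itself through the curve should give exactly 1
result_at_white = reinhard_extended(white, white)
```

Any Lp above the white point maps slightly over 1 and gets clipped, which is the intended behavior of the "burn out" region in Reinhard's extended operator.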

#8 Quat   Members   -  Reputation: 404


Posted 15 October 2012 - 03:57 PM

At work we also allow the artists to specify min and max exposure values to clamp the results from auto exposure, so that they can keep the exposure from varying too much in response to small bright/dark areas.


I think I am having this problem; basically, we like the tone map adjustment, but sometimes it gets too strong (probably because we're using a global tone map operator based on log-average luminance).

What part are you clamping? Is it just clamping the average lum calculated so it can't vary too much and thus can't modify the scene as much?

Edited by Quat, 15 October 2012 - 04:49 PM.

-----Quat

#9 MJP   Moderators   -  Reputation: 11306


Posted 15 October 2012 - 07:27 PM

We compute an exposure value by doing keyValue / avgLuminance, and then we clamp that. We actually do it in the log2 domain, since it's more intuitive for the artists.
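
A minimal sketch of that clamp. The EV bounds below are made-up placeholders for illustration, not real production values:

```python
import math

def clamped_exposure(key_value, avg_luminance, min_ev, max_ev):
    # work in the log2 ("EV") domain, clamp to artist-specified bounds,
    # then convert back to a linear scale factor
    ev = math.log2(key_value / avg_luminance)
    ev = min(max(ev, min_ev), max_ev)
    return 2.0 ** ev

# hypothetical bounds: never darken below EV -8 or brighten above EV +2
scale = clamped_exposure(0.18, 90.19, min_ev=-8.0, max_ev=2.0)
```

Clamping in log2 means each unit of the bound is one photographic stop, which is why it reads more intuitively for artists than clamping the raw linear multiplier.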



