Tone Mapping


Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part? If one applies this at the final step of lighting, can it become problematic for blending?

🧙


Typically you'd apply it at the end of your pipeline, since that's when you want to map from your higher source range to your lower target range (i.e. the monitor). It really depends on your pipeline though: if the UI is part of it, you may end up doing tone mapping before that, only on your lit scene, and then rendering the UI directly over the result.

I looked at some operators and the most common seems to be the Reinhard operator.

A code sample I found uses the following implementation:


float CalcLuminance(float3 color) {
    // Rec. 601 luma weights; the max avoids a division by zero later on.
    return max(dot(color, float3(0.299f, 0.587f, 0.114f)), 0.0001f);
}

I have no idea which color space is used, but if it is linear RGB, then the luminance should be calculated as:


float Luminance(float3 rgb) {
    // CIE luminance (Y) weights for linear Rec. 709 / sRGB primaries.
    return max(dot(rgb, float3(0.212671f, 0.715160f, 0.072169f)), 0.0001f);
}

The operators are as follows:


// LuminanceSaturation and WhiteLevel are external parameters
// (e.g. provided through a constant buffer).

// Applies Reinhard's basic tone mapping operator.
float3 ToneMapReinhard(float3 color) {
    float pixelLuminance = CalcLuminance(color);
    float toneMappedLuminance = pixelLuminance / (1.0f + pixelLuminance);
    return toneMappedLuminance * pow(color / pixelLuminance, LuminanceSaturation);
}

// Applies Reinhard's modified (extended) tone mapping operator,
// which maps luminance WhiteLevel to pure white.
float3 ToneMapReinhardModified(float3 color) {
    float pixelLuminance = CalcLuminance(color);
    float toneMappedLuminance = pixelLuminance * (1.0f + pixelLuminance / (WhiteLevel * WhiteLevel)) / (1.0f + pixelLuminance);
    return toneMappedLuminance * pow(color / pixelLuminance, LuminanceSaturation);
}
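
For reference, a minimal sketch of how such an operator could be driven from a full-screen post-process pass; the constant buffer, texture, and sampler names here are my own assumptions, not part of the original sample:


// Hypothetical full-screen tonemapping pass built around the operators above.
cbuffer ToneMapConstants : register(b0) {
    float LuminanceSaturation;  // e.g. 1.0f
    float WhiteLevel;           // e.g. 5.0f, only used by the modified operator
};

Texture2D<float4> HDRTexture : register(t0);
SamplerState PointSampler    : register(s0);

float4 PSToneMap(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target {
    float3 hdr = HDRTexture.Sample(PointSampler, uv).rgb;
    return float4(ToneMapReinhardModified(hdr), 1.0f);  // gamma correction comes later
}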

For both methods, I understand the mapping of the luminance value to toneMappedLuminance, but what is the multiplication by pow(color / pixelLuminance, LuminanceSaturation) supposed to represent? color / pixelLuminance normalizes the color to unit luminance (a local normalization, though; one could use an image-wide average or maximum value as well, I suppose). But what about the power function?

🧙

3 hours ago, matt77hias said:

Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part? If one applies this at the final step of lighting, can it become problematic for blending?

IIRC you perform tonemapping right before AA, but I'm not sure.

-potential energy is easily made kinetic-

9 hours ago, matt77hias said:

Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part?

When we first started doing HDR on the PS3/360, we did tonemapping immediately after lighting and before post-processing. This was a deliberate performance choice so that our post-processing passes could work with 8-bit colours, as it was much faster. When we moved to the PS4/XbOne, we started doing tonemapping after post-processing, so that post-processing passes worked with HDR colours, as it produces much prettier results.

The phenomenon where bright pixels have a much stronger effect in DOF/motion-blur just happens by default when doing HDR post-processing, but requires the use of a lot of hacks if doing 8-bit post-processing :)

[Image: bokeh depth-of-field example]

7 hours ago, matt77hias said:

I have no idea which color space is used, but if it is linear RGB, then the luminance should be calculated as:

There are a lot of ways to calculate luminance. Each is "right" within a specific definition... As long as you're somewhat close to perceptual luminance it doesn't really matter. The first set of numbers you posted is from the SDTV standard (Rec. 601); the second set is from the HDTV standard (Rec. 709), which matches the luminance weights published by the International Commission on Illumination for linear RGB. I doubt you'd be able to tell the difference really :D

6 hours ago, Infinisearch said:

IIRC you perform tonemapping right before AA, but I'm not sure.

You are right in that anti-aliasing needs the tone mapped result, but the AA is probably also before the post-processing, which will work best in HDR space instead. So usually there is a tonemap in the anti-aliasing shader before the resolve, and an inverse tonemap right after it, so that the post-processing gets the anti-aliased image in HDR space. Then of course you need to do the tonemapping again at the end.
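
To make this concrete, here is a minimal sketch of such a custom resolve, assuming 4x MSAA and a simple luminance-based Reinhard pair for the tonemap/inverse tonemap. It reuses CalcLuminance from above; the other names are illustrative:


// Custom MSAA resolve: tonemap each sample into [0, 1), average, then invert
// the tonemap so that subsequent post-processing receives HDR values again.
Texture2DMS<float4, 4> HDRSceneMS;

float3 TonemapForResolve(float3 c) {
    return c / (1.0f + CalcLuminance(c));  // simple Reinhard on the luminance
}

float3 InverseTonemapForResolve(float3 c) {
    return c / max(1.0f - CalcLuminance(c), 0.0001f);
}

float4 PSResolve(float4 pos : SV_Position) : SV_Target {
    int2 coord = int2(pos.xy);
    float3 sum = 0.0f;
    [unroll]
    for (int i = 0; i < 4; ++i) {
        sum += TonemapForResolve(HDRSceneMS.Load(coord, i).rgb);
    }
    return float4(InverseTonemapForResolve(sum / 4.0f), 1.0f);
}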

So if I summarize all this:

  1. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
  2. Post Processing
  3. Tone Mapping
  4. Sprites
  5. Gamma Correction

For a forward pass (for which I currently see no reason not to use MSAA), this becomes:

  1. LBuffer
  2. Lighting
  3. Post Processing
  4. Tone Mapping
  5. Sprites
  6. Gamma Correction

(Here, LBuffer generates the shadow maps etc.)

For a deferred pass, this becomes:

  1. LBuffer
  2. GBuffer
  3. Lighting
  4. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
  5. Post Processing
  6. Tone Mapping
  7. Sprites
  8. Gamma Correction

The latter is currently a bit messy, since I actually do:

Lighting :=

  1. Opaque Lighting
  2. Transfer non-MS to MS
  3. Transparent Lighting

I try to preserve the multi-sampling in both pipelines, since I see no feasible alternative to alpha-to-coverage for transparency. I do not want to sort transparent objects, which is not even guaranteed to be possible. This currently works "ok" in the absence of post-processing.

So in both cases (forward and deferred), I will actually have a multi-sample texture. I am not really sure whether that is ideal as input for the post-processing. Furthermore, what does post-processing need as input (in general)? The current image and the depth buffer? Or should I have normals as well (a minimal GBuffer)?

Since HDR values are preserved between multiple passes, does that mean that I need an intermediate RTV such as A32B32G32R32? (The multi-sampling is going to hurt my memory :) )

🧙

1 hour ago, matt77hias said:

So if I summarize all this:

  1. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
  2. Post Processing
  3. Tone Mapping
  4. Sprites
  5. Gamma Correction

Sprites (GUI?) should probably go after gamma correction, assuming they are not lit. Image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, just render them after gamma correction. If the sprites have lighting, then they should be rendered before tonemapping.

1 hour ago, matt77hias said:

For a forward pass (for which I currently see no reason not to use MSAA), this becomes:

Keep in mind, you should still do the inverse tonemap trick for MSAA for best results. You have to do a custom resolve for that.

1 hour ago, matt77hias said:

I try to preserve the multi-sampling in both pipelines, since I see no feasible alternative to alpha-to-coverage for transparency. I do not want to sort transparent objects, which is not even guaranteed to be possible. This currently works "ok" in the absence of post-processing.

I like to avoid alpha-to-coverage and instead use a two-pass approach. It consists of first rendering only the opaque parts, with alpha test and depth writes turned on. The second pass is depth read-only (greater or less depth test, but not equal), with alpha blending on but no alpha test. You don't have to sort the alpha-blended geometry, because sorting errors really won't be visible: the opaque parts dominate and are correctly ordered thanks to the depth buffer. Obviously this works best for vegetation, not for glass and the like. But so does alpha-to-coverage.
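
A rough sketch of those two passes for a simple foliage shader; the required depth/blend state is described in the comments, and all names are illustrative:


Texture2D<float4> DiffuseTexture;
SamplerState LinearSampler;

struct PSInput {
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

// Pass 1: opaque core. Depth test and depth writes ON, blending OFF.
float4 PSFoliageOpaque(PSInput input) : SV_Target {
    float4 color = DiffuseTexture.Sample(LinearSampler, input.uv);
    clip(color.a - 0.5f);  // alpha test: discard the translucent fringe
    return color;
}

// Pass 2: translucent fringe. Depth test LESS (not EQUAL), depth writes OFF,
// standard alpha blending ON, no alpha test. Sorting errors stay hidden
// because the opaque core already wrote the correct depth.
float4 PSFoliageEdges(PSInput input) : SV_Target {
    return DiffuseTexture.Sample(LinearSampler, input.uv);
}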

1 hour ago, matt77hias said:

So in both cases (forward and deferred), I will actually have a multi-sample texture. I am not really sure whether that is ideal as input for the post-processing. Furthermore, what does post-processing need as input (in general)? The current image and the depth buffer? Or should I have normals as well (a minimal GBuffer)?

Since HDR values are preserved between multiple passes, does that mean that I need an intermediate RTV such as A32B32G32R32? (The multi-sampling is going to hurt my memory :) )

RGBA16_FLOAT format for HDR should be more than enough. You should probably be running your post-processing on non-multisampled textures. You should resolve or even downsample your g-buffers before post-processing for faster access in the shaders. The output render targets should definitely be non-multisampled for post-processing.
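
As a concrete example of that resolve step: depth in particular cannot go through ResolveSubresource, so a small custom pass is common. A sketch assuming 4x MSAA, keeping the closest sample (names are illustrative):


// Resolving MSAA depth for post-processing: averaging makes no sense for
// depth, so keep the nearest sample (max is the other common choice).
Texture2DMS<float, 4> DepthMS;

float PSResolveDepth(float4 pos : SV_Position) : SV_Target {
    int2 coord = int2(pos.xy);
    float depth = 1.0f;  // far plane, assuming standard (non-reversed) depth
    [unroll]
    for (int i = 0; i < 4; ++i) {
        depth = min(depth, DepthMS.Load(coord, i));
    }
    return depth;
}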

28 minutes ago, turanszkij said:

RGBA16_FLOAT format for HDR should be more than enough. You should probably be running your post-processing on non-multisampled textures. You should resolve or even downsample your g-buffers before post-processing for faster access in the shaders. The output render targets should definitely be non-multisampled for post-processing.

Small addition: if you use physically based sky/sun stuff, you can easily end up with values in your scene that are bigger than what a half can store. In that case you either need to apply some pre-exposure at the end of the lighting/sky passes (the Frostbite PBR course notes mention how they do this), or use 32-bit RTs. (I do the latter at the moment; pre-exposing would be the preferable way to solve this, but I haven't yet had the patience to rewire my frame graph to support it.)
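
For illustration, pre-exposure might look roughly like this; PreExposure and ComputeLighting are hypothetical names, and the scale factor is typically derived from the previous frame's exposure:


// Scale radiance before writing to the RGBA16_FLOAT target so that very
// bright sky/sun values stay within half-float range.
cbuffer FrameConstants {
    float PreExposure;  // hypothetical; e.g. last frame's exposure
};

float3 ComputeLighting(float2 uv);  // stand-in for the actual lighting code

float4 PSLighting(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target {
    float3 radiance = ComputeLighting(uv);        // may exceed half-float range
    return float4(radiance * PreExposure, 1.0f);  // now fits a 16F target
}
// Tonemapping (or any pass that needs absolute values) divides by PreExposure,
// or simply folds it into the exposure it would apply anyway.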

1 hour ago, turanszkij said:

Sprites (GUI?) should probably go after gamma correction, assuming they are not lit. Image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, just render them after gamma correction. If the sprites have lighting, then they should be rendered before tonemapping.

I implicitly assumed doing inverse gamma mapping first when retrieving the data from the sprite texture. But now that you mention it, it makes no sense to do an inverse gamma mapping followed by a gamma mapping, since that is a no-op if I do not perform anything in between.
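
In code terms, the no-op described above, with an illustrative sprite shader and a simple pow-based gamma model:


// Decoding to linear and immediately re-encoding cancels out:
//   pow(pow(c, 2.2f), 1.0f / 2.2f) == c
// so sprites drawn after gamma correction can be sampled and output directly.
Texture2D<float4> SpriteTexture;
SamplerState LinearSampler;

float4 PSSprite(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target {
    return SpriteTexture.Sample(LinearSampler, uv);  // already in gamma space
}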

🧙

