# Tone Mapping

## Recommended Posts

Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part? If one would apply this at the final step of the lighting, can this become problematic for blending?

##### Share on other sites

Typically you'd apply it at the end of your pipeline, since that's when you want to map from your higher source range to your lower target range (i.e. the monitor).  It really depends on your pipeline though, if the UI is part of the pipeline then you may end up doing tone mapping before that, only on your lit scene, then rendering the UI over that directly.

##### Share on other sites

I looked at some operators and the most common seems to be the Reinhard operator.

Some sample code I found uses the following implementation:

```hlsl
float CalcLuminance(float3 color) {
    return max(dot(color, float3(0.299f, 0.587f, 0.114f)), 0.0001f);
}
```

I have no idea what color space is used, but if it is RGB, then the luminance should be calculated as:

```hlsl
float Luminance(float3 rgb) {
    return max(dot(rgb, float3(0.212671f, 0.715160f, 0.072169f)), 0.0001f);
}
```

The operators are as follows:

```hlsl
// Applies Reinhard's basic tone mapping operator
float3 ToneMapReinhard(float3 color) {
    float pixelLuminance = CalcLuminance(color);
    float toneMappedLuminance = pixelLuminance / (pixelLuminance + 1.0f);
    return toneMappedLuminance * pow(color / pixelLuminance, LuminanceSaturation);
}

// Applies Reinhard's modified tone mapping operator
float3 ToneMapReinhardModified(float3 color) {
    float pixelLuminance = CalcLuminance(color);
    float toneMappedLuminance = pixelLuminance * (1.0f + pixelLuminance / (WhiteLevel * WhiteLevel)) / (1.0f + pixelLuminance);
    return toneMappedLuminance * pow(color / pixelLuminance, LuminanceSaturation);
}
```

For both methods, I understand the mapping of the luminance value toneMappedLuminance, but what is this multiplication by pow(color / pixelLuminance, LuminanceSaturation) supposed to represent? The color / pixelLuminance is a normalization of the luminance value (though a local normalization as one can use an image wide average or maximum value as well, I suppose). But what about the power function?

##### Share on other sites
3 hours ago, matt77hias said:

Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part? If one would apply this at the final step of the lighting, can this become problematic for blending?

IIRC you perform tonemapping right before AA, but I'm not sure.

##### Share on other sites
9 hours ago, matt77hias said:

Does one normally apply Tone Mapping in the Lighting part? Or does one postpone this to the Post Processing part?

When we first started doing HDR on the PS3/360, we did tonemapping immediately after lighting and before post-processing. This was a deliberate performance choice so that our post-processing passes could work with 8-bit colours, as it was much faster. When we moved to the PS4/XbOne, we started doing tonemapping after post-processing, so that post-processing passes worked with HDR colours, as it produces much prettier results.

The phenomenon where bright pixels have a much stronger effect in DOF/motion-blur just happens by default when doing HDR post-processing, but requires the use of a lot of hacks if doing 8-bit post-processing.

7 hours ago, matt77hias said:

I have no idea what color space is used, but if it is RGB, then the luminance should be calculated as:

There's a lot of ways to calculate luminance. Each is "right" within a specific definition... As long as you're somewhat close to perceptual luminance it doesn't really matter. The first set of numbers you posted are from the HDTV standard, the second set are from the International Commission on Illumination. I doubt you'd be able to tell the difference really.

##### Share on other sites
6 hours ago, Infinisearch said:

IIRC you perform tonemapping right before AA, but I'm not sure.

You are right in that anti-aliasing needs the tone mapped result, but the AA is probably also before the post-processing, which works best in HDR space instead. So usually there is a tonemap in the anti-aliasing shader before the resolve, and an inverse tonemap right after it, so that the post-processing is getting the anti-aliased image in HDR space. Then of course you need to do the tonemapping again at the end.

##### Share on other sites

So if I summarize all this:

1. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
2. Post Processing
3. Tone Mapping
4. Sprites
5. Gamma Correction

For a forward pass (for which I currently see no reason to not use MSAA), this becomes:

1. LBuffer
2. Lighting
3. Post Processing
4. Tone Mapping
5. Sprites
6. Gamma Correction

(Here, LBuffer generates the shadow maps etc.)

For a deferred pass, this becomes:

1. LBuffer
2. GBuffer
3. Lighting
4. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
5. Post Processing
6. Tone Mapping
7. Sprites
8. Gamma Correction

The latter is currently a bit messy, since I actually do:

Lighting :=

1. Opaque Lighting
2. Transfer non-MS to MS
3. Transparent Lighting

I try to preserve the multi-sampling in both pipelines, since I actually see no feasible alternative to alpha-to-coverage for transparency. I do not want to sort transparent objects, which is not even guaranteed to be possible. This currently works "ok" as long as there is no post-processing.

So in both cases (forward and deferred), I will actually have a multi-sample texture. I am not really sure if that is ideal as input for the post-processing? Furthermore, what does the post-processing need as input (in general)? Current image and depth buffer? Or should I have normals as well (minimal GBuffer)?

Since HDR values are preserved between multiple passes, does that mean that I need an intermediate RTV such as A32B32G32R32? (The multi-sampling is going to hurt my memory.)

##### Share on other sites
1 hour ago, matt77hias said:

So If I summarize all this:

1. Tone Mapping + Anti-Aliasing + Inverse Tone Mapping
2. Post Processing
3. Tone Mapping
4. Sprites
5. Gamma Correction

Sprites (GUI?) should probably be after gamma correction, assuming they are not lit. Image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, just render them after gamma correction. If the sprites have lighting, then they should be rendered before tonemapping.

1 hour ago, matt77hias said:

For a forward pass (for which I currently see no reason to not use MSAA), this becomes:

Keep in mind, you should still do the inverse tonemap trick for MSAA for best results. You have to do a custom resolve for that.

1 hour ago, matt77hias said:

I try to preserve the multi-sampling in both pipelines since I actually see no feasible alternative for alpha-to-coverage transparency? I do not want to sort transparent objects, which is not even guaranteed possible? This currently works "ok" in the presence of no post-processing.

I like to avoid alpha-to-coverage and instead use a two-pass approach. This consists of first rendering the opaque parts only, with alpha test and depth writes turned on. The second pass is depth read-only (greater or less depth test, but not equal) with alpha blending but no alpha test. You don't have to sort the alpha-blended geometry, because it will hardly be visible: the opaque parts dominate and are correctly sorted thanks to the depth buffer. Obviously this works best for vegetation, but not glass and the like. But so does alpha-to-coverage.

1 hour ago, matt77hias said:

So in both cases (forward and deferred), I will actually have a multi-sample texture. I am not really sure if that is ideal as input for the post-processing? Furthermore, what does the post-processing need as input (in general)? Current image and depth buffer? Or should I have normals as well (minimal GBuffer)?

Since HDR values are preserved between multiple passes, does that mean that I need an intermediate RTV such as A32B32G32R32? (The multi-sampling is going to hurt my memory  )

RGBA16_FLOAT format for HDR should be more than enough. You should probably run your post-processing on non-multisampled textures. You should resolve or even downsample your g-buffers before post-processing for faster access in the shaders. The output render targets should definitely be non-multisampled for post-processing.

Edited by turanszkij

##### Share on other sites
28 minutes ago, turanszkij said:
1 hour ago, matt77hias said:

So in both cases (forward and deferred), I will actually have a multi-sample texture. I am not really sure if that is ideal as input for the post-processing? Furthermore, what does the post-processing need as input (in general)? Current image and depth buffer? Or should I have normals as well (minimal GBuffer)?

Since HDR values are preserved between multiple passes, does that mean that I need an intermediate RTV such as A32B32G32R32? (The multi-sampling is going to hurt my memory  )

RGBA16_FLOAT format for HDR should be more than enough. You should probably run your post-processing on non-multisampled textures. You should resolve or even downsample your g-buffers before post-processing for faster access in the shaders. The output render targets should definitely be non-multisampled for post-processing.

Small addition: if you use physically based sky/sun stuff, you can easily end up with values in your scene that are bigger than what a half can store. In that case you either need to do some pre-exposure at the end of the lighting/sky passes (the Frostbite PBR course notes mention how they do this), or use 32-bit RTs. (I do the latter at the moment; pre-exposing would be the preferable way to solve this, but I haven't yet had the patience to rewire my frame graph to support it.)
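As a sketch of the pre-exposure idea (the function and parameter names are illustrative, not from any engine mentioned here): scale the radiance down before writing it to the fp16 target, and scale it back up wherever the true HDR value is needed.

```hlsl
// Hypothetical pre-exposure helpers; preExposure would come from a constant
// buffer, e.g. 1 / (estimated max scene luminance from the previous frame).
float3 PreExpose(float3 radiance, float preExposure)
{
    // Keeps extreme sun/sky values within half-float range (max ~65504).
    return radiance * preExposure;
}

float3 UnPreExpose(float3 color, float preExposure)
{
    // Recovers the original HDR value, e.g. before luminance metering.
    return color / preExposure;
}
```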

##### Share on other sites
1 hour ago, turanszkij said:

Sprites (GUI?) should probably be after gamma correction assuming they are not lighted. The image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, then just render them after gamma correction. If the sprites have lighting, then they should be rendered before tonemapping.

I implicitly assumed doing inverse gamma mapping first when retrieving the data from the sprite texture. But now that you are mentioning it, it makes no sense to do inverse gamma mapping and gamma mapping since it is a no-op if I do not perform anything in between.
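In code, the round trip that cancels out would look something like this (using the common 2.2 power approximation of the sRGB curve; an assumption for illustration only):

```hlsl
// 2.2 power approximation of the sRGB transfer function (assumption).
float3 GammaToLinear(float3 c) { return pow(c, 2.2f); }
float3 LinearToGamma(float3 c) { return pow(c, 1.0f / 2.2f); }

// For an unlit sprite, LinearToGamma(GammaToLinear(texel)) == texel,
// so both conversions can simply be skipped.
```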

##### Share on other sites
1 hour ago, turanszkij said:

Keep in mind, you should still do the inverse tonemap trick for MSAA for best results. You have to do a custom resolve for that.

So if I understand correctly, you need to tone-map at the end of the Lighting pass. And then perform another pass to apply the inverse tone-mapping? But isn't this a no-op? The output color is calculated once per pixel and distributed based on the coverage mask among the sub pixels. How is this influenced by the color space?

##### Share on other sites
1 minute ago, matt77hias said:

So if I understand correctly, you need to tone-map at the end of the Lighting pass. And then perform another pass to apply the inverse tone-mapping? But isn't this a no-op? The output color is calculated once per pixel and distributed based on the coverage mask among the sub pixels. How is this influenced by the color space?

No, I mean that if you have an AA resolve shader, you do the tonemapping right there before the resolve (for all the samples), and inverse tonemap right after, in the same shader (for the final resolved sample). It helps when there are huge discontinuities in the image in nearby samples, because the averaging operation would be biased towards the high values, and aliasing would not be eliminated.
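A minimal sketch of such a custom resolve (4x MSAA; the resource names and the simple invertible tonemap pair are assumptions, not code from this thread):

```hlsl
Texture2DMS<float4, 4> g_SceneMS : register(t0); // HDR multi-sampled scene

// Simple invertible tonemap pair used only around the resolve (assumption).
float3 TonemapResolve(float3 c) { return c / (1.0f + max(c.r, max(c.g, c.b))); }
float3 InverseTonemap(float3 c) { return c / (1.0f - max(c.r, max(c.g, c.b))); }

float4 ResolvePS(float4 pos : SV_Position) : SV_Target
{
    int2 p = int2(pos.xy);
    float3 sum = 0.0f;
    [unroll]
    for (int i = 0; i < 4; ++i)
        sum += TonemapResolve(g_SceneMS.Load(p, i).rgb); // compress, then average
    return float4(InverseTonemap(sum * 0.25f), 1.0f);    // back to HDR
}
```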

##### Share on other sites
1 hour ago, turanszkij said:

I like to avoid alpha-to-coverage and instead use a two-pass approach. This consists of first rendering the opaque parts only, with alpha test and depth writes turned on. The second pass is depth read-only (greater or less depth test, but not equal) with alpha blending but no alpha test. You don't have to sort the alpha-blended geometry, because it will hardly be visible: the opaque parts dominate and are correctly sorted thanks to the depth buffer. Obviously this works best for vegetation, but not glass and the like. But so does alpha-to-coverage.

So you render all models (opaque and transparent) first and perform a clip in the PS based on the alpha value and some threshold (or just 1.0f?). Then you render only the transparent models with alpha blending, while only reading the depth buffer.

I actually perform a similar approach if the user does not select MSAA. Though I do not have the depth-read-only part, that is not really an issue I suppose if you don't use the depth buffer anymore (sprites just go on top). The alpha-blended geometry looks pretty artificial, though? If I go up and look at the Crytek Sponza scene from above, the pillar plants look very white since they start blending with each other.

##### Share on other sites
16 minutes ago, turanszkij said:

No, I mean that if you have an AA resolve shader, you do the tonemapping right there before the resolve (for all the samples), and inverse tonemap right after, in the same shader (for the final resolved sample). It helps when there are huge discontinuities in the image in nearby samples, because the averaging operation would be biased towards the high values, and aliasing would not be eliminated.

But in case of MSAA what would the purpose of the AA resolve shader be? Temporal AA only?

##### Share on other sites
14 minutes ago, matt77hias said:

But in case of MSAA what would the purpose of the AA resolve shader be? Temporal AA only?

The MSAA resolve shader just samples all the subsamples of the MSAA texture and averages them. Now, DX11 provides a built-in function for this (ResolveSubresource), but you want to avoid it and write your own if you have an HDR pipeline with tonemap - average - inverse tonemap.

26 minutes ago, matt77hias said:

So you render all models (opaque and transparent) first and perform a clip in the PS based on the alpha value and some threshold (or just 1.0f?). Then you render only transparent models with alpha blending, and only reading the depth buffer.

I actually perform a similar approach if the user does not select MSAA. Though, I do not have the only reading the depth buffer part, but this is not really an issue I suppose if you don't use the depth buffer anymore (sprites just go on top). The alpha blended geometry looks pretty artificial? If I go up and look at the Crytek Sponza scene from above, the pillar plants look very white since they start blending with each other.

The opaque clipping is like this:

```hlsl
clip(baseColor.a - AlphaTestThreshold);
```

Where AlphaTestThreshold is 1.0f / 255.0f by default (so that only the completely transparent parts are clipped), but the user can modify it with values between 0 and 1.

The transparent pass uses a not-equal depth test on the read-only depth buffer because we need to reject samples based on the depth test, so every part of the texture that was rendered in the opaque pass gets discarded. Optionally, there could be an additional clip(alpha - 1.0f / 255.0f) to try to early-out on the completely transparent parts. The blending equation should be color = (srcA x srcCol) + (invSrcA x destCol), so you shouldn't be getting white on the plants as you say. You should be getting something like the attached image. You could notice sorting artifacts where there is a lot of overlap, but that is not so obvious. Instead you get very nice soft edges on distant silhouettes; for example, see the part where there is sky in the background. Anyway, this is getting off-topic.
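The two passes could be sketched as a pair of pixel shaders (resource names and the lack of lighting are simplifications for illustration; the blend and depth states are set host-side as described above):

```hlsl
Texture2D    g_BaseColor : register(t0);
SamplerState g_Sampler   : register(s0);

static const float AlphaTestThreshold = 1.0f / 255.0f; // tweakable, as above

// Pass 1: opaque parts only; depth writes on, no blending.
float4 OpaquePS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 baseColor = g_BaseColor.Sample(g_Sampler, uv);
    clip(baseColor.a - AlphaTestThreshold); // discard (nearly) transparent texels
    return baseColor;
}

// Pass 2: depth read-only with not-equal test; alpha blending enabled.
float4 TransparentPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 baseColor = g_BaseColor.Sample(g_Sampler, uv);
    clip(baseColor.a - 1.0f / 255.0f); // optional early-out, as described above
    return baseColor; // blended as (srcA x srcCol) + (invSrcA x destCol)
}
```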

##### Share on other sites

Ok, I hadn't looked into this, but it actually makes a lot of sense.

Alpha-to-Coverage:

Not so good "alpha blending":

I must remove all the opaque fragments when alpha blending. (This will increase the number of shaders by 2).

8 minutes ago, turanszkij said:

The MSAA resolve shader is just sampling all your subsamples of the MSAA texture and averaging them. Now, DX11 provides a built in function for this, but you want to avoid it and write your own if you have a HDR pipeline with tonemap - average - inversetonemap.

Ah, in order to get rid of the multi-sample texture before post-processing?

Interesting, the MSAA and alpha-to-coverage could still be of some use for forward. Though with all the post-processing, I clearly need to have a non-multi-sampled back buffer.

Edited by matt77hias

##### Share on other sites
11 minutes ago, turanszkij said:

Where AlphaTestThreshold is 1.0f / 255.0f by default (so that only the completely transparent parts are clipped, but the user can modify it with values between 0 and 1.

For my shadow maps, I use a value of 0.8f in the presence of transparent geometry.

##### Share on other sites
6 minutes ago, matt77hias said:

For my shadow maps, I use a value of 0.8f in the presence of transparent geometry

That is perfectly fine, just less conservative. I used something like 0.75 hardcoded for some time, now it can be tweaked.

##### Share on other sites
22 minutes ago, turanszkij said:

color  = ( srcA x srcCol ) + ( invSrcA x destCol )

Hmm. I remember originally using some blend states of the DirectXTK, though their naming wasn't very convincing:

Opaque Blend: (source × 1) + (destination × 0) = source

Alpha Blend: (source × 1) + (destination × (1 - source.alpha))

Additive Blend: (source × source.alpha) + (destination × 1)

Non-premultiplied Blend: (source × source.alpha) + (destination × (1 - source.alpha))

The one you propose seems more like alpha blending:

Opaque Blend: (source × 1) + (destination × 0) = source

Alpha Blend: (source × source.alpha) + (destination × (1 - source.alpha))

Additive Blend: (source × 1) + (destination × 1)

Non-premultiplied Blend: nvm

Edited by matt77hias

##### Share on other sites
2 hours ago, turanszkij said:

The blending equation should be color  = ( srcA x srcCol ) + ( invSrcA x destCol )

I refactored all my blend equations.

Alpha blending:

I used LESS_EQUAL in all cases, never rendered any model twice and did not use alpha testing.

1. Render opaque non-light interacting models (no blending)
2. Render opaque light interacting models (no blending)
3. Render transparent light non-interacting models (alpha blending)
4. Render transparent light interacting models (alpha blending)

My transparent models are those which use the alpha channel in their base color texture (i.e. A instead of X in the DXGI_FORMAT).

Thanks for spelling out the blend equation.

Edited by matt77hias

##### Share on other sites

Based on what I have learned so far, I have a new summary:

Forward MSAA/Alpha-to-Coverage:

1. LBuffer
2. Lighting
3. Tone Mapping + MS-to-non-MS Resolve + Inverse Tone Mapping (Compute Shader)
4. Post Processing
5. Tone Mapping + Gamma Correction
6. Sprites

Forward non-MSAA/Alpha Blending:

1. LBuffer
2. Lighting
3. Tone Mapping + AA + Inverse Tone Mapping (Compute Shader)
4. Post Processing
5. Tone Mapping + Gamma Correction
6. Sprites

Deferred:

1. LBuffer
2. GBuffer
3. Lighting
4. Tone Mapping + AA + Inverse Tone Mapping (Compute Shader)
5. Post Processing
6. Tone Mapping + Gamma Correction
7. Sprites

RTV back buffer (swap chain):

1. Tone Mapping + Gamma Correction
2. Sprites

Cost for targets:

• LDR GBuffer
• HDR image/depth buffer (multi-sampled for forward MSAA, single-sampled otherwise)
• HDR UAV buffer
• back/depth buffer
Edited by matt77hias

##### Share on other sites
17 hours ago, turanszkij said:

You are right in that anti aliasing needs the tone mapped result, but the AA is probably also before the post processing, which will work best in HDR space instead. So usually there is a tonemap in the anti aliasing shader before the resolve, and an inverse tonemap right after it, so that the post procesing is getting the anti aliased image in HDR space. Then of course you need to do the tonemapping again at the end.

Is the only reason that AA is done before post-processing that post-processing is faster with fewer samples? Also, if you do PPAA, do you do it after the other PP steps?

##### Share on other sites
13 minutes ago, Infinisearch said:

Is the only reason that AA is before post processing that post processing is faster with less samples?  Also if you do PPAA then do you do it after other PP steps?

No, having an already anti-aliased image for the post processes can also remove flickering artifacts in some cases. Look at temporal AA for example which can eliminate specular aliasing (when speculars pop in and out on high intensity surfaces). With flickering speculars, the bloom post process can also flicker and pop-in very much. Having temporally stable speculars solves this issue beautifully. It is also true for depth of field bokeh for instance.

Edited by turanszkij

##### Share on other sites
1 hour ago, matt77hias said:

I used LESS_EQUAL in all cases, never rendered any model twice and did not use alpha testing.

1. Render opaque non-light interacting models (no blending)
2. Render opaque light interacting models (no blending)
3. Render transparent light non-interacting models (alpha blending)
4. Render transparent light interacting models (alpha blending)

My transparent models are those which use (A instead of X in DXGI_FORMAT) the alpha channel in their base color texture.

A problem case could be an object of which some triangles blend with other triangles of that same object.

But how common is that? A double-sided object with holes at the outside?

##### Share on other sites
On 10/13/2017 at 12:13 PM, turanszkij said:

Sprites (GUI?) should probably be after gamma correction assuming they are not lighted. The image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, then just render them after gamma correction. If the sprites have lighting, then they should be rendered before tonemapping.

I noticed that you use a variable gamma value in your engine. Does this value have an impact on the sprites?
