matt77hias

Tone Mapping


Based on what I have learned so far, here is a new summary :)

Forward MSAA/Alpha-to-Coverage:

  1. LBuffer
  2. Lighting
  3. Tone Mapping + MSAA Resolve (MS-to-no-MS) + Inverse Tone Mapping (Compute Shader; sketched at the end of this post)
  4. Post-processing (Compute Shader)
  5. Tone Mapping + Gamma Correction
  6. Sprites

Forward non-MSAA/Alpha Blending:

  1. LBuffer
  2. Lighting
  3. Tone Mapping + AA + Inverse Tone Mapping (Compute Shader)
  4. Post-processing (Compute Shader)
  5. Tone Mapping + Gamma Correction
  6. Sprites

Deferred:

  1. LBuffer
  2. GBuffer
  3. Lighting (Compute Shader)
  4. Tone Mapping + AA + Inverse Tone Mapping (Compute Shader)
  5. Post-processing (Compute Shader)
  6. Tone Mapping + Gamma Correction
  7. Sprites

 

RTV back buffer (swap chain):

  1. Tone Mapping + Gamma Correction
  2. Sprites

Cost for targets:

  • LDR GBuffer
  • HDR image/depth buffer (which is multi-sampled for forward MSAA, single-sampled otherwise)
  • HDR UAV buffer
  • back/depth buffer
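
To make step 3 of the MSAA path concrete, a minimal sketch of the tone map + resolve + inverse tone map pass could look like this (the resource names, the 4x sample count and the Reinhard operator are just placeholders, not code from my engine):

Texture2DMS< float4, 4 > g_hdr_ms : register(t0); // multi-sampled HDR input
RWTexture2D< float4 >    g_hdr    : register(u0); // single-sampled HDR output (UAV)

float3 ToneMap(float3 hdr)        { return hdr / (1.0f + hdr); }            // Reinhard
float3 InverseToneMap(float3 ldr) { return ldr / max(1.0f - ldr, 0.0001f); }

[numthreads(8, 8, 1)]
void CS(uint3 id : SV_DispatchThreadID) {
    float3 sum = 0.0f;
    [unroll]
    for (uint s = 0u; s < 4u; ++s) {
        // Tone map each sample before averaging to avoid bright-edge artifacts.
        sum += ToneMap(g_hdr_ms.Load(int2(id.xy), s).rgb);
    }
    // Average in tone-mapped space, then go back to HDR for the post-processing.
    g_hdr[id.xy] = float4(InverseToneMap(0.25f * sum), 1.0f);
}

For the non-MSAA paths, the same ToneMap/InverseToneMap pair would wrap the AA resolve (e.g. temporal AA) instead of the multi-sample average.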

17 hours ago, turanszkij said:

You are right that anti-aliasing needs the tone-mapped result, but the AA probably also comes before the post-processing, which works best in HDR space. So usually there is a tone map in the anti-aliasing shader before the resolve, and an inverse tone map right after it, so that the post-processing gets the anti-aliased image in HDR space. Then, of course, you need to do the tone mapping again at the end.

Is the only reason that AA comes before post-processing that post-processing is faster with fewer samples? Also, if you do PPAA, do you do it after the other PP steps?

13 minutes ago, Infinisearch said:

Is the only reason that AA comes before post-processing that post-processing is faster with fewer samples? Also, if you do PPAA, do you do it after the other PP steps?

No, having an already anti-aliased image for the post-processes can also remove flickering artifacts in some cases. Look at temporal AA, for example, which can eliminate specular aliasing (speculars popping in and out on high-intensity surfaces). With flickering speculars, the bloom post-process can also flicker and pop in very noticeably. Having temporally stable speculars solves this issue beautifully. The same is true for depth-of-field bokeh, for instance.


1 hour ago, matt77hias said:

I used LESS_EQUAL in all cases, never rendered any model twice and did not use alpha testing.

  1. Render opaque non-light-interacting models (no blending)
  2. Render opaque light-interacting models (no blending)
  3. Render transparent non-light-interacting models (alpha blending)
  4. Render transparent light-interacting models (alpha blending)

My transparent models are those that use the alpha channel of their base color texture (an A instead of an X in the DXGI_FORMAT).

A problem case could be an object in which some triangles blend with other triangles of that same object.

But how common is that? A double-sided object with holes on the outside?

On 10/13/2017 at 12:13 PM, turanszkij said:

Sprites (GUI?) should probably come after gamma correction, assuming they are not lit. Image authoring tools already work in gamma space by default, so if you do no lighting on the sprites, just render them after gamma correction. If the sprites have lighting, they should be rendered before tone mapping.

I noticed that you use a variable gamma value in your engine. Does this value have an impact on the sprites?

1 hour ago, matt77hias said:

I noticed that you use a variable gamma value in your engine. Does this value have an impact on the sprites?

No, it doesn't and I think it shouldn't. The gamma slider modifies the gamma correction of lighting. It should be set to the gamma of the monitor of the artist who created the textures used in the lighting pipeline, so that lighting is performed in linear space. The sprites are not lit, so performing the same operation on them would just be a no-op. If this slider were for adjusting your monitor's output gamma, then everything would need to be converted, but not with the same operator; something like this:

float3 linearColor    = pow(baseColor,   authoringGamma);       // authoring gamma space -> linear space
float3 correctedColor = pow(linearColor, 1.0f / monitorGamma);  // linear space -> monitor gamma space

I am not sure that something like this would be beneficial; I think the result would just lose information, because the original image was not created with enough precision for the new gamma space. Remember, gamma encoding remaps colors so that the 0-255 range is better distributed for perception, so dark colors can be represented with more detail.

But someone correct me if I'm wrong, please :)
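
To make the first point concrete: the final tone mapping + gamma correction pass boils down to something like this (just a sketch; the Reinhard operator and the resource/constant names are made up), and the unlit sprites are then drawn to the back buffer afterwards without any correction:

Texture2D< float4 > g_image : register(t0); // post-processed HDR image

cbuffer GammaBuffer : register(b0) {
    float g_inv_gamma; // 1 / (gamma slider value), e.g. 1 / 2.2
};

float4 PS(float4 pos : SV_Position) : SV_Target {
    const float3 hdr = g_image[uint2(pos.xy)].rgb;
    const float3 ldr = hdr / (1.0f + hdr);       // tone map to [0, 1] (Reinhard here)
    return float4(pow(ldr, g_inv_gamma), 1.0f);  // gamma correct for the display
}
// The (unlit) sprites are rendered to the back buffer after this pass,
// without any tone mapping or gamma correction.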


1 minute ago, turanszkij said:

No, it doesn't and I think it shouldn't. The gamma slider modifies the gamma correction of lighting.

Does this also imply that you only use sRGB texture formats for sprites, since lighting-related data cannot benefit from hardware sRGB due to the variable gamma?

On 15/10/2017 at 10:49 AM, matt77hias said:

Does this also imply that you only use sRGB texture formats for sprites, since lighting-related data cannot benefit from hardware sRGB due to the variable gamma?

Right now I don't use sRGB formats for anything. Why would I, if the textures are already in sRGB space?

59 minutes ago, turanszkij said:

Right now I don't use sRGB formats for anything. Why would I, if the textures are already in sRGB space?

For colour assets authored by your artists (which are stored in sRGB space), it gives you free sRGB-to-linear decoding when you sample from them, so that you don't need to do your own gamma decoding with manual shader code.

34 minutes ago, Hodgman said:

For colour assets authored by your artists (which are stored in sRGB space), it gives you free sRGB-to-linear decoding when you sample from them, so that you don't need to do your own gamma decoding with manual shader code.

Yeah, I've heard about that, but I wanted to avoid it because it seems like a bit of magic. For example, how does it know which gamma space the image was authored in (probably 2.2, but anyway)? I'd also need to rewrite asset loading for that, but only for the assets that need conversion. Converting explicitly in the shaders is just so much more self-documenting, and what are a few pow() operations anyway?
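
For reference, a minimal sketch of the explicit route could look like this (the 2.2 approximation and the resource names are only placeholders, not code from either engine); with a DXGI_FORMAT_*_UNORM_SRGB shader resource view, the hardware performs the same decode during the Sample for free and the pow() disappears:

Texture2D< float4 > g_base_color : register(t0); // base color texture, authored in gamma space
SamplerState        g_sampler    : register(s0);

float3 SampleBaseColorLinear(float2 uv) {
    const float4 texel = g_base_color.Sample(g_sampler, uv);
    // Explicit decode: gamma (sRGB-ish) space -> linear space.
    // With an *_UNORM_SRGB view, the hardware does this automatically
    // and the pow() can be dropped.
    return pow(texel.rgb, 2.2f);
}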
