Some HDR, tonemapping and bloom questions

3 comments, last by knarkowicz 7 years, 7 months ago
Got some questions regarding terminology and such.
  1. HDR simply means using a larger-than-backbuffer texture format, such as R16G16B16A16 while doing lighting (like adding up light contributions, which could be greater than the [0, 1] range) and later in the pipeline, tonemapping/bloom post-processing effects?
  2. Tonemapping is about calculating the average luminance (essentially light intensity?) and to smooth it out over the final image?
  3. If so, where does Bloom fit into all this? Is tonemapping and bloom the same thing? Does it make sense to use only either or only both together? (Tonemap vs Bloom vs Tonemap+Bloom)
  4. When calculating the average luminance, is it wise to do it in a compute shader or just mipmap the whole image and read the 1x1 level?
  5. Does tonemapping and bloom require special 3d assets to function properly? I am using assimp for example and am stuck with its restrictions when it comes to asset parsing. Best thing would be if it could just adapt to whatever scene is rendered.

Cheers


HDR simply means using a larger-than-backbuffer texture format, such as R16G16B16A16 while doing lighting (like adding up light contributions, which could be greater than the [0, 1] range) and later in the pipeline, tonemapping/bloom post-processing effects?

Tonemapping is about calculating the average luminance (essentially light intensity?) and to smooth it out over the final image?

Actually, as I've been told, HDR just means the first step of storing/calculating lighting in a larger range. Tonemapping is just a process to get this information into a format that can be displayed on regular displays. If we ever had a full HDR monitor, no tonemapping would be needed anymore.

If so, where does Bloom fit into all this? Is tonemapping and bloom the same thing? Does it make sense to use only either or only both together? (Tonemap vs Bloom vs Tonemap+Bloom)

Bloom is just a light-bleeding effect that is naturally caused by the imperfections of a lens. In CG, we use a dedicated bloom pass to simulate this behaviour. So you can have tonemapping without bloom, but bloom works much better with HDR (now instead of having a bloom threshold of 0.9, you can have a value of 9000.0, which will only bloom really bright light sources).
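To make the threshold idea concrete, here is a minimal per-pixel sketch of a bright-pass filter. The pixel values and thresholds are made up for illustration; a real renderer would do this in a shader over the whole HDR buffer.

```python
# Sketch: threshold-based bright-pass for bloom, per pixel (hypothetical values).
# With LDR data the threshold must sit inside [0, 1]; with HDR data it can be
# set far above 1.0 so only genuinely bright sources bleed.

def bright_pass(color, threshold):
    """Keep only the energy above the threshold; the rest contributes no bloom."""
    return tuple(max(c - threshold, 0.0) for c in color)

ldr_wall = (0.95, 0.95, 0.95)         # a bright white wall, LDR range
hdr_sun  = (9500.0, 9000.0, 8000.0)   # a sun disc, HDR range

print(bright_pass(ldr_wall, 0.9))     # the wall leaks into bloom with an LDR threshold
print(bright_pass(hdr_sun, 9000.0))   # only the truly bright source survives
```

The bright-pass output would then be blurred and added back on top of the scene.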

When calculating the average luminance, is it wise to do it in a compute shader or just mipmap the whole image and read the 1x1 level?

Regarding the different methods of calculating the average luminance, I would personally do it manually, either via a compute shader or via repeated fullscreen downsample passes. I have found that the automatic methods fail to produce accurate results; this might depend on the hardware, yet it happened on all of my PCs. The problem I faced with auto-downsampling was that it wouldn't correctly average the neighbouring pixels when downsampling; it seemed to be using a point filter. The effect was that a single dark area in a specific portion of the screen would cause the average luminance to drop drastically. I haven't checked this in a while, but back then, using manual downsampling actually fixed it.
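A manual downsample chain like the one described can be sketched as repeated 2x2 box-filter passes until a single value remains. This is a CPU-side illustration with a hypothetical 4x4 luminance buffer; on the GPU each pass would be a fullscreen draw into a half-resolution target.

```python
# Sketch: repeated 2x2 box-filter downsampling of a luminance buffer,
# the manual alternative to automatic mipmap generation described above.
# Assumes power-of-two, square dimensions for simplicity.

def downsample(img):
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) * 0.25
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

lum = [[1.0, 1.0, 0.0, 1.0],
       [1.0, 1.0, 0.0, 1.0],
       [1.0, 1.0, 1.0, 1.0],
       [1.0, 1.0, 1.0, 1.0]]

while len(lum) > 1 or len(lum[0]) > 1:
    lum = downsample(lum)

avg = lum[0][0]   # the true average (0.875 here); a point filter that happened
                  # to land on the dark column could have returned 0.0 instead
```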

Does tonemapping and bloom require special 3d assets to function properly? I am using assimp for example and am stuck with its restrictions when it comes to asset parsing. Best thing would be if it could just adapt to whatever scene is rendered.

No, not at all. Models don't need to change based on the rendering model anyway, and textures can also stay the same (since e.g. the colour texture of a model actually represents the amount of light being absorbed, which stays the same with or without HDR).

1) pretty much. Having a gamma-correct pipeline is also a prerequisite -- the HDR data is linear whereas the post-tonemap data is sRGB.

2) You don't need to use average luminance when tonemapping. Tonemapping is going from HDR to "normal" images. In a digital camera, it goes from the real world of unbounded light intensities to 8-bit sRGB JPEGs, using exposure (basically a simple multiplier), a gamma curve, and maybe a tonemapping curve if you're lucky.
If you use nothing but a multiplier (followed by the linear->sRGB gamma curve), we'd say that you're using a linear tonemapping function. The better tonemapping functions (Reinhard, Hable, Hejl, etc.) are basically another curve that gets applied after exposure and before gamma.
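The exposure-then-curve-then-gamma chain can be sketched in a few lines. This uses the classic Reinhard curve x/(1+x) and a plain 2.2 gamma as a stand-in for the full sRGB encoding; the exposure value is made up.

```python
# Sketch: exposure, then a Reinhard-style tonemap curve, then gamma.
# "Linear tonemapping" would be the exposure multiply alone; curves like
# Reinhard squash values above 1.0 smoothly instead of clipping them.

def reinhard(x):
    return x / (1.0 + x)          # maps [0, inf) into [0, 1)

def tonemap(hdr, exposure=0.5):   # exposure is a hypothetical, scene-tuned value
    exposed = hdr * exposure
    return reinhard(exposed) ** (1.0 / 2.2)   # simple gamma stand-in for sRGB
```

Hable/Hejl-style filmic curves slot into the same place as `reinhard` here, just with a different shape (notably a "toe" that crushes darks).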

If your camera is set to 'auto' mode, then yeah, it will sample the average scene luminance and use this to pick an exposure value.
However, if you set the camera to 'manual', you can arbitrarily pick an exposure value.
Some games give artists complete camera-like control, which means in some levels they might use manual exposure :)

3) Bloom represents the fact that the lens isn't perfect and will blur the image slightly, similar to lens flare (they'd be computed at the same time).
This means just running a subtle blur filter over your HDR data before tonemapping. It should have a very high peak, but wide tail, so that it's only noticeable in very bright areas.
Before HDR, we used to do bloom by either having artists manually specify which surfaces were "bright", or by using a threshold - e.g. anything over 0.9 brightness will blur.

4) Profile and test the performance differences :)
It's a "parallel reduction" algorithm that maps well to both pixel and compute.
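The reduction itself is simple; here is a serial sketch of the log2(N) pairwise averaging that a compute shader would perform across threadgroups (shown in 1D for clarity, assuming a power-of-two count):

```python
# Sketch: pairwise "parallel reduction" to an average, done serially here.
# A compute shader does the same thing with one thread per pair per step,
# halving the working set each iteration.

def average_reduce(values):
    while len(values) > 1:
        values = [(values[i] + values[i + 1]) * 0.5
                  for i in range(0, len(values), 2)]
    return values[0]

print(average_reduce([1.0, 2.0, 3.0, 4.0]))   # 2.5
```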

5) As above, sometimes in the past we'd use "bloom mask" textures, but hopefully not any more :)
Great HDR these days goes hand in hand with a good shading model / BRDF, and good textures (e.g. PBR stuff).

Great answers!

Right now I use DXGI_FORMAT_R16G16B16A16_UNORM for my "light accumulation" or HDR-buffer. I guess then I should use DXGI_FORMAT_R16G16B16A16_FLOAT instead?

Right now I perform my gamma-correction by simply doing a fullscreen pass and rendering the DXGI_FORMAT_R16G16B16A16_UNORM texture into the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB backbuffer texel-by-texel. I guess this works fine since both are in the [0, 1] range. Do these tonemapping algorithms (like Reinhard) also bring the values > 1.0 into the [0, 1] range, or is it "just about averaging/smoothing the light intensity"? i.e. will I possibly end up with > 1.0 values after tonemapping? :)

EDIT: also, what happens when you write > 1.0 values into a UNORM texture? I assume they just get clamped to 1.0?

EDIT2: another question, I suppose FXAA is to be applied AFTER tonemapping? and bloom BEFORE tonemapping?

Right now I use DXGI_FORMAT_R16G16B16A16_UNORM for my "light accumulation" or HDR-buffer. I guess then I should use DXGI_FORMAT_R16G16B16A16_FLOAT instead?

You could use RGB16_UNORM by manually scaling shader outputs and inputs, but an RGB16_FLOAT render target is more convenient and more precise (you want a lot of precision for the low values and not too much for the very high ones).
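The precision argument can be demonstrated with Python's built-in half-precision support in `struct` (format `'e'`), which matches the 16-bit float layout used by R16G16B16A16_FLOAT:

```python
# Sketch: why 16-bit float suits HDR better than 16-bit UNORM.
# UNORM16 spends its 65536 steps uniformly across [0, 1]; half floats
# spend their bits where HDR needs them: fine steps for dark values,
# coarse steps for very bright ones.
import struct

def roundtrip_half(x):
    """Round a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

unorm16_step = 1.0 / 65535.0      # the same step size everywhere in [0, 1]

print(roundtrip_half(0.01))       # dark value, stored almost exactly
print(roundtrip_half(9000.0))     # bright value, coarse steps are fine here
```

Note that half floats top out around 65504, which is still plenty of headroom for accumulating light before tonemapping.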

Right now I perform my gamma-correction by simply doing a fullscreen pass and rendering the DXGI_FORMAT_R16G16B16A16_UNORM texture into the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB backbuffer texel-by-texel. I guess this works fine since both are in the [0, 1] range. Do these tonemapping algorithms (like Reinhard) also bring the values > 1.0 into the [0, 1] range, or is it "just about averaging/smoothing the light intensity"? i.e. will I possibly end up with > 1.0 values after tonemapping? :)

Tonemapping is about reducing the colour range for the final display, so yes, a common tonemapper (one designed for LDR displays) will bring the image into the [0, 1] range.

EDIT: also, what happens when you write > 1.0 values into a UNORM texture? I assume they just get clamped to 1.0?

Yes. Output values will be clamped to the [0, 1] range.

EDIT2: another question, I suppose FXAA is to be applied AFTER tonemapping? and bloom BEFORE tonemapping?

Yes. You want bloom before tonemapping (so it works with linear HDR values) and FXAA after tonemapping (so edge blending works "correctly").
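The resulting pass ordering can be sketched end to end. The individual passes below are trivial stand-ins (the "bloom" and "FXAA" bodies are placeholders); only the ordering is the point:

```python
# Sketch of the post-processing order described above: bloom on linear HDR,
# then tonemap to [0, 1], then FXAA on the display-space image.
# Each pass body is a made-up placeholder operating on a flat list of values.

add_bloom = lambda img: [min(p * 1.05, p + 0.5) for p in img]  # fake bloom boost
tone_map  = lambda img: [p / (1.0 + p) for p in img]           # Reinhard-style curve
fxaa      = lambda img: img                                     # AA placeholder

def render_post(hdr):
    return fxaa(tone_map(add_bloom(hdr)))

out = render_post([0.2, 50.0, 9000.0])   # everything lands in [0, 1)
```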

