Question about tone mapping / HDR


I've been trying to implement HDR but have been having some trouble figuring out the correct approach. I have bloom working, but not tone mapping. I am using DirectX 11.

I have read several sources but they each say different things. I currently have a shader that takes the texture from the first pass and renders to the luminance texture, then finds the average luminance by generating all of the mip maps. The final pass applies the luminance scaling (along with the bloom).

I am confused why other articles and samples compute the average luminance by downsampling the texture multiple times. Why is it not enough to just call deviceContext::GenerateMips()? I saw another article suggesting using the compute shader to do this.

I am also confused about what I should be storing as the luminance. This article talks about finding the average log of the pixel color using some color weights http://msdn.microsoft.com/en-us/library/windows/desktop/bb173486(v=vs.85).aspx, but this post says nothing about logarithmic operations when finding the average scene luminance http://www.gamedev.net/topic/532960-hdr-lighting--bloom-effect/, yet it does use the exponential operator when computing the final color.

This paper, linked from the Microsoft article, goes into more detail on the equations, but doesn't talk about how they relate to the shaders at all: http://www.cs.utah.edu/~reinhard/cdrom/tonemap.pdf

I tried looking at the samples in the deprecated DirectX SDK (the compute shader HDR sample). That version just averages the color of the pixels directly, weighting them by (.299, .587, .114, 0) to get the average luminance, then scales the color of the final pixel linearly. Running the sample looks decent, but when I run it in my program, because blue is weighted so low, looking at the sky makes everything else extremely bright. Weighting each RGB value equally just makes the final image darker in most cases and doesn't produce any interesting effect.
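(For reference, in HLSL that weighting amounts to a single dot product -- hdrColor here is just my placeholder for the scene color:)

float luminance = dot(hdrColor.rgb, float3(0.299, 0.587, 0.114));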

Which formula should I be using to get the average luminance, and which formula should I be using to convert the unmapped pixel color to the mapped pixel color given the average luminance?


"I am confused why other articles and samples compute the average luminance by downsampling the texture multiple times. Why is it not enough to just call deviceContext::GenerateMips()? I saw another article suggesting using the compute shader to do this." - I suspect this amounts to the same thing, but you don't know for sure how GenerateMips is implemented, it's probably going to use the GPU and downsample the texture multiple times in exactly the same way, but if it happens to do the work on the CPU then that could be a massive performance cost. I suspect you're fine to use GenerateMips if you find it's working for you (although perhaps you should google to check that it behaves how you expect on most devices), but I think it makes sense that articles and papers are more explicit about the required process.

"I am also confused what I should be storing as the luminance. This article talks about finding the average log of the pixel color and some color weights http://msdn.microsoft.com/en-us/library/windows/desktop/bb173486(v=vs.85).aspx, but this post says nothing of logarithmic operations when finding the average scene luminance http://www.gamedev.net/topic/532960-hdr-lighting--bloom-effect/, but does use the exponential operator when computing the final color." - I'm not sure, but I think this probably depends on how you're encoding the HDR. Older implementations would probably do clever things to do HDR with 8888 render targets by encoding some logarithmic scaling factor in the alpha channel. More modern implementations probably just use float or half-float render targets instead. What you need to do depends on your implementation. Most likely you're using half-float render targets and you just need a simple average.

"I tried looking at the samples in the deprecated directx SDK (the compute shader HDR sample), and that version just averages the color of the pixels directly and weights them by (.299, .587, .114, 0) to get the average luminance, then scales the color of the final pixel linearly. Running the sample looks decent, but when I run it in my program, because blue is weighted so low, looking at the sky makes everything else extremely bright. Weighting each RGB value equally just makes the final image darker in most cases and doesn't produce any interesting effect." - I think it's correct that blue should have a lower weight when converting RGB to luminance. Perhaps your sky just isn't correctly calibrated against the rest of your scene. Probably most of your scene is dynamically lit with your HDR system and your sky is just a pre-baked 8888 skybox texture, I think that you'd need to modify your sky so that it's a lot brighter. You could do that by modifying the shader you use when rendering to force up the values, or by using a sky texture that can contain HDR values (half-float or something) and get your artist to tweak it.

My sky is just a gradient with pure white (1, 1, 1) at the horizon, and pure blue (0, 0, 1) at the top. When a large portion of the blue part is in view, the rest of the scene becomes too bright. Should I be making the sky brighter than one?

Also, I still don't understand where the formula for exposure from the other post comes into play in relation to all of the other descriptions of HDR.

This one: float4 exposed = 1.0 - pow( 2.71, -( vignette * unexposed * exposure ) );
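I'm guessing the 2.71 there is an approximation of e, so I assume it could equally be written with the exp() intrinsic:

float4 exposed = 1.0 - exp(-(vignette * unexposed * exposure));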

Hiya,

You should be using HDR values for your scene; if your colours are all in the range [0...1] then you don't need HDR tonemapping.

So, as an example, if your brightest pixel is 10,10,10 (instead of 1,1,1) then your blue would be (0,0,10) or higher.
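As a rough sketch of what that might look like for the gradient sky you described (skyIntensity is just an assumed tuning constant to adjust by eye, and viewDir is whatever direction vector you already use for the gradient):

// Hypothetical HDR sky: the same gradient, scaled above 1.0.
static const float skyIntensity = 8.0; // assumed value, tune to taste
float3 sky = lerp(float3(1.0, 1.0, 1.0),   // white at the horizon
                  float3(0.0, 0.0, 1.0),   // blue at the top
                  saturate(viewDir.y)) * skyIntensity;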

n!

The shaders aren't really important; those are just implementation details. You should focus on understanding the Reinhard paper, because it describes the actual techniques used by the SDK samples. In particular, there are two steps from the paper that are implemented in most samples that you see:

1. Image calibration, where you adjust the intensity of each pixel based on the overall intensity of the entire image. This step works like auto-exposure on a camera: if the image is very bright, the pixels are darkened, and vice versa. The purpose of this step is to bring the image from an arbitrary dynamic range down to a more workable range.

2. Tone mapping, where you apply a curve to the calibrated pixels. Reinhard suggests two different curves in his paper, and you might see either being used in samples. The first one is simpler, with the form x / (1 + x); you can start with that one, since it's easy. The primary purpose of this step is to gracefully handle pixels whose intensity is still > 1 after calibration. If you didn't use a curve here, those pixels would be clamped to 1, which can produce undesirable effects.
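Put into shader form, a minimal sketch of those two steps might look like this (avgLuminance and keyValue are assumed inputs, not names from the SDK samples):

// Sketch of Reinhard's two steps; parameter names are assumptions.
float3 ToneMap(float3 hdrColor, float avgLuminance, float keyValue)
{
    // Step 1: calibration (auto-exposure).
    float3 calibrated = hdrColor * (keyValue / avgLuminance);

    // Step 2: the simple Reinhard curve x / (1 + x).
    return calibrated / (1.0 + calibrated);
}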

The part you seem to be confused about is the first step. Reinhard's technique essentially boils down to computing the average intensity (luminance) of the entire image, then multiplying each pixel by (KeyValue / AvgIntensity), where 'KeyValue' is a value chosen based on whether you have low-key or high-key lighting conditions.

As for the average intensity, the simplest approach is to compute the luminance of each pixel and then average over all pixels. As you've already noticed there are several ways to do this, and perhaps the simplest is to repeatedly downscale until you end up with a single 1x1 image containing the average. This can be achieved with GenerateMips, as you've already pointed out, or by manually downscaling with multiple passes. Back in the DX9 era generating mips wasn't supported for all render target formats on all hardware, and neither was bilinear filtering, so it was common to filter manually in a pixel shader, which is what you've probably seen in DX9 SDK samples. If you're targeting DX10-capable or higher hardware, you can use GenerateMips to make your life a bit easier.

On DX11 hardware it's also possible to use a compute shader to compute the average with a parallel reduction. This can potentially be faster than repeated downscaling since it can consume less bandwidth, but in practice it can be challenging to make it faster across all of the various hardware iterations available. I would definitely suggest that you just stick to downscaling, since the performance loss is minimal (fractions of a millisecond at most).
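For completeness, the compute shader route is a parallel reduction through shared memory, something like this sketch: one pass averages each 16x16 tile, and you dispatch it repeatedly on its own output until you're down to 1x1 (resource names are assumptions):

Texture2D<float>   LuminanceIn  : register(t0);
RWTexture2D<float> LuminanceOut : register(u0);

groupshared float gs_Lum[256];

[numthreads(16, 16, 1)]
void ReduceCS(uint3 groupId    : SV_GroupID,
              uint3 dispatchId : SV_DispatchThreadID,
              uint  groupIndex : SV_GroupIndex)
{
    // Each thread loads one pixel of the source luminance texture.
    gs_Lum[groupIndex] = LuminanceIn[dispatchId.xy];
    GroupMemoryBarrierWithGroupSync();

    // Standard parallel reduction: halve the active threads each step.
    [unroll]
    for (uint s = 256 / 2; s > 0; s >>= 1)
    {
        if (groupIndex < s)
            gs_Lum[groupIndex] += gs_Lum[groupIndex + s];
        GroupMemoryBarrierWithGroupSync();
    }

    // Thread 0 writes this tile's average. Note that edge tiles need
    // clamping if the size isn't a multiple of 16 -- out-of-range reads
    // return 0 and would skew the average.
    if (groupIndex == 0)
        LuminanceOut[groupId.xy] = gs_Lum[0] / 256.0;
}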

Now, the downside of using a simple average (mean) of luminance is that it's susceptible to outliers in the image. For instance, if you have a few bright pixels due to specular reflections or particles, those will pull the average luminance up and can result in over-darkening of the final image. To avoid this, Reinhard suggested using a geometric mean instead: you take the log of all values, average those, and then take the exponential of the result. Implementing this in DX11 is fairly simple:

1. Perform a full-screen pass that computes the luminance of each pixel, and then returns log(luminance) as the pixel shader output

2. Call GenerateMips on the result texture

3. During tone mapping, sample the lowest 1x1 mip level of the result and use exp() to get the average luminance value.
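In shader terms, the two ends of that might look something like the following (texture and sampler names are assumptions; the epsilon guards against log(0) on black pixels):

Texture2D        SceneTexture     : register(t0);
Texture2D<float> LuminanceTexture : register(t1);
SamplerState     PointSampler     : register(s0);

// Step 1: full-screen pass writing log(luminance) to the luminance target.
float LogLuminancePS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdrColor = SceneTexture.SampleLevel(PointSampler, uv, 0.0).rgb;
    float lum = dot(hdrColor, float3(0.299, 0.587, 0.114));
    return log(max(lum, 0.0001));
}

// Step 3: during tone mapping, read the 1x1 mip and undo the log.
// SampleLevel clamps the LOD, so an over-large level index is safe.
float AverageLuminance()
{
    float logAvg = LuminanceTexture.SampleLevel(PointSampler, float2(0.5, 0.5), 20.0).r;
    return exp(logAvg);
}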

Thanks for the response. Your clarification on the formulas helps.

In addition to that, I think I'm actually having an issue generating (or sampling) the mip maps correctly.

Originally I was doing this in the final shader:

float4 luminance = luminanceTexture.SampleLevel(g_samPoint, float2(.5f, .5f), 10); //luminanceTexture should have the average luminance at the lowest mip level

But sampling at the 10th level returns zero in most cases -- it only seems to average the luminance values near the center of the texture. I tried sampling at other levels in between, and it seems to be interpolating or something. Rendering the luminance texture to the screen with the pixel's texture coordinates looks like this for different mip levels:

The brightness of the texture changes based on the position of the sphere on screen. When I sample at the lowest level, the screen is correctly all one color, but it changes when I move the circle around.

[Screenshots: the luminance texture rendered at several different mip levels.]

I figured out that the mip mapping is not correctly averaging the pixels. Looking at the mip levels in Visual Studio shows this:

The color of the final pixel changes based on the position of the objects in the screen. The closer the objects are to the center, the brighter the final pixel. In this example, the final pixel ends up being zero and I cannot figure out why.

Level 0: [screenshot]

Level 3: [screenshot]

Level 5: [screenshot]

Level 9: [screenshot]

And finally, level 10 is just a single black pixel. I checked it with the color tool and it's (0, 0, 0). [screenshot]

"The brightness of the texture changes based on the position of the sphere on screen. When I sample at the lowest level, the screen is correctly all one color, but it changes when I move the circle around."

This shouldn't happen if the mip-maps are being generated correctly...
Another issue could be the texture format used -- are you using FP16, or some other "more than 8 bits per channel" format?

[edit]

"Level 9: [screenshot] And finally, level 10 is just a single black pixel. I checked it with the color tool and it's 0,0,0"

Ok, that shows it jumping from a 3px wide texture to a 1px wide texture. Depending on how they're generating the mip-maps, they might be failing to take the right-most pixel into account (just averaging the two left-most pixels). That's very bad :/

You can work around this by calculating the mip levels yourself instead of using the automatic method. The performance cost will be the same (assuming you do it the same way as the automatic method), but your code will obviously be more complex, and you'll have full control over the calculations.
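A manual downsample pass can be as simple as this sketch -- render it into a target half the size of the source and repeat until you're at 1x1 (resource names are assumptions):

Texture2D<float> SourceLevel : register(t0);
SamplerState     LinearClamp : register(s0);

// A single bilinear tap at the destination pixel centre averages the
// 2x2 source footprint when the target is exactly half the source size.
// For odd source dimensions, add extra taps so no pixels are missed --
// which is exactly the failure you're seeing with the 3px-wide level.
float DownsamplePS(float2 uv : TEXCOORD0) : SV_Target
{
    return SourceLevel.SampleLevel(LinearClamp, uv, 0.0).r;
}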

Does your texture contain just luminance, or log(luminance)?

Right now it just contains luminance.

