General questions on HDR and tonemapping

7 comments, last by videl 11 years, 7 months ago
Hello,

I am currently implementing HDR rendering into our game. I got a basic version working, but I have a few questions on "how to do it right".

Exposure control and tonemapping are, even though you can do them in the same shader, two "completely" different steps in the pipeline, right? I mean the exposure control does not depend on which tonemapping operator I use?

I am wondering what's the best way to properly adapt the exposure. I've read that I should calculate the geometric mean of the luminance instead of the simple arithmetic mean. At the moment I do the exposure adjustment in a very basic way (found in the Ogre engine HDR sample app):
texColor *= MIDDLE_GREY / lum;
Here texColor is obviously the scene color, MIDDLE_GREY has a value of 0.72, and lum is the geometric mean of the luminances of all pixels.
Is this the standard way to adjust the exposure or is there a better algorithm to do this?
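To make the two steps concrete, here is a minimal Python sketch (not shader code) of the same idea: compute the geometric mean as a log-average of luminance, then scale the scene by MIDDLE_GREY / lum. The `delta` term and the helper names are my own additions for illustration, not from the Ogre sample.

```python
import math

def log_average_luminance(luminances, delta=1e-4):
    # Geometric mean via the log-average: exp(mean(log(delta + L))).
    # delta guards against log(0) on pure-black pixels.
    n = len(luminances)
    return math.exp(sum(math.log(delta + l) for l in luminances) / n)

def apply_exposure(color, lum_avg, middle_grey=0.72):
    # Scale the scene so the average luminance maps to middle grey,
    # mirroring the texColor *= MIDDLE_GREY / lum line above.
    return [c * middle_grey / lum_avg for c in color]

pixels = [0.05, 0.1, 0.2, 8.0]  # mostly dark scene with one bright pixel
lum = log_average_luminance(pixels)
print(apply_exposure([0.1, 0.1, 0.1], lum))
```

In a real renderer the log-average is of course computed on the GPU via downsampling, but the math is the same.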

I first had the same algorithm, just using an arithmetic mean. It seems that with the geometric mean the exposure adjustment has a much smaller effect: it pretty much only dials down very bright scenes and doesn't seem to lighten dark scenes very much. Am I doing something wrong, or is this just the normal effect of using a geometric mean?

Is there some good literature on this topic? I can't find many online resources on HDR, exposure and tonemapping.

I hope someone can help ;)

regards
videl
I'm not an HDR expert. When you're talking about the geometric mean, are you doing something like in this blog?

I would imagine you might be falling afoul of the logarithm, e.g. that article says to scale by 2^exposure.
The distinction between "exposure" and "tone mapping" is one I like to make, since it makes sense to me conceptually and also makes the process more analogous to photography. But you won't always see it described that way; in fact a lot of academic research on tone mapping of HDR imagery will use "tone mapping operator" to mean the entire process of transforming your HDR image into the image you display. In games there are typically quite a few steps in that process (both literally and conceptually), so I don't like to stick with that terminology.

Geometric mean is commonly used because it's less sensitive to outliers compared to arithmetic mean. So if you have small spots of really bright areas (for instance, emissive geometry representing small light sources) then your auto exposure won't darken too much in response. Keep in mind that if you're moving between drastically different lighting environments, for instance from a dark underground area to a bright sunny outdoor area, then you will probably also need to adjust your key value (called MIDDLE_GREY in your code) to get subjectively optimal exposure value. You can think of that value as sort of a "mood" slider.
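The outlier behavior described above is easy to demonstrate with a quick Python sketch (my own illustration, not from any engine): a scene that is mostly mid-grey with one small, very bright emissive spot.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean of logs; assumes all inputs are positive.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# 99 mid-grey pixels plus one very bright emissive pixel.
scene = [0.18] * 99 + [1000.0]
print(arithmetic_mean(scene))  # ~10.18 -- dominated by the outlier
print(geometric_mean(scene))   # ~0.196 -- barely moves off mid-grey
```

Auto-exposure driven by the arithmetic mean would crush this scene into darkness; the geometric mean leaves it essentially unchanged.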

For further reading, the book High Dynamic Range Imaging is definitely a good resource. As far as game-specific resources there's some presentations by Tri-ace, Naughty Dog, and Bungie. This course from SIGGRAPH 2010 also has a lot of good information. I also have a blog article about this, which someone already linked to. If you're feeling particularly ambitious you can also look into research and reading materials regarding auto-exposure and film processing/simulation in photography.
@jefferytitan Yes I got the information about the geometric mean from the blog post you mentioned.
To calculate the geometric mean, I simply calculate the log of luminance and write the results to a 1024×1024 texture. I then call GenerateMips to automatically generate the full mip-map chain. At that point I can apply exp() to the last mip level to get a full log-average of the scene.


That's exactly what I am doing right now (except I am downsampling with custom shaders instead of the mip-mapping function). I am not sure which base I should use for the log and exp; the default log and exp functions use e. There must also be an error somewhere in my code: with the geometric mean, the exposure control really doesn't help to brighten dark areas, but it works with the arithmetic mean...

@MJP
Ok, separating exposure control and tonemapping makes sense. I've also read the article on John Hable's blog; he also separates exposure correction and tonemapping, I think.

In your demo application (which is very useful ;) ) you use a slightly more advanced algorithm to control the exposure. You also use the geometric mean, right?
Controlling the mood is essential, yes. In your application that's the keyValue slider, I suppose.

I will take a look at that book; I'd already found two of the presentations and your blog article before ;)
You want to use a base e logarithm, so log() and exp() are what you should use in the shader.
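It's worth noting that the geometric mean itself doesn't depend on the base, as long as the log and exp use the same one. A quick Python check (my own illustration) comparing base e against base 2, which is what you'd get from a shader's log2/exp2:

```python
import math

luminances = [0.5, 2.0, 4.0]
n = len(luminances)

# Base e: exp(mean(ln L))
g_e = math.exp(sum(math.log(l) for l in luminances) / n)

# Base 2: 2^(mean(log2 L)), i.e. exp2/log2 as in HLSL
g_2 = 2.0 ** (sum(math.log2(l) for l in luminances) / n)

print(g_e, g_2)  # equal up to floating-point error
```

So using log()/exp() (base e) is fine, but mixing bases, e.g. writing log2(L) and later applying exp(), would silently scale your average.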

In that sample I did try out a more complex auto-exposure routine where you could use an average luminance that was localized for a portion of the screen. It was somewhat experimental, and I don't currently use it anywhere else.
Ok, then I will try log and exp.

I am currently implementing a bloom effect, let's see how far I can push the image as a whole. This will require me to completely separate the process of exposure correction and tonemapping. At the moment the pipeline is the following: forward render the image -> downsample to get luminance -> correct exposure and tonemap in one shader
I think I will have to adjust it to: forward render the image -> downsample to get luminance -> correct exposure -> create bloom -> add bloom to render result and tonemap.
After implementing bloom and some other stuff, I got back to my problem with the geometric mean. The problem is that the render textures obviously can't store negative values, so for any luminance below 1 the stored log is clamped to 0, and I just get exp(0) = 1 for all dark average luminances. The solution would be to map the log-luminance range to positive numbers only and then convert back to the real range in the exposure correction shader. I know this is done with normal maps, for example, but since the range of values in the texture and of the luminances isn't just [0..1] or [-1..1], it's not as easy as with normal maps.

My idea is to define a lower bound for the log of a pixel's luminance and add its magnitude to the log to get only positive numbers. So if the lower bound were -10, I would add 10 to the log of the luminance, and then subtract 10 again in the exposure shader.
Written in pseudocode it would be something like this (although the shader actually does this differently, using several downsample/blur passes):

sum = (log(P1) + 10) + (log(P2) + 10) + ... + (log(PN) + 10)
sum = sum / N
->store to luminanceTex

Exposure Shader:
lum = exp(luminanceTex - 10)
...

I hope that makes it clear what I have in mind.

Edit: this approach seems to work; any better ideas, though?
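For what it's worth, the offset trick is mathematically exact: since the constant is added to every term before averaging and subtracted once after, it cancels out. A small Python sketch (my own, with a hypothetical OFFSET of 10) verifying the equivalence:

```python
import math

OFFSET = 10.0  # assumed lower bound magnitude for log(luminance)

def log_average(lums):
    # Direct geometric mean, as if the texture could store negatives.
    return math.exp(sum(math.log(l) for l in lums) / len(lums))

def log_average_offset(lums):
    # Store log(L) + OFFSET so the render target never goes negative,
    # then undo the shift inside the exposure shader.
    stored = sum(math.log(l) + OFFSET for l in lums) / len(lums)
    return math.exp(stored - OFFSET)

lums = [0.01, 0.1, 0.5]  # all darker than 1, so all logs are negative
print(log_average(lums), log_average_offset(lums))  # equal
```

The only caveat is that OFFSET must be large enough that log(L) + OFFSET never goes negative for your darkest representable luminance.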
I usually just use a floating point texture format like R16_FLOAT or R32_FLOAT for storing log(luminance), and those formats have a sign bit. Which format are you currently using?
Haha, I'd been told by other programmers on our project that textures can't store negative numbers. Using R16_FLOAT works perfectly with negative values, thank you for the hint ;)

This topic is closed to new replies.
