HDR - "ATI" tone mapping operator


Hi, most HDR samples use a simplified version of Reinhard's global tone mapping operator. However, ATI in their "Rendering with Natural Light" demo (which is a real-time version of Debevec's film) use a totally different approach, which looks like this:
float4 hlsl_tone_map(float2 tc : TEXCOORD0) : COLOR
{
  float fExposureLevel = 32.0f;
  float4 original = tex2D(originalImageSampler, tc);
  float4 blur = tex2D(blurImageSampler, tc);
  float4 color = lerp(original, blur, 0.4f);

  tc -= 0.5f; // Put coords in -1/2 to 1/2 range
  // One minus the squared distance from the origin (center of screen)
  float vignette = 1 - dot(tc, tc);

  // Multiply by vignette to the fourth
  color = color * vignette * vignette * vignette * vignette;
  color *= fExposureLevel; // Apply simple exposure level
  return pow(color, 0.55f); // Apply gamma and return
}




Does anyone know how it works? Is vignette used to make the image darker at the borders of the screen? Assuming that the pixel is in the middle of the screen, the vignette would be 1, so the color won't change after multiplying by vignette^4. If we increase the color even more by multiplying it by the exposure level, the value can be much bigger than 1.0, so where is the whole tone mapping in it? The result is actually quite good, and it doesn't require the average luminance of the scene (although calculating the average luminance allows you to introduce luminance adaptation).

[EDIT] While looking for something completely different, I noticed the tonemap.fx file in NVIDIA FX Composer. It looks very similar to the ATI shader, but there's no vignette at all. All it does is multiply the color by the exposure and apply gamma. Isn't tone mapping supposed to map values into the 0-1 range?

[Edited by - g0nzo on September 15, 2006 6:36:59 AM]

Quote:
Original post by g0nzo
Is vignette used to make the image darker at the borders of the screen?

Yes. Vignetting is an effect known from photography; it is independent of the tone mapping.

Quote:

Assuming that the pixel is in the middle of the screen, the vignette would be 1, so the color won't change after multiplying by vignette^4. If we increase the color even more by multiplying it by the exposure level, the value can be much bigger than 1.0, so where is the whole tone mapping in it?


They use the simplest tone mapping operator: a linear one that multiplies all colors by a constant exposure level. It's no problem to use an exposure level greater than 1.0 since the colors are being gamma corrected and end up in the correct range. Another possibility is to use an exposure level smaller than 1.0 and skip the gamma part. Simple scale operators tend to preserve contrast better than the Reinhard operator.
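For comparison, here is a minimal sketch of the two operators side by side (the sampler and the g_Exposure constant are placeholders of mine, not from the ATI demo):

sampler hdrSampler;
float g_Exposure;

float4 ps_linear(float2 tc : TEXCOORD0) : COLOR
{
  // linear scale operator: constant exposure, then gamma
  float4 color = tex2D(hdrSampler, tc);
  return pow(color * g_Exposure, 0.55f);
}

float4 ps_reinhard(float2 tc : TEXCOORD0) : COLOR
{
  // simplified Reinhard operator: maps [0, +infinity) into [0, 1)
  float4 color = tex2D(hdrSampler, tc) * g_Exposure;
  return color / (1.0f + color);
}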

Thanks!

Quote:
Original post by DonnieDarko
It's no problem to use an exposure level greater than 1.0 since the colors are being gamma corrected and end up in the correct range.


I don't really get gamma correction and linear color space (I wanted to create another thread about it), but if a color channel has a value of e.g. 100, then even after applying gamma (color^0.55) the value will still be greater than 1 (100^0.55 is roughly 12.6), so it should be visible as white? So they are still not in the correct range.

Quote:
Original post by g0nzo
So they are still not in the correct range.


It's not a problem if you're ready to accept some saturation.

Usually photographs and 3D renders have some amount of saturation in the bright areas. There's nothing you can do about that, except scale everything by the maximum value, but then you'll end up with no details in the lower-light areas of the screen.
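A quick sketch of that trade-off (the names are mine, not from any particular demo):

sampler hdrSampler;
float g_Exposure;
float g_MaxSceneLum; // maximum luminance of the current frame

float4 ps_clip(float2 tc : TEXCOORD0) : COLOR
{
  // accept saturation: highlights clamp to white, shadows keep their detail
  return saturate(tex2D(hdrSampler, tc) * g_Exposure);
}

float4 ps_scale_to_max(float2 tc : TEXCOORD0) : COLOR
{
  // no clipping: everything fits in [0, 1], but the dark areas lose detail
  return tex2D(hdrSampler, tc) / g_MaxSceneLum;
}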

LeGreg

Man, I remember digging through all that HDR and photography literature and getting so irritated with how unspecific and/or 'breezy' all those examples and papers were. I didn't mess around too much with the ATI tech demos, but this might help you out nonetheless; I wrote it as an independent study during the last semester of my college career.

Go here: http://web.umr.edu/~kwoxrf/ then right-click the file called "Overcoming Display Limitations in Real Time Systems" and save it.

It's a long read, but it's straightforward and tells you everything.

...well, that's my sales pitch anyway.

Good luck.

Thank you!!!

I'm trying to write a similar paper (unfortunately it has to be much longer) myself. I've read yours and have lots of questions:

1. Unfortunately I don't have "Game Programming Gems 5", so do you remember what this "gamma ramping approach" described there is? Is it similar to what I posted in my first post?

2. Converting RGB to luminance is quite easy (although again there are many ways to do it), but where did you get the function for converting luminance back to RGB?

color = Lum_map(x,y) / Lum(x,y) * color

I've tried using the color space transformations used in Reinhard's source code (RGB->XYZ->RGB), but the results are quite strange (the red channel becomes dominant for higher keys). I've also tried the function presented in Goodnight's paper about the implementation of Reinhard's local operator:

color = pow(color / Lum(x,y), alpha) * Lum_map(x,y)

but the colors come out washed out.

3. Regarding automatic key estimation - have you tried using the method described by Reinhard in "Parameter Estimation for Photographic Tone Reproduction", which is mentioned in your paper? I'm using it right now, but it requires the minimum, maximum and average luminance; besides, it gives key values that are too low, so everything is a bit too dark (but maybe that's just a problem with my implementation). Also, do you calculate the key in the pixel shader for every pixel, or just once on the CPU and then pass it to the shader? Do you also use key estimation together with luminance adaptation? If you do, do you use the average luminance or the adapted luminance for calculating the key?

4. Where did you get the functions for tau and rho used in luminance adaptation?

5. You're making the bright-pass texture from the already tone mapped LDR scene texture, right? How do you distinguish normal white pixels from bright ones that were clamped to white? Right now I'm using the HDR scene texture for my bright pass and tone mapping it to get the LDR texture which is later used for the blur. However, I'm not really sure about "my" approach (I've taken it from one of the DX SDK samples), because it requires performing tone mapping twice.

6. You mention that to achieve a smooth blur, you downsample the texture and apply the blur a few times. Does it really give much better results than just one pass? Do these additional passes degrade performance significantly?

That's all (for now [smile] ).
Thanks in advance for answering these questions!

[Edited by - g0nzo on September 17, 2006 8:15:14 AM]

You cannot multiply or divide a color value by a luminance; it is just not right.
If you convert from color to luminance by using the three magic constants, you cannot go back to color from there ... this is the problem. Therefore using the RGB->XYZ->RGB conversion that Reinhard and g0nzo use (or any other color space transformation that offers a luminance channel as an end result) is the right way to deal with this.
g0nzo: if you get strange-looking red results, I would guess there is something wrong with your bright-pass filter. Do you mind posting the source here? I have achieved good results so far with the RGB<->XYZ conversion you describe.
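For reference, a minimal sketch of that round trip, tone mapping only the luminance channel so the chromaticity stays fixed (the matrices are the standard sRGB/D65 ones; toneMapLum is a placeholder for whatever operator you use):

static const float3x3 RGBtoXYZ = {
  0.4124f, 0.3576f, 0.1805f,
  0.2126f, 0.7152f, 0.0722f,
  0.0193f, 0.1192f, 0.9505f };

static const float3x3 XYZtoRGB = {
   3.2406f, -1.5372f, -0.4986f,
  -0.9689f,  1.8758f,  0.0415f,
   0.0557f, -0.2040f,  1.0570f };

float toneMapLum(float L)
{
  return L / (1.0f + L); // placeholder: any luminance operator goes here
}

float3 ToneMapPreservingChromaticity(float3 rgb)
{
  float3 XYZ = mul(RGBtoXYZ, rgb);
  float sum = max(XYZ.x + XYZ.y + XYZ.z, 1e-6f); // avoid dividing by zero on black
  float2 xy = XYZ.xy / sum;    // chromaticity coordinates, left untouched
  float Y = toneMapLum(XYZ.y); // scale only the luminance
  XYZ = float3(Y * xy.x / xy.y, Y, Y * (1.0f - xy.x - xy.y) / xy.y);
  return mul(XYZtoRGB, XYZ);
}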

Quote:

3. Regarding automatic key estimation - have you tried using the method described by Reinhard in "Parameter Estimation for Photographic Tone Reproduction", which is mentioned in your paper? I'm using it right now, but it requires the minimum, maximum and average luminance; besides, it gives key values that are too low, so everything is a bit too dark (but maybe that's just a problem with my implementation). Also, do you calculate the key in the pixel shader for every pixel, or just once on the CPU and then pass it to the shader? Do you also use key estimation together with luminance adaptation? If you do, do you use the average luminance or the adapted luminance for calculating the key?

This sounds interesting. I looked into this probably a year ago and made a pseudo-code implementation. Then I gave up because everything looked so expensive.


Pseudo Code:
float3 ToneMapping(float2 Tex : TEXCOORD0) : COLOR
{
  float I_a;      // pixel adaption
  float I_g, I_l; // global and local adaption

  float3 PixelColor   = tex2D(FullResMapSampler, Tex).rgb;
  float3 BlurredImage = tex2D(RenderMapSampler, Tex).rgb;

  // AverageLumSampler holds the
  // global average luminance of the color channels in rgb and the
  // global average log luminance in alpha
  float3 Cav = tex2D(AverageLumSampler, Tex).xyz;

  // fetch the average scene luminance
  float Lav = tex2D(AverageSceneLumSampler, Tex).a;

  // first channel: adaption level == scene log lum level
  // second channel: m
  float2 Adapt = tex2D(AdaptedLuminanceMapSampler, float2(0.5f, 0.5f)).xy;
  float A = Adapt.x;
  float m = Adapt.y;

  // get Lmax and Lmin
  float2 LumMaxMin = tex2D(LumMaxMinSampler, float2(0.5f, 0.5f)).xy;
  float Lmax = LumMaxMin.x;
  float Lmin = LumMaxMin.y;

  // luminance of the local pixel == local operator
  float L = dot(PixelColor, LuminanceConst);

  // overall intensity f and chromatic adaption c are user-defined
  // constants (f: value range -8.0..8.0, default 0.0; c: 0.0..1.0)
  float f = exp(-g_Intensity);
  float c = g_ChromaticAdaption;

  // calculate for each color channel
  for (int i = 0; i < 3; i++)
  {
    I_l = c * PixelColor[i] + (1 - c) * L;   // local: blend channel with pixel luminance
    I_g = c * Cav[i]        + (1 - c) * Lav; // global: blend channel average with scene average
    I_a = A * I_l           + (1 - A) * I_g; // interpolate local and global adaption
    PixelColor[i] /= PixelColor[i] + pow(f * I_a, m);
  }

  // normalization
  PixelColor = saturate((PixelColor - Lmin) / (Lmax - Lmin));

  // mix with the bloom filter
  PixelColor += BlurFactor * BlurredImage;

  return PixelColor;
}

I wrote this code while being stuck in London Heathrow for a couple of hours ... it is not supposed to compile or work ... it is probably even wrong.

Longer than 15 pages, eh?
I hope it's not due tomorrow.

1) The dynamic gamma ramping approach from Game Programming Gems 5 (actually that's a typo in the paper; it should be Gems 4) is pretty simple...

a) You take the output of a traditional renderer and find the min, max and avg luminance of the image (forget about floating-point render targets and all that).

b) You transform the image so that pixels that have a luminance of min, max and avg are mapped to pixels that have a luminance of 0, 1 and 0.5 respectively (see the sketch after this list).

c) Of course there are some other details you have to deal with, like not expanding scenes that are meant to be dark into the full brightness range, adjusting the effectiveness of the range-mapping step over time to simulate the adaptation of the user's pupils, and so on. One thing to note about this approach is that the data from your renderer is clamped to [0, 1] before the HDR effects... so you aren't really getting a full-blown HDR effect, you're just getting a near-optimal contrast expansion.
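A minimal sketch of step b) as a pixel shader, assuming the min/max/avg luminances have already been measured and are available as constants (the names are mine, and the original technique sets the hardware gamma ramp rather than running a shader):

sampler ldrSceneSampler;
float g_LumMin, g_LumMax, g_LumAvg;

float4 ps_contrast_expand(float2 tc : TEXCOORD0) : COLOR
{
  float4 color = tex2D(ldrSceneSampler, tc);

  // remap so that min -> 0 and max -> 1
  float4 norm = saturate((color - g_LumMin) / (g_LumMax - g_LumMin));

  // pick the exponent g so the average lands on 0.5: avgNorm^g == 0.5
  float avgNorm = (g_LumAvg - g_LumMin) / (g_LumMax - g_LumMin);
  float g = log(0.5f) / log(avgNorm);
  return pow(norm, g);
}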


2) Suppose you know the average luminance of the scene and the 'key' of the scene (LumAvg, Key)...

The relative luminance of pixel x,y is given by:

LumRel(x,y) = Key * Lum(x,y) / LumAvg

All this function does is give pixels with a luminance equal to the avg luminance a relative luminance of 1.0 * Key, and give pixels with a luminance equal to zero a relative luminance of 0.0. Notice that the range of this function is [0, +infinity).

At this point we want to map LumRel(x,y), whatever it may be, to the zero-to-one range of the device. How do we do that? I'm not going to start jabbering about limits and whatnot at you, but it's simple if you think about it.

LumMap(x,y) = LumRel(x,y) / (1 + LumRel(x,y))

You might ask why we are dividing by (1 + LumRel(x,y)) instead of (2 + LumRel) or (100 + LumRel); the answer is not that satisfying. You can make the constant in the denominator anything you want, you'll just end up adjusting the Key of the scene accordingly. Now, on to your question:

ColorMap(x,y) = LumMap(x,y) / Lum(x,y) * Color(x,y)

If you muck with this equation for a while you can turn it into this:

Lum( ColorMap(x,y) ) = LumMap(x,y)

which is exactly what we want. I came up with the equation by doing the algebra such that, given a desired luminance, the actual luminance, and a floating-point pixel, a range-mapped pixel is produced (one that has the desired luminance).
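Putting 2) together as a pixel shader, a sketch with made-up constant names (g_Key and g_LumAvg would come from your key estimation and average-luminance passes):

sampler hdrSceneSampler;
float g_Key;    // 'key' of the scene
float g_LumAvg; // average scene luminance

float4 ps_tonemap(float2 tc : TEXCOORD0) : COLOR
{
  float4 color = tex2D(hdrSceneSampler, tc);
  float Lum = max(dot(color.rgb, float3(0.2126f, 0.7152f, 0.0722f)), 1e-6f);

  float LumRel = g_Key * Lum / g_LumAvg;   // relative luminance, [0, +infinity)
  float LumMap = LumRel / (1.0f + LumRel); // mapped luminance, [0, 1)

  color.rgb *= LumMap / Lum;               // now Lum(color) == LumMap
  return color;
}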


3) Okay, I pretty much tried everybody's implementations. In that experience I found that all of them require a great deal of tweaking of algorithm constants, art assets, and lighting parameters to get pleasing results.

My key is calculated by performing a render-to-texture luminance transform on the raw floating-point scene. That texture is then recursively scaled down to a texture that is 4 by 3 or 16 by 9 (depending on the aspect ratio). That texture is then locked, and the handful of pixels are averaged to find the average luminance of the scene. This value is then linearly interpolated with the adapted luminance.

The key is calculated once per frame, the average luminance is calculated once per frame, and the adapted luminance is calculated once per frame.

And yes, the adapted luminance is substituted for the average luminance in all the formulas to account for the user's dynamic adaptation.
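One step of that recursive downscale might look like this (a sketch; the sampler name and the offset array, which the application would fill with the texel offsets of a 4x4 block in the source level, are assumptions):

sampler prevLevelSampler;
float2 g_TexelOffsets[16]; // offsets covering a 4x4 block of source texels

float4 ps_downscale_lum(float2 tc : TEXCOORD0) : COLOR
{
  // average a 4x4 block of the previous level into one output texel
  float lum = 0.0f;
  for (int i = 0; i < 16; i++)
    lum += tex2D(prevLevelSampler, tc + g_TexelOffsets[i]).r;
  return lum / 16.0f;
}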

4) Tau is a simple linear interpolation between the two adaptation rates: when rho is one, the adaptation rate is that of the rods, and when rho is zero, the adaptation rate is that of the cones.

The equation for rho is once again a simple normalizing function, very similar to the luminance mapping. All we're trying to do is take a value on [0, +infinity) and map it to [0, 1]; the easiest way to do that is f(x) = x / (k + x), where k is some constant that depends on what you are using the normalizing function for. For your HDR implementation you'll have to play around with different values for k to find what works and what doesn't (assuming you do something similar to what I did).
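A sketch of how those pieces might fit together in a 1x1 adaptation pass (the names, the constant k, and the exponential-decay update are my assumptions; only the rho = x / (k + x) form and the rod/cone lerp come from the post above):

sampler averageLumSampler;   // 1x1: average luminance of the current frame
sampler adaptedLumSampler;   // 1x1: adapted luminance from the previous frame
float g_TauRods, g_TauCones; // adaptation time constants, in seconds
float g_K;                   // tuning constant of the normalizing function
float g_DeltaTime;           // frame time, in seconds

float4 ps_adapt_lum(float2 tc : TEXCOORD0) : COLOR
{
  float Lavg = tex2D(averageLumSampler, float2(0.5f, 0.5f)).r;
  float Lad  = tex2D(adaptedLumSampler, float2(0.5f, 0.5f)).r;

  float rho = Lavg / (g_K + Lavg);              // normalize to [0, 1)
  float tau = lerp(g_TauCones, g_TauRods, rho); // rho = 1 -> rods, rho = 0 -> cones

  // move the adapted luminance toward the scene average over time
  float Lnew = Lad + (Lavg - Lad) * (1.0f - exp(-g_DeltaTime / tau));
  return float4(Lnew, Lnew, Lnew, 1.0f);
}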

5) Um... read that part of the paper again; I'm not sure how to explain it any other way. But yeah, the scene is tone mapped once (the tone mapping operator is adjusted a little at this point in the paper).

6) The successive downscaling approach is faster when you consider the size of the blur achieved given the time it takes. Additionally, the later passes don't add much to the total time because you are dealing with a smaller and smaller image.

I'm going to be gone for a while, traveling across the country to start my new job (as a game programmer!) in LA. It would probably be best to direct your questions to the general GameDev population for a while.

Quote:
(as a game programmer!)
Cool! Congratulations! Which company in LA is it?

Quote:
Okay, I pretty much tried everybody's implementations.
I am only aware of the ones in the DX SDK, the NVIDIA SDK and the ATI SDK ... are there any other publicly available implementations?


Wolfgang Engel?!... good gosh, I got my start in shaders by reading the column you cooked up a long time ago here on GameDev: Fundamentals of Vertex and Pixel shaders.

By "I pretty much tried everybody's implementations" I mean that I wrote a demo based on my interpretation of every single paper listed in the references. So, no I don't know of any other open source demos.

"Perceptual Effects in Real-Time Tone Mapping" most definitely gave the best results and I'm pretty sure it's closer to what Valve did with the Lost Coast demo (which was my goal). I just didn't have the time to get the implementation rock solid and the execution time fast enough to be practical. Hence the lame global tone mapping operator and the hack for the white point in the paper. I actually found out about a month after graduating that I was recreating all of my shaders every single frame...which explains all of my performance problems (it was this really weird bug with my resource manager).

Yeah, I landed a job with Pandemic Studios... it's a lifelong dream come true. I'm very lucky.

Quote:
Pandemic Studios
This is fantastic!! It must be a great company, from what I hear.

Quote:
Valve did with the Lost Coast demo
Jason Mitchell talked about HDR in Lost Coast extensively at SIGGRAPH ... I made a lot of notes, but hopefully they will also release the slides.
The obvious advantage they had was that they work with an environment with a fixed sun. I am developing for games with a time-of-day feature here, which makes things a bit more complicated :-)

Thank you for your answers.

Quote:

I landed a job with Pandemic Studios

Congratulations!!!

Quote:

You cannot multiply or divide a color value by a luminance; it is just not right.

I'll probably use the RGB<->XYZ conversion anyway, but the other methods give quite similar results and are much faster. I haven't seen any demo that uses this conversion.

Quote:

The key is calculated once per frame, the average luminance is calculated once per frame, and the adapted luminance is calculated once per frame.

I guess I could add another pass that renders the key and the white point to a 1x1 texture. That would be faster than what I'm doing now: locking the 1x1 texture that stores the luminance values and calculating the key on the CPU.
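Something like this, perhaps (a sketch of Reinhard's parameter estimation as I read it from the paper; the formulas work on log2 luminances, and the sampler layout and names are my assumptions):

sampler lumSampler; // 1x1: min/max/avg scene luminance in r/g/b

float4 ps_estimate_key(float2 tc : TEXCOORD0) : COLOR
{
  float3 lum = tex2D(lumSampler, float2(0.5f, 0.5f)).rgb;
  float Bmin = log2(lum.r);
  float Bmax = log2(lum.g);
  float Bavg = log2(lum.b);

  // key: where the log-average sits between the log extremes
  float key = 0.18f * pow(4.0f, (2.0f * Bavg - Bmin - Bmax) / (Bmax - Bmin));

  // white point: grows with the dynamic range of the scene
  float white = 1.5f * exp2(Bmax - Bmin - 5.0f);

  return float4(key, white, 0.0f, 1.0f);
}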

Quote:

if you get strange-looking red results, I would guess there is something wrong with your bright-pass filter.

The problem was that I had similar results without any bloom. I've just found out that it was caused by the specific HDR images, not by the algorithm itself.


However, there are still 2 things that I can't figure out.

What is the best way to do the post-processing? There are 3 options:
- Use the LDR scene texture (after tone mapping) for all calculations. Simplest, and probably the worst idea, because at this point you lose all HDR info about the scene.
- Use the HDR texture for the bright pass and all following calculations, lerp between the scene texture and the result of the post-processing, and tone map the result. This method is presented in the "Rendering with Natural Light" demo. I've tried it, but I can't get good results.
- Use the HDR texture for the bright pass, tone map the result, perform the blur etc., then add it to the tone mapped LDR scene texture. I'm using this now, but the problem is that I need to perform tone mapping twice.

Another (and more important) thing: I'm trying to implement Reinhard's local tone mapping operator, but I can't understand how they estimate the parameters for the Gaussian convolutions (7 convolutions are calculated with varying kernel size and standard deviation to find the largest area with small luminance contrast around a given pixel). Here's the quote from "Perceptual Effects in Real-Time Tone Mapping":
Quote:

The Gaussian for the first scale is one pixel wide, setting the kernel size to s = 1/(2*sqrt(2)) (~0.35); on each following scale, s is 1.6 times larger.

What is the difference between scale and kernel size? How can a kernel size be 0.35?

