
Reinhard operator and HDR



#1 Bacterius   Crossbones+   -  Reputation: 8178


Posted 17 September 2012 - 06:38 PM

Hi,
I'm gathering all the information I can get my hands on in preparation for implementing a simple spectral ray tracer. Currently I'm stuck on HDR - I understand the concept but I have some trouble grasping the theory. I am working on Reinhard's operator at the moment.

As I understand it, HDR images cannot be displayed on LDR monitors because the luminance range in the image exceeds what the monitor can display - so the luminance has to be scaled somehow. This is done by taking the per-pixel luminances in the image and processing them to compress the HDR image's luminance range in a way that's perceptually acceptable.

I have done some tests with this algorithm, which seems to be the real deal as it produces fairly nice tone-mapped images, but some things are still unclear:

1. calculate the maximum luminance in the image, using the standard CIE luminance weights (L = 0.2126 R + 0.7152 G + 0.0722 B)
2. for each pixel, scale its luminance as:

L_d = L * (1 + L / L_white^2) / (1 + L)

3. thus scale the pixel's RGB components by multiplying them with the luminance coefficient L_d / L

First, should the maximum luminance in the image be used for L_white? In this reference they call it L_white and state it should be the luminance that will be mapped to an LDR luminance of 1 (i.e. maximum). This suggests that it could be anything, depending on the exposure we want to obtain - should the maximum or the average be used? Or perhaps the median? Maximum seems to produce the nicest results on most of the HDR photos I tested but is sometimes too dark. Average is often far too bright, but is sometimes spot on. What gives?
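For concreteness, the steps above might be sketched as follows in Python. This is a sketch under my own assumptions (linear RGB input, Rec.709 luminance weights); the variable names are mine, not from any particular implementation.

```python
def reinhard(rgb, l_white):
    """Reinhard tone mapping with a white point L_white.

    rgb:     linear (r, g, b) tuple
    l_white: smallest luminance that maps to 1.0 (pure white)
    """
    r, g, b = rgb
    # per-pixel luminance (Rec.709 weights)
    l = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if l == 0.0:
        return (0.0, 0.0, 0.0)
    # Reinhard's extended operator
    l_d = l * (1.0 + l / (l_white * l_white)) / (1.0 + l)
    s = l_d / l  # luminance scaling coefficient
    return (r * s, g * s, b * s)

# a grey pixel sitting exactly at the white point maps to ~(1, 1, 1)
print(reinhard((4.0, 4.0, 4.0), l_white=4.0))
```

Choosing l_white equal to the maximum image luminance maps everything into [0, 1]; a smaller value lets the brightest pixels burn out.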

Thanks!

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis



#2 MJP   Moderators   -  Reputation: 10243


Posted 17 September 2012 - 07:23 PM

In Reinhard's original paper he referred to it as Lwhite, because it's "the smallest luminance that will be mapped to pure white". He came up with that formulation so that you could allow the image to "burn out", such that a certain subset of high values map to >= 1.0. Using your maximum luminance means that absolutely everything will end up in [0, 1], but that's already what you get with x / (1 + x), which is his original formula. Ultimately some amount of burn out is usually desirable, so that you don't compress your mids and lows too much.

#3 Bacterius   Crossbones+   -  Reputation: 8178


Posted 17 September 2012 - 07:41 PM

In Reinhard's original paper he referred to it as Lwhite, because it's "the smallest luminance that will be mapped to pure white". He came up with that formulation so that you could allow the image to "burn out", such that a certain subset of high values map to >= 1.0. Using your maximum luminance means that absolutely everything will end up in [0, 1], but that's already what you get with x / (1 + x), which is his original formula. Ultimately some amount of burn out is usually desirable, so that you don't compress your mids and lows too much.

Oh, you are correct, I had gamma correction in one test and not the other so they came out different - maximum is the same as the simple formula, indeed.

So I take it I need to take the maximum luminance and multiply it by some "burn out" coefficient (between 0 and 1), and then use that as Lwhite? Is there some way to estimate a good coefficient or is it really trial and error? I understand Reinhard is a global operator and as such won't always look nice on every image (an adaptive algorithm would be a more sophisticated solution).

Ah, I missed the "key" part in the paper - I need to use the summation formula to estimate the total luminance in the image on a logarithmic scale and work with that, if I understand correctly.

EDIT: ok, I got it working - Reinhard works superbly when the luminance range isn't excessive, after which it starts to break down (everything becomes dark) - I think a local operator is required for those pathological cases, as indicated in the paper.

Edited by Bacterius, 17 September 2012 - 08:13 PM.



#4 MJP   Moderators   -  Reputation: 10243


Posted 17 September 2012 - 11:23 PM

When I last worked with Reinhard we just let the artists pick the white point, which they chose according to taste. The only attempt we ever made at automatically picking or guiding a value was to determine the maximum luminance, and that doesn't work out well. We've been using a filmic curve for a long time now at work, and that's been much easier for the artists to work with. We just settled on a curve they were happy with, and now they just tweak exposure settings rather than the curve itself. I'm not sure if your idea with the burn out coefficient would work well in practice since I haven't tried it, but it sounds like it could work.

The bit about the "key value" and total luminance in the paper deals with exposure, or "calibration" as it's sometimes referred to in the literature. The goal of this part is to come up with a linear scale value for your pixel intensity, so that the entire image is "shifted" such that the relevant parts of it hit the part of your tone-mapping curve that produces results with decent contrast. It can be considered independent of the tone mapping curve, and I like to think of it that way. The approach suggested by the Reinhard paper (which is commonly used in games) is to compute the geometric mean of luminance for the entire screen, which you can obtain by computing log(luminance) for every pixel, averaging those values, then taking exp() of the average. The key value is a scalar that works in conjunction with this, and controls how brightly or darkly the algorithm will expose your scene. When I explain it to the artists I usually tell them that it's sort of a "mood" slider, where lower key values correspond to darker, "moodier" scenes while higher key values correspond to brighter, "happier" scenes. If you're familiar with photography then you can think of lower values mapping well to low-key lighting and higher values mapping to high-key lighting.
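The exposure step described above can be sketched in a few lines of Python. The small delta guarding log(0) is a common safeguard and my own addition, as is the default key of 0.18 ("middle grey"):

```python
import math

def auto_expose(luminances, key=0.18, delta=1e-6):
    """Linear exposure scale from the geometric mean of scene luminance.

    key acts as the 'mood' slider: lower = darker scene, higher = brighter.
    """
    # geometric mean = exp of the average of the logs
    log_avg = sum(math.log(delta + l) for l in luminances) / len(luminances)
    l_avg = math.exp(log_avg)
    return key / l_avg  # multiply every pixel's luminance by this

# a scene whose geometric mean already sits at middle grey needs ~no scaling
print(auto_expose([0.18, 0.18, 0.18]))
```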

#5 Hodgman   Moderators   -  Reputation: 27837


Posted 17 September 2012 - 11:48 PM

We ended up with something similar to what MJP's describing, except that we were new to all this, so we experimented with a lot of different formulas and might be doing things that aren't quite right.
We let the artists choose between Reinhard and filmic curves (they chose filmic for almost all scenes, and used a curve editor to tweak it slightly from our initial curve values), and then let them specify the "white" saturation point and a middle-grey "key" value (pretty sure this bit is a bit fudged in our code). The "white" value is used in the tone-mapper, but the "key" value is used beforehand for automatic exposure adjustment. The geometric mean luminance and the key are used to adjust each RGB value before tone-mapping.

#6 CryZe   Members   -  Reputation: 768


Posted 18 September 2012 - 02:41 AM

Why is the Reinhard Tone Mapping operator considered "non-filmic" anyway? I plotted it and discovered that it has an s-curve as well, so I modified it a bit and got pretty much the same results as the Uncharted 2 Filmic Tone Mapping operator. I even think that my modified Reinhard operator is actually pretty good, since it's fast, filmic and easily configurable.

f(x) = ((W - 1) * x) / ((W - 2) * x + W)

W is the white point of the curve (f(W) = 1). Here's a plot of it:

[plot of the modified Reinhard curve]

Edited by CryZe, 18 September 2012 - 03:22 AM.


#7 Ashaman73   Crossbones+   -  Reputation: 6734


Posted 18 September 2012 - 05:05 AM

I plotted it and discovered that it has an s-curve as well, so I modified it a bit and got pretty much the same results as the Uncharted 2 Filmic Tone Mapping operator. I even think that my modified Reinhard operator is actually pretty good, since it's fast, filmic and easily configurable.

Do you have some comparison shots (color gradient in linear, Reinhard, Uncharted and modified Reinhard)? That would be quite interesting.

#8 Hodgman   Moderators   -  Reputation: 27837


Posted 18 September 2012 - 05:20 AM

Why is the Reinhard Tone Mapping operator considered "non-filmic" anyway?

I don't know if this is right, but I use "filmic" to refer to "the John Hable curve".
If someone said "a filmic curve", I would think of an S-curve, but if someone said "the filmic curve", I would assume they were quoting Hable (though he called his curve "Uncharted2Tonemap", not "the filmic curve", I've seen enough people refer to his "Filmic Tonemapping Operators" blog post and call it the filmic curve. Also note that Reinhard is listed in that blog post).
When I was explaining Hable and Reinhard to our artists, I described the former as Uncharted's curve, made so the game looks like a film, and the latter as a curve from an academic trying to make digital HDR photos look like traditional photographs.
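For reference, Hable's "Uncharted2Tonemap" curve from that blog post, transcribed to Python. The A-F constants and the white point of 11.2 are the values Hable published; the exposure bias is applied before the curve and the result is normalized by the curve's value at white, as in his usage example:

```python
def uncharted2_tonemap(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    """Hable's filmic curve: A-F shape the shoulder, linear section and toe."""
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def tonemap(x, exposure_bias=2.0, white=11.2):
    # normalize so that 'white' maps to exactly 1.0
    return uncharted2_tonemap(exposure_bias * x) / uncharted2_tonemap(white)
```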

p.s. I just had a muck around with your modified Reinhard in a graphing calculator - it looks pretty useful. I might have to try it out on some test scenes tomorrow.

Edited by Hodgman, 18 September 2012 - 05:26 AM.


#9 CryZe   Members   -  Reputation: 768


Posted 18 September 2012 - 07:17 AM

You could generalize it even more by adding an exponent B to increase the "contrast", or let's say to make it more filmic:

f(x) = ((W^B - 1) * x^B) / ((W^B - 2) * x^B + W^B)

With the right exponent, you basically get the same curve as the Uncharted 2 curve except for the really bright colors. The upper tail actually preserves more color instead of abruptly cutting it off. In the actual shader you only need to calculate a single pow, so it's not that much slower than the version without the additional parameter.
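Reading the constant names in the optimized "Serr" shader later in the thread (ApowB, ApowBsub1, ApowBsub2), the generalized operator can be sketched in Python as follows. The parameter values here are illustrative placeholders, not the ones from the thread:

```python
def serr(x, white=11.2, contrast=2.2):
    """Generalized modified-Reinhard curve: f(0) = 0, f(white) = 1,
    s-shaped for contrast > 1. Parameter values are illustrative."""
    wb = white ** contrast          # precomputable constant
    xb = x ** contrast              # the single pow needed per pixel
    return ((wb - 1.0) * xb) / ((wb - 2.0) * xb + wb)
```

Only pow(x, B) varies per pixel; wb, wb - 1 and wb - 2 can be folded into shader constants, which is what the ApowB* names in the optimized shader suggest.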

Do you have some comparison shots (color gradient in linear, Reinhard, Uncharted and modified Reinhard)? That would be quite interesting.

As soon as I can test it out, I'm going to create some comparison shots. I can already tell you that it should look basically the same as the Uncharted 2 tone mapping operator, except that bright colors converge more smoothly to pure white. The rest of the colors look as "filmic" as the Uncharted 2 operator does.

Edited by CryZe, 19 September 2012 - 08:58 AM.


#10 MJP   Moderators   -  Reputation: 10243


Posted 18 September 2012 - 03:33 PM

Why is the Reinhard Tone Mapping operator considered "non-filmic" anyway? I plotted it and discovered that it has an s-curve as well, so I modified it a bit and got pretty much the same results as the Uncharted 2 Filmic Tone Mapping operator. I even think that my modified Reinhard operator is actually pretty good, since it's fast, filmic and easily configurable.


To me "filmic" implies that your curve is aimed at emulating the actual response of film, which was definitely not Reinhard's goal. In fact, in his paper he specifically mentions avoiding a filmic s-curve in favor of a curve that only compresses higher luminances. So although the curve is still "s-shaped" when plotted on a log graph, the "toe" portion is visibly less severe compared to an actual film response curve, or even the curve that Hable originally suggested, which was based on a particular Kodak film stock. Film curves are also steeper in the mid-tones, while the shoulder rolls off more quickly. This is all done to boost perceived contrast, in order to combat the Stevens effect (perceived contrast is lessened at lower luminance) and the Hunt effect (perceived "colorfulness" is lessened at lower luminance). For some reference, here's a screenshot showing a Reinhard curve, and another screenshot showing a typical filmic curve. By changing Reinhard's curve to have similar properties you're really fundamentally changing it from Reinhard's original intentions, so I don't think it would be right to call it "Reinhard" at all.

#11 CryZe   Members   -  Reputation: 768


Posted 19 September 2012 - 08:39 AM

Ok, let's call it the Serr tone mapping operator then.

I've just tweaked some settings so that it matches the Uncharted 2 tone mapping operator exactly. With the right white point and exponent you get the exact same tone mapping operator. Here's a plot:

[plot comparing the two curves]

The thing is that it's way lighter on the GPU than the one from Uncharted 2.
With these settings it's basically just a single pow and a handful of multiply-adds.

Edited by CryZe, 19 September 2012 - 08:44 AM.


#12 Hodgman   Moderators   -  Reputation: 27837


Posted 19 September 2012 - 09:06 AM

That is pretty simple once you collapse the non-per-pixel math!
N.B. I compared it against Hable's, and your instruction count actually comes out higher in SM3.0, because it assumes that most operations can be done in 4-wide SIMD but that pow must be scalar. SM5 asm has a SIMD version of exp/log, so the instruction count beats Hable's there.
However, this is pretty meaningless, as HLSL asm isn't at all representative of the GPU, and different instructions have different latencies...

Anyway, the code does look good ;)
//optimized hable
float3 f = (BC*x-B*x+DEsubDF)/(A*x*x+B*x+DF)-EsubFdivF;

    ps_3_0
    def c2, 1, 0, 0, 0
    dcl_texcoord v0.xyz
    mul r0.xyz, c0.z, v0
    mad r1.xyz, c1.x, v0, -r0
    add r1.xyz, r1, c1.z
    mul r2.xyz, v0, v0
    mad r0.xyz, r2, c0.y, r0
    add r0.xyz, r0, c1.y
    rcp r2.x, r0.x
    rcp r2.y, r0.y
    rcp r2.z, r0.z
    mad oC0.xyz, r1, r2, -c1.w
    mov oC0.w, c2.x

ps_5_0
dcl_globalFlags refactoringAllowed
dcl_constantbuffer cb0[2], immediateIndexed
dcl_input_ps linear v0.xyz
dcl_output o0.xyzw
dcl_temps 2
mul r0.xyz, v0.xyzx, v0.xyzx
mul r1.xyz, v0.xyzx, cb0[0].zzzz
mad r0.xyz, r0.xyzx, cb0[0].yyyy, r1.xyzx
mad r1.xyz, cb0[1].xxxx, v0.xyzx, -r1.xyzx
add r1.xyz, r1.xyzx, cb0[1].zzzz
add r0.xyz, r0.xyzx, cb0[1].yyyy
div r0.xyz, r1.xyzx, r0.xyzx
add o0.xyz, r0.xyzx, -cb0[1].wwww
mov o0.w, l(1.000000)
ret
//optimized Serr
float3 XpowB = pow(x,B);
float3 f = (ApowBsub1*XpowB)/(ApowBsub2*XpowB+ApowB);

    ps_3_0
    def c1, 1, 0, 0, 0
    dcl_texcoord v0.xyz
    log r0.x, v0.x
    log r0.y, v0.y
    log r0.z, v0.z
    mul r0.xyz, r0, c0.w
    exp r1.x, r0.x
    exp r1.y, r0.y
    exp r1.z, r0.z
    mul r0.xyz, r1, c0.x
    mad r1.xyz, c0.y, r1, c0.z
    rcp r2.x, r1.x
    rcp r2.y, r1.y
    rcp r2.z, r1.z
    mul oC0.xyz, r0, r2
    mov oC0.w, c1.x


ps_5_0
dcl_globalFlags refactoringAllowed
dcl_constantbuffer cb0[1], immediateIndexed
dcl_input_ps linear v0.xyz
dcl_output o0.xyzw
dcl_temps 2
log r0.xyz, v0.xyzx
mul r0.xyz, r0.xyzx, cb0[0].wwww
exp r0.xyz, r0.xyzx
mul r1.xyz, r0.xyzx, cb0[0].xxxx
mad r0.xyz, cb0[0].yyyy, r0.xyzx, cb0[0].zzzz
div o0.xyz, r1.xyzx, r0.xyzx
mov o0.w, l(1.000000)
ret


#13 Bacterius   Crossbones+   -  Reputation: 8178


Posted 19 September 2012 - 02:11 PM

Ok, thanks everyone for your help!
I have a couple of final questions. I've made good progress on the path tracer and I'm now at the stage where each pixel in my final image has a spectral power distribution, which is basically a list of sampled wavelengths along with their respective radiance. I can convert this distribution to a color (but not a shade) easily; however, I am a little confused about how to calculate the shade. Can I use the raw per-pixel radiance as luminance in my tone-mapper and scale the final RGB values by the radiance scaling factor, or should I first convert all my CIE colors to RGB, multiply them by their radiance, and tonemap that?

Basically I feel I should be able to do better than the CIE luminance approximation given that I already have the true radiance value for each pixel, but I am unsure if they are the same thing.

Edited by Bacterius, 20 September 2012 - 02:01 AM.



#14 Bacterius   Crossbones+   -  Reputation: 8178


Posted 20 September 2012 - 02:01 AM

Update: scaling the RGB values with the total measured radiance and tonemapping that seems to be the right thing to do; anything else produces garbage. For future reference, since I had a hard time "getting it": the absolute per-wavelength radiance in your spectral power distribution is irrelevant in determining the resulting perceived color - all that matters is the relative radiance between wavelengths (the "shape" of the distribution). However, this only gives you the chromaticity, which is fundamentally different from what you would intuitively call a "color" because it does not encode luminance information at all. The luminance (shade) is the area under the spectral power curve weighted by the eye's response (i.e. the total perceived radiance over the visible spectrum), and this is where the absolute measurements are used. This gives you a wide luminance range to work with, which you can then tone map at your leisure.
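A minimal Python sketch of that split, for the record. The single-Gaussian stand-in for the CIE luminosity curve is a crude placeholder of my own; real code should integrate against the tabulated CIE 1931 color-matching data:

```python
import math

def y_bar(lam_nm):
    # crude Gaussian placeholder for the CIE y-bar curve, peaking at 555 nm
    return math.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def luminance_from_spd(wavelengths_nm, radiance):
    """Area under the SPD weighted by the eye's response (trapezoid rule)."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dlam = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = radiance[i] * y_bar(wavelengths_nm[i])
        f1 = radiance[i + 1] * y_bar(wavelengths_nm[i + 1])
        total += 0.5 * (f0 + f1) * dlam
    return total

# doubling every sample doubles the luminance but leaves the normalized
# "shape" of the distribution (and hence the chromaticity) unchanged
lams = list(range(400, 701, 10))
spd = [1.0] * len(lams)
print(luminance_from_spd(lams, spd))
```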

Edited by Bacterius, 20 September 2012 - 02:02 AM.





