HDR Help

15 comments, last by larspensjo 11 years, 7 months ago
You're going to use the average luminance to scale the scene exposure. This lets the camera adapt to the scene brightness, simulating what a camera or your eye does when you walk into a dark room and everything gradually brightens.

I feel like what you really need is a good tutorial or article. Unfortunately, the one that was on GameDev.net seems to be gone. The only one I could find quickly is: http://www.gamedev.n...-on-mains-r2485 If somebody can find a better one, that would be great.
Guys:

I'm so close to having this working. What happens now:

If I turn my light really low (like 0.2, 0.2, 0.2), then if I look at my sky, the terrain turns black; if I look at my terrain (no sky), the terrain shows dark, but I can see it. Nothing is using HDR values.

Now, I set my light to (5.0, 5.0, 5.0) and my terrain is just about white; it's blown out. If I mess with the exposure, I can get my terrain visible with an exposure of around 0.01, but then my sky is really dark.

I'm doing something wrong here.

I've got my tonemapping down to this:
final = hdrCol;
float Lp = (fExposure / l.r) * max( final.r, max( final.g, final.b ) );
float toneScalar = Lp / ( 1.0f + Lp );
retCol = final * toneScalar;

I'm not sure my equation above is right.

For example, if my average luminance (l.r) is, let's say, 1.0... and my exposure was 1.0 and my max color was 5.0, then Lp = 5... so 5 / (1+5) = 0.83-ish... well, 0.83 * 5 is still greater than 1, so it will be blown out... so how is this supposed to work???
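For reference, the blow-out in the numbers above goes away if the compressed luminance is divided back out of the color rather than multiplied straight in. A minimal scalar sketch of that scaled-Reinhard form in Python (function and variable names are mine, not from any sample):

```python
def reinhard_tonemap(rgb, avg_lum, exposure=1.0):
    """Scaled Reinhard: compress HDR rgb so every channel lands in [0, 1)."""
    lw = max(rgb)                    # luminance proxy: max channel, as in the post
    if lw == 0.0:
        return rgb
    lp = (exposure / avg_lum) * lw   # scaled luminance (the Lp above)
    ld = lp / (1.0 + lp)             # Reinhard compression, always < 1
    # Key step: scale the color by ld / lw (not by ld alone), so the
    # brightest channel becomes exactly ld instead of ld * lw.
    return tuple(c * (ld / lw) for c in rgb)

print(reinhard_tonemap((5.0, 5.0, 5.0), avg_lum=1.0))  # every channel ≈ 0.833
```

With the numbers from the post (average luminance 1.0, exposure 1.0, max color 5.0), the result is 5/6 per channel rather than a blown-out 4.17.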

Thanks for any more help!
Jeff.
Correct me if I am wrong, but the way I understand this:

  1. Transform the colors from the range [0,1) to the range [0,inf).
  2. Do various manipulations, like multiplying with lights from lamps.
  3. Transform the colors back again from [0,inf) to [0,1)

So if you have a dark scene, with no lights or transformations, it will come out exactly the same again.

Using the tone mapping function mentioned above, x/(1+x), in step three means you need to do a reverse mapping in step one: x/(1-x). Notice that this reverse transformation can't take a value of 1; you have to clamp it somewhere below 1.
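A quick numeric check of that round trip (a Python sketch; x/(1-x) and x/(1+x) are exact inverses below the clamp):

```python
def expand(x):
    # Step 1: [0,1) -> [0,inf); undefined at x == 1, so clamp below 1 first
    return x / (1.0 - x)

def compress(x):
    # Step 3: [0,inf) -> [0,1), the Reinhard mapping
    return x / (1.0 + x)

# A dark scene with no light manipulation in step 2 comes out unchanged:
for x in (0.0, 0.25, 0.5, 0.9):
    assert abs(compress(expand(x)) - x) < 1e-12
```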
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
larspensjo,

I'm not sure that's right. So, I need to do something like:

final = hdrCol * (1 - hdrCol); ?

That doesn't seem right, since hdrCol can be any value; for my example, let's say (5,5,5). The final color would then be (-20,-20,-20).

Any other thoughts?
Jeff.

[quote]
I'm not sure that's right. So, I need to do something like:

final = hdrCol * (1 - hdrCol); ?

That doesn't seem right, since hdrCol can be any value; for my example, let's say (5,5,5). The final color would then be (-20,-20,-20).
[/quote]

That function shall be used for the first phase, not the last. The values you read from a normal texture sampler (RGB8) in the first phase are scaled from [0,255] to [0,1]. So you use the function x/(1-x) to get a hdrCol. That is what you render into the D3DFMT_A16B16G16R16F target.

In the last phase, when you render D3DFMT_A16B16G16R16F to the screen, you have to do the tone mapping x/(1+x). The value going to the final display will then be in the range [0,1].

In between these two phases, you can manipulate the D3DFMT_A16B16G16R16F buffer as you want, since going above 1 is no longer a problem. For example, if you have many lamps that all add light, the value can get rather high, but it will still transform into something in the range [0,1] in the last phase.
larspensjo, you seem to be a bit confused on this subject. A typical albedo texture does not contain "HDR" values; it contains "LDR" albedo values that are in the [0, 1] range. You don't want to "convert" these to HDR; you still want them to be [0, 1] even in an HDR lighting scenario. For instance, if you have a grey surface with albedo = 0.5 and a light source with intensity = 100, your diffuse reflectance would be 0.5 * 100 (divided by Pi if you're energy conserving), which would be 50. If you were to perform some transformation on the albedo color that gave you a value above 1, it would mean that the surface could reflect more light than the amount of light it's receiving. That's obviously not what you want.
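Put into numbers (a Python sketch of the example above; the 1/π factor is the standard Lambertian normalization):

```python
import math

albedo = 0.5        # LDR surface albedo, stays in [0, 1]
intensity = 100.0   # HDR light intensity

naive = albedo * intensity                    # 50.0, as in the example
conserving = (albedo / math.pi) * intensity   # ~15.9 with the 1/pi factor

# Either way the surface never reflects more than it receives,
# which an albedo pushed above 1 would violate.
assert naive <= intensity and conserving <= intensity
```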

[quote]
larspensjo, you seem to be a bit confused on this subject.
[/quote]

Thanks for info, and please excuse me for adding confusion!

I guess what is clear is that there are at least two ways to transform back from HDR (tone mapping). One simple solution is to use the Reinhard tonemapper, x/(1+x), for every pixel. But this has disadvantages: it won't give extra detail in the lower-brightness parts of the picture. The OP is using a more advanced solution, which makes use of adaptive luminance. That is, pixels are sampled to get an average luminance, which is then used in the tone mapping function.
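The average luminance in that adaptive step is conventionally the log-average over all pixels, as in Reinhard's paper. A sketch in Python (the delta term is my choice to avoid log(0) on black pixels):

```python
import math

def log_average_luminance(lums, delta=1e-4):
    """Log-average ("key") of per-pixel luminances, per Reinhard et al."""
    return math.exp(sum(math.log(delta + l) for l in lums) / len(lums))

# A mostly-dark frame with one bright pixel: the log-average stays low,
# so the tonemapper brightens the scene instead of exposing for the spike.
print(log_average_luminance([0.05] * 99 + [10.0]))
```

In a real renderer this is what the repeated downsampling of the luminance buffer computes.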

The advantage of the plain Reinhard tonemapper is that you don't need to know the average luminance, and so you don't need the downsampling process. I guess the choice then depends on the requirements. If the requirement is to enhance an HDR photo so as to make both high- and low-luminance areas visible, then the plain Reinhard filter isn't good enough.


[quote]
I set up my render target texture as D3DFMT_A16B16G16R16F. How do I actually get those extra bits of color? If I render my scene as-is and then render that texture out to the screen, it's exactly the same.
[/quote]

One way is to load one or more external bitmaps in floating point format, already saved with the extra information. If you just take a normal 24-bit bmp file, for example, you have already lost the HDR information.

Another way is that you have a picture manipulation algorithm that adds details of higher resolution.


[quote]
The HDR_Pipeline sample actually has a pixel shader that multiplies each RGB by a scalar factor. Is this how HDR is done? So, every object (or really every shader) in my scene now has to multiply its pixel RGB by some magical scalar factor so I can later use that for my luminance calculation? Is this how everyone does it, take the resulting RGB from a model and multiply it by some scalar to get results over 1.0?
[/quote]

That would transform the color into HDR space, this time using a linear transform. But then we are in the same "problem domain" MJP is talking about, and his arguments have to be considered.

I will explain the way I use HDR in my game. It may, or may not, be relevant to the requirements of the OP. And there may be basic problems in the design, as pointed out by MJP. However, it seems to work quite well.

I have a diffuse color map where I want to add the effects of lighting. I add up the contributions from all light sources (maintained in a separate R16F buffer) for every pixel. The diffuse color map is coded in the range [0,1), and if I simply multiply the color with the "light intensity" from the R16F buffer, I can get a value bigger than one (depending on the number of light sources). I think it can be seen as adding energy. Instead of using clamping, I want to use tone mapping, and the Reinhard filter was my first choice.

When applying the Reinhard filter with a light contribution near a factor of 1, the high-luminance values of the original diffuse colors get compressed. That is the reason I apply an inverse Reinhard filter to the diffuse color before adding light manipulations. So to speak, I am creating the HDR image from the diffuse colors. I suppose this may be physically incorrect, but it looks fine. The Reinhard filter works fine in this application, as the goal is not to enhance low-luminance details.
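That pipeline, expand the diffuse with inverse Reinhard, apply the lights, compress back, can be sketched per channel in Python (the clamp value 0.99 is an arbitrary choice of mine):

```python
def inverse_reinhard(x, cap=0.99):
    x = min(x, cap)          # x/(1-x) blows up at x == 1, so clamp just below
    return x / (1.0 - x)

def reinhard(x):
    return x / (1.0 + x)

def shade(diffuse, light):
    """Expand diffuse to HDR, multiply in summed light, compress to [0,1)."""
    return reinhard(inverse_reinhard(diffuse) * light)

# With a light contribution near 1, bright diffuse texels round-trip intact
# instead of being darkened the way plain Reinhard on raw diffuse would be:
print(shade(0.8, 1.0))       # ≈ 0.8, essentially unchanged
print(reinhard(0.8 * 1.0))   # ≈ 0.44, plain Reinhard compresses it
```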
[attachment=11327:Saturation1.jpg][attachment=11328:Saturation2.jpg]

This topic is closed to new replies.
