# HDR + Tonemapping + Skylight = Fail?


## Recommended Posts

Regarding sRGB and LogLuv: when using an sRGB texture, it goes through a linear->gamma transform on write and a gamma->linear transform on sample, so it shouldn't have much of an impact, but it can mean that the value you sample later is slightly different from the value you originally wrote. This transform is only designed for storing RGB "color" information in a perceptually efficient way (distributing more bits to darker colors), where it doesn't matter if the values change slightly.
However, LogLuv splits the L component over two 8-bit channels, and if either of those channels is slightly modified, then when you reconstruct the original 16-bit value it will be wildly different. This makes it more of a "data" texture than a "color" texture, so sRGB encoding should not be used.
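To see why a small per-channel change corrupts LogLuv data, here is a minimal sketch (not the full LogLuv codec) of packing a 16-bit value into two 8-bit channels, as LogLuv does for its log-luminance component:

```python
def pack_16_to_8x2(value16):
    """Split a 16-bit integer into (high byte, low byte)."""
    return (value16 >> 8) & 0xFF, value16 & 0xFF

def unpack_8x2_to_16(hi, lo):
    """Recombine the two 8-bit channels into the 16-bit value."""
    return (hi << 8) | lo

hi, lo = pack_16_to_8x2(0x1234)
assert unpack_8x2_to_16(hi, lo) == 0x1234

# A change of just 1 in the high byte shifts the reconstructed value
# by 256 -- which is why an sRGB transform (or any filtering) applied
# to these "data" channels wrecks the result.
corrupted = unpack_8x2_to_16(hi + 1, lo)
```

A perturbation that would be imperceptible on a color channel becomes a jump of 256 in the reconstructed luminance.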

Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?

Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and renders it to another RT that is 1/2 the size of the source RT, three times. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering three 1/2-size RTs?

/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
downscale1.InUse = false;

Engine.Context.PixelShader.SetShaderResource(null, 0);

IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
downscale2.InUse = false;

PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
downscale3.InUse = false;
}

##### Share on other sites

Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?
That sounds right.
Not every input surface needs to be sRGB though -- e.g. normal maps are also "data" and shouldn't have "gamma correction" applied to them (or you'll bend all your normals!).
The standard gamma space for computer monitors is sRGB, so when outputting a final "LDR" image it should be sRGB-encoded, so that the monitor displays it as you intend.

This also means that any input "color-type" textures that are authored on an sRGB monitor actually contain ("gamma space") sRGB colors, because when your artists were painting those colors they were viewing them on an sRGB device.
All of your lighting math should be done in linear space (not gamma space), so when sampling from these textures you need the [font=courier new,courier,monospace]sRGB->Linear[/font] transform to take place (which happens automatically if you tell the API that it's an sRGB texture).
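For reference, the sRGB<->linear transforms the hardware applies can be sketched like this (the piecewise curve is from the sRGB specification; the GPU implements the same thing):

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel (0..1) for display/storage."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055

# Round-tripping is nearly lossless for "color" data, which is why
# the transform is harmless for color textures but not for packed data.
x = 0.5
assert abs(linear_to_srgb(srgb_to_linear(x)) - x) < 1e-9
```

Note that a mid-grey sRGB value of 0.5 decodes to roughly 0.21 in linear light, which is exactly the "more bits for darker colors" behaviour described above.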

##### Share on other sites
I just finished my implementation this morning, and at the end, when the color correction worked, I had some issues with blur and brightness. I got the main ideas from MJP's implementation, but I changed two things:
First: sampling the luminance from the downscaled HDR texture was not acceptable for me. The average-luminance differences were way too high when moving the camera through the scene. I know I could change the adaptation instead, but sampling from the whole screen space gives smoother results too.
Second: I changed the luminance sampling code itself to the code provided in the DirectX SDK.

I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?

##### Share on other sites

I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?

Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image and not on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.

Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and renders it to another RT that is 1/2 the size of the source RT, three times. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering three 1/2-size RTs?

/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
downscale1.InUse = false;

IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
downscale2.InUse = false;

PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
downscale3.InUse = false;
}

The result render target in the function is 1/16th the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary render target that is 1/2 the previous one's size. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and then it is rendered to the result render target and you get your 1/16-sized texture.
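The intended chain of repeated halving can be sketched as follows (the helper names in the sample are simplified away; this just tracks the sizes each pass should produce):

```python
def downscale_chain(width, height, passes=4):
    """Return the target sizes produced by successive half-size passes."""
    sizes = []
    for _ in range(passes):
        # Each pass should halve the PREVIOUS target, not the source.
        width, height = width // 2, height // 2
        sizes.append((width, height))
    return sizes

# 1280x720 -> 640x360 -> 320x180 -> 160x90 -> 80x45 (1/16th)
assert downscale_chain(1280, 720)[-1] == (1280 // 16, 720 // 16)
```

Four half-size passes compound to 1/16th; a single pass straight to 1/16th would skip too many texels for bilinear hardware filtering to average correctly.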

##### Share on other sites
Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image and not on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.

Ok, so instead of starting at the 1/16th-size image, you started from the full-size image, calculated the initial luminance using the function you provided, and then downscaled it to a 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?

The result render target in the function is 1/16th the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary render target that is 1/2 the previous one's size. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and then it is rendered to the result render target and you get your 1/16-sized texture.

I had figured that was the intention, but it's not actually using the size of the previous RT; it keeps using the size of the source RT. I checked with the debugger: source.Width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample then.

##### Share on other sites

Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image and not on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.

Ok, so instead of starting at the 1/16th-size image, you started from the full-size image, calculated the initial luminance using the function you provided, and then downscaled it to a 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?

Yes, that is correct. As far as I understand, the sampling helps get a smoother result. I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong, but it gives me a better result. Oh, and the "+1" added to vSample is a project-specific color-correction constant, so you can ignore it.

The result render target in the function is 1/16th the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary render target that is 1/2 the previous one's size. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and then it is rendered to the result render target and you get your 1/16-sized texture.

I had figured that was the intention, but it's not actually using the size of the previous RT; it keeps using the size of the source RT. I checked with the debugger: source.Width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample then.

That's strange indeed. It's probably a bug. I never noticed it because I'm not using the hardware scaling.

##### Share on other sites

Yes, that is correct. As far as I understand, the sampling helps get a smoother result. I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong, but it gives me a better result. Oh, and the "+1" added to vSample is a project-specific color-correction constant, so you can ignore it.

Why not sample once, and save yourself the loop and the 9x cost?

##### Share on other sites

Yes, that is correct. As far as I understand, the sampling helps get a smoother result. I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong, but it gives me a better result. Oh, and the "+1" added to vSample is a project-specific color-correction constant, so you can ignore it.

Why not sample once, and save yourself the loop and the 9x cost?

You're probably right. I did a quick test and didn't notice any big differences, but also no performance improvement at all. I'm messing around with the shaders for some other features at the moment, though, so I'll have to test it again later.

##### Share on other sites
So I took a look at the initial luminance image being generated, and it looks a bit odd to me. I'm not sure what it's supposed to look like, but it seems wrong that everything that is not the sky has exactly 0 luminance, and that the sky has such a strong gradient on it.

Initial Luminance:
[attachment=10975:luminance.png]

Final Image:
[attachment=10976:RuinValor 2012-08-28 05-31-54-35.png]

##### Share on other sites
The problem is that the dot product in the luminance sample function is smaller than 1, so the log becomes negative, and negative values are clamped to 0. What you can do is add a "+1" to make sure the log is always greater than or equal to 0, or boost the lighting so that the values in the HDR image get higher.

##### Share on other sites
Well, we did some of that and we got a good+bad result.

##### Share on other sites
You do not want to clamp your log(luminance) at 0 or 1. The value can and will be negative. A 32-bit or 16-bit floating point format has a sign bit, so there is no problem with storing a negative number in such a render target. The only thing you'll want to do is clamp the luminance value to some small epsilon before taking the log of it, since log(0) is undefined.
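The standard log-average (geometric mean) luminance with the epsilon clamp described above can be sketched like this (the epsilon value is an illustrative choice, not a prescribed one):

```python
import math

def log_average_luminance(luminances, eps=1e-4):
    """Geometric mean of luminance: exp(mean(log(max(L, eps)))).

    Clamping to eps only guards against log(0); the log values
    themselves are allowed to be negative and should be stored
    in a signed floating-point render target.
    """
    total = sum(math.log(max(lum, eps)) for lum in luminances)
    return math.exp(total / len(luminances))

pixels = [0.0, 0.5, 2.0, 8.0]  # dark pixels produce negative logs
avg = log_average_luminance(pixels)
```

Clamping the *log* to 0 instead (as in the luminance image above) discards every pixel darker than 1.0, which is why everything but the sky reads as zero.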

##### Share on other sites
So the luminance is supposed to look like that then? I also tried turning up the lighting values a lot, and the luminance looks a bit more normal, but the final screen result is about the same, so... I guess the luminance isn't really the issue.

There is still an issue with the bloom though. It seems like only colors with a large amount of red in them get any bloom applied (although white doesn't seem to get any bloom). I've attached a screenshot of several different colored blocks.

[attachment=10983:RuinValor 2012-08-28 18-02-28-60.png]

##### Share on other sites
And to give you guys some video of what is going on:

[media]
[/media]

##### Share on other sites
So, I was trying my best to figure out what could be causing this, and we are more or less stuck! We think it is due to the blur... I started playing some of my newer games to see what they are doing, and whether this is a common thing. I noticed the same effect in The Witcher (although theirs does not look like crap!).

Perhaps the answer is that we can't use bright tones of colors? This overblur only happens on red-tone colors... perhaps there is something wrong with the tonemapping? If anybody has any suggestions, at this point we are at a loss for what to do. I don't want to break down each system and find it that way, as that would be a lot of work... but it is looking more and more like that is the only solution.


##### Share on other sites
How does your bloom effect work?

##### Share on other sites
Well... I convinced him to make a slider system so we could control the settings... and... we came up with the following:

Now this looks rather good so far, but there are certain lighting scenarios where the blur just goes ape shit again. This new control system we set up really helps... curious if we could do on-the-fly editing of it... hm...

Anyway, we also fixed the auto-exposure setting. Once we get some shadows in (another issue), we are going to be in business!

##### Share on other sites

How does your bloom effect work?

It's the same as in the sample. It takes the 1/16-size image, applies a bright pass, blurs it, then upsamples it to 1/2 size to pass on to the tone mapping. The only problem now is the kind of odd color bias: some colors receive bloom, others don't. Specular highlights are also receiving no bloom. Are specular highlights supposed to have light values > 1?
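A bright pass of the kind described here can be sketched as a threshold-and-rescale per pixel (the threshold and offset names are illustrative, not the sample's exact parameters):

```python
def bright_pass(lum, threshold=1.0, offset=0.5):
    """Return the bloom contribution for one luminance value.

    Pixels below the threshold contribute nothing; brighter pixels
    contribute more, approaching 1.0 asymptotically.
    """
    value = max(lum - threshold, 0.0)
    return value / (offset + value)

assert bright_pass(0.5) == 0.0               # below threshold: no bloom
assert bright_pass(10.0) > bright_pass(2.0)  # brighter pixels bloom more
```

With a threshold of 1.0, only HDR values above 1 bloom at all, which is one reason specular highlights need to exceed 1 to show up.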

Also, the auto-exposure adjustment seems to be too good. If you're in a dark setting, it adjusts the lighting all the way up until it looks like daylight again. How can we control it so we can still have dark scenes...?

##### Share on other sites
Honestly, I have no idea why you're experiencing those strange issues. I'm using the same bloom effect, I think it looks good, and the speculars receive bloom. But I don't know your lighting system; maybe your specular exponents are too high. And what is your luminance conversion constant? Usually blue is about 0.1 or so, while R and G are 0.5, 0.7... Turning up the blue value could help, or maybe that's just a stupid thought.

Also, the auto-exposure adjustment seems to be too good. If you're in a dark setting, it adjusts the lighting all the way up until it looks like daylight again. How can we control it so we can still have dark scenes...?

A few days ago I had a similar problem, but more extreme. In shadows it just burned out all the colors until the whole screen was white, and sometimes even further until all the colors got negated. Then I worked on a few other features, ended up with a new lighting system, and needed to overhaul my HDR code. While researching I found this article about tone mapping (http://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/), which actually helped me a lot. So I have rewritten my whole tone mapping code, and now it's working as it's supposed to; I'm pretty sure there was a mistake in my previous tone mapping code.

##### Share on other sites
What kind of values are you using for your sun light and sky? I've implemented the new tonemapping (using the filmic ALU one) and now everything is pretty dark.

[attachment=11066:RuinValor 2012-09-03 05-22-57-42.png]

##### Share on other sites
I personally like the look; we just need to find a way to fix the highlights of brighter colors. When we place sand it should be brighter. Would luminance do that? Again, I like this new color and tone, as it is very rich in color! We just need to figure out how to get sand and such to highlight now. Thank you guys for your continued help. I am seriously going to put in an advert for gamedev.net when we make the game playable.

##### Share on other sites

What kind of values are you using for your sun light and sky? I've implemented the new tonemapping (using the filmic ALU one) and now everything is pretty dark.

Just the standard values I was using before HDR, with a slight adjustment to the ambient light and a bit more light power. But I experienced this too, and turning down the maximum white value helps a lot. Turning it down to 1 - 1.5 works great for me.
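To illustrate what the maximum-white value does, here is the extended Reinhard operator (one common curve with a white point; not necessarily the exact filmic ALU curve discussed in the thread): L_out = L * (1 + L/Lw^2) / (1 + L).

```python
def reinhard_extended(lum, white=1.5):
    """Map an HDR luminance to display range; inputs at `white` map to 1.0."""
    return lum * (1.0 + lum / (white * white)) / (1.0 + lum)

# Lowering the white point maps mid-range values closer to full
# brightness, which is why turning it down to ~1.0-1.5 brightens
# an otherwise dark-looking scene.
assert reinhard_extended(0.5, white=1.0) > reinhard_extended(0.5, white=1.5)
```

Whatever curve you use, the white point is effectively the luminance that should come out as pure white, so a scene whose brightest values sit well below it will render dark.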

I personally like the look; we just need to find a way to fix the highlights of brighter colors. When we place sand it should be brighter. Would luminance do that? Again, I like this new color and tone, as it is very rich in color! We just need to figure out how to get sand and such to highlight now. Thank you guys for your continued help. I am seriously going to put in an advert for gamedev.net when we make the game playable.

Do you use some sort of material shader? Different light values, like a diffuse color or a multiplier, could do the job of boosting the values of some blocks while leaving the rest untouched. You could also experiment with specular exponents, but in my opinion it looks unnatural when the exponent gets too small. Tweaking your textures may help too; sometimes textures that look good in LDR look bad with HDR.

##### Share on other sites

Do you use some sort of material shader? Different light values, like a diffuse color or a multiplier, could do the job of boosting the values of some blocks while leaving the rest untouched. You could also experiment with specular exponents, but in my opinion it looks unnatural when the exponent gets too small. Tweaking your textures may help too; sometimes textures that look good in LDR look bad with HDR.

Certainly. Well, we switched some things around, finally added in the filmic system, got a very desirable effect overall, and even got specular blur working! YEAH!! Now our issue is the skybox: everything looks great except that the skybox is being destroyed by the entire process (we are getting a very washed-out color even though I have a vibrant one in the settings).

It is key to note that we are not using a texture for the skybox. This enables us to control the color of the sky manually at any point in the day and create transitions from one color to the next. The only parts that are images are the sun and the moon, so we are kind of at a loss as to how we could fix this. We have attempted to change the lighting multiplier for the sky and other things, but ultimately the result is a washed-out color.

Any ideas as to how we could either get the sky to be a rich, vibrant color, or make it bypass the entire process altogether? This system is proving to be the most troublesome... and rather annoying. We still have an issue with the auto-exposure correction as well. We are unsure of where to modify the values that control how strongly it works... so when you go into a dark region, the outside becomes... daytime! lmao

While funny at first, it is proving to be annoying in many cases. Here are some screenshots. Btw, I want to thank you guys for all this help. I hate how much of a bother we are being, but this is like pulling teeth for us, since none of us have ever coded an HDR setup before.

Exposure issue:

and the sky issue:

##### Share on other sites
Actually... I just changed the values to a richer version of the color... and I got some very nice results. I think this is why I could never understand it. I need to pick colors that compensate for the HDR... hm...

So at this point our only issue is the auto exposure. Does anybody have any good tutorials for this? Maybe we are using one that has no settings. We simply need to find a way to clamp the value and change how much it does or doesn't blow out... sadly, my knowledge of this system is very much lacking. Thanks for any direction or help you can provide. I think after we are done with this, I am going to write an article that describes our path and how we got to the final result.

##### Share on other sites
In my last game, we simply gave the artists a minimum and maximum value that would be applied to the "average scene luminance" step.
e.g. the artists might say min=0.5, max=10, and then if the average is calculated to be 0.1, it will be increased to 0.5.

We found this to be a simple way to stop the auto-exposure making every different scene look the same.
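The min/max clamp described above can be sketched in a couple of lines (the default bounds are just the example values from this post; in practice they are per-scene artist settings):

```python
def clamp_average_luminance(avg_lum, min_lum=0.5, max_lum=10.0):
    """Clamp the measured average scene luminance to artist-chosen
    bounds before it feeds the exposure calculation, so very dark or
    very bright scenes are not fully normalized away."""
    return min(max(avg_lum, min_lum), max_lum)

assert clamp_average_luminance(0.1) == 0.5    # too dark: raised to the floor
assert clamp_average_luminance(50.0) == 10.0  # too bright: capped
```

Anything inside the bounds still auto-exposes normally; the clamp only stops the adaptation from turning a cave into daylight.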
