HDR + Tonemapping + Skylight = Fail?

Yeah, I did see that on the blog and fixed it. It did indeed make it look a lot better.
I just finished my implementation this morning, and at the end, when the color correction worked, I had some issues with blur and brightness. I got the main ideas from MJP's implementation, but I changed two things:
First: Sampling the luminance from the downscaled HDR texture was not acceptable for me. The average luminance differences were way too high when moving the camera through the scene. I know I could change the adaptation, but sampling from the whole screen space gives smoother results too.
Second: I changed the luminance sampling code itself to the code that is provided in the DirectX SDK:
[source lang="cpp"]
float3 vSample = 0.0f;
float fLogLumSum = 0.0f;

for (int iSample = 0; iSample < 9; iSample++)
{
    // Compute the sum of log(luminance) throughout the sample points
    vSample = tex2D(PointSampler0, PSIn.TexCoord) + 1;
    fLogLumSum += log(dot(vSample, LUM_CONVERT) + 0.0001f);
}

// Divide the sum to complete the average
fLogLumSum /= 9;

return float4(fLogLumSum, fLogLumSum, fLogLumSum, 1.0f);
[/source]
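
If you do want to tweak the adaptation instead, a minimal sketch of the usual exponential adaptation pass looks like this (LastLumSampler, CurrentLumSampler, fDeltaTime and fTau are placeholder names, not taken from the code above):
[source lang="cpp"]
sampler2D LastLumSampler;    // 1x1 adapted luminance from the previous frame
sampler2D CurrentLumSampler; // 1x1 average luminance measured this frame
float fDeltaTime;            // frame time in seconds
float fTau;                  // adaptation rate; larger = faster adaptation

// Blend last frame's adapted luminance toward this frame's measured average,
// so the exposure drifts smoothly instead of snapping when the camera moves.
float4 AdaptLuminancePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float fLastLum = tex2D(LastLumSampler, texCoord).r;
    float fCurrentLum = tex2D(CurrentLumSampler, texCoord).r;

    float fAdaptedLum = fLastLum + (fCurrentLum - fLastLum) * (1.0f - exp(-fDeltaTime * fTau));

    return float4(fAdaptedLum, fAdaptedLum, fAdaptedLum, 1.0f);
}
[/source]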

Do you have a project as well, or are you just doing these things for fun? Just curious. When Telanor gets up I will make sure he goes over what you have provided. Thank you again for all your help, and to those who have helped previously.
I have a project but haven't published any footage, because I think it is not good enough yet and needs to grow a bit more. It's an RPG and I'm writing the engine in C# like you guys, but at the moment I have XNA under the hood.

[quote name='Telanor' timestamp='1345966702' post='4973421']I think the reason for the problem is a combination of drawing the sky to the gbuffer's color RT (which is a R8G8B8A8_UNORM_SRGB surface)[/quote]
[quote]Usually the GBuffer stores surface albedo, which is a 0-1 fractional/percentage value. A skydome, though, is more like an emissive surface than a regular diffuse surface, so it doesn't make sense to render it into your GBuffer's albedo channels. When adding emissive surfaces to a deferred renderer, the usual approach is to render these surfaces directly into your light-accumulation buffer, instead of into the GBuffer.[/quote]
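
For illustration, here is a minimal sketch of what such an emissive sky pass might look like when written straight into an HDR light-accumulation target (SkyGradientSampler, SkyIntensity and the output struct are made-up names, not taken from anyone's actual code):
[source lang="cpp"]
sampler2D SkyGradientSampler; // simple vertical sky gradient texture
float SkyIntensity;           // HDR scale for the sky radiance

struct SkyVSOutput
{
    float4 Position : POSITION0;
    float3 ViewDir  : TEXCOORD0;  // direction toward the skydome vertex
};

// Drawn after the GBuffer pass, directly into the light-accumulation buffer,
// wherever the depth buffer still holds the far plane.
float4 SkyPS(SkyVSOutput input) : COLOR0
{
    float3 dir = normalize(input.ViewDir);

    // Vertical gradient lookup; any sky model could go here instead.
    float3 skyColor = tex2D(SkyGradientSampler, float2(0.5f, 0.5f - 0.5f * dir.y)).rgb;

    // Output is linear HDR radiance, so it gets tonemapped along with the lit scene.
    return float4(skyColor * SkyIntensity, 1.0f);
}
[/source]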

This is an interesting point, because most people would just render their skybox/skydome into the albedo buffer and have it skip the lighting pass, so it keeps its original color in the final render. So suppose the sky is a blue gradient: you just leave the pixels black in the albedo buffer where the sky would be, and draw the shades of blue in the lighting pass? How would you apply sky lighting to all the objects in the scene? Usually, I light everything in outdoor scenes with directional lighting.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor


Regarding SRGB and LogLUV: when using an SRGB texture, it will go through a linear->gamma transform when writing, and a gamma->linear transform when sampling, so it shouldn't have much of an impact, but it can mean that the value that you sample later is slightly different to the value that you wrote originally. This transform is only designed to be used to store RGB "color" information in a perceptually efficient way (distributing more bits to darker colors) where it doesn't matter if there is a slight change in the values.
However, LogLUV splits the L component over 2 8-bit channels, and if either of those channels is slightly modified, then when you reconstruct the original 16-bit value, it will be wildly different. This makes it more of a "data" texture than a "color" texture, and so SRGB encoding should not be used.
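
For reference, here is roughly the commonly circulated LogLuv encode/decode pair (the version floating around the XNA HDR samples); note how the log-luminance Le is split across the z and w channels, which is exactly the data an sRGB transform would corrupt:
[source lang="cpp"]
// M matrix, for encoding RGB into an Xp/Y/XYZp space
static const float3x3 M = float3x3(
    0.2209, 0.3390, 0.4184,
    0.1138, 0.6780, 0.7319,
    0.0102, 0.1130, 0.2969);

// Inverse M matrix, for decoding
static const float3x3 InverseM = float3x3(
     6.0014, -2.7008, -1.7996,
    -1.3320,  3.1029, -5.7721,
     0.3008, -1.0882,  5.6268);

float4 LogLuvEncode(in float3 vRGB)
{
    float4 vResult;
    float3 Xp_Y_XYZp = mul(vRGB, M);
    Xp_Y_XYZp = max(Xp_Y_XYZp, float3(1e-6, 1e-6, 1e-6));
    vResult.xy = Xp_Y_XYZp.xy / Xp_Y_XYZp.z;

    // Log-encoded luminance, split across two 8-bit channels (z = high bits, w = low bits)
    float Le = 2.0f * log2(Xp_Y_XYZp.y) + 127.0f;
    vResult.w = frac(Le);
    vResult.z = (Le - (floor(vResult.w * 255.0f)) / 255.0f) / 255.0f;
    return vResult;
}

float3 LogLuvDecode(in float4 vLogLuv)
{
    // Reassemble the 16-bit log-luminance from the two 8-bit channels
    float Le = vLogLuv.z * 255.0f + vLogLuv.w;
    float3 Xp_Y_XYZp;
    Xp_Y_XYZp.y = exp2((Le - 127.0f) / 2.0f);
    Xp_Y_XYZp.z = Xp_Y_XYZp.y / vLogLuv.y;
    Xp_Y_XYZp.x = vLogLuv.x * Xp_Y_XYZp.z;
    return max(mul(Xp_Y_XYZp, InverseM), 0.0f);
}
[/source]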


Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?

Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and then renders it to another RT which is 1/2 the size of the source RT, 3 times. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering 3 half-size RTs?


/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
    IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

    IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
    downscale1.InUse = false;

    Engine.Context.PixelShader.SetShaderResource(null, 0);

    IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
    downscale2.InUse = false;

    PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
    downscale3.InUse = false;
}

[quote]Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?[/quote]
That sounds right.
Not every input surface needs to be sRGB though -- e.g. normal maps are also "data" and shouldn't have "gamma correction" applied to them (or you'll bend all your normals!).
The standard gamma space for computer monitors is sRGB, so when outputting a final "LDR" image, it should be sRGB encoded, so that the monitor displays it as you intend.

This also means that any input "colour-type" textures that are authored on an sRGB monitor actually contain ("gamma space") sRGB colours, because when your artists were painting those colours they were viewing them on an sRGB device.
All of your lighting math should be done in linear space (not gamma space), so when sampling from these textures you need the [font=courier new,courier,monospace]sRGB->Linear[/font] transform to take place (which happens automatically if you tell the API that it's an sRGB texture).
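
As a rough illustration of what those automatic transforms amount to (this sketch uses the common 2.2 approximation rather than the exact piecewise sRGB curve, and AlbedoSampler, LightColor and NdotL are just placeholder names):
[source lang="cpp"]
sampler2D AlbedoSampler; // colour texture authored in sRGB space
float3 LightColor;       // placeholder light colour
float NdotL;             // placeholder per-pixel lighting term

float3 SRGBToLinear(float3 srgb) { return pow(srgb, 2.2f); }
float3 LinearToSRGB(float3 lin)  { return pow(lin, 1.0f / 2.2f); }

float4 LightingPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    // If the texture is flagged as sRGB, the hardware does this conversion for you on sample
    float3 albedo = SRGBToLinear(tex2D(AlbedoSampler, texCoord).rgb);

    // All lighting math happens in linear space
    float3 lit = albedo * LightColor * NdotL;

    // If the render target is flagged as sRGB, the hardware does this conversion for you on write
    return float4(LinearToSRGB(lit), 1.0f);
}
[/source]
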
[quote]I just finished my implementation this morning, and at the end, when the color correction worked, I had some issues with blur and brightness. I got the main ideas from MJP's implementation, but I changed two things:
First: Sampling the luminance from the downscaled HDR texture was not acceptable for me. The average luminance differences were way too high when moving the camera through the scene. I know I could change the adaptation, but sampling from the whole screen space gives smoother results too.
Second: I changed the luminance sampling code itself to the code that is provided in the DirectX SDK.[/quote]


I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?

[quote]I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?[/quote]


Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image and not on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.


[quote]Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and then renders it to another RT which is 1/2 the size of the source RT, 3 times. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering 3 half-size RTs?

/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
    IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

    IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
    downscale1.InUse = false;

    Engine.Context.PixelShader.SetShaderResource(null, 0);

    IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
    downscale2.InUse = false;

    PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
    downscale3.InUse = false;
}[/quote]


The result render target in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary render target which is 1/2 the size of the previous one. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and then it is rendered into the result render target and you get your 1/16-sized texture.
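
For what it's worth, the shader side of such a hardware-filtered downscale usually boils down to a single bilinear fetch, something like this sketch (LinearSampler0 here is a placeholder for a linear-filtering sampler bound to the larger source RT, not necessarily what MJP's sample uses):
[source lang="cpp"]
sampler2D LinearSampler0; // bilinear sampler bound to the larger source render target

// Because the destination RT is half the size and the sampler filters linearly,
// each output pixel ends up as the average of a 2x2 block of source texels.
float4 ScaleHWPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    return tex2D(LinearSampler0, texCoord);
}
[/source]
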
[quote]Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image and not on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.[/quote]


OK, so instead of starting at the 1/16th-size image, you started from the full-size image, calculated the initial luminance using the function you provided, and then downscaled it to a 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?

[quote]The result render target in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary render target which is 1/2 the size of the previous one. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and then it is rendered into the result render target and you get your 1/16-sized texture.[/quote]


I had figured that was the intention, but it's not actually using the size of the previous RT; it's continually using the size of the source RT. I checked with the debugger: source.Width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample then.

This topic is closed to new replies.
