
HDR + Tonemapping + Skylight = Fail?



#21 Telanor   Members   -  Reputation: 1311


Posted 27 August 2012 - 04:56 AM

Yea I did see that on the blog and fixed it. It did indeed make it look a lot better.


#22 quiSHADgho   Members   -  Reputation: 325


Posted 27 August 2012 - 06:12 AM

I just finished my implementation this morning; once the color correction was working I still had some issues with blur and brightness. I took the main ideas from MJP's implementation, but I changed two things:
First: sampling the luminance from the downscaled HDR texture was not acceptable for me. The average luminance jumped around far too much when moving the camera through the scene. I know I could tweak the adaptation instead, but sampling from the whole screen gives smoother results too.
Second: I replaced the luminance sampling code itself with the code provided in the DirectX SDK:
[source lang="cpp"]
float3 vSample = 0.0f;
float fLogLumSum = 0.0f;

for (int iSample = 0; iSample < 9; iSample++)
{
    // Compute the sum of log(luminance) throughout the sample points
    vSample = tex2D(PointSampler0, PSIn.TexCoord) + 1;
    fLogLumSum += log(dot(vSample, LUM_CONVERT) + 0.0001f);
}

// Divide the sum to complete the average
fLogLumSum /= 9;

return float4(fLogLumSum, fLogLumSum, fLogLumSum, 1.0f);
[/source]
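(For reference: as far as I can tell, the SDK version offsets the nine taps so they cover a 3x3 block of texels instead of reading the same coordinate nine times. Roughly like the sketch below - the function name and the SampleOffsets array are just illustrative, and the offsets would be filled in from the CPU.)

[source lang="cpp"]
float2 SampleOffsets[9];   // filled from the CPU with the offsets of a 3x3 block of texels

float4 SampleLumInitialPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float fLogLumSum = 0.0f;

    for (int iSample = 0; iSample < 9; iSample++)
    {
        // Each tap reads a different texel of the 3x3 neighbourhood
        float3 vSample = tex2D(PointSampler0, texCoord + SampleOffsets[iSample]).rgb;
        fLogLumSum += log(dot(vSample, LUM_CONVERT) + 0.0001f);
    }

    // Average of the nine log(luminance) values
    fLogLumSum /= 9;
    return float4(fLogLumSum, fLogLumSum, fLogLumSum, 1.0f);
}
[/source]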

#23 riuthamus   Moderators   -  Reputation: 4838


Posted 27 August 2012 - 07:04 AM

quiSHADgho, on 27 August 2012 - 06:12 AM, said:
I just finished my implementation this morning; once the color correction was working I still had some issues with blur and brightness. I took the main ideas from MJP's implementation, but I changed two things: [rest of post #22 and code snipped - see above]


Do you have a project as well, or are you just doing these things for fun? Just curious, is why I ask. When Telanor gets up I will make sure he goes over what you have provided. Thank you again for all your help, and to those who have helped previously.

#24 quiSHADgho   Members   -  Reputation: 325


Posted 27 August 2012 - 07:29 AM

I have a project, but I haven't published any footage yet because I don't think it's good enough and it needs to grow a bit more. It's an RPG, and like you guys I'm writing the engine in C#, but at the moment I've got XNA under the hood.

#25 CC Ricers   Members   -  Reputation: 623


Posted 27 August 2012 - 10:01 AM

Quote:
I think the reason for the problem is a combination of drawing the sky to the gbuffer's color RT (which is a R8G8B8A8_UNORM_SRGB surface)

Quote:
Usually the GBuffer stores surface albedo, which is a 0-1 fractional/percentage value. A skydome is more like an emissive surface though, than a regular diffuse surface, so yeah, it doesn't make sense to render it into your GBuffer's albedo channels. When adding emissive surfaces to a deferred renderer, the usual approach is to render these surfaces directly into your light-accumulation buffer, instead of into the GBuffer.


This is an interesting point to make, because most people would just render their skyboxes/skydomes in the albedo buffer and make it skip the lighting pass so it keeps its original color in the final render. So suppose if the sky is a blue gradient, you just leave the pixels black in the albedo buffer where the sky would be, and draw the shades of blue in the lighting pass? How would you apply sky lighting to all the objects in it? Usually, I lit everything in outdoor scenes with directional lighting.
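
In shader terms, the quoted emissive-sky approach boils down to something like this rough sketch (SkySampler and SkyIntensity are hypothetical stand-ins for whatever actually produces your sky radiance):

[source lang="cpp"]
samplerCUBE SkySampler;   // hypothetical cubemap holding the sky colours
float SkyIntensity;       // hypothetical scale to push the sky into HDR range

// Drawn after the lighting pass, directly into the HDR light-accumulation
// target, with depth testing on so scene geometry still occludes the sky.
// No GBuffer data and no lighting math are involved for these pixels.
float4 SkyPS(float3 viewDir : TEXCOORD0) : COLOR0
{
    float3 hdrSky = texCUBE(SkySampler, normalize(viewDir)).rgb * SkyIntensity;
    return float4(hdrSky, 1.0f);
}
[/source]

The depth test keeps the sky behind real geometry, and since it never touches the GBuffer it is never lit or shadowed; it just contributes its own HDR radiance.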

Edited by CC Ricers, 27 August 2012 - 10:03 AM.

My development blog: Electronic Meteor

#26 Telanor   Members   -  Reputation: 1311


Posted 27 August 2012 - 07:42 PM

Quote:
Regarding SRGB and LogLUV: when using an SRGB texture, it will go through a linear->gamma transform when writing, and a gamma->linear transform when sampling, so it shouldn't have much of an impact, but it can mean that the value that you sample later is slightly different to the value that you wrote originally. This transform is only designed to be used to store RGB "color" information in a perceptually efficient way (distributing more bits to darker colors) where it doesn't matter if there is a slight change in the values.
However, LogLUV splits the L component over 2 8-bit channels, and if either of those channels is slightly modified, then when you reconstruct the original 16-bit value, it will be wildly different. This makes it more of a "data" texture than a "color" texture, and so SRGB encoding should not be used.


Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?

Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and renders it into another RT that is 1/2 the size of the source, three times in a row. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering three 1/2-size RTs?

/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
	IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
	PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

	IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
	PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
	downscale1.InUse = false;

	Engine.Context.PixelShader.SetShaderResource(null, 0);

	IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
	PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
	downscale2.InUse = false;

	PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
	downscale3.InUse = false;
}

Edited by Telanor, 27 August 2012 - 08:29 PM.


#27 Hodgman   Moderators   -  Reputation: 29493


Posted 27 August 2012 - 08:37 PM

Telanor, on 27 August 2012 - 07:42 PM, said:
Just to clarify then. If I want to keep the "gamma-correctness", I need to sample from an SRGB surface, do my calculations, LogLuv encode the result, and then after the HDR processing is done, write to an SRGB surface?

That sounds right.
Not every input surface needs to be sRGB though -- e.g. normal maps are also "data" and shouldn't have "gamma correction" applied to them (or you'll bend all your normals!).
The standard gamma space for computer monitors is sRGB, so when outputting a final "LDR" image, it should be sRGB encoded, so that the monitor displays it as you intend.

This also means that any input "colour-type" textures that are authored on an sRGB monitor actually contain ("gamma space") sRGB colours, because when your artists were painting those colours they were viewing them on an sRGB device.
All of your lighting math should be done in linear space (not gamma space), so when sampling from these textures you need the sRGB->Linear transform to take place (which happens automatically if you tell the API that it's an sRGB texture).
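
As a sketch of that flow (the single LightColor term and the exposure tone map below are deliberately simplified stand-ins, not anyone's actual lighting code):

[source lang="cpp"]
sampler AlbedoSampler;   // bound to a texture created with an sRGB format
sampler HDRSampler;      // the HDR scene texture produced by the lighting pass
float3 LightColor;       // stand-in for a real lighting term
float  Exposure;

// Lighting pass: all math happens in linear space.
float4 LightingPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Because the albedo texture is an sRGB resource, the hardware applies
    // the sRGB -> linear transform automatically when we sample it.
    float3 albedo = tex2D(AlbedoSampler, uv).rgb;
    float3 hdr = albedo * LightColor;     // simplified linear-space lighting
    return float4(hdr, 1.0f);             // written to an HDR (or LogLuv-encoded) target
}

// Final pass: tone map down to [0,1]; writing to an sRGB render target then
// applies the linear -> sRGB transform, so the monitor shows what you intended.
float4 FinalPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 hdr = tex2D(HDRSampler, uv).rgb;
    float3 ldr = 1.0f - exp(-hdr * Exposure);   // simple exposure-based tone map
    return float4(ldr, 1.0f);
}
[/source]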

#28 Telanor   Members   -  Reputation: 1311


Posted 27 August 2012 - 10:39 PM

quiSHADgho, on 27 August 2012 - 06:12 AM, said:
I just finished my implementation this morning; once the color correction was working I still had some issues with blur and brightness. I took the main ideas from MJP's implementation, but I changed two things:
First: sampling the luminance from the downscaled HDR texture was not acceptable for me. The average luminance jumped around far too much when moving the camera through the scene. I know I could tweak the adaptation instead, but sampling from the whole screen gives smoother results too.
Second: I replaced the luminance sampling code itself with the code provided in the DirectX SDK.


I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?

Edited by Telanor, 27 August 2012 - 10:40 PM.


#29 quiSHADgho   Members   -  Reputation: 325


Posted 28 August 2012 - 12:31 AM

Telanor, on 27 August 2012 - 10:39 PM, said:
I don't understand how this works. Isn't the point of downsampling so that you can get to a 1x1 texture which has the average luminance value? If you don't downsample, how do you get a single value?


Sure, that's the point. I still do the luminance downsampling, but the initial sampling is done on the whole HDR image rather than on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.

Telanor, on 27 August 2012 - 07:42 PM, said:
Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and renders it into another RT that is 1/2 the size of the source, three times in a row. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering three 1/2-size RTs?
[quoted code snipped - see post #26 above]


The result rendertarget in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times it renders into a temporary rendertarget that is 1/2 the size of the previous one. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and finally that is rendered into the result rendertarget, which gives you your 1/16-sized texture.
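For example, with a 1280x720 source the temporaries come out at 640x360, 320x180 and 160x90, and the result rendertarget at 80x45.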

#30 Telanor   Members   -  Reputation: 1311


Posted 28 August 2012 - 01:31 AM

quiSHADgho, on 28 August 2012 - 12:31 AM, said:
Sure, that's the point. I still do the luminance downsampling, but the initial sampling is done on the whole HDR image rather than on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.


Ok, so instead of starting from the 1/16th-size image, you start from the full-size image, calculate the initial luminance using the function you provided, and then downscale that to 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?

quiSHADgho, on 28 August 2012 - 12:31 AM, said:
The result rendertarget in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times it renders into a temporary rendertarget that is 1/2 the size of the previous one. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and finally that is rendered into the result rendertarget, which gives you your 1/16-sized texture.


I had figured that was the intention, but it's not actually using the size of the previous RT; it keeps using the size of the source RT. I checked with the debugger: source.Width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample then.

Edited by Telanor, 28 August 2012 - 01:32 AM.


#31 quiSHADgho   Members   -  Reputation: 325


Posted 28 August 2012 - 02:00 AM

quiSHADgho, on 28 August 2012 - 12:31 AM, said:
Sure, that's the point. I still do the luminance downsampling, but the initial sampling is done on the whole HDR image rather than on the DownscaleRenderTarget. The downside is that it has a slight impact on performance.

Telanor, on 28 August 2012 - 01:31 AM, said:
Ok, so instead of starting from the 1/16th-size image, you start from the full-size image, calculate the initial luminance using the function you provided, and then downscale that to 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?


Yes, that is correct. As far as I understand it, the multi-sampling helps to get a smoother result, but I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong; it does give me a better result though. Oh, and the "+1" to vSample is a project-specific color correction constant, so you can ignore it.

quiSHADgho, on 28 August 2012 - 12:31 AM, said:
The result rendertarget in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times it renders into a temporary rendertarget that is 1/2 the size of the previous one. So you have 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and finally that is rendered into the result rendertarget, which gives you your 1/16-sized texture.

Telanor, on 28 August 2012 - 01:31 AM, said:
I had figured that was the intention, but it's not actually using the size of the previous RT; it keeps using the size of the source RT. I checked with the debugger: source.Width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample then.


That's strange indeed. It's probably a bug. I never noticed it because I'm not using the hardware scaling.

#32 riuthamus   Moderators   -  Reputation: 4838


Posted 28 August 2012 - 02:44 AM

quiSHADgho, on 28 August 2012 - 02:00 AM, said:
Yes, that is correct. As far as I understand it, the multi-sampling helps to get a smoother result, but I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong; it does give me a better result though. Oh, and the "+1" to vSample is a project-specific color correction constant, so you can ignore it.


Why not sample once and save yourself the loop and the 9x cost?

#33 quiSHADgho   Members   -  Reputation: 325


Posted 28 August 2012 - 03:25 AM


quiSHADgho, on 28 August 2012 - 02:00 AM, said:
Yes, that is correct. As far as I understand it, the multi-sampling helps to get a smoother result, but I did not go through the whole C++ code of the DirectX SDK sample, so I could be wrong; it does give me a better result though. Oh, and the "+1" to vSample is a project-specific color correction constant, so you can ignore it.

riuthamus, on 28 August 2012 - 02:44 AM, said:
Why not sample once and save yourself the loop and the 9x cost?


You're probably right. I did a quick test and didn't notice any big difference, but also no performance improvement at all. I'm messing around with the shaders for some other features at the moment though, so I'll have to test it again later.

#34 Telanor   Members   -  Reputation: 1311


Posted 28 August 2012 - 03:41 AM

So I took a look at the initial luminance image being generated and it looks a bit odd to me. I'm not sure what it's supposed to look like, but it seems wrong to me that everything that is not the sky has exactly 0 luminance and that the sky has such a strong gradient on it.

Initial Luminance:
luminance.png

Final Image:
RuinValor 2012-08-28 05-31-54-35.png

#35 quiSHADgho   Members   -  Reputation: 325


Posted 28 August 2012 - 03:47 AM

The problem is that the dot product in the luminance sampling function is smaller than 1, so the log becomes negative, and negative values are clamped to 0. What you can do is add a "+1" so that the log is always greater than or equal to 0, or boost the lighting so that the values in the HDR image get higher.

#36 riuthamus   Moderators   -  Reputation: 4838


Posted 28 August 2012 - 08:55 AM

Well, we did some of that and we got a good+bad result.

#37 MJP   Moderators   -  Reputation: 10928


Posted 28 August 2012 - 03:06 PM

You do not want to clamp your log(luminance) at 0 or 1. The value can and will be negative. A 32-bit or 16-bit floating point format has a sign bit, so there is no problem with storing a negative number in such a render target. The only thing you'll want to do is clamp the luminance value to some small epsilon before taking the log of it, since log(0) is undefined.
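
In shader terms that's something along these lines (just a sketch: PointSampler0 is the HDR scene texture as in the earlier snippet, and the LUM_CONVERT weights shown are only an assumption - use whatever the rest of your pipeline uses):

[source lang="cpp"]
sampler PointSampler0;   // the HDR scene texture
// Assumed Rec. 601 luminance weights, purely for illustration
static const float3 LUM_CONVERT = float3(0.299f, 0.587f, 0.114f);

float4 InitialLuminancePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float3 vSample = tex2D(PointSampler0, texCoord).rgb;

    // Clamp the luminance (not the log) to a small epsilon so log(0) never
    // happens; the log itself can be negative, and a 16- or 32-bit float
    // render target stores negative values without any problem.
    float lum = max(dot(vSample, LUM_CONVERT), 0.0001f);
    float logLum = log(lum);

    return float4(logLum, logLum, logLum, 1.0f);
}
[/source]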

#38 Telanor   Members   -  Reputation: 1311


Posted 28 August 2012 - 04:14 PM

So the luminance is supposed to look like that then? I also tried turning up the lighting values a lot, and the luminance looks a bit more normal, but the final screen result is about the same, so... I guess the luminance isn't really the issue.

There is still an issue with the bloom though. It seems like only colors with a large amount of red in them get any bloom applied (although white doesn't seem to get any bloom). I've attached a screenshot of several different colored blocks.

RuinValor 2012-08-28 18-02-28-60.png

#39 riuthamus   Moderators   -  Reputation: 4838


Posted 28 August 2012 - 04:14 PM

And to give you guys some video of what is going on:



#40 riuthamus   Moderators   -  Reputation: 4838


Posted 31 August 2012 - 03:32 PM

So, I was trying my best to figure out what could be causing this and we are more or less stuck! We think it is due to the blur... I started playing some of my newer games to see what they are doing and whether this is a common thing. I noticed the same effect in The Witcher (although theirs does not look like crap!).

Perhaps the answer is that we can't use bright tones of color? This overblur only happens on red-toned colors... perhaps there is something wrong with the tonemapping? If anybody has any suggestions, at this point we are at a loss for what to do. I don't want to break down each system and find it that way, as that would be a lot of work... but it is looking more and more like that is the only solution.

Thanks for any help you can provide in advance.

[Three attached screenshots]

Edited by riuthamus, 31 August 2012 - 03:42 PM.




