
HDR ToneMapping: Exposure selection

1 reply to this topic

#1 cifa   Members   -  Reputation: 203


Posted 04 July 2014 - 05:09 PM

Hi everyone!

 

I've been trying to add HDR support to my system, but I have a couple of questions. I believe what I'm doing before tone mapping is fine:

 

- Set up an RGBA16F render target

- Render to that RT

- Call the post-processing tone mapping on that RT and display the result on screen.

 

 

Now, to test this, I've cranked up my light intensity a bit:

 

[screenshot: jzHh4q5.png]

 

Now if I apply either of the following tone-mapping functions (the results are similar):

	col *= exposure;
	col = max(vec3(0.0), col - 0.004);
	col = (col * (6.2 * col + 0.5)) / (col * (6.2 * col + 1.7) + 0.06);
	return vec4(col, 1.0);

or 

	col *= exposure;
	col = col / (1.0 + col);
	return vec4(col, 1.0);

then for the above image, any exposure greater than about 0.5 gives a pretty flat result:

 

[screenshot: 7eM68eY.png]

 

For an exposure value of about 0.4, on the other hand, the result is OK:

 

[screenshot: QoX5F2O.png]

 

Although it's quite dark (unsurprisingly, as the exposure is pretty low).

 

The same issue (flatness of the tone-mapped image) also appears at a "normal" light intensity, one that looks fine even without HDR. To get a decent result there I also have to lower the exposure, which again yields a darkish image.
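For what it's worth, this flattening is easy to reproduce numerically. Here is a quick Python port of the two operators above (not shader code; the input values are made up for illustration):

```python
# Python ports of the two tone-mapping operators quoted above,
# just to see how they compress bright values.

def filmic(c, exposure):
    # Hejl/Burgess-style filmic fit (gamma is baked into the curve)
    c = max(0.0, c * exposure - 0.004)
    return (c * (6.2 * c + 0.5)) / (c * (6.2 * c + 1.7) + 0.06)

def reinhard(c, exposure):
    # Simple Reinhard: c / (1 + c)
    c *= exposure
    return c / (1.0 + c)

# With bright HDR inputs, both curves map a wide input range into a
# narrow band near 1.0, which reads as a "flat" image on screen.
for x in (1.0, 5.0, 20.0):
    print(f"{x:5.1f} -> reinhard {reinhard(x, 1.0):.3f}, filmic {filmic(x, 1.0):.3f}")
```

Note how inputs of 5 and 20 land within a few percent of each other: once most pixels sit on the curve's shoulder, contrast is gone, which matches the flat screenshots.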

 

So I have two questions:

 

- Am I missing something important here? I feel that the result I'm getting is somewhat wrong.

- Is there a way to "automatically" select what's considered a good exposure value for a given scene? 

 

 

 

Thank you very much




#2 Juliean   GDNet+   -  Reputation: 2387


Posted 05 July 2014 - 10:22 AM


- Is there a way to "automatically" select what's considered a good exposure value for a given scene?

 

Indeed, there is. Auto-exposure requires a few additional steps and probably even a different tone-mapping algorithm. It works like this:

 

- Take the scene render target and calculate the geometric luminance per pixel; save this in another render target.

 

- Sum up this geometric luminance to get the average scene luminance. This can be done in a compute shader (complicated, but best for performance if done right); by repeatedly downsampling to half size, summing 4 pixels per step, until you hit a 1x1 texture (easiest to do); or by automatically computing the lowest possible mipmap level of the luminance texture (not recommended, since you need a card that implements linear filtering for mipmap generation to get correct values; mine didn't).
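The averaging step amounts to a log-average (geometric mean) of the per-pixel luminance. A minimal CPU-side sketch in Python (not from the post; the delta term and Rec. 709 weights match the shader code later in this reply):

```python
import math

# Log-average ("geometric mean") luminance over a list of linear RGB
# pixels: average log(delta + L), then exponentiate. The small delta
# avoids log(0) on pure black pixels.
def log_average_luminance(pixels, delta=0.001):
    total = 0.0
    for r, g, b in pixels:
        lum = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
        total += math.log(delta + lum)
    return math.exp(total / len(pixels))

# A uniform mid-gray image yields (roughly) that gray's luminance:
print(log_average_luminance([(0.5, 0.5, 0.5)] * 16))
```

On the GPU the same sum is what the downsampling chain computes, 4 pixels at a time, until one value is left.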

 

- You can additionally add another step and blend between the current and last luminance value by ping-ponging between two render targets for the average luminance. This produces the effect of adaptation: if you come from a dark room, for example, it takes some time for your eyes to adapt to the brighter environment. But that's entirely optional.
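This adaptation blend is often implemented as an exponential move toward the new average each frame. A small Python sketch (the tau time constant and the exponential form are my assumptions, not from the post):

```python
import math

# Blend the previously adapted luminance toward the newly measured
# average, moving a fixed fraction of the remaining distance per frame.
def adapt_luminance(prev_adapted, current_avg, dt, tau=0.5):
    k = 1.0 - math.exp(-dt / tau)  # fraction to move this frame
    return prev_adapted + (current_avg - prev_adapted) * k

# Stepping from a dark scene (0.05) toward a bright one (1.0) at 60 fps:
# the adapted value creeps up gradually, like eyes adjusting to light.
lum = 0.05
for _ in range(10):
    lum = adapt_luminance(lum, 1.0, dt=1.0 / 60.0)
```

In a shader you would evaluate the same formula while ping-ponging between the two 1x1 luminance targets.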

 

- In the last step, perform tone mapping using the average luminance value you got. I don't know for sure whether you use it directly as the exposure value, so I'll just show you the shaders I'm using. I believe I implemented some modification of Reinhard tone mapping:

vertexShader(SCREEN)

pixelShader
{
	textures
	{
		2D AvgLuminance;
		2D Image;
	}
	
	out
	{
		float4 vColor;
	}
	
	main
	{
		float BarLw = Sample(AvgLuminance, float2(0.5, 0.5));
		float aOverBarLw = 0.18f * (1.0f/exp(BarLw));
		
		float3 hdrPixel = Sample(Image, in.vTex0).rgb;
		float lumi = dot( hdrPixel, float3( 0.2126, 0.7152, 0.0722 ));
		float Lxy = aOverBarLw * lumi;
		
		float numer = (1 + (Lxy * 4.0)) * Lxy;
		float denum = 1 + Lxy;
		float Ld = numer / denum;
		
		float3 ldrPixel = (hdrPixel / lumi) * Ld;
		
		out.vColor = float4(pow(ldrPixel, float3(1/2.2f)), 1.0f);
	}
}

It's in my own shading meta language, though the code in the "main" block is basically HLSL. There are still a few tunable "magic" numbers, like the 4.0; I don't remember exactly what it does (magic numbers sure ARE evil), but a lower value gives an overall less bright scene, so on top of automatic exposure selection you can still tune the overall look of the scene. That's the tone-mapping shader itself. To convert the scene to geometric luminance, you do:

vertexShader(SCREEN)

pixelShader
{
	textures
	{
		2D Scene;
	}
	
	out
	{
		float4 vColor;
	}
	
	main
	{
		float3 lightTransport = Sample(Scene, in.vTex0).rgb;
	
		//convert to luminance
		float lum = dot(lightTransport, float3( 0.2126, 0.7152, 0.0722 ));

		out.vColor = float4(max(0.0f, log(0.001 + lum)), 0.0f, 0.0f, 1.0f);
	}
}

Also, make sure you account for gamma correction. This basically means doing pow(color, 2.2) on about everything on screen before your lighting calculations; as you can see, when converting the HDR range to LDR range I undo this with pow(pixel, 1/2.2). Though maybe not entirely logical at first, this produces entirely different (read: better) results. In fact, before doing so my HDR looked really crappy, and I ended up using a broken version because it looked way better.
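The gamma handling described above boils down to two helper functions. A Python sketch using a pure 2.2 power curve as in the shader above (real sRGB additionally has a small linear toe, ignored here):

```python
# Decode display values to linear before lighting; re-encode after
# tone mapping. Pure power-2.2 approximation of the sRGB curve.
def to_linear(c):
    return c ** 2.2

def to_display(c):
    return c ** (1.0 / 2.2)

# Round-tripping recovers the original value:
x = 0.5
print(to_display(to_linear(x)))
```

The key point is that lighting math is only physically plausible on the linear values, which is why skipping the decode step makes HDR look wrong.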

 

EDIT: Take note that with this auto-exposure algorithm, the first frames will look way too bright, since your screen starts out entirely black. So either fake some semi-bright value for the black pixels in the shader, or fill up your background with something (read: make sure black pixels don't dominate the average luminance calculation).

 

Also, I got this algorithm (with the exception of the tone mapping) from Nvidia's 9 SDK; see the "HDR Deferred Shading" sample for theory and implementation.


Edited by Juliean, 05 July 2014 - 10:27 AM.





