# Filter Kernel Average Scene Luminance Reinhard Approach

## Recommended Posts

Hi all,

I've implemented Reinhard tone mapping via CIE Yxy. I downsample a 256x256 scene texture down to 1x1, but I'm not sure I'm using the correct kernel to average the pixels.
I'm using a 4x4 kernel as a plain box filter, just averaging the pixels: I access them one by one, row by row, accumulate them into a variable, and then divide the result by 16 (as the average requires).
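(As a sanity check on the chain itself, here is a minimal CPU-side sketch in NumPy, not the shader code, showing that repeated non-overlapping 4x4 box averages of a 256x256 image do reproduce the global arithmetic mean:)

```python
import numpy as np

def box_downsample_4x4(img):
    """Average each non-overlapping 4x4 block (one box-filter pass)."""
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

rng = np.random.default_rng(0)
scene = rng.random((256, 256))

# 256 -> 64 -> 16 -> 4 -> 1
result = scene
while result.shape[0] > 1:
    result = box_downsample_4x4(result)

# Iterated 4x4 box averaging equals the global arithmetic mean
assert np.isclose(result[0, 0], scene.mean())
```

So if each GPU pass really averages 16 distinct texels, the final 1x1 value should match the full-image mean.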

This is the code:

```hlsl
// pixelSize = 1/renderTarget_w, 1/renderTarget_h
// offsets x,y for 4x4 box filter
float coeffs_x[4] = { -2.f, -1.f, 1.f, 2.f };
float coeffs_y[4] = { -2.f, -1.f, 1.f, 2.f };
float bDecodeLuminance;

float4 ps_avg_scene_luminance_main( in float2 iTc : TEXCOORD ) : COLOR
{
    float4 vColor = 0;
    for (int x = 0; x < 4; x++)
    {
        for (int y = 0; y < 4; y++)
        {
            float2 vOffset = float2(coeffs_x[x], coeffs_y[y]) * pixelSize;
            float4 vSample = tex2D(pointTextureSamplerD3D, iTc + vOffset);
            vColor += vSample;
        }
    }
    vColor /= 16.f;

    if (bDecodeLuminance)
        vColor = float4(exp(vColor.r), 1.0f, 1.0f, 1.0f);

    return vColor;
}
```

The result is not what I expect.
For example, if I look straight down at the floor it's fine, but if I look elsewhere everything gets oversaturated.
Again, I'm using the log-luminance approach from Reinhard tone mapping (in CIE color space), adjusting the luminance and then converting back from XYZ to RGB before presenting the back buffer.

Could that depend on how I calculate the average luminance? Could the per-step averaging in the downsample chain be wrong?

By the way, here is a video showing the issue: http://img201.imageshack.us/img201/7581/nextenginer2010111.mp4

Thanks in advance for any help

[Edited by - MegaPixel on November 14, 2010 2:05:44 PM]

##### Share on other sites
For Reinhard you use a geometric mean, not an arithmetic mean. This means you take the natural log of each pixel's luminance, average those values, and then exponentiate to get the final luminance value.

To make implementing it simpler, you can just take the log of luminance in your very first downsample pass. Then in subsequent passes, you just keep running your 4x4 box filter to calculate the final average. In your very last pass you can call exp() on the value, or you can do it in the tonemapping shader when you sample the 1x1 texture.

By the way, the reason for a geometric mean is that it effectively removes outliers. That way, if you have a few small hot spots in the scene with very high luminance, you won't end up using too low an exposure and darkening the image.
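The difference is easy to see numerically. Here is a small Python illustration (not part of the original post): with one very bright outlier, the log-average stays near the typical luminance, while the plain arithmetic mean is dragged far upward:

```python
import math

luminances = [0.18] * 99 + [1000.0]  # mostly mid-grey plus one hot spot

# Arithmetic mean: dominated by the single hot spot
arithmetic_mean = sum(luminances) / len(luminances)

# Geometric (log-average) mean: log, average, then exponentiate
delta = 1e-4  # small epsilon to keep log() defined for black pixels
log_avg = math.exp(sum(math.log(delta + L) for L in luminances) / len(luminances))

# The hot spot dominates the arithmetic mean but barely moves the log-average
assert arithmetic_mean > 10.0
assert log_avg < 0.25
```

An exposure keyed to the arithmetic mean here would darken the whole frame to accommodate one hot spot; the log-average keeps the exposure near the scene's typical brightness.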

##### Share on other sites
Quote:
 Original post by MJP: For Reinhard you use a geometric mean, not an arithmetic mean. This means you take the natural log of your luminance, take the average, and then perform the exponential to get the final luminance value. To make implementing it simpler, you can just take the log of luminance in your very first downsample pass. Then in subsequent passes, you just keep running your 4x4 box filter to calculate the final average. Then in your very last pass you can call exp() on the value, or you can do it in the tonemapping shader when you sample the 1x1 texture. By the way, the reason for a geometric mean is that it effectively removes outliers. So this way, if you have a few small hot spots in the scene with very high luminance, you won't end up using too low an exposure and darkening the image.

I actually do exactly what you said. I downsample the scene into a render target that is 1/16 of the screen resolution (w/4, h/4), extracting the pixel luminance and then taking the natural log: log(epsilon + Luminance). However, I'm not getting the end result I want; I get the artifacts you saw in the video.
I'm using the CIE Yxy color space because it lets me work on the luminance alone without affecting the chroma.
Do you know a good way to verify that the average luminance is correct, so I can say "yes, that render target shows the luminance correctly"?
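(One practical verification, sketched here in Python on the assumption that you can read the 1x1 target back to the CPU: compute the reference log-average directly from the full-resolution luminance and compare it against what the downsample chain produces.)

```python
import numpy as np

EPSILON = 1e-4

def reference_log_average(luminance):
    """Ground-truth Reinhard log-average, computed directly."""
    return np.exp(np.mean(np.log(EPSILON + luminance)))

def downsample_chain(luminance):
    """Mimic the GPU chain: log first, then repeated 4x4 box averages, then exp."""
    v = np.log(EPSILON + luminance)
    while v.shape[0] > 1:
        h, w = v.shape
        v = v.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    return np.exp(v[0, 0])

rng = np.random.default_rng(1)
lum = rng.random((256, 256)) * 10.0

# The chain's 1x1 output should match the direct log-average
assert np.isclose(downsample_chain(lum), reference_log_average(lum))
```

If the value read back from the GPU differs noticeably from the CPU reference on the same frame, the kernel offsets or pass setup are suspect.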

##### Share on other sites
What do you actually do with the scaled luminance? I've seen methods which simply multiply the original RGB color by the scaled luminance, but this doesn't work very well. It's preferable to convert your original RGB color to xyY (RGB->XYZ->xyY), replace Y with your scaled luminance (Y'), and convert back to RGB (xyY'->XYZ->RGB). This works much better, and your exposure and white point values have more usable range.
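(That round trip can be sketched on the CPU like this. This is a NumPy illustration using the standard linear-sRGB/D65 matrix; the shader's actual RGB2XYZ_Matrix may differ. Reinserting the original Y must return the input color exactly, which is also a useful diagnostic for conversion bugs:)

```python
import numpy as np

# Linear sRGB <-> XYZ, D65 white point
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
XYZ2RGB = np.linalg.inv(RGB2XYZ)

def replace_luminance(rgb, new_Y):
    """RGB -> XYZ -> xyY, swap in new_Y, convert back to RGB."""
    XYZ = RGB2XYZ @ rgb
    s = XYZ.sum()
    x, y = XYZ[0] / s, XYZ[1] / s  # chromaticity stays unchanged
    XYZ_new = np.array([x * new_Y / y, new_Y, (1.0 - x - y) * new_Y / y])
    return XYZ2RGB @ XYZ_new

rgb = np.array([0.4, 0.3, 0.2])
Y = (RGB2XYZ @ rgb)[1]

# Sanity check: reinserting the ORIGINAL Y must give back the input color
assert np.allclose(replace_luminance(rgb, Y), rgb)
```

If this identity round trip fails in the shader version, the xyY reconstruction (or the matrices) is where the error lives.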

##### Share on other sites
Quote:
 Original post by Scoob Droolins: What do you actually do with the scaled luminance? I've seen methods which simply multiply the original RGB color by the scaled luminance, but this doesn't work very well. It's preferable to convert your original RGB color to xyY (RGB->XYZ->xyY), replace Y with your scaled luminance (Y'), and convert back to RGB (xyY'->XYZ->RGB). This works much better, and your exposure and white point values have more usable range.

Yes, I do use CIE!

See my code below:

```hlsl
float4 ps_bloom_composite_main( in float2 iTc : TEXCOORD ) : COLOR
{
#ifndef OGL
    float4 sceneColor = tex2D(sceneTextureSamplerD3D, iTc);
#else
    iTc.y = 1.f - iTc.y;
    float4 sceneColor = tex2D(sceneTextureSampler, iTc);
#endif

    // Prepare for Reinhard tone mapping:
    // go to CIE Yxy color space
    float3 XYZ = mul(RGB2XYZ_Matrix, sceneColor.xyz);

    float invDen = 1.f / dot(XYZ, float3(1, 1, 1));
    float2 xy = XYZ.xy * invDen;

    // L_avg -> average scene luminance
    float AvgSceneLum = tex2D(averageSceneLuminanceTextureSamplerD3D, float2(0.5f, 0.5f)).x;

    // L_w -> world luminance
    float PixelSceneLuminance = dot(sceneColor.xyz, luminanceWeights);

    // L_scaled -> scaled scene luminance
    float ScaledSceneLuminance = PixelSceneLuminance * (a / (AvgSceneLum + 0.001f));

    // L_d -> compressed scene luminance
    float CompressedSceneLuminance = ScaledSceneLuminance / (1.f + ScaledSceneLuminance);

    // New XYZ built from the compressed luminance produced by the Reinhard operator
    float d = CompressedSceneLuminance / xy.y;
    XYZ.x = xy.x * d;
    XYZ.y = CompressedSceneLuminance;
    XYZ.z = (1.f - xy.x - xy.y) * d;

    float3 color = mul(XYZ2RGB_Matrix, XYZ);

    return float4(color, 1.f);
}
```

Everything seems correct with respect to the math, but I'm still digging to find out what is happening.

Any thoughts on my code, Scoob Droolins?

[Edited by - MegaPixel on November 17, 2010 6:43:34 AM]

##### Share on other sites
Quote:
 Original post by Scoob Droolins: What do you actually do with the scaled luminance? I've seen methods which simply multiply the original RGB color by the scaled luminance, but this doesn't work very well. It's preferable to convert your original RGB color to xyY (RGB->XYZ->xyY), replace Y with your scaled luminance (Y'), and convert back to RGB (xyY'->XYZ->RGB). This works much better, and your exposure and white point values have more usable range.

I tried (just for testing) multiplying by the scaled luminance directly, and I don't get the oversaturation that I get when going through CIE. So the downsampling algorithm should work, and therefore the luminance calculation should work; there may be an error in the CIE conversion.
As you can see in the earlier video, it oversaturates far too much.
