Separable Gaussian blur too dark?

You're right. Good catch.
One of my friends has a good blog post on the issue; you can check it out here:
http://theinstructionlimit.com/gaussian-blur-experiments

Hope this helps! :)


float gaussianKernel(float x, float standardDeviation)
{
    // Normalized 1D Gaussian: integrates to 1 over all x
    return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

float4 PS(VSO input) : SV_TARGET0
{
    const int numSamples = 3;
    const float standardDeviation = numSamples / 3.0;

    // Either use these precomputed linear-sampling taps or the gaussianKernel function
    const float offset[numSamples] = { 0.0, 1.3846153846, 3.2307692308 };
    const float weight[numSamples] = { 0.40261952, 0.2442015368, 0.0544886997 };

    // You forgot about this center weight here
    float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * gaussianKernel(0, standardDeviation);

    for (int i = 1; i < numSamples; i++)
    {
        // Discrete taps at integer texel offsets, weighted by the Gaussian
        float w = gaussianKernel(i, standardDeviation);
        texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(i, 0.0f) / _ScreenSize.x).rgb * w;
        texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(i, 0.0f) / _ScreenSize.x).rgb * w;
    }
    return float4(texColor, 1.0f);
}
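
By the way, the usual culprit for a darkened blur is that the truncated kernel's weights don't quite sum to 1 (and they drift further from 1 the fewer taps you use). Here's a minimal sketch of an explicitly normalized variant, reusing the gaussianKernel function and assuming the same TargetTexture, TargetTextureSampler and _ScreenSize declarations as above (PSNormalized is just a name I made up):

float4 PSNormalized(VSO input) : SV_TARGET0
{
    const int numSamples = 3;
    const float standardDeviation = numSamples / 3.0;

    // Sum the truncated kernel's weights: the center tap once, each side tap twice.
    // Dividing by this total makes the weights sum exactly to 1, so the blur
    // no longer loses energy (i.e. no longer darkens) with each pass.
    float totalWeight = gaussianKernel(0, standardDeviation);
    for (int i = 1; i < numSamples; i++)
        totalWeight += 2 * gaussianKernel(i, standardDeviation);

    float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz
                    * (gaussianKernel(0, standardDeviation) / totalWeight);

    for (int j = 1; j < numSamples; j++)
    {
        float w = gaussianKernel(j, standardDeviation) / totalWeight;
        texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(j, 0.0f) / _ScreenSize.x).rgb * w;
        texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(j, 0.0f) / _ScreenSize.x).rgb * w;
    }
    return float4(texColor, 1.0f);
}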


You might also want to check out the implementation I'm currently using in my engine (even though I'm switching to a more optimized compute shader implementation with O(log n) runtime per pixel):
#ifndef MIN_WEIGHT
#define MIN_WEIGHT 0.0001f
#endif
#ifndef FILTER
#error You have to define the filter. (FILTER = (GAUSSIAN|EXPONENTIAL))
#endif
#define GAUSSIAN 0
#define EXPONENTIAL 1
#if FILTER == GAUSSIAN
#ifndef STANDARD_DEVIATION
#error You have to define the standard deviation when using a gaussian kernel. (STANDARD_DEVIATION = float)
#endif
#elif FILTER == EXPONENTIAL
#ifndef MEAN_VALUE
#error You have to define the mean value when using an exponential kernel. (MEAN_VALUE = float)
#endif
#endif
#ifndef DIRECTION
#error You have to define the direction. (DIRECTION = (HORIZONTAL|VERTICAL|int2(x,y)))
#endif
#ifndef MIP
#define MIP 0
#endif
#define HORIZONTAL int2(1, 0)
#define VERTICAL int2(0, 1)
Texture2D SourceTexture : register(t0);

cbuffer InfoBuffer : register(b0)
{
    float Width;
    float Height;
};

struct PSIn
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
    float2 ScreenPos : SCREEN_POSITION;
};

float gaussianKernel(int x, float standardDeviation)
{
    return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

float integratedExponentialKernel(float x, float m)
{
    return 0.5 * (1 - exp(-x / m) / 2) * (sign(x) + 1) - 0.25 * exp(x / m) * (sign(x) - 1);
}

float exponentialKernel(int x, float m)
{
    // Integrate the density over one texel: F(x + 0.5) - F(x - 0.5)
    return integratedExponentialKernel(x + 0.5, m) - integratedExponentialKernel(x - 0.5, m);
}

float filter(int x)
{
#if FILTER == GAUSSIAN
    return gaussianKernel(x, STANDARD_DEVIATION);
#elif FILTER == EXPONENTIAL
    return exponentialKernel(x, MEAN_VALUE);
#endif
}

float3 sample(int2 position, int offset)
{
    float3 textureColor = 0.0f;
    int2 newOffset = offset * DIRECTION;

    // Load's offset parameter only supports the range [-8, 7] (and needs the
    // loop unrolled so it resolves to a constant); fall back to adding the
    // offset to the position otherwise.
    if (newOffset.x >= -8 && newOffset.x <= 7 && newOffset.y >= -8 && newOffset.y <= 7)
        textureColor = SourceTexture.Load(int3(position, MIP), newOffset);
    else
        textureColor = SourceTexture.Load(int3(position + newOffset, MIP));

    return textureColor;
}

float4 PSMain(PSIn Input) : SV_Target
{
    float3 accumulatedColor = 0.0f;

    float accumulatedWeight = 0, weight = 0;
    [unroll]
    for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
    {
        accumulatedWeight += (x != 0) ? (2 * weight) : weight;
    }
    [unroll]
    for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
    {
        accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, x);
        if (x != 0)
            accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, -x);
    }
    return float4(accumulatedColor, 1);
}
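
For reference, this is how one might configure it at compile time, e.g. via compiler-supplied defines or a small wrapper file that defines the macros before the code above (the concrete values here are only placeholders):

#define FILTER GAUSSIAN          // or EXPONENTIAL (then define MEAN_VALUE instead)
#define STANDARD_DEVIATION 2.0f  // required when FILTER == GAUSSIAN
#define DIRECTION HORIZONTAL     // HORIZONTAL, VERTICAL, or any int2(x, y)
#define MIP 1                    // optional, defaults to 0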



Hi,

I have a few questions about your approach:

1) Why do you use the custom semantic SCREEN_POSITION instead of SV_Position, which is effectively the same by the time it arrives as pixel shader input?

2) What kind of kernel is the exponential one compared to the standard Gaussian? What visual results does it give, and what is it best for?

3) Also, do you support a Poisson sampling kernel as well? If yes, what's the best way to express it, and what is it good for (I mean, in what situations)?

4) Do you apply your filter to a downsampled version of your buffer? If yes, by how much, and how does that impact the correctness of the end result? I guess not, because I see you don't sample at the pixel edges... (that should help bilinear sampling when you stretch to fullscreen, right?)

5) Last :D, do you know any good resource on the web where I can find a list of filters along with their use cases (I mean something like when it's good to use one filter or another, in which cases, and what it looks like, etc.)?

Thanks in advance for any reply

1) Why do you use the custom semantic SCREEN_POSITION instead of SV_Position, which is effectively the same by the time it arrives as pixel shader input?

Oh, I didn't even know about that, thanks :D


2) What kind of kernel is the exponential one compared to the standard Gaussian? What visual results does it give, and what is it best for?

The exponential kernel is the exponential distribution from probability theory. It's sharper in the center than a Gaussian kernel, which is why it's better suited for bloom. In the Unreal Engine 4 Elemental demo they used multiple Gaussian kernels of different widths and summed them together to approximate an exponential distribution. The thing is, it's not really separable; that's why they chose to sum multiple Gaussians. I implemented it before knowing it was not separable, which is why it's even in my code. I still don't know which one to choose for my engine. Here's a visual comparison of all of these filters:
[Image: filters.png - visual comparison of the filters discussed above]
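
For reference, here are the two kernels in formula form. The Gaussian and the two-sided exponential (Laplace) density are

G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-x^2/(2\sigma^2)}, \qquad E(x) = \frac{1}{2m} e^{-|x|/m}

and the integratedExponentialKernel function in the code above is exactly the CDF of the Laplace density,

F(x) = \begin{cases} \frac{1}{2} e^{x/m}, & x < 0 \\ 1 - \frac{1}{2} e^{-x/m}, & x \ge 0 \end{cases}

so each tap's weight w(i) = F(i + \tfrac{1}{2}) - F(i - \tfrac{1}{2}) integrates the density over one texel.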


3) Also, do you support a Poisson sampling kernel as well? If yes, what's the best way to express it, and what is it good for (I mean, in what situations)?

No, I can't think of a situation where it would make any sense.


4) Do you apply your filter to a downsampled version of your buffer? If yes, by how much, and how does that impact the correctness of the end result? I guess not, because I see you don't sample at the pixel edges... (that should help bilinear sampling when you stretch to fullscreen, right?)

I point-sample mip 1 and render the result to a texture at half the size. I didn't really do any benchmarks, but it doesn't look much worse. I'm currently implementing a compute shader version with logarithmic runtime; it should be fast enough to run at full resolution.


5) Last :D, do you know any good resource on the web where I can find a list of filters along with their use cases (I mean something like when it's good to use one filter or another, in which cases, and what it looks like, etc.)?

I don't know of a good website that shows the different kernels. I came up with the exponential distribution as a filter myself after seeing that Epic Games summed multiple Gaussians together, which produced the shape of an exponential distribution; that made me wonder why one doesn't simply implement a filter with an exponential distribution instead of summing Gaussians.
So to get a good-looking bloom, do I have to blur my lighting image several times, or is there something else I have to change?
Because as it is now it's just slightly blurred, and you can't even tell the difference when it's added to the rest of the scene (albedo, ...).



5) Last :D, do you know any good resource on the web where I can find a list of filters along with their use cases (I mean something like when it's good to use one filter or another, in which cases, and what it looks like, etc.)?


I don't know of a good website that shows the different kernels. I came up with the exponential distribution as a filter myself after seeing that Epic Games summed multiple Gaussians together, which produced the shape of an exponential distribution; that made me wonder why one doesn't simply implement a filter with an exponential distribution instead of summing Gaussians.
[/quote]

Well, because it's not separable! You said that yourself :). Gaussian blur, on the other hand, is separable.

So to get a good-looking bloom, do I have to blur my lighting image several times, or is there something else I have to change?
Because as it is now it's just slightly blurred, and you can't even tell the difference when it's added to the rest of the scene (albedo, ...).


You blur multiple times until you're satisfied! Something like 3-4 passes will give you good results. You do that by ping-ponging: repeatedly blurring the already-blurred image between two render targets. You can also use more or fewer taps, or vary the standard deviation, and see what happens. FX aren't really rocket science; the rule is always tweak, tweak, tweak until it looks good :)
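
One useful fact while tweaking: convolving Gaussians adds their variances, so n identical blur passes with standard deviation \sigma behave like a single pass with

\sigma_{\text{eff}} = \sqrt{n}\,\sigma

which is why four passes only double the apparent blur width rather than quadrupling it.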

[quote name='MegaPixel' timestamp='1347222451' post='4978383']
You blur multiple times until you're satisfied! Something like 3-4 passes will give you good results. You do that by ping-ponging: repeatedly blurring the already-blurred image between two render targets. You can also use more or fewer taps, or vary the standard deviation, and see what happens. FX aren't really rocket science; the rule is always tweak, tweak, tweak until it looks good :)
[/quote]

OK, it's quite a bit better now with 4x blur, but I still seem to be putting it together wrong. What I currently do is: on the last blur pass I return the blurred image plus the original lighting image, and then in my deferred composition shader I combine it like so:


float4 albedo = ToLinear(AlbedoTarget.Sample(LinearTargetSampler, input.UV));
float4 lighting = LightMapTarget.Sample(LinearTargetSampler, input.UV);
float AO = SSAOTarget.Sample(LinearTargetSampler, input.UV).r;
output = albedo * (float4(lighting.rgb, 1.0f) + AO);


The lighting itself looks okay, but when it's combined with the rest it doesn't seem to do anything except give the image a blurry border.

[quote name='lipsryme']

OK, it's quite a bit better now with 4x blur, but I still seem to be putting it together wrong. What I currently do is: on the last blur pass I return the blurred image plus the original lighting image, and then in my deferred composition shader I combine it like so:


float4 albedo = ToLinear(AlbedoTarget.Sample(LinearTargetSampler, input.UV));
float4 lighting = LightMapTarget.Sample(LinearTargetSampler, input.UV);
float AO = SSAOTarget.Sample(LinearTargetSampler, input.UV).r;
output = albedo * (float4(lighting.rgb, 1.0f) + AO);


The lighting itself looks okay, but when it's combined with the rest it doesn't seem to do anything except give the image a blurry border.
[/quote]

You do blur the AO buffer, right?... The AO term should be part of the ambient part of the lighting equation, so it should be:

output = (albedo * float4(lighting.rgb, 1.0f)) + AO;

because AO is part of the global-illumination interaction, so it should be added as part of the ambient term...
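
For illustration, here's one common way to fold AO into the composition so it only affects the ambient/indirect part (just a sketch; ambientColor is a hypothetical parameter, your setup may differ):

float3 ComposeWithAO(float3 albedo, float3 directLighting, float3 ambientColor, float ao)
{
    // AO attenuates the ambient/indirect term only; direct light is unaffected
    float3 ambient = ambientColor * ao;
    return albedo * (directLighting + ambient);
}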
I always thought the ambient term only had to be added to the diffuse/specular light.
Edit: Actually no, if I add it to albedo * lighting I get a weird merged SSAO (white/grey) and albedo picture.
But anyway, the AO term isn't the problem.

I guess showing it with pictures is easier:

Note: The blurred images already include tonemapping and BG lighting is turned down (0.2f)

Lighting only (blur off): http://cl.ly/image/3x0o1w451W2x
Compose (blur off): http://cl.ly/image/3K022S2a2Y0g

Lighting only (4x blur): http://cl.ly/image/0g0q0z1n0937
Compose (4x blur): http://cl.ly/image/3D2j0K04413Y

As you can also see, the back of the cube is leaking light from the background too, which is a problem...
