# Bilateral Filtering

## Recommended Posts

Hello, I am trying to write a bilateral filter in HLSL, but I haven't found a good explanation, nor an algorithmic or mathematical description of it.

The best thing I've found is:

The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels (do I build the kernel myself? I suppose the weights decrease exponentially?). Crucially, the weights depend not only on Euclidean distance but also on distance in color space from the pixel in question.

Well, it's really not that clear. Any tips?

##### Share on other sites
MJP    19754
What are you going to use it for? More often than not when people talk about bilateral filters in graphics they're talking about doing it with respect to discontinuities in the depth buffer and/or the normals, rather than in the color.

##### Share on other sites
I just want it for 2D pictures, nothing fancy. Isn't the one with normals included called trilateral?
In any case, I want it to blur things without messing with the edges :P

##### Share on other sites
something more 2D?

##### Share on other sites
aryx    402
How are you with implementing Gaussian filtering? In Gaussian filtering you have something of this nature for each pixel in your kernel:
    weight = exp(-(deltax^2 + deltay^2) / sigma^2)
Bilateral filtering is much the same, except you add a term based on the difference from the center pixel of the kernel:
    weight = exp(-(deltax^2 + deltay^2) / sigma^2) * exp(-delta_color^2 / color_sigma^2)
How you measure the color difference is up to you (L1/L2 norms are fairly common).

If you want something even better than bilateral filtering, check out geodesic support weights. I implemented it for my stereo vision research and it gives superior results over adaptive support weights (a bilateral weighting scheme) in most cases.

##### Share on other sites
That's very cool, thanks for both!

##### Share on other sites
Hmm, I am still a bit confused by the whole kernel business.

How big should the kernel be? 3x3? The whole image? My preference? And which center value should I use? For example, in a simple blur I'll use the average of the surrounding pixels to calculate the center pixel's value, then move on to the next one. How do I go about it for this one?

anyone?

##### Share on other sites
knighty    313
Quote:
 Original post by Tipotas688
 Hmm I am still a bit confused with the whole kernel business. How big should the kernel be? 3x3? The whole image? My preference?

Your preference :) It depends on the results you want to achieve.

Quote:
 Original post by Tipotas688
 Which center should I use? For example, in a simple blur I'll use the average of the surrounding pixels in order to calculate the center pixel's value, then move to the next one. How do I go about for this one?

The same way; only the weights change. They depend not only on the relative position in image space but also on the relative position in color space (and/or depth, etc., why not?).

##### Share on other sites
OK, so I am trying to finish my Gaussian filter first in order to proceed to the bilateral one:

```hlsl
float GaussianCoef(int x, int y)
{
    float sigma = 1.0f;
    return ( 1 / ( sqrt(2*3.14f) * sigma ) ) * exp( -(x*x+y*y) / (2*sigma*sigma) );
}

float4 BilateralFiltering(VertexOut input) : COLOR0
{
    float4 k11 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-0.01,-0.01) ) * GaussianCoef(2,2);
    float4 k12 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-0.01,0) ) * GaussianCoef(1,1);
    float4 k13 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-0.01,0.01) ) * GaussianCoef(2,2);

    float4 k21 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,-0.01) ) * GaussianCoef(1,1);
    float4 k22 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,0) ) * GaussianCoef(0,0);
    float4 k23 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,0.01) ) * GaussianCoef(1,1);

    float4 k31 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0.01,-0.01) ) * GaussianCoef(2,2);
    float4 k32 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0.01,0) ) * GaussianCoef(1,1);
    float4 k33 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0.01,0.01) ) * GaussianCoef(2,2);
}
```

All I get is the picture blurred but darker, which makes sense considering the values I am multiplying by. I have got something wrong with sigma (is it supposed to be the size of the window, i.e. the kernel?). Also, are my distances from the center pixel right if my kernel is 3x3? Kernel:

2 1 2
1 0 1
2 1 2

##### Share on other sites
DarkChris    110
Just a tip:
Never ever blur over x and y in one pass. By blurring over the width and then over the height, you don't need to sample any diagonals at all.
Also, use a precomputed array for your coefficients; it will speed up the whole process. That might not really be possible since you want a bilateral filter, but keep it in mind.

Sigma needs to be the blur kernel's width / 3.

##### Share on other sites
if my kernel is:

-1,-1 -1,0 -1,1
0,-1 0,0 0,1
1,-1 1,0 1,1

then its distances should be:

sqrt(2) 1 sqrt(2)
1 0 1
sqrt(2) 1 sqrt(2)

so I should use floats for my Gaussian method, shouldn't I?

For a 3x3 kernel the radius is 1, so sigma is 0.33.

##### Share on other sites
DarkChris    110
The graphics card calculates with floats either way, so it's safer to use floats, and as you said, you'll need them anyway.

You should also stay in the same space. Your samples are 0.01f apart, so your blur kernel should be 0.01f wide, the values you calculate the coefficients with should be float2(+-0.01f, +-0.01f), and sigma should be 0.01f/3.

##### Share on other sites
I am going to multiply both x and y for starters, and then I'll try to do it in two passes.

Using this makes the result brighter:
```hlsl
float GaussianCoef(float x, float y)
{
    float sigma = 0.01f/3;
    return ( 1 / ( sqrt(2*3.14f) * sigma ) ) * exp( -(x*x+y*y) / (2*sigma*sigma) );
}

float4 BilateralFiltering(VertexOut input) : COLOR0
{
    float4 color = float4(0,0,0,0);

    const float pixel = 0.01f;
    float sqrtTwoPixel = sqrt(2)*pixel;

    float4 k11 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,-pixel) ) * GaussianCoef(sqrtTwoPixel,sqrtTwoPixel);
    float4 k12 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,0) ) * GaussianCoef(pixel,pixel);
    float4 k13 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,pixel) ) * GaussianCoef(sqrtTwoPixel,sqrtTwoPixel);

    float4 k21 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,-pixel) ) * GaussianCoef(pixel,pixel);
    float4 k22 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,0) ) * GaussianCoef(0,0);
    float4 k23 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,pixel) ) * GaussianCoef(pixel,pixel);

    float4 k31 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,-pixel) ) * GaussianCoef(sqrtTwoPixel,sqrtTwoPixel);
    float4 k32 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,0) ) * GaussianCoef(pixel,pixel);
    float4 k33 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,pixel) ) * GaussianCoef(sqrtTwoPixel,sqrtTwoPixel);

    color = (k11+k12+k13+k21+k22+k23+k31+k32+k33);
    return color/9;   // color / sum;
}
```

##### Share on other sites
DarkChris    110
GaussianCoef(pixel, pixel);
Or
GaussianCoef(0, pixel);
Or
GaussianCoef(pixel, 0);
Or
GaussianCoef(0, 0);

Always. sqrt(2)*pixel is wrong.

##### Share on other sites
So, like this?

```hlsl
float4 k11 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,-pixel) ) * GaussianCoef(pixel,pixel);
float4 k12 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,0) ) * GaussianCoef(pixel,0);
float4 k13 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,pixel) ) * GaussianCoef(pixel,pixel);

float4 k21 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,-pixel) ) * GaussianCoef(0,pixel);
float4 k22 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,0) ) * GaussianCoef(0,0);
float4 k23 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,pixel) ) * GaussianCoef(0,pixel);

float4 k31 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,-pixel) ) * GaussianCoef(pixel,pixel);
float4 k32 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,0) ) * GaussianCoef(pixel,0);
float4 k33 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,pixel) ) * GaussianCoef(pixel,pixel);
```

The result always is:

[Edited by - Tipotas688 on November 11, 2010 1:25:52 PM]

##### Share on other sites
DarkChris    110
Your Gaussian blur filter isn't even correct. You mixed the one-dimensional formula with the two-dimensional one. There's no square root in the two-dimensional one, and it's pow(sigma, 2):

return ( 1 / ( 2*3.14f*sigma*sigma ) ) * exp( -(x*x+y*y) / (2*sigma*sigma) );

##### Share on other sites
Ah, you are right, I forgot to change it, but now it's even brighter.

##### Share on other sites
DarkChris    110
Now play around a bit with the values you use. Use "1" instead of "pixel", or sigma = 0.33, since I'm starting to doubt what I said.
I always work in "pixel" space, where I only have values ranging from 1 to 5 (for the Gaussian blur), and not 0.01f.

##### Share on other sites
With 1s and 0.33f for sigma I get the same picture but darker (not blurred; I get the same when using sqrt(2) for the diagonal ones) :S

For pixel and 0.33f I get:

EDIT: Also, the kernel weights I get don't sum up to 1; is that right?

[Edited by - Tipotas688 on November 11, 2010 6:28:11 PM]

##### Share on other sites
Another thought: should I divide the sum of the pixels by the sum of the weights rather than by 9?

##### Share on other sites
swiftcoder    18426
Quote:
 Original post by Tipotas688
 Another thought: should I divide the sum of the pixels by the sum of the weights rather than by 9?
Yes. Otherwise you will shift the luminance.
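A small numeric check of why the weight sum is the right divisor (C sketch; the nine weights are arbitrary illustrative values): on a flat image every sample equals the same value v, and a correctly normalised kernel must return exactly v, which acc/wsum does and acc/9 does not.

```c
/* Weighted average of nine samples. Dividing by the sum of the
 * weights keeps a flat image flat; dividing by a fixed 9 darkens
 * (or brightens) it whenever the weights don't happen to sum to 9. */
float weighted_average(const float samples[9], const float weights[9])
{
    float acc = 0.0f, wsum = 0.0f;
    for (int i = 0; i < 9; i++) {
        acc  += weights[i] * samples[i];
        wsum += weights[i];
    }
    return acc / wsum;
}
```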

##### Share on other sites
Something's still off: if I use 1 for the Gaussian distance the result is the same, and if it's 0.01f the result is a simple blur :S

```hlsl
float GaussianCoef(float x, float y)
{
    float sigma = 0.33f;
    return ( 1 / ( 2.0f*3.14f * sigma*sigma ) ) * exp( -(x*x+y*y) / (2.0f*sigma*sigma) );
}

float4 BilateralFiltering(VertexOut input) : COLOR0
{
    float4 color = float4(0,0,0,0);

    const float pixel = 0.01f;

    float sum = 0;
    float temp = 0;

    temp = GaussianCoef(pixel,pixel);
    sum += temp;
    float4 k11 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,-pixel) ) * temp;

    temp = GaussianCoef(pixel,0);
    sum += temp;
    float4 k12 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,0) ) * temp;

    temp = GaussianCoef(pixel,pixel);
    sum += temp;
    float4 k13 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(-pixel,pixel) ) * temp;

    temp = GaussianCoef(0,pixel);
    sum += temp;
    float4 k21 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,-pixel) ) * temp;

    temp = GaussianCoef(0,0);
    sum += temp;
    float4 k22 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,0) ) * temp;

    temp = GaussianCoef(0,pixel);
    sum += temp;
    float4 k23 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(0,pixel) ) * temp;

    temp = GaussianCoef(pixel,pixel);
    sum += temp;
    float4 k31 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,-pixel) ) * temp;

    temp = GaussianCoef(pixel,0);
    sum += temp;
    float4 k32 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,0) ) * temp;

    temp = GaussianCoef(pixel,pixel);
    sum += temp;
    float4 k33 = tex2D( ColoredTextureSampler, input.textureCoordinates.xy + float2(pixel,pixel) ) * temp;

    color = (k11+k12+k13+k21+k22+k23+k31+k32+k33)/sum;
    return color;
}
```

##### Share on other sites
swiftcoder    18426
Quote:
 Original post by Tipotas688Something's still off as if I use 1 for the Gaussian distance the result is the same if its 0.01f the result is a simple blur :S
Unless I am really confused, that *is* a simple Gaussian blur.

For a bilateral filter, your weights need to depend on the actual intensity/colour value at each pixel, no? Where are your closeness and similarity metrics?