Bilateral Blur with linear depth?

ic0de
I recently implemented a proper bilateral blur for my SSAO shader, but it only works a few feet in front of the camera before depth discontinuities can no longer be detected. I assume the culprit is the non-linear depth buffer. If I linearize the depth buffer I will still have to deal with a significant loss of precision at farther distances, as well as the increased shader complexity. Has anyone else run into this problem? None of the material I've read mentions linearizing the buffer; is there an alternative I should know about? Here is my fragment shader in GLSL:
 
uniform sampler2D color_texture;
uniform sampler2D depth_texture;

varying vec2 vTexCoord;
varying vec2 blurCoords[4];

// Maximum depth difference before a neighbour is rejected as a discontinuity.
const float maxdiff = 0.0001;

void main(void)
{
	vec4 center = texture2D(color_texture, vTexCoord);
	float centerdepth = texture2D(depth_texture, vTexCoord).r;
	vec4 sum = center * 0.2270270270;
	float weightSum = 0.2270270270;

	// Note: "sample" is a reserved word in newer GLSL versions, hence blurSample.
	vec4 blurSample = texture2D(color_texture, blurCoords[0]);
	float depth = texture2D(depth_texture, blurCoords[0]).r;
	float closeness = step(abs(centerdepth - depth), maxdiff);
	float weight = 0.0702702703 * closeness;
	sum += blurSample * weight;
	weightSum += weight;

	blurSample = texture2D(color_texture, blurCoords[1]);
	depth = texture2D(depth_texture, blurCoords[1]).r;
	closeness = step(abs(centerdepth - depth), maxdiff);
	weight = 0.3162162162 * closeness;
	sum += blurSample * weight;
	weightSum += weight;

	blurSample = texture2D(color_texture, blurCoords[2]);
	depth = texture2D(depth_texture, blurCoords[2]).r;
	closeness = step(abs(centerdepth - depth), maxdiff);
	weight = 0.3162162162 * closeness;
	sum += blurSample * weight;
	weightSum += weight;

	blurSample = texture2D(color_texture, blurCoords[3]);
	depth = texture2D(depth_texture, blurCoords[3]).r;
	closeness = step(abs(centerdepth - depth), maxdiff);
	weight = 0.0702702703 * closeness;
	sum += blurSample * weight;
	weightSum += weight;

	// Normalize by the sum of accepted weights only; keep alpha at 1.0.
	gl_FragColor = vec4(sum.rgb / weightSum, 1.0);
}

Attached is a screenshot of my shader in action using the common Sibenik cathedral test model. As you can see, the edges are quite pronounced on the player's weapon and on columns close to the camera, but become blurry as the distance increases.

 

MJP

You can linearize a sample from a depth buffer with just a tiny bit of math, using some values from your projection matrix:

 

float linearZ = Projection._43 / (zw - Projection._33);

 

That's HLSL matrix syntax; you'd have to convert it to the equivalent GLSL.
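One possible GLSL translation, assuming a standard OpenGL projection matrix passed in as a uniform named `proj` (GLSL matrices are column-major, so `proj[3][2]` and `proj[2][2]` correspond to HLSL's `Projection._43` and `Projection._33`; the depth sample also has to be remapped from GL's [0, 1] window depth back to NDC first):

```glsl
uniform mat4 proj; // the camera's projection matrix (assumed available)

float linearizeDepth(float d) // d = raw depth buffer value in [0, 1]
{
    float zNdc = 2.0 * d - 1.0;              // window depth -> NDC z in [-1, 1]
    return proj[3][2] / (zNdc + proj[2][2]); // positive view-space distance
}
```

For a symmetric GL frustum this comes out to near-plane distance at d = 0 and far-plane distance at d = 1; if your projection conventions differ (e.g. a [0, 1] clip-space depth), the remap and signs change accordingly.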

ic0de

Hmm

 

After linearizing the depth buffer I see another problem. With linear depth, if I look at an angled surface, the depth difference between adjacent pixels grows with distance from the camera due to perspective projection. At a certain point this exceeds my "maxdiff" and the angled surface is no longer blurred. Because I am using a separable blur, the result gets blurred along one axis but not the other, creating even worse artifacts. Attached is a screenshot with the artifact circled (needs to be viewed at full resolution).

 
 

Hodgman

I've never dealt with fixing that particular problem, but you could use some kind of slope-scaled depth threshold.

e.g. threshold /= abs(normal_vs.z)

 

If you don't have per-pixel normals in this pass, you can guesstimate them with ddx/ddy on the depth values (or just guesstimate the slope directly rather than a full normal).
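A rough GLSL sketch of both variants (the `normal_texture` sampler is a placeholder for however your G-buffer stores normals; `dFdx`/`dFdy` are GLSL's equivalents of HLSL's `ddx`/`ddy` and need `GL_OES_standard_derivatives` on ES2):

```glsl
// Option 1: slope-scale the threshold using a view-space G-buffer normal
// (assumed stored in [0,1] and unpacked to [-1,1]).
vec3  normal_vs = texture2D(normal_texture, vTexCoord).xyz * 2.0 - 1.0;
float threshold = maxdiff / max(abs(normal_vs.z), 0.1); // clamp to avoid blow-up

// Option 2: no normals available -- estimate the local depth slope directly
// from screen-space derivatives and widen the threshold by it.
float slope      = max(abs(dFdx(centerdepth)), abs(dFdy(centerdepth)));
float threshold2 = maxdiff + slope * 2.0; // 2.0 = sample offset in pixels (tunable)
```

Either `threshold` then replaces the constant `maxdiff` in the `step()` comparison; the clamp and the pixel-offset factor are tuning knobs, not fixed values.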

 

Regarding linearization, I've usually done this in a pass beforehand so that it occurs once per pixel, not once per blur sample.
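That pre-pass can be as simple as a full-screen quad writing linear depth into a float render target. A sketch, with the two relevant projection terms uploaded as uniforms (for a standard GL projection these would be `proj[2][2]` and `proj[3][2]`):

```glsl
uniform sampler2D depth_texture;
uniform float projC;    // = proj[2][2] of the projection matrix
uniform float projD;    // = proj[3][2] of the projection matrix
varying vec2 vTexCoord;

void main(void)
{
    float d = texture2D(depth_texture, vTexCoord).r;
    float zNdc = 2.0 * d - 1.0;               // GL window depth -> NDC
    float linearZ = projD / (zNdc + projC);   // view-space distance
    gl_FragColor = vec4(linearZ, 0.0, 0.0, 1.0); // store in an R32F target
}
```

The blur passes then sample this texture directly, so each tap costs one fetch with no per-sample math.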

ic0de

I've never dealt with fixing that particular problem, but you could use some kind of slope-scaled depth threshold.

e.g. threshold /= abs(normal_vs.z)

 

If you don't have per-pixel normals in this pass, you can guesstimate them with ddx/ddy on the depth values (or just guesstimate the slope directly rather than a full normal).

 

Regarding linearization, I've usually done this in a pass beforehand so that it occurs once per pixel, not once per blur sample.

 

I do have access to per-pixel normals in this case, but I was hoping to avoid the extra texture sampling. Depending on how I implement the slope method it may be cheaper, since I am only going along one axis anyway.

 

EDIT: Just realized that the slope method wouldn't work very well. To calculate slope I would essentially use what I am already computing:

slope = 1.0 / abs(centerdepth - depth)

which simplifies the test to:

step(abs(centerdepth - depth), maxdiff * abs(centerdepth - depth));

and that always results in 0, because abs(centerdepth - depth) > maxdiff * abs(centerdepth - depth) whenever maxdiff < 1.

Edited by ic0de
