Bilateral Blur with linear depth?


#1 ic0de (Members)

Posted 03 July 2013 - 12:47 PM

I recently implemented a proper bilateral blur for my SSAO shader, but it only works a few feet in front of the camera before depth discontinuities can no longer be detected. I assume the culprit is the non-linear depth buffer. If I linearize the depth buffer I will still have to deal with a significant lack of precision at farther distances, as well as the increased shader complexity. Has anyone else run into this problem? None of the material I read mentioned linearizing the buffer; is there an alternative I should know about? Anyway, here is my fragment shader in GLSL:

uniform sampler2D color_texture;
uniform sampler2D depth_texture;

varying vec2 vTexCoord;
varying vec2 blurCoords[4];

// Samples whose depth differs from the center by more than this are rejected.
const float maxdiff = 0.0001;

// 5-tap Gaussian weights: two taps on each side of the center.
const float weights[4] = float[4](0.0702702703, 0.3162162162,
                                  0.3162162162, 0.0702702703);

void main(void)
{
    vec4 center = texture2D(color_texture, vTexCoord);
    float centerdepth = texture2D(depth_texture, vTexCoord).r;

    vec4 sum = center * 0.2270270270;
    float weightSum = 0.2270270270;

    for (int i = 0; i < 4; i++)
    {
        // "sample" is a reserved word in newer GLSL versions, so use "tap".
        vec4 tap = texture2D(color_texture, blurCoords[i]);
        float depth = texture2D(depth_texture, blurCoords[i]).r;

        // Reject taps that lie across a depth discontinuity.
        float closeness = step(abs(centerdepth - depth), maxdiff);
        float weight = weights[i] * closeness;

        sum += tap * weight;
        weightSum += weight;
    }

    // Normalize the color only; keep alpha at 1.0 (the original divided
    // the alpha channel by weightSum as well).
    gl_FragColor = vec4(sum.rgb / weightSum, 1.0);
}


Attached is a screenshot of my shader in action, using the common Sibenik cathedral test model. As you can see, the edges are quite pronounced on the player's weapon and on columns close to the camera, but become blurry as the distance increases.


you know you program too much when you start ending sentences with semicolons;

#2 MJP (Moderators)

Posted 03 July 2013 - 02:02 PM

You can linearize a sample from a depth buffer with just a tiny bit of math, using some values from your projection matrix:

float linearZ = Projection._43 / (zw - Projection._33);


That's using HLSL matrix syntax, you would have to convert that to the appropriate GLSL syntax.
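A GLSL sketch of the same computation (assuming a standard OpenGL perspective projection, with the near and far plane distances passed in as uniforms rather than read out of the projection matrix; these names are not from MJP's snippet):

```glsl
// Sketch: convert a [0,1] depth-buffer sample to linear view-space depth.
// Assumes a standard OpenGL perspective projection; "near" and "far" are
// the camera plane distances, supplied here as uniforms.
uniform float near;
uniform float far;

float linearizeDepth(float d)
{
    float zNdc = 2.0 * d - 1.0; // window-space [0,1] -> NDC [-1,1]
    return (2.0 * near * far) / (far + near - zNdc * (far - near));
}
```

As a sanity check, d = 0 yields the near plane distance and d = 1 yields the far plane distance, matching the usual OpenGL depth range.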

#3 ic0de (Members)

Posted 03 July 2013 - 02:56 PM

Hmm,

After linearizing the depth buffer I see another problem. With linear depth, if I look at an angled surface, the depth difference between adjacent pixels increases with distance from the camera because of the perspective projection. At a certain point this exceeds my "maxdiff" and the angled surface is no longer blurred. Because I am using a separable blur, the result gets blurred along one axis but not the other, creating even worse artifacts. Attached is a screenshot with the artifact circled (it needs to be viewed at full resolution).



#4 Hodgman (Moderators)

Posted 03 July 2013 - 07:34 PM

I've never dealt with fixing that particular problem, but you could use some kind of slope-scaled depth threshold.

e.g. threshold /= abs(normal_vs.z)

If you don't have per-pixel normals in this pass, you can guesstimate them with ddx/ddy on the depth values (or just guesstimate the slope directly rather than a full normal).
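For example (a sketch of that idea, not code from the thread; the function name and the exact scaling are assumptions, and dFdx/dFdy are GLSL's equivalents of HLSL's ddx/ddy):

```glsl
// Sketch: widen the bilateral rejection threshold on steep surfaces,
// estimating the slope from screen-space derivatives of linear depth.
const float maxdiff = 0.0001; // base threshold from the original shader

float slopeScaledThreshold(float centerdepth)
{
    // The per-pixel depth change approximates the view-space slope.
    float slope = abs(dFdx(centerdepth)) + abs(dFdy(centerdepth));
    // Flat surfaces keep the base threshold; glancing surfaces get more.
    return maxdiff + slope;
}
```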

Regarding linearization, I've usually done this in a pass beforehand so that it occurs once per pixel, not once per blur sample.
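Such a pre-pass could be as small as a fullscreen shader along these lines (a sketch, not code from the thread; it assumes near/far uniforms, the standard OpenGL depth convention, and a floating-point render target to receive the linear depth):

```glsl
// Sketch of a depth-linearization pre-pass: run once over the screen so
// later blur passes can sample linear depth directly.
uniform sampler2D depth_texture;
uniform float near; // camera near plane (assumed uniform, not in the thread)
uniform float far;  // camera far plane

varying vec2 vTexCoord;

void main(void)
{
    float d = texture2D(depth_texture, vTexCoord).r;
    float zNdc = 2.0 * d - 1.0; // window-space [0,1] -> NDC [-1,1]
    float linearZ = (2.0 * near * far) / (far + near - zNdc * (far - near));
    gl_FragColor = vec4(linearZ, 0.0, 0.0, 1.0); // write to a float target
}
```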

#5 ic0de (Members)

Posted 03 July 2013 - 08:15 PM

Hodgman, on 03 July 2013 - 07:34 PM, said:

I've never dealt with fixing that particular problem, but you could use some kind of slope-scaled depth threshold.

e.g. threshold /= abs(normal_vs.z)

If you don't have per-pixel normals in this pass, you can guesstimate them with ddx/ddy on the depth values (or just guesstimate the slope directly rather than a full normal).

Regarding linearization, I've usually done this in a pass beforehand so that it occurs once per pixel, not once per blur sample.

I do have access to per-pixel normals in this case, but I was hoping to avoid the extra texture sampling. Depending on how I implement it, the slope method may be cheaper, since I am only going along one axis anyway.

EDIT: Just realized that the slope method wouldn't work very well. To estimate the slope I would essentially use what I am already calculating:

slope = 1.0 / abs(centerdepth - depth)

which simplifies my test to:

step(abs(centerdepth - depth), maxdiff * abs(centerdepth - depth));

and that always results in 0, because abs(centerdepth - depth) > maxdiff * abs(centerdepth - depth) whenever maxdiff < 1.

Edited by ic0de, 03 July 2013 - 09:09 PM.

