gaussian blur and linear sampling clarification



I was reading several articles about using the linear filter when sampling textures to make a Gaussian blur kernel more efficient.

Here:

And the first thing I don't understand is that they talk as if the linear filter takes only 2 samples when (if I understood correctly) it should take 4 samples. Doesn't that affect the result in any way? (The blur is done in two passes, first horizontally and then vertically.)

[attachment=34952:three2.png]

The other thing may be due to my lack of proper understanding of how linear filtering works, but from what I could find, linear filtering first rounds the UV position to the nearest texel, takes a sample there, and then samples the right, bottom-right, and bottom texels (the four nearest texels), as shown below.

[attachment=34951:Screenshot_4.png]

(awesomely drawn grid of texels)

So if we are sampling at the center of the grid (the blue point), the returned value will be interpolated between all of those green rectangles (plus the filled one).

If that is correct, then I don't understand how they are calculating the offsets to sample with the kernel. I mean, why there?

[attachment=34953:three3.png]

[attachment=34954:three4.png]

I hope my explanation/questions are clear; if not, please say so and I will try to draw it better :P


"That means if we don’t fetch at texel center positions our texture then we can get information about multiple pixels."

Never thought of this. What happens is that if you fetch any texel with nearest filtering, you will get either pixel 6 or pixel 5 in your case. If you sample at 5.5, the texture filtering processors (a piece of hardware outside of your current shader) will take care of giving you the average of pixels 5 and 6. So any time you do a texture fetch, it can average 2 pixels for you. Bilinear averaging is probably free at this point, because the result probably comes back to the shader at the same time a point-filtered fetch would. So let the hardware average pixels for you, and then average those results together yourself.
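What the hardware does in that 5/6 example can be modelled in a few lines of Python (a software sketch, not shader code, and `linear_fetch_1d` is a made-up helper name; it uses the post's convention that texel i's center sits at integer coordinate i, whereas real GPUs put centers at i + 0.5 in texel space, though the averaging behaviour is the same):

```python
# Software model of a 1D linear texture fetch, texel i's center at
# integer coordinate i (matching the 5 / 5.5 / 6 example above).

def linear_fetch_1d(texels, x):
    i = int(x)                       # left texel (assumes x >= 0)
    f = x - i                        # fractional distance toward the right texel
    j = min(i + 1, len(texels) - 1)  # clamp at the texture edge
    return texels[i] * (1.0 - f) + texels[j] * f

row = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0]
print(linear_fetch_1d(row, 5.0))  # exactly on texel 5 -> 15.0
print(linear_fetch_1d(row, 5.5))  # halfway between 5 and 6 -> 15.5
```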


So anytime you call a texture fetch, it will be able to average 2 pixels.

Better than that, you can do any kind of lerp / weighted average by choosing the texture coordinate carefully.
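For instance, the trick those articles rely on is merging two adjacent Gaussian taps into a single linear fetch: pick the coordinate between the two texels so the hardware lerp reproduces the two original weights. A hedged sketch in Python (`merge_taps` is a made-up helper, not code from the article):

```python
# Two adjacent discrete taps (offsets o1 and o2 = o1 + 1, weights w1 and w2)
# collapse into one linear fetch with a combined weight and a weighted offset.

def merge_taps(o1, w1, o2, w2):
    w = w1 + w2                  # combined weight of the single fetch
    o = (o1 * w1 + o2 * w2) / w  # coordinate placed between the two texels
    return o, w

# Example: the 5-tap binomial kernel 1 4 6 4 1 (normalized by 16).
# The taps at offsets +1 and +2 merge into one fetch:
o, w = merge_taps(1, 4 / 16, 2, 1 / 16)
print(o, w)  # 1.2, 0.3125: fetch at offset 1.2 with weight 5/16
```

Sampling at offset 1.2 makes the hardware return 0.8 * texel1 + 0.2 * texel2; multiplying that by 5/16 gives back exactly 4/16 * texel1 + 1/16 * texel2, the two original taps, at the cost of one fetch.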

And the first thing I don't understand is that they talk as if the linear filter takes only 2 samples when (if I understood correctly) it should take 4 samples. Doesn't that affect the result in any way? (The blur is done in two passes, first horizontally and then vertically.)

If your coordinates are perfectly in line with one row of pixels, then that row will have a weight of 1.0 and the row above/below will have a weight of 0.0. So, yes, technically you're blending 4 texels, but 2 of them are multiplied by zero, so it's as if you're only blending two pixels.
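Writing the four bilinear weights out explicitly makes that point visible (a Python sketch with texel centers at integer coordinates, to match the diagrams; `bilinear_weights` is a made-up helper name):

```python
# The four bilinear weights for a sample at (x, y); they always sum to 1.

def bilinear_weights(x, y):
    fx, fy = x - int(x), y - int(y)   # fractional position in the 2x2 block
    return [(1 - fx) * (1 - fy),      # top-left
            fx * (1 - fy),            # top-right
            (1 - fx) * fy,            # bottom-left
            fx * fy]                  # bottom-right

# Halfway between two texels horizontally, but exactly on a row:
print(bilinear_weights(5.5, 3.0))  # [0.5, 0.5, 0.0, 0.0]
```

With the sample dead on a row, the bottom two weights vanish, so the "4-tap" fetch degenerates into the 2-tap average the articles assume.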

The other thing may be due to my lack of proper understanding of how linear filtering works, but from what I could find, linear filtering first rounds the UV position to the nearest texel, takes a sample there, and then samples the right, bottom-right, and bottom texels (the four nearest texels), as shown below.

That's not quite right. You blend the nearest 4 texels to the sample location.

In your diagram, the blue sample location is above the center of the center texel, so the nearest four texels, relative to your filled-green texel, are to the right and up (not right and down).


What happens is that if you fetch any texel with nearest filtering, you will get either pixel 6 or pixel 5 in your case.
If you sample at 5.5, the texture filtering processors will take care of giving you the average of pixels 5 and 6. So any time you do a texture fetch, it can average 2 pixels for you.

So wait, this is groundbreaking to me. You're telling me that nearest filtering is actually capable of sampling and averaging 2 pixels instead of one? What happens if you sample a point exactly between 4 centers? Or were you already talking about linear instead of point filtering?

And the first thing I don't understand is that they talk as if the linear filter takes only 2 samples when (if I understood correctly) it should take 4 samples. Doesn't that affect the result in any way? (The blur is done in two passes, first horizontally and then vertically.)

If your coordinates are perfectly in line with one row of pixels, then that row will have a weight of 1.0 and the row above/below will have a weight of 0.0. So, yes, technically you're blending 4 texels, but 2 of them are multiplied by zero, so it's as if you're only blending two pixels

Oh, I didn't know that linear filtering takes samples and multiplies them by weights. Everything I've read said it takes 4 samples and computes an average, so I always thought it was something like (t1+t2+t3+t4)/4.

I guess it goes along these lines then:
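(The code the poster attached here did not survive the archive; this is a sketch of the weighted form being described, in plain Python rather than shader code, not the poster's original snippet.)

```python
# Bilinear filtering as a lerp of lerps, not (t1+t2+t3+t4)/4.
# t1, t2 are the top pair of texels, t3, t4 the bottom pair;
# fx, fy is the fractional position inside the 2x2 neighbourhood.

def bilinear(t1, t2, t3, t4, fx, fy):
    top    = t1 * (1 - fx) + t2 * fx
    bottom = t3 * (1 - fx) + t4 * fx
    return top * (1 - fy) + bottom * fy

print(bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))   # dead center -> plain average 1.5
print(bilinear(0.0, 1.0, 2.0, 3.0, 0.25, 0.0))  # on the top row -> 0.25 (t3, t4 ignored)
```

Only when fx = fy = 0.5 does it reduce to the simple four-way average.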


Nearest just gives you the exact texel color that is closest to your UV location. Linear will blend 4 as said, but it weights each one by how close the sample is to it. So, as suggested, if vertically it's dead center on a row, then the 2 pixels in that row get all the weight.

Furthermore, there is trilinear filtering, which does all of this linear work on 2 mip maps, then takes those 2 linearly weighted outputs and weights them together as well.
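That final step is just one more lerp. A minimal sketch (plain Python; the two inputs stand for the bilinear results already fetched from adjacent mip levels, and the variable names are assumptions, not API names):

```python
# Trilinear filtering = bilinear on two adjacent mips, then one more lerp
# by the fractional level-of-detail between them.

def lerp(a, b, t):
    return a + (b - a) * t

def trilinear(bilinear_mip_n, bilinear_mip_n1, frac_lod):
    # frac_lod = 0.0 -> entirely mip n, 1.0 -> entirely mip n + 1
    return lerp(bilinear_mip_n, bilinear_mip_n1, frac_lod)

print(trilinear(0.25, 0.75, 0.5))  # halfway between the two mips -> 0.5
```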
