Gaussian Blur

6 comments, last by TheAdmiral 16 years, 10 months ago
What is the best way to blur the scene in a post-processing effect? Currently I'm picking 10 points close to the current pixel and averaging them, but the result doesn't come out very blurred, and averaging many more colors seems like overkill. Is there a good way to do a blur without many shader instructions? Thanks!
It might take a few passes to get some good blurring. You really don't need to sample too many points; something like 4 or 9 should work well.
Moved to GP&T where this is more suitable...

Gaussian blur is separable along the X and Y axes, so the common trick is to implement it as two passes - taking 4-9 samples along a single axis and feeding the result of the first pass into the second.
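To make the two-pass idea concrete, here's a rough sketch of what the horizontal pass could look like as a D3D9-era HLSL effect. The sampler setup, the 9-tap weights, and the g_TexelWidth constant are illustrative assumptions rather than code from any particular sample:

// Horizontal Gaussian blur pass (sketch).
texture g_SceneTexture;
sampler SceneSampler = sampler_state
{
    Texture   = <g_SceneTexture>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
};

float g_TexelWidth;   // 1.0 / render-target width

// 9-tap Gaussian; weights sum to ~1.0
static const float Offsets[9] = { -4, -3, -2, -1, 0, 1, 2, 3, 4 };
static const float Weights[9] =
{
    0.0162, 0.0540, 0.1216, 0.1945, 0.2270,
    0.1945, 0.1216, 0.0540, 0.0162
};

float4 BlurXPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 color = 0;
    for (int i = 0; i < 9; ++i)
        color += Weights[i] * tex2D(SceneSampler, uv + float2(Offsets[i] * g_TexelWidth, 0));
    return color;
}

The vertical pass is the same shader with the offset applied to the y coordinate (using 1/height instead of 1/width), rendered over the output of the horizontal pass.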

Be careful with your sample offsets. Most of the time you'll be okay, but it's not hard to greatly reduce the impact of a sample by making it too close to another one (especially if point sampling is used). If your source texture supports linear filtering then you can be very clever and put your sample offsets exactly on the border of two pixels and get the TMU to do a double-sample for the price of one [cool].


Look up the PostProcess sample in the DirectX SDK - it gives you a good UI to just play with and you can also see the code behind it.

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Is this something you do with shaders? I've been learning OpenGL (just the basics so far) and I wonder how you would do these things you speak of?
Quote:Original post by PureW
Is this something you do with shaders? I've been learning OpenGL (just the basics so far) and I wonder how you would do these things you speak of?
Yes, post-processing is done using shaders. I'm sure you can hack a few things together with older fixed-function techniques, but by and large it's easiest to just use a pixel shader. Not being an OpenGL programmer, I don't know whether it offers any post-processing benefits that D3D doesn't...

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:Original post by jollyjeffers
Be careful with your sample offsets. Most of the time you'll be okay, but it's not hard to greatly reduce the impact of a sample by making it too close to another one (especially if point sampling is used). If your source texture supports linear filtering then you can be very clever and put your sample offsets exactly on the border of two pixels and get the TMU to do a double-sample for the price of one [cool].


So you mean making the offsets multiples of 1.0 or 0.5? I understand what you're saying, which would be clever indeed; I'm just not sure what the offset should be. Thanks for the replies!
Quote:Original post by Programmer101
So you mean making the offsets multiples of 1.0 or 0.5? I understand what you're saying, which would be clever indeed; I'm just not sure what the offset should be. Thanks for the replies!
There was an older ATI whitepaper/presentation covering this trick - back in the 9x00 generation IIRC. Had lots of nice diagrams as well [smile]

Basically, take this ASCII art:

+---+---+
|   |   |
| X | X |
|   |   |
+---+---+


Typically you'd sample in the middle of each pixel and then sum/average them manually. The alternative, where linear filtering is available, is:

+---+---+
|   |   |
|   X   |
|   |   |
+---+---+


Take a single sample on the boundary between the two pixels. The linear filtering should sample both pixels and return a single value that is a 50:50 mix of the two - which is basically what you'd otherwise be doing manually yourself...
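As a sketch of how that border sample might be set up in HLSL (the sampler and constant names here are illustrative assumptions): texel centres sit at (i + 0.5) / width, so the shared border between texel i and texel i+1 is half a texel away from either centre.

// "Two samples for the price of one" via bilinear filtering (sketch).
texture g_SourceTexture;
sampler SourceSampler = sampler_state
{
    Texture   = <g_SourceTexture>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};

float g_TexelWidth;   // 1.0 / source texture width

float4 TwoForOnePS(float2 uv : TEXCOORD0) : COLOR0
{
    // Instead of two point samples at the two texel centres, averaged manually:
    //   float4 a = tex2D(SourceSampler, uv);
    //   float4 b = tex2D(SourceSampler, uv + float2(g_TexelWidth, 0));
    //   return 0.5 * (a + b);
    // ...shift half a texel so the sample lands exactly on the border and
    // let the hardware's bilinear filter return the 50:50 mix in one fetch:
    return tex2D(SourceSampler, uv + float2(0.5f * g_TexelWidth, 0.0f));
}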

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

For a straight, pixel-perfect blur effect, you'll just need a bigger kernel or more passes. But for many effects, where the blurred image is blended with the raw image in some way or other, you can achieve good results by downsampling the image prior to blurring.

For example, if you want a very strong bloom effect, you could get away with scaling a 1280x1024 bright-passed image down to, say, 256x256 before performing the blur. This cuts the number of pixels to be blurred down by a factor of 20. Once the blur is done, you can scale the result back up to full size (bilinearly, of course), blend it additively, and see very little in the way of visual anomaly.

The tradeoff, as always, is between performance and visual quality. With a separated 15x15 Gaussian kernel and downsampling by a factor of 5x5 or so, you can achieve quite a convincing bloom at remarkably little performance cost. The technique is applicable, albeit less so, to other post-processes, including the standard radial and motion blurs.
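To illustrate the final step, here is a rough sketch of the composite pass as a D3D9-style HLSL effect; the texture names, sampler states, and the g_BloomStrength factor are illustrative assumptions, not code from any particular demo:

// Additive bloom composite (sketch).
// g_Scene is the full-resolution frame; g_Bloom is the bright-passed,
// downsampled (e.g. 256x256) and blurred texture.
texture g_Scene;
texture g_Bloom;

sampler SceneSampler = sampler_state { Texture = <g_Scene>; MinFilter = POINT;  MagFilter = POINT;  };
sampler BloomSampler = sampler_state { Texture = <g_Bloom>; MinFilter = LINEAR; MagFilter = LINEAR; };

float g_BloomStrength = 1.0f;   // artistic scale factor

float4 CompositePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 scene = tex2D(SceneSampler, uv);
    float4 bloom = tex2D(BloomSampler, uv);   // bilinear sampling stretches the small texture back up
    return scene + g_BloomStrength * bloom;   // additive blend
}

Because the bloom texture is sampled with LINEAR filtering, the low-resolution result is stretched back up to screen resolution essentially for free, which is what hides the downsampling.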

Admiral
Ring3 Circus - Diary of a programmer, journal of a hacker.

