
# Rotated blurring


So, a quick post before I get back to that damn "revision" thing [grin]

I was reading an old ATI paper on shadow mapping last night and one of the ideas in it got me thinking about another problem.

You can sometimes see it in my HDRPipeline Sample - the process of down-sampling, blurring and then up-sample blending can really show some nasty aliasing. Basically you can end up seeing big "soft" pixels. They're blurred/filtered, but because of the down-sampling they still appear very blocky.

Anyway, the ATI paper suggested that a uniform filter wasn't a good idea as it introduced artificial patterns into the result, and the human eye is very good at spotting these artificial patterns. It's a fairly common theme across computer graphics to use different sampling patterns (e.g. Monte-Carlo integration in ray-tracing) and methods to get around this problem.

So I was thinking about (when I get time) experimenting with this. I thought that instead of uploading a grid of constants to the pixel shader I could upload a number of sampling points within the grid.

Then it occurred to me that, in true pixel shader fashion, the same filter gets applied identically to every pixel. So I wondered about doing some sort of dynamic selection/rotation of the samples on a per-pixel basis.

Using some of the HLSL intrinsics you should be able to work out whether the current pixel is in an ODD or EVEN row/column. Thus you end up with 4 possibilities for a given pixel:

• Odd X, Odd Y
• Odd X, Even Y
• Even X, Odd Y
• Even X, Even Y

For an odd value of X you could invert the X coordinates of each sampling point, and for an odd Y you could invert the Y coordinates of each sampling point. Thus for every pixel you get one of 4 rotations of sampling points.

```
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
|o| | | | |  | | | | |o|  | | | |o| |  | |o| | | |
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
| | |o| |o|  |o| |o| | |  | |o| | | |  | | | |o| |
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
| | |X| | |  | | |X| | |  | | |X| | |  | | |X| | |
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
| |o| | | |  | | | |o| |  | | |o| |o|  |o| |o| | |
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
| | | |o| |  | |o| | | |  |o| | | | |  | | | | |o|
+-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+  +-+-+-+-+-+
 Original     Mirror X     Mirror Y     Mirror XY
```

I need to try it out (maybe tomorrow), but I think it could eliminate many of the obvious artificial patterns introduced by the original (trivial) algorithm...

I know I've experimented with that once, and the results were not satisfying, but I can't remember how many samples I used. The offset was screen-space based, too, and as expected it looked a bit weird (like some shadow pixels were flickering) when you moved the camera. I think it would look much better if the offset was based on the world-space position of the pixel instead, but you'd need to implement a sort of hash function in the pixel shader... not sure how you could do that...

Thanks for the info [smile]

I've got a rough idea about how to implement it in a way that I can also play with the distribution/samples. So I can see about ramping up the distribution size/shape as well as the number of samples and see if there is a best combination..

cheers
Jack

I suppose a nice idea would be to use a texture containing the offsets. Something like a normal map would give you a lot of different offsets, and using the texture coordinates you should be able to quickly find an offset that fits the pixel you're calculating. This would take out the odd/even calculation, which I think might take a bit of time on the PS.

Using wrapping on the texture, and a 2x2 texture, you should be able to achieve this technique with a much simpler implementation. I think :).
