Progressive antialiasing

4 comments, last by Emergent 14 years, 11 months ago
Hello everybody, I'm busy making post-process AA. I don't need it to run in real time, but I do need good quality. Some of the browser-based 3D applications have "progressive" antialiasing, which is exactly what I need, but I have no idea how it's done. Maybe they simply use supersampling AA, I'm not sure. Does anybody have information on the subject?
I guess by "progressive" you mean they get AA when you don't move the object, and aliasing as soon as you move again?

Yeah, they internally render the object several times, slightly offset.

The nice thing about that is that you can use the same rendering technique all the time, you have constant memory usage, and you get very high antialiasing levels cheaply.

Besides the usual depth and back buffer, you need a buffer to accumulate the values; 16 bits per channel is usually enough. (You don't need to make it higher resolution: if your usual frame is 640*480, that 'accumulation' buffer is also just 640*480.)

When blitting to the screen, you just divide RGB (every channel) by the number of passes you have already added.
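
Roughly, in code, it boils down to something like this (just a CPU-side sketch to show the idea; RenderFrame() and the buffer layout are made up, and in a real application the accumulation buffer would be a render target on the GPU):

#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

const int kWidth  = 640;
const int kHeight = 480;

// The accumulation buffer has the same resolution as the frame; 16 bits per
// channel is usually enough, plain float is used here for simplicity.
std::vector<Color> accumulation(kWidth * kHeight, Color{0.0f, 0.0f, 0.0f});
int passCount = 0;

// Placeholder standing in for the real renderer: draw the scene with the
// given sub-pixel jitter and return the resulting frame.
std::vector<Color> RenderFrame(float /*jitterX*/, float /*jitterY*/)
{
    return std::vector<Color>(kWidth * kHeight, Color{0.5f, 0.5f, 0.5f});
}

// Render one jittered pass and add it into the accumulation buffer.
void AccumulatePass(float jitterX, float jitterY)
{
    std::vector<Color> frame = RenderFrame(jitterX, jitterY);
    for (std::size_t i = 0; i < frame.size(); ++i)
    {
        accumulation[i].r += frame[i].r;
        accumulation[i].g += frame[i].g;
        accumulation[i].b += frame[i].b;
    }
    ++passCount;
}

// When blitting to the screen, divide every channel by the number of
// passes accumulated so far.
Color Resolve(std::size_t pixelIndex)
{
    const float inv = 1.0f / static_cast<float>(passCount);
    return Color{ accumulation[pixelIndex].r * inv,
                  accumulation[pixelIndex].g * inv,
                  accumulation[pixelIndex].b * inv };
}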
But wouldn't that simply blur the whole image? Or should the offset be really small?
The offset should be, just like in other anti-aliasing cases, within one pixel.
You can easily accomplish this by offsetting the screen matrix (the one that scales your draw data from view space (-1.f to 1.f) to screen space (0 to width or height)).

So you have to add slight offsets in the range of 0.f to 1.f/width (and 1.f/height).
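
Something like this, for example (just a sketch; the row-major 4x4 matrix layout and the 8-sample jitter pattern are only assumptions, use whatever matrix type and sample pattern your engine already has):

struct Jitter { float x, y; };

// Returns a sub-pixel offset for the given pass, cycling through a small
// fixed pattern of positions inside one pixel.
Jitter SubPixelJitter(int pass, int width, int height)
{
    // Sample positions inside the pixel, in the 0..1 range.
    static const Jitter kPattern[8] = {
        {0.0625f, 0.4375f}, {0.5625f, 0.8125f},
        {0.3125f, 0.1875f}, {0.8125f, 0.6875f},
        {0.1875f, 0.9375f}, {0.6875f, 0.3125f},
        {0.4375f, 0.5625f}, {0.9375f, 0.0625f}
    };
    const Jitter p = kPattern[pass % 8];
    return Jitter{ p.x / static_cast<float>(width),
                   p.y / static_cast<float>(height) };
}

// Apply the offset as a translation in the matrix that maps view space
// (-1..1) to screen space. A plain row-major 4x4 float array stands in for
// whatever matrix type the engine actually uses.
void OffsetScreenMatrix(float m[16], const Jitter& j)
{
    m[12] += j.x;  // x translation term
    m[13] += j.y;  // y translation term
}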
Sounds logical, thank you for the explanation ))
I was thinking about this approach and I still believe it will give a slightly blurred result, because the UVs and normals will be moving as well, so it will accumulate slightly different diffuse and specular results.
Or maybe I just need to look at the depth/normal difference between the offset frames, and if the difference is significant, the pixel of the current offset frame gets some transparency and is blended with the accumulated frames. For example, if I'm using 8 frames to accumulate the final result, each pixel might have a transparency of 1/8th where a big depth/normal difference is detected, or be fully opaque where no edge is detected.
Won't this darken edges in cases where only half of the sampled frames are "wrong"?
Not sure I'm getting this right.
Want good-quality antialiasing? Don't need real time? Then just supersample. To make it "progressive" you could just keep adding samples to refine your image. In fact, you never have to stop refining it: just keep casting random rays and updating your pixel averages.
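
For instance, something along these lines (only a sketch; TraceRandomRay() is a made-up stand-in for whatever produces one new sample per pixel, be it a ray tracer or a re-render with a random sub-pixel jitter):

#include <random>
#include <vector>

struct Sample { float r, g, b; };

const int kWidth = 640, kHeight = 480;

std::vector<Sample> average(kWidth * kHeight, Sample{0.0f, 0.0f, 0.0f});
std::vector<int>    sampleCount(kWidth * kHeight, 0);

// Placeholder: cast one ray through a random position inside the pixel and
// return its shading result.
Sample TraceRandomRay(int /*x*/, int /*y*/, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    const float v = dist(rng);  // dummy shading result
    return Sample{v, v, v};
}

// One refinement pass over the whole image; call it as often as you like,
// the per-pixel averages just keep converging.
void RefineOnce(std::mt19937& rng)
{
    for (int y = 0; y < kHeight; ++y)
    {
        for (int x = 0; x < kWidth; ++x)
        {
            const int i = y * kWidth + x;
            const Sample s = TraceRandomRay(x, y, rng);
            // Incremental mean: avg += (sample - avg) / n
            const float n = static_cast<float>(++sampleCount[i]);
            average[i].r += (s.r - average[i].r) / n;
            average[i].g += (s.g - average[i].g) / n;
            average[i].b += (s.b - average[i].b) / n;
        }
    }
}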

