Jump to content
Syranide

Anti-Aliasing Theory


Recommended Posts

Ok, after seeing a couple of threads and tutorials on what AA really is (that is, the edges of the triangles are "upsampled", so that instead of one pixel you have e.g. a grid of 2x2, 4x4 and so on)... but really... why?

Why wouldn't one take the pixel behind and blend it with the pixel at the edge? That is, depending on how much space the edge takes, it gets the same amount of influence on that pixel. This would straight away give you "perfect" AA (INFxINF ;)), with only a very minor difference in visual quality from "true" 16x16 AA. Doing it "my way" would give you 16x "AA" at the cost of a fraction of a percent.

Or is this just not possible for some reason (shaders)? I guess there must be something preventing it from being used. Does anyone know anything about it? I'm curious why we use such a performance-degrading AA technique today instead of something so simple yet so powerful, if it's possible to implement on today's GPUs.
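The idea being proposed can be sketched in a few lines. This is a toy model, assuming the fractional coverage of the pixel is already known exactly; the function name is made up for illustration:

```python
# Hypothetical sketch of coverage-based AA: blend the triangle's color
# with whatever is already in the framebuffer, weighted by how much of
# the pixel the triangle covers.

def blend_by_coverage(dst, src, coverage):
    """Blend src over dst using fractional pixel coverage (0.0-1.0)."""
    return tuple(round(d * (1.0 - coverage) + s * coverage)
                 for d, s in zip(dst, src))

# A black edge covering 25% of a white background pixel:
print(blend_by_coverage((255, 255, 255), (0, 0, 0), 0.25))  # (191, 191, 191)
```

As the replies below point out, the hard part is not this blend but computing the coverage and knowing *which* part of the pixel is covered.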

"on how much space the edge takes it get the same amount of influence on that pixel"

This is what the software antialiasing library AGG (Anti Grain Geometry) does. In practice this only actually gives you 256 levels of AA (equiv to 16x16?), but this could be corrected with an alpha channel larger than 8 bits.
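The 256-level quantization can be sketched like this; this is an illustrative toy mapping of fractional coverage to an 8-bit value, not AGG's actual API:

```python
# Toy sketch: quantize a fractional coverage value (0.0-1.0) to an
# 8-bit coverage/alpha value, giving 256 discrete AA levels.

def quantize_coverage(coverage):
    return min(255, int(coverage * 256))

print(quantize_coverage(0.5))   # 128
print(quantize_coverage(1.0))   # 255
```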

I imagine that calculating the area of the pixel covered by the triangle is more computationally intensive than deciding if a point is covered N^2 times.
You will also have problems with overlapping polygons because only the amount of coverage is written to the pixel, not what parts of the pixel are covered. For example I may be drawing a red triangle in front of a blue triangle. Both have exactly the same screen coordinates. Only the red triangle should be drawn, but the edges are mixed blue and red because the renderer has no idea which parts of the pixel are covered by the red so it also draws the blue. I'm not sure if that paints a clear picture.
Yet another complication, related to the last one, is that you now have to use the painter's algorithm and sort all polygons. As with transparent polygons, having the edges alpha-blended means the z-buffer no longer works correctly.
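The red-over-blue problem can be shown numerically with a toy "coverage as alpha" blend (assumed model, not any real rasterizer):

```python
# Two coincident triangles (blue behind, red in front) each cover 50%
# of an edge pixel. Only the coverage fraction is stored, not *which*
# half of the pixel is covered, so the hidden blue still leaks in.

def blend(dst, src, coverage):
    return tuple(round(d * (1 - coverage) + s * coverage)
                 for d, s in zip(dst, src))

pixel = (255, 255, 255)                  # white background
pixel = blend(pixel, (0, 0, 255), 0.5)   # blue triangle, 50% coverage
pixel = blend(pixel, (255, 0, 0), 0.5)   # red triangle, same 50% coverage
print(pixel)  # (192, 64, 128)
```

The correct result, with red fully hiding blue, would be a pure white/red blend of (255, 128, 128); instead the blue triangle has darkened the red and green channels.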

Also, it doesn't work.

Consider rendering the following "mesh":

0----1
|\   |
| \  |
|  X |
|   \|
2----3



Pixel X is exactly split in the middle by the edge.
First, assume that pixel X is white (255, 255, 255) from the skybox or whatever.
Second, we render the two connected triangles with a black color (0, 0, 0).
You'd expect the pixel to become black after rendering, right?
This is what happens if you do it your way:
Let's say that we render the left triangle first.
The triangle covers 50% of pixel X, thus a color value of 50% white + 50% black is written (after the blend), i.e. grey (128, 128, 128).
Next the right triangle is rendered, also covering 50% of pixel X.
Now we get grey blended with black, giving us a darker grey (64, 64, 64), but not black.

The same happens in a lot more cases as well.
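The walk-through above in a few lines of toy Python (one channel shown, straight coverage-weighted blend; an assumed model for illustration):

```python
# Shared-edge problem: two black triangles each cover 50% of pixel X,
# which starts out white. Blending by coverage twice never reaches black.

def blend(dst, src, coverage):
    return round(dst * (1 - coverage) + src * coverage)

x = 255                 # white skybox pixel (one channel shown)
x = blend(x, 0, 0.5)    # left triangle: 50% coverage -> grey (128)
x = blend(x, 0, 0.5)    # right triangle: 50% coverage -> darker grey (64)
print(x)  # 64, yet the pixel is fully covered and should be 0 (black)
```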

Edit: Beaten to it :)

Yes, I agree that figuring out how much area is actually covered might cost quite a lot.

Ok, I haven't read anything, so this is just a guess, but is the backbuffer "upsampled" too? E.g., does running 640x480 with 2x AA give you a 1280x960 backbuffer? As far as I understood, only a pixel was "upsampled" and then all the "subpixels" were blended into one pixel again.

If so that would explain why indeed it should be done the way it is done today.
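The scheme described in the post (render at a higher resolution, then average the subpixels back down) can be sketched as a toy single-channel box filter; this is illustrative only, not how any actual GPU implements it:

```python
# Toy 2x supersampling resolve: average each 2x2 block of the
# high-resolution buffer down to one final pixel (box filter).

def downsample_2x(hi):
    h, w = len(hi), len(hi[0])
    return [[round((hi[y][x] + hi[y][x + 1] +
                    hi[y + 1][x] + hi[y + 1][x + 1]) / 4)
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# An edge pixel whose 2x2 samples are half black, half white averages to grey:
print(downsample_2x([[0, 255],
                     [0, 255]]))  # [[128]]
```

Because each subpixel carries its own depth sample, this approach avoids the shared-edge and overlap problems described above.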

EDIT: btw, the reason I said INFxINF was that it doesn't really become 16x16 if the coverage is expressed as a float. A float would allow you to reach 16x16 AA, but beyond that you could have fractions of each color blended, which would give even greater accuracy. But this is not something I'm going to argue about or try to prove I'm right (I'm probably not) ;)

I believe eq's comment describes the main reason why this technique is not used. As I'm sure you know, triangles sharing edges is an extremely common occurrence in games today.

For graphical proof of the problem see http://antigrain.com/svg/. It is described there as the "problem of adjacent edges". A technique for reducing the visibility of the problem is described, but it can never be eliminated.

Ah yes, now that you point it out like that, it really is obvious why my "theory" wouldn't work out in practice (and partially because I'd misunderstood the AA technique).

Thank you both for clearing it up, *hears the bed whisper* ... I must be getting tired I guess ;).
