Will antialiasing stay relevant in the future?

13 comments, last by mafiesto4 8 years, 5 months ago
There are many kinds of anti-aliasing. Post-process AA (e.g. FXAA/SMAA) is different from adaptive edge sampling (MSAA/EQAA), which is different from temporal supersampling, which is different from texture filtering, Toksvig mapping, "thin wire rendering", and so on.

Go turn off mipmaps and bilinear filtering and see how high you have to crank the resolution before you can't notice the quality degradation that you just caused.

In reality, I can easily view a sculpture made out of 100-micrometre-thick black threads.
Rendering a polygonal version of that scene naïvely, even with MSAA + FXAA, would lead to horrible flickering under camera motion, as the tiny threads pass over pixel centres and then disappear again. Even on a 4K tablet!

Increasing the display resolution is simply equivalent to supersampling, which is about the least efficient method, and it only approaches the ideal quality as the resolution approaches infinity.
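The "resolution is just supersampling" point can be made concrete with a minimal sketch: a supersampled pixel is just the average of N×N point samples over its footprint. The `scene` function and all names here are illustrative, not from the thread.

```python
# Minimal sketch: a pixel rendered by N x N supersampling is the average
# of point samples taken inside that pixel's footprint.

def scene(x, y):
    """Toy 'scene': a hard vertical edge at x = 0.5 (white left, black right)."""
    return 1.0 if x < 0.5 else 0.0

def render_pixel(px, py, n):
    """Average n*n point samples over the unit pixel at (px, py)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # sample at the centre of each sub-pixel cell
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += scene(sx, sy)
    return total / (n * n)

# A single centre sample lands just past the edge and sees pure black...
print(render_pixel(0.0, 0.0, 1))   # 0.0
# ...while 8x8 supersampling recovers the true 50% coverage of the pixel.
print(render_pixel(0.0, 0.0, 8))   # 0.5
```

Note how the one-sample result is maximally wrong for a half-covered pixel, and quality only improves as `n` grows, which is exactly the "approaches the ideal as resolution approaches infinity" argument.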
Specialised, smart, targeted algorithms remain relevant until the brute force approach is trivially performant. So... Not any time soon :P

I disagree with MarkS. Even if you have pixels that are not individually discernible, aliasing can introduce visible artifacts, e.g. moiré patterns. A pixel should ideally be of the color that is the average color of the area it covers, and a single sample is a poor estimate of the average.


Indeed. In particular the eye is very good at picking up on rapid flickering patterns, even in cases where the display pixel is too small for us to discern the individual colors. Increasing the display and shading resolution is the brute-force way of doing it: there are plenty of ways to improve the appearance with lower cost.
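The flickering described above (a sub-pixel feature popping in and out as it crosses pixel centres) can be sketched in a few lines. This is illustrative code under my own assumptions, not anything from the thread: a line one tenth of a pixel wide either hits the pixel-centre sample or misses it entirely, whereas area averaging yields a stable coverage value.

```python
# Point sampling vs. area averaging of a line thinner than a pixel.
# All names are hypothetical; the line is 1/10th of a pixel wide.

LINE_WIDTH = 0.1

def point_sample(pixel_x, line_x):
    """1 if the pixel-centre sample falls inside the line, else 0."""
    centre = pixel_x + 0.5
    return 1.0 if line_x <= centre < line_x + LINE_WIDTH else 0.0

def area_sample(pixel_x, line_x):
    """Fraction of the unit pixel [pixel_x, pixel_x + 1) covered by the line."""
    overlap = min(pixel_x + 1.0, line_x + LINE_WIDTH) - max(pixel_x, line_x)
    return max(0.0, overlap)

# Slide the line across pixel 0 in quarter-pixel steps.
for step in range(4):
    line_x = step * 0.25
    # point sampling toggles 0/1 as the line crosses the centre (flicker);
    # area sampling stays at a constant 0.1 coverage (stable).
    print(point_sample(0, line_x), area_sample(0, line_x))
```

Under camera motion the point-sampled line blinks on and off frame to frame, which is exactly the rapid flicker the eye is so good at picking up.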

There's also the fact that the only display today that comes close to matching the resolution of a 20/20 human eye, at average viewing distance, is the Sony Z5 at 4K. Between the very large number of cones and the temporal supersampling/morphological AA our brains quite probably do on their own, our eyes have an absolutely massive effective resolution. Even today's 1440p phones, while "good enough" for most things, can still show discernible pixels on line tests and extremely thin lines.

So yeah, AA is going to be around for quite a while yet. While the required resolution isn't literally approaching infinity, my best guess puts it at around a 26K screen for something in the centre of our vision. That said, we could probably get away with 16K plus a bit of AA.

As Hodgman pointed out, anti-aliasing addresses far more than just a lack of resolution. Point sampling is a very poor approximation of what happens in nature; cone/path/photon tracing is much closer to what is really going on. Point sampling, regardless of resolution, will always exhibit artifacts, so AA schemes of various sorts will always be needed.

It's not just about the ability to separate the colors of two adjacent pixels (we may be close to that limit already); these effects are still present:

- the human eye is very good at seeing high-contrast edges and narrow variations in brightness, and we are very far from displays where those artifacts are absent (narrow features will still appear ropey: http://www.massal.net/article/hdr/roping.gif).

- the ordered grid of screen pixels is not good at hiding moiré. Ideally each pixel would sit at a slightly random position relative to its neighbours instead of all being equidistant. Until we have that, you will always find content that shows moiré (moiré can also happen in real life, but the way a scene is rendered and displayed should not add more).
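The ordered-grid point above can be sketched with a worst case: sampling a striped pattern on a perfectly regular grid whose spacing matches the stripe period makes every sample land on the same phase, so the stripes alias into a flat field; jittering each sample position breaks that coherence. The code and names are illustrative, not from the thread.

```python
import random

def stripes(x):
    """1 on the first half of each unit-period stripe, 0 on the second."""
    return 1.0 if (x % 1.0) < 0.5 else 0.0

def regular_samples(n):
    # Grid spacing exactly equals the stripe period: the worst case.
    return [stripes(i * 1.0 + 0.25) for i in range(n)]

def jittered_samples(n, rng):
    # Same grid, but each sample is offset by a random amount within its cell.
    return [stripes(i * 1.0 + rng.random()) for i in range(n)]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
print(regular_samples(8))        # every sample identical: stripes vanish
print(jittered_samples(8, rng))  # a mix of 0s and 1s: structure survives as noise
```

Jitter trades a coherent, very visible moiré pattern for incoherent noise, which the eye tolerates far better; that's the same reasoning behind preferring irregular pixel placement.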

Don't forget that texture pre-filtering (with anisotropic filtering on top) is also there to hide aliasing: the anti-aliasing most people think of, applied to geometric edges, is only a small part of what is done to reduce aliasing on screen.

So yes, in theory anti-aliasing is still going to be useful (would you get rid of mipmaps, for example?). In my mind, anti-aliasing has a better payoff than purely increasing the resolution: it can be done in a pre-computed pass (mipmaps), as a post-processing pass (intra- and inter-frame), analytically (LEAN/LEADR), and so on. Keep in mind that brute-force increases in resolution also have diminishing returns.
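As a hedged sketch of what the "pre-computed pass (mipmaps)" above means in practice: each mip level halves the resolution by averaging 2x2 blocks (a box filter), so minified texels are read pre-averaged instead of point-sampled. Real GPU mipmap generation may use better filters than a box; this is only illustrative, and the names are my own.

```python
def next_mip(texture):
    """Downsample a square power-of-two texture (list of rows) with a 2x2 box filter."""
    size = len(texture) // 2
    return [
        [
            (texture[2 * y][2 * x] + texture[2 * y][2 * x + 1] +
             texture[2 * y + 1][2 * x] + texture[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(size)
        ]
        for y in range(size)
    ]

def build_mip_chain(texture):
    """Full chain from the base level down to 1x1."""
    chain = [texture]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

# A 4x4 checkerboard: the 1x1 tail of the chain is its true average (0.5),
# which is exactly what a far-away sample of this texture should see.
checker = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
chain = build_mip_chain(checker)
print(len(chain))        # 3 levels: 4x4, 2x2, 1x1
print(chain[-1][0][0])   # 0.5
```

This is why turning mipmaps off is so punishing: without the pre-averaged chain, every distant texel read becomes a single point sample of a high-frequency signal, which shimmers.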

Keep in mind that virtual reality is also growing up. When I was developing an app for the Samsung Gear VR, it was absolutely necessary to apply AA for each eye (even at the extra cost). Without it, pixels flickered all over the place because your head is moving all the time (unlike in a regular game), which made the view unstable.

I'm pretty sure that AA will be with us for a long time.

Flax Game Engine - www.flaxengine.com

