To avoid aliasing, the Nyquist theorem says we must sample at least twice as often as the highest frequency detail in our input signal. So how about we just render our scene at a really, really high output resolution?
Of course, we may not want our final image to be at such a silly high resolution. It is no good if we end up with more pixels than our screen is capable of displaying! So once we have our beautiful, high resolution, alias-free image, we shrink it down to the final output size, using a high quality filter to maximize image quality.
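In code, that shrink step boils down to a resampling filter. Here is a minimal sketch using NumPy and a plain box filter (just averaging each block of high resolution pixels); the `downsample` helper is my own illustration, and a real resizer would use a fancier kernel such as bicubic or Lanczos:

```python
import numpy as np

def downsample(image, factor):
    """Shrink a supersampled image by averaging factor x factor blocks
    of pixels. A box filter is the simplest possible choice here; a
    production-quality resizer would use a sharper kernel."""
    h, w, c = image.shape
    assert h % factor == 0 and w % factor == 0
    return image.reshape(h // factor, factor,
                         w // factor, factor, c).mean(axis=(1, 3))

# A 256x256 supersampled render shrinks to the final 64x64 output.
supersampled = np.random.rand(256, 256, 3)
final = downsample(supersampled, 4)
assert final.shape == (64, 64, 3)
```

Because every high resolution pixel contributes to exactly one output pixel, none of the extra rendering work is thrown away: it all ends up averaged into the final image.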
Here is an example: a model first rendered at 64x64 with no antialiasing, then rendered at 256x256 (16x supersampling) before shrinking down to 64x64 using the "Best Quality" image resize filter in Paint.NET. (If you were doing this on the fly in a game, you could use a pixel shader to apply a similar high quality scaling filter.) Supersampling helps with many kinds of aliasing:
- Triangle edge aliasing
- Geometry aliasing
- Texture map aliasing
- Shader aliasing

In fact the only common aliasing problem that supersampling does not help with is temporal (time based) aliasing.
The problem is cost. Even if you only double the horizontal and vertical resolution (giving 4x supersampling), you now have four times as many pixels to render. Four times as many texture fetches and pixel shader computations, not to mention four times the amount of framebuffer and depth buffer memory, leaves your GPU performing at roughly quarter speed. And that's just for 4x supersampling, which is nowhere near enough to get rid of all geometry, texture, and shader aliasing.
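A toy experiment makes both halves of that claim concrete. The snippet below "renders" a 1D stripe pattern by point sampling at pixel centers (a stand-in for a GPU rasterizing a high frequency texture; the function names are my own). With the stripe frequency well above the Nyquist limit, plain sampling aliases badly, and 4x supersampling improves matters without fully fixing them:

```python
import numpy as np

def render_stripes(width, frequency, supersample=1):
    """Point-sample a 1D stripe pattern at pixel centers, optionally
    taking `supersample` samples per pixel and averaging them."""
    n = width * supersample
    x = (np.arange(n) + 0.5) / n
    samples = np.sign(np.sin(2 * np.pi * frequency * x))
    return samples.reshape(width, supersample).mean(axis=1)

width = 8
# Frequency 37 is far above the Nyquist limit of width / 2 = 4,
# so single-sample rendering produces aliased garbage.
plain = render_stripes(width, frequency=37)
# 4x supersampling raises the Nyquist limit to 16: better, still not enough.
aa4 = render_stripes(width, frequency=37, supersample=4)
```

Every value in `plain` is a hard -1 or +1 picked almost at random from the underlying pattern, while `aa4` at least averages toward the true coverage; but since 16 is still below 37, some aliasing survives.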
Remember that pesky Nyquist theorem? To avoid aliasing, it is important to consider the ratio between the maximum and minimum scale at which a signal can be sampled:
- Consider a texture map, such as the side of a building
- How close is the player likely to get to the building?
- To keep the graphics looking crisp, we want our texture to be high enough resolution to remain detailed when sampled at this maximum scale
- Now the player starts to back away from the building
- The texture shrinks in size, i.e. it is sampled at a lower frequency
- After a while, it will be sampled at too low a rate, below the Nyquist threshold, so aliasing will occur
- Supersampling to the rescue!
- We turn on 2x supersampling, so the aliasing goes away
- But the player continues to back away from the building
- When the texture reaches half the scale at which we previously saw aliasing, the problem returns

It is common for objects to change size by a factor of 100 or more as you move around the world. But it would be wildly impractical to increase our rendering resolution by 100x in both width and height, as this would increase the GPU pixel workload by a factor of 10,000!
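The back-of-the-envelope arithmetic is easy to check (a toy calculation, not real renderer code):

```python
def brute_force_cost(scale_range):
    """Pixel workload multiplier for removing texture aliasing across a
    given range of on-screen scales by supersampling alone. The per-axis
    supersampling factor must match the scale range (each halving of
    on-screen size halves the sampling rate), and pixel count grows with
    the square of the per-axis factor."""
    return scale_range ** 2

assert brute_force_cost(2) == 4        # one octave: plain 2x supersampling
assert brute_force_cost(100) == 10000  # the 100x case described above
```

This linear-in-scale, quadratic-in-pixels growth is exactly why brute force supersampling cannot keep up with texture minification, and why GPUs solve that particular problem with mipmapping instead.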
For this reason, supersampling is not widely used, and hence this series of articles is not over. But it is an important starting point for understanding other more advanced antialiasing techniques.
One place you probably have used supersampling is if you ever connected an Xbox 360 to a standard definition TV. Most Xbox games will continue to render at an HD resolution such as 1280x720, relying on the hardware scaler to shrink the resulting images to the output video resolution. That's supersampling, right there! Thanks to the high quality scaling hardware, those extra high resolution pixels are not wasted, but instead used to create a nicely antialiased 640x480 image.
Another place some games use supersampling is when taking screenshots for marketing materials. Internal builds of MotoGP included a special mode that would render the same scene many times using different projection matrices, combining literally hundreds of 640x480 tiles to build up a massive 16384x16384 screenshot image, which we would then downsample to a more sane resolution in Photoshop. Cheating? Kinda. But it was the normal in-game rendering engine and shaders, using the regular in-game models and textures, running on the actual Xbox hardware. Yet the resulting images were so nicely antialiased, they looked great when printed on a giant poster.
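The per-tile projection trick can be sketched as an off-center frustum: shrink the frustum window to one cell of a grid, render, and stitch the results into one huge image. This is a hypothetical illustration of the idea (using an OpenGL-style `glFrustum` matrix layout), not MotoGP's actual code:

```python
import numpy as np

def tile_projection(left, right, bottom, top, near, far, tiles, tx, ty):
    """Off-center perspective projection covering tile (tx, ty) of a
    tiles x tiles grid. Rendering every tile and stitching the images
    together is equivalent to rendering the whole scene at tiles times
    the base resolution in each direction."""
    w = (right - left) / tiles
    h = (top - bottom) / tiles
    l = left + tx * w
    r = l + w
    b = bottom + ty * h
    t = b + h
    # Standard OpenGL-style off-center frustum matrix.
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
```

With `tiles=1` this reduces to the ordinary full-screen projection, which is a handy sanity check: the tiled render should produce exactly the same image as the normal one, just cut into pieces.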