NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).

Started by
26 comments, last by TheUndisputed 11 years, 8 months ago
Hey Sikkpin. I had downloaded it when you released it. Job well done at keeping it simple and fast, but there were some artifacts that bothered me a bit too much. :/ Looked to me like the final offsets were a bit too far.


PS: It's me, Xzero. :B
Do you think your results would be better if you computed the normal map for the AA from a pre-stored depth map, which is common to store for deferred rendering anyway?
Douglas Eugene Reisinger II
Not in my experience, unfortunately. I've tried that, as well as using scene normals, and neither works as well. Depth works very well for outer edges, but aliasing still occurs on inner ones, and normals don't seem to smooth the outer edges facing away from the camera. Despite slowing things down a bit, a good approach could be to combine normals or depth with a luminance pass, which should smooth a good number of edges in all cases.

Another idea would be to somehow build a scene "edge map". How to do this at a reasonable speed, I'm not quite sure. I thought of rendering the wireframe, but that would slow things down quite a bit, so you may as well go with a multi-pass method as above.

Anyway, I just recently implemented some new code to smooth out long horizontal and vertical aliasing even more, which works rather well. The idea came from this thread. Unfortunately I can't store the edges in the alpha channel of my image, so I have to recalculate them for every sample, which is obviously fairly slow. Even so, the technique costs less than ~3ms, so with the obvious optimizations it should be very fast. I'm still testing some things, so I'm currently taking some additional samples that can probably be removed, making it even quicker.

longedgesmooth.gif


The new code did introduce some artifacts in distant areas, so I fade the effect using the depth of the scene. This works very well.

Now I just need to find a way to move away from luminance based detection so that textures can stay crisp and low-contrast edges can also be smoothed.

[quote name='JorenJoestar' timestamp='1296044936' post='4765004']
Can you describe better the process you are using? Are you using both normal and depth???

Using only the color information. At each pixel the edge direction and contrast is estimated using color differences between neighboring pixels (in our version, the color of the "central" pixel is ignored for this purpose). E.g., if it's bright on the right side and dark on the left side, then the edge has a strong vertical component (put another way, the edge normal points to the left).

In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.

[attachment=1191:aa filter.jpg]
[/quote]
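The neighbor-difference estimate described in the quote can be sketched on the CPU like this (a minimal sketch for illustration only; the 3x3 layout and luminance weights are my assumptions, not 0xffffffff's exact code):

```c
/* Sketch of the edge estimate from the quote above: neighbor color
   differences, reduced to luminance scalars, give a 2D vector whose
   direction is the edge normal and whose length reflects the local
   contrast. The central pixel is ignored, as described. */

typedef struct { float x, y; } float2;

/* Common Rec. 601 luminance weights (an assumption for this sketch). */
static float luminance(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

/* lum[row][col] holds the luminance of the 3x3 neighborhood around the
   pixel; lum[1][1] (the central pixel) is deliberately unused. */
static float2 edge_normal(const float lum[3][3]) {
    float2 n;
    n.x = lum[1][0] - lum[1][2]; /* left minus right */
    n.y = lum[0][1] - lum[2][1]; /* top minus bottom */
    return n;
}
```

With a bright right side and a dark left side, `n.x` comes out negative, i.e. the edge normal points to the left, matching the example in the quote. Scaled by some strength factor, `n` then becomes the sample offset for the directional blur.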

Hi there 0xffffffff

I'm very interested in your approach.
Do you have any example code for your technique?

Thanks!
Ren
Hey guys, been a while since I posted about this.

I've recently made some additions and changes. The technique is now multi-pass and refined. There's still room for optimizations and I still need to find a solution for sub-pixel aliasing, but jagged edges are a thing of the past. :D

The shader runs at ~0.1ms, which I believe is faster than MLAA or SRAA (MLAA apparently needs all 6 SPUs on the PS3 just to run), and provides exceptional results. There are no supersampled buffers; it's all post-process. It still runs on luminance, so contrast issues can occur in mostly dark scenes, but extending it to use normals or depth should be easy (I can't do that myself, since I'm using CryEngine and the passes I need don't have access to the depth map). Either way, the contrast issues aren't too bad, and even then you still get some measure of AA. See the images below for a comparison :D (mind the little dots on the ground; that was a texture filtering problem that was solved soon after the images were taken).


screenshot0012e.jpg


screenshot0011m.jpg


I may or may not post source code later. I've got a job coming up and I'll likely try to push them to let me implement this in the game/engine, at which point I'll write a nice, lengthy paper about the technique and how to implement it. So until then, ciao! :D

Edit: Fixed the ms estimation. 1ms was a typo.
"The shader runs at ~1ms, which I believe is faster than MLAA or SRAA (MLAA apparently needing all 6 SPUs on the PS3 just to run)"

What is the hardware? On recent hardware, MLAA usually runs in < 0.15 ms.
Current MLAA GPU implementations are pretty fast (even on an "old" console GPU like the 360's). AFAIK Sony's algorithm divides the MLAA task across as many SPUs as possible because it's more of a "brute force shape search" compared to the GPU implementations.
Have a look at http://www.iryoku.com/mlaa/ for MLAA GPU performance reports.
Thanks for sharing,

Regards
Oops, I meant ~0.1ms. Lol, my bad.

I've tried the approach on that page but I didn't get very good results. Nothing compared to the images they show. I am limited in how I can implement it, which is probably the cause of the poor quality (I can't output 2 channels for the first pass, stuff like that).

[quote]
Typical execution times are 3.79 ms on Xbox 360 and 0.44 ms on a nVIDIA GeForce 9800 GTX+, for a resolution of 720p. Memory footprint is 2x the size of the backbuffer on Xbox 360 and 1.5x on the 9800 GTX+. Meanwhile, 8x MSAA takes an average of 5 ms per image on the same GPU at the same resolution, 1180% longer (i.e. processing times differ by an order of magnitude). The method presented can therefore challenge the current gold standard in real time anti-aliasing.
[/quote]

I hadn't measured the performance in depth until after that post; 0.1ms was a guess. But I've crunched the numbers: in a semi-optimized state, NFAA runs at < 0.3ms in a standard scene (33ms frame) at 1920x1080 on my GTX 260. That's 2.25x the pixel count of the 720p MLAA test while still being faster, so I wasn't too far off :D. With a few more optimizations I'm sure I can get it to run quicker; there's definitely room for improvement.
A completely new way, with good results.


Simply run it in the last post pass before you present to the screen; it can also be done as a sprite effect.



How to use it: put your alpha in and watch the magic:

float4 outColor = tex2D(scene, texcoord);

outColor.a = AntiAliasingAlpha(outColor.a);

return outColor;

You can also use this for the normal buffer: if you render out small normal and depth buffers, you can then upscale by 2. I only use the color buffer.

So if you render out at 640x360, you can present the final image on screen at 1280x720.


////////////////////////////////////////////////////////////////////////////////////////////////////////
float blurThreshold = 0.9f;

float AntiAliasingAlpha(float alpha)
{
    float rho = alpha;

    // Uncomment the following line to remove edge blurring; try other values too.
    //blurThreshold = 1.0f;

    float a;
    if (rho < blurThreshold)
    {
        // Below the threshold: leave the pixel untouched.
        a = 1.0f;
    }
    else
    {
        // Above the threshold: remap [blurThreshold, 1] to [0, 1] and
        // fade out along a cosine curve.
        float normrho = (rho - blurThreshold) / (1.0f - blurThreshold);
        a = cos(normrho * 3.14159f / 2.0f);
    }

    return a;
}

This topic is closed to new replies.
