NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).

Helloooo people of GameDev.net! How ya been?

I've got something I'd like to share with you, along with implementation details and results. I'm sure some of you with a deferred renderer will appreciate this.

It's a new method of post-processing anti-aliasing that I finished today. I haven't thought of a name for it yet (if anyone has any ideas, I'd be infinitely grateful). - (I found a name, see bottom of post) -

Anyway, here's how it works:

1. Gather samples around the current pixel for edge detection. I use 8, one for each neighboring pixel (up, down, left, right, and the diagonals). I initially used 3, but that wasn't enough.
2. Use them to create a normal-map image of the entire screen. This is the edge detection. How you do this is entirely up to you. Scroll down the page to see some sample code of how I do it.
3. Average 5 samples of the scene using the new normal image as offset coordinates (quincunx pattern).
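The three steps above can be sketched CPU-side. Here's a rough Python/NumPy reference for a grayscale image, not the actual shader: names are mine, it uses nearest-pixel lookups instead of the GPU's bilinear filtering, and the edge-vector math is just a simple weighted neighbour difference.

```python
import numpy as np

def nfaa_gray(img, strength=1.0):
    """Rough sketch of the 3-step filter on a grayscale image in [0, 1]."""
    h, w = img.shape
    out = np.empty_like(img)

    def px(y, x):
        # Clamp to the image border, like a clamped texture sampler.
        return img[np.clip(y, 0, h - 1), np.clip(x, 0, w - 1)]

    for y in range(h):
        for x in range(w):
            # 1. Gather the 8 neighbours.
            t, b = px(y - 1, x), px(y + 1, x)
            l, r = px(y, x - 1), px(y, x + 1)
            tl, tr = px(y - 1, x - 1), px(y - 1, x + 1)
            bl, br = px(y + 1, x - 1), px(y + 1, x + 1)

            # 2. Edge "normal": horizontal/vertical intensity differences.
            nx = (2 * r + tr + br) / 4 - (2 * l + tl + bl) / 4
            ny = (2 * t + tl + tr) / 4 - (2 * b + bl + br) / 4
            ox = int(round(nx * strength))
            oy = int(round(ny * strength))

            # 3. Average 5 samples in a quincunx pattern along the offset.
            out[y, x] = (px(y, x)
                         + px(y + oy, x + ox) + px(y - oy, x - ox)
                         + px(y - oy, x + ox) + px(y + oy, x - ox)) / 5
    return out
```

On a flat region the offset vector is zero and the pixel passes through untouched; on a hard edge the quincunx samples straddle the edge and blend it.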

As simple as it sounds, it actually works extremely well in most cases, as seen below. Images were rendered with CryEngine2.

Left is post-AA, right is no AA. Image scaled by 200%.


Gif animation comparing the two. Also scaled by 200%.


Here's the heart of my algorithm, the edge detection.


Pros:
- Easily implemented: it's just a post effect.
- Pretty fast, mostly texture-fetch heavy (8 samples for edge detection, 5 for scene). < 1ms on a 9800GTX+ at 1920x800.
- Can solve aliasing on textures, lighting, shadows, etc.
- Flexible: in a deferred renderer, it can be applied to any buffer you want, so you can use real AA on the main buffer and post-AA on things like lighting.
- Works on shader model 2.0.
- Since it's merely a post-process pixel shader, it scales very well with resolution.

Cons:
- Can soften the image due to filtering textures (double-edged sword).
- Inconsistent results (some edges appear similar to 16x MSAA while others look like 2x).
- Can cause artifacts with too high of a normal-map strength/radius.
- Doesn't account for "walking jaggies" (temporal aliasing) and can't compensate for details on a sub-pixel level (like poles or wires in the distance), since that requires creating detail out of thin air.

Notes on the cons and how they might possibly be fixed:
- Image softening can be solved by using a depth-based approach instead.
- Inconsistent results and artifacts can probably be solved with higher sample counts.
- The last one will require some form of temporal anti-aliasing (check Crytek's method in "Reaching the Speed of Light" if you want to do it via post-processing as well).

That's it. Use the idea as you want, but if you get a job or something because of it, let 'em know I exist. ;)

Edit: New name = Normal Filter AA, because we're doing a normal filter for the edge detection. Less confusing than the previous name. :)

[Edited by - Styves on September 3, 2010 1:41:37 AM]
Nice results.

Just to clarify -
* The normal is based on the rate of change in color, luminosity, ...?
* If the rate of change is large vertically, then your normal points in the horizontal direction?
* When doing the averaging, you take 4 points in a line, oriented in the direction of the normal?
The quality's definitely not bad for a post-process technique. It's not MLAA quality, but it's obviously many times simpler and cheaper. :P

Personally I'm still not a fan of color-only approaches. They work well for large high-contrast edges, but without sub-pixel information you totally fail to deal with two of the most annoying results of aliasing: low-frequency artifacts from thin wire-shaped geometry, and "crawling edges" caused by pixel-sized movements. You can also end up inadvertently filtering out high-frequency details in your textures or shading.

In regards to improvements, I don't think Crytek's solution for addressing temporal artifacts is a particularly good one... I think you really need sub-pixel coverage information to properly deal with that problem without just blurring things even further. Also, depth works well for the silhouettes of your meshes, but you need normals as well if you want to handle intersecting geometry and also avoid false positives for triangles parallel to the view direction.

Sorry if I sound overly negative...I've just been spending some time on this subject recently and I've been frustrated with the shortcomings. :P
1. Anything you want. I'm using color for the moment, but I may switch to luminosity or use a different color space (I read a GPU MLAA paper that suggested using LAB color space). I tried depth and normals, which also work, though you need to do a bit of tweaking to catch the most edges possible.
2. I'm not quite sure I get what you mean by that.
3. Maybe some pseudo-code might help:

float4 Scene0 = tex2D(sceneSampler, TexCoord.xy);
float4 Scene1 = tex2D(sceneSampler, TexCoord.xy + Normal.xy);
float4 Scene2 = tex2D(sceneSampler, TexCoord.xy - Normal.xy);
float4 Scene3 = tex2D(sceneSampler, TexCoord.xy + float2(Normal.x, -Normal.y));
float4 Scene4 = tex2D(sceneSampler, TexCoord.xy - float2(Normal.x, -Normal.y));
return (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;


That's pretty much it, should give you a better explanation than me trying to type it out.

I hear ya, MJP, this definitely isn't a magic solution for AA. It just helps reduce aliasing from edges and alpha-tested textures, which is better than nothing (especially in a game that uses a lot of vegetation, like Crysis for example). :)

[Edited by - Styves on August 26, 2010 1:16:04 PM]
Quote:Original post by Styves
As for the name, me and some friends decided to call it "Tangent Space Anti-Aliasing", 'cause we're basically working with a tangent-space normal map for the edge detection.


Nitpick: This is most definitely in screen space, not tangent space :)

Have you experimented at all with temporal reprojection? If you've read any of the new SIGGRAPH 2010 materials on CryEngine 3, Crytek demonstrates pretty good results for their new edge antialiasing technique by jittering the camera to capture sub-pixel detail-- it works surprisingly well. I have link(s) if you're interested.

Edit: Changed the name to "Normal Filter AA", since we're doing a normal-map filter (more appropriate, less confusing :D).

I linked the CryEngine3 papers in my original post, but if you have any other links I'd be glad to have 'em. I do most (if not all) of my work in CryEngine2, so I've got quite a few limitations (can't provide my own engine information, can't create custom shader passes, can't assign certain shaders custom textures, etc.) It's a real pain.

Anyway, I'm definitely interested in at least attempting it to see how far I can get. :)

[Edited by - Styves on August 26, 2010 1:17:16 PM]
Hello,

I was looking for a good anti-aliasing technique for my deferred rendering for a long time. Your idea sounds very interesting and I would like to test it in my engine, but I am not sure if I fully understood how it works. How exactly do you calculate normals from color? In my engine I used luminosity, but my results are not as good as yours.

Here is my shader code:

float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal
float2 upOffset = float2( 0, vPixelViewport.y );
float2 rightOffset = float2( vPixelViewport.x, 0 );

float topHeight         = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset ).rgb );
float bottomHeight      = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset ).rgb );
float rightHeight       = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset ).rgb );
float leftHeight        = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset ).rgb );
float leftTopHeight     = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset ).rgb );
float leftBottomHeight  = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset ).rgb );
float rightTopHeight    = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset ).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset ).rgb );

float xDifference = (2*rightHeight + rightTopHeight + rightBottomHeight)/4.0f - (2*leftHeight + leftTopHeight + leftBottomHeight)/4.0f;
float yDifference = (2*topHeight + leftTopHeight + rightTopHeight)/4.0f - (2*bottomHeight + rightBottomHeight + leftBottomHeight)/4.0f;

float3 vec1 = float3( 1, 0, xDifference );
float3 vec2 = float3( 0, 1, yDifference );
float3 Normal = normalize( cross( vec1, vec2 ) );

// Color
Normal.xy *= vPixelViewport * 2;	// Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2( Normal.x, -Normal.y ) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2( Normal.x, -Normal.y ) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;


And this is how I calculate luminosity:
float GetColorLuminance( float3 i_vColor )
{
    return dot( i_vColor, float3( 0.2126f, 0.7152f, 0.0722f ) );
}
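For reference, the same weighting in plain Python; the coefficients are the Rec. 709 luma weights used by GetColorLuminance above:

```python
def luminance(rgb):
    """Rec. 709 luma weights - the same dot product as GetColorLuminance."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```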


And at the end I would like to show you screenshots with and without AA. Images scaled by 200%.


[Edited by - Volgut on September 2, 2010 4:25:22 AM]
Volgut, I'm very glad to see someone interested in the idea. :D

I don't use a cross computation to get my normal-image. I use an add/subtract system in order to form my edges.

What I do is sum up samples that form triangles then subtract the opposite triangle.

I've modified your code to use the same edge detection algorithm, try it out and let me know how it works (also added documentation for convenience). :)

float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal, scale it up 3x for a better coverage area
float2 upOffset = float2( 0, vPixelViewport.y ) * 3;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * 3;

float topHeight         = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset ).rgb );
float bottomHeight      = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset ).rgb );
float rightHeight       = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset ).rgb );
float leftHeight        = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset ).rgb );
float leftTopHeight     = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset ).rgb );
float leftBottomHeight  = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset ).rgb );
float rightTopHeight    = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset ).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset ).rgb );

// Normal map creation, this is where it differs.
// Sum up the samples that form triangles around the pixel.
float sum0 = rightTopHeight + topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + rightTopHeight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight;

// Then for the final vectors, just subtract the opposite sample set.
// The amount of "antialiasing" is directly related to "filterStrength".
// Higher gives better AA, but too high causes artifacts.
float filterStrength = 1;
float vec1 = (sum1 - sum0) * filterStrength;
float vec2 = (sum2 - sum3) * filterStrength;

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2 ) * vPixelViewport;

// Color
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2( Normal.x, -Normal.y ) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2( Normal.x, -Normal.y ) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;

// To debug the normal image, use this (vec1 and vec2 are used for the debug
// output, as Normal won't display anything due to the pixel scale applied to it):
// o_Color.xyz = normalize( float3( vec1, vec2, 1 ) ) * 0.5 + 0.5;



You should get something similar to this when using the debug output.



As I said before, I'm using pure color. It's rather simple really: do the same thing as usual but sample the colors (just change the samples to float3), and just before you put them together to form "Normal", add all three color channels together.
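A rough Python sketch of that color variant, using the same triangle sums as the shader above. The neighbour names and dict layout here are my own invention for illustration; each value stands for an (r, g, b) sample:

```python
def color_edge_vector(n, filter_strength=1.0):
    """n maps neighbour names ('t','b','l','r','tl','tr','bl','br') to
    (r, g, b) tuples. Channels are summed per sample first, then the
    triangle sums are differenced to form the offset vector."""
    v = {k: sum(c) for k, c in n.items()}  # r + g + b for each sample

    # Triangle sums, matching sum0..sum3 in the shader.
    sum0 = v['tr'] + v['t'] + v['br']
    sum1 = v['tl'] + v['b'] + v['bl']
    sum2 = v['tl'] + v['l'] + v['tr']
    sum3 = v['bl'] + v['r'] + v['br']

    # Subtract the opposite sample set for each axis.
    return ((sum1 - sum0) * filter_strength,
            (sum2 - sum3) * filter_strength)
```

On a flat colour region both differences cancel and the vector is zero, so flat areas are left untouched just like in the luminance version.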

[Edited by - Styves on September 3, 2010 1:54:58 AM]
Hello Styves,

Thank you for helping me :) Your code was very useful. I only modified it a bit and I implemented normal scale/offset based on pixel distance from the camera. With fixed scale/offset I was getting some artefacts in the distance (bleeding).

Here you can see two screenshots (scaled 200%) with and without anti-aliasing.


Your anti-aliasing technique works very well. As you wrote, it doesn't solve all problems, but it definitely can reduce aliasing effect. Now I will try to look into Crytek's temporal aliasing.

Thanks again.

Regards,
Tomasz
Quote:Original post by Styves
I don't use a cross computation to get my normal-image. I use an add/subtract system in order to form my edges.
What I do is sum up samples that form triangles then subtract the opposite triangle.
Hey that's cool. I actually implemented that once on the Wii, except my triangles were rotated 45 degrees compared to yours! I never ended up getting as good quality as this out of it though, so we scrapped it.
I'm going to have to give this another shot now, thanks ;)
