NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).

Started by Styves. 26 comments, last by TheUndisputed 11 years, 8 months ago
Hi. I've been testing this out. Seems pretty cool so far

Quote:Original post by Styves

source:

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur



This seems wrong to me though. I don't think you need to multiply by vPixelViewport a second time?
Yes, you are right. The vector should only be scaled by a variable called "filterStrength".
Ah yes, you're right. The second multiplication isn't needed; you only need to do it once. :)
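In other words, the tail end of the shader should scale only once, something like this (with filterStrength as the tweakable constant mentioned above):

// Put them together and scale once -- no second multiply by vPixelViewport.
float2 Normal = float2( vec1, vec2 ) * vPixelViewport * filterStrength;

// Normal.xy is then used directly as the sampling offset for the blur taps.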

Feel free to post images and ideas you have regarding the technique! I'm curious to see how it's helping people :D.
I implemented this in OpenGL/GLSL and at first I was disappointed with the results... but after playing with it a bit I got pretty decent results. The most important thing I found was clamping Normal.xy before doing the texture lookups:
Normal.xy = clamp(Normal, -float2(1.0, 1.0) * 0.4, float2(1.0, 1.0) * 0.4); // Prevent over-blurring in high-contrast areas! -venzon
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur

float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );


I've made some changes to the shader. I've removed the need to clamp values or scale the edges; the source of the problem seemed to be the diagonal samples at the end of the shader. I've updated the shader code in this thread to match. Of course, if you're using the shader on an HDR image then clamping may still be necessary. :)

float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );
const float fScale = 3;

// Offset coordinates
float2 upOffset = float2( 0, vPixelViewport.y ) * fScale;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * fScale;

// Luminance of the eight neighboring pixels
float topHeight         = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset ).rgb );
float bottomHeight      = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset ).rgb );
float rightHeight       = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset ).rgb );
float leftHeight        = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset ).rgb );
float leftTopHeight     = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset ).rgb );
float leftBottomHeight  = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset ).rgb );
float rightTopHeight    = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset ).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset ).rgb );

// Normal map creation: row and column sums of the neighborhood luminances
float sum0 = leftTopHeight    + topHeight    + rightTopHeight;    // top row
float sum1 = leftBottomHeight + bottomHeight + rightBottomHeight; // bottom row
float sum2 = leftTopHeight    + leftHeight   + leftBottomHeight;  // left column
float sum3 = rightTopHeight   + rightHeight  + rightBottomHeight; // right column

float vec1 = (sum1 - sum0); // vertical luminance difference
float vec2 = (sum2 - sum3); // horizontal luminance difference

// Put them together and scale.
float2 Normal = float2( vec1, vec2 ) * vPixelViewport * fScale;

// Color
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) * 0.5 );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) * 0.5 );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;
Hi,

Very nice results! Unfortunately I could not reproduce them no matter what I tried, so I came up with my own version of your filter. It requires slightly fewer samples and still gives very good results.

I made a blog entry about it, and here is the GLSL filter. w controls the width of the filter (how large the edges will get), and the threshold (the hard-coded comparison to 1.0/16.0) prevents over-blurring edges that might not need it.

float lumRGB(vec3 v)
{
    return dot(v, vec3(0.212, 0.716, 0.072));
}

void main()
{
    vec2  UV = gl_FragCoord.xy * inverse_buffer_size;
    float w  = 1.75;

    float t = lumRGB(texture2D(source, UV + vec2( 0.0, -1.0) * w * inverse_buffer_size).xyz),
          l = lumRGB(texture2D(source, UV + vec2(-1.0,  0.0) * w * inverse_buffer_size).xyz),
          r = lumRGB(texture2D(source, UV + vec2( 1.0,  0.0) * w * inverse_buffer_size).xyz),
          b = lumRGB(texture2D(source, UV + vec2( 0.0,  1.0) * w * inverse_buffer_size).xyz);

    vec2  n  = vec2(-(t - b), r - l);
    float nl = length(n);

    if (nl < (1.0 / 16.0))
        gl_FragColor = texture2D(source, UV);
    else
    {
        n *= inverse_buffer_size / nl;

        vec4 o  = texture2D(source, UV),
             t0 = texture2D(source, UV + n * 0.5) * 0.9,
             t1 = texture2D(source, UV - n * 0.5) * 0.9,
             t2 = texture2D(source, UV + n) * 0.75,
             t3 = texture2D(source, UV - n) * 0.75;

        gl_FragColor = (o + t0 + t1 + t2 + t3) / 4.3;
    }
}


Btw, I played with temporal anti-aliasing and there really is something great to be done on that front too. The main issue to solve is how to combine jittered frames without introducing too many ghosting artifacts.
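To illustrate the kind of blend I mean, here is a bare sketch in the same HLSL style as the NFAA code above (CurrentFrameSampler, HistorySampler, jitterOffset, and blendFactor are all placeholder names, and neighborhood clamping is just one known way to fight ghosting):

// Bare sketch of a temporal blend with neighborhood clamping.
float4 current = tex2D( CurrentFrameSampler, i_TexCoord.xy );
float4 history = tex2D( HistorySampler, i_TexCoord.xy - jitterOffset );

// Clamp the history sample to the color bounds of the current frame's
// 3x3 neighborhood, so stale colors (ghosts) are pulled back toward
// the new frame.
float4 minC = current, maxC = current;
[unroll] for (int y = -1; y <= 1; ++y)
{
    [unroll] for (int x = -1; x <= 1; ++x)
    {
        float4 c = tex2D( CurrentFrameSampler, i_TexCoord.xy + float2(x, y) * vPixelViewport );
        minC = min(minC, c);
        maxC = max(maxC, c);
    }
}
history = clamp(history, minC, maxC);

// Exponential history: a small blendFactor (e.g. 0.1) keeps most of the
// accumulated result while new jittered samples refine it over time.
o_Color = lerp(history, current, blendFactor);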

Cheers
Praise the alternative.
We had great success with this approach. Our implementation is somewhat different from the one described above.

We take four strategically placed samples to establish the edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is proportional to the detected edge contrast, but the sample weighting is constant. This works astonishingly well, and it costs a fraction of a millisecond compared to 3+ ms for MLAA.

Can you describe the process you are using in more detail? Are you using both normals and depth?

Thanks!
---------------------------------------
http://badfoolprototype.blogspot.com/

Quote: Can you describe the process you are using in more detail? Are you using both normals and depth?

Using only the color information. At each pixel the edge direction and contrast are estimated using color differences between neighboring pixels (in our version, the color of the "central" pixel is ignored for this purpose). E.g., if it's bright on the right side and dark on the left side, then the edge has a strong vertical component (put another way, the edge normal points to the left).

In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.
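A rough sketch of the idea, using the same conventions as the HLSL code earlier in the thread (illustrative only, not our production shader; the rotation amount, fContrastScale, and the luminance reduction are made up for the example):

// Illustrative sketch only -- not the production shader.
// Two basis vectors, slightly rotated relative to the image rows/columns.
// Off-axis offsets land between pixel centers, so each bilinear fetch
// already averages several pixels.
float2 axis0 = normalize(float2( 1.0,  0.25)) * vPixelViewport;
float2 axis1 = normalize(float2(-0.25, 1.0 )) * vPixelViewport;

// Four samples straddling the current pixel along each axis.
float3 p0 = tex2D( ColorTextureSampler, i_TexCoord.xy + axis0 ).rgb;
float3 p1 = tex2D( ColorTextureSampler, i_TexCoord.xy - axis0 ).rgb;
float3 p2 = tex2D( ColorTextureSampler, i_TexCoord.xy + axis1 ).rgb;
float3 p3 = tex2D( ColorTextureSampler, i_TexCoord.xy - axis1 ).rgb;

// Reduce each color difference to a scalar (luminance difference is one
// way to do it); these become the weights for the basis vectors.
float w0 = GetColorLuminance(p0) - GetColorLuminance(p1);
float w1 = GetColorLuminance(p2) - GetColorLuminance(p3);

// The weighted sum points across the edge and its length grows with the
// edge contrast; rotating it 90 degrees gives a step along the edge.
float2 dir      = w0 * axis0 + w1 * axis1;
float2 blurStep = float2(-dir.y, dir.x) * fContrastScale; // fContrastScale: tweakable

// Four-tap blur along the edge with constant weights.
o_Color = ( tex2D( ColorTextureSampler, i_TexCoord.xy + blurStep * 0.5 )
          + tex2D( ColorTextureSampler, i_TexCoord.xy - blurStep * 0.5 )
          + tex2D( ColorTextureSampler, i_TexCoord.xy + blurStep * 1.5 )
          + tex2D( ColorTextureSampler, i_TexCoord.xy - blurStep * 1.5 ) ) * 0.25;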

[attachment=1191:aa filter.jpg]
I did pretty much the same thing a little while ago, and it really did work rather well considering how simple it is. The shader code can be found here. It's for Crysis, in the "PostEffects.cfx" file. Searching for "// sikk---> Edge AA" in the file will take you right to it.

This topic is closed to new replies.
