Styves

NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).


Hi. I've been testing this out. Seems pretty cool so far.

Quote:
Original post by Styves

source:

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur



This seems wrong to me, though. Do you really need to multiply by vPixelViewport a second time?

Ah yes, you're right. The second multiplication isn't needed, you just need to do it once. :)

Feel free to post images and ideas you have regarding the technique! I'm curious to see how it's helping people :D.
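To make the fix concrete, here's a quick CPU-side sketch (Python; the 1280x720 viewport and the edge-vector values are hypothetical):

```python
# Scaling the edge vector by the pixel size: once is correct; a second
# multiply shrinks the offset far below one pixel and kills the blur.
# Hypothetical 1280x720 viewport; vec1/vec2 are example difference sums.
viewport_w, viewport_h = 1280.0, 720.0
pixel = (1.0 / viewport_w, 1.0 / viewport_h)

vec1, vec2 = 2.0, -1.0

# Correct: multiply by the pixel size once.
normal_once = (vec1 * pixel[0], vec2 * pixel[1])

# Buggy double multiply: offsets drop to ~1e-6 UV units.
normal_twice = (normal_once[0] * pixel[0] * 2.0, normal_once[1] * pixel[1] * 2.0)

print(normal_once)   # offsets on the order of one pixel
print(normal_twice)  # offsets hundreds of times smaller than a pixel
```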

I implemented this in OpenGL/GLSL and at first I was disappointed with the results... but after playing with it a bit I got pretty decent results. The most important thing I found was clamping Normal.xy before doing the texture lookups:
Normal.xy = clamp(Normal,-float2(1.0,1.0)*0.4,float2(1.0,1.0)*0.4); //Prevent over-blurring in high-contrast areas! -venzon
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );
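The clamp's effect is easy to check on the CPU; a minimal Python sketch with hypothetical edge-vector values:

```python
# venzon's clamp caps each edge-vector component at +/-0.4 before it is
# turned into a UV offset, preventing oversized blur steps at hard edges.
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

limit = 0.4
vec = (3.0, -0.2)  # hypothetical components at a high-contrast edge

clamped = tuple(clamp(c, -limit, limit) for c in vec)
print(clamped)  # (0.4, -0.2): the large component is capped, the small one kept
```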


[Edited by - venzon on October 2, 2010 12:32:22 AM]

I've made some changes to the shader: it no longer needs the value clamping or the edge scaling. The source of the problem seemed to be the diagonal samples at the end of the shader, so I've tweaked the shader code from this thread accordingly. Of course, if you're using the shader on an HDR image then clamping may still be necessary. :)


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

const float fScale = 3;

// Offset coordinates
float2 upOffset = float2( 0, vPixelViewport.y ) * fScale;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * fScale;

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset).rgb );

// Normal map creation
float sum0 = rightTopHeight + topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + rightTopHeight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight;

float vec1 = (sum1 - sum0);
float vec2 = (sum2 - sum3);

// Put them together and scale.
float2 Normal = float2( vec1, vec2) * vPixelViewport * fScale;

// Color
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) * 0.5 );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) * 0.5 );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;
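As a sanity check, the "normal map creation" sums can be replayed on the CPU. A small Python sketch (the neighborhood luminances are hypothetical):

```python
# Replays the edge-vector sums on a 3x3 luminance neighborhood.
# h maps an (x, y) offset to its luminance; +y is the upOffset direction.
def edge_vector(h):
    sum0 = h[(1, 1)] + h[(0, 1)] + h[(1, -1)]     # rightTop + top + rightBottom
    sum1 = h[(-1, 1)] + h[(0, -1)] + h[(-1, -1)]  # leftTop + bottom + leftBottom
    sum2 = h[(-1, 1)] + h[(-1, 0)] + h[(1, 1)]    # leftTop + left + rightTop
    sum3 = h[(-1, -1)] + h[(1, 0)] + h[(1, -1)]   # leftBottom + right + rightBottom
    return (sum1 - sum0, sum2 - sum3)

offsets = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]

flat = {o: 0.5 for o in offsets}
print(edge_vector(flat))  # (0.0, 0.0): flat regions produce no blur offset

edge = {(x, y): 1.0 if x > 0 else 0.0 for (x, y) in offsets}
print(edge_vector(edge))  # nonzero: the filter reacts to a vertical edge
```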



Hi,

Very nice results! Unfortunately I couldn't reproduce them no matter what I tried, so I came up with my own version of your filter. It requires slightly fewer samples and still gives very good results.

I made a blog entry about it and here is the GLSL filter. w controls the width of the filter (how large the edges will get), and the threshold (the hard-coded comparison to 1.0/16.0) prevents over-blurring edges that might not need it.


uniform sampler2D source;
uniform vec2 inverse_buffer_size;

float lumRGB(vec3 v)
{ return dot(v, vec3(0.212, 0.716, 0.072)); }

void main()
{
    vec2 UV = gl_FragCoord.xy * inverse_buffer_size;

    float w = 1.75;

    float t = lumRGB(texture2D(source, UV + vec2(0.0, -1.0) * w * inverse_buffer_size).xyz),
          l = lumRGB(texture2D(source, UV + vec2(-1.0, 0.0) * w * inverse_buffer_size).xyz),
          r = lumRGB(texture2D(source, UV + vec2(1.0, 0.0) * w * inverse_buffer_size).xyz),
          b = lumRGB(texture2D(source, UV + vec2(0.0, 1.0) * w * inverse_buffer_size).xyz);

    vec2 n = vec2(-(t - b), r - l);
    float nl = length(n);

    if (nl < (1.0 / 16.0))
        gl_FragColor = texture2D(source, UV);
    else
    {
        n *= inverse_buffer_size / nl;

        vec4 o = texture2D(source, UV),
             t0 = texture2D(source, UV + n * 0.5) * 0.9,
             t1 = texture2D(source, UV - n * 0.5) * 0.9,
             t2 = texture2D(source, UV + n) * 0.75,
             t3 = texture2D(source, UV - n) * 0.75;

        gl_FragColor = (o + t0 + t1 + t2 + t3) / 4.3;
    }
}
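One property worth verifying: the tap weights sum to the divisor, so flat regions pass through unchanged. A quick Python check:

```python
# The five taps are weighted 1, 0.9, 0.9, 0.75, 0.75 and divided by 4.3.
# Since 1 + 2*0.9 + 2*0.75 == 4.3, a constant-color region is preserved.
weights = [1.0, 0.9, 0.9, 0.75, 0.75]
flat_color = 0.25
out = sum(w * flat_color for w in weights) / 4.3
print(out)  # ~0.25: flat areas are untouched
```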




Btw, I played with temporal anti-aliasing and there really is something great to be done on this front too. The main issue to solve is how to combine jittered frames without introducing too many ghosting artifacts.
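One common trick for that (my own sketch, not something from this thread): blend the history exponentially, but clamp it to the current frame's local color range first, so stale samples can't linger as ghosts. In Python, scalar luminance only:

```python
# Hedged TAA resolve sketch: exponential history blend with neighborhood
# clamping. All names and values here are illustrative assumptions.
def resolve(history, current, neighborhood, alpha=0.1):
    lo, hi = min(neighborhood), max(neighborhood)
    history = max(lo, min(hi, history))  # clamp stale history into range
    return history * (1.0 - alpha) + current * alpha

# The history still holds a bright value (0.9) over a region that is now
# dark; the clamp snaps it toward the current neighborhood before blending.
print(resolve(0.9, 0.1, [0.05, 0.1, 0.15]))  # 0.145, not a bright ghost
```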

Cheers

We had great success with this approach. Our implementation is somewhat different from that described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA.


Quote:

We had great success with this approach. Our implementation is somewhat different from that described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA.


Can you describe the process you're using in more detail? Are you using both normals and depth?

Thanks!


Quote:

Can you describe the process you're using in more detail? Are you using both normals and depth?

Using only the color information. At each pixel, the edge direction and contrast are estimated using color differences between neighboring pixels (in our version, the color of the "central" pixel is ignored for this purpose). E.g., if it's bright on the right side and dark on the left side, then the edge has a strong vertical component (put another way, the edge normal points to the left).

In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.
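Since the actual code isn't shown, here's only a rough Python sketch of the rotated-basis idea as I read it (the rotation angle and the weighting are my guesses, not the poster's values):

```python
import math

# Two basis vectors slightly rotated from the image axes; each color
# difference (reduced to a scalar) weights one of them, and the weighted
# sum becomes the step vector for the directional blur.
def blur_step(diff_a, diff_b, angle=0.2):  # angle in radians, illustrative
    ax, ay = math.cos(angle), math.sin(angle)    # rotated "x" basis
    bx, by = -math.sin(angle), math.cos(angle)   # rotated "y" basis
    return (diff_a * ax + diff_b * bx, diff_a * ay + diff_b * by)

step = blur_step(1.0, 0.0)  # pure contrast along the first basis direction
print(step)  # roughly (0.98, 0.20): a unit step along the rotated axis
```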

[attachment=1191:aa filter.jpg]

I did pretty much the same thing a while back, and it really did work rather well, considering how simple it is. The shader code can be found here. It's for Crysis and it's in the "PostEffects.cfx" file. Searching for "// sikk---> Edge AA" in the file will take you right to it.
