Styves

NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).


Helloooo people of GameDev.net! How ya been?

I've got something I'd like to share with you, along with implementation details and results. I'm sure some of you with a deferred renderer will appreciate this.

It's a new method of post-process anti-aliasing that I finished today. I haven't thought of a name for it yet (if anyone has any ideas, I'd be infinitely grateful). (Edit: I found a name; see the bottom of the post.)

Anyway, Here's how it works:

1. Gather samples around the current pixel for edge detection. I use 8, one for each neighboring pixel (up, down, left, right, and the four diagonals); I initially used 3, but that wasn't enough.
2. Use them to create a normal-map image of the entire screen. This is the edge detection. How you do this is entirely up to you; scroll down the page for sample code showing how I do it.
3. Average 5 samples of the scene, using the new normal image as offset coordinates (a quincunx pattern).

As simple as it sounds, it actually works extremely well in most cases, as seen below. The images were rendered in CryEngine2.

Left is post-AA, right is no AA. Image scaled by 200%.


Gif animation comparing the two. Also scaled by 200%.


Here's the heart of my algorithm, the edge detection.
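In condensed form, it looks like this (a sketch only; GetColorLuminance, ColorTextureSampler and the viewport constants come from the full shader posted further down the thread, and uv stands for the current pixel's texture coordinate):

// Condensed sketch of the edge detection: one luminance sample per
// neighbor, then opposing "triangles" of samples are subtracted to
// build an edge vector.
float2 pixel = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

float t  = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2( 0,  pixel.y ) ).rgb );
float b  = GetColorLuminance( tex2D( ColorTextureSampler, uv - float2( 0,  pixel.y ) ).rgb );
float r  = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2( pixel.x, 0 ) ).rgb );
float l  = GetColorLuminance( tex2D( ColorTextureSampler, uv - float2( pixel.x, 0 ) ).rgb );
float tl = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2( -pixel.x,  pixel.y ) ).rgb );
float tr = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2(  pixel.x,  pixel.y ) ).rgb );
float bl = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2( -pixel.x, -pixel.y ) ).rgb );
float br = GetColorLuminance( tex2D( ColorTextureSampler, uv + float2(  pixel.x, -pixel.y ) ).rgb );

// Sum triangles of samples and subtract the opposite triangle.
float vec1 = (tl + b + bl) - (tr + t + br);
float vec2 = (tl + l + tr) - (bl + r + br);

// The resulting "normal", scaled into texel units, offsets the blur samples.
float2 Normal = float2( vec1, vec2 ) * pixel;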


Pros:
- Easily implemented: it's just a post effect.
- Pretty fast, mostly texture-fetch heavy (8 samples for edge detection, 5 for scene). < 1ms on a 9800GTX+ at 1920x800.
- Can solve aliasing on textures, lighting, shadows, etc.
- Flexible: in a deferred renderer, it can be applied to any buffer you want, so you can use real AA on the main buffer and post-AA on things like lighting.
- Works on shader model 2.0
- Since it's merely a post-process pixel shader, it scales very well with resolution

Cons:
- Can soften the image due to filtering textures (double-edged sword).
- Inconsistent results (some edges appear similar to 16x MSAA while others look like 2x).
- Can cause artifacts with too high of a normal-map strength/radius.
- Doesn't account for "walking jaggies" (temporal aliasing) and can't compensate for details on a sub-pixel level (like poles or wires in the distance), since that requires creating detail out of thin air.

Notes on the cons and how they might possibly be fixed:
-Image softening can be solved by using a depth-based approach instead (see the sketch below).
-Inconsistent results and artifacts can probably be solved with higher sample counts.
-The last one will require some form of temporal anti-aliasing (check Crytek's method in "Reaching the Speed of Light" if you want to do it via post-processing as well).
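
For the first point, a minimal sketch of the depth-based idea (untested; DepthTextureSampler is a placeholder for however you fetch linear depth in your engine):

// Hypothetical depth-based variant: run the exact same triangle-sum edge
// detection, but on linear depth instead of luminance, so texture detail
// no longer triggers the blur. DepthTextureSampler is a placeholder input.
float GetEdgeHeight( float2 uv )
{
    return tex2D( DepthTextureSampler, uv ).r; // linear depth
}

// ...then build the triangle sums and the final "Normal" exactly as in
// the luminance version, calling GetEdgeHeight for all 8 samples.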

That's it. Use the idea as you want, but if you get a job or something because of it, let 'em know I exist. ;)

Edit: New name = Normal Filter AA, because we're doing a normal filter for the edge detection. Less confusing than the previous name. :)

[Edited by - Styves on September 3, 2010 1:41:37 AM]

Nice results.

Just to clarify -
* The normal is based on the rate of change in color, luminosity, ...?
* If the rate of change is large vertically, then your normal points in the horizontal direction?
* When doing the averaging, you take 4 points in a line, oriented in the direction of the normal?

The quality's definitely not bad for a post-process technique. It's not MLAA quality, but it's obviously many times simpler and cheaper. :P

Personally I'm still not a fan of color-only approaches. They work well for large high-contrast edges, but without sub-pixel information you totally fail to deal with two of the most annoying results of aliasing: low-frequency artifacts from thin wire-shaped geometry, and "crawling edges" caused by pixel-sized movements. You can also end up inadvertently filtering out high-frequency details in your textures or shading.

Regarding improvements, I don't think Crytek's solution for addressing temporal artifacts is a particularly good one... I think you really need sub-pixel coverage information to properly deal with that problem without just blurring things even further. Also, depth works well for the silhouettes of your meshes, but you need normals as well if you want to handle intersecting geometry and avoid false positives for triangles parallel to the view direction.

Sorry if I sound overly negative...I've just been spending some time on this subject recently and I've been frustrated with the shortcomings. :P

1. Anything you want. I'm using color for the moment, but I may switch to luminosity or use a different color space (I read a GPU MLAA paper that suggested the LAB color space). I tried depth and normals, which also work, though you need a bit of tweaking to catch as many edges as possible.
2. I'm not quite sure I get what you mean by that.
3. Maybe some pseudo-code might help:


// Quincunx pattern: the center sample, plus four samples offset along
// the edge normal and its mirrored counterpart.
float4 Scene0 = tex2D(sceneSampler, TexCoord.xy);
float4 Scene1 = tex2D(sceneSampler, TexCoord.xy + Normal.xy);
float4 Scene2 = tex2D(sceneSampler, TexCoord.xy - Normal.xy);
float4 Scene3 = tex2D(sceneSampler, TexCoord.xy + float2(Normal.x, -Normal.y));
float4 Scene4 = tex2D(sceneSampler, TexCoord.xy - float2(Normal.x, -Normal.y));

// Equal-weight average of the five samples.
return (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;


That's pretty much it, should give you a better explanation than me trying to type it out.

I hear ya MJP, this definitely isn't a magic solution for AA. It just helps reduce aliasing from edges and alpha-tested textures, which is better than nothing (especially in a game that uses a lot of vegetation, like Crysis, for example). :)

[Edited by - Styves on August 26, 2010 1:16:04 PM]

Quote:
Original post by Styves
As for the name, me and some friends decided to call it "Tangent Space Anti-Aliasing", 'cause we're basically working with a tangent-space normal map for the edge detection.


Nitpick: This is most definitely in screen space, not tangent space :)

Have you experimented at all with temporal reprojection? If you've read any of the new SIGGRAPH 2010 materials on CryEngine 3, Crytek demonstrates pretty good results for their new edge antialiasing technique by jittering the camera to capture sub-pixel detail-- it works surprisingly well. I have link(s) if you're interested.

Edit: Changed the name to "Normal Filter AA", since we're doing a normal-map filter (more appropriate, less confusing :D).

I linked the CryEngine3 papers in my original post, but if you have any other links I'd be glad to have 'em. I do most (if not all) of my work in CryEngine2, so I've got quite a few limitations (I can't provide my own engine information, can't create custom shader passes, can't assign custom textures to certain shaders, etc.). It's a real pain.

Anyway, I'm definitely interested in at least attempting it to see how far I can get. :)

[Edited by - Styves on August 26, 2010 1:17:16 PM]

Hello,

I had been looking for a good anti-aliasing technique for my deferred renderer for a long time. Your idea sounds very interesting and I would like to test it in my engine, but I'm not sure I've fully understood how it works. How exactly do you calculate normals from color? In my engine I used luminosity, but my results are not as good as yours.

Here is my shader code:


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal
float2 upOffset = float2( 0, vPixelViewport.y );
float2 rightOffset = float2( vPixelViewport.x, 0 );

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset+upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset-upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset+upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset-upOffset).rgb );

float xDifference = (2*rightHeight+rightTopHeight+rightBottomHeight)/4.0f - (2*leftHeight+leftTopHeight+leftBottomHeight)/4.0f;
float yDifference = (2*topHeight+leftTopHeight+rightTopHeight)/4.0f - (2*bottomHeight+rightBottomHeight+leftBottomHeight)/4.0f;
float3 vec1 = float3( 1, 0, xDifference );
float3 vec2 = float3( 0, 1, yDifference );
float3 Normal = normalize( cross( vec1, vec2 ) );

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;




And this is how I calculate luminosity:

float GetColorLuminance( float3 i_vColor )
{
    return dot( i_vColor, float3( 0.2126f, 0.7152f, 0.0722f ) );
}




And at the end I would like to show you screenshots with and without AA. Images scaled by 200%.


[Edited by - Volgut on September 2, 2010 4:25:22 AM]

Volgut, I'm very glad to see someone interested in the idea. :D

I don't use a cross computation to get my normal image; I use an add/subtract system to form the edges.

What I do is sum up samples that form triangles, then subtract the opposite triangle.

I've modified your code to use the same edge detection algorithm. Try it out and let me know how it works (I've also added documentation for convenience). :)


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal, scale it up 3x for a better coverage area
float2 upOffset = float2( 0, vPixelViewport.y ) * 3;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * 3;

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset).rgb );

// Normal map creation, this is where it differs:
// sum up "triangles" of samples, then subtract the opposite triangle.
float sum0 = rightTopHeight + topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + rightTopHeight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight;

// For the final vectors, just subtract the opposite sample set.
// The amount of "antialiasing" is directly related to "filterStrength":
// higher gives better AA, but too high causes artifacts.
float filterStrength = 1;
float vec1 = (sum1 - sum0) * filterStrength;
float vec2 = (sum2 - sum3) * filterStrength;

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2 ) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;

// To debug the normal image, use this:
// o_Color.xyz = normalize( float3( vec1, vec2, 1 ) ) * 0.5 + 0.5;
// vec1 and vec2 are used for the debug output because Normal won't display
// anything (due to the pixel scale applied to it).

You should get something similar to this when using the debug output.



As I said before, I'm using pure color. It's rather simple really: do the same thing as usual but keep the full color samples (change them to float3), and just before you put them together to form "Normal", add the three color channels together.
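
In code terms, the change is roughly this (a sketch, not a drop-in snippet; the sample names mirror the shader above, shortened, but hold float3 colors):

// Pure-color variant: the 8 neighbor samples stay float3 instead of being
// reduced to luminance, and the channels are summed at the very end.
float3 sum0 = rightTop + top + rightBottom;
float3 sum1 = leftTop + bottom + leftBottom;
float3 sum2 = leftTop + left + rightTop;
float3 sum3 = leftBottom + right + rightBottom;

float3 diff1 = (sum1 - sum0) * filterStrength;
float3 diff2 = (sum2 - sum3) * filterStrength;

// Add the three color channels together to get the final scalars.
float vec1 = diff1.r + diff1.g + diff1.b;
float vec2 = diff2.r + diff2.g + diff2.b;

float2 Normal = float2( vec1, vec2 ) * vPixelViewport;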

[Edited by - Styves on September 3, 2010 1:54:58 AM]

Hello Styves,

Thank you for helping me :) Your code was very useful. I only modified it a bit and implemented a normal scale/offset based on pixel distance from the camera; with a fixed scale/offset I was getting some bleeding artefacts in the distance.
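
The fade itself is roughly this (a simplified sketch; DepthTextureSampler and fFadeDistance are placeholders for my engine's inputs):

// Shrink the blur offset for far-away pixels so distant edges don't bleed.
float fDepth = tex2D( DepthTextureSampler, i_TexCoord.xy ).r; // linear depth
float fFade = saturate( 1.0f - fDepth / fFadeDistance );
Normal.xy *= fFade;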

Here you can see two screenshots (scaled 200%) with and without anti-aliasing.


Your anti-aliasing technique works very well. As you wrote, it doesn't solve all problems, but it definitely can reduce aliasing effect. Now I will try to look into Crytek's temporal aliasing.

Thanks again.

Regards,
Tomasz

Quote:
Original post by Styves
I don't use a cross computation to get my normal-image. I use an add/subtract system in order to form my edges.
What I do is sum up samples that form triangles then subtract the opposite triangle.
Hey, that's cool. I actually implemented that once on the Wii, except my triangles were rotated 45 degrees compared to yours! I never got quality as good as this out of it, though, so we scrapped it.
I'm going to have to give this another shot now, thanks ;)

Hi. I've been testing this out. It seems pretty cool so far.

Quote:
Original post by Styves

source:

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur



This seems wrong to me though. I don't think you need to multiply by vPixelViewport a second time?

I implemented this in OpenGL/GLSL and at first I was disappointed, but after playing with it a bit I got pretty decent results. The most important thing I found was clamping Normal.xy before doing the texture lookups:
Normal.xy = clamp(Normal,-float2(1.0,1.0)*0.4,float2(1.0,1.0)*0.4); //Prevent over-blurring in high-contrast areas! -venzon
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );


[Edited by - venzon on October 2, 2010 12:32:22 AM]

I've made some changes to the shader: I've removed the need to clamp values or scale the edges. The source of the problem seemed to be the diagonal samples at the end of the shader, which are now scaled down by half. I've tweaked the shader code from this thread to match. Of course, if you're using the shader on an HDR image, then clamping may still be necessary. :)


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

const float fScale = 3;

// Offset coordinates
float2 upOffset = float2( 0, vPixelViewport.y ) * fScale;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * fScale;

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset).rgb );

// Normal map creation: triangle sums, opposite triangles subtracted.
float sum0 = rightTopHeight + topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + rightTopHeight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight;

float vec1 = (sum1 - sum0);
float vec2 = (sum2 - sum3);

// Put them together and scale.
float2 Normal = float2( vec1, vec2) * vPixelViewport * fScale;

// Color
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) * 0.5 );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) * 0.5 );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;



Hi,

Very nice results! Unfortunately I could not reproduce them no matter what I tried, so I came up with my own version of your filter. It requires slightly fewer samples and still gives very good results.

I made a blog entry about it; here is the GLSL filter. w controls the width of the filter (how large the edges will get) and the threshold (the hard-coded comparison to 1.0/16.0) prevents over-blurring edges that might not need it.


float lumRGB(vec3 v)
{
    return dot(v, vec3(0.212, 0.716, 0.072));
}

void main()
{
    vec2 UV = gl_FragCoord.xy * inverse_buffer_size;

    float w = 1.75;

    float t = lumRGB(texture2D(source, UV + vec2( 0.0, -1.0) * w * inverse_buffer_size).xyz),
          l = lumRGB(texture2D(source, UV + vec2(-1.0,  0.0) * w * inverse_buffer_size).xyz),
          r = lumRGB(texture2D(source, UV + vec2( 1.0,  0.0) * w * inverse_buffer_size).xyz),
          b = lumRGB(texture2D(source, UV + vec2( 0.0,  1.0) * w * inverse_buffer_size).xyz);

    // Edge vector: the luminance gradient rotated to run along the edge.
    vec2 n = vec2(-(t - b), r - l);
    float nl = length(n);

    // Below the threshold, pass the pixel through untouched.
    if (nl < (1.0 / 16.0))
        gl_FragColor = texture2D(source, UV);
    else
    {
        n *= inverse_buffer_size / nl;

        // Weighted five-tap blur along the edge direction.
        vec4 o  = texture2D(source, UV),
             t0 = texture2D(source, UV + n * 0.5) * 0.9,
             t1 = texture2D(source, UV - n * 0.5) * 0.9,
             t2 = texture2D(source, UV + n) * 0.75,
             t3 = texture2D(source, UV - n) * 0.75;

        // The weights sum to 4.3, so normalize by that.
        gl_FragColor = (o + t0 + t1 + t2 + t3) / 4.3;
    }
}




Btw, I played with temporal anti-aliasing and there really is something great to be done on that front too. The main issue to solve is how to combine jittered frames without introducing too many ghosting artifacts.
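
One possible scheme, as a rough sketch (historySampler, reprojectedUV, pixel and blendFactor are illustrative names only, not working code from my engine):

// Blend the reprojected previous frame toward the current one, but first
// clamp the history to the local color range so stale colors from moving
// objects are rejected instead of smearing into ghosts.
float4 current = tex2D( sceneSampler, uv );
float4 history = tex2D( historySampler, reprojectedUV );

float4 n0 = tex2D( sceneSampler, uv + float2( pixel.x, 0 ) );
float4 n1 = tex2D( sceneSampler, uv - float2( pixel.x, 0 ) );
float4 n2 = tex2D( sceneSampler, uv + float2( 0, pixel.y ) );
float4 n3 = tex2D( sceneSampler, uv - float2( 0, pixel.y ) );

float4 cMin = min( current, min( min( n0, n1 ), min( n2, n3 ) ) );
float4 cMax = max( current, max( max( n0, n1 ), max( n2, n3 ) ) );
history = clamp( history, cMin, cMax );

// Higher blendFactor = more smoothing, but more risk of ghosting.
float4 result = lerp( current, history, blendFactor );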

Cheers

We had great success with this approach. Our implementation is somewhat different from the one described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is proportional to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and takes a fraction of a millisecond, compared to 3+ ms for MLAA.

[quote name='0xffffffff' timestamp='1295832219' post='4763691']
We had great success with this approach. Our implementation is somewhat different from the one described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is proportional to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and takes a fraction of a millisecond, compared to 3+ ms for MLAA.
[/quote]

Can you describe the process you are using in more detail? Are you using both normals and depth?

Thanks!

[quote name='JorenJoestar' timestamp='1296044936' post='4765004']
Can you describe better the process you are using? Are you using both normal and depth???
[/quote]
Using only the color information. At each pixel, the edge direction and contrast are estimated using color differences between neighboring pixels (in our version, the color of the "central" pixel is ignored for this purpose). E.g., if it's bright on the right side and dark on the left side, then the edge has a strong vertical component (put another way, the edge normal points to the left).

In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.

[attachment=1191:aa filter.jpg]
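
In rough shader form, the idea is something like this (a sketch only; the basis angle, the scalar reduction, and all constants are illustrative, not our production values):

// Two slightly rotated basis vectors (the angle here is an assumed choice).
float2 b0 = float2( 1.0, 0.2 ) * pixel;
float2 b1 = float2( -0.2, 1.0 ) * pixel;

// Four bilinear samples placed along the two basis directions.
float3 c0 = tex2D( sceneSampler, uv + b0 ).rgb;
float3 c1 = tex2D( sceneSampler, uv - b0 ).rgb;
float3 c2 = tex2D( sceneSampler, uv + b1 ).rgb;
float3 c3 = tex2D( sceneSampler, uv - b1 ).rgb;

// Reduce the color differences to scalars (one of many possible reductions).
float w0 = dot( c1 - c0, float3( 1.0, 1.0, 1.0 ) );
float w1 = dot( c3 - c2, float3( 1.0, 1.0, 1.0 ) );

// Weighted sum of the basis vectors; the step size grows with edge
// contrast. A swizzle such as float2( stepVec.y, -stepVec.x ) may be
// needed so the blur runs along the edge rather than across it.
float2 stepVec = w0 * b0 + w1 * b1;

// Constant-weight four-tap blur with a strong directional bias.
float3 blurred = ( tex2D( sceneSampler, uv - stepVec * 1.5 ).rgb
                 + tex2D( sceneSampler, uv - stepVec * 0.5 ).rgb
                 + tex2D( sceneSampler, uv + stepVec * 0.5 ).rgb
                 + tex2D( sceneSampler, uv + stepVec * 1.5 ).rgb ) * 0.25;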

I did pretty much the same thing a little bit ago that really did work rather well, considering how simple it is. The shader code can be found [url="http://crymod.com/thread.php?threadid=65998"]here[/url]. It's for Crysis and it's in the "PostEffects.cfx" file. Searching for "// sikk---> Edge AA" in the file will take you right to it.

Hey Sikkpin. I had downloaded it when you released it. Job well done at keeping it simple and fast, but there were some artifacts that bothered me a bit too much. :/ Looked to me like the final offsets were a bit too far.


PS: It's me, Xzero. :B

Not in my experience, unfortunately. I've tried that as well as scene normals, and neither seems to work as well. It works very well for outer edges, but aliasing still occurs on inner ones, and normals don't seem to smooth the outer edges facing away from the camera. Despite slowing things down a bit, a good method could be to combine normals or depth with a luminance pass, which should smooth a good amount of edges at all times; something like the sketch below.
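
A rough sketch of what I mean (EdgeNormalFromLuminance and EdgeNormalFromDepth are hypothetical helpers wrapping the detection code posted earlier in the thread):

// Hypothetical combination of the two detectors: per pixel, keep whichever
// edge response is stronger. Both helpers are assumed to wrap the
// triangle-sum edge detection shown earlier, fed by luminance or depth.
float2 lumNormal = EdgeNormalFromLuminance( i_TexCoord.xy );
float2 depthNormal = EdgeNormalFromDepth( i_TexCoord.xy );

float2 Normal = dot( depthNormal, depthNormal ) > dot( lumNormal, lumNormal )
              ? depthNormal
              : lumNormal;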

Another idea would be to somehow build a scene "edge map". How to do this at optimal speed, I'm not quite sure. I thought of using wireframe, but it would slow things down quite a bit, so you may as well go with a multi-pass method as above.

Anyway, I just recently implemented some new code to smooth out long horizontal and vertical aliasing even more, which works rather well. The idea came from [url="http://www.gamedev.net/topic/594209-dlaa/"]this thread[/url]. Unfortunately I can't store the edges in the alpha channel of my image, so I have to recalculate them for every sample, which is obviously fairly slow. Even so, the technique takes under ~3 ms, meaning that with the obvious optimizations it should be very fast. I'm still testing some things, so I'm taking some additional samples that can probably be removed, making it even quicker.

[img]http://img121.imageshack.us/img121/4118/longedgesmooth.gif[/img]


The new code did introduce some artifacts in distant areas, so I fade the effect using the depth of the scene. This works very well.

Now I just need to find a way to move away from luminance based detection so that textures can stay crisp and low-contrast edges can also be smoothed.

[quote name='0xffffffff' timestamp='1296076048' post='4765273']
In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.
[/quote]

Hi there 0xffffffff

I'm very interested in your approach.
Do you have any example code for your technique?

Thanks!
Ren

Hey guys, been a while since I posted about this.

I've recently made some additions and changes. The technique is now multi-pass and refined. There's still room for optimizations and I still need to find a solution for sub-pixel aliasing, but jagged edges are a thing of the past. :D

The shader runs at ~0.1 ms, which I believe is faster than MLAA or SRAA (MLAA apparently needs all 6 SPUs on the PS3 just to run), and it provides exceptional results. There are no supersampled buffers; it's all post-processing. It still runs on luminance, so contrast issues can occur in mostly dark scenes, but extending it to use normals or depth should be easy (I can't, since I'm using CryEngine and the passes I need don't have access to the depth map). Either way, the contrast issues aren't too bad, and even then you still get some measure of AA. See the images below for a comparison :D (mind the little dots on the ground; that was a texture filtering problem, solved soon after the images were taken).


[img]http://img853.imageshack.us/img853/3756/screenshot0012e.jpg[/img]


[img]http://img816.imageshack.us/img816/511/screenshot0011m.jpg[/img]


I may or may not post source code later. I've got a job coming up, and I'll likely try to push them to let me implement this in the game/engine, at which point I'll write a nice, lengthy paper about the technique and how to implement it. Until then, ciao! :D

Edit: Fixed the ms estimation. 1ms was a typo.

