
## Anti-aliasing Techniques


6 replies to this topic

### #1 Bendtner (Members)


Posted 22 January 2012 - 06:29 AM

I have recently made a fabric renderer which procedurally generates textures of the fabric at the micro-level using GLSL shaders. However, this approach suffers from heavy texture aliasing when the view is zoomed out from the object.

Here's my question: can anyone point me toward some of the most popular approaches for handling aliasing with anisotropic procedural textures?

Your help would be appreciated, thanks!

### #2 Digitalfragment (Members)


Posted 22 January 2012 - 09:29 PM

You can use the derivative functions, ddx and ddy, to determine a mip level and perform anti-aliasing of per-pixel data. Here's some example code that simulates anti-aliasing, taken from a line renderer. It's not directly portable to your case, since it assumes the inputs are laid out the way it expects, but the concept of derivative distance is the same.

```hlsl
float antialias(float2 texcoord, float2 edge)
{
    // How fast the first texture coordinate changes between
    // adjacent pixels, horizontally and vertically.
    float derivativeX = ddx(texcoord).x;
    float derivativeY = ddy(texcoord).x;
    float derivativeLength = sqrt(derivativeX * derivativeX + derivativeY * derivativeY) * 2;

    // Fade the edge over roughly one pixel's worth of texture space;
    // the epsilon guards against division by zero.
    float antialiasedEdge = saturate((1.0 - abs(edge.x)) / (derivativeLength + 0.00001f));
    return antialiasedEdge;
}
```
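The derivative-length idea above is the same one the hardware uses to pick a mip level for ordinary texturing. A rough CPU sketch of that selection (function and parameter names are mine, not from the post):

```python
import math

def mip_level(ddx, ddy):
    """Pick a mip level from per-pixel texture-coordinate derivatives.

    ddx, ddy: (du, dv) change of the texture coordinate per pixel,
    already scaled to texel units.
    """
    # Footprint of one pixel in texel space: take the longer axis.
    rho = max(math.hypot(*ddx), math.hypot(*ddy))
    # log2 of the footprint gives the mip level; clamp at the top mip.
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0

# A pixel covering a 4x4 texel footprint lands on mip 2;
# a 1:1 mapping stays on mip 0.
```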

### #3 ic0de (Members)


Posted 22 January 2012 - 10:10 PM

Many image-based anti-aliasing methods handle all kinds of aliasing, not just edges.

look into FXAA:

http://developer.download.nvidia.com/assets/gamedev/files/sdk/11/FXAA_WhitePaper.pdf


### #4 Martins Mozeiko (Members)


Posted 23 January 2012 - 02:09 AM

Or into SMAA: http://www.iryoku.com/smaa/

### #5 MJP (Moderators)


Posted 23 January 2012 - 02:22 AM

It's a very common problem without any easy fix. The sure-fire way to solve it is to shade in texture space, generate mip-maps, and then map that onto the polygons. Obviously, this is expensive since you'll essentially be supersampling for any surface not using the top mip level. You can also supersample directly in the shader, but that doesn't put you in any better position than shading in UV space. Screen-space/temporal anti-aliasing techniques can help somewhat, but they're nothing but a band-aid.

What you really want to do is figure out how to reformulate your algorithm as a function of LOD, such that you don't end up with high-frequency aliasing as your shading rate decreases. This is the concept driving techniques like LEAN/CLEAN mapping, which are for antialiasing specular highlights on normal maps. They essentially use the variance of normals in a normal map to figure out how to handle normal-map details as they transition from macro-level to micro-level, which they do by modifying the parameters of their BRDF (high normal variance turns into high roughness).
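LEAN/CLEAN mapping itself is more involved, but the core idea, that a shortened mip-averaged normal implies variance which can be folded into the specular term, is also what Toksvig's trick does. A simplified sketch of that variance-to-gloss mapping (my own paraphrase, not the papers' full math):

```python
def toksvig_gloss(avg_normal_len, specular_power):
    """Fold normal-map variance into a Blinn-Phong specular power.

    avg_normal_len: length of the averaged (mip-filtered) normal, in (0, 1].
    A shorter average normal means the underlying normals disagree more.
    """
    # Toksvig factor: the fraction of the original specular power that
    # survives once normal variance is accounted for.
    ft = avg_normal_len / (avg_normal_len + specular_power * (1.0 - avg_normal_len))
    return ft * specular_power

# Perfectly aligned normals (length 1) keep the full power;
# disagreeing normals (length < 1) produce a broader, dimmer highlight.
```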

### #6 Hodgman (Moderators)


Posted 23 January 2012 - 02:24 AM

You've got three main categories:
1) Analytical -- use derivatives (dFdx/dFdy) to avoid outputting high-frequency details in the first place. This is what mip-maps do for regular texturing.
2) Super sampling -- either render at a higher resolution, or evaluate your procedural function multiple times per pixel, then average/filter the results.
3) Post processing -- use a filter after the fact, such as FXAA/MLAA/SMAA/etc.
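Option 2 is easy to prototype outside a shader. A minimal sketch of per-pixel supersampling of a procedural function (the 2x2 sub-pixel offsets here are just one common choice, not from the post):

```python
def supersample(proc, x, y,
                offsets=((0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75))):
    """Average a procedural function at several sub-pixel positions.

    proc: function (x, y) -> float intensity, evaluated in pixel space.
    (x, y): the pixel's top-left corner.
    """
    samples = [proc(x + ox, y + oy) for ox, oy in offsets]
    return sum(samples) / len(samples)

# A hard vertical edge at x = 0.5 averages to partial coverage
# instead of snapping to 0 or 1.
edge = lambda x, y: 1.0 if x >= 0.5 else 0.0
# supersample(edge, 0.0, 0.0) -> 0.5  (two of the four samples pass the edge)
```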

### #7 Krypt0n (Members)


Posted 23 January 2012 - 02:57 AM

For procedural content, those conventional methods often don't work well. You usually don't have continuous functions to feed dFdx/dFdy, and post-processing is just a cheap fake that works to some degree on static images with a few jaggy edges, not on a procedurally generated image (which looks more like noise). Supersampling, meanwhile, is expensive (e.g. 16x the bandwidth) yet still doesn't give you an adaptive sample rate; you're capped at some fixed count like 16 while wasting time in areas that don't need SSAA at all.
The best way to deal with that is to loop several times inside the shader for a particular fragment; ideally you reduce the noise in every loop cycle and break out of the loop based on the divergence.

if you have something like this:

```glsl
vec4 ProceduralTexture(vec3 pos)
{
    // ...magic
    return color;
}

vec4 main(..) { return ProceduralTexture(WorldPos); }
```

you'd transform the main function to something like:

```glsl
vec4 main(..)
{
    // Half the world-space distance to the neighbouring pixels.
    vec4 DeltaToNextPixelX = dFdx(WorldPos) * 0.5f;
    vec4 DeltaToNextPixelY = dFdy(WorldPos) * 0.5f;
    vec4 Color = vec4(0.0);
    int Count = 0;
    do
    {
        vec4 LastColor = Color;
        // randf() is assumed to return a value in [0, 1].
        Color += ProceduralTexture(WorldPos + DeltaToNextPixelX * randf()
                                            + DeltaToNextPixelY * randf());
        Count++;
        if (Count > 4) // take at least 4 samples before testing convergence
        {
            // Compare the running mean before and after the latest sample.
            vec4 Delta = LastColor / float(Count - 1) - Color / float(Count);
            if (length(Delta) < 0.1) // 0.1 is our noise threshold
                break;
        }
    } while (true); // or set a limit, e.g. Count < 128
    return Color / float(Count);
}
```

(Just an example; you should implement a smarter threshold function, as this one can lead to noise in some cases.)
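The same adaptive loop is easy to test on the CPU. Here's a Python analog with the divergence test phrased as "stop when one more sample barely moves the running mean" (the threshold value is arbitrary):

```python
import random

def adaptive_sample(proc, min_samples=4, max_samples=128, threshold=0.01):
    """Accumulate samples of a noisy function until the running mean settles.

    proc: function (u, v) -> float, sampled at random sub-pixel offsets.
    Returns (mean, samples_taken).
    """
    total = 0.0
    count = 0
    while count < max_samples:
        prev_mean = total / count if count else 0.0
        total += proc(random.random(), random.random())
        count += 1
        # After a minimum number of samples, stop once an extra
        # sample no longer changes the running mean noticeably.
        if count > min_samples and abs(total / count - prev_mean) < threshold:
            break
    return total / count, count

# A constant function converges right after the minimum sample count;
# a noisy one keeps looping until its mean settles (or max_samples hits).
```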
