Faking the Antialiasing effect

Started by AHashem
8 comments, last by AHashem 15 years, 7 months ago
I was trying to find a way to "fake" the anti-aliasing effect on GPUs that don't support anti-aliasing (the kind enabled through SetRenderState). The main goal is to smooth out the sharp edges where objects meet each other. Any ideas how this could be done? By the way, I'm using DirectX 9 and HLSL... Thanks, Atef Hashem
You could use edge-blurring. Basically you run an edge-detect on a depth buffer (using a Sobel operator or something similar), and use the result of the edge-detection to control how much you blur a particular pixel. IMO the results aren't so great, but you may think otherwise. If you want an example, you can look at some screens for Crysis (the game uses this technique when FSAA is disabled). If you need some code samples, check towards the end of this tutorial for a full shader implementation.
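If it helps, here's a rough sketch of the idea in HLSL. The sampler and constant names (depthSampler, sceneSampler, texelSize, edgeScale) are just placeholders for whatever you bind yourself, and I'm using a cheap central-difference gradient in place of a full Sobel kernel:

sampler2D depthSampler;    // scene depth rendered in an earlier pass
sampler2D sceneSampler;    // the rendered scene colour
float2 texelSize;          // 1.0 / render target dimensions
float edgeScale;           // tweak to control edge sensitivity

float4 EdgeBlurPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    // Depth gradient around this pixel -> edge factor in [0, 1]
    float dl = tex2D(depthSampler, texCoord + float2(-1,  0) * texelSize).r;
    float dr = tex2D(depthSampler, texCoord + float2( 1,  0) * texelSize).r;
    float dt = tex2D(depthSampler, texCoord + float2( 0, -1) * texelSize).r;
    float db = tex2D(depthSampler, texCoord + float2( 0,  1) * texelSize).r;
    float edge = saturate(length(float2(dr - dl, db - dt)) * edgeScale);

    // Cheap 4-tap blur of the scene colour
    float4 blurred = tex2D(sceneSampler, texCoord + float2(-0.5, -0.5) * texelSize)
                   + tex2D(sceneSampler, texCoord + float2( 0.5, -0.5) * texelSize)
                   + tex2D(sceneSampler, texCoord + float2(-0.5,  0.5) * texelSize)
                   + tex2D(sceneSampler, texCoord + float2( 0.5,  0.5) * texelSize);
    blurred *= 0.25;

    // Only blend towards the blurred result where an edge was detected
    float4 sharp = tex2D(sceneSampler, texCoord);
    return lerp(sharp, blurred, edge);
}

The tutorial linked above has a full shader implementation, so treat this as the general shape rather than the exact filter.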

Aside from that... your options are pretty limited. Manual super-sampling is terribly expensive, so it's not usually a practical option. Techniques like motion blur and depth of field can help reduce the effects of aliasing, as will using low-contrast colors in your scenes.
Thanks for your reply...

Yea, I want a practical solution to use in my bowling game... could you please tell me more about the motion blur and depth of field techniques?

Thanks again,

Atef Hashem
I was about to link the GPU Gems 2 article about Stalker's deferred rendering, which has a section about that, but MJP was faster with a much better one. I might be wrong, but if a card supports HLSL I doubt it won't support AA. Deferred shading needs much more power than simple AA in my book.

MJP, this article dates from 2005. I wonder if this implementation is still a good, valid choice for DX10 games, or should you just use multiple lights with shader model 4 and some variance/exponential shadow maps + hardware AA? I know the answer will be "it depends on the game", but what about for an average game?
OpenGL supports edge anti-aliasing (I don't mean FSAA), I think I have used it successfully on graphics cards as old as geforece 4. It can be enabled with glEnable(GL_POLYGON_SMOOTH) / glEnable(GL_LINE_SMOOTH). I don't know the d3d equivalent but if it is a hardware feature there should be some way to enable it.
Quote:Original post by Dunge
MJP, this article dates from 2005. I wonder if this implementation is still a good, valid choice for DX10 games, or should you just use multiple lights with shader model 4 and some variance/exponential shadow maps + hardware AA? I know the answer will be "it depends on the game", but what about for an average game?


If you're talking about deferred shading in general, it's actually a better choice with DX10/SM4 than it is in DX9/SM3. With DX10 you have a lot of features that make deferred rendering faster and easier to implement, such as direct access to the device depth buffer and access to the sub-samples of a texture in the pixel shader (which allows you to use MSAA).
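Just to illustrate the sub-sample access, in SM4 HLSL an MSAA render target can be read per-sample with Texture2DMS::Load (the names here are made up):

Texture2DMS<float4, 4> gBufferNormals;   // a 4x MSAA G-buffer target bound as a shader resource

float4 AverageSamples(int2 pixelCoord)
{
    float4 sum = 0;
    [unroll]
    for (int s = 0; s < 4; s++)
        sum += gBufferNormals.Load(pixelCoord, s);   // Load(pixel coordinate, sample index)
    return sum * 0.25;
}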
Quote:Original post by AHashem

Yea, I want a practical solution to use in my bowling game... could you please tell me more about the motion blur and depth of field techniques?



There's a few ways to do motion blur, but one of the more common ones is to create a velocity buffer and use it to blur the screen. To create the velocity buffer, you either use MRTs or a second pass, and you write out each pixel's velocity by also transforming it with the worldViewProjection matrix from the previous frame. Like this:

// vertex shader
out.position = mul(in.position, worldViewProjMatrix);
out.currentPosition = out.position;      // pass the clip-space position through a TEXCOORD so the pixel shader can read it
out.lastPosition = mul(in.position, lastWorldViewProjMatrix);

// pixel shader
float2 velocity = (in.currentPosition.xy / in.currentPosition.w) - (in.lastPosition.xy / in.lastPosition.w);
return float4(velocity, 1, 1);


Then in a full-screen post-processing pass, you sample the velocity and blur in that direction:

const int NUM_SAMPLES = 12;
float2 velocity = tex2D(velocitySampler, texCoord).xy / 2;
velocity.y = -velocity.y;
float4 sum = 0;
for (int i = 0; i < NUM_SAMPLES; i++)
{
    float2 sampleCoord = texCoord + velocity * i / (float)NUM_SAMPLES;
    sum += tex2D(sceneSampler, sampleCoord);
}
return sum / NUM_SAMPLES;


There's a sample in the DirectX SDK for doing this called PixelMotionBlur; you can check that out for help with setting up the application code.

As for depth of field, that technique usually involves having a "focus range" and blurring everything that's not in that range. Doing this requires access to a depth buffer, which means either using a vendor-specific method for accessing the device depth buffer or rendering the depth yourself. There's a sample in the SDK that does DOF, called PostProcess; it's one of the several techniques available there.
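A rough sketch of that blend, assuming you already have the scene, a pre-blurred copy of it, and a depth texture available (all the names below are placeholders):

sampler2D depthSampler;          // scene depth written in an earlier pass
sampler2D sceneSampler;          // full-resolution scene colour
sampler2D blurredSceneSampler;   // pre-blurred (e.g. downsampled) copy of the scene
float focusDistance;             // centre of the focus range
float focusRange;                // how far from focusDistance things stay sharp

float4 DepthOfFieldPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float depth = tex2D(depthSampler, texCoord).r;

    // 0 inside the focus range, ramping up to 1 outside of it
    float blurAmount = saturate(abs(depth - focusDistance) / focusRange);

    float4 sharp   = tex2D(sceneSampler, texCoord);
    float4 blurred = tex2D(blurredSceneSampler, texCoord);
    return lerp(sharp, blurred, blurAmount);
}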
Thanks for your effort, but I don't think that motion blur or DOF will help in my specific situation. Here is a screenshot of the scene.

As you can see, the edges of the objects are sharp (the sides of the lanes, the ball-return machines and the room pillars). I know turning on anti-aliasing will resolve this and will produce a very good look. But I was thinking, what if the device doesn't support AA, should I fall back to another approach that would at least produce a better look? Do you think I should do this, or just stick with the built-in hardware AA, if supported?

Thanks...Atef
Well, as long as you're not rendering to a floating-point format, the only GPUs that won't support multi-sampling will be integrated chipsets. And on those chips pretty much anything you can do to reduce the aliasing will be too expensive to mess with.
Yea, you're right... so I'll just use the hardware anti-aliasing, and hopefully it will be supported :))... Thank you

