Original post by CaossTec
Antialiasing is indeed one of the greatest drawbacks of deferred shading, but it is not as hard to simulate as you suggest. I had found that using a simple edge detection filter and then just blurring those few edges with a 3x3 kernel produces very convincing results without sacrificing performance.
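For anyone who wants to try this, here's a minimal sketch of the idea in NumPy. It's not CaossTec's actual filter — the edge detector (a depth-gradient threshold) and the `threshold` value are assumptions on my part — but it shows the two-step structure: detect edges, then blur only the flagged pixels with a 3x3 box kernel.

```python
import numpy as np

def edge_mask(depth, threshold=0.1):
    # Hypothetical edge detection: flag pixels where the depth buffer
    # changes sharply against a neighbour (stand-in for a real edge filter).
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx + gy) > threshold

def blur_edges(color, mask):
    # 3x3 box blur, applied only where the edge mask fired;
    # everything else keeps its original shaded colour.
    h, w = color.shape
    padded = np.pad(color, 1, mode='edge')
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return np.where(mask, blurred, color)
```

In a real engine both steps would of course be a post-process shader over the G-buffer, not a CPU loop; the point is only that non-edge pixels pass through untouched, so the cost is concentrated on the few silhouette pixels.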
That's exactly the first thing I mentioned in my post, and I'll be shocked if your solution looks as good as what hardware AA provides. I honestly think it is the #1 worst solution to AA + deferred shading in existence, and I find it ludicrous that ANYONE in the graphics industry takes it seriously, considering the results a 3x3 blur gives compared to what hardware AA does.
AndyTX: I am curious: what do you mean by subtly different projection matrices?
I think he's referring to having two sets of G-buffers, each at screen resolution, where the projection matrices each have sub-pixel offsets (I don't know the math behind it, but it'd be worth checking out D3DX functions like D3DXMatrixPerspectiveOffCenterLH). It'd be interesting to see how something like quincunx AA would work as a post-process. I never entertained that idea before, and it might work. By the same token, though, I don't exactly recall having a superb experience with quincunx AA back on my old GeForce4. Also, I'm curious, AndyTX, what you mean by a jittered grid in this context and how you'd implement it.
Just displacing the framebuffer pixels (jittering)... so it's really more of a different *post-projection* matrix, but usually it's easiest just to tack it on to the projection matrix.
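To make "tack it on to the projection" concrete, here's a small sketch of how such a jitter is commonly folded in (my own formulation, not AndyTX's code). A sub-pixel offset of `(jx, jy)` pixels becomes an NDC-space translation of `2*jx/width, 2*jy/height`, applied in clip space after the projection matrix; this assumes a column-vector convention where points are transformed as `P @ v`.

```python
import numpy as np

def jittered_projection(proj, jx, jy, width, height):
    # Fold a sub-pixel jitter into a 4x4 projection matrix.
    # The translation lives in clip space: it scales with w, so after
    # the perspective divide it becomes a constant NDC offset.
    jitter = np.eye(4)
    jitter[0, 3] = 2.0 * jx / width    # one pixel spans 2/width in NDC x
    jitter[1, 3] = 2.0 * jy / height   # one pixel spans 2/height in NDC y
    return jitter @ proj               # jitter applied after projection
```

Rendering the scene twice with offsets like `(+0.25, +0.25)` and `(-0.25, -0.25)` and averaging the results gives a 2-sample jittered-grid pattern, which is presumably the kind of thing being discussed.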
But where are you applying that jitter? In between frames? During a post-process?