Temporal AA

Hi all.
Recently I've started implementing this technique, but I've run into a lot of trouble.
The basic approach, as explained in Crytek's presentations, doesn't work for me.
The first problem with this method is that reverse reprojection works properly only inside triangle boundaries; at the edges it doesn't work at all. That happens because triangle boundaries shift slightly every frame, so a pixel that was visible in the previous frame may not be visible in the current one. One of the papers referenced in Crytek's paper uses a linear filter on depth to mark edge pixels as needing to be recomputed. Crytek's paper says that to gather sub-sample information at the edges we should use edge detection to mark the edges and give them priority so they are always visible (as I understood it).
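To make the depth-based test concrete, here is a minimal sketch in plain Python of how that kind of validity check might look (the function names and the 0.01 tolerance are my own, not from the papers): the current pixel's reprojected depth is compared with a bilinearly filtered sample of the previous frame's linear depth buffer, and a mismatch means the surface was occluded or off-screen last frame, so the history sample must be recomputed instead of reused.

```python
# Sketch of a depth-comparison history-validity test (hypothetical names).

def bilinear(buf, width, height, u, v):
    """Bilinearly filter a row-major float buffer at texel coords (u, v)."""
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = u - x0, v - y0
    top = buf[y0 * width + x0] * (1 - fx) + buf[y0 * width + x1] * fx
    bot = buf[y1 * width + x0] * (1 - fx) + buf[y1 * width + x1] * fx
    return top * (1 - fy) + bot * fy

def history_depth_ok(reprojected_depth, prev_depth, width, height,
                     u, v, tolerance=0.01):
    """True when the previous frame saw (roughly) the same surface here."""
    return abs(bilinear(prev_depth, width, height, u, v)
               - reprojected_depth) <= tolerance

prev = [0.5, 0.5, 0.5, 0.9]   # 4x1 depth row; the last texel is a far surface
print(history_depth_ok(0.5, prev, 4, 1, 1.5, 0.0))  # True: same surface
print(history_depth_ok(0.5, prev, 4, 1, 2.5, 0.0))  # False: edge straddled
```

The second call fails precisely because the bilinear fetch straddles the depth discontinuity, which is how the filter flags edge pixels.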
But here we have the same problem: if we use the current depth or color buffer to compute the edges, the edge mask changes from frame to frame for the same reason. And if we use not only the current color and depth buffers but also the previous ones to reconstruct the edges, we get ugly shafts from this mask when the camera moves rapidly.
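For reference, here is a minimal sketch (pure Python, my own naming and threshold) of the kind of edge mask under discussion: a pixel is flagged when its depth differs from any 4-neighbour by more than a threshold. Since the mask is derived from the current depth buffer alone, it shifts along with the rasterized triangle edges every frame, which is exactly the instability described above.

```python
# Sketch of a depth-discontinuity edge mask (illustrative, not Crytek's code).

def edge_mask(depth, width, height, threshold=0.01):
    """depth: row-major list of linear depth values, length width*height."""
    mask = [False] * (width * height)
    for y in range(height):
        for x in range(width):
            d = depth[y * width + x]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    if abs(d - depth[ny * width + nx]) > threshold:
                        mask[y * width + x] = True
                        break
    return mask

# A 4x1 depth row with a step between the 2nd and 3rd pixels: the two
# pixels adjacent to the step are flagged, the others are not.
print(edge_mask([0.2, 0.2, 0.8, 0.8], 4, 1))  # [False, True, True, False]
```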
So I've decided that reverse reprojection is not suited to TAA. Instead, I now save only the color buffer and the corresponding view-projection matrix for each frame. In the shader I reconstruct the world position of the current pixel and transform it into each frame by multiplying by that frame's VP matrix. For example, to get the equivalent of MSAAx4 I use 4 render targets in addition to the back buffer, each storing the color of one of the previous frames. I compute the clip-space position of the original pixel in each of those frames, derive a texture coordinate, and fetch the sample color. To decide whether I can use that color in the final image, I do a simple comparison: the difference between each clip-space position and the current frame's clip-space position must be no greater than half a pixel at the original screen resolution.
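The steps above can be sketched as follows in plain Python (a minimal illustration with hypothetical names; a real shader would use the GPU's matrix and texture operations): reconstruct the world position from depth and the inverse of the current view-projection matrix, transform it by a previous frame's view-projection matrix, and accept the history sample only if the reprojected position lands within half a pixel (NDC spans 2 units across the screen, so half a pixel is 1/resolution in NDC).

```python
# Sketch of world-position reconstruction, reprojection, and the
# half-pixel validity test (illustrative names, not a real shader).

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject(ndc_xy, depth, inv_vp_current, vp_previous):
    """Return the previous frame's NDC xy for the pixel at ndc_xy/depth."""
    clip = [ndc_xy[0], ndc_xy[1], depth, 1.0]
    world = mat_vec(inv_vp_current, clip)
    world = [w / world[3] for w in world]          # perspective divide
    prev_clip = mat_vec(vp_previous, world)
    return [prev_clip[0] / prev_clip[3], prev_clip[1] / prev_clip[3]]

def history_valid(curr_ndc, prev_ndc, width, height):
    """Accept the history sample only if the reprojected position moved
    by at most half a pixel in each axis."""
    half_pixel_x = 1.0 / width    # half of one pixel's 2/width NDC extent
    half_pixel_y = 1.0 / height
    return (abs(curr_ndc[0] - prev_ndc[0]) <= half_pixel_x and
            abs(curr_ndc[1] - prev_ndc[1]) <= half_pixel_y)

# With identical view-projection matrices the pixel reprojects onto
# itself, so the history sample is accepted:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
prev_ndc = reproject([0.25, -0.5], 0.5, identity, identity)
print(history_valid([0.25, -0.5], prev_ndc, 1280, 720))  # True
```

With jittered sampling, each of the 4 history frames would pass or fail this test independently, and only the passing samples would contribute to the resolve.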
This works, but I'd like to make it better, and for now I have no ideas how. Maybe someone can help me, or is also trying to implement this technique?
P.S. I know there are plenty of other techniques like MLAA, FXAA, and SRAA, but they are all just post-processing: they can't reconstruct additional sub-sample information to properly eliminate small-geometry jittering and lighting flickering. And that is exactly what I want to solve.
