Temporal Antialiasing

6 comments, last by turanszkij 6 years, 8 months ago

Hello,

I am working on a deferred shading engine, which currently uses MSAA for antialiasing. Apart from the large G-buffer resources, it's working fine. But my engine is not only intended for realtime rendering; it also renders screenshots and videos. In that case I have enough time to do whatever it takes to get the best results. Even with 8x MSAA, some scenes still flicker, especially vegetation. Unfortunately 8x seems to be the maximum on DX11 hardware, so there is no way to get better results, even when I don't need realtime performance.

So finally I am looking for a solution that might offer an unlimited sample count. The first thing I thought about was finding a way to manually manipulate the MSAA sample locations, so that I could render multiple frames with different patterns and combine them. I found out that NVIDIA did something similar with TXAA. However, the only solution I found for changing sample locations uses NVAPI: https://mynameismjp.wordpress.com/2015/09/13/programmable-sample-points/

Since I am working with .NET and SlimDX, I have no idea how hard it would be to integrate the NVIDIA API, or whether it is possible to use it together with SlimDX. And this approach would also be limited to NVIDIA hardware.

Does anyone have an idea or maybe a better approach I could use?

Thanks, Thomas


Hi, there is a good SIGGRAPH presentation on how they do temporal AA in Uncharted 4. I managed to implement the technique from it in a couple of hours. I will post the name of it when I get home and find it.

And there is also one from the developers of Unreal Engine, so you can combine them. :)

It basically just offsets the projection matrix each frame and doesn't involve modifying the multisampling sample positions. Good luck, it's a really interesting technique!
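(Not from the thread itself, just to illustrate the idea: a minimal C++ sketch of the per-frame projection jitter being described. It assumes a D3D-style row-vector perspective projection where w = z, so a constant added to proj[2][0]/proj[2][1] survives the perspective divide as a fixed NDC offset; the function name, matrix layout, and y-axis sign are assumptions you would adapt to your own math library.)

```cpp
#include <cassert>
#include <cmath>

// Sketch: jitter a 4x4 row-vector D3D perspective projection by a sub-pixel
// amount. jitterX/jitterY are in pixels (typically in [-0.5, 0.5]).
// NDC spans 2 units across the viewport, so 1 pixel = 2/width in NDC x.
// The y sign is flipped here because D3D NDC y points up while pixel y
// points down - adjust for your own convention.
void ApplyJitter(float proj[4][4], float jitterX, float jitterY,
                 int width, int height)
{
    proj[2][0] +=  2.0f * jitterX / (float)width;
    proj[2][1] += -2.0f * jitterY / (float)height;
}
```

With w = z in a perspective projection, the added term contributes jitter * z to clip-space x/y, and dividing by w = z turns it back into a constant shift for every pixel, which is exactly the "offset the projection matrix" trick.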

1 minute ago, J said:

It basically just offsets the projection matrix in each frame and doesn't involve multisampling sample position modifying.

If it's really only the projection matrix, that would be awesome! Looking forward to your reply :)

Yeah, look up the term Halton sequence; it's basically a quasi-random (low-discrepancy) sequence of numbers with uniform distribution inside a rectangle (your pixel). You feed the sequence into the translation part of your projection matrix and you get your offset sample positions. I can't go into more detail from mobile, but maybe you can find the presentation on the web in the meantime. I've got a video about it on YouTube called Wicked Engine - Temporal antialiasing where I mention the exact name of the paper.

2 hours ago, J said:

You feed the sequence into your projection matrix translation part and you got your offset sample positions.

Yes, I found it here: advances.realtimerendering.com/s2016/
>Temporal Antialiasing in Uncharted 4

Very cool! The only thing I couldn't find was how big these offsets should be in the projection matrix; in the slides they showed that Proj[2,0] and Proj[2,1] are replaced with the offsets.

EDIT:
Just found the answer here: https://bartwronski.com/2014/03/15/temporal-supersampling-and-antialiasing/
 

Quote

Multiply your MVP matrix with a simple translation matrix that jitters in (-0.5 / w, -0.5 / h) and (0.5 / w, 0.5 / h) every other frame plus write a separate pass that combines frame(n) and frame(n-1) together and outputs the result.
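(To make the "separate pass that combines frame(n) and frame(n-1)" from the quote concrete, here is a minimal C++ sketch operating on flat float buffers. The function name and alpha parameter are my own; real TAA implementations additionally reproject the history with motion vectors and clamp it against the current frame's neighborhood to avoid ghosting.)

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the combine pass: blend the current jittered frame into an
// accumulated history buffer. alpha = 0.5f reproduces the simple
// frame(n)/frame(n-1) average from the quote; production TAA usually runs
// an exponential moving average with alpha around 0.1 instead.
void ResolveTAA(const std::vector<float>& current,
                std::vector<float>& history, float alpha)
{
    assert(current.size() == history.size());
    for (size_t i = 0; i < current.size(); ++i)
        history[i] = alpha * current[i] + (1.0f - alpha) * history[i];
}
```

The lower alpha is, the more frames effectively contribute to each pixel, which is what buys the "unlimited sample count" over time.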

 

@J thanks again for your help. I'll run some tests in the next days and see how it works.

You may want to have a look at my MSAA + TAA demo: https://github.com/TheRealMJP/MSAAFilter

It implements sub-pixel jittering via a translation in the projection matrix, and also uses MSAA sub-sample data (if available) to improve TAA quality. It also implements a higher-quality MSAA resolve than what you get from doing a "normal" hardware resolve on HDR render targets.

By the way, there is something you could do for offsetting sampling positions. In HLSL there are functions like EvaluateAttributeSnapped, with which you can evaluate attributes interpolated from the VS at different sample positions in the PS. I don't have experience with it, but it could be good for specular AA, or maybe for improving alpha testing?

This topic is closed to new replies.
