megav0xel

Frostbite's stochastic SSR - help with noise reduction


Hi all!

I have been trying to implement this feature in my spare time for several months. Here is a brief summary of my implementation.

Basically the algorithm can be broken into two parts: a ray marching stage and a resolve stage.

For the ray marching stage, I first generate a reflection vector using importance sampling (GGX in my case). In the original slides they use Hi-Z ray marching to find the intersection point, as described in GPU Pro 5. My code is adapted from the improved version by the Stingray dev team and this post on GameDev. After finding an intersection, I store the intersection point's position and the PDF of the importance-sampled reflection vector in a texture.
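For reference, the importance sampling step is standard GGX half-vector sampling. Here's a CPU-side sketch of what I mean (the function name and numpy setup are just for illustration, the real code lives in a shader):

```python
import numpy as np

def importance_sample_ggx(xi, n, roughness):
    """Sample a GGX-distributed half-vector around normal n.

    xi        -- 2D uniform random sample in [0, 1)^2 (e.g. Halton / blue noise)
    n         -- unit surface normal (numpy array of 3 floats)
    roughness -- perceptual roughness in [0, 1]

    Returns (h, pdf_h) where pdf_h = D(h) * (n.h) is the half-vector PDF.
    The PDF of the reflected direction is pdf_h / (4 * dot(v, h)).
    """
    a = roughness * roughness
    phi = 2.0 * np.pi * xi[0]
    cos_theta = np.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))

    # Half-vector in tangent space (z is the normal direction).
    h_tangent = np.array([sin_theta * np.cos(phi),
                          sin_theta * np.sin(phi),
                          cos_theta])

    # Build an orthonormal basis around n and rotate the sample into world space.
    up = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.999 else np.array([1.0, 0.0, 0.0])
    tangent = np.cross(up, n)
    tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(n, tangent)
    h = tangent * h_tangent[0] + bitangent * h_tangent[1] + n * h_tangent[2]

    # GGX normal distribution function evaluated at h.
    d = (a * a) / (np.pi * (cos_theta * cos_theta * (a * a - 1.0) + 1.0) ** 2)
    pdf_h = d * cos_theta
    return h, pdf_h
```

The reflection direction is then the view vector reflected about h, and the stored PDF gets converted to the reflected-direction PDF by dividing by 4 * dot(v, h).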

The resolve stage mainly does two things: ray reuse and BRDF normalization. For every pixel on screen, I search through the neighboring pixels to see if any of them got a hit point and "steal" its result. This trick allows every pixel on screen to get color info even if its own ray didn't hit anything during the ray marching stage. Then, to further reduce noise, the shading equation is reorganized to reduce variance. The process is summarized in the following slides.

[slides from the presentation: ray reuse and BRDF normalization]
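In code, the resolve weighting I'm describing looks roughly like this (CPU-side sketch; the buffer layout, neighbor offsets, and names are placeholders for illustration, the real version lives in a shader):

```python
import numpy as np

# Fixed offsets into the ray buffer; just a placeholder pattern.
NEIGHBOR_OFFSETS = [(0, 0), (1, 0), (0, 1), (-1, -1)]

def resolve_pixel(x, y, hit_uv, hit_pdf, scene_color, eval_brdf):
    """Ray-reuse resolve for one pixel.

    hit_uv      -- (H, W, 2) integer hit coordinates written by the ray-march pass
    hit_pdf     -- (H, W) PDF of the importance-sampled ray, <= 0 means "missed"
    scene_color -- (H, W, 3) lit scene color used as the reflection source
    eval_brdf   -- callable(x, y, hit_x, hit_y) -> BRDF value of *this* pixel's
                   surface toward the neighbor's hit point (stand-in for a GGX eval)

    Each neighbor's hit is weighted by local_brdf / neighbor_pdf, and the sum is
    normalized by the total weight.
    """
    h, w = hit_pdf.shape
    result = np.zeros(3)
    weight_sum = 0.0
    for ox, oy in NEIGHBOR_OFFSETS:
        nx, ny = np.clip(x + ox, 0, w - 1), np.clip(y + oy, 0, h - 1)
        pdf = hit_pdf[ny, nx]
        if pdf <= 0.0:
            continue  # that neighbor's ray missed; nothing to reuse
        hx, hy = hit_uv[ny, nx]
        weight = eval_brdf(x, y, hx, hy) / pdf
        result += scene_color[hy, hx] * weight
        weight_sum += weight
    return result / max(weight_sum, 1e-5)
```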

 

Finally, I apply TAA to accumulate results from previous frames, which at its core is just an exponential blend with the reprojected history (minimal sketch below). The screenshot after it is what I get.
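Hugely simplified, the accumulation itself is just this; a real TAA pass also reprojects the history with motion vectors and clamps it against the current frame's neighborhood (the alpha value here is just a placeholder):

```python
def temporal_accumulate(current, history, alpha=0.05):
    """Exponential accumulation of the resolved reflections.

    current, history -- (H, W, 3) float arrays
    alpha            -- blend factor; lower = more history, more smoothing

    Only the blend itself; reprojection and neighborhood clamping are omitted.
    """
    return history + alpha * (current - history)
```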

[screenshot: my result]

The techniques described in the slides do help reduce some noise, but the result I get is nowhere close to what they showed. I tried increasing the number of resolve samples per pixel, but it didn't help much.

[screenshots: reference results from the slides]

Their result is almost free of noise. Actually I think it looks a bit too good for real-time rendering. :)

I'd be glad if someone could give me some tips on noise reduction, or point out something I may have gotten wrong. Thanks in advance for any help.


On 1/21/2018 at 8:01 AM, megav0xel said:

I'd be glad if someone could give me some tips on noise reduction, or point out something

Have you tried a simple blur? Sampling the border pixels could also reduce noise.

I don't know this course, but it looks like they scale the image up and blur it over. I could be wrong.

 


Do you use some kind of prefiltered mip chain to look up the color info during resolve, or just the backbuffer? ( The slides mention prefiltering to reduce noise, it should help with rougher surfaces, and places where the intersection is further away. )
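For what it's worth, the kind of mip selection I have in mind is roughly this (a made-up heuristic, not something taken from the slides; the roughness-to-cone-tangent mapping is a guess):

```python
import numpy as np

def prefiltered_mip_level(roughness, hit_distance_px, max_mip):
    """Pick a mip of the prefiltered color chain from the screen-space
    footprint of the reflection "cone".

    roughness       -- surface roughness in [0, 1]
    hit_distance_px -- screen-space distance (in pixels) from the reflecting
                       pixel to the intersection point
    max_mip         -- last mip of the prefiltered chain
    """
    cone_tangent = roughness * roughness           # assumed roughness -> tangent mapping
    footprint_px = 2.0 * cone_tangent * hit_distance_px
    mip = np.log2(max(footprint_px, 1.0))          # 1 px footprint -> mip 0
    return float(np.clip(mip, 0.0, max_mip))
```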

Another thing that would be interesting to try out is to gradually drop ray reuse in cases where the reflection is close to being mirror-like. The idea is that in these cases the original ray already contains all the info about the reflection that is needed, so using the neighbouring pixels' data wouldn't contribute to the result in any useful way. ( If we were doing some kind of cone tracing to calculate reflections, in these cases the resulting cone would be "ray-like", because the low roughness would result in a low cone angle, and the close-by intersection would make the cone short. ) The idea is similar to reflection probe prefiltering, where the top mip level's prefiltering pass can be skipped because it represents mirror-like reflections. I don't know if this would work/help in this case, but in my mind it makes perfect sense. :)
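Something like this tiny helper is what I imagine (completely untested sketch; the 0.1 threshold is an arbitrary tuning value):

```python
def reuse_ray_count(roughness, max_rays=4):
    """Fade out ray reuse for near-mirror surfaces.

    For very low roughness the pixel's own ray already describes the
    reflection, so neighbor reuse only adds blur; for rough surfaces use the
    full neighborhood.
    """
    t = min(max(roughness / 0.1, 0.0), 1.0)        # 0 = mirror, 1 = rough enough
    return max(1, int(round(1 + t * (max_rays - 1))))
```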

This GitHub repo could also help: https://github.com/cCharkes/StochasticScreenSpaceReflection 
It's a Unity implementation of the technique; you can find result shots somewhere on the Unity forums too. ( I'm planning to implement SSSR as well, and this repo is one of the sources I'm using to plan out my implementation. )

On 2018/1/27 at 9:58 PM, Scouting Ninja said:

Have you tried a simple blur? Sampling the border pixels could also reduce noise.

I don't know this course, but it looks like they scale the image up and blur it over. I could be wrong.

 

Hi! I'm not sure this would work, as every sample is weighted by its PDF and BRDF value. They didn't mention any blur pass in the original slides.

On 2018/1/27 at 11:08 PM, LandonJerre said:

Do you use some kind of prefiltered mip chain to look up the color info during resolve, or just the backbuffer? ( The slides mention prefiltering to reduce noise, it should help with rougher surfaces, and places where the intersection is further away. )


Hi! I do implement the prefiltering they mention in the slides. My problem is that when I use the function they showed in the presentation, the reflected image becomes over-blurred and I get heavy flickering artifacts, so I have to keep the cone tangent at a very low value. I'm using hardware-generated mip maps for my color buffer. Do I have to convolve the chain manually instead?

About ray reuse, I think it already looks good enough for me on smooth surfaces. Currently I'm having problems with surfaces of medium and high roughness, as shown in the images I posted.

I also checked that Unity plugin while working on my own implementation, as it's the only open-source implementation I could find on the web. I think his result (he released a demo) is slightly better than mine, mainly because he uses blue noise rather than a Halton sequence. But it's still worse than what was shown in the original slides.

Another thing I just realized is that there are some bugs in my Hi-Z ray marching implementation. A lot of pixels can't find an intersection point at higher roughness values when combined with importance sampling. IMO the original code in GPU Pro 5 isn't easy to understand, which makes it hard to debug.
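For debugging, a brute-force linear march is handy as a ground-truth comparison against the Hi-Z version. Roughly something like this (CPU-side sketch; the step count, march distance, thickness, and the projection callback are placeholders):

```python
def linear_ray_march(origin_vs, dir_vs, depth_buffer, proj, steps=64, thickness=0.05):
    """Brute-force view-space ray march against a depth buffer.

    origin_vs, dir_vs -- view-space ray origin / unit direction (numpy arrays of 3)
    depth_buffer      -- (H, W) linear view-space depth
    proj              -- callable(view_pos) -> (u, v) in [0, 1]^2, or None if the
                         point is behind the camera / off screen
    Returns the hit pixel (x, y) or None.
    """
    h, w = depth_buffer.shape
    max_dist = 50.0                      # arbitrary march distance, scene units
    for i in range(1, steps + 1):
        p = origin_vs + dir_vs * (max_dist * i / steps)
        uv = proj(p)
        if uv is None:
            return None                  # left the screen, give up
        x = int(uv[0] * (w - 1))
        y = int(uv[1] * (h - 1))
        scene_depth = depth_buffer[y, x]
        ray_depth = -p[2]                # assuming -Z forward view space
        # Hit if the ray went behind the stored surface, within a thickness band.
        if ray_depth > scene_depth and ray_depth - scene_depth < thickness:
            return (x, y)
    return None
```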


According to the DICE paper, I think two things help a lot with noise reduction:

1. Resolving the rays at full resolution.

2. A dedicated SSSR reprojection pass in addition to TAA.

One thing worth mentioning is that they use the ray intersection point's depth (the reflection depth) instead of the ray start point's depth (the reflective pixel's depth) to calculate the reprojection position (page 66). This helps a lot when strafing the camera, but it actually makes things worse when zooming the camera (moving towards the reflection).
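As I read that part of the slides, the reprojection amounts to roughly this (sketch only; the matrix conventions and depth range are assumptions and have to match your engine):

```python
import numpy as np

def reproject_reflection(pixel_uv, hit_depth, inv_view_proj, prev_view_proj):
    """Reproject a reflection sample using the hit point's depth instead of
    the reflecting surface's depth.

    pixel_uv       -- (u, v) of the reflecting pixel in [0, 1]^2
    hit_depth      -- device depth of the ray intersection point (in [0, 1])
    inv_view_proj  -- current frame's inverse view-projection matrix (4x4)
    prev_view_proj -- previous frame's view-projection matrix (4x4)

    Returns the history UV to fetch from last frame's resolved reflections.
    """
    # Unproject the reflecting pixel, but with the reflection's depth, to get a
    # "virtual" position that moves the way the reflected geometry appears to.
    ndc = np.array([pixel_uv[0] * 2.0 - 1.0,
                    pixel_uv[1] * 2.0 - 1.0,
                    hit_depth,
                    1.0])
    world = inv_view_proj @ ndc
    world /= world[3]

    # Project that position with last frame's matrices to get the history UV.
    prev_clip = prev_view_proj @ world
    prev_ndc = prev_clip[:3] / prev_clip[3]
    return (prev_ndc[0] * 0.5 + 0.5, prev_ndc[1] * 0.5 + 0.5)
```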

