
Fake Blur after 2D Ray Tracing in one pass

This topic is 680 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hey,

I implemented a 2D Ray Tracer for Screen Space Reflections in order to give a water surface a more realistic look.

For performance reasons I use only one ray with 16 samples per pixel. This inevitably leads to a dotted pattern on the surface. The roughness of the surface partly masks the problem, but not enough.

Now here is my problem: I want to use only one render pass, so no post-process blur. Are there any techniques to reduce the dotted pattern under that constraint?
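For context, here is a minimal sketch (not the poster's actual code; all names are hypothetical) of a 16-step screen-space ray march of the kind described above. Each pixel's ray either finds a depth-buffer intersection within its 16 steps or reports a miss; neighboring pixels flipping between hit and miss is what produces the dotted pattern:

```python
# Sketch of a fixed-step 2D screen-space ray march (16 depth comparisons).
# origin/direction are (x, y, depth) triples in screen space.

def march_reflection(origin, direction, depth_buffer, steps=16):
    """Step a reflection ray through screen space.
    Returns the hit pixel (x, y), or None if the ray misses."""
    x, y, z = origin
    dx, dy, dz = (c / steps for c in direction)
    for _ in range(steps):
        x, y, z = x + dx, y + dy, z + dz
        xi, yi = int(x), int(y)
        if not (0 <= yi < len(depth_buffer) and 0 <= xi < len(depth_buffer[0])):
            return None  # ray left the screen: nothing to sample
        if depth_buffer[yi][xi] <= z:  # one depth comparison per step
            return (xi, yi)  # hit: sample the color buffer here
    return None  # miss after all 16 steps
```

With so few steps, a tiny change in the ray's start or direction decides hit versus miss, so adjacent pixels can disagree and leave isolated dots.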


What do you mean by "1 ray, 16 samples"? That you shoot one ray (from eye to surface point) per pixel, reflect it on the surface normal, and approximate a roughness-based cone with 16 sample rays? Or that you shoot only one reflection ray, weight it by the pixel's roughness value, and do only 16 depth comparisons ("samples" in the sense of "steps")?

 

Maybe I got you totally wrong, but my implementation used only one reflection ray per pixel, because more is simply a waste of GPU power in my eyes. That means you can either have only sharp reflections, or you have to blur the result afterwards. Since you don't want a blur pass, you can do what I did: mipmap your color buffer before you do the reflections, and sample a mip level based on the pixel's roughness. This worked very well for reasonably low roughness values, depending on your mipmap algorithm. If you want to avoid even the mipmapping step, you can shoot one main ray and take multiple samples from your buffer around the intersection point, stretching them based on the pixel's roughness. But don't expect wonders: I expect this to be slower than the mipmap approach, and the quality won't be great and will degrade with roughness, since a rougher pixel needs more samples.
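The mip-based idea above can be sketched roughly like this (a simplified illustration, not any engine's actual code; the box filter and the linear roughness-to-level mapping are assumptions):

```python
# Sketch of roughness-based mip sampling: prefilter the color buffer
# into a mip chain, then pick a mip level from the pixel's roughness
# so rough pixels read a pre-blurred reflection in the same pass.

def build_mip_chain(image):
    """Box-filter 2x2 downsampling; image is a 2^n x 2^n grid of floats."""
    chain = [image]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        half = len(prev) // 2
        chain.append([[(prev[2*y][2*x] + prev[2*y][2*x+1]
                      + prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                       for x in range(half)] for y in range(half)])
    return chain

def sample_reflection(chain, x, y, roughness):
    """Map roughness in [0, 1] to a mip level and sample it."""
    level = min(int(roughness * (len(chain) - 1)), len(chain) - 1)
    mip = chain[level]
    scale = len(chain[0]) // len(mip)
    return mip[y // scale][x // scale]
```

A GPU would do the prefiltering with its built-in mip generation and the lookup with a single textureLod-style fetch, so the per-pixel cost at trace time stays at one sample.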



What do you mean by "1 ray, 16 samples"? That you shoot one ray (from eye to surface point) per pixel, reflect it on the surface normal, and approximate a roughness-based cone with 16 sample rays? Or that you shoot only one reflection ray, weight it by the pixel's roughness value, and do only 16 depth comparisons ("samples" in the sense of "steps")?

Your second assumption is correct. Sorry for the unclear wording.

I already tried all of your ideas and they help a bit, but they don't solve my biggest problem. The dotted pattern comes from neighboring pixels where some rays hit the depth buffer and others don't. It doesn't matter how I do the color sampling on a hit when the neighboring pixels have nothing to sample at all.


Ahh, now I see which of the dozens of problems of SSR you want to solve.

 

Sadly, this is one of the hardest problems. What you need is some kind of pull-and-push or hole-filling filter (whatever the official name is). I don't think it's possible to implement without an additional step after the tracing has completed. Have a look at the Killzone: Shadow Fall tech slides, where they show how the final image is composed step by step. Or have a look at this snippet, where you can see the blurry reflection result: https://youtu.be/DDYVcQNgu4Y?t=101 . As many others have said already, it might not be the best idea to have very sharp screen-space reflections, because the sharper they are, the harder it is to hide the high-frequency artifacts.
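The pull-and-push idea can be sketched like this (a hypothetical, single-level simplification; real implementations recurse over a full mip pyramid): pull valid samples into a coarser buffer, averaging only the pixels that actually produced a hit, then push the coarse values back up to fill the holes.

```python
# One-level pull-and-push hole filling. Holes (misses) are None; the
# coarse level averages only whichever of its four source pixels hit,
# and the push step fills each hole from that coarse value.

def pull(buf):
    """Downsample 2x, averaging only valid (non-None) samples."""
    half = len(buf) // 2
    out = []
    for y in range(half):
        row = []
        for x in range(half):
            vals = [v for v in (buf[2*y][2*x], buf[2*y][2*x+1],
                                buf[2*y+1][2*x], buf[2*y+1][2*x+1])
                    if v is not None]
            row.append(sum(vals) / len(vals) if vals else None)
        out.append(row)
    return out

def push(buf, coarse):
    """Fill each hole in buf from the corresponding coarse pixel."""
    return [[buf[y][x] if buf[y][x] is not None else coarse[y // 2][x // 2]
             for x in range(len(buf))] for y in range(len(buf))]
```

This is exactly the kind of filter that needs the full trace result as input, which is why it has to run as a step after the tracing pass.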

 

EDIT: It would also help to have some example screenshots of your problem.

Edited by hannesp


To do a blur, you need to be able to fetch the results of neighboring pixels (or pass your result to neighboring pixels)... which generally means you need all the data to be generated in a pass before the blur begins (in a later pass).

 

If the SSR is generated in a compute shader, you can use memory barriers to implement limited communication techniques within the one pass.

In a pixel shader, you can use the ddx_fine and ddy_fine instructions to communicate between neighboring pixels within a 2x2 pixel quad (see "Shader Amortization using Pixel Quad Message Passing" in GPU Pro 2 if it's available to you).
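A rough CPU-side illustration of that quad message-passing trick (hypothetical, not real shader code): within a 2x2 quad, the fine derivatives give each pixel the difference to its horizontal and vertical neighbors, so it can reconstruct their values and average the whole quad without an extra pass.

```python
# Simulated ddx_fine/ddy_fine inside one 2x2 quad, indexed quad[y][x].
# In a real pixel shader the hardware evaluates these per pixel; in a
# 2-wide quad the fine derivative is the same for both pixels of a pair.

def ddx_fine(quad, x, y):
    """Horizontal fine derivative for row y of the quad."""
    return quad[y][1] - quad[y][0]

def ddy_fine(quad, x, y):
    """Vertical fine derivative for column x of the quad."""
    return quad[1][x] - quad[0][x]

def quad_average(quad, x, y):
    """What the pixel at (x, y) can compute from its own value
    plus the quad derivatives: the average of all four pixels."""
    v = quad[y][x]
    h = v + ddx_fine(quad, x, y) * (1 if x == 0 else -1)      # horizontal neighbor
    vt = v + ddy_fine(quad, x, y) * (1 if y == 0 else -1)     # vertical neighbor
    d = vt + ddx_fine(quad, x, 1 - y) * (1 if x == 0 else -1) # diagonal neighbor
    return (v + h + vt + d) / 4.0
```

Every pixel in the quad arrives at the same 2x2 average, which gives a very limited in-pass blur; anything wider than the quad still needs a separate pass.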

Besides that, you'll need to perform the blur in an additional pass (or two!).

 

What's the rationale behind wanting to do it all in one pass?


