
"Next" Gen Reflections


So I was thinking about how to do dynamic reflections, and a possible way occurred to me.

If your scene is mostly static, you could store multiple environment maps, pulling whatever tricks you like (corrected projections, etc.).

The main point is that you store not only albedo, but normal, depth, and full material parameters if you want to go that way.

You then throw your environment maps onto your normal geometry, but store the results in a separate g-buffer.

You'd then do a lighting pass on this separate g-buffer; call it a reflection buffer if you like.

You'd then resolve your reflection buffer into your main buffer with whatever BRDFs you're using.

There we are: rather expensive, but dynamically lit reflections for everything. Of course there would be a lot of weird little glitches and things to consider: how would you cull shadow maps correctly, how would you deal with dynamic objects, etc.? But reflections need to be improved somehow, and raytracing is just going to eat up resources no matter what tricks you pull. That billboard trick Epic pulled with their Samaritan demo did look neat, but I suspect it only really looks good when you have a ton of light sources to fill up the screen.
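Roughly, the frame I have in mind would go something like this (C++-flavoured pseudocode; every name here is a placeholder, not a real API):

[code]
// Every type and function here is a placeholder for the real passes,
// not an actual API -- this is just the order of operations.
struct GBuffer {};      // albedo, normal, depth, material params
struct LightBuffer {};  // lit HDR colour

void RenderSceneGBuffer(GBuffer&) {}
void ProjectEnvMapsIntoGBuffer(GBuffer&) {}
void DeferredLighting(const GBuffer&, LightBuffer&) {}
void ResolveReflections(const LightBuffer&, const GBuffer&, LightBuffer&) {}

void RenderFrame()
{
    GBuffer gbufferMain, gbufferReflection;
    LightBuffer litMain, litReflection;

    // 1. Normal geometry pass into the primary g-buffer.
    RenderSceneGBuffer(gbufferMain);

    // 2. Project the baked environment maps (storing albedo, normal,
    //    depth, material -- not pre-lit colour) onto the scene, writing
    //    the reflected surfaces' attributes into a second g-buffer.
    ProjectEnvMapsIntoGBuffer(gbufferReflection);

    // 3. Light both g-buffers with the same dynamic lights, so the
    //    reflections pick up the current lighting.
    DeferredLighting(gbufferMain, litMain);
    DeferredLighting(gbufferReflection, litReflection);

    // 4. Resolve the lit reflection buffer into the main buffer using
    //    the receiving surface's BRDF.
    ResolveReflections(litReflection, gbufferMain, litMain);
}
[/code]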

Any thoughts?

What is the difference when you render the environment via deferred shading?
If you store the depth, you can raymarch, but I assume that is not cheap.

You mean you save a "deferred" pass?


[quote]What is the difference when you render the environment via deferred shading?
If you store the depth, you can raymarch, but I assume that is not cheap.

You mean you save a "deferred" pass?[/quote]

Yeah, raymarching wouldn't be cheap. And yeah, you'd save everything you needed in the environment map and then do two deferred lighting passes: one normal, and one built up from the environment maps.

I got the idea from a demo someone did for their thesis. They just used raytracing to build up the second g-buffer, though I think they only included highly glossy surfaces. This is the same idea, but with no need for raytracing.

[quote]though I think they only included highly glossy surfaces[/quote]
Yeah, one drawback is that this reflection buffer only allows for 'perfect' reflections.
With regular environment mapping (on non-mirror-like surfaces), you'd usually sample a different mip level based on how blurry you want your reflections to be (i.e. based on the roughness of the reflecting surface). With g-buffer type environment maps, this wouldn't be possible (unless you incorporate a way to pre-filter a g-buffer mip-chain in a way that makes sense).
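For reference, the usual roughness-based lookup is something like the sketch below (the linear roughness-to-mip mapping is an assumption; real pre-filters usually fit their own curve):

[code]
#include <algorithm>

// Map surface roughness to a cubemap mip level, so rougher surfaces
// fetch a blurrier pre-filtered level. The linear mapping is an
// assumption; real pre-filters usually fit their own curve.
float RoughnessToMip(float roughness, int mipCount)
{
    float maxMip = static_cast<float>(mipCount - 1);
    return std::min(std::max(roughness, 0.0f), 1.0f) * maxMip;
}
[/code]

In a shader that mip index would feed a textureLod-style fetch; a g-buffer style environment map has no equivalently pre-filtered chain to index into, which is exactly the limitation above.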

[quote]I got the idea from a demo someone did for their thesis. They just used raytracing to build up the second g-buffer, though I think they only included highly glossy surfaces. This is the same idea, but with no need for raytracing.[/quote]
Actually this is a pretty good way, but it still doesn't work for anything but perfect reflection.

As a matter of fact, it is better to actually shade during the ray tracing.

[quote]With regular environment mapping (on non-mirror-like surfaces), you'd usually sample a different mip level based on how blurry you want your reflections to be (i.e. based on the roughness of the reflecting surface).[/quote]
Just a note: sampling lower mip levels can look a bit blocky, so a little 3x3 gaussian blur can help a lot. Of course, you can store (gather) the reflections in a separate buffer and cleverly blur it in screen space according to weights, taking edges into account.
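The per-tap weight would look something like this (just a sketch; the sigma values are made up and would need tuning):

[code]
#include <cmath>

// Sketch of an edge-aware ("bilateral") weight for the screen-space
// blur described above: a gaussian spatial kernel whose taps are
// down-weighted when the neighbour's depth differs from the centre,
// so reflections don't bleed across silhouettes.
float BilateralWeight(float pixelDistance, float centerDepth, float tapDepth)
{
    const float sigmaSpatial = 1.0f;   // roughly a 3x3 gaussian
    const float sigmaDepth   = 0.05f;  // view-space depth tolerance

    float ws = std::exp(-(pixelDistance * pixelDistance) /
                        (2.0f * sigmaSpatial * sigmaSpatial));
    float dz = tapDepth - centerDepth;
    float wd = std::exp(-(dz * dz) / (2.0f * sigmaDepth * sigmaDepth));
    return ws * wd;  // divide by the sum of weights over the kernel
}
[/code]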

[quote]With g-buffer type environment maps, this wouldn't be possible (unless you incorporate a way to pre-filter a g-buffer mip-chain in a way that makes sense).[/quote]
Actually, I don't think it is possible to pre-filter it in "a way that makes sense"; you'll have to do it with "the sampling way", e.g. take 5x5 samples from the g-buffer, compute the lighting for each of them, and in the end sum them with gaussian weights (see the sketch below).
I tried this technique, and regular environment mapping still wins: in terms of quality they're the same, and in terms of performance, classical environment mapping wins by orders of magnitude.
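To be concrete, this is roughly what I mean (sketch code; both helper functions are hypothetical stand-ins for the real fetch and lighting code):

[code]
// FetchGBufferSample() and ShadeSample() are hypothetical stand-ins
// for the real g-buffer fetch and full lighting evaluation.
struct GBufferSample { /* albedo, normal, depth, material */ };
struct Color { float r = 0.0f, g = 0.0f, b = 0.0f; };

GBufferSample FetchGBufferSample(int x, int y) { return {}; }  // stub
Color ShadeSample(const GBufferSample& s) { return {}; }       // stub

// Take a 5x5 footprint from the g-buffer environment map, shade every
// sample, and sum with (pre-normalized) gaussian weights. Those 25 full
// shading evaluations per pixel are the cost problem.
Color FilteredReflection(int cx, int cy, const float weights[5][5])
{
    Color sum;
    for (int j = -2; j <= 2; ++j)
    {
        for (int i = -2; i <= 2; ++i)
        {
            Color c = ShadeSample(FetchGBufferSample(cx + i, cy + j));
            float w = weights[j + 2][i + 2];
            sum.r += w * c.r;
            sum.g += w * c.g;
            sum.b += w * c.b;
        }
    }
    return sum;
}
[/code]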


[quote name='Frenetic Pony' timestamp='1324163282' post='4894897']though I think they only included highly glossy surfaces[/quote]
[quote]Yeah, one drawback is that this reflection buffer only allows for 'perfect' reflections.
With regular environment mapping (on non-mirror-like surfaces), you'd usually sample a different mip level based on how blurry you want your reflections to be (i.e. based on the roughness of the reflecting surface). With g-buffer type environment maps, this wouldn't be possible (unless you incorporate a way to pre-filter a g-buffer mip-chain in a way that makes sense).[/quote]

And of course the mip level blurring stuff is already a pretty poor approximation of any decent (non-Phong) BRDF. Personally I would not be willing to take a step back in lighting/material quality in order to get dynamic reflections. I'd rather find ways to use Cook-Torrance and Ashikhmin-Shirley with specular probes.

[quote]And of course the mip level blurring stuff is already a pretty poor approximation of any decent (non-Phong) BRDF. Personally I would not be willing to take a step back in lighting/material quality in order to get dynamic reflections. I'd rather find ways to use Cook-Torrance and Ashikhmin-Shirley with specular probes.[/quote]
Yeah, even with Phong it's pretty poor if you just perform regular bilinear filtering to get your mips. IIRC, CubeMapGen actually evaluates the Phong BRDF for each pixel when creating its mips (which also removes seams).
AFAIK, this should also work for any BRDF, as long as the frequency of details decreases linearly with a linear increase of the 'roughness' factor, and you're OK with a piecewise linear approximation of the original function. Though, the pre-filtering step obviously becomes much more costly than regular mip generation.
[edit] scratch that, read below!


[quote name='MJP' timestamp='1324235076' post='4895052']And of course the mip level blurring stuff is already a pretty poor approximation of any decent (non-Phong) BRDF. Personally I would not be willing to take a step back in lighting/material quality in order to get dynamic reflections. I'd rather find ways to use Cook-Torrance and Ashikhmin-Shirley with specular probes.[/quote]
[quote]Yeah, even with Phong it's pretty poor if you just perform regular bilinear filtering to get your mips. IIRC, CubeMapGen actually evaluates the Phong BRDF for each pixel when creating its mips (which also removes seams).
AFAIK, this should also work for any BRDF, as long as the frequency of details decreases linearly with a linear increase of the 'roughness' factor, and you're OK with a piecewise linear approximation of the original function. Though, the pre-filtering step obviously becomes much more costly than regular mip generation.[/quote]

I don't think CubeMapGen uses a Phong lobe, although someone modified it to do exactly that (since it's open source now). Either way, doing a Phong convolution is pretty easy; I wrote a compute shader to do it, and even a naive implementation is pretty quick.
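For reference, the naive version of that convolution looks roughly like this (a CPU sketch of the same math, not the compute shader I mentioned):

[code]
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One source texel: its direction, radiance, and subtended solid angle.
struct DirSample { Vec3 dir; Vec3 radiance; float solidAngle; };

// Each pre-filtered output texel is a cosine-power weighted average of
// the source radiance over the hemisphere around the reflection
// direction R. Naive O(N) per texel, but easy to parallelize.
Vec3 ConvolvePhong(const Vec3& R, float specularPower,
                   const std::vector<DirSample>& samples)
{
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    float totalWeight = 0.0f;
    for (const DirSample& s : samples)
    {
        float c = Dot(R, s.dir);
        if (c <= 0.0f) continue;  // outside the lobe's hemisphere
        float w = std::pow(c, specularPower) * s.solidAngle;
        sum.x += w * s.radiance.x;
        sum.y += w * s.radiance.y;
        sum.z += w * s.radiance.z;
        totalWeight += w;
    }
    if (totalWeight > 0.0f)
    {
        sum.x /= totalWeight;
        sum.y /= totalWeight;
        sum.z /= totalWeight;
    }
    return sum;
}
[/code]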

This approach most definitely does not work for any BRDF. BRDFs in general have too high a dimensionality to be precomputed into a cubemap, even when using mip levels to store different roughness values.

Typically you drop the view direction from the pre-computation, which lets you index into the cubemap using only the reflection vector. But to do this you have to use the approximation that any two resulting reflection vectors will get the same result regardless of the view/normal vectors that actually produced that reflection vector (since you can only have one texel per reflection vector in a cubemap). This mostly works for Phong, since the lobe shape is the same (perfectly round) regardless of the viewing angle. However, with any microfacet BRDF the specular lobe becomes increasingly narrow as you approach glancing angles, which means the approximation no longer works. This is, of course, what makes Phong and Blinn-Phong look different:

[attachment=6458:PhongVsBlinnPhong.jpg]

[attachment=6460:ReflectionLobe.jpg]

[attachment=6459:MicrofacetLobe.jpg]

So what this means is that if you want proper lobe shapes, you need to increase your dimensionality, which means you need more than a single cubemap. It gets even worse for anisotropic BRDFs, where your lobe shape is also parameterized on the tangent frame plus the level of anisotropy. There is ongoing research in this area, but I have yet to really study and digest all of it.

Hmmm, I guess it would be pretty limited (not to mention expensive). But the problem I can foresee is: what good would much better BRDF evaluations be if there's nothing to reflect? Cook-Torrance isn't going to do much if there's little to work off of. I suppose you could go for some hack like Tri-Ace did, but I'd really love to see something more like actual reflections.

I think Crytek's "real-time local reflections" is a pretty nifty trick. I guess they use ray marching to do screen-space reflections; roughly, something like the sketch below. It works really well in certain situations, reflective floors especially.
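[code]
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Hypothetical engine helpers, declared but not implemented here.
// This is a generic illustration of screen-space ray marching, not
// Crytek's actual implementation.
Vec2 ProjectToScreenUV(const Vec3& viewPos);  // view space -> [0,1]^2
float SampleLinearDepth(const Vec2& uv);      // linear depth buffer fetch
Vec3 SampleSceneColor(const Vec2& uv);        // lit scene colour fetch

// March the reflected ray through view space (+z into the screen),
// reprojecting each point; the first point that lands just behind the
// stored depth counts as the hit, and we reuse that lit pixel.
bool TraceScreenSpaceReflection(Vec3 p, const Vec3& dir, Vec3& outColor)
{
    const int   kMaxSteps  = 64;
    const float kStepSize  = 0.1f;  // view-space units, tune per scene
    const float kThickness = 0.2f;  // how far behind depth still counts

    for (int i = 0; i < kMaxSteps; ++i)
    {
        p.x += dir.x * kStepSize;
        p.y += dir.y * kStepSize;
        p.z += dir.z * kStepSize;

        Vec2 uv = ProjectToScreenUV(p);
        if (uv.x < 0.0f || uv.x > 1.0f || uv.y < 0.0f || uv.y > 1.0f)
            return false;  // ray left the screen: no data, which is the
                           // "warping in and out" failure case

        float sceneDepth = SampleLinearDepth(uv);
        if (p.z > sceneDepth && p.z < sceneDepth + kThickness)
        {
            outColor = SampleSceneColor(uv);
            return true;
        }
    }
    return false;  // no hit within the march distance
}
[/code]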


[quote]Hmmm, I guess it would be pretty limited (not to mention expensive). But the problem I can foresee is: what good would much better BRDF evaluations be if there's nothing to reflect? Cook-Torrance isn't going to do much if there's little to work off of. I suppose you could go for some hack like Tri-Ace did, but I'd really love to see something more like actual reflections.[/quote]


Well most games have plenty of static geometry to reflect, so dynamic reflections are only a consideration for characters and other dynamic geometry. If that's not the case for your game, that obviously changes things.


[quote]I think Crytek's "real-time local reflections" is a pretty nifty trick. I guess they use ray marching to do screen-space reflections. It works really well in certain situations, reflective floors especially.[/quote]


It's a neat trick from a programmer's perspective, but I suspected it wouldn't really work in an actual game scenario. I fired up Crysis 2 recently just out of curiosity, and having reflections warp in and out at certain angles just doesn't hold up for the most part.

I'm guessing grazing angles could work well, though. It could be a neat trick for mostly diffuse materials, since you're only going to really notice reflections at grazing angles, which is when the reflected object will probably be on screen anyway.
