Reflective shadow map sampling and weights


I'm working on a GI solution based on reflective shadow maps (https://pdfs.semanticscholar.org/1b29/71e7024a3e1c4108718e59b5ba4327c44b93.pdf).

 

I understand most of it except the final step. As I understand it, I am supposed to:

1. sample the RSM and use each sample as a virtual point light (VPL),

2. evaluate the outgoing radiance using that VPL,

3. weight the outgoing radiance,

4. and finally normalize the result.

 

The question is: how do I weight the outgoing radiance? I use a Poisson disc sampling pattern with 64 samples per pixel, so should I weight each sample by 1/64? And how do I normalize the result?
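For reference, the paper's per-pixel-light contribution can be sketched like this (a minimal numpy sketch; the function and variable names are mine, not from the paper). Note that in the paper the weight of each Poisson-disc sample is the squared normalized distance from the disc center (ξ1²), which compensates for the denser sampling near the center, rather than a flat 1/64:

```python
import numpy as np

def rsm_irradiance(x, n, vpl_pos, vpl_normal, vpl_flux, weight):
    """Irradiance at receiver x (normal n) due to one RSM pixel light, i.e.

        E_p(x, n) = Phi_p * max(0, <n_p, x - x_p>) * max(0, <n, x_p - x>)
                    / ||x - x_p||^4

    from the Dachsbacher/Stamminger paper.  `weight` is the importance
    weight of the Poisson-disc sample (xi_1^2 in the paper).
    """
    d = x - vpl_pos
    dist2 = np.dot(d, d) + 1e-6      # guard against the singularity at x_p
    cos_vpl = max(0.0, np.dot(vpl_normal, d))
    cos_recv = max(0.0, np.dot(n, -d))
    return vpl_flux * cos_vpl * cos_recv / (dist2 * dist2) * weight
```

As for normalization, one common choice (an assumption on my part, not something the paper spells out in detail) is to divide the accumulated sum by the sum of the sample weights, or simply fold it into a constant scale factor tuned against a reference.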

 


One idea would be to build mipmaps for the RSM, so you can gather it entirely without needing to iterate over every single texel.

You can then traverse the mipmap hierarchy top-down like a tree: e.g. start with each texel of the 4x4 mip level; if a texel is close to the receiving point, subdivide it into its 2x2 child block and repeat recursively.

If the texel is far away (or level 0 has been reached), build an area light from the texel and accumulate the received light.

To prevent discontinuities between neighbouring receivers (each takes a slightly different cut through the tree), you can blend child texels with their parents - this can be done in hardware with mipmapping.
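A CPU-side sketch of that traversal (the names and the "closeness" metric are my own simplification; the RSM here is reduced to a flux-density map in UV space). Because each mip texel stores the average of its children, subdividing a texel never changes the total gathered energy:

```python
import math
import numpy as np

def build_mips(flux):
    """Build a 2x2-average mip chain for a square, power-of-two flux map."""
    mips = [flux]
    while mips[-1].shape[0] > 1:
        m = mips[-1]
        mips.append(0.25 * (m[0::2, 0::2] + m[1::2, 0::2]
                            + m[0::2, 1::2] + m[1::2, 1::2]))
    return mips

def gather(mips, level, i, j, recv_uv, thresh=0.6):
    """Accumulate light from texel (i, j) at `level` (higher = coarser),
    subdividing texels that are close to the receiving point."""
    n = mips[level].shape[0]
    texel_size = 1.0 / n
    cu, cv = (i + 0.5) * texel_size, (j + 0.5) * texel_size
    dist = max(1e-4, math.hypot(cu - recv_uv[0], cv - recv_uv[1]))
    if level > 0 and texel_size / dist > thresh:
        # texel subtends too much from here: descend into its 2x2 child block
        return sum(gather(mips, level - 1, 2 * i + di, 2 * j + dj, recv_uv, thresh)
                   for di in (0, 1) for dj in (0, 1))
    # far enough away (or finest level): treat the texel as one area light
    return mips[level][i, j] * texel_size * texel_size  # density * area

def gather_all(mips, recv_uv, thresh=0.6):
    """Start the cut at the 4x4 mip level, as suggested above."""
    start = next(l for l, m in enumerate(mips) if m.shape[0] == 4)
    return sum(gather(mips, start, i, j, recv_uv, thresh)
               for i in range(4) for j in range(4))
```

A real implementation would of course evaluate the area-light contribution with normals and distances instead of just summing flux, but the cut-selection logic is the same.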

 

I assume this would be much more accurate than the approximation in the paper (although I too did not understand it completely on a quick read), and nowadays with compute shaders it should be possible at similar performance.

However, because of the missing occlusion, the RSM approach will never look quite right - it might be worth using something like imperfect shadow maps to get it.


I did some RSM implementations quite a long time ago (including indirect shadows, done 2 or 3 different ways), just to benchmark them. I'll try to share some insights:

 

Virtual Point Light generation

 

I've actually used two approaches; both have some advantages and disadvantages.

  • Ray-tracing approach

While it might sound crazy, you can cast rays from the lights (in some direction - e.g. for a point light use random directions; for a directional light or spotlight use the light direction, and for a spotlight cast rays only into the cone). These rays hit your scene at specific hit points where you can evaluate the material color - so you have a VPL sample position and color.

 

To remove the randomization of VPL positions, pre-generate a set of ray directions for each light and use only that set. The origin is always the light position, so moving the light doesn't cause the blinking specific to VPL techniques. A dynamic scene is more of a challenge; to reduce blinking one has to cast multiple rays per VPL and take the average position and color. Ideally those samples lie in a cone with a defined angle and direction.
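A sketch of such a pre-generated direction set (all names here are mine; a golden-angle spiral with stratified cone angles is just one convenient deterministic pattern):

```python
import math

def rotate_to(v, axis):
    """Rotate a +Z-relative unit vector into the frame whose Z axis is `axis` (unit)."""
    ax, ay, az = axis
    t = (1.0, 0.0, 0.0) if abs(ax) < 0.9 else (0.0, 1.0, 0.0)
    # bitangent = normalize(axis x t), tangent = bitangent x axis
    bx, by, bz = (ay * t[2] - az * t[1], az * t[0] - ax * t[2], ax * t[1] - ay * t[0])
    inv = 1.0 / math.sqrt(bx * bx + by * by + bz * bz)
    bx, by, bz = bx * inv, by * inv, bz * inv
    tx, ty, tz = (by * az - bz * ay, bz * ax - bx * az, bx * ay - by * ax)
    x, y, z = v
    return (x * tx + y * bx + z * ax,
            x * ty + y * by + z * ay,
            x * tz + y * bz + z * az)

def cone_directions(axis, half_angle, count):
    """Deterministic ray directions inside a cone around `axis`.
    The set is fixed per light and re-used every frame, which avoids
    the VPL flickering mentioned above."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle
    dirs = []
    for k in range(count):
        # stratify cos(theta) between cos(half_angle) and 1
        cos_t = 1.0 - (k + 0.5) / count * (1.0 - math.cos(half_angle))
        sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
        phi = k * golden
        local = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
        dirs.append(rotate_to(local, axis))
    return dirs
```

For a point light you would use a full-sphere variant of the same idea instead of a cone.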

 

The advantage of this process is that while you are rendering, you can do the computation on the CPU for the next frame (for example), reducing the time required for VPL generation. I haven't tried a high-performance GPU ray tracer for this - everything was done on the CPU. Overall it seemed a bit faster to me, although that heavily depends on the capabilities of the ray tracer you use.

The disadvantage is the requirement of a fast-enough ray tracer, with the scene stored in a format usable by it - which also makes it a bit harder to integrate into an engine.

  • Shadow map approach

The standard way - you render a shadow map for every light that is going to cast GI (spotlight or directional light: 1 shadow map; point light: 6 shadow maps). For each you also need color (not just depth) to determine the color of the VPLs. As the samples are uniformly distributed along one plane (or six planes), there won't be any noise from moving the light (assuming you create a VPL for every pixel, or every NxN block, of the shadow map). Reducing the noise for a dynamic scene can easily be done by averaging NxN blocks of the depth map (and color map); note that you can use mipmap generation (as mentioned above) to speed up the averaging.
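The NxN averaging step is a straightforward block reduction. A numpy sketch (the array layout is my assumption - world position, normal and flux stored per RSM texel as (H, W, 3) arrays, with H and W divisible by n):

```python
import numpy as np

def extract_vpls(positions, normals, flux, n=4):
    """Collapse each NxN block of RSM texels into one averaged VPL."""
    h, w, _ = positions.shape
    ph, pw = h // n, w // n
    def block_avg(a):
        return a.reshape(ph, n, pw, n, 3).mean(axis=(1, 3))
    avg_n = block_avg(normals)
    # averaged normals must be re-normalized before use
    avg_n /= np.linalg.norm(avg_n, axis=-1, keepdims=True)
    return block_avg(positions), avg_n, block_avg(flux)
```

On the GPU the same thing falls out of the mipmap generation mentioned above (sampling mip level log2(n)).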

 

The advantage is that it is really easy to implement, and you technically don't need any additional data.

The disadvantage is having to keep the shadow maps and color maps (assuming you can't determine color another way) in memory.

 

Evaluating the color/intensity of a VPL

 

Well, technically you could do some more complex math here - calculating the light reaching the VPL from the light that created it (taking the original light intensity into account - something like the 'form factor' in radiosity) and multiplying by the color of the surface (possibly modified by some surface parameter). That's the basic idea. It is probably not physically correct, but it is at least an approximation.
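That "form factor" style evaluation might look like this (a rough sketch under the assumptions above, with names of my own; it is deliberately not energy-exact):

```python
import numpy as np

def vpl_color(light_intensity, light_pos, hit_pos, hit_normal, albedo):
    """Color/intensity of a VPL spawned where a primary-light ray hits the
    scene: the irradiance arriving at the hit point times the diffuse
    surface color (a simple single-bounce approximation)."""
    d = hit_pos - light_pos
    dist2 = np.dot(d, d)
    cos_in = max(0.0, np.dot(hit_normal, -d / np.sqrt(dist2)))
    irradiance = light_intensity * cos_in / dist2   # inverse-square falloff
    return albedo * irradiance                      # diffuse bounce
```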

 

As for weighting - a global 'gi_multiplier' value is also a good way to increase your control over it: just select a value that "looks good" for your scene. This is not physically based, but then I never tried to get close to physically based at the time. Using tone mapping with dynamic light adaptation also worked well for me here.

 

Indirect shadows

 

This one is absolutely necessary. Generating a shadow map for every VPL (updating just a few of them each frame, i.e. recomputing them progressively) works well, but not when there is something dynamic in the scene. Imperfect shadow maps replace your scene with a point cloud that is used to generate the shadow maps - that works, but only if you have enough VPLs to hide the imperfections, of course.

 

Generally, by solving indirect shadows you get a very good GI system (better than most GI you can see in games these days). The problem is still performance.


Signed distance fields/(Sparse?) voxel octree tracing is basically the standard for indirect shadows: https://simonstechblog.blogspot.com/2013/01/implementing-voxel-cone-tracing.html

 

Nothing else seems to come close to being performant in realtime. Regardless, virtual point lights haven't shipped in many games because sampling a large number of them gets too expensive very, very quickly. Lightcuts is similar to, and more thoroughly researched than, what JoeJ suggested: https://www.cs.cornell.edu/~kb/projects/lightcuts/

 

Like with distance field/voxel octree tracing, there's a good amount of research out there on them. But from what I've seen, VPLs are still too slow/noisy/etc. for anything other than a nifty-looking, short-ranged GI hack like the ones that shipped with Uncharted 4 and Gears 4. The indirect shadows part is also more important, and as Villem said, once you start tracing for indirect shadows you're a large part of the way towards knowing what you need to sample for GI anyway. That's what SVO cone tracing does: it combines both at once.

Edited by Frenetic Pony


Signed distance fields/(Sparse?) voxel octree tracing is basically the standard for indirect shadows: https://simonstechblog.blogspot.com/2013/01/implementing-voxel-cone-tracing.html

 

I wouldn't entirely call it the standard - but yes, they are an option ... and actually they are on my list of things to play with a bit more.

 

In the past I actually used simplified scene geometry, and it kind of works - so using a simplified representation of the scene is pretty much the only way to go right now. A voxel-based scene can at least be voxelized dynamically from the real scene, whereas I created a low-poly version of my scene by hand in 3D graphics software (which is a problem for real-world usage).

 

 

Nothing else seems to come close to being performant in realtime.

 

These guys did it with ISMs in 2008 - http://resources.mpi-inf.mpg.de/ImperfectShadowMaps/ISM.pdf (okay, it ran at ~15 fps, but on current hardware you would most likely achieve better performance) ... although instead of voxels they used point clouds. So once again, simplified scene geometry (although, compared to my attempts, creating a point cloud from dynamic geometry also allows for interactivity). The downside is that it is probably less precise than a voxel-based scene. The real trade-off was: either high-quality GI and slow, or low-quality GI (read: noise) and fast.

 

I'd also like to mention a different class of techniques here, because they can be real-time: real-time path tracers. But they mostly work only on high-end (and specific) hardware, and we don't really have an established standard for path tracing in terms of APIs (not to mention any support, even partial, in renderers or game engines). Having worked with them and done some research at university, I know there is one huge problem: noise (you get rid of it progressively, but that is not suitable for games).

 

Sadly, right now I'm too busy with my real-life work and with porting my framework from old-fashioned GL 4.x to D3D12 (because I really needed a better approach in some areas than GL 4.x allows) to do any GI tests these days. But I'd really love to get back to the GI world at some point this year or early next year.


Finally, I have something working. Here are some of my experiences.

 

1. For a directional light, store irradiance instead of radiant intensity. I = E * d², but for a directional light d is infinite; since we apply a 1/d² distance falloff later when calculating the incoming irradiance at a VPL, the d² factor cancels out here.

2. For a point light, store radiant intensity.

 

And I didn't store radiant flux in the RSM.
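The convention above can be summarized in one function (a sketch; the names are mine):

```python
def vpl_incoming_irradiance(stored, dist2, light_type):
    """Irradiance arriving at a VPL's surface sample.
    For a point light we store radiant intensity I and apply the 1/d^2
    falloff (E = I / d^2).  For a directional light d is effectively
    infinite, so I = E * d^2 diverges; we store irradiance E directly
    and the d^2 factors cancel - no falloff is applied."""
    if light_type == "point":
        return stored / dist2   # stored = intensity I
    return stored               # stored = irradiance E (directional)
```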

 

 

As for weighting - a global 'gi_multiplier' value is also a good way to increase your control over it: just select a value that "looks good" for your scene. This is not physically based, but then I never tried to get close to physically based at the time. Using tone mapping with dynamic light adaptation also worked well for me here.

 

 

gi_multiplier is an engineering solution, and it works well for me.

 

 

