Using Physically Based Materials with Voxel-Based Global Illumination

9 comments, last by kalle_h 5 years, 5 months ago

Hey guys,

I am implementing following paper (Interactive Global Illumination using Voxel Cone Tracing): https://hal.sorbonne-universite.fr/LJK_GI_ARTIS/hal-00650173v1
(This paper can be downloaded for free if you look around)
Basically, the authors suggest storing radiance, color, etc. in the leaves of an octree and mipmapping that into the higher levels of the tree, then using cone tracing to calculate two-bounce global illumination.
So the octree is ready, and I now want to inject radiance into the leaf nodes. For this task I use the suggested method and render a "light-view" map from the perspective of the light. I use physically based materials, so the actual computation cannot be precalculated: solving the rendering equation for a specific voxel also depends on the viewing and light directions. I have seen some implementations that just use the Lambertian BRDF as a simplification, but won't that worsen the quality of the resulting frame? My idea is to calculate the result (using the BRDF from UE4) for more than one viewing direction and just interpolate between them at runtime. This process has to be repeated whenever a light changes.

So my question is: how should I handle this problem? Or should I just use the Lambertian BRDF and not worry about it?

Thanks ;)

2 hours ago, DaOnlyOwner said:

I have seen some implementations that just use the lambertian brdf as a simplification. But that will likely worsen the quality of the resulting frame, wouldn't it?

No, it will have a barely noticeable effect, because indirect lighting is dominated by diffuse reflection in practice. Exceptions would be extremely artificial scenes, with walls of shiny metal for example. (For metals you will likely get better results using the greyish specular color instead of the almost-black diffuse color.)
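For what it's worth, the Lambert injection with the metal fallback described here can be sketched like this (the function and parameter names are my own, not from the paper or any engine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical sketch of radiance injection using the Lambert simplification,
// with the metal trick mentioned above: blend toward the greyish specular
// color by metalness so metals are not injected as nearly black.
Vec3 InjectedRadiance(Vec3 diffuse, Vec3 specular, float metalness,
                      float NdotL, Vec3 lightColor) {
    Vec3 albedo = { diffuse.x + (specular.x - diffuse.x) * metalness,
                    diffuse.y + (specular.y - diffuse.y) * metalness,
                    diffuse.z + (specular.z - diffuse.z) * metalness };
    float w = fmaxf(NdotL, 0.0f) / 3.14159265f;  // Lambert BRDF = albedo / pi
    return { albedo.x * w * lightColor.x,
             albedo.y * w * lightColor.y,
             albedo.z * w * lightColor.z };
}
```

Whether the 1/pi belongs here depends on your radiance convention, so treat the normalization as an assumption.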

But even if you stored view-dependent information, doing that only for the locations of the lights would not work well, I think. Simple interpolation would look more wrong than using Lambert, and spending effort to improve it would not be worth it and would still cause artifacts. To do better you would need directional information at the voxels themselves, so that they store their environment (e.g. using spherical harmonics / ambient cubes / spherical Gaussians), but this quickly becomes impractical due to memory limits.
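For scale, here is a back-of-the-envelope sketch of what per-voxel directional storage costs, assuming second-order (9-coefficient) RGB spherical harmonics at 16 bits per coefficient (the helper name is hypothetical):

```cpp
// Hypothetical sketch: bytes needed to store a 2nd-order (9-coefficient)
// RGB spherical-harmonics environment per voxel at 16 bits per coefficient,
// for a dense res^3 grid. 256^3 comes to ~864 MiB, versus ~64 MiB for a
// plain RGBA8 volume -- which is why this becomes impractical quickly.
long long ShGridBytes(long long res) {
    const long long bytesPerVoxel = 9 /*coeffs*/ * 3 /*RGB*/ * 2 /*half float*/;
    return res * res * res * bytesPerVoxel;
}
```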

With voxel cone tracing, your real problem is that you cannot even model diffuse interreflection with good accuracy, at least not for larger scenes, so you are worrying about a very minor detail here. If you really want to show complex materials even in reflections, probably the best way would be to trace cone paths. (Or to use DXR, of course.)

Thank you, JoeJ.

I think I will just evaluate a constant then. 

50 minutes ago, JoeJ said:

With voxel cone tracing your real problem is that you can not even model diffuse interreflection with good accuracy, at least not for larger scenes. So you worry about a very minor detail here. If you really want to show complex materials even in reflections, probably the best way would be to trace cone paths. (Or to use DXR of course)

I agree, but Nvidia's VXGI implementation uses cone tracing too (albeit with a different approach for storing the voxels), and they achieve some fairly pleasant results, even for diffuse GI.

7 hours ago, DaOnlyOwner said:

I agree, but the VXGI implementation of Nvidia uses cone tracing too (however utilizing another approch for storing the voxels) and they achieve some fairly pleasant results, even for diffuse GI.

NV's implementation is very slow, I've heard (but that was many years ago). AFAIK it has not been used in a game yet. I think they used anisotropic voxels (6 colors for each) and an octree. Everybody else has ruled both of them out: instead of anisotropic voxels it's better to just use one more subdivision level, and instead of an octree it's better to use plain volume textures, for speed. That's what people say... personally I would still give octrees a try, though.
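As a side note, the anisotropic voxels mentioned here (6 colors per voxel, one per axis-aligned face) are typically sampled by picking the three faces matching the signs of the ray direction and blending them by the squared direction components. A minimal sketch with my own naming, not NV's code:

```cpp
struct Color { float r, g, b; };

// Hypothetical anisotropic voxel: one color per face (+X,-X,+Y,-Y,+Z,-Z).
struct AnisoVoxel { Color face[6]; };

// Sample along unit direction (dx,dy,dz): choose one face per axis by sign,
// then blend by squared components (which sum to 1 for a unit direction).
Color SampleAniso(const AnisoVoxel& v, float dx, float dy, float dz) {
    const Color& cx = v.face[dx >= 0 ? 0 : 1];
    const Color& cy = v.face[dy >= 0 ? 2 : 3];
    const Color& cz = v.face[dz >= 0 ? 4 : 5];
    float wx = dx * dx, wy = dy * dy, wz = dz * dz;
    return { wx * cx.r + wy * cy.r + wz * cz.r,
             wx * cx.g + wy * cy.g + wz * cz.g,
             wx * cx.b + wy * cy.b + wz * cz.b };
}
```

The point of the scheme is that mipmapping each face direction separately reduces the light leaking of plain averaged voxels, at 6x the memory.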

CryEngine's approach is very interesting: IIRC they use reflective shadow maps, and the voxels are used just for occlusion. They likely use one byte per voxel to utilize hardware filtering, but using just one bit would work too. This means big savings in memory, so the resolution can be higher and light leaking becomes less of a problem. They have a detailed description on their manual pages.
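To illustrate the occlusion-only idea, a cone march over such an occupancy volume might look like the sketch below. This is under my own assumptions, not CryEngine's actual code: sampleOcclusion stands in for a mip-filtered 3D texture fetch at a level matching the cone footprint.

```cpp
// Hypothetical occlusion cone march over a scalar occupancy field.
float TraceOcclusionCone(float (*sampleOcclusion)(const float p[3], float radius),
                         const float origin[3], const float dir[3],
                         float halfAngleTan, float maxDist) {
    float transmittance = 1.0f;
    float t = 0.05f;                       // small offset to skip self-occlusion
    while (t < maxDist && transmittance > 0.01f) {
        float radius = t * halfAngleTan;   // cone footprint grows with distance
        float p[3] = { origin[0] + dir[0] * t,
                       origin[1] + dir[1] * t,
                       origin[2] + dir[2] * t };
        transmittance *= 1.0f - sampleOcclusion(p, radius);  // front-to-back
        t += radius;                       // step proportional to footprint
    }
    return transmittance;                  // 1 = fully visible, 0 = occluded
}
```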

The developers of the PS4 game 'The Tomorrow Children' have a very good paper on their site. They really tried a lot of optimization ideas and got great results, so it's a must-read if you missed it.

An unexplored idea would be to use oriented bricks of volumes: you could pre-voxelize dynamic models like vehicles too and just update their transform instead of voxelizing them each frame, similar to how UE4 uses SDF volumes for shadows.

 

For performance reasons, Sparse Voxel Octrees should not be used: the update just takes too long if you move a light or have moving objects. Volume textures are easier to handle, but updating them takes even longer because you have to invalidate everything. That's why Nvidia uses clipmaps, where moving things doesn't force a full revoxelization, I guess. Thanks for the link to The Tomorrow Children though, I'm having a look at it ;)
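For context, clipmap levels are usually addressed toroidally, which is what lets the clip region move without touching the voxels that stay inside it; only the newly exposed slab needs revoxelizing. A minimal sketch of the wrap (hypothetical helper, not NV's code):

```cpp
// Hypothetical toroidal addressing for one clipmap level: world-space voxel
// coordinates wrap into a fixed-size 3D texture, so scrolling the clip
// region leaves the overlapping voxels where they already are.
int WrapClipCoord(int worldVoxel, int res) {
    int r = worldVoxel % res;
    return r < 0 ? r + res : r;  // keep result in [0, res)
}
```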

I really like this work:

I think it is a diffusion approach, so it avoids calculating expensive visibility.

I experimented with this too, 10 years ago, using a grid of spherical harmonics. I gave it up in favor of a surfel approach I'm still working on, but it's very promising.

@JoeJ I've implemented several variants, with and without an octree. I currently use the version without an octree in production: while the octree uses less memory (though not that much less, considering you still need to store 'bricks' in a 3D texture to get hardware filtering), creating or updating it simply takes too much time.

For performance reasons (and because I need to support physics-heavy scenes, e.g. lots of dynamic objects) I was even more aggressive, and at this point I don't store anything apart from the resulting direct diffuse color in the 3D texture for VXGI/reflections. That makes it just single-bounce GI, yet it's good enough for most cases, and fast enough to run on laptops with integrated GPUs.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

2 hours ago, Vilem Otte said:

Currently I do use version without octree in production - while octree uses less memory (it's not that much less, considering that you still need to store 'bricks' in 3D texture to get hardware filtering), creating it or updating it takes simply too much time.

Did you experiment with pre-voxelization, i.e. streaming the static parts of the scene?

I assume this would make sense at least for distant cascades where dynamic objects can be ignored, but I'm unsure whether it's still worth it when dynamic objects need to be added.

 

This all comes down to one problem: what do you want to store in your voxel data?

I did some research and measured the performance of various approaches, and the most naive way, storing the lighting results directly in the voxel data as 3D textures, was the winner. That said, the game uses only dynamic lighting and lots of dynamic objects (physics-driven or animated); for this scenario, storing normal and color (and injecting light) is too heavy a performance hit (and brings no advantage), as you would need to do it for many, if not all, lights. An SVO may be an option, but because of the above, the light bricks would need to be rebuilt constantly.

 

If the OP wants a difference comparison of Lambert vs. Oren-Nayar vs. Disney diffuse inside VXGI, for example, I can provide one, although probably not earlier than tomorrow. Sadly, I'm way too busy today.


For voxels I would just store the diffuse lighting. To compensate for the lost specular energy you can use the same trick that UE4 uses for fully rough materials. This way metals also contribute to GI. We also use this trick when rendering reflection captures.

 

#if FORCE_FULLY_ROUGH
	// Factors derived from EnvBRDFApprox( SpecularColor, 1, 1 ) == SpecularColor * 0.4524 - 0.0024
	GBuffer.DiffuseColor += GBuffer.SpecularColor * 0.45;
	GBuffer.SpecularColor = 0;
	GBuffer.Roughness = 1;
#endif
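For anyone wondering where the 0.4524 and -0.0024 factors come from: plugging Roughness = 1 and NoV = 1 into Karis' analytic EnvBRDFApprox reproduces them. A small C++ port of the shader constants as a check (the function name is mine):

```cpp
#include <cmath>

// Compute the (A, B) pair of Karis' analytic EnvBRDFApprox, so that
// EnvBRDF ~= SpecularColor * A + B. At Roughness = 1, NoV = 1 this
// gives A ~= 0.4524 and B ~= -0.0024, matching the comment above.
void EnvBRDFApproxAB(float Roughness, float NoV, float* A, float* B) {
    const float c0[4] = { -1.0f, -0.0275f, -0.572f,  0.022f };
    const float c1[4] = {  1.0f,  0.0425f,  1.04f,  -0.04f  };
    float r[4];
    for (int i = 0; i < 4; ++i) r[i] = Roughness * c0[i] + c1[i];
    float a004 = fminf(r[0] * r[0], exp2f(-9.28f * NoV)) * r[0] + r[1];
    *A = -1.04f * a004 + r[2];  // scale applied to SpecularColor
    *B =  1.04f * a004 + r[3];  // constant bias
}
```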


This topic is closed to new replies.
