Hey guys,
I am implementing the following paper (Interactive Global Illumination using Voxel Cone Tracing): https://hal.sorbonne-universite.fr/LJK_GI_ARTIS/hal-00650173v1
(This paper can be downloaded for free if you look around)
Basically, the authors suggest storing radiance, color, etc. in the leaves of an octree and mipmapping those values into the higher levels of the tree, then using cone tracing to compute two-bounce global illumination.
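Just to make sure I understand the mipmapping step correctly, here is a toy sketch of what I think it does (the node layout and field names are my own, not from the paper):

```python
# Toy sketch: fill each interior octree node with the average radiance of its
# (up to 8) children. "children" and "radiance" are hypothetical field names.

def mipmap_node(node):
    """Recursively average child radiance (RGB) up into interior nodes."""
    if not node.get("children"):            # leaf: radiance was injected directly
        return node["radiance"]
    child_values = [mipmap_node(c) for c in node["children"]]
    avg = [sum(v[i] for v in child_values) / len(child_values) for i in range(3)]
    node["radiance"] = avg
    return avg
```

In the real implementation this would of course run on the GPU over the sparse octree's brick pool, but the idea is the same: leaves get injected values, parents get filtered versions of their children.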
So the octree is ready, and I now want to inject radiance into the leaf nodes. For this I use the suggested method and render a "light-view" map from the perspective of the light. Since I use physically based materials, the shading cannot be fully precomputed: solving the rendering equation for a given voxel depends on both the viewing direction and the light direction. I have seen implementations that simply use the Lambertian BRDF as a simplification, but that would likely degrade the quality of the resulting frame, wouldn't it? My idea is to evaluate the result (using the BRDF from UE4) for several viewing directions and interpolate between them at runtime. This process would have to be repeated whenever a light changes.
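To be concrete about the interpolation idea, here is a minimal sketch. The six axis-aligned sample directions and the clamped-dot-product weighting are my own assumptions for illustration; they are not something the paper prescribes:

```python
import math

# Sketch: per voxel, radiance has been precomputed for a small fixed set of
# viewing directions (here: the 6 axis directions). At runtime, blend those
# samples with weights proportional to max(dot(view_dir, sample_dir), 0).

AXIS_DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def interpolate_radiance(samples, view_dir):
    """samples[i] is the RGB radiance precomputed for AXIS_DIRS[i]."""
    n = math.sqrt(sum(c * c for c in view_dir))
    v = tuple(c / n for c in view_dir)
    weights = [max(sum(a * b for a, b in zip(v, d)), 0.0) for d in AXIS_DIRS]
    total = sum(weights)
    out = [0.0, 0.0, 0.0]
    for w, s in zip(weights, samples):
        for i in range(3):
            out[i] += w * s[i] / total
    return out
```

Viewing exactly along +x would return the +x sample unchanged; oblique directions blend the nearest samples. This is essentially storing a crude directional distribution per voxel, similar in spirit to the anisotropic voxels the paper uses for occlusion.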
So my question is: how should I handle this problem? Or should I just use the Lambertian BRDF and not worry about it?
Thanks