Global illumination techniques

Started by
60 comments, last by FreneticPonE 9 years, 11 months ago

Would you guys be interested if I polished the source code up a little (practically no comments anywhere in there) and published it online along with the paper?

Of course! Code is always great to have.

is the paper already online? would like to read it.

Good to hear. The paper isn't uploaded anywhere yet, but I'll get around to it as soon as possible. I want to write an accompanying blog post to go into a bit more detail in certain sections (I cut a lot from the thesis's original form to stay concise and a bit more implementation-neutral). No promises on a delivery date yet, I still have to prepare and give a presentation about the thesis, but it's high on my priority list now :)

Hi again. Sorry for the long delay.

I haven't been able to find enough time to pretty up the code or write a more expansive tutorial on the algorithms. I'm part of a startup now and doing a master's degree as well, so I have barely any spare time on my hands. But I can at least provide the thesis + Win32 app + code (it should work on all platforms, but a build system is only provided for Win32 in the form of a Visual Studio 2012 project file).

The link to a file containing thesis + app + project is here; it's a .zip renamed to .pdf, since WordPress won't let me upload a zip file. You can just rename it back to .zip.

A little bit more detail about it is on my blog here.

The thesis isn't perfect, but I hope you can get a decent overview of the algorithms from it. Also feel free to point out any errors or ask questions. I hope you enjoy reading :)

Good stuff, thanks Agleed!

Speaking of which, Lionhead seems to have advanced Light Propagation Volumes further along: http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/

Unfortunately there are no details. But I guess that means it should be somewhere in UE4, though I didn't see it. Still, occlusion and skybox injection are nice. It still seems a fairly limited idea: you'd never get enough propagation steps to get long-range bounces from, say, a large terrain. But at least it seems more usable for anyone looking for a practical solution they can get working relatively quickly. And hey, maybe you could use a handful of real-time cubemaps that only render long-distance stuff, and just rely on the volumes for short distances.
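To make that range limit concrete, here's a toy sketch (my own illustration in plain Python, not from any engine; real LPVs propagate SH coefficients on a 3D grid) of why a fixed number of propagation iterations caps how far light can travel:

```python
# Toy 1D "light propagation volume": each iteration spreads intensity one
# cell outward, so light can reach at most `steps` cells from its source.

def propagate(grid, steps):
    """Run `steps` propagation iterations over a 1D intensity grid."""
    for _ in range(steps):
        new = grid[:]
        for i, v in enumerate(grid):
            if v > 0.0:
                share = v * 0.25  # fraction handed to each neighbor
                if i > 0:
                    new[i - 1] += share
                if i < len(grid) - 1:
                    new[i + 1] += share
        grid = new
    return grid

cells = [0.0] * 16
cells[0] = 1.0          # light injected at one end
out = propagate(cells, 4)
# After 4 iterations nothing has reached cell index 5 or beyond, so a
# distant "terrain" cell stays dark no matter how bright the source is.
```

Getting light across a big level therefore means more iterations per frame, which is exactly the cost LPVs try to keep fixed.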

Could go along nicely with using a sparse octree for propagation instead of a regular grid: https://webcache.googleusercontent.com/search?q=cache:http://fileadmin.cs.lth.se/graphics/research/papers/2013/olpv/ which trades the grid's more predictable performance impact for less memory and further/faster propagation. Assuming they don't do so already.

Thanks for that link. Their higher-order spherical harmonics probably do a lot to fix the light bleeding (due to light propagating in the wrong direction) present in the original algorithm, which was by far the biggest remaining issue. If you disregard that it's a discretization of continuous light floating around in space, and thus cannot represent physically accurate GI, I think it's pretty much a perfect first step for bringing GI for completely dynamic scenes into games before we get the computing power to use ray tracing, at least for the small to medium-sized scenes that make up the vast majority of games today.
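For intuition on why low-order SH causes that wrong-direction bleeding, here's a hedged numerical sketch (my own illustration, not from the linked paper): truncating a one-sided directional lobe to just a constant plus linear term, the 1D analogue of keeping only SH bands 0 and 1, leaves positive energy behind the lobe's direction:

```python
# Least-squares fit of f(x) on [-1, 1] with a + b*x, where x is the
# cosine of the angle to the propagation direction. This mimics keeping
# only SH bands 0 (mean) and 1 (linear).
def fit_linear(f, n=2001):
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    a = sum(f(x) for x in xs) / n            # band-0 (mean) term
    b = 3.0 * sum(x * f(x) for x in xs) / n  # band-1 (linear) term
    return a, b

def lobe(x):
    return max(x, 0.0)  # all energy in the forward hemisphere

a, b = fit_linear(lobe)        # roughly a = 0.25, b = 0.5
backward = a + b * (-0.4)      # reconstruction about 113 degrees back
# `backward` is positive even though the true lobe is zero there: the
# 2-band reconstruction leaks energy behind the propagation direction,
# which is exactly the bleeding that higher-order SH reduces.
```

More bands make the reconstruction hug the one-sided lobe more tightly, at the cost of more coefficients to store and propagate per cell.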

Otherwise, even the unoptimized version I implemented was pretty fast. I'm looking forward to getting my hands on the UE4 codebase to see how they optimized the different stages using compute shaders. I implemented compute shader versions as well, but they ended up being a tad slower than the vertex/geometry/fragment shader based implementations, because I haven't learned compute shader optimization (yet), so I just left compute shaders out of the thesis entirely.

Voxel cone-tracing was mentioned as the state-of-the-art, but I think real-time photon mapping with GPU-based final gather (by rasterizing volumes representing the photons) is still the state-of-the-art, even though it's now four years old: http://graphics.cs.williams.edu/papers/PhotonHPG09/

I especially like that you don't have to voxelize your scene. In a modern implementation, you'd use NVIDIA's OptiX to do the incoherent phase of the photon tracing on the GPU, instead of on the CPU as in the original paper.
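For anyone wondering what "rasterizing volumes representing the photons" amounts to, here's a minimal CPU-side sketch of photon splatting (the cone kernel and data layout are my own assumptions, not the paper's; the real thing rasterizes these volumes on the GPU and evaluates the BSDF per pixel):

```python
# Each photon is splatted as a finite-radius volume; every pixel inside
# it accumulates a kernel-weighted share of the photon's power.
import math

def splat_photons(width, height, photons, radius):
    """photons: list of (x, y, power). Returns a flat radiance buffer."""
    buf = [0.0] * (width * height)
    for px, py, power in photons:
        # Only pixels within `radius` of the photon receive energy.
        x0, x1 = max(0, int(px - radius)), min(width - 1, int(px + radius))
        y0, y1 = max(0, int(py - radius)), min(height - 1, int(py + radius))
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                d = math.hypot(x - px, y - py)
                if d < radius:
                    # Cone kernel: full power at the center, zero at the rim.
                    buf[y * width + x] += power * (1.0 - d / radius)
    return buf

img = splat_photons(8, 8, [(4.0, 4.0, 1.0)], 2.0)
```

Because the splat is a bounded volume, the cost scales with photon count and splat radius rather than with scene complexity, which is a big part of the appeal.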

"But who prays for Satan? Who, in eighteen centuries, has had the common humanity to pray for the one sinner that needed it most?" --Mark Twain

~~~~~~~~~~~~~~~Looking for a high-performance, easy to use, and lightweight math library? http://www.cmldev.net/ (note: I'm not associated with that project; just a user)

RTGI has plenty of issues, e.g. artifacts when the camera clips the photon volumes, instability when lights move around, and being limited to diffuse GI.

Did you even look at the paper I linked to? It is not limited to diffuse GI; it handles specular and any other BSDF, including in the intermediate bounces (a cursory glance at the images in the paper would have shown you as much: what's more obvious non-diffuse GI than caustics?)

Didn't read the paper. Caustics are fine and dandy but I see no examples of glossy reflections, which is something voxel cone tracing is pretty good at.

They explicitly state they support arbitrary BSDFs, and that includes glossy reflections. In any case, other than the first bounce (in which the rays are coherent) and the final gather, which are both done by rasterization, the intermediate bounces are all computed on the CPU, not the GPU, so there is no limitation there whatsoever.

The real downside of this method is something else entirely: the general limitation of all screen-space methods, namely stuff beyond the edges of the screen. It's the same problem SSAO has when objects move in and out of the frame, so I always use an extended border with such methods.
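A hedged sketch of what I mean by an extended border (the sizes and names are just illustrative): render the screen-space pass into a buffer that is a guard band larger than the visible frame, so samples that fall just off-screen still hit valid data, and only crop back at the end.

```python
# Guard-band buffer for screen-space passes (SSAO, photon splatting, ...).

def make_guard_band_buffer(screen_w, screen_h, border):
    """Allocate a buffer `border` pixels larger on every side."""
    return {"w": screen_w + 2 * border,
            "h": screen_h + 2 * border,
            "border": border}

def to_buffer_coords(buf, sx, sy):
    """Map a screen-space sample (possibly negative or past the edge)
    into the extended buffer; clamp only once the band itself is exceeded."""
    bx = min(max(sx + buf["border"], 0), buf["w"] - 1)
    by = min(max(sy + buf["border"], 0), buf["h"] - 1)
    return bx, by

buf = make_guard_band_buffer(1920, 1080, 64)
# An SSAO sample 10 px off the left edge still lands inside real data:
coords = to_buffer_coords(buf, -10, 500)   # (54, 564)
```

The band only hides near-edge popping, of course; geometry far outside the frustum still contributes nothing, which is the fundamental screen-space limitation.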

The biggest benefit is that it's very general: everything is completely dynamic, and it requires zero precomputation.

This topic is closed to new replies.
