Regarding realtime GI / photon mapping, someone recently pointed me to this paper:
http://graphics.cs.williams.edu/papers/PhotonI3D13/Mara13Photon.pdf
It shows a way to "quickly" sample photons on the GPU using a tiled technique (somewhat similar to tiled deferred lighting). Although I know papers usually sound better than they actually are, it claims reasonable framerates, and the end results look good to me. The paper doesn't describe how to quickly emit/trace the photons though; it focuses on the second, "gathering" step.
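To check whether I understand that gathering step, here's my own rough CUDA sketch of the idea (not the paper's code; the names, the normalized-coordinate shortcut, and the fixed per-tile capacity are all made up): bin each photon into a screen tile, then let every pixel loop only over its own tile's list, just like tiled deferred lighting does with lights.

```cuda
#include <cuda_runtime.h>

struct Photon { float3 pos; float3 power; float radius; };

// Hypothetical shortcut: assume photon positions were already projected to
// normalized screen coordinates [0,1)^2. A real version would project the
// photon's bounding sphere and insert it into EVERY tile it overlaps.
__device__ bool photonToTile(const Photon& ph, int tilesX, int tilesY,
                             int& tx, int& ty)
{
    if (ph.pos.x < 0.f || ph.pos.x >= 1.f ||
        ph.pos.y < 0.f || ph.pos.y >= 1.f) return false;  // off screen
    tx = (int)(ph.pos.x * tilesX);
    ty = (int)(ph.pos.y * tilesY);
    return true;
}

// Pass 1: scatter photon indices into fixed-capacity per-tile lists.
__global__ void binPhotons(const Photon* photons, int numPhotons,
                           int* tileCounts, int* tileLists,
                           int tilesX, int tilesY, int maxPerTile)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPhotons) return;
    int tx, ty;
    if (!photonToTile(photons[i], tilesX, tilesY, tx, ty)) return;
    int tile = ty * tilesX + tx;
    int slot = atomicAdd(&tileCounts[tile], 1);  // reserve a slot in the list
    if (slot < maxPerTile)                       // drop overflow photons
        tileLists[tile * maxPerTile + slot] = i;
}
// Pass 2 (not shown): each pixel walks tileLists for its own tile and sums
// the power of the photons whose radius covers it -- a local density estimate.
```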
Anyway. Just out of curiosity, a few questions about ray/path tracing and photon mapping:
* What is a rough indication of how many rays to shoot for a single point light to get somewhat reasonable results? A thousand? A hundred thousand?
* What kind of raytracing technique would that paper use to reach such framerates? My experience with rays is that they are VERY slow.
* How the heck does a GPU-based raytracer perform collision detection with complex structures (like the warehouse scene in that paper)?
* Is nVidia OptiX a good choice, in your opinion?
* Would something like OptiX run on a "normal" game computer within ~4 years?
In my mind, if you don't launch a very f#cking big amount of rays per light, you get those old grainy CD-ROM game cutscenes, which actually look worse than all the fake techniques we have so far. Obviously the ray count is an easy parameter to tweak, but I really wonder how realistic the results of that paper are.
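For what it's worth, that graininess is plain Monte Carlo variance, and the math is brutal: the standard error of the estimate falls off as

$$\text{noise} \;\propto\; \frac{1}{\sqrt{N}}$$

so halving the noise costs 4x the rays, and going from 1,000 to 100,000 photons per light only cleans up the image by a factor of $\sqrt{100} = 10$.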
15 minutes ago I downloaded OptiX. No idea how it really works yet, except that it's a very basic (thus flexible) raytracing framework built on CUDA (so it does run on the GPU). The demo programs ran like crap though, with a whopping 3 FPS average for most of them. Of course my aging 2008 laptop is the main culprit, but does a modern system really make that much of a difference? And otherwise, is the expectation that hardware keeps growing fast enough to deal with it? I remember raytracing fans saying it would become reality eight years ago already, but a system that can do it practically has yet to be invented, AFAIK.
Basically I'm trying to figure out whether it's worth the effort to dive further into OptiX now. Of course it's always worth learning something, but with limited time you can't learn & try multiple things at once. But for the sceptics: how did nVidia get those good results (GeForce 670, full-HD resolution, the Sponza atrium scene)?
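For reference, the host side of OptiX looks roughly like this, going by the SDK samples I just skimmed (this uses the optixu C++ wrapper; the PTX file and the program/buffer names are placeholders, and I've left out the scene, materials, and miss program that a real setup needs):

```cuda
#include <optixu/optixpp_namespace.h>

int main()
{
    // One context = one ray tracing pipeline on the GPU.
    optix::Context ctx = optix::Context::create();
    ctx->setRayTypeCount(1);      // just a single "radiance" ray type
    ctx->setEntryPointCount(1);   // one ray-generation entry point

    // Buffer the ray-generation program writes its results into.
    optix::Buffer output = ctx->createBuffer(RT_BUFFER_OUTPUT,
                                             RT_FORMAT_FLOAT4, 1280, 720);
    ctx["output_buffer"]->set(output);

    // The ray-generation "kernel", compiled from CUDA to PTX beforehand.
    optix::Program rayGen =
        ctx->createProgramFromPTXFile("pinhole_camera.ptx", "pinhole_camera");
    ctx->setRayGenerationProgram(0, rayGen);

    ctx->validate();
    ctx->launch(0, 1280, 720);    // roughly: one thread per pixel
    return 0;
}
```

So it behaves less like a fixed raytracer and more like a scheduler on top of CUDA: you supply small ray-gen / closest-hit / miss programs, and it runs them over all the rays.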
Another question. I'm guessing the collision detection part is the biggest performance killer. I've heard of kd-trees and such, but never implemented them so far. How would a GPU-based raytracer access such trees? That requires complicated machinery like we had with the VCT octrees again, right? And would OptiX do this for you btw, or is the programmer 100% responsible for writing the collision detection kernels?
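From what I can tell from the OptiX docs, you don't write the traversal yourself: you hand it the geometry, pick an acceleration builder by name (e.g. "Bvh" or "Sbvh"), and it builds and traverses the structure for you; you mostly write intersection/hit programs. If you did roll your own, a GPU traversal is typically recursion-free: a small per-thread stack over a flattened node array. A simplified sketch (the node layout and names are invented; real tracers use far more tuned variants):

```cuda
#include <cuda_runtime.h>

struct BvhNode {
    float3 bmin, bmax;         // axis-aligned bounding box
    int    left;               // index of left child; -1 marks a leaf
    int    firstTri, triCount; // triangle range for leaves
};
struct Tri { float3 v0, v1, v2; };

// Tiny float3 helpers (CUDA's float3 has no built-in operators).
__device__ float3 f3sub(float3 a, float3 b){ return make_float3(a.x-b.x, a.y-b.y, a.z-b.z); }
__device__ float3 f3cross(float3 a, float3 b){ return make_float3(a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x); }
__device__ float  f3dot(float3 a, float3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Standard "slab" ray/AABB test.
__device__ bool hitAabb(float3 o, float3 invD, float tMax, float3 bmin, float3 bmax)
{
    float tx1 = (bmin.x-o.x)*invD.x, tx2 = (bmax.x-o.x)*invD.x;
    float t0 = fminf(tx1, tx2), t1 = fmaxf(tx1, tx2);
    float ty1 = (bmin.y-o.y)*invD.y, ty2 = (bmax.y-o.y)*invD.y;
    t0 = fmaxf(t0, fminf(ty1, ty2)); t1 = fminf(t1, fmaxf(ty1, ty2));
    float tz1 = (bmin.z-o.z)*invD.z, tz2 = (bmax.z-o.z)*invD.z;
    t0 = fmaxf(t0, fminf(tz1, tz2)); t1 = fminf(t1, fmaxf(tz1, tz2));
    return t1 >= fmaxf(t0, 0.0f) && t0 < tMax;
}

// Moller-Trumbore ray/triangle test.
__device__ bool hitTri(const Tri& t, float3 o, float3 d, float& tHit)
{
    float3 e1 = f3sub(t.v1, t.v0), e2 = f3sub(t.v2, t.v0);
    float3 p  = f3cross(d, e2);
    float  det = f3dot(e1, p);
    if (fabsf(det) < 1e-8f) return false;    // ray parallel to triangle
    float inv = 1.0f / det;
    float3 s = f3sub(o, t.v0);
    float  u = f3dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    float3 q = f3cross(s, e1);
    float  v = f3dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float th = f3dot(e2, q) * inv;
    if (th <= 0.0f) return false;
    tHit = th;
    return true;
}

// Iterative traversal: a small per-thread stack replaces recursion.
__device__ int traceRay(const BvhNode* nodes, const Tri* tris,
                        float3 o, float3 d, float& tHit)
{
    float3 invD = make_float3(1.0f/d.x, 1.0f/d.y, 1.0f/d.z);
    int stack[64], sp = 0;
    stack[sp++] = 0;                     // push root node
    int hit = -1; tHit = 1e30f;
    while (sp > 0) {
        const BvhNode n = nodes[stack[--sp]];
        if (!hitAabb(o, invD, tHit, n.bmin, n.bmax)) continue; // skip subtree
        if (n.left < 0) {                // leaf: test its triangles
            for (int i = 0; i < n.triCount; ++i) {
                float th;
                if (hitTri(tris[n.firstTri + i], o, d, th) && th < tHit) {
                    tHit = th; hit = n.firstTri + i;
                }
            }
        } else {                         // inner node: push both children
            stack[sp++] = n.left;
            stack[sp++] = n.left + 1;    // children stored adjacently
        }
    }
    return hit;
}
```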
Finally, the paper mentions that "image-based path tracing is even faster". I guess that means they use depth maps instead of complicated trees. I can see the benefit, but GI techniques need more than what the camera can see, or the data for collision detection would be incomplete. It's possible to combine depth maps (from the camera, light shadow maps, fixed cubemap "probes"), but then either we have to loop through a big bunch of textures for each collision-test step, or we have to merge them into a single volume texture (which requires a big amount of memory, or ends up very low-res). At least, as far as I know. So I just wonder what kind of smart trick they are referring to in that paper.
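My guess at the "image-based" trick, for what it's worth: march the ray in small steps and compare it against the depth buffer at each step, the same way screen-space reflections work. A toy version (everything here is assumed: camera-space positions, a linear depth buffer, a simple pinhole projection), which also shows exactly the weakness I mean, since the march fails the moment the ray leaves the frustum:

```cuda
#include <cuda_runtime.h>

// Toy pinhole projection: p is in camera space, depth is linear view depth.
__device__ bool projectToScreen(float3 p, int w, int h, float focal,
                                int& x, int& y, float& depth)
{
    if (p.z <= 0.01f) return false;               // behind the camera
    x = (int)((p.x / p.z) * focal + 0.5f * w);
    y = (int)((p.y / p.z) * focal + 0.5f * h);
    depth = p.z;
    return x >= 0 && x < w && y >= 0 && y < h;
}

// March a ray against a single linear-depth buffer, SSR-style.
__device__ bool marchDepthBuffer(const float* depthBuf, int w, int h,
                                 float focal, float3 org, float3 dir,
                                 float3& hitPos)
{
    const int   STEPS = 64;    // more steps = fewer misses, more cost
    const float STEP  = 0.1f;  // world units per step (scene dependent)
    float3 p = org;
    for (int i = 0; i < STEPS; ++i) {
        p = make_float3(p.x + dir.x * STEP,
                        p.y + dir.y * STEP,
                        p.z + dir.z * STEP);
        int x, y; float rayDepth;
        if (!projectToScreen(p, w, h, focal, x, y, rayDepth))
            return false;      // ray left the frustum: the data just isn't there
        if (rayDepth > depthBuf[y * w + x]) {      // crossed a visible surface
            hitPos = p;
            return true;
        }
    }
    return false;              // no hit within the march distance
}
```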
Cheers,
Rick