Ray/Path tracing questions from a dummy


Regarding real-time GI / Photon Mapping, a while ago someone pointed me to this paper:

http://graphics.cs.williams.edu/papers/PhotonI3D13/Mara13Photon.pdf

It shows a way to "quickly" sample photons on the GPU using a tiled technique (somewhat similar to Tiled Deferred Lighting). Although papers usually sound better than they actually are, it claims reasonable framerates, and the end results looked good to me. The paper doesn't describe how to quickly emit / evaluate rays though; it focuses on the second "gathering" step.
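If I understand that gathering step right, the rough idea would be something like the CUDA kernel below. This is purely my own simplification, not the paper's code: the Photon/Surfel structs, the per-tile photon lists (tileOffset/tileCount) and the falloff kernel are all made up for illustration, and the binning pass that fills those lists is left out entirely.

```cpp
// Rough sketch of tiled photon gathering (my own simplification, not the paper's code).
// Assumes a prior pass has already binned the photons into per-tile lists.
#include <cuda_runtime.h>

struct Photon { float3 pos; float3 power; float radiusSq; };  // hypothetical layout
struct Surfel { float3 pos; float3 normal; };                 // one G-buffer sample

__global__ void gatherTile(const Photon* photons,
                           const int*    tileOffset,  // first photon index per tile
                           const int*    tileCount,   // number of photons per tile
                           const Surfel* gbuffer,
                           float3*       indirect,
                           int width, int height, int tilesX)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // One thread block == one screen tile, so all pixels in the block
    // walk the same (hopefully short) photon list.
    int tile  = blockIdx.y * tilesX + blockIdx.x;
    int first = tileOffset[tile];
    int count = tileCount[tile];
    Surfel s  = gbuffer[py * width + px];

    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < count; ++i)
    {
        Photon p = photons[first + i];
        float dx = p.pos.x - s.pos.x;
        float dy = p.pos.y - s.pos.y;
        float dz = p.pos.z - s.pos.z;
        float distSq = dx * dx + dy * dy + dz * dz;
        if (distSq < p.radiusSq)                   // inside the photon's influence sphere
        {
            float w = 1.0f - distSq / p.radiusSq;  // crude falloff, not the paper's kernel
            sum.x += p.power.x * w;
            sum.y += p.power.y * w;
            sum.z += p.power.z * w;
        }
    }
    indirect[py * width + px] = sum;
}
```

Launched with, say, one thread block per 16x16 tile, each pixel only walks the photons binned into its own tile instead of the whole photon map, which I assume is where the speed comes from.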

Anyway, just out of curiosity, a few questions about ray/path tracing and Photon Mapping:

* What is a rough indication of the number of rays to shoot for a single point light to get somewhat reasonable results? A thousand? A hundred thousand?

* What kind of raytracing technique would that paper use to reach such framerates? My experience with rays is that they are VERY slow.

* How the heck does a GPU-based raytracer perform collision detection with complex structures (like the warehouse in that paper)?

* Is nVidia OptiX a good choice in your opinion?

* Would something like OptiX run on a "normal" gaming computer within ~4 years?

In my mind, if you don't launch a very f#cking big amount of rays per light, you end up with something like those old grainy CD-ROM game cutscenes, which actually look worse than all the fake techniques we have so far. Obviously the ray count is an easy-to-tweak parameter, but I really wonder how realistic the results of that paper are.

15 minutes ago I downloaded OptiX. No idea how it really works yet, except that it's a very basic (thus flexible) raytracing framework built on CUDA (and thus the GPU?). But the demo programs ran like crap here, with a whopping 3 FPS average for most of them. Of course my aging 2008 laptop is guilty, but does a modern system really make that much of a difference? And otherwise, is the expectation that hardware keeps growing fast enough to deal with it? I remember raytracing fans saying eight years ago that it would become reality soon, but a system that can do it practically has yet to be invented, AFAIK.

Basically I'm trying to figure out whether it's worth the effort to dive further into OptiX now. Of course it's always worth learning, but with limited time you can't learn and try multiple things at once. And in case my scepticism is misplaced: how did nVidia get those good results (GeForce 670, full-HD resolution, Sponza theatre)?

Another question. I'm guessing the collision detection part is the biggest performance killer. I've heard of kd-trees and such, but never implemented them so far. How would a GPU-based raytracer access such trees? Does that require complicated constructions again, like we had with the VCT octrees? And would OptiX do this for you, btw, or is the programmer 100% responsible for writing the collision detection kernels?

Finally, the paper mentions that "Image Based path tracing is even faster". I guess that means they are using depth maps instead of complicated trees. I can see the benefit, but GI techniques need more than the camera can see, or the data for collision detection would be incomplete. It's possible to combine depth maps (from the camera, light shadowMaps, fixed-placement cubeMap "probes"), but then either we have to loop through a big bunch of textures for each collision test step, or we have to merge them into a single volume texture (which requires a big amount of memory, or ends up very low-res). At least, as far as I know. So I just wonder what kind of smart trick they are referring to in that paper.
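To make concrete what I mean by looping through depth maps: I picture the collision test against a single depth map as a plain ray march, something like the sketch below. This is my own guess, not the paper's method; the worldToDepthMap() helper and the thickness value are made up.

```cpp
// Naive ray march against a single depth map -- my own guess at what
// "image-based" tracing boils down to, not the paper's actual method.
#include <cuda_runtime.h>

struct DepthSample { float u, v, depth; bool valid; };

// Hypothetical helper: project a world-space point into the depth map.
// Placeholder body; real code would use the map's view-projection matrix.
__device__ DepthSample worldToDepthMap(const float3& p)
{
    DepthSample s; s.valid = false; s.u = s.v = s.depth = 0.0f;
    return s;
}

__device__ bool marchDepthMap(const float* depthMap, int mapW, int mapH,
                              const float3& org, const float3& dir,
                              float maxDist, float3& hitPos)
{
    const int   steps     = 64;     // more steps = fewer missed hits, more cost
    const float thickness = 0.1f;   // assumed "thickness" of each depth sample
    const float stepLen   = maxDist / steps;

    for (int i = 1; i <= steps; ++i)
    {
        float3 p = make_float3(org.x + dir.x * stepLen * i,
                               org.y + dir.y * stepLen * i,
                               org.z + dir.z * stepLen * i);

        DepthSample s = worldToDepthMap(p);
        if (!s.valid) continue;                    // point falls outside the map

        int   tx     = min((int)(s.u * mapW), mapW - 1);
        int   ty     = min((int)(s.v * mapH), mapH - 1);
        float stored = depthMap[ty * mapW + tx];

        // Hit when the marched point sits just behind the stored surface.
        if (s.depth > stored && s.depth < stored + thickness)
        {
            hitPos = p;
            return true;
        }
    }
    return false;  // ray left the map or never crossed a surface -> no data
}
```

Which immediately shows the problem: as soon as the ray leaves the map or hides behind foreground geometry, the test simply has no data, so you'd need all those extra shadowMaps / cubeMaps (or a volume texture) as a fallback. Hence my question about the smart trick.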

Cheers,

Rick


Just keep in mind that work expands so as to fill the time allocated to it. Meaning that if a next-gen computer is, say, twice as fast as the previous one, we'll just render twice as many objects or twice as many samples, rather than doing the same thing we did before but twice as fast.

And, yes, (nontrivial) ray tracing is nearly 100% bottlenecked by the ray-scene intersection tests. Writing a kd-tree for the GPU is tricky work: partly because you cannot really use stack-based approaches, so you need elaborate stackless traversal algorithms and special tree constructions, and partly because you need to micro-optimize your code to be as fast as possible (which is also why most GPU raytracers are written in CUDA - it's infinitely easier to optimize code when you know the target architecture, and CUDA gives you plenty of tools to make that happen).
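To give an idea of what "stackless" means in practice, here is a bare-bones "kd-restart" traversal sketch in CUDA. The node layout and the leaf test are placeholders of my own, not any particular library's; real implementations (short-stack, kd-backtrack, ropes, etc.) are considerably more involved.

```cpp
// Bare-bones kd-restart traversal: no stack at all -- when a leaf misses,
// the ray interval is advanced and traversal starts over from the root.
#include <cuda_runtime.h>

struct KdNode
{
    int   axis;         // 0/1/2 = split axis, 3 = leaf
    float split;        // split plane position (interior nodes only)
    int   left, right;  // child indices (interior) / primitive range (leaf)
};

// Stand-in for the real triangle test against the leaf's primitive list.
__device__ bool intersectLeaf(const KdNode& leaf, const float3& org,
                              const float3& dir, float tMin, float tMax,
                              float& tHit)
{
    return false;  // a real tracer would test the leaf's triangles here
}

__device__ bool traceKdRestart(const KdNode* nodes, const float3& org,
                               const float3& dir, float tMin, float tMax,
                               float& tHit)
{
    while (tMin < tMax)
    {
        int   node     = 0;       // restart at the root
        float localMin = tMin;
        float localMax = tMax;

        // Walk down to a leaf, shrinking the active interval as we go.
        while (nodes[node].axis != 3)
        {
            KdNode n = nodes[node];
            float o = (n.axis == 0) ? org.x : (n.axis == 1) ? org.y : org.z;
            float d = (n.axis == 0) ? dir.x : (n.axis == 1) ? dir.y : dir.z;
            float tSplit = (n.split - o) / d;

            int nearChild = (o < n.split) ? n.left : n.right;
            int farChild  = (o < n.split) ? n.right : n.left;

            if (tSplit >= localMax || tSplit < 0.0f)
                node = nearChild;                           // far child never reached
            else if (tSplit <= localMin)
                node = farChild;                            // near child already behind us
            else { node = nearChild; localMax = tSplit; }   // near first, far via restart
        }

        if (intersectLeaf(nodes[node], org, dir, localMin, localMax, tHit))
            return true;

        tMin = localMax;  // step past this leaf and restart from the root
    }
    return false;
}
```

The obvious downside is that the upper part of the tree gets re-traversed over and over, which is exactly why the published variants add short stacks, backtracking links or "ropes" on top of this basic scheme.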

Pretty sure there are existing kd-tree implementations in CUDA, for instance cukd, though you may need to roll your own eventually if you really want to push your card to the max (or if you want to implement custom traversal algorithms). And also, yes, nobody uses naive path tracing on the GPU for high-quality interactive scenes. It's just too slow; you can't compute enough samples to get the noise down to a sane level before your frametime is up. Usually various simplifications are used, for instance baking some diffuse terms, using image-based lighting, some blur here and there, and I once saw a really nice bidirectional path tracing (BDPT) GPU implementation which seemed to converge quite quickly.


Thanks for the comment. You're right about expanding work indeed. If we had to ray-trace an environment from a 2002 game, it might be quite possible by now, but geometry, camera views and light counts keep increasing. Still, I was a bit disappointed that even the very simple scenery in the OptiX demos ran like a clogged toilet here. Either my computer is truly lazy, papers lie about their performance, or they are doing something radically different. I don't know.

Well, as much as I'd love to learn about raytracing, I'll probably stay away from it for now. Yet I still wonder how the mentioned paper reaches somewhat reasonable speeds (and that for Photon Mapping, not the easiest thing in the raytracing universe). Am I missing something?

Greets

