Why is ray tracing slow?
Members - Reputation: 172
Posted 17 December 2012 - 04:25 PM
I am writing a simple ray tracer and I've heard that ray tracing in general is very slow. I don't know advanced ray tracing techniques, so I don't know what makes it slow. E.g. for full HD you already need 1920 x 1080 = 2,073,600 rays.
Is my rough guess right that the collision detection between rays and objects is the heaviest work in ray tracing?
Crossbones+ - Reputation: 13331
Posted 17 December 2012 - 04:32 PM
But yes, most of what your CPU will be doing is ray-vs-scene collisions.
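To make the cost concrete, here's a minimal ray-sphere intersection test (a sketch in Python for clarity; the names and the test scene are made up for illustration). A naive tracer runs something like this for every ray against every object in the scene, which is exactly the ray-vs-scene work mentioned above:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    The direction is assumed to be normalized (so the quadratic's a == 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    t = (-b - sqrt_disc) / 2.0  # try the nearer root first
    if t > 1e-6:
        return t
    t = (-b + sqrt_disc) / 2.0  # nearer root is behind the origin
    return t if t > 1e-6 else None

# Without an acceleration structure, a 1080p frame means
# 2,073,600 primary rays, each tested against every object.
hit = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # ray aimed at a radius-1 sphere 5 units away -> hits at t = 4.0
```

A handful of multiplies and a square root per test looks cheap, but multiplied by millions of rays and thousands of objects it dominates the frame time.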
Members - Reputation: 197
Posted 17 December 2012 - 05:24 PM
Members - Reputation: 844
Posted 17 December 2012 - 05:46 PM
Correct me if I'm wrong, but isn't it hundreds of times faster to do ray-tracing operations on the GPU? I haven't gone that low-level with graphics stuff yet, but from other things I've seen that utilize the GPU, it seems like it'd be the optimal choice.
This is true; however, even GPU ray tracing is too slow for real time. I have seen ray tracers that can run on the GPU fast enough for games, but to get them to run that fast they're nerfed to the point where you can easily get better-looking graphics using conventional rendering.
Edited by ic0de, 17 December 2012 - 05:47 PM.
you know you program too much when you start ending sentences with semicolons;
Crossbones+ - Reputation: 1412
Posted 17 December 2012 - 07:19 PM
Yeah, pretty much every feature for which people use ray tracing in the first place will multiply the work. Just doing simple shadows generates one shadow ray per ordinary ray for each light in the scene, so if you have 10 lights, it'll be almost 10x the work. (Shadow rays are not as expensive as normal rays, but still.) Want soft shadows? More work, say 3x. Want reflection and refraction with a moderate 3 bounces? If there's a lot of reflective and refractive objects in the scene, that might quadruple the entire workload again. Now we are already at 120x the original workload. If we then add AA, motion blur, DOF etc., for a difficult scene we'll be doing hundreds of rays per pixel, billions of rays for the entire image.
Doing almost anything 2 million times takes a lot of time. If you want improvements like anti-aliasing, motion blur and depth-of-field effects, you'll have to launch many (100?) rays per pixel.
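The multiplication described above is easy to put in numbers. A back-of-the-envelope estimate using the illustrative factors from this thread (10 lights, 3x for soft shadows, ~4x for a reflective/refractive scene, 100 samples per pixel; the exact factors are hypothetical, not measurements):

```python
width, height = 1920, 1080
primary = width * height            # 2,073,600 primary rays at 1080p
lights = 10                         # one shadow ray per light per hit
soft_shadow_samples = 3             # extra samples for soft shadows
bounce_factor = 4                   # reflection/refraction, ~3 bounces
samples_per_pixel = 100             # AA + motion blur + depth of field

rays = primary * lights * soft_shadow_samples * bounce_factor * samples_per_pixel
print(f"{rays:,}")  # -> 24,883,200,000 rays for a single frame
```

Tens of billions of rays per frame is why "billions of rays for the entire image" is not an exaggeration for a difficult scene.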
Crossbones+ - Reputation: 8890
Posted 17 December 2012 - 11:25 PM
There have been some efforts at making specialized ray-tracing hardware, but they are either too specific (restricting what can be implemented with them) or not popular enough (the classic chicken-and-egg problem: nobody is using your technology because it doesn't exist, and why invent a technology if nobody is going to use it?). If you designed a graphics card with hardware-accelerated ray tracing and efficient support for scene traversal structures (octrees, kd-trees), with low-latency memory, it would crush a conventional rasterizer. Unfortunately, this hasn't caught on because it's just not cost-effective yet; such cards would cost a lot more, and, to be fair, games don't really need ray tracing yet. They look good enough. I would estimate we'll start seeing some of this technology in 6-8 years.
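For context on why those traversal structures matter: instead of testing every ray against every object, an octree or kd-tree lets the tracer test a ray against an axis-aligned bounding box first and skip everything inside it on a miss. A minimal slab test (an illustrative sketch, not hardware code; function names are made up):

```python
def intersect_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: True if the ray hits the axis-aligned box.

    inv_dir holds 1/d per axis, precomputed once per ray. The infinities
    produced by axis-parallel directions behave correctly in IEEE floats,
    as long as the origin doesn't sit exactly on a slab plane.
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far  # the three slab intervals overlap -> hit

# A ray going straight down +z hits a box spanning z in [4, 6]:
inv = (float("inf"), float("inf"), 1.0)  # 1/d for direction d = (0, 0, 1)
print(intersect_aabb((0.5, 0.5, 0.0), inv, (0, 0, 4), (1, 1, 6)))  # True
```

One cheap box test like this can cull thousands of per-object intersection tests, which is why a tree of such boxes turns the naive O(rays x objects) cost into something closer to O(rays x log objects).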
Also, ray tracing doesn't lend itself so well to the classic "pipeline" approach. In particular, if you're doing anything non-trivial with your rays, you can't render objects one after the other, changing shaders to modify their appearance; in general, the whole scene has to be available on-chip at once. So that's another problem: ray-tracing hardware would require a paradigm shift in computer graphics development.
If you compare a software rasterizer with a similarly optimized software ray tracer, you'll see they're not too far off (for roughly the same image quality, obviously): the rasterizer wins for average-complexity scenes, while the ray tracer tends to pull ahead beyond a certain scene complexity. The reason people perceive ray tracing as horribly slow is that graphics cards are hardware-accelerated for rasterization, which makes it an unfair comparison.
Edited by Bacterius, 18 December 2012 - 03:48 AM.
The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.
- Pessimal Algorithms and Simplexity Analysis