Why is ray tracing slow?

Hi,

I am writing a simple ray tracer and I've heard that ray tracing in general is very slow. I don't know advanced ray tracing techniques, so I don't know what makes it slow. For example, for full HD you only need 1920 x 1080 = 2,073,600 rays.

As a rough guess, is the collision detection between rays and objects the most expensive part of ray tracing?

Regards
Doing almost anything 2 million times takes a lot of time. If you want to get improvements like anti-aliasing, motion blur and depth-of-field effects, you'll have to launch many (100?) rays per pixel.

But yes, most of what your CPU will be doing is ray-vs-scene collisions.
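
To make that concrete, here's a minimal sketch (not the OP's code, just an illustrative brute-force tracer in C++): every pixel's ray gets tested against every object, so the cost is roughly pixels x objects intersection tests before any shading even happens. The scene contents and camera are made up for the example.

```cpp
// Brute-force inner loop of a toy ray tracer: test each pixel's ray against
// every object in the scene and count the work done.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; double radius; };

// Standard ray-sphere test: returns the distance to the nearest hit along the
// ray, or a negative value on a miss. Assumes dir is normalized.
static double intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3 oc = sub(origin, s.center);
    double b = dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;
    return -b - std::sqrt(disc);
}

int main() {
    const int width = 1920, height = 1080;
    std::vector<Sphere> scene = { {{0, 0, -5}, 1.0}, {{2, 0, -7}, 1.0} };

    long long tests = 0, hits = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Build a primary ray through the pixel (simple pinhole camera).
            double u = (x + 0.5) / width * 2.0 - 1.0;
            double v = 1.0 - (y + 0.5) / height * 2.0;
            Vec3 dir = {u, v, -1.0};
            double len = std::sqrt(dot(dir, dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};

            // Brute force: test the ray against every object in the scene.
            for (const Sphere& s : scene) {
                ++tests;
                if (intersect({0, 0, 0}, dir, s) > 0.0) ++hits;
            }
        }
    }
    std::printf("%lld intersection tests, %lld hits\n", tests, hits);
    return 0;
}
```

Even with only two spheres this already does over four million intersection tests per frame; a real scene with thousands of triangles makes the brute-force approach unworkable, which is why the acceleration structures mentioned later in this thread matter so much.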
Correct me if I'm wrong, but isn't it hundreds of times faster to do raytracing operations through the GPU? I haven't gone that low level yet with graphics stuff, but from other stuff I've seen that utilizes GPU, it seems like it'd be the optimal choice.

Correct me if I'm wrong, but isn't it hundreds of times faster to do raytracing operations through the GPU? I haven't gone that low level yet with graphics stuff, but from other stuff I've seen that utilizes GPU, it seems like it'd be the optimal choice.


This is true; however, even GPU ray tracing is too slow for real time. I have seen ray tracers that run on the GPU fast enough for games, but to get them to run that fast they're nerfed to the point where you can easily get better-looking graphics using conventional rendering.

Doing almost anything 2 million times takes a lot of time. If you want to get improvements like anti-aliasing, motion blur and depth-of-field effects, you'll have to launch many (100?) rays per pixel.
Yeah, pretty much every feature for which people use ray tracing in the first place will multiply the work. Just doing simple shadows generates one shadow ray per ordinary ray for each light in the scene, so if you have 10 lights, it'll be almost 10x the work. (Shadow rays are not as expensive as normal rays, but still.) Want soft shadows? More work, say 3x. Want reflection and refraction with a moderate 3 bounces? If there are a lot of reflective and refractive objects in the scene, that might quadruple the entire workload again. Now we are already at 120x the original workload. If we then add AA, motion blur, DOF etc., for a difficult scene we'll be doing hundreds of rays per pixel, and billions of rays for the entire image.
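
To show how those factors compound, here is a purely back-of-the-envelope counter using the example numbers from the paragraph above (10 lights, ~3x for soft shadows, ~4x for reflection/refraction bounces, a handful of AA/DOF/blur samples). All figures are illustrative assumptions, not measurements.

```cpp
// Rough ray-budget arithmetic: each feature multiplies the rays spawned by
// everything before it, so the factors compound per pixel.
#include <cstdio>

int main() {
    const long long pixels        = 1920LL * 1080LL; // ~2.07 million primary rays
    const int lights              = 10;  // one shadow ray per light per hit
    const int soft_shadow_factor  = 3;   // extra shadow samples for soft shadows
    const int bounce_factor       = 4;   // reflection/refraction, ~3 bounces
    const int aa_dof_blur_samples = 8;   // anti-aliasing / DOF / motion blur samples

    long long rays_per_pixel = 1LL * lights * soft_shadow_factor * bounce_factor
                             * aa_dof_blur_samples;
    long long total_rays = pixels * rays_per_pixel;

    std::printf("~%lld rays per pixel, ~%lld rays per frame\n",
                rays_per_pixel, total_rays);
    return 0;
}
```

With those assumptions you end up at roughly 960 rays per pixel and about two billion rays per frame, which is where the "hundreds of rays per pixel, billions of rays for the entire image" figure comes from.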
For global illumination, anti-aliasing, depth of field, etc., you can easily reach as high as 1000 samples per pixel. Ray-tracing on the GPU is fast, but the fact is, GPUs are not designed for ray-tracing. One of the main characteristics of ray-tracing is that ray-scene collisions are unpredictable (at least after the first bounce), which means very incoherent memory access (which both CPUs and GPUs suck at to some extent). But their sheer parallel computational power compared to the CPU makes them faster for such embarrassingly parallel problems, so people use them.

There have been some efforts at making specialized ray-tracing hardware, but they are either too specific (restrict what can be implemented with them) or not popular enough (classic chicken-and-egg problem: nobody is using your technology because it doesn't exist, and why invent a technology if nobody is going to use it?). If you designed a graphics card with hardware-accelerated ray-tracing and efficient support for scene traversal graphs (octree, kd-tree), with low-latency memory, it would crush a conventional rasterizer. Unfortunately, this hasn't caught on because it's just not cost-effective yet, such cards would cost a lot more, and to be fair, games don't really need ray-tracing yet. They look good enough. I would estimate we will start seeing some of this technology in 6-8 years.
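
For illustration, here is a rough sketch (hypothetical, minimal C++, not any real engine's or hardware vendor's API) of the kind of traversal such hardware would accelerate: the ray is tested against a few bounding boxes, and whole subtrees it misses are skipped, so only a handful of primitives ever need exact intersection tests.

```cpp
// Minimal BVH-style traversal: skip entire subtrees whose bounding boxes the
// ray misses, so far fewer exact primitive tests are needed than brute force.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Ray  { double ox, oy, oz, dx, dy, dz; };
struct AABB { double min[3], max[3]; };

// Classic slab test: does the ray hit the box at all?
static bool hitAABB(const Ray& r, const AABB& b) {
    double o[3] = {r.ox, r.oy, r.oz}, d[3] = {r.dx, r.dy, r.dz};
    double tmin = 0.0, tmax = 1e30;
    for (int i = 0; i < 3; ++i) {
        double inv = 1.0 / d[i];   // d[i] == 0 relies on IEEE infinities here
        double t0 = (b.min[i] - o[i]) * inv;
        double t1 = (b.max[i] - o[i]) * inv;
        if (inv < 0.0) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;
    }
    return true;
}

struct BVHNode {
    AABB bounds{};
    int left = -1, right = -1;    // child node indices, -1 marks a leaf
    std::vector<int> primitives;  // primitive indices stored in leaves
};

// Recursive traversal: collect only the primitives whose boxes the ray reaches.
static void traverse(const std::vector<BVHNode>& nodes, int idx,
                     const Ray& r, std::vector<int>& candidates) {
    const BVHNode& n = nodes[idx];
    if (!hitAABB(r, n.bounds)) return;
    if (n.left < 0) {  // leaf: these still need exact intersection tests
        candidates.insert(candidates.end(),
                          n.primitives.begin(), n.primitives.end());
        return;
    }
    traverse(nodes, n.left, r, candidates);
    traverse(nodes, n.right, r, candidates);
}

int main() {
    // Tiny hand-built BVH: a root box with two leaf children.
    std::vector<BVHNode> nodes(3);
    nodes[0].bounds = {{-2, -1, -8}, {3, 1, -4}};
    nodes[0].left = 1; nodes[0].right = 2;
    nodes[1].bounds = {{-2, -1, -6}, {0, 1, -4}}; nodes[1].primitives = {0};
    nodes[2].bounds = {{ 1, -1, -8}, {3, 1, -6}}; nodes[2].primitives = {1};

    Ray r = {0, 0, 0, -0.1, 0, -1};
    std::vector<int> candidates;
    traverse(nodes, 0, r, candidates);
    std::printf("ray needs exact tests against %zu of 2 primitives\n",
                candidates.size());
    return 0;
}
```

The exact intersection tests still happen, but only against the primitives in the leaves the ray actually reaches, which is what turns the per-ray cost from "every object in the scene" into something closer to logarithmic in the object count.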

Also, ray-tracing doesn't lend itself so well to the classic "pipeline" approach. In particular, if you're doing anything non-trivial with your rays, you can't render objects one after the other, changing shaders to modify their appearance. In general, everything has to be available on-chip at once. So that's another problem: ray-tracing hardware would require a paradigm shift in computer graphics development.

If you compare a software rasterizer with a similarly optimized software ray-tracer, you'll see they're not too far off (for roughly the same image quality, obviously), with the rasterizer winning for average-complexity scenes and the ray-tracer tending to pull ahead beyond a certain scene complexity. The reason people perceive ray-tracing as horribly slow is that graphics cards are hardware-accelerated for rasterization, which makes it an unfair comparison.


