
Why is ray tracing slow?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

#1 ynm   Members   -  Reputation: 172


Posted 17 December 2012 - 04:25 PM

Hi,

I am writing a simple ray tracer and I have heard that ray tracing in general is very slow. I don't know advanced ray tracing techniques, so I don't know what makes it slow. E.g. for full HD you just need 1920 x 1080 = 2,073,600 rays.

My rough guess: is the collision detection between rays and objects the heaviest work in ray tracing?
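For context, the per-object test a simple tracer performs usually looks something like the following ray-vs-sphere intersection (a minimal sketch; all names and values here are illustrative, not from any particular implementation):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest hit, or None on a miss.
    `direction` is assumed normalized; rays starting inside the
    sphere are treated as misses for simplicity."""
    # Vector from ray origin to sphere center
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1, normalized dir)
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None
```

Without an acceleration structure, one such test runs per ray, per object: a 1920x1080 image with 100 objects is already ~200 million tests for a single frame.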

Regards


#2 Álvaro   Crossbones+   -  Reputation: 13897


Posted 17 December 2012 - 04:32 PM

Doing almost anything 2 million times takes a lot of time. If you want improvements like anti-aliasing, motion blur and depth-of-field effects, you'll have to launch many (100?) rays per pixel.

But yes, most of what your CPU will be doing is ray-vs-scene collisions.
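To put numbers on that, here is a back-of-envelope count (the 100 samples per pixel is an illustrative figure, as in the post above):

```python
width, height = 1920, 1080
primary_rays = width * height             # 2,073,600 rays at 1 sample/pixel
samples_per_pixel = 100                   # AA + motion blur + depth of field
total = primary_rays * samples_per_pixel  # rays per frame before any bounces
print(f"{total:,}")                       # 207,360,000
```

And that is before counting shadow, reflection and refraction rays spawned at each hit.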

#3 Magdev   Members   -  Reputation: 197


Posted 17 December 2012 - 05:24 PM

Correct me if I'm wrong, but isn't it hundreds of times faster to do raytracing operations through the GPU? I haven't gone that low level yet with graphics stuff, but from other stuff I've seen that utilizes GPU, it seems like it'd be the optimal choice.

#4 ic0de   Members   -  Reputation: 909


Posted 17 December 2012 - 05:46 PM

Correct me if I'm wrong, but isn't it hundreds of times faster to do raytracing operations through the GPU? I haven't gone that low level yet with graphics stuff, but from other stuff I've seen that utilizes GPU, it seems like it'd be the optimal choice.


This is true, however even GPU ray tracing is too slow for realtime. I have seen ray tracers that can run on the GPU fast enough for games, but to get them to run that fast they're nerfed to a point where you can easily get better-looking graphics using conventional rendering.

Edited by ic0de, 17 December 2012 - 05:47 PM.

you know you program too much when you start ending sentences with semicolons;


#5 Yrjö P.   Crossbones+   -  Reputation: 1412


Posted 17 December 2012 - 07:19 PM

Doing almost anything 2 million times takes a lot of time. If you want improvements like anti-aliasing, motion blur and depth-of-field effects, you'll have to launch many (100?) rays per pixel.

Yeah, pretty much every feature for which people use ray tracing in the first place will multiply the work. Just doing simple shadows generates one shadow ray per ordinary ray for each light in the scene, so if you have 10 lights, it'll be almost 10x the work. (Shadow rays are not as expensive as normal rays, but still.) Want soft shadows? More work, say 3x. Want reflection and refraction with a moderate 3 bounces? If there's a lot of reflective and refractive objects in the scene, that might quadruple the entire workload again. Now we are already at 120x the original workload. If we then add AA, motion blur, DOF etc., for a difficult scene we'll be doing hundreds of rays per pixel, billions of rays for the entire image.
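The multipliers in that paragraph compound, which is the whole point; spelled out (all figures are the post's rough examples, not measurements):

```python
lights = 10        # one shadow ray per light per hit -> ~10x the work
soft_shadows = 3   # several samples per area light for soft edges
bounces = 4        # reflection/refraction can ~quadruple the load again
work = lights * soft_shadows * bounces
print(f"{work}x the original workload")  # 120x, before AA/motion blur/DOF
```

Layer 100+ samples per pixel on top of that and the per-frame ray count runs into the billions.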

#6 Bacterius   Crossbones+   -  Reputation: 9266


Posted 17 December 2012 - 11:25 PM

For global illumination, anti-aliasing, depth-of-field, etc. you can easily reach as high as 1000 samples per pixel. Ray-tracing on the GPU is fast, but the fact is, GPUs are not designed for ray-tracing. One of the main characteristics of ray-tracing is that ray-scene collisions are unpredictable (at least after the first bounce), which means very incoherent memory access (which both CPUs and GPUs suck at to some extent). But their sheer parallel computational power compared to the CPU makes them faster for such embarrassingly parallel problems, so people use them.

There have been some efforts at making specialized ray-tracing hardware, but they are either too specific (restrict what can be implemented with them) or not popular enough (classic chicken-and-egg problem: nobody is using your technology because it doesn't exist, and why invent a technology if nobody is going to use it?). If you designed a graphics card with hardware-accelerated ray-tracing and efficient support for scene traversal graphs (octree, kd-tree), with low-latency memory, it would crush a conventional rasterizer. Unfortunately, this hasn't caught on because it's just not cost-effective yet, such cards would cost a lot more, and to be fair, games don't really need ray-tracing yet. They look good enough. I would estimate we will start seeing some of this technology in 6-8 years.

Also, ray-tracing doesn't lend itself so well to the classic "pipeline" approach. In particular, if you're doing anything non-trivial with your rays, you can't render objects one after the other, changing shaders to modify their appearance. In general, everything has to be available on-chip at once. So that's another problem in that ray-tracing hardware would require a paradigm shift for computer graphics development.

If you compare a software rasterizer with a similarly optimized software ray-tracer, you'll see they're not too far off (for roughly the same image quality, obviously) with the rasterizer winning for average-complexity scenes, and the ray-tracer will tend to pull away after a certain scene complexity. The reason people perceive ray-tracing as horribly slow is because graphics cards are hardware-accelerated for rasterization, which makes it an unfair comparison.

Edited by Bacterius, 18 December 2012 - 03:48 AM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis




