#Actual Bacterius

Posted 18 December 2012 - 03:48 AM

For global illumination, anti-aliasing, depth-of-field, etc., you can easily reach as high as 1000 samples per pixel. Ray-tracing on the GPU is fast, but the fact is, GPUs are not designed for ray-tracing. One of the main characteristics of ray-tracing is that ray-scene collisions are unpredictable (at least after the first bounce), which means very incoherent memory access (which both CPUs and GPUs suck at, to some extent). But their sheer parallel computational power compared to the CPU makes them faster for such embarrassingly parallel problems, so people use them.
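To put numbers behind that sample count, here's a rough sketch of the per-pixel loop of a typical Monte Carlo renderer (all the types and helpers below - Color, Ray, camera_ray, trace - are hypothetical stand-ins, not anyone's actual API). Every effect draws its own random samples, and the bounces inside trace() are where the incoherent memory accesses come from:

    #include <cstdlib>

    // Hypothetical minimal types; a real renderer would define these properly.
    struct Color { float r = 0, g = 0, b = 0; };
    struct Ray   { float ox, oy, oz, dx, dy, dz; };

    static float rnd() { return std::rand() / float(RAND_MAX); }  // uniform in [0,1)

    // Stubs standing in for the real camera model and the recursive path tracer.
    Ray   camera_ray(int x, int y, float jx, float jy) { return Ray{}; }
    Color trace(const Ray& ray) { return Color{}; }  // random bounce directions live here

    // One pixel of a Monte Carlo renderer: anti-aliasing, depth-of-field and GI
    // all consume samples, which is how budgets climb to ~1000 samples per pixel.
    Color render_pixel(int x, int y, int spp)
    {
        Color sum;
        for (int s = 0; s < spp; ++s) {
            Ray ray = camera_ray(x, y, rnd(), rnd()); // jitter within the pixel (AA)
            // a real tracer would also sample the lens here for depth-of-field
            Color c = trace(ray);                     // GI: unpredictable secondary rays
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        sum.r /= spp; sum.g /= spp; sum.b /= spp;     // average the estimates
        return sum;
    }

The key point is that only the very first ray per sample is predictable; everything trace() does after the first hit touches memory in an effectively random order.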

There have been some efforts at making specialized ray-tracing hardware, but they are either too specific (they restrict what can be implemented with them) or not popular enough (the classic chicken-and-egg problem: nobody is using your technology because it doesn't exist, and why invent a technology if nobody is going to use it?). If you designed a graphics card with hardware-accelerated ray-tracing and efficient support for scene traversal structures (octrees, kd-trees), with low-latency memory, it would crush a conventional rasterizer. Unfortunately, this hasn't caught on because it's just not cost-effective yet: such cards would cost a lot more, and, to be fair, games don't really need ray-tracing yet. They look good enough. I would estimate we will start seeing some of this technology in 6-8 years.
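To give an idea of what such hardware would actually have to accelerate, here's a minimal sketch of the traversal inner loop it would run millions of times per frame (hypothetical names; I'm using a simple bounding-box tree here rather than the octree/kd-tree mentioned above, but the access pattern is the same):

    #include <algorithm>
    #include <cfloat>

    // Hypothetical axis-aligned box and tree node for a simple bounding-volume tree.
    struct AABB { float min[3], max[3]; };
    struct Node {
        AABB  box;
        Node* left  = nullptr;       // internal node: two children
        Node* right = nullptr;
        int   first = 0, count = 0;  // leaf: range of triangles to test
    };

    // Classic ray/box "slab" test: does the ray hit the box within [tmin, tmax]?
    bool hit_box(const AABB& b, const float o[3], const float inv_d[3],
                 float tmin, float tmax)
    {
        for (int a = 0; a < 3; ++a) {
            float t0 = (b.min[a] - o[a]) * inv_d[a];
            float t1 = (b.max[a] - o[a]) * inv_d[a];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;
        }
        return true;
    }

    // Recursive traversal: every ray takes a different, data-dependent path
    // through the tree, which is exactly the pointer-chasing, cache-hostile
    // workload that dedicated low-latency traversal hardware would target.
    void traverse(const Node* n, const float o[3], const float inv_d[3])
    {
        if (!n || !hit_box(n->box, o, inv_d, 0.0f, FLT_MAX)) return;
        if (n->count > 0) { /* leaf: intersect triangles [first, first+count) */ return; }
        traverse(n->left,  o, inv_d);
        traverse(n->right, o, inv_d);
    }

Notice that nothing here is arithmetic-bound: it's branches and dependent memory loads, which is why raw FLOPS alone doesn't make a GPU good at it.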

Also, ray-tracing doesn't lend itself so well to the classic "pipeline" approach. In particular, if you're doing anything non-trivial with your rays, you can't render objects one after the other, changing shaders to modify their appearance; in general, everything has to be available on-chip at once. So that's another problem: ray-tracing hardware would require a paradigm shift for computer graphics development.
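The difference is easiest to see in the shape of the two outer loops; a rough sketch with made-up minimal types (none of this is a real API):

    #include <vector>

    // Hypothetical minimal types, just enough to contrast the two loops.
    struct Material {};
    struct Object   { Material material; };
    struct Pixel    { float color = 0; };
    struct Scene    { std::vector<Object> objects; };
    struct Framebuffer { std::vector<Pixel> pixels; };

    void  bind_shader(const Material&) {}                       // stub: GPU state change
    void  draw(const Object&, Framebuffer&) {}                  // stub: rasterize object
    float trace_from(const Pixel&, const Scene&) { return 0; }  // stub: full ray trace

    // Rasterization: the outer loop is over OBJECTS. Each object is drawn
    // independently, so shader state can be swapped per draw and the chip
    // never needs the whole scene at once.
    void render_raster(Scene& scene, Framebuffer& fb)
    {
        for (Object& obj : scene.objects) {
            bind_shader(obj.material);
            draw(obj, fb);
        }
    }

    // Ray-tracing: the outer loop is over PIXELS. Any ray may hit any object
    // (and any material) at any bounce, so the entire scene and all shaders
    // must be resident and addressable for every single ray.
    void render_traced(const Scene& scene, Framebuffer& fb)
    {
        for (Pixel& p : fb.pixels)
            p.color = trace_from(p, scene);
    }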

If you compare a software rasterizer with a similarly optimized software ray-tracer, you'll see they're not too far off (for roughly the same image quality, obviously), with the rasterizer winning for average-complexity scenes and the ray-tracer tending to pull away after a certain scene complexity. The reason people perceive ray-tracing as horribly slow is that graphics cards are hardware-accelerated for rasterization, which makes it an unfair comparison.
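A back-of-envelope illustration of the crossover (my own numbers, purely illustrative): a rasterizer has to process every triangle submitted each frame, so its cost grows roughly linearly with scene size, while a ray-tracer with a good kd-tree does roughly logarithmic work per ray. At 1,000,000 triangles, the rasterizer sets up ~1,000,000 triangles per frame, whereas each ray descends about log2(1,000,000) ≈ 20 tree levels. Double the triangle count and the rasterizer's work doubles, while each ray only pays for one extra level - which is why the ray-tracer pulls away as scenes get complex.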
