How fast is hardware-accelerated ray-tracing these days?

30 comments, last by Hodgman 8 years, 1 month ago

What I mean by hardware-accelerated is using a GPGPU API such as OpenGL's compute shaders. Now, correct me if I'm wrong, but from what I've heard, the only reason rasterization is faster is hardware optimization. So exactly how fast is ray-tracing on modern GPUs? Would making a game with it be viable? Or are we just not to that point yet?


I used ray-tracing for a game-jam game a while ago (actually sphere-tracing, but let's not get too technical). I didn't use any acceleration structures (BVH/etc.) or intelligent compute-shader-based algorithms, so I just made the game run at ~320x240 for a retro-styled aesthetic. On my low-end GPU, I get 60 frames per second.
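For anyone unfamiliar with the technique mentioned above: sphere tracing marches a ray forward by the value of a signed distance function (SDF), which is always a safe step size. Here's a minimal Python sketch with a single hard-coded sphere — the function names and parameters are illustrative only, not anyone's actual shader code:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along the ray, stepping by the SDF value each iteration.
    Returns the hit distance t, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sphere_sdf(p)
        if d < eps:       # close enough to the surface: report a hit
            return t
        t += d            # the SDF guarantees this step cannot overshoot
        if t > max_dist:  # ray escaped the scene
            break
    return None
```

In a real renderer this loop runs per pixel in a fragment or compute shader, which is why dropping to ~320x240 without acceleration structures can still hit 60 fps.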

For an actually good looking example though, check out Brigade.

It's certainly possible, but whether it's the best choice entirely depends on your game. If you were making a game about refractions and caustics, that's something that's hard to do correctly with rasterization, so a ray-tracer might be more suitable. For typical games that just want good-enough graphics as fast as possible, rasterisation usually wins.
However, modern games are starting to use hybrids -- Unreal4 supports sphere-traced shadows, and cone-traced voxel reflections for example.

Now, correct me if I'm wrong, but from what I've heard, the only reason rasterization is faster is hardware optimization.

You heard something that is wrong. For primary rays (rays coming directly out of the camera), they have high coherence and are perfectly suited to rasterisation. For most scenes, it's simply a more efficient algorithm. It's also well suited to hardware implementation, so combine the better algorithm with purpose-built hardware and it's the default choice (at least for primary rays).

It's certainly possible, but whether it's the best choice entirely depends on your game. If you were making a game about refractions and caustics, that's something that's hard to do correctly with rasterization, so a ray-tracer might be more suitable. For typical games that just want good-enough graphics as fast as possible, rasterisation usually wins.

A hybrid system for using the appropriate technique where they perform best makes sense. But it does sound like a pain to implement.

You heard something that is wrong. For primary rays (rays coming directly out of the camera), they have high coherence and are perfectly suited to rasterisation. For most scenes, it's simply a more efficient algorithm. It's also well suited to hardware implementation, so combine the better algorithm with purpose-built hardware and it's the default choice (at least for primary rays).

Good to know.

It's fast, but it's not fast enough for real-time frame rates. Take a look at Cycles in Blender: it uses the GPU to do the rendering, but it still takes a few seconds to actually finish computing a frame.


But it's not fast enough for real-time frame rates.
https://vimeo.com/124065358

/me slow claps. I'll be a sonova biscuit eater. It still has that noise dithering effect though.

I'd really like to know how many GPUs they use for their Brigade videos.
I once tried a demo game on a single GTX 670 (the one looking a bit like Tomb Raider).
640x480, low frame rate, shadows yes, but almost no radiosity.
More bounces would have required more GPU power; as it was, it simply looked like low-quality direct lighting.
I'd guess they use 4-8 GPUs.

In their new videos they have solved the noise problem, but I'm not sure about the issue of
temporally unstable overall brightness in scenes with mostly indirect lighting.

V-Ray RT GPU is a good reference point, because those guys don't use any "hacks" or prebakes (by default), so it's basically pure GPU ray tracing.

My understanding is that the ray tracing itself isn't really the hard part; building the data structure is the issue. Whether you're using a BVH, octree, binary tree, grid, or something else, the data structure is essential, since it moves your tracing time complexity from O(n) to O(log n). So for large enough static scenes, ray tracing can actually beat standard triangle rasterization. The problem is building that data structure in real time. Sure, simple linear transformations within certain limits can be handled relatively quickly, but vertex skinning, dynamic vertex displacement (like animated water), anything stretching or oozing, and many particle effects (basically, animation) are the real bottleneck at this point.

Really, the problem should be restated: it's not how fast GPUs can ray-trace these days, it's how fast they can build the ray-tracing data structure.
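The O(n) vs O(log n) point above can be illustrated in one dimension. The sketch below (illustrative Python, not any real ray-tracer's code) builds a balanced hierarchy over disjoint 1-D segments and compares how many nodes a pruned tree query touches against a linear scan:

```python
class Node:
    def __init__(self, lo, hi, left=None, right=None, leaf_id=None):
        self.lo, self.hi = lo, hi            # bounds of everything below this node
        self.left, self.right = left, right
        self.leaf_id = leaf_id               # segment index, set for leaves only

def build(segments, ids=None):
    """Recursively build a balanced hierarchy over sorted, disjoint 1-D segments."""
    if ids is None:
        ids = list(range(len(segments)))
    if len(ids) == 1:
        lo, hi = segments[ids[0]]
        return Node(lo, hi, leaf_id=ids[0])
    mid = len(ids) // 2
    left, right = build(segments, ids[:mid]), build(segments, ids[mid:])
    return Node(min(left.lo, right.lo), max(left.hi, right.hi), left, right)

def query(node, x, visited):
    """Find the segment containing x, pruning subtrees whose bounds miss it.
    visited[0] counts every node touched."""
    visited[0] += 1
    if x < node.lo or x > node.hi:
        return None                          # prune: x is outside this subtree
    if node.leaf_id is not None:
        return node.leaf_id
    hit = query(node.left, x, visited)
    return hit if hit is not None else query(node.right, x, visited)

def brute_force(segments, x):
    """Linear scan for comparison: O(n) tests per query."""
    tests = 0
    for i, (lo, hi) in enumerate(segments):
        tests += 1
        if lo <= x <= hi:
            return i, tests
    return None, tests
```

For 1024 segments, the tree query touches on the order of 2*log2(1024) nodes while the scan tests over a thousand; real BVHs do the same pruning with 3-D boxes and rays.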

My understanding is that the ray tracing itself isn't really the hard part; building the data structure is the issue.


While that's a common opinion in the graphics community, I disagree.
Say you have 10,000 dynamic objects: prebuild a tree per object, and at runtime build a top-level tree from only those 10,000 nodes; that's <1 ms on a GPU.
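The two-level idea above can be sketched like this (illustrative Python under my own naming — "TLAS" for the top-level tree, per-object bottom-level trees assumed prebuilt and referenced only by id):

```python
def aabb_union(a, b):
    """Merge two axis-aligned boxes given as ((minx,miny,minz), (maxx,maxy,maxz))."""
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

def build_tlas(instances):
    """Build a top-level tree over per-object AABBs by median split on centroids.
    Each instance is (object_id, aabb); the expensive per-object trees are
    prebuilt offline, so only len(instances) nodes are handled at runtime."""
    if len(instances) == 1:
        obj_id, box = instances[0]
        return {"box": box, "leaf": obj_id}
    # split on the axis with the largest centroid spread
    centroids = [[(b[0][i] + b[1][i]) * 0.5 for i in range(3)] for _, b in instances]
    axis = max(range(3),
               key=lambda i: max(c[i] for c in centroids) - min(c[i] for c in centroids))
    order = sorted(instances,
                   key=lambda inst: (inst[1][0][axis] + inst[1][1][axis]) * 0.5)
    mid = len(order) // 2
    left, right = build_tlas(order[:mid]), build_tlas(order[mid:])
    return {"box": aabb_union(left["box"], right["box"]), "left": left, "right": right}
```

A traversal that reaches a leaf switches into that object's prebuilt tree, so per-frame work scales with object count, not triangle count.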

Research projects rebuild the entire tree every frame, thus they often show similar times for building and tracing.

The real challenge is to parallelize tracing so each thread has a similar amount of work and wavefronts also stay data-coherent.
Animation has a similar workload to rasterization: you may need to refit bounding volumes in the tree, but that's always linear time.
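The refit mentioned above is a single bottom-up pass that recomputes bounds without changing tree topology. A minimal sketch (illustrative Python, dict-based nodes of my own invention; leaves store vertex indices):

```python
def aabb_union(a, b):
    """Merge two axis-aligned boxes given as ((minx,miny,minz), (maxx,maxy,maxz))."""
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

def refit(node, vertices):
    """Bottom-up refit after animation moves the vertices: recompute each box
    from its children, keep the topology. One visit per node, so the whole
    pass is linear in the size of the tree."""
    if "leaf" in node:  # leaf: recompute its box directly from its vertices
        pts = [vertices[i] for i in node["leaf"]]
        node["box"] = (tuple(min(p[k] for p in pts) for k in range(3)),
                       tuple(max(p[k] for p in pts) for k in range(3)))
    else:               # internal: refit children first, then merge their boxes
        refit(node["left"], vertices)
        refit(node["right"], vertices)
        node["box"] = aabb_union(node["left"]["box"], node["right"]["box"])
    return node
```

The trade-off: a refitted tree can degrade in quality as geometry deforms, which is why refit is usually paired with occasional rebuilds.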

I think ray tracing will become the primary solution for high-frequency stuff like sharp reflections and shadows.
For low-frequency stuff (radiosity, glossy reflections, soft shadows) I see much faster methods than path tracing.

This topic is closed to new replies.
