GPU Ray Trace SDKs

I want to move my CPU-based ray tracer to the GPU using one of the GPU ray trace SDKs. I've never used any of them, and I don't know much about them. Has anyone here used them, or does anyone know enough about them to recommend one over another?

I have an AMD GPU, so I was leaning towards Radeon Rays, although I wouldn't mind picking up a cheaper NVIDIA card if they have a better ray trace SDK. From what I can tell, Radeon Rays works on any hardware, while NVIDIA OptiX is limited to NVIDIA hardware.

What would you suggest?

OptiX is built on top of CUDA, so it only works on Nvidia hardware. We've used it for years as part of our lightmap baking pipeline, and I would say that it's pretty stable and robust at this point. Early on there were plenty of issues, but they are now on their fifth major version. Performance is good, and CUDA is really nice to work in for the most part.

Unfortunately I've never used Radeon Rays, so I can't directly compare it to OptiX for you. My understanding is that Radeon Rays works by having your code give their API a list of rays, and then the API gives you back the intersections. This makes sense given that they have to abstract over their various CPU and GPU implementations, but it's very different from working with OptiX. OptiX actually has a whole high-level programming model where you write separate programs for generating rays, evaluating ray hits, and evaluating ray misses. The OptiX runtime then does all kinds of stuff behind the scenes to make it all work efficiently, and also to make it appear to your ray generation program as if everything is happening synchronously. It turns out there are a lot of details in getting good performance between ray generation and hit evaluation, since GPUs need to fill large SIMD units and don't have native support for fork/join. In other words, there's definitely value in using Nvidia's black-box implementation if you want the best possible performance on their hardware. With Radeon Rays (or your own triangle/ray intersection shaders) it will be up to you to figure out how to efficiently process your hits and spawn more rays. Or at least, that's my understanding from looking at their docs, samples, and APIs. :)
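
To make that concrete, here's a rough sketch of what those separate programs look like, in the style of the classic (pre-RTX) OptiX device-side API. I'm writing it from memory rather than copying from the SDK, so treat the variable semantics, entry-point names, and camera setup as approximate placeholders and check the OptiX SDK samples for the real thing; the point is the split into ray generation, closest-hit, and miss programs, with rtTrace looking synchronous from the ray generation side.

```cpp
// Rough sketch of the classic (pre-RTX) OptiX device-side programming model:
// separate ray generation, closest-hit and miss programs, compiled to PTX
// with nvcc and wired into an OptiX context on the host (not shown).
// Written from memory of the SDK samples -- treat names/semantics as approximate.
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData { float3 color; };

rtDeclareVariable(rtObject, top_object, , );              // scene root, set by the host
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );  // pixel coordinate of this thread
rtDeclareVariable(PerRayData, payload, rtPayload, );      // per-ray data, visible to hit/miss
rtBuffer<float4, 2> output_buffer;                        // one output value per pixel

RT_PROGRAM void raygen()
{
    // Build a camera ray for this pixel (real camera math omitted).
    float3 origin = make_float3(0.0f, 0.0f, -5.0f);
    float3 dir    = make_float3(0.0f, 0.0f, 1.0f);
    Ray ray = make_Ray(origin, dir, /*ray type*/ 0, 0.001f, RT_DEFAULT_MAX);

    // Looks synchronous from here; the runtime handles scheduling/SIMD packing.
    PerRayData prd;
    rtTrace(top_object, ray, prd);

    output_buffer[launch_index] = make_float4(prd.color, 1.0f);
}

RT_PROGRAM void closest_hit()
{
    // Shade the hit point (or spawn secondary rays with further rtTrace calls).
    payload.color = make_float3(1.0f, 0.0f, 0.0f);
}

RT_PROGRAM void miss()
{
    payload.color = make_float3(0.0f, 0.0f, 0.0f);        // background color
}
```

The host side then compiles these to PTX, attaches them to a context along with the scene geometry, and launches over the output resolution; all of the scheduling between ray generation and hit evaluation happens inside the runtime.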

DXR (DirectX Raytracing) is also an appealing option, since it has a programming model that's very similar to OptiX. However, you currently need a pre-release version of Windows, and a very expensive Nvidia Volta-based GPU if you don't want to use their software fallback path.

Since I currently generate the same number of rays for every pixel, I think I could work with Radeon Rays, although I do prefer as high-level a programming model as I can get. My brother owns an Nvidia card that has about the same performance as my AMD one, so I guess I can test it on his machine too. I probably won't enable CPU devices for the Radeon Rays version, just to simplify things and in hopes of better parallelism. If I run into performance issues from bad lower-level programming, maybe I'll get an Nvidia card to try OptiX.
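
A fixed number of rays per pixel should map pretty directly onto that rays-in/intersections-out model. Based on the RadeonRays 2.x docs and samples, a single batched query would look roughly like the sketch below; the exact signatures, the ray layout, and the scene/mesh setup (omitted here, along with error handling) are approximate and worth checking against the SDK headers.

```cpp
// Rough sketch of one batched "rays in, intersections out" query.
// Based loosely on the RadeonRays 2.x C++ API; signatures and the ray
// layout are from memory -- verify against the SDK headers before use.
#include <vector>
#include "radeon_rays.h"

using namespace RadeonRays;

// 'api' is an IntersectionApi with the scene meshes already attached
// and committed (setup omitted here).
void TracePrimaryRays(IntersectionApi* api, int width, int height)
{
    const int num_rays = width * height;  // one primary ray per pixel

    // Fill one ray per pixel on the host (camera math omitted).
    std::vector<ray> rays(num_rays);
    for (int i = 0; i < num_rays; ++i)
    {
        // ray(origin, direction, max_t) -- constructor assumed from the samples.
        rays[i] = ray(float3(0.0f, 0.0f, -5.0f), float3(0.0f, 0.0f, 1.0f), 1000.0f);
    }

    // Upload the rays, allocate room for the results, and run the query.
    Buffer* ray_buffer = api->CreateBuffer(num_rays * sizeof(ray), rays.data());
    Buffer* hit_buffer = api->CreateBuffer(num_rays * sizeof(Intersection), nullptr);
    api->QueryIntersection(ray_buffer, num_rays, hit_buffer, nullptr, nullptr);

    // Read the intersections back; shading and secondary-ray generation are
    // then entirely up to the application.
    Event* map_event = nullptr;
    Intersection* hits = nullptr;
    api->MapBuffer(hit_buffer, kMapRead, 0, num_rays * sizeof(Intersection),
                   reinterpret_cast<void**>(&hits), &map_event);
    map_event->Wait();
    api->DeleteEvent(map_event);
    // ... use hits[i].shapeid / hits[i].primid / hits[i].uvwt per pixel ...
    api->UnmapBuffer(hit_buffer, hits, nullptr);

    api->DeleteBuffer(ray_buffer);
    api->DeleteBuffer(hit_buffer);
}
```

Each bounce would then just be another fill-rays / QueryIntersection pass over whatever rays are still alive, which is the part OptiX would otherwise handle for me.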

I wanted to try DXR, but I don't want an unstable system and don't want to buy an expensive card.
