OptiX is built on top of CUDA, so it only works on Nvidia hardware. We've used it for years as part of our lightmap baking pipeline, and I would say that it's pretty stable and robust at this point. Early on there were plenty of issues, but they are now on their fifth major version. Performance is good, and CUDA is really nice to work in for the most part.
Unfortunately I've never used Radeon Rays, so I can't directly compare it to OptiX for you. My understanding is that Radeon Rays works by having your code give their API a list of rays, after which the API gives you back the intersections. This makes sense given that they have to abstract over their various CPU and GPU implementations, but it's very different from working with OptiX. OptiX actually has a whole high-level programming model where you write separate programs for generating rays, evaluating ray hits, and evaluating ray misses. The OptiX runtime then does all kinds of stuff behind the scenes to make it all work efficiently, and also to make it appear to your ray generation program as if everything is happening synchronously. It turns out there are a lot of details in getting good performance between ray generation and hit evaluation, since GPUs need to fill large SIMD units and don't have native support for fork/join. In other words, there's definitely value in using Nvidia's black-box implementation if you want the best possible performance on their hardware. With Radeon Rays (or your own triangle/ray intersection shaders) it will be up to you to figure out how to efficiently process your hits and spawn more rays. Or at least, that's my understanding from looking at their docs, samples, and APIs.
DXR is also an appealing option, since it has a programming model that's very similar to OptiX. However, you currently need a pre-release version of Windows, and a very expensive Nvidia Volta-based GPU if you don't want to use their software fallback path.