NVIDIA OptiX benchmarks anywhere?


NVIDIA recently released Beta 1 of their "OptiX 2" GPU-accelerated ray tracing engine. Unfortunately I don't have the required hardware to run it, but I'm burning with interest to know how it compares to today's CPU solutions. Is anyone using, testing, or working with it?

Is the approach already general enough that there are no real restrictions on scene complexity? Do you know what kind of acceleration structures version 2 uses? (The version 1 page says they offer BVHs and kd-trees; I wonder whether anything has changed there.) I've done some CUDA in the past, and I'm curious how well the random-memory-access problem is handled for GPU ray tracing (or whether it is at all). Do CPUs still outperform GPUs in some aspects of ray tracing, or is GPU performance so far ahead on all counts that GPUs are simply taking over the ray tracing business as well?

Has anyone run benchmarks on this? Something along the lines of how many primary/secondary rays per second it traces, or what rendering times one can get for a scene of X tris/Y lights? The Geeks3d article "NVIDIA OptiX Ray Tracing SDK Available with GeForce Support" contains a few images with some FPS measurements - I'd love to know if anyone has something more concrete to report!

Best regards,
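
For reference, here is a rough sketch of how I'd derive a primary-rays-per-second figure from a timed launch once the hardware is available. renderFrame() is just a hypothetical placeholder for whatever actually dispatches the ray generation program (e.g. an OptiX context launch over a width x height image); secondary rays would need per-ray counters inside the kernels, so only primary rays are counted here.

// Hedged sketch: renderFrame() is a hypothetical stand-in for the real GPU
// launch; replace its body with the actual call before measuring anything.
#include <chrono>
#include <cstdio>

void renderFrame(unsigned width, unsigned height, unsigned samples)
{
    // Placeholder: dispatch 'samples' primary rays per pixel here.
    (void)width; (void)height; (void)samples;
}

int main()
{
    const unsigned width = 1920, height = 1080, samples = 1;
    const unsigned frames = 100; // average over many launches to smooth out noise

    const auto t0 = std::chrono::high_resolution_clock::now();
    for (unsigned i = 0; i < frames; ++i)
        renderFrame(width, height, samples);
    const auto t1 = std::chrono::high_resolution_clock::now();

    const double seconds = std::chrono::duration<double>(t1 - t0).count();
    const double primaryRays = double(width) * height * samples * frames;
    std::printf("%.2f Mrays/s (primary rays only)\n", primaryRays / seconds / 1e6);
    return 0;
}

That at least makes numbers from different scenes and cards comparable, even if it says nothing about secondary-ray or shading cost.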
