
Breaking GPU memory limits! At least for ray tracing


Hello everyone, I'd like to discuss these demos.

The CentiLeo guys are showing demos of ray-traced dynamics in a 360-million-polygon scene. Dynamic content is exactly what ray tracing has long been criticised for: http://www.centileo.com/news.html
They support huge amounts of texture and geometry data rendered on the GPU, around 10x more than the GPU's memory can hold. And all of this renders very quickly in HD video on a laptop, although it is not yet a fully featured product but a project in progress.
Their final images (noise-free, full quality) are still around 1000x slower than current game speeds, but these scenes are very large for a GPU to render, and the results are still pretty fast by photorealistic-rendering standards.
And it is great that someone can show the benefits of fast ray tracing on really big scenes, not just the couple of spheres and planes others used to demo.
I would like to discuss the pros and cons of ray tracing again. I'm also curious to hear guesses about how their team did it.

The performance on those scenes, with support for scene editing, blows current DCC apps out of the water. Here's hoping for Autodesk/Blender integrations!

How they did it isn't that complicated at an abstract level: it belongs to a category of techniques called out-of-core algorithms. Of course, all the small details are what make it fast.
The basic outline: instead of calling a function (e.g. TraceRay(start, end)) directly, you write its input parameters into a queue. Once you've accumulated a huge number of pending calls, you sort them by the data they require. For a polygonal scene, you could divide the scene into grid cells sized so that a single cell's geometry fits in GPU memory. You then sort the pending TraceRay commands, select only the ones that pass through a given cell, upload just that cell's polygons to the GPU, and execute all the TraceRay commands for that cell. Repeat until every ray has a result. Once all results are computed, you can resume the code that originally wanted to call TraceRay.
It's all about deferring/batching up work into sensibly-sized chunks.
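To make the batching idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the names (BatchedTracer, load_cell, intersect), the 1-D grid, and the toy plane geometry are all assumptions, not CentiLeo's actual implementation. The point it demonstrates is the pattern from the paragraph above: defer each TraceRay call into a queue, bucket pending rays by the grid cell they cross, then load one cell's geometry at a time and trace all rays against it.

```python
from collections import defaultdict

CELL_SIZE = 10.0  # width of one grid cell along x (toy 1-D grid)

def cells_crossed(start, end, cell=CELL_SIZE):
    """Grid cells a ray's x-extent overlaps (stand-in for real 3-D traversal)."""
    lo, hi = sorted((start[0], end[0]))
    return range(int(lo // cell), int(hi // cell) + 1)

class BatchedTracer:
    def __init__(self):
        self.pending = []   # queued (ray_id, start, end) calls
        self.results = {}   # ray_id -> nearest hit distance

    def trace_ray(self, ray_id, start, end):
        # Defer: record the call's parameters instead of tracing immediately.
        self.pending.append((ray_id, start, end))

    def flush(self, load_cell, intersect):
        # Bucket queued rays by the cells they pass through.
        buckets = defaultdict(list)
        for ray in self.pending:
            for c in cells_crossed(ray[1], ray[2]):
                buckets[c].append(ray)
        # Process one cell at a time: "upload" only that cell's geometry,
        # trace every ray that touches it, keep the nearest hit per ray.
        for cell_id, rays in sorted(buckets.items()):
            geometry = load_cell(cell_id)       # out-of-core load
            for ray_id, start, end in rays:
                hit = intersect(geometry, start, end)
                if hit is not None:
                    prev = self.results.get(ray_id)
                    if prev is None or hit < prev:
                        self.results[ray_id] = hit
        self.pending.clear()
        return self.results

# Toy scene: each cell holds a list of plane positions along x.
planes = {0: [5.0], 1: [12.0]}

def load_cell(cell_id):
    return planes.get(cell_id, [])

def intersect(geometry, start, end):
    lo, hi = sorted((start[0], end[0]))
    hits = [abs(p - start[0]) for p in geometry if lo <= p <= hi]
    return min(hits) if hits else None

tracer = BatchedTracer()
tracer.trace_ray(0, (0.0,), (20.0,))   # crosses cells 0, 1, 2
tracer.trace_ray(1, (11.0,), (15.0,))  # stays inside cell 1
results = tracer.flush(load_cell, intersect)
```

In a real out-of-core renderer, load_cell would stream polygon data from host memory or disk to the GPU, and the sort/bucket step is what amortizes that transfer cost over thousands of rays, which is exactly the "deferring/batching into sensibly-sized chunks" described above.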
