Photon mapping

13 comments, last by greenhybrid 17 years, 4 months ago
Quote:primary rays are better processed through a rasterizer than through actual path tracing


Which realtime ray tracer are you referring to? I know of several fast ray tracers (including one I wrote), but none of them uses rasterization to optimize the 'first hit' for primary rays.

Using rasterization would make it rather hard to have adaptive supersampling, by the way.
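To illustrate why adaptive supersampling fits ray tracing so naturally (and is awkward when the first hit comes from a rasterized full-screen pass), here is a minimal sketch; `trace()` is a stand-in for a real tracer, and all names and thresholds are illustrative:

```python
# Minimal adaptive supersampling sketch: each pixel is sampled at its
# corners, and only pixels whose corner samples disagree get refined.
# This per-ray refinement is trivial in a ray tracer but hard to
# retrofit onto a rasterized first-hit pass.

def trace(x, y):
    """Stand-in ray tracer: a hard vertical edge at x = 0.5."""
    return 1.0 if x < 0.5 else 0.0

def sample_pixel(x0, y0, size, depth, max_depth=3, threshold=0.1):
    """Sample the square's corners; subdivide recursively where they disagree."""
    corners = [trace(x0, y0), trace(x0 + size, y0),
               trace(x0, y0 + size), trace(x0 + size, y0 + size)]
    if depth >= max_depth or max(corners) - min(corners) <= threshold:
        return sum(corners) / 4.0
    h = size / 2.0
    return sum(sample_pixel(x, y, h, depth + 1)
               for x in (x0, x0 + h)
               for y in (y0, y0 + h)) / 4.0

# A pixel straddling the edge gets refined; a flat one is done in 4 rays.
edge_pixel = sample_pixel(0.4, 0.0, 0.2, depth=0)
flat_pixel = sample_pixel(0.0, 0.0, 0.2, depth=0)
```

A production tracer would cache corner samples shared between neighbours instead of re-tracing them, but the control flow is the point: refinement decisions are made per ray, after tracing.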
Quote:Original post by phantomus
Quote:primary rays are better processed through a rasterizer than through actual path tracing


Which realtime ray tracer are you referring to? I know of several fast ray tracers (including one I wrote), but none of them uses rasterization to optimize the 'first hit' for primary rays.

Using rasterization would make it rather hard to have adaptive supersampling, by the way.


Sorry to kind of hijack the thread...

There were a couple of papers at IEEE RT '06 that I saw combining rasterization and ray tracing. Pixar used this method in Cars, though that method is not realtime. The coherence of primary rays is really well-suited to a raster engine. A few posters at the conference did similar things on Cell processors as well.

Don't get me wrong, ray-object intersection w/ Kd-trees is very fast, but hardware rasterization is still winning out for primary rays. I think if raytracing is going to be adopted in realtime, it's going to at least be merged with rasterization in a similar way first. Then, if special-purpose hardware comes out for raytracing, maybe we'll see a full switch.

I agree with you on adaptive supersampling. I don't think this issue was addressed in any implementation I saw. Pixar uses sub-pixel geometry with many passes, so I think that's how they get around it - that won't be the case for real-time. I may be misremembering, but I think I recall 16 spp with a jittered camera.
Quote:I think if raytracing is going to be adopted in realtime, it's going to at least be merged with rasterization in a similar method first.


I fully agree. I recently gave some lectures on ray tracing and its role in future games, and came to the same conclusion with the students: ray tracing is not going to take over in one revolutionary switch; it needs to be introduced gently, preferably as an 'optional feature'. Perhaps that will convince NVidia to add hardware support for it. :)

I also suppose you're right that combining rasterization and ray tracing for the first hit is probably faster than full kd-tree traversal, even for full-screen ray tracing (as opposed to ray tracing some objects), especially for the kind of scene complexity we are looking at at the moment (<50k visible triangles). I guess the reason it isn't used by many tracers is that it only speeds up primary rays, while everyone is shooting for recursive ray tracing. Goals contradict practice, though: real-time ray tracers mostly seem to use simple scenes with few recursive effects.
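The hybrid idea discussed here can be sketched in miniature: "rasterize" the primary hit (here a trivial analytic stand-in for a GPU pass over one sphere), then trace only the secondary rays. Everything in this snippet - the scene, the names, the camera - is made up for illustration:

```python
# Toy hybrid renderer: the first hit comes from a "rasterizer" pass
# (here computed analytically), and only the secondary rays (a single
# shadow ray per pixel) go through actual tracing.
import math

def ray_sphere(orig, direction, center, radius):
    """Nearest positive hit distance t, or None (ignores rays starting inside)."""
    oc = [o - c for o, c in zip(orig, center)]
    b = sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def render(width, height):
    eye = (0.0, 0.0, 0.0)
    sphere_c, sphere_r = (0.0, 0.0, -3.0), 1.0
    light = (5.0, 5.0, 0.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # --- "rasterized" first hit (would be a hardware pass) ---
            dx = (x + 0.5) / width - 0.5
            dy = (y + 0.5) / height - 0.5
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            d = (dx / length, dy / length, -1.0 / length)
            t = ray_sphere(eye, d, sphere_c, sphere_r)
            if t is None:
                row.append(0.0)  # background
                continue
            # --- secondary ray: the only "real" ray tracing left ---
            p = tuple(e + t * di for e, di in zip(eye, d))
            to_light = tuple(l - pi for l, pi in zip(light, p))
            ll = math.sqrt(sum(v * v for v in to_light))
            ldir = tuple(v / ll for v in to_light)
            shadow_orig = tuple(pi + 1e-4 * li for pi, li in zip(p, ldir))
            in_shadow = ray_sphere(shadow_orig, ldir, sphere_c, sphere_r) is not None
            n = tuple((pi - ci) / sphere_r for pi, ci in zip(p, sphere_c))
            diffuse = max(0.0, sum(ni * li for ni, li in zip(n, ldir)))
            row.append(0.0 if in_shadow else diffuse)
        image.append(row)
    return image
```

In a real engine the inner "first hit" block would be a G-buffer filled by the GPU (position, normal, material per pixel), and the kd-tree would only ever see shadow, reflection and refraction rays.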

I believe one speaker for RT06 complained about this: Everybody is speeding up the 'core process', but nobody is exploiting the benefits of ray tracing, which in the end causes real time ray tracing to be a mere academic excercise instead of the visual breakthrough that everyone hopes for.

By the way, I just started a project with some students to build a real time ray tracing benchmark to give the 'general public' a better idea of what ray tracing can do and what kind of performance can be expected. We ordered an 8-core machine to test stuff on (Moore will help us out once the benchmark is released), and on this machine, we will have real-time performance (~30 fps). Our course has both programmers and visual artists, so it's an interesting team.
Actually, in my tests, computing the first hit is usually faster with raytracing than with rasterization, at least with Reyes renderers (i.e. what Pixar uses, and what that Cars paper mentioned is about).

The benefits of Reyes rendering come when you want to do antialiasing, motion blur and depth of field. Because the Reyes algorithm separates sampling from shading, you can achieve very high-quality antialiasing (we typically use up to 400 samples per pixel for rendering fur) with minuscule overhead. Turning on motion blur is essentially free. That's something you'll never get out of a raytracer.
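The sampling/shading split described above can be shown with a toy counter; the grid size, sample counts and shader are all made up, but the cost structure is the point - shading runs once per diced vertex while the many pixel samples are mere lookups:

```python
# Toy illustration of why Reyes antialiasing is cheap: the surface
# shader runs once per micropolygon grid vertex; the stochastic pixel
# samples afterwards only *look up* already-shaded values.
import random

shade_calls = 0

def shade_vertex(u, v):
    """Pretend surface shader; count how often it actually runs."""
    global shade_calls
    shade_calls += 1
    return (u + v) * 0.5

def dice_and_shade(grid_res):
    """Dice a patch into a grid_res x grid_res vertex grid and shade it."""
    return [[shade_vertex(u / (grid_res - 1), v / (grid_res - 1))
             for u in range(grid_res)] for v in range(grid_res)]

def sample(grid, samples_per_pixel, pixels):
    """Sampling pass: every sample is just a lookup into the shaded grid."""
    random.seed(1)
    n = len(grid)
    total = 0.0
    for _ in range(pixels * samples_per_pixel):
        total += grid[random.randrange(n)][random.randrange(n)]
    return total / (pixels * samples_per_pixel)

grid = dice_and_shade(16)        # 256 shader calls, exactly once
for spp in (4, 64, 400):         # 400 spp, as mentioned for fur
    sample(grid, spp, pixels=64)
# shade_calls is still 256: shading cost did not grow with spp
```

A raytracer, by contrast, typically re-shades at every hit point, so 400 samples per pixel means roughly 400x the shading work.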

The point made in the Cars paper is that a hybrid rasterization/raytracing algorithm works well for film production, but is a nightmare to maintain as a code base, because you've essentially got two complete renderers stitched together.
I think "hardware triangles" may beat ray-traced primaries in the triangle-count region phantomus mentioned (<50k).

But having a look at CAD systems or terrain visualisation with >50M triangles, I think ray tracers are unbeatable. I've just written a terrain renderer using ray tracing (some of you might have seen my IOTD the last few days) which is able to render 134M triangles on a 512MB machine (the terrain stored in <400MB; I project 1G triangles on a 4GB machine) at a semi-interactive framerate (current results on my Athlon XP 1800+ [traversal not yet fully optimized]: 0.5-3 fps @ 800x600, AFAIR).
Doubling both the width and the height of the heightmap (so 4x the triangles in total) just adds one traversal step in my ray tracer (in a tree of recursion depth 13, performance drops by only about 1/13), while it may increase the overdraw in a rasterizer by a factor of two, and 4x as many triangles have to be drawn.
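The scaling argument above can be checked with a little arithmetic, assuming (as the post implies) that traversal cost grows roughly linearly with quadtree depth; the heightmap sizes are illustrative:

```python
# Quadtree-depth arithmetic behind the "one extra traversal step" claim.
import math

def traversal_depth(heightmap_size):
    """Depth of a quadtree over a square heightmap of the given side length."""
    return int(math.log2(heightmap_size))

d1 = traversal_depth(8192)     # depth 13
d2 = traversal_depth(16384)    # doubling width and height: depth 14
# If cost is proportional to depth, the slowdown is ~14/13 (about 8%),
# while a rasterizer must push 4x as many triangles.
slowdown = d2 / d1
```

This is what makes the comparison lopsided at high triangle counts: the tracer's cost is logarithmic in terrain resolution, the rasterizer's is linear in triangle count.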

In conclusion, I'd say that a ray tracer (even a pure software / main-processor one) is able to beat rasterizer hardware.

