How fast is hardware-accelerated ray-tracing these days?

Yeah, it's worth pointing out that rasterization is costly when you have a lot of geometry (e.g. tessellate a plane until you've got a million triangles per pixel - hence LOD'ing), and ray-tracing is expensive when you have a lot of rays.

In a first-person shooter it's common to use ray-tracing to see what was hit by your shots, as this only requires a single ray. The alternative of rasterizing a single pixel would be insane :lol:
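Here's a minimal sketch of that kind of single-ray hit test, using the standard Möller-Trumbore ray/triangle intersection. It's plain CUDA/C++, and every name in it (Vec3, rayTriangle, the hard-coded triangle) is made up for illustration - a real game would trace against a BVH of the level geometry, not one triangle.

#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

__host__ __device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__host__ __device__ Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
__host__ __device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore: returns true and the hit distance t if the ray
// (orig + t * dir) passes through triangle (v0, v1, v2).
__host__ __device__ bool rayTriangle(Vec3 orig, Vec3 dir,
                                     Vec3 v0, Vec3 v1, Vec3 v2, float* t) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return false;              // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;          // misses in barycentric u
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;      // misses in barycentric v
    *t = dot(e2, q) * inv;                           // distance along the ray
    return *t > EPS;
}

int main() {
    Vec3 muzzle = {0, 0, 0}, aim = {0, 0, 1};            // shot fired down +Z
    Vec3 a = {-1, -1, 5}, b = {1, -1, 5}, c = {0, 1, 5}; // a target triangle
    float t;
    if (rayTriangle(muzzle, aim, a, b, c, &t))
        printf("hit at distance %.2f\n", t);             // prints 5.00
    return 0;
}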

BTW, this question seems a bit off-topic, but I've seen the term "realtime ray-traced shadows" in a few places (in the context of games). Does anybody know what those "shadows" are?

Yes, those are shadows.

There were some games released with shadows traced in screen space to add some finer detail. That's obviously not perfect, but better than the alternatives for the cost. The NV Mech demo, though, is really ray-traced.

BTW, this question seems a bit off-topic, but I've seen the term "realtime ray-traced shadows" in a few places (in the context of games). Does anybody know what those "shadows" are?

If it's an old game, then probably screen-space shadows, similar in implementation to SSAO or screen-space reflections, except you trace a ray towards the light source and search for blockers. This has been around for about 10 years, and is useful for fixing up the "peter panning" that shadow-map bias causes :)
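To make that concrete, here's a toy sketch of the screen-space march, under heavy assumptions: an 8x8 linear depth buffer, a light direction already projected into screen space, and a fixed step count. All names are made up; a real version reconstructs view-space positions and handles perspective properly.

#include <cstdio>
#include <cuda_runtime.h>

#define W 8
#define H 8
#define STEPS 16

// Returns 0 if some sample along the screen-space ray toward the light
// sits closer to the camera than the ray itself (a blocker), 1 otherwise.
__device__ float screenSpaceShadow(const float* depth, int px, int py,
                                   float lightDx, float lightDy, float lightDz) {
    float x = (float)px, y = (float)py;
    float z = depth[py * W + px];
    for (int i = 0; i < STEPS; ++i) {
        x += lightDx; y += lightDy; z += lightDz;          // step toward the light
        int sx = (int)x, sy = (int)y;
        if (sx < 0 || sx >= W || sy < 0 || sy >= H) break; // ray left the screen
        float sceneZ = depth[sy * W + sx];
        if (sceneZ < z - 0.01f) return 0.0f;               // blocker found: shadowed
    }
    return 1.0f;                                           // reached the light unoccluded
}

__global__ void shadowPass(const float* depth, float* shadow,
                           float ldx, float ldy, float ldz) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= W || py >= H) return;
    shadow[py * W + px] = screenSpaceShadow(depth, px, py, ldx, ldy, ldz);
}

int main() {
    float hostDepth[W * H];
    for (int i = 0; i < W * H; ++i) hostDepth[i] = 10.0f;  // flat floor...
    hostDepth[3 * W + 3] = 5.0f;                           // ...with one tall blocker
    float *dDepth, *dShadow;
    cudaMalloc((void**)&dDepth, sizeof(hostDepth));
    cudaMalloc((void**)&dShadow, sizeof(hostDepth));
    cudaMemcpy(dDepth, hostDepth, sizeof(hostDepth), cudaMemcpyHostToDevice);
    shadowPass<<<dim3(1, 1), dim3(8, 8)>>>(dDepth, dShadow, -1.0f, 0.0f, -0.5f);
    float hostShadow[W * H];
    cudaMemcpy(hostShadow, dShadow, sizeof(hostShadow), cudaMemcpyDeviceToHost);
    printf("pixel (5,3) lit? %.0f\n", hostShadow[3 * W + 5]);  // 0: the blocker shadows it
    cudaFree(dDepth); cudaFree(dShadow);
    return 0;
}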

If it's a new game, then probably either:
* sphere tracing against a signed distance field of the scene (UE4 supports this AFAIK; there's a sketch after this list), or
* cone tracing against a voxelized version of the scene (NVidia loves to show this off).
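
Since the question was what these shadows actually are, here's a minimal sphere-tracing sketch against an analytic SDF - one hard-coded sphere, whereas UE4 samples precomputed distance-field volumes. All names are illustrative.

#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

__host__ __device__ float length3(Vec3 v) { return sqrtf(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a sphere of radius 1 centred at (0, 0, 5).
__host__ __device__ float sceneSDF(Vec3 p) {
    Vec3 d = {p.x, p.y, p.z - 5.0f};
    return length3(d) - 1.0f;
}

// Sphere tracing: each step advances by the distance to the nearest
// surface, which by definition cannot overshoot. Returns the hit
// distance, or -1 if the ray escapes.
__host__ __device__ float sphereTrace(Vec3 orig, Vec3 dir) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = {orig.x + dir.x * t, orig.y + dir.y * t, orig.z + dir.z * t};
        float d = sceneSDF(p);
        if (d < 1e-4f) return t;     // close enough: surface hit
        t += d;                      // safe step: nothing is nearer than d
        if (t > 100.0f) break;       // ray escaped the scene
    }
    return -1.0f;
}

int main() {
    Vec3 orig = {0, 0, 0}, dir = {0, 0, 1};
    printf("hit at t = %.3f (expected ~4.0)\n", sphereTrace(orig, dir));
    return 0;
}

Shadows come from the same routine: from each shaded point you sphere-trace a second ray towards the light, and the pixel is in shadow if that ray hits anything first.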

It should be noted that:

* right now UE4 only supports mostly-static signed distance fields - you can update them, but not animate anything. The upcoming Dreams, though, is all compute and signed distance fields, so all the shadows are traced there

* Nvidia's implementation of sparse voxel octrees is fantastically, incredibly slow. But Crytek does the same thing much, much faster, and will hopefully tell people how at SIGGRAPH or something

This paper about Dreams is a must-read:
http://advances.realtimerendering.com/s2015/AlexEvans_SIGGRAPH-2015-sml.pdf

Also this one, where they go through every idea that came up while optimizing voxel cone tracing:
http://fumufumu.q-games.com/archives/TheTechnologyOfTomorrowsChildrenFinal.pdf

* cone tracing against a voxelized version of the scene (NVidia loves to show this off).
* Nvidia's implementation of sparse voxel octrees is fantastically, incredibly slow. But Crytek does the same thing much, much faster, and will hopefully tell people how at SIGGRAPH or something

So they aren't voxelizing the scene every frame?

That depends on where you saw the term mentioned and who "they" are...

I wrote a raytracer in CUDA for a 256x256x256 signed distance field that could run at 15-60 fps at a resolution of 1920x1080 on an Nvidia GeForce 840M (similar to a GeForce 8800 GTX in floating-point performance), with ray-traced shadows for one light source and simple Phong shading.

Terrain could be modified with union/intersect/difference operations in real time (sketched below).
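
In case it's not obvious how those edits can be cheap: CSG on signed distance fields is just a per-voxel min/max. A tiny sketch, with made-up names:

#include <cstdio>
#include <cmath>

// CSG on two signed distances a and b sampled at the same point.
__host__ __device__ float sdfUnion(float a, float b)     { return fminf(a, b); }
__host__ __device__ float sdfIntersect(float a, float b) { return fmaxf(a, b); }
__host__ __device__ float sdfSubtract(float a, float b)  { return fmaxf(a, -b); } // carve b out of a

int main() {
    float a = -0.3f, b = 0.2f;   // inside shape A, outside shape B
    printf("union %.2f  intersect %.2f  subtract %.2f\n",
           sdfUnion(a, b), sdfIntersect(a, b), sdfSubtract(a, b));
    return 0;
}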

I used texture memory for the field because that way I could use the GPU's interpolation units on neighboring distance-field voxels, which resulted in smooth terrain and way more than 5 fps.
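Here's a cut-down sketch of that texture setup - a toy 8x8x8 field instead of 256^3, made-up names, and a single sample from a kernel rather than a full ray-march loop. The point is the 3D cudaArray plus a texture object with cudaFilterModeLinear, which makes the hardware blend the 8 surrounding voxels for free:

#include <cstdio>
#include <cuda_runtime.h>

#define N 8  // toy 8x8x8 field instead of 256^3

__global__ void sampleField(cudaTextureObject_t sdf, float x, float y, float z) {
    // tex3D with linear filtering blends the 8 surrounding voxels.
    // The +0.5f shifts to texel centres for unnormalized coordinates.
    float d = tex3D<float>(sdf, x + 0.5f, y + 0.5f, z + 0.5f);
    printf("distance at (%.1f, %.1f, %.1f) = %.3f\n", x, y, z, d);
}

int main() {
    // Fill a toy distance field: signed distance to the plane x = 4.
    float host[N * N * N];
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x)
                host[(z * N + y) * N + x] = (float)x - 4.0f;

    // Copy into a 3D cudaArray (required for 3D textures).
    cudaArray_t arr;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaExtent extent = make_cudaExtent(N, N, N);
    cudaMalloc3DArray(&arr, &desc, extent);

    cudaMemcpy3DParms copy = {};
    copy.srcPtr = make_cudaPitchedPtr(host, N * sizeof(float), N, N);
    copy.dstArray = arr;
    copy.extent = extent;
    copy.kind = cudaMemcpyHostToDevice;
    cudaMemcpy3D(&copy);

    // Texture object with hardware trilinear interpolation.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;
    cudaTextureDesc tex = {};
    tex.filterMode = cudaFilterModeLinear;   // the interpolation units
    tex.addressMode[0] = tex.addressMode[1] = tex.addressMode[2] = cudaAddressModeClamp;
    tex.readMode = cudaReadModeElementType;
    cudaTextureObject_t sdf;
    cudaCreateTextureObject(&sdf, &res, &tex, nullptr);

    sampleField<<<1, 1>>>(sdf, 2.5f, 3.0f, 3.0f);  // halfway between voxels: prints -1.500
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(sdf);
    cudaFreeArray(arr);
    return 0;
}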

So maybe in a few years fast GPUs will be common enough to render worlds this way, but for now I'd raytrace different data structures.

The thing that I'm interested in is how they voxelize geometry via the graphics pipeline. How is the write to a 3D texture (is it a 3D texture?) implemented, and how does this work with the depth buffer?
