Hello all, I want to start an exploratory discussion on ray tracing voxels and get some ideas on how it works.
A few weeks ago I found this game called Voxelnauts, which is being developed by Jon Olick, formerly of id Software. A short video of the game in action can be found here and a longer trailer here. At first glance it's a typical Minecraft clone...but as I studied the videos and screenshots closer, the rendering technology looks much different from most voxel games.
Aside from the higher resolution voxels, what I noticed is that there isn't per-face shading on each individual voxel. Typically, the different sides of the cubes are lighter or darker depending on whether or not they are facing the light. Here, each voxel instead appears to be lit depending on its distance from the light source.
I did a ton of digging, and found that they are ray tracing sparse voxel octrees (SVOs).
My goal is NOT to create yet another Minecraft clone with infinite modifiable worlds, but rather to imitate the art style. My assumption is that polygonizing meshes at this resolution would wreak havoc on GPUs as scenes get more complex...which is why I am looking into ray tracing (if I'm wrong, let me know). The problem is...all the information on the internet is basically Masters and PhD theses/research papers on SVO construction/traversal instead of rendering implementations. And they are all focused on extremely high resolution data sets, realistic rendering, or offline rendering. There's not much information on this kind of interactive, semi-high-resolution rendering.
Here is what I know:
1) Voxelnauts uses CUDA to perform ray tracing on the GPU; this is essentially required to get interactive frame rates.
2) Ray tracing works by looping over each pixel on the screen and casting a ray against an octree to find the voxel, and thus the color, for that pixel.
3) This is a very complicated way of rendering 3d graphics
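To make sure I have point 2 straight, here's a minimal sketch of the per-pixel part in Python (the real thing would be a CUDA kernel or shader, and the camera convention here — looking down -Z, pinhole projection — is just my assumption, not necessarily what Voxelnauts does):

```python
import math

def generate_ray(px, py, width, height, fov_deg=60.0):
    """Return a normalized camera-space ray direction for pixel (px, py).

    Assumes a pinhole camera looking down -Z with +Y up; this is one
    common convention, not a known detail of any particular engine.
    """
    aspect = width / height
    half_h = math.tan(math.radians(fov_deg) * 0.5)
    # Map the pixel center into [-1, 1] screen space (Y flipped so up is +Y).
    sx = (2.0 * (px + 0.5) / width - 1.0) * aspect * half_h
    sy = (1.0 - 2.0 * (py + 0.5) / height) * half_h
    dx, dy, dz = sx, sy, -1.0
    inv_len = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx * inv_len, dy * inv_len, dz * inv_len)

def render(width, height, trace):
    """The outer loop from point 2: one ray per pixel, traced by `trace`."""
    return [[trace(generate_ray(x, y, width, height))
             for x in range(width)] for y in range(height)]
```

On the GPU each pixel's ray would be generated and traced by its own thread instead of this serial double loop, but the math is the same.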
One method I found is to basically use 3D textures (or SVOs, I suppose) for the voxel data, render a cube for each "object" using standard triangles, then ray march through the voxels in the fragment shader. This is probably the most straightforward way to do things, but I'm not sure if it's a viable method. Full 3D textures would have to be stored, which could be slow and take up a lot of memory, and tracing every fragment of the cube even when it covers empty space could be slow.
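For the fragment-shader version, the standard way to step through a dense grid is the 3D DDA from Amanatides and Woo's "A Fast Voxel Traversal Algorithm". Here's a CPU sketch in Python, where a set of occupied cells stands in for the 3D texture the shader would sample (the names and structure here are mine, not from any particular engine):

```python
import math

def trace_grid(origin, direction, voxels, grid_size):
    """Step cell-by-cell through a dense voxel grid with a 3D DDA.

    `voxels` is a set of occupied (x, y, z) cells -- a stand-in for a
    3D texture. Returns the first occupied cell the ray enters, or
    None. Assumes `origin` starts inside the grid.
    """
    x, y, z = (int(math.floor(c)) for c in origin)
    step = tuple(1 if d > 0 else -1 for d in direction)
    t_max, t_delta = [], []  # per-axis: ray t to next boundary, t per cell
    for o, d in zip(origin, direction):
        if d == 0:
            t_max.append(float('inf'))
            t_delta.append(float('inf'))
        else:
            cell = math.floor(o)
            boundary = cell + 1 if d > 0 else cell
            t_max.append((boundary - o) / d)
            t_delta.append(abs(1.0 / d))
    while 0 <= x < grid_size and 0 <= y < grid_size and 0 <= z < grid_size:
        if (x, y, z) in voxels:
            return (x, y, z)
        # Advance along whichever axis hits its cell boundary first.
        axis = t_max.index(min(t_max))
        if axis == 0:
            x += step[0]
        elif axis == 1:
            y += step[1]
        else:
            z += step[2]
        t_max[axis] += t_delta[axis]
    return None
```

The nice property is that it visits exactly the cells the ray passes through, one per iteration, so the per-fragment cost is bounded by the grid resolution along the ray. The downside I mentioned still applies: the dense grid itself eats memory, and rays through empty space still step cell by cell.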
The alternative is to go all out and ray trace against a massive octree to render full scenes, though I'm not entirely sure how well that would work. It seems like a bad idea to store a large scene in one giant octree, but that's what Voxelnauts appears to be doing.
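As I understand it, the reason the giant-octree approach is even feasible is that the "sparse" part prunes empty space and merges solid regions, so memory scales with surface detail rather than volume. A toy construction sketch in Python (my own representation, purely to show the pruning; a GPU version would flatten this into a buffer of child masks and offsets rather than nested lists):

```python
def build_svo(voxels, size):
    """Build a sparse voxel octree over a size^3 region (size = power of 2).

    `voxels` is a set of occupied (x, y, z) cells. Returns True for a
    fully solid region, None for empty space, or a list of 8 children.
    Toy layout for illustration only.
    """
    def build(x, y, z, s):
        if s == 1:
            return True if (x, y, z) in voxels else None
        half = s // 2
        children = [build(x + dx * half, y + dy * half, z + dz * half, half)
                    for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
        if all(c is None for c in children):
            return None   # prune empty octants -- the "sparse" part
        if all(c is True for c in children):
            return True   # merge fully solid octants into one leaf
        return children
    return build(0, 0, 0, size)
```

Traversal then only descends into non-empty children, which is why rays skip large empty regions in a few steps instead of marching cell by cell like the dense-grid DDA.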
I'm looking for ideas, tips, comments, information, experience, anything that might give me a better understanding of how this works, or alternative ideas. I've done a ton of googling and have read several of the thesis papers on this stuff...but it's still foggy to me.