Voxel Ray Tracing

8 comments, last by CirdanValen 8 years, 8 months ago

Hello all, I want to start an exploratory discussion on ray tracing voxels and get some ideas on how it works.

A few weeks ago I found a game called Voxelnauts, which is being developed by Jon Olick, formerly of id Software. A short video of the game in action can be found here, and a longer trailer here. At first glance it's a typical Minecraft clone... but as I studied the videos and screenshots more closely, the rendering technology looks quite different from most voxel games.


Aside from the higher-resolution voxels, what I noticed is that the individual voxels aren't shaded per face. Typically, the different sides of the cubes are lighter or darker depending on whether or not they face the light; here, each voxel instead appears to be lit according to its distance from the light source.

I did a ton of digging, and found that they are raytracing sparse voxel octrees.

My goal is NOT to create yet another Minecraft clone with infinite modifiable worlds, but rather to imitate the art style. My assumption is that polygonizing meshes at this resolution would wreak havoc on GPUs as scenes get more complex, which is why I am looking into ray tracing (if I'm wrong, let me know). The problem is that all the information on the internet is basically Master's and PhD theses/research papers on SVO construction and traversal rather than rendering implementations. And they are all focused on extremely high-resolution data sets, realistic rendering, or offline rendering. There isn't much information on this kind of interactive, semi-high-resolution rendering.

Here is what I know:

1) Voxelnauts uses CUDA to perform ray tracing on the GPU; this is essentially required to get interactive frame rates

2) Ray tracing works by looping through each pixel on the screen and casting a ray against an octree to find the voxel, and thus the color, of that pixel.

3) This is a very complicated way of rendering 3d graphics
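The per-pixel loop in point 2 can be sketched on the CPU like this. It's a deliberately naive fixed-step march through a tiny dense grid rather than an octree, and every name in it is illustrative, but it shows the overall shape: one ray per pixel, first solid voxel wins.

```python
import math

# Minimal sketch of point 2: one ray per pixel, first solid voxel wins.
# The "scene" is a tiny dense 8x8x8 boolean grid with a solid floor at
# z == 0; a real renderer would query an SVO on the GPU instead, and
# would use a proper voxel traversal rather than tiny fixed steps.

N = 8
grid = [[[z == 0 for z in range(N)] for y in range(N)] for x in range(N)]

SKY = '.'

def cast_ray(ox, oy, oz, dx, dy, dz, max_t=32.0, step=0.05):
    # Naive fixed-step march: advance a little, test the voxel under the
    # sample point, repeat until a hit or the ray runs out.
    t = 0.0
    while t < max_t:
        ix = int(math.floor(ox + dx * t))
        iy = int(math.floor(oy + dy * t))
        iz = int(math.floor(oz + dz * t))
        if 0 <= ix < N and 0 <= iy < N and 0 <= iz < N and grid[ix][iy][iz]:
            return (ix, iy, iz)        # first solid voxel along the ray
        t += step
    return None                        # ray escaped: sky

def render(width, height):
    # The per-pixel loop: orthographic camera looking straight down -z.
    image = []
    for py in range(height):
        row = ''
        for px in range(width):
            hit = cast_ray(px + 0.5, py + 0.5, N - 1.0, 0.0, 0.0, -1.0)
            row += '#' if hit else SKY
        image.append(row)
    return image
```

On a GPU the outer two loops disappear: each fragment/thread handles one pixel in parallel, which is where the interactive frame rates come from.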

One method I found is to basically use 3D textures (or SVOs, I suppose) for the voxel data, render a cube for each "object" using standard triangles, and then ray trace the voxels in the fragment shader. This is probably the most straightforward way to do things, but I'm not sure if it's a viable method. Full 3D textures would have to be stored, which could be slow and take up a lot of memory, and tracing every fragment of the cube even if they are transparent could be slow.
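For the march inside the fragment shader, the usual trick is a 3D DDA (in the style of Amanatides & Woo) that steps exactly one voxel per iteration instead of taking many small fixed steps. A CPU sketch of the traversal, with illustrative names; a shader version would run the same logic against the 3D texture:

```python
import math

def dda_traverse(grid, origin, direction, max_steps=256):
    """Step a ray through a dense voxel grid exactly one cell at a time
    (Amanatides & Woo style DDA); return the first solid cell or None."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    cell = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for o, d in zip(origin, direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(o) + 1.0 - o) / d)  # t to next boundary
            t_delta.append(1.0 / d)                      # t per whole cell
        elif d < 0:
            step.append(-1)
            t_max.append((o - math.floor(o)) / -d)
            t_delta.append(-1.0 / d)
        else:
            step.append(0)                               # ray parallel to axis
            t_max.append(math.inf)
            t_delta.append(math.inf)
    for _ in range(max_steps):
        x, y, z = cell
        if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and grid[x][y][z]:
            return (x, y, z)
        axis = t_max.index(min(t_max))  # cross the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

Because it visits every cell the ray passes through and nothing else, transparent space costs one cheap iteration per voxel rather than dozens of wasted samples.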

The alternative is to go all out and ray trace against a massive octree to be able to render full scenes, though I'm not entirely sure how well that would work. It seems like a bad idea to store a large scene in one giant octree, but that's what it seems like Voxelnauts is doing.

I'm looking for ideas, tips, comments, information, experience, anything that might give me a better understanding of how this works, or alternative ideas. I've done a ton of googling and have read several of the theses on this stuff... but it's still foggy to me.


3) This is a very complicated way of rendering 3d graphics

From a conceptual standpoint, it is a far more straightforward way of rendering 3D graphics, without all the weird depth buffer, transparency and shading hacks that are used in triangle rasterisation. The downside, of course, is performance. Triangle rasterisation was specifically developed to be a very efficient way to approximate true 3D worlds.

which could be slow and take up a lot of memory, and tracing every fragment of the cube even if they are transparent could be slow

You aren't going to achieve this on a mobile phone anytime soon. Gaming PCs have plenty of horsepower to render SVOs, at least at the level of fidelity Voxelnauts is demonstrating.

It seems like a bad idea to store a large scene in one giant octree, but that's what it seems like Voxelnauts is doing.

I seriously doubt that they are doing that, since it would effectively disallow any modification of the voxels at runtime. I'd expect they divide the world into fixed-size chunks, and store an octree for each non-empty chunk.
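A chunked layout like that can be as simple as a hash map from chunk coordinates to per-chunk storage: only non-empty chunks exist, and an edit touches a single chunk rather than one scene-wide octree. A minimal sketch (flat arrays stand in for the per-chunk octrees, and all names are mine):

```python
CHUNK = 16  # voxels per chunk edge (a typical choice, not Voxelnauts' actual value)

class World:
    """Sparse chunked world: only non-empty chunks live in the dict,
    so runtime edits rebuild one small chunk, not the whole scene."""
    def __init__(self):
        self.chunks = {}  # (cx, cy, cz) -> flat list of CHUNK**3 voxel values

    def set_voxel(self, x, y, z, value):
        key = (x // CHUNK, y // CHUNK, z // CHUNK)
        chunk = self.chunks.setdefault(key, [0] * CHUNK ** 3)
        lx, ly, lz = x % CHUNK, y % CHUNK, z % CHUNK
        chunk[(lz * CHUNK + ly) * CHUNK + lx] = value

    def get_voxel(self, x, y, z):
        chunk = self.chunks.get((x // CHUNK, y // CHUNK, z // CHUNK))
        if chunk is None:
            return 0  # empty space: no storage at all for this region
        lx, ly, lz = x % CHUNK, y % CHUNK, z % CHUNK
        return chunk[(lz * CHUNK + ly) * CHUNK + lx]
```

Swapping the flat list for a small octree per chunk keeps edits cheap while still giving the tracer something hierarchical to skip empty space with.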

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

A number of years ago for a TigSource competition I made this:

The video shows the game running on an NVIDIA 9800 GTX, which is a pretty old card. It's entirely ray traced, and didn't use CUDA (just a pixel shader). I only had one month to work on it, so I would do a few things differently given more time. But I wouldn't be surprised if you could get similar levels of fidelity on higher-end mobile devices. It's really about the memory management more than the rendering.

In the case of the little demo I posted, the world consists of 16x16x16 voxel 'tiles' or 'blocks', which are grid-aligned. The level consists of a 256x256x256 'map' (as I called it) where each texel holds a 16x16x16 tile index. Each tile is stored as a single 4096-texel line in a 2D texture, so the tile index is just the y coordinate in the tile texture, and you can store pretty much as many tiles as you have memory for. I also found that a 256x256x256 level is surprisingly large (in terms of gameplay/traversal) and only takes 32 MB uncompressed (16 bits per tile index). So it's certainly streamable, and real-time modifiable. The system isn't as flexible as an SVO renderer, but modification is easy and memory management is exceptionally simple.
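If I'm reading that layout right, a voxel lookup is two indirections: map texel to tile index, then tile index to a texel within that tile's 4096-texel row. A CPU reconstruction of the addressing, not the original code (the EMPTY sentinel and all names are my assumptions):

```python
TILE = 16          # 16x16x16 voxels per tile
MAP = 256          # 256x256x256 tile indices per level
EMPTY = 0xFFFF     # sentinel for "no tile here" (my assumption)

def voxel_at(map3d, tile_texture, x, y, z, map_size=MAP):
    # 1) First indirection: which tile does this world-space voxel fall in?
    tx, ty, tz = x // TILE, y // TILE, z // TILE
    tile_index = map3d[(tz * map_size + ty) * map_size + tx]  # 16-bit index
    if tile_index == EMPTY:
        return 0  # empty space
    # 2) Second indirection: each tile is one 4096-texel row of a 2D
    #    texture, so the tile index doubles as the row (y) coordinate.
    lx, ly, lz = x % TILE, y % TILE, z % TILE
    return tile_texture[tile_index][(lz * TILE + ly) * TILE + lx]
```

Both lookups map directly onto texture fetches in a shader, which is presumably why the scheme was so cheap on 2008-era hardware.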

I wouldn't say it's something a beginner should try, but it's certainly doable. You just need to be realistic about the memory requirements and plan ahead accordingly.


A number of years ago for a TigSource competition I made this:


Very cool. The 2D/3D mix is very interesting. It looks like the camera is more or less free to render from any angle, but the character can only move in straight lines along the X or Z axis?

Eric Richards

SlimDX tutorials - http://www.richardssoftware.net/

Twitter - @EricRichards22

A number of years ago for a TigSource competition I made this:


...

Interesting way to handle the rendering, thanks for the info.

I found an example on GitHub of voxel ray marching with a fragment shader in GLES/JavaScript, and it runs surprisingly well. At 2560x1440 resolution, it stayed between 30 and 60 fps.

Demo: http://wwwtyro.github.io/Voxgrind/examples/columns/ (Doesn't work in Chrome for me, but works great in Microsoft Edge)

Repo: https://github.com/wwwtyro/Voxgrind

This is one of the more straightforward implementations I've found.


A number of years ago for a TigSource competition I made this:


Very cool. The 2D/3D mix is very interesting. It looks like the camera is more or less free to render from any angle, but the character can only move in straight lines along the X or Z axis?

Exactly, it played like a 2D platformer, and at 'trigger points' the camera would swing to a new plane.

A number of years ago for a TigSource competition I made this:


...

Interesting way to handle the rendering, thanks for the info.

I found an example on GitHub of voxel ray marching with a fragment shader in GLES/JavaScript, and it runs surprisingly well. At 2560x1440 resolution, it stayed between 30 and 60 fps.

Demo: http://wwwtyro.github.io/Voxgrind/examples/columns/ (Doesn't work in Chrome for me, but works great in Microsoft Edge)

Repo: https://github.com/wwwtyro/Voxgrind

This is one of the more straightforward implementations I've found.

If you've found something that makes sense that's good. Good luck :)

Watching that Voxelnauts trailer again, they seem to have a pretty incredible draw distance going on. I wonder how beefy a card is required to pull that off?

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Watching that Voxelnauts trailer again, they seem to have a pretty incredible draw distance going on. I wonder how beefy a card is required to pull that off?

Yea, that's why I think they are doing something different from just ray tracing the whole scene the way Voxlap does.

I was thinking about how to ray trace multiple chunks, but it may be more complicated than it's worth. Naively, it would increase the work from N pixels to N pixels times X octrees. You would either ray trace against each visible octree back to front, or use a depth buffer and only trace the pixels that aren't already occupied by a closer voxel.
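One way to avoid the N-pixels-times-X-octrees blow-up is to march each ray at chunk granularity first, visiting chunks in front-to-back order and only descending into a chunk's voxels when the chunk actually exists. A rough CPU sketch, where `chunks` and `trace_chunk` are my own stand-ins, not anyone's actual API:

```python
import math

CHUNK = 16  # voxels per chunk edge (illustrative)

def trace_scene(chunks, origin, direction, trace_chunk, max_steps=64):
    """Coarse front-to-back march at chunk granularity: visit chunks in
    ray order and run the fine per-chunk trace only for chunks that
    exist, so each pixel costs roughly 'chunks along the ray', not
    'every octree in the scene'."""
    # Fixed-step chunk walk for brevity; a chunk-level DDA would visit
    # each chunk exactly once and never skip corners.
    t, step = 0.0, CHUNK * 0.5
    seen = set()
    for _ in range(max_steps):
        p = [origin[i] + direction[i] * t for i in range(3)]
        key = tuple(int(math.floor(c)) // CHUNK for c in p)
        if key not in seen:
            seen.add(key)
            data = chunks.get(key)
            if data is not None:
                hit = trace_chunk(data, p, direction)
                if hit is not None:
                    return hit  # nearest hit: chunks arrive in ray order
        t += step
    return None
```

Because chunks are visited nearest-first, the first per-chunk hit is the final answer for that pixel, which plays the same role as the depth-buffer idea above.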

The only viable way I can think of that they are handling it is essentially what I mentioned in my initial post: they are still rendering cubes in chunked octrees like Minecraft, but ray tracing on each fragment of the cube. This method seems like it would lose most of the benefits of ray tracing, because it would still require frustum and occlusion culling, and high poly counts.

I thought the point of raytracing was to allow for more realistic light physics? The lighting in Voxelnauts is terribly washed out and dull.

I thought the point of raytracing was to allow for more realistic light physics? The lighting in Voxelnauts is terribly washed out and dull.

Ray tracing can be used for that, but in this instance it's being used to render geometry instead of rasterizing polygons.

This topic is closed to new replies.
