Need advice on drawing large amounts of voxels/cubes

I've been working on something like this on-and-off (mainly off) for a while. I was able to get quite good performance by rendering a standard cube mesh with instancing, but I never got around to adding more obvious techniques like frustum and occlusion culling.
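For what it's worth, the instancing part was nothing special - roughly along these lines (an OpenGL-style sketch purely for illustration; the cube VAO, index buffer and shader are assumed to exist already, and attribute location 3 is made up):

```cpp
#include <vector>
#include <GL/glew.h>

struct CubeInstance { float x, y, z; };   // one world-space offset per cube

// Draws every cube in 'cubes' with a single instanced draw call.
void drawCubes(GLuint cubeVao, GLuint instanceVbo, const std::vector<CubeInstance>& cubes)
{
    glBindVertexArray(cubeVao);

    // Upload one offset per visible cube.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, cubes.size() * sizeof(CubeInstance),
                 cubes.data(), GL_STREAM_DRAW);

    // Attribute 3 advances once per instance rather than once per vertex.
    glEnableVertexAttribArray(3);
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(CubeInstance), nullptr);
    glVertexAttribDivisor(3, 1);

    // 36 indices = 12 triangles per cube; the vertex shader adds the per-instance offset.
    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr,
                            static_cast<GLsizei>(cubes.size()));
}
```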

I've no idea how Minecraft achieves such speeds - culling faces per cube seems like a huge task, as does joining adjacent strips. Especially considering the number of cubes that appear to be drawn at a time:



[Edited by - Barguast on April 29, 2010 6:03:43 AM]
Using Visual C++ 7.0
Remember though that a lot of the operations don't need to be done every frame, only when the environment changes. If you split the environment into largeish chunks, you don't even need to update the whole environment, just the localised chunk where the change happened.
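In code that can be as simple as a dirty flag per chunk - a rough sketch (the Chunk layout and the sizes here are just placeholders):

```cpp
#include <vector>

const int CHUNK_SIZE   = 16;   // voxels per chunk along each axis
const int WORLD_CHUNKS = 32;   // chunks per axis in the world

struct Chunk {
    bool dirty = true;         // true -> this chunk's mesh needs regenerating
    // ... voxel data and the GPU mesh handle would live here ...
};

std::vector<Chunk> chunks(WORLD_CHUNKS * WORLD_CHUNKS * WORLD_CHUNKS);

// Voxel coordinates -> the chunk that owns them.
Chunk& chunkAt(int vx, int vy, int vz)
{
    int cx = vx / CHUNK_SIZE, cy = vy / CHUNK_SIZE, cz = vz / CHUNK_SIZE;
    return chunks[(cz * WORLD_CHUNKS + cy) * WORLD_CHUNKS + cx];
}

void setVoxel(int vx, int vy, int vz /*, block type */)
{
    // ... write the voxel value ...
    chunkAt(vx, vy, vz).dirty = true;   // only this chunk gets remeshed
}

void updateMeshes()
{
    for (Chunk& c : chunks)
        if (c.dirty) { /* regenerate and re-upload this chunk's mesh */ c.dirty = false; }
}
```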
The guy who wrote "Fez" wrote some blog postings about how he turned pixel data into larger polygons for rendering efficiency.

One thing which occurs immediately to me is to process the voxel data. Cells which are entirely surrounded by other cells are trivially non-visible from any direction and that information won't change unless the structure changes.

To extend that: you may be able to decide for each cell whether it's visible from each of the six sides. Since you easily know which side you're looking at, you can filter based on that. I can't trivially decide whether that will work or not...
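Something along these lines is what I have in mind (just a sketch - solidAt is whatever lookup you use into the voxel volume, returning false outside it):

```cpp
#include <cstdint>

// Face bits: 0..5 = +X, -X, +Y, -Y, +Z, -Z.
// A face only needs drawing if the neighbour on that side is empty.
std::uint8_t faceMask(int x, int y, int z, bool (*solidAt)(int, int, int))
{
    std::uint8_t mask = 0;
    if (!solidAt(x + 1, y, z)) mask |= 1 << 0;
    if (!solidAt(x - 1, y, z)) mask |= 1 << 1;
    if (!solidAt(x, y + 1, z)) mask |= 1 << 2;
    if (!solidAt(x, y - 1, z)) mask |= 1 << 3;
    if (!solidAt(x, y, z + 1)) mask |= 1 << 4;
    if (!solidAt(x, y, z - 1)) mask |= 1 << 5;
    return mask;   // 0 -> the cell is completely enclosed and never visible
}
```

A mask of zero is exactly the "entirely surrounded" case above, and none of this needs recomputing until the structure changes.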


"I cannot figure out how the game has time to look for adjacent quads?!"

You don't have to do it at RENDER time. You can do it at construction time...

Store in a cell a "type" indicator and extents in 3d... bytes will do.

So a stack of 4 stone cubes is actually one stone block which is 1x1x4 and three spaces which just say "occupied but already rendered". You'd have to mess with visibility culling a bit. Now you only have to draw one object. If you set this up properly, you can fiddle the polygon extents into the right place in vertex shaders.

Think of it like run-length encoding in 3d... but crucially here, you can work out how big the blocks are at the time you load or build the level.
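For a single column it really is just run-length encoding - something like this sketch (typeAt and the Block struct are placeholders for however you actually store things):

```cpp
#include <vector>

// A merged block: position of its bottom voxel plus a height extent along Y.
struct Block { int x, y, z, type, height; };

// typeAt returns the block type at a voxel (0 = empty).
std::vector<Block> mergeColumn(int x, int z, int columnHeight,
                               int (*typeAt)(int, int, int))
{
    std::vector<Block> blocks;
    int y = 0;
    while (y < columnHeight) {
        int t = typeAt(x, y, z);
        if (t == 0) { ++y; continue; }         // skip empty space
        int run = 1;                           // count identical voxels stacked above
        while (y + run < columnHeight && typeAt(x, y + run, z) == t) ++run;
        blocks.push_back({ x, y, z, t, run }); // e.g. four stone cubes -> one 1x1x4 block
        y += run;
    }
    return blocks;
}
```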
I've been working on my own voxel engine for a few years now (http://www.thermite3d.org), and it quite happily handles volumes up to 512x512x512 voxels. A lot of the comments made so far make good sense, but I'll summarise what I do.

Firstly, I break the world down into chunks of somewhere between 16x16x16 and 64x64x64 (the best chunk size depends on several factors). Next, I generate a mesh for each of these chunks. I actually use the Marching Cubes algorithm but this doesn't really matter - any algorithm for generating a mesh from the volume data is fine. I then upload these meshes to the GPU and use frustum culling to decide what to draw. I can also send them to the physics engine for simulation.
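The frustum culling itself is just a standard box-vs-planes test per chunk - something like this sketch (the Plane/Aabb types are illustrative rather than from any particular engine, and extracting the planes from the camera matrices is assumed to happen elsewhere):

```cpp
// Plane: n.p + d = 0, with the normal pointing into the frustum.
struct Plane { float nx, ny, nz, d; };
struct Aabb  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool chunkVisible(const Aabb& box, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i) {
        const Plane& p = frustum[i];
        // Pick the box corner furthest along the plane normal (the "positive vertex").
        float px = (p.nx >= 0.0f) ? box.maxX : box.minX;
        float py = (p.ny >= 0.0f) ? box.maxY : box.minY;
        float pz = (p.nz >= 0.0f) ? box.maxZ : box.minZ;
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
            return false;   // the whole box is behind this plane
    }
    return true;            // inside or intersecting the frustum -> draw it
}
```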

As mentioned, the mesh data for a chunk only needs to be updated when the corresponding part of the volume data is changed. Most changes are fairly localised (explosions and stuff) so this helps keep down the amount of mesh regeneration.

I've also recently been experimenting with combining adjacent faces. There is a lot of research and information on the internet about 'mesh decimation' and 'mesh simplification'.

You can find more information in the following forum thread: http://www.ogre3d.org/forums/viewtopic.php?f=11&t=27394

I also wrote an article called 'Volumetric Representation of Virtual Environments' which is Chapter 3 of the book 'Game Engine Gems'. There's a lot more detail in there.
Indeed, chunking seems to be the way to go. I wrote a very basic voxel tessellator (it doesn't combine adjacent faces, which I'm not sure I actually want anyway). The tessellator produces a triangle mesh for a 16x16 voxel chunk.
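The tessellator itself is nothing clever - essentially just this (a rough sketch; solidAt stands in for my voxel lookup, and emitting the actual vertices for each face is omitted):

```cpp
#include <vector>

// Face index: 0..5 = +X, -X, +Y, -Y, +Z, -Z.
struct VisibleFace { int x, y, z, face; };

// Walks the chunk and records every face that borders empty space;
// each of these then gets a textured quad in the final mesh.
std::vector<VisibleFace> findVisibleFaces(int size, bool (*solidAt)(int, int, int))
{
    static const int offsets[6][3] = {
        { 1, 0, 0 }, { -1, 0, 0 },
        { 0, 1, 0 }, { 0, -1, 0 },
        { 0, 0, 1 }, { 0, 0, -1 }
    };

    std::vector<VisibleFace> faces;
    for (int z = 0; z < size; ++z)
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x) {
                if (!solidAt(x, y, z)) continue;           // empty voxel, nothing to draw
                for (int f = 0; f < 6; ++f) {
                    int nx = x + offsets[f][0];
                    int ny = y + offsets[f][1];
                    int nz = z + offsets[f][2];
                    if (!solidAt(nx, ny, nz))              // neighbour empty -> face visible
                        faces.push_back({ x, y, z, f });
                }
            }
    return faces;
}
```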

I'm able to draw 4096 (the number shown in my screenshot above) of these chunks at 10 FPS (without instancing). All the chunks are the same at the moment, so I don't know how representative of actual performance this is, but this is without frustum or occlusion culling and - perhaps most significantly - with an almost 'full' chunk (whereas Minecraft is mostly empty space).

I have a feeling that a few basic optimisations, and perhaps chunk-based LOD, will yield some good results.

I'll have an experiment with a more realistic test tomorrow and see if it makes a difference. Thanks for your tips - and hopefully you're helping the OP too. (Sorry for the hijack :p)
Using Visual C++ 7.0
Thanks for the feedback guys.

Quote:Original post by Barguast
Indeed, chunking seems to be the way to go.


I have implemented chunking and quad merging on all 3 axes. The results are better than before, but it is hard to tell with random voxel placement. Once I have a terrain generation system the numbers should improve, since an even random distribution of voxels does not create many continuous internal features (caves). Here is a screenshot of the progress thus far (8x8x8 chunks, merging on all 3 axes within chunks):


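In case it's useful to anyone, the merging within a slice is basically a greedy rectangle search - a rough sketch (the slice is a flat W x H array of block types, 0 meaning empty):

```cpp
#include <vector>

// A merged quad within one slice: origin, extents and the block type it covers.
struct Quad { int x, y, w, h, type; };

std::vector<Quad> mergeSlice(const std::vector<int>& slice, int W, int H)
{
    std::vector<bool> used(W * H, false);
    std::vector<Quad> quads;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int t = slice[y * W + x];
            if (t == 0 || used[y * W + x]) continue;
            // Grow right while the cells match and haven't been merged yet.
            int w = 1;
            while (x + w < W && !used[y * W + x + w] && slice[y * W + x + w] == t) ++w;
            // Grow down while every cell in the next row still matches.
            int h = 1;
            bool grow = true;
            while (grow && y + h < H) {
                for (int i = 0; i < w; ++i)
                    if (used[(y + h) * W + x + i] || slice[(y + h) * W + x + i] != t) { grow = false; break; }
                if (grow) ++h;
            }
            // Mark the rectangle as consumed and emit one quad for it.
            for (int j = 0; j < h; ++j)
                for (int i = 0; i < w; ++i)
                    used[(y + j) * W + x + i] = true;
            quads.push_back({ x, y, w, h, t });
        }
    return quads;
}
```

The same pass can then be run over the slices along each of the three axes.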
Quote:Original post by Barguast
I have a feeling that a few basic optimisations, and perhaps chunk-based LOD will yield some good results.


Chunk-based LOD sounds like a great idea. Does anyone have any suggestions on how to extract only the most important features for distant chunks?

Quote:Original post by Barguast
I'll have an experiment with a more realistic test tomorrow and see if it makes a difference. Thanks for you tips - and hopefully you're helping the OP too. (Sorry for the hijack :p)


Haha, don't worry about it. You answered some questions I had. =)

The idea I had regarding chunk-based LOD was simply to double the size of the blocks and sample based on the most common block type within the larger block.

For example, at the highest LOD my chunks are 32x32x32 and the block size is 1x1x1. At the next one, they are 16x16x16, but the block size is 2x2x2 (taking whichever block type occurs most often within that 2x2x2 region). For example, if there are 5 solid blocks, it will be solid; if there are fewer, it will be air (not there).
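As a sketch, the downsampling step would look something like this (assuming the chunk is stored as a flat array of block types with 0 meaning air; picking the most common child type is left as a comment):

```cpp
#include <cstdint>
#include <vector>

// 'fine' is a size*size*size array of block types (0 = air); size is assumed even.
std::vector<std::uint8_t> downsample(const std::vector<std::uint8_t>& fine, int size)
{
    const int half = size / 2;
    std::vector<std::uint8_t> coarse(half * half * half, 0);
    for (int z = 0; z < half; ++z)
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                int solid = 0;
                for (int dz = 0; dz < 2; ++dz)            // count solid children in the 2x2x2 group
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx) {
                            int fx = x * 2 + dx, fy = y * 2 + dy, fz = z * 2 + dz;
                            if (fine[(fz * size + fy) * size + fx] != 0) ++solid;
                        }
                // 5 or more solid children -> solid, otherwise air. A fuller version would
                // also pick whichever child type occurs most often instead of just writing 1.
                coarse[(z * half + y) * half + x] = (solid >= 5) ? 1 : 0;
            }
    return coarse;
}
```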

Your quad merging looks quite effective. I wonder how this will affect tiling, though. I'm sure it'll speed things up to a degree, but I wouldn't want to lose the ability to wrap textures properly, since (I think?) an implementation like this is dependent on using a texture atlas. Unless there is another way?
Using Visual C++ 7.0
I hate to say it, but you've only done the easy bit. Your geometry is now full of "T" junctions and these will show up as annoying glimmers when you start putting textures on.

For a solid display, you just cannot have a vertex resting in the middle of a line between two others - you have to stitch them such that any lines passing near or through a vertex actually weld to it properly.
------------------------------Great Little War Game
Quote:Original post by Barguast
Your quad merging looks quite effective. I wonder how this will affect tiling, though. I'm sure it'll speed things up to a degree, but I wouldn't want to lose the ability to wrap textures properly, since (I think?) an implementation like this is dependent on using a texture atlas. Unless there is another way?


Yeah I plan on using a texture atlas where my texture has width 1 and height equal to however many textures will be inside it. That will allow me to wrap on 'u'.
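Something like this is what I have in mind for the UVs (just a sketch; the tile index and quad width are placeholders):

```cpp
// The atlas is one tile wide and tileCount tiles tall, so u is free to wrap
// across a merged quad while v just selects the tile.
struct UvRect { float u0, v0, u1, v1; };

// quadWidth is the merged quad's width in blocks; the texture repeats once per block in u.
UvRect atlasUv(int tileIndex, int tileCount, int quadWidth)
{
    float tileHeight = 1.0f / tileCount;
    UvRect r;
    r.u0 = 0.0f;
    r.u1 = static_cast<float>(quadWidth);   // > 1 wraps the texture horizontally
    r.v0 = tileIndex * tileHeight;
    r.v1 = r.v0 + tileHeight;
    return r;
}
```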

Quote:Original post by Rubicon
For a solid display, you just cannot have a vertex resting in the middle of a line between two others - you have to stitch them such that any lines passing near or through a vertex actually weld to it properly.


I did not know about this. Do you have any information on why this happens when you apply textures?

Also, when I don't render wireframes, the seams look solid to me. Am I mistaken?


This topic is closed to new replies.
