Question about how to store voxel data?

Started by studentTeacher · 4 comments, last by studentTeacher 10 years, 10 months ago

So, a couple of questions here. I'm currently working on a terrain/world generator using noise and fractals. I happened upon the ProcWorld blog, which is chock-full of information; a good starting page of interest can be found here.

My first question is this: how would someone using his techniques (an adaptive octree, dual contouring, and QEF, i.e. quadric-error-function minimization) store and retrieve voxels? My main point of confusion is the adaptive octree. Since he is using something like clipmaps, but in 3D (hence "adaptive octree"), I don't understand how it can also serve as storage for the voxels. Is he storing voxels and then computing the new terrain from them? Or is he just computing the mesh as a pipeline (point evaluation --> voxels --> DC with QEF --> mesh)?

Secondly, I'm wondering how he gets surface normals from the voxels' edges. I know you can evaluate points in the terrain to get the normal at a given point or edge, but what happens when you make changes to the terrain? Does that change the voxels, or only the mesh that comes out of the dual contouring with QEF? Sorry if this is a stupid question :)

By the way, I'm not looking for specific code so much as an overall idea of how this can work. I've gotten noise and fractals working well, and I have an implementation of dual contouring with QEF (chosen largely for the same reasons he chose it for turning voxels into meshes); I'd just like to understand how he stores voxels and uses the adaptive octree.

Thanks,

sT


I went over the ProcWorld blog a while ago. His setup uses servers to do all the actual voxel work, streaming compressed meshes to game clients, so no 'voxels' are ever dealt with on the machine that renders them. The adaptive octree simply means that when generating meshes for a remote client, the server only iterates to a certain depth for each area of the scene around that client's position in the world volume. Coarser meshes come for free: instead of iterating all the way down to single-voxel resolution, it stops a few levels in, treating the octree as a multi-resolution voxel structure.
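Roughly, I'd expect the depth cutoff to look something like this. This is purely my own sketch, not code from the blog; the names (DepthForChunk, kMaxDepth, kFullDetailDist) and the constants are made up for illustration:

```cpp
#include <algorithm>
#include <cmath>

const int   kMaxDepth       = 10;    // leaf level = single-voxel resolution
const float kFullDetailDist = 64.0f; // chunks closer than this get full depth

// Drop one octree level each time the distance doubles past the
// full-detail range, so distant chunks get progressively coarser meshes.
int DepthForChunk(float distanceToViewer)
{
    if (distanceToViewer <= kFullDetailDist)
        return kMaxDepth;
    int levelsToDrop = static_cast<int>(
        std::log2(distanceToViewer / kFullDetailDist)) + 1;
    return std::max(1, kMaxDepth - levelsToDrop);
}
```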

As a remote client moves around, the server 'adapts' the octree, which is used purely for generating meshes to send to that client. It is *not* an octree that stores the voxels themselves; it is only an intermediate data structure used to sample the procedural functions and generate meshes. The whole point of his project is that nothing is stored except the procedural functions, and the adaptive octree slides around with the users, generating meshes to send them so they can render their vantage point of the scene.
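To make that concrete, here's a minimal sketch of what I mean, with a toy density function standing in for his fractal noise (all names here are my own, just to show the shape of it):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// The entire "world" is just this function: negative means inside the
// terrain. In his case it would be fractal noise; this is a stand-in.
float Density(const Vec3& p)
{
    return p.y - 8.0f * std::sin(p.x * 0.1f) * std::cos(p.z * 0.1f);
}

// Recursively sample a cubic region down to the requested depth and
// collect leaf-cell sample points for a contouring pass to consume.
void Sample(const Vec3& cellMin, float cellSize, int depth,
            std::vector<Vec3>& outSamples)
{
    if (depth == 0) {
        outSamples.push_back({ cellMin.x + cellSize * 0.5f,
                               cellMin.y + cellSize * 0.5f,
                               cellMin.z + cellSize * 0.5f });
        return;
    }
    float half = cellSize * 0.5f;
    for (int i = 0; i < 8; ++i) {
        Vec3 childMin = { cellMin.x + half * (i & 1),
                          cellMin.y + half * ((i >> 1) & 1),
                          cellMin.z + half * ((i >> 2) & 1) };
        Sample(childMin, half, depth - 1, outSamples);
    }
}
```

A real dual-contouring pass would look at cell corners and edge crossings rather than cell centers, but the point stands: the recursion *is* the octree, and nothing survives the call except the mesh.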

Oh, and for the normals, I think he's using a simple gradient analysis.
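If so, the usual way to do it is central differences on the density field, reusing Vec3 and Density() from the sketch above (I'm assuming that's what "gradient analysis" means here):

```cpp
// The surface normal is the normalized gradient of the density field,
// estimated by sampling a small step h along each axis. Assumes the
// gradient is non-zero at p.
Vec3 SurfaceNormal(const Vec3& p, float h = 0.01f)
{
    Vec3 n = {
        Density({ p.x + h, p.y, p.z }) - Density({ p.x - h, p.y, p.z }),
        Density({ p.x, p.y + h, p.z }) - Density({ p.x, p.y - h, p.z }),
        Density({ p.x, p.y, p.z + h }) - Density({ p.x, p.y, p.z - h })
    };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

And since the normal falls straight out of the density function, anything that changes the function changes the normals along with it.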

Okay, so the voxels are really just a choice of how finely to sample a 3D grid of points? The closer a chunk is to you, the finer the voxels (the deeper you go in the adaptive octree), while farther chunks don't go through all the levels, so their grid points are spaced farther apart when they sample the procedural functions for material data. And once he has a mesh, any changes made to the terrain are changes to the mesh, maybe also treated as a preliminary form of voxels like he does with the terrain generation. Do I have the right idea?

What would this adaptive octree hold then? Would it hold position data, to know where to sample the terrain functions, or...?

Gradient Analysis, ahhh got it.

The octree simply holds the results of sampling the procedural functions - adapting not just to camera position, but also to whether there is anything there at all (e.g. empty space doesn't get less empty if you subdivide it further). The octree *is* the voxel representation. If I were to create a similar project, I wouldn't generate any raw 3D volume data at any point. Working with the octree to run marching cubes and generate a mesh seems totally feasible, without having to generate raw volume data to run the isosurfacing on. This is all in the name of efficiency, of course.
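In code form, the pruning test could be as simple as a corner check, reusing Vec3 and Density() from above. This is my own construction, and note the caveat: a pure corner test can miss features thinner than a cell:

```cpp
// Before subdividing a cell, sample its 8 corners. If every corner is
// on the same side of the surface, skip the whole cell instead of
// recursing into it.
bool CellMightContainSurface(const Vec3& cellMin, float cellSize)
{
    bool anyInside = false, anyOutside = false;
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { cellMin.x + cellSize * (i & 1),
                        cellMin.y + cellSize * ((i >> 1) & 1),
                        cellMin.z + cellSize * ((i >> 2) & 1) };
        if (Density(corner) < 0.0f) anyInside = true;
        else                        anyOutside = true;
    }
    return anyInside && anyOutside; // mixed signs: surface crosses the cell
}
```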

So the adaptive octree adapts not just to the vantage point, but also to what is discovered when sampling the surrounding areas. Perhaps also subsampling, downsampling, etc., to prevent multiple samples being taken at the same point as the viewer moves around - reusing existing (known) data and only sampling where more samples are needed.
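One guess at what that reuse could look like (again my own sketch, not from the blog): memoize samples keyed by integer lattice coordinates, assuming power-of-two cell sizes so coarser levels land on the same lattice as the finest one:

```cpp
#include <cstdint>
#include <unordered_map>

// Key a density sample by its position on the finest sampling lattice.
struct SampleKey {
    int32_t x, y, z;
    bool operator==(const SampleKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct SampleKeyHash {
    size_t operator()(const SampleKey& k) const {
        return static_cast<size_t>(k.x) * 73856093u
             ^ static_cast<size_t>(k.y) * 19349663u
             ^ static_cast<size_t>(k.z) * 83492791u;
    }
};

std::unordered_map<SampleKey, float, SampleKeyHash> g_sampleCache;

// Reuses Density() and Vec3 from the earlier sketch; finestCellSize is
// the world-space spacing of the finest lattice.
float CachedDensity(int x, int y, int z, float finestCellSize)
{
    SampleKey key{ x, y, z };
    auto it = g_sampleCache.find(key);
    if (it != g_sampleCache.end())
        return it->second; // reuse a known sample
    float d = Density({ x * finestCellSize,
                        y * finestCellSize,
                        z * finestCellSize });
    g_sampleCache.emplace(key, d);
    return d;
}
```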

Okay, I think I understand what you're saying. I'm going to have to play around with some code and figure it out (should be fun :) ), but I get the gist of it: focus on having the octree only hold results from sampling the procedural functions, with the depth determining the coarseness of the checks (coarser the farther a chunk is from the viewer). I'll have to figure out how he saves changes to the terrain... if all he is doing is sampling the procedural functions, how would the octree know about changes? Guess I just have to go think a little :P
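Actually, thinking out loud: maybe the edits just live outside the octree entirely, as a sparse list of brushes composed over the base function at sample time? This is pure guesswork on my part (reusing the Vec3/Density sketches above), but something like:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A terrain edit as a spherical brush. The edit list would be the only
// persistent terrain state; everything else stays procedural.
struct SphereEdit {
    Vec3  center;
    float radius;
    bool  dig; // true = carve material out, false = add material
};

std::vector<SphereEdit> g_edits;

// Sample the base function, then apply each edit as signed-distance
// CSG: union adds material, subtraction carves it away.
float EditedDensity(const Vec3& p)
{
    float d = Density(p); // procedural base terrain
    for (const SphereEdit& e : g_edits) {
        float dx = p.x - e.center.x;
        float dy = p.y - e.center.y;
        float dz = p.z - e.center.z;
        float s = std::sqrt(dx * dx + dy * dy + dz * dz) - e.radius;
        d = e.dig ? std::max(d, -s) : std::min(d, s);
    }
    return d;
}
```

Then the octree never needs to "know" about changes at all; it just samples EditedDensity instead of Density and re-meshes the affected chunks.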

Radioteeth, thanks for the help! Lastly, do you have any suggestions for books or websites with further information on this kind of thing? Just thought I'd ask... you seem to really know your stuff.

Thanks,

sT

