So, a couple of questions here. I'm currently working on a terrain/world generator using noise and fractals. I happened upon this great blog, chock full of useful information. A good starting page can be found here.
My first question is this: how would someone, using this guy's techniques of an adaptive octree, Dual Contouring, and QEF minimization, store/retrieve voxels? My main point of confusion is the adaptive octree. Because he is using something like clipmaps, but in 3D (hence "adaptive octree"), I don't understand how this maps onto some form of storage for the voxels. Is he storing voxels and then computing the new terrain from them? Or is he just computing the mesh in a pipeline (point evaluation --> voxels --> DC with QEF --> mesh)?
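For what it's worth, one common reading of that pipeline is "generate, don't store": leaf cells sample the density function at their corners on demand, and only user edits need to persist. A minimal sketch of that idea (my own construction, not the blog author's actual code; `density` is a stand-in for a noise/fractal field):

```python
import itertools

def density(x, y, z):
    # Stand-in for the noise/fractal field: a sphere of radius 3,
    # negative inside the surface, positive outside.
    return x * x + y * y + z * z - 9.0

# The 8 corner offsets of a cubic cell.
CORNERS = list(itertools.product((0, 1), repeat=3))

def sample_cell(origin, size):
    """Evaluate the field at the 8 corners of one octree leaf."""
    ox, oy, oz = origin
    return [density(ox + cx * size, oy + cy * size, oz + cz * size)
            for (cx, cy, cz) in CORNERS]

def cell_has_surface(origin, size):
    """A leaf produces geometry only if its corner signs disagree."""
    d = sample_cell(origin, size)
    return min(d) < 0.0 <= max(d)

# A cell straddling the sphere's surface contains a sign change...
print(cell_has_surface((2, 0, 0), 2))   # True
# ...while one fully outside does not.
print(cell_has_surface((5, 5, 5), 2))   # False
```

Under this reading, nothing is "stored" at all until the player edits the terrain; the octree is just a sampling structure over the implicit function.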
Secondly, I'm wondering how he gets his surface normals from the voxels' edges. I know you can evaluate points in the terrain to get the normal at a certain point or edge, but what happens if you make changes to the terrain? Does this change the voxels themselves, or just the mesh that results from Dual Contouring with QEF? Sorry if this is a stupid question.
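To be clear about what I mean by "evaluate points to get the normal": I'm assuming the standard implicit-surface trick where the normal at an edge crossing is the normalized gradient of the density field, approximated with central differences. A quick sketch (again, `density` is a stand-in for the actual noise function, not his code):

```python
import math

def density(x, y, z):
    # Stand-in field: a sphere of radius 3.
    return x * x + y * y + z * z - 9.0

def surface_normal(x, y, z, h=1e-4):
    """Normalized gradient of the density field via central differences."""
    nx = density(x + h, y, z) - density(x - h, y, z)
    ny = density(x, y + h, z) - density(x, y - h, z)
    nz = density(x, y, z + h) - density(x, y, z - h)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# On the sphere's surface the normal points radially outward:
print(surface_normal(3.0, 0.0, 0.0))   # (1.0, 0.0, 0.0)
```

My confusion is whether edits invalidate these gradients (because the field itself changed) or whether the stored Hermite data per edge is what gets modified.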
By the way, I'm not looking for specific code, but more for an overall idea of how this can work. I've gotten noise and fractals working well, and I have an implementation of Dual Contouring with QEF (chosen mainly for the same reasons the blog author picked that method of turning voxels into a mesh). I'd just like to understand how he stores voxels and uses the adaptive octree.
Thanks,
sT