DX11 3D smooth voxel terrain LOD on GPU

5 hours ago, MasterReDWinD said:

In examples I've seen the non-manifold sections usually look like hourglass shapes where two parts of a surface converge at the same vertex.

Yeah, thanks, this is what I expected (also with edges, as shown in the paper).

Notice that half-edge will not work easily for this, because a vertex usually points to only a single polygon.
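To make that concrete, here is a minimal half-edge sketch in C++ (the struct names and layout are my own illustration, not from the paper or anyone's engine) showing why the usual layout breaks down at a non-manifold "hourglass" vertex: the vertex stores one outgoing half-edge, so circulating from it only ever reaches one fan of faces.

#include <cstdint>
#include <vector>

struct HalfEdge {
    uint32_t vertex;   // vertex this half-edge points to
    uint32_t twin;     // opposite half-edge (UINT32_MAX on a border)
    uint32_t next;     // next half-edge around the same face
    uint32_t face;     // face this half-edge belongs to
};

struct Vertex {
    float    position[3];
    uint32_t halfEdge; // ONE outgoing half-edge -- the limitation in question
};

// Collect the faces reachable by circulating around 'v'.  For a manifold
// vertex this finds every incident face; at an "hourglass" vertex there are
// two (or more) disjoint fans, and only the fan that v.halfEdge happens to
// belong to is ever visited.
std::vector<uint32_t> facesAroundVertex(const std::vector<HalfEdge>& he,
                                        const Vertex& v)
{
    std::vector<uint32_t> faces;
    uint32_t start = v.halfEdge;
    uint32_t h = start;
    do {
        faces.push_back(he[h].face);
        uint32_t twin = he[h].twin;
        if (twin == UINT32_MAX) break;  // hit a border, stop circulating
        h = he[twin].next;              // next outgoing half-edge in this fan
    } while (h != start);
    return faces;                       // the second fan is never reached
}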

14 hours ago, MasterReDWinD said:

 

I'd like to pursue the clipmap idea if it can be made to work for 3D terrain rather than just heightmaps.  I believe Miguel on the Procworld blog managed to achieve this somehow, generating 'spheres' of terrain chunks around the player that halve in sampling resolution with each sphere but double in size to take advantage of the perspective projection.

I think I'll need to move to an octree structure to progress.  I did explore this route before (on the CPU), but I found that I needed to generate the octree from the bottom up, with each leaf node representing a single voxel.  Collapsing the leaf nodes into single parent nodes one level up proved tricky, along with the triangulation (Dual Contouring at the time).  Ideally I'd like to do all the LOD on the GPU, where my noise and meshes are generated.

 

I suppose something like a clipmap would be doable, but I don't think you would use an octree in that case.  Notice the 2D mesh is fully populated, as it has to be, since we are dealing with height-mapped data with no holes.  In 3D you do have unused space; however, as you move you never know where the data will fall, so again you have to have a fully populated set of voxels, which makes using an octree pretty useless in the simple case.  You could still use one and create and destroy voxels as needed, but that kind of defeats the simplicity of the whole algorithm.

The other thing is, the data is not mesh vertices but voxel corners, so ostensibly you would have to generate all new meshes every time you moved.  However, since you typically aren't moving all that far, I can think of one cheat that might work, where you simply move the internal mesh data over by your snap value so only stuff near the edges of chunks needs to be regenerated.
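A rough sketch of that cheat, assuming a dense N^3 block of corner densities (all names here are hypothetical, not anyone's actual code): when the view snaps one step along +X, slide the existing samples over in place and flag only the newly exposed slab for noise re-evaluation and remeshing.

#include <cstring>
#include <vector>

constexpr int N = 64;  // corner samples per axis in one block

struct VoxelGrid {
    std::vector<float> density;          // N*N*N corner densities, x fastest
    VoxelGrid() : density(N * N * N) {}
    float& at(int x, int y, int z) { return density[(z * N + y) * N + x]; }
};

// Slide samples for a +X move of 'shift' cells: existing data is just moved,
// and only the far-side X slabs need new data.  Other axes are analogous.
void shiftPlusX(VoxelGrid& g, int shift, std::vector<int>& dirtySlabsX)
{
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y) {
            float* row = &g.at(0, y, z);
            // move the kept samples toward -X; memmove handles the overlap
            std::memmove(row, row + shift, sizeof(float) * (N - shift));
        }
    dirtySlabsX.clear();
    for (int x = N - shift; x < N; ++x)
        dirtySlabsX.push_back(x);        // only these slabs get regenerated
}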

Regarding your comment on bottom-up tree generation, I also found this to be the case.  This is where the "ghost tree" thing comes in: I generate down the tree in a pre-phase using a lightweight structure, and on my way up I keep the parts that I need (where I found geometry) and delete the rest.  I later convert the new branches to the actual full tree format.  The trick is to have fast allocation/deallocation.  I use slab allocation with a free list, so allocating or freeing is just a push or pop off the list, and it also keeps everything tight in memory for better caching.

There are also some optimizations with this (other than the threading).  For instance, each node of the tree has a byte which records how far below it has already been searched without finding geometry.  If I need to search down the same branch to that depth or less again, I can check this byte and skip the search entirely if it has already been done and no geometry was found.
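A minimal sketch of that allocation scheme (hypothetical names, not the poster's actual code): nodes live in one contiguous slab, freed slots are threaded onto a free list so allocate/release are O(1) pushes and pops, and each node carries the "already searched this deep with no geometry" byte.

#include <cstdint>
#include <vector>

struct OctreeNode {
    uint32_t children[8];   // indices into the pool, UINT32_MAX = empty
    uint8_t  emptyDepth;    // depth already searched below with no geometry
    uint32_t nextFree;      // free-list link while the slot is unused
};

class NodePool {
public:
    explicit NodePool(size_t capacity)
        : nodes_(capacity), freeHead_(UINT32_MAX)
    {
        // thread every slot onto the free list up front
        for (size_t i = 0; i < capacity; ++i) {
            nodes_[i].nextFree = freeHead_;
            freeHead_ = static_cast<uint32_t>(i);
        }
    }

    uint32_t allocate()                       // O(1): pop off the free list
    {
        if (freeHead_ == UINT32_MAX) return UINT32_MAX;  // pool exhausted
        uint32_t idx = freeHead_;
        freeHead_ = nodes_[idx].nextFree;
        OctreeNode& n = nodes_[idx];
        for (uint32_t& c : n.children) c = UINT32_MAX;
        n.emptyDepth = 0;
        return idx;
    }

    void release(uint32_t idx)                // O(1): push onto the free list
    {
        nodes_[idx].nextFree = freeHead_;
        freeHead_ = idx;
    }

    OctreeNode& operator[](uint32_t idx) { return nodes_[idx]; }

private:
    std::vector<OctreeNode> nodes_;
    uint32_t freeHead_;
};

// The per-node byte short-circuits repeat searches: if a query wants to
// descend 'depth' levels below 'node' and the branch was already searched
// at least that far with no geometry, skip it.
bool canSkipSearch(const OctreeNode& node, uint8_t depth)
{
    return node.emptyDepth >= depth;
}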

On 10/7/2019 at 3:16 PM, MasterReDWinD said:

Thanks for the reply and for the link; it looks like it touches on some new things that I hadn't seen before.

Were you generating the mesh from scratch or converting one that you already had?  You mentioned full mesh editing; that is something I am also targeting.  It probably makes much more sense to get the mesh out of Surface Nets in half-edge format to begin with, then I would just make the edits following the half-edge rules.  Is that what you are doing?

How did you find the performance in your case?

Thanks for your reply.  The collision data will be used for pathfinding and probably for some movable objects/projectiles.  What was your idea involving SDFs?

The meshes are generated at runtime and will need to be recreated if the user edits them.

Obviously I don't know if this is suitable for your setup :) Anyhow, what I meant was just that SDFs are good for computing collisions.  You know when you intersect with them, and you can even get the vector (distance, direction) to the closest point.  So if you have an SDF in video memory you can do collisions efficiently and robustly on the GPU.  It's also possible to read it back to the CPU; you should do that triple-buffered, otherwise there's a high chance of stalling, btw.
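As a concrete illustration of that query (a minimal sketch, not the poster's code; the analytic sphere SDF just stands in for the terrain field, and the same math maps directly to a compute shader): sample the field for the distance, take a central-difference gradient for the direction to the closest surface, and push the collider out along it.

#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in signed distance field; in practice this would sample the terrain
// SDF held in video memory.
float sampleSdf(const Vec3& p)
{
    const float radius = 5.0f;
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - radius;
}

// Direction toward the nearest surface via central differences -- the
// "vector (distance, direction) to the closest point" from the post.
Vec3 sdfNormal(const Vec3& p, float eps = 0.01f)
{
    Vec3 n{
        sampleSdf({p.x + eps, p.y, p.z}) - sampleSdf({p.x - eps, p.y, p.z}),
        sampleSdf({p.x, p.y + eps, p.z}) - sampleSdf({p.x, p.y - eps, p.z}),
        sampleSdf({p.x, p.y, p.z + eps}) - sampleSdf({p.x, p.y, p.z - eps})
    };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}

// Sphere-vs-SDF collision: if the centre is closer to the surface than the
// sphere radius, push it out along the SDF gradient.
bool resolveCollision(Vec3& centre, float radius)
{
    float d = sampleSdf(centre);
    if (d >= radius) return false;  // no contact
    Vec3 n = sdfNormal(centre);
    float push = radius - d;
    centre = {centre.x + n.x * push, centre.y + n.y * push, centre.z + n.z * push};
    return true;
}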


11 hours ago, 51mon said:

Obviously I don't know if this is suitable for your setup :) Anyhow, what I meant was just that SDFs are good for computing collisions.  You know when you intersect with them, and you can even get the vector (distance, direction) to the closest point.  So if you have an SDF in video memory you can do collisions efficiently and robustly on the GPU.  It's also possible to read it back to the CPU; you should do that triple-buffered, otherwise there's a high chance of stalling, btw.

Thanks, I'll be sure to investigate this.
