After my first (semi-failed) algorithm, I started to think about a new approach.
The idea was to use a simple technique, ray marching, to extract a polygon mesh from a signed distance function (SDF).
So, this is what I came up with:
The algorithm is divided into two separate steps: ray marching (or, in this case, what I call "ray sampling"), and mesh construction from said samples.
Ray march the "scene" (i.e. the SDF), most likely from the player's perspective, at a low resolution. I mostly used 192x108 in my tests. Current GPUs have no problem whatsoever doing this in realtime.
Instead of saving the color at the "hit point", as you usually would when ray marching, I save the point itself in a buffer, accompanied by the normal of the SDF at that exact point.
What we end up with after the first step is a collection of 3D points that approximates the SDF's surface ("samples"), plus the normals at those positions.
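The first step can be sketched roughly like this (in Python for readability; the real thing would run on the GPU, one ray per pixel). The `sdf` here is a placeholder unit sphere, and the normal is approximated with central differences, both my assumptions, not necessarily what the original uses:

```python
import math

def sdf(p):
    # Placeholder SDF for illustration: a unit sphere at the origin.
    x, y, z = p
    return math.sqrt(x*x + y*y + z*z) - 1.0

def normal(p, eps=1e-4):
    # Approximate the SDF's gradient with central differences, then normalize.
    x, y, z = p
    n = (sdf((x+eps, y, z)) - sdf((x-eps, y, z)),
         sdf((x, y+eps, z)) - sdf((x, y-eps, z)),
         sdf((x, y, z+eps)) - sdf((x, y, z-eps)))
    length = math.sqrt(sum(c*c for c in n))
    return tuple(c / length for c in n)

def ray_sample(origin, direction, max_steps=128, max_dist=100.0, eps=1e-3):
    # Classic sphere tracing: advance along the ray by the distance the SDF returns.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t*d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return p, normal(p)   # hit: store the point itself plus its normal
        t += d
        if t > max_dist:
            break
    return None                   # miss: no sample for this pixel

# One "pixel": a ray from z = -3 straight toward the sphere.
sample = ray_sample((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
```

Running this per pixel of the low-resolution image fills the sample buffer that step two consumes.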
Construct a polygon mesh from those samples by simply connecting neighbouring pixels with each other. Lastly, scale up the mesh to account for the low resolution we used when ray marching. (I haven't done this yet in the images/videos you can see at the bottom.)
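The "connect neighbouring pixels" part of step two might look something like this, a sketch under the assumption that samples are stored row-major, with `None` for rays that missed. Each 2x2 block of hit pixels becomes one quad, i.e. two triangles:

```python
def build_mesh(samples, width, height):
    """Connect neighbouring pixels of the sample grid into triangles.

    `samples` is a row-major list of hit points (or None for misses),
    one entry per pixel of the low-resolution ray-marched image.
    Returns (vertices, triangles), where triangles index into vertices.
    """
    index = {}       # pixel index -> vertex index (only for pixels that hit)
    vertices = []
    for i, s in enumerate(samples):
        if s is not None:
            index[i] = len(vertices)
            vertices.append(s)

    triangles = []
    for y in range(height - 1):
        for x in range(width - 1):
            # The four corners of one screen-space quad.
            a = y * width + x
            b = a + 1
            c = a + width
            d = c + 1
            # Only emit triangles whose corners all hit the surface.
            if all(k in index for k in (a, b, c)):
                triangles.append((index[a], index[b], index[c]))
            if all(k in index for k in (b, d, c)):
                triangles.append((index[b], index[d], index[c]))
    return vertices, triangles

# A 2x2 grid where every pixel hit: one quad, hence two triangles.
verts, tris = build_mesh([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)], 2, 2)
```

Skipping quads with missing corners is also where holes at depth discontinuities would come from, which may be related to the aliasing mentioned below.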
I think the results look quite good. There are still problems I'm trying to solve, of course, such as this weird aliasing (yes, I do know what the root of that problem is).
It currently runs at about 40-70 fps, or takes somewhere between 10 and 25 ms per mesh. (Only the first step is parallelized, and I haven't done much to optimize the algorithm.)
Pros:
- No complex underlying data structure such as a voxel grid
- Can run in realtime with no problems, especially if optimized
- No level-of-detail (LOD) system required, which is one of the most painful things to get right when writing a voxel engine. The mesh is as detailed as the image constructed by the ray marcher. (Which is pretty good; it's just small! Scaling up a complete mesh works way better than scaling up an image.)
- Enables sharp features, caves etc. (because, duh, it's ray marching.)
- Completely relies on SDFs (2D "heightmap" SDFs or even higher-dimensional ones). Meaning, we could deform the mesh in realtime by applying simple CSG operations to the SDF.
- Infinite terrain for free! (We're only rendering what the player can see; if the SDF is "endless", so is our terrain.)
- Right now, there's no precomputation. I'm thinking about the possibility of precomputing a mesh by taking "snapshots" from different perspectives. However, at the moment, it's all realtime.
Cons:
- Only creating a mesh for what we see also means that AI etc. that is not close to the player has no mesh information to rely on.
- I don't know yet. Will update with more cons when I find 'em. Maybe you have some ideas?
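To illustrate the CSG point from the list above: combining SDFs is just min/max over distances, so realtime deformation amounts to composing functions. A minimal sketch; the terrain and crater here are made up for illustration:

```python
import math

# Standard CSG operations on SDFs: min/max over distances gives
# union, intersection, and subtraction of the underlying shapes.
def op_union(d1, d2):
    return min(d1, d2)

def op_intersect(d1, d2):
    return max(d1, d2)

def op_subtract(d1, d2):
    return max(d1, -d2)   # carve shape 2 out of shape 1

def sphere(p, r):
    return math.sqrt(sum(c*c for c in p)) - r

# Hypothetical deformation: carve a spherical "crater" out of the
# terrain; the next ray-sampled mesh picks the change up automatically.
def terrain_with_crater(p, crater_center, crater_radius):
    terrain = p[1]  # flat ground plane at y = 0, as a stand-in terrain SDF
    rel = tuple(a - b for a, b in zip(p, crater_center))
    return op_subtract(terrain, sphere(rel, crater_radius))
```

Because the mesh is rebuilt from the SDF every frame, no mesh-editing code is needed at all; the deformation lives entirely in the distance function.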
All results have been generated using a simple SDF consisting of two sine curves.
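A terrain function built from two sine curves might look something like the following; the exact function, amplitudes, and frequencies used for the results are my guesses. Note that for a heightfield this is only a distance bound rather than an exact SDF, but sphere tracing still converges if the step size is scaled down conservatively:

```python
import math

def terrain_sdf(p, amplitude=0.5, frequency=1.0):
    # Signed distance bound to a terrain whose height is the sum of
    # two sine curves, one along x and one along z.
    x, y, z = p
    height = amplitude * (math.sin(frequency * x) + math.sin(frequency * z))
    return y - height   # positive above the surface, negative below
```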
A huge terrain constructed by taking "snapshots" from above.
The same mesh in wireframe.
Wireframe close up.