
Life is good, but it could be better.

When I started out with programming, my goal was to develop & publish games. Since then, I have developed more & more of a passion for creating systems, tools & algorithms rather than working directly on gameplay mechanics. That's what I like to do now, at least most of the time.

This blog will probably mostly be about my own algorithms, which I'm "inventing" along the way.
The whole point of said algorithms is to simplify existing technology, so that nearly everybody can implement the respective systems, tools or mechanics.

## Extracting A Polygon Mesh From An SDF By Ray-Sampling

After my first (semi-failed) algorithm, I started to think about a new approach. The idea was to utilize simple mechanics such as Ray Marching to extract a polygon mesh from a signed distance function (SDF). So, this is what I came up with:
The algorithm is divided into two separate steps: Ray Marching (or what I call in this case "Ray Sampling") and Mesh Construction from said samples.

### Ray Marching
Ray march the "scene" (i.e. the SDF), most likely from the player's perspective, at a low resolution. I mostly used 192x108 in my tests; current GPUs have no problem whatsoever doing this in realtime.
Instead of saving the color at the "hit point", as is usual when ray marching, I save the point itself in a buffer, accompanied by the normal of the SDF at that exact point.
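A minimal CPU sketch of this sampling pass might look as follows. The camera setup, resolution, and the sphere SDF are placeholder assumptions for illustration, not my actual implementation (which runs on the GPU); the normal comes from a standard central-difference approximation of the SDF gradient.

```python
import math

def sphere_sdf(p, radius=1.0):
    # Signed distance to a sphere at the origin (stand-in for any SDF).
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def sdf_normal(sdf, p, eps=1e-4):
    # Approximate the SDF gradient with central differences, then normalize.
    n = [
        sdf((p[0]+eps, p[1], p[2])) - sdf((p[0]-eps, p[1], p[2])),
        sdf((p[0], p[1]+eps, p[2])) - sdf((p[0], p[1]-eps, p[2])),
        sdf((p[0], p[1], p[2]+eps)) - sdf((p[0], p[1], p[2]-eps)),
    ]
    length = math.sqrt(sum(c*c for c in n)) or 1.0
    return tuple(c / length for c in n)

def ray_sample(sdf, width, height, origin=(0.0, 0.0, -3.0),
               max_steps=128, hit_eps=1e-3, max_dist=20.0):
    """Sphere-trace one ray per pixel; store (hit point, normal) or None."""
    samples = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Simple pinhole camera looking down +z (assumed setup).
            u = (x + 0.5) / width * 2.0 - 1.0
            v = (y + 0.5) / height * 2.0 - 1.0
            d = (u, v, 1.5)
            inv = 1.0 / math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            d = (d[0]*inv, d[1]*inv, d[2]*inv)
            t = 0.0
            for _ in range(max_steps):
                p = (origin[0] + d[0]*t,
                     origin[1] + d[1]*t,
                     origin[2] + d[2]*t)
                dist = sdf(p)
                if dist < hit_eps:
                    # Save the hit point itself plus the surface normal,
                    # instead of shading a color.
                    samples[y][x] = (p, sdf_normal(sdf, p))
                    break
                t += dist
                if t > max_dist:
                    break
    return samples
```

In the real thing each ray is independent, which is why the first step parallelizes trivially on the GPU.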

What we end up with after the first step is a collection of 3D points that resemble the SDF ("samples") & the normals at those positions.

### Mesh Construction
Construct a polygon mesh from those samples by simply connecting neighbouring pixels with each other. Lastly, scale up the mesh to account for the low resolution used when ray marching. (I haven't done this yet in the images/videos you can see at the bottom.) I think the results look quite good. There are still problems I'm trying to solve, of course, such as this weird aliasing (yes, I do know what the root of that problem is).
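A sketch of what "connecting neighbouring pixels" could mean in practice: treat every 2x2 block of adjacent samples as a quad and split it into two triangles, skipping quads where any ray missed the surface. This is my guess at the simplest variant, not the exact code used here.

```python
def build_mesh(samples):
    """Connect neighbouring samples into two triangles per pixel quad.

    `samples` is the 2D grid from the ray-sampling pass; entries are
    (point, normal) tuples, or None where the ray missed. Returns a flat
    vertex list, a matching normal list, and triangle indices.
    """
    height, width = len(samples), len(samples[0])
    vertices, normals = [], []
    index = [[-1] * width for _ in range(height)]
    # Gather every valid sample as a vertex, remembering its index.
    for y in range(height):
        for x in range(width):
            if samples[y][x] is not None:
                index[y][x] = len(vertices)
                p, n = samples[y][x]
                vertices.append(p)
                normals.append(n)
    triangles = []
    for y in range(height - 1):
        for x in range(width - 1):
            a, b = index[y][x], index[y][x + 1]
            c, d = index[y + 1][x], index[y + 1][x + 1]
            # Only emit a quad when all four corner rays hit the surface.
            if -1 not in (a, b, c, d):
                triangles.append((a, b, d))
                triangles.append((a, d, c))
    return vertices, normals, triangles
```

A full W x H grid of hits yields 2*(W-1)*(H-1) triangles, which is why the mesh is exactly as detailed as the ray-marched image.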
It currently runs at about 40-70 fps, i.e. it takes somewhere between 10-25 ms per mesh. (Only the 1st step is parallelized & I haven't done much to optimize the algorithm.)

### The Pro's

- No complex underlying data structure such as a voxel grid.
- Can run in realtime with no problems, especially if optimized.
- No level-of-detail required, which is one of the most painful things when writing a voxel engine. The mesh is as detailed as the image constructed by the ray marcher. (Which is pretty good, it's just small! Scaling up a complete mesh works way better than scaling up an image.)
- Enables sharp features, caves etc. (because, duh, it's ray marching).
- Relies completely on SDFs (2D "heightmap" or even higher-dimensional SDFs), meaning we could deform the mesh in realtime by applying simple CSG operations to the SDF.
- Infinite terrain for free! (We're only rendering what the player can see; if the SDF is "endless", so is our terrain.)

### The Con's

- Right now, there's no precomputation. I'm thinking about the possibility of precomputing a mesh by taking "snapshots" from different perspectives. However, at the moment, it's all realtime.
- Only creating a mesh for what we see also means that AI etc. that is not close to the player has no mesh information to rely on.
- I don't know yet. Will update more con's when I find 'em. Maybe you have some ideas?

### Results!

All results have been generated using a simple SDF consisting of 2 sine curves.

- A huge terrain constructed by taking "snapshots" from above.
- The same mesh in wireframe.
- Wireframe close-up.
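For reference, a heightmap-style SDF built from two sine curves could look like the following. This is only my assumed reconstruction of that kind of function; the exact one used for the screenshots isn't given.

```python
import math

def terrain_sdf(p):
    # Heightmap-style SDF: signed distance above/below a surface defined
    # by two overlapping sine waves (an assumed example, not the exact
    # function used for the screenshots).
    height = math.sin(p[0]) + math.sin(p[2])
    return p[1] - height
```

Because the function is defined everywhere, the terrain it describes is effectively endless, which is what makes the "infinite terrain for free" point work.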

## A Novel Approach To (Non-Manifold) Dual Contouring

For the sake of "completeness", my first entry is about the very first algorithm I came up with when I was thinking about new solutions to procedural meshing, i.e. extracting a polygon mesh from implicit geometry (e.g. from a signed distance function, SDF). You can find out more about it in this thread right here. At first, I liked the idea. However, the more I worked on the algorithm, the more I found characteristics that I dislike about this approach.
You can read about the pro's in the thread linked above, so here is the stuff that I dislike now:

- The algorithm is a typical "voxel" algorithm. It relies on a 3D grid.
- It may suffer from floating-point imprecision.
- It lost its simplicity. The idea behind the algorithm was to find a simple alternative to DC. This is still very true for the 2D version of the algorithm, but for the 3D version, not so much...
I had to account for more & more "special cases" during development, resulting in an algorithm that wasn't simple enough for my taste anymore.