
Bumpy World

Gnollrunner


After a LOT of bug fixes and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree.

Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertices (not to be confused with mesh vertices) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected, and there is a reference counting system that handles deallocation of unused objects.
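The hierarchy described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual code; all type and member names are hypothetical.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the "explicit" octree components described above.
struct VoxelVertex {
    float x, y, z;      // corner position
    int   refCount = 0; // shared components are reference counted
};

struct VoxelEdge {
    VoxelVertex* ends[2];                          // the two corners it connects
    VoxelEdge*   children[2] = {nullptr, nullptr}; // binary tree of sub-edges
    int refCount = 0;
};

struct VoxelFace {
    std::vector<VoxelEdge*> edges;  // three edges (triangle) or four (quad)
    VoxelFace* children[4] = {};    // quadtree of sub-faces
    int refCount = 0;
};

struct Voxel {
    std::vector<VoxelFace*> faces;  // a prism references 2 triangular + 3 quad faces
    Voxel* children[8] = {};        // the octree of voxels itself
    int refCount = 0;
};
```

Because faces and edges are real shared objects rather than implied by grid coordinates, two voxels on either side of a face reference the same `VoxelFace` instance.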

So why go through all this trouble? The answer is that by doing things this way we can avoid traversing the octree when building meshes with our marching prisms algorithm. For instance, if there is a mesh edge on a voxel face, since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges. A mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important since meshes will be constantly recalculated for LOD as a player moves around.
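The sharing of mesh vertices across voxels can be illustrated with a small cache keyed by the shared edge object. This is a sketch under assumptions: `EdgeId` stands in for a pointer to a shared voxel edge, and the function names are invented for illustration.

```cpp
#include <cassert>
#include <unordered_map>

struct MeshVertex { float x, y, z; };
using EdgeId = int; // stand-in for a pointer/handle to a shared voxel edge

// Because every voxel touching an edge references the same edge object,
// a mesh vertex created on that edge can be cached once and reused by
// all neighbouring voxels -- giving seamless meshes for free.
std::unordered_map<EdgeId, MeshVertex> gEdgeVertexCache;

MeshVertex& meshVertexOnEdge(EdgeId e, float x, float y, float z) {
    auto it = gEdgeVertexCache.find(e);
    if (it == gEdgeVertexCache.end())
        it = gEdgeVertexCache.emplace(e, MeshVertex{x, y, z}).first;
    return it->second; // every voxel sharing edge e gets the same vertex
}
```

The second voxel asking for a vertex on the same edge gets the already-created one, so no neighbour traversal or post-hoc stitching is needed.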

This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a sub-section of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple since we know there will only be one level difference between chunks; therefore, if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2-element array indexed by the last bit of the chunk level. After going down the borders of each chunk, border objects will have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no more need to look at a voxel's neighboring voxels.

One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees.

OK, so it's picture time. We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multi-fractal.
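Displacing a sphere with 3D noise can be sketched as sampling the noise along each surface direction and adding it to the base radius. This is a minimal illustration with invented names; `fakeNoise3` is a cheap sum-of-sines stand-in for a real simplex noise call, used only to keep the sketch self-contained.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for 3D simplex noise; returns a value in roughly [-1, 1].
float fakeNoise3(float x, float y, float z) {
    return std::sin(x * 1.7f) * std::sin(y * 2.3f) * std::sin(z * 2.9f);
}

// Terrain height for a unit direction (dx,dy,dz) from the planet centre:
// the base sphere radius plus noise sampled on the sphere surface.
float terrainRadius(float dx, float dy, float dz,
                    float baseRadius, float amplitude, float frequency) {
    float n = fakeNoise3(dx * frequency, dy * frequency, dz * frequency);
    return baseRadius + amplitude * n;
}
```

Sampling in 3D (rather than from a 2D height map) keeps the pattern seamless over the whole globe, which matches the bumpy sphere in the picture below.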

 

 

BumpyWorld.jpg



14 Comments


Recommended Comments

That's beautiful geometry - i did not expect that from hearing you talking about prisms and octrees. I want to dig caves into this stuff... seems fun!

To make your explanation clear you would need to add some illustrations, maybe in future posts. (I do not get anything :) )

 

15 hours ago, JoeJ said:

That's beautiful geometry - i did not expect that from hearing you talking about prisms and octrees. I want to dig caves into this stuff... seems fun!

To make your explanation clear you would need to add some illustrations, maybe in future posts. (I do not get anything :) )

 

Yeah, explaining stuff in text is always hard for me. I think in my mind I was assuming familiarity with marching cubes, but yeah, I probably need more diagrams and stuff. Thanks for your input.

7 minutes ago, unbird said:

Very nice. And curious. Never heard of marching prisms. Marchers usually create a more "ugly" mesh. This one doesn't seem to. Is it from this paper Marching Prisms: Rapid section generation in 3D geological model.pdf ?

I didn't actually use that paper. I googled it after I wrote my library and saw someone else had done similar stuff, but it wasn't a big surprise. I mean, it's not rocket science. Both are space-filling shapes and both can be subdivided into an octree. In fact, I think they may be the only two shapes you can do that with.

Each has advantages and disadvantages. The cube is simpler and goes into a matrix better, but it also has 256 cases as opposed to 64. Since I'm using an octree, the matrix advantage doesn't really help me much. The cube also has more ambiguous cases, and some of those cases appear in horizontal geometry, while the ambiguous cases of the prism are only encountered with very vertical geometry. Since terrain is a lot more horizontal in general, that's a bit of an advantage for the prism. But in my particular algorithm I disambiguate all the ambiguous cases anyway.
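The 256-vs-64 figures follow directly from the corner counts: each cell corner is either inside or outside the surface, so a cell with n corners has 2^n raw sign configurations. A one-liner makes the arithmetic concrete:

```cpp
#include <cassert>

// Each corner of a marching cell is inside or outside the isosurface,
// giving 2^n raw cases for a cell with n corners.
constexpr int marchingCases(int corners) { return 1 << corners; }

static_assert(marchingCases(8) == 256, "marching cubes: 8 cube corners");
static_assert(marchingCases(6) == 64,  "marching prisms: 6 prism corners");
```

Symmetry reduction shrinks both tables considerably in practice, but the prism starts from a quarter as many raw cases.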

None of these are the primary reasons I chose prisms, however. Since I'm doing a planet, I wanted to wrap my voxels so the orientation is always the same everywhere on the surface. For that, prisms are way better because you start out with an extruded icosahedron instead of an extruded cube. The voxels in general end up being a lot more uniform, which is, I think, what you are noticing. I'm also going to use this property for a trick I thought of for doing fast noise-generated craters for moon-like planets. If it works like I hope, it should be cool.


Thanks, now I get it. I should have looked at your other entries too. Indeed this seems very suitable for a planet. Congrats.


Not really related yet, but there are some guys working on generating hexahedral meshes in realtime: https://www.hextreme.eu/

I think this stuff might hold the (wrong?) promises coming from Unlimited Detail and Atomontage. It really combines the advantages of triangles and volumes, instead of claiming one would be better than the other in general. Working on quadrangulation I already see a lot of applications for games, but hexmeshes add destruction and diffuse geometry to that.

Seeing a claim for realtime already at the present day is really unexpected and hard to believe, but reading their crossfield paper - those people know what they're doing...

2 hours ago, JoeJ said:

Not really related yet, but there are some guys working on generating hexahedral meshes in realtime: https://www.hextreme.eu/

I think this stuff might hold the (wrong?) promises coming from Unlimited Detail and Atomontage. It really combines the advantages of triangles and volumes, instead of claiming one would be better than the other in general. Working on quadrangulation I already see a lot of applications for games, but hexmeshes add destruction and diffuse geometry to that.

Seeing a claim for realtime already at the present day is really unexpected and hard to believe, but reading their crossfield paper - those people know what they're doing...

Not sure what this is, really. I mean, it kind of sounds like one of the normal voxel algorithms: marching cubes, surface nets, dual contouring, etc. I'm really specifically targeting games, however, so everything has to work in real time as far as I'm concerned.

20 minutes ago, Gnollrunner said:

Not sure what this is, really. I mean, it kind of sounds like one of the normal voxel algorithms: marching cubes, surface nets, dual contouring, etc. I'm really specifically targeting games, however, so everything has to work in real time as far as I'm concerned.

Yeah... i'm quite alone with my vision here. But i'm sure this has big potential for games, so i'll talk about it (again) in more detail. (It has nothing to do with voxels or surface extraction from volumes.)

This is not meant as a proposal to extend or replace your work! More as a glimpse into a far, potential future, maybe... :)

 

If you look at that bunny, its surface consists only of quads (i do not generate quad geometry because i'm actually interested just in the UV map).

Now imagine what marching cubes would look like: irregular triangles, almost unstructured, not really an efficient or nice representation of the surface - we know about those problems, nothing new.

With quadrangulation this changes: We have a structured way to store surface data in a very efficient manner, and this has many applications:

* Cache irradiance on the surface, which is necessary to solve the realtime GI problem (that's what i do)

* Seamless texture mapping, which finally enables proper displacement mapping for everything, not just height maps. Also related stuff like Geometry Images. (Competes with the high detail voxel idea using a fraction of the data.)

* A new solution to the LOD problem by tessellating quads. (For this we need much larger quads than shown on the bunny, which is hard, and because industrial design has no interest here, we need to do our own research - i'm at this point now.)

 

In practice i see application for highly efficient and detailed world geometry. The idea plays well together with virtual texturing, object space lighting and photogrammetry.

The problem is: This is very hard - it took me more than a year to catch up with state of the art in research. You know: No degree :( probably Fulcrum would have done this in two months, seriously. :)

 

So this is quadrangulation. Hexahedral meshing extends it with cubes into the inside of the mesh. So destruction, diffuse geometry like vegetation, and raytracing are also possible by marching along the vector field, which has low curvature. (I don't think we really need all this - likely impractical... just to exaggerate the idea)

 

None of this is expected to be realtime. Fast methods, like the one from my link, are fast only because they generate very high resolutions. For games we want a low resolution base mesh that we can subdivide on demand. This involves multiple solves, the integer rounding problem is NP-hard, etc. The optimal placement of singularities is still an open problem for game demands because all existing methods ignore the target meshing resolution. However, runtime content creation like Minecraft might be possible, but i would not consider this for an upcoming game anytime soon.

 


 

quadbunny.JPG

6 hours ago, JoeJ said:

Yeah... i'm quite alone with my vision here. But i'm sure this has big potential for games, so i'll talk about it (again) in more detail. (It has nothing to do with voxels or surface extraction from volumes.)

This is not meant as a proposal to extend or replace your work! More as a glimpse into a far, potential future, maybe... :)

I'm not so paranoid.  ☺️  It's always interesting to see what others are doing. I'm actually still not sure how the mesh is being generated. Is it some sort of projection from subdivided cubes?

Quote

Now imagine what marching cubes would look like: irregular triangles, almost unstructured, not really an efficient or nice representation of the surface - we know about those problems, nothing new.

That's true. However, the strength of marching cubes is that from a function or a very simple data description, you can create your mesh geometry quickly, and it's easy to modify too. On the other hand, the irregular mesh is why I decided to "wrap" my voxels around the planet. It just kind of makes everything more even. At one point I came up with an idea of starting out with a height map style mesh and then splitting and stretching new triangles. The problem is it wasn't clear to me how to quickly make them conform to a function once you get past a height map function.

Quote

Seamless texture mapping, which finally enables proper displacement mapping for everything, not just height maps. Also related stuff like Geometry Images. (Compete with high detail voxel idea using a fraction of data.)

I'm actually going to see how far I can go with procedural textures, which are seamless by nature. I tried raw simplex noise on my first non-voxelated world and it worked OK, but it had anti-aliasing problems. I kind of fixed it with a distance function that fades the pattern out, but I really haven't done much with this yet. Here is something in DX9 from a few years back, before I put the anti-aliasing code in. You can see the problem. But at least the terrain was good and you could walk on it, since I had sphere-to-mesh collision working.

NorthPoll-1.JPG.c5ba86da4f984836d914c9c9e71d9c37.JPG

 

Quote

So this is quadrangulation. Hexahedral meshing extends it with cubes into the inside of the mesh. So destruction, diffuse geometry like vegetation, raytracing is also possible by marching along the vector field which has low curvature. (I don't think we really need all this - likely unpractical... just to exaggerate the idea)

I've been thinking about raytracing myself. I think I could probably do something with my current code because everything gets generated into an octree, so finding intersections is fast. I used a quadtree in my first height-mapped world above and it was good enough to do JIT terrain and collision. I think the octree is better for this though, because I ended up having to use cylinders for my bounding shapes because of steep terrain where triangles would be large. If I tried to use spheres, they would become so big that too many cells were being checked and it bogged down somewhat. For the octree it isn't a problem though, since size is limited by the voxel. I kind of look at this as one of the main advantages of the voxel approach. Even if the mesh isn't as uniform, you do get your geometry divided up into nice neat cells, which makes collision easy to calculate.

 

 

23 minutes ago, Gnollrunner said:

I'm actually still not sure how the mesh is being generated. Is it some sort of projection from subdivided cubes?

No, voxelization has been used in early research on automated polycube generation, but it inherits the problems coming from a fixed world-space grid, which rarely fits the surface well. (Tried it but failed badly.)

For quadrangulation there are many approaches. The most popular is to generate a crossfield on the surface which is aligned to curvature directions. From this field, singularities can be extracted (on the bunny you can see some vertices with valence 3 or 5). Then you can trace lines going outwards in those 3 or 5 directions to form a motorcycle graph, which is guaranteed to consist only of quads with T-junctions. For those T-quads we can form a graph connecting opposing edges across neighbouring quads, and in this graph we can greedily seek edge loops in a Dijkstra-like manner to tessellate the edges (so solving the NP-hard rounding problem).

This whole process becomes a lot easier if we first solve for an initial UV parametrization which already respects the angle constraints coming from singularities (3 or 5 times 90 degrees). Then the general shape of the resulting parametrization is already given, tracing can be done in 2D, and we try to round the tessellation so singularities keep their distance as well as possible. (https://www.graphics.rwth-aachen.de/media/papers/campen_sa2015_qgp_medium.pdf)

Other approaches work with some spectral function over the surface and avoid the need for crossfields and parametrization. There has been heavy research in the past decade and it is still going on.

47 minutes ago, Gnollrunner said:

That's true. However, the strength of marching cubes is that from a function or a very simple data description, you can create your mesh geometry quickly, and it's easy to modify too.

Yeah, don't torture yourself trying to achieve this with quadrangulation. (I regret my decision to go there myself - but i really get better quality and performance from my GI stuff, and i'm almost done, sigh... :) )

50 minutes ago, Gnollrunner said:

I'm actually going to see how far I can go with procedural textures which are seamless by nature.

As you mention this, i think vector fields have a great potential to improve this beyond the boring patterns we get there. See this for example: https://www.cs.cmu.edu/~kmcrane/Projects/StripePatterns/

Simple vector, line and cross fields can be computed quickly by diffusing curvature directions without trig. Driving noise generation with this might result in more interesting stuff at low cost. This could be really useful for you. 

JoeJ

Posted (edited)

I tried it out with a very simple approach, but i'm not excited.

The idea is to generate a field from curvature, diffuse it over the mesh, and then use the field directions to offset noise sampling.

The unexpected problem: usually you create a line field for curvature, so two vectors in opposite directions along a sharp edge. But for the sample offset we have no use for two opposite directions; they would just cancel each other out. Using a vector field makes the vectors point to spots of high curvature, so the resulting field is twirly and not so nice :(

Here are two images without and with the field offset. (I have no nice noise function handy, so i multiplied 1D gradient noise functions for each dimension, which is ugly, but we see how the field changes this.)

Maybe tomorrow i will try something more advanced using trivial connections (paper on the same site). I expect this to look much better, and the runtime cost should be similar...

(artifacts come from rendering many points instead of using a pixel shader)

 

no field.JPG

field offset.JPG

Edited by JoeJ


After trying Trivial Connections (which did not look any better although the field is very smooth), i realized my mistake was to diffuse the field only on the vertex tangent plane. But because the bunny is bumpy, the normal variance caused a lot of distortion. I've fixed it by smoothing the field in global 3D space, and also added bumpy Perlin noise, but it still looks bad. At least you see that a simple approach like this is not worth the effort. The patterns align to the surface somehow, but the stretching it causes is just ugly. :(

 

no field2.JPG

field offset2.JPG


After I get my thread stuff fully working, I'm going to try to do some shading on my world. I have simplex noise in old DX9 HLSL which I should be able to port. Again, the main problem for me is the anti-aliasing. I guess for starters I'll just have the pattern fade into a single predefined color at distance. That kind of worked before, but it's probably not ideal. I have yet to find a really good solution. Maybe phasing out higher frequencies with distance might work.
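The "phase out higher frequencies with distance" idea can be sketched as a fractal sum where each octave is faded once its wavelength shrinks below what a pixel can resolve at that distance. This is a hypothetical sketch, not code from the blog: the names are invented, `std::sin` stands in for a real noise function, and the fade threshold is a rough screen-space heuristic.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Fractal noise with per-octave fading: octaves whose wavelength falls
// below ~2 pixel footprints at the given distance contribute nothing,
// which suppresses the shimmering aliasing of unresolvable detail.
float fadedFractal(float x, float distance, int octaves,
                   float pixelFootprint /* world units per pixel at distance 1 */) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        float wavelength = 1.0f / freq;
        float footprint  = pixelFootprint * distance;
        // 1 when the octave is well resolved, 0 when it would alias
        float fade = std::clamp(wavelength / (2.0f * footprint) - 1.0f, 0.0f, 1.0f);
        sum  += fade * amp * std::sin(x * freq); // sin as a noise stand-in
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum;
}
```

Up close all octaves contribute; far away the sum smoothly collapses toward the low frequencies, which plays the same role as fading to a predefined color but keeps the large-scale pattern visible.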

