An easy explanation of Dual Contouring?

Hello,

I'm a software development trainee, and in my free time I try out various things. I decided I wanted to figure out how "Dual Contouring" works, but I haven't been successful yet. I could find some implementations in code that are very hard to follow, and some papers "explaining" it written in a way that makes it hard for me to even understand what the input and output are. All I know is that it is used to make a surface/mesh out of voxels. Is there any explanation someone can understand without having studied math/computer science?

Part of the problem is that I can't even translate most of the terms into German (like "Hermite data"), nor can I find an explanation I understand. I know how Marching Cubes works, but I just can't find anything comparable for Dual Contouring. I also don't quite get the point of octrees. As far as I'm aware, an octree is a set of data organized like a tree where each element points to 8 more elements. I don't get how that could be helpful in an infinite 3D world, or why I shouldn't just use a list of coordinates with zeros and ones for dirt/air or something like that.

Thanks

A clueless trainee ^^


I have looked through the code, but I have some issues with the variable names and functions used. From what I have seen, the Field consists of an array of planes, which is kind of confusing, as I thought Dual Contouring only uses something like a 3D array of zeros and ones. Also, I'm not quite sure whether a voxel sits in the middle of a cube or on a corner, because the code seems to iterate with foreach3D (a function?) over each voxel and determine whether the surrounding voxels are all outside or all inside (below zero or greater than zero, as it seems). If that's not the case, it adds vertices to CrossingCorners. Is that correct for the first part? I find this confusing because a voxel in Minecraft terms simply displays one cube and can be either 0 or a block ID, but here it seems that 8 voxels/coordinates represent one cube, or whatever comes out of the algorithm.


Hermite data: it's the same volume data (or distance field) that you'd use for regular marching cubes, but with surface normals at each sampled point as well.

If you are using a distance field as your source of volume data, you can find the normal at each sample point using a central difference, and voila, you have Hermite data.
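For example, here is a minimal sketch of that central-difference step in C++. The sample() function, the Vec3 struct, and the inside/outside sign convention are placeholders for whatever your own volume data uses, not taken from any particular library:

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder: your scalar field, e.g. a signed distance function
// (assumed here to be negative inside the surface, positive outside).
float sample(float x, float y, float z);

// Approximate the surface normal at (x, y, z) with central differences.
// The gradient of a signed distance field points outward, so the
// normalized gradient can serve as the Hermite normal at that sample.
Vec3 normalAt(float x, float y, float z)
{
    const float h = 0.01f; // step size; keep it small relative to your grid spacing
    Vec3 n;
    n.x = sample(x + h, y, z) - sample(x - h, y, z);
    n.y = sample(x, y + h, z) - sample(x, y - h, z);
    n.z = sample(x, y, z + h) - sample(x, y, z - h);

    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f)
    {
        n.x /= len;
        n.y /= len;
        n.z /= len;
    }
    return n;
}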


Conceptually, Dual Contouring is not that different from Marching Cubes. In both cases, you divide your volume data into a grid, and at each cube in the grid you evaluate the surface. Any cube containing an edge crossing, or in other words any cube that intersects the surface (i.e. isn't entirely empty or entirely full), will generate output geometry.

The key difference between the two is that in Marching Cubes you will just directly generate geometry from the edge crossings. In Dual Contouring you will instead use the normals at each edge-crossing (this is why you need Hermite Data) to locate a single exact point on the surface, and you'll generate geometry from these points (spanning multiple cubes) in a subsequent pass.

The complicated part of Dual Contouring is mostly in how you find the optimal point based on the normals of neighbouring edge crossings. The original paper uses QEF minimisation, which is... a little opaque to those of us without a deep math background.
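For what it's worth, many hobby implementations sidestep the QEF entirely and just average a cell's edge-crossing positions, which is really the Surface Nets vertex placement rather than true Dual Contouring (you lose sharp edges and corners, but the mesh topology is the same). A rough sketch of that substitute, with EdgeCrossing and Vec3 made up for the example:

#include <vector>

struct Vec3 { float x, y, z; };

// One crossing of the surface with a cell edge: where the surface crosses
// the edge and the surface normal there (the Hermite data).
struct EdgeCrossing { Vec3 position; Vec3 normal; };

// Place the cell's output vertex at the average of its edge crossings.
// The original Dual Contouring paper instead minimises a quadratic error
// function built from the planes (position, normal), which is what lets it
// reproduce sharp features; this average is the simpler Surface Nets choice.
Vec3 placeCellVertex(const std::vector<EdgeCrossing>& crossings)
{
    Vec3 v = { 0.0f, 0.0f, 0.0f };
    for (const EdgeCrossing& c : crossings)
    {
        v.x += c.position.x;
        v.y += c.position.y;
        v.z += c.position.z;
    }
    if (!crossings.empty())
    {
        float n = float(crossings.size());
        v.x /= n;
        v.y /= n;
        v.z /= n;
    }
    return v;
}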

If you are looking for a more readable implementation, Lin has a pretty simple one on github.

25 minutes ago, swiftcoder said:

Conceptually, Dual Contouring is not that different from Marching Cubes. In both cases, you divide your volume data into a grid, and at each cube in the grid you evaluate the surface. Any cube containing an edge crossing, or in other words any cube that intersects the surface (i.e. isn't entirely empty or entirely full), will generate output geometry.

The key difference between the two is that in Marching Cubes you will just directly generate geometry from the edge crossings. In Dual Contouring you will instead use the normals at each edge-crossing (this is why you need Hermite Data) to locate a single exact point on the surface, and you'll generate geometry from these points (spanning multiple cubes) in a subsequent pass.

The complicated part of Dual Contouring is mostly in how you find the optimal point based on the normals of neighbouring edge crossings. The original paper uses QEF minimisation, which is... a little opaque to those of us without a deep math background.

If you are looking for a more readable implementation, Lin has a pretty simple one on github.

Wow, thank you so much, you just cleared up a lot of the things I found confusing. I still don't quite understand how those normals are supposed to work, though. If, for example, a cube gets placed by the player in my game, then I create volume data containing the coordinates of its 8 corners, right? But what normal direction do I give those 8 coordinates, and how exactly do I use it? Also, how am I supposed to locate a point on the surface? I thought Dual Contouring creates this surface out of a set of data, so how can I determine a location on that shape in the first place?

I made an image of how I thought it works:

1. The user wants to create a cube

bixBenE.png

2. Create 8 voxels

ETh3sR4.png

3. Add/calculate normals (free hand ^^)

sGWuUYe.png

4. Dual Contouring

gPKKpKJ.png

 

Is this how it works or did I miss something? And how does Dual Contouring actually use those normals?


I took a shot at deciphering the emilk code, and I'm sort of half-triumphant.

I put up a copy of the code in my own GitHub repository:

https://github.com/sjhalayka/dc

I added a function in main.cpp called addFractal (also see below for source code), which adds a quaternion Julia set. I set the quaternion parameters to all zeroes, which produces a 3D ball. If you take a look at the mesh.obj that I put up (also see attached file below), you can make out part of the sphere, but the surface is all choppy. I'm not sure if I'm setting the dist variable or the normal variable correctly. Anyone have any suggestions?

Attached file:

mesh.obj

 

void addFractal(Field& field)
{
    // Sampling bounds for the fractal (Z's x/y/z components are overwritten
    // per cell below, so effectively only z_w is carried over, as Z.w).
    float x_grid_max = 1.5;
    float y_grid_max = 1.5;
    float z_grid_max = 1.5;
    float x_grid_min = -x_grid_max;
    float y_grid_min = -y_grid_max;
    float z_grid_min = -z_grid_max;

    // Julia set parameters; C set to all zeroes produces a 3D ball.
    float z_w = 0;
    quaternion C;
    C.x = 0.0f;
    C.y = 0.0f;
    C.z = 0.0f;
    C.w = 0.0f;
    unsigned short int max_iterations = 8;
    float threshold = 4;

    string error_string;
    quaternion_julia_set_equation_parser eqparser;

    if (false == eqparser.setup("Z = sin(Z) + C * sin(Z)", error_string, C))
    {
        cout << "Equation error: " << error_string << endl;
        return;
    }
    else
    {
        cout << "Equation ok" << endl;
    }

    quaternion Z(x_grid_min, y_grid_min, z_grid_min, z_w);

    foreach3D(field.size(), [&](Vec3u upos)
    {
        auto& cell = field[upos];

        // Map the integer grid position into [0, 1] along each axis.
        Z.x = upos.x / float(FieldSize - 1);
        Z.y = upos.y / float(FieldSize - 1);
        Z.z = upos.z / float(FieldSize - 1);

        // Distance value: the iteration result relative to the escape threshold.
        cell.dist = eqparser.iterate(Z, max_iterations, threshold) - threshold;

        // Approximate the gradient with central differences along each axis:
        // sample at offsets of -0.01 and +0.01, take the difference, halve it below.
        Z.x -= 0.01;
        float normal_x1 = eqparser.iterate(Z, max_iterations, threshold);

        Z.x += 0.02;
        float normal_x2 = eqparser.iterate(Z, max_iterations, threshold);

        Z.x -= 0.01;
        float normal_x = normal_x1 - normal_x2;

        Z.y -= 0.01;
        float normal_y1 = eqparser.iterate(Z, max_iterations, threshold);

        Z.y += 0.02;
        float normal_y2 = eqparser.iterate(Z, max_iterations, threshold);

        Z.y -= 0.01;
        float normal_y = normal_y1 - normal_y2;

        Z.z -= 0.01;
        float normal_z1 = eqparser.iterate(Z, max_iterations, threshold);

        Z.z += 0.02;
        float normal_z2 = eqparser.iterate(Z, max_iterations, threshold);

        Z.z -= 0.01;
        float normal_z = normal_z1 - normal_z2;

        normal_x /= 2.0;
        normal_y /= 2.0;
        normal_z /= 2.0;

        cell.normal = normalized(Vec3(normal_x, normal_y, normal_z));
    });
}

 

9 hours ago, QuesterDesura said:

If, for example, a cube gets placed by the player in my game, then I create volume data containing the coordinates of its 8 corners, right?

Well, no. Your volume data is usually a regular grid (it can hold 1-bit empty/full values or floating-point density values). You'd initialise the entire block of volume data to empty (i.e. zero). Then, if the player places a cube, you'd intersect their cube with the volume data and set each cell that intersects the box to full (i.e. one).
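A minimal sketch of that idea in C++, for illustration only (the grid size, the Box type, and the flat-array layout are made up here, not taken from any particular engine):

#include <vector>
#include <algorithm>

// Hypothetical axis-aligned box in grid coordinates (inclusive min, exclusive max).
struct Box { int minX, minY, minZ, maxX, maxY, maxZ; };

const int N = 64; // grid resolution, chosen arbitrarily for the example

// 1 = full, 0 = empty; flat array indexed as x + y*N + z*N*N.
std::vector<unsigned char> field(N * N * N, 0);

// Mark every cell covered by the player's box as full.
void placeBox(const Box& b)
{
    int x0 = std::max(b.minX, 0), x1 = std::min(b.maxX, N);
    int y0 = std::max(b.minY, 0), y1 = std::min(b.maxY, N);
    int z0 = std::max(b.minZ, 0), z1 = std::min(b.maxZ, N);

    for (int z = z0; z < z1; ++z)
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                field[x + y * N + z * N * N] = 1;
}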

If you are looking for Minecraft-style terrain, where everything is cubes, then Dual Contouring is going to overcomplicate things significantly, and there are much simpler methods.


I have gotten some stuff working now. I used the Unity editor to make it easier to see what I'm doing. I have "translated" the code from the GitHub project into C# and changed or removed the parts I don't understand. This is my first test:

WJALL5X.png

The small cubes represent voxels that are inside the mesh. The next steps will be to let the algorithm run over the other sides as well and to produce a solid shape.

This is how it looks with a distance of -0.5 on all edges:

O7G4aqD.png

I think (and hope) I have now understood how this algorithm works. I will post again if I encounter new problems, or hopefully when I have some more results to share.

13 hours ago, QuesterDesura said:

I'm looking for something like this (video link).

From the video it looks like they do not use dual contouring; they only snap voxel vertices to the nearest point of the target geometry, which should be a lot easier to implement. (The limitation is probably that you cannot have two surfaces in one voxel.)

Unfortunately I do not remember a proper name or explanation for the technique, but there are a few around. Maybe someone else does...

31 minutes ago, JoeJ said:

From the video it looks like they do not use dual contouring; they only snap voxel vertices to the nearest point of the target geometry, which should be a lot easier to implement. (The limitation is probably that you cannot have two surfaces in one voxel.)

Unfortunately I do not remember a proper name or explanation for the technique, but there are a few around. Maybe someone else does...

They are using Dual Contouring, which you can also see when you watch the full video; it's also stated on their website.

7 hours ago, QuesterDesura said:

They are using Dual Contouring, which you can also see when you watch the full video; it's also stated on their website.

Still disagree. See here (1:10):

 

The wireframe morphs directly from deformed geometry to voxels. No geometry is generated inside a voxel, as dual contouring or marching cubes would do.

Also, the edge artefacts at the beginning, when they build a structure from 3 boxes, may hint at this (a missing vertex to represent multiple surfaces inside one voxel).

Of course, I do not know if that's all the functionality they have.

2 hours ago, JoeJ said:

I found what I had in mind; it's called 'Surface Nets':

https://0fps.net/2012/07/12/smooth-voxel-terrain-part-2/

http://www.merl.com/publications/docs/TR99-24.pdf

Also, I found this about dual contouring, in case you don't know it yet; it seems quite complete, with code: http://ngildea.blogspot.co.at/

At 1:10 you can see that the cube is not being smoothed; if it were Surface Nets, the cube wouldn't remain sharp. Also, as I have already said, they clearly state on their website that they are using Dual Contouring. But that's really not what I want to discuss here.

39 minutes ago, QuesterDesura said:

if it were Surface Nets, the cube wouldn't remain sharp

You have not understood Surface Nets.

1 hour ago, JoeJ said:

You have not understood Surface Nets.

Other people might come to this thread hoping to get some help understanding Dual Contouring, and while they are trying to get compact help and some visuals, they have to read through three posts of one guy arguing about whether a computer game used Dual Contouring or not. This thread is about finding an easy explanation/implementation of Dual Contouring, and I have already told you that I don't want to discuss which algorithm they used there, because I don't care about it. I just want to get a working Dual Contouring algorithm and maybe help some other people too.

--------------------------------------------------------------

11 hours ago, sjhalayka said:

@QuesterDesura -- can you please share your code on GitHub?

I have continued working on the Unity project and made some progress. Sadly, it seems my way of making a mesh out of the crossing corners is rather bad. What I have come up with is somewhat working, but unfortunately the normals are inverted in some cases.

I have added as many comments as I could, so it's easy to follow what I'm trying to do. I have used most of the structure from emilk's GitHub project, but I'm not quite sure how the triangles are created there. Maybe you have an idea how I could do it.

I have attached the .unitypackage here, and the two C# files can be found here: https://github.com/PowerOfCreation/Dual-Contouring.

Normals are inverted:

1Zt97hx.png

My current approach is to find, for each corner with a sign change, the two closest other corners that also have a sign change. Then I create a triangle between these three corners and mark them as processed. Processed corners won't start another triangle of their own, but they can be reused if another corner needs them to form a triangle.

DualContouring.unitypackage
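For reference, the usual Dual Contouring approach to building faces is a bit different: instead of triangulating nearby corners, you emit one quad per sign-changing grid edge, connecting the cell vertices of the four cells that share that edge, and you pick the quad's winding from which end of the edge is solid. That is also what keeps the face normals consistent. Here is a rough sketch, where cellVertexIndex() and sampleAt() are placeholders for your own lookups; which of the two windings counts as "outward" depends on your coordinate conventions, the important part is that the two cases are mirrored:

#include <vector>

// Placeholders for the example: the index of the generated vertex belonging
// to the cell whose minimum corner is (x, y, z), and the sampled density at
// a grid corner (negative = inside, positive = outside).
int cellVertexIndex(int x, int y, int z);
float sampleAt(int x, int y, int z);

std::vector<int> indices; // triangle index list being built

// Emit two triangles around the z-aligned edge between corners (x, y, z)
// and (x, y, z + 1). The four cells sharing that edge have minimum corners
// (x-1, y-1, z), (x, y-1, z), (x, y, z) and (x-1, y, z), listed so they go
// around the edge in order. The winding is flipped when the solid end of
// the edge swaps, so the two cases face opposite ways.
void emitQuadAroundZEdge(int x, int y, int z)
{
    float d0 = sampleAt(x, y, z);
    float d1 = sampleAt(x, y, z + 1);

    if ((d0 < 0.0f) == (d1 < 0.0f))
        return; // no sign change, the surface does not cross this edge

    int a = cellVertexIndex(x - 1, y - 1, z);
    int b = cellVertexIndex(x,     y - 1, z);
    int c = cellVertexIndex(x,     y,     z);
    int d = cellVertexIndex(x - 1, y,     z);

    if (d0 < 0.0f)
        indices.insert(indices.end(), { a, b, c,  a, c, d });
    else
        indices.insert(indices.end(), { a, c, b,  a, d, c });
}

The x- and y-aligned edges get the same treatment with the roles of the axes swapped.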

44 minutes ago, QuesterDesura said:

Other people might come to this thread hoping to get some help understanding Dual Contouring, and while they are trying to get compact help and some visuals, they have to read through three posts of one guy arguing about whether a computer game used Dual Contouring or not. This thread is about finding an easy explanation/implementation of Dual Contouring, and I have already told you that I don't want to discuss which algorithm they used there, because I don't care about it. I just want to get a working Dual Contouring algorithm and maybe help some other people too.

Maybe those other people ARE interested in lesser-known alternatives; you have already made clear that you are not.

Also, YOU brought up the game and said this is what you want.

Sorry for wasting space in your thread (and my time trying to help), while you are allowed to repeat yourself and be rude. Work on your social competence!

No worries: End of conversation.

 


I have cleaned up some things in the code and tried a different scenario. As far as I can see, it connected all the triangles correctly, which is great :). Sadly, I still haven't figured out how to fix my problem with the normals sometimes facing in the wrong direction. I have updated the GitHub project and attached the new .unitypackage here as well.

dRcGRsi.png

DualContouring.unitypackage


This is a general reminder to keep the conversation civil.  Some private warnings have been issued.

No more personal attacks.  Let's keep it focused on the topic of the mesh data processing.

1 hour ago, frob said:

This is a general reminder to keep the conversation civil.  Some private warnings have been issued.

No more personal attacks.  Let's keep it focused on the topic of the mesh data processing.

Quote

Note for member

Several of your replies in the Dual Contouring discussion have been abusive, and a few border on useless. In one reply you state that you don't remember a technique but that it is better. In another you declare, in an insulting way, that the person asking the question doesn't understand your answer, yet you make no attempt to help them understand.

Some of the replies include subtle personal attacks, which are not tolerated on this site.  Statements like "stop being rude" and "work on your social competence!" are not okay.

Please tone it down.  Replies should generally help people learn and become better game developers, not insult them.

Well, fair enough. I apologize for being rude myself; instead I should have explained why Quester's quoted comment was wrong:

If you watch the video yourself, it's clearly visible that the geometry is just a deformation of grid-aligned cubes. Deformed cubes can of course have sharp features. For the sphere shown in the video, Dual Contouring would produce intersections with the grid and also new geometry inside the cubes. (I did not say Surface Nets are 'better', just 'easier to implement', so probably attractive for a beginner.)

Quote

As I have already said, they clearly state on their website that they are using Dual Contouring. But that's really not what I want to discuss here.

I won't comment on why they state on their website that their technique is BASED on dual contouring (whatever that means exactly), but you can see that there was no interest in any further explanation, as you suggest.

And that made me angry, because of the time it took me to find information about the technique I remembered. I still perceive this reaction as ignorant. Asking for help and then rejecting valid points is not nice either, and it runs counter to public discussion.

But I took that much too personally. I shouldn't get upset that fast, even on a bad day.

Sorry again, guys!

Edit: ... and sorry for wasting even more space;)

