QuesterDesura

An easy explanation of Dual Contouring?

Recommended Posts

Hello,

I'm a trainee in software development, and in my free time I try out various things. I decided I wanted to figure out how "Dual Contouring" works, but I haven't been successful yet. I could find some implementations in code which are very hard to follow, and some papers "explaining" it written in a way that makes it hard for me to even understand what the input and output are. All I know is that it is used to make a surface/mesh out of voxels. Is there any explanation someone can understand without having studied math/computer science? The problem is also that most of the terms I can't even translate into German (like "Hermite data"), nor can I find an explanation of them that I understand. I know how Marching Cubes works, but I just can't find anything on Dual Contouring. I also don't quite get the point of octrees. As far as I'm aware, this is a set of data organized like a tree where each element points to 8 more elements. I don't get how that could be helpful in an infinite 3D world, and why I shouldn't just use a list of coordinates with zeros and ones for dirt/air or something like that.

Thanks

A clueless trainee ^^

Edited by QuesterDesura


I have looked through the code, but I have some issues with the variable names and functions used. From what I have seen, the Field consists of an array of planes, which is kind of confusing, as I thought Dual Contouring only used something like a 3D array of zeros and ones. Also, I'm not quite sure whether a voxel is in the middle of a cube or at a corner, because it seems to iterate with foreach3D (a function?) through each voxel and determine whether the surrounding voxels are all outside or all inside (below zero or greater than zero, as it seems). If that's not the case, then it makes vertices at CrossingCorners, as it seems. Is that correct for the first part? I find this confusing because a voxel in Minecraft terms simply represents one cube and can either be 0 or the ID of a block, but here it seems that 8 voxels/coordinates represent one cube, or whatever might come out of the algorithm.

Edited by QuesterDesura


Hermite data: it's the same volume data (or distance field) that you'd use for regular marching cubes, but with surface normals at each sampled point as well.

If you are using a distance field as your source of volume data, you can find the normal at each sample point using a central difference, and voila, you have Hermite data.
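To make that concrete, here is a minimal sketch (not from any of the code discussed in this thread) of turning a distance field into Hermite data: sample the field, then take a central difference for the normal. The sphere SDF and all names here are stand-ins for the example.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical signed distance field: a sphere of radius 1 at the origin.
// Negative inside, positive outside.
float sdf(float x, float y, float z)
{
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Central difference: the gradient of a distance field points along the
// surface normal. Sampling the field at (p - h) and (p + h) on each axis
// and normalizing the differences gives the Hermite normal at p.
Vec3 sdfNormal(float x, float y, float z, float h = 0.001f)
{
    Vec3 n;
    n.x = sdf(x + h, y, z) - sdf(x - h, y, z);
    n.y = sdf(x, y + h, z) - sdf(x, y - h, z);
    n.z = sdf(x, y, z + h) - sdf(x, y, z - h);
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}
```

Together, (distance sample, normal) per grid point is exactly the "Hermite data" the papers talk about.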


Conceptually, Dual Contouring is not that different from Marching Cubes. In both cases, you divide your volume data into a grid, and at each cube in the grid you evaluate the surface. Any cube containing an edge crossing - in other words, any cube that intersects the surface (i.e. isn't entirely empty or entirely full) - will generate output geometry.

The key difference between the two is that in Marching Cubes you will just directly generate geometry from the edge crossings. In Dual Contouring you will instead use the normals at each edge-crossing (this is why you need Hermite Data) to locate a single exact point on the surface, and you'll generate geometry from these points (spanning multiple cubes) in a subsequent pass.

The complicated part of Dual Contouring is mostly in how you find the optimal point based on the normals of neighbouring edge crossings. The original paper uses QEF minimisation, which is... a little opaque to those of us without a deep math background.
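For what it's worth, the QEF step can be sketched without the full machinery. A (non-robust) toy version minimises the sum over edge crossings of (n_i . (x - p_i))^2 by solving the 3x3 normal equations directly with Cramer's rule. Real implementations use an SVD pseudo-inverse instead, because this matrix goes singular for flat or cylindrical surfaces; all names here are made up for the example.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Find the point x minimising sum_i (n_i . (x - p_i))^2, given the edge
// crossing positions and normals of one cell. Setting the gradient to zero
// gives (sum n_i n_i^T) x = sum n_i (n_i . p_i), solved here by Cramer's rule.
V3 solveQEF(const std::vector<V3>& points, const std::vector<V3>& normals)
{
    double A[3][3] = {{0}};  // accumulates n n^T
    double b[3] = {0, 0, 0}; // accumulates n (n . p)
    for (size_t i = 0; i < points.size(); ++i) {
        const V3& n = normals[i];
        const V3& p = points[i];
        double d = n.x * p.x + n.y * p.y + n.z * p.z;
        double nv[3] = {n.x, n.y, n.z};
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) A[r][c] += nv[r] * nv[c];
            b[r] += nv[r] * d;
        }
    }
    // Cramer's rule: replace each column of A with b in turn.
    double d0 = det3(A);
    V3 x;
    double* out[3] = {&x.x, &x.y, &x.z};
    double M[3][3];
    for (int c = 0; c < 3; ++c) {
        for (int r = 0; r < 3; ++r)
            for (int k = 0; k < 3; ++k)
                M[r][k] = (k == c) ? b[r] : A[r][k];
        *out[c] = det3(M) / d0;
    }
    return x;
}
```

Intuition: each edge crossing contributes a plane (point plus normal), and the QEF point is the spot that lies as close as possible to all of those planes at once - which is what lets Dual Contouring reproduce sharp corners.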

If you are looking for a more readable implementation, Lin has a pretty simple one on github.

25 minutes ago, swiftcoder said:

Conceptually, Dual Contouring is not that different from Marching Cubes. [...] In Dual Contouring you will instead use the normals at each edge-crossing (this is why you need Hermite Data) to locate a single exact point on the surface, and you'll generate geometry from these points (spanning multiple cubes) in a subsequent pass. [...]

Wow, thank you so much, you just cleared up so much stuff I found confusing. I don't quite understand how those normals are supposed to work, though. If, for example, a cube gets placed by the player in my game, then I will create volume data containing the coordinates of the 8 edges, right? But what normal direction do I give the 8 coordinates, and how exactly do I use it? Also, how should I locate a point on the surface? I thought Dual Contouring creates this surface out of a set of data, so how can I determine a location on a shape then?

I made an image of how I thought it works:

1. User wants to create a cube

bixBenE.png

2. create 8 voxels

ETh3sR4.png

3. Add/Calculate Normals(free hand ^^)

sGWuUYe.png

4. Dual Contouring

gPKKpKJ.png

 

Is this how it works or did I miss something? And how does Dual Contouring actually use those normals?

Edited by QuesterDesura


I took a shot at deciphering the emilk code, and I'm sort of half-triumphant.

I put up a copy of the code in my own GitHub repository:

https://github.com/sjhalayka/dc

I added a function in main.cpp called addFractal (also see below for source code), which adds a quaternion Julia set. I set the quaternion parameters to all zeroes, which produces a 3D ball. If you take a look at the mesh.obj that I put up (also see attached file below), you can make out part of the sphere, but the surface is all choppy. I'm not sure if I'm setting the dist variable or the normal variable correctly. Anyone have any suggestions?

Attached file:

mesh.obj

 

void addFractal(Field& field)
{
    // Bounds of the sampling region in quaternion space.
    float x_grid_max = 1.5;
    float y_grid_max = 1.5;
    float z_grid_max = 1.5;
    float x_grid_min = -x_grid_max;
    float y_grid_min = -y_grid_max;
    float z_grid_min = -z_grid_max;

    float z_w = 0;
    quaternion C;
    C.x = 0.0f;
    C.y = 0.0f;
    C.z = 0.0f;
    C.w = 0.0f; // all-zero parameters degenerate the Julia set to a 3D ball
    unsigned short int max_iterations = 8;
    float threshold = 4;

    string error_string;
    quaternion_julia_set_equation_parser eqparser;
    if (false == eqparser.setup("Z = sin(Z) + C * sin(Z)", error_string, C))
    {
        cout << "Equation error: " << error_string << endl;
        return;
    }
    else
    {
        cout << "Equation ok" << endl;
    }

    quaternion Z(x_grid_min, y_grid_min, z_grid_min, z_w);

    foreach3D(field.size(), [&](Vec3u upos)
    {
        auto& cell = field[upos];

        // Map the grid index into [grid_min, grid_max] quaternion space.
        // (The original version mapped into [0, 1], which only sampled a
        // corner of the set and left the grid bounds unused.)
        Z.x = x_grid_min + (x_grid_max - x_grid_min) * upos.x / float(FieldSize - 1);
        Z.y = y_grid_min + (y_grid_max - y_grid_min) * upos.y / float(FieldSize - 1);
        Z.z = z_grid_min + (z_grid_max - z_grid_min) * upos.z / float(FieldSize - 1);

        // Signed distance estimate: negative inside, positive outside.
        cell.dist = eqparser.iterate(Z, max_iterations, threshold) - threshold;

        // Central differences, f(p + h) - f(p - h) with h = 0.01, so that
        // the normal points along the outward gradient of the field.
        Z.x += 0.01;
        float normal_x2 = eqparser.iterate(Z, max_iterations, threshold);
        Z.x -= 0.02;
        float normal_x1 = eqparser.iterate(Z, max_iterations, threshold);
        Z.x += 0.01;
        float normal_x = normal_x2 - normal_x1;

        Z.y += 0.01;
        float normal_y2 = eqparser.iterate(Z, max_iterations, threshold);
        Z.y -= 0.02;
        float normal_y1 = eqparser.iterate(Z, max_iterations, threshold);
        Z.y += 0.01;
        float normal_y = normal_y2 - normal_y1;

        Z.z += 0.01;
        float normal_z2 = eqparser.iterate(Z, max_iterations, threshold);
        Z.z -= 0.02;
        float normal_z1 = eqparser.iterate(Z, max_iterations, threshold);
        Z.z += 0.01;
        float normal_z = normal_z2 - normal_z1;

        normal_x /= 2.0;
        normal_y /= 2.0;
        normal_z /= 2.0;

        cell.normal = normalized(Vec3(normal_x, normal_y, normal_z));
    });
}

 

Edited by sjhalayka

9 hours ago, QuesterDesura said:

If for example in my game a Cube gets placed by the player then I will create volume data containing the coordinates of the 8 edges, right?

Well, no. Your volume data is usually a regular grid (it can be 1-bit empty/full data, or floating-point density values). You'd initialise the entire block of volume data to empty (i.e. zero). Then if the player places a cube, you'd intersect their cube with the volume data, and set each cell that intersects the box to full (i.e. one).

If you are looking for Minecraft-style terrain, where everything is cubes, then Dual Contouring is going to overcomplicate things significantly, and there are much simpler methods.
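A minimal sketch of that stamping step - names, grid layout, and the integer-box interface are invented for the example:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Dense 1-bit voxel grid, initialised empty, into which a player-placed
// axis-aligned box is "stamped" by setting every covered cell to full.
struct VoxelGrid {
    int size;
    std::vector<unsigned char> cells; // 0 = empty, 1 = full

    explicit VoxelGrid(int n) : size(n), cells(n * n * n, 0) {}

    unsigned char& at(int x, int y, int z) {
        return cells[(z * size + y) * size + x];
    }

    // Set every cell whose coordinates fall inside [min, max) to full,
    // clamped so boxes partially outside the grid are handled safely.
    void stampBox(int minX, int minY, int minZ, int maxX, int maxY, int maxZ) {
        minX = std::max(minX, 0); maxX = std::min(maxX, size);
        minY = std::max(minY, 0); maxY = std::min(maxY, size);
        minZ = std::max(minZ, 0); maxZ = std::min(maxZ, size);
        for (int z = minZ; z < maxZ; ++z)
            for (int y = minY; y < maxY; ++y)
                for (int x = minX; x < maxX; ++x)
                    at(x, y, z) = 1;
    }
};
```

The contouring pass then only looks at the grid; it never sees "a cube the player placed", just cells that changed from empty to full.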


I have gotten some stuff working now. I used the Unity editor to make it easier to see what I'm doing. I have "translated" the code from the GitHub project into C# and changed/removed stuff that I didn't understand. This is my first test:

WJALL5X.png

The small cubes represent voxels which are inside the mesh. The next steps will be to also let the algorithm run along the other sides and make a solid shape.

This is how it looks with a distance of -0.5 to all edges:

O7G4aqD.png

I think/hope I have now understood how this algorithm works. I will post again if I encounter new problems, or hopefully I can get some more results and share them.

Edited by QuesterDesura

13 hours ago, QuesterDesura said:

I'm looking for something like this.

The video looks like they do not use Dual Contouring; they only snap voxel vertices to the nearest point of the target geometry, which should be a lot easier to implement. (The limitation is probably that you can not have two surfaces in one voxel.)

Unfortunately I do not remember a proper name or explanation for the technique, but there are a few around. Maybe someone else does...
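If a signed distance field is available, that snapping idea can be sketched like this: project each boundary vertex onto the surface by stepping against the SDF gradient by the sampled distance. The sphere SDF and all names are placeholders for the example.

```cpp
#include <cassert>
#include <cmath>

struct P3 { float x, y, z; };

// Placeholder surface: unit sphere at the origin (negative inside).
float sphereSDF(const P3& p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Snap a grid vertex onto the surface: the SDF value says how far away the
// surface is, and the normalized gradient says in which direction, so
// p - d * grad lands (approximately) on the zero isosurface.
P3 snapToSurface(P3 p, float h = 0.001f)
{
    float d = sphereSDF(p);
    // central-difference gradient of the SDF
    P3 g = {
        sphereSDF({p.x + h, p.y, p.z}) - sphereSDF({p.x - h, p.y, p.z}),
        sphereSDF({p.x, p.y + h, p.z}) - sphereSDF({p.x, p.y - h, p.z}),
        sphereSDF({p.x, p.y, p.z + h}) - sphereSDF({p.x, p.y, p.z - h})
    };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    g.x /= len; g.y /= len; g.z /= len;
    return { p.x - d * g.x, p.y - d * g.y, p.z - d * g.z };
}
```

This moves one vertex per grid point and generates no new geometry inside a cell, which matches the limitation mentioned above: two separate surfaces passing through one voxel cannot both be represented.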

31 minutes ago, JoeJ said:

The video looks like they do not use dual contouring, they only snap voxel vertices to the nearest point of target geometry [...]

They are using Dual Contouring, which you can also see when you watch the full video; it's also stated on their website.

7 hours ago, QuesterDesura said:

They are using Dual Contouring which you can also see when you watch the full video, its also stated on their website.

Still disagree. See here (1:10):

 

The wireframe morphs directly from deformed geometry to voxels. No geometry is generated inside a voxel like Dual Contouring or Marching Cubes would do.

Also, the edge artefacts at the beginning, when they build a structure from 3 boxes, may hint at this (a missing vertex to represent multiple surfaces inside one voxel).

Of course I do not know if that's all the functionality they have.

2 hours ago, JoeJ said:

Found what I had in mind; it's called 'Surface Nets':

https://0fps.net/2012/07/12/smooth-voxel-terrain-part-2/

http://www.merl.com/publications/docs/TR99-24.pdf

Also found this about dual contouring, if you don't know yet, seems quite complete with code: http://ngildea.blogspot.co.at/

At 1:10 you can see that the cube is not being smoothed; if that were Surface Nets, then the cube wouldn't remain sharp. Also, as I have already said: they clearly say on their website that they are using Dual Contouring. But that's really not what I want to discuss here.

1 hour ago, JoeJ said:

You have not understood Surface Nets.

Other people might come to this post hoping to get some help understanding Dual Contouring, and while they try to get some compact help and some visuals, they need to read through 3 posts of one guy who argues about whether a computer game has used Dual Contouring or not. This thread is about finding an easy explanation/implementation of Dual Contouring, and I have already told you that I don't want to discuss what algorithm they used there, because I don't care about it. I just want to get a working Dual Contouring algorithm and maybe help some other people too.

--------------------------------------------------------------

11 hours ago, sjhalayka said:

@QuesterDesura -- can you please share your code on GitHub?

I have been continuing to work on the Unity project and have made some progress. Sadly, it seems like my way of making a mesh out of the crossing corners is rather bad. What I came up with now is somewhat working, but sadly the normals are inverted in some cases.

I have added as many comments as I could, so it's easy to follow what I'm trying to do. I have used most of the stuff from the GitHub project of emilk, but I'm not quite sure how the triangles are being created there. Maybe you have an idea how I could do it.

I have added the .unitypackage here, and the two C# files can be found here: https://github.com/PowerOfCreation/Dual-Contouring.

Normals are inverted:

1Zt97hx.png

My current approach is to find, for each corner with a sign change, the 2 closest other corners with a sign change. Then I create a triangle between these 3 corners and mark them as processed. The processed ones won't try to create another triangle, but in case another corner needs corners to form a triangle, they can be reused.
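For reference, the original Dual Contouring paper doesn't search for nearest corners at all: every grid edge whose endpoint signs differ yields one quad joining the vertices of the four cells sharing that edge, and the winding is flipped depending on which endpoint of the edge is inside - that is what keeps the normals consistent. A hedged sketch of just that step (the indexing around it, i.e. how cell vertices get their indices, is assumed to exist elsewhere):

```cpp
#include <cassert>
#include <array>
#include <vector>

struct Quad { std::array<int, 4> v; };

// ccwCells: indices of the vertices of the 4 cells around a sign-change
// edge, ordered counter-clockwise when looking along the +edge direction.
// firstEndpointInside: whether the edge endpoint with the smaller
// coordinate is inside the surface; if not, the winding is reversed so the
// face points the other way.
Quad makeEdgeQuad(const std::array<int, 4>& ccwCells, bool firstEndpointInside)
{
    Quad q;
    if (firstEndpointInside) {
        q.v = ccwCells;
    } else {
        q.v = { ccwCells[0], ccwCells[3], ccwCells[2], ccwCells[1] };
    }
    return q;
}

// A quad splits into two triangles that keep the same winding.
std::vector<std::array<int, 3>> triangulate(const Quad& q)
{
    return { { q.v[0], q.v[1], q.v[2] }, { q.v[0], q.v[2], q.v[3] } };
}
```

Because the winding comes from the sign of the edge itself rather than from corner distances, faces on opposite sides of a shape can never come out inverted relative to each other.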

DualContouring.unitypackage

44 minutes ago, QuesterDesura said:

Other people might come to this post hoping to get some help understanding Dual Contouring [...] I just want to get a working Dual Contouring algorithm and maybe help some other people too.

Maybe those other people ARE interested in less known alternatives, you already made clear you are not.

Also, YOU brought up the game and said this is what you want.

Sorry for wasting space in your thread (and my time in trying to help), while you are allowed to repeat yourself and being rude. Work on your social competence!

No worries: End of conversation.

 


I have cleaned some stuff up in the code and tried a different scenario. As far as I can see, it connected all the triangles correctly, which is great :). Sadly, I still haven't figured out how to fix my problem with the normals sometimes facing in the wrong direction. I have updated the GitHub project and put the new .unitypackage here as well.

dRcGRsi.png

DualContouring.unitypackage

Edited by QuesterDesura


This is a general reminder to keep the conversation civil.  Some private warnings have been issued.

No more personal attacks.  Let's keep it focused on the topic of the mesh data processing.

1 hour ago, frob said:

This is a general reminder to keep the conversation civil. [...]

Quote

Note for member

Several of your replies in the Dual Contouring discussion have been abusive, and a few border on useless. In one reply you state that you don't remember a technique but that it is better. In another you declare, in an insulting way, that the person asking the question doesn't understand your answer, yet make no attempt to help them understand.

Some of the replies include subtle personal attacks, which are not tolerated on this site.  Statements like "stop being rude" and "work on your social competence!" are not okay.

Please tone it down.  Replies should generally help people learn and become better game developers, not insult them.

Well - fair enough. I apologize for being rude myself; instead I should have explained why Quester's quoted comment was wrong:

If you look at the video yourself, it's clearly visible that the geometry is just a deformation of grid-aligned cubes. Deformed cubes can of course have sharp features. Dual Contouring would contain intersections with the grid and also new geometry inside the cubes for the sphere shown in the video. (I did not say Surface Nets are 'better', just 'easier to implement' - so probably attractive for a beginner.)

Quote

I have already said: they clearly say on their website they are using Dual Contouring. But that's really not what I want to discuss here.

I won't comment on why they state on their website that their technique is BASED on dual contouring (whatever that means exactly), but you can see that there was no interest in any further explanation, as you suggest.

And that made me angry because of the time it took me to find information about the technique I remembered. I still perceive this reaction as ignorant. Asking for help and then rejecting valid points is not nice either, and it contradicts public discussion.

But I took that much too personally. I shouldn't get upset that fast, even on a bad day.

Sorry again, guys!

Edit: ...and sorry for wasting even more space ;)

Edited by JoeJ

