# Creating volumetric mesh data from polygonal vertex set


## Recommended Posts

After doing a bunch of searching, I'm drowning in info that isn't quite what I'm looking for, so I'm hoping someone here can at least point me in the right direction. Unfortunately, it's probably not a quick answer, and to really get across where I'm stuck, I need to elaborate a bit...

I'm looking to add voxelization to my game engine for destructibles and deformable objects (like terrain). The problem is, I see TONS of information (as I said, I'm drowning in it) about generating a mesh from voxel data that already exists (marching cubes and its many variants keep coming up), about calculating the volume of a mesh, or about building a giant cube of voxels from the ground up. In layman's terms, I need to go the other way -- that is, create the cubes that fill the inside of an already existing model I made, which is nothing more than a set of vertices plus faces that hold vertex indices. Currently, I can create cubes centered on the vertices of a mesh, but that only places voxels at the vertices -- it says nothing about the surfaces, much less about the inside of the mesh. I've heard of techniques based on raycasting

(see: http://blog.wolfire.com/2009/11/Triangle-mesh-voxelization). In that example, the developer implicitly KNOWS which triangle to cast to.

A little aside about the raycasting technique:

In my understanding (and implementation), this would work similarly to scene picking. In scene picking, your geometric ray starts at the near plane and is cast towards the far plane (a discrete value -- i.e. there's no way of just saying, "cast, and have a lambda or event handler tell me when it hits something," lol). You determine the selection by starting with a superset consisting of ALL triangles (again, a discrete set) that belong to objects you want to be selectable. Then you cycle through each triangle and determine whether the geometric ray passes within the bounds of that polygon (for thoroughness, you can calculate the hit coords and return them). You store every face that passed the hit test and compare the calculated hit-depth values; whichever is smallest is the face that was actually clicked, and ultimately the mesh you're looking to highlight. Pretty straightforward stuff, but you start with a discrete set and work towards a narrow subset, and at no time does it matter how one face is oriented relative to another.

With the raycasting technique, though, you kinda (albeit implicitly) need to know absolutely which triangle is directly across (meaning a change along only a single axis of the position); otherwise you wind up criss-crossing all over the place inside the mesh. Even if you just picked one triangle and raycast all the others to it, I'm not sure that would work either. The only thing I can think of is to pick a face and loop through all the other faces until you find one with an inverse normal (within some deviation) -- but I'd kinda like to do it scan-line style, like the wolfire link I posted earlier, where only a single axis (just x, just y, or just z) is changed.
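The picking loop described above can be sketched roughly like this (illustrative Python, not from any particular engine; `pick` and the object layout are made-up names, and the intersection routine is the standard Möller–Trumbore test):

```python
# Scene-picking sketch: test the ray against every triangle of every
# selectable object and keep the hit with the smallest depth (t).

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
def cross(u, v): return (u[1] * v[2] - u[2] * v[1],
                         u[2] * v[0] - u[0] * v[2],
                         u[0] * v[1] - u[1] * v[0])

def ray_triangle(orig, direc, a, b, c, eps=1e-9):
    # Moller-Trumbore: returns the ray parameter t of the hit, or None.
    e1, e2 = sub(b, a), sub(c, a)
    h = cross(direc, e2)
    det = dot(e1, h)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    inv = 1.0 / det
    s = sub(orig, a)
    u = dot(s, h) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None        # ignore hits behind the origin

def pick(orig, direc, objects):
    # objects: list of (name, [(a, b, c), ...]) triangle soups.
    best_name, best_t = None, float("inf")
    for name, tris in objects:
        for a, b, c in tris:
            t = ray_triangle(orig, direc, a, b, c)
            if t is not None and t < best_t:
                best_name, best_t = name, t
    return best_name                     # nearest hit depth wins
```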

Surely, there are 3D modelling programs out there that can voxelize a mesh on request, despite rendering it as a triangle-based mesh... How do they do it? Any help would be greatly appreciated! In the meantime, I'll probably be thinking about this on and off for a while yet... I may update this post with edits to make it more readable, or with thoughts I have along the way prior to a reply.

EDIT 1: I think I may understand it... You create a bounding box of the mesh, then cast rays in a single direction towards the bounding box edge; each face gets checked to see whether the ray collides with it. Between collided faces, you create cubes of a specified size along the geometric ray. Am I right? It seems the missing puzzle piece was that I forgot about the bounding box.
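That scanline idea can be sketched as follows (illustrative Python; all names are made up). It assumes a watertight mesh, casts one +z ray per (x, y) column of the bounding box, sorts the crossings, and fills voxels between entry/exit pairs. A robust version would also have to handle rays grazing edges or vertices:

```python
def ray_z_triangle(ox, oy, tri):
    # Moller-Trumbore specialised to a +z ray from (ox, oy, 0):
    # returns the z value where the ray crosses the triangle, or None.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)
    e2 = (cx - ax, cy - ay, cz - az)
    det = e1[1] * e2[0] - e1[0] * e2[1]       # e1 . (dir x e2), dir = (0,0,1)
    if abs(det) < 1e-12:
        return None                            # triangle is edge-on to the ray
    inv = 1.0 / det
    sx, sy, sz = ox - ax, oy - ay, -az
    u = (sy * e2[0] - sx * e2[1]) * inv
    if u < 0 or u > 1:
        return None
    qx = sy * e1[2] - sz * e1[1]
    qy = sz * e1[0] - sx * e1[2]
    qz = sx * e1[1] - sy * e1[0]
    v = qz * inv
    if v < 0 or u + v > 1:
        return None
    return (e2[0] * qx + e2[1] * qy + e2[2] * qz) * inv

def voxelize(tris, box_min, box_max, res):
    # Returns the set of filled (i, j, k) voxel indices inside the box.
    nx, ny, nz = res
    dx = (box_max[0] - box_min[0]) / nx
    dy = (box_max[1] - box_min[1]) / ny
    dz = (box_max[2] - box_min[2]) / nz
    filled = set()
    for i in range(nx):
        for j in range(ny):
            ox = box_min[0] + (i + 0.5) * dx        # column centre
            oy = box_min[1] + (j + 0.5) * dy
            hits = sorted(z for z in (ray_z_triangle(ox, oy, t) for t in tris)
                          if z is not None)
            # consecutive (entry, exit) pairs bound the solid interior
            for z0, z1 in zip(hits[0::2], hits[1::2]):
                for k in range(nz):
                    zc = box_min[2] + (k + 0.5) * dz
                    if z0 <= zc <= z1:
                        filled.add((i, j, k))
    return filled
```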

-- StakFallT

Edited by StakFallT

##### Share on other sites

Essentially you want to compute, for a given point in 3d space, whether it's inside or outside your mesh (which needs to be watertight for this to be defined).

For a triangle mesh, a robust way to do this is find the closest point on the mesh to your query point, and then test the dot product of the triangle normal against the vector from the closest point to your query (negative sign = inside, positive = outside, 0 = on the surface).
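As a minimal sketch of that test (illustrative Python; the closest-point routine follows the standard region-based algorithm also covered by the geometrictools paper linked below). Note the caveat in the comments: when the closest point lands on an edge or vertex shared by several triangles, the plain face normal is ambiguous, and robust implementations use angle-weighted pseudonormals there:

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
def cross(u, v): return (u[1] * v[2] - u[2] * v[1],
                         u[2] * v[0] - u[0] * v[2],
                         u[0] * v[1] - u[1] * v[0])

def closest_point_on_triangle(p, a, b, c):
    # Classify p against the triangle's Voronoi regions (vertex/edge/face).
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                            # vertex region A
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                            # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        t = d1 / (d1 - d3)                                  # edge AB
        return (a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2])
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                            # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        t = d2 / (d2 - d6)                                  # edge AC
        return (a[0] + t * ac[0], a[1] + t * ac[1], a[2] + t * ac[2])
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))             # edge BC
        bc = sub(c, b)
        return (b[0] + t * bc[0], b[1] + t * bc[1], b[2] + t * bc[2])
    denom = 1.0 / (va + vb + vc)                            # face interior
    v, w = vb * denom, vc * denom
    return (a[0] + ab[0] * v + ac[0] * w,
            a[1] + ab[1] * v + ac[1] * w,
            a[2] + ab[2] * v + ac[2] * w)

def side_of_triangle(p, a, b, c):
    # Sign test described above: < 0 inside, > 0 outside, 0 on the surface.
    # Assumes outward-facing normals (CCW winding seen from outside); only
    # unambiguous when the closest point lies in the face interior.
    n = cross(sub(b, a), sub(c, a))
    return dot(n, sub(p, closest_point_on_triangle(p, a, b, c)))
```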

Alternatively, you can cast a ray in a random direction from your query, and count the number of crossings of the mesh. This is a little tricky because you have to worry about intersections at an edge or vertex of the mesh, as well as glancing intersections.
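A bare-bones version of that parity test might look like this (illustrative Python; it uses one fixed "generic" direction and does not implement the edge/vertex re-cast logic a robust version needs):

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
def cross(u, v): return (u[1] * v[2] - u[2] * v[1],
                         u[2] * v[0] - u[0] * v[2],
                         u[0] * v[1] - u[1] * v[0])

def ray_triangle(orig, direc, a, b, c, eps=1e-9):
    # Moller-Trumbore; returns the ray parameter t, or None on a miss.
    e1, e2 = sub(b, a), sub(c, a)
    h = cross(direc, e2)
    det = dot(e1, h)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = sub(orig, a)
    u = dot(s, h) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def point_in_mesh(p, tris, direc=(0.123, 0.456, 0.789)):
    # Odd crossing count => inside (mesh must be watertight). A robust
    # version detects edge/vertex/glancing hits and re-casts with a
    # different direction instead of trusting a single ray.
    crossings = sum(1 for a, b, c in tris
                    if ray_triangle(p, direc, a, b, c) is not None)
    return crossings % 2 == 1
```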

Taking the first option, you would need to be able to calculate the closest point on a triangle to a given point (see: http://www.geometrictools.com/Documentation/DistancePoint3Triangle3.pdf) and then use an appropriate spatial query structure (like an octree, R-tree, or k-d tree) to speed up the testing.

Then you simply evaluate this at regular grid coordinates to generate your voxel data.
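For instance, given any point-classification predicate `inside(p)` (the closest-point sign test, or the ray-parity test), the grid sampling step is just a triple loop over voxel centres (illustrative Python; names are made up):

```python
def voxel_grid(inside, box_min, box_max, res):
    # Evaluate `inside` at the centre of each cell of a res = (nx, ny, nz)
    # grid spanning the bounding box; return the filled (i, j, k) indices.
    nx, ny, nz = res
    dx = (box_max[0] - box_min[0]) / nx
    dy = (box_max[1] - box_min[1]) / ny
    dz = (box_max[2] - box_min[2]) / nz
    filled = set()
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                centre = (box_min[0] + (i + 0.5) * dx,
                          box_min[1] + (j + 0.5) * dy,
                          box_min[2] + (k + 0.5) * dz)
                if inside(centre):
                    filled.add((i, j, k))
    return filled
```

Any solid works as the predicate; a unit sphere makes an easy sanity check before wiring in the mesh test.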

##### Share on other sites

By the sounds of it, the gist is:

1) Create a bounding box.

2) Select an arbitrary point that is inside that box.

3) For that arbitrary point, locate the closest triangle (by checking the distance to its vertices).

4) Take that triangle's normal and test it against the arbitrary point: if negative, it's inside; if positive, it's outside; if zero, it's on the surface.

5) If negative or zero, create a cube at that arbitrary point.

Repeat steps 2 through 5.


##### Share on other sites
Yes, except that the closest triangle test is a bit more complex than just testing vertices, because the closest point may be on an edge or within a face, and you have to account for that. The link I posted provides an algorithm that performs the full test.

##### Share on other sites
Another alternative would be to implement computation of one z slice as a shader. You render your model (orthographic projection, no back face culling), rejecting z values less than your current slice, and counting the number of times each pixel is hit. Any odd values represent positions of cubes in this slice.

##### Share on other sites

Well, I was saying a vertex because a vertex represents the very end of a triangle; it's impossible for a point on a face to be farther than all three of the encompassing vertices (one of those three will be the furthest point). But I see what you're saying about an edge: if one vertex is dismissed and you're down to two, and those two happen to be equidistant (side to side) from said point, then the midpoint of that edge would be closer. Thanks btw!

EDIT 1: It's taking some time to go through the algorithm; it's been a while since I read through a math paper, but I think I'm starting to understand parts of it. Currently, my question on it is: what is "B"?

here's the text I'm referring to:

"where a = E0 * E0, b = E0 * E1, c = E1 * E1, d = E0 * (B - P), e = E1 * (B - P), and f = (B - P) * (B - P)."

I get that the Es are the edges and P is the arbitrary point you're testing, but what is B? The paper doesn't seem to explain that. Also, I'm guessing s and t are the 2D coords of the triangle? So how do I flatten a 3D tri into 2D? Would I use barycentric coords?


##### Share on other sites

The triangle is defined as the set of points B + sE0 + tE1, where s >= 0, t >= 0, and s + t <= 1. s and t are two components of the triangle's barycentric coordinates.

B is a vertex (you can think of it as the 'base' vertex of the triangle) of the triangle. E0 and E1 are two edge vectors. If you say that the triangle is defined by vertices A,B,C, then E0 and E1 are C-B and A-B respectively (assuming right hand rule ordering of the triangle).
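A quick numerical sanity check of that parameterization may help (illustrative Python; `coefficients` and `q` are made-up names). With T(s, t) = B + s·E0 + t·E1, the squared distance |T(s, t) − P|² expands to exactly the quadratic Q(s, t) = a·s² + 2b·s·t + c·t² + 2d·s + 2e·t + f with the coefficients quoted from the paper:

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def coefficients(B, E0, E1, P):
    # a, b, c, d, e, f exactly as defined in the paper.
    bp = sub(B, P)
    return (dot(E0, E0), dot(E0, E1), dot(E1, E1),
            dot(E0, bp), dot(E1, bp), dot(bp, bp))

def q(s, t, co):
    # The paper's quadratic Q(s, t) = |B + s*E0 + t*E1 - P|^2.
    a, b, c, d, e, f = co
    return a * s * s + 2 * b * s * t + c * t * t + 2 * d * s + 2 * e * t + f

def squared_dist(s, t, B, E0, E1, P):
    # Direct evaluation of |T(s, t) - P|^2, for comparison against q().
    tp = tuple(B[i] + s * E0[i] + t * E1[i] for i in range(3))
    return dot(sub(tp, P), sub(tp, P))
```

Minimizing Q over the region s >= 0, t >= 0, s + t <= 1 is what the paper's region-by-region case analysis does; no flattening to 2D is needed, since (s, t) already parameterize the triangle's plane.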

If you want a complete implementation to look at, you can find one here:

##### Share on other sites

Been looking over the implementation, thanks! The code is soooo clean... I follow it; I just wish I could visualize the formulas in my head better -- though that usually comes over time through working with them. Thanks again! I think I'm on my way now! :)

##### Share on other sites

Sorry to bother, but I've had some time to work with the algorithm a bit, and I have a couple of last-minute questions...

1) This would NOT be suitable for animated meshes -- at least not without some kind of trick, right? (There's just too much math for each test point during voxel generation.) The naive approach would be to regenerate the voxels every frame. I can't really picture a need to do this, but it was just something that dawned on me. Not to use a morbid example (and I definitely apologize for doing so), but say you took a rag-doll skeleton and wanted to shoot an arm off or something; that's really the only example of an animated mesh I could think of where voxels could be used. Most destructible things are solid non-living objects (buildings, tanks, etc.), and even though some non-living things "animate" (like tanks), generally speaking they move, rotate, or scale as a collective whole of triangles. I suppose anything non-living that might move on its own -- something flexible that might animate by itself (maybe by wind or something) -- would also be a candidate for on-the-fly voxels; the only things that come to mind are a flagpole or a tree swaying in the wind.

2) What about scaling of meshes? When I scale my models, it's merely visual (since scaling all the vertices and re-updating the vertex buffers on demand would kinda be out of the question). I mention this because if the mesh gets bigger, the vertex data doesn't reflect it; only the model matrix does, which is eventually passed to D3D. That means the voxels are calculated off the original vertices, and if the mesh gets bigger there won't be enough voxels to fill the inside of the model, since the voxels being generated are calculated off the original, smaller triangles... Would I create a second set of vertex/face data that gets scaled when voxels are generated to counter that problem, or is there a method that entails less overhead? I've heard that some people make sophisticated use of billboards to achieve voxels in general, but I'm not sure how that would really work -- and for any kind of deformation it would definitely require some spaghetti code, I'm thinking (as you'd then have to convert the billboards into cubes; otherwise, at certain camera angles, the voxels, being billboards, would "disappear").

