After doing a bunch of searching, I'm drowning in info that isn't quite what I'm looking for, so I'm hoping someone here can at least point me in the right direction. Unfortunately, it's probably not a quick answer, and to really get across where I'm stuck, I need to elaborate a bit...
I'm looking to add voxelization to my game engine for destructibles and deformable objects (like terrain). The problem is, I see TONS of information (as I said, I'm drowning in it) about generating a mesh from voxel data that implicitly already exists (things like the marching cubes algorithm and its many variants keep coming up), or calculating the volume of a mesh, or building a giant cube of voxels from the ground up. In layman's terms, I need to go the other way -- that is, create the cubes for the inside of an already-existing model that I made, which is nothing more than a list of vertices plus faces that store vertex indices. Currently, I can create cubes centered on the vertices of a mesh, but that only places voxels at the vertices -- i.e. nothing covers the surfaces, much less the inside of the mesh. I've heard of techniques based on raycasting
(see: http://blog.wolfire.com/2009/11/Triangle-mesh-voxelization). In that example, the developer implicitly KNOWS which triangle to cast to.
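For what it's worth, a building block most voxelization approaches share is mapping the mesh's bounding box onto a uniform grid of cells first, so the "cubes" have somewhere to live. A minimal sketch of that step (the `Vec3`/`Grid` types and the `makeGrid` name are placeholders I made up, not from any particular engine):

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

// Minimal stand-in for an engine vector type -- an assumption, not a real API.
struct Vec3 { double x, y, z; };

struct Grid {
    Vec3 minCorner;   // AABB minimum of the mesh
    double cellSize;  // edge length of one voxel cube
    int nx, ny, nz;   // voxel counts along each axis
};

// Build a voxel grid covering the mesh's axis-aligned bounding box.
// Assumes `verts` is non-empty.
Grid makeGrid(const std::vector<Vec3>& verts, double cellSize) {
    Vec3 lo = verts[0], hi = verts[0];
    for (const Vec3& v : verts) {
        lo.x = std::min(lo.x, v.x); hi.x = std::max(hi.x, v.x);
        lo.y = std::min(lo.y, v.y); hi.y = std::max(hi.y, v.y);
        lo.z = std::min(lo.z, v.z); hi.z = std::max(hi.z, v.z);
    }
    Grid g;
    g.minCorner = lo;
    g.cellSize = cellSize;
    g.nx = (int)std::ceil((hi.x - lo.x) / cellSize);
    g.ny = (int)std::ceil((hi.y - lo.y) / cellSize);
    g.nz = (int)std::ceil((hi.z - lo.z) / cellSize);
    return g;
}
```

Once you have the grid, the question becomes: for each cell, is its center inside the mesh or not?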
A little aside about the raycasting technique:
In my understanding (and implementation), this would work much like scene picking does. In scene picking, your geometric ray starts at the near plane and is cast toward the far plane (which is a discrete value -- i.e. there's no way of just saying, "cast to a lambda and have an event handler tell me when it hits something" lol). You determine the selection by starting with a superset consisting of ALL triangles (again, a discrete set) that belong to objects you want to be selectable, then you cycle through each triangle and test whether the geometric ray falls within the bounds of that polygon (for thoroughness, you can calculate the hit coords and return them). You store every face that passed the hit test and compare the calculated hit-depth values; whichever is smallest is the face that was actually clicked, and ultimately the mesh you're looking to highlight. Pretty straightforward stuff, but you start with a discrete set and work backwards toward a narrow subset, and at no time does it matter how one face is oriented relative to another. With the raycasting technique, you kinda, albeit implicitly, need to absolutely know which triangle is directly across (meaning a change along only a single axis of position); otherwise you wind up criss-crossing all over the place inside the mesh. Even if you just picked one triangle and raycast all the others toward it, I'm not sure that would work either. The only thing I can think of is to pick a face and loop through all the other faces until you find one with an inverse normal (within some deviation) -- but I'd kinda like to do it scan-line style, like the wolfire link I posted earlier, where only a single axis (just x, just y, or just z) is changed.
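The per-triangle hit test described above is commonly implemented with the Möller–Trumbore ray/triangle intersection. A self-contained sketch (the types and names here are my own placeholders, not from any engine):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Moller-Trumbore ray/triangle intersection. Returns true on a hit and
// writes the hit distance along the ray into *tOut, so the caller can
// keep the smallest t across all candidate triangles (the "closest face
// wins" step of picking).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double* tOut) {
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 tv = sub(orig, v0);
    double u = dot(tv, p) * inv;
    if (u < 0.0 || u > 1.0) return false;     // outside edge v0-v1 barycentric bound
    Vec3 q = cross(tv, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false; // outside remaining barycentric bounds
    double t = dot(e2, q) * inv;
    if (t < EPS) return false;                // hit is behind the ray origin
    *tOut = t;
    return true;
}
```

A picker would just call this for every triangle in the selectable set and remember the face with the smallest `t` -- exactly the superset-then-narrow process described above.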
Surely there are 3D modelling programs out there that can voxelize a mesh on request, despite rendering it as a triangle-based mesh... How do they do it? Any help would be greatly appreciated! In the meantime, I'll probably be thinking about this on and off for a while yet... I may update this post with edits to make it more readable, or with thoughts I have along the way prior to a reply.
EDIT 1: I think I may understand it... You create a bounding box of the mesh, loop through every face, cast a ray in a single direction toward the bounding-box edge, and loop through the other faces to see whether any are collided with. If they are, you create cubes of a specified size along the geometric ray up to the collided face. Am I right? It seems the missing puzzle piece was that I forgot about the bounding box.
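If it helps, the usual scan-line formulation (the same idea as the wolfire post) doesn't cast from faces at all: you cast one ray per voxel column through the bounding box, collect every crossing distance along that ray from a ray/triangle test, sort them, and mark as solid the cells whose centers sit between an odd-indexed and even-indexed crossing (the even-odd rule). A sketch of just the fill step, assuming the crossing distances have already been gathered; the names here are mine, not an established API:

```cpp
#include <vector>
#include <algorithm>

// Scan-line fill of one voxel column using the even-odd rule: a ray
// enters the mesh at its 1st crossing, leaves at the 2nd, enters again
// at the 3rd, and so on. `crossings` holds the hit distances of this
// column's ray against every triangle it pierced; a cell is solid if
// its center lies past an odd number of crossings.
std::vector<bool> fillColumn(std::vector<double> crossings,
                             double cellSize, int numCells) {
    std::sort(crossings.begin(), crossings.end());
    std::vector<bool> solid(numCells, false);
    for (int k = 0; k < numCells; ++k) {
        double center = (k + 0.5) * cellSize;  // cell center along the ray
        int before = 0;                        // crossings passed so far
        for (double t : crossings)
            if (t < center) ++before;
        solid[k] = (before % 2) == 1;          // odd count => inside the mesh
    }
    return solid;
}
```

This is why the bounding box matters: it gives every ray a known, fixed origin and direction (one axis only), so you never need to know which triangle is "directly across" in advance -- the sort plus the parity count sorts that out for you, and it handles concave meshes with multiple entry/exit pairs too.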
Edited by StakFallT, 22 January 2014 - 09:20 PM.