Dirge

3D Mesh to Volume Texture


Can anyone recommend any good algorithms for converting a 3D polygonal mesh to a volume (3D) texture (or voxelized data)? Specifically, I'm looking for a way to do this with some level of hardware acceleration (D3D9-level hardware).

One idea I've had is to use the stencil buffer to mask out the pixels for individual volume slices. However, this fails miserably in the pathological case where polygons are coplanar with the camera frustum planes: e.g., a cube of double-sided quads whose faces are labeled with their directions (-x, +x, -y, +y, ...) will come out with the -x/+x faces missing when rendered with an orthographic projection.

Another crazy idea was to build a 3D voxel field and render a cube map for each voxel using only the polygons _inside_ that voxel (so the render origin is at the middle of a voxel wall, looking toward the voxel's center). The colors are then summed and averaged, which yields a "color" value (RGB) for that voxel, and the z-buffer can be used in a similar way to determine the "solidness" of the voxel. A 32x32 cube map is probably more than sufficient, and the summing can be done by taking numerous cube-map samples in a special shader (no CPU touching). While slow, this would technically still be hardware accelerated.

Other options are rasterizing the mesh and voxelizing the resulting pixels, or inserting the vertices into a voxel grid and marking a voxel solid if a vertex lands in it. The latter means missing texture data, though I could probably just manually sample the texture map for a given vertex to get an approximated vertex color... ugh.

Note that the efficiency of whatever algorithm I choose only matters for reducing development time; it is not consequential to the end result as long as the quality is sufficient. The goal is eventually to voxelize large data sets (1 million+ polygons) into a collection of volume textures (at a specified granularity), but right now I'm just hoping to render a 10,000-poly model to a 64x64x64 3D texture in under a minute.

I'm likely overthinking this, so some thinking outside the box would be very helpful. Thanks ahead of time for any suggestions.
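For concreteness, here's the vertex-insertion fallback as a minimal CPU sketch. All the names (Vec3, VoxelGrid) and the resolution are just placeholders of mine:

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of the vertex-insertion fallback: mark a voxel solid if any
// mesh vertex lands inside it. Large triangles with sparse vertices will
// leave holes, which is the weakness mentioned above.
struct Vec3 { float x, y, z; };

struct VoxelGrid {
    int res;                        // e.g. 64 for a 64x64x64 grid
    Vec3 minB, maxB;                // mesh bounding box
    std::vector<uint8_t> solid;     // res*res*res occupancy flags

    VoxelGrid(int r, Vec3 lo, Vec3 hi)
        : res(r), minB(lo), maxB(hi), solid((size_t)r * r * r, 0) {}

    uint8_t& at(int x, int y, int z) { return solid[((size_t)z * res + y) * res + x]; }
};

void insertVertices(VoxelGrid& g, const std::vector<Vec3>& verts)
{
    const float sx = g.res / (g.maxB.x - g.minB.x);
    const float sy = g.res / (g.maxB.y - g.minB.y);
    const float sz = g.res / (g.maxB.z - g.minB.z);
    for (const Vec3& v : verts) {
        const int x = (int)((v.x - g.minB.x) * sx);
        const int y = (int)((v.y - g.minB.y) * sy);
        const int z = (int)((v.z - g.minB.z) * sz);
        if (x < 0 || y < 0 || z < 0 || x >= g.res || y >= g.res || z >= g.res)
            continue;               // vertex on or outside the bounds
        g.at(x, y, z) = 1;
    }
}
```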

Chapter 30 of GPU Gems 3 has some useful information about this: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch30.html

Scroll down to 'Voxelization' (about halfway down).

The only approach I've implemented myself is to do it in software, and I did this by 'rendering' each triangle into the volume. I would find out which voxels the triangles overlapped and set those to be solid. Of course, the generated voxel model was then hollow. I used this approach for the project in my signature.
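For reference, a rough sketch of that kind of per-triangle voxelization, reusing Vec3/VoxelGrid from the sketch above. The plane-distance test here is a coarse stand-in for an exact triangle/box overlap test (e.g. Akenine-Moller's separating-axis test), so it over-fills a little near the triangle's edges:

```cpp
#include <algorithm>
#include <cmath>

// Coarse per-triangle voxelization: visit the voxels under the triangle's
// bounding box and mark those whose center lies within half a voxel diagonal
// of the triangle's plane.
void voxelizeTriangle(VoxelGrid& g, const Vec3& a, const Vec3& b, const Vec3& c)
{
    const float vx = (g.maxB.x - g.minB.x) / g.res;
    const float vy = (g.maxB.y - g.minB.y) / g.res;
    const float vz = (g.maxB.z - g.minB.z) / g.res;

    // Unit plane normal from two edge vectors.
    const Vec3 e1{b.x - a.x, b.y - a.y, b.z - a.z};
    const Vec3 e2{c.x - a.x, c.y - a.y, c.z - a.z};
    Vec3 n{e1.y * e2.z - e1.z * e2.y,
           e1.z * e2.x - e1.x * e2.z,
           e1.x * e2.y - e1.y * e2.x};
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len == 0.0f) return;                     // degenerate triangle
    n = Vec3{n.x / len, n.y / len, n.z / len};
    const float halfDiag = 0.5f * std::sqrt(vx * vx + vy * vy + vz * vz);

    auto toCell = [&](float w, float lo, float v) {
        return std::clamp((int)((w - lo) / v), 0, g.res - 1);
    };
    const int x0 = toCell(std::min({a.x, b.x, c.x}), g.minB.x, vx);
    const int x1 = toCell(std::max({a.x, b.x, c.x}), g.minB.x, vx);
    const int y0 = toCell(std::min({a.y, b.y, c.y}), g.minB.y, vy);
    const int y1 = toCell(std::max({a.y, b.y, c.y}), g.minB.y, vy);
    const int z0 = toCell(std::min({a.z, b.z, c.z}), g.minB.z, vz);
    const int z1 = toCell(std::max({a.z, b.z, c.z}), g.minB.z, vz);

    for (int z = z0; z <= z1; ++z)
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                const Vec3 p{g.minB.x + (x + 0.5f) * vx,
                             g.minB.y + (y + 0.5f) * vy,
                             g.minB.z + (z + 0.5f) * vz};
                const float dist = std::fabs((p.x - a.x) * n.x +
                                             (p.y - a.y) * n.y +
                                             (p.z - a.z) * n.z);
                if (dist <= halfDiag) g.at(x, y, z) = 1;
            }
}
```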

Quote:
Original post by PolyVox
The only approach I've implemented myself is to do it in software, and I did this by 'rendering' each triangle into the volume. I would find out which voxels the triangles overlapped and set those to be solid. Of course, the generated voxel model was then hollow.
Provided that your mesh forms a closed surface, it isn't that hard to flood fill the resulting hollow voxel volume. Wolfire had a blog post on this topic, a little while back.

As to hardware acceleration, you can likely perform roughly the same operation. Just render the entire model to each 2D slice of a 3D texture, with optional culling if you need to improve performance.
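On the CPU, that fill is just a breadth-first search from a voxel assumed to lie outside the mesh. A sketch, again reusing the VoxelGrid from above:

```cpp
#include <array>
#include <cstdint>
#include <queue>
#include <vector>

// BFS flood fill of the exterior, starting from a corner voxel assumed to lie
// outside the mesh. Everything not reachable from outside (and not already
// surface) must be interior, so it gets marked solid. If the voxelized shell
// is not watertight at grid resolution, the fill leaks inside and no interior
// voxels get marked. On entry, solid == 1 means "surface".
void fillInterior(VoxelGrid& g)
{
    const int N = g.res;
    std::vector<uint8_t> exterior((size_t)N * N * N, 0);
    std::queue<std::array<int, 3>> q;
    q.push({0, 0, 0});                     // assumption: this corner is empty
    exterior[0] = 1;
    const int dirs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!q.empty()) {
        const auto [x, y, z] = q.front();
        q.pop();
        for (const auto& d : dirs) {
            const int nx = x + d[0], ny = y + d[1], nz = z + d[2];
            if (nx < 0 || ny < 0 || nz < 0 || nx >= N || ny >= N || nz >= N)
                continue;
            const size_t i = ((size_t)nz * N + ny) * N + nx;
            if (exterior[i] || g.solid[i]) continue;   // seen, or blocked by shell
            exterior[i] = 1;
            q.push({nx, ny, nz});
        }
    }
    for (size_t i = 0; i < exterior.size(); ++i)
        if (!exterior[i]) g.solid[i] = 1;  // unreachable from outside => interior
}
```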

Quote:
Original post by swiftcoder
Provided that your mesh forms a closed surface, it isn't that hard to flood fill the resulting hollow voxel volume. Wolfire had a blog post on this topic, a little while back.


Yeah, I have been meaning to implement something like that, most likely based on the scanline algorithm described there (it should be fast), but region-growing from a seed point might also be an option.
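Something along these lines is what I have in mind (just a sketch; see the caveat in the comments):

```cpp
// Scanline-style fill over the same VoxelGrid: walk each x-row and toggle an
// inside/outside flag each time the row passes through a run of shell voxels.
// Fast, but the parity breaks wherever a row merely grazes the shell
// tangentially (one crossing looks like entering the model), so the seeded
// flood fill above is the more robust of the two.
void scanlineFill(VoxelGrid& g)
{
    const int N = g.res;
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y) {
            bool inside = false;       // parity state for this row
            bool inShell = false;      // currently walking a run of shell voxels
            for (int x = 0; x < N; ++x) {
                if (g.at(x, y, z)) {
                    inShell = true;    // still inside a shell run
                } else {
                    if (inShell) inside = !inside;  // just exited a shell run
                    inShell = false;
                    if (inside) g.at(x, y, z) = 1;
                }
            }
        }
}
```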

Quote:
Original post by spacerat
Here's an example video: http://www.youtube.com/watch?v=HnKYoH75MTw
The paper linked from the video is very useful. It's somewhat related to the 'render in slices' method I proposed earlier, but with the advantage of filling watertight meshes and executing in a single pass.

PolyVox: The technique in that GPU Gems article is exactly what I had in mind with the stencil buffer approach. The downside, again, is that polygons parallel to the view direction are missed, since no pixels are visible for the stencil test. I'm not sure how bad this would be -- apparently the results are good enough for the collision detection in that fluid simulation, but is it accurate enough for a volume texture? Perhaps perturbing coplanar polygons by a small epsilon would work?

swiftcoder: I considered the approach of intersecting the model's polygon bounding boxes against a 3D regular grid to gather the silhouette voxels, but hadn't really thought about how I would fill the cavity inside the model. Flood fill makes a lot of sense. The only major limitation of this technique is that only matter is filled, not color, which is a requirement for me (and yes, I know the stencil technique suffers from the same problem). If the model were completely convex, a cube-map capture of the colors would work, but I can't rely on that.

As far as rendering directly to the volume texture, unfortunately my target hardware is D3D9 and this is only supported in D3D10 and up.

spacerat: I'm aware of this technique, and while it's very good it also suffers from the inability to store voxel colors (since the full 32 bits are used to store the 32 slices). Storing 32 slices in a single pass most DEFINITELY meets my criteria for excellent hardware-accelerated performance, ha. I'll have to see if there's some way to store the color (perhaps to a second MRT buffer). I'm not too worried about being limited to 32 slices, as I can just cap the model and do as many slice passes as needed.
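As I understand the encoding, each texel column packs its 32 slices into the bits of a single 32-bit value, and the solid fill falls out of XOR-ing suffix masks; roughly:

```cpp
#include <cstdint>

// Sketch of the 32-slices-in-32-bits encoding: each texel column stores one
// bit per depth slice. A surface fragment at normalized depth d sets every
// bit at or beyond its slice; XOR-accumulating those masks down a column
// leaves 1-bits exactly in the interior spans (entry/exit parity), assuming
// a watertight mesh. On the GPU that accumulation happens in the framebuffer
// rather than a CPU loop.
inline uint32_t solidMask(float depth01)   // depth01 in [0, 1)
{
    unsigned slice = (unsigned)(depth01 * 32.0f);
    if (slice > 31) slice = 31;            // clamp fragments at the far plane
    return 0xFFFFFFFFu << slice;           // bits [slice, 31] set
}

// Per fragment at pixel (x, y):  column[y * width + x] ^= solidMask(depth);
// Afterwards, bit k of a column is 1 iff slice k lies inside the model.
```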

Jason Z: Interesting! I like how that technique transfers the problem to image space. The idea of using the surface normals in addition to the distance map to determine interior and exterior voxels is a nice touch. The parity check, however, can be done more efficiently using the stencil method outlined in the previously mentioned GPU Gems article, which is equivalent to the "ray-stabbing" method. Using multiple projections might solve the limitations I mentioned above, though. I'll have to marinate on this further, but thanks for that!


Thanks for the suggestions so far, guys. I have a lot to think about...

[Edited by - Dirge on February 14, 2010 3:53:47 PM]

