can you convert on/off voxels to something seeable?

say i start with a 3d set of voxels, which are either on or off, representing a solid figure -- meaning all the voxels inside of it are on too. now from this, i have to go to a form that i can show in opengl/directx/whatever. what's the minimum amount of manipulation i have to do to get this done?

note that it has to be done fast, because my 3d space will have over 21 million on/off voxels and i want it updated -- that is, from a completely new set of voxels -- several times a second, as close to real-time as possible.

i'm actually taking 3-d hyperplanes out of a 4-d set of voxels, so if there's any manipulation i can perform on the entire set beforehand that will allow me to do this fast enough, that would be good. i'll have 4.6 billion 4D voxels in total, but with a 3D array of pointers to run-length encoded rows i should be able to stuff them all into ram in any necessary form.

[Edited by - inhahe on November 8, 2008 10:31:47 AM]
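for illustration, here's roughly the kind of structure i mean for the run-length encoded rows -- just a sketch, names and layout made up on the spot, and it assumes the hyperplane is taken at a fixed w:

#include <cstddef>
#include <cstdint>
#include <vector>

// One run-length encoded row of on/off voxels along x: store only the
// x positions where the value flips, starting from "off" at x = 0.
struct RleRow {
    std::vector<std::uint16_t> flips; // ascending toggle positions

    bool at(std::uint16_t x) const {
        // "on" iff an odd number of flips occur at or before x
        // (binary search would be faster; a linear scan keeps the sketch short).
        std::size_t n = 0;
        while (n < flips.size() && flips[n] <= x) ++n;
        return (n & 1) != 0;
    }
};

// 3D array of rows indexed by (y, z, w); rows run along x, so reading one
// 3-d hyperplane (fixed w) never touches the rest of the data.
struct Volume4D {
    int sy, sz, sw;              // row counts per axis (each row is sx voxels long)
    std::vector<RleRow> rows;    // sy * sz * sw rows

    const RleRow& row(int y, int z, int w) const {
        return rows[(std::size_t(w) * sz + z) * sy + y];
    }
};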
You know the figure is closed, but with no guarantee of convexity. Do you know if the figure is fully connected?

If you can guarantee that it is fully connected on the cardinal axes (by shared faces of the cubic voxels, not diagonally by shared edges or vertices), I just had a kind-of trippy idea of "growing" a polygonal shape from the volume.

1. Start with a cube of 8 voxels inside the figure (any 2x2x2 solid area), and create a "live" cube with its vertices on their centers. "Eat" those voxels (erase them).
2. Of all the polygon volumes that are currently "live", try to grow each of their vertices outward from all their other vertices, while staying inside the volume; pick a filled voxel adjacent to the current vertex which is strictly further from the average position of all the other vertices on the current volume. Move the current vertex to that voxel and eat it.
3. If no vertices of a given volume were moved, mark that volume as "dead", and eat all voxels whose centers are inside the volume.
4. Pick a face from a dead volume, pick four live voxels touching that face, and create a new "live" cube from the four new vertices and the four vertices from the dead shape. Go back to step 2.
5. If you can't find any face with four live voxels touching it, you're done.

Tweak to get desired results, but just in my head that sounds like it'd work. (A sketch of the seed search for step 1 follows.)
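For step 1 above, here's a minimal sketch of the seed search -- voxel() is a stand-in for however the on/off data is stored, and everything past finding the seed is left out:

#include <optional>

struct Seed { int x, y, z; };

// Scan for any 2x2x2 block of "on" voxels to seed the first live cube.
std::optional<Seed> findSeed(int sx, int sy, int sz,
                             bool (*voxel)(int, int, int)) {
    for (int z = 0; z + 1 < sz; ++z)
        for (int y = 0; y + 1 < sy; ++y)
            for (int x = 0; x + 1 < sx; ++x) {
                bool solid = true;
                for (int dz = 0; dz < 2 && solid; ++dz)
                    for (int dy = 0; dy < 2 && solid; ++dy)
                        for (int dx = 0; dx < 2 && solid; ++dx)
                            solid = voxel(x + dx, y + dy, z + dz);
                if (solid) return Seed{x, y, z}; // lower corner of the 2x2x2
            }
    return std::nullopt; // no solid 2x2x2 block anywhere
}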

Edit: oh, and without some assumptions, AFAIK you cannot devolve a 256x256x256 voxel map into a polygonal mass in realtime. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.
RIP GameDev.net: launched 2 unusably-broken forum engines in as many years, and now has ceased operating as a forum at all, happy to remain naught but an advertising platform with an attached social media presence, headed by a staff who by their own admission have no idea what their userbase wants or expects. Here's to the good times; shame they exist in the past.
Is there any reason why raycasting isn't a viable solution here?

In the medical imaging industry, raycasting is pretty much the de facto standard for realtime rendering of raw (unprocessed) 3D voxel volumes, although this may be because transparency is quite often a requirement.

(transparency pretty much falls right out of raycasting, requiring nothing special or time-consuming to implement)
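To illustrate why it falls out: the core of a raycaster is just front-to-back compositing with early termination. A minimal sketch, where opacity() and colour() are stand-ins for sampling the volume at the i-th step along the ray through whatever transfer function you use:

#include <functional>

struct Rgb { float r, g, b; };

// Front-to-back compositing along one ray.
Rgb castRay(int steps,
            const std::function<float(int)>& opacity,
            const std::function<Rgb(int)>& colour) {
    Rgb out{0.0f, 0.0f, 0.0f};
    float transmittance = 1.0f; // fraction of light still passing through
    for (int i = 0; i < steps; ++i) {
        float a = opacity(i);
        Rgb   c = colour(i);
        out.r += transmittance * a * c.r;
        out.g += transmittance * a * c.g;
        out.b += transmittance * a * c.b;
        transmittance *= 1.0f - a;
        if (transmittance < 0.01f) break; // early termination: ray is nearly opaque
    }
    return out;
}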

Then there is marching cubes, which is now out of patent. Generating a simple model should be pretty fast, but the model won't be optimal (many more triangles than necessary).

The ideal structure (or companion) for marching cubes is almost certainly an octree or similar, where you are certain to be able to skip very large chunks of the volume rather than iterating over all 21+ million cells.
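A sketch of that skipping idea, using a flat occupancy grid instead of an actual tree (all names made up): classify each 16x16x16 chunk as empty, full, or mixed when the volume updates, then polygonize only the mixed chunks, since the isosurface can only cross those.

#include <cstdint>
#include <vector>

// One byte per chunk of 16x16x16 voxels: 0 = all off, 1 = all on, 2 = mixed.
struct ChunkGrid {
    int cx, cy, cz;                   // number of chunks along each axis
    std::vector<std::uint8_t> state;  // cx * cy * cz entries

    std::uint8_t at(int x, int y, int z) const {
        return state[(z * cy + y) * cx + x];
    }
};

// Visit only the chunks that can contain surface.
template <typename Fn>
void forEachSurfaceChunk(const ChunkGrid& g, Fn polygonizeChunk) {
    for (int z = 0; z < g.cz; ++z)
        for (int y = 0; y < g.cy; ++y)
            for (int x = 0; x < g.cx; ++x)
                if (g.at(x, y, z) == 2)       // mixed: may contain isosurface
                    polygonizeChunk(x, y, z);
}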
Quote:Original post by Wyrframe
oh, and without some assumptions, AFAIK you cannot devolve a 256x256x256 voxel map into a polygonal mass in realtime. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.


Actually, I think you probably can do it in real time. I have an old project called 'VoxelStudio' which, as I recall, was able to do a 512x512x512 volume at about 1 FPS. But your volume is about 1/8th that size, modern hardware is faster (mine was an AMD 1800-something, maybe 4 years old now), and I know that my marching cubes algorithm was slower than it could have been.

See the project here: http://david-williams.info/index.php?option=com_content&task=view&id=22&Itemid=34

I used an octree to quickly discard chunks of the volume and find the isosurface - you probably won't have this option, and I don't recall how well it worked without it. Also, I was using OpenGL immediate mode (no point building index/vertex buffers for one frame). I don't know where the bottleneck was - the speed of the marching cubes algorithm or the graphics card throughput (NVidia 6600 series, I think).

However, I do work in medical visualization and in general would suggest you use raycasting or proxy geometry for the rendering of this kind of data set. These will probably be easier to implement, and you will also find it much easier to trade quality for performance if you need to.
thanks for the responses

i misspoke, btw, about it being one solid figure -- it could be several

about raycasting or proxy geometry, i just don't know very much about this field. are those software or hardware solutions? when googling this problem, i found out about the VolumePro, but it seems expensive and hard to get. i did find one on ebay for cheap, though..

i was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly.. (but again i know nothing about this)

i really want at least 5 fps -- ideally 30 -- on an average pc, but maybe that's unrealistic

i can't really put any expensive hardware into it -- this isn't for professional purposes, just a little project. i could settle for a really slow framerate, though

thanks
Quote:Original post by inhahe
about raycasting or proxy geometry, i just don't know very much about this field. are those software or hardware solutions? when googling this problem, i found out about the VolumePro, but it seems expensive and hard to get. i did find one on ebay for cheap, though..


I believe the VolumePro is pretty old technology - I don't think it will have anything to offer over a recent GPU.

Of the techniques I mentioned, proxy geometry will probably be the easiest approach to implement on a GPU, and raycasting will be easiest on a CPU. The problem you will encounter in a GPU implementation is the huge amount of data - it will be hard to upload it fast enough, and compression on the graphics card won't be as easy as on the CPU.

Now that I know you don't actually have to generate a mesh, my gut instinct would be to go for a CPU raycasting solution.
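For binary data the heart of a CPU raycaster is just stepping each ray exactly through the voxels it crosses. A minimal sketch in the style of the standard Amanatides-Woo traversal -- voxel() and inside() are assumed helpers, and rays with zero direction components aren't handled:

#include <cmath>

struct Hit { int x, y, z; bool found; };

// Step a ray through the grid, stopping at the first "on" voxel.
Hit traverse(float ox, float oy, float oz,   // ray origin, in voxel units
             float dx, float dy, float dz,   // ray direction, non-zero components
             bool (*voxel)(int, int, int),
             bool (*inside)(int, int, int)) {
    int x = (int)std::floor(ox), y = (int)std::floor(oy), z = (int)std::floor(oz);
    int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;
    // Ray parameter t at the next voxel boundary on each axis...
    float tx = ((x + (sx > 0)) - ox) / dx;
    float ty = ((y + (sy > 0)) - oy) / dy;
    float tz = ((z + (sz > 0)) - oz) / dz;
    // ...and the t increment between successive boundaries per axis.
    float ix = std::fabs(1.0f / dx), iy = std::fabs(1.0f / dy), iz = std::fabs(1.0f / dz);
    while (inside(x, y, z)) {
        if (voxel(x, y, z)) return {x, y, z, true};
        if (tx < ty && tx < tz) { x += sx; tx += ix; }  // cross an x boundary next
        else if (ty < tz)       { y += sy; ty += iy; }  // cross a y boundary next
        else                    { z += sz; tz += iz; }  // cross a z boundary next
    }
    return {0, 0, 0, false}; // ray left the volume without hitting anything
}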

Quote:Original post by inhahe
i was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly.. (but again i know nothing about this)


Doesn't matter - you can simply use linear interpolation to get a value at any point in the volume. Search for 'central difference' to find info on gradient computation.
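A sketch of what that looks like in code -- f is a stand-in for sampling the volume as a float density; on pure binary data you'll want to smooth or interpolate first, or the normals come out blocky:

#include <cmath>

struct Vec3 { float x, y, z; };

// Central-difference gradient at voxel (x, y, z). The shading normal is
// this vector normalized (negate it if your data increases toward the inside).
template <typename F>
Vec3 gradient(F&& f, int x, int y, int z) {
    Vec3 g{ (f(x + 1, y, z) - f(x - 1, y, z)) * 0.5f,
            (f(x, y + 1, z) - f(x, y - 1, z)) * 0.5f,
            (f(x, y, z + 1) - f(x, y, z - 1)) * 0.5f };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; }
    return g;
}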

Quote:Original post by inhahe
i really want at least 5 fps -- ideally 30 -- on an average pc, but maybe that's unrealistic


Should be ok... I reckon you should get 10-15 FPS. Referring back to my project in the previous post: if you download it, it comes with a 256^3 volume, and one of the renderers is a software raycaster. You'll have to manage without the octree I used, but actually I don't think it helped that much on a small dataset.
Quote:Original post by inhahe
i was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly.. (but again i know nothing about this)


Voxels are not points, they are volumes! Voxel is short for Volume Pixel.

If you don't want your individual voxels to be rendered as cubes, then you will need to make some assumptions about the surface being approximated and then fake it, because your data simply doesn't cover broad surface properties. This is true regardless of your rendering method.

i called them points because i won't be rendering them as cubes anyway given that my voxels are the same size as my pixels -- and because, since i know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points.. i think 'volumetric pixel' is somewhat of a misnomer for any application but a 3-d monitor.

to PolyVox: i looked up central difference, seems to be basically a derivative over data points -- makes sense. but which order of accuracy is usually used with volume rendering?

thx
Quote:Original post by inhahe
i called them points because i won't be rendering them as cubes anyway given that my voxels are the same size as my pixels -- and because, since i know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points.. i think 'volumetric pixel' is somewhat of a misnomer for any application but a 3-d monitor.
Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of images, formed of small squares. It would be very unlikely that your object was indeed formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by swiftcoder
Quote:Original post by inhahe
i called them points because i won't be rendering them as cubes anyway given that my voxels are the same size as my pixels -- and because, since i know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points.. i think 'volumetric pixel' is somewhat of a misnomer for any application but a 3-d monitor.
Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of images, formed of small squares. It would be very unlikely that your object was indeed formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.


well i'm saying that i can't think of any reasonable way to interpret them as cubes. given that it's not actually made of cubes (and hence not to be interpreted that way), regarding them as cubes is just extra complexity. the reason i say it's a misnomer for anything but a 3-d monitor is that pixels actually *are* squares, because that's how they're displayed (unless they were displayed via, say, sublimation printing). in voxel space, otoh, you're not actually treating them as cubes in any way, shape or form.

i'm not sure what you meant about points being a lousy approximation for a shape, but another way of putting it is: say i have a smooth surface, and at regular intervals (voxel indices) i take samples. since information about that sample's/voxel's exact position has been rounded/floored to the nearest integer (its voxel index), that voxel index might as well be considered a point, because that voxel's information, in itself, has no specific form information for any practical purpose or in relation to the original image (not counting its (int)'d voxel index). i suppose, though, that even though it's not quite a cube, it's not quite a point either, given that the actual position it represents is nebulous, while a point's is exact.

anyway, i was looking at http://www.cse.ohio-state.edu/~hertleia/lab3.html, and it seems that 'central difference' is the least accurate way to take the gradient. so i'm looking at romberg's method now (http://en.wikipedia.org/wiki/Romberg's_method) and trying to determine if it applies to data points that are not derived from an equation..
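e.g., as far as i can tell the usual cheap improvement over plain central differences is just a wider stencil (standard fourth-order finite-difference coefficients) rather than anything romberg-like -- something along these lines, shown for the x component only, with f again a stand-in for sampling the volume:

// fourth-order central difference along x (same pattern for y and z):
// f'(x) ~ (-f(x+2) + 8 f(x+1) - 8 f(x-1) + f(x-2)) / 12
template <typename F>
float dfdx4(F&& f, int x, int y, int z) {
    return (-f(x + 2, y, z) + 8.0f * f(x + 1, y, z)
            - 8.0f * f(x - 1, y, z) + f(x - 2, y, z)) / 12.0f;
}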

