inhahe

OpenGL: Can you convert on/off voxels into something viewable?


Say I start with a 3D set of voxels, each either on or off, representing a solid figure -- meaning all the voxels inside it are on too. From this, I have to get to a form I can show in OpenGL/DirectX/whatever. What's the minimum amount of manipulation I have to do to get this done? Note that it has to be done fast, because my 3D space will have over 21 million on/off voxels and I want it updated -- that is, rebuilt from a completely new set of voxels -- several times a second, as close to real time as possible. I'm actually taking 3D hyperplanes out of a 4D set of voxels, so if there's any manipulation I can perform on the entire set beforehand that will let me do this fast enough, that would be good. I'll have 4.6 billion 4D voxels in total, but with a 3D array of pointers to run-length encoded rows I should be able to fit them all into RAM in whatever form is necessary. [Edited by - inhahe on November 8, 2008 10:31:47 AM]
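For concreteness, here is a rough sketch of what the run-length-encoded-rows layout could look like and how an axis-aligned 3D hyperplane would fall out of it. All names, types and the axis-aligned assumption are illustrative only, not anything from the original post.

// Minimal sketch of the RLE-row idea: runs stored along x, one row per
// (y, z, w). Sizes and names are assumptions.
#include <algorithm>
#include <cstdint>
#include <vector>

struct RleRow {
    // Alternating run lengths along x, starting with an "off" run.
    std::vector<uint16_t> runs;
};

struct Volume4D {
    int nx, ny, nz, nw;
    std::vector<RleRow> rows; // indexed by (y, z, w)

    Volume4D(int x, int y, int z, int w)
        : nx(x), ny(y), nz(z), nw(w), rows((size_t)y * z * w) {}

    RleRow& row(int y, int z, int w) {
        return rows[((size_t)w * nz + z) * ny + y];
    }

    // Decode one row into per-voxel on/off flags.
    void decodeRow(int y, int z, int w, std::vector<uint8_t>& out) {
        out.assign(nx, 0);
        int x = 0;
        bool on = false; // first run is "off" by convention
        for (uint16_t len : row(y, z, w).runs) {
            if (on) std::fill(out.begin() + x, out.begin() + x + len, 1);
            x += len;
            on = !on;
        }
    }
};

// Extracting the 3D hyperplane at a fixed w is then just decoding ny * nz
// rows -- no per-voxel work is needed inside the 4D structure itself.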

You know the figure is closed, but there's no guarantee of convexity. Do you know if the figure is fully connected?

If you can guarantee that it is fully connected along the cardinal axes (by shared faces of the cubic voxels, not diagonally by shared edges or vertices), I just had a kind-of trippy idea of "growing" a polygonal shape from the volume.

1. Start with a cube of 8 voxels inside the figure (any 2x2x2 solid area), and create a "live" cube with its vertices on their centers. "Eat" those voxels (erase them).
2. Of all the polygon volumes that are currently "live", try to grow each of their vertices outward from all their other vertices, while staying inside the volume; pick a filled voxel adjacent to the current vertex which is strictly further from the average position of all the other vertices on the current volume. Move the current vertex to that voxel and eat it.
3. If no vertices of a given volume were moved, mark that volume as "dead", and eat all voxels whose centers are inside the volume.
4. Pick a face from a dead volume, pick four live voxels touching that face, and create a new "live" cube from the four new vertices and the four vertices from the dead shape. Go back to step 2.
5. If you can't find any face with four live voxels touching it, you're done.

Tweak to get desired results, but just in my head that sounds like it'd work.

Edit: oh, and without some assumptions, AFAIK you cannot turn a 256x256x256 voxel map into a polygonal mesh in real time. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.
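To make that face count concrete, here is a rough sketch of the brute-force extraction being described: visit every voxel and emit a quad for each face that borders empty space. The flat bit array and the names are assumptions for illustration only.

// Brute-force boundary-face extraction over an on/off volume of edge n.
#include <cstdint>
#include <vector>

struct Quad { int x, y, z, axis, sign; }; // face of the unit cube at (x,y,z)

static bool solid(const std::vector<uint8_t>& v, int n, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return false;
    return v[(size_t)(z * n + y) * n + x] != 0;
}

std::vector<Quad> extractFaces(const std::vector<uint8_t>& v, int n) {
    std::vector<Quad> quads;
    const int d[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                if (!solid(v, n, x, y, z)) continue;
                for (int f = 0; f < 6; ++f)  // six neighbour tests per voxel
                    if (!solid(v, n, x + d[f][0], y + d[f][1], z + d[f][2]))
                        quads.push_back({x, y, z, f / 2, d[f][f / 2]});
            }
    return quads; // only boundary faces survive, but every test above still ran
}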

Is there any reason why raycasting isn't a viable solution here?

In the medical imaging industry, raycasting is pretty much the de facto standard for real-time rendering of raw (unprocessed) 3D voxel volumes, although this may be because transparency is quite often a requirement (transparency pretty much falls right out of raycasting, requiring nothing special or time-consuming to implement).

Then there is marching cubes, which is now out of patent. Generating a simple model should be pretty fast; however, the model won't be very optimal (many more triangles than necessary).

The ideal structure (or companion) for marching cubes is almost certainly an octree or similar hierarchy, where you are certain to be able to skip very large chunks of the volume rather than iterating over all 24 million cells.
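A minimal sketch of that chunk-skipping idea, using a coarse block grid instead of a full octree (block size and names are assumed): only blocks that contain both solid and empty voxels need to be handed to marching cubes.

// Coarse occupancy grid: mark blocks that straddle the surface.
#include <cstdint>
#include <vector>

constexpr int B = 16; // block edge length in voxels (assumed)

struct BlockGrid {
    int n;                      // volume edge length, assumed a multiple of B
    std::vector<uint8_t> mixed; // 1 if the block contains both on and off voxels

    BlockGrid(const std::vector<uint8_t>& voxels, int n) : n(n) {
        int nb = n / B;
        mixed.assign((size_t)nb * nb * nb, 0);
        for (int bz = 0; bz < nb; ++bz)
            for (int by = 0; by < nb; ++by)
                for (int bx = 0; bx < nb; ++bx) {
                    size_t onCount = 0;
                    for (int z = 0; z < B; ++z)
                        for (int y = 0; y < B; ++y)
                            for (int x = 0; x < B; ++x) {
                                size_t idx = (size_t)(bz * B + z) * n * n
                                           + (size_t)(by * B + y) * n
                                           + (bx * B + x);
                                onCount += voxels[idx];
                            }
                    bool allOff = (onCount == 0);
                    bool allOn  = (onCount == (size_t)B * B * B);
                    mixed[((size_t)bz * nb + by) * nb + bx] = !(allOff || allOn);
                }
    }
};
// Marching cubes (or face extraction) then only visits blocks with mixed[b] == 1.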

Quote:
Original post by Wyrframe
oh, and without some assumptions, AFAIK you cannot turn a 256x256x256 voxel map into a polygonal mesh in real time. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.


Actually, I think you probably can do it in real time. I have an old project called 'VoxelStudio' which, as I recall, was able to do a 512x512x512 volume at about 1 FPS. But your volume is about 1/8th that size, you'll be on more modern hardware (mine was an AMD 1800-something, maybe 4 years old now), and I know that my marching cubes algorithm was slower than it could have been.

See the project here: http://david-williams.info/index.php?option=com_content&task=view&id=22&Itemid=34

I used an octree to quickly discard chunks of the volume and find the isosurface - you probably won't have this option, and I don't recall how well it worked without it. Also, I was using OpenGL immediate mode (no point building index/vertex buffers for one frame). I don't know where the bottleneck was - the speed of the marching cubes algorithm or the graphics card throughput (NVidia 6600 series, I think).

However, I do work in medical visualization and in general would suggest you use raycasting or proxy geometry for the rendering of this kind of data set. These will probably be easier to implement, and you will also find it much easier to trade quality for performance if you need to.
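For reference, "proxy geometry" here usually means drawing the volume as a stack of textured slices. A bare-bones sketch of the drawing loop follows, assuming the volume has already been uploaded as a 3D texture and a GL context exists; it is illustrative only, not code from VoxelStudio.

// Slice-based proxy geometry: blend a stack of textured quads back to front.
// GL_TEXTURE_3D requires OpenGL 1.2+ (on Windows you may need glext.h).
#include <GL/gl.h>

void drawSlices(int sliceCount) {
    glEnable(GL_TEXTURE_3D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBegin(GL_QUADS);
    for (int i = 0; i < sliceCount; ++i) {
        float t = (i + 0.5f) / sliceCount;  // slice depth in texture space [0,1]
        float z = 2.0f * t - 1.0f;          // quad position along the slicing axis
        glTexCoord3f(0.0f, 0.0f, t); glVertex3f(-1.0f, -1.0f, z);
        glTexCoord3f(1.0f, 0.0f, t); glVertex3f( 1.0f, -1.0f, z);
        glTexCoord3f(1.0f, 1.0f, t); glVertex3f( 1.0f,  1.0f, z);
        glTexCoord3f(0.0f, 1.0f, t); glVertex3f(-1.0f,  1.0f, z);
    }
    glEnd();
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}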

Thanks for the responses.

I misspoke, btw, about it being one solid figure -- it could be several.

About raycasting or proxy geometry: I just don't know very much about this field. Are those software or hardware solutions? When googling this problem I found out about the VolumePro, but it seems expensive and hard to get. I did find one on eBay for cheap, though.

I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).

I really want at least 5 FPS -- ideally 30 -- on an average PC, but maybe that's unrealistic.

I can't really put any expensive hardware into it -- this isn't for professional purposes, just a little project. I could settle for a really slow framerate, though.

Thanks.

Quote:
Original post by inhahe
About raycasting or proxy geometry: I just don't know very much about this field. Are those software or hardware solutions? When googling this problem I found out about the VolumePro, but it seems expensive and hard to get. I did find one on eBay for cheap, though.


I believe the VolumePro is pretty old technology - I don't think it will have anything to offer over a recent GPU.

Of the techniques I mentioned, proxy geometry will probably be the easiest approach to implement on a GPU, and raycasting will be easiest on a CPU. The problem you will encounter in a GPU implementation is the huge amount of data - it will be hard to upload it fast enough, and compression on the graphics card won't be as easy as on the CPU.

Now that I know you don't actually have to generate a mesh, my gut instinct would be to go for a CPU raycasting solution.
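As a rough illustration of what a CPU raycaster boils down to, here is a minimal fixed-step ray-marching sketch over an on/off volume. A real implementation would use a DDA or empty-space-skipping traversal; the names and camera setup are assumptions.

// One ray is fired per screen pixel; this returns the first solid voxel hit.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static bool on(const std::vector<uint8_t>& v, int n, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return false;
    return v[(size_t)(z * n + y) * n + x] != 0;
}

// Marches from 'origin' along 'dir' (assumed normalised) in steps of 'dt'
// voxels, up to 'maxT'; returns true and the hit voxel on entering solid material.
bool raycast(const std::vector<uint8_t>& vol, int n,
             Vec3 origin, Vec3 dir, float dt, float maxT,
             int& hx, int& hy, int& hz) {
    for (float t = 0.0f; t < maxT; t += dt) {
        int x = (int)(origin.x + dir.x * t);
        int y = (int)(origin.y + dir.y * t);
        int z = (int)(origin.z + dir.z * t);
        if (on(vol, n, x, y, z)) { hx = x; hy = y; hz = z; return true; }
    }
    return false;
}
// Shading then uses a gradient or face normal at the hit voxel.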

Quote:
Original post by inhahe
I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).


Doesn't matter, you can simply use linear interpolation to get a value at any point in the volume. Search for 'central difference' to find info on gradient computation.
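A hedged sketch of the central-difference gradient on such a volume (names are illustrative; with pure on/off data you would normally blur or low-pass the volume first so the gradient isn't limited to a few discrete directions):

// Central-difference gradient at voxel (x,y,z), normalised for use as a normal.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static float sample(const std::vector<uint8_t>& v, int n, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return 0.0f;
    return (float)v[(size_t)(z * n + y) * n + x];
}

Vec3 centralDifference(const std::vector<uint8_t>& v, int n, int x, int y, int z) {
    Vec3 g;
    g.x = (sample(v, n, x + 1, y, z) - sample(v, n, x - 1, y, z)) * 0.5f;
    g.y = (sample(v, n, x, y + 1, z) - sample(v, n, x, y - 1, z)) * 0.5f;
    g.z = (sample(v, n, x, y, z + 1) - sample(v, n, x, y, z - 1)) * 0.5f;
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; } // flip sign as needed by convention
    return g;
}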

Quote:
Original post by inhahe
I really want at least 5 FPS -- ideally 30 -- on an average PC, but maybe that's unrealistic.


Should be OK... I reckon you should get 10-15 FPS. Referring back to my project in the previous post: if you download it, it comes with a 256^3 volume, and one of the renderers is a software raycaster. You'll have to manage without the octree I used, but actually I don't think it helped that much on a small dataset.

Quote:
Original post by inhahe
I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).


Voxels are not points, they are volumes! Voxel is short for Volume Pixel.

If you don't want your individual voxels to be rendered as cubes, then you will need to make some assumptions about the surface being approximated and then fake it, because your data simply doesn't cover broad surface properties. This is true regardless of your rendering method.

I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.

To PolyVox: I looked up central difference; it seems to be basically a derivative over data points -- makes sense. But which order of derivative is usually used with volume rendering?

thx

Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.
Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of an image, formed of small squares. It would be very unlikely that your object was actually formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.

Quote:
Original post by swiftcoder
Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.
Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of an image, formed of small squares. It would be very unlikely that your object was actually formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.


Well, I'm saying that I can't think of any reasonable way to interpret them as cubes. Given that it's not actually made of cubes (and hence not to be interpreted that way), regarding them as cubes is just extra complexity. The reason I say it's a misnomer for anything but a 3D monitor is that pixels actually *are* squares, because that's how they're displayed (unless they were displayed via, say, sublimation printing). In voxel space, on the other hand, you're not actually treating them as cubes in any way, shape or form. I'm not sure what you meant about points being a lousy approximation of a shape, but another way of putting it is: say I have a smooth surface, and at regular intervals (voxel indices) I take samples. Since information about each sample's/voxel's exact position has been rounded/floored to the nearest integer (its voxel index), that voxel might as well be considered a point, because the voxel's information, in itself, carries no specific form information for any practical purpose or in relation to the original image (not counting its (int)'d voxel index). I suppose, though, that even though it's not quite a cube, it's not quite a point either, given that the actual position it represents is nebulous, while a point's is exact.

Anyway, I was looking at http://www.cse.ohio-state.edu/~hertleia/lab3.html, and it seems that 'central difference' is the least accurate way to take the gradient. So I'm looking at Romberg's method now (http://en.wikipedia.org/wiki/Romberg's_method) and trying to determine if it applies to data points that are not derived from an equation.

Quote:
Original post by inhahe
To PolyVox: I looked up central difference; it seems to be basically a derivative over data points -- makes sense. But which order of derivative is usually used with volume rendering?


You will probably find that Central Difference is fine (I guess it's first order?) but if you want better quality then look for the Sobel operator. In both cases these will compute the gradient at a given voxel - you will then want to interpolate that gradient between voxels to get the gradient at an arbitrary point.
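As an illustration of that last step, here is a small sketch of trilinearly interpolating per-voxel gradients to get a normal at an arbitrary point. gradAt stands in for whichever per-voxel gradient you compute (central difference or Sobel); its name and signature are assumptions.

// Trilinear blend of the eight surrounding per-voxel gradients.
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

Vec3 trilinearGradient(const std::function<Vec3(int, int, int)>& gradAt,
                       float px, float py, float pz) {
    int x0 = (int)std::floor(px), y0 = (int)std::floor(py), z0 = (int)std::floor(pz);
    float fx = px - x0, fy = py - y0, fz = pz - z0;
    Vec3 r = {0.0f, 0.0f, 0.0f};
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
                float w = (dx ? fx : 1.0f - fx)   // weight of this corner
                        * (dy ? fy : 1.0f - fy)
                        * (dz ? fz : 1.0f - fz);
                Vec3 g = gradAt(x0 + dx, y0 + dy, z0 + dz);
                r.x += w * g.x; r.y += w * g.y; r.z += w * g.z;
            }
    return r; // renormalise before using it as a shading normal
}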

Quote:
Original post by inhahe
I'm not sure what you meant about points being a lousy approximation of a shape
Adjacent points have empty space between each other, adjacent voxels do not. If I zoom in on a pixel image, should I expect to see empty space between the pixels? Nor should you expect empty space between voxels.

Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points.


You are arguing that you think the point concept will lend itself better to rendering, but as someone somewhat experienced with voxel rendering (raycasting), I know that simply isn't true.

As another poster has already mentioned, you do not consider the space between voxels to be empty, so the point abstraction only adds more complexity.

I guess what I am trying to say is that if your data set really is just discrete 3D samples, then you should begin there. Really.

If you do have more surface information than the discrete 3D sample paradigm provides (surface normals, non-discrete samples, and so forth), then you probably shouldn't even be using voxels. There are algorithms which convert point clouds (non-discrete 3D samples) into 3D models, for instance (you might use marching triangles for that).

If you cook up a simple raycaster that treats them as raw cubes, you may actually find that this is good enough for your purposes. You might be surprised at how much geometric information can be conveyed to the user with simple diffuse + specular shading (using a point light for variation in the specular), even with every surface axis-aligned. If that's not good enough, then you aren't far from smoothing techniques, and you will want to use that lighting model anyway (smoothing techniques basically just play with the surface normals, unless we're talking about marching cubes).
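To illustrate that lighting model, here is a small Blinn-Phong-style sketch of diffuse + specular shading applied to an axis-aligned face normal at a ray hit. The light position, weights and exponent are made-up values.

// Simple diffuse + specular term for a hit point with normal n.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// n: face normal (one of +-X/+-Y/+-Z), p: hit point, eye: camera position.
float shade(Vec3 n, Vec3 p, Vec3 eye, Vec3 lightPos) {
    Vec3 l = normalize({lightPos.x - p.x, lightPos.y - p.y, lightPos.z - p.z});
    Vec3 v = normalize({eye.x - p.x, eye.y - p.y, eye.z - p.z});
    float diffuse = std::fmax(0.0f, dot(n, l));
    Vec3 h = normalize({l.x + v.x, l.y + v.y, l.z + v.z}); // Blinn half vector
    float specular = std::pow(std::fmax(0.0f, dot(n, h)), 32.0f);
    return 0.1f + 0.7f * diffuse + 0.2f * specular;        // ambient + diffuse + specular
}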
