OpenGL: can you convert on/off voxels to something seeable?


Say I start with a 3D set of voxels, each either on or off, representing a solid figure -- meaning all the voxels inside it are on too. From this I have to get to a form I can display in OpenGL/DirectX/whatever. What's the minimum amount of manipulation I have to do to get this done? Note that it has to be done fast, because my 3D space will have over 21 million on/off voxels and I want it rebuilt -- that is, regenerated from a completely new set of voxels -- several times a second, as close to real time as possible. I'm actually taking 3D hyperplanes out of a 4D set of voxels, so if there's any manipulation I can perform on the entire set beforehand that would let me do this fast enough, that would be good. I'll have 4.6 billion 4D voxels in total, but with a 3D array of pointers to run-length-encoded rows I should be able to fit them all into RAM in any necessary form.
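
For concreteness, here's a rough sketch of the kind of storage I mean -- each row run-length encoded along w, with a 3D slice pulled out at a fixed w. The names and layout are just placeholders, and I've flattened my "3D array of pointers" into a plain vector for brevity:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Run { uint32_t length; bool on; };   // one run of identical voxels
using RleRow = std::vector<Run>;            // one row of the 4D set, encoded along w

// Query whether the voxel at offset w along this row is on.
bool sampleRow(const RleRow& row, uint32_t w) {
    uint32_t pos = 0;
    for (const Run& r : row) {
        if (w < pos + r.length) return r.on;
        pos += r.length;
    }
    return false;                           // past the end of the row: treat as off
}

// Extract the 3D hyperplane at a fixed w into a flat on/off array.
// 'rows' is the flattened 3D array of RLE rows, indexed by (x, y, z).
std::vector<uint8_t> extractSlice(const std::vector<RleRow>& rows,
                                  uint32_t dimX, uint32_t dimY, uint32_t dimZ,
                                  uint32_t w) {
    std::vector<uint8_t> slice(dimX * dimY * dimZ);
    for (std::size_t i = 0; i < slice.size(); ++i)
        slice[i] = sampleRow(rows[i], w) ? 1 : 0;
    return slice;
}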

You know the figure is closed, but with no guarantee of convexity. Do you know if the figure is fully connected?

If you can guarantee that it is fully connected along the cardinal axes (by shared faces of the cubic voxels, not diagonally by shared edges or vertices), I just had a kind-of trippy idea of "growing" a polygonal shape from the volume:

1. Start with a cube of 8 voxels inside the figure (any 2x2x2 solid area), and create a "live" cube with its vertices at their centers. "Eat" those voxels (erase them).
2. For each volume that is currently "live", try to grow each of its vertices outward from all of its other vertices while staying inside the figure: pick a filled voxel adjacent to the current vertex that is strictly further from the average position of all the other vertices of the current volume, move the current vertex to that voxel, and eat it.
3. If no vertices of a given volume were moved, mark that volume as "dead" and eat all voxels whose centers are inside it.
4. Pick a face from a dead volume, pick four remaining filled voxels touching that face, and create a new "live" cube from the four new vertices and the four vertices of the dead shape. Go back to step 2.
5. If you can't find any face with four filled voxels touching it, you're done.

Tweak to get desired results, but just in my head that sounds like it'd work.

Edit: oh, and without some assumptions, AFAIK you cannot decompose a 256x256x256 voxel map into a polygonal mesh in real time. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.
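
For reference, the brute force I'm counting there is something like the sketch below -- walk every voxel and emit a quad wherever a filled voxel borders an empty one (placeholder names, nothing optimized):

#include <cstdint>
#include <vector>

struct Quad { int x, y, z, axis, dir; };    // source voxel, face axis (0..2), facing -1 or +1

inline bool filled(const std::vector<uint8_t>& v, int dim, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) return false;
    return v[x + y * dim + z * dim * dim] != 0;
}

// Emit one quad for every face separating a filled voxel from an empty one.
std::vector<Quad> extractFaces(const std::vector<uint8_t>& v, int dim) {
    std::vector<Quad> quads;
    const int off[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x) {
                if (!filled(v, dim, x, y, z)) continue;
                for (int a = 0; a < 3; ++a)
                    for (int dir = -1; dir <= 1; dir += 2)
                        if (!filled(v, dim, x + dir * off[a][0],
                                            y + dir * off[a][1],
                                            z + dir * off[a][2]))
                            quads.push_back({ x, y, z, a, dir });
            }
    return quads;
}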

Is there any reason why raycasting isn't a viable solution here?

In the medical imaging industry, raycasting is pretty much the de facto standard for real-time rendering of raw (unprocessed) 3D voxel volumes, although this may be because transparency is quite often a requirement (transparency pretty much falls right out of raycasting, requiring nothing special or time-consuming to implement).

Then there is marching cubes, which is now out of patent. Generating a simple mesh should be pretty fast, but the mesh won't be very efficient (many more triangles than necessary).

The ideal structure (or companion) for marching cubes is almost certainly an octree or similar hierarchy, so that you can skip very large chunks of the volume rather than iterating over all 21 million cells.
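
To be concrete about what raycasting means here, the core of a CPU version just marches each ray through the volume until it hits a filled voxel -- something like this sketch (fixed step size, no shading, placeholder names):

#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

inline bool sampleVolume(const std::vector<uint8_t>& voxels, int dim, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) return false;
    return voxels[x + y * dim + z * dim * dim] != 0;
}

// March from 'origin' along normalized 'dir'; returns true and writes the hit
// position if a filled voxel is found within maxDist.
bool castRay(const std::vector<uint8_t>& voxels, int dim,
             Vec3 origin, Vec3 dir, float maxDist, Vec3& hit) {
    const float step = 0.5f;    // half a voxel per step -- crude but simple
    for (float t = 0.0f; t < maxDist; t += step) {
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        if (sampleVolume(voxels, dim, (int)std::floor(p.x),
                                      (int)std::floor(p.y),
                                      (int)std::floor(p.z))) {
            hit = p;
            return true;
        }
    }
    return false;
}

A real raycaster would step voxel by voxel with a DDA traversal rather than a fixed step, and would shade the hit point using an estimated gradient, but the structure is the same.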

Quote:
Original post by Wyrframe
oh, and without some assumptions, AFAIK you cannot decompose a 256x256x256 voxel map into a polygonal mesh in real time. That's up to 257x257x257*6 faces to generate, and the same number of tests to perform.


Actually, I think you probably can do it in real time. I have an old project called 'VoxelStudio' which, as I recall, was able to do a 512x512x512 volume at about 1 FPS. But your volume is about 1/8th that size, you'll be running on more modern hardware (mine was an AMD 1800-something, maybe 4 years old now), and I know that my marching cubes algorithm was slower than it could have been.

See the project here: http://david-williams.info/index.php?option=com_content&task=view&id=22&Itemid=34

I used an octree to quickly discard chunks of the volume and find the isosurface - you probably won't have this option, and I don't recall how well it worked without it. Also, I was using OpenGL immediate mode (no point building index/vertex buffers for one frame). I don't know where the bottleneck was - the speed of the marching cubes algorithm or the graphics card throughput (an NVidia 6600 series, I think).
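
The kind of skipping I mean is roughly this (just a sketch, placeholder names): test whether a block of the volume is uniform, and only run marching cubes over the blocks that are mixed, since a uniform block contains no surface cells of its own.

#include <cstdint>
#include <vector>

constexpr int CHUNK = 16;   // assumes the volume dimension is a multiple of CHUNK

inline bool voxelAt(const std::vector<uint8_t>& v, int dim, int x, int y, int z) {
    return v[x + y * dim + z * dim * dim] != 0;
}

// True if every voxel in the chunk at chunk coordinates (cx, cy, cz) has the same
// value. Cells on the chunk border still need their neighbours checked.
bool chunkIsUniform(const std::vector<uint8_t>& v, int dim, int cx, int cy, int cz) {
    const bool first = voxelAt(v, dim, cx * CHUNK, cy * CHUNK, cz * CHUNK);
    for (int z = cz * CHUNK; z < (cz + 1) * CHUNK; ++z)
        for (int y = cy * CHUNK; y < (cy + 1) * CHUNK; ++y)
            for (int x = cx * CHUNK; x < (cx + 1) * CHUNK; ++x)
                if (voxelAt(v, dim, x, y, z) != first) return false;
    return true;
}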

However, I do work in medical visualization and in general would suggest you use raycasting or proxy geometry for the rendering of this kind of data set. These will probably be easier to implement, and you will also find it much easier to trade quality for performance if you need to.

Thanks for the responses.

I misspoke, btw, about it being one solid figure -- it could be several.

About raycasting or proxy geometry: I just don't know very much about this field. Are those software or hardware solutions? When googling this problem I found out about the VolumePro, but it seems expensive and hard to get. I did find one on eBay for cheap, though.

I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).

I really want at least 5 FPS -- ideally 30 -- on an average PC, but maybe that's unrealistic.

I can't really put any expensive hardware into it -- this isn't for professional purposes, just a little project. I could settle for a really slow framerate, though.

Thanks.

Quote:
Original post by inhahe
About raycasting or proxy geometry: I just don't know very much about this field. Are those software or hardware solutions? When googling this problem I found out about the VolumePro, but it seems expensive and hard to get. I did find one on eBay for cheap, though.


I believe the VolumePro is pretty old technology - I don't think it will have anything to offer over a recent GPU.

Of the techniques I mentioned, proxy geometry will probably be the easiest approach to implement on a GPU, and raycasting will be easiest on a CPU. The problem you will encounter with a GPU implementation is the huge amount of data - it will be hard to upload it fast enough, and compression on the graphics card won't be as easy as on the CPU.
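
To give an idea of what I mean by proxy geometry: the basic trick is to draw a stack of blended, textured slices through a 3D texture holding the volume. A very rough immediate-mode sketch (in keeping with the era of this thread) -- the slices here are axis-aligned rather than properly view-aligned, the texture upload and camera setup are omitted, and the names are placeholders:

#include <GL/gl.h>   // GL_TEXTURE_3D needs OpenGL 1.2+ (glext.h on some platforms)

// Draw numSlices blended slices of the unit cube, each textured with the
// corresponding slice of the 3D volume texture. Assumes the viewer is on the
// +z side looking toward -z, so increasing t draws back-to-front, and that
// numSlices >= 2.
void drawSlices(GLuint volumeTex, int numSlices) {
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < numSlices; ++i) {
        float t = (float)i / (float)(numSlices - 1);   // slice position in [0, 1]
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, t); glVertex3f(0.0f, 0.0f, t);
        glTexCoord3f(1.0f, 0.0f, t); glVertex3f(1.0f, 0.0f, t);
        glTexCoord3f(1.0f, 1.0f, t); glVertex3f(1.0f, 1.0f, t);
        glTexCoord3f(0.0f, 1.0f, t); glVertex3f(0.0f, 1.0f, t);
        glEnd();
    }
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}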

Now that I know you don't actually have to generate a mesh, my gut instinct would be to go for a CPU raycasting solution.

Quote:
Original post by inhahe
I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).


Doesn't matter - you can simply use linear interpolation to get a value at any point in the volume. Search for 'central difference' for info on gradient computation.
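
As a rough sketch of what a central-difference gradient looks like on an on/off volume (placeholder names; for a surface normal pointing out of the solid you would negate the result, since the density increases going inward):

#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

inline float density(const std::vector<uint8_t>& v, int dim, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) return 0.0f;
    return v[x + y * dim + z * dim * dim] ? 1.0f : 0.0f;
}

// Central-difference gradient at integer voxel (x, y, z), normalized for shading.
Vec3 centralDifference(const std::vector<uint8_t>& v, int dim, int x, int y, int z) {
    Vec3 g = {
        density(v, dim, x + 1, y, z) - density(v, dim, x - 1, y, z),
        density(v, dim, x, y + 1, z) - density(v, dim, x, y - 1, z),
        density(v, dim, x, y, z + 1) - density(v, dim, x, y, z - 1)
    };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; }
    return g;   // negate for an outward-pointing surface normal
}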

Quote:
Original post by inhahe
I really want at least 5 FPS -- ideally 30 -- on an average PC, but maybe that's unrealistic.


Should be OK... I reckon you should get 10-15 FPS. Referring back to my project in the previous post: if you download it, it comes with a 256^3 volume, and one of the renderers is a software raycaster. You'll have to manage without the octree I used, but actually I don't think it helped that much on a small dataset.

Quote:
Original post by inhahe
I was thinking that raycasting wouldn't work because, since the voxels are just points, it wouldn't know the surface angle at any given point of the image, so it couldn't do shading properly (but again, I know nothing about this).


Voxels are not points, they are volumes! Voxel is short for Volume Pixel.

If you don't want your individual voxels to be rendered as cubes, then you will need to make some assumptions about the surface being approximated and then fake it, because your data simply doesn't cover broad surface properties. This is true regardless of your rendering method.

I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.

To PolyVox: I looked up central difference; it seems to be basically a derivative over data points -- makes sense. But which order of derivative is usually used with volume rendering?

thx

Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.

Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of an image, formed of small squares. It would be very unlikely that your object is indeed formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.

Quote:
Original post by swiftcoder
Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points. I think 'volumetric pixel' is somewhat of a misnomer for any application but a 3D monitor.

Voxels are an approximation of a real object, and that approximation is formed of small cubes - in the same way that pixels are an approximation of an image, formed of small squares. It would be very unlikely that your object is indeed formed of cubes (unless it is Lego), just as I am not actually formed of squares (as my digital portrait would suggest).

As for regarding your data as points or cubes, it only makes sense to regard it as points if your object is sparse (i.e. mostly empty space) - points are a lousy approximation of solid objects.


Well, I'm saying that I can't think of any reasonable way to interpret them as cubes. Given that the object isn't actually made of cubes (and hence isn't to be interpreted that way), regarding them as cubes is just extra complexity. The reason I say it's a misnomer for anything but a 3D monitor is that pixels actually *are* squares, because that's how they're displayed (unless they were displayed via, say, sublimation printing). In voxel space, on the other hand, you're not actually treating them as cubes in any way, shape or form.

I'm not sure what you meant about points being a lousy approximation of a shape, but another way of putting it is: say I have a smooth surface, and at regular intervals (voxel indices) I take samples. Since information about each sample's exact position has been rounded/floored to the nearest integer (its voxel index), the voxel might as well be considered a point, because the voxel's information, in itself, carries no specific shape information for any practical purpose or in relation to the original image (not counting its (int)'d voxel index). I suppose, though, that even though it's not quite a cube, it's not quite a point either, given that the actual position it represents is nebulous, while a point's is exact.

Anyway, I was looking at http://www.cse.ohio-state.edu/~hertleia/lab3.html, and it seems that central difference is the least accurate way to take the gradient, so I'm looking at Romberg's method now (http://www.google.com/url?sa=U&start=1&q=http://en.wikipedia.org/wiki/Romberg's_method) and trying to determine if it applies to data points that are not derived from an equation.

Quote:
Original post by inhahe
To PolyVox: I looked up central difference; it seems to be basically a derivative over data points -- makes sense. But which order of derivative is usually used with volume rendering?


You will probably find that central difference is fine (I guess it's first order?), but if you want better quality then look at the Sobel operator. In both cases these will compute the gradient at a given voxel - you will then want to interpolate that gradient between voxels to get the gradient at an arbitrary point.
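
The interpolation step looks roughly like this (a sketch with placeholder names; 'gradAt' stands for whatever per-voxel gradient you compute, central difference or Sobel):

#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

inline Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Trilinearly interpolate the gradients of the 8 voxels surrounding (px, py, pz).
Vec3 interpolatedGradient(const std::function<Vec3(int, int, int)>& gradAt,
                          float px, float py, float pz) {
    int x = (int)std::floor(px), y = (int)std::floor(py), z = (int)std::floor(pz);
    float fx = px - x, fy = py - y, fz = pz - z;     // fractional offsets in [0, 1)

    // Interpolate along x first, then y, then z.
    Vec3 c00 = lerp(gradAt(x, y,     z    ), gradAt(x + 1, y,     z    ), fx);
    Vec3 c10 = lerp(gradAt(x, y + 1, z    ), gradAt(x + 1, y + 1, z    ), fx);
    Vec3 c01 = lerp(gradAt(x, y,     z + 1), gradAt(x + 1, y,     z + 1), fx);
    Vec3 c11 = lerp(gradAt(x, y + 1, z + 1), gradAt(x + 1, y + 1, z + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}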

Quote:
Original post by inhahe
I'm not sure what you meant about points being a lousy approximation of a shape

Adjacent points have empty space between them; adjacent voxels do not. If I zoom in on a pixel image, should I expect to see empty space between the pixels? Nor should you expect empty space between voxels.

Quote:
Original post by inhahe
I called them points because I won't be rendering them as cubes anyway, given that my voxels are the same size as my pixels -- and because, since I know my data isn't actually about a structure made of cubes, it makes sense to treat them conceptually/mathematically as data points.


You are arguing that the point concept will lend itself better to rendering, but as someone somewhat experienced with voxel rendering (raycasting), I know that simply isn't true.

As another poster has already mentioned, the space between voxels is not considered empty, so the point abstraction only adds more complexity.

I guess what I am trying to say is that if your data set really is just discrete 3D samples, then you should begin there. Really.

If you do have more surface information than the discrete 3D sample paradigm provides (surface normals, non-discrete samples, and so forth), then you probably shouldn't even be using voxels. There are algorithms which convert point clouds (non-discrete 3D samples) into 3D models, for instance (you might use marching triangles for that).

If you cook up a simple raycaster that treats them as raw cubes, you may actually find that this is good enough for your purposes. You might be surprised at how much geometric information can be conveyed to the user with simple diffuse + specular shading (using a point light for variation in the specular), even with every surface axis-aligned. If that's not good enough, then you aren't far from smoothing techniques, and you will want that lighting model anyway (smoothing techniques basically just play with the surface normals, unless we are talking about marching cubes).
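
In case it helps, the shading I'm describing is just something like this Blinn-Phong-style sketch (the light setup, constants and names are all placeholders):

#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// 'normal' is one of the six axis-aligned face normals, e.g. {0, 1, 0}.
// Returns a brightness in [0, 1]; multiply by the voxel colour.
float shade(Vec3 hitPos, Vec3 normal, Vec3 lightPos, Vec3 eyePos) {
    Vec3 toLight = normalize({ lightPos.x - hitPos.x, lightPos.y - hitPos.y, lightPos.z - hitPos.z });
    Vec3 toEye   = normalize({ eyePos.x - hitPos.x, eyePos.y - hitPos.y, eyePos.z - hitPos.z });
    Vec3 halfVec = normalize({ toLight.x + toEye.x, toLight.y + toEye.y, toLight.z + toEye.z });

    float ambient  = 0.1f;
    float diffuse  = std::fmax(0.0f, dot(normal, toLight));
    float specular = std::pow(std::fmax(0.0f, dot(normal, halfVec)), 32.0f);
    float result   = ambient + 0.7f * diffuse + 0.2f * specular;
    return result > 1.0f ? 1.0f : result;
}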
