
Marching Cubes Terrain: Result

Hi,

Below is a screenshot of a 2048*2048 sized terrain generated out of a voxel representation created from a height map.

[screenshot]

It generated approximately 24,000,000 triangles including the neighbourhood structure. The scene took several minutes to be generated and smoothed. The app requires around 600 MB of memory, plus maybe 100-200 MB when generating per-vertex normals. My current implementation didn't care about reallocation very much; that's why it is utilizing 8 GB of virtual memory when in fact it is only using around 600 MB. So in order to improve the memory consumption I need to make sure it is working with constant memory, and I need to reserve large enough buffers to avoid reallocation.

Let's see how that works when I pass these triangles to a physics engine.

I guess I need a new computer, mine only has 1 GB of memory :)

The high memory consumption also originates from the hilly terrain; if it is flat you only need one sixth of the memory.

P.S.: The cracks visible in the screenshot are due to a bug in the smoothing code. I need to work on smoothing the edges of the chunks.

[Edited by - Basiror on August 6, 2008 5:56:08 AM]

Cool stuff.

Nitpick: it looks like you've got a bit of 'floating' geometry at the top, near the middle.

Quote:
Original post by Basiror
Below is a screenshot of a 1024*1024 sized terrain generated out of a voxel representation created from a height map. It generated approximately 24,000,000 triangles including the neighbourhood structure. So the app requires around 600 MB of memory, plus maybe 100-200 MB when generating per-vertex normals.
My only question here is "Why?". A 1024^2 heightmap is trivial to both tessellate and render, and can result in a far more optimised triangulation.

I assume you are doing this to obtain destructibility for the terrain, but any destruction is only going to increase triangulation further, and 24,000,000 triangles is already far beyond a reasonable terrain budget.

-Swift: Height fields are just boring; you can do so much more with a voxel representation (natural caves, tunnels, overhangs, real cliffs with walls that aren't stretched, and pretty much anything else you can create an algorithm for). His triangle count is indeed higher than the standard heightfield version, but a heightfield would stretch many of those triangles (when the terrain has large variations in height), whereas MC adds more triangles and avoids the stretching. For a perfectly flat piece of terrain they should both use the same number of verts, which is what matters anyway, and the triangle count should be comparable. In my MC implementation I'm getting pretty close to a 2:1 ratio of triangles to verts, which is pretty ideal as each vert is reused ~6 times.
Rendering MC-generated triangles might not be quite as efficient overall, since it is unlikely the vert/triangle ordering will be as optimal, but dealing with the increased triangle count isn't too hard: you can just pull the camera back some for the purpose of LOD determination, since a 1k*1k MC terrain looks more detailed than a 1k*1k height field anyway.

-Basiror: I'm working on something similar, but mine uses an octree system so distant chunks are much coarser. My system is entirely 3D though, so 1k*1k has no meaning, only 1k*1k*1k. I also have the small cracks between chunks that need fixing. I'm currently not storing the voxel grids after I use them to generate the verts, so my memory consumption is quite low.

One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.

It's great you are making progress with this, but like swiftcoder I have some concerns about how practical it is. I had assumed you wanted to use voxels to allow for a dynamic environment, but it seems that will not be possible if it takes several minutes to regenerate. Of course, any destruction will be localised so you won't have to regenerate the whole map. What kind of frame rate do you get once it is running?

Also, do you know where the bottleneck is in your mesh generation process? My system runs a bit faster, but I do not perform any mesh smoothing (which in turn means I don't have to build the mesh data structures - my MC implementation writes straight to vertex buffers). Of course, for terrain the smoothing could be important...

Quote:
Original post by AndyPandyV2
-Basiror: I'm working on something similar, but mine uses an octree system so distant chunks are much coarser. My system is entirely 3D though, so 1k*1k has no meaning, only 1k*1k*1k. I also have the small cracks between chunks that need fixing. I'm currently not storing the voxel grids after I use them to generate the verts, so my memory consumption is quite low.


Cool, I'm always interested in other voxel systems. Do you have any links? 1024^3 is also my target but I'm a little way off at the moment. Do you find the use of an octree slows down the MC algorithm (due to it being harder to access individual voxels) or speeds it up (due to discarding large portions of the voxels quickly)?

Quote:
Original post by AndyPandyV2
One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.


Can you elaborate on 'the fast noise from Infinity'? What is that? For computing normals from the volume you can use the Central-Difference method, or if you want it smoother you can try Sobel gradients. Or you can smooth the volume and then run the normal estimation schemes.

[Edit] - Ah, I see. Infinity is a game and fast noise is a library from it.

Quote:
Original post by AndyPandyV2
-Swift: Height fields are just boring, you can do so much more with a voxel representation(natural caves, tunnels, overhangs, real cliffs with cliff walls that aren't stretched, and pretty much anything else you can create an algorithm for)
Yeah, I know - I am working on voxel terrains as well. I would point out, though, that the terrain pictured is not using any voxel features, and it already requires a higher vertex count than the same terrain generated from a height field.
Quote:
Rendering MC-generated triangles might not be quite as efficient overall, since it is unlikely the vert/triangle ordering will be as optimal, but dealing with the increased triangle count isn't too hard: you can just pull the camera back some for the purpose of LOD determination, since a 1k*1k MC terrain looks more detailed than a 1k*1k height field anyway.
The real problem is LOD. I haven't found a good scheme to handle LOD for voxel terrain - naive approaches such as mipmapping the voxel field cause a lot of artefacts (disappearing tunnels, shifting walls, etc.). Height fields are trivial to perform LOD on, which is what I was primarily commenting on.
Quote:
One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.
That might be as much a problem with Ysaneya's fast noise as it is with the voxel representation - his noise isn't continuous in its derivatives.

The image just served for demonstration; it is actually just a raw heightmap generated from GIMP, smoothed with a 7x7 Gaussian kernel.


Now some details:
The grid is 2049*2049*256 voxels large.
I store the heightmap in run-length encoded voxels,
e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (0-127)
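As an illustration, a minimal sketch of such a column (the struct layout, the 2-byte run length, and all names are assumptions, not the poster's actual code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical RLE column: each run is a length plus a solid flag.
// A heightfield value of 127 in a 256-voxel-deep grid becomes two runs:
// 128 solid voxels followed by 128 non-solid ones.
struct Run { uint16_t length; bool solid; };
using RleColumn = std::vector<Run>;

RleColumn columnFromHeight(int height, int depth = 256) {
    // voxels 0..height are solid, the remainder is empty
    return { Run{ static_cast<uint16_t>(height + 1), true },
             Run{ static_cast<uint16_t>(depth - height - 1), false } };
}
```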

While generating the triangulation I iterated over the grid the brute-force way :), which is the reason it took so long.
As you can see, the terrain is split up into 128*128 chunks of size 16*16.


I can increase the speed by a factor of 6-9 (an estimate - I need to test it) with a tiny modification of the algorithm,

e.g.:
two neighbouring RLE voxels,
height field case (18 voxels in a row):
111111000000000000
111110000000000000

only the part
110
100
is of interest to us, so we are touching 2 quads (2D)/cubes (3D) instead of 17 - that's a speedup by a factor of about 9x.

looking at a cave scenario

111111000000111000
111110000011111100
there are three regions of interest now
110
100
and
0001
0111
and
100
110

we need to evaluate 2+3+2 quads instead of 17 - that's a speedup of factor 2.5x.

In general you can expect the heightfield to be pretty flat across large portions, so a speedup of factor 6-9 is more than likely.

Looking at interval ranges 0-255 and assuming around 4 cave layers, you approximately only need to touch a minimum of 4*8=32 up to maybe 60 voxels in a row - that's 4-8 times the speed. For flat areas you will get a speedup by a factor of 80.

Computing these intervals out of neighbouring run-length encoded voxels shouldn't be that hard, and it offers the nice opportunity to calculate the bounding boxes on the fly for free, instead of touching the vertices output by MC.
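A hedged sketch of that interval computation (the run-length layout and all names are assumptions): a marching-squares cell z spans voxels z and z+1, so it only needs visiting if one of the two columns has a run boundary at voxel z+1. Note this marks individual changing cells, so it counts slightly tighter than the padded 2+3+2 windows above.

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Collect the cells where either of two neighbouring RLE columns changes
// solidity; all other cells are homogeneous and can be skipped.
std::set<int> cellsOfInterest(const std::vector<int>& runsA,
                              const std::vector<int>& runsB) {
    std::set<int> cells;
    auto addBoundaries = [&cells](const std::vector<int>& runs) {
        int boundary = 0;
        for (std::size_t i = 0; i + 1 < runs.size(); ++i) {
            boundary += runs[i];        // first voxel index of run i+1
            cells.insert(boundary - 1); // the cell straddling the boundary
        }
    };
    addBoundaries(runsA);
    addBoundaries(runsB);
    return cells;
}
```

For the heightfield example (runs {6,12} and {5,13}) this visits 2 of the 17 cells; for the cave example ({6,6,3,3} vs {5,5,6,2}) it visits 6.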


Regarding the mesh smoothing artifacts, you need to
a) get a list of boundary vertices
b) associate them with the boundary edges
c) make a copy of the vertex data
d) smooth the edges in a deterministic way, so neighbouring chunks will match up

Since I am working with the lookup table approach PolyVox told me about in the other thread, I can simply extract the boundary vertices on the fly as well and place their indices into a map.
Same goes for the edges:
map<vertexindices>
vector<edges>

Now you only need to take care of the corners.
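A minimal sketch of step a) (all names are hypothetical; an epsilon comparison stands in for whatever vertex representation is actually used):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical sketch: record the indices of vertices lying on the
// chunk's xy border so the smoothing pass can treat them
// deterministically across neighbouring chunks.
std::vector<int> boundaryVertices(const std::vector<Vec3>& verts,
                                  float minX, float maxX,
                                  float minY, float maxY,
                                  float eps = 1e-4f) {
    std::vector<int> boundary;
    for (int i = 0; i < static_cast<int>(verts.size()); ++i) {
        const Vec3& v = verts[i];
        if (std::fabs(v.x - minX) < eps || std::fabs(v.x - maxX) < eps ||
            std::fabs(v.y - minY) < eps || std::fabs(v.y - maxY) < eps)
            boundary.push_back(i);
    }
    return boundary;
}
```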

I am now working on the interval searching algorithm, so stay tuned :)

P.S.: I used a heightmap 2049*2049; I fixed that in the first post.
P.P.S.: a flat terrain chunk requires only 17*17*2*2=1156 bytes;
that allows you to stream 26 terrain chunks/s with only 30 KB of bandwidth usage to the client, for terrain updates when logging in (e.g. when playing an MMO).

A slow DSL connection has around 45 KB/s downstream.

So terrain updates on the fly shouldn't be a problem. Just compare a timestamp and update accordingly.
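As a quick check of the arithmetic (the 2-bytes-per-RLE-voxel figure is taken from the post's 17*17*2*2 expression):

```cpp
// A flat 16x16 chunk has 17*17 corner columns; each column is a single
// RLE pair of (assumed) two 2-byte runs.
int chunkBytes = 17 * 17 * 2 * 2;              // 1156 bytes
int chunksPerSecond = 30 * 1024 / chunkBytes;  // with 30 KB/s to spare
```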

Quote:
Original post by Basiror
The grid is 2049*2049*256 voxels large
I store the heightmap in runlength encoded voxels
e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (0-127)
So you aren't storing distance at each voxel, just a single bit for empty/non-empty?
Quote:
P.S.: I used a heightmap 2049*2049, I fixed that in the first post
P.P.S.: a flat terrain chunks requires only 17*17*2*2=1156 bytes,
that allows you to stream 26 terrain-chunks/s with only 30 kb bandwidth usage to the client, for terrain updates when logging in (e.g.: when playing a MMO)
How does your RLE approach relate to Ben Houston's work? He seems to have hit a pretty optimal structure for reads and neighbour access, but I am not sure about updates. How do you handle updates in your RLE system? I guess for a 16^3 chunk it might be fast enough to just recompress the chunk, but this doesn't scale as well as it might.

Quote:
Original post by swiftcoder
How does your RLE approach relate to Ben Houston's work? He seems to have hit a pretty optimal structure for reads and neighbour access, but I am not sure about updates. How do you handle updates in your RLE system? I guess for a 16^3 chunk it might be fast enough to just recompress the chunk, but this doesn't scale as well as it might.

Interesting find! I just read the abstract and it seems he is using it for fluid simulation so I guess he must be able to update the structure pretty quickly. I'll have to find time to read the paper properly.

Quote:
Original post by swiftcoder
Quote:
Original post by Basiror
The grid is 2049*2049*256 voxels large
I store the heightmap in runlength encoded voxels
e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (0-127)
So you aren't storing distance at each voxel, just a single bit for empty/non-empty?



No, values (0-127) in a heightfield represent 128 voxels in one xy column, so the voxel stores a length of 128 plus a bit to mark it as solid/non-solid.

In order to update a column I just clip the new interval against the old one and copy the results to a new buffer.
In the future I plan to place all RLE voxels into one continuous chunk of memory to avoid frequent reallocation, but that's not a top priority now.

To find the intervals that modify the RLE voxel, you simply trace a line against the boolean volume (e.g. a sphere), get the two intersection points, and use their z values as interval borders.
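For a spherical carve, that line trace reduces to a closed form; a hedged sketch (names assumed, not from the post):

```cpp
#include <cmath>
#include <optional>
#include <utility>

// Hedged sketch (C++17): intersect the vertical line through column
// (x, y) with a carving sphere centred at (cx, cy, cz) with radius r.
// The two z hits bound the interval to clip out of the column's runs;
// an empty result means the column misses the sphere entirely.
std::optional<std::pair<float, float>>
sphereZInterval(float x, float y, float cx, float cy, float cz, float r) {
    float dx = x - cx, dy = y - cy;
    float d2 = r * r - dx * dx - dy * dy;
    if (d2 < 0.0f) return std::nullopt;  // column outside the sphere
    float h = std::sqrt(d2);
    return std::make_pair(cz - h, cz + h);
}
```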

I didn't read the whole article you posted, but from my understanding it is a hierarchical approach to my RLE voxels: I use RLE encoding in the Z dimension and he extends this to an arbitrary dimension. I don't know whether this really pays off or is worth the effort to implement.

If I represented my heightfield with his approach, I could find homogeneous cube-shaped blocks below and above the surfaces, but in each Z column I would have to clip the existing RLE voxels in two and represent the cube-shaped part with a hierarchical voxel.
That doesn't save a lot of bytes - why? See the following example.

e.g.:
1 = solid
0 = non solid
+ = solid cube
- = non solid cube
A typical heightfield scenario:
111111110000000000
111111000000000000
111111100000000000
111111100000000000
111111111000000000

5*2 RLE Voxels = 20 bytes

++++++1100--------
++++++0000--------
++++++1000--------
++++++1000--------
++++++1110--------

9 RLE Voxels = 18 bytes +2 cube RLE Voxels = 18+? bytes
Nothing saved here, and that's a simple example with lots of homogeneous space.

I would save some memory if the volume had completely homogeneous columns adjacent to each other like in the following example

000000000000000
000000000000000
111110000001111
111100000011111
000000000000000
000000000000000
10 RLE Voxels = 20 bytes
+++++++++++++++
+++++++++++++++
111110000001111
111100000011111
+++++++++++++++
+++++++++++++++
6 RLE Voxels = 12 bytes + 2 RLE Voxel blocks size ?

With heightfield data as input you almost never get completely adjacent homogeneous columns, so I see no advantage in it for rendering terrain. I guess updating this complex structure takes more time than copying a 20 KB chunk to a new buffer, which is the primary cost factor of my update function.

My updates work like this:

0 denotes an unchanged z column,
1 denotes a changed z column,
and all columns are stored in one continuous memory chunk:
000000000
000001111
000011111
000111111
000001111


for (y = 0; y < ny; ++y)
{
    for (x = 0; x < nx; ++x)
    {
        if (changed(x, y) == false)
        {
            // unchanged column: copy it verbatim into the new buffer
            move_column_to_new_buffer(columnptr(x, y), oldbuffer, length, newbuffer);
        }
        else
        {
            // changed column: re-clip its intervals, then append the result
            update_column_in_temporary_buffer(x, y);
            move_column_to_new_buffer(0, tempbuffer, length, newbuffer);
        }
    }
}



Pretty easy: no memory fragmentation is involved, and reallocation is not required at all - just swap between preallocated buffers, or use a block allocator that is executed maybe 1-2 times per update.
You can improve performance by combining several volume operators at once.

Quote:
Original post by swiftcoder
The real problem is LOD. I haven't found a good scheme to handle LOD for voxel terrain - naive approaches such as mipmapping the voxel field cause a lot of artefacts (disappearing tunnels, shifting walls, etc.). Height fields are trivial to perform LOD on, which is what I was primarily commenting on.


For my undergraduate project a few years ago I wrote a paper and accompanying demo showing one approach to a continuous LOD scheme for voxel-based terrain. It was reasonably fast and scalable, did not involve touching the vertex buffer, and worked on arbitrary topology. It was possible to define metrics in such a way as not to collapse bridges or fill in tunnels, and as a nice result it also allowed flat areas to be tessellated at a lower resolution, avoiding the usual problem of marching cubes generating too many triangles. If there's any interest I can post some more details; I'll just need a few days to dig it up.

The question is: what do you need LOD for?
For rendering?
I would evaluate other techniques like occlusion queries first.
I don't think that eliminating some triangles will pay off a lot, since vertex computations are not that expensive. The only thing that concerns me is the bandwidth consumption of large vertex buffers; this can be reduced with an aggressive culling algorithm.

Think about a typical terrain case: it's almost flat in the Z axis direction, so shrinking the bounding boxes of your terrain chunks in the Z direction should yield higher culling rates. Luckily you can already split your meshes using the MC algorithm.


Also consider the automatic generation of anti-portals to further clip the frustum-culled chunks.


As for physics, physics engines usually build a bounding volume hierarchy of the triangles you pass in, which has logarithmic complexity; even with 4000 triangles you only need about 3-4 iterations to reach a leaf with very few triangles.



Currently I am thinking about a more difficult problem: how do you assign material weights on a per-vertex basis?
e.g.:
You've got an overhang with an asphalt road below. You don't want the material of the road to change when some bullet explodes right beside the road and you have to retriangulate the terrain chunk - your material weights are gone.

Quote:
Original post by averisk
For my undergraduate project a few years ago I wrote a paper and accompanying demo showing one approach to a continuous LOD scheme for voxel-based terrain. It was reasonably fast and scalable, did not involve touching the vertex buffer, and worked on arbitrary topology. It was possible to define metrics in such a way as not to collapse bridges or fill in tunnels, and as a nice result it also allowed flat areas to be tessellated at a lower resolution, avoiding the usual problem of marching cubes generating too many triangles. If there's any interest I can post some more details; I'll just need a few days to dig it up.


Hi averisk, I would be very interested in this - could you post back when you have more details? I'm currently using the approach of LODing the volume and I agree it has drawbacks.

Quote:
Original post by Basiror
As for physics, physics engines usually build a bounding volume hierarchy of the triangles you pass in, which has logarithmic complexity; even with 4000 triangles you only need about 3-4 iterations to reach a leaf with very few triangles.


Right, but you have a lot more than 4000 triangles! But I guess you are suggesting that physics structures should only be built for nearby parts of the terrain? I agree this is possible, but what should happen for more distant parts?

Quote:
Original post by Basiror
Currently I am thinking about a more difficult problem: how do you assign material weights on a per-vertex basis?
e.g.:
You've got an overhang with an asphalt road below. You don't want the material of the road to change when some bullet explodes right beside the road and you have to retriangulate the terrain chunk - your material weights are gone.

I'm at work now and don't have much time to write, so I'll try to get back to you on this later.

Quote:
Original post by PolyVox
Quote:
Original post by Basiror
As for physics, physic engines usually build a bonuding volume hierarchy of the triangles you pass in, which has logarithmic complexity, even with 4000 triangles
you only need about 3-4 iterations to reach a leaf with very few triangles.


Right, but you have a lot more than 4000 triangles! But I guess you are suggesting that physics structures should only be built for nearby parts of the terrain? I agree this is possible, but what should happen for more distant parts?


Not necessarily - you could put the terrain chunks into a quadtree and only update the triangle meshes for the physics engine.

For ODE I probably know how to do this; as for Bullet, I have started a topic on their boards.

Quote:
Original post by Basiror
Currently I am thinking about a more difficult problem: how do you assign material weights on a per-vertex basis?
e.g.:
You've got an overhang with an asphalt road below. You don't want the material of the road to change when some bullet explodes right beside the road and you have to retriangulate the terrain chunk - your material weights are gone.

Ok, so first you have to remember that I am not working on a terrain engine, so my requirements might be different to yours. But basically I store a material ID for each voxel (0 represents empty space, other values represent materials). I copy these values into my vertices when I generate the mesh and then use them to choose the appropriate texture.

[Edit:] On more careful reading, I notice you specifically refer to material weights. I have no support for material weights in my project so can't help so much there...

Quote:
Original post by Basiror
Not necessarily - you could put the terrain chunks into a quadtree and only update the triangle meshes for the physics engine.

For ODE I probably know how to do this; as for Bullet, I have started a topic on their boards.

You may also want to check out my similar post here: http://www.bulletphysics.com/Bullet/phpBB3/viewtopic.php?f=9&t=2050. Can you point me towards yours? I would be interested in reading it...

Hi,
here is my thread on their forum:
Bullet Topic


As for material weights, I could apply them properly to the top patches using textures, but for overhangs I run into problems (e.g. roads below overhangs).

Some ideas

Compressed Row Storage

Since placing weights sparsely in space is like filling a sparse matrix, you could save plenty of memory with it, and traversing such a structure should be pretty cheap, I think. On the other hand, updating it requires you to change the column indices and the row pointers, updating nearly every element of these vectors.

But I think a hierarchical approach is better suited to this problem - maybe some kind of sphere tree, where the leaves hold the material weights for a region of space, and the region could be of arbitrary size.

Quote:
Original post by Basiror
The question is: what do you need LOD for?
For rendering?
I would evaluate other techniques like occlusion queries first.
I don't think that eliminating some triangles will pay off a lot, since vertex computations are not that expensive. The only thing that concerns me is the bandwidth consumption of large vertex buffers; this can be reduced with an aggressive culling algorithm.

Think about a typical terrain case: it's almost flat in the Z axis direction, so shrinking the bounding boxes of your terrain chunks in the Z direction should yield higher culling rates. Luckily you can already split your meshes using the MC algorithm.
Seems to me that your world is very small: I need 1 metre resolution, on a 10x10 kilometre world - that would be 50,000,000,000 voxels if fully realised.

Obviously, a great majority of that is open-air, or underground, so we are basically dealing with a 10 kilometre height map. That is 200 million triangles at full detail, so LOD is plainly needed. If we assume that most of the world has not been modified, it is a simple height map, and LOD is trivial, but unfortunately, the world is both persistent and destructible. The mipmapping approach will have to do for now, I think, but later I will need something a little more visually pleasing.
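The numbers above are self-consistent if one assumes roughly 500 voxels of vertical extent (a value inferred from the stated total, not given in the post):

```cpp
// Sanity-checking the figures quoted above. The 500-voxel height is an
// assumption inferred from the 50 billion total; the post doesn't state it.
long long columns = 10000LL * 10000;  // 1 m cells over a 10x10 km world
long long voxels  = columns * 500;    // fully realised voxel count
long long tris    = columns * 2;      // two triangles per heightmap cell
```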

I am using the Quake-style coordinate system, where a player is 72 units tall, a floor is usually 144 units high, and the frustum's far plane is at 4096-8192 units.

My terrain is infinite. I divide it into sectors of 16384*16384 that are streamed in as small chunks of 256*256 with a spacing of 16 units; a player is 32 units wide, so I need 2049*2049*9*nz voxels to represent my terrain.
nz=(256-4096)

So I am continuously triangulating small parts of 16*16*nz voxels when the player moves towards the border of the center sector.

For development I restrict myself to 2049*2049*nz voxels, since my current CPU is a single core that simply doesn't offer the power needed.

The player's sector (x) and its eight streamed neighbours:
000
0x0
000

I just finished my interval clipping algorithm to clip away homogeneous areas of the volume, which results in a huge speedup of factor 4.5-5.0.

Before (complete Z range):
avg: 12.8672 ms
After (clipping):
avg: 2.91016 ms

Usually it behaves like 4.5-5.0x with other processes running...

With mesh smoothing and normal generation I need around 3.5 ms for a 16*16*255 sized chunk of the volume.

If I had a computer with enough memory (more than 4 GB) I could triangulate a 2049*2049*255 volume within 57-70 seconds on a single core.

So that's proof enough that it can be done with today's hardware.
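As a cross-check, the estimate follows directly from the ~3.5 ms per chunk measured above (the 128*128 chunk count comes from the 16*16 chunk size described earlier in the thread):

```cpp
int chunks = (2048 / 16) * (2048 / 16);  // 128*128 = 16384 chunks
double seconds = chunks * 3.5e-3;        // ~3.5 ms per 16*16*255 chunk
// ≈ 57 s on one core, matching the lower end of the 57-70 s estimate
```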

I'm going to work on texturing and terrain manipulation now.


Quote:
Original post by PolyVox
Quote:
Original post by averisk
For my undergraduate project a few years ago I wrote a paper and accompanying demo showing one approach to a continuous LOD scheme for voxel-based terrain. It was reasonably fast and scalable, did not involve touching the vertex buffer, and worked on arbitrary topology. It was possible to define metrics in such a way as not to collapse bridges or fill in tunnels, and as a nice result it also allowed flat areas to be tessellated at a lower resolution, avoiding the usual problem of marching cubes generating too many triangles. If there's any interest I can post some more details; I'll just need a few days to dig it up.


Hi averisk, I would be very interested in this - could you post back when you have more details? I'm currently using the approach of LODing the volume and I agree it has drawbacks.


My approach is essentially mip-maps with the cracks between levels sewn together. This is done by adding a new vertex in the center of the triangle on the coarser side and fanning it out to include all the extra vertices on the finer side.

There are cases where this doesn't make sense, for example when the finer side has its center point 'inside' and all the corner points 'outside' - the coarser side would just be empty space. This is handled by changing the value for the center point to lie exactly on the surface, and other cases are handled similarly.

How do you join triangles of chunks with different resolution together?

Quote:
Original post by Basiror
My terrain is infinite. I divide it into sectors of 16384*16384 that are streamed in as small chunks of 256*256 with a spacing of 16 units; a player is 32 units wide, so I need 2049*2049*9*nz voxels to represent my terrain.
nz=(256-4096)
What happens at the horizon - does the terrain just stop, and do you use fog and/or a matching skybox to cover it?

Quote:
Original post by Basiror
How do you join triangles of chunks with different resolution together?


That's what I was explaining in my last post. Let's assume that the resolutions only change by one level across neighbouring nodes; then in most cases you can stitch the triangles of the two chunks together by adding more polygons on the coarser side. For every triangle on the coarser side that has at least one edge along a finer-level neighbour, we add a new vertex in the center of the triangle and 'fan out' to include all the extra vertices.
Some cases require special attention because it's not as simple as the above, but hopefully you get the general idea.

Quote:
Original post by swiftcoder
Quote:
Original post by Basiror
My terrain is infinite. I divide it into sectors of 16384*16384 that are streamed in as small chunks of 256*256 with a spacing of 16 units; a player is 32 units wide, so I need 2049*2049*9*nz voxels to represent my terrain.
nz=(256-4096)
What happens at the horizon - does the terrain just stop, and do you use fog and/or a matching skybox to cover it?


I use some kind of fog - actually some sort of atmospheric scattering; there is a paper from ATI about atmospheric scattering.

The skydome is just a clock, so nothing fancy - just don't let the player enter these areas :)

@averisk:
Shouldn't it be possible to modify the MC algorithm's lookup tables in a way that you don't have to search and split triangles - such that it works with multiple resolutions on border edges?

Sorry for the ASCII art, but this illustrates it for marching squares:

o---o
|\ |
| \ |
| \|
| o
| /|
| / |
|/ |
o---o


For the 3D case you would have to touch 9+4=13 voxels, and for the corner case 9+6+2=17 voxels.

This should pretty much solve all the stitching issues; all you need is someone to set up the lookup tables :)
(thanks to the author of the MC algorithm for being so kind)

It's certainly an interesting idea. You would have to be really clever about reducing redundancies in your tables to make it feasible though.

For the simplest case, where only one side has a finer resolution, you have 9+4=13 voxels like you said (2^13 possible configurations). Of course, you'd only want to store this once for the six possible rotations in which this could occur. I worked out the rest of the cases below; I might not have got all of them right:

one face: 2^13 configurations, 6 rotations
two adjacent faces: 2^17 configurations, 12 rotations
two opposite faces: 2^18 configurations, 3 rotations
three lateral adjacent faces: 2^21 configurations, 12 rotations
three non-lateral faces: 2^20 configurations, 8 rotations
four lateral faces: 2^24 configurations, 3 rotations
four non-lateral faces: 2^23 configurations, 12 rotations
five faces: 2^25 configurations, 6 rotations

In the worst case, where all but one side has a finer resolution, we need to define around 33 million polygonal configurations... Is this what you had in mind?
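Spelling out that worst case:

```cpp
// Five refined faces sample 8 cube corners + 5*(4 edge midpoints +
// 1 face center) - 8 shared edge midpoints = 25 values, per the table.
long long worstCase = 1LL << 25;  // 33,554,432 ≈ 33 million configurations
```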

There's probably ways to reduce this further but I'm drawing a blank right now.

I think a better way of achieving this would be to split nodes on a level boundary into six pyramids (based on the center point) and use "marching pyramids". You'd generate a lot of extra polys, but since most nodes won't be on a level boundary this might be OK, since you'd be using marching cubes most of the time.
