Basiror

Marching Cubes & Texture Coordinates


Hi, I am looking for an algorithm to compute texture coordinates for an iso-surface extracted with the marching cubes algorithm. The problem I am facing is that the MC algorithm produces vertical triangles, e.g. when you have a terrain with caves. The horizontal texture coordinates can be computed from the xy coordinates of each vertex, but that doesn't work for vertical triangles. Do you have any idea how to get a somewhat continuous coordinate set?

Some additional information:
+ the mesh will be smoothed before texture coordinate generation
+ vertex normals are available

Thanks in advance.

P.S.: I just did a quick-and-dirty implementation of aligning memory to the CPU's page size; it didn't pay back a single millisecond yet, but it should still be considered, to make the algorithm run with constant memory requirements and allow for long uptimes. On the other hand, when the mesh is loaded right at the beginning, the allocated memory won't be fragmented that much, I guess; comparing speeds after a couple of thousand modifications should show a difference. The next thing to do is the bounding box extraction from the RLE voxels to reduce the search space for the MC algorithm.

[Edited by - Basiror on July 31, 2008 3:45:25 PM]

In the Nvidia demo that uses volumes to generate 3D meshes, they use a method similar to the height-field method you mentioned, but instead use three textures, one along the xy plane, one along zy, and one along xz, which they blend depending on the normal of the surface.

I think I know how to do that.

Basically you find the dominant axis of the normal, project the geometry onto the plane spanned by the two remaining axes, and normalize the sum of the absolute values of the normal's components to 1.0 to get proper blend weights.

That's fairly simple and can be done in the vertex shader.
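Something along these lines (a simplified CPU-side sketch with made-up names; in practice the same maths would go straight into the shader):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute triplanar blend weights from a vertex normal: the absolute
// components are normalized so the three weights sum to 1.0.
Vec3 triplanarWeights(const Vec3& n)
{
    Vec3 w = { std::fabs(n.x), std::fabs(n.y), std::fabs(n.z) };
    float sum = w.x + w.y + w.z;
    if (sum > 0.0f) { w.x /= sum; w.y /= sum; w.z /= sum; }
    return w; // w.x blends the yz projection, w.y the xz projection, w.z the xy projection
}

// The three texture coordinate sets are just the vertex position projected
// onto the plane perpendicular to each axis, e.g. (p.y, p.z) for the x axis.
```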

Thanks for the hint.

P.S.: Which Nvidia demo is it? I can't seem to find it.

The problem is that I use the MC algorithm to triangulate my terrain in realtime; updating an atlas in realtime would certainly kill the performance.

I managed to generate 10,000 triangles in under 60 ms, which is damn fast on my three-year-old Winchester.

I'll post some screenshots sometime in the next three weeks; currently I am still fighting with the edge-collapse algorithm.

Quote:
Original post by swiftcoder
I would recommend splatting from an atlas directly in the pixel shader, and thus remove the need for continuous texture coordinates.

Can you elaborate on that approach? I'm working on a similar project to Basiror and have been using triplanar texturing so far, but I'm interested in new ideas.

Quote:
Original post by PolyVox
Quote:
Original post by swiftcoder
I would recommend splatting from an atlas directly in the pixel shader, and thus remove the need for continuous texture coordinates.

Can you elaborate on that approach? I'm working on a similar project to Basiror and have been using triplanar texturing so far, but I'm interested in new ideas.
Take this with a pinch of salt, because I only started on my volumetric terrains about a week ago. In short, I took Ysaneya's excellent texturing technique, and adapted it for volumes. You do still need texture coordinates, of course, but they only have to be locally continuous, rather than globally.

That's really a nice approach, but it has one drawback that makes it unusable for my engine: the texture assignment is fixed in the lookup table, unfortunately.

In my terrain engine I want to dynamically update blend weights,

e.g.: there is a path people walk along very often, so I increase the weight factor for mud, or there is a field with grass growing over time.

I really want to implement an interactive terrain with plants growing, people digging holes for trenches, or mining...

How do you come across such interesting journal entries?

Quote:
Original post by Basiror
That's really a nice approach, but it has one drawback that makes it unusable for my engine: the texture assignment is fixed in the lookup table, unfortunately.
You could generate the lookup texture on the fly, since it isn't very large.
Quote:
In my terrain engine I want to dynamically update blend weights. e.g.: there is a path people walk along very often, so I increase the weight factor for mud, or there is a field with grass growing over time.
Blending is the one real downfall of the system, in that it isn't supported at all. I work around this to some extent by blending the detail textures from the atlas with a low-res colour map, which gives a little blending to the hard edges.
Quote:
I really want to implement a interactive terrain with plants growing, people digging holes for trenches or mining...
So Dwarf Fortress rendered in 3D? Me too...
Quote:
How do you get to such interesting journal entries?
Ysaneya's journal has a permanent home in my RSS reader ;)

@PolyVox: I just realised that I am using pretty much the same method to generate texture coordinates that is used in your triplanar method - so the difference is that I am splatting from an atlas, rather than using 3 separate textures.

Quote:
Original post by swiftcoder
Quote:
Original post by Basiror
That's really a nice approach, but it has one drawback that makes it unusable for my engine: the texture assignment is fixed in the lookup table, unfortunately.
You could generate the lookup texture on the fly, since it isn't very large.


I need to evaluate this first. Currently I can create a 33*33*40 terrain on the fly; with a quad-core this could be improved.
I am worried about the speed, however, since I aim for 20 ms to generate one patch on the fly so I get a stable 50 fps.

I could shrink the patch sizes, but that is all on hold until the basics, deformation and so on, are working.

Creating the lookup texture on the fly for 4500 vertices takes maybe 1-2 ms, plus the transfer to the GPU, and it's not suited for blending yet.

I thought about a 3D texture to represent the material areas in the volume, so I can generate the blend weights in the shader.

This requires only a low-resolution 3D texture,

e.g.: for a 33*33*33 volume chunk you only need a 16*16*16 sized 3D texture.

But that's still theory.
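Roughly what I have in mind (just a sketch with made-up names, nothing of this is implemented yet): downsample the chunk's per-voxel material IDs into a small per-material weight volume that could then be uploaded as the low-resolution 3D texture.

```cpp
#include <array>
#include <cstdint>
#include <vector>

constexpr int CHUNK = 33;     // voxels per axis in a chunk (matching the example above)
constexpr int TEX = 16;       // resolution of the low-res 3D blend-weight texture
constexpr int MATERIALS = 4;  // assumed material count, e.g. rock, mud, grass, sand

// materialId holds CHUNK^3 material indices, laid out as x + y*CHUNK + z*CHUNK*CHUNK.
// Returns TEX^3 texels, each holding one weight per material, normalized to sum to 1.
std::vector<std::array<float, MATERIALS>>
buildBlendVolume(const std::vector<uint8_t>& materialId)
{
    std::vector<std::array<float, MATERIALS>> texels(TEX * TEX * TEX,
                                                     std::array<float, MATERIALS>{});
    for (int z = 0; z < CHUNK; ++z)
        for (int y = 0; y < CHUNK; ++y)
            for (int x = 0; x < CHUNK; ++x) {
                // map the voxel into the coarser texture grid
                int tx = x * TEX / CHUNK, ty = y * TEX / CHUNK, tz = z * TEX / CHUNK;
                uint8_t m = materialId[x + y * CHUNK + z * CHUNK * CHUNK];
                if (m < MATERIALS)
                    texels[tx + ty * TEX + tz * TEX * TEX][m] += 1.0f;
            }
    for (auto& t : texels) {  // normalize each texel so its weights sum to 1
        float sum = 0.0f;
        for (float w : t) sum += w;
        if (sum > 0.0f)
            for (float& w : t) w /= sum;
    }
    return texels;
}
```

The shader would then sample this texture with trilinear filtering to get smoothly varying blend weights per fragment.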

Quote:
Original post by swiftcoder
Take this with a pinch of salt, because I only started on my volumetric terrains about a week ago. In short, I took Ysaneya's excellent texturing technique, and adapted it for volumes. You do still need texture coordinates, of course, but they only have to be locally continuous, rather than globally.


Ah, thanks, that is an interesting read, but it's more concerned with deciding which texture should be used for each fragment based on its altitude/slope. I already know which texture to use, but I'm looking for interesting ways to generate texture coordinates.

So are both you guys working on volumetric terrains? As in you are actually storing the world in a volume? I would have thought that for large enough terrains the memory use would be high, even with compression. Depending on what you are trying to achieve you might be interested in a metaball approach like this: http://www.ogre3d.org/phpBB2/viewtopic.php?t=32486

Quote:
Original post by PolyVox
Ah, thanks, that is an interesting read but it's more concerned with deciding which texture should be used for each fragment based on it's altitude/slope. I already know which texture to use, but I'm looking for interesting ways to generate texture coordinates.
My point was that I don't generate texture coordinates for the whole terrain. I use the basic grid to correctly size the detail texture, and the normal to orient it on the correct plane.
Quote:
So are both you guys working on volumetric terrains? As in you are actually storing the world in a volume? I would have thought that for large enough terrains the memory use would be high, even with compression. Depending on what you are trying to achieve you might be interested in a metaball approach...
Mine started its life as a 3d visualiser for Dwarf Fortress maps, so yes, it stores the whole volume. DF maps aren't very big (256x256x32, or thereabouts), so memory hasn't been a problem as yet. Now that I am moving to the generation of large scale seamless terrains, it may become a problem, but possibly solvable using RLE compression.

My largest problem so far has been level-of-detail. I assumed at first that I could just mipmap the voxel-field, and run marching cubes again, but this leads to very bad results, because it completely ignores the surface features. A better approach is probably to run a ROAM-like algorithm on the final mesh, but this becomes expensive to compute on the fly. beaugard's meta-terrain approach might be a better pick, as one can use normal chunked-lod.

[Edited by - swiftcoder on July 28, 2008 5:51:27 PM]

Quote:
Original post by PolyVox
So are both you guys working on volumetric terrains? As in you are actually storing the world in a volume? I would have thought that for large enough terrains the memory use would be high, even with compression. Depending on what you are trying to achieve you might be interested in a metaball approach like this: http://www.ogre3d.org/phpBB2/viewtopic.php?t=32486
I may be a little unclear on beaugard's scheme here, but it seems as if he only gets gains for very localised changes. From his code, it seems that any terrain patch that has overhangs or other voxel effects must be represented by voxels as well as the heightmap. Where he gains on storage and performance is in the distance, because those tiles are mostly just represented by a heightmap.

I think that in my case, where the entire terrain is generated with caves and tunnels, I wouldn't save much from the hybrid approach. I could perhaps do the opposite, and compile a heightmap from the voxels for distant patches, since only the upper surface will generally be visible.

I also considered a triple-heightmap scheme, but discarded it as too complicated to edit - I may have to reconsider this to achieve sufficient performance.

I just did my own calculation of memory consumption.

To represent a volume of 2048*2048 heightmap values I need exactly
72 MB using run-length encoding.

So you will end up with around 80-88 MB when you add some overhangs or tunnels.

Now the key to increasing performance is to allocate the memory for the RLE voxels locally, meaning voxels close to each other should be in the same memory region; this will reduce the cache-miss rate a lot.

A page is usually 4 KB.

I will subdivide my terrain chunks into 16*16 sized patches,
so one page in memory will hold exactly one patch.
Currently I am just using "operator new", but the allocation scheme will be replaced soon enough.


There is a POSIX function to allocate aligned memory;
see "man posix_memalign".

So on modification of the volume you have to find the RLE voxels touched, maybe move them into a temporary contiguous buffer, perform the operations, and write them back to the aligned memory.

I will write some custom allocators for my containers as well to make sure all the data is page-aligned.

I will post the results once I have had time to implement it; unfortunately that's not before Saturday.


Quote:
Original post by swiftcoder
My largest problem so far has been level-of-detail. I assumed at first that I could just mipmap the voxel-field, and run marching cubes again, but this leads to very bad results, because it completely ignores the surface features. A better approach is probably to run a ROAM-like algorithm on the final mesh, but this becomes expensive to compute on the fly. beaugard's meta-terrain approach might be a better pick, as one can use normal chunked-lod.


At the moment I am mipmapping the volume as you mention, but it is difficult to get the meshes for the different mipmap levels to match up. I have some more work to do on this though, and am hopeful I can get something that works. Running a more general mesh decimation algorithm also has problems, in that a marching cubes surface usually has sharp angles, meaning it needs to be smoothed first. Even then, it can be hard to decimate along the edges of chunks while ensuring they still match up with adjacent chunks. But to be honest, I've yet to determine that the number of vertices is actually the bottleneck.

Quote:
Original post by swiftcoder
I may be a little unclear on beaugard's scheme here, but it seems as if he only gets gains for very localised changes. From his code, it seems that any terrain patch that has overhangs/other voxel effects, must be represented by voxels, as well as the heightmap. Where he gains on storage and performance is in the distance, because those tiles are mostly just represented by a heightmap.


As I (loosely) understand it, he doesn't store voxels at all. He stores a heightmap for the terrain and then uses metaballs for the overhangs, caves, etc. He then voxelises this for the purpose of running marching cubes and generating the mesh, but these voxels are not kept in memory. Hence the memory footprint is relatively low. The terrain is broken into chunks, so when a metaball moves only the nearby chunks need to have their meshes regenerated.

@Basiror - I will be interested to see how well marching cubes works on an RLE-compressed volume. Naturally it depends on fast access to neighbouring voxels, so it will be interesting to see whether an RLE volume provides this. I am planning a block-based scheme for storing my volumes, but I haven't really implemented it yet.

Well I am using RLE Voxels already.

My next aim is to pack chunks (32*32) of RLE voxels into 4 KB-aligned, 4 KB-sized memory blocks to reduce page thrashing in the cache. I am sure this pays back with around a 10-20% speedup.

I did some tests running my MC implementation with the normal new operator 20 times; at a certain point the speed simply doubles.
I guess this has to do with the memory management throwing away pages that are hardly used by the MC algorithm.

In other words, if I manage to put all the data of one chunk and its neighbours into 1-4 pages I will certainly get a speedup.
Since the pages are 4 KB in size, it doesn't matter whether the data is contiguously packed either.


In general I would try to avoid repeated dynamic allocation when possible.

On Friday I will have time to implement it, so expect the results sometime next week.

Quote:
Original post by PolyVox
But to be honest, I've yet to determine that the number of vertices is actually the bottleneck.
These DF maps become about 600,000 indexed tris, which brings my X1600 Mobility down to around 60 fps. Culling helps a lot, but once you pull the camera back to view the whole terrain, there isn't much you can do apart from LOD.

Quote:
@Basiror - I will be interested to see how well marching cubes works on an RLE-compressed volume. Naturally it depends on fast access to neighbouring voxels, so it will be interesting to see whether an RLE volume provides this.
As far as I can tell, RLE works fine for static data, but the overhead of editing the terrain then becomes pretty high, no?

Quote:
As I (loosely) understand it, he doesn't store voxels at all. He stores a heightmap for the terrain and then uses metaballs for the overhangs, caves, etc. He then voxelises this for the purpose of running marching cubes and generating the mesh, but these voxels are not kept in memory. Hence the memory footprint is relatively low.
Ok, that does seem to be what he is doing, but why use voxels at all? Both height maps and spheres are relatively simple implicit surfaces, and we could polygonise these directly (see Bloomenthal '94).
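Something along these lines (just a sketch of the idea, with hypothetical names): a heightmap and a set of metaballs can both be written as one scalar field, and the zero set of that field could be polygonised directly, without ever storing a voxel grid.

```cpp
#include <cmath>
#include <vector>

struct Ball { float cx, cy, cz, radius, strength; };

// Combined implicit field: f < 0 is solid, f > 0 is air, f == 0 is the surface.
// terrainHeight(x, y) is assumed to be the heightmap lookup; strength > 0 adds
// material (overhangs, hills), strength < 0 carves caves and tunnels.
float field(float x, float y, float z,
            float (*terrainHeight)(float, float),
            const std::vector<Ball>& balls)
{
    float f = z - terrainHeight(x, y);            // basic heightfield terrain
    for (const Ball& b : balls) {
        float dx = x - b.cx, dy = y - b.cy, dz = z - b.cz;
        float d2 = dx * dx + dy * dy + dz * dz;
        f -= b.strength * std::exp(-d2 / (b.radius * b.radius));
    }
    return f;
}
```

A Bloomenthal-style continuation polygoniser would then sample this function directly instead of a stored volume.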

I can insert new voxels right into an RLE voxel column; it's just an interval splitting/merging algorithm where you loop over the segments described by the RLE voxel and clip away parts of the new interval introduced by the boolean modifier.

My chunks are 32*32*4096 in size (nx, ny, nz).

1. I move the boolean modifier into the chunk's local coordinate system.
2. For each column (x,y pair) I intersect a ray (x,y,0) + lambda*(0,0,1) with the sphere.
   This gives me two intersection points if the ray starts outside, otherwise one.
   2.a) Two intersections span an interval (z_enter, z_leave).
   2.b) One intersection spans an interval (0, z_leave).
3. I simply insert these intervals into the RLE voxel with the interval splitting/merging algorithm.

The interval algorithm is already working; I use it in conjunction with the heightmap data.
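Roughly, the insertion looks like this (a simplified sketch with illustrative names, not my actual code): a column is kept as a sorted list of disjoint solid runs, and inserting a new solid run merges everything it overlaps.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Interval { uint16_t zEnter, zLeave; };  // one solid run along a column, zEnter <= zLeave

// Insert a solid interval into a sorted, non-overlapping list of solid runs,
// merging any runs it touches. This is the "add material" case of a boolean
// modifier; subtraction works the same way but clips the existing runs instead.
void insertSolid(std::vector<Interval>& column, Interval in)
{
    std::vector<Interval> merged;
    for (const Interval& seg : column) {
        if (seg.zLeave < in.zEnter || seg.zEnter > in.zLeave) {
            merged.push_back(seg);                       // no overlap: keep as is
        } else {                                          // overlap: grow the new interval
            in.zEnter = std::min(in.zEnter, seg.zEnter);
            in.zLeave = std::max(in.zLeave, seg.zLeave);
        }
    }
    merged.push_back(in);
    std::sort(merged.begin(), merged.end(),
              [](const Interval& a, const Interval& b) { return a.zEnter < b.zEnter; });
    column.swap(merged);
}
```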

With the constraint that the lowest RLE voxel of a column is always solid (you can't fall through your terrain), you can omit the voxel state during compression,
since this information is implicitly given by the sequence of alternating voxels.

Assuming you have 4 layers in your terrain, you need only sizeof(ushort)*8*32*32 bytes, which is 16 KB.
Now instead of storing pointers to the memory pages you use 32-bit integers (ushort might be too small, who knows?), which consume another 4 KB.
So in general you can represent a 32*32 chunk with only 20 KB of memory.

In my game engine this allows me to represent around one square kilometre with
80 MB of memory. Considering that 8 GB of memory will be pretty much standard in a year, you can represent 10 square kilometres with less than 1 GB of data.


Another huge advantage of chunk sizes of only 20 KB is that you can easily stream them to clients over low-bandwidth connections with delta compression, a must for client<-->server synchronization. Lots of things you need to keep in mind.



[Edited by - Basiror on July 31, 2008 3:53:12 PM]

Quote:
Original post by Basiror
My chunks are 32*32*4096 sized (nx,ny,nz)
Well, no wonder you need RLE - each chunk is twice the size of my entire terrain ;)

I will be interested to see how well your RLE structure works out; I don't recall anyone doing volume RLE on the fly before.

@PolyVox: I have been running some experiments, and I think I can take beaugard's approach and dump the voxel part of it entirely, just directly tessellating the heightmap and spheres, and adding other primitives such as lines as well. Even if it doesn't make for a good game, it will make for a fantastic editor.

Quote:
Original post by swiftcoder
Quote:
Original post by Basiror
My chunks are 32*32*4096 sized (nx,ny,nz)
Well, no wonder you need RLE - each chunk is twice the size of my entire terrain ;)

I will be interested to see how well your RLE structure works out, I don't recall anyone doing volume RLE on the fly before.

@PolyVox: I have been running some experiments, and I think I can take beaugard's approach and dump the voxel part of it entirely, just directly tessellating the heightmap and spheres, and adding other primitives such as lines as well. Even if it doesn't make for a good game, it will make for a fantastic editor.


I edited my post as you replied :)

Well, I want the player to be able to dig trenches to take cover, so the surface needs a high resolution. A 32*32*4096 chunk is tessellated and smoothed within 54 ms on my three-year-old Winchester; I can split this into four 16*16*4096 chunks and make use of multithreading, which would run in less than 17 ms (about 58 fps).

That's without the memory-mapping optimization.

The RLE voxels also allow you to extract parameter intervals for the marching cubes algorithm, so you don't need to iterate over the whole z range (0-4096) when in fact only 300-400 slices are of interest for the result.
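Something like this (again just a simplified sketch): scan the chunk's columns once and keep the min/max solid z, then only march over that slab.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct Interval { uint16_t zEnter, zLeave; };  // a solid run along one column

// Given all RLE columns of a chunk, return the [zMin, zMax] slab that actually
// contains solid voxels, so marching cubes can skip the rest of the 0-4096 range.
std::pair<uint16_t, uint16_t>
solidZRange(const std::vector<std::vector<Interval>>& columns)
{
    uint16_t zMin = 0xFFFF, zMax = 0;
    for (const auto& column : columns)
        for (const Interval& seg : column) {
            zMin = std::min(zMin, seg.zEnter);
            zMax = std::max(zMax, seg.zLeave);
        }
    return { zMin, zMax };  // if zMin > zMax the chunk is completely empty
}
```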

Quote:
Original post by swiftcoder
@PolyVox: I have been running some experiments, and I think I can take beaugard's approach and dump the voxel part of it entirely, just directly tessellating the heightmap and spheres, and adding other primitives such as lines as well. Even if it doesn't make for a good game, it will make for a fantastic editor.


I must say, I really like beaugard's approach. If I were doing destructible terrain (rather than arbitrary environments) I would give it some serious consideration. Actually I believe it is based on the way that terrain is created in Crysis (at design time, not run time).

Quote:
Original post by Basiror
Well, I want the player to be able to dig trenches to take cover, so the surface needs a high resolution. A 32*32*4096 chunk is tessellated and smoothed within 54 ms on my three-year-old Winchester; I can split this into four 16*16*4096 chunks and make use of multithreading, which would run in less than 17 ms (about 58 fps).


It's worth mentioning that, in my experience, the time to generate the marching cubes surface is not the bottleneck, nor is the time required to upload it to the graphics card. Actually I find the bottleneck to be loading the data into the physics engine (though I have yet to optimise this). Depending on what your plan is for physics/collision detection, you might want to consider this.

