Basiror

Marching Cubes Terrain: Result


Hi,

Below is a screenshot of a 2048*2048 sized terrain generated out of a voxel representation created from a height map.

[screenshot]

It generated approximately 24,000,000 triangles including the neighbourhood structure. The scene took several minutes to generate and smooth. The app requires around 600 MB of memory, plus maybe 100-200 MB when generating per-vertex normals.

My current implementation didn't care about reallocation very much, which is why it is utilizing 8 GB of virtual memory when in fact it is only using around 600 MB. So in order to improve the memory consumption I need to make sure it works with constant memory, and I need to reserve large enough buffers to avoid reallocation. Let's see how that works when I pass these triangles to a physics engine.

I guess I need a new computer; mine only has 1 GB of memory :)

The high memory consumption also comes from the terrain being hilly; if it is flat you only need one sixth of the memory.

P.S.: The cracks visible in the screenshot are due to a bug in the smoothing code. I need to work on smoothing the edges of the chunks.

[Edited by - Basiror on August 6, 2008 5:56:08 AM]
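A minimal sketch of the reserve-up-front idea, assuming std::vector storage; the names and the per-cell bounds are illustrative, not the actual code:

#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// Reserve worst-case capacity once, so filling the buffers never reallocates:
// marching cubes emits at most 5 triangles (15 indices) and touches at most
// 12 edge vertices per cell.
void reserveMeshBuffers(std::vector<Vertex>& vertices,
                        std::vector<unsigned>& indices,
                        std::size_t cellCount)
{
    vertices.reserve(cellCount * 12);
    indices.reserve(cellCount * 15);
}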

Quote:
Original post by Basiror
Below is a screenshot of a 1024*1024 sized terrain generated out of a voxel representation created from a height map. It generated approximately 24,000,000 triangles including the neighbourhood structure. The app requires around 600 MB of memory, plus maybe 100-200 MB when generating per-vertex normals.
My only question here is "Why?". A 1024^2 heightmap is trivial to both tessellate and render, and can result in a far more optimised triangulation.

I assume you are doing this to obtain destructibility for the terrain, but any destruction is only going to increase triangulation further, and 24,000,000 triangles is already far beyond a reasonable terrain budget.

-Swift: Height fields are just boring; you can do so much more with a voxel representation (natural caves, tunnels, overhangs, real cliffs with cliff walls that aren't stretched, and pretty much anything else you can create an algorithm for). And while his triangle count is indeed higher than the standard heightfield version, the heightfield would stretch many of those triangles (where the terrain has large variations in height), whereas MC would add more triangles and avoid the stretching. For a perfectly flat piece of terrain they should both use the same number of verts, which is what matters anyway, and the triangle count should be comparable. In my MC implementation I'm getting pretty close to a 2:1 ratio of triangles to verts, which is pretty ideal as each vert is reused ~6 times.
Rendering MC-generated triangles might not be quite as efficient overall, since it is unlikely the vert/triangle ordering will be as optimal, but dealing with the increased triangle count isn't too hard: you can just pull the camera back some for the purpose of LOD determination, since an MC terrain of 1k*1k looks more detailed than a heightfield 1k*1k anyway.

-Basiror: I'm working on something similar, but mine uses an octree system so distant chunks are much coarser. My system is entirely 3D, though, so 1k*1k has no meaning, only 1k*1k*1k. I also have the small cracks between chunks that need fixing. I'm currently not storing the voxel grids after I use them to generate the verts, so my memory consumption is quite low.

One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.
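A minimal sketch of the triangles-based approach, assuming an indexed triangle list (the names are illustrative): accumulate unnormalized face normals onto shared vertices, then normalize, which gives area-weighted smooth normals for free.

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

std::vector<Vec3> vertexNormalsFromTriangles(const std::vector<Vec3>& verts,
                                             const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(verts.size(), { 0, 0, 0 });
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = cross(verts[b] - verts[a], verts[c] - verts[a]); // face normal
        for (unsigned v : { a, b, c }) {
            normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}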

It's great you are making progress with this, but like swiftcoder I have some concerns about how practical it is. I had assumed you wanted to use voxels to allow for a dynamic environment, but it seems that will not be possible if it takes several minutes to regenerate. Of course, any destruction will be localised, so you won't have to regenerate the whole map. What kind of frame rate do you get once it is running?

Also, do you know where the bottleneck is in your mesh generation process? My system runs a bit faster, but I do not perform any mesh smoothing (which in turn means I don't have to build the mesh data structures - my MC implementation writes straight to vertex buffers). Of course, for terrain the smoothing could be important...

Quote:
Original post by AndyPandyV2
-Basiror: I'm working on something similar, but mine uses an octree system so distant chunks are much coarser. My system is entirely 3D, though, so 1k*1k has no meaning, only 1k*1k*1k. I also have the small cracks between chunks that need fixing. I'm currently not storing the voxel grids after I use them to generate the verts, so my memory consumption is quite low.


Cool, I'm always interested in other voxel systems. Do you have any links? 1024^3 is also my target but I'm a little way off at the moment. Do you find the use of an octree slows down the MC algorithm (due to it being harder to access individual voxels) or speeds it up (due to discarding large portions of the voxels quickly)?

Quote:
Original post by AndyPandyV2
One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.


Can you elaborate on 'the fast noise from Infinity'? What is that? For computing normals from the volume you can use the Central-Difference method, or if you want it smoother you can try Sobel gradients. Or you can smooth the volume and then run the normal estimation schemes.
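A minimal sketch of the central-difference scheme, assuming some density(x, y, z) accessor for the volume (hypothetical, since I don't know your data layout):

#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// Central-difference gradient of the scalar field; the normalized gradient
// serves as the surface normal (negate it if density increases into the solid).
Vec3 centralDifferenceNormal(const std::function<float(int, int, int)>& density,
                             int x, int y, int z)
{
    Vec3 g = { density(x + 1, y, z) - density(x - 1, y, z),
               density(x, y + 1, z) - density(x, y - 1, z),
               density(x, y, z + 1) - density(x, y, z - 1) };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; }
    return g;
}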

[Edit] - Ah, I see. Infinity is a game and fast noise is a library from it.

Quote:
Original post by AndyPandyV2
-Swift: Height fields are just boring; you can do so much more with a voxel representation (natural caves, tunnels, overhangs, real cliffs with cliff walls that aren't stretched, and pretty much anything else you can create an algorithm for)
Yeah, I know - I am working on voxel terrains as well. I would point out, though, that the terrain pictured is not using any voxel features, and it already requires a higher vertex count than the same terrain generated from a height field.
Quote:
Rendering MC-generated triangles might not be quite as efficient overall, since it is unlikely the vert/triangle ordering will be as optimal, but dealing with the increased triangle count isn't too hard: you can just pull the camera back some for the purpose of LOD determination, since an MC terrain of 1k*1k looks more detailed than a heightfield 1k*1k anyway.
The real problem is LOD. I haven't found a good scheme to handle LOD for voxel terrain - naive approaches such as mipmapping the voxel field cause a lot of artefacts (disappearing tunnels, shifting walls, etc.). Height fields are trivial to perform LOD on, which is what I was primarily commenting on.
Quote:
One problem I have encountered is that the normals look pretty crummy if you generate them directly from the density grid used by the voxels (assuming you're using the fast noise from Infinity, which I believe you are), so I have to generate the normals from the triangles instead, or use a more accurate gradient noise function.
That might be as much a problem with Ysaneya's fast noise as it is with the voxel representation - his noise isn't continuous in its derivatives.

The image just served for demonstration; it is actually just a raw heightmap generated in GIMP and smoothed with a 7x7 Gaussian kernel.


Now some details:
The grid is 2049*2049*256 voxels large.
I store the heightmap in run-length encoded voxels, e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (range 0-127)
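In code terms a run could look something like this - a sketch assuming 2-byte runs (which is what the 17*17*2*2 chunk size estimate further down implies); the exact bit packing is illustrative:

#include <cstdint>
#include <vector>

// 15 bits of run length along z plus one solid/non-solid bit.
struct Run {
    std::uint16_t bits;
    static Run make(unsigned length, bool solid) {
        return Run{ static_cast<std::uint16_t>((length & 0x7FFFu) | (solid ? 0x8000u : 0u)) };
    }
    unsigned length() const { return bits & 0x7FFFu; }
    bool solid() const { return (bits & 0x8000u) != 0; }
};

// The column for heightfield value h (0-127) in a 256-voxel-deep grid:
// h+1 solid voxels followed by 255-h non-solid ones.
std::vector<Run> columnForHeight(unsigned h) {
    return { Run::make(h + 1, true), Run::make(255u - h, false) };
}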

While generating the triangulation I iterated over the grid the brute-force way :) - that is the reason it took so long.
As you can see, the terrain is split up into 128*128 chunks of size 16*16.


I can increase the speed by a factor of 6-9 (an estimate, I still need to test it) with a tiny modification of the algorithm.

e.g. two neighbouring RLE voxel columns, heightfield case (18 voxels in a row):

111111000000000000
111110000000000000

Only the part

110
100

is of interest to us, so we touch 2 quads (2D) / cubes (3D) instead of 17 - that's a speedup by a factor of ~9x.

Looking at a cave scenario:

111111000000111000
111110000011111100

there are three regions of interest now:

110
100

and

0001
0111

and

100
110

We need to evaluate 2+3+2 quads instead of 17 - that's a speedup of factor ~2.5x.

In general you can expect the heightfield to be pretty flat across large portions, so a speedup of factor 6-9 is more than likely.

Looking at interval ranges of 0-255 and assuming around 4 cave layers, you only need to touch a minimum of 4*8=32 up to maybe 60 voxels in a row - that's 4-8 times the speed. For flat areas you get a speedup of factor ~80.

Computing these intervals out of neighbouring run-length encoded voxels shouldn't be that hard, and it offers the nice opportunity to calculate the bounding boxes on the fly for free, instead of touching the vertices output by MC. A sketch of the interval search is below.
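Here is a sketch of what I have in mind; the Run layout is simplified and the conservative window around each transition is illustrative:

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct Run { unsigned length; bool solid; };
using Interval = std::pair<int, int>; // inclusive [first, last] cell range in z

// z positions where a column flips between solid and non-solid.
static void transitions(const std::vector<Run>& column, std::vector<int>& out)
{
    int z = 0;
    for (std::size_t i = 0; i + 1 < column.size(); ++i) {
        z += static_cast<int>(column[i].length);
        out.push_back(z);
    }
}

// Only cells near a transition in either neighbouring column can produce
// triangles; everything between is homogeneous and is skipped wholesale.
std::vector<Interval> cellsOfInterest(const std::vector<Run>& a,
                                      const std::vector<Run>& b, int depth)
{
    std::vector<int> t;
    transitions(a, t);
    transitions(b, t);
    std::sort(t.begin(), t.end());

    std::vector<Interval> out;
    for (int z : t) {
        int lo = std::max(z - 1, 0);          // cell below the transition
        int hi = std::min(z, depth - 2);      // cell above it, clamped
        if (!out.empty() && lo <= out.back().second + 1)
            out.back().second = std::max(out.back().second, hi); // merge
        else
            out.emplace_back(lo, hi);
    }
    return out;
}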


Regarding the mesh smoothing artifacts, you need to
a) get a list of boundary vertices
b) associate them with the boundary edges
c) make a copy of the vertex data
d) smooth the edges in a deterministic way, so neighbouring chunks will match up

Since I am working with the lookup table approach PolyVox told me about in the other thread, I can simply extract the boundary vertices on the fly as well and place their indices into a map. The same goes for the edges:

map<vertexindex> (the boundary vertices)
vector<edge> (the boundary edges)

Now you only need to take care of the corners.
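As a sketch of that bookkeeping (the key/value choice is illustrative): the map deduplicates boundary vertices as MC emits them, and the vector records the boundary edges between them.

#include <map>
#include <vector>

struct Edge { unsigned a, b; }; // two slots in the boundary vertex set

struct ChunkBoundary {
    std::map<unsigned, unsigned> vertexSlots; // mesh vertex index -> slot
    std::vector<Edge> edges;
};

unsigned registerBoundaryVertex(ChunkBoundary& cb, unsigned meshIndex)
{
    auto it = cb.vertexSlots.find(meshIndex);
    if (it != cb.vertexSlots.end())
        return it->second; // already seen this vertex on the border
    unsigned slot = static_cast<unsigned>(cb.vertexSlots.size());
    cb.vertexSlots.emplace(meshIndex, slot);
    return slot;
}

void registerBoundaryEdge(ChunkBoundary& cb, unsigned meshA, unsigned meshB)
{
    cb.edges.push_back({ registerBoundaryVertex(cb, meshA),
                         registerBoundaryVertex(cb, meshB) });
}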

I am now working on the interval searching algorithm, so stay tuned :)

P.S.: I used a heightmap of 2049*2049; I fixed that in the first post.
P.P.S.: A flat terrain chunk requires only 17*17*2*2 = 1156 bytes. That allows you to stream 26 terrain chunks/s with only 30 kB/s of bandwidth to the client, for terrain updates when logging in (e.g. when playing an MMO).

A slow DSL connection has around 45 kB/s downstream.

So terrain updates on the fly shouldn't be a problem. Just compare a timestamp and update accordingly.

Quote:
Original post by Basiror
The grid is 2049*2049*256 voxels large.
I store the heightmap in run-length encoded voxels, e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (range 0-127)
So you aren't storing distance at each voxel, just a single bit for empty/non-empty?
Quote:
P.S.: I used a heightmap of 2049*2049; I fixed that in the first post.
P.P.S.: A flat terrain chunk requires only 17*17*2*2 = 1156 bytes. That allows you to stream 26 terrain chunks/s with only 30 kB/s of bandwidth to the client, for terrain updates when logging in (e.g. when playing an MMO).
How does your RLE approach relate to Ben Houston's work? He seems to have hit a pretty optimal structure for reads and neighbour access, but I am not sure about updates. How do you handle updates in your RLE system? I guess for a 16^3 chunk it might be fast enough to just recompress the chunk, but this doesn't scale as well as it might.

Quote:
Original post by swiftcoder
How does your RLE approach relate to Ben Houston's work? He seems to have hit a pretty optimal structure for reads and neighbour access, but I am not sure about updates. How do you handle updates in your RLE system? I guess for a 16^3 chunk it might be fast enough to just recompress the chunk, but this doesn't scale as well as it might.

Interesting find! I just read the abstract, and it seems he is using it for fluid simulation, so I guess he must be able to update the structure pretty quickly. I'll have to find time to read the paper properly.

Quote:
Original post by swiftcoder
Quote:
Original post by Basiror
The grid is 2049*2049*256 voxels large.
I store the heightmap in run-length encoded voxels, e.g.:
solid 128, non-solid 128 for a heightfield value of 127 (range 0-127)
So you aren't storing distance at each voxel, just a single bit for empty/non-empty?



No, a heightfield value (0-127) represents up to 128 solid voxels in one z column, so the run stores a length of up to 128 plus a bit to mark it as solid/non-solid.

In order to update a column I just clip the new interval against the old one and copy the results to a new buffer. In the future I plan to place all RLE voxels into one continuous chunk of memory to avoid frequent reallocation, but that's not a top priority right now.

To find the intervals that modify the RLE voxel column, you simply trace a line against the boolean volume (e.g. a sphere), get the two intersection points, and use their z values as the interval borders.
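As a sketch of the clipping semantics (decompress, overwrite, recompress - the real version would splice the runs in place instead of using a temporary):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Run { unsigned length; bool solid; };

// Set every voxel in [z0, z1) of a column to 'solid', then re-encode.
std::vector<Run> applyInterval(const std::vector<Run>& column,
                               int z0, int z1, bool solid, int depth)
{
    std::vector<char> flags(static_cast<std::size_t>(depth));
    int z = 0;
    for (const Run& r : column)
        for (unsigned i = 0; i < r.length && z < depth; ++i)
            flags[static_cast<std::size_t>(z++)] = r.solid ? 1 : 0;

    for (int i = std::max(z0, 0); i < std::min(z1, depth); ++i)
        flags[static_cast<std::size_t>(i)] = solid ? 1 : 0;

    std::vector<Run> out;
    for (int i = 0; i < depth; ++i) {
        bool s = flags[static_cast<std::size_t>(i)] != 0;
        if (!out.empty() && out.back().solid == s)
            ++out.back().length;
        else
            out.push_back({ 1u, s });
    }
    return out;
}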

I didn't read the whole article you posted, but from my understanding it is a hierarchical approach to my RLE voxels: I use RLE encoding in the z dimension and he extends this to an arbitrary dimension. I don't know whether this really pays off or is worth the effort to implement.

If I represented my heightfield with his approach, I could find homogeneous cube-shaped blocks below and above the surface, but in each z column I would have to clip the existing RLE voxels in two and represent the cube-shaped part with a hierarchical voxel. That doesn't save a lot of bytes. Why? See the following example:

e.g.:
1 = solid
0 = non-solid
+ = solid cube
- = non-solid cube
A typical heightfield scenario:
111111110000000000
111111000000000000
111111100000000000
111111100000000000
111111111000000000

5*2 RLE Voxels = 20 bytes

++++++1100--------
++++++0000--------
++++++1000--------
++++++1000--------
++++++1110--------

9 RLE voxels = 18 bytes + 2 cube RLE voxels = 18+? bytes
Nothing saved here, and that's a simple example with lots of homogeneous space.

I would save some memory if the volume had completely homogeneous columns adjacent to each other, as in the following example:

000000000000000
000000000000000
111110000001111
111100000011111
000000000000000
000000000000000
10 RLE Voxels = 20 bytes
+++++++++++++++
+++++++++++++++
111110000001111
111100000011111
+++++++++++++++
+++++++++++++++
6 RLE voxels = 12 bytes + 2 RLE voxel blocks of size ?

With heightfield data as input you almost never get completely homogeneous adjacent columns, so I see no advantage in it for rendering terrain. I guess updating this complex structure takes more time than copying a 20 kB chunk to a new buffer, which is the primary cost factor of my update function.

My updates work like this:

0 denotes an unchanged z column
1 denotes a changed z column
all columns are stored in one continuous memory chunk

000000000
000001111
000011111
000111111
000001111


// Walk all z columns: unchanged ones are copied straight into the new
// buffer, changed ones are rebuilt in a temporary buffer first.
for (y = 0; y < ny; ++y)
{
    for (x = 0; x < nx; ++x)
    {
        if (changed(x, y) == false)
        {
            // copy the untouched column as-is
            move_column_to_new_buffer(columnptr(x, y), oldbuffer, length, newbuffer);
        }
        else
        {
            // re-clip the column, then append the result
            update_column_in_temporary_buffer(x, y);
            move_column_to_new_buffer(0, tempbuffer, length, newbuffer);
        }
    }
}



Pretty easy - no memory fragmentation is involved and reallocation is not required at all; just swap between preallocated buffers, or use a block allocator that runs maybe 1-2 times per update. You can improve performance by combining several volume operators into one pass.
