Yet another voxel terrain system: presentation of my work and ideas

Started by Yno. 9 comments, last by Ben Bowen 11 years, 5 months ago
Hi,

I've come to show you what I'm doing, to get your opinions and feedback. Voxel terrains seem quite mainstream these days, so about 6 months ago I began creating my own, curious about what I could achieve. Like many people I was inspired by Minecraft, and thus set up some goals I wanted to achieve:

  • smooth surface (not huge pixel looking cubes);
  • dynamically editable (diggable);
  • "decent" view distance;
  • network streaming;

Almost needless to say, I went for marching cubes, implementing the GPU-based technique described in GPU Gems 3 (the screenshots looked good). I obviously made a LOD system. Since each level is quite big (128^3 voxels for now), I split levels into smaller regions to speed up generation. The terrain geometry is generated as we move around: each slice of regions in the moving direction is computed when needed. Here are some screenshots:

tb_sce_vterrain_shadows2.png tb_sce_vterrain_triplanar5.png

It's still work in progress but I'm quite happy with the results so far. No fancy rendering effects here.

I'm currently working on the networking part. The server stores the voxels in an octree whose nodes are chunks of about 32^3 voxels. Nodes are currently stored as-is on the hard drive; though I haven't run any serious tests yet, storage will most likely become an issue. Fortunately this kind of data seems to compress quite well (each voxel is one byte long... for now). However, I can't go around compressing/decompressing files on the fly every time they're needed, especially if several clients are modifying the terrain at the same time. That means setting up some sort of cache system, which brings synchronization issues. Since I'm working on this, I'm considering many solutions; ideas are of course most welcome.
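Since each voxel is one byte, neighboring voxels are often identical, which is exactly what run-length encoding exploits. Here's a toy sketch of why such chunks compress so well (a real server would more likely use zlib or LZ4, but the idea is the same):

```python
# Toy run-length encoding of a 32^3 chunk where each voxel is one byte.
# Homogeneous terrain chunks (mostly air or mostly rock) collapse to a
# handful of (count, value) pairs.

def rle_compress(voxels: bytes) -> bytes:
    """Encode as (count, value) pairs; count is capped at 255."""
    out = bytearray()
    i = 0
    while i < len(voxels):
        value = voxels[i]
        run = 1
        while i + run < len(voxels) and voxels[i + run] == value and run < 255:
            run += 1
        out += bytes((run, value))
        i += run
    return bytes(out)

def rle_decompress(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        count, value = data[i], data[i + 1]
        out += bytes([value]) * count
    return bytes(out)

# A chunk that is half air (0) and half rock (1):
chunk = bytes([0] * 16384 + [1] * 16384)   # 32^3 = 32768 voxels
packed = rle_compress(chunk)
assert rle_decompress(packed) == chunk
print(len(chunk), "->", len(packed))       # 32768 -> 260
```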

There are a couple of things I'd like to do/try in the future, if you have ever done such stuff I'd be glad to know :)

  • (very?) long-range ambient occlusion; by using each LOD, I believe giant caves could be nicely darkened;
  • conservative LOD: currently each level of detail erodes the terrain, and small parts of it disappear. I was thinking of applying some sort of "max" filter when computing the LODs, and then shrinking the surface along its normal to keep the terrain from getting fat. It would keep any piece of floating terrain, but fill holes very quickly;
  • I wonder what it'd be like to grow trees, making them match the surface of the terrain, which would add realism;
  • different materials (I've seen it being done in voxelform);
  • growing grass; I guess that would require quite a big amount of computing power, as it would need to keep in memory and regularly update areas where grass can grow;
  • CPU terrain geometry generation?
  • water... ?
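The "conservative LOD" idea in the list above can be sketched as a max filter over 2x2x2 blocks when halving the resolution. A minimal illustration (the dict-based grid is just for readability, not how an engine would store voxels):

```python
# Max-filter downsampling: when halving resolution, keep the *maximum*
# density of each 2x2x2 block so thin features survive instead of
# eroding away as they would under plain averaging.

def downsample_max(field, n):
    """field: dict mapping (x, y, z) -> density on an n^3 grid.
    Returns an (n//2)^3 grid where each cell is the max of its block."""
    half = n // 2
    out = {}
    for x in range(half):
        for y in range(half):
            for z in range(half):
                out[(x, y, z)] = max(
                    field[(2 * x + dx, 2 * y + dy, 2 * z + dz)]
                    for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
                )
    return out

# A lone solid voxel in an otherwise empty 4^3 grid still shows up in
# the 2^3 LOD, where averaging would have diluted it to 0.125.
field = {(x, y, z): 0.0 for x in range(4) for y in range(4) for z in range(4)}
field[(1, 1, 1)] = 1.0
lod = downsample_max(field, 4)
print(lod[(0, 0, 0)])  # 1.0
```

The "shrink along the normal" step would then pull the fattened surface back in after meshing.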

The project is open-source and part of my 3D engine, you can find more screenshots, some videos and of course the git repositories on my website: http://www.scengine.org/

You might however not be able to run the terrain demo since the example sources are not always up-to-date (especially when I'm working on them).

Thanks!
You should experiment with more sophisticated normal map generation (which should also improve the automatic tri-planar texture blending). I'm thinking of a "deeper sampling" into the voxel data to improve the distribution of things. Look at the bottom quarter of your image: see the unrealistic, voxel-regular distribution of blending between grass and cliff? Also, near the cliff slopes and thin spans of higher elevation, there shouldn't be any grass (either dirt or cliff). The diffuse lighting is distributed well enough (although it could be enhanced), but the texture distribution needs improvement more than anything else.

[quote]The terrain geometry is generated as we move around[/quote]

This will impose restrictions on the terrain generation. Any part of the terrain you generate cannot depend on neighbor chunks. That makes it difficult to have "classic" terrain generation, for example making rivers by following the terrain from high to low elevation. This may be a necessary design limitation.

[quote]Almost needless to say that I went for marching cubes[/quote]
Marching cubes has some limitations. When you have voxels of different types beside each other, it is tricky to determine what texture to use. And some voxels should be drawn as cubes (for example bricks).
[quote]However I can't go around compressing/decompressing files on the fly as needed[/quote]
Why not? Compression is good not only for disk space, but also for communication bandwidth. But you may want to consider a very quick algorithm. The server should be able to cache chunks at several stages: on file, in memory compressed, and in memory uncompressed. You then need an algorithm that moves data back and forth between these stages. Use a checksum for each chunk to make communication with the clients easier and faster. And then, the clients also need to cache chunks.
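The staged cache described here might look roughly like this; an assumed design for illustration, not code from Ephenation or SCEngine:

```python
# Sketch of a staged chunk cache: chunks move between "compressed in
# memory" (cold) and "uncompressed in memory" (hot, bounded), and each
# chunk carries a checksum so a client can skip re-downloading data it
# already has. A disk tier would sit below the compressed tier.

import hashlib
import zlib

class ChunkCache:
    def __init__(self, max_uncompressed=4):
        self.compressed = {}       # key -> zlib bytes (cold tier)
        self.uncompressed = {}     # key -> raw bytes (hot tier, bounded)
        self.max_uncompressed = max_uncompressed

    def store(self, key, voxels: bytes):
        self.uncompressed[key] = voxels
        self._evict()

    def load(self, key) -> bytes:
        if key not in self.uncompressed:            # promote cold -> hot
            self.uncompressed[key] = zlib.decompress(self.compressed.pop(key))
            self._evict()
        return self.uncompressed[key]

    def checksum(self, key) -> str:
        return hashlib.sha1(self.load(key)).hexdigest()

    def _evict(self):
        # Demote the oldest hot chunks to the compressed tier.
        while len(self.uncompressed) > self.max_uncompressed:
            key, voxels = next(iter(self.uncompressed.items()))
            del self.uncompressed[key]
            self.compressed[key] = zlib.compress(voxels)

cache = ChunkCache(max_uncompressed=2)
for i in range(4):
    cache.store(i, bytes([i]) * 32768)
assert cache.load(0) == bytes([0]) * 32768   # transparently decompressed
```

A production version would add the file tier, locking for concurrent edits, and dirty-flag write-back.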
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
Thank you guys for your help and suggestions.


[quote]You should experiment with more sophisticated normal map generation (which should also improve the automatic tri-planar texture blending). I'm thinking of using a "deeper sampling" into the voxel data to improve the distribution of things. Look at the bottom quarter of your image. See the unrealistic, voxel-regular distribution of blending between grass and cliff? Also, near the cliff-slopes and thin spans of higher elevation, there shouldn't be any grass (either dirt or cliff). The diffuse lighting is distributed well enough (although could be enhanced), but the texture distribution needs improvement more than anything else.[/quote]

Sorry for the late answer, I was busy working on something else. I remember trying to sample a larger area for normal generation at first, but the only difference I could notice was a slight performance drop in geometry generation. I figured sampling 6 values was quite enough, especially if I'm doing some normal mapping or grass rendering (which I plan to do... later) on top of it. I agree that the texture transition looks quite bad though; I haven't really worked on rendering yet. I quickly made a noise-based blending, and it looks a bit better, though it still lacks some tweaking:

tb_sce-vterrain-texnoise1.jpg tb_sce-vterrain-texnoise2.jpg


[quote name='Yno' timestamp='1344392980' post='4967218']
The terrain geometry is generated as we move around

This will impose restrictions on the terrain generation. Any part of the terrain you generate can not depend on neighbor chunks. That makes it difficult to have "classic" terrain generation, for example making rivers by following the terrain from high to low elevation. This may be a necessary design limitation.
[/quote]
I was actually talking about geometry generation (via marching cubes) for rendering. The voxel generation is currently hardcoded in my test sample; the entire world is generated (or loaded from the HDD) at the beginning. I don't know yet how I'll generate the world if the player moves too close to the border. I'm aware of this kind of problem though; Shamus Young made a pretty nice blog entry about it.


[quote]Marching cubes have some limitations. When you have voxels of different types beside each other, it is tricky to determine what texture to use. And then, some voxels should be drawn as cubes (for example bricks).[/quote]

With geometry shaders it seemed to be the easiest way to generate the isosurface (plus I had a nice article to steal from). I haven't tried using multiple materials yet, I guess you are right about texturing, but does it really depend on the isosurface generation algorithm? I don't plan to use the terrain to build any structure, so I should be OK with cubes.


[quote][quote]However I can't go around compressing/decompressing files on the fly as needed[/quote]
Why not? Compression is good not only for disk space, but also for communication bandwidth. But you may want to consider a very quick algorithm. The server should be able to cache chunks at several stages: on file, loaded to memory compressed, uncompressed in memory. You then need an algorithm that moves data back and forth between these stages. Use a checksum for each chunk, to make it easier and faster to communicate with the clients. And then, the clients also need to cache chunks.[/quote]
Hm, it may not appear so, but that is (except for the checksum) what I was trying to say; I probably didn't make myself clear. By "on the fly" I meant "from HDD to memory and back every time it's needed, without a cache", which is (I believe) too expensive. Anyway, the caching system is pretty much done and working now.
Hmm... wait, why do people rely on sampling so much? Be a little more procedural with the way you handle things.

[quote]Hmm... wait, why do people rely on sampling so much? Be a little more procedural with the way you handle things.[/quote]


anything even remotely sophisticated in procedural worlds is going to take 10 minutes per <insert what you consider a really small area here>
sure, you can hack together a nice world, but at least in my case, i want to have more cool stuff =)
erosion, rivers and lakes are a good example of nice things you can't have with purely procedural math, simply because you can really only generate them using an extreme amount of "octaves"
i expect there will always be purists who applaud people who do things in realtime.. i felt the same way when i was doing that
but eventually nice things must be had, and then you create a complex generator that compiles to memory and does what you want with a small section of the world
etc etc etc
all in all, from my standpoint, if it doesn't have rivers and lakes that make sense, i'm not doing it :)

warning: the above is a personal opinion
Grass: the smartest approach I've seen is a grass map, a greyscale map of grass density; you then just spawn grass based on the map. A single easily generatable texture shouldn't take much memory. Of course you're only getting a single height layer, so no grass in caves. But then why would you want that? I suppose if you did, you could always use a multi-channel texture to store it and then have up to X heights depending on the channels you have available.
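The grass-map idea could be sketched like this; the blade count and the per-cell hashing scheme are arbitrary example choices:

```python
# Spawn grass blades from a greyscale density map. Seeding a per-cell
# RNG keeps the placement deterministic, so the same blades reappear
# every time the cell is regenerated.

import random

def spawn_grass(density_map, blades_per_cell=4, seed=42):
    """density_map: 2D list of floats in [0, 1]. Returns (x, z) positions."""
    positions = []
    for z, row in enumerate(density_map):
        for x, density in enumerate(row):
            rng = random.Random(hash((seed, x, z)))   # stable per cell
            for _ in range(blades_per_cell):
                if rng.random() < density:
                    positions.append((x + rng.random(), z + rng.random()))
    return positions

dense = spawn_grass([[1.0]])
sparse = spawn_grass([[0.0]])
print(len(dense), len(sparse))   # 4 0
```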
In a way, sampling is used to interpret implicit information (with a lot of ambiguity to worry about) from an incomplete data set, i.e. spatial information but nothing else; hence the need for a just-in-time generative approach based on the spatial information. A procedural model can describe additional features by inserting their associated details on top of or between the initial data. So it's not even necessary, for that matter, to specify a material for each face of every voxel (which would be extremely memory-consuming). I believe there's a huge variety of techniques you could conceive to describe materials. To elaborate on this idea, look at this illustration and think about the way these cliffs work:

9d3b3a507b363c2fb3a89cb6bd2c2387.jpg

Although this works by the idea of isolines bridging between hierarchical planes of terrain, it's quite similar to Frenetic Pony's solution (a scalar map of grass density), which may be multi-channel to support multiple planes. I'm guessing that an efficient extension of this concept may require some form of spatial hashing, because I'm not really sure how you would integrate it with voxels.

As many of you may know, I hate voxels. Though it is in fact their very advantage, the structures' strict euclidean uniformity heavily sacrifices the data's sense of any spatial character foreign to euclidean regularity, and especially non-spatial properties. I believe it's very possible to accomplish the same features voxels are often used for with novel alternatives that perform at least as well or better. In other words, it's possible to imagine structures which remain as uniform, predictable and thereby as efficient mediums of spatial dynamics as voxels, without sacrificing the extensibility yielded by procedural definition. If you align procedural definition with procedural execution, then you have a model perfect for your purposes.

Think about the way 3D modeling programs represent triangle meshes. Not only are they capable of storing attributes per vertex, edge, face etc., but this structure is also optimal for manipulation. Now think about applying normal smoothing to a plain list of vertices. Here are the steps required:

1. For each vertex, find other vertices which have the exact same position.
2. Average their normals (sum the normals and then normalize).
3. Go back through all of these vertices and find where you need to apply this average.

... all without any stacks or intermediate storage for the mesh. Now that's just completely stupid... but it's an extreme example of the lacking approaches people often take.
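With a position-keyed index, the three steps above collapse into something trivial. A minimal sketch with toy data (plain tuples stand in for real vector types):

```python
# Normal smoothing over a flat vertex list, done the sane way: bucket
# vertices by position once, then average each bucket's normals.

from collections import defaultdict
import math

def smooth_normals(vertices, normals):
    """vertices: list of (x, y, z); normals: parallel list of (x, y, z).
    Vertices sharing a position get the normalized average normal."""
    groups = defaultdict(list)               # position -> vertex indices
    for i, pos in enumerate(vertices):
        groups[pos].append(i)
    out = list(normals)
    for indices in groups.values():
        sx = sum(normals[i][0] for i in indices)
        sy = sum(normals[i][1] for i in indices)
        sz = sum(normals[i][2] for i in indices)
        length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
        avg = (sx / length, sy / length, sz / length)
        for i in indices:
            out[i] = avg
    return out

# Two coincident vertices with perpendicular normals average to 45 degrees.
verts = [(0, 0, 0), (0, 0, 0)]
norms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(smooth_normals(verts, norms)[0])  # roughly (0.707, 0.707, 0.0)
```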

For your case, I recommend you just describe enough for the sampling to know what's appropriate, i.e. a single scalar code for each voxel which corresponds to a 6-sided set of materials. Examples: some material sets may consist entirely of the cliff texture, others of a mix of messy grass on top and cliff on the sides, dirt on top and red rock on the sides, or chalky dirt with weeds on top and cobble on the sides, etc.
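One possible reading of this suggestion, with purely illustrative material names: each voxel stores a single code indexing a shared table of top/bottom/side materials.

```python
# A voxel's one-byte code indexes a material set; the face being
# rendered picks the texture out of the set. Data is illustrative.

MATERIAL_SETS = {
    0: {"top": "cliff", "bottom": "cliff", "side": "cliff"},
    1: {"top": "grass_messy", "bottom": "dirt", "side": "cliff"},
    2: {"top": "dirt", "bottom": "red_rock", "side": "red_rock"},
    3: {"top": "weeds", "bottom": "chalky_dirt", "side": "cobble"},
}

def texture_for_face(voxel_code: int, face: str) -> str:
    faces = MATERIAL_SETS[voxel_code]
    return faces[face if face in ("top", "bottom") else "side"]

print(texture_for_face(1, "top"))    # grass_messy
print(texture_for_face(1, "north"))  # cliff
```

The table stays tiny no matter how many voxels exist, since voxels only carry the code.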

[quote]Hmm... wait, why do people rely on sampling so much? Be a little more procedural with the way you handle things.[/quote]

In my case the terrain can be modified, it's not a static, procedurally defined shape.


[quote]Grass: Smartest I've seen is a grass map, greyscale map of grass density and then just spawn based on the map. A single easily generatable texture shouldn't be much memory. Of course you're only getting a single height layer, no grass in caves. But then why would you want that? I suppose if you did, you could always use a multi-channel texture to store it and then have up to X heights depending on the channels you have available.[/quote]

I don't see why I couldn't have grass in caves; the greyscale map just has to be three-dimensional, as you said. But once again, since the terrain can be modified, I don't see why I'd want to store the grass density in any separate map; it should be part of the voxel data.

[quote]For your case, I recommend you just describe enough for the sampling to know what's appropriate, i.e. a single scalar code for each voxel which corresponds to a 6-sided set of materials. Examples: some material sets may entirely include the cliff texture, sometimes a mix of messy grass at the top and cliff on the sides, dirt on top and red rock on the sides, or chalky dirt with weeds on the top and cobble on the sides etc.[/quote]

I did in fact plan some sort of merging to describe voxels with grass on top and rock on the sides, but I don't think having materials that describe each side (or even two) is useful; if there is rock on the sides, there is rock on top and everywhere else too. I just thought of using a single bit to indicate whether there should be grass on top or not. For dirt on top and rock on the sides, just having a voxel layer of dirt on top of rock should be enough.
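Packing that grass bit into the existing one-byte voxel might look like this; the exact bit layout here is just an assumption for illustration:

```python
# Assumed layout: low 7 bits = material id, high bit = "grass on top".

GRASS_BIT = 0x80

def pack_voxel(material: int, grass_on_top: bool) -> int:
    assert 0 <= material < 128
    return material | (GRASS_BIT if grass_on_top else 0)

def material_of(voxel: int) -> int:
    return voxel & ~GRASS_BIT & 0xFF

def has_grass(voxel: int) -> bool:
    return bool(voxel & GRASS_BIT)

rock_with_grass = pack_voxel(3, True)
print(material_of(rock_with_grass), has_grass(rock_with_grass))  # 3 True
```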

[quote]I did plan in fact some sort of merging to describe voxels with grass on top and rock on the sides, but I don't think that having materials that describe each side (or even two) is useful; if there is rock on the sides, so there is on top and everywhere else. I just thought of using a single bit to know whether there should be grass on top or not. For dirt on top and rock on side, just have a voxel layer of dirt on top of rock should be enough.[/quote]

Sounds right to me. I use a single type for each block, then a filter that transforms the blocky view into a smooth one (it moves all coordinates by a small delta). As a last step, dirt block faces with a slope less than a certain limit are drawn using grass instead of dirt.
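That slope test can be sketched as follows; the 30-degree threshold is an arbitrary example value:

```python
# Pick grass or dirt from a face normal: a face close enough to
# horizontal (normal near straight up) gets grass, otherwise dirt.

import math

def texture_for_slope(normal, max_slope_degrees=30.0):
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    slope = math.degrees(math.acos(ny / length))  # angle from straight up
    return "grass" if slope <= max_slope_degrees else "dirt"

print(texture_for_slope((0.0, 1.0, 0.0)))  # grass (flat ground)
print(texture_for_slope((1.0, 1.0, 0.0)))  # dirt  (45-degree slope)
```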

View2_2012-09-30.jpeg
