Generating Volumetric Terrain


I've been trying to understand how to generate volumetric data in order to render voxel terrain.

I'm trying to separate the generation into parts and understand the various components. I'm making one assumption at the beginning, and if I'm wrong at the onset, please let me know.

I'm assuming that creating a voxel engine requires you to do a few steps:

1. Populate volumetric data by generating noise and filling the data structure with what will be rendered

2. The volumetric data is converted into polygons using an algorithm like marching cubes

3. The vertex buffers generated are drawn to the screen

I'm starting with #1 here, and I'm having trouble understanding how to use NOISE to generate 3D volumetric data that will look realistic. I'm also having trouble understanding how one generates an entire scene like this, assuming that the scene is separated into "CHUNKS" which are drawn. I'm not sure I understand how each chunk would know about the other chunks in order to make the transitions smooth as you generate more chunks to keep the scene going indefinitely. How does a single noise function tell us everything about the world? How do we know where the SKY is? How do we know where the DIRT is? Why is there a separation between the dirt and the ground? How is the inside of the ground "solid" while the sky is "empty"? I understand density when talking about height maps, but how is this same concept applied to the air, and to the dirt inside the ground?

Of course, when you think heightmap, it's all fairly simple to understand (although you still have the chunk problem), but it only defines the "surface" of the terrain. Since voxel terrain works differently, how do you then use noise to generate volumetric data that will look like real terrain, when it has to include everything that is INSIDE the chunk as well, not just where the "ground" is? If I'm thinking about it "minecraft" style, it's a little bit easier to understand because of the way the cubes would work. Noise could work quite well with that method, and I have generated it in the past, but smooth voxel terrain isn't quite the same to me when it comes to noise and how you would generate actual terrain.

I guess I'm missing a piece of the puzzle here that I'm sure would clear things up in my mind.

Having said all this: if I just go and read reference material, like how Perlin noise works, etc. (which I have), it doesn't explain any of the above questions. That's why I'm hoping that people will help me understand this concept specifically within the context of my question, not in general terms about "how Perlin noise works", if that makes sense.

Any help is appreciated. I'll probably bounce ideas back and forth if someone is willing to help me out!


Hi,

There are so many methods in this area of terrain creation.

I highly recommend using existing, no-cost algorithms or code for procedural voxel terrain; try to get a library with an editor and preset shaders. I hate to say this, but almost all of the beginner and intermediate voxel terrain that I have seen in completed games was very outdated when made from scratch. It seemed to be a dead end for the developer in those cases.

Look at other types of noise, some of which may be combined with randomization in implementation for nice results. You are going to need noise that produces a more random terrain appearance, of course, instead of a "carpet pattern" which looks unnatural.

Maybe the creator of this video would offer his noise for you. The results speak for themselves:

Article here has links to useful things related to what you are doing:

http://en.wikipedia.org/wiki/Terrain_generation

There are numerous threads and articles on procedural terrain generation right here at GameDev.net.


by Clinton, 3Ddreamer

3Ddreamer, I guess you highlighted the very problem in your answer. There are plenty of posts, but none actually explain. They just talk about it as if you already understand the subject matter. It makes it difficult to get a handle on it when you can't find any actual explanations that give you the answers you need.

Right now, I'm not interested in the problem you describe about games using this or that; that's not my concern. I'm just trying to learn and understand how to do this, not to produce a video game specifically. I'll use all my knowledge to do that in the end. I need answers to specifics right now.

You pointed out using Noise, but you didn't give me answers as to HOW. You are just pointing me to more questions. (although I appreciate you replying!)

I've read almost every thread on gamedev.net that has to do with voxel terrain and noise. All of them are discussions among people who already understand what I'm missing, so no one points it out, because everyone understands it. So I'm left behind, because I need specific answers.

At this point, I'm totally OK with having a boring terrain. I just want to generate actual terrain. I've seen some articles that talk about using noise functions TOGETHER to generate the terrain. The part they skip every time is how you actually put two noise functions together. ;P

There are several things I do not understand about what I've seen so far:

1. No mention of how you "merge" multiple noise function results together. People talk about it, about using different methods "together" to generate more natural looking terrain. But they don't go into HOW you do that. Are we saying that you multiply the values? divide? add?

2. No mention of how you use noise to generate volumetric data, even at its most basic level. Like when you talk about voxel engines and describe a chunk as a three-dimensional array of density values, how do the values at the top end up being sky, and how do the values at the bottom end up being ground?

When you are "filling" the volumetric data structure, you do:


// Fill the chunk's density volume with raw 3D simplex noise.
double div = 64.0; // frequency divisor: larger values give smoother noise

for (int x = 0; x <= size; x++)
{
    for (int y = 0; y <= size; y++)
    {
        for (int z = 0; z <= size; z++)
        {
            // SimplexNoise.noise returns a value in roughly [-1, 1]; scale by
            // 127 so the cast can't overflow the sbyte range of [-128, 127].
            double val = SimplexNoise.noise(x / div, y / div, z / div) * 127.0;

            VolumeData[x, y, z] = (sbyte)val;
        }
    }
}

But this doesn't generate terrain; it generates a sponge square. So I'm just confused at this point how one function call to SimplexNoise or PerlinNoise, etc. can generate volumetric data that will resemble "terrain", especially if you are not using a "minecraft" cube system but more of a marching-cubes type of smooth terrain. (Although should that matter? Isn't the volumetric data the same, just represented differently by the different algorithms that build the polygons?)


Part of it is that the terrain elevation has to be factored into the noise algorithms, of course.

Since you want to learn the boilerplate-and-rivets level construction of the coding, I feel you need a book on the subject, such as can be found on eBay, Amazon, and so forth.

You must have a strong grasp of abstract math before you do anything at all. Do you have that? Noise should have been covered in college math, if not high school.


by Clinton, 3Ddreamer

You might check out these journal posts by JTippetts:

http://www.gamedev.net/blog/33/entry-2227887-more-on-minecraft-type-world-gen/
http://www.gamedev.net/blog/33/entry-2249106-more-procedural-voxel-world-generation/

They talk about combining various fractal noise functions to create complex terrain. Essentially, he uses a linear gradient function and a step function to create a sort of membrane with solid below a certain value and open above, then uses the noise function to distort that membrane. The noise function itself is a composite of many simpler functions.

You can also check out libnoise's tutorials to see more of how noise functions can be hooked together.

So I'm just confused at this point how one function call to SimplexNoise or PerlinNoise, etc. can generate volumetric data that will resemble "terrain".

Simplex/Perlin noise alone isn't much more interesting than white or pink noise; it's just a pseudo-random function. The interesting part is how you combine many layers of noise.

My planet renderer uses a ridged multifractal generator (8 octaves of Simplex noise) added to a fractal brownian noise generator (12 octaves of Simplex noise), plus an additional two layers of Simplex noise for each of the generators to perturb their input parameters. That's 24 calls to a Simplex noise function for every pixel of the terrain. And I'm working in 2D - for good looking voxel terrain, I'd probably be using significantly more layers of noise.
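To make the "layers" idea concrete, here is a minimal fBm (fractional Brownian motion) sketch in the same C# style as the snippet earlier in the thread. It just sums several octaves of simplex noise, doubling the frequency and halving the amplitude each octave; the function name Fbm and the 2.0/0.5 constants are only illustrative, and it assumes the same SimplexNoise.noise(x, y, z) call (returning roughly [-1, 1]) as that snippet.

// Minimal fBm: sum several octaves of simplex noise, doubling the frequency
// and halving the amplitude each octave, then normalize back to roughly [-1, 1].
static double Fbm(double x, double y, double z, int octaves)
{
    double sum = 0.0;
    double amplitude = 1.0;
    double frequency = 1.0;
    double totalAmplitude = 0.0; // running total used to normalize the result

    for (int i = 0; i < octaves; i++)
    {
        sum += SimplexNoise.noise(x * frequency, y * frequency, z * frequency) * amplitude;
        totalAmplitude += amplitude;
        frequency *= 2.0;  // lacunarity: each octave is twice as fine
        amplitude *= 0.5;  // gain/persistence: each octave contributes half as much
    }

    return sum / totalAmplitude;
}

A ridged multifractal is the same idea, except each octave is shaped by something like 1 - abs(noise), which produces sharp ridges instead of smooth bumps.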

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

3Ddreamer, telling me to go back to school isn't very helpful... I'd say that's not what most Indie developers do to learn something new.

Anyway, thanks to everyone else who responded with really important details. I think at least now I know what I didn't know (or what was missing in my head). It's the fact that I'm getting basic noise because that's what I'm generating. :P I need to use noise properly in order to generate the volumetric data.

Once I have the volumetric data matrix for the region i want to draw, then I have to go to step 2, which is to generate polygons for the volumetric data that I generated in order to render it to the screen.

I did not realize that the noise used to generate basic minecraft-like volumetric data would be very similar to the noise used to generate more complex volumetric information. I suppose it's all in the resolution of the data. While minecraft data would be stored as on/off for whether a specific cube in a chunk exists or not, the more complex volumetric data would still contain these "regions" or "chunks", but instead they would describe a single cube or so of actual space. So now, instead of having, let's say:

World -> Region -> Cube (On/Off)

So let's say a region is 32x32x32 cubes and the world is 32x32x32 regions; then you get 1024x1024x1024 cubes.

While in the latter, you probably have something more like:

World -> Region -> Section (Matrix [x,y,z] of densities?)

So now, you could still process the world using an octree, but when you get down to the "cube" or "section" level, you don't just have on/off, you have another smaller matrix of volumetric data for that cube region?

Am I understanding the basic data structure? (If what I said even made sense to people; at least those are the words that come to my mind.)


My planet renderer uses a ridged multifractal generator (8 octaves of Simplex noise) added to a fractal brownian noise generator (12 octaves of Simplex noise), plus an additional two layers of Simplex noise for each of the generators to perturb their input parameters. That's 24 calls to a Simplex noise function for every pixel of the terrain. And I'm working in 2D - for good looking voxel terrain, I'd probably be using significantly more layers of noise.

Of course that's amazing! I have seen some amazing results. I'm sure that you progressed over time through various discoveries as you created this planet renderer (which sounds amazing, BTW).

At this point I have much much smaller goals. Even if I could generate something as simple as basic rolling hills with a sky vs. ground, that would make me happy and at the very least would allow me to move forward with additional concepts.

At least now I have the ability to focus my energies on generating 3D volumetric data with noise functions. Of course, now it's a matter of figuring out how to do more of the things you mentioned. I've seen SOME breakdowns in different places, but they usually focus their attention on other things and never really break down the basics. One of them did break down the basics, but it was written in Lua, which I have no idea about yet, so I wasn't able to get much detail from it.

Combining noise functions requires a bit of creative thinking to really grok what is going on, and it can be confusing at first. This is where having a well-rounded education in various mathematics, shader programming, etc... can really come in handy. In those other aspects of programming and mathematics you encounter functions that are necessary to fulfill certain tasks, and once you understand them you can apply them to other problems.

As an example, consider your question of differentiating ground from air to generate rolling hills with a sky. At heart, this is as "simple" as a function that returns either a 0 (open) or a 1 (ground). (Of course, it gets more complicated once you start dealing with different types of ground, but stick with me here.) In my journal post (the one with all the confusing Lua) I talk about this. In my usage, I divide ground from sky just as FLeBlanc described, by using a linear gradient and a step function (though my terminology comes more from libnoise, which describes it as a Select function). Essentially, the linear gradient assigns each point in 3D space a value along a gradient scale. The orientation of the gradient is determined by an input line segment, specified by two endpoints. Each point in 3D space is projected onto the line described by the segment, and assigned a value based on where upon the line it falls relative to the specified endpoints.

The output of this linear gradient is then passed to another function, Select, which will select from one input value or another based on the value of a third input function. In this case, the "low" selection is 0, the "high" selection is 1, and the control value is determined from the linear gradient. A threshold is specified for the ground level, say 0. This means that anywhere the control function is less than 0 will select the "low" value, while anywhere it is greater than or equal to 0 will select "high". If the threshold is at 0, and if the line segment of the linear gradient function is aligned with the vertical axis such that values grow smaller as Y increases, then the effect is that a ground "plane" is created at Y=0. Any value of Y less than 0 evaluates to solid (1), while any value of Y greater than or equal to 0 evaluates to open (0). And thus, a perfectly flat ground is formed with sky above.
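As a minimal sketch of just that gradient-plus-Select step, in the same C# style as the earlier snippet (the function name FlatGroundDensity and its parameters are only illustrative, not libnoise's API):

// Density for a perfectly flat world: a vertical gradient that decreases as y
// increases, pushed through a hard threshold (the Select/step) at ground level.
// x and z are unused here because the ground is perfectly flat.
static double FlatGroundDensity(double x, double y, double z, double groundHeight)
{
    double gradient = groundHeight - y;   // the linear gradient along the vertical axis
    return gradient >= 0.0 ? 1.0 : 0.0;   // 1 = solid below ground level, 0 = open above
}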

Where it gets interesting is by inserting additional chained functions in between the linear gradient and the Select function, such that by the time the control function is queried for the value to select between 0 and 1, the function has been made more interesting than just a perfectly even vertical gradient. This is accomplished by combining more noise functions and modifiers into the mix.

The keystone of this "make interesting" phase is the function that, in my library, is called TranslateDomain. This function will take the outputs of an arbitrary number of additional functions and use those outputs to translate the input coordinate passed to yet another noise function. This essentially boils down to this:


// TranslateDomain: use the outputs of three other functions to offset the
// input coordinate, then sample the main input function at the shifted point.
transx = xfunction_(x, y, z);
transy = yfunction_(x, y, z);
transz = zfunction_(x, y, z);
return inputfunction_(x + transx, y + transy, z + transz);

This simple behavior has astounding utility in this sort of application. If you set the linear gradient as the inputfunction_ and a Perlin noise fractal as the yfunction_, the result is that the nice, smooth, even linear gradient is "noisified", becoming continuously bumpy instead. When this new function is now used as the control for the Select function, instead of selecting a perfectly flat plane, the surface threshold at 0 is "perturbed", or bumpified, by the bumpy control function.
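As a rough sketch of that perturbation in the same C# style, reusing the illustrative Fbm function and the flat-ground idea from the sketches above (the amplitude and frequency numbers are made up for illustration):

// Same ground/sky split as before, but the y coordinate fed into the gradient
// is first translated by a fractal noise value (the TranslateDomain idea).
// The result is a ground surface that rolls up and down instead of staying flat.
static double RollingHillsDensity(double x, double y, double z)
{
    double frequency = 1.0 / 64.0; // how wide the hills are
    double amplitude = 16.0;       // how tall the hills are, in voxels

    double offset = Fbm(x * frequency, 0.0, z * frequency, 4) * amplitude;

    double gradient = -(y + offset);    // ground plane at y = 0, displaced by the noise
    return gradient >= 0.0 ? 1.0 : 0.0; // 1 = solid, 0 = open
}

(For smooth marching-cubes terrain you would typically keep the continuous gradient value, suitably clamped, rather than the hard 0/1 step, so the mesher has something to interpolate the surface through.)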

By altering the characteristics of the function used to perturb this ground function, you alter the characteristics of the terrain it generates. Perlin noise with a small amplitude and a high frequency, for example, would generate lots of small, tightly packed bumps, while a large amplitude and low frequency might generate large rolling hills.

This is where the creativity comes in. You have a large number of functions at your disposal. Some, such as Perlin noise (and all the associated variants of continuous noise) are generators, generating the signals upon which you build your functions. Others such as arithmetic modifiers can combine (add, multiply, average, min, max, domain transformations, etc...) the output of numerous functions together. Still others such as exp, cos, sin, abs, log, etc... can be used to alter the output of a single function in useful ways. By knowing how these various functions operate, and by experimenting to learn how they operate in conjunction with one another, you build for yourself a toolbox of interesting toys.

JTippetts, thanks for that answer. It goes a bit beyond what I'm trying to make sense of right now, but I appreciate the details.

I do not have a well-rounded mathematical education. This I cannot change. I have to work with what I have in order to achieve my goal; there's simply no other way. I cannot go back to university in order to do what I'm trying to do... I have to learn on my own with what I've got.

Having said that, I understand generally what you are saying, and it's making sense to me in a general sense. Once I get past the current problem I am having, I believe that playing with these functions in some type of pre-made terrain generator may assist me in figuring out what types of noise functions can produce various end results.

The part where you talk about the sky/ground, and how we are simply dealing with density values between -1 and 1, starting from the idea of a normal 2D heightmap, makes sense to me.

Although even that is tripping me up right now. I know that the volume data is a matrix of values [x,y,z]. The marching cubes algorithm takes this data and then uses it to generate the mesh.

If the volume data were all 1's or all -1's, should this produce a CUBE for that region of "space"? And would a volume data matrix of all zeros be blank space?

If you remove the noise entirely, shouldn't you be able to, let's say, fill the matrix with half 1's and half 0's and get half a cube?

I want to make sure I understand the volume data and what it's meant to represent, just to make sure I don't have a faulty premise about what the volume data is supposed to represent.

