Perlin Noise Question

Started by
20 comments, last by cr88192 11 years, 1 month ago

I am very new to noise and had a question about 3D noise. I have a 3D cubic volume that is 12 million meters on a side. I want to develop a noise function that is capable of producing detailed noise at any point in the volume. From the little I understand about noise, I'd have to use a lot of levels (octaves) to produce enough detail with the volume being that large. Is this correct? And if so, how would I calculate how complex a noise function I'd have to make so that it generates noise detailed enough at any point in the volume?

Basically I am trying to make a noise function to generate a planet in a large octree volume, and I want to have detailed noise on the surface of this sphere (which will be millions of meters into the volume). Thank you.


I think you should use Simplex noise instead of Perlin noise, as it is more efficient. 3D noise can be quite expensive, depending on real-time requirements. One way to improve on this is to not compute the noise value for every coordinate in the volume, but only for a subset, and then interpolate in between (that is what Minecraft does). For example, if you compute the value only at every second step in each of x, y, and z, then you need just 1/8 the number of calculations.
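The subsampling trick above can be sketched as follows. `noise3` here is just a hypothetical stand-in for whatever noise function is actually used (e.g. snoise3); only the grid-step logic matters:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for a real noise function such as snoise3().
double noise3(double x, double y, double z) {
    return std::sin(x * 0.7) * std::cos(y * 0.5) * std::sin(z * 0.3);
}

double lerp(double a, double b, double t) { return a + (b - a) * t; }

// Trilinear interpolation between the 8 corner samples of a grid cell.
double trilerp(const double c[2][2][2], double tx, double ty, double tz) {
    double x00 = lerp(c[0][0][0], c[1][0][0], tx);
    double x10 = lerp(c[0][1][0], c[1][1][0], tx);
    double x01 = lerp(c[0][0][1], c[1][0][1], tx);
    double x11 = lerp(c[0][1][1], c[1][1][1], tx);
    double y0  = lerp(x00, x10, ty);
    double y1  = lerp(x01, x11, ty);
    return lerp(y0, y1, tz);
}

// Evaluate real noise only on a coarse grid with the given step, then
// interpolate: only 1/(step^3) of the points need a noise evaluation.
double sampledNoise(double x, double y, double z, double step) {
    double gx = std::floor(x / step), gy = std::floor(y / step), gz = std::floor(z / step);
    double c[2][2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j][k] = noise3((gx + i) * step, (gy + j) * step, (gz + k) * step);
    return trilerp(c, x / step - gx, y / step - gy, z / step - gz);
}
```

With a step of 2 this evaluates real noise at 1/8 of the points, exactly as described; larger steps trade away detail for fewer evaluations.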

If you want a C implementation, you can find one in simplexnoise1234.h and simplexnoise1234.cpp.

If you use x, y, and z as arguments to snoise3(), you will get a certain distribution of the noise. But if you scale x, y, and z by a constant, e.g. 1/100, then you will get a lower-frequency distribution. So I would recommend that you simply do some tests using different scaling constants and see which one fits you best. You can scale x, y, and z by the same constant, but you can also use different constants. I would start with the same constant, get something that looks the way I want it, and then possibly consider using different constants per axis.
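A minimal sketch of the scaling idea, with a hypothetical `snoise3` stand-in (substitute the real simplex implementation):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for the real snoise3() from simplexnoise1234.
double snoise3(double x, double y, double z) {
    return std::sin(x) * std::cos(y) * std::sin(z + 1.0);
}

// Scaling the inputs by a constant lowers the noise frequency:
// with scale = 1/100, features become roughly 100x larger.
double scaledNoise(double x, double y, double z, double scale) {
    return snoise3(x * scale, y * scale, z * scale);
}
```

Using different scale constants per axis simply means passing three separate factors instead of one.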

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

I think you should use Simplex noise instead of Perlin noise, as it is more efficient.

Simplex noise is more efficient at higher dimensions - for 2D and 3D noise the difference is negligible.

Back to the question at hand: assuming that you are combining many octaves of Perlin/simplex noise using a fractal function, then for the sake of argument we can suppose that with each successive octave the amplitude halves and the frequency doubles.

That makes: log2(12 million) = ~24 octaves of noise needed to achieve a 1 metre resolution.
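The octave arithmetic can be sketched like this; `noise3` is a placeholder, and the halving-amplitude/doubling-frequency scheme follows the assumption above:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for a single-octave noise function.
double noise3(double x, double y, double z) {
    return std::sin(x * 0.7) * std::cos(y * 0.5) * std::sin(z * 0.3);
}

// Fractal (fBm) sum: each octave doubles frequency and halves amplitude.
double fbm(double x, double y, double z, int octaves) {
    double sum = 0.0, amplitude = 1.0, frequency = 1.0;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * noise3(x * frequency, y * frequency, z * frequency);
        frequency *= 2.0;
        amplitude *= 0.5;
    }
    return sum;
}

// Octaves needed so the finest octave resolves `resolution`-sized
// features in a `size`-metre volume: ceil(log2(size / resolution)).
int octavesForResolution(double size, double resolution) {
    return static_cast<int>(std::ceil(std::log2(size / resolution)));
}
```

For a 12-million-metre volume at 1 m resolution this gives the 24 octaves quoted above.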

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Thank you for the great responses! My question then would be this: would it be possible to add more octaves as the LOD increases, while saving the previous results of the noise function? I would assume 24 octaves would be a lot to run, and I wasn't sure if it was possible to cache the results of the lower octaves.

My second question would be on the topic of the seed. So basically, I make a hash value that is a combination of the x, y, z of my chunk (which is in my octree), and this makes it so that I can use a noise function without having to "move" it? (As is done in the libnoise tutorial http://libnoise.sourceforge.net/tutorials/tutorial3.html where, to keep the noise coherent, they have to change the position of the function.) Sorry, I am just trying to wrap my head around all of this and figure out the best way to have coherent 3D noise at multiple LODs down to a 1 m resolution. Thank you very much.

depending on the application, random (white) noise may also still be usable, and is cheaper than perlin noise.

for example, in my case, I use Perlin noise for some large-scale features of the terrain, but use random-numbers for some small-scale / high-frequency features.

BTW: if simulating a planet-sized area, assuming a ground-level perspective, more likely I would just fake it with a large flat plane, and probably make it wrap around at some point. even if much of the surface of the planet were explored, it is unlikely that players would go much into the core.

if something like unbounded sky/depth is desired, possibly chunks/regions can be stacked in a 3D grid, say:

chunks are 16x16x16 meters;

regions are 16x16x16 chunks;

regions may be organized in a 3D grid space.
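Assuming 1 m voxels, the layout above gives a 256 m region edge. A sketch of the per-axis index math (struct and function names are made up, and this version assumes non-negative coordinates):

```cpp
#include <cassert>

// Chunks are 16x16x16 voxels; regions are 16x16x16 chunks, so a region
// spans 16 * 16 = 256 voxels along each axis.
struct Address { int region, chunk, voxel; };

// Decompose one axis of a world voxel coordinate; apply per x, y, z.
Address decompose(int worldCoord) {
    Address a;
    a.region = worldCoord / 256;
    a.chunk  = (worldCoord / 16) % 16;
    a.voxel  = worldCoord % 16;
    return a;
}
```

Recomposing `region * 256 + chunk * 16 + voxel` recovers the original coordinate, which makes the scheme easy to sanity-check.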

my engine is sort of like this, except using 32x32x8 regions, and still having a "bottom layer" generated by the terrain generator, although it is possible to build underneath the terrain, as the engine will simply generate regions as-needed when voxels are placed there. related is that 128m is merely a default sky limit, as building above this point will simply result in region-stacking.

if it can be seen from space, it can instead be faked with a sphere and a texture (probably also generated using Perlin noise).

player probably won't notice...

maybe another trick:

for visited regions, build a sky-view texture;

these are then mipmapped;

from space, the single-pixel versions are then used for updating the world-texture (most of the rest is faked until visited, using large-scale features like biomes or similar to calculate pixel values).

this way, if the players go and build a massive platform or image on the ground, it can still be seen from space.

the polar regions would be a little fudged, since the actual surface would be a torus, which isn't exactly a sphere (one could have slight funkiness in that the ground-view and space-view don't really match up spatially, and that going "north" of the north pole will teleport the player to the south pole, ...). but, will anyone really care?...

cr88192:

This is kind of similar to what I am doing. The idea right now is to have an octree, where each node is 16^3 voxels. When a planet is viewed from space and the octree is split once, each block in the voxel volume will basically be 2 km across. As they get closer, the octree is split again, and each block will be 1 km across. At a tree depth of 18, I'd have a resolution of 1 m blocks, and could cull most of the data not being seen or used (or even generated yet, as I'd only generate data as the tree is split). This would allow me to have a mountain range 50 miles away at a low voxel resolution that the player could see and say "Hey, I am gonna go check that out!". I was not going to use textures either. I was going the Cube World route, where you just have colored cubes (as you can churn out a ton of textureless colored cubes, as Cube World has shown).

I am a little confused about the positions I'd input into my noise function though. If I have a 1-octave noise function, will I be able to get coherent noise from 0 to 25,000,000? All the libnoise examples only generated noise for a 256x256 region, so I am a little confused about the coordinates I'd use to do this. I am also confused as to what purpose the seed really plays. Thanks for all the help!

One important note: I want to generate the sphere as a sphere of blocks IN the octree volume. So I'd also need to make a noise function that is bounded within a radius and produces air outside of that radius. This way I don't need to mess with texturing on a sphere, etc.

swiftcoder

I forgot to ask, does that mean that I could sample that noise function over a large x,y,z range and get coherent noise at all intervals between say 0 and 20 million for x,y,z? Thank you.

Thank you for the great responses! My question then would be this: would it be possible to add more octaves as the LOD increases, while saving the previous results of the noise function? I would assume 24 octaves would be a lot to run, and I wasn't sure if it was possible to cache the results of the lower octaves.

My second question would be on the topic of the seed. So basically, I make a hash value that is a combination of the x, y, z of my chunk (which is in my octree), and this makes it so that I can use a noise function without having to "move" it? (As is done in the libnoise tutorial http://libnoise.sourceforge.net/tutorials/tutorial3.html where, to keep the noise coherent, they have to change the position of the function.) Sorry, I am just trying to wrap my head around all of this and figure out the best way to have coherent 3D noise at multiple LODs down to a 1 m resolution. Thank you very much.

not used libnoise...

well, as can be noted, seen from the small scale, most of the large-scale contribution collapses into a single constant bias (which can be interpolated if needed), while the large scale mostly ignores the small-scale features.

so, for example:

one set of perlin noise functions generates a constant "DC bias" for each region or similar;

small-scale functions simply generate local values, and add-in this DC bias (probably linearly interpolated between regions).

granted, yes, there is still the issue of repeating patterns in the low-level noise functions.

a trick here could be generating low-level noise functions per-region or similar (using a local seed), and then applying a "windowing function" to smooth out the values between adjacent regions (a region's local noise will dominate near the middle of the region, but near the edges it will be interpolated with that of the adjacent regions).
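A rough 1D sketch of the per-region seeding plus windowing idea; the hash and the local noise stand-in are both hypothetical, and a smoothstep weight plays the role of the windowing function:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hash region coordinates into a per-region seed (hypothetical FNV-style mixer).
uint32_t regionSeed(int rx, int ry, int rz) {
    uint32_t h = 2166136261u;
    h = (h ^ (uint32_t)rx) * 16777619u;
    h = (h ^ (uint32_t)ry) * 16777619u;
    h = (h ^ (uint32_t)rz) * 16777619u;
    return h;
}

// Stand-in for a seeded local noise function.
double localNoise(uint32_t seed, double x) {
    return std::sin(x * 0.37 + (double)(seed % 1000));
}

// A region's own noise dominates near its start and is blended toward
// the next region's noise near the edge, using a smoothstep window.
double blendedNoise(double x, double regionSize) {
    int r = (int)std::floor(x / regionSize);
    double t = x / regionSize - r;           // position within region [0,1)
    double a = localNoise(regionSeed(r, 0, 0), x);
    double b = localNoise(regionSeed(r + 1, 0, 0), x);
    double w = t * t * (3.0 - 2.0 * t);      // smoothstep windowing weight
    return a * (1.0 - w) + b * w;
}
```

Because the blend at a region's far edge targets the same seeded noise that the next region starts from, the combined value stays continuous across region boundaries.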

swiftcoder
I forgot to ask, does that mean that I could sample that noise function over a large x,y,z range and get coherent noise at all intervals between say 0 and 20 million for x,y,z? Thank you.

I usually scale my coordinates to the range [-1, 1] in each dimension, and sample the noise function using these scaled coordinates.

While you can generate noise on larger intervals:

  • Most noise implementations assume the [-1, 1] range.
  • Floating point accuracy is greatest in this range (and with an Earth-size planet, you will be really pushing floating point accuracy).
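The remapping itself is a one-liner, but worth writing out since the [-1, 1] convention is an easy thing to trip over; `worldSize` here would be the 12-million-metre cube edge (an assumption from the original question):

```cpp
#include <cassert>

// Map a world coordinate in [0, worldSize] into the [-1, 1] range that
// most noise implementations assume; use doubles throughout.
double toNoiseSpace(double worldCoord, double worldSize) {
    return (worldCoord / worldSize) * 2.0 - 1.0;
}
```

Each axis is remapped independently before being passed to the noise function.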

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

cr88192:

This is kind of similar to what I am doing. The idea right now is to have an octree, where each node is 16^3 voxels. When a planet is viewed from space and the octree is split once, each block in the voxel volume will basically be 2 km across. As they get closer, the octree is split again, and each block will be 1 km across. At a tree depth of 18, I'd have a resolution of 1 m blocks, and could cull most of the data not being seen or used (or even generated yet, as I'd only generate data as the tree is split). This would allow me to have a mountain range 50 miles away at a low voxel resolution that the player could see and say "Hey, I am gonna go check that out!". I was not going to use textures either. I was going the Cube World route, where you just have colored cubes (as you can churn out a ton of textureless colored cubes, as Cube World has shown).

I am a little confused about the positions I'd input into my noise function though. If I have a 1-octave noise function, will I be able to get coherent noise from 0 to 25,000,000? All the libnoise examples only generated noise for a 256x256 region, so I am a little confused about the coordinates I'd use to do this. I am also confused as to what purpose the seed really plays. Thanks for all the help!

typically, the noise function will wrap around when you hit the edges.

so, if a single noise function is used and extended out, past a certain distance you will start getting repeating patterns, and far enough out things go weird (due to floating-point issues or similar).

so, more likely, "local" noise functions will be needed on some level.

One important note: I want to generate the sphere as a sphere of blocks IN the octree volume. So I'd also need to make a noise function that is bounded within a radius and produces air outside of that radius. This way I don't need to mess with texturing on a sphere, etc.

the main issue with simulating something like this directly (a giant sphere of voxels) is that it is likely to get a lot more expensive from a storage-requirements and processing-power perspective.

doing a large flat-plane world and faking it is likely to be computationally a lot cheaper.

also, there are problems with floats...

IME floats only really have a "good" accuracy for a range of a few km or so (past this there starts being jitter and graphical artifacts), so to some degree I have ended up using a lot of region-local coordinates as well.

each region has its own local coordinate space, and other regions are translated into position relative to the camera's local coordinate space (partly because doubles are expensive and the GPU can't really use them anyway). so, when rendering, everything is translated relative both to the regions' coordinates and to the camera's base coordinate space and its local camera position (treated as separate from the origin of the camera's local coordinate space).

typically, within around 1km of the origin or so, the camera's coordinate space is the origin, but will jump around on a 1km grid depending on where the camera is currently located.

all this is mostly invisible in-game, apart from looking at the various sets of coordinates and noticing occasional coordinate jumps.
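A sketch of the camera-base/region-offset arithmetic described above (struct and function names are made up): absolute origins stay in doubles, and only small differences are narrowed to float for the GPU:

```cpp
#include <cassert>
#include <cmath>

struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Snap the camera's base coordinate space to a grid (e.g. 1 km),
// following the post; the base jumps as the camera moves between cells.
Vec3d snapToGrid(Vec3d p, double grid) {
    return { std::floor(p.x / grid) * grid,
             std::floor(p.y / grid) * grid,
             std::floor(p.z / grid) * grid };
}

// A region origin relative to the camera base: the difference is small
// enough to remain accurate in the 32-bit floats the GPU works with.
Vec3f relativeOrigin(Vec3d regionOrigin, Vec3d cameraBase) {
    return { (float)(regionOrigin.x - cameraBase.x),
             (float)(regionOrigin.y - cameraBase.y),
             (float)(regionOrigin.z - cameraBase.z) };
}
```

The key point is that the subtraction happens in double precision, so no accuracy is lost before narrowing to float.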

"While you can generate noise on larger intervals:"

So to avoid floating-point issues, can't I just expand the interval and use only part of the floating-point precision between each integer? So instead of forcing 25,000,000 into the space of [-1,1], I could make the range [-100000,100000] and only use 5 decimal places between each integer. Or am I missing something? Thank you all.

EDIT:

It seems it is a non-issue, as libnoise uses doubles, so it can easily take and return values in the coordinate range I am working in.
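One way to check this numerically is to compare the spacing between adjacent representable values (one ULP) at planet-scale coordinates for float versus double:

```cpp
#include <cassert>
#include <cmath>

// Spacing to the next representable value (one ULP) at a given magnitude,
// showing why doubles are comfortable at planet scale while floats are not.
float  floatUlp(float x)   { return std::nextafterf(x, 1e30f) - x; }
double doubleUlp(double x) { return std::nextafter(x, 1e300) - x; }
```

At a coordinate of 25,000,000 the float spacing is 2 m (useless for metre-level terrain), while the double spacing is on the order of nanometres, which is why libnoise's doubles make this a non-issue.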

This topic is closed to new replies.
