# Perlin Noise Question


21 replies to this topic

### #1 ShadowMan777 (Members)

Posted 20 February 2013 - 09:51 PM

I am very new to noise and had a question about 3D noise. I have a 3D cube that is 12 million meters on a side. I want to develop a noise function capable of producing detailed noise at any point in the volume. From the little I understand about noise, I'd have to use a lot of octaves to produce enough detail with the volume being that large. Is this correct? And if so, how would I calculate how many octaves the noise function needs so that it generates noise that is detailed enough at any point in the volume?

Basically I am trying to make a noise function to generate a planet in a large octree volume, and I want detailed noise on the surface of this sphere (which will be millions of meters into the volume). Thank you.

### #2 larspensjo (Members)

Posted 21 February 2013 - 07:58 AM

I think you should use Simplex noise instead of Perlin noise, as it is more efficient. 3D noise can be quite expensive, depending on real-time requirements. One way to improve on this is to not compute the noise value for every coordinate in the volume, but only for a subset, and then interpolate in between (that is what Minecraft does). For example, if you compute the value at every second step in each of x, y, and z, you only need 1/8 as many noise calls.
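As a rough illustration of this subsampling trick (not Minecraft's actual code), here is a sketch: evaluate noise only on a coarse grid, then fill the gaps by trilinear interpolation. `fake_noise` is a deterministic stand-in for a real noise call such as snoise3().

```python
import math

def fake_noise(x, y, z):
    # Stand-in for a real 3D noise call (e.g. snoise3); deterministic but arbitrary.
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 0.5

def sampled_volume(size, step=2):
    """Evaluate noise only every `step` cells, then fill the gaps by
    trilinear interpolation -- 1/8 as many noise calls for step=2."""
    # Coarse grid, with one extra sample per axis so interpolation has an upper corner.
    n = size // step + 1
    coarse = [[[fake_noise(i * step, j * step, k * step)
                for k in range(n)] for j in range(n)] for i in range(n)]
    out = [[[0.0] * size for _ in range(size)] for _ in range(size)]
    for x in range(size):
        for y in range(size):
            for z in range(size):
                i, j, k = x // step, y // step, z // step
                fx, fy, fz = (x % step) / step, (y % step) / step, (z % step) / step
                # Blend the 8 surrounding coarse samples by their trilinear weights.
                v = 0.0
                for di in (0, 1):
                    for dj in (0, 1):
                        for dk in (0, 1):
                            w = ((fx if di else 1 - fx) *
                                 (fy if dj else 1 - fy) *
                                 (fz if dk else 1 - fz))
                            v += w * coarse[i + di][j + dj][k + dk]
                out[x][y][z] = v
    return out
```

At coarse grid points the interpolated volume matches the noise exactly; in between, values are smoothly blended, which is usually good enough for terrain density fields.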

If you want a C implementation, you can find one in simplexnoise1234.h and simplexnoise1234.cpp.

If you use x, y, and z as arguments to snoise3(), you will get a certain distribution of the noise. But if you scale x, y, and z by a constant, e.g. 1/100, you will get a lower-frequency distribution. So I would recommend that you simply run some tests with different scaling constants and see which one fits you best. You can scale x, y, and z by the same constant, but you can also use different constants per axis. I would start with the same constant, get something that looks the way I want it, and then possibly consider using different ones.
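The scaling idea combines naturally with summing octaves (fractal/fBm noise). A hedged sketch, with `fake_noise3` standing in for snoise3() and `base_scale` playing the role of the 1/100 constant above:

```python
import math

def fake_noise3(x, y, z):
    # Stand-in for snoise3(); any smooth 3D noise function works here.
    return math.sin(x) * math.cos(y) * math.sin(z)

def fbm(x, y, z, octaves=4, base_scale=1 / 100.0):
    """Sum octaves of noise: each octave doubles the frequency and halves
    the amplitude. `base_scale` controls the overall feature size
    (smaller scale = smoother, larger features)."""
    total, amplitude, frequency = 0.0, 1.0, base_scale
    for _ in range(octaves):
        total += amplitude * fake_noise3(x * frequency, y * frequency, z * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total
```

Using different constants per axis, as suggested above, just means multiplying x, y, and z by separate factors before the noise call.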

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

### #3 swiftcoder (Senior Moderators)

Posted 21 February 2013 - 10:46 AM

> I think you should use Simplex noise instead of Perlin noise, as it is more efficient.

Simplex noise is more efficient at higher dimensions - for 2D and 3D noise the difference is negligible.

Back to the question at hand, assuming that you are combining many octaves of perlin/simplex noise using a fractal function, then for the sake of argument we can suppose that the amplitude and frequency of each layer will decrease by a factor of roughly 2.

That makes: log2(12 million) = ~24 octaves of noise needed to achieve a 1 metre resolution.
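This estimate can be computed directly; the octave count just inverts the doubling of frequency per octave:

```python
import math

# Octaves needed to reach `resolution` metres of detail across `extent`
# metres, when each octave roughly doubles the frequency of the last.
def octaves_needed(extent, resolution=1.0):
    return math.ceil(math.log2(extent / resolution))

print(octaves_needed(12_000_000))  # -> 24 (log2(12 million) ~= 23.5)
```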

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #4 ShadowMan777 (Members)

Posted 21 February 2013 - 01:19 PM

Thank you for the great responses! My question then would be this: would it be possible to add more octaves as the LOD increases, while saving the previous results of the noise function? I assume 24 octaves would be a lot to run, and I wasn't sure if it was possible to cache the results of the earlier octaves.

My second question is about the seed. Basically, I make a hash value that combines the x, y, z of my chunk (which is in my octree), and this lets me use a noise function without having to "move" it? (As is done in the libnoise tutorial http://libnoise.sourceforge.net/tutorials/tutorial3.html where, to keep the noise coherent, they have to change the position of the function.) Sorry, I am just trying to wrap my head around all of this and figure out the best way to have coherent 3D noise at multiple LODs, down to a 1 m resolution. Thank you very much.

### #5 BGB (Members)

Posted 21 February 2013 - 01:33 PM

Depending on the application, random (white) noise may also still be usable, and it is cheaper than Perlin noise.

For example, in my case I use Perlin noise for some large-scale features of the terrain, but random numbers for some small-scale / high-frequency features.

BTW: if simulating a planet-sized area from a ground-level perspective, I would more likely just fake it with a large flat plane, and probably make it wrap around at some point. Even if much of the planet's surface were explored, it is unlikely that players would go far toward the core.

If something like unbounded sky/depth is desired, chunks/regions can possibly be stacked in a 3D grid, say:

• chunks are 16x16x16 meters;

• regions are 16x16x16 chunks;

• regions may be organized in a 3D grid space.

My engine is sort of like this, except using 32x32x8 regions, and still having a "bottom layer" generated by the terrain generator, although it is possible to build underneath the terrain, as the engine will simply generate regions as needed when voxels are placed there. Relatedly, 128 m is merely a default sky limit, as building above this point will simply result in region-stacking.
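The index arithmetic for such a grid is straightforward. A sketch using the 16^3-voxel-chunk / 16^3-chunk-region numbers above (not the exact 32x32x8 layout of the engine described):

```python
CHUNK = 16    # chunk edge, in voxels (metres)
REGION = 16   # region edge, in chunks

def locate(x, y, z):
    """Split absolute voxel coordinates into (region, chunk, voxel) indices
    for a CHUNK^3-voxel-chunk / REGION^3-chunk-region layout."""
    def split(v):
        voxel = v % CHUNK
        chunk = (v // CHUNK) % REGION
        region = v // (CHUNK * REGION)
        return region, chunk, voxel
    rx, cx, vx = split(x)
    ry, cy, vy = split(y)
    rz, cz, vz = split(z)
    return (rx, ry, rz), (cx, cy, cz), (vx, vy, vz)
```

For example, voxel x = 257 lands in region 1 (each region spans 256 m on this axis), chunk 0 within that region, voxel 1 within that chunk.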

If it can be seen from space, it can instead be faked with a sphere and a texture (probably also generated using Perlin noise).

The player probably won't notice...

Maybe another trick:

• for visited regions, build a sky-view texture;

• these are then mipmapped;

• from space, the single-pixel versions are used to update the world texture (most of the rest is faked until visited, using large-scale features like biomes to calculate pixel values).

This way, if the players go and build a massive platform or image on the ground, it can still be seen from space.

The polar regions would be a little fudged, since the actual surface would be a torus, which isn't exactly a sphere (there could be slight funkiness where the ground view and space view don't quite match up spatially, and going "north" of the north pole would teleport you to the south pole, ...). But will anyone really care?

Edited by cr88192, 21 February 2013 - 01:36 PM.

### #6 ShadowMan777 (Members)

Posted 21 February 2013 - 02:02 PM

This is kind of similar to what I am doing. The idea right now is to have an octree where each node is 16^3 voxels. When a planet is viewed from space and the octree is split once, each block in the voxel volume will be roughly 2 km across. As the viewer gets closer, the octree is split again and each block will be 1 km across. At a tree depth of 18, I'd have a resolution of 1 m blocks, and could cull most of the data not being seen or used (or even generated yet, as I'd only generate data as the tree splits). This would allow me to have a mountain range 50 miles away at a low voxel resolution that the player could see and say "Hey, I am gonna go check that out!". I was not going to use textures either. I was going the Cube World route, where you just have colored cubes (as you can churn out a ton of textureless colored cubes, as Cube World has shown).

I am a little confused about the positions I'd input into my noise function, though. If I have a 1-octave noise function, will I be able to get coherent noise from 0 to 25,000,000? All the libnoise examples only generate noise for a 256x256 region, so I am a little confused about the coordinates I'd be using. I am also confused as to what purpose the seed really plays. Thanks for all the help!

One important note: I want to generate the sphere as a sphere of blocks IN the octree volume. So I'd also need a noise function that is bounded within a radius and produces air outside of that radius. This way I don't need to mess with texturing on a sphere, etc.

@swiftcoder: I forgot to ask, does that mean that I could sample that noise function over a large x, y, z range and get coherent noise at all intervals between, say, 0 and 20 million for x, y, z? Thank you.

Edited by ShadowMan777, 21 February 2013 - 02:04 PM.

### #7 BGB (Members)

Posted 21 February 2013 - 02:03 PM

> Thank you for the great responses! My question then would be this: would it be possible to add more octaves as the LOD increases, while saving the previous results of the noise function? I assume 24 octaves would be a lot to run, and I wasn't sure if it was possible to cache the results of the earlier octaves.
>
> My second question is about the seed. Basically, I make a hash value that combines the x, y, z of my chunk (which is in my octree), and this lets me use a noise function without having to "move" it? (As is done in the libnoise tutorial http://libnoise.sourceforge.net/tutorials/tutorial3.html where, to keep the noise coherent, they have to change the position of the function.) Sorry, I am just trying to wrap my head around all of this and figure out the best way to have coherent 3D noise at multiple LODs, down to a 1 m resolution. Thank you very much.

I haven't used libnoise...

Well, as can be noted, at the small scale most of the large-scale contribution collapses into a single constant bias (which could be interpolated if needed), while most small-scale features are ignored at the large scale.

So, for example:

• one set of Perlin noise functions generates a constant "DC bias" for each region or similar;

• small-scale functions simply generate local values and add in this DC bias (probably linearly interpolated between regions).

Granted, yes, there is still the issue of repeating patterns in the low-level noise functions.

A trick here could be generating low-level noise functions per-region or similar (using a local seed), and then applying a "windowing function" to smooth out the values between adjacent regions (a region's local noise dominates near the middle of the region, but near the edges it is interpolated with that of the adjacent regions).
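A minimal 1D sketch of the per-region-seed plus windowing idea (the hash combining region coordinates with a world seed is a hypothetical choice for illustration, not a specific engine's):

```python
import random

def region_seed(rx, ry, rz, world_seed=1337):
    # Hypothetical hash mixing region coordinates with a world seed.
    h = world_seed
    for v in (rx, ry, rz):
        h = (h * 73856093 + v * 19349663) & 0xFFFFFFFF
    return h

def region_value(rx, ry, rz):
    # Per-region value derived deterministically from its local seed.
    return random.Random(region_seed(rx, ry, rz)).uniform(-1.0, 1.0)

def blended_value(x, region_size=256):
    """1D windowing: a region's value dominates near its centre and is
    linearly blended with the neighbouring region's value near the edges,
    so there is no visible seam at region boundaries."""
    rx = x // region_size
    t = (x % region_size) / region_size  # 0..1 across the region
    a = region_value(rx, 0, 0)
    if t < 0.5:
        left = region_value(rx - 1, 0, 0)
        return left * (0.5 - t) + a * (0.5 + t)
    right = region_value(rx + 1, 0, 0)
    return a * (1.5 - t) + right * (t - 0.5)
```

In a real terrain generator the per-region values would be whole local noise functions rather than single numbers, but the blending weights work the same way.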

### #8 swiftcoder (Senior Moderators)

Posted 21 February 2013 - 02:10 PM

> @swiftcoder: I forgot to ask, does that mean that I could sample that noise function over a large x, y, z range and get coherent noise at all intervals between, say, 0 and 20 million for x, y, z? Thank you.

I usually scale my coordinates to the range [-1, 1] in each dimension, and sample the noise function using these scaled coordinates.

While you can generate noise on larger intervals:

• Most noise implementations assume the [-1, 1] range.
• Floating point accuracy is greatest in this range (and with an Earth-size planet, you will be really pushing floating point accuracy).
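The remapping is a one-liner per axis. A sketch, assuming the 12-million-metre world extent from the original question:

```python
def normalize(p, world_size=12_000_000):
    """Map absolute world coordinates in [0, world_size] to [-1, 1] per
    axis before feeding them to the noise function."""
    half = world_size / 2.0
    return tuple((c - half) / half for c in p)
```

Sampling noise at `normalize((x, y, z))` keeps every input inside the range where noise implementations and floating point both behave best.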


### #9 BGB (Members)

Posted 21 February 2013 - 02:26 PM

> This is kind of similar to what I am doing. The idea right now is to have an octree where each node is 16^3 voxels. When a planet is viewed from space and the octree is split once, each block in the voxel volume will be roughly 2 km across. As the viewer gets closer, the octree is split again and each block will be 1 km across. At a tree depth of 18, I'd have a resolution of 1 m blocks, and could cull most of the data not being seen or used (or even generated yet, as I'd only generate data as the tree splits). This would allow me to have a mountain range 50 miles away at a low voxel resolution that the player could see and say "Hey, I am gonna go check that out!". I was not going to use textures either. I was going the Cube World route, where you just have colored cubes (as you can churn out a ton of textureless colored cubes, as Cube World has shown).
>
> I am a little confused about the positions I'd input into my noise function, though. If I have a 1-octave noise function, will I be able to get coherent noise from 0 to 25,000,000? All the libnoise examples only generate noise for a 256x256 region, so I am a little confused about the coordinates I'd be using. I am also confused as to what purpose the seed really plays. Thanks for all the help!

Typically, the noise function will wrap around when you hit the edges.

So, if a single noise function is extended out, past a certain distance you will start getting repeating patterns, and far enough out things go weird (due to floating-point issues or similar).

So, more likely, "local" noise functions will be needed on some level.

> One important note: I want to generate the sphere as a sphere of blocks IN the octree volume. So I'd also need a noise function that is bounded within a radius and produces air outside of that radius. This way I don't need to mess with texturing on a sphere, etc.

The main issue with simulating something like this directly (a giant sphere of voxels) is that it is likely to get a lot more expensive in storage requirements and processing power.

Doing a large flat-plane world and faking it is likely to be a lot cheaper computationally.

Also, there are problems with floats...

In my experience floats only really have "good" accuracy over a range of a few km or so (past this there starts being jitter and graphical artifacts), so to some degree I have ended up using a lot of region-local coordinates as well.

Each region has its own local coordinate space, and other regions are translated into position relative to the camera's local coordinate space (partly because doubles are expensive and the GPU can't really use them anyway). So, when rendering, everything is translated relative to both the region's coordinates and the camera's base coordinate space, plus the local camera position (treated as separate from the origin of the camera's local coordinate space).

Typically, within around 1 km of the origin the camera's coordinate space is the origin itself, but it will jump around on a 1 km grid depending on where the camera is currently located.

All this is mostly invisible in-game, apart from looking at the various sets of coordinates and noticing occasional coordinate jumps.
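This "floating origin" scheme can be sketched in a few lines (a simplified illustration of the idea described above, not the engine's actual code):

```python
GRID = 1000.0  # the camera's base coordinate space snaps to a 1 km grid

def camera_base(cam_pos):
    # Snap the camera's coordinate-space origin to the nearest grid cell.
    return tuple(round(c / GRID) * GRID for c in cam_pos)

def to_render_space(world_pos, cam_pos):
    """Translate a world position into the camera's local space, so the
    GPU only ever sees small, precision-friendly float values."""
    base = camera_base(cam_pos)
    return tuple(w - b for w, b in zip(world_pos, base))
```

The subtraction happens in doubles on the CPU; only the small relative values are handed to the GPU as floats.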

### #10 ShadowMan777 (Members)

Posted 21 February 2013 - 02:45 PM

> While you can generate noise on larger intervals:

So to avoid floating point issues, can't I just expand the interval and use up to the floating point accuracy between each integer? So instead of forcing 25,000,000 into the space of [-1, 1], I could make the range [-100000, 100000] and only use 5 decimal places between each integer. Or am I missing something? Thank you all.

EDIT:

It seems it is a non-issue, as libnoise uses doubles, so it can easily take and return a value in the coordinate range I am working in.

Edited by ShadowMan777, 21 February 2013 - 03:14 PM.

### #11 swiftcoder (Senior Moderators)

Posted 21 February 2013 - 03:19 PM

> So to avoid floating point issues, can't I just expand the interval and use up to the floating point accuracy between each integer? So instead of forcing 25,000,000 into the space of [-1, 1], I could make the range [-100000, 100000] and only use 5 decimal places between each integer. Or am I missing something? Thank you all.

You can get around all of this by using an integer PRNG, but performance on current CPU architectures may not be optimal (and porting to the GPU becomes rather involved, should you ever decide to).


### #12 BGB (Members)

Posted 21 February 2013 - 05:22 PM

> So to avoid floating point issues, can't I just expand the interval and use up to the floating point accuracy between each integer? So instead of forcing 25,000,000 into the space of [-1, 1], I could make the range [-100000, 100000] and only use 5 decimal places between each integer. Or am I missing something? Thank you all.

> You can get around all of this by using an integer PRNG, but performance on current CPU architectures may not be optimal (and porting to the GPU becomes rather involved, should you ever decide to).

Yeah, it has more to do with the total number of significant figures than with the exact value range of the numbers.

If the value range is increased, the relative precision of the low-order digits goes down.

By the time the range is +/- 100k, for floats, the precision isn't looking so good anymore.

This is related to the "several km from the origin" rendering issue. The large values in the coordinates generally kill the precision of the low-order digits, causing lots of ugly side effects:

• the camera shakes as the player moves around;

• textures and geometry start shaking around;

• ...

But if we essentially "subtract out" the large "absolute" coordinates (by using smaller relative coordinate spaces), this leaves a lot more precision for the smaller values, and we can have *much* bigger world sizes (at only a modest overall increase in complexity).

Granted, memory and disk storage may also become an issue (theoretical estimate, in my case: storing enough voxel terrain to accurately represent a planet the size of the Moon, as a thin surface shell of voxels, would still require roughly 42 TB). It also takes a lot of RAM just to keep the locally visible part of the world loaded. (As-is, it is unlikely a person would be able to store a planet-size area at any real level of detail.)

(Although a much smaller planet could be used... like something the size of Phobos, where enough voxel terrain for the entire planet could more reasonably fit on someone's HDD...)
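The order of magnitude of these estimates is easy to reproduce. A sketch where the shell depth and bytes-per-voxel are illustrative assumptions (one byte per surface voxel, one voxel deep, lands in the same ballpark as the ~42 TB figure above):

```python
import math

def shell_storage_bytes(radius_m, shell_depth_m=1, bytes_per_voxel=1):
    """Very rough storage for a thin voxel shell wrapped over a planet's
    surface; shell depth and bytes-per-voxel are assumed parameters."""
    area_m2 = 4 * math.pi * radius_m ** 2  # sphere surface area in m^2
    return area_m2 * shell_depth_m * bytes_per_voxel

# Moon radius ~1,737 km -> ~3.8e13 bytes, i.e. roughly 38 TB.
print(shell_storage_bytes(1_737_000) / 1e12, "TB")
```

Deeper shells or more per-voxel data scale the result linearly, which is how a small Phobos-sized body stays in the tens of GB while anything Moon-sized runs into tens of TB.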

Also possible: alternate projection strategies could better approximate a sphere than a simple flat plane (such as an elliptical or polyhedral strategy). For example:

http://en.wikipedia.org/wiki/Mollweide_projection

http://en.wikipedia.org/wiki/HEALPix

Then, in most places, everything seems to match up fine, and this helps avoid some of the funkiness at the poles.

Another note: texels are much cheaper than triangles.

For example, a planet represented by moderately high-resolution texture maps could have much more detail than one represented via lots of untextured triangles.

Similarly, these texture maps could also use things like normal and parallax mapping to simulate the appearance of 3D features.

From a high-level, space-like vantage point, it would be functionally more like Google Earth, probably switching over to voxel terrain only in close proximity to the ground.

### #13 ShadowMan777 (Members)

Posted 21 February 2013 - 05:56 PM

Except that you can represent a full planet with only 32,768 voxels (an octree where each chunk is 16^3 voxels, and each block represents 200 km of space). As you zoom into the planet, you only split parts of the octree. With 18 levels in the octree, if every split only generates 32k voxels, at the surface you have only around 576,000 voxels generated (it'd be more than this as you'd split different parts of the tree as well, but the point is that you are not generating higher-resolution chunks for 99% of the planet, because you aren't splitting the tree in those areas).

My goal is to make the planet look blocky from a distance; as you zoom in, higher-resolution voxels are generated, or better said, the detail of the 16^3 chunk starts to look more and more like the surface you are zooming into. If I see a small continent from space, it's only made up of maybe 8 voxels, but as I zoom in, more voxels are generated and it starts to look more and more like I am zooming into the continent.

I have done what you are talking about with coordinate systems, as I originally did what you proposed, but to keep the feel of blocks, I thought it would look much better if it LOOKS like a blocky planet at every LOD. Thanks again for all the input.

Edited by ShadowMan777, 21 February 2013 - 06:01 PM.

### #14 swiftcoder (Senior Moderators)

Posted 21 February 2013 - 06:38 PM

> Except that you can represent a full planet with only 32,768 voxels (an octree where each chunk is 16^3 voxels, and each block represents 200 km of space). As you zoom into the planet, you only split parts of the octree. With 18 levels in the octree, if every split only generates 32k voxels, at the surface you have only around 576,000 voxels generated (it'd be more than this as you'd split different parts of the tree as well, but the point is that you are not generating higher-resolution chunks for 99% of the planet, because you aren't splitting the tree in those areas).

It's not actually that bad. Let's assume that you want 1 metre resolution near to the viewer, and that the camera is at head height (roughly 2 metres)...

Each block is 16x16x16 voxels. A viewer height of 2 metres gives a horizon distance of 5 km. The closest block to the camera needs to be at a resolution of one voxel/metre. Each subsequent ring of blocks (moving away from the camera) is at a third of the resolution of the previous ring (so that edges line up).

That gives you 1 centre block plus 5 concentric rings of 9 blocks each, which amounts to 46 * (16^3) voxels = ~188,000 voxels.

(of course, that assumes perfect subdivision, but your average case shouldn't be more than an order of magnitude off)
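The arithmetic behind that estimate, as a sketch:

```python
def visible_voxel_estimate(rings=5, blocks_per_ring=9, block_voxels=16**3):
    """The back-of-envelope count above: one full-resolution centre block
    plus concentric rings of progressively coarser blocks out to the
    horizon. Returns (block count, total voxel count)."""
    blocks = 1 + rings * blocks_per_ring
    return blocks, blocks * block_voxels

print(visible_voxel_estimate())  # -> (46, 188416), i.e. ~188,000 voxels
```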


### #15 ShadowMan777 (Members)

Posted 21 February 2013 - 06:46 PM

Thanks for those calculations; you have basically proved my point. The only thing I cannot think of a good solution for is WHEN to subdivide the octree (if I have a max depth of 18 and I use the camera's bounding box, it will subdivide that part down to 18, since I am inside the octree volume). The only thing I could think of was to have 18 different bounding boxes around the camera, each representing the depth to which to divide the tree, but there has to be a better solution than this. Or subdividing when the camera is close to the center of the node. Thanks!
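One common alternative to nested bounding boxes (an assumption on my part, not something proposed earlier in the thread) is a distance-based split test, in the style of screen-space-error LOD: split a node while it looks large relative to its distance from the camera.

```python
import math

MAX_DEPTH = 18

def should_split(node_centre, node_size_m, camera_pos, k=2.0, depth=0):
    """Split while the node is big relative to its distance from the
    camera. `k` is a tunable quality factor (larger = more subdivision);
    the max(..., 1.0) keeps the test sane when the camera is inside
    the node."""
    if depth >= MAX_DEPTH:
        return False
    dist = max(math.dist(node_centre, camera_pos), 1.0)
    return node_size_m > k * dist
```

During traversal you recurse into a node only while `should_split` holds, so nearby nodes refine down to depth 18 while distant ones stop after a few levels, with a single rule instead of 18 hand-tuned bounding boxes.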

Edited by ShadowMan777, 21 February 2013 - 06:53 PM.

### #16 BGB (Members)

Posted 21 February 2013 - 06:54 PM

> Except that you can represent a full planet with only 32,768 voxels (an octree where each chunk is 16^3 voxels, and each block represents 200 km of space). As you zoom into the planet, you only split parts of the octree. With 18 levels in the octree, if every split only generates 32k voxels, at the surface you have only around 576,000 voxels generated (it'd be more than this as you'd split different parts of the tree as well, but the point is that you are not generating higher-resolution chunks for 99% of the planet, because you aren't splitting the tree in those areas).

> It's not actually that bad. Let's assume that you want 1 metre resolution near to the viewer, and that the camera is at head height (roughly 2 metres)...
>
> Each block is 16x16x16 voxels. A viewer height of 2 metres gives a horizon distance of 5 km. The closest block to the camera needs to be at a resolution of one voxel/metre. Each subsequent ring of blocks (moving away from the camera) is at a third of the resolution of the previous ring (so that edges line up).
>
> That gives you 1 centre block plus 5 concentric rings of 9 blocks each, which amounts to 46 * (16^3) voxels = ~188,000 voxels.
>
> (of course, that assumes perfect subdivision, but your average case shouldn't be more than an order of magnitude off)

Dunno...

I was doing calculations mostly for wrapping a planet in full-detail Minecraft-like terrain.

My calculations showed that, at this level, even a Phobos-scale planet would be pretty large (many GB).

A quick check shows that a real-life-scale Phobos would still be "pretty damn large" by game-world standards.

Granted, yes, a player probably wouldn't visit all of it...

(EDIT/ADD: it probably wouldn't be a big deal to distribute if the game were shipped on Blu-ray disks or something, but it would be a bit steep for internet-based distribution...)

Edited by cr88192, 21 February 2013 - 07:15 PM.

### #17 ShadowMan777 (Members)

Posted 21 February 2013 - 06:58 PM

That is exactly what I'd like to do, except the block is a spherical planet, and the large blocks subdivide into smaller blocks. Surprised I didn't see that before.

Edited by ShadowMan777, 21 February 2013 - 07:03 PM.

### #18 swiftcoder (Senior Moderators)

Posted 21 February 2013 - 07:09 PM

> I was doing calculations mostly for wrapping a planet in full-detail Minecraft-like terrain.

Right, but you don't ever need the whole planet in memory - just the portion you can see.


### #19 BGB (Members)

Posted 21 February 2013 - 08:00 PM

> I was doing calculations mostly for wrapping a planet in full-detail Minecraft-like terrain.

> Right, but you don't ever need the whole planet in memory - just the portion you can see.

Granted, yes.

Even so, if the planet is both large and has fully generated, full-detail terrain, it may still eat a big chunk of HDD space to store it all...

I am not saying it can't be made playable, but being able to persistently store the planet on one's HDD would presumably be a goal (unless the planets are not persistent, in which case there is little issue with HDD space).

In Minecraft it is not really an issue, because usually only a small part of the world is ever visited / generated.

If we assume this, then it is mostly a non-issue here as well.

Assuming the whole world is fully generated, though:

something Phobos-sized will fit on a modern HDD, albeit still being fairly large (~30-40 GB);

much bigger generally won't fit on current HDDs in full detail (most larger planets quickly reach into the TB range, due mostly to their large surface area).

But the idea of a full-scale, full-detail planet seems pretty cool, like if a person could build a minecart track all the way around the whole planet, or see their built structures from space...

As-is, all this is much bigger than the worlds in my engine at present...

And also much bigger than my current Minecraft worlds...