
# Infinite terrain generation in chunks - need hints on handling chunk edges

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

34 replies to this topic

### #1Kylotan  Moderators

Posted 25 February 2013 - 12:09 PM

I'm doing some terrain generation, using the usual Perlin-style interpolated noise (although strictly speaking it's value noise, as I'm not using gradients). However rather than generating a large landmass in one go, I am generating it chunk by chunk, eg. 100m x 100m areas. Therefore it's important that the edges of each chunk line up with each other in terms of their generated height. (I think this is how Minecraft does it, but I couldn't find out exactly how the height values are calculated, especially across changing biomes. And I'm not using voxels, anyway.)

Most algorithms that I've seen for terrain generation using noise seem to assume that you're generating across the whole landmass, that you have precalculated noise values that span the entire world which you then interpolate between (and repeat with higher frequencies at lower amplitudes, etc). But since my world is unbounded I don't have this. I think the answer should be as simple as having some sort of 2D hash function that can return a noise value for any arbitrary x,y in the world, and sampling that value itself is easy. But I'm having trouble thinking about how to generalise this to the system of frequency and amplitude. If I just scale up x and y by the frequency each time then I'm sampling values from elsewhere in the world which seems logically incorrect. Can anybody clear this up for me? Maybe post some pseudocode?

I have one other aspect I am trying to implement - each chunk itself has a height value, so that I can specify that certain parts of the world are higher than others before the noise algorithm runs. I am contemplating calculating the height at each point as basically a bilinear interpolation of the chunk's height with those of its neighbours, and then adding the noise value for that position. I think this should be continuous both within chunks and across chunk boundaries, but I'd be interested to hear if there are any problems with this approach, ways it could be improved, or ways to combine it with the previous step to make the algorithm simpler and/or faster.
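The bilinear idea in this last paragraph can be sketched roughly as follows (a minimal sketch in Python; the chunk size, hash function, and all constants are hypothetical stand-ins, not anything from the thread):

```python
import math

CHUNK_SIZE = 100.0

def chunk_base_height(cx, cy):
    # Hypothetical per-chunk base height: any deterministic hash of the
    # integer chunk coordinates works; these constants are arbitrary.
    h = (cx * 374761393 + cy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 0xFFFF * 50.0  # base height in [0, 50]

def smooth_base(x, y):
    # Sample the base heights of the four chunk *centres* surrounding
    # (x, y) and bilinearly interpolate between them: the result is
    # continuous inside chunks and across chunk borders alike.
    cx = x / CHUNK_SIZE - 0.5
    cy = y / CHUNK_SIZE - 0.5
    ix, iy = math.floor(cx), math.floor(cy)
    fx, fy = cx - ix, cy - iy
    h00 = chunk_base_height(ix,     iy)
    h10 = chunk_base_height(ix + 1, iy)
    h01 = chunk_base_height(ix,     iy + 1)
    h11 = chunk_base_height(ix + 1, iy + 1)
    return ((h00 * (1 - fx) + h10 * fx) * (1 - fy)
          + (h01 * (1 - fx) + h11 * fx) * fy)
```

The per-position noise value would then be added on top of `smooth_base(x, y)`.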

Edited by Kylotan, 25 February 2013 - 12:10 PM.

### #2Álvaro  Members

Posted 25 February 2013 - 12:16 PM

If you think of the whole terrain as a single function, generating a chunk just means sampling that function in a particular region. If you then sample the region next to it, you'll magically get consistent borders.

As you say, the trick is to use a hash function of the coordinates for noise generation, instead of random numbers. I don't quite understand what part of this you find troublesome, though...
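For example, a minimal integer-lattice hash might look like this (the mixing constants are arbitrary; this is just one of many possible illustrations):

```python
def lattice_value(ix, iy, seed=0):
    # Deterministic "random" value in [0, 1) for any integer lattice
    # point, replacing a pregenerated noise array.
    n = (ix * 73856093) ^ (iy * 19349663) ^ (seed * 83492791)
    n = ((n ^ (n >> 13)) * 1103515245) & 0x7FFFFFFF
    return n / 0x80000000
```

Interpolating between adjacent lattice values then gives coherent noise at any coordinate, with no bound on the world size.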

### #3Nercury  Members

Posted 25 February 2013 - 12:47 PM

Maybe it could be interesting to use multiple functions for various terrain features and see them clash in unexpected ways. Also a function can be modified to use sampling from points around it.

Edited by Nercury, 25 February 2013 - 12:59 PM.

### #4SeraphLance  Members

Posted 25 February 2013 - 12:51 PM

As Alvaro says, noise normally handles this problem implicitly.

Imagine your chunk divided into 4 "sub-chunks". Do they line up? If you're using straight value noise, they shouldn't, as value noise isn't inherently continuous.

Hence, it's an intrinsic problem of the algorithm. Gradient and other offline noise algorithms get around that by being continuous, which naturally ensures that any arbitrary chunk boundary lines up.

As for your regions idea, I have a feeling that will lead to artifacts, but don't have the time on hand to evaluate it more closely. One way to do what you're after is to blend your noise function with another very low-frequency noise function. I believe this is how it's usually done.
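The low-frequency blend might be sketched like this (both functions here are simple stand-ins for real noise, and the scales are illustrative):

```python
import math

def regional(x, y):
    # Very low frequency stand-in: features span roughly 1000 units.
    return 0.5 + 0.5 * math.sin(x / 1000.0) * math.cos(y / 1000.0)

def detail(x, y):
    # Higher-frequency stand-in for the ordinary noise octaves.
    return 0.5 + 0.5 * math.sin(x / 37.0 + y / 53.0)

def height(x, y):
    # The low-frequency term sets the broad regional height;
    # the detail term only perturbs it.
    return 80.0 * regional(x, y) + 10.0 * detail(x, y)
```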

Edited by SeraphLance, 25 February 2013 - 12:53 PM.

### #5Kylotan  Moderators

Posted 25 February 2013 - 12:54 PM

Alvaro - the problem is that the logic changes when you're no longer mapping an array of noise to an equal-sized array of height values. The algorithms rely not only on the noise being pregenerated but also on it being a fixed size.

eg. In one example I've seen, 1D noise is sampled by taking x * frequency % numSamples - which I can't do directly because numSamples is unbounded. I've considered simply leaving out the numSamples wrapping factor, but I don't understand if the output will be correct - after all, it means that f(x) is going to be based on the hash value of arbitrary multiples of x. Is that always going to work? As x approaches large numbers? And when x is zero?

I'm sure the solution is simple but I'm having trouble convincing myself of the correct mathematics to get it working.

### #6Álvaro  Members

Posted 25 February 2013 - 12:56 PM

Maybe you can post some simple version of the code you are working with, and we can go from there...

### #7Kylotan  Moderators

Posted 25 February 2013 - 01:00 PM

SeraphLance, the subchunks should all line up, because I don't use value noise directly, but interpolate from one value to the next (as most noise algorithms would). I don't see there's any inherent problem with this method of generation. It's not as continuous as other types of noise but the interpolated output can still be C1 continuous.
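This point can be illustrated in 1D: value noise interpolated with a smoothstep fade is C1 continuous at the lattice joins, even though the lattice values themselves are random (the hash here is a hypothetical stand-in):

```python
import math

def hash01(i):
    # Hypothetical integer hash yielding a value in [0, 1).
    n = (i * 1103515245 + 12345) & 0x7FFFFFFF
    n = ((n ^ (n >> 16)) * 2654435761) & 0x7FFFFFFF
    return n / 0x80000000

def fade(t):
    # Smoothstep: zero first derivative at t = 0 and t = 1,
    # which is what makes the joins C1.
    return t * t * (3.0 - 2.0 * t)

def value_noise_1d(x):
    i = math.floor(x)
    t = fade(x - i)
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t
```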

### #8Kylotan  Moderators

Posted 25 February 2013 - 01:00 PM

Alvaro, I don't have code currently because I'm trying to work out how to do it. It's not a bug in my code but a problem with my thinking.

### #9swiftcoder  Senior Moderators

Posted 25 February 2013 - 01:12 PM

> It's not a bug in my code but a problem with my thinking.

I'm having trouble following your thinking.

Most noise functions take arbitrary (continuous) input coordinates, and produce continuous output values. Even in an infinite world, your input coordinates should be continuous, right?

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #10Kylotan  Moderators

Posted 25 February 2013 - 01:48 PM

My thinking is difficult to follow because I don't fully understand the problem myself! All I know is that there are plenty of implementations of fixed-width noise systems, and I can't find one that works for arbitrary widths. The fixed-width ones all presume a periodic function that yields up their pregenerated noise values; the indexes into it also wrap around, and I can't prove to myself whether this matters or not.

The function should handle continuous input, but in order to decide what to return it picks discrete elements from a pregenerated array of noise samples. In my case I expect I will have to pass discrete values to a hash function. It's knowing which discrete values I need to use which is tricky. I don't fully understand the theory around this. It doesn't help that the various pieces of sample code I find tend to use 'frequency' to mean either frequency, wavelength, or period, depending on each particular implementer's (mis)understanding of the term! This makes it hard for me to understand the fundamental way of deciding what the input value should be for each octave.

### #11BGB  Members

Posted 25 February 2013 - 01:50 PM

note that the sample-count is usually bounded and wraps, but applies over a potentially large area, and is not used up all at once.

going outside of the sample range, as can be noted, tends to result in repeating patterns.
a partial trick here is the use of a per-region noise function (in my case, a region is 512x512x128m), and interpolating over the region-edges via a windowing function. then a hash-function or similar can be used to calculate a per-region seed.

the idea would basically be to multiply the values by a sin or sin^2 curve, where as the curve for one side is falling off, the curve for the other side is rising, ideally creating a case where f(theta0)+f(theta1)=1, where the theta values depend on the positions within the region.

except for potentially very large/low-frequency features (such as biomes), this works ok, but then a person can potentially use a few global noise functions (the world probably won't be big enough for this to matter).
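The windowing idea above, sketched in code (region size and blend zone are illustrative; the key property is that the two overlapping weights always sum to 1, since sin^2 + cos^2 = 1):

```python
import math

REGION = 512.0

def window(t):
    # t in [0, 1] across the blend zone; rises from 0 to 1.
    # window(t) + window(1 - t) == 1, because sin^2 + cos^2 = 1.
    return math.sin(t * math.pi / 2.0) ** 2

def blended(x, noise_a, noise_b):
    # Blend two per-region noise functions; for simplicity the blend
    # zone here spans a whole region width.
    t = (x % REGION) / REGION
    w = window(t)
    return noise_a(x) * (1.0 - w) + noise_b(x) * w
```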

### #12swiftcoder  Senior Moderators

Posted 25 February 2013 - 02:17 PM

You can't generate an infinite non-repeating terrain from a fixed set of samples - entropy's a bitch.

Your choices are to loop the output (fine for fairly small worlds), or generate additional samples once you are far enough away from the origin.


### #13Kylotan  Moderators

Posted 25 February 2013 - 02:32 PM

Swiftcoder: I don't want to use a fixed set of samples. Not sure where it was implied that I did? The typical Perlin noise algorithms do use a fixed set, which I'm trying to do away with by using a hash function instead (as mentioned in my first post), but I'm not sure of the implications for what values I need to pass in for each of the different 'octaves'.

On reflection it might be that I'm thinking too hard about the usual implementation and forgetting that I just need to add together several continuous functions. It's just confusing because I've seen the 'frequency' parameter defined in terms of the original pregenerated noise sample map, and obviously that doesn't work. eg. The usual site: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm - has frequency as basically 'how many samples to take from the whole set of noise' or 'how many samples to take across the whole area being generated', both of which are generally equal in the usual implementation, but which doesn't apply in my case.

I guess I need to think of it as a different type of value, as I don't think frequency itself has any meaning in my situation.

cr88192: I must admit I have no idea how any of that would work - although I'm not sure I would need it if I had a function that was defined for all possible values of x and y. I think "interpolating over the region-edges via a windowing function" is essentially what I described originally, bilinearly interpolating between the values for adjacent regions - right?

### #14swiftcoder  Senior Moderators

Posted 25 February 2013 - 02:55 PM

> The usual site: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm

As I've been saying for years, that site ought to die in a fire. It contains more terminology confusion than it does useful information...

Yes, throw out pre-conceived notions of how "standard" noise generators are implemented. You just need to hash on some sort of grid, and generate noise based on that.

For example, I have an infinite voronoi cell noise generator. It divides the world into a square grid at some input-defined resolution, and in each cell it generates a fixed number of points. The resulting noise is a function of the distance to the points in the 3x3 grid of cells surrounding the input coordinate.
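A sketch of that scheme, simplified to one feature point per cell (the hash constants are arbitrary; this is not the poster's actual implementation):

```python
import math

def cell_point(cx, cy):
    # Hash each grid cell to a deterministic feature point inside it.
    n = ((cx * 73856093) ^ (cy * 19349663)) & 0x7FFFFFFF
    n = (n * 1103515245 + 12345) & 0x7FFFFFFF
    px = (n & 0xFFFF) / 0x10000           # fractional offset in [0, 1)
    py = ((n >> 16) & 0x7FFF) / 0x8000
    return cx + px, cy + py

def cell_noise(x, y):
    # Distance to the nearest feature point among the 3x3 cells
    # surrounding the input coordinate.
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            px, py = cell_point(cx + dx, cy + dy)
            best = min(best, math.hypot(x - px, y - py))
    return best
```

Because the feature points depend only on hashed cell coordinates, any region of the world can be generated independently and the borders still match.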


### #15coderx75  Members

Posted 25 February 2013 - 02:58 PM

Detailed generation of "infinite" worlds is a bitch.  I've put myself through it and learned much, but am not currently applying everything I've learned because I'd have to bring my entire project back to the drawing board.  It's a good thing you're asking because you really want to get this right early on.

First off, a noise function will not cut it.  Sampling a point at (x, y) from a noise function to get a height value will produce very boring results.  Having an infinite world is pretty pointless if it's all homogeneous.  You will need to implement different methods of creation.  Noise (spectral synthesis) is always useful.  Displacement (also known as distortion or perturbation) really helps to produce natural results.  The use of cell diagrams, such as Voronoi diagrams, is very important.  The ability to use modifiers on output, from basic mathematical operations to more complex diagram conversion algorithms, will open you up to more flexibility.  All of this is hell on seams though.  Ex: displacement can really put the edges of a terrain patch far out of whack with its neighbors.

Second, how you generate terrain, handle collision detection of terrain and render terrain should probably not be the same.  It's efficient to generate large patches of terrain but more efficient to render in smaller patches, especially for dynamic detailing.  Collision detection depends on the library you are using but I try to get everything in place as it's generated.  I say it's efficient to generate larger patches because you want to leave yourself plenty of resources for dealing with seams.  The smaller the patch, the more seams that have to be managed.

I would suggest basing the generation of the terrain on both elevation and vertex normals.  Generating patches separately will always leave you with visible seams, even if the elevations match.  For example, the resulting edge of one patch may be horizontal while the edge of its neighbor rises sharply, creating a highly visible "L" shape along the seam.  So, rather than using something like midpoint averages, include the vertex normal in your calculations to keep a more consistent surface (ex: a surface of an elevation can "lean" away from the midpoint, resulting in a higher midpoint and smoother surface).  If a surface at one edge is horizontal (or anything else), the vertex normal ensures that its neighbor is the same.

Another solution would be to generate "areas" or "continents" before generating actual terrain.  An area can be defined by varying dimensions and by a particular method (or methods) of generation, producing terrain with a central high point and lower-lying edges.  Other areas could overlap it, with only the higher of the two values being used and everything else discarded.

There's a lot to this and I've taken tons of notes (much of it in exasperation).  Dealing with texture generation, shadowing, collision detection and, likely, multi-threading, can be a real nightmare.  Once I'm done with work, I'll try to go over my notes and get back to this thread.

Quit screwin' around! - Brock Samson

### #16Kylotan  Moderators

Posted 25 February 2013 - 03:22 PM

I appreciate the mention of Voronoi methods and the like but all that is out of my scope - height-mapped noise is sufficient for my purposes. I may not have made it clear but I can't generate a whole world at any point - the idea is to be able to procedurally generate individual chunks as and when they're required, which requires a continuous function that I can sample at any point in the future to get results from.

I'm also glad that the link I cited is considered to have problems with it because that helps to explain why I've had trouble reconciling my idea of what the terms mean and my idea of how the algorithm would work.

I'm still slightly unclear on how to implement my algorithm however, given that every single resource I'm finding has some terminology issue or a fixed-size assumption. Given an input value 'x', how do I derive a height 'y', given an arbitrary number of 'octaves', ie. coherent noise functions with changing amplitude and frequency in inverse proportions? I figure that I need two adjacent hash values and the frequency value for the current octave determines an interpolated point between the two values - but I have been staring at this all day and really can't work out how to implement it properly. I'm sure it's a trivial one or two liner but I don't know how to do it.

### #17swiftcoder  Senior Moderators

Posted 25 February 2013 - 03:38 PM

Given some arbitrary noise function Noise(x, y), a fairly vanilla fractal function might look like this:
```
generate(x, y) {
    frequency = 1.0
    amplitude = 1.0
    octaves = 8

    result = 0.0

    for i from 0 to octaves {
        result += Noise(x*frequency, y*frequency) * amplitude

        frequency *= 2.0
        amplitude *= 0.5
    }

    return result
}
```

Where octaves is the number of layers to combine, frequency is the frequency at which to sample the noise function, and amplitude is the magnitude of each layer.

I'm not seeing any dependency on world size here (at least until you run out of floating point precision). As long as your Noise(x, y) is continuous, your world can be any size, and you can generate it in whatever increments you desire.


### #18Kylotan  Moderators

Posted 25 February 2013 - 05:41 PM

But I don't see how that would work for an arbitrary noise function. Imagine the noise function is based around a hash function that takes an integer in and yields a float from 0-1 out. The only way I'd ever get any smoothing is if I pass in fractional values of X, which force it to interpolate between two adjacent hash results. But in this example, the frequency is always a positive integer. So if I sample at x=1, x=2, ... x=123456778, I'm always going to be handling whole numbers in the noise function and getting arbitrary non-interpolated numbers back out, making the whole coherent noise thing pointless.

It seems to me like I want to be dividing the input parameters by frequency rather than multiplying by it? That makes it "the frequency with which we sample between 2 points", which makes sense on some level. But nobody seems to be doing that, hence my (continued) confusion.

Or alternatively, the noise function doesn't expect to have random numbers exactly on the whole-number boundaries - but then I'm not sure how to construct it in a reasonable way.

### #19swiftcoder  Senior Moderators

Posted 25 February 2013 - 05:56 PM

The point of something like fBm (which is roughly the function I reproduced above), is that successive octaves display increased detail - each layer decreases in both scale and amplitude (scale being the inverse of frequency).

Think of it this way: if you view your noise basis as a 2-dimensional image, then increasing the frequency is equivalent to zooming out the image. When you zoom out the image, visible features become smaller, which increases the visible detail.

If we go your route, and take it as the "frequency with which we sample between 2 points", then your noise basis needs to have infinite levels of detail from the get go (at which point you might as well just sample it once; it wouldn't need multiple octaves to build up fine detail on top of a coarse base).

Edited by swiftcoder, 25 February 2013 - 05:56 PM.


### #20Kylotan  Moderators

Posted 25 February 2013 - 06:35 PM

I understand the principle but the implementation isn't working. I know the idea is to increase the detail at higher octaves; but as I see it, the detail is going to be at maximum right from the start if I'm sampling at integer boundaries (which I will - because ultimately I need to pass this data to a system that needs discrete values).

Example:

```
float samples[10] = { 0.56, 0.1, 0.99, 0.44, 0.63, 0.72, 0.1, 0.72, 0.01, 0.2 }; // pretend these are pseudo-random
float hash(int x) { return samples[x % 10]; }
float Noise1D(float x) { return Interpolate(hash((int)x), hash((int)x + 1), FractionalPartOf(x)); }
```


Now if I call Noise1D with the numbers 0 and 1 consecutively, I get exactly 0.56 and 0.1 out.

If I double the frequency and call it with the numbers 0 and 1, I get exactly 0.56 and 0.99 out.

If I double it again and call it again, the results are 0.56 and 0.63 out.

None of these values will ever hit the interpolation part as FractionalPartOf(x) always returns 0. In this case the Noise function is no different from a discrete random number generator, and no smoothing is being performed. The detail is effectively at maximum all the time, for any whole-number value for frequency.

I need a system where I can call Noise(1) and Noise(2) and get 2 values that have some degree of coherence. Either this requires a rethinking of what exactly the 'frequency' is or I need a better definition of what Noise(x) is meant to look like. I can't see how the frequency value can be premultiplied with X and Y because surely the implementation of Noise needs both the X,Y and the frequency information to find 2 samples to interpolate from?
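For what it's worth, one way out of this deadlock is to choose the *base* frequency well below 1 (equivalently, give the lowest octave a wavelength many samples long), so that integer inputs land between lattice points and the interpolation does engage; successive octaves then double the frequency toward 1. A sketch under those assumptions (the hash and all constants are illustrative; `noise1d` restates the interpolated value noise discussed above):

```python
import math

def hash01(i):
    # Hypothetical integer hash yielding a value in [0, 1).
    n = (i * 1103515245 + 12345) & 0x7FFFFFFF
    n = ((n ^ (n >> 16)) * 2654435761) & 0x7FFFFFFF
    return n / 0x80000000

def fade(t):
    return t * t * (3.0 - 2.0 * t)

def noise1d(x):
    i = math.floor(x)
    t = fade(x - i)
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t

def height(x, octaves=4, base_frequency=1.0 / 64.0):
    # With base_frequency = 1/64, the lowest octave has a 64-sample
    # wavelength, so Noise(1) and Noise(2) come from nearby points on
    # the same interpolated segment and are coherent.
    result, freq, amp = 0.0, base_frequency, 1.0
    for _ in range(octaves):
        result += noise1d(x * freq) * amp
        freq *= 2.0
        amp *= 0.5
    return result
```

Here `height(100)` and `height(101)` differ only slightly, because even the highest octave still has a wavelength of several samples.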
