
# Wrapping my head around Perlin Noise


## Recommended Posts

I'm trying to understand the reference implementation of Perlin Noise from here, using this as an accompaniment (I'm aware that it explains an older version of the algorithm, but I know of no alternatives). However, I get lost about halfway through.

```java
public final class ImprovedNoise {
   static public double noise(double x, double y, double z) {
      int X = (int)Math.floor(x) & 255,                  // FIND UNIT CUBE THAT
          Y = (int)Math.floor(y) & 255,                  // CONTAINS POINT.
          Z = (int)Math.floor(z) & 255;
      x -= Math.floor(x);                                // FIND RELATIVE X,Y,Z
      y -= Math.floor(y);                                // OF POINT IN CUBE.
      z -= Math.floor(z);
      double u = fade(x),                                // COMPUTE FADE CURVES
             v = fade(y),                                // FOR EACH OF X,Y,Z.
             w = fade(z);
      int A = p[X  ]+Y, AA = p[A]+Z, AB = p[A+1]+Z,      // HASH COORDINATES OF
          B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z;      // THE 8 CUBE CORNERS,

      return lerp(w, lerp(v, lerp(u, grad(p[AA  ], x  , y  , z   ),  // AND ADD
                                     grad(p[BA  ], x-1, y  , z   )), // BLENDED
                             lerp(u, grad(p[AB  ], x  , y-1, z   ),  // RESULTS
                                     grad(p[BB  ], x-1, y-1, z   ))),// FROM  8
                     lerp(v, lerp(u, grad(p[AA+1], x  , y  , z-1 ),  // CORNERS
                                     grad(p[BA+1], x-1, y  , z-1 )), // OF CUBE
                             lerp(u, grad(p[AB+1], x  , y-1, z-1 ),
                                     grad(p[BB+1], x-1, y-1, z-1 ))));
   }
   static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
   static double lerp(double t, double a, double b) { return a + t * (b - a); }
   static double grad(int hash, double x, double y, double z) {
      int h = hash & 15;                      // CONVERT LO 4 BITS OF HASH CODE
      double u = h<8 ? x : y,                 // INTO 12 GRADIENT DIRECTIONS.
             v = h<4 ? y : h==12||h==14 ? x : z;
      return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
   }
   static final int p[] = new int[512], permutation[] = { 151,160,137,91,90,15,
   131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
   190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
   88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
   77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
   102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
   135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
   5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
   223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
   129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
   251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
   49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
   138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180 };
   static { for (int i=0; i < 256 ; i++) p[256+i] = p[i] = permutation[i]; }
}
```

I understand what X,Y,Z,x,y,z represent, and I understand what the fade function is supposed to do, but I have no idea how it does it in that monster of a return statement. After that though, I don't have a clue. What exactly does p[] represent? Why is it doubled up? Is there any logic to the hashing process and grad function, or is it just a cheap way to generate vectors? I know from his white paper why he doesn't randomly generate gradients, but I'm not sure of what he's doing instead.

##### Share on other sites
The use of the table, p, is just a relatively cheap way to get a pseudo-random number that is dependent upon the given coordinates, and will always be the same for any given set of coordinates. The integers 0 through 255 are placed into permutation and scrambled randomly; then the permutation table is copied twice into p. This way, you can "fold" the coordinates together and generate a pseudo-random number in the range of [0,255]. Coordinates are mod'ed to the range [0,255] so you can take the result of an element indexed from p and add it to a coordinate, and you can use the result again to index p. This result can be added to the next coordinate component and again used to index p. And so on. This has a sort of mixing effect, where the X, Y and Z components are all sort of churned together to get a final random-seeming result.

Consider the case where, say, X=255, Y=140 and Z=24. If you perform the operation A=p[X]+Y, then the result will be 180+140, or 320. So then you perform the operation AA=p[A]+Z. If p wasn't a doubled-up version of permutation in this case, then this would result in an array bounds overflow. But since the maximum value stored in p is 255, and the maximum value that any coordinate can be is 255, then the maximum array index value ever used would be 255+255, or 510. Doubling the permutation table to 512 in length prevents overflow.
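The fold-and-index scheme described above can be sketched with a toy-sized table. This is not Perlin's actual permutation: the 4-entry table and its values here are made up purely to illustrate the mechanism, including why the table is stored doubled (8 entries) so that `lookup + coordinate` can never index out of bounds.

```java
// Toy-sized sketch of the permutation-table folding. The table contents are
// illustrative, not Perlin's; coordinates are assumed pre-masked to [0, 3].
public class HashFold {
    // Hypothetical permutation of {0,1,2,3}, doubled to length 8 so that
    // p[index] + coordinate (max 3 + 3 = 6) always stays in bounds.
    static final int[] p = {2, 0, 3, 1, 2, 0, 3, 1};

    static int fold(int x, int y, int z) {
        int a = p[x] + y;  // mix in Y
        int b = p[a] + z;  // mix in Z
        return p[b];       // deterministic pseudo-random result in [0, 3]
    }

    public static void main(String[] args) {
        System.out.println(fold(1, 2, 3)); // prints 3
        // Same coordinates always give the same result:
        System.out.println(fold(1, 2, 3) == fold(1, 2, 3)); // prints true
    }
}
```

The same shape scales up to the real thing: a 256-entry permutation doubled to 512, with coordinates masked by `& 255`.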

The big, ugly return statement is simply interpolating the corners of a cube. Consider the 2D case, where each corner of a square is assigned a noise value. The final result is calculated by interpolating the 4 corners: first, the top-left and bottom-left values are interpolated using the fractional part of the Y coordinate; then the top-right and bottom-right corners are interpolated, again using fractional Y; finally, these two intermediate results are interpolated using the fractional X coordinate to get the final output value. By the same token, the 8 corners of the cube are interpolated using the fractional X, Y and Z components. Since there are more points and more dimensions, more interpolation operations are required: the exact number of interpolations is 2^N - 1, where N is the dimensionality of the noise being generated. So if you think the 3D version is hairy, you should see the 4D version.
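The 2D walkthrough above can be written out as three lerps. The corner values here are made up for illustration; in real Perlin noise they would come from the `grad()` dot products.

```java
// Bilinear interpolation of 4 corner values, mirroring the 2D walkthrough:
// two lerps along Y, then one along X (2^2 - 1 = 3 lerps total).
public class Bilerp {
    static double lerp(double t, double a, double b) { return a + t * (b - a); }

    // tl/bl/tr/br are the noise values at the square's corners;
    // fx, fy are the fractional coordinates of the point inside the square.
    static double bilerp(double tl, double bl, double tr, double br,
                         double fx, double fy) {
        double left  = lerp(fy, tl, bl);   // left edge, along Y
        double right = lerp(fy, tr, br);   // right edge, along Y
        return lerp(fx, left, right);      // across, along X
    }

    public static void main(String[] args) {
        System.out.println(bilerp(0.0, 1.0, 0.5, -0.5, 0.5, 0.5)); // prints 0.25
    }
}
```

The nested `lerp(w, lerp(v, lerp(u, ...)))` in the reference code is exactly this pattern taken to 3D: four lerps along X, two along Y, one along Z, i.e. 2^3 - 1 = 7.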

##### Share on other sites

> The use of the table, p, is just a relatively cheap way to get a pseudo-random number that is dependent upon the given coordinates, and will always be the same for any given set of coordinates. The integers 0 through 255 are placed into permutation and scrambled randomly; then the permutation table is copied twice into p. This way, you can "fold" the coordinates together and generate a pseudo-random number in the range of [0,255]. Coordinates are mod'ed to the range [0,255] so you can take the result of an element indexed from p and add it to a coordinate, and you can use the result again to index p. This result can be added to the next coordinate component and again used to index p. And so on. This has a sort of mixing effect, where the X, Y and Z components are all sort of churned together to get a final random-seeming result.
>
> Consider the case where, say, X=255, Y=140 and Z=24. If you perform the operation A=p[X]+Y, then the result will be 180+140, or 320. So then you perform the operation AA=p[A]+Z. If p wasn't a doubled-up version of permutation in this case, then this would result in an array bounds overflow. But since the maximum value stored in p is 255, and the maximum value that any coordinate can be is 255, then the maximum array index value ever used would be 255+255, or 510. Doubling the permutation table to 512 in length prevents overflow.

If the doubling-up is used to avoid overflow, why not simply change the array references to modulo (e.g. AA=p[A & 255]+Z)?

I think I'm starting to get it though. I've always considered "permutation" to be bound to probability calculations in college math classes, but I never thought to see it as just "one permutation," divorced from the standard practice of calculating all of the permutations of a list. With that out of the way, I get why the "hashing" does what it does.

> So if you think the 3D version is hairy, you should see the 4D version.

Haha, I'll bet. Thankfully, this implementation is trivially easy to port to C#, so I didn't run into any problems (except for the slow-to-come realization that it was mapping to [-1,1], but that's easily fixable). I just don't feel very comfortable implementing algorithms I don't understand.

##### Share on other sites

> If the doubling-up is used to avoid overflow, why not simply change the array references to modulo (e.g. AA=p[A & 255]+Z)?
>
> I think I'm starting to get it though. I've always considered "permutation" to be bound to probability calculations in college math classes, but I never thought to see it as just "one permutation," divorced from the standard practice of calculating all of the permutations of a list. With that out of the way, I get why the "hashing" does what it does.

You probably could do that, I suppose. In my own implementation, I use a FNV-1A hash instead of the permutation table, since I never really liked the idea of limiting myself to a domain of [0,255]. Using the permutation table as in the reference implementation, with mod'ed coordinates as input, means that the noise pattern repeats every 256 units, and since I've done a lot of procedural planet and other large-scale generation, that was undesirable behavior. But basically any kind of hash will work as long as it maps (X,Y,Z) to a pseudo-random gradient array index in a deterministic fashion.
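Hashing coordinates with FNV-1a might look something like the following. This is a generic sketch of the idea, not the poster's actual code: it folds each byte of each integer coordinate through the standard 32-bit FNV-1a constants, then masks the result down to a gradient index the way `grad()` expects.

```java
// Generic FNV-1a (32-bit) coordinate hash, as an alternative to the
// permutation table. Works over the full int domain, so the noise does
// not repeat every 256 units.
public class CoordHash {
    static final int FNV_OFFSET = 0x811C9DC5; // FNV-1a 32-bit offset basis
    static final int FNV_PRIME  = 0x01000193; // FNV-1a 32-bit prime

    static int hash(int x, int y, int z) {
        int h = FNV_OFFSET;
        for (int v : new int[] {x, y, z}) {
            for (int i = 0; i < 4; i++) {     // feed each byte of each coord
                h ^= (v >>> (i * 8)) & 0xFF;  // xor the byte in
                h *= FNV_PRIME;               // then multiply by the prime
            }
        }
        return h & 15; // map down to one of the 16 slots grad() decodes
    }

    public static void main(String[] args) {
        // Deterministic: same coordinates always hash to the same index.
        System.out.println(CoordHash.hash(255, 140, 24)
                        == CoordHash.hash(255, 140, 24)); // prints true
    }
}
```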

> Haha, I'll bet. Thankfully, this implementation is trivially easy to port to C#, so I didn't run into any problems (except for the slow-to-come realization that it was mapping to [-1,1], but that's easily fixable). I just don't feel very comfortable implementing algorithms I don't understand.

[-1,1] can in some cases be highly desirable, for instance in the case of turbulence. A noise function can be used to perturb the inputs to another function, and in such a case it can be useful to have the perturbation be centered about the original point. For example, in my procedural island generation, if I use noise in the range [0,1] to perturb instead of [-1,1], it has the effect of offsetting the island away from the center of the mapped region.

So in my library, I use [-1,1] noise output by default, but offer a function that can remap the range to [0,1] or whatever arbitrary range I may require.
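Such a remapping function is a one-liner. This is a minimal sketch (the name `remap` is mine, not from any particular library): scale [-1,1] to [0,1], then stretch into the requested [lo,hi].

```java
// Remap a noise value from [-1, 1] to an arbitrary [lo, hi].
public class Remap {
    static double remap(double n, double lo, double hi) {
        return lo + (n + 1.0) * 0.5 * (hi - lo); // [-1,1] -> [0,1] -> [lo,hi]
    }

    public static void main(String[] args) {
        System.out.println(remap(-1.0, 0.0, 1.0)); // prints 0.0
        System.out.println(remap( 1.0, 0.0, 1.0)); // prints 1.0
        System.out.println(remap( 0.0, 2.0, 4.0)); // prints 3.0
    }
}
```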
