2D Perlin Noise (gradient noise) range = ?

42 comments, last by Matthew Doucette 19 years, 3 months ago
Quote:Original post by Doucette
Step 4 was hard to follow, but I followed it and it looks good and has the same results that I got.

It was also quite hard to figure out. It's rather unintuitive, unfortunately, because there are a lot of interdependencies.
Quote:
I have not yet coded 3D *original* Perlin Noise. I will code it right now and post the ranges of my results...

Done. I'm calculating my results. So far, the range is:

-0.765153..0.720143 after 2.04 BILLION random samples!

-0.866025..0.866025 is the expected range.

Why isn't it close?

For two reasons. Firstly, 3D noise is calculated from *EIGHT* gradients ... If you think about it, it is much less likely that eight gradients that either closely maximize or minimize the noise value exist out of the 256. ... Secondly, assuming that those eight gradients do exist out of the list of 256, it is much less likely that those eight are going to all be corners around the same 3D "cel" of noise. ... This brings me to my next idea for testing: ...

This gives me a much better idea:

Treat the noise as if you do have optimal gradients. Instead of looking up the gradient vectors through the permutation table, calculate them by normalizing the delta vectors. Better yet, just skip the whole find-gradient-calc-dot-product routine, and return the delta vector length as the "dot product" result (then interpolate those). After sampling this within the unit cube at some good resolution, this should give you both the maximum possible noise value and the coordinates where it is obtained (which should be (0.5, 0.5, 0.5) if my analysis was indeed correct). The minimum possible value is then minus the maximum.
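That test is simple enough to sketch. Here is a minimal Python version (my own reconstruction, not code either poster ran): the delta-vector length stands in for the dot product, and the original 1985 s-curve s(t) = 3t^2 - 2t^3 drives the interpolation.

```python
import itertools, math

def s(t):                       # original 1985 s-curve
    return t * t * (3.0 - 2.0 * t)

def lerp(w, a, b):
    return a + w * (b - a)

def optimal_noise(x, y, z):
    # Corner "dot products" = delta-vector lengths (i.e. optimal gradients).
    d = {}
    for cx, cy, cz in itertools.product((0, 1), repeat=3):
        d[cx, cy, cz] = math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2)
    sx, sy, sz = s(x), s(y), s(z)
    z0 = lerp(sy, lerp(sx, d[0,0,0], d[1,0,0]), lerp(sx, d[0,1,0], d[1,1,0]))
    z1 = lerp(sy, lerp(sx, d[0,0,1], d[1,0,1]), lerp(sx, d[0,1,1], d[1,1,1]))
    return lerp(sz, z0, z1)

# Scan the unit cube on a coarse grid.
step = 0.05
best, where = max(
    (optimal_noise(i*step, j*step, k*step), (i*step, j*step, k*step))
    for i in range(21) for j in range(21) for k in range(21)
)
print(where, best)
```

On this grid the maximum does land at the cube center, at 0.5*Sqrt(3) ≈ 0.866025, matching the expected range quoted above.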
Quote:
Concerning "Improved Noise" Perlin Noise:

Quote:Original post by technobot
This is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

3D Perlin Noise based on Ken Perlin's improved noise from his "Improving Noise" paper is different (and has a different range) from the original implementation. Here is a partial list of "Improving Noise" differences, specifically only the differences that change the range:

"Improving Noise" Perlin Noise:
1) The gradients are *not* normalized, and have a length of sqrt(2).
2) The gradients are *not* random and do *not* point in the direction that maximizes (or minimizes) the noise value.

In fact, here they are:

1,1,0, -1,1,0, 1,-1,0, -1,-1,0,
1,0,1, -1,0,1, 1,0,-1, -1,0,-1,
0,1,1, 0,-1,1, 0,1,-1, 0,-1,-1,
1,1,0, -1,1,0, 0,-1,1, 0,-1,-1

(The last row repeats four of the earlier gradients, on purpose, to pad the table out to 16 entries.)

These gradients create a different range than the original 3D noise, due to the two points above. Now that we have the range of Perlin's original noise calculated, let's work on his Improved Noise range!

Also notice that:
3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.
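All three points can be checked mechanically against the table above; a throwaway Python sketch (mine, not from the thread):

```python
import math

# The 12 distinct gradients plus the 4 padding entries, as listed above.
GRADS = [
    (1,1,0), (-1,1,0), (1,-1,0), (-1,-1,0),
    (1,0,1), (-1,0,1), (1,0,-1), (-1,0,-1),
    (0,1,1), (0,-1,1), (0,1,-1), (0,-1,-1),
    (1,1,0), (-1,1,0), (0,-1,1), (0,-1,-1),
]

for g in GRADS:
    # (1) not normalized: every vector has length sqrt(2)
    assert math.isclose(math.sqrt(sum(c*c for c in g)), math.sqrt(2))
    # (3) exactly one zero coordinate -> effectively a 2D vector
    assert g.count(0) == 1

print(len(set(GRADS)))  # 12 distinct directions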

I've already done this analysis in the thread I linked to earlier, although it's somewhat scattered through the thread. Basically, the maximum value should again be obtained at (0.5, 0.5, 0.5), the actual length of which is 0.5*Sqrt(3). But because of (3), the effective length is only 0.5*Sqrt(2). All the gradient vectors also have a length of Sqrt(2), so the dot products return values between +/- 0.5*Sqrt(2)*Sqrt(2) = 0.5*2 = 1. As the interpolation merely averages those values, the noise range is hence exactly -1..1. I will post a more complete analysis later.

However, since s(t) is different, the actual maximum may be elsewhere (not middle of unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

EDIT: Hmm, the last part of point 2 is important. I think I have neglected it in the above quick analysis. I will get back to this later.
Michael K.
(EDIT: I posted this BEFORE reading technobot's post above. This should explain why this post does not address anything he says!)

Re: 3D IMPROVED Perlin Noise:
Quote:Original post by Doucette
My 3D Perlin Noise, based on Ken Perlin's "Improving Noise" has this range: -1.0332..1.03458. (Based on 6 billion random samples. It took over 62 million samples JUST to break the -1..1 barrier, and over 235 million samples to break both ends of the -1..1 barrier.)

I just coded a non-static 3D Perlin Noise "improved noise" tester. Instead of testing over a 0.0..256.0 cubic range, it tests only one 0.0..1.0 "cel" but randomly changes the gradients each time. This way, I am testing the infinite possibilities of improved Perlin Noise.

Next, I *forced* all the gradients to point inwards (still only using the set of 12 presets) to force the maximum noise possible. I can explain why this helps produce the maximum noise to anyone questioning it.

It only took 486 samples (instead of 62,000,000) to break the 1 barrier! (The -1 barrier will not be broken, as I'm forcing the maximum value, not the minimum.)
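A reconstruction of that forced-inward tester in Python (my sketch — far fewer samples than the billions quoted, and the helper names are mine): each corner only ever gets the three presets pointing back into the cell, and each such dot product reduces to a sum of two delta components.

```python
import itertools, random

def fade(t):                       # improved-noise s-curve
    return t*t*t*(t*(t*6 - 15) + 10)

def lerp(w, a, b):
    return a + w*(b - a)

def max_noise(x, y, z):
    """Noise value when every corner uses its best inward-pointing gradient.

    For corner (cx,cy,cz) the three inward presets reduce to sums of two
    absolute delta components."""
    d = {}
    for cx, cy, cz in itertools.product((0, 1), repeat=3):
        dx, dy, dz = abs(x - cx), abs(y - cy), abs(z - cz)
        d[cx, cy, cz] = max(dx + dy, dx + dz, dy + dz)
    sx, sy, sz = fade(x), fade(y), fade(z)
    z0 = lerp(sy, lerp(sx, d[0,0,0], d[1,0,0]), lerp(sx, d[0,1,0], d[1,1,0]))
    z1 = lerp(sy, lerp(sx, d[0,0,1], d[1,0,1]), lerp(sx, d[0,1,1], d[1,1,1]))
    return lerp(sz, z0, z1)

random.seed(1)
best = max(max_noise(random.random(), random.random(), random.random())
           for _ in range(100_000))
print(best)   # breaks the 1.0 "barrier" quickly
```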

UPDATE EDIT:

So far, I am at 7.09 billion samples and the maximum has been the same ever since the 2,266,232,577 sample:
location =
(0.515954047287821,0.500268734573173,0.354526271511759)
gradients =
g000 = (1,1,0)
g001 = (1,0,-1)
g010 = (1,-1,0)
g011 = (1,0,-1)
g100 = (-1,1,0)
g101 = (0,1,-1)
g110 = (-1,-1,0)
g111 = (0,-1,-1)
max = 1.036333179749597

UPDATE EDIT:

Noticing that the y in the location above is very close to 0.5, I set one of the dimensions to be locked to 0.5 (x). I beat the above value by doing this after only 85,077,428 samples:

location =
(0.500000000000000,0.518421096020481,0.354506041964626)
gradients =
g000 = (1,1,0)
g001 = (0,1,-1)
g010 = (1,-1,0)
g011 = (1,0,-1)
g100 = (-1,1,0)
g101 = (0,1,-1)
g110 = (-1,-1,0)
g111 = (-1,0,-1)
max = 1.036353132373597

It wasn't beaten after 450,000,000 samples.

UPDATE EDIT:

I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:

location = (0.481445960659562,0.355424509538798,0.500000000000000)
gradients =
g000 = (1,0,1)
g100 = (-1,0,1)
g010 = (0,-1,1)
g110 = (-1,-1,0)
g001 = (1,0,-1)
g101 = (-1,0,-1)
g011 = (0,-1,-1)
g111 = (-1,-1,0)
max = 1.036353775489912

It wasn't beaten after 400,000,000 samples.

I think it's time to try to solve this mathematically.

...

This might get confusing to some... so I'll repeat that this is the 'improved' 3D Perlin Noise, from Ken Perlin's "Improving Noise" 2002 paper, not the original 3D Perlin Noise from Perlin's 1985 paper.

[Edited by - Doucette on November 30, 2004 1:41:20 PM]

Matthew Doucette, Xona Games

Re: 1985 Perlin Noise

Quote:Original post by technobot
This gives me a much better idea:

Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

...

Re: 2002 'improved' Perlin Noise

Quote:Original post by technobot
Also notice that:
3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

I did notice that. The test program I posted about in my previous post ignores any potential optimizations from this. I'll think more on this later...

Quote:Original post by technobot
Basically, the maximum value should again be obtained for (0.5, 0.5, 0.5) ...[cut]... the noise range is hence exactly -1..1. I will post a more complete analisys later.

I set my testing program (again, the one I posted about in my previous post) to force (0.5,0.5,0.5) and the max value was exactly 1. It only took 12 samples to find it, and after 1,000,000,000 samples it was never beat. I also performed some simple math on paper and proved this, too. So you are right that the maximum value obtained at (0.5,0.5,0.5) is 1, but wrong in your assumption that the maximum value is obtained at (0.5,0.5,0.5)... assuming I am right. I hope that makes sense.

Quote:Original post by technobot
However, since s(t) is different, the actual maximum may be elsewhere (not middle of unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

I believe this is the case. gamelife's post basically proves this once you go through the math. We no longer have to believe that our code is just buggy for producing values outside of -1..1.

My data in my previous post should prove this too, if anyone cares to work it out: I gave the sample location and all the gradient vectors. That's all you need.

[Edited by - Doucette on November 30, 2004 12:59:55 PM]

Matthew Doucette, Xona Games

Quote:Original post by technobot
Also notice that:
3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

Ok, I have had time to think about this and implemented an optimization from it. (This is "improved" 3D Perlin Noise, for those just reading from here.)

As I go through my random x,y,z 0.0..1.0 sample points, for every dot-product (of delta & gradient vector) I choose the gradient vector (out of the 12 presets) that maximizes each dot product result. This reduces the trial and error significantly.

It was easy to do this because the gradient vectors are all values of either -1, 0, or 1. This makes the dot product ONE ADDITION!

For example, this is an unoptimized dot product:

A.B = (A.x * B.x) + (A.y * B.y) + (A.z * B.z)
(3 multiplies and 2 additions)

This is an optimized dot product when B = (1,0,1):

A.B = A.(1,0,1) = A.x + A.z
(1 addition)
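As a sketch (hypothetical helper name, mine), the whole optimization is just sign flips and additions:

```python
def dot_with_preset(a, grad):
    """Dot product with a preset gradient whose entries are -1, 0, or 1:
    just add (or subtract) the two components the gradient selects."""
    ax, ay, az = a
    gx, gy, gz = grad
    total = 0.0
    if gx:
        total += ax if gx > 0 else -ax
    if gy:
        total += ay if gy > 0 else -ay
    if gz:
        total += az if gz > 0 else -az
    return total

print(dot_with_preset((0.3, 0.4, 0.5), (1, 0, 1)))   # 0.3 + 0.5 = 0.8
```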

...

My previous maximum:
Quote:Original post by Doucette
I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:

location = (0.481445960659562,0.355424509538798,0.500000000000000)
gradients =
g000 = (1,0,1)
g100 = (-1,0,1)
g010 = (0,-1,1)
g110 = (-1,-1,0)
g001 = (1,0,-1)
g101 = (-1,0,-1)
g011 = (0,-1,-1)
g111 = (-1,-1,0)
max = 1.036353775489912

Now to try to beat this...

[EDIT: MY CODE WAS WRONG. 1.055520362955293 was the maximum value I had posted here, but it is wrong and was exaggerated.]

I have run the program again with the proper, working code. After 2,895,347,588 samples, I achieved a new maximum:

location = (0.644795088933775,0.499909066789684,0.481530191209012)
dot products =
g000 = 1.1447
g100 = 0.981439
g010 = 1.14489
g110 = 0.981621
g001 = 1.16326
g101 = 1.01838
g011 = 1.16326
g111 = 1.01856
i = 2895347588
max = 1.036353777355945

The max was not beaten after 6,000,000,000 samples.

Next (as I did with my older un-optimized code) I locked x=0.5, as y=0.499909066789684 in my data above seems to suggest that the result contains an exact 0.5. Here are my results:

In *only* 2,141,561 samples, the previous maximum was beaten. It was continually beaten:

i = 1 max = 0.808986516172702
i = 3 max = 1.018318228389438
i = 16 max = 1.031296019307135
i = 18 max = 1.034753783743431
i = 54 max = 1.036196421499095
i = 435 max = 1.036323797020139
i = 1335 max = 1.036336826993154
i = 2876 max = 1.036341702402226
i = 4502 max = 1.036352946204039
i = 51127 max = 1.036353365009920
i = 297585 max = 1.036353553349657
i = 643643 max = 1.036353650603703
i = 1349779 max = 1.036353691318686
i = 2141561 max = 1.036353808383355 <-- previous max beat
i = 8919930 max = 1.036353808843170
i = 51223134 max = 1.036353809087721
i = 84932703 max = 1.036353809280504
i = 130409817 max = 1.036353809393841
i = 133522358 max = 1.036353810547926
i = 210785009 max = 1.036353810588216
i = 259749544 max = 1.036353810658400
i = 286574578 max = 1.036353810716261
i = 380531512 max = 1.036353810995271
i = 1236970362 max = 1.036353811091619
i = 1439470648 max = 1.036353811095718
i = 2748843691 max = 1.036353811166373
i = 3870810752 max = 1.036353811197563
i = 6330661079 max = 1.036353811203274
i = 9700000000 hasn't been beat yet...

location = (0.500000000000000,0.518508155107934,0.355254297873072)
dot products =
g000 = 1.01851
g100 = 1.01851
g010 = 0.981492
g110 = 0.981492
g001 = 1.16325
g101 = 1.16325
g011 = 1.14475
g111 = 1.14475
i = 6330661079
max = 1.036353811203274
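Those figures can be cross-checked independently: rebuild the eight best-inward dot products from the reported location alone, then interpolate with the improved fade curve. This sketch (my reconstruction, not the poster's code) reproduces the reported max:

```python
import itertools

LOC = (0.500000000000000, 0.518508155107934, 0.355254297873072)
REPORTED_MAX = 1.036353811203274

def fade(t):
    return t*t*t*(t*(t*6 - 15) + 10)

def lerp(w, a, b):
    return a + w*(b - a)

x, y, z = LOC
d = {}
for cx, cy, cz in itertools.product((0, 1), repeat=3):
    dx, dy, dz = abs(x - cx), abs(y - cy), abs(z - cz)
    # Best of the three inward presets, via the 'addition optimization'.
    d[cx, cy, cz] = max(dx + dy, dx + dz, dy + dz)

sx, sy, sz = fade(x), fade(y), fade(z)
z0 = lerp(sy, lerp(sx, d[0,0,0], d[1,0,0]), lerp(sx, d[0,1,0], d[1,1,0]))
z1 = lerp(sy, lerp(sx, d[0,0,1], d[1,0,1]), lerp(sx, d[0,1,1], d[1,1,1]))
value = lerp(sz, z0, z1)
print(value)   # matches the reported max
```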

Notes about the data:

1) The gradients are calculated BEFORE s-curve distortion. In fact, I replace my linear interpolation with what I call "s-curve interpolation" which produces the same results as distorting the inputs to linear interpolation by an s-curve. It makes more sense (to me) to consider it an "s-curve interpolation".

2) I did not show the gradient vectors. Why? I never actually calculate them due to the 'addition optimization' of the dot products. I was able to extract the value from the best gradient vector without finding out what it was. But, in case you want to double check the math of the sample data above, you can figure out the gradients from the sample point location.

For example, from (0,0,0) corner:
A.B = A.(1,1,0) or
= A.(1,0,1) or
= A.(0,1,1)

(Only those three, as the other nine point outwards. Every corner has only three of the twelve gradients that point inwards. Remember, gradients pointing inwards help maximize the noise value, so we only want to consider those.)

so, using the 'addition optimization' due to the gradients gives...

A.B = A.x + A.y or
= A.x + A.z or
= A.y + A.z

Just choose the largest sum, and that tells you the gradient. In this case it was (1,0,1).
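That recovery step, sketched in Python (hypothetical helper name), checked against the (0,0,0) corner of the quoted sample point:

```python
def best_inward_gradient(delta):
    """For the (0,0,0) corner: pick whichever of (1,1,0), (1,0,1), (0,1,1)
    maximizes the dot product. With these presets each dot product is just
    the sum of two delta components, so compare the three sums."""
    dx, dy, dz = delta
    candidates = {
        (1, 1, 0): dx + dy,
        (1, 0, 1): dx + dz,
        (0, 1, 1): dy + dz,
    }
    return max(candidates, key=candidates.get)

# Sample point from the quoted data, relative to corner (0,0,0):
print(best_inward_gradient((0.481445960659562, 0.355424509538798, 0.5)))
# -> (1, 0, 1), matching g000 in the quoted data
```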

...

(FYI, the removal of the multiplications by the 12 preset gradient vectors was one of Ken Perlin's improvements in his "Improved Noise" paper.)

[Edited by - Doucette on December 1, 2004 10:44:34 AM]

Matthew Doucette, Xona Games

Quote:Original post by Doucette
Re: 1985 Perlin Noise

Quote:Original post by technobot
This gives me a much better idea:

Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results? I want to check whether the Excel analysis is flawed or not, and also to compare the behaviors of these two functions.

NOTE:
To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post... is that correct? Also, try going at 0.001 intervals within the unit cube, to get a more systematic approximation (rather than a statistical one).
Michael K.
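Technobot's adjusted test can be sketched like so (my reconstruction; pure Python, so it uses a 0.05 grid rather than 0.001 to keep the run short — even this coarse grid breaks the 1.0 barrier):

```python
import itertools

GRADS = [(1,1,0),(-1,1,0),(1,-1,0),(-1,-1,0),
         (1,0,1),(-1,0,1),(1,0,-1),(-1,0,-1),
         (0,1,1),(0,-1,1),(0,1,-1),(0,-1,-1)]

def fade(t):
    return t*t*t*(t*(t*6 - 15) + 10)

def lerp(w, a, b):
    return a + w*(b - a)

def max_noise(x, y, z):
    # For each corner, take the best of all 12 preset dot products.
    d = {}
    for cx, cy, cz in itertools.product((0, 1), repeat=3):
        dx, dy, dz = x - cx, y - cy, z - cz
        d[cx, cy, cz] = max(gx*dx + gy*dy + gz*dz for gx, gy, gz in GRADS)
    sx, sy, sz = fade(x), fade(y), fade(z)
    z0 = lerp(sy, lerp(sx, d[0,0,0], d[1,0,0]), lerp(sx, d[0,1,0], d[1,1,0]))
    z1 = lerp(sy, lerp(sx, d[0,0,1], d[1,0,1]), lerp(sx, d[0,1,1], d[1,1,1]))
    return lerp(sz, z0, z1)

step, n = 0.05, 21
best, where = max(
    (max_noise(i*step, j*step, k*step), (i*step, j*step, k*step))
    for i in range(n) for j in range(n) for k in range(n)
)
print(where, best)   # already above 1.0 on this coarse grid
```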
Quote:Original post by technobot
Quote:Original post by Doucette
Re: 1985 Perlin Noise

Quote:Original post by technobot
This gives me a much better idea:

Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results?

Sure. I will do this tomorrow. So you want to see if the old s-curve and the new s-curve give the same result? I believe they will, as they all result in 0.5 given 0.5.

Quote:Original post by technobot
To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post.. is that correct?

Yes, it is exactly what I did. I made some optimizations which made it faster (with the downside of not actually calculating the gradients to display).

Matthew Doucette, Xona Games

Ok, let's see what's going on in the Improved Noise. I'm assuming 3D, since any other dimensionality would require its own set of gradient vectors. The four stages are the same as for the old noise, but behave a bit differently:

Step 1:
Just as boring as last time.

Step 2:
The gradient vectors are of length Sqrt(2). One component of each gradient vector is always zero, so the maximum effective length of the delta vectors is also Sqrt(2). The minimum delta vector length is still 0.

Step 3:
Maximum possible values are when the following conditions are met:
1. The delta vector is in the same direction as the gradient vector (cosine of angle equals 1).
2. The delta vector ends on the containing unit cube's surface (maximum delta vector length given the direction constraint).

The gradient vector's zero coordinate cancels out the corresponding coordinate of the delta vector during the dot product calculation, so we can treat that coordinate as also being zero. The gradient vectors themselves always satisfy both of the above conditions. The result is that if the delta vector satisfies those conditions, we are essentially taking the dot product of the gradient vector with itself, returning the square of its length, i.e. 2 (btw, this would also hold for the old Perlin noise, if its gradient vectors were not normalized). So the dot products give a range of -2..2. This is then reduced by the various dependencies in step 4.

Step 4:
The behavior of the new s(t) is almost identical to that of the old s(t), leading to an initial assumption that we should again get the maximum noise value at the center of the unit cube, and the maximum value should be 0.5*Sqrt(2)*Sqrt(2) = 1. However, the supplied test data clearly falsifies this assumption, so there must be something else going on here.

I suspect that part of this might be due to the fact that the dot products are functions of all three components of P, leading to interdependencies that make the 1D analysis I did in Excel inaccurate. The other thing to consider is the nature of the gradient vectors. Since all the gradient vectors have a zero coordinate, and since the delta vectors are not constrained to be of unit length, it does not matter what the corresponding coordinate of the delta vector equals. This removes one of the dependencies, and alters the behavior of the noise. For example, if the gradient vector is (0,1,1), then the dot product for that corner no longer depends on P.x.


Since the delta vectors always point inward, the largest results will be attained if all the gradient vectors for the containing unit cube also point inward. There are only three such gradient vectors for each corner. For corner (0,0,0), these are (1,1,0), (1,0,1), and (0,1,1). By mirroring around the x, y, and/or z axes, the corresponding vectors for the other corners can be obtained.

For each pair of corners, there are two options: A) both gradient vectors have a zero component on the same axis; or B) the gradient vectors have zero components on different axes. Notice that the vectors are equivalent - we can get from one to another by renaming the axes. This allows us to arbitrarily select which axes have zero components in the gradients.

Let's analyze each case separately. frac(P)=(x,y,z):

Case A:
Since the y and z components of the gradient vectors are non-zero, both dot products are still functions of y and z. Assuming the Excel analysis was correct after all, we should get maximum values for y=z=0.5. This is also hinted at by the test data. Hence:

grad(a) = grad(b) = (0,1,1)
delta(a) = (x , 0.5, 0.5)
delta(b) = (x-1, 0.5, 0.5)

a = grad(a).delta(a) = 0*x + 1*0.5 + 1*0.5 = 1
b = grad(b).delta(b) = 0*(x-1) + 1*0.5 + 1*0.5 = 1

An interpolation along x will then give 1. The only way the noise can be more than 1, is if one or more of the other x interpolations returns a value above 1.
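Case A is easy to spot-check numerically (a throwaway sketch of mine): with both gradients (0,1,1) and y = z = 0.5, both dot products equal 1 for any x, so the x interpolation returns exactly 1.

```python
def fade(t):                      # improved s-curve
    return t*t*t*(t*(t*6 - 15) + 10)

def dot(g, d):
    return sum(gi * di for gi, di in zip(g, d))

# Case A: grad(a) = grad(b) = (0,1,1), y = z = 0.5, x free.
for x in (0.0, 0.25, 0.5, 0.9):
    a = dot((0, 1, 1), (x,     0.5, 0.5))   # = 1 regardless of x
    b = dot((0, 1, 1), (x - 1, 0.5, 0.5))   # = 1 regardless of x
    assert a == b == 1.0
    assert a + fade(x) * (b - a) == 1.0     # x interpolation stays at 1
```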



Case B:
grad(a) = (0,1,1)
grad(b) = (1,0,1)

As before, we can guess z=0.5, since both a and b are functions of z. Since b is not a function of y, the Excel analysis does not apply to y. Similarly, it does not apply to x either. We have to find the point of maximum noise value some other way. Suppose the point is (x_max, y_max, 0.5). x_max then depends on the selection of y_max, and vice versa. This leaves us with a set of interdependent partial derivatives, which as far as I can tell have no analytical solution. :-/


If you're interested, here is what I've got for x, treating y as a parameter (assuming y=y_max).

delta(a) = (x , y_max, 0.5)
delta(b) = (x-1, y_max, 0.5)

k=y_max+1
a = grad(a).delta(a) = 0*x + 1*y + 1*0.5 = y+0.5 = k-0.5
b = grad(b).delta(b) = 1*(x-1) + 0*y + 1*0.5 = x - 1 + 0.5 = x-0.5
b-a = x-y-1 = x-k

t = x
s(t) = s(x) = 6*x^5 - 15*x^4 + 10*x^3
p(s) = b*s + a*(1-s) = b*s + a - a*s = (b-a)*s + a
p(x) = (x-k)*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5
p(x) = (6*x^6 - 15*x^5 + 10*x^4) - k*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

(Looking for the maximum:)

p' = dp/dx = 36*x^5 + (-75-30k)*x^4 + (40+60k)*x^3 - 30k*x^2 = 0
p' = x^2*[36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k] = 0

x1=x2=0.

36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k = 0

This is where I am stuck... Note that while this could be solved parametrically using the cubic equation, that is rather complex, and would still not give any clue as to what k (or y_max) should be.
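Even without a closed form, the cubic can be probed numerically. A sketch (mine; k = 1.5 is just an example value): plain bisection finds the single root of the bracketed cubic in (0,1), and the sign change of p' around it shows it is a local *minimum* of p, so on 0..1 the x interpolation peaks at an endpoint (p(0) = k-0.5, p(1) = 0.5).

```python
def cubic(x, k):
    # Bracketed factor of p'(x) after dividing out x^2.
    return 36*x**3 + (-75 - 30*k)*x**2 + (40 + 60*k)*x - 30*k

def p(x, k):
    return 6*x**6 + (-15 - 6*k)*x**5 + (10 + 15*k)*x**4 - 10*k*x**3 + k - 0.5

k = 1.5                       # example value; k = y_max + 1 ranges over 1..2
lo, hi = 0.0, 1.0             # cubic(0) = -30k < 0, cubic(1) = 1 > 0
for _ in range(60):           # plain bisection
    mid = 0.5 * (lo + hi)
    if cubic(mid, k) < 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)

print(root)                                               # critical point in (0, 1)
print(cubic(root - 0.01, k) < 0 < cubic(root + 0.01, k))  # p' changes - to +: a minimum
print(p(0.0, k), p(1.0, k))                               # endpoint values: k-0.5 and 0.5
```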


A note concerning the randomness of the gradient vectors: as Ken Perlin mentions in his paper, the permutation table provides plenty of randomness. This is also true for the old noise. Anyway, it doesn't affect the limits of the noise, only your chances of reaching those limits.
Michael K.
Quote:Original post by technobot
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5


Let me get this straight: you want the maximum value of the multivariable function?

p(x, k) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5
I asked my friend who said he was pretty sure that the maximum value is located at x=1 and k=inf...

Although, we aren't too sure if it is correct... I have tried to graph the function with Mathematica to no avail...

Now when I constrain the two variables to be within -1 and 1, I get x=-1 and k=1.

Does any of this sound right?

[Edited by - Dwiel on November 30, 2004 6:44:49 PM]
Quote:Original post by Dwiel
Quote:Original post by technobot
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5


Let me get this straight: you want the maximum value of the multivariable function?

Err... basically, yes. I was hoping to kick k out, and find the optimum x. This is for the x interpolation of the Perlin noise. Then, since k behaves the same in the y interpolation, the optimum k would be the same.

EDIT: Correction: what I want, is given a known y, what is the x that gives a maximum.

Solving both interpolations simultaneously (i.e. finding the maximum of the Perlin noise function directly) is much more complex, since there are several different cases, depending on the other gradient vectors.

Quote:
Now when I cnstrain the two variables to be within -1 and 1, I get x=-1 and k=1

The domain that we're interested in is x=0..1, y=0..1. Given that k=y+1, that makes it x=0..1, k=1..2. I'll try to plot this in Excel... don't know if I'll get anything useful.

EDIT: Ok. I did the plot. Unfortunately, for any given y, there is no maximum along x, except for x=0. This isn't very useful, for reasons that are rather hard to explain. We have no choice but to analyze the complete noise function directly. That is way too much work for me (I don't have that much spare time)...

[Edited by - technobot on December 1, 2004 6:35:15 AM]
Michael K.
I really appreciate you guys' insistence on finding the exact value :)
I tried to derive it once yesterday, and quickly gave up as I found it more complicated than expected...

