# 2D Perlin Noise (gradient noise) range = ?

## Recommended Posts

##### Share on other sites
Sampling at regular intervals is NOT a good way to test the full range of a Perlin noise (trust me, I've tried this). You have to take a large number of random samples. Also, note that the distribution of Perlin noise is approximately gaussian, so you're not very likely to get the extreme values (which is why the large sample count is needed - although a smaller count may still give you a good estimate of the range).

You might find this thread useful, although it deals mostly with the improved noise implementation.

EDIT: that thread deals with 3D noise, but the range doesn't depend on the dimensionality.

##### Share on other sites
technobot,

Sampling at regular intervals *IS* a good way to test the full range of a Perlin Noise if the intervals are small enough. Likewise, random sampling only works if you sample enough random points. My 0.01 interval is plenty small enough.

To reiterate, I sampled it at every 0.01 interval across 0.0..256.0 in 2D. That totals to 655,360,000 evenly spaced sample points. However, the total is irrelevant. What matters is the resolution upon each "cel". For each noise "cel" it is 10,000 sample points, or 100 across x and 100 across y. So, when you picture the worst-case scenario of a noise "cel", you can see that it has *plenty* of resolution.

But, I did as you said anyway, just to satisfy all types of testing. I performed the same large number of random samples, 655,360,000, on the exact same noise (composed of the exact same gradient array):

min..max = -0.69424..0.695468

Random sampling came up with slightly smaller MINs and slightly larger MAXs, so technobot's method worked better, although either method was sufficient. (It is important to note that you are less likely to make mistakes with random sampling.)

Next, I sampled TEN TIMES that amount: 6,553,600,000 random samples:

min..max = -0.69424..0.695648

This still shows that the range is -sqrt(0.5)..sqrt(0.5), not -1..1 as lots of people think.
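For anyone who wants to check the -sqrt(0.5)..sqrt(0.5) claim without billions of samples, here is a minimal sketch (not the code used in this thread) of a single 2D noise cell whose four unit gradients all point at the cell center - the worst case. The value at (0.5, 0.5) then comes out to exactly sqrt(0.5):

```python
import math

def s_curve(t):
    # original 1985 interpolant: 3t^2 - 2t^3
    return t * t * (3.0 - 2.0 * t)

def lerp(t, a, b):
    return a + t * (b - a)

def cell_noise_2d(x, y, grads):
    # one cell of 2D gradient noise; grads maps corner -> unit gradient
    dots = {}
    for (cx, cy), (gx, gy) in grads.items():
        dots[(cx, cy)] = (x - cx) * gx + (y - cy) * gy  # delta . gradient
    sx, sy = s_curve(x), s_curve(y)
    bottom = lerp(sx, dots[(0, 0)], dots[(1, 0)])
    top = lerp(sx, dots[(0, 1)], dots[(1, 1)])
    return lerp(sy, bottom, top)

# worst case: all four unit gradients point at the cell center
inv = 1.0 / math.sqrt(2.0)
grads = {(0, 0): (inv, inv), (1, 0): (-inv, inv),
         (0, 1): (inv, -inv), (1, 1): (-inv, -inv)}
print(cell_noise_2d(0.5, 0.5, grads))  # sqrt(0.5), about 0.70711
```

Every corner's dot product is sqrt(0.5), and the interpolation at the center just averages them, so the cell maximum is sqrt(0.5) rather than 1.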

...

Concerning 3D Perlin Noise:

Also, assuming that I am correct in my first post (which is what I am here to find out), the 3D noise range *is* different from 2D. And then it is different *again* if you use Ken Perlin's 2002 "Improving Noise" suggestions, as he un-randomizes and un-normalizes the gradients, which affects the output range. My 3D Perlin Noise, based on Ken Perlin's "Improving Noise", has this range: -1.0332..1.03458. (Based on 6 billion random samples. It took over 62 million samples JUST to break the -1..1 barrier, and over 235 million samples to break both ends of the -1..1 barrier.) I'm not sure if my code is buggy yet, so I do not wish to get into it too much until my 2D range problem is solved.

I would post this in your thread, but it is closed. I *despise* that. Threads that have not solved the issue should not be closed. We might as well continue the 3D discussion here, as I will be getting into that afterwards. But, for now, let's try to avoid getting into 3D and all its extra complications until after we determine whether the 2D range is -sqrt(0.5)..sqrt(0.5). I think 3D discussion will make more sense afterwards.

[Edited by - Doucette on November 29, 2004 3:20:32 PM]

##### Share on other sites
Perlin noise has the potential to create values between -1 and 1, but it is not very likely. As technobot mentioned, it has a gaussian-like distribution, so the range you found over a relatively small domain is not abnormal.

If you require a uniform distribution, then you either have to monkey with the output, or use some method other than Perlin noise.

karg

##### Share on other sites
I am not concerned with distribution.
I am only concerned with absolute range.

I.e. What is the minimum and maximum value possible from 2D Perlin Noise?

I believe it is *NOT* the assumed, popular -1..1.

##### Share on other sites
It just depends on how you produce it.

Just create a point with the maximum values for each octave and combine them to see what you get. Same for the minimums. Then you can scale to whatever values you want.

Now, just because you have the possibility of getting the maximum and minimum values, doesn't mean that a sample you create will contain those values.

What exactly are you using these numbers for?

karg

##### Share on other sites
Quote:
 Original post by Doucette: Sampling at regular intervals *IS* a good way to test the full range of a Perlin Noise if the intervals are small enough.

Actually, come to think of it, you're right - if the intervals are small enough, and if the domain is large enough. Both of these conditions seem to be satisfied in your test though.

Quote:
 [...] Next, I sampled TEN TIMES that amount: 6,553,600,000 random samples [...] This still shows that the range is -sqrt(0.5)..sqrt(0.5), not -1..1 as lots of people think.

As I said, "a smaller count may still give you a good estimate of the range" (because the sample distribution is the same - you just need to have enough samples to get a statistical meaning), so this was expected.

Quote:
 Original post by Doucette: Concerning 3D Perlin Noise: Also, assuming that I am correct in my first post (which is what I am here to find out), 3D Noise range *is* different than 2D. And then it is different *again* if you use Ken Perlin's 2002 "Improving Noise" suggestions as he un-randomizes and un-normalizes the gradients which affect the output range. My 3D Perlin Noise, based on Ken Perlin's "Improving Noise" has this range: -1.0332..1.03458. (Based on 6 billion random samples.) I'm not sure if my code is buggy yet, so I do not wish to get into it too much until my 2D range problem is solved. I would post this in your thread, but it is closed. I *despise* that. Threads that have not solved the issue should not be closed. We might as well continue the 3D discussion here, as I will be getting into that afterwards. But, for now, let's try to avoid getting into 3D and all its extra complications until after we determine whether the 2D range is -sqrt(0.5)..sqrt(0.5). I think 3D discussion will make more sense afterwards.

Ok. Let's see... assuming the implementation you linked to in the first post, the algorithm is as follows (for all dimensionalities):

1. Find the unit segment/square/cube (or other nD cube) that contains the sample point.
2. For each corner of that cell, find: a) the vector from the corner to the sample point - let's call it the "delta vector"; b) the gradient vector associated with that corner (using the permutation table).
3. Still for each corner, calculate the dot product of the delta vector and the gradient vector for that corner.
4. Interpolate the results, first along the x axis, then the results of that along the y axis, and so on.

This is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

Step 1:
Nothing interesting.

Step 2:
The smallest possible delta vector for each corner is of length 0. The largest possible delta vector is when all the components are -/+ 1, giving a vector of length Sqrt(n), with n being the dimensionality of the noise. The gradient vectors are always of length 1, since they are normalized during initialization.

Step 3:
Since A.B = |A|*|B|*cos(A,B), this gives us values in the range -Sqrt(n)..Sqrt(n).

Step 4:
This one is actually trickier than it looks. To simplify the analysis, let's consider the 1D example first:
A---P------B

P is the sample point, A and B are the corners of the containing unit segment, t is the distance from A to P, and s(t) is the interpolation factor. Let's call the dot product results for A and B, a and b, respectively. The value at P is then:
p = b*s + a*(1-s)

This gives us the following behavior. What's important to remember is that a and b are also functions of t:

- At t=0.5, p is exactly the average of a and b, which is 0.5 in the optimal 1D case.
- At t=0 and t=1, p is zero, because then you either have "p = b*0 + 0*1" or "p = 0*1 + a*0".
- As we approach from t=0.5 to t=0, a decreases linearly, b increases linearly, but s and 1-s do not change linearly. Depending on how s and 1-s interact with their counterparts, this may result in one of two cases: a) p drops monotonically towards zero; or b) p rises slightly to some maximum, and then drops back to zero monotonically.
- As we approach from t=0.5 to t=1, the same thing occurs as when approaching t=0.

I've entered this into Excel, and got this (assuming a and b are such that a maximum value is attained):

| t    | s    | 1-s  | a    | b    | p    |
|------|------|------|------|------|------|
| 0.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 |
| 0.05 | 0.01 | 0.99 | 0.05 | 0.95 | 0.06 |
| 0.10 | 0.03 | 0.97 | 0.10 | 0.90 | 0.12 |
| 0.15 | 0.06 | 0.94 | 0.15 | 0.85 | 0.19 |
| 0.20 | 0.10 | 0.90 | 0.20 | 0.80 | 0.26 |
| 0.25 | 0.16 | 0.84 | 0.25 | 0.75 | 0.33 |
| 0.30 | 0.22 | 0.78 | 0.30 | 0.70 | 0.39 |
| 0.35 | 0.28 | 0.72 | 0.35 | 0.65 | 0.43 |
| 0.40 | 0.35 | 0.65 | 0.40 | 0.60 | 0.47 |
| 0.45 | 0.43 | 0.57 | 0.45 | 0.55 | 0.49 |
| 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| 0.55 | 0.57 | 0.43 | 0.55 | 0.45 | 0.49 |
| 0.60 | 0.65 | 0.35 | 0.60 | 0.40 | 0.47 |
| 0.65 | 0.72 | 0.28 | 0.65 | 0.35 | 0.43 |
| 0.70 | 0.78 | 0.22 | 0.70 | 0.30 | 0.39 |
| 0.75 | 0.84 | 0.16 | 0.75 | 0.25 | 0.33 |
| 0.80 | 0.90 | 0.10 | 0.80 | 0.20 | 0.26 |
| 0.85 | 0.94 | 0.06 | 0.85 | 0.15 | 0.19 |
| 0.90 | 0.97 | 0.03 | 0.90 | 0.10 | 0.12 |
| 0.95 | 0.99 | 0.01 | 0.95 | 0.05 | 0.06 |
| 1.00 | 1.00 | 0.00 | 1.00 | 0.00 | 0.00 |

As can be seen, p decreases monotonically as we go from t=0.5 to t=0 or t=1 (as I suspected). This means that the maximum values for the x interpolation can only be attained for frac(P.x)=0.5. Similarly, the maximum values for the y, z, and so on interpolations are also obtained at frac(P.coord)=0.5. Hence the maximum overall value is obtained for frac(P)=(0.5,0.5,...).
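The spreadsheet numbers can be reproduced in a few lines. This is a sketch assuming the optimal 1D dot products a = t and b = 1 - t, and the original s-curve 3t^2 - 2t^3:

```python
def s_curve(t):
    # original 1985 interpolant: 3t^2 - 2t^3
    return t * t * (3.0 - 2.0 * t)

def p(t):
    # interpolated value across the 1D cell, with the assumed optimal
    # dot products a = t at corner A and b = 1 - t at corner B
    a, b = t, 1.0 - t
    s = s_curve(t)
    return b * s + a * (1.0 - s)

values = [p(i / 1000.0) for i in range(1001)]
peak = max(values)
print(peak, values.index(peak) / 1000.0)  # 0.5 at t = 0.5
```

Scanning at 0.001 intervals confirms the table: p rises monotonically from 0 at t=0 to 0.5 at t=0.5, then falls back symmetrically.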

In that case, all the dot products would be between -0.5*Sqrt(n) and 0.5*Sqrt(n), the x interpolations would return the exact averages of the corresponding dot products, and each successive interpolation would return the exact average of the corresponding interpolations along the previous axis. If all the dot products are equal to their maximum value for that case, which is 0.5*Sqrt(n), this would then give us the overall maximum of 0.5*Sqrt(n). So if all of the above is correct, and I didn't miss anything, and there are no bugs in the implementation, then we should get the following ranges:

1D: -0.5*Sqrt(1)..0.5*Sqrt(1) = -0.5..0.5
2D: -0.5*Sqrt(2)..0.5*Sqrt(2) = -0.71..0.71
3D: -0.5*Sqrt(3)..0.5*Sqrt(3) = -0.87..0.87

Although, since at least the 3D case should be -1..1 AFAIK, I won't be surprised if I am indeed missing something (or have made a mistake in the analysis).

##### Share on other sites
Quote:
 Original post by Karg: It just depends on how you produce it.

True. That's why I specifically asked for the range of only one layer of gradient noise based on Ken Perlin's original code (http://mrl.nyu.edu/~perlin/doc/oscar.html#noise).

Quote:
 Original post by Karg: Just create a point with the maximum values for each octave and combine them to see what you get. Same for the minimums. Then you can scale to whatever values you want.

Perlin Noise is not multiple 'octave' layers. It is just one layer. This is a common misunderstanding as most implementations of Perlin Noise use multiple layers with a lacunarity (gap in frequency) of 2.0 which produces perfect "octave" layers. Perlin Noise, fundamentally, is only one of these layers. It is fine to use multiple layers of Perlin Noise (called spectral synthesis), but defining Perlin Noise as multiple layers is wrong.
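To make the distinction concrete, here is a hedged sketch of spectral synthesis. `noise2` stands in for any hypothetical single-layer 2D noise function; the wrapper just stacks scaled copies of it:

```python
def fbm(x, y, noise2, octaves=4, lacunarity=2.0, gain=0.5):
    # spectral synthesis: sum several layers of a single-layer noise
    # function, each at a higher frequency and lower amplitude
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * noise2(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total

# with a placeholder layer that always returns its maximum (1.0),
# four octaves sum to 1 + 0.5 + 0.25 + 0.125 = 1.875
print(fbm(0.3, 0.7, lambda x, y: 1.0))  # 1.875
```

Note that the summed range (1.875 here) is a property of the layering, not of the underlying single-layer noise - which is exactly why the two must not be conflated when asking about the range.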

The maximum and minimum values of a single Perlin Noise layer are all I want to know. You (Karg) are writing as if the max and min values are already known and as if I am asking for something else. (Please excuse me if I am wrong and misunderstood.) Anyway, figuring out the min/max range is the point of this entire thread.

I hope that explains it well.

...

technobot, your results were *exactly* what I had calculated for the noise ranges of Ken Perlin's original implementation.

Your steps 1..4 look good. Step 4 was hard to follow, but I followed it and it looks good and has the same results that I got.

Quote:
 Original post by technobot: So if all of the above is correct, and I didn't miss anything, and there are no bugs in the implementation, then we should get the following ranges:
1D: -0.5*Sqrt(1)..0.5*Sqrt(1) = -0.5..0.5
2D: -0.5*Sqrt(2)..0.5*Sqrt(2) = -0.71..0.71
3D: -0.5*Sqrt(3)..0.5*Sqrt(3) = -0.87..0.87
Although, since at least the 3D case should be -1..1 AFAIK, I won't be surprised if I am indeed missing something (or have made a mistake in the analysis).

I do not think you are missing anything. I think your calculations are correct. If you would have done 4D Perlin Noise, it would have come to a range of exactly -1..1.

I have not yet coded 3D *original* Perlin Noise. I will code it right now and post the ranges of my results...

Done. I'm calculating my results. So far, the range is:

-0.765153..0.720143 after 2.04 BILLION random samples!

-0.866025..0.866025 is the expected range.

Why isn't it close?

For two reasons. Firstly, 3D noise is calculated from *EIGHT* gradients (whereas 2D noise is calculated from only four gradients), and both implementations have only 256 pseudo-random gradients to choose from. If you think about it, it is much less likely that eight gradients that closely maximize or minimize the noise value exist out of the 256. That's 3.125% of all the gradients. Secondly, assuming that those eight gradients do exist in the list of 256, it is much less likely that they will all be corners around the same 3D "cel" of noise. Therefore, it is going to take A LOT longer to randomly stumble upon values close to the min and max values, assuming they even exist. There is a good chance that the -0.765153..0.720143 range above is the actual range of my noise (which would change if I randomized the pseudo-random 256 gradients used). This brings me to my next idea for testing:

I may code a 3D Perlin Noise function that keeps randomizing the 8 gradients, so that I will be testing an ever changing 3D Perlin Noise area, not a static area of noise that perhaps has no values close to the extremities of the range.

...

Concerning "Improved Noise" Perlin Noise:

Quote:
 Original post by technobotThis is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

3D Perlin Noise based on Ken Perlin's improved noise from his "Improving Noise" paper is different (and has a different range) from the original implementation. Here is a partial list of "Improving Noise" differences, specifically only the differences that change the range:

"Improving Noise" Perlin Noise:
1) The gradients are *not* normalized, and have a length of sqrt(2).
2) The gradients are *not* random and do *not* point in the direction that maximizes (or minimizes) the noise value.

In fact, here they are:

1,1,0, -1,1,0, 1,-1,0, -1,-1,0,
1,0,1, -1,0,1, 1,0,-1, -1,0,-1,
0,1,1, 0,-1,1, 0,1,-1, 0,-1,-1,
1,1,0, -1,1,0, 0,-1,1, 0,-1,-1

(The last row repeats four of the earlier gradients, on purpose, to pad the table to 16 entries.)

These gradients create a different range than the original 3D noise, due to the two points above. Now that we have the range of Perlin's original noise calculated, let's work on his Improved Noise range!

Once we do this, I bet it will explain why we *both* got ranges slightly over and under -1..1 for our 'improved' noise. Everyone thinks Perlin Noise is from -1..1, and I believe that is not the case.

(Here is the "Improved Noise" paper to see the full list of differences, which include using a new s-curve: http://mrl.nyu.edu/~perlin/paper445.pdf.)

[Edited by - Doucette on November 30, 2004 9:10:39 PM]

##### Share on other sites
For reference, here is an example where 3D improved Perlin noise yields a value outside -1..1.
The output value is the sum of 8 (delta_vector.gradient_vector * weight), shown below.

    delta vector     . gradient vector * weight
    (   x,   y,   z) . ( 1,  0,  1)    * (1-fade(z))*(1-fade(y))*(1-fade(x))
    (   x,   y, z-1) . ( 1,  1,  0)    * (  fade(z))*(1-fade(y))*(1-fade(x))
    (   x, y-1,   z) . ( 1,  0,  1)    * (1-fade(z))*(  fade(y))*(1-fade(x))
    (   x, y-1, z-1) . ( 1, -1,  0)    * (  fade(z))*(  fade(y))*(1-fade(x))
    ( x-1,   y,   z) . (-1,  0,  1)    * (1-fade(z))*(1-fade(y))*(  fade(x))
    ( x-1,   y, z-1) . (-1,  1,  0)    * (  fade(z))*(1-fade(y))*(  fade(x))
    ( x-1, y-1,   z) . (-1,  0,  1)    * (1-fade(z))*(  fade(y))*(  fade(x))
    ( x-1, y-1, z-1) . (-1, -1,  0)    * (  fade(z))*(  fade(y))*(  fade(x))

With the gradient vectors chosen as above,
and if (x,y,z) = (0.5, 0.5, 0.75)
then fade(z) = 0.896484375
The 1st line becomes (0.5+0.75) * (1-0.896484375)*0.5*0.5
The 2nd line becomes (0.5+0.5 ) * ( 0.896484375)*0.5*0.5
The remaining lines pair up the same way, by symmetry.

The sum is 1.0258789.
I guess a higher value could be possible - though extremely rare in real cases.
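gamelife's arithmetic is easy to verify mechanically. This sketch hard-codes the corner gradients from the table above (it is not a general noise implementation) and evaluates the weighted sum at (0.5, 0.5, 0.75):

```python
def fade(t):
    # improved-noise interpolant: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

# corner -> gradient, exactly as in the table above
grads = {
    (0, 0, 0): (1, 0, 1),  (0, 0, 1): (1, 1, 0),
    (0, 1, 0): (1, 0, 1),  (0, 1, 1): (1, -1, 0),
    (1, 0, 0): (-1, 0, 1), (1, 0, 1): (-1, 1, 0),
    (1, 1, 0): (-1, 0, 1), (1, 1, 1): (-1, -1, 0),
}

def noise_with_grads(x, y, z):
    total = 0.0
    for (i, j, k), (gx, gy, gz) in grads.items():
        dot = (x - i) * gx + (y - j) * gy + (z - k) * gz
        w = ((fade(x) if i else 1 - fade(x)) *
             (fade(y) if j else 1 - fade(y)) *
             (fade(z) if k else 1 - fade(z)))
        total += dot * w
    return total

print(noise_with_grads(0.5, 0.5, 0.75))  # 1.02587890625
```

All the intermediate values are exact binary fractions, so the result 1.02587890625 (> 1) is exact, not a rounding artifact.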

##### Share on other sites
Quote:
 Original post by gamelife: I guess a higher value could be possible - though extremely rare in real cases.

My range (as shown in the third post in this thread) was -1.0332..1.03458 for improved 3D Perlin Noise. Assuming my code is correct, it shows your assumption is right.

Also, thanks for the breakdown which proves it breaks the -1..1 barrier. It's easier to assume someone's code is wrong without a mathematical proof.

I will get to work on a proof of the actual range today.

[Edited by - Doucette on November 30, 2004 9:16:09 AM]

##### Share on other sites
Quote:
 Original post by Doucette: Step 4 was hard to follow, but I followed it and it looks good and has the same results that I got.

It was also quite hard to figure out. It's rather unintuitive unfortunately, because there are a lot of interdependencies.
Quote:
 Original post by Doucette: I have not yet coded 3D *original* Perlin Noise. I will code it right now and post the ranges of my results... Done. I'm calculating my results. So far, the range is: -0.765153..0.720143 after 2.04 BILLION random samples! -0.866025..0.866025 is the expected range. Why isn't it close? For two reasons. Firstly, 3D noise is calculated from *EIGHT* gradients ... If you think about it, it is much less likely that eight gradients that either closely maximize or minimize the noise value exist out of the 256. ... Secondly, assuming that those eight gradients do exist out of the list of 256, it is much less likely that those eight are going to all be corners around the same 3D "cel" of noise. ... This brings me to my next idea for testing: ...

This gives me a much better idea:

Treat the noise as if you do have optimal gradients. Instead of looking up the gradient vectors through the permutation table, calculate them by normalizing the delta vectors. Better yet, just skip the whole find-gradient-calc-dot-product routine, and return the delta vector's length as the "dot product" result (then interpolate those). After sampling this within the unit cube at some good resolution, this should give you both the maximum possible noise value and the coordinates where it is obtained (which should be (0.5, 0.5, 0.5) if my analysis was indeed correct). The minimum possible value is then minus the maximum.
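A sketch of that test, under the assumption that interpolating the delta-vector lengths with the original s-curve is equivalent to having an optimally aligned unit gradient at every corner:

```python
import math

def s_curve(t):
    # original 1985 interpolant: 3t^2 - 2t^3
    return t * t * (3.0 - 2.0 * t)

def optimal_noise(x, y, z):
    # original noise under the best-case assumption: every corner's unit
    # gradient is aligned with its delta vector, so each "dot product"
    # is simply the delta vector's length
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                d = math.sqrt((x - i) ** 2 + (y - j) ** 2 + (z - k) ** 2)
                w = ((s_curve(x) if i else 1 - s_curve(x)) *
                     (s_curve(y) if j else 1 - s_curve(y)) *
                     (s_curve(z) if k else 1 - s_curve(z)))
                val += d * w
    return val

# scan the unit cube at 0.05 intervals
best = max((optimal_noise(x / 20, y / 20, z / 20), (x / 20, y / 20, z / 20))
           for x in range(21) for y in range(21) for z in range(21))
print(best)  # about 0.866 = 0.5*Sqrt(3), at (0.5, 0.5, 0.5)
```

On a 0.05 grid the maximum does land at the cell center, consistent with the predicted 0.5*Sqrt(3) for 3D.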
Quote:
 Original post by Doucette: Concerning "Improved Noise" Perlin Noise: [...] Here is a partial list of "Improving Noise" differences, specifically only the differences that change the range:
"Improving Noise" Perlin Noise:
1) The gradients are *not* normalized, and have a length of sqrt(2).
2) The gradients are *not* random and do *not* point in the direction that maximizes (or minimizes) the noise value.
[...]

Also notice that:
3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

I've already done this analysis in the thread I linked to earlier, although it's somewhat scattered through the thread. Basically, the maximum value should again be obtained for (0.5, 0.5, 0.5), the actual length of which is 0.5*Sqrt(3). But because of (3), the effective length is only 0.5*Sqrt(2). All the gradient vectors also have a length of Sqrt(2), so the dot products return values between +/- 0.5*Sqrt(2)*Sqrt(2) = 0.5*2 = 1. As the interpolations merely average those values, the noise range is hence exactly -1..1. I will post a more complete analysis later.

However, since s(t) is different, the actual maximum may be elsewhere (not middle of unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

EDIT: Hmm, the last part of point 2 is important. I think I have neglected it in the above quick analysis. I will get back to this later.

##### Share on other sites
(EDIT: I posted this BEFORE reading technobot's post above. This should explain why this post does not address anything he says!)

Re: 3D IMPROVED Perlin Noise:
Quote:
 Original post by DoucetteMy 3D Perlin Noise, based on Ken Perlin's "Improving Noise" has this range: -1.0332..1.03458. (Based on 6 billion random samples. It took over 62 million samples JUST to break the -1..1 barrier, and over 235 million samples to break both ends of the -1..1 barrier.)

I just coded a non-static 3D Perlin Noise "improved noise" tester. Instead of testing over a 0.0..256.0 cubic range, it tests only in one 0.0..1.0 "cel", but randomly changes the gradients each time. This way, I am testing the infinite possibilities of improved Perlin Noise.

Next, I *forced* all the gradients to point inwards (still only using the set of 12 presets) to force the maximum noise possible. I can explain why this helps produce the maximum noise to anyone questioning it.

It only took 486 samples (instead of 62,000,000) to break the 1 barrier! (The -1 barrier will not be broken, as I'm forcing the maximum value, not the minimum.)

UPDATE EDIT:

So far, I am at 7.09 billion samples and the maximum has been the same ever since sample 2,266,232,577:
location =
(0.515954047287821,0.500268734573173,0.354526271511759)
g000 = (1,1,0)
g001 = (1,0,-1)
g010 = (1,-1,0)
g011 = (1,0,-1)
g100 = (-1,1,0)
g101 = (0,1,-1)
g110 = (-1,-1,0)
g111 = (0,-1,-1)
max = 1.036333179749597

UPDATE EDIT:

Noticing that the y in the location above is very close to 0.5, I set one of the dimensions to be locked to 0.5 (x). I beat the above value by doing this after only 85,077,428 samples:

location =
(0.500000000000000,0.518421096020481,0.354506041964626)
g000 = (1,1,0)
g001 = (0,1,-1)
g010 = (1,-1,0)
g011 = (1,0,-1)
g100 = (-1,1,0)
g101 = (0,1,-1)
g110 = (-1,-1,0)
g111 = (-1,0,-1)
max = 1.036353132373597

It wasn't beaten after 450,000,000 samples.

UPDATE EDIT:

I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:

location = (0.481445960659562,0.355424509538798,0.500000000000000)
g000 = (1,0,1)
g100 = (-1,0,1)
g010 = (0,-1,1)
g110 = (-1,-1,0)
g001 = (1,0,-1)
g101 = (-1,0,-1)
g011 = (0,-1,-1)
g111 = (-1,-1,0)
max = 1.036353775489912

It wasn't beaten after 400,000,000 samples.

I think it's time to try to solve this mathematically.

...

This might get confusing to some... so I'll repeat that this is the 'improved' 3D Perlin Noise, from Ken Perlin's "Improving Noise" 2002 paper, not the original 3D Perlin Noise from Perlin's 1985 paper.

[Edited by - Doucette on November 30, 2004 1:41:20 PM]

##### Share on other sites
Re: 1985 Perlin Noise

Quote:
 Original post by technobot: This gives me a much better idea: Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

...

Re: 2002 'improved' Perlin Noise

Quote:
 Original post by technobot: Also notice that: 3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

I did notice that. The test program I posted about in my previous post ignores any potential optimizations from this. I'll think more on this later...

Quote:
 Original post by technobot: Basically, the maximum value should again be obtained for (0.5, 0.5, 0.5) ...[cut]... the noise range is hence exactly -1..1. I will post a more complete analysis later.

I set my testing program (again, the one I posted about in my previous post) to force (0.5,0.5,0.5) and the max value was exactly 1. It only took 12 samples to find it, and after 1,000,000,000 samples it was never beat. I also performed some simple math on paper and proved this, too. So you are right that the maximum value obtained at (0.5,0.5,0.5) is 1, but wrong in your assumption that the maximum value is obtained at (0.5,0.5,0.5)... assuming I am right. I hope that makes sense.
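The paper-and-pencil proof for the center point is short enough to show as code. Each of the eight corners contributes its best possible dot product (1.0) times an interpolation weight of 0.125:

```python
def fade(t):
    # improved-noise interpolant: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

# At (0.5, 0.5, 0.5) every delta vector is (+-0.5, +-0.5, +-0.5), and the
# best of the 12 preset gradients matches the signs of two of its
# components, giving a dot product of 0.5 + 0.5 = 1.0 at every corner.
best_dot = 1.0
weight = (1 - fade(0.5)) ** 3  # fade(0.5) = 0.5, so each corner weighs 0.125
value = 8 * best_dot * weight
print(value)  # exactly 1.0
```

So the center value is exactly 1 with the best possible gradients - but, as the data above shows, the center is not where the true maximum lies.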

Quote:
 Original post by technobot: However, since s(t) is different, the actual maximum may be elsewhere (not the middle of the unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

I believe this is the case. gamelife's post basically proves this once you go through the math. We no longer have to believe that our code is just buggy for producing values outside of -1..1.

My data in my previous post should prove this too, if anyone cares to work through it. I gave the sample location and all the gradient vectors; that's all you need.

[Edited by - Doucette on November 30, 2004 12:59:55 PM]

##### Share on other sites
Quote:
 Original post by technobot: Also notice that: 3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

Ok, I have had time to think about this and implemented an optimization from it. (This is "improved" 3D Perlin Noise, for those just reading from here.)

As I go through my random x,y,z 0.0..1.0 sample points, for every dot-product (of delta & gradient vector) I choose the gradient vector (out of the 12 presets) that maximizes each dot product result. This reduces the trial and error significantly.

It was easy to do this because the gradient vectors are all values of either -1, 0, or 1. This makes the dot product ONE ADDITION!

For example, this is an unoptimized dot product:

A.B = (A.x * B.x) + (A.y * B.y) + (A.z * B.z)
(3 multiplies and 2 additions)

This is an optimized dot product when B = (1,0,1):

A.B = A.(1,0,1) = A.x + A.z

...

My previous maximum:
Quote:
 Original post by Doucette: I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:
location = (0.481445960659562,0.355424509538798,0.500000000000000)
gradients =
g000 = (1,0,1)
g100 = (-1,0,1)
g010 = (0,-1,1)
g110 = (-1,-1,0)
g001 = (1,0,-1)
g101 = (-1,0,-1)
g011 = (0,-1,-1)
g111 = (-1,-1,0)
max = 1.036353775489912

Now to try to beat this...

[EDIT: MY CODE WAS WRONG. 1.055520362955293 was the maximum value I had posted here, but it is wrong and was exaggerated.]

I have run the program again with the proper, working code. After 2,895,347,588 samples, I achieved a new maximum:

location = (0.644795088933775,0.499909066789684,0.481530191209012)
g000 = 1.1447
g100 = 0.981439
g010 = 1.14489
g110 = 0.981621
g001 = 1.16326
g101 = 1.01838
g011 = 1.16326
g111 = 1.01856
i = 2895347588
max = 1.036353777355945

The max was not beaten after 6,000,000,000 samples.

Next (as I did with my older un-optimized code) I locked x=0.5, as y=0.499909066789684 in my data above seems to suggest that the result contains an exact 0.5. Here's my results:

In *only* 2,141,561 samples, the previous maximum was beaten. It was continually beaten:

i = 1 max = 0.808986516172702
i = 3 max = 1.018318228389438
i = 16 max = 1.031296019307135
i = 18 max = 1.034753783743431
i = 54 max = 1.036196421499095
i = 435 max = 1.036323797020139
i = 1335 max = 1.036336826993154
i = 2876 max = 1.036341702402226
i = 4502 max = 1.036352946204039
i = 51127 max = 1.036353365009920
i = 297585 max = 1.036353553349657
i = 643643 max = 1.036353650603703
i = 1349779 max = 1.036353691318686
i = 2141561 max = 1.036353808383355 <-- previous max beat
i = 8919930 max = 1.036353808843170
i = 51223134 max = 1.036353809087721
i = 84932703 max = 1.036353809280504
i = 130409817 max = 1.036353809393841
i = 133522358 max = 1.036353810547926
i = 210785009 max = 1.036353810588216
i = 259749544 max = 1.036353810658400
i = 286574578 max = 1.036353810716261
i = 380531512 max = 1.036353810995271
i = 1236970362 max = 1.036353811091619
i = 1439470648 max = 1.036353811095718
i = 2748843691 max = 1.036353811166373
i = 3870810752 max = 1.036353811197563
i = 6330661079 max = 1.036353811203274
i = 9700000000 hasn't been beat yet...

location = (0.500000000000000,0.518508155107934,0.355254297873072)
g000 = 1.01851
g100 = 1.01851
g010 = 0.981492
g110 = 0.981492
g001 = 1.16325
g101 = 1.16325
g011 = 1.14475
g111 = 1.14475
i = 6330661079
max = 1.036353811203274

Notes about the data:

1) The gradients are calculated BEFORE s-curve distortion. In fact, I replace my linear interpolation with what I call "s-curve interpolation", which produces the same results as distorting the inputs to linear interpolation with an s-curve. It makes more sense (to me) to consider it an "s-curve interpolation".

2) I did not show the gradient vectors. Why? I never actually calculate them due to the 'addition optimization' of the dot products. I was able to extract the value from the best gradient vector without finding out what it was. But, in case you want to double check the math of the sample data above, you can figure out the gradients from the sample point location.

For example, from (0,0,0) corner:
A.B = A.(1,1,0) or
= A.(1,0,1) or
= A.(0,1,1)

(Only those three, as the other nine point outwards. Every corner has only three of the twelve gradients that point inwards. Remember, gradients pointing inwards help maximize the noise value, so we only want to consider those.)

so, using the 'addition optimization' due to the gradients gives...

A.B = A.x + A.y or
= A.x + A.z or
= A.y + A.z

Just choose the largest sum, and that tells you the gradient. In this case it was (1,0,1).

...

(FYI, the removal of the multiplications by the 12 preset gradient vectors was one of Ken Perlin's improvements in his "Improved Noise" paper.)

[Edited by - Doucette on December 1, 2004 10:44:34 AM]

##### Share on other sites
Quote:
Original post by Doucette
Re: 1985 Perlin Noise

Quote:
 Original post by technobot: This gives me a much better idea: Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results? I want to check whether the Excel analysis is flawed or not, and also to compare the behaviors of these two functions.

NOTE:
To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post.. is that correct? Also, try going at 0.001 intervals within the unit cube, to get a more systematic approximation (rather than a statistical one)..
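That adjusted test can be sketched as follows: per corner, take the best of the 12 gradients, then interpolate with the improved fade curve. A coarse 0.05 grid is used here instead of 0.001 to keep the run short; it already finds values above 1.0:

```python
def fade(t):
    # improved-noise interpolant: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

GRADS = [
    (1, 1, 0), (-1, 1, 0), (1, -1, 0), (-1, -1, 0),
    (1, 0, 1), (-1, 0, 1), (1, 0, -1), (-1, 0, -1),
    (0, 1, 1), (0, -1, 1), (0, 1, -1), (0, -1, -1),
]

def best_improved(x, y, z):
    # improved noise with, at every corner, the most favorable of the
    # 12 preset gradients (an upper envelope over all gradient choices,
    # since the interpolation weights are non-negative in the unit cube)
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                dx, dy, dz = x - i, y - j, z - k
                dot = max(dx * gx + dy * gy + dz * gz for gx, gy, gz in GRADS)
                w = ((fade(x) if i else 1 - fade(x)) *
                     (fade(y) if j else 1 - fade(y)) *
                     (fade(z) if k else 1 - fade(z)))
                val += dot * w
    return val

n = 20  # 0.05 intervals; smaller steps give a tighter estimate
best = max(best_improved(x / n, y / n, z / n)
           for x in range(n + 1) for y in range(n + 1) for z in range(n + 1))
print(best)  # exceeds 1.0 even on this coarse grid
```

Since the grid includes gamelife's point (0.5, 0.5, 0.75), the scan is guaranteed to report at least 1.0258..., confirming systematically that -1..1 is broken.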

##### Share on other sites
Quote:
Original post by technobot
Quote:
Original post by Doucette
Re: 1985 Perlin Noise

Quote:
 Original post by technobot: This gives me a much better idea: Treat the noise as if you do have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results?

Sure. I will do this tomorrow. So you want to see if the old s-curve and the new s-curve give the same result? I believe they will, as they both return 0.5 given 0.5.
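For reference, here is a quick check (my code) that both s-curves map 0.5 to 0.5:

```python
def s_old(t):  # original 1985 s-curve: 3t^2 - 2t^3
    return t * t * (3 - 2 * t)

def s_new(t):  # improved s-curve: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

print(s_old(0.5), s_new(0.5))  # 0.5 0.5
```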

Quote:
Original post by technobot
To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post.. is that correct?

Yes, it is exactly what I did. I made some optimizations which made it faster (with the downside of not actually calculating the gradients to display).

##### Share on other sites
Ok, let's see what's going on in the Improved Noise. I'm assuming 3D, since any other dimensionality would require its own set of gradient vectors. The four stages are the same as for the old noise, but behave a bit differently:

Step 1:
Just as boring as last time.

Step 2:
The gradient vectors are of length Sqrt(2). One component of each gradient vector is always zero, so the maximum effective length of the delta vectors is also Sqrt(2). Minimum delta vector length is still 0.

Step 3:
Maximum possible values are when the following conditions are met:
1. The delta vector is in the same direction as the gradient vector (cosine of angle equals 1).
2. The delta vector ends on the containing unit cube's surface (maximum delta vector length given the direction constraint).

The gradient vector's zero coordinate cancels out the corresponding coordinate of the delta vector during the dot product calculation, so we can treat that coordinate as also being zero. The gradient vectors themselves always satisfy both of the above conditions. The result is that if the delta vector satisfies those conditions, we are essentially taking the dot product of the gradient vector with itself, returning the square of its length, i.e. 2 (btw, this would also hold for the old Perlin noise, if its gradient vectors were not normalized). So the dot products give a range -2..2. This is then reduced by the various dependencies in step 4.
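A quick check of that squared-length argument (my code; the gradient set is the one from the Improved Noise paper):

```python
# All 12 improved-noise gradients have squared length 2, so the best-case
# dot product (a gradient dotted with a delta vector equal to itself) is 2,
# giving the raw -2..2 range before interpolation.

GRADIENTS = [(1,1,0), (-1,1,0), (1,-1,0), (-1,-1,0),
             (1,0,1), (-1,0,1), (1,0,-1), (-1,0,-1),
             (0,1,1), (0,-1,1), (0,1,-1), (0,-1,-1)]

assert all(x*x + y*y + z*z == 2 for x, y, z in GRADIENTS)

best = max(gx*dx + gy*dy + gz*dz
           for gx, gy, gz in GRADIENTS
           for dx, dy, dz in GRADIENTS)
print(best)  # 2
```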

Step 4:
The behavior of the new s(t) is almost identical to that of the old s(t), leading to an initial assumption that we should again get the maximum noise value at the center of the unit cube, and the maximum value should be 0.5*Sqrt(2)*Sqrt(2) = 1. However, the supplied test data clearly falsifies this assumption, so there must be something else going on here.

I suspect that part of this might be due to the fact that the dot products are functions of all three components of P, leading to interdependencies that make the 1D analysis I did in Excel inaccurate. The other thing to consider is the nature of the gradient vectors. Since all the gradient vectors have a zero coordinate, and since the delta vectors are not constrained to be of unit length, it does not matter what the corresponding coordinate of the delta vector equals. This removes one of the dependencies, and alters the behavior of the noise. For example, if the gradient vector is (0,1,1), then the dot product for that corner no longer depends on P.x.

Since the delta vectors always point inward, the largest results will be attained if all the gradient vectors for the containing unit cube also point inward. There are only three such gradient vectors for each corner. For corner (0,0,0), these are (1,1,0), (1,0,1), and (0,1,1). By mirroring around the x, y, and/or z axes, the corresponding vectors for the other corners can be obtained.

For each pair of corners, there are two options: A) both gradient vectors have a zero component on the same axis; or B) the gradient vectors have zero components on different axes. Notice that these gradient vectors are all equivalent - we can get from one to another by renaming the axes. This allows us to arbitrarily select which axes have zero components in the gradients.

Let's analyze each case separately. frac(P)=(x,y,z):

Case A:
Since the y and z components of the gradient vectors are non-zero, both dot products are still functions of y and z. Assuming the Excel analysis was correct after all, we should get maximum values for y=z=0.5. This is also hinted at by the test data. Hence:

delta(a) = (x , 0.5, 0.5)
delta(b) = (x-1, 0.5, 0.5)

a = grad(a).delta(a) = 0*x + 1*0.5 + 1*0.5 = 1
b = grad(b).delta(b) = 0*(x-1) + 1*0.5 + 1*0.5 = 1

An interpolation along x will then give 1. The only way the noise can be more than 1 is if one or more of the other x interpolations returns a value above 1.

Case B:

As before, we can guess z=0.5, since both a and b are functions of z. Since b is not a function of y, the Excel analysis does not apply to y. Similarly, it does not apply to x either. We have to find the point of maximum noise value some other way. Suppose the point is (x_max, y_max, 0.5). x_max then depends on the selection of y_max, and vice versa. This leaves us with a set of interdependent partial derivatives, which as far as I can tell have no analytical solution. :-/

If you're interested, here is what I've got for x, treating y as a parameter (assuming y=y_max).

delta(a) = (x , y_max, 0.5)
delta(b) = (x-1, y_max, 0.5)

k=y_max+1
a = grad(a).delta(a) = 0*x + 1*y + 1*0.5 = y+0.5 = k-0.5
b = grad(b).delta(b) = 1*(x-1) + 0*y + 1*0.5 = x - 1 + 0.5 = x-0.5
b-a = x-y-1 = x-k

t = x
s(t) = s(x) = 6*x^5 - 15*x^4 + 10*x^3
p(s) = b*s + a*(1-s) = b*s + a - a*s = (b-a)*s + a
p(x) = (x-k)*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5
p(x) = (6*x^6 - 15*x^5 + 10*x^4) - k*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

(Looking for the maximum:)

p' = dp/dx = 36*x^5 + (-75-30k)*x^4 + (40+60k)*x^3 - 30k*x^2 = 0
p' = x^2*[36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k] = 0

x1 = x2 = 0 (the double root from the x^2 factor).

36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k = 0

This is where I am stuck... Note that while this could be solved parametrically using the cubic equation, that is rather complex, and would still not give any clue as to what k (or y_max) should be.
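Lacking a closed form, a brute-force scan (my sketch, not from the thread) at least shows where p(x) peaks for k in the relevant range (k = y_max + 1, so k = 1..2):

```python
# Scan p(x, k) on a fine grid for a few fixed k values and report the argmax.

def p(x, k):
    return (6*x**6 + (-15 - 6*k)*x**5 + (10 + 15*k)*x**4
            - 10*k*x**3 + k - 0.5)

def argmax_x(k, steps=10000):
    best_x, best_p = 0.0, p(0.0, k)
    for i in range(1, steps + 1):
        x = i / steps
        if p(x, k) > best_p:
            best_x, best_p = x, p(x, k)
    return best_x, best_p

for k in (1.0, 1.5, 2.0):
    print(k, argmax_x(k))  # the reported argmax is x = 0 for each of these k
```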

A note concerning the randomness of the gradient vectors: as Ken Perlin mentions in his paper, the permutation table provides plenty of randomness. This is also true for the old noise. Anyway, it doesn't affect the limits of the noise, only your chances of reaching those limits.

##### Share on other sites
Quote:
Original post by technobot
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

Let me get this straight; You want the maximum value of the multivariable function?

p(x, k) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5
I asked my friend who said he was pretty sure that the maximum value is located at x=1 and k=inf...

Although, we aren't too sure if it is correct... I have tried to graph the function with Mathematica to no avail...

Now when I constrain the two variables to be within -1 and 1, I get x=-1 and k=1

Does any of this sound right?

[Edited by - Dwiel on November 30, 2004 6:44:49 PM]

##### Share on other sites
Quote:
Original post by Dwiel
Quote:
Original post by technobot
p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

Let me get this straight; You want the maximum value of the multivariable function?

Err.. basically, yes. I was hoping to kick k out, and find the optimum x. This is for the x interpolation of the perlin noise. Then, since k behaves the same in the y interpolation, the optimum k would be the same.

EDIT: Correction: what I want, is given a known y, what is the x that gives a maximum.

Solving both interpolations simultaneously (i.e. finding the maximum of the perlin noise function directly) is much more complex, since there are several different cases, depending on the other gradient vectors.

Quote:
Now when I constrain the two variables to be within -1 and 1, I get x=-1 and k=1

The domain that we're interested in is x=0..1, y=0..1. Given that k=y+1, that makes it x=0..1, k=1..2. I'll try to plot this in Excel.. don't know if I'll get anything useful..

EDIT: Ok. I did the plot. Unfortunately, for any given y, there is no maximum along x, except for x=0. This isn't very useful, for reasons that are rather hard to explain. We have no choice but to analyze the complete noise function directly. That is way too much work for me (I don't have that much spare time)...

[Edited by - technobot on December 1, 2004 6:35:15 AM]

##### Share on other sites
I really appreciate you guys' insistence on finding the exact value :)
I tried to derive it once yesterday, and quickly gave up as I found it more complicated than expected...

##### Share on other sites
Quote:
Original post by gamelife
For reference, here is an example where 3D improved Perlin noise yields a value outside -1..1. ... The sum is 1.0258789.

To make sure my code was correct (the code that determines the gradient vectors that maximize the noise for any given location), I performed gamelife's above sample on paper AND in my program.

I came up with *exactly* 1.02587890625.

...

I had a bug in my code that calculated the noise-maximizing gradient vectors! In all likelihood, the 1.05... values are WRONG, and the values around 1.03... are more likely to be the extent of the range.

To explain further, the maximum values around 1.03... were from working code, that actually performed the dot products. The 1.05... values were from bugged code that did *not* perform the dot products, but used an optimization that I coded incorrectly. The optimization works, I just coded a bug in it. I have now deleted the mention of the 1.05... values in (whatever) post that mentioned it and replaced it with an explanation. It made sense to leave an explanation of all this for all those who remembered that I had achieved a maximum noise of 1.05... but would come back here and not be able to find it.

I am going to run my program again with the now-proper code. I apologize for all of this. I will edit the post in question to explain the bug. You may want to go re-read the post again:

[Edited by - Doucette on December 1, 2004 10:41:48 AM]

##### Share on other sites
Quote:
Original post by Doucette
[...] the values around 1.03... are more likely to be the extent of the range. [...]

Yes, when taking the best possible gradient vectors, I also got a maximum of 1.036something, and a similar minimum (but with a minus). I'm not going to derive the exact maximum or its location mathematically, since it is too complex.

##### Share on other sites
Quote:
Original post by technobot
I'm not going to derive the exact maximum or its location mathematically, since it is too complex.

Complex, indeed!

I should code a "solver" that "moves" the sample point in the direction of maximum noise. I'll think about this. Better to program a smart algorithm than a randomized brute-force dumb algorithm. This should determine the maximum noise. Because the 3D improved Perlin Noise is definitely *not* maximized at (0.5,0.5,0.5), there must be more than one maximum solution.
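Something like this greedy coordinate climber could work (my sketch; it reuses the best-case-gradient evaluation from earlier in the thread, with my own names, and the step/shrink constants are arbitrary):

```python
import itertools

GRADIENTS = [(1,1,0), (-1,1,0), (1,-1,0), (-1,-1,0),
             (1,0,1), (-1,0,1), (1,0,-1), (-1,0,-1),
             (0,1,1), (0,-1,1), (0,1,-1), (0,-1,-1)]

def s(t):  # improved s-curve
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + (b - a) * t

def max_noise(x, y, z):
    # best-case (noise-maximizing) gradient at each corner, then interpolate
    d = {}
    for c in itertools.product((0, 1), repeat=3):
        dx, dy, dz = x - c[0], y - c[1], z - c[2]
        d[c] = max(g[0]*dx + g[1]*dy + g[2]*dz for g in GRADIENTS)
    sx, sy, sz = s(x), s(y), s(z)
    x00 = lerp(d[(0,0,0)], d[(1,0,0)], sx)
    x10 = lerp(d[(0,1,0)], d[(1,1,0)], sx)
    x01 = lerp(d[(0,0,1)], d[(1,0,1)], sx)
    x11 = lerp(d[(0,1,1)], d[(1,1,1)], sx)
    return lerp(lerp(x00, x10, sy), lerp(x01, x11, sy), sz)

def hill_climb(p0, step=0.05, shrink=0.5, rounds=300):
    """Nudge the point along each axis; halve the step when stuck."""
    p, best = list(p0), max_noise(*p0)
    for _ in range(rounds):
        improved = False
        for axis in range(3):
            for d in (step, -step):
                q = list(p)
                q[axis] = min(1.0, max(0.0, q[axis] + d))
                v = max_noise(*q)
                if v > best:
                    p, best, improved = q, v, True
        if not improved:
            step *= shrink
    return p, best
```

Starting from, say, (0.4, 0.4, 0.4) it climbs to about 1.0364, near one of the symmetric maxima.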

##### Share on other sites
My previous maximum (3D 'improved' Perlin Noise):
Quote:
Original post by Doucette
i = 6330661079 max = 1.036353811203274
i = 9700000000 hasn't been beaten yet...

By clamping the ranges I have beaten it again. I believe there are eight solutions (one solution towards each corner of the lattice "cel"). I clamped the ranges towards the (0,0,0) corner:

The final ranges used:

x = 0.5
y = 0.3552567 +/- 0.00005
z = 0.4814922 +/- 0.00005

...or...

x = 0.5
y = 0.3552067..0.3553067
z = 0.4814422..0.4815422

Note that the first values listed below are probably not possible with the ranges above, as they used less-strict ranges. As I got closer and closer, I tightened the ranges more and more. So, the list below used varying ranges. The ranges above are the FINAL ranges used.

You can see from the sample point locations below that these ranges are, relatively speaking, very wide.

location = (0.500000000000000,0.353734595395723,0.486135939624372) max = 1.036286414151063
location = (0.500000000000000,0.351954817141850,0.478337591058123) max = 1.036298132206927
location = (0.500000000000000,0.355508652054978,0.484225816742964) max = 1.036328628884772
location = (0.500000000000000,0.359012266752368,0.480723875966412) max = 1.036336486237516
location = (0.500000000000000,0.356852545468774,0.480541509356442) max = 1.036348982173853
location = (0.500000000000000,0.354308045732161,0.482199279400930) max = 1.036351634836999
location = (0.500000000000000,0.355482026530313,0.481319830336538) max = 1.036353684388719
location = (0.500000000000000,0.355293071726298,0.481408337693841) max = 1.036353789227805
location = (0.500000000000000,0.355217810051933,0.481554293203091) max = 1.036353798834607
location = (0.500000000000000,0.355255029277611,0.481477679170412) max = 1.036353810494775
location = (0.500000000000000,0.355273829146236,0.481483233239090) max = 1.036353810713977
location = (0.500000000000000,0.355264253543061,0.481499274787784) max = 1.036353810928294
location = (0.500000000000000,0.355265768955025,0.481484542632288) max = 1.036353810977317
location = (0.500000000000000,0.355250628359402,0.481492390458837) max = 1.036353811166006
location = (0.500000000000000,0.355260160320146,0.481491714997498) max = 1.036353811197364
location = (0.500000000000000,0.355255085080403,0.481491280297581) max = 1.036353811204319 <-- beat previous max
location = (0.500000000000000,0.355255232581916,0.481493428949773) max = 1.036353811205823
location = (0.500000000000000,0.355257001256468,0.481493270162923) max = 1.036353811207728
location = (0.500000000000000,0.355255179522859,0.481491839828577) max = 1.036353811207921
location = (0.500000000000000,0.355256617744225,0.481492345567539) max = 1.036353811211745
location = (0.500000000000000,0.355256911054280,0.481492196246598) max = 1.036353811211748
location = (0.500000000000000,0.355256649185156,0.481492194928096) max = 1.036353811211798
location = (0.500000000000000,0.355256726419107,0.481492193373430) max = 1.036353811211801
location = (0.500000000000000,0.355256699937575,0.481492214843307) max = 1.036353811211803
location = (0.500000000000000,0.355256698812100,0.481492205859573) max = 1.036353811211803 <-- another run of the program
location = (0.500000000000000,0.355256696077422,0.481492217115131) max = 1.036353811211803 <-- another run of the program
location = (0.500000000000000,0.355256706357284,0.481492212173843) max = 1.036353811211803 <-- another run of the program
location = (0.500000000000000,0.355256705220401,0.481492209891726) max = 1.036353811211803 <-- another run of the program

The last "another run of the program" values show the differences in the sample location that basically come up with the same max value (in terms of the accuracy of a double.)

...

In case you were wondering about the x=0.5, I did the testing again with a variable range for x (10 times wider than the ranges of the "final range" for the data above):

x = 0.5 +/- 0.0005
y = 0.3552567 +/- 0.0005
z = 0.4814922 +/- 0.0005

...or...

x = 0.4995000..0.5005000
y = 0.3547567..0.3557567
z = 0.4809922..0.4819922

After 19,200,000,000 samples:

location = (0.500000115710708,0.355256560627208,0.481492285004789) max = 1.036353811211725

You can see that it is very likely that the x value is exactly 0.5.

...

So, it's DONE. As far as random testing is concerned. This is where it stands until someone does a mathematical proof. Anyone up for the challenge? This could be some sort of contest! :)

The range of 3D "improved" Perlin Noise = -1.036353811211803..1.036353811211803

[Edited by - Doucette on December 1, 2004 1:16:31 PM]

##### Share on other sites
Now to jump onto this problem (the "original" 1985 3D Perlin Noise):
Quote:
Original post by technobot
This gives me a much better idea: Treat the noise as if you do have optimal gradients.

Ok, now I am going to start this.

First, I coded a test program that tests only within a current "cel" of ever-changing random gradients. And the gradients are *not* optimized to produce the best value. (I just want to make sure my code is correct first.)

Our work (technobot and myself) on paper produced this range for 3D "original" Perlin Noise:

-sqrt(0.75)..sqrt(0.75)
-0.5*sqrt(3)..0.5*sqrt(3)
-0.866025403784..0.866025403784

(The above ranges are all the same. technobot's, the second range, makes more sense, as you can replace the "3" with n, where n is the number of dimensions of the noise.)
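A quick numeric check (mine) that these forms agree; the 2D value sqrt(0.5) from the start of the thread fits the same 0.5*sqrt(n) pattern with n = 2:

```python
import math

print(math.sqrt(0.75), 0.5 * math.sqrt(3))  # same value, ~0.8660254 (3D)
print(math.sqrt(0.5), 0.5 * math.sqrt(2))   # same value, ~0.7071068 (2D)
```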

Before I post my random sampling results, I just want to show how UNLIKELY it is to achieve the maximum result. It depends on TWENTY-SEVEN random numbers in any given lattice "cel"! Three for the sampling point, and three each for the eight gradient vectors. Different random numbers used to create the random gradient vectors can produce the same normalized gradient... but that's not helping much.

ORIGINAL S-CURVE / RANDOM (NON-OPTIMIZED) GRADIENT VECTORS:

i = 1 loc = (0.959668362066654,0.654177784293364,0.468324424134351) max = 0.031197212984288
i = 2 loc = (0.996239645969101,0.209476909869043,0.547572401244019) max = 0.077235016841333
i = 4 loc = (0.350242802805503,0.580604226981493,0.111631266136891) max = 0.078176181360040
i = 7 loc = (0.709392704586558,0.326365297369038,0.861358545634306) max = 0.204843986537223
i = 12 loc = (0.055882429689120,0.026447364611439,0.411395665149568) max = 0.218807255408791
i = 27 loc = (0.625739522724977,0.581037591860810,0.412391337613112) max = 0.247470174456908
i = 90 loc = (0.356261117574459,0.141804434234475,0.478884069361120) max = 0.367296947945146
i = 113 loc = (0.306932722684492,0.756682790621315,0.663123913774270) max = 0.427196136150945
i = 918 loc = (0.818340300571302,0.603534116166330,0.241391899654686) max = 0.460406347612861
i = 1062 loc = (0.194467864215274,0.525760180889867,0.443920275467754) max = 0.497558342158556
i = 1080 loc = (0.699555175987780,0.561051409579103,0.114119048735997) max = 0.521367758737155
i = 4890 loc = (0.655902275573691,0.653922305499187,0.636507266572305) max = 0.567098961969464
i = 5033 loc = (0.626609603716594,0.383423102112205,0.534600285730130) max = 0.569056492230879
i = 6271 loc = (0.430803625307457,0.590914338221179,0.134008440970802) max = 0.571064919149885
i = 17639 loc = (0.467646429667576,0.086762515149540,0.504772364286269) max = 0.594992385745419
i = 73893 loc = (0.575208462886898,0.546269977467392,0.061292802072875) max = 0.606101692633073
i = 96453 loc = (0.999008125145398,0.529950774130321,0.566065110822622) max = 0.616349626763465
i = 106380 loc = (0.666696011929084,0.689792392996515,0.536196579771430) max = 0.625312256077212
i = 108581 loc = (0.475712753379947,0.362214918261210,0.347218746242442) max = 0.637318387567621
i = 125441 loc = (0.685568223287247,0.341886796839409,0.559289698980435) max = 0.638038975692868
i = 169055 loc = (0.635227146360184,0.453228519407957,0.404732157297401) max = 0.640466848941340
i = 330880 loc = (0.592053156067819,0.670522496257211,0.469942566203162) max = 0.655392281456257
i = 921984 loc = (0.946751380443852,0.534414377753248,0.517479203598293) max = 0.667146777074085
i = 4681222 loc = (0.377919639436127,0.614951612029197,0.649476141729074) max = 0.698948738367158
i = 5167572 loc = (0.567196388722365,0.528748558720839,0.657381619180078) max = 0.708765362476286
i = 7318514 loc = (0.497973117117161,0.410210461480709,0.545901385493473) max = 0.709515354177553
i = 21245047 loc = (0.398689149783572,0.526529630703511,0.576304972294120) max = 0.712441932685094
i = 56261445 loc = (0.572559303987426,0.579022297306204,0.462310593847222) max = 0.755542738687758
i = 351499682 loc = (0.447138276117166,0.433517363383405,0.619452665141969) max = 0.779842524068273

Stopped at 500,000,000 samples.

0.779842524068273 (randomly tested max) vs.
0.866025403784 (mathematically proven max)

Not very close due to the reasons listed above.
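For anyone who wants to reproduce this kind of single-cell test, here is a small sketch (my code, with far fewer samples than the run above and a different RNG, so the numbers won't match):

```python
import itertools
import math
import random

def s(t):  # original 1985 s-curve: 3t^2 - 2t^3
    return t * t * (3 - 2 * t)

def lerp(a, b, t):
    return a + (b - a) * t

def random_unit_vector(rng):
    # rejection-sample a point in the unit ball, then normalize
    while True:
        v = [rng.uniform(-1, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:
            return [c / n for c in v]

def cell_noise(x, y, z, grads):
    # one cell of 1985-style noise: dot products at the 8 corners, then
    # trilinear interpolation with the s-curve weights
    d = {}
    for c in itertools.product((0, 1), repeat=3):
        g = grads[c]
        d[c] = g[0]*(x - c[0]) + g[1]*(y - c[1]) + g[2]*(z - c[2])
    sx, sy, sz = s(x), s(y), s(z)
    x00 = lerp(d[(0,0,0)], d[(1,0,0)], sx)
    x10 = lerp(d[(0,1,0)], d[(1,1,0)], sx)
    x01 = lerp(d[(0,0,1)], d[(1,0,1)], sx)
    x11 = lerp(d[(0,1,1)], d[(1,1,1)], sx)
    return lerp(lerp(x00, x10, sy), lerp(x01, x11, sy), sz)

rng = random.Random(1)
best = 0.0
for i in range(50_000):  # each sample: a random point AND 8 random gradients
    grads = {c: random_unit_vector(rng)
             for c in itertools.product((0, 1), repeat=3)}
    v = abs(cell_noise(rng.random(), rng.random(), rng.random(), grads))
    best = max(best, v)
print(best)  # well below the proven bound sqrt(0.75) ~ 0.866
```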

...

Now to try our (technobot's and my own) idea for choosing the noise-maximizing gradients for each sample point. (We both came up with it independently while working on two different types of noise.)

I will do it with the original s-curve and the improved s-curve:

ORIGINAL S-CURVE (OPTIMIZED GRADIENT VECTORS):

i = 1 loc = (0.939304268872542,0.078858448782808,0.129256649718349) max = 0.217186047380575
i = 2 loc = (0.620736654755742,0.291367073183554,0.434829132962373) max = 0.785659125021669
i = 48 loc = (0.414760140897629,0.625762225404224,0.342609304855628) max = 0.801968030478771
i = 79 loc = (0.652990646580189,0.544291334715085,0.576522582350027) max = 0.825111594494891
i = 142 loc = (0.419234822472715,0.530556661569653,0.441137943035932) max = 0.851370341745179
i = 625 loc = (0.535442196414547,0.410357288093982,0.525171395224068) max = 0.852805621948848
i = 700 loc = (0.539037502519605,0.506009175700788,0.528351532872049) max = 0.862845494675147
i = 5528 loc = (0.484320875843021,0.469641029990835,0.467383482725676) max = 0.863020459960405
i = 5661 loc = (0.508084614712399,0.479820670627790,0.515826951189703) max = 0.865051532528046
i = 10357 loc = (0.479296964926407,0.507836213684599,0.484863141779468) max = 0.865056849420320
i = 15890 loc = (0.511421648957945,0.504918253188785,0.519917027662395) max = 0.865282991392969
i = 28936 loc = (0.495727584066960,0.487826125601438,0.514409650193332) max = 0.865521510188488
i = 74025 loc = (0.496149136296675,0.494620398185915,0.485013446555410) max = 0.865663999107797
i = 97144 loc = (0.488095897386179,0.500499602279615,0.500754433306875) max = 0.865833462329915
i = 130719 loc = (0.505015731331286,0.497537244269540,0.510377768750967) max = 0.865838282276155
i = 324145 loc = (0.501432965222138,0.491006363614398,0.502174106313776) max = 0.865907323697623
i = 443528 loc = (0.501007566008989,0.507390686227989,0.505258805849745) max = 0.865913202215201
i = 1580045 loc = (0.506031390196777,0.500421654560067,0.495104245084652) max = 0.865943871728781
i = 1592591 loc = (0.496745274994317,0.500198539529233,0.500354293905216) max = 0.866010911248234
i = 4561251 loc = (0.502483057040130,0.500374681324975,0.498699570441192) max = 0.866014630605815
i = 10535891 loc = (0.501286173668641,0.498833684831450,0.501560181382245) max = 0.866018063562777
i = 19020890 loc = (0.500154286814565,0.499172631298591,0.498882632011953) max = 0.866022767609589
i = 20122812 loc = (0.500396503086844,0.501135991157310,0.499612364046111) max = 0.866023251105930
i = 110550805 loc = (0.500730479306517,0.500528152202959,0.500570015497016) max = 0.866023871451152

Stopped at 500,000,000 samples.

0.866023871451152 (randomly tested max) vs.
0.866025403784 (mathematically proven max)

IMPROVED S-CURVE (OPTIMIZED GRADIENT VECTORS):

i = 1 loc = (0.306220373105214,0.485208388099953,0.521248543259675) max = 0.806968322550860
i = 96 loc = (0.544996359874924,0.387680780040008,0.599544259850151) max = 0.823161315714997
i = 148 loc = (0.557059935701378,0.565289576122386,0.481960579940255) max = 0.852167182377895
i = 308 loc = (0.524217722481096,0.466631731655121,0.562095154042994) max = 0.856206524031953
i = 353 loc = (0.552858908895227,0.511907219328073,0.522228812987324) max = 0.859963978079263
i = 2921 loc = (0.496292297042729,0.511022416427769,0.470430229728515) max = 0.864232762363548
i = 19811 loc = (0.499116116296131,0.487059471576623,0.509164391554846) max = 0.865576561379324
i = 90750 loc = (0.499129735430039,0.506214135314930,0.494959115590754) max = 0.865910087576415
i = 303364 loc = (0.504690829380369,0.496433475786338,0.504936416083546) max = 0.865920212127488
i = 1197787 loc = (0.499775602993796,0.500326161149363,0.504604228076857) max = 0.865987390283695
i = 2533926 loc = (0.502644093233633,0.497639051233778,0.501695359250987) max = 0.865997919037771
i = 3052777 loc = (0.501158553929155,0.497124892535680,0.499208848527521) max = 0.866007185245710
i = 7329777 loc = (0.498829502538199,0.501035743724951,0.500252540137780) max = 0.866020941628011
i = 91998759 loc = (0.501017590164082,0.500381123843272,0.500401624493473) max = 0.866023014726645
i = 223997975 loc = (0.499489569229980,0.500749497705864,0.500662385643086) max = 0.866023158926329

Stopped at 500,000,000 samples.

0.866023158926329 (randomly tested max) vs.
0.866025403784 (mathematically proven max)

LINEAR (NO S-CURVES) (OPTIMIZED GRADIENT VECTORS):

(Just for the hell of it.)

i = 1 loc = (0.189870865945244,0.453982510898272,0.077090809076594) max = 0.654359844744721
i = 2 loc = (0.641889330712132,0.498357252659819,0.879927285528734) max = 0.747367093417639
i = 8 loc = (0.527920255365351,0.417855165758340,0.342973686946175) max = 0.841306544458203
i = 103 loc = (0.520474032184694,0.555595613083938,0.391213044322300) max = 0.854226623591120
i = 448 loc = (0.596959716367296,0.550092244107328,0.472767024607329) max = 0.856284937610747
i = 463 loc = (0.455601675646504,0.502506815252249,0.587728955984879) max = 0.858589152013766
i = 520 loc = (0.445452454568892,0.499793617255432,0.562359457762853) max = 0.860735106716659
i = 914 loc = (0.546226423830922,0.486049045293994,0.520591408910489) max = 0.863904549151361
i = 949 loc = (0.538499267212227,0.497060954732098,0.511977205818396) max = 0.864768251299370
i = 2880 loc = (0.467429466510970,0.493824864214062,0.521788740739050) max = 0.864813753819769
i = 3793 loc = (0.520475570811829,0.492255557541311,0.507622550071882) max = 0.865611782235035
i = 17566 loc = (0.518320491210153,0.504062991663171,0.493215983289859) max = 0.865718921289213
i = 94590 loc = (0.494281134734729,0.503814030867766,0.516303166101507) max = 0.865784440918204
i = 139504 loc = (0.499509508751548,0.508183146367063,0.513949611584213) max = 0.865823874874581
i = 159739 loc = (0.485874707323666,0.504709436424504,0.495295566195960) max = 0.865837707651920
i = 278061 loc = (0.512282791641813,0.498526612803861,0.502339508109622) max = 0.865903393408348
i = 331080 loc = (0.497976363968502,0.505439374178963,0.494551397977143) max = 0.865976621231882
i = 872447 loc = (0.494830810514499,0.500739891749457,0.503195050667847) max = 0.865996554472511
i = 1849212 loc = (0.500132644053308,0.498786849875717,0.501900552873551) max = 0.866021476702045
i = 13190807 loc = (0.498415903707639,0.500249509530935,0.500605348460868) max = 0.866023142064498
i = 33386039 loc = (0.500750842805989,0.500840501142800,0.499148440920558) max = 0.866023867754062
i = 214349412 loc = (0.499976660049276,0.500749006242127,0.500214012465477) max = 0.866024936241377
i = 231153713 loc = (0.500097103332621,0.499701114385239,0.499466779457471) max = 0.866025108884842
i = 1427464072 loc = (0.499998696054642,0.499960664726466,0.500049360287847) max = 0.866025400716475

Stopped at 4,000,000,000 samples.

0.866025400716475 (randomly tested max) vs.
0.866025403784 (mathematically proven max)

This data makes it fairly certain that technobot's proof is correct, and it corresponds perfectly with my proof on paper (which I have not posted). Also, it shows that both s-curves and linear interpolation (without s-curve distortion) produce the same range, which was fully expected.

[Edited by - Doucette on December 2, 2004 8:54:31 AM]
