
# 2D Perlin Noise (gradient noise) range = ?

### #1
Members - Reputation: **382**

Posted 29 November 2004 - 03:11 AM

### #2
Members - Reputation: **238**

Posted 29 November 2004 - 04:42 AM

Sampling at regular intervals is **not** a good way to test the full range of a Perlin noise (trust me, I've tried this). You have to take a large number of **random** samples. Also, note that the distribution of Perlin noise is Gaussian, so you're not very likely to get the extreme values (which is why the large sample count is needed - although a smaller count may still give you a good estimate of the range).

You might find this thread useful, although it deals mostly with the improved noise implementation.

EDIT: that thread deals with 3D noise, but the range doesn't depend on the dimensionality.

### #3
Members - Reputation: **382**

Posted 29 November 2004 - 08:20 AM

Sampling at regular intervals *IS* a good way to test the full range of a Perlin Noise if the intervals are small enough. Likewise, random sampling only works if you sample enough random points. My 0.01 interval is plenty small enough.

To reiterate, I sampled it at every 0.01 interval across 0.0..256.0 in 2D. That totals 655,360,000 evenly spaced sample points. However, the total is irrelevant. What matters is the resolution within each "cel". For each noise "cel" it is 10,000 sample points, or 100 across x and 100 across y. So, when you picture the worst-case scenario of a noise "cel", you can see that it has *plenty* of resolution.
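The bookkeeping is easy to double-check (a quick sanity check of the numbers in this post):

```python
samples_per_axis = round(256.0 / 0.01)   # 0.01 intervals across 0.0..256.0 -> 25,600
total_samples = samples_per_axis ** 2    # 2D grid of sample points
per_cell = round(1.0 / 0.01) ** 2        # resolution inside one unit "cel"

assert total_samples == 655_360_000      # matches the post
assert per_cell == 10_000                # 100 across x, 100 across y
```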

But, I did as you said anyway, just to satisfy all types of testing. I performed the same large number of random samples, 655,360,000, on the exact same noise (composed of the exact same gradient array):

min..max = -0.69424..0.695468

Random sampling came up with slightly smaller MINs and slightly larger MAXs, so technobot's method worked better, although either method was sufficient. (It is important to note that you are less likely to make mistakes with random sampling.)

Next, I sampled TEN TIMES that amount: 6,553,600,000 random samples:

min..max = -0.69424..0.695648

This still shows that the range is -sqrt(0.5)..sqrt(0.5), not -1..1 as lots of people think.

...

Concerning 3D Perlin Noise:

Also, assuming that I am correct in my first post (which is what I am here to find out), the 3D noise range *is* different than 2D. And then it is different *again* if you use Ken Perlin's 2002 "Improving Noise" suggestions, as he un-randomizes and un-normalizes the gradients, which affects the output range. My 3D Perlin Noise, based on Ken Perlin's "Improving Noise", has this range: -1.0332..1.03458. (Based on 6 billion random samples. It took over 62 million samples JUST to break the -1..1 barrier, and over 235 million samples to break both ends of the -1..1 barrier.) I'm not sure if my code is buggy yet, so I do not wish to get into it too much until my 2D range problem is solved.

I would post this in your thread, but it is closed. I *despise* that. Threads that have not solved the issue should not be closed. We might as well continue the 3D discussion here, as I will be getting into that afterwards. But, for now, let's try to avoid getting into 3D and all its extra complications until after we determine whether the 2D range is -sqrt(0.5)..sqrt(0.5). I think 3D discussion will make more sense afterwards.

[Edited by - Doucette on November 29, 2004 3:20:32 PM]

### #4
Members - Reputation: **133**

Posted 29 November 2004 - 10:04 AM

If you require a uniform distribution, then you either have to monkey with the output, or use some method other than Perlin noise.

karg

### #5
Members - Reputation: **382**

Posted 29 November 2004 - 10:32 AM

I am only concerned with absolute range.

I.e. What is the minimum and maximum value possible from 2D Perlin Noise?

I believe it is *NOT* the assumed, popular -1..1.

### #6
Members - Reputation: **133**

Posted 29 November 2004 - 10:59 AM

Just create a point with the maximum values for each octave and combine them to see what you get. Same for the minimums. Then you can scale to whatever values you want.

Now, just because you have the possibility of getting the maximum and minimum values, doesn't mean that a sample you create will contain those values.

What exactly are you using these numbers for?

karg

### #7
Members - Reputation: **238**

Posted 29 November 2004 - 11:02 AM

Quote:

Original post by Doucette

Sampling at regular intervals *IS* a good way to test the full range of a Perlin Noise if the intervals are small enough.

Actually, come to think of it, you're right - **if** the intervals are small enough, and **if** the domain is large enough. Both of these conditions seem to be satisfied in your test, though.

Quote:

[...] Next, I sampled TEN TIMES that amount: 6,553,600,000 random samples [...] This still shows that the range is -sqrt(0.5)..sqrt(0.5), not -1..1 as lots of people think.

As I said, "a smaller count may still give you a good estimate of the range" (because the sample distribution is the same - you just need to have enough samples to get a statistical meaning), so this was expected.

Quote:

Concerning 3D Perlin Noise:

Also, assuming that I am correct in my first post (which is what I am here to find out), 3D Noise range *is* different than 2D. And then it is different *again* if you use Ken Perlin's 2002 "Improving Noise" suggestions as he un-randomizes and un-normalizes the gradients which affect the output range. My 3D Perlin Noise, based on Ken Perlin's "Improving Noise" has this range: -1.0332..1.03458. (Based on 6 billion random samples.) I'm not sure if my code is buggy yet, so I do not wish to get into it too much until my 2D range problem is solved.

I would post this in your thread, but it is closed. I *despise* that. Threads that have not solved the issue should not be closed. We might as well continue the 3D discussion here, as I will be getting into that afterwards. But, for now, let's try to avoid getting into 3D and all its extra complications until after we determine whether the 2D range is -sqrt(0.5)..sqrt(0.5). I think 3D discussion will make more sense afterwards.

Ok. Let's see.. assuming the implementation you linked to in the first post, the algorithm is as follows (for **all** dimensionalities):

1. Find the unit segment/square/cube (or other nD cube) that contains the sample point.

2. For each corner of that find: a) the vector from the corner to the sample point - let's call it the "delta vector"; b) the gradient vector associated with that corner (using the permutation table).

3. Still for each corner, calculate the dot product of the delta vector and the gradient vector for that corner.

4. Interpolate the results, first along the x axis, then the results of that along the y axis, and so on.
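The four steps above can be sketched in code. Below is a minimal 2D version in Python - my own stand-in, not Perlin's actual oscar.c table setup - with a 256-entry permutation table, random normalized gradients, and the classic 3t^2 - 2t^3 s-curve. Sampled on a grid, the observed range should stay inside -Sqrt(0.5)..Sqrt(0.5):

```python
import math
import random

random.seed(1)

B = 256
perm = list(range(B))
random.shuffle(perm)
grads = []
for _ in range(B):
    a = random.uniform(0.0, 2.0 * math.pi)
    grads.append((math.cos(a), math.sin(a)))  # normalized 2D gradients

def s_curve(t):  # classic 1985 s-curve: 3t^2 - 2t^3
    return t * t * (3.0 - 2.0 * t)

def lerp(t, a, b):
    return a + t * (b - a)

def noise2(x, y):
    ix, iy = int(math.floor(x)), int(math.floor(y))  # step 1: containing cell
    fx, fy = x - ix, y - iy
    sx, sy = s_curve(fx), s_curve(fy)
    def corner(cx, cy):
        g = grads[perm[(perm[(ix + cx) & (B - 1)] + iy + cy) & (B - 1)]]
        return (fx - cx) * g[0] + (fy - cy) * g[1]   # steps 2-3: delta . gradient
    # step 4: interpolate along x, then along y
    return lerp(sy, lerp(sx, corner(0, 0), corner(1, 0)),
                    lerp(sx, corner(0, 1), corner(1, 1)))

# Coarse grid over 256 cells; far fewer samples than the 655 million in the
# posts above, so the observed extremes are looser, but still inside the bound.
vals = [noise2(i * 0.08, j * 0.08) for i in range(200) for j in range(200)]
print(min(vals), max(vals))
```

This illustrates the algorithm's shape only; as the posts above show, the empirical extremes only tighten toward the -0.6956..0.6956 figures with far denser sampling.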

This is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

__Step 1:__

Nothing interesting.

__Step 2:__

The smallest possible delta vector for each corner is of length 0. The largest possible delta vector is when all the components are -/+ 1, giving a vector of length Sqrt(n), with n being the dimensionality of the noise. The gradient vectors are always of length 1, since they are normalized during initialization.

__Step 3:__

Since A.B = |A|*|B|*cos(A,B), this gives us values in the range -Sqrt(n)..Sqrt(n).

__Step 4:__

This one is actually trickier than it looks. To simplify the analysis, let's consider the 1D example first:

A---P------B

P is the sample point, A and B are the corners of the containing unit segment, t is the distance from A to P, and s(t) is the interpolation factor. Let's call the dot product results for A and B, a and b, respectively. The value at P is then:

p = b*s + a*(1-s)

This gives us the following behavior. What's important to remember is that a and b are also functions of t:

- At t=0.5, p is exactly the average of a and b, which is 0.5 for the 1D case.

- At t=0 and t=1, p is zero, because then you either have "p = b*0 + 0*1" or "p = 0*1 + a*0".

- As we approach from t=0.5 to t=0, a decreases linearly, b increases linearly, but s and 1-s do not change linearly. Depending on how s and 1-s interact with their counterparts, this may result in one of two cases: a) p drops monotonically towards zero; or b) p rises slightly to some maximum, and then drops back to zero monotonically.

- As we approach from t=0.5 to t=1, the same thing occurs as when approaching t=0.

I've entered this into Excel, and got this (assuming a and b are such that a maximum value is attained):

```
 t     s    1-s    a     b     p
0.00  0.00  1.00  0.00  1.00  0.00
0.05  0.01  0.99  0.05  0.95  0.06
0.10  0.03  0.97  0.10  0.90  0.12
0.15  0.06  0.94  0.15  0.85  0.19
0.20  0.10  0.90  0.20  0.80  0.26
0.25  0.16  0.84  0.25  0.75  0.33
0.30  0.22  0.78  0.30  0.70  0.39
0.35  0.28  0.72  0.35  0.65  0.43
0.40  0.35  0.65  0.40  0.60  0.47
0.45  0.43  0.57  0.45  0.55  0.49
0.50  0.50  0.50  0.50  0.50  0.50
0.55  0.57  0.43  0.55  0.45  0.49
0.60  0.65  0.35  0.60  0.40  0.47
0.65  0.72  0.28  0.65  0.35  0.43
0.70  0.78  0.22  0.70  0.30  0.39
0.75  0.84  0.16  0.75  0.25  0.33
0.80  0.90  0.10  0.80  0.20  0.26
0.85  0.94  0.06  0.85  0.15  0.19
0.90  0.97  0.03  0.90  0.10  0.12
0.95  0.99  0.01  0.95  0.05  0.06
1.00  1.00  0.00  1.00  0.00  0.00
```

As can be seen, p decreases monotonically as we go from t=0.5 to t=0 or t=1 (as I suspected). This means that the maximum values for the x interpolation can only be attained for frac(P.x)=0.5. Similarly, the maximum values for the y, z, and so on interpolations are also obtained at frac(P.coord)=0.5. Hence the maximum overall value is obtained for frac(P)=(0.5,0.5,...).
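The spreadsheet can be reproduced in a few lines (taking a = t and b = 1 - t as in the table, with s(t) = 3t^2 - 2t^3):

```python
def s_curve(t):  # classic 1985 s-curve
    return t * t * (3.0 - 2.0 * t)

rows = []
for i in range(21):
    t = i * 0.05
    s = s_curve(t)
    a, b = t, 1.0 - t              # dot products that attain the maximum
    p = b * s + a * (1.0 - s)      # interpolated value at the sample point
    rows.append((t, p))
    print(f"{t:.2f}  {s:.2f}  {1 - s:.2f}  {a:.2f}  {b:.2f}  {p:.2f}")
```

Checking the column of p values confirms the observation: p rises monotonically to 0.5 at t = 0.5 and falls back symmetrically.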

In that case, all the dot products would be between -0.5*Sqrt(n) and 0.5*Sqrt(n), the x interpolations would return the exact averages of the corresponding dot products, and each successive interpolation would return the exact average of the corresponding interpolations along the previous axis. If all the dot products are equal to their maximum value for that case, which is 0.5*Sqrt(n), this would then give us the overall maximum of 0.5*Sqrt(n). So if all of the above is correct, and I didn't miss anything, and there are no bugs in the implementation, then we should get the following ranges:

**1D:** -0.5*Sqrt(1)..0.5*Sqrt(1) = **-0.5..0.5**

**2D:** -0.5*Sqrt(2)..0.5*Sqrt(2) = **-0.71..0.71**

**3D:** -0.5*Sqrt(3)..0.5*Sqrt(3) = **-0.87..0.87**

Although, since at least the 3D case should be -1..1 AFAIK, I won't be surprised if I **am** indeed missing something (or have made a mistake in the analysis).

### #8
Members - Reputation: **382**

Posted 29 November 2004 - 03:10 PM

Quote:

Original post by Karg

It just depends on how you produce it.

True. That's why I specifically asked for the range of only one layer of gradient noise based on Ken Perlin's original code (http://mrl.nyu.edu/~perlin/doc/oscar.html#noise).

Quote:

Original post by Karg

Just create a point with the maximum values for each octave and combine them to see what you get. Same for the minimums. Then you can scale to whatever values you want.

Perlin Noise is not multiple 'octave' layers. It is just one layer. This is a common misunderstanding as most implementations of Perlin Noise use multiple layers with a lacunarity (gap in frequency) of 2.0 which produces perfect "octave" layers. Perlin Noise, fundamentally, is only one of these layers. It is fine to use multiple layers of Perlin Noise (called spectral synthesis), but defining Perlin Noise as multiple layers is wrong.
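The distinction is easy to make concrete: spectral synthesis just sums layers of a single-layer noise at increasing frequencies. A sketch, using a stand-in `noise1` (my own name - not a real gradient noise, just a smooth bounded function for illustration):

```python
import math

def noise1(x):
    # Stand-in for ONE layer of gradient noise; any single-layer
    # noise function could be substituted here.
    return math.sin(2.3 * x) * math.cos(1.7 * x)

def spectral_synthesis(x, octaves=4, lacunarity=2.0, persistence=0.5):
    """Sum `octaves` layers; a lacunarity of 2.0 doubles the frequency
    each layer (hence "octaves"), while persistence halves the amplitude."""
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * noise1(x * freq)
        freq *= lacunarity
        amp *= persistence
    return total

# With one octave, spectral synthesis reduces to the base layer itself:
assert spectral_synthesis(0.37, octaves=1) == noise1(0.37)
```

The point of the post stands: `noise1` alone is the "Perlin Noise"; `spectral_synthesis` is a separate construction layered on top of it.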

The maximum and minimum values of only one Perlin Noise layer are all I want to know. You (Karg) are writing as if the max and min values are already known and as if I am asking for something else. (Please excuse me if I am wrong and misunderstood.) Anyway, figuring out the min/max range is the point of this entire thread.

I hope that explains it well.

...

technobot, your results were *exactly* what I had calculated for the noise ranges of Ken Perlin's original implementation.

Your steps 1..4 look good. Step 4 was hard to follow, but I worked through it and it matches the results I got.

Quote:

Original post by technobot

So if all of the above is correct, and I didn't miss anything, and there are no bugs in the implementation, then we should get the following ranges: 1D: -0.5*Sqrt(1)..0.5*Sqrt(1) = -0.5..0.5; 2D: -0.5*Sqrt(2)..0.5*Sqrt(2) = -0.71..0.71; 3D: -0.5*Sqrt(3)..0.5*Sqrt(3) = -0.87..0.87

Although, since at least the 3D case should be -1..1 AFAIK, I won't be surprised if I am indeed missing something (or have made a mistake in the analysis).

I do not think you are missing anything. I think your calculations are correct. If you would have done 4D Perlin Noise, it would have come to a range of exactly -1..1.

I have not yet coded 3D *original* Perlin Noise. I will code it right now and post the ranges of my results...

Done. I'm calculating my results. So far, the range is:

-0.765153..0.720143 after 2.04 BILLION random samples!

-0.866025..0.866025 is the expected range.

Why isn't it close?

For two reasons. Firstly, 3D noise is calculated from *EIGHT* gradients (whereas 2D noise is calculated from only four gradients), and both only have 256 pseudo-random gradients to choose from. If you think about it, it is much less likely that eight gradients that either closely maximize or minimize the noise value exist out of the 256. That's 3.125% of all the gradients. Secondly, assuming that those eight gradients do exist out of the list of 256, it is much less likely that those eight are all going to be corners around the same 3D "cel" of noise. Therefore, it is going to take A LOT longer to randomly stumble upon values close to the min and max values, assuming they even exist. There is a good chance that the -0.765153..0.720143 range above is the actual range of my noise (which would change if I re-randomized the 256 pseudo-random gradients used). This brings me to my next idea for testing:

I may code a 3D Perlin Noise function that keeps randomizing the 8 gradients, so that I will be testing an ever changing 3D Perlin Noise area, not a static area of noise that perhaps has no values close to the extremities of the range.

...

Concerning "Improved Noise" Perlin Noise:

Quote:

Original post by technobot

This is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

3D Perlin Noise based on Ken Perlin's improved noise from his "Improving Noise" paper is different (and has a different range) from the original implementation. Here is a partial list of "Improving Noise" differences, specifically only the differences that change the range:

"Improving Noise" Perlin Noise:

1) The gradients are *not* normalized, and have a length of sqrt(2).

2) The gradients are *not* random and do *not* point in the direction that maximizes (or minimizes) the noise value.

In fact, here they are:

```
1,1,0,   -1,1,0,   1,-1,0,   -1,-1,0,
1,0,1,   -1,0,1,   1,0,-1,   -1,0,-1,
0,1,1,   0,-1,1,   0,1,-1,   0,-1,-1,
1,1,0,   -1,1,0,   0,-1,1,   0,-1,-1
```

(The last row repeats four of the previous gradients, on purpose, to pad the table to 16.)

These gradients create a different range than the original 3D noise, due to the two points above. Now that we have the range of Perlin's original noise calculated, let's work on his Improved Noise range!

Once we do this, I bet it will explain why we *both* got ranges slightly over and under -1..1 for our 'improved' noise. Everyone thinks Perlin Noise is from -1..1, and I believe that is not the case.

(Here is the "Improved Noise" paper to see the full list of differences, which include using a new s-curve: http://mrl.nyu.edu/~perlin/paper445.pdf.)

[Edited by - Doucette on November 30, 2004 9:10:39 PM]

### #9
Members - Reputation: **204**

Posted 29 November 2004 - 09:08 PM

The output value is the sum of 8 (delta_vector.gradient_vector * weight), shown below.

```
delta vector       .  gradient vector  *  weight
( x,   y,   z  )   .  ( 1,  0,  1)     *  (1-fade(z))*(1-fade(y))*(1-fade(x))
( x,   y,   z-1)   .  ( 1,  1,  0)     *  (  fade(z))*(1-fade(y))*(1-fade(x))
( x,   y-1, z  )   .  ( 1,  0,  1)     *  (1-fade(z))*(  fade(y))*(1-fade(x))
( x,   y-1, z-1)   .  ( 1, -1,  0)     *  (  fade(z))*(  fade(y))*(1-fade(x))
( x-1, y,   z  )   .  (-1,  0,  1)     *  (1-fade(z))*(1-fade(y))*(  fade(x))
( x-1, y,   z-1)   .  (-1,  1,  0)     *  (  fade(z))*(1-fade(y))*(  fade(x))
( x-1, y-1, z  )   .  (-1,  0,  1)     *  (1-fade(z))*(  fade(y))*(  fade(x))
( x-1, y-1, z-1)   .  (-1, -1,  0)     *  (  fade(z))*(  fade(y))*(  fade(x))
```

With the gradient vectors as above, and (x,y,z) = (0.5, 0.5, 0.75), then fade(z) = 0.896484375.

The 1st line becomes (0.5+0.75) * (1-0.896484375)*0.5*0.5

The 2nd line becomes (0.5+0.5 ) * ( 0.896484375)*0.5*0.5

The other six lines are computed the same way.

The sum is 1.0258789.
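The arithmetic can be verified mechanically (my own Python; corner/gradient pairing and fade(t) = 6t^5 - 15t^4 + 10t^3 as in the breakdown above):

```python
def fade(t):  # improved-noise s-curve: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

x, y, z = 0.5, 0.5, 0.75
corners = [  # (corner offsets, gradient vector), matching the table above
    ((0, 0, 0), ( 1,  0,  1)), ((0, 0, 1), ( 1,  1,  0)),
    ((0, 1, 0), ( 1,  0,  1)), ((0, 1, 1), ( 1, -1,  0)),
    ((1, 0, 0), (-1,  0,  1)), ((1, 0, 1), (-1,  1,  0)),
    ((1, 1, 0), (-1,  0,  1)), ((1, 1, 1), (-1, -1,  0)),
]
total = 0.0
for (cx, cy, cz), g in corners:
    delta = (x - cx, y - cy, z - cz)
    wx = fade(x) if cx else 1 - fade(x)
    wy = fade(y) if cy else 1 - fade(y)
    wz = fade(z) if cz else 1 - fade(z)
    total += dot3(delta, g) * wx * wy * wz
print(total)  # 1.02587890625 -- already outside -1..1
```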

I guess higher value could be possible - though extremely rare in real cases.

### #10
Members - Reputation: **382**

Posted 30 November 2004 - 01:16 AM

Quote:

Original post by gamelife

I guess higher value could be possible - though extremely rare in real cases.

My range (as shown in the third post in this thread) was -1.0332..1.03458 for improved 3D Perlin Noise. Assuming my code is correct, it shows your assumption is right.

Also, thanks for the breakdown, which proves it breaks the -1..1 barrier. Without a mathematical proof, it's too easy to assume someone's code is simply wrong.

I will get to work on a proof of the actual range today.

[Edited by - Doucette on November 30, 2004 9:16:09 AM]

### #11
Members - Reputation: **238**

Posted 30 November 2004 - 04:09 AM

Quote:

Original post by Doucette

Step 4 was hard to follow, but I followed it and it looks good and has the same results that I got.

It was also quite hard to figure out. It's rather unintuitive, unfortunately, because there are a lot of interdependencies.

Quote:

I have not yet coded 3D *original* Perlin Noise. I will code it right now and post the ranges of my results...

Done. I'm calculating my results. So far, the range is:

-0.765153..0.720143 after 2.04 BILLION random samples!

-0.866025..0.866025 is the expected range.

Why isn't it close?

For two reasons. Firstly, 3D noise is calculated from *EIGHT* gradients ... If you think of it, it is much less likely that eight gradients that either closely maximize or minimize the noise value exist out of the 256. ... Secondly, assuming that those eight gradients do exist out of the list of 256, it is much less likely that those eight are going to all be corners around the same 3D "cel" of noise. ... This brings me to my next idea for testing: ...

This gives me a much better idea:

Treat the noise as if you **do** have optimal gradients. Instead of looking for the gradient vectors through the permutation table, calculate them by normalizing the delta vectors. Better yet, just skip the whole find-gradient-calc-dot-product routine, and return the delta vector length as the "dot product" result (then interpolate those). After sampling this within the unit cube with some good resolution, this should give you both the maximum possible noise value **and** the coordinates where it is obtained (which should be (0.5, 0.5, 0.5) if my analysis was indeed correct). The minimum possible value is then minus the maximum.
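That surrogate test is easy to sketch (my own Python, not technobot's code; classic s-curve, and a 0.05 grid rather than a finer interval, for speed):

```python
import math

def s_curve(t):  # classic 1985 s-curve
    return t * t * (3.0 - 2.0 * t)

def lerp(t, a, b):
    return a + t * (b - a)

def best_case_noise(x, y, z):
    # Each corner contributes |delta| -- the best any *unit* gradient
    # could possibly do -- instead of a real gradient-table dot product.
    def corner(cx, cy, cz):
        dx, dy, dz = x - cx, y - cy, z - cz
        return math.sqrt(dx * dx + dy * dy + dz * dz)
    sx, sy, sz = s_curve(x), s_curve(y), s_curve(z)
    v00 = lerp(sx, corner(0, 0, 0), corner(1, 0, 0))
    v10 = lerp(sx, corner(0, 1, 0), corner(1, 1, 0))
    v01 = lerp(sx, corner(0, 0, 1), corner(1, 0, 1))
    v11 = lerp(sx, corner(0, 1, 1), corner(1, 1, 1))
    return lerp(sz, lerp(sy, v00, v10), lerp(sy, v01, v11))

best = max(best_case_noise(i * 0.05, j * 0.05, k * 0.05)
           for i in range(21) for j in range(21) for k in range(21))
print(best)  # expected ~0.866 = 0.5*Sqrt(3) if the maximum sits at the center
```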

Quote:

Concerning "Improved Noise" Perlin Noise:

Quote:

Original post by technobot

This is actually the same as for the "Improved Noise" Implementation. The difference is in what's going on behind the scenes, so let's analyze each step.

3D Perlin Noise based on Ken Perlin's improved noise from his "Improving Noise" paper is different (and has a different range) from the original implementation. Here is a partial list of "Improving Noise" differences, specifically only the differences that change the range:

"Improving Noise" Perlin Noise:

1) The gradients are *not* normalized, and have a length of sqrt(2).

2) The gradients are *not* random and do *not* point in the direction that maximizes (or minimizes) the noise value.

In fact, here they are:

1,1,0, -1,1,0, 1,-1,0, -1,-1,0,

1,0,1, -1,0,1, 1,0,-1, -1,0,-1,

0,1,1, 0,-1,1, 0,1,-1, 0,-1,-1,

1,1,0, -1,1,0, 0,-1,1, 0,-1,-1

(The last row repeats four of the previous gradients, on purpose, to pad the table to 16.)

These gradients create a different range than the original 3D noise, due to the two points above. Now that we have the range of Perlin's original noise calculated, let's work on his Improved Noise range!

Also notice that:

3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

I've already done this analysis in the thread I linked to earlier, although it's somewhat scattered through the thread. Basically, the maximum value should again be obtained for (0.5, 0.5, 0.5), the actual length of which is 0.5*Sqrt(3). But because of (3), the effective length is only 0.5*Sqrt(2). All the gradient vectors also have a length of Sqrt(2), so the dot products return values between +/- 0.5*Sqrt(2)*Sqrt(2) = 0.5*2 = 1. As the interpolations merely average those values, the noise range is hence exactly -1..1. I will post a more complete analysis later.

However, since s(t) is different, the actual maximum may be elsewhere (not middle of unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

EDIT: Hmm, the last part of point 2 is important. I think I have neglected it in the above quick analysis. I will get back to this later.

### #12
Members - Reputation: **382**

Posted 30 November 2004 - 04:41 AM

Re: 3D IMPROVED Perlin Noise:

Quote:

Original post by Doucette

My 3D Perlin Noise, based on Ken Perlin's "Improving Noise" has this range: -1.0332..1.03458. (Based on 6 billion random samples. It took over 62 million samples JUST to break the -1..1 barrier, and over 235 million samples to break both ends of the -1..1 barrier.)

I just coded a non-static 3D Perlin Noise "improved noise" tester. Instead of testing over a 0.0..256.0 cubic range, it tests only in one 0.0..1.0 "cel" but randomly changes the gradients each time. This way, I am testing the infinite possibilities of improved Perlin Noise.

Next, I *forced* all the gradients to point inwards (still only using the set of 12 presets) to force the maximum noise possible. I can explain why this helps produce the maximum noise to anyone questioning it.

It only took 486 samples (instead of 62,000,000) to break the 1 barrier! (The -1 barrier will not be broken, as I'm forcing the maximum value, not the minimum.)

UPDATE EDIT:

So far, I am at 7.09 billion samples and the maximum has been the same ever since sample 2,266,232,577:

location =

(0.515954047287821,0.500268734573173,0.354526271511759)

gradients =

g000 = (1,1,0)

g001 = (1,0,-1)

g010 = (1,-1,0)

g011 = (1,0,-1)

g100 = (-1,1,0)

g101 = (0,1,-1)

g110 = (-1,-1,0)

g111 = (0,-1,-1)

max = 1.036333179749597

UPDATE EDIT:

Noticing that the y in the location above is very close to 0.5, I set one of the dimensions to be locked to 0.5 (x). I beat the above value by doing this after only 85,077,428 samples:

location =

(0.500000000000000,0.518421096020481,0.354506041964626)

gradients =

g000 = (1,1,0)

g001 = (0,1,-1)

g010 = (1,-1,0)

g011 = (1,0,-1)

g100 = (-1,1,0)

g101 = (0,1,-1)

g110 = (-1,-1,0)

g111 = (-1,0,-1)

max = 1.036353132373597

It wasn't beaten after 450,000,000 samples.

UPDATE EDIT:

I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:

location = (0.481445960659562,0.355424509538798,0.500000000000000)

gradients =

g000 = (1,0,1)

g100 = (-1,0,1)

g010 = (0,-1,1)

g110 = (-1,-1,0)

g001 = (1,0,-1)

g101 = (-1,0,-1)

g011 = (0,-1,-1)

g111 = (-1,-1,0)

max = 1.036353775489912

It wasn't beaten after 400,000,000 samples.

I think it's time to try to solve this mathematically.

...

This might get confusing to some... so I'll repeat that this is the 'improved' 3D Perlin Noise, from Ken Perlin's "Improving Noise" 2002 paper, not the original 3D Perlin Noise from Perlin's 1985 paper.

[Edited by - Doucette on November 30, 2004 1:41:20 PM]

### #13
Members - Reputation: **382**

Posted 30 November 2004 - 04:59 AM

Quote:

Original post by technobot

This gives me a much better idea:

Treat the noise as if you **do** have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

...

Re: 2002 'improved' Perlin Noise

Quote:

Original post by technobot

Also notice that:

3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

I did notice that. My test program I posted about in my previous post ignores any potential optimizations by this. I'll think more on this later...

Quote:

Original post by technobot

Basically, the maximum value should again be obtained for (0.5, 0.5, 0.5) ...[cut]... the noise range is hence exactly -1..1. I will post a more complete analysis later.

I set my testing program (again, the one I posted about in my previous post) to force (0.5,0.5,0.5) and the max value was exactly 1. It only took 12 samples to find it, and after 1,000,000,000 samples it was never beaten. I also performed some simple math on paper and proved this, too. So you are right that the maximum value obtained at (0.5,0.5,0.5) is 1, but wrong in your assumption that the maximum value is obtained at (0.5,0.5,0.5)... assuming I am right. I hope that makes sense.

Quote:

Original post by technobot

However, since s(t) is different, the actual maximum may be elsewhere (not middle of unit cube), possibly allowing for values outside that range... This is also suggested by gamelife's post.

I believe this is the case. gamelife's post basically proves this once you go through the math. We no longer have to believe that our code is just buggy for producing values outside of -1..1.

My data in my previous post should prove this, too. If anyone cares to figure it out. I gave the sample location and all the gradient vectors. That's all you need.

[Edited by - Doucette on November 30, 2004 12:59:55 PM]

### #14
Members - Reputation: **382**

Posted 30 November 2004 - 08:44 AM

Quote:

Original post by technobot

Also notice that:

3) All the gradient vectors have two non-zero coordinates, and one zero coordinate, so you're essentially working with 2D vectors.

Ok, I have had time to think about this and implemented an optimization from it. (This is "improved" 3D Perlin Noise, for those just reading from here.)

As I go through my random x,y,z 0.0..1.0 sample points, for every dot-product (of delta & gradient vector) I choose the gradient vector (out of the 12 presets) that maximizes each dot product result. This reduces the trial and error significantly.

It was easy to do this because the gradient vectors are all values of either -1, 0, or 1. This makes the dot product ONE ADDITION!

For example, this is an unoptimized dot product:

A.B = (A.x * B.x) + (A.y * B.y) + (A.z * B.z)

(3 multiplies and 2 additions)

This is an optimized dot product when B = (1,0,1):

A.B = A.(1,0,1) = A.x + A.z

(1 addition)
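A quick check that the addition shortcut agrees with the full dot product (the 12 gradients from the paper; `best_dot` and `full_dot` are my own names):

```python
GRADS = [( 1, 1, 0), (-1, 1, 0), ( 1, -1, 0), (-1, -1, 0),
         ( 1, 0, 1), (-1, 0, 1), ( 1, 0, -1), (-1, 0, -1),
         ( 0, 1, 1), ( 0, -1, 1), ( 0, 1, -1), ( 0, -1, -1)]

def full_dot(a, b):  # 3 multiplies, 2 additions
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def best_dot(a):
    # Every gradient component is -1, 0, or 1, so each candidate dot
    # product is a signed sum of two delta components; the maximum over
    # all 12 gradients is therefore max(|x|+|y|, |x|+|z|, |y|+|z|).
    ax, ay, az = abs(a[0]), abs(a[1]), abs(a[2])
    return max(ax + ay, ax + az, ay + az)

for v in [(0.3, -0.7, 0.2), (0.9, 0.1, -0.4), (-0.25, 0.5, -1.0)]:
    assert best_dot(v) == max(full_dot(v, g) for g in GRADS)
```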

...

My previous maximum:

Quote:

Original post by Doucette

I locked one at 0.5 and the others within the range of 0.45..0.55 and 0.3..0.4. In 8,522,970 samples I beat it again:

location = (0.481445960659562,0.355424509538798,0.500000000000000)

gradients =

g000 = (1,0,1)

g100 = (-1,0,1)

g010 = (0,-1,1)

g110 = (-1,-1,0)

g001 = (1,0,-1)

g101 = (-1,0,-1)

g011 = (0,-1,-1)

g111 = (-1,-1,0)

max = 1.036353775489912

Now to try to beat this...

[EDIT: MY CODE WAS WRONG. 1.055520362955293 was the maximum value I had posted here, but it is wrong and was exaggerated.]

I have run the program again with the proper, working code. After 2,895,347,588 samples, I achieved a new maximum:

location = (0.644795088933775,0.499909066789684,0.481530191209012)

gradients =

g000 = 1.1447

g100 = 0.981439

g010 = 1.14489

g110 = 0.981621

g001 = 1.16326

g101 = 1.01838

g011 = 1.16326

g111 = 1.01856

i = 2895347588

max = 1.036353777355945

The max was not beaten after 6,000,000,000 samples.

Next (as I did with my older un-optimized code) I locked x=0.5, as y=0.499909066789684 in my data above seems to suggest that the result contains an exact 0.5. Here are my results:

In *only* 2,141,561 samples, the previous maximum was beaten. It was continually beaten:

i = 1 max = 0.808986516172702

i = 3 max = 1.018318228389438

i = 16 max = 1.031296019307135

i = 18 max = 1.034753783743431

i = 54 max = 1.036196421499095

i = 435 max = 1.036323797020139

i = 1335 max = 1.036336826993154

i = 2876 max = 1.036341702402226

i = 4502 max = 1.036352946204039

i = 51127 max = 1.036353365009920

i = 297585 max = 1.036353553349657

i = 643643 max = 1.036353650603703

i = 1349779 max = 1.036353691318686

i = 2141561 max = 1.036353808383355 <-- previous max beat

i = 8919930 max = 1.036353808843170

i = 51223134 max = 1.036353809087721

i = 84932703 max = 1.036353809280504

i = 130409817 max = 1.036353809393841

i = 133522358 max = 1.036353810547926

i = 210785009 max = 1.036353810588216

i = 259749544 max = 1.036353810658400

i = 286574578 max = 1.036353810716261

i = 380531512 max = 1.036353810995271

i = 1236970362 max = 1.036353811091619

i = 1439470648 max = 1.036353811095718

i = 2748843691 max = 1.036353811166373

i = 3870810752 max = 1.036353811197563

i = 6330661079 max = 1.036353811203274

i = 9700000000 hasn't been beat yet...

location = (0.500000000000000,0.518508155107934,0.355254297873072)

gradients =

g000 = 1.01851

g100 = 1.01851

g010 = 0.981492

g110 = 0.981492

g001 = 1.16325

g101 = 1.16325

g011 = 1.14475

g111 = 1.14475

i = 6330661079

max = 1.036353811203274

Notes about the data:

1) The gradients are calculated BEFORE s-curve distortion. In fact, I replace my linear interpolation with what I call "s-curve interpolation", which produces the same results as distorting the inputs to linear interpolation with an s-curve. It makes more sense (to me) to consider it an "s-curve interpolation".

2) I did not show the gradient vectors. Why? I never actually calculate them due to the 'addition optimization' of the dot products. I was able to extract the value from the best gradient vector without finding out what it was. But, in case you want to double check the math of the sample data above, you can figure out the gradients from the sample point location.

For example, from (0,0,0) corner:

A.B = A.(1,1,0) or

= A.(1,0,1) or

= A.(0,1,1)

(Only those three, as the other nine point outwards. Every corner has only three of the twelve gradients that point inwards. Remember, gradients pointing inwards help maximize the noise value, so we only want to consider those.)

so, using the 'addition optimization' due to the gradients gives...

A.B = A.x + A.y or

= A.x + A.z or

= A.y + A.z

Just choose the largest sum, and that tells you the gradient. In this case it was (1,0,1).

...

(FYI, the removal of the multiplications by the 12 preset gradient vectors was one of Ken Perlin's improvements in his "Improved Noise" paper.)

[Edited by - Doucette on December 1, 2004 10:44:34 AM]

### #15
Members - Reputation: **238**

Posted 30 November 2004 - 09:55 AM

Quote:

Original post by Doucette

Re: 1985 Perlin Noise

Quote:

Original post by technobot

This gives me a much better idea:

Treat the noise as if you **do** have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results? I want to check whether the Excel analysis is flawed or not, and also to compare the behaviors of these two functions.

NOTE:

To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post.. is that correct? Also, try going at 0.001 intervals within the unit cube, to get a more systematic approximation (rather than a statistical one).
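The suggested test can be sketched directly (my own Python; a 0.05 grid rather than 0.001 to keep the run short; per corner, the best of the 12 gradient dot products is interpolated with the improved fade curve):

```python
GRADS = [( 1, 1, 0), (-1, 1, 0), ( 1, -1, 0), (-1, -1, 0),
         ( 1, 0, 1), (-1, 0, 1), ( 1, 0, -1), (-1, 0, -1),
         ( 0, 1, 1), ( 0, -1, 1), ( 0, 1, -1), ( 0, -1, -1)]

def fade(t):  # improved-noise s-curve: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(t, a, b):
    return a + t * (b - a)

def best_case_improved(x, y, z):
    def corner(cx, cy, cz):
        d = (x - cx, y - cy, z - cz)
        # best achievable dot product at this corner over all 12 gradients
        return max(d[0] * g[0] + d[1] * g[1] + d[2] * g[2] for g in GRADS)
    fx, fy, fz = fade(x), fade(y), fade(z)
    v00 = lerp(fx, corner(0, 0, 0), corner(1, 0, 0))
    v10 = lerp(fx, corner(0, 1, 0), corner(1, 1, 0))
    v01 = lerp(fx, corner(0, 0, 1), corner(1, 0, 1))
    v11 = lerp(fx, corner(0, 1, 1), corner(1, 1, 1))
    return lerp(fz, lerp(fy, v00, v10), lerp(fy, v01, v11))

best = max(best_case_improved(i * 0.05, j * 0.05, k * 0.05)
           for i in range(21) for j in range(21) for k in range(21))
print(best)  # already exceeds 1.0 on this coarse grid
```

Even at this coarse resolution the maximum lands above 1.0 and below the ~1.0364 figure found by the random searches earlier in the thread, consistent with the maximum sitting off-center, near frac(P) = (0.5, 0.5, 0.355).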

###
#16
Members - Reputation: **382**

Posted 30 November 2004 - 10:12 AM

Quote:

Original post by technobot

Quote:

Original post by Doucette

Re: 1985 Perlin Noise

Quote:

Original post by technobot

This gives me a much better idea:

Treat the noise as if you *do* have optimal gradients.

Of course. Excellent idea. This is *easy* to do with the 1985 Perlin Noise.

While I'm deriving the exact solution, could you please run this test both with the old s(t) and the new s(t) and post the results?

Sure. I will do this tomorrow. So you want to see if the old s-curve and the new s-curve give the same result? I believe they will, as they both return 0.5 given 0.5.
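Assuming the standard formulas (my addition: the classic 1985 curve s(t) = 3t^2 - 2t^3 and the improved curve s(t) = 6t^5 - 15t^4 + 10t^3), it's quick to confirm that both pass through 0.5 at t = 0.5:

```python
# Both fade curves agree at the midpoint, so both interpolate
# the corner values with equal weight at the cell center.
old_s = lambda t: 3*t**2 - 2*t**3            # 1985 Perlin Noise
new_s = lambda t: 6*t**5 - 15*t**4 + 10*t**3  # Improved Noise

assert old_s(0.5) == new_s(0.5) == 0.5
```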

Quote:

Original post by technobot

To adjust the test for the Improved Noise - for each delta vector, calculate the dot product with all 12 gradient vectors, and take the maximum result. Then interpolate those. This is guaranteed to give you the maximum noise range, and the position(s) where the maximum noise value is obtained. I think this is essentially what you did in your last post.. is that correct?

Yes, it is exactly what I did. I made some optimizations which made it faster (with the downside of not actually calculating the gradients to display).

###
#17
Members - Reputation: **238**

Posted 30 November 2004 - 11:35 AM

__Step 1:__

Just as boring as last time.

__Step 2:__

The gradient vectors are of length Sqrt(2). One component of each gradient vector is always zero, so the maximum effective length of the delta vectors is also Sqrt(2). The minimum delta vector length is still 0.

__Step 3:__

Maximum possible values are when the following conditions are met:

1. The delta vector is in the same direction as the gradient vector (cosine of angle equals 1).

2. The delta vector ends on the containing unit cube's surface (maximum delta vector length given the direction constraint).

The gradient vector's zero coordinate cancels out the corresponding coordinate of the delta vector during the dot product calculation, so we can treat that coordinate as also being zero. The gradient vectors themselves always satisfy both of the above conditions. The result is that if the delta vector satisfies those conditions, we are essentially taking the dot product of the gradient vector with itself, returning the square of its length, i.e. 2 (btw, this would also hold for the old perlin noise, if its gradient vectors were not normalized). So the dot products give a range of -2..2. This is then reduced by the various dependencies in step 4.
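A tiny check of that bound (my own sketch; the gradient set is the one from the Improved Noise paper): dotting each gradient with itself, i.e. with the best possible delta vector, gives its squared length:

```python
# Each of the 12 gradients dotted with itself gives its squared
# length, 2, so the per-corner dot products are bounded by -2..2.
GRADS = [(1,1,0),(-1,1,0),(1,-1,0),(-1,-1,0),
         (1,0,1),(-1,0,1),(1,0,-1),(-1,0,-1),
         (0,1,1),(0,-1,1),(0,1,-1),(0,-1,-1)]

assert all(gx*gx + gy*gy + gz*gz == 2 for gx, gy, gz in GRADS)
```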

__Step 4:__

The behavior of the new s(t) is almost identical to that of the old s(t), leading to an initial assumption that we should again get the maximum noise value at the center of the unit cube, and the maximum value should be 0.5*Sqrt(2)*Sqrt(2) = 1. However, the supplied test data clearly falsifies this assumption, so there must be something else going on here.

I suspect that part of this might be due to the fact that the dot products are functions of all three components of P, leading to interdependencies that make the 1D analysis I did in Excel inaccurate. The other thing to consider is the nature of the gradient vectors. Since all the gradient vectors have a zero coordinate, and since the delta vectors are not constrained to be of unit length, it does not matter what the corresponding coordinate of the delta vector equals. This removes one of the dependencies, and alters the behavior of the noise. For example, if the gradient vector is (0,1,1), then the dot product for that corner no longer depends on P.x.

Since the delta vectors always point inward, the largest results will be attained if all the gradient vectors for the containing unit cube also point inward. There are only three such gradient vectors for each corner. For corner (0,0,0), these are (1,1,0), (1,0,1), and (0,1,1). By mirroring around the x, y, and/or z axes, the corresponding vectors for the other corners can be obtained.

For each pair of corners, there are two options: A) both gradient vectors have a zero component on the same axis; or B) the gradient vectors have zero components on different axes. Notice that the vectors are equivalent - we can get from one to another by renaming the axes. This allows us to arbitrarily select which axes have zero components in the gradients.

Let's analyze each case separately. frac(P)=(x,y,z):

__Case A:__ Since the y and z components of the gradient vectors are non-zero, both dot products are still functions of y and z. Assuming the Excel analysis was correct after all, we should get maximum values for y=z=0.5. This is also hinted at by the test data. Hence:

grad(a) = grad(b) = (0,1,1)

delta(a) = (x , 0.5, 0.5)

delta(b) = (x-1, 0.5, 0.5)

a = grad(a).delta(a) = 0*x + 1*0.5 + 1*0.5 = 1

b = grad(b).delta(b) = 0*(x-1) + 1*0.5 + 1*0.5 = 1

An interpolation along x will then give 1. The only way the noise can be more than 1, is if one or more of the other x interpolations returns a value above 1.
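Case A is easy to verify numerically (a sketch of mine, using the new s(t)): with y = z = 0.5 both corner values are 1, so the x interpolation returns 1 for any x, regardless of the fade curve:

```python
def s(t):
    # the new s(t): 6t^5 - 15t^4 + 10t^3
    return t*t*t*(t*(t*6 - 15) + 10)

for x in (0.0, 0.25, 0.7, 1.0):
    a = 0*x + 1*0.5 + 1*0.5          # grad(a).delta(a) = 1
    b = 0*(x - 1) + 1*0.5 + 1*0.5    # grad(b).delta(b) = 1
    p = a + s(x)*(b - a)             # interpolating equal values
    assert p == 1.0
```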

__Case B:__

grad(a) = (0,1,1)

grad(b) = (1,0,1)

As before, we can guess z=0.5, since both a and b are functions of z. Since b is not a function of y, the Excel analysis does not apply to y. Similarly, it does not apply to x either. We have to find the point of maximum noise value some other way. Suppose the point is (x_max, y_max, 0.5). x_max then depends on the selection of y_max, and vice versa. This leaves us with a set of interdependent partial derivatives, which as far as I can tell have no analytical solution. :-/

If you're interested, here is what I've got for x, treating y as a parameter (assuming y=y_max).

delta(a) = (x , y_max, 0.5)

delta(b) = (x-1, y_max, 0.5)

k=y_max+1

a = grad(a).delta(a) = 0*x + 1*y_max + 1*0.5 = y_max + 0.5 = k - 0.5

b = grad(b).delta(b) = 1*(x-1) + 0*y_max + 1*0.5 = x - 1 + 0.5 = x - 0.5

b-a = x - y_max - 1 = x - k

t = x

s(t) = s(x) = 6*x^5 - 15*x^4 + 10*x^3

p(s) = b*s + a*(1-s) = b*s + a - a*s = (b-a)*s + a

p(x) = (x-k)*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5

p(x) = (6*x^6 - 15*x^5 + 10*x^4) - k*(6*x^5 - 15*x^4 + 10*x^3) + k-0.5

p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

(Looking for the maximum:)

p' = dp/dx = 36*x^5 + (-75-30k)*x^4 + (40+60k)*x^3 - 30k*x^2 = 0

p' = x^2*[36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k] = 0

x1 = x2 = 0 (from the x^2 factor), leaving:

36*x^3 + (-75-30k)*x^2 + (40+60k)*x - 30k = 0

This is where I am stuck... Note that while this could be solved parametrically using the cubic equation, that is rather complex, and would still not give any clue as to what k (or y_max) should be.
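Though the cubic resists a clean closed form, it can be probed numerically (my own sketch, not from the thread). Scanning p(x) over 0..1 for several k in the relevant 1..2 range suggests the cubic's interior root is a minimum of p, not a maximum, leaving the endpoint x = 0 as the maximum on the interval:

```python
# Numeric scan of p(x) = (x-k)*s(x) + k - 0.5 for k in 1..2.
def s(x):
    return 6*x**5 - 15*x**4 + 10*x**3

def p(x, k):
    return (x - k)*s(x) + k - 0.5

for k in (1.0, 1.25, 1.5, 1.75, 2.0):
    xs = [i/1000 for i in range(1001)]
    best_x = max(xs, key=lambda x: p(x, k))
    assert best_x == 0.0  # maximum always sits at the x = 0 endpoint
```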

*A note concerning the randomness of the gradient vectors*: as Ken Perlin mentions in his paper, the permutation table provides plenty of randomness. This is also true for the old noise. Anyway, it doesn't affect the limits of the noise, only your chances of reaching those limits.

###
#18
Members - Reputation: **365**

Posted 30 November 2004 - 11:44 AM

Quote:

Original post by technobot

p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

Let me get this straight; You want the maximum value of the multivariable function?

p(x, k) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

I asked my friend who said he was pretty sure that the maximum value is located at x=1 and k=inf...

Although, we aren't too sure if it is correct... I have tried to graph the function with Mathematica to no avail...

Now when I constrain the two variables to be within -1 and 1, I get x=-1 and k=1

Does any of this sound right?

[Edited by - Dwiel on November 30, 2004 6:44:49 PM]

###
#19
Members - Reputation: **238**

Posted 30 November 2004 - 11:35 PM

Quote:

Original post by Dwiel

Quote:

Original post by technobot

p(x) = 6*x^6 + (-15-6k)*x^5 + (10+15k)*x^4 - 10k*x^3 + k-0.5

Let me get this straight; You want the maximum value of the multivariable function?

Err.. basically, yes. I was hoping to kick k out, and find the optimum x. This is for the x interpolation of the perlin noise. Then, since k behaves the same in the y interpolation, the optimum k would be the same.

EDIT: Correction: what I want, is given a known y, what is the x that gives a maximum.

Solving both interpolations simultaneously (i.e. finding the maximum of the perlin noise function directly) is much more complex, since there are several different cases, depending on the other gradient vectors.

Quote:

Now when I constrain the two variables to be within -1 and 1, I get x=-1 and k=1

The domain that we're interested in is x=0..1, y=0..1. Given that k=y+1, that makes it x=0..1, k=1..2. I'll try to plot this in Excel.. don't know if I'll get anything useful..

EDIT: Ok. I did the plot. Unfortunately, for any given y, there is no maximum along x, except for x=0. This isn't very useful, for reasons that are rather hard to explain. We have no choice but to analyze the complete noise function directly. That is way too much work for me (I don't have **that** much spare time)...

[Edited by - technobot on December 1, 2004 6:35:15 AM]