Noise Generation

Published January 02, 2020 by Eric Nevala, posted by slayemin

Any time you need to procedurally generate content or assets in a game, you're going to want an algorithm that creates something with both form and variation. The naive approach is to use purely random numbers. This certainly meets the noise requirement, but it has no form; it's just static. What we want is a technique for generating controllable randomness, where each value isn't too different from its neighbors (i.e., not static). Pretty much any time you have this requirement, the go-to answer is almost always "use Perlin noise - end of discussion." That's easier said than done. If you've tried to build your own version based on Perlin's C++ reference implementation or Hugo Elias' follow-up article, you've probably ended up lost and confused. Let's break the problem down into something simple and easy to implement quickly.

Terminology

Noise is a set of random values over some fixed interval. Example: let X be the interval and Y be the noise value:

Noise = {(0,6), (1,3), (2,4), (3,8), (4,2), (5,1), (6,5), (7,4), (8,2), (9,5)}

Static noise is a type of noise where there is no continuity between values. In the example above, there is no smooth variation in Y between successive X values.

Smoothed noise is a type of noise where there is continuity between values. This continuity is accomplished by interpolating between values. The most common interpolation function is linear interpolation. A slightly better-looking function is cosine interpolation, at a slight cost in CPU.

Lerp.png
Cos_Lerp.png
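The two interpolation functions above can be sketched in a few lines. Python is used here for brevity (the article's own implementation is C#); the math is identical in either language:

```python
import math

def lerp(a, b, t):
    # Straight-line blend between a and b; t runs from 0.0 to 1.0.
    return a * (1.0 - t) + b * t

def cosine_lerp(a, b, t):
    # Remap t through a half cosine so the curve eases in and out at the
    # endpoints, which makes the smoothed noise look less angular.
    t2 = (1.0 - math.cos(t * math.pi)) / 2.0
    return a * (1.0 - t2) + b * t2
```

Both functions agree at the endpoints (t = 0 returns a, t = 1 returns b); they only differ in how they travel between them.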

Amplitude is the maximum amount of vertical variation in your noise values (think of sine waves). Frequency is the periodicity of your noise values along your scale. In the example above, the frequency is 1.0 because a new noise value is generated at every integer X. If we halved the period to 0.5, our frequency would double.

Concept on how it works

There is a semantic distinction between Perlin's noise implementation and the common implementation shown here: Perlin doesn't add layers of random values together (his algorithm interpolates gradients at grid points). The end result looks pretty much the same.

The core idea behind the layered summation approach to noise generation is that we first generate a very rough layer by generating noise with low frequency and high amplitude. Let's call this "Layer 0". Then, we create another layer with half the amplitude and double the frequency. Let's call this "Layer 1". We keep creating additional layers, with each layer having half the amplitude and double the frequency. When all of our layers have been created, we merge all of the layers together to get a final result. Layer 0 tends to give the final output noise its general contours (think of mountains and valleys). Each successive layer adds a bit of variation to the contours (think of the roughness of the mountains and valleys, all the way down to pebbles). Here is an example of 3 additive noise layers in 1 dimension:

Layer0_1D.png
Layer1_1D.png
Layer2_1D.png
Final1D.png
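The layered summation idea above can be sketched compactly in 1D. This is a Python sketch for illustration only (the article's real implementation, in C#, follows below); the function name and parameters are my own, but the structure mirrors the article: each layer has half the amplitude and double the frequency of the previous one, and the layers are summed:

```python
import random

def layered_noise_1d(num_points=9, layers=3, amplitude=8.0, seed=5):
    # Sum several layers of smoothed random noise. Layer 0 is coarse
    # (few control points, big amplitude); each successive layer halves
    # the amplitude and doubles the number of control intervals.
    rng = random.Random(seed)
    total = [0.0] * num_points
    for layer in range(layers):
        # 2^(layer+1) + 1 control points, matching the article's F(x) = 2^(x+1) + 1.
        control = [rng.random() * amplitude for _ in range(2 ** (layer + 1) + 1)]
        step = (num_points - 1) / (len(control) - 1)  # output samples per control interval
        for i in range(num_points):
            g = min(int(i / step), len(control) - 2)  # left control point index
            t = i / step - g                          # fractional position in the interval
            total[i] += control[g] * (1.0 - t) + control[g + 1] * t  # linear interpolation
        amplitude /= 2.0
    return total
```

Because the generator is seeded, the same seed always reproduces the same noise curve, which is handy for debugging.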

When it comes to 2D, the underlying principle is the same:

2D_Layers.png


Noise generation

There are four distinct parts to generating the final noise texture:

1) Generate a set of random numbers for each layer, with the range being a function of amplitude and the quantity being a function of the layer resolution.
2) Determine which interpolation method you want to use for smoothing (linear, cubic, etc).
3) For each point in the output, create interpolated noise values for all X, Y values which fall between noise intervals.
4) Sum all interpolated noise layers together to get the final product.

In the code below, I've heavily commented and explained each step of the process to generate 2D noise with as much simplicity as possible.

/// <summary>
/// Entry point: Call this to create a 2D noise texture
/// </summary>
public void Noise2D()
{
	//Set the size of our noise texture
	int size = 256;

	//This will contain the sum of all noise layers added together.
	float[] NoiseSummation = new float[size * size];

	//Let's create our layers of noise. Take careful note that the noise layers
	//do not care about the dimensions of our output texture.
	float[][,] noiseLayers = GenerateLayers(8, 8);

	//Now, we have to merge our layers of noise into a summed product.
	//This is when the size of our result becomes important.
	Smooth_and_Merge(noiseLayers, ref NoiseSummation, size);

	//Now, we have a summation of noise values. We need to normalize them so that
	//they are between 0.0 -> 1.0. This is necessary for generating our RGB values
	//correctly and working in a known space of ranged values.
	float max = NoiseSummation.Max();
	for (int a = 0; a < NoiseSummation.Length; a++)
		NoiseSummation[a] /= max;

	//At this point, we're done. Everything else is just for using/displaying the
	//generated noise product. I've added the following for illustration purposes:

	//Create a block of color data to be used for building our texture.
	Color[] vals = new Color[size * size];

	//Convert the noise data into color information. Note that I'm generating a
	//grayscale image by putting the same noise data in each color channel. If you
	//wanted, you could create three noise values and put them in separate color
	//channels. The red channel could store terrain height map information, the
	//green channel could contain vegetation maps, and the blue channel could
	//store anything else.
	for (int a = 0; a < NoiseSummation.Length; a++)
		vals[a] = new Color(NoiseSummation[a], NoiseSummation[a], NoiseSummation[a]);

	//Create the output texture and copy the color data into it.
	//The texture is ready for drawing on screen.
	TextureResult = new Texture2D(BaseSettings.Graphics, size, size);
	TextureResult.SetData(vals);
}

/// <summary>
/// Takes the layers of noise data and generates a block of noise data at the given resolution
/// </summary>
/// <param name="noiseData">Layers of noise data to merge into the final result</param>
/// <param name="finalResult">The resulting output of merging layers of noise</param>
/// <param name="resolution">The size/resolution of the output texture</param>
private void Smooth_and_Merge(float[][,] noiseData, ref float[] finalResult, int resolution)
{
	//This takes all of the layers of noise and creates a region of interpolated
	//values using bilinear interpolation.
	//http://en.wikipedia.org/wiki/Bilinear_interpolation
	int totalLayers = noiseData.Length;
	for (int layer = 0; layer < totalLayers; layer++)
	{
		//To figure out the side length of our square grid, we just take the square
		//root of the array length. It's guaranteed to be an integer square root,
		//ie. 25 = 5x5.
		int squareSize = (int)Math.Sqrt(noiseData[layer].Length);

		//This is our step size between noise data points for this layer.
		//As we go into higher resolution layers, this value gets smaller.
		int gridWidth = resolution / (squareSize - 1);

		//Go through every X/Y coordinate in the resolution
		for (int y = 0; y < resolution; y++)
		{
			for (int x = 0; x < resolution; x++)
			{
				//Map each X/Y coordinate to the nearest noise data point
				int gridY = (int)Math.Floor(y / (float)gridWidth);
				int gridX = (int)Math.Floor(x / (float)gridWidth);

				//Define the four corners of the grid cell
				float x1 = gridX * gridWidth;
				float x2 = x1 + gridWidth;
				float y1 = gridY * gridWidth;
				float y2 = y1 + gridWidth;

				//BILINEAR INTERPOLATION (see the Wikipedia article):
				//Perform our linear interpolations along the X-axis
				float R1 = ((x2 - x) / gridWidth) * noiseData[layer][gridX, gridY]
				         + ((x - x1) / gridWidth) * noiseData[layer][gridX + 1, gridY];
				float R2 = ((x2 - x) / gridWidth) * noiseData[layer][gridX, gridY + 1]
				         + ((x - x1) / gridWidth) * noiseData[layer][gridX + 1, gridY + 1];

				//Now, finish by interpolating along the Y-axis to get our final value
				float final = ((y2 - y) / gridWidth) * R1 + ((y - y1) / gridWidth) * R2;

				//Summation step: add the interpolated result to our existing noise data.
				finalResult[y * resolution + x] += final;
			}
		}
	}
}

/// <summary>
/// Creates a series of layers with STATIC noise data in each layer.
/// </summary>
/// <param name="amplitude">The maximum variation in noise</param>
/// <param name="layerCount">The number of layers you want to generate. Each layer has 2^x more data!</param>
/// <returns>A jagged array of floats which contains the noise data points for each layer</returns>
private float[][,] GenerateLayers(float amplitude, int layerCount)
{
	/* Note that we do not care about period or frequency here.
	   We're still resolution independent. */
	float[][,] ret = new float[layerCount][,];

	//A seeded pseudo-random number generator. Use different fixed seeds to
	//generate different noise maps.
	Random r = new Random(5);

	//We want to generate noise points for each layer
	for (int layer = 0; layer < layerCount; layer++)
	{
		//The number of noise points we need per layer is a function of the layer
		//resolution (implied by the layer ID). At each successive layer, we halve
		//our amplitude and double our frequency. This becomes important.
		//At the lowest resolution, 0, we need at least a 3x3 grid of points. (2+1)
		//At resolution 1, we need at least a 5x5 grid of noise points. (4+1)
		//At resolution 2, we need at least a 9x9 grid of noise points, etc. (8+1)
		//We can generalize this to F(x) = 2^(x+1) + 1, where x is the layer resolution.
		int arraySize = (int)Math.Pow(2, layer + 1) + 1;
		ret[layer] = new float[arraySize, arraySize];

		//For each X/Y point in the noise grid for our current layer, let's
		//generate a random number which is a function of our amplitude.
		for (int y = 0; y < arraySize; y++)
		{
			for (int x = 0; x < arraySize; x++)
			{
				ret[layer][x, y] = (float)(r.NextDouble() * amplitude);
			}
		}

		//Now, we halve our amplitude and repeat for the next layer.
		amplitude /= 2.0f;
	}
	return ret;
}

Usage and Examples

Terrain: I am using this noise technique to procedurally generate height maps for terrain. My terrain uses GeoMipMapping for calculating level of detail (LOD). Although I haven't tried to implement it yet, I could use the noise layers themselves as a natural LOD scheme for terrain: as the camera gets closer to the terrain, merging additional layers into the final noise product would automatically increase the terrain detail.

Terrain.png
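That octaves-as-LOD idea can be sketched very simply. This is a hypothetical Python sketch, not part of the article's implementation: it assumes each terrain vertex stores its per-layer noise contributions in a list ordered coarse to fine, and that the LOD system picks how many layers to merge based on camera distance:

```python
def height_at_lod(octave_contributions, lod_level):
    # octave_contributions: this vertex's per-layer noise values, ordered
    # from coarsest (layer 0) to finest. lod_level: how many layers to merge.
    # Far-away terrain sums only the coarse layers (general contours);
    # nearby terrain sums them all, so detail rises as the camera approaches.
    return sum(octave_contributions[:lod_level])
```

For example, a vertex with contributions [8.0, 3.0, 1.0] would render at height 8.0 in the distance and 12.0 up close.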

Clouds: Clouds can be created very easily with 2D noise. Option 1: if you switch to 3D noise and let the Z axis represent time, you can create the illusion of clouds changing shape over time. Option 2: if you pre-generate a set of cloud textures at low opacity, you can layer them on top of each other and let the layering create the illusion of changing cloud shapes.

Distribution patterns: You can also use noise to procedurally figure out distribution patterns for things such as trees, grass, and shrubs. Since your noise is continuous and ranges from 0.0 -> 1.0, you can arbitrarily decide "all values between 0.25 and 0.35 indicate where on the terrain shrubs shall be positioned".

Density maps: You can also use noise maps to determine the density of "stuff", whatever that happens to be. A height map is really just a type of density map: it describes the density of dirt above sea level.

Texturing: You can tweak various values of the final noise map to procedurally generate some very interesting textures (marble, wood, fire, etc).
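The distribution-pattern idea is a straight thresholding pass over the normalized noise. A minimal Python sketch (the function name and the band values are illustrative, not from the article's code):

```python
def shrub_positions(noise_map, lo=0.25, hi=0.35):
    # noise_map: a 2D list of normalized noise values in [0.0, 1.0],
    # indexed as noise_map[y][x]. Returns the (x, y) cells whose noise
    # value falls inside the chosen band, i.e. where shrubs go.
    positions = []
    for y, row in enumerate(noise_map):
        for x, v in enumerate(row):
            if lo <= v <= hi:
                positions.append((x, y))
    return positions
```

Because the noise is continuous, the selected cells naturally form clumped patches rather than isolated speckles, which is exactly what you want for vegetation.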

See also

While we've talked about Perlin-style noise being a very good approach for creating noise with continuity, other alternatives exist:

1) Fractals - Fractals are a viable alternative to noise algorithms for solving particular problems. The challenge is finding a suitable fractal and tweaking its values to generate the desired results.

2) Simplex Noise - It looks complicated enough that I'm not going to touch it here. It's worth mentioning since it's supposedly better in terms of CPU and memory performance.

References

(1) Ken Perlin's noise (1988)
(2) Hugo Elias' variation on Perlin noise
(3) Riemer's version (HLSL)
(4) DigitalErr0r's GPU version
(5) Bilinear interpolation


Comments

HaitzGames

Very good article.

Let me suggest this one I wrote about the subject, I think people here could find it useful:

http://www.codeproject.com/Articles/785084/A-generic-lattice-noise-algorithm-an-evolution-of

August 25, 2014 09:30 PM
JTippetts

1) Fractals - Fractals are a viable alternative to noise algorithms for solving particular problems. The challenge is finding a suitable fractal and tweaking its values to generate the desired results.


The approach of layering successive layers of noise at decreasing scales is commonly considered to be a fractal; ie, it demonstrates self-similarity at increasingly small scales. So this statement doesn't really make sense, given that you spend a good portion of the article describing how to construct a fractal.

2) Simplex Noise - It looks so complicated, I'm not even going to touch it. It's worth mentioning since it's supposedly better in terms of CPU and memory performance.


Simplex noise looks hairier than it really is. Where traditional 2D lattice noise uses 4 values (the corners of a square) to determine the noise value at a given point, 2D simplex noise simply uses 3 values (the corners of an equilateral triangle). It is still lattice noise, fundamentally similar to the grid-based functions. Where it gets hairy is in a) the skewing transformation from grid-space to simplex-space and b) the function to blend the 3 values into the final result. Once you understand what is going on there, the rest is easy. The thing about simplex noise is that an N-dimensional simplex has a number of vertices equal to N+1, whereas an N-dimensional grid unit has 2^N vertices, meaning that as you increase N the number of interpolations in standard noise increases significantly faster than the number of interpolations in simplex noise.

In my library, the Accidental Noise Library, it is possible to generate up to 6-dimensional noise (useful for seamlessly looping 3D noise volumes). When using a simplex basis, this means that the number of vertex values blended for the final result is 7, whereas in grid noise the result would be calculated as a set of nested linear interpolations from 2^6, or 64 vertices. It has been my experience that you achieve the most performance gains by switching to simplex at higher dimensions; most lower-dimensional (2/3/4D) forms don't achieve that much of a gain. Here, the benefits are mostly aesthetic, ie in the elimination of the rectangular grid bias that underpins lattice noise and that can result in quite visible artifacts in the result. Of course, since simplex noise is still lattice noise, there is another bias that often manifests as artifacts aligned with the triangular lattice structure of the simplex space.
August 26, 2014 10:18 AM
slayemin

1) Fractals - Fractals are a viable alternative to noise algorithms for solving particular problems. The challenge is finding a suitable fractal and tweaking its values to generate the desired results.


The approach of layering successive layers of noise at decreasing scales is commonly considered to be a fractal; ie, it demonstrates self-similarity at increasingly small scales. So this statement doesn't really make sense, given that you spend a good portion of the article describing how to construct a fractal.

You're probably right. When I think of fractals, what comes to mind is the Mandelbrot Set, followed by the Koch snowflake, and then the Julia set, and then I think about fractals and recursion in nature (ferns, evergreens, and cauliflower). I think of fractals as a particular type of noise based on their appearance and functional use; you think of noise as a particular type of fractal based on the algorithmic similarities (if I'm understanding you correctly). I can certainly see the recursive similarities between noise and fractals. Considering the robustness of your noise library and how much time and effort you've put into it, I'll quickly cede the semantic distinction to you since you're much more experienced than I am.

I'll certainly keep simplex noise in mind. It looks complicated and it sounds like it comes with a big upfront R&D cost, but it is a lot more computationally efficient. If I ever need that efficiency, that's probably where I'll start looking (and watch, two years down the road I'll need it and will find myself trying to decipher it...) . For the people who just need to procedurally generate a bunch of noise with the least amount of effort and programmer time, the short and simple solution is the best. If the simple solution isn't sufficient, I provide a few alternatives to start from.

I like your library and would be interested in using it, but I wouldn't know where to begin in integrating it into my project. What would be handy is a DLL and some API documentation :)

August 26, 2014 05:38 PM
JTippetts
For further clarification, this is Perlin noise:

i4WsObJ.png

This is Perlin's simplex noise:

687fYng.png

This is not Perlin noise at all:

x14WbbF.png

Perlin noise denotes the specific noise variant that is calculated by using gradient functions at the grid vertices. It was specifically designed to throw the "highs" and "lows" of the function off of the integral grid boundaries, and thus help to hide the grid-based bias, something that the variant which simply assigns random values to the vertices and interpolates them (the third pic, and Hugo Elias' version, and the version presented in this article) does not do. It is a common error, mostly propagated by the popularity of the Hugo Elias page. But I believe that assigning the label "Perlin noise" to every variant goes directly against what Ken Perlin himself desired, for it was his intention all along to try to reduce the grid bias and associated artifacts. That was the whole point of his gradient and, later, simplex variants.

As far as fractals go, per the Wikipedia entry on fractals, A fractal is a natural phenomenon or a mathematical set that exhibits a repeating pattern that displays at every scale. This is almost exactly what is being demonstrated by layered Perlin/simplex/value noise: self similarity at smaller scales. Your top layer looks very much like the next layer, which looks very much like the next layer, and so on; the chief difference being frequency of the subsequent layers and amplitude of each final layer contribution. So these layered noise fractals do exhibit the repeating patterns at smaller scales that is the hallmark of a fractal; although for purity's sake, it should be noted that noise fractals are not infinite; ie, there is a threshold beyond which the self-similarity disappears, due to the finite amount of layers. For all practical intents and purposes, though, these layered functions are fractals.

In my opinion it is important to clearly differentiate between the basis functions of the layers (gradient, simplex, value, etc...) and the method used to layer them into a fractal. The stereotypical cloud-like fractal variant you demonstrate, which is created by summing layers of increasing frequency and decreasing amplitude, is but one way in which noise layers can be combined. Numerous other ways exist, which drastically alter the character of the final function. Ridged variants, for example, take the absolute value of the layer before summing together. The curious among us would do very well to read what Iñigo Quilez has to say about this topic, as he presents a number of very interesting methods for combining noise layers, including using the derivative of a noise function (also called its gradient) as a factor.

I highly recommend anybody that is interested in this stuff (especially in writing about it) read the book Texturing and Modeling: A Procedural Approach. Co-authored by Ken Perlin himself, as well as by F. Kenton Musgrave, Darwyn Peachey, David Ebert and Steven Worley (all very influential pioneers in the field), the book can really be considered the bible of noise, among other things. It presents a very detailed look at these topics.

Please note that it's not my intention to disparage what you have written here, or to discourage you from working on it further. I just feel that the particular approach you have taken is slightly misguided. More importantly, though, I think that you might be more effective if you took a different approach altogether. Instead of discussing the actual noise (which is thoroughly discussed elsewhere), it might be more beneficial to the community to discuss specific applications of noise, the things that you just sort of glossed over in your Usage and Examples sub-heading. That is where, in my experience, most people get lost. Just how do you use all this stuff to make something cool?

Understanding the guts of noise functions, and applying them to a task, are two different things, and in a time when third-party libraries such as libnoise and ANL are available, I'm not so sure that understanding the internal workings of the functions is really the key thing game developers need to grasp. Instead, showing exactly how noise can be used might be the more influential thing you could write in the long run.

If you do decide to continue tackling the fundamentals of noise functions, though, then in my opinion it is important that you be as thorough and as correct as possible, to differentiate yourself from the whole slew of other beginner Perlin noise tutorials out there. There's really only so much that can be said about the basics, after all.
August 26, 2014 10:22 PM
slayemin

I did add a prominent disclaimer noting that the noise I'm using is not Perlin noise. I never claim to create Perlin noise, so I feel I'm a bit off the hook there.

As for the book, that is the second time it's been mentioned to me within a week so I just put in an order for it on Amazon.

Thanks for the feedback. It's very valuable. I am in agreement with you on the practical applicability of noise and how to make use of it. I think that's worth exploring in more detail and should be added into the article. I think I should retract the "under review" status and expand on this a bit more and go into more detail on the implementation. The integration with procedural terrain generation might be interesting to some people, though that would also be a bit implementation specific because of how I have architected my terrain data structures.

August 26, 2014 10:53 PM
swiftcoder

I'm with JTippetts on this one. Elias' article has done enough damage to the general understanding of what Perlin noise is. You need to clearly define the distinction between gradient noise (like Perlin noise) and value noise (like Elias' noise) - describing Elias' article as a "variant on Perlin noise" is not sufficient.

August 26, 2014 11:03 PM
Buckeye

Overall, I found the article very difficult to follow. There is no clear declaration or description describing what the article is about. As the reader, I was left to try (mostly unsuccessfully) to figure out what the author was talking about at any particular point. See my comments below.

Comments:

"Any time you need to procedurally generate some content or assets in a game, you're going to want to use an algorithm to create something that has both form and variation. The naive approach is to use purely random numbers." "Content" and "assets" for a game are very vague terms that can include any number of things: menus, audio, etc., so.. what do random numbers have to do with those things?

The keywords (noise and perlin) and opening paragraphs promise a simple and easy implementation of Perlin noise. Later, the article states that Perlin does not do it the way the article discusses.

"If you've tried to build your own version.." Version of what?

"Let's break the problem down.." What problem? IF the "problem" is understanding or implementing the referenced articles (as is implied), that doesn't happen.

"Noise is a set of random values over some fixed interval." Following that is not a set of values over an interval, but a single value (well-hidden in an undescribed structure of some sort) at the stated interval.

"There is a semantic distinction between Perlin's noise implementation and the common implementation." Implementation of what? And what is "the common" implementation?

"The core idea behind the layered summation approach to noise generation.." IF the intent of the article is to discuss a "layered summation approach to noise generation," why don't you state that early in the article, or even use that in the title for the article? And, at that point, it hasn't been established what the purpose of any approach to noise generation is, and why it's of any interest. IF the intent is to apply it to texture generation (which I finally concluded was the intent), why don't you state THAT?

"When it comes to 2D, the underlying principle is the same: [followed by pictures]" No explanation of what 2D noise means, or why images are used to demonstrate the (so-far-well-hidden) "underlying principle."

"There are four distinct parts to generating the final noise texture." Texture?? Textures weren't mentioned prior to that statement, and you now mention the "final" texture.

"..the quantity being a function of the layer resolution." Resolution? The previous discussion of a layer didn't mention "resolution" or define what it may be.

The article then contains code without relating it to the "four distinct" parts mention previously.

Strangely, the article ends with what it should've started with - the uses to which a noise generation routine can be put.

For my review: the article is a wrapper for a code post. It promises "something simple and easy to implement quickly." What that "something" may be is never clearly defined. Leaving the reader to determine what the author is attempting to discuss does not make it simple. Posted code, I suppose, could be interpreted as "easy to implement quickly," but cut-and-paste, IMHO, does not make a good article.

September 11, 2014 02:12 AM

This article explores how to create procedurally generated noise with continuity (inspired by Ken Perlin's work). The resulting products can be applied towards generating terrain, clouds, dynamic textures, etc.
