trapazza

Procedural terrain: What's the best approach to calculate noise in the GPU?

Recommended Posts

A few years ago I started creating a procedural planet engine/renderer for a game in Unity, which after a couple of years I had to stop developing due to lack of time. At the time I didn't know much about shaders, so I did everything on the CPU. Now that I have plenty of time and am more aware of what shaders can do, I'd like to resume development with the GPU in mind.

For the terrain mesh I'm using a cubed sphere and chunked LODs. The way I calculate heights is rather complex since it's based on a noise tree, where leaf nodes are noise generators (Simplex, Value, Sine, Voronoi, etc.) and branch nodes are filters and modifiers (FBM, abs, neg, sum, ridged, billow, blender, selectors, etc.). To calculate the heights for a mesh you call void CalcHeights( Vector3[] meshVertices, out float[] heights ) on the noise tree's root node, somewhere in a Unity script. This approach offers a lot of flexibility but also puts a lot of load on the CPU. The first obvious step would be (I guess) to move all generators to the GPU via compute shaders, then do the same for the rest of the filters. But, depending on the complexity of the noise tree, a single call to CalcHeights could potentially cause dozens of calls back and forth between the CPU and GPU, which I'm not sure is a good thing.
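For what it's worth, the tree design described above can be sketched roughly like this. This is a minimal, self-contained sketch: everything except the CalcHeights signature is an assumption about the actual engine (including the stand-in Vector3 struct, which in Unity would be UnityEngine.Vector3), and the billow transform shown is just one common definition.

```csharp
using System;
using System.Diagnostics;

// Stand-in for UnityEngine.Vector3 so the sketch compiles on its own.
public struct Vector3
{
    public float x, y, z;
    public Vector3( float x, float y, float z ) { this.x = x; this.y = y; this.z = z; }
}

public abstract class Noise
{
    // Leaf generators fill 'heights' directly; branch nodes evaluate their
    // children first and then filter the child results in place.
    public abstract void CalcHeights( Vector3[] meshVertices, out float[] heights );
}

// Leaf node: returns a constant everywhere (stand-in for Simplex, Voronoi, ...).
public class ConstantNoise : Noise
{
    private readonly float _value;
    public ConstantNoise( float value ) { _value = value; }

    public override void CalcHeights( Vector3[] meshVertices, out float[] heights )
    {
        heights = new float[ meshVertices.Length ];
        for ( int i = 0; i < heights.Length; i++ )
            heights[ i ] = _value;
    }
}

// Branch node: billow filter, here using the common 2*|n| - 1 definition.
public class Billow : Noise
{
    private readonly Noise _source;
    public Billow( Noise source ) { _source = source; }

    public override void CalcHeights( Vector3[] meshVertices, out float[] heights )
    {
        _source.CalcHeights( meshVertices, out heights );   // evaluate child first
        for ( int i = 0; i < heights.Length; i++ )
            heights[ i ] = 2f * Math.Abs( heights[ i ] ) - 1f;
    }
}
```

Note that a deep tree means one full pass over the vertex array per node, which is what makes the CPU version expensive, and why the back-and-forth worry appears once each node dispatches to the GPU individually.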

How should I go about this?

Thanks.

 

 

Edited by trapazza


Firstly, for a lunarscape you can do what you want, but how do you path rivers?

The second question: can you pregenerate anything, or does it have to be purely run-time generation? And does "run time" include 30 seconds for a compute process to assemble the terrain?

Thirdly, why does your process preclude the use of GPU-side generation (not sure why you mention dozens of calls back and forth)?

Fourth question: what's harder is everything else. Collision detection (is that GPU or CPU?), trees and bushes (let's face it, in most vistas you are looking at trees and bushes, not barren earth), content generation, and did I mention water? How do you know a river will always flow down until it reaches the sea?

I do a bit of everything: http://www.vrmmorpgordie.com/?page=Client/Terrain/Algorithms/FundamentalAlgorithms.html


I'm kind of working on the same thing you are. Currently I'm also doing noise on the CPU. I did implement Simplex noise in HLSL on DirectX 9 a few years back and it worked OK; however, I only used it for shading and didn't use the results on the CPU. For that I worry about numeric stability. First off, planets tend to be big, so that typically means double precision, which GPUs aren't as optimized for. Second, if the results from different GPUs are a little different, that might present a problem, especially for online games. However, I'm not sure if these are serious issues or not. I'm pretty much a novice on the GPU side of things, so maybe someone else can chime in.

3 hours ago, TeaTreeTim said:

The second question is can you pregenerate anything or does it have to be purely run time generation, and by run time does that include 30 seconds for a compute process to assemble the terrain?

Hello TeaTree, thanks for your reply.

For now I'm just interested in calculating the terrain heights on the GPU; that would be the first step in the whole process. The terrain is subdivided into chunks of about 1600 vertices each. Chunks can't be modified at run-time, so whenever a new one is needed its heights are calculated just once.

3 hours ago, TeaTreeTim said:

Thirdly, why does your process preclude the use of GPU side generation (not sure why you mention dozens of calls back and fourth)?

Noise trees can get as complex as needed, so imagine I have something simple like this:

Quote

private static Noise _buildNoiseTreeSelector()
{
    var selector = new AngleSelector(
        new Ridged(
            new Billow(
                new InputFreqMod( new GradientNoise(), new SineNoise( 3f ) ) ) ),
        new ConstantNoise( 0 ),
        Vector3.back, 0.9f, 0.99f );
    return selector;
}

In this example, every input position (xyz) will be processed by all nodes in the tree: (perlin + sine) -> freqMod -> billow -> ridged -> angle_selector. Each node takes a Vector3[] as input and returns a float[] with the heights, which is then further processed by the next node up, until the root node is reached.

If I moved the calculations in each node to the GPU, the returned heights would be passed back to the CPU so I could direct the results to the next node, which in turn would call the GPU again.

Hope it makes more sense now.

10 hours ago, Gnollrunner said:

Second if the results from different GPUs are a little different, that might present a problem

Once upon a time (nearly a decade ago) I built an entire noise-generation framework that ran on CPU and GPU in parallel (i.e. generating meshes on the CPU, textures on the GPU). It took some effort to get matching results on both, but it wasn't *that* bad.

In this day and age, though, I'd probably recommend just doing all noise generation in compute shaders. DX12/Vulkan give you the necessary low-level control to efficiently stream data back to the CPU where needed. Double precision is available on higher-end cards, but the need for it can also be avoided by working in localised coordinate spaces.
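To illustrate the localised-coordinate idea: keep the absolute, planet-scale positions in double precision on the CPU, and send the GPU only small float offsets relative to a per-chunk origin. This is a hedged sketch; LocalSpace and ToChunkLocal are made-up names, not Unity or engine API.

```csharp
using System;
using System.Diagnostics;

// Sketch of working in a localised coordinate space: absolute positions stay
// in double precision CPU-side; the GPU only ever sees small chunk-local
// offsets, which fit comfortably in single precision.
public static class LocalSpace
{
    // Converts absolute (double) coordinates to float offsets from 'origin'
    // (e.g. the chunk's centre). Shown in 1D for brevity; a real version
    // would do the same per component of a 3D position.
    public static float[] ToChunkLocal( double[] absolute, double origin )
    {
        var local = new float[ absolute.Length ];
        for ( int i = 0; i < absolute.Length; i++ )
            local[ i ] = (float)( absolute[ i ] - origin );
        return local;
    }
}
```

At Earth radius (~6.37e6 m) the spacing between adjacent floats is 0.5 m, so casting absolute positions straight to float destroys sub-metre detail, while chunk-local offsets keep it.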

22 hours ago, trapazza said:

The returned heights will be passed back to the CPU so I can direct the results to the next node, which in turn will call the GPU again.
Why? Having multiple cell sizes isn't black magic. I don't know Unity, so is there a reason the CPU even HAS to know what the heights are? What stops you from doing this in the shader:

float3 d = pos.xyz - camera.xyz;
float distSq = dot( d, d ); // squared distance, avoids a sqrt

float height = noise( pos.xyz * continentScale ) * continentHeight;

if ( distSq < closeEnoughToSeeMountains )
    height += noise( pos.xyz * mountainScale ) * mountainHeight;

if ( distSq < closeEnoughToSeeHills )
    height += noise( pos.xyz * hillScale ) * hillHeight;

This can just be done in the vertex shader (and the pixel shader for texturing), or used in tessellation too if you want. If you also want trees etc. that require height, it would be better to have a compute pass that creates height-map texture(s), so the renderer, the trees, collision, etc. all know the height. In reality height will need to be a bit more complicated of course, but you get the point.

 

I've done many different variations of chunked terrain: VTF, and pregenerated and assembled in compute. For rendering entire planets that you won't be landing on, I'd just do a grid, something like this: http://www.malleegum.com/timspages/Misc/Mega.html. But chunked grids with height determined in compute are OK if there's a reason.

In a later engine to the one linked, I assembled the image below as a height map in compute. Each of those areas is a chunk; they cover different geographical sizes but use the same-size texture (it's just assembled into a single 2D texture like this so you can visualise it):

 

[Image: per-chunk height maps assembled into a single 2D texture]

 

These height textures are used for: rendering the grids, determining tree height, AI (all GPU-based), collision detection (all GPU-based), and placing bushes and water. The CPU never even knows the height. In fairness, I've moved away from this in newer engines, because I build server-client engines and the CPU was less utilised than the GPU.

 

1 hour ago, TeaTreeTim said:

Why? Having multiple cell sizes isn't black magic. I don't know Unity so I don't know, is there a reason the CPU HAS to even know what the heights are? What just stops you from doing this in the shader:

The problem is not the cell sizes but the noise-generation pipeline. The noise for each planet or asteroid is described by an artist/designer-provided XML file, which is translated into a noise tree at run-time. Since there could potentially be thousands of combinations, I just can't put them all in a shader.

So imagine this simple scenario:

private Noise build_simple_noise_generator()
{
    return new Billow( new GradientNoise() );
}
        
private float[] calc_mesh_heights( Mesh mesh )
{
    Noise noise_generator = build_simple_noise_generator();
    float[] heights = new float[ mesh.vertexCount ];
    noise_generator.process( mesh.vertices, heights );
    return heights;
}

When calling noise_generator.process(...), the Billow node will first call process on GradientNoise, which returns some Perlin-based noise, which the Billow node then uses as its input. So the idea was to walk the tree on the CPU and move the calculations made inside the different nodes (which can be costly) to the GPU. Now, I don't really know if this is a good approach, or if there's maybe a totally different way to achieve this.

 

On 9/17/2018 at 5:00 AM, trapazza said:

So, the idea was to walk the tree in the CPU and move the calculations made inside the different nodes (which can be costly) into the GPU. Now, I don't really know if this is a good approach or there is maybe a totally different way to achieve this.

You can do a simple version of this by basically pasting together snippets of shader code as you walk over the tree, and then compiling it at the end.

Basic material systems often work like this, you need just enough logic to keep the variable names consistent as you paste together source chunks.
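A minimal sketch of that snippet-pasting approach might look like this. All class and method names here are hypothetical, and the emitted GLSL assumes a gradientNoise(vec3) helper function is pasted in elsewhere; a counter keeps the generated variable names unique and consistent.

```csharp
using System.Diagnostics;
using System.Text;

// Each node appends GLSL statements to 'src' and returns the name of the
// float variable holding its result, so parent nodes can reference it.
public abstract class NoiseNode
{
    public abstract string Emit( StringBuilder src, ref int nextVar );
}

// Leaf: emits a call to an assumed gradientNoise(vec3) GLSL helper.
public class GradientNoiseNode : NoiseNode
{
    public override string Emit( StringBuilder src, ref int nextVar )
    {
        string v = "n" + nextVar++;
        src.AppendLine( $"float {v} = gradientNoise( pos );" );
        return v;
    }
}

// Branch: billow filter, emitted as an expression over the child's variable.
public class BillowNode : NoiseNode
{
    private readonly NoiseNode _source;
    public BillowNode( NoiseNode source ) { _source = source; }

    public override string Emit( StringBuilder src, ref int nextVar )
    {
        string input = _source.Emit( src, ref nextVar );   // child first
        string v = "n" + nextVar++;
        src.AppendLine( $"float {v} = 2.0 * abs( {input} ) - 1.0;" );
        return v;
    }
}
```

Calling Emit on the root yields a straight-line list of GLSL statements ending in the root's variable; wrap that in a compute-shader main, compile once per tree, and the whole tree evaluates in a single GPU dispatch with no intermediate CPU round-trips.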

59 minutes ago, swiftcoder said:

You can do a simple version of this by basically pasting together snippets of shader code as you walk over the tree, and then compiling it at the end.

Basic material systems often work like this, you need just enough logic to keep the variable names consistent as you paste together source chunks.

This is exactly what I'm going to do. I've added a new function to the tree node: "public string ToGLSL()" that will do the trick.

 

