
# GPU normal vector generation for high precision planetary terrain

Started by Hyu, 27 September 2012 01:37 PM · 4 replies to this topic

### #1 · Members · Reputation: **401**

Posted 27 September 2012 - 01:37 PM

I generate procedural planets almost entirely with compute shaders (the CPU manages a modified quadtree for the LOD calculations).

The compute shader outputs vertex data on a per-patch basis, which is stored in buffers.

Normal vectors are calculated during this stage by applying a Sobel operator to the generated position data:

[source lang="cpp"]
// Only operate on non-padded threads
if ((GroupThreadID.x > 0) && (GroupThreadID.x < PaddedX - 1) &&
    (GroupThreadID.y > 0) && (GroupThreadID.y < PaddedY - 1))
{
    // Generate normal vectors
    float3 C  = VertexPosition;
    float3 T  = GetSharedPosition(GroupThreadID.x,     GroupThreadID.y + 1);
    float3 TR = GetSharedPosition(GroupThreadID.x + 1, GroupThreadID.y + 1);
    float3 R  = GetSharedPosition(GroupThreadID.x + 1, GroupThreadID.y);
    float3 BR = GetSharedPosition(GroupThreadID.x + 1, GroupThreadID.y - 1);
    float3 B  = GetSharedPosition(GroupThreadID.x,     GroupThreadID.y - 1);
    float3 BL = GetSharedPosition(GroupThreadID.x - 1, GroupThreadID.y - 1);
    float3 L  = GetSharedPosition(GroupThreadID.x - 1, GroupThreadID.y);
    float3 TL = GetSharedPosition(GroupThreadID.x - 1, GroupThreadID.y + 1);

    float3 v1 = normalize((TR + 2.0*R + BR) * 0.25 - C);
    float3 v2 = normalize((TL + 2.0*T + TR) * 0.25 - C);
    float3 v3 = normalize((TL + 2.0*L + BL) * 0.25 - C);
    float3 v4 = normalize((BL + 2.0*B + BR) * 0.25 - C);

    float3 N1 = cross(v1, v2);
    float3 N2 = cross(v3, v4);
    Normal = (N1 + N2) * 0.5;

    // Write Normal to Shared Memory
    SharedMemory[GroupIndex].Normal = Normal;
}
[/source]

This works very well in most situations.
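
As a sanity check (my addition, not part of the original post), the same Sobel-weighted scheme can be reproduced on the CPU in double precision; `sphere_point` below is a hypothetical patch parameterization standing in for the poster's actual cube-to-sphere mapping:

```python
import numpy as np

def sphere_point(u, v):
    # Hypothetical parameterization: project a point on the z = 1 face
    # of a [-1, 1] cube onto the unit sphere (stand-in for the real mapping)
    p = np.array([u, v, 1.0])
    return p / np.linalg.norm(p)

def sobel_normal(u, v, h):
    # Same Sobel-weighted neighbor scheme as the shader, in float64
    n = lambda x: x / np.linalg.norm(x)
    C  = sphere_point(u, v)
    T  = sphere_point(u,     v + h); B  = sphere_point(u,     v - h)
    R  = sphere_point(u + h, v);     L  = sphere_point(u - h, v)
    TR = sphere_point(u + h, v + h); TL = sphere_point(u - h, v + h)
    BR = sphere_point(u + h, v - h); BL = sphere_point(u - h, v - h)
    v1 = n((TR + 2.0*R + BR) * 0.25 - C)
    v2 = n((TL + 2.0*T + TR) * 0.25 - C)
    v3 = n((TL + 2.0*L + BL) * 0.25 - C)
    v4 = n((BL + 2.0*B + BR) * 0.25 - C)
    return n((np.cross(v1, v2) + np.cross(v3, v4)) * 0.5)

# For a unit sphere the exact normal is the normalized position itself,
# so the deviation from it measures the scheme's error
N = sobel_normal(0.1, 0.2, 1e-3)
print(np.abs(N - sphere_point(0.1, 0.2)).max())  # small (curvature-order) error
```

With plenty of precision the scheme converges nicely; the artifacts below come purely from float32 running out of bits, not from the stencil itself.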

Unfortunately, once I get to very high LOD levels, floating point precision causes quite a few issues.

In order to illustrate the problem, I make the compute shader generate a sphere of radius 1.

I then use the following code to display the error of the generated normal vectors (for a unit sphere, the exact normal is just the normalized position):

[source lang="cpp"]float3 NormalError = abs(Normal - normalize(PositionWS)) * 10.0;[/source]

LOD 16 - first signs of error, but no visual artifacts.

LOD 20 - first visual artifacts; these can be masked with normal mapping or some Perlin noise.

LOD 24 (highest LOD) - visual artifacts are visible all over the terrain.

At this LOD, adjacent vertices are only 0.0000000596 units apart, hence the problem with my current method of generating normal vectors.
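
To put that spacing in perspective (my aside, not the poster's): the gap between adjacent float32 values near 1.0 is about 1.19e-7, so a 5.96e-8 offset is only half an ulp and rounds away entirely when added to a coordinate near 1.0:

```python
import numpy as np

one = np.float32(1.0)
spacing = np.float32(5.96e-8)    # reported vertex spacing at LOD 24

ulp = np.spacing(one)            # gap to the next float32 above 1.0
print(float(ulp))                # ~1.1920929e-07
print(one + spacing == one)      # True: the offset is below half an ulp
```

Differences like `T - C` between such vertices therefore carry almost no significant bits, which is exactly where the normal-vector noise comes from.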

I understand that I'm pushing the limits of floating-point precision here, and settling for a lower terrain resolution wouldn't be a big deal, but I was wondering if anyone has ideas on how to squeeze out a little more detail?

Cheers,

Hyu


### #2 · Crossbones+ · Reputation: **2505**

Posted 30 September 2012 - 04:58 PM

Have you considered changing both the model size/coordinates and the zoom level when switching LOD levels? Essentially it would be the same as how many tile-based systems occasionally re-centre the current tiles around (0, 0) to maintain floating-point precision. You want your model to have coordinates in a particular range, e.g. -1000 to 1000, so just renormalise them to suit.
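
The effect of re-centring is easy to demonstrate (illustrative numbers, not from the thread): subtracting a shared origin before dropping to float32 preserves small offsets that direct float32 subtraction loses completely.

```python
import numpy as np

# Two points ~1e-7 apart, ~1000 units from the origin (illustrative values)
a, b = 1000.0, 1000.0000001

# Direct float32 subtraction: both values round to the same float32
# (the ulp at 1000 is ~6.1e-5), so the difference vanishes
direct = np.float32(b) - np.float32(a)

# Re-centred: subtract a shared origin in double precision first; the
# float32 values then sit near zero, where absolute spacing is tiny
origin = 1000.0
recentred = np.float32(b - origin) - np.float32(a - origin)

print(float(direct), float(recentred))  # 0.0 vs ~1e-7
```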

### #3 · Members · Reputation: **401**

Posted 30 September 2012 - 06:59 PM

To some extent, yes.

But the base algorithm generates terrain vertices on a [-1, 1] cube, which are then mapped to a unit sphere.

All of this is done entirely on the GPU, which means single precision.
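
For reference, the cube-to-sphere step can be sketched as a radial projection; this is the simplest common choice, and the poster's exact mapping may differ:

```python
import numpy as np

def cube_to_sphere(p):
    # Radially project a point on the surface of the [-1, 1] cube onto
    # the unit sphere by normalizing (one of several possible mappings)
    p = np.asarray(p, dtype=np.float64)
    return p / np.linalg.norm(p)

q = cube_to_sphere([1.0, 0.5, -0.25])
print(np.linalg.norm(q))  # ~1.0: the point lies on the unit sphere
```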

I can scale and translate these vertices of course, which improves normal vector generation.

But the trade-off is that I introduce some jitter in the vertex positions, which will influence the normal vectors.

It is fine most of the time, but in some situations it creates easily recognizable patterns.

I've also experimented with using partial double precision, but support for this is unfortunately still very limited.
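
One workaround worth noting here (my suggestion, not something from the thread) is emulated "double-single" arithmetic: store each coordinate as an unevaluated sum of two float32 values. The core building block is Knuth's two-sum, sketched below with NumPy float32 standing in for shader floats:

```python
import numpy as np

f32 = np.float32

def two_sum(a, b):
    # Knuth's error-free transformation: s + err equals a + b exactly,
    # even though every operation here is performed in float32
    s   = f32(a + b)
    a2  = f32(s - b)
    b2  = f32(s - a2)
    err = f32(f32(a - a2) + f32(b - b2))
    return s, err

# 1.0 + 5.96e-8 is not representable in a single float32...
s, err = two_sum(f32(1.0), f32(5.96e-8))
print(float(s) + float(err))  # ...but the (s, err) pair recovers the full sum
```

Porting this to HLSL costs several extra float32 operations per add, but it gives roughly double the significand width without requiring hardware double support.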


### #4 · Crossbones+ · Reputation: **2505**

Posted 30 September 2012 - 08:48 PM

My main advice is unchanged, but if you want to squeeze a little extra out, remember that you're throwing away a lot of bits of precision by making everything size 1 or less. Try making the cube span -10,000 to 10,000 and the sphere radius 10,000. You might get a little extra precision at the low end.
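
How much precision is available at each scale can be measured directly (my measurement, not part of the reply): float32 spacing is relative, so the ulp grows in proportion to the value, and any gain from rescaling shows up mainly in coordinate components that stay much smaller than the radius.

```python
import numpy as np

# float32 spacing (ulp) at different magnitudes: absolute spacing grows
# with the value, while relative spacing stays roughly constant
for x in (0.01, 1.0, 100.0, 10000.0):
    xf = np.float32(x)
    print(x, float(np.spacing(xf)), float(np.spacing(xf)) / x)
```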