
# GPU normal vector generation for high precision planetary terrain


### #1 Hyunkel (Members)

Posted 27 September 2012 - 01:37 PM

I generate procedural planets almost entirely with compute shaders (the CPU only manages a modified quadtree for LOD calculations).
The compute shader outputs vertex data per terrain patch, which is stored in buffers.
Normal vectors are calculated during this stage by applying a Sobel operator to the generated position data:
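The shader snippet referenced above did not survive the page scrape. As a minimal CPU-side sketch of the same idea (names and layout are illustrative, not the poster's actual shader code): apply Sobel kernels to a 3x3 neighborhood of height samples to estimate the gradient, then build and normalize the surface normal from it.

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sobel-filter normal from a 3x3 neighborhood of height samples h[row][col];
// `spacing` is the distance between adjacent samples. Hypothetical sketch,
// not the original compute-shader implementation.
Vec3 SobelNormal(const std::array<std::array<float, 3>, 3>& h, float spacing)
{
    // Sobel kernels estimate the height gradient along x and y.
    float gx = (h[0][2] + 2.0f * h[1][2] + h[2][2])
             - (h[0][0] + 2.0f * h[1][0] + h[2][0]);
    float gy = (h[2][0] + 2.0f * h[2][1] + h[2][2])
             - (h[0][0] + 2.0f * h[0][1] + h[0][2]);

    // The kernel weights sum to 4 per side, and samples are 2*spacing apart
    // across the stencil, so 8*spacing converts the filtered difference
    // into a slope-consistent z component.
    Vec3 n{ -gx, -gy, 8.0f * spacing };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{ n.x / len, n.y / len, n.z / len };
}
```

For a perfectly flat neighborhood this yields the straight-up normal (0, 0, 1), which is a quick sanity check for the kernel signs.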

This works very well in most situations.
Unfortunately, once I get to very high LOD levels, floating point precision causes quite a few issues.

In order to illustrate the problem I make the compute shader generate a sphere of radius 1.
I then use the following code to display the error rate of the generated normal vectors:
[source lang="cpp"]float3 NormalError = abs(Normal - normalize(PositionWS)) * 10.0;[/source]

LOD 16: first signs of error, no visual artifacts.

LOD 20: first visual artifacts; these can be masked with normal mapping or some Perlin noise.

LOD 24 (highest LOD): visual artifacts are visible all over the terrain.

At this LOD, adjacent vertices are only about 0.0000000596 units (2^-24) apart, which is below the gap between representable 32-bit floats near 1.0, hence the problem with my current method for generating normal vectors.

I understand that I'm pushing the limits of floating-point precision here, and such a high terrain resolution isn't strictly necessary, but I was wondering if anyone has ideas on how to squeeze out a little more detail?

Cheers,
Hyu

### #2 jefferytitan (Members)

Posted 30 September 2012 - 04:58 PM

Have you considered changing both the model size/coordinates and the zoom level when switching LOD levels? Essentially it's the same as how many tile-based systems periodically re-centre the current tiles around (0, 0) to maintain floating-point precision. You want your model's coordinates in a particular range, e.g. -1000 to 1000, so just renormalise them to suit.
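The re-centring idea can be sketched as a "floating origin" (names here are illustrative, not from the original posts): positions are kept relative to a movable origin, and when the camera drifts too far from it, the origin is shifted onto the camera so render-space coordinates stay small and precise.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Floating-origin re-centring sketch. `camera` is the camera position
// relative to the current origin; when it strays beyond `threshold`,
// the origin absorbs the offset and the camera snaps back to (0, 0, 0).
// In a real engine, every patch/tile offset would be rebuilt at that point.
struct FloatingOrigin
{
    Vec3 origin{0, 0, 0};

    // Returns true if a rebase happened.
    bool MaybeRebase(Vec3& camera, float threshold)
    {
        float d2 = camera.x * camera.x + camera.y * camera.y + camera.z * camera.z;
        if (d2 < threshold * threshold)
            return false;
        origin = { origin.x + camera.x, origin.y + camera.y, origin.z + camera.z };
        camera = { 0, 0, 0 };
        return true;
    }
};
```

The trade-off, as discussed below, is that translating vertex data after generation can itself introduce rounding jitter.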

### #3 Hyunkel (Members)

Posted 30 September 2012 - 06:59 PM

To some extent, yes.
But the base algorithm generates vertices on a [-1, 1] cube, which are then mapped onto a unit sphere.
All of this is done entirely on the GPU, which means single precision.
I can scale and translate these vertices of course, which improves normal vector generation.
But the trade-off is that I introduce some jitter in the vertex positions, which will influence the normal vectors.
It is fine most of the time, but in some situations it creates easily recognizable patterns.

I've also experimented with using partial double precision, but support for this is unfortunately still very limited.

### #4 jefferytitan (Members)

Posted 30 September 2012 - 08:48 PM

My main advice is unchanged, but if you want to squeeze a little extra out, remember that you're throwing away a lot of bits of precision by keeping everything at size 1 or less. Try making the cube span -10,000 to 10,000 and the sphere radius 10,000. You might get a little extra precision at the low end.

### #5 Hyunkel (Members)

Posted 01 October 2012 - 09:07 AM

I think that is indeed the best way to go about it.
I'll need to make some modifications to my cube-to-sphere mapping formula, but that shouldn't be too difficult.
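The mapping in question is not shown in the thread, but one well-known cube-to-sphere formula (not necessarily the poster's) distorts each cube face so that points land on the unit sphere; scaling to a radius-R sphere just means dividing the input by R and multiplying the result by R.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A common analytic mapping from the [-1, 1] cube onto the unit sphere.
// Illustrative only; the original poster's formula may differ.
Vec3 CubeToSphere(Vec3 p)
{
    float x2 = p.x * p.x, y2 = p.y * p.y, z2 = p.z * p.z;
    return {
        p.x * std::sqrt(1.0f - y2 / 2.0f - z2 / 2.0f + y2 * z2 / 3.0f),
        p.y * std::sqrt(1.0f - z2 / 2.0f - x2 / 2.0f + z2 * x2 / 3.0f),
        p.z * std::sqrt(1.0f - x2 / 2.0f - y2 / 2.0f + x2 * y2 / 3.0f),
    };
}
```

Face centers map to themselves, and every cube point lands at distance 1 from the origin, which is what makes the mapping easy to rescale.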