I generate procedural planets almost entirely in compute shaders (the CPU only manages a modified quadtree for LOD selection). The compute shader outputs vertex data per terrain patch, which is stored in buffers. Normal vectors are calculated during this stage by applying a Sobel operator to the generated position data:
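For illustration, the Sobel approach can be sketched like this on the CPU, assuming a 3x3 heightfield neighbourhood (the actual shader works on position buffers; `SobelNormal` and `cellSize` are illustrative names, not my real code):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

static Float3 normalize3(Float3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// h is a 3x3 neighbourhood of heights around the vertex, h[row][col];
// cellSize is the spacing between adjacent vertices in world units.
Float3 SobelNormal(const float h[3][3], float cellSize) {
    // Sobel X and Y kernels applied to the height samples.
    float gx = (h[0][2] + 2*h[1][2] + h[2][2])
             - (h[0][0] + 2*h[1][0] + h[2][0]);
    float gy = (h[2][0] + 2*h[2][1] + h[2][2])
             - (h[0][0] + 2*h[0][1] + h[0][2]);
    // gx ~ 8 * cellSize * dh/dx, so a z term of 8 * cellSize makes the
    // result proportional to (-dh/dx, -dh/dy, 1).
    return normalize3({ -gx, -gy, 8.0f * cellSize });
}
```

The key point for the precision discussion below: the gradients are finite differences of neighbouring samples, so they fall apart when those samples are only a few ulps apart.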
This works very well in most situations. Unfortunately, once I get to very high LOD levels, floating point precision causes quite a few issues.
To illustrate the problem, I make the compute shader generate a sphere of radius 1. I then use the following code to display the error in the generated normal vectors: [source lang="cpp"]float3 NormalError = abs(Normal - normalize(PositionWS)) * 10.0;[/source]
LOD 16: first signs of errors, no visual artifacts.
LOD 20: first visual artifacts; can be masked with normal mapping or some Perlin noise.
LOD 24 (highest LOD): visual artifacts are visible all over the terrain.
At this LOD, adjacent vertices are only 0.0000000596 units apart, hence the problem with my current method of generating normal vectors.
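That spacing is no accident: it is one ulp for floats just below 1.0 (2^-24 ≈ 5.96e-8), so neighbouring vertices differ in their last bit and any smaller offset vanishes entirely. A quick C++ sanity check (illustrative, not shader code):

```cpp
#include <cmath>

// Spacing (ulp) between adjacent floats just below 1.0: 2^-24 ~ 5.96e-8.
inline float UlpBelowOne() {
    return 1.0f - std::nextafterf(1.0f, 0.0f);
}

// A displacement of half that size rounds away entirely, so finite
// differences between such vertices are pure rounding noise.
inline bool HalfUlpVanishes() {
    float p = 1.0f;
    float q = p + UlpBelowOne() * 0.5f; // rounds back to exactly 1.0f
    return q == p;
}
```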
I understand that I'm pushing the limits of floating point precision here, and giving up that last bit of terrain resolution wouldn't be a big deal, but I was wondering if anyone had ideas on how to squeeze out a little more detail?
Have you considered changing both the model size/coordinates and the zoom level when switching LOD levels? Essentially it would be the same as how many tile-based systems occasionally re-centre the current tiles around (0, 0) to maintain floating point precision. You want your model to have coordinates in a particular range, e.g. -1000 to 1000, so just renormalise them to suit.
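As a rough sketch of what I mean (the names, the `Patch` layout, and the threshold are all made up for illustration): keep a high-precision origin per patch on the CPU, store GPU vertices relative to it, and rebase when the camera drifts too far.

```cpp
#include <cmath>

struct Double3 { double x, y, z; };

// Hypothetical patch: a double-precision world-space origin lives on
// the CPU; the GPU-side vertices are stored relative to it, so the
// float coordinates the shaders see stay small.
struct Patch {
    Double3 origin;
    // float3 vertices[...]; // GPU side, relative to origin
};

// Rebase when the camera drifts more than `threshold` from the origin.
void MaybeRecentre(Patch& patch, const Double3& cameraWS, double threshold) {
    Double3 d = { cameraWS.x - patch.origin.x,
                  cameraWS.y - patch.origin.y,
                  cameraWS.z - patch.origin.z };
    double dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (dist > threshold) {
        // Move the origin to the camera; the relative vertex data is
        // regenerated (or translated) against the new origin.
        patch.origin = cameraWS;
    }
}
```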
To some extent, yes.
But the base algorithm generating the terrain generates vertices for a [-1, 1] cube, which are then mapped to a unit sphere.
All of this is done entirely on the GPU, which means single precision.
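For concreteness, the mapping step can be sketched like this, assuming the simplest variant (plain normalisation of the cube point; an equal-area cube-to-sphere mapping has the same structure):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// p lies on the surface of the [-1, 1] cube; the result lies on the
// unit sphere. Single precision throughout, like the GPU version.
Vec3 CubeToSphere(Vec3 p) {
    float len = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    return { p.x/len, p.y/len, p.z/len };
}
```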
I can scale and translate these vertices of course, which improves normal vector generation.
But the trade-off is that I introduce some jitter in the vertex positions, which will influence the normal vectors.
It is fine most of the time, but in some situations it creates easily recognizable patterns.
I've also experimented with using partial double precision, but support for this is unfortunately still very limited.
My main advice is unchanged, but if you want to squeeze a little extra out, remember that you're throwing away a lot of bits of precision by making everything size 1 or less. Try making the cube -10,000 to 10,000 and the sphere size 10,000. You might get a little extra precision at the low end.