I'm making a lot of progress with my procedural planet project, which is part of my master's thesis.
At the moment I'm generating the terrain entirely on the GPU (using DX11 compute shaders), and I'm quite pleased with the speed I'm getting with this method.
However, I've noticed that I start to get precision issues when the camera is very close to the planet.
This doesn't come as a big surprise, after all I'm working with rather immense scales, but I'm unsure about how to approach the problem.
I'm currently running 16 subdivisions on 17x17 terrain patches at the maximum LOD, and the geometry itself seems fine.
However, when I move or rotate the camera close to the terrain at this LOD, the terrain starts to wobble.
I'm assuming this is because there is insufficient floating-point precision to properly multiply the terrain vertex positions
by a WVP matrix.
What's a good way to lessen this problem?
I'm thinking about translating all scene objects so that the surface of the planet under the camera is close to the (0, 0, 0) point,
where floating-point precision is highest.
Is this worth trying or are there better methods?
Your idea of translating everything toward the (0, 0, 0) point seems fine.
It's the method I've used before (and on other packages I've worked on).
We stored our planet geometry on the CPU as world-space doubles (it was an old project),
then converted all the double-precision data to a 32-bit camera-at-(0, 0, 0) positioning scheme before sending it to the card.
But if you're generating the data in a compute shader, then yeah... I guess adjust your matrices so that your data stays positioned relative to the camera rather than in world space.
Humus wrote an article in GPU Pro 1 which describes how to handle matrices to minimize low-precision jittering.