Geometry clipmaps and float precision...
I am currently trying to do a spherical implementation of geoclipmapping. The approach I am taking is to map the clipmap to a cube which is then normalized into a sphere. All of the calculations for the clipmap (clip regions, toroidal access, etc.) are done on a 2D grid of the unwrapped cube. Using integers in units of meters is more than enough for an Earth-sized planet.
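For reference, the cube-to-sphere step described above can be sketched as a plain normalization (a hypothetical helper, not the poster's actual code), with the math done in doubles:

```python
import numpy as np

def cube_to_sphere(p_cube, radius):
    """Project a point on the surface of a cube onto a sphere of the
    given radius by normalizing its direction vector from the center."""
    p = np.asarray(p_cube, dtype=np.float64)  # doubles for precision
    return radius * p / np.linalg.norm(p)

# A cube-face corner lands at exactly `radius` from the center:
corner = cube_to_sphere([1.0, 1.0, 1.0], 6_000_000.0)
```

Fancier mappings that reduce area distortion exist, but simple normalization matches the "normalized into a sphere" description above.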
The problem arises when going from cube space to the surface of the sphere. With a planet the size of Earth (radius ~6,000,000 m) and a 32-bit float, you are limited to about 0.5 m resolution, since that is the spacing between adjacent representable float values at that magnitude. With a double you could go down to around a nanometer.
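The 0.5 m figure follows directly from float spacing: 6,000,000 lies between 2^22 and 2^23, so a float32's 24-bit mantissa steps in increments of 2^-1 m there, while a float64 steps in increments of roughly 2^-30 m. A quick check:

```python
import numpy as np

r = 6_000_000.0  # metres, roughly Earth's radius

# np.spacing gives the gap to the next representable value at r.
ulp_float  = np.spacing(np.float32(r))   # 0.5 m in single precision
ulp_double = np.spacing(np.float64(r))   # ~9.3e-10 m, about a nanometre
```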
One solution is to break up each face of the cube into smaller patches that have their own coordinate space. There would still be only one vertex buffer for each level, but each patch would need to make its own draw calls. The transformation from cube space to the patch's coordinate space is not all that complex. The calculations can be done with doubles to maintain precision, while using floats for the final vertex data.
The only reason I am hesitant to go down this road is that the elegance of geoclipmaps is completely destroyed by all of the partitioning into different patches. There would need to be around 100 patches for each face of the cube to get the kind of precision I am looking for.
I am tempted to just get rid of the idea of using geoclipmaps and go with a quadtree / geomipmapping approach, but I really like how well streaming and procedurally generated data work with geoclipmapping. Is there a simpler way to do this that I have overlooked, or is it possible to use doubles for everything?
I am using a different coordinate space for each level of my clipmaps. I simply use the center of the base level as the origin, but keep the orientation of the local space the same as the planet's space, so I only have to apply a scale and a translation to move from planet to clipmap space.
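A minimal sketch of that per-level transform (function and parameter names are my own, not bluntman's code): since the level shares the planet's orientation, moving a point into clipmap space is just a translation by the level origin and a divide by the level's grid scale, done in doubles before the result is stored as float32 vertex data:

```python
import numpy as np

def planet_to_clipmap(p_planet, level_origin, level_scale):
    """Translate by the level origin, then scale; orientation is shared
    with planet space, so no rotation is needed. Double precision in,
    float32 out for the vertex buffer."""
    p = np.asarray(p_planet, dtype=np.float64)
    o = np.asarray(level_origin, dtype=np.float64)
    return ((p - o) / level_scale).astype(np.float32)

# A point 100 m from a level origin sitting on the surface stays exact,
# because the float32 values are now small:
v = planet_to_clipmap([6_000_100.0, 0.0, 0.0], [6_000_000.0, 0.0, 0.0], 1.0)
```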
One problem with this method is that you have to shift the origin every so often or you start to lose precision again. Obviously when the origin moves the entire vertex buffer needs to be moved by the inverse to make everything still line up, and this can get costly with large clipmaps.
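The re-centering step can be sketched like this (hypothetical names; assumes a vertex's world position is origin + scale * vertex): when the origin jumps, subtract the same delta from every vertex so world positions are unchanged. This full-buffer pass is exactly the cost being described:

```python
import numpy as np

def shift_origin(vertices, origin, new_origin, scale=1.0):
    """Move the level origin and apply the inverse shift to the whole
    vertex buffer so world positions (origin + scale * vertex) stay put.
    The delta is computed in double, then folded back into float32."""
    delta = (np.asarray(new_origin, np.float64)
             - np.asarray(origin, np.float64)) / scale
    return (vertices.astype(np.float64) - delta).astype(np.float32)

verts = np.array([[5.0, 0.0, 0.0]], dtype=np.float32)
moved = shift_origin(verts, [1000.0, 0.0, 0.0], [1010.0, 0.0, 0.0])
# world x was 1000 + 5 = 1005 before, and is 1010 + (-5) = 1005 after
```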
I am getting ready to upload a new version of my planetary clipmapping engine this week, which might be of some use to you. There is already a version up there which implements the local coordinate systems, but the rest of the engine is a bit raggedy.
Quote:Original post by bluntman
I am using a different coordinate space for each level of my clipmaps. I simply use the center of the base level as the origin, but keep the orientation of the local space the same as the planet's space, so I only have to apply a scale and a translation to move from planet to clipmap space.
Actually, just a shift is enough. A scaling is not needed. But this is probably the best way to go. The number of updates to re-center the clipmaps is almost negligible and you only have to update maybe 128x128 vertices, i.e. around 200 KB. That should be OK.
Another possibility that I had implemented is to compute the modelview and projection matrices in double precision on the CPU and convert it to single precision in the very end by uploading it to the GPU. The vertices are stored "as usual", i.e. in planet object coordinates. IMO, the precision is just enough for a planet surface. The double precision matrices remove the jittering artifacts you get otherwise. When you fly at very low velocities (<10 km/h), you see a quantisation artifact, though, which is negligible for starships (velocity >> 10 km/h). This variant is cheaper than the one bluntman described since you don't need to update vertices.
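A sketch of that variant (a standard look-at construction, not Lutz's actual code): build the modelview in float64 on the CPU and cast to float32 only at upload time. The rotation part consists of unit vectors, and the translation -R*eye is computed in double, so the large planet-scale numbers cancel before any single-precision rounding happens:

```python
import numpy as np

def look_at(eye, target, up):
    """Right-handed look-at matrix, built entirely in float64."""
    eye, target, up = (np.asarray(v, np.float64) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                        # forward
    s = np.cross(f, up); s /= np.linalg.norm(s)   # side
    u = np.cross(s, f)                            # true up
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye   # huge values cancel here, in double
    return m

# Camera 10 m above an Earth-radius surface point, looking along -z,
# with "up" pointing radially outward. Cast once, at upload time:
eye = [6_000_000.0, 10.0, 0.0]
mv32 = look_at(eye, [6_000_000.0, 10.0, -1.0], [1.0, 0.0, 0.0]).astype(np.float32)
```

The vertices themselves can then stay in planet object coordinates, which is what makes this variant cheaper than per-level re-centering.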
Quote:
When you fly at very low velocities (<10 km/h), you see a quantisation artifact, though, which is negligible for starships (velocity >> 10 km/h). This variant is cheaper than the one bluntman described since you don't need to update vertices.
Do you get quantisation when the camera is not translating, just rotating (i.e. looking around at the planet's surface) close to the ground?
I have a SIMD method implemented for updating my clipmap offsets, and I stagger the offsets of each level, so it's generally pretty seamless, but at n=128 it is noticeable. I'm not sure if this is the updating of the offsets or the (necessary) full update of the grid VBOs.
Shifting the origin sounds good to me. I can even keep the vertices independent of which face of the cube they came from that way. So am I correct in saying that you shift the origin for each level in a separate frame? I'm assuming that the lower-detail levels of the clipmap don't need to be shifted as often if the shifts are done separately.
Quote:Original post by bluntman
Do you get quantisation when the camera is not translating, just rotating (i.e. looking around at the planet's surface) close to the ground?
This is just for translating. Rotating is very smooth. I'm not sure what actually causes the quantisation.
In my engine I do shift the different levels separately, but I try to keep them all aligned to the base level, i.e. I only update the origins when the base level needs updating, and then I do a maximum of one level per frame. I am going to split this out more, so that it's a level every 4-5 frames, as it makes the engine chug a bit at the moment.
Quote:
This is just for translating. Rotating is very smooth. I'm not sure what actually causes the quantisation.
Strange that it would occur for translation but not rotation! Could it be anything to do with the fact that a matrix has 9 slots used for rotation, but only 3 for translation?
Quote:Original post by bluntman
Strange that it would occur for translation but not rotation! Could it be anything to do with the fact that a matrix has 9 slots used for rotation, but only 3 for translation?
I forgot to mention that I had to subtract a bias from the vertices BEFORE applying the MVP matrix. You have to do that because otherwise the MVP matrix contains large numbers with different signs. When you apply this matrix in single precision to the vertices, you lose the precision.
The bias (it's just a shift to the center of the clipmaps) is applied to each vertex and inversely to the MVP matrix. The reason why this makes sense is that applying the shift in DOUBLE PRECISION cancels out the large numbers of the MVP, and the transformation is stable. The vertex bias, however, has to be subtracted on the GPU, i.e. in single precision. You basically subtract 1.00000xx from 1.00000xx, so you lose a lot of precision. This is probably the reason for the quantisation. However, the artifact is uniform, i.e. equal for all grid points, so there is no jittering. The grid simply moves in steps of ~0.1m.
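The "subtract 1.00000xx from 1.00000xx" effect is easy to reproduce: round both the vertex and the bias to float32 first (as a single-precision GPU subtraction effectively does) and the difference snaps to the float spacing at that magnitude, about 0.5 m near an Earth radius. The numbers here are illustrative, not from the engine:

```python
import numpy as np

bias   = 6_000_000.0   # clipmap-centre bias, metres
vertex = 6_000_000.3   # a vertex 0.3 m away from the bias point

# GPU-style: both operands are rounded to float32 BEFORE subtracting,
# so the result snaps to the 0.5 m float spacing near 6e6.
gpu = np.float32(vertex) - np.float32(bias)

# CPU-style: the subtraction happens in double, keeping the offset.
cpu = vertex - bias   # ~0.3
```

This matches the observation above: the error is identical for every vertex (no jittering), but positions quantise into uniform steps as the camera translates.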
This topic is closed to new replies.