Planet Rendering: Spherical Level-of-Detail in less than 100 lines of C++
> Basically it works nicely, but once you get up close, float precision is no longer enough. What is the best solution to that, other than splitting the frustum into near/far ranges?

Generally you can get away with having each vertex defined relative to the center of the patch which contains it, and rendering patches relative to the camera... though that's not necessarily ideal for a recursive implementation.
That's basically what happens. I set the 3 corner coordinates of a patch in the vertex shader; the rest is interpolated. Maybe I should multiply the matrices together first to avoid the precision loss.
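A rough sketch of that "multiply the matrices first" idea: combine the transforms in double precision on the CPU and only convert the final result to float for the uniform upload. `Mat4d` is a hypothetical column-major 4x4 type for illustration, not the article's actual code.

```cpp
#include <array>

// Hypothetical column-major 4x4 matrix in double precision.
struct Mat4d { std::array<double, 16> m; };

// Multiply two matrices entirely in double, so intermediate products of
// large (planet-scale) translations don't lose bits in float.
Mat4d mul(const Mat4d& a, const Mat4d& b) {
    Mat4d r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = s;
        }
    return r;
}

// Convert only the final combined matrix to float for the GPU upload.
std::array<float, 16> toFloat(const Mat4d& a) {
    std::array<float, 16> f{};
    for (int i = 0; i < 16; ++i) f[i] = static_cast<float>(a.m[i]);
    return f;
}
```

The point is that the large translations mostly cancel during the double-precision multiply, so the float matrix the shader sees contains only small, well-conditioned values.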
I hadn't looked closely enough to realise you are doing this on the GPU.
I did that at first, but abandoned it several years ago in favour of calculating all vertices on the CPU, due to a number of precision issues in GPU land. YMMV.
> Isn't that slow? There are around 300k to 600k vertices on the screen.

It's all cached, uses multithreaded updates, and most of the terrain data isn't changing on a frame-by-frame basis anyway.
Yes, but even if the geometry is unchanged, the view matrix is different each frame, so you need to transform all the vertices for the camera every frame, no?
I think we are talking at cross purposes. I generate the vertex positions on the CPU, subtract the patch center, and then upload them to vertex buffers. Per frame, each patch is rendered relative to the camera.
The important part is the "subtract the patch center". As far as I can tell, your vertices are relative to the planet center, not the local patch center, and that's a big no-no. A 32-bit float only has about 7 significant decimal digits of precision, so the larger your distances get, the lower the absolute precision.
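A minimal sketch of the patch-center-relative scheme described above, assuming hypothetical `Vec3d`/`Vec3f` types (the names and helpers are illustrative, not from the article's code): vertices are stored as small offsets from the patch center, and only the per-patch, camera-relative translation carries the large magnitudes, computed in double.

```cpp
// Double precision on the CPU for world-space positions, float for the GPU.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Subtracting the patch center makes the vertex values small, so the
// conversion to float loses almost no precision.
inline Vec3f toPatchLocal(const Vec3d& worldPos, const Vec3d& patchCenter) {
    return { static_cast<float>(worldPos.x - patchCenter.x),
             static_cast<float>(worldPos.y - patchCenter.y),
             static_cast<float>(worldPos.z - patchCenter.z) };
}

// Per frame, the patch is placed relative to the camera; the large
// planet-scale coordinates cancel in double precision before the float cast.
inline Vec3f patchOffsetFromCamera(const Vec3d& patchCenter,
                                   const Vec3d& cameraPos) {
    return { static_cast<float>(patchCenter.x - cameraPos.x),
             static_cast<float>(patchCenter.y - cameraPos.y),
             static_cast<float>(patchCenter.z - cameraPos.z) };
}
```

For an Earth-sized radius (~6.37e6 m), a float ulp is 0.5 m, so a vertex at 6371000.25 m from the planet center cannot even be represented; relative to a nearby patch center the same vertex is exactly 0.25 in float.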
Ok, just updated the code.
Here's a brief summary:
- Changed to double precision on the CPU side
- Relative coordinates are used for close tiles
- Speed and altitude are displayed in km
- Movement is now via WASD
- The mouse wheel adjusts the camera speed
- The camera's near/far planes are adjusted according to the camera altitude
- The code is now split into two parts: the simple (100-line) version and the complex (patch-based) version
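One way the altitude-dependent near/far adjustment from the list above could be sketched, under assumptions of my own (the 1% rule, the clamp bounds, and an Earth-like radius are all illustrative, not the article's actual values):

```cpp
#include <algorithm>
#include <cmath>

struct ClipPlanes { double nearZ, farZ; };

// Hypothetical sketch: push the near plane out as the camera climbs so that
// depth precision stays usable both on the surface and from orbit.
ClipPlanes clipPlanesForAltitude(double altitudeMeters) {
    // Keep the near plane a small fraction of the altitude, clamped so it
    // never collapses to zero at ground level.
    double nearZ = std::clamp(altitudeMeters * 0.01, 0.1, 10000.0);

    // The far plane must at least reach the horizon; for a planet of
    // radius r the horizon distance at altitude h is sqrt(h * (h + 2r)).
    const double r = 6371000.0;  // assumed Earth-like radius in meters
    double horizon = std::sqrt(altitudeMeters * (altitudeMeters + 2.0 * r));
    double farZ = std::max(horizon * 1.5, nearZ * 1000.0);
    return { nearZ, farZ };
}
```

Since depth-buffer precision is roughly proportional to the near/far ratio, raising the near plane with altitude buys far more precision than lowering the far plane ever could.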
Still left to do is the change to texture-based heightmaps; hopefully this will reduce the imprecision close up.
Update:
I added a textured version to GitHub (see screenshot). It's much faster than the previous version, which calculated the terrain in the shader.
For the next version, I plan to generate the meshes and textures on the GPU or CPU and stream them to the GPU for rendering, essentially caching the tiles.