Planet Rendering: Spherical Level-of-Detail in less than 100 lines of C++

17 comments, last by spacerat 7 years, 11 months ago

Basically it works nicely, but once you get up close, float precision is no longer enough. What is the best solution to that, other than splitting the frustum into near/far ranges?

Generally you can get away with having each vertex defined relative to the center of the patch which contains it, and rendering patches relative to the camera... Not necessarily ideal for a recursive implementation.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


That's basically what happens. I set the 3 corner coordinates of a patch in the vertex shader, and the rest is interpolated. Maybe I should first multiply the matrices to avoid a precision loss.

I hadn't looked closely enough to realise you are doing this on the GPU.

I did that at first, but abandoned it several years ago in favour of calculating all vertices on the CPU, due to a number of precision issues in GPU land. YMMV.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Isn't that slow? There are around 300k to 600k vertices on screen.

It's all cached, uses multithreaded updates, and most of the terrain data isn't changing on a frame-by-frame basis anyway.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Yes, but even if the geometry is unchanged, the view matrix is different each frame, so you need to transform all vertices for the camera every frame, no?

I think we are talking at cross purposes. I generate the vertex positions on the CPU, subtract the patch center, and then upload them to vertex buffers. Per frame, each patch is rendered relative to the camera.

The important part is the "subtract the patch center". As far as I can tell, your vertices are relative to the planet center, not the local patch center, and that's a big no-no. You only have around 7 significant decimal digits of precision in a 32-bit float, so the larger your coordinates get, the coarser the representable positions become.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

OK, I just updated the code.

Here's a brief summary:

  • Changed to double on the CPU side
  • Relative coordinates are used for close tiles
  • Speed and altitude are displayed in km
  • Movement is now via WASD
  • The mouse wheel adjusts the camera speed
  • The camera near/far planes are adjusted according to the camera altitude
  • The code is now split into two parts: simple (100-line version) and complex (patch-based version)

Still to do is the switch to texture-based heightmaps; hopefully this will improve the imprecision up close.

https://github.com/sp4cerat/Planet-LOD

Update:

I added a textured version to GitHub (see screenshot). It's much faster than the previous version, which calculates the terrain in the shader.

For the next version, I plan to generate the meshes and textures on the GPU or CPU and stream them to the GPU for rendering, basically caching the tiles.

This topic is closed to new replies.
