# Terrain tiles and LOD, a first result


So, I made a video showing where my project stands. Sorry for the poor quality, it is my first screen capture and I haven't spent time on tweaking things.

It shows my implementation of continuous distance-dependent LOD (after F. Strugar): 4 tiles (correctly stitched together, if I may say so :-)). The white boxes are the tiles' bounding boxes (which I plan to use later for building a spatial structure and for visibility/culling/loading checks); the coloured boxes show the nodes selected against the view frustum from each tile's hierarchy of nodes (a quadtree). Each tile has its own quadtree, and selection is done per frame and per tile, so the nodes come in pre-sorted and loading/unloading should be pretty easy. Rendering is done with a single flat grid mesh. In this case the node size is 64×64, but I applied a multiplier of 4 (because the SRTM data I use is really sparse). That means there are 4 grid positions per post of the (16-bit greyscale) heightmap, faking a much higher resolution than there really is.
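
A per-tile selection like that can be sketched roughly as follows (all names are mine, not from the repo): a node is refined while the camera lies within its level's LOD range, otherwise it is selected whole and later drawn with the shared flat grid mesh.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Rough sketch of per-tile quadtree selection (illustrative names, not
// the actual repo code). A node is split further while the camera is
// inside the LOD range of its level; otherwise the node itself is
// selected.
struct Node {
    float cx, cz, halfSize;        // centre and half extent in the plane
    int   level;                   // 0 = leaf
    Node* children[4] = { nullptr, nullptr, nullptr, nullptr };
};

void selectNodes( const Node* n, float camX, float camZ,
                  const std::vector<float>& lodRange,   // per-level refine distance
                  std::vector<const Node*>& out ) {
    // distance from the camera to the node's square footprint
    float dx = std::max( std::abs( camX - n->cx ) - n->halfSize, 0.0f );
    float dz = std::max( std::abs( camZ - n->cz ) - n->halfSize, 0.0f );
    float dist = std::sqrt( dx * dx + dz * dz );
    bool refine = n->level > 0 && n->children[0] && dist < lodRange[n->level];
    if ( !refine ) {
        out.push_back( n );        // rendered with the one shared grid mesh
        return;
    }
    for ( const Node* c : n->children )
        selectNodes( c, camX, camZ, lodRange, out );
}
```

Because the recursion visits children in a fixed order per tile, the selected nodes come out pre-sorted, which matches the streaming argument above.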

Between LOD levels there is continuous morphing of vertices. The distribution is recalculated when the depth of view (near and far plane) changes; as shown, the LOD levels travel in and out from the camera position. In the same way they travel with the viewer; I hope that comes across despite the bad quality.
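
The morph boils down to a per-vertex blend factor derived from distance; a minimal sketch (my names, simplified from Strugar's paper, not the repo code):

```cpp
#include <algorithm>

// Minimal sketch of the CDLOD morph factor (simplified from Strugar's
// paper; names are mine). A vertex is fully detailed (0) near the
// camera and fully snapped to the next coarser level (1) at the outer
// edge of its LOD range, which gives the continuous transition.
float morphFactor( float distToCamera, float morphStart, float morphEnd ) {
    float t = ( distToCamera - morphStart ) / ( morphEnd - morphStart );
    return std::clamp( t, 0.0f, 1.0f );
}

// The vertex position is then blended between its own grid position and
// the position it would occupy in the parent level:
float morphVertex( float fine, float coarse, float k ) {
    return fine + ( coarse - fine ) * k;   // lerp, mix() in GLSL
}
```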

Next steps: render relative to eye, generate and stream heightmap tiles (emancipate from prefab DEM data ;-)), shading/texturing.

Again: this heavily relies on the work of others; I have just typed it in.

All thoughts very welcome!

(source on github)

Nice! What did you use? (DirectX, OpenGL, some library or engine, etc.) Also, when you say "render relative to eye", does that mean you will keep the camera at the origin and move the data around?

Thanks :-)

I went on foot with C++/OpenGL 4.5. Additional libs are glm for vectors and matrices; I used png++ for the conversion of the SRTM ASCII data into PNGs, stb_image to load texture data, and Dear ImGui for the UI.

I have played with relative-to-eye rendering, by converting double-precision world positions into two floats (high and low part):

```cpp
static inline void doubleToTwoFloats( const double &d, float &high, float &low ) {
    high = (float)d;
    low  = (float)( d - high );
}
```

and pass these into the vertex shader, together with the camera position split the same way (glslang interface):

```glsl
layout( location = 0 ) in vec3 inPositionHigh;
layout( location = 1 ) in vec3 inPositionLow;

layout( location = 5 ) uniform vec3 g_cameraPositionHigh;
layout( location = 6 ) uniform vec3 g_cameraPositionLow;
```

In the vertex shader, the camera position is subtracted from the world position and an MVP matrix relative to eye is applied. That matrix is the MV matrix stripped of its translation (keeping the upper-left 3×3 in the OpenGL case), multiplied with the perspective matrix (C++ code):

```cpp
// view model matrix ...
glm::mat4 mv{ c->getViewMatrix() * m_modelMatrix };
// ... strip off translation ...
mv[3].x = 0.0f; mv[3].y = 0.0f; mv[3].z = 0.0f;
// ... and multiply with the perspective matrix
glm::mat4 mvprte{ c->getPerspectiveProjectionMatrix() * mv };
setUniform( m_shader->getProgram(), "u_projectionViewModelMatrixRelativeToEye", mvprte );
```

This is passed into the shader and applied to the coordinates, like so:

```glsl
vec3 t1 = inPositionLow - g_cameraPositionLow;
vec3 e = t1 - inPositionLow;
vec3 t2 = ( ( -g_cameraPositionLow - e ) + ( inPositionLow - ( t1 - e ) ) ) + inPositionHigh - g_cameraPositionHigh;
vec3 highDifference = t1 + t2;
vec3 lowDifference = t2 - ( highDifference - t1 );
vec3 position = highDifference + lowDifference;
gl_Position = u_projectionViewModelMatrixRelativeToEye * vec4( position, 1.0f );
```

This way no data has to be changed; the vertex data just needs double the space, and it costs several extra additions in the shader. I got that from the book by Cozzi/Ring, 3D Engine Design for Virtual Globes.
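
As a quick CPU-side illustration of why the split helps (made-up values, my names): at roughly 13 million metres from the origin a float ulp is about 1 metre, so a naive float subtraction loses sub-metre offsets, while subtracting the high and low parts separately keeps them.

```cpp
// CPU-side illustration (made-up values) of the high/low split. At
// ~13 million metres a float ulp is 1 metre, so the naive float
// difference loses a 0.123 m offset, while subtracting high and low
// parts separately, as in the shader above, keeps it.
static inline void doubleToTwoFloats( const double &d, float &high, float &low ) {
    high = (float)d;
    low  = (float)( d - high );
}

float relativeToEyeX( double worldX, double cameraX ) {
    float wh, wl, ch, cl;
    doubleToTwoFloats( worldX,  wh, wl );
    doubleToTwoFloats( cameraX, ch, cl );
    return ( wh - ch ) + ( wl - cl );   // same arithmetic idea as the shader
}
```

For these values a plain `(float)worldX - (float)cameraX` collapses to 0, while the split version preserves the offset.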

In my repo, under Source/Applications/Icosphere/Ellipsoid, you can find the corresponding code; it is not yet implemented in the LOD shaders.

Hope that makes some sense ...

Sounds like I'm doing it somewhat differently. I just offset chunk data by the center of each chunk and then convert to float, and add the reverse offset back in when doing transformations on the GPU side. If a chunk is close to the camera it keeps its precision; if it's far it loses precision, but that doesn't matter because it's far. That all has to be combined with the LOD system, however, otherwise you get Z-fighting. I probably get away with doing this the simple way because almost everything is done on the CPU and I'm constantly re-chunking and sending over new meshes for the updated chunks. That part is certainly not simple 😜, at least for the voxel code. For straight height-mapped mesh terrain, it isn't too bad.
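
The per-chunk offset idea can be sketched like so (invented names, not the poster's code): subtract in double first, and narrow to float only once the value is already small.

```cpp
// Sketch of the per-chunk offset scheme (invented names): vertices are
// stored as floats relative to the chunk centre, and per frame the
// centre-minus-camera translation is computed in double before
// narrowing to float, when it is already small for nearby chunks.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

Vec3f chunkToEyeOffset( const Vec3d &chunkCenter, const Vec3d &camera ) {
    // subtract in double; the result is small near the camera, so the
    // narrowing conversion costs almost no precision there
    return { (float)( chunkCenter.x - camera.x ),
             (float)( chunkCenter.y - camera.y ),
             (float)( chunkCenter.z - camera.z ) };
}
```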


I have tested the above with a real-world-sized world (rwsw), about 13 million meters in diameter, seen from high orbit. It works without noticeable delay (GTX 970).

Ok, I see. I don't have meshes, only the min/max values of the nodes and a single static mesh the size of a leaf node times the resolution multiplier. All the logic (except the movement and selection stuff, as well as tile loading, which of course is done on the CPU) is in the shader. Framerate is no issue, for now. That way I don't expect problems with streaming of data, and I am not size-limited. Double precision should/could/might/hopefully does even cover an inner solar system ...
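
Reusing the one static grid for every selected node then only needs a per-node offset and scale, roughly like this (my names, not the repo's):

```cpp
// Rough sketch (my names) of how one static grid mesh can serve every
// selected node: each node contributes only an offset and a scale,
// applied per vertex before the height lookup in the shader.
struct NodeParams { float offsetX, offsetZ, scale; };

void gridToWorldXZ( const NodeParams &n, float gx, float gz,
                    float &wx, float &wz ) {
    // gx, gz are the shared grid's local coordinates
    wx = n.offsetX + gx * n.scale;
    wz = n.offsetZ + gz * n.scale;
}
```

Coarser nodes simply use a larger scale over the same vertex buffer, which is why no per-node meshes are needed.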


> 43 minutes ago, Green_Baron said:
>
> Double precision should/could/might/hopefully does even cover an inner solar system ...

I remember calculating it once and I believe it does cover it nicely.

The way I set it up is that the camera is just another object in the tree. So when you move close to a planet you can put it under that planet's node (which is under the system node, which is under the star cluster node, and so forth...). The center of a planet is always at the origin in its own coordinate system.

Rendering is done by starting at the camera's position in the hierarchy and walking both down and up until the whole tree is covered. If you reposition the camera in the tree so it's under whatever you are close to, you should be able to render a whole galaxy. You don't need precision for far away things. The only caveat is that you can't put the camera under an instanced object, because its rendering path becomes ambiguous. But you shouldn't really need to do that anyway.
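
That down-and-up walk from the camera's node might look roughly like this (all names invented; offsets kept in double so precision is only spent near the camera):

```cpp
#include <utility>
#include <vector>

// Invented sketch of rendering outward from the camera's node: visit a
// node, then recurse into its children and up to its parent, skipping
// the node we came from, while accumulating each node's origin in the
// camera's local frame in double precision.
struct Vec3d { double x, y, z; };
struct SceneNode {
    SceneNode* parent = nullptr;
    std::vector<SceneNode*> children;
    Vec3d offsetFromParent{ 0, 0, 0 };   // this node's origin in the parent's frame
};

void walkFrom( const SceneNode* n, const SceneNode* cameFrom, Vec3d origin,
               std::vector<std::pair<const SceneNode*, Vec3d>>& out ) {
    out.push_back( { n, origin } );      // "draw" n at 'origin' (camera frame)
    for ( const SceneNode* c : n->children )
        if ( c != cameFrom )
            walkFrom( c, n, { origin.x + c->offsetFromParent.x,
                              origin.y + c->offsetFromParent.y,
                              origin.z + c->offsetFromParent.z }, out );
    if ( n->parent && n->parent != cameFrom )
        walkFrom( n->parent, n, { origin.x - n->offsetFromParent.x,
                                  origin.y - n->offsetFromParent.y,
                                  origin.z - n->offsetFromParent.z }, out );
}
```

Starting the walk at the camera's own node means nearby objects get small, precise origins, and only far-away things accumulate large (and harmlessly imprecise) offsets.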

