So, I made a video showing where my project is now. Sorry for the poor quality; it is my first screen recording and I haven't spent time tweaking things.
It shows my implementation of continuous distance-dependent LOD (after F. Strugar): 4 tiles (correctly stitched together, if I may say so :-)). The white boxes are the tiles' bounding boxes (which I plan to use later for building a structure and for visibility/culling/loading checks), and the coloured boxes show the nodes selected against the view frustum from each tile's node hierarchy (a quadtree). Each tile has its own quadtree, and selection is done per frame and per tile, so the nodes come in pre-sorted and loading and unloading should be pretty easy. Rendering is done via a simple flat mesh. In this case the node size is 64x64, but I applied a multiplier of 4 (because the SRTM data I use is really sparse). That means each post from the (16-bit greyscale) heightmap spans 4 grid positions, faking a much higher resolution than there really is.
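To make the per-tile selection step concrete, here is a minimal sketch of the kind of quadtree traversal CDLOD uses: a node is split into its four children while it lies inside the next finer level's distance range, otherwise it is emitted at its current level. All names (Node, selectNodes, lodRanges) are illustrative and not the actual project API; frustum culling and the partial-coverage case from Strugar's paper are omitted for brevity.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A quadtree node covering a square region of one tile in the XZ plane.
struct Node {
    float cx, cz, halfSize; // center and half-extent of the node's square
    int level;              // 0 = root (coarsest), increasing towards leaves
};

// Squared distance from the camera (projected to XZ) to the node's square.
static float distSqToNode(const Node& n, float camX, float camZ) {
    float dx = std::fmax(std::fabs(camX - n.cx) - n.halfSize, 0.0f);
    float dz = std::fmax(std::fabs(camZ - n.cz) - n.halfSize, 0.0f);
    return dx * dx + dz * dz;
}

// Recursively collect selected nodes. lodRanges[level] is that level's outer
// selection radius; in CDLOD these ranges are recomputed whenever the depth
// of view (near/far planes) changes.
void selectNodes(const Node& n, const std::vector<float>& lodRanges,
                 float camX, float camZ, std::vector<Node>& out) {
    int maxLevel = (int)lodRanges.size() - 1;
    float d2 = distSqToNode(n, camX, camZ);
    // Inside the next (finer) level's range and not yet at a leaf: recurse.
    if (n.level < maxLevel &&
        d2 < lodRanges[n.level + 1] * lodRanges[n.level + 1]) {
        float h = n.halfSize * 0.5f;
        for (int i = 0; i < 4; ++i) {
            Node child{n.cx + ((i & 1) ? h : -h),
                       n.cz + ((i & 2) ? h : -h), h, n.level + 1};
            selectNodes(child, lodRanges, camX, camZ, out);
        }
    } else {
        out.push_back(n); // render this node at its own level
    }
}
```

Because each tile runs this traversal independently every frame, the selected nodes naturally arrive grouped by tile, which is what makes streaming tiles in and out straightforward.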
Between LOD levels there is continuous morphing of vertices. The LOD range distribution is recalculated when the depth of view (near and far planes) changes; as shown, the LOD levels travel in and out from the camera position. In the same way, they travel with the viewer, which I hope can be seen despite the poor quality.
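The morphing idea can be sketched as follows: inside the outer part of each LOD range, odd-indexed grid vertices slide onto the even-indexed vertices of the next coarser grid, so by the time a node swaps levels its mesh already matches the coarser one and no popping occurs. This is a hedged, CPU-side illustration of the scheme from Strugar's paper (in practice it runs in the vertex shader); the function names and the linear falloff are assumptions, not the project's exact code.

```cpp
#include <algorithm>
#include <cmath>

// Morph weight for a vertex at distance `dist` from the camera:
// 0 at morphStart (full detail of the current level),
// 1 at morphEnd (fully collapsed onto the coarser level).
float morphFactor(float dist, float morphStart, float morphEnd) {
    float t = (dist - morphStart) / (morphEnd - morphStart);
    return std::clamp(t, 0.0f, 1.0f); // requires C++17
}

// Morph one grid coordinate (in units of the current level's grid spacing)
// towards the coarser grid: even coordinates are vertices of both grids and
// stay put (frac == 0); odd coordinates (frac == 1) slide down onto the
// even vertex below them as k goes from 0 to 1.
float morphCoord(float gridCoord, float k) {
    float frac = gridCoord - 2.0f * std::floor(gridCoord * 0.5f);
    return gridCoord - frac * k;
}
```

Recomputing morphStart/morphEnd per level from the current near/far planes is what makes the level boundaries travel in and out with the depth of view, as seen in the video.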
Next steps: rendering relative to the eye, generating and streaming heightmap tiles (emancipating myself from prefab DEM data ;-)), and shading/texturing.
Again: this heavily relies on the work of others; I have just typed it in.
All thoughts very welcome!
(source on GitHub)