I tweaked and fixed the last few bugs in the Compute Shader pre-pass I previously discussed, and it now seeds my Hull Shader with additional per-patch data.
However, I'm not really happy with the results. Today's experiments have mostly demonstrated that a "one size fits all" metric is very hard to find - some heightmaps suit different heuristics better than others, and a multi-variable LOD scheme is very hard to balance regardless of the target data. It's proven far too easy to invalidate one variable in favour of another, to have multiple variables cancel each other out, or to have variables working well only in different parts of the image (my main problem)...
The above is the naive approach - simply take the distance from the camera, clamped to a maximum distance.
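As a rough illustration, the naive metric can be sketched on the CPU side like this (a Python stand-in for the shader logic; the function name and the 500-unit clamp distance are hypothetical):

```python
import math

def naive_distance_lod(camera, patch_centre, max_dist=500.0):
    """Clamp camera-to-patch distance to max_dist, then map it to a
    [0, 1] detail factor where 1.0 = nearest / most detail."""
    d = math.dist(camera, patch_centre)
    return 1.0 - min(d, max_dist) / max_dist
```

A patch at the camera gets the full factor of 1.0; anything at or beyond `max_dist` bottoms out at 0.0.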
Two main problems exist - there is extra detail where it's not needed (flat areas are the same or a similar shade of green to the hilly areas), and there is no red because the geometry closest to the camera is culled by the near plane and view/projection clipping.
The above is a revision of the previous metric: it implements a near plane as well as a far plane. Notice that you can now see all three graduations of detail - red, green and blue.
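The near/far revision amounts to remapping the distance between two planes rather than clamping against one. A sketch under the same assumptions as before (hypothetical names, near and far chosen for illustration):

```python
import math

def ranged_distance_lod(camera, patch_centre, near=1.0, far=500.0):
    """Remap distance so the detail factor is 1.0 at the near plane
    and falls linearly to 0.0 at the far plane, clamped outside."""
    d = math.dist(camera, patch_centre)
    t = (d - near) / (far - near)
    return 1.0 - max(0.0, min(1.0, t))
```

Patches inside the near plane now saturate at full detail instead of being wasted on clipped geometry.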
Still, the problem of detail where it's not necessary remains.
The above is a static LOD metric using the standard deviation of height values. The idea is that 'noisy' patches have a high standard deviation whereas flatter areas have a very low one. This should distribute detail to the patches that vary the most and thus deserve it most.
It works pretty well, but there are a few cases where it can be thrown off quite badly - particularly where most of a patch is flat and only the edge is raised, like the skirting tiles around the islands.
The above is based on the spread of heights - basically maximum less minimum. This achieves a similar effect to the standard deviation but isn't so easily fooled by skirting tiles, at the expense of generating a few patches with more detail than they probably need.
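The spread metric is even simpler - a hedged sketch, again with a hypothetical normaliser (1.0 here assumes heights are already in a unit range):

```python
def spread_metric(heights, max_spread=1.0):
    """Maximum less minimum height of a patch, normalised to [0, 1].
    A mostly-flat patch with one raised edge still scores high,
    which is why it handles skirting tiles better than std. deviation."""
    return min((max(heights) - min(heights)) / max_spread, 1.0)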
The above modulates the standard deviation by the distance from the camera, which should work well as a hybrid. However, the typically very small standard deviation (a maximum of 0.305 in this image) means it either weights heavily toward the distance and gives mostly blue or, if weighted differently, is drowned out by the distance metric.
The above modulates distance with the spread of heights and seems to produce much more pleasing results, with a better distribution of detail. At this time it's my preferred hybrid metric for LOD.
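The preferred hybrid is just the two factors multiplied together. A self-contained sketch combining the earlier pieces (all names and constants remain hypothetical; the real version lives in the Hull Shader):

```python
import math

def hybrid_lod(camera, patch_centre, heights,
               near=1.0, far=500.0, max_spread=1.0):
    """Near/far distance factor modulated by the patch's height spread.
    Only patches that are both close and bumpy get full detail."""
    d = math.dist(camera, patch_centre)
    dist_factor = 1.0 - max(0.0, min(1.0, (d - near) / (far - near)))
    spread = min((max(heights) - min(heights)) / max_spread, 1.0)
    return dist_factor * spread
```

Multiplying rather than summing means either factor can veto detail, which is what keeps nearby flat patches cheap.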
I've posted another YouTube video of the latter algorithm, and in it (as well as in the above images) you can spot some gaps between patches - the twinkling white pixels. This is really not good; it seems to be a discontinuity introduced by my "improved" distance-from-camera equation, which is a shame. Something I need to look into tomorrow.
I want to start capturing some of the amplification ratios and other statistics as part of the display. I've got them writing to the console, but I want them in the videos so you can see the actual geometric complexity differences.
Thoughts?
You need a way to measure the std.deviation from the average plane of each patch to measure curvature more accurately (rather than from the horizontal plane, as you have already tried). You could maybe sample the surface normal in 16 places on the patch and average those normals to find your average plane? You could also measure the std.deviation of the dot product of the avg.normal against the sampled surface normals.
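The second suggestion - std.deviation of dot products against the averaged normal - might look something like this (a hedged sketch; function names are made up, and the sample normals would come from the heightmap):

```python
from math import sqrt
from statistics import pstdev

def _normalize(v):
    l = sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def normal_deviation_metric(sample_normals):
    """Std. deviation of dot(average normal, sample normal) across
    a patch. A flat patch scores 0; curvier patches score higher,
    independent of the patch's orientation in the world."""
    n = len(sample_normals)
    avg = _normalize(tuple(sum(v[i] for v in sample_normals) / n
                           for i in range(3)))
    dots = [sum(a * b for a, b in zip(avg, _normalize(v)))
            for v in sample_normals]
    return pstdev(dots)
```

Unlike height-based std.deviation, this is orientation-independent, so a uniformly sloped patch scores as flat.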
When I implemented a bicubic spline based terrain, I measured curvature of each patch by taking the second derivative of the spline equation because it was convenient to do so. I ignored the sign of the concavity and just looked at the magnitude to tell me how curvy the patch was on average. It worked pretty well, but I think std.deviation against an average plane would work even better.
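For a cubic segment h(t) = c0 + c1·t + c2·t² + c3·t³ the second derivative is h''(t) = 2·c2 + 6·c3·t, so the magnitude-only curvature described above reduces to something like this (an illustrative sketch, not the original implementation):

```python
def mean_curvature_magnitude(c2, c3, samples=8):
    """Average |h''(t)| over t in [0, 1] for the cubic
    h(t) = c0 + c1*t + c2*t^2 + c3*t^3, ignoring the sign
    of the concavity and keeping only its magnitude."""
    return sum(abs(2.0 * c2 + 6.0 * c3 * (i / (samples - 1)))
               for i in range(samples)) / samples
```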
You might also want to try modulating it with the dot product of your camera to patch vector and the avg patch normal. My thinking is that any patch forming a horizon from the camera's perspective should get more detail (if curvy) than a patch that is looked at from a perpendicular angle.
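That horizon weighting could be sketched as follows (hypothetical names again): a grazing view angle gives a dot product near zero, so inverting its magnitude boosts horizon-forming patches.

```python
from math import sqrt

def horizon_weight(camera_to_patch, patch_normal):
    """~1.0 for patches seen at a grazing angle (horizon-forming),
    ~0.0 for patches viewed face-on from a perpendicular angle."""
    def norm(v):
        l = sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    a, b = norm(camera_to_patch), norm(patch_normal)
    return 1.0 - abs(sum(x * y for x, y in zip(a, b)))
```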
This got longer than I intended, just throwing some ideas around I guess.