"...the movement relative to the eye, before the modulo bit resets the positions of the vertices..."
The modulo should be the only thing moving the vertices. As for movement relative to the eye: aside from the standard ModelViewProjection transform, nothing should be translating relative to the eye. The camera moves, and the vertices only change position when the modulus changes.
The grid cells need to be odd because each level of detail has twice the number of vertices as the next level, and they need to line up. Like this:
That inner level has 4 vertices along its edge. See how that hits the middle of a quad in the next LOD? By extending it to 5 vertices, the LOD edges always line up.
All grids are 2^n - 1 vertices per side. The reason for this number is that textures generally prefer power-of-two dimensions (2^n), but since an odd vertex count is required, the vertices only sample 255 of the 256 texels available.
Pay attention to the outer levels as the camera moves. See how they only "move" when the camera has moved far enough for the modulus to index the next pixel? They don't slide around with every change in eye position.
Actually I misspoke. I meant to say that I use two vertices from the outer LOD and one from the inner. The whole point of them is to get rid of the T-junction by drawing a triangle there.
I'm still not understanding what you mean when you say that your layers "slide". In my implementation, the LOD levels move in discrete steps, such that one pixel always corresponds to one vertex. As you fly about the scene, the levels in the grid only move (and shift around with the L-shaped strip scheme) when the camera has moved far enough within the LOD level.
So the degenerates stay in exactly the same place as you move, until the camera passes a threshold; then the LOD levels shift, and the degenerates shift with them to accommodate the new boundaries.
So you're talking about the degenerate triangles that marry the LOD levels, right? I'm not sure why you say that "these in-between rings scale a bit"; they shouldn't scale at all. The way I did it was to create the rings as static vertex buffers that use the Y vertex component to represent the LOD level they belong to. So each triangle is composed of two vertices from the inner LOD and one from the outer; the inner ones have a Y component of 0 and the outer ones have a Y component of 1. This lets me pick which LOD texture to fetch the height from in the vertex shader.
I was banging my head against this as well and just solved it. So there are four issues really.
First of all, that image you're comparing against is squashed horizontally. The actual transmittance table is a 256x64 texture, and should look like this:
Second, your texture is upside down. I know because I generated the same one at first. Swap the Y coordinate.
Third, your colors look washed out because you are using an HM value of 12; it should be 1.2.
And fourth, in the Bruneton code he defines "TRANSMITTANCE_NON_LINEAR" (in the common.glsl file). You are using the code from the #else branch of that #ifdef, which indeed produces the curve you are seeing. With the code from the #ifdef TRANSMITTANCE_NON_LINEAR block, you get the correct curve. The difference is in the last two lines of your getTransmittanceRMu function; instead of this: