Yes, it was the normal calculation. I inherited the code from the CPU version, so I guess that one was bad too. I eventually settled on a Sobel operator:

float4 PSNHeightToNormal(float4 inPos : SV_POSITION, float2 inTex : TEXCOORD0) : SV_TARGET
{
    float ps = 1.0 / size;      // one texel in UV space
    float scale = worldScale;
    float3 n;
    // Sobel kernel in X: right column minus left column, negated
    n.x = -(h(inTex,  ps,  ps) - h(inTex, -ps,  ps)
      + 2 * (h(inTex,  ps,   0) - h(inTex, -ps,   0))
      +      h(inTex,  ps, -ps) - h(inTex, -ps, -ps));
    // Sobel kernel in Y: bottom row minus top row, negated
    n.y = -(h(inTex, -ps, -ps) - h(inTex, -ps,  ps)
      + 2 * (h(inTex,   0, -ps) - h(inTex,   0,  ps))
      +      h(inTex,  ps, -ps) - h(inTex,  ps,  ps));
    n.z = 1 / scale;
    n = normalize(n);
    n = n * 0.5 + 0.5;               // pack [-1, 1] into [0, 1]
    return float4(n.x, n.z, n.y, 1); // swizzle y/z for the LH, Y-up convention
}

Hopefully this one works as expected. I still need to test it a bit because I am having a bit of a brain fart since the switch from RH to LH. I no longer have an intuitive concept of "forward" and I need to consciously convert in my head from one system to the other.

There are still a bunch of things to decide regarding how to interpret the data for LOD transitions, and whether to use mipmaps or just secondary lower-resolution textures.

Is there a way to control how DeviceContext->GenerateMips works, i.e. what filter it uses? I couldn't find anything.

Additionally, since for LOD I am generating every chunk separately using noise, I have reintroduced the issue of seams at chunk borders...