In general, the trend on modern hardware seems to be that math gets cheaper and cheaper while bandwidth gets (relatively) more expensive. So compression at the cost of a few ops is often worthwhile. It does depend on the specific hardware and use case though (and I'm no expert).
I think half floats would struggle to cover 1000m at 0.1m intervals. A half float is only 16 bits, so it has just 65536 possible bit patterns, and most of its precision is concentrated close to zero: between 512 and 1024 the spacing between representable values is already 0.5, so it's probably not appropriate for position data at that range. Half floats are probably fine for direction and colour though.
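To make that concrete, here's a quick check using Python's struct module (the '<e' format character is IEEE 754 half precision):

```python
import struct

def to_half(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Near 1000, representable half floats are 0.5 apart,
# so 0.1m steps collapse onto the same value.
print(to_half(1000.1))  # 1000.0
print(to_half(1000.0))  # 1000.0

# Even 0.1 itself isn't exactly representable.
print(to_half(0.1))     # 0.0999755859375
```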
They are talking about sRGB encoding. Ordinary images (as in photographs with 8 bits per component) are typically encoded in the sRGB color space. You cannot correctly perform math on these values until you have first converted them to the linear RGB color space. If you create a texture using an sRGB format (e.g. GL_SRGB8_ALPHA8 or DXGI_FORMAT_R8G8B8A8_UNORM_SRGB), this conversion happens automatically when you sample the texture.
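For reference, the sRGB-to-linear conversion the hardware does for you looks roughly like this (a sketch in Python of the standard piecewise formula; actual GPUs may use approximations):

```python
def srgb_to_linear(c):
    """Convert one sRGB-encoded channel value in [0, 1] to linear RGB."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# A mid-grey of 0.5 in sRGB is only ~21% linear intensity,
# which is why doing math directly on sRGB values gives wrong results.
print(round(srgb_to_linear(0.5), 4))  # 0.214
```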
Of course, it's unfeasible to render such a scene using metres as my base unit, as I have to specify the spacecraft's position in hundreds of thousands of metres relative to the centre of Earth, and using such massive numbers to position objects in Direct3D seems to cause problems.
Hundreds of thousands of meters doesn't sound like a whole lot, not if you are using 32-bit floats. If you were simulating the entire galaxy, I could see this being an issue, but you are only simulating Earth out to LEO.
Edit: Then again, now that I think about it, precision does drop far from the origin. A 32-bit float has 24 significant bits, so at 100,000m from the origin the spacing between representable values is about 8mm, and at 100,000km it's about 8m. If the origin is centered around the spacecraft then maybe it wouldn't be an issue. You don't need better than a few meters of accuracy for something >100,000 km away.
Also, it doesn't really matter whether you use meters, kilometers or millimeters as your base unit. This has no effect on the relative precision of the calculation when you are working with floating point numbers; floats carry the same number of significant bits at every scale, so changing units essentially just shifts the exponent.
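Both points are easy to check in Python by computing the spacing (ulp) of a 32-bit float at a given magnitude, bumping the raw bit pattern by one:

```python
import struct

def f32_spacing(x):
    """Distance from the float32 value nearest x to the next float32 up."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    next_up = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return next_up - struct.unpack('<f', struct.pack('<f', x))[0]

print(f32_spacing(100_000.0))      # 0.0078125 -> ~8mm steps at 100km out
print(f32_spacing(100_000_000.0))  # 8.0       -> 8m steps at 100,000km out

# Relative spacing stays roughly 2^-23 regardless of the unit chosen.
print(f32_spacing(0.1) / 0.1, f32_spacing(100_000.0) / 100_000.0)
```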
I can't fathom why an immutable texture vs. a mutable texture would have any effect on how the mipmaps are generated. In the case of render-to-texture you have no choice but to use glGenerateMipmap. Well, maybe you could do it yourself with a compute shader if that is available to you; I wonder what the performance difference would be, if any.
Also, maybe try using glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST).
Absolutely, you can in modern OpenGL. Google "programmable vertex pulling". One way would be to store your indices as vertex attributes, and store your actual vertex attributes in an SSBO. In your vertex shader you index the SSBO arrays using the indices stored in the vertex attributes. The second way of handling it would be to forgo traditional vertex attributes altogether. Just bind a blank VAO (you must be using a core profile context) and store both your vertex attributes and your indices inside SSBOs. In your vertex shader you use gl_VertexID to index the array containing your indices, then use those to index the arrays containing your vertex attributes. In both cases you would be using glDrawArrays instead of glDrawElements.
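A minimal sketch of that second approach as a GLSL vertex shader (buffer names and bindings are made up for illustration; needs GL 4.3+ for SSBOs):

```glsl
#version 430

// Hypothetical layout: both the index data and the positions live in SSBOs.
layout(std430, binding = 0) readonly buffer IndexBuffer {
    uint indices[];
};
layout(std430, binding = 1) readonly buffer PositionBuffer {
    float positions[]; // xyz triples, flattened to avoid std430 vec3 padding
};

void main() {
    // gl_VertexID walks the index buffer; the fetched index pulls the attribute.
    uint i = indices[gl_VertexID];
    vec3 pos = vec3(positions[3u * i + 0u],
                    positions[3u * i + 1u],
                    positions[3u * i + 2u]);
    gl_Position = vec4(pos, 1.0);
}
```

On the CPU side you would then bind a blank VAO and issue glDrawArrays(GL_TRIANGLES, 0, indexCount), with indexCount being the number of entries in the index SSBO.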
You'll probably get better performance if you just stick to the traditional method, and you'll definitely get wider compatibility.