Thank you very much for the info!
There's a great breakdown of how Just Cause 2 created such huge view distances here - http://www.humus.name/index.php?page=Articles. Just scroll down to Creating Vast Game Worlds.
If I remember right, JC2 has a view distance of around 50km. That article covers some of the details of how they achieved it.
Regarding depth precision, it can depend on the API you're using. I know with D3D11 you can use a 32-bit float depth buffer with reverse-Z, and that will give you a depth precision of about 0.01 units out to just under 100,000 units (i.e. 1 cm precision at a depth of 99.9 km). It's basically limited by the precision of a 32-bit float, which I believe is about 7.2 decimal digits. If you're using OpenGL I've read there are other options such as logarithmic depth, which, like reverse-Z float in D3D11, gives a roughly linear depth precision similar to the above, although it may disable early-Z culling optimisations. There's a really good writeup someone did about it a while back actually:
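To see where those reverse-Z numbers come from, here's a quick sketch. With reverse-Z and an infinite far plane, the stored depth is roughly near / z, so the worst-case world-space error at a distance is one float32 ULP of the stored value mapped back to world units. The near-plane value of 0.1 m is my own assumption for illustration, not from the post above.

```python
import numpy as np

# Reverse-Z with an infinite far plane stores depth as d = near / z,
# so the near plane maps to 1.0 and infinity maps to 0.0.
near = 0.1  # assumed near plane in metres (illustrative)

def world_space_error(z):
    """Worst-case world-space depth error at distance z (metres)."""
    d = np.float32(near / z)       # stored reverse-Z depth
    ulp = np.spacing(d)            # gap to the next representable float32
    d64, ulp64 = float(d), float(ulp)
    # Error in world space if the stored depth shifts by one ULP:
    return near / d64 - near / (d64 + ulp64)

print(world_space_error(100.0))     # well under a millimetre
print(world_space_error(99_900.0))  # roughly a centimetre
```

Running the numbers this way reproduces the ~1 cm figure at ~100 km quoted above; the error stays near-proportional to distance, which is what "roughly linear depth precision" means in practice.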
I looked over the links. The Just Cause presentation is quite readable and I'll continue to study it. Some great ideas in there. Yet, with all their tricks, optimizations and experience, they still have a lot of Z fighting? The second OpenGL link is more difficult to understand.
At first I'll focus on getting things to work as expected and look good without trying to fix the Z buffer. Even at a range of 2 km I started having horrible Z precision near the far plane, so I increased the near plane for now.
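Pushing out the near plane is the right lever for a conventional depth buffer: with fixed-point depth, the world-space error near the far plane scales with z² and inversely with the near-plane distance. A quick sketch (the 24-bit format matches the XNA setup mentioned later in the thread; the specific near-plane values are my own, for illustration):

```python
# Conventional (non-reversed) fixed-point depth buffer:
# stored depth is d = (far / (far - near)) * (1 - near / z), quantised to N bits.
# Differentiating w.r.t. z gives a world-space step per quantisation level of
# roughly z^2 * (far - near) / (far * near * 2^N).

def fixed_depth_error(z, near, far, bits=24):
    """Approximate world-space depth resolution at distance z (metres)."""
    return z * z * (far - near) / (far * near * 2**bits)

# At the far plane of a 2 km range, error shrinks linearly as near moves out:
print(fixed_depth_error(2000.0, 0.1, 2000.0))  # metres of slop: terrible
print(fixed_depth_error(2000.0, 1.0, 2000.0))  # 10x near plane -> 10x better
```

With a 0.1 m near plane the resolution at 2 km is a couple of metres, which would explain the horrible fighting near the far plane; moving the near plane to 1 m buys a factor of ten.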
It seems that I have both under- and overestimated the complexity of the problem and the factors involved. My old view range was just 500 meters by default, with fog starting at more than 100 meters. The spherical fog left you with even less perceived distance. Now I'm trying out a 2 km view range, still with the spherical fog. I'm still having most of the problems I described earlier, but the increased view range makes them all less apparent. I'm still forced to use alpha fog to reduce most of the artifacts.
The problem of making it look good, natural and distant got even more complicated because my maps will be 4x4 or 8x8 kilometers. On the 4x4 map with a 2 km view distance you can see almost 1/4 of the way. On top of that, there is the added problem that the 8x8 maps, while pretty big, did not feel that big under certain circumstances. Now, with the bigger view distance, they feel even smaller. Making the map 16x16 will eat up 2 GiB of disk space. My streamer can handle it, but still, pretty big.
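As a sanity check on that 2 GiB figure, here's one set of assumptions that lands in the right ballpark. The 1 m sample spacing and 8 bytes per sample (e.g. height plus material/splat data) are my guesses, not numbers from the post, but they show how quickly area scales storage:

```python
# Hypothetical terrain storage estimate for a 16x16 km map.
# Assumptions (not from the post): 1 sample per metre, 8 bytes per sample.
samples = 16_000 * 16_000      # 256 million samples
bytes_total = samples * 8
print(bytes_total / 2**30)     # ~1.9 GiB
```

The key point is the quadratic growth: every doubling of map side length quadruples the storage, which is why 16x16 is four times the cost of 8x8.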
It works pretty well at low altitude when surrounded by higher terrain. In the cross-hair you can see a distant peak around 1.9 km from the camera:
The look down from the top of the peak is less impressive:
So is the walk to the bottom. The illusion of scale is pretty much broken:
I'll try to add a further level to the geo-mipmapping and one to the quad tree, and try to add another kilometer to the view distance to see how it looks. Maybe I can even render the entire map in the distance, but at 1/16 resolution.
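Since each geo-mipmap level halves the grid resolution, 1/16 resolution corresponds to four levels above the base mesh. A minimal sketch of distance-based level selection; the 250 m base threshold and doubling scheme are illustrative assumptions, not the poster's actual values:

```python
import math

# Geo-mipmapping sketch: each level halves the terrain grid resolution,
# so level 4 renders at 1/16 resolution. Distance thresholds double per
# level here (250 m, 500 m, 1 km, 2 km, ...), a common heuristic.

def mip_level(distance, base_distance=250.0, max_level=4):
    """Pick a mip level from camera distance; far chunks cap at max_level."""
    if distance <= base_distance:
        return 0
    level = int(math.log2(distance / base_distance)) + 1
    return min(level, max_level)

print(mip_level(100.0))   # 0: full resolution up close
print(mip_level(3000.0))  # 4: capped at 1/16 resolution in the distance
```

Capping at the coarsest level is what makes "render the entire map in the distance" feasible: beyond the last threshold every chunk costs 1/256 of the full-resolution triangle count.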
It doesn't matter which API you're using. You can interpret and store the position, or part of it, in any way you wish in either GL or DX shaders.
I am using XNA, and the maximum it supports is DepthFormat.Depth24, which I'm already using.