Confused: Very large environments

22 comments, last by cameni 14 years, 3 months ago
Quote:Original post by zedz
clear depth
#A
glDepthRange(0,0.5);
#B
glDepthRange(0.5,1.0);

voila, no speed loss; what you are doing with z/w will in fact result in a speed loss

>>there are problems at the boundaries
perhaps, but take your example 0.1m -> 300km.
Now on earth you typically can't see anything 300 km away due to curvature + haze; in that picture of yours the furthest mountain is ~10 km

thus in such a scenario
A/ 0.1->20km // stuff on ground
B/ 10m->1000km // stuff in air
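
What that partitioning does to final window-space depth can be sketched in plain C++ (modeling the glDepthRange remap rather than making actual GL calls, since those need a context; a 0-to-1 NDC depth convention is assumed for simplicity, and the ranges echo zedz's A/B split):

```cpp
#include <cassert>
#include <cmath>

// Window-space depth for one perspective partition, modeling
// glDepthRange(rangeNear, rangeFar) applied to a [zNear, zFar] frustum.
// Assumes a 0..1 NDC depth convention for simplicity.
double windowDepth(double d, double zNear, double zFar,
                   double rangeNear, double rangeFar) {
    // Standard hyperbolic z/w in [0,1] for view-space distance d.
    double zndc = zFar * (d - zNear) / (d * (zFar - zNear));
    // glDepthRange remaps NDC depth into [rangeNear, rangeFar].
    return rangeNear + (rangeFar - rangeNear) * zndc;
}
```

Because the near partition maps into [0, 0.5] and the far one into [0.5, 1.0], anything drawn in the far partition always lands at a depth greater than or equal to anything in the near one, so the two frusta never fight in the z-buffer even though their distance ranges overlap.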

Of course I have been using depth range partitioning. But I was trying to say that it is slower when I have to do all the management. On the terrain I can see mountains as far as 150 km (even more now, because the haze is unrealistically thin), so I have terrain tiles covering that whole range. I had to split the range three times for that, and I could not use just the quadtree level to determine which tiles go where, because the error metric would occasionally decide that a more distant tile with larger features requires a refine, resulting in z-buffer artifacts from the overlapping depth ranges. Then there's splitting the in-air objects, and so on.

All in all, using the logarithmic depth buffer turned out to be much easier and more elegant for me and others doing planetary rendering, even though it's not without problems.
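
For reference, one common formulation of a logarithmic depth buffer replaces the hyperbolic z/w with a logarithm of eye-space distance — a sketch only; the constant C, the function name, and the exact formula here are illustrative and not necessarily cameni's shader:

```cpp
#include <cassert>
#include <cmath>

// Logarithmic depth in [0,1]: depth = log(C*w + 1) / log(C*far + 1),
// where w is eye-space distance. C trades near-field vs far-field
// precision; C = 1 is a common starting point. Illustrative sketch only.
double logDepth(double w, double farPlane, double C = 1.0) {
    return std::log(C * w + 1.0) / std::log(C * farPlane + 1.0);
}
```

Unlike z/w, which crams almost the entire far half of the scene into a sliver near 1.0, the logarithmic mapping keeps a roughly constant relative precision across the whole range, which is why it suits a 0.1 m to hundreds-of-km planetary scene.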
Wow.. these threads on "super huge rendering ranges!" always make me think.
Sure, there is the distinct case of planet rendering, where you want to go from the surface of a planet, out into space, over to another planet.
But on the surface of a world?
How far away IS the horizon? Not 150 km, for sure. Now, given that I've been bored more than once while driving between states, I'll say for sure that lots of valleys and tall mountains will give you places where you can see mountains 10-20 miles before you get to them.

There is a big difference between having a world that is 150 km in size and needing to render ALL of it as visible.
Of course, that depends on what you are trying to do.
On the ground the visibility of mountains can be 20-30 miles at best, but from a plane at 14,000 feet you can see mountains 200 miles away due to the thinner air.
If you want an engine capable of all this, you have to handle it somehow.

But that doesn't matter. Even at 10 miles you will have problems with the depth buffer. I thought a floating point depth buffer would handle that, but it turns out it adds precision where there was already plenty, without helping much with the problematic distant part.
So it looks like the solution to the floating point depth buffer precision problem is easy. Swapping the values of the far and near planes and changing the depth function to "greater" inverts the z/w curve so that it falls toward zero with rising distance, where the floating point format has plenty of resolution.
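The near/far swap can be illustrated numerically — a sketch assuming a 0-to-1 depth convention (D3D-style, or GL with a suitable clip control); the function names are mine:

```cpp
#include <cassert>
#include <cmath>

// Standard 0..1 hyperbolic depth: 0 at the near plane, 1 at the far plane.
// Distant geometry crowds toward 1.0, where float spacing is coarsest.
float standardDepth(float d, float n, float f) {
    return f * (d - n) / (d * (f - n));
}

// Reversed-Z: near and far swapped in the projection, depth test set to
// "greater", depth cleared to 0. Depth now falls toward 0 with distance,
// exactly where float values are densest.
float reversedDepth(float d, float n, float f) {
    return n * (f - d) / (d * (f - n));
}
```

With n = 0.1 and f = 300000, a point 200 km away lands at ~0.9999998 in the standard mapping, where the representable float step is about 6e-8, but at ~1.7e-7 in the reversed mapping, where the step is orders of magnitude finer — which is why the swap plus "greater" fixes the distant-part precision.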

I've also found an earlier post by Humus where he says the same thing, and also gives more insight into the old W-buffers and various Z-buffer properties and optimizations.

This topic is closed to new replies.
