Avoiding huge far clip

Started by
16 comments, last by dpadam450 7 years, 10 months ago

Hi Guys,

Apologies if this has been answered somewhere; I've had a look and can't find an answer.

My question is about rendering large terrain. Currently I'm using a 2048x2048 16-bit heightmap in a quadtree as my game area. This gives me a nice large terrain if I treat 1 pixel in the heightmap as 5 meters, with nice smooth rolling hills/mountains etc.

Now the problem is that unless I increase the far clip to something huge, my hills and mountains in the distance slowly come into view. I've read that I shouldn't really have my far clip much above 1000 or I'll lose too much precision in the depth buffer and start having z-fighting issues.

I can use distance fog or even volumetric fog to hide the horizon a bit but mountains will still come into view rather weirdly. I could put them on the skydome/box too I guess but that's a bit bleh.

Thanks


Have you tried increasing the far clip to see how it works? Z-fighting is not always a big issue, especially with modern video cards. In my own app I use a far clip of about 8000+ with no real problems. (Also, try increasing the near clip a bit; that can make a big difference.)

For a farther view than that, you can use "linear depth"... this means you calculate the depth in the shader in a different way than the standard one.

Also, you can use two z-buffers: render the far-distant stuff using one projection matrix, and the near stuff using another.

Use an infinite far plane.

This is a great paper on improving perspective precision

http://www.geometry.caltech.edu/pubs/UD12.pdf

Quake used a farclip of 4096 - it's not as bad a problem as you seem to think it is.


Quake used a farclip of 4096 - it's not as bad a problem as you seem to think it is.

It also used 4 for the near plane. That's a ratio of 1/1024.

The absolute value of the far plane isn't important; the near/far ratio is.

Can someone explain to me why, in the year 2016 (high dynamic range, FSAA, 100 instructions per shader, 4K ...), graphics cards don't just store the floats coming from the GPU with maybe just 2 bits rounded off? If you don't overflow your GPU you shouldn't overflow the rasterizer. Still, the GPU only uses floats, not doubles like the CPU. I don't get why the CPU has so much SIMD for low-precision floats. I want MAC for 64-bit floats! I just read the numbers. OK, even float is sufficient for this application. The stupid Cray also only had floats.

If I understood your question correctly, wouldn't that be a case of yes, going for a very distant far clip, but then implementing something like a logarithmic z-depth buffer to handle the float inaccuracy? E.g.: http://outerra.blogspot.com.br/2012/11/maximizing-depth-buffer-range-and.html

Set far to 0.1 (etc.) and near to 10000 (etc.), reverse your depth test (GEQUAL instead of LEQUAL), and create a 32-bit float depth buffer instead of a 24-bit int one.

The reversed projection biases precision hyperbolically towards the far plane, and the FP buffer biases precision logarithmically towards the near plane... cancelling each other out surprisingly well and giving near-linear precision at no extra cost. The 32-bit buffer also gives 256x more precision than a legacy 24-bit buffer.

This has been standard advice from GPU vendors since about 2006, and these days, AMD doesn't even support 24bit depth buffers in hardware anymore - if you ask for one, they have to emulate it within a 32bit depth buffer and throw away 8 bits of precision for no reason! So there's no reason not to do this in 2016.

One very good method off of the top of my head...

Shadow of the Colossus used a technique that rendered very distant geometry very well. I'm not sure if they were using displacement maps or actual meshes, but the concept remains the same regardless: you simply render cube maps of your distant terrain while having it set to lower LODs.

You don't need a very high-res cube map to do this, either, but it needs to be high enough that it looks good (no large square pixels) and small enough that the GPU can render it on the fly.

You can scale the world a bit around the camera and render it this way. The user might not notice the difference.

You mentioned fog, which was a favorite early on in gaming history, but there are other tricks you can use in addition to fog to give a nice feel to your game.

At some point, you can't have the player seeing infinitely in any direction, or you'll have to load all your entities (or at least LOD versions like billboards) located in that direction for miles and miles.

I can use distance fog or even volumetric fog to hide the horizon a bit but mountains will still come into view rather weirdly. I could put them on the skydome/box too I guess but that's a bit bleh.


On the fifth day of creation, to hide the z-fighting and to prevent distant objects from suddenly popping into view, God added curvature to the earth.

Atmospheric fog and curvature together would probably help set an absolute limit on your terrain. For any ridiculously high objects, like mega towers or mega mountains, you could add just those (hopefully few) extreme cases to your skybox.

