DwarvesH

How to have really huge view distance in your game?


Like the title says: how do you get a really huge view distance in your open-world game? And I'm not talking about just the pure technical aspect. Without LOD you can barely render 1-2 kilometers before your polygon count goes over one million, growing quadratically with every bit of added distance, but I have implemented CPU geo-mipmapping. CPU because the vertex buffers are created on the CPU; I'm not using the implementation that sends textures to the GPU and samples them there. It has 4 levels and the quad tree has only one depth level, so it is basically a grid. It can render considerably larger areas, and if I were to make it use 7-8 levels and add at least 1 more depth level to the quad tree, I could probably render 10-15 km of terrain with under 64 MiB of GPU RAM.

 

But this is just the raw technical pushing power of the engine. It is not enough for a large-scale terrain, or at least not for a good-looking one.

 

First there is the issue of the far plane. How far can I push it before I start getting into issues with the depth buffer? In the game, 99% of the action happens within a range of about 500 meters, and I need good depth buffer precision there for effects and such. The rest of the kilometers-long terrain is pretty much just backdrop.

 

So how far could I push the far plane without shadow mapping, considering that 1 meter is 1 unit? And how far can I push it with shadow mapping that is not limited to some set distance? Are there known good pairs of near and far plane values for such situations? And when I go inside a closed interior, like a cave with a pseudo-level-transition, should I re-scale the far plane to something reasonable, like 1000-2000?

 

Then there is the issue of the far plane intersecting uneven terrain. Think of a largely flat field with a high hill in the middle distance and the far plane cutting through the middle of the hill. This looks fine as long as you stand still, but as you look around, the cut-out shape of the hill shimmers and changes drastically with the smallest movement of the mouse. Fog helps just a bit. If the fog is too weak, you still notice the effect. If the fog is too strong, then, because the skybox is uneven, there are spots where the terrain greatly contrasts with the skybox, which both looks ugly while standing still and shimmers further when you move. Adding an alpha blend property to the fog looks unrealistic and strange, but it does greatly reduce this effect. On the downside, with parts of the terrain now transparent, the perceived view distance drops considerably while you are still rendering just as much.

 

If I make it so that the terrain chunks near the far plane are fully contained in the frustum and the far plane never intersects them, then there is chunk pop-in, even with fog. With very strong fog you may not always notice it.

 

Then there is the problem of height. All the solutions I found that improve the above problems a bit stop working with height. If you are on flat terrain looking at terrain as high as or higher than you, everything looks great. If you are on top of a 1 km high hill looking slightly down, the view distance is suddenly not enough and the shimmering is worse; if you go for the chunk pop-in solution, the pop-in becomes extremely apparent; and if you add alpha blending, you can sometimes see the floor of your skybox.

 

So if anybody is experienced with these issues or has some suggestions, please do tell!

 

In the meantime I will try some things that I think make a lot of sense, but I'm not sure they will actually help.

 

My fog is spherical. As an analogy, it could be considered a point light centered on your character, with the light intensity inversely proportional to the fog intensity and an exponential falloff, but with the falloff starting from the radius of the light.

 

This may be realistic, but I think a simple depth fog aligned with the frustum might give better results. 
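For reference, here is a minimal HLSL sketch of the two fog factors I am comparing; the constant names are just illustrative, not my actual shader:

// Hypothetical shader constants, for illustration only.
float3 cameraPosition;
float fogStart;      // distance at which fog begins
float fogEnd;        // distance at which the planar fog saturates
float fogDensity;    // falloff rate of the spherical fog

// Spherical fog: distance is measured radially from the camera, so the
// fully fogged boundary is a sphere, with exponential falloff starting
// at fogStart (the "radius of the light" in the analogy above).
float FogFactorSpherical(float3 worldPos)
{
    float dist = length(worldPos - cameraPosition);
    return 1.0 - exp(-fogDensity * max(0.0, dist - fogStart));
}

// Depth-aligned fog: distance is view-space depth, so iso-fog surfaces
// are planes parallel to the far plane and can be made to reach full
// strength exactly where geometry gets clipped.
float FogFactorPlanar(float viewDepth)
{
    return saturate((viewDepth - fogStart) / (fogEnd - fogStart));
}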

 

Then I'll make sure that the set of chunks I want to render never intersects the far plane. I'll place the fog start such that the most distant chunks get semi-aggressive fog from 50% to 100% of their size. Then I'll add about two more rows of chunks that pick up on the fog value of the real chunks and go full-on heavy fog, maybe with some alpha blending too; a sketch of what I mean follows.
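Something like this sketch, where fogColor and the fade constants are placeholders rather than my actual shader:

// Sketch of the fade-out band: past alphaFadeStart the chunk also fades
// to transparent, so it dissolves before the far plane can cut it.
float4 ApplyDistantFog(float4 litColor, float4 fogColor, float viewDepth,
                       float fogStart, float fogEnd, float alphaFadeStart)
{
    float fog = saturate((viewDepth - fogStart) / (fogEnd - fogStart));
    float4 color = lerp(litColor, fogColor, fog);

    // Alpha goes from 1 at alphaFadeStart down to 0 at the far plane.
    color.a = 1.0 - saturate((viewDepth - alphaFadeStart) /
                             (fogEnd - alphaFadeStart));
    return color;
}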

 

I'll combine these steps with a 5 km view distance and see how it goes.

 

I'll also research depth of field and see if I can apply a DoF effect that behaves like fog, but blurs instead of blending.
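In its simplest form I imagine something like this post-process sketch, assuming a pre-blurred copy of the frame and a linear depth texture are available; all sampler and constant names here are made up:

sampler2D sceneSampler;    // sharp frame
sampler2D blurredSampler;  // pre-blurred copy of the frame
sampler2D depthSampler;    // linear view-space depth
float blurStart;
float blurEnd;

// Blend toward the blurred copy with distance, so the distance cue is
// blur rather than a fog tint.
float4 DepthBlurPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 sharp   = tex2D(sceneSampler, uv);
    float4 blurred = tex2D(blurredSampler, uv);
    float  depth   = tex2D(depthSampler, uv).r;
    float  amount  = saturate((depth - blurStart) / (blurEnd - blurStart));
    return lerp(sharp, blurred, amount);
}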

 

Things are made harder by the possible height variation, but ideally I would eventually like to get a view distance like this:

 

http://images3.wikia.nocookie.net/__cb20120302231349/justcause/images/0/0f/Panau_City_(2).jpg

http://static.giantbomb.com/uploads/original/13/130684/2437250-8170158684-jc1.j.jpg

 

 


Just Adding.

 

About the example you gave, with Just Cause 2.

 

I played that game and enjoyed it. They basically have a huge zfar value, but one visible drawback is that z-fighting appears a lot, especially when looking down at a city from far up.

 

Actually, the terrain of Just Cause 2 morphs into its correct form as you get closer, so they have some kind of tessellation / displacement mapping.

 

-MIGI0027


There's a great breakdown of how Just Cause 2 created such huge view distances here - http://www.humus.name/index.php?page=Articles.  Just scroll down to Creating Vast Game Worlds.

 

If I remember right, JC2 has a view distance of around 50km.  That article covers some of the details of how they achieved it.

 

Regarding depth precision, it can depend on the API you're using. I know with D3D11 you can use a 32 bit float depth buffer with reverse-Z, and that will give you a depth precision of about 0.01 out to just under 100,000 units (i.e. 1cm precision at a depth of 99.9km). It's basically limited by the precision of a 32 bit float, which I believe is about 7.2 decimal digits. If you're using OpenGL I've read there are other options such as logarithmic depth, which like reverse-Z float in D3D11 gives a roughly linear depth precision similar to the above, although it may disable early-Z culling optimisations. There's a really good writeup someone did about it a while back actually:

 

http://outerra.blogspot.co.uk/2012/11/maximizing-depth-buffer-range-and.html


It matters not which API you're using. You can interpret and store the position or its part in any way you wish in either GL or DX shaders.


I was referring to the hardware depth buffer used in rasterization, rather than outputting depth/position to a separate render target in a pixel shader. :)


If you want a draw distance that reaches so far, your depth buffer precision is affected. One option is to split your frustum into n segments and draw each segment in a separate pass with a cleared z-buffer, setting your znear/zfar to that segment's near/far planes.

 

n!


Thank you very much for the info!

 

 


 

I looked over the links. The Just Cause presentation is quite readable and I'll continue to study it; some great ideas in there. Yet with all their tricks, optimization and experience they still have a lot of Z-fighting? The second link, about OpenGL, is more difficult to understand.

 

At first I'll focus on getting things to work as expected and look good without trying to fix the Z buffer. Even at a range of 2 km I started having horrible Z precision near the far plane, so I increased the near plane for now.

 

It seems that I have both under- and overestimated the complexity of the problem and the factors involved. My old view range was just 500 meters by default, with more than 100 meters of that displaying fog. The spherical fog left you with even less perceived distance. Now I'm trying out a 2 km view range, still with the spherical fog. I'm still having most of the problems I described earlier, but the increased view range makes them all less apparent. I'm still forced to use alpha fog to reduce most of the artifacts.

 

The problem of making it look good, natural and distant got even more complicated because my maps will be 4x4 or 8x8 kilometers. On the 4x4 map, with a 2 km view distance you can see almost 1/4 of the way. On top of that, there is the added problem that the 8x8 maps, while pretty big, did not feel that big under certain circumstances. Now, with the bigger view distance, they feel even smaller. Making the map 16x16 would eat up 2 GiB of disk space. My streamer can handle it, but still, pretty big.

 

It works pretty well at low height when surrounded by higher altitudes. At the crosshair you can see a distant peak around 1.9 km from the camera:

 

[attachment=17954:07_03.png]

 

The look down from the top of the peak is less impressive:

 

[attachment=17953:07_02.png]

 

So is the walk to the bottom. The illusion of scale is pretty much broken:

 

[attachment=17952:07_01.png]

 

I'll try to add a further level to the geo-mipmapping and one to the quad tree, and to add another kilometer to the view distance to see how it looks. Maybe I can even render the entire map in the distance, but at 1/16 resolution.

 

 

 

Regarding interpreting and storing depth however you wish in either GL or DX shaders: I am using XNA, and the maximum it supports is DepthFormat.Depth24, which I'm already using.


I'm glad the links are proving useful. I've never really tried large-scale terrain rendering, so take this advice with a fistful of salt, but I expect your z flickering could be improved a lot by using a much more aggressive LoD on distant terrain patches.

 

Forgive this terrible explanation, but here goes. Due to the lack of precision in the depth buffer, multiple triangles that resolve to the same pixel on the screen are in contention for that pixel (the depth buffer can't tell which one is in front), and so each frame a possibly different one of those triangles is drawn at that screen pixel. Hence the flickering/Z-fighting. If you aggressively LoD the number of vertices used for distant terrain, that limits the number of triangles in contention for each pixel. Ideally you'd make sure the minimum distance between vertices at your far depth plane was >= the resolution of your depth buffer at the far plane. With a standard depth buffer, since depth precision would be hundreds or maybe(?) thousands of meters at a far plane of 10km, the ideal might not be attainable, but you could at least minimise z-fighting a great deal by having extremely distant terrain defined by vertices, say, 200m or 500m apart.

 

That is a poor explanation so perhaps someone more experienced with terrain rendering could expand upon it.  

 

Looking at the JC2 article, they don't mention their terrain LoD in detail, but they do mention using reverse-Z with a fixed point D24S8 depth buffer for some improvement of z-fighting (page 14 in the PDF version). You might want to try that first to see how much it improves things, before trying something more time consuming like re-implementing terrain LoD. I believe it only requires 2 steps: change your depth comparison from LESS to GREATER in the depth-stencil state, and swap your far and near values around when creating your projection matrix (e.g. pass in near as 10000.0f and far as 0.5f).

 

If that isn't reducing flicker enough, you could also try the OpenGL-style logarithmic depth buffer. XNA uses D3D9, so you should have access to the DEPTH output semantic in your pixel shader. In that case you need to edit the vertex and pixel shaders used to draw your terrain and make the following changes (taken from the "Getting rid of the fragment shader computation" section of http://outerra.blogspot.co.uk/2012/11/maximizing-depth-buffer-range-and.html ). In the vertex shader:

 

1. Add a new member to your vertex output struct, a single float, e.g. "float logz".

2. In the main vertex shader, after you've transformed the vertex position to projection space (mul by worldviewproj or however you're doing it), add the following (this assumes the vertex output struct is called output, its position member is called position, and the far depth plane is at 10000.0f in the projection matrix):

float far = 10000.0f;  // must match the far plane in the projection matrix
float C = 0.01f;       // constant controlling precision near the camera
float FC = 1.0 / log(far * C + 1);
 
// Logarithmic depth, carried to the pixel shader via the new member.
output.logz = log(output.position.w * C + 1) * FC;
// Also write an approximate z so hardware clipping still behaves.
output.position.z = (2 * output.logz - 1) * output.position.w;

You could/should pass the far value in as a shader constant if you don't want to edit the shader every time you change your far plane value, but for simplicity I left it hardcoded in this example.

 

In your pixel shader:

1. If your pixel shader input struct doesn't use the same definition as the vertex output struct, you need to add the "float logz" member to the pixel shader input.

2. In the pixel shader output struct, you need to add a depth output member using the DEPTH semantic, e.g. "float depth : DEPTH" below the COLOR0 output member.

3. Finally in the main body of the pixel shader, assign the logz value from the input to the depth value of the output, e.g. "output.depth = input.logz;".
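Put together, the pixel shader side would look something like this sketch (struct, sampler, and semantic names are just examples, including the TEXCOORD1 assumed for interpolating logz):

sampler2D diffuseSampler;

struct PixelInput
{
    float2 uv   : TEXCOORD0;
    float  logz : TEXCOORD1;   // step 1: carried over from the vertex shader
};

struct PixelOutput
{
    float4 color : COLOR0;
    float  depth : DEPTH;      // step 2: the depth output semantic
};

PixelOutput TerrainPS(PixelInput input)
{
    PixelOutput output;
    output.color = tex2D(diffuseSampler, input.uv);
    output.depth = input.logz; // step 3: write the logarithmic depth
    return output;
}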

 

That should be all you need to use logarithmic-depth in XNA.  I believe it doesn't use reverse-z, so make sure your depth comparison is set to LESS and the projection matrix near and far values are in their normal places.

 

Edit:  The quick reply box ate some of my post when I submitted it.  Anyway, I don't actually use XNA or D3D9 but the above code should work.  Also worth noting that outputting depth from the pixel shader will disable early-Z I believe, so you may lose some performance depending on how much geometry it was culling beforehand.  Log-depth should greatly reduce the amount of flicker you're seeing, but if performance is a concern, you may want to still implement the more time consuming aggressive vertex LoD that I mentioned at the start of the post.  The original source of the flickering was all the triangles trying to exist at the same pixel, and they all require processing to some extent, so you should get some performance boost from never submitting them to the rendering pipeline (by reducing the vertices/indices submitted for distant terrain).

 

Good luck. :)

Edited by backstep


LOD, far to near, setting clip planes and clearing the z-buffer as you go.

 

I use a 3-pass method in Caveman: one for clouds, one for everything else, and one for the player's weapon.

 

In Airships!, the real-world viewing distance for the largest target type is 187 miles! The far clip plane is something like 1 million.

 

In SIMTrek/SIMSpace and GAMMA WING, viewing distances are inter-planetary.

 

It's all about selecting an appropriate scale and world coordinate system.

 

Using two 64-bit unsigned integers per coordinate, and converting to camera-relative coordinates just before frustum cull and draw, you can do intergalactic distances at a resolution of 0.1 meters.

 

Ought to be good enough for any game, unless your game world is bigger than two entire galaxies! <g>


Thank you everybody for your help and hints!

 

Unfortunately, I cannot go ahead and try some of the depth buffer precision improvement techniques, because I have reached the limit of my geo-mipmapping before the limit of the actual depth buffer. I can't go on and increase the draw distance further with my current implementation because of the draw count. While the polygon count is only around 600-700k at a 4 km view distance, my draw calls are over 1300.

 

I need to implement an even better terrain LOD scheme that keeps the polygon count where it is but also reduces the number of draw calls.

 

Here is a screenshot with a 4 km view distance and fog starting at a distance of 3 km:

 

[attachment=18038:08_03.png]
