LOD terrain with minimal memory usage

Started by
0 comments, last by Krohm 11 years, 3 months ago

Hey, I've recently moved on from my typical 2D/3D closed-environment games to a more open 3D game, but like everyone else I'm stuck on terrain rendering. I've read a lot of papers on the subject, and most people seem to agree that the best way to go is a streaming LOD / geoclipmapping type of system to minimize the amount of memory needed. But then I've also seen a lot of threads about World of Warcraft and how they do their terrain rendering, and I checked it out myself with a trial account.

I took some screenshots of the terrain. From what I've heard they're using some type of LOD streaming system, but how can they know there's a mountain far, far away without keeping the entire heightmap in memory? And how can they make it so smooth? Is it because XNA is slower than DirectX?
Because when I checked this one out http://msmvps.com/blogs/valentin/archive/2008/09/30/smart-terrain-rendering-with-xna.aspx and ran it on my computer, you can see the terrain change as you walk around and it looks really ugly; maybe because XNA is too slow to keep up?
Just from the two screenshots it seems they're using something close to the billod system but with a much longer LOD range (I forgot the proper term, but I mean the distance between the LOD levels), because the graphics seem to work the same way, except it's much smoother in WoW. But I don't understand how they can keep this information in memory.
Maybe someone who knows more than me can point me in the right direction and answer the question of how you can manage memory with big heightmaps?
My own thought is to use a tiled LOD system where you have nine heightmaps, like this (1, 5 and 9 are not loaded yet):
1 | 2 3 4
5 | 6 7 8
9 | 10 11 12
and when you move from square 7 to 6, squares 4, 8 and 12 are unloaded/swapped for squares 1, 5 and 9. But I guess this will take up too much memory, since you have to store nine heightmaps in memory? This would also make it much easier to create rectangular maps, since you're not restricted to one quadtree, and I already have a nice idea for how to make an editor for it. Any thoughts on this? Or should I just forget it and find another way?
Maybe it's time to move from XNA C# to C++ DirectX. I've always wanted to learn C++ but never took the time, and I'm only developing for Windows anyway.
WoW only used around 930 MB of memory when I took these screenshots.
Best regards, Tobias. Sorry for the wall of text and my poor English.

First of all: profile to see where your problem is.

Without hard data, what I have written below is pure speculation.

Is it because XNA is slower than DirectX?

This question makes no sense: DirectX is the API below XNA.

Anyway, there are several problems when streaming stuff: granularity, latency and pressure.

  • Granularity is basically as follows: you know you can spend X ms computing a tile. By testing we figure out that our "terrain tile" might be, say, 200x200 heightmap pixels / model vertices. This is of course a function of the algorithm used.
  • Latency management involves pulling stuff from mass storage. Even with an SSD, it will still be massively slow compared to RAM, let alone VRAM. In general, we need deferred texel loading; waiting for the disk to seek is rarely acceptable. If loading is assumed synchronous, it will erode processing time from the LODding algorithm.
  • Pressure: how much of the above you need to do per frame and per second.

In my experience the only way to deal with pressure on low-end hardware is to have precomputed LOD representations. Let me elaborate.

Maybe someone who knows more than me can point me in the right direction and answer the question of how you can manage memory with big heightmaps?

Your example lacks a key feature: the tiles are not homogeneous, by construction.

That is, let's assume tiles 4, 8 and 12 require LOD 0, and say even 3, 7 and 11 ended up at LOD 0.
By contrast, 2, 6 and 10 are at the edge of the viewport. They cannot be at LOD 0, because that would imply you're brute-forcing everything. Now, there are various ways to manage this, which depend on the algorithm we use.
In the case of octree-simplified terrains, the cell size stays constant and the polygon count is reduced (a thing I actually don't like at all for modern HW, but let's carry on).

Many people just load LOD 0 anyway for 2, 6 and 10 and then decimate polygons. This is not going to work, as the work involved in generating a LOD n (n>0) tile is greater than generating a LOD 0 tile, and it gets worse as n grows. I guess that's what you're doing?

It's worth noticing that the article you refer to is concerned with optimizing the representation (visualization) of a terrain node. Loading LOD 0 each time just to run decimation is unacceptable; there must be a way, before the visualization algorithm runs, to figure out what to work on.

So what do we do?

Personally I'd suggest precomputing everything (oops, that's impossible for a viewpoint-dependent method) or switching to a regular grid method, which allows easier decimation (less compute for the same granularity).

Alternatively: use smaller tiles (less computation, but more pressure).

Anything requiring non-trivial per-vertex work will have issues sooner or later, that's it. I'm very surprised the algorithm you refer to involves a per-vertex visibility test.

Previously "Krohm"

This topic is closed to new replies.
