Best way to downscale a heightmap during runtime

Hi,

I am working on a game with large outdoor terrains, and I have a heightmap-based terrain system set up. The heightmaps are generally quite detailed, e.g. 2049x2049 (for 2048x2048 quads), and I do dynamic LOD on the GPU. However, for the physics this level of detail is not always a good idea. My players (vehicles) get stuck on small bumps, and the terrain physics are quite heavy. This is why I would like to scale the original heightmap down to something like half or a quarter of the resolution, or whatever works best for the level in question, and use this lower-detail heightmap for the physics mesh.

What algorithm should I use for something like this? It should be fast enough that I can run it when the level loads, and generally "work well" for terrain physics meshes. Do I just calculate the average over the nearby vertices, or is there some better way to do it?

Thanks!

Oh, and I should perhaps mention that my height data is saved as an 8-bit raw file, i.e. width * height unsigned char values. These are converted to float before being passed to the physics engine, so the downscaling can be done either on the integers or on the floats.

If the problem is units getting stuck, then taking the average will probably make it somewhat better, though I'd advise fixing the physics code so units don't get stuck in the first place. If units tend to get stuck under certain conditions, sooner or later such a condition will occur, no matter what kind of smoothing you use to work around the issue for the moment.
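To illustrate the averaging idea, here is a minimal sketch of halving a 2049x2049-style 8-bit heightmap at load time. The function name, the 3x3 window and the edge clamping are my own assumptions for the example, nothing more:

// Hypothetical sketch: halve a (2N+1)x(2N+1) 8-bit heightmap by averaging
// a 3x3 neighbourhood around every second source vertex (C++17 for std::clamp).
#include <vector>
#include <cstdint>
#include <algorithm>

std::vector<uint8_t> halveHeightmap(const std::vector<uint8_t>& src, int srcSize)
{
    int dstSize = (srcSize - 1) / 2 + 1;                    // e.g. 2049 -> 1025
    std::vector<uint8_t> dst(dstSize * dstSize);

    for (int y = 0; y < dstSize; ++y)
    {
        for (int x = 0; x < dstSize; ++x)
        {
            int cx = x * 2, cy = y * 2;                     // matching source vertex
            int sum = 0;

            for (int dy = -1; dy <= 1; ++dy)
            {
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int sx = std::clamp(cx + dx, 0, srcSize - 1);
                    int sy = std::clamp(cy + dy, 0, srcSize - 1);
                    sum += src[sy * srcSize + sx];
                }
            }
            dst[y * dstSize + x] = static_cast<uint8_t>(sum / 9); // plain mean of the window
        }
    }
    return dst;
}

A single pass like this over a 2049x2049 map is cheap enough to run whenever a level loads.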

If you do physics on the GPU, the size of the map should also be no problem, since you already have the data on the GPU for rendering anyway.

Something that I'm just experimenting with could be of interest to you: storing the height data encoded using a 2D Haar wavelet. The Haar transform is dead simple and ultra fast, and the resulting data structure is such that you get a coarse (but not altogether bad) representation more or less immediately (with very little data), which is refined progressively as you load more data.

The idea behind this is that you can already get a reasonably good preview while a terrain tile is still loading (imagine a client loading terrain over the network) and the quality gets better the more data you receive. Now about LOD, that's something you get more or less for free. Just use only the first or first few levels and forget that you actually have more accurate data, and there's your lower LOD.
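For reference, one level of a 2D Haar transform really is only a few lines. The sketch below assumes a square power-of-two float heightmap stored row-major (so for a 2049x2049 map you would transform the 2048x2048 part and handle the last row/column separately), with the averages landing in the top-left quadrant and the detail coefficients in the other three; all names are made up for the example:

// Hypothetical sketch of one 2D Haar decomposition level, not taken from any library.
// 'size' is the full image side length, 'levelSize' the side length of the sub-square
// being transformed (size on the first call, size/2 on the next, and so on).
#include <vector>

void haarLevel(std::vector<float>& img, int size, int levelSize)
{
    int half = levelSize / 2;
    std::vector<float> tmp(levelSize * levelSize);

    for (int y = 0; y < half; ++y)
    {
        for (int x = 0; x < half; ++x)
        {
            float p00 = img[(2 * y)     * size + 2 * x];
            float p01 = img[(2 * y)     * size + 2 * x + 1];
            float p10 = img[(2 * y + 1) * size + 2 * x];
            float p11 = img[(2 * y + 1) * size + 2 * x + 1];

            tmp[y * levelSize + x]                 = (p00 + p01 + p10 + p11) * 0.25f; // average
            tmp[y * levelSize + x + half]          = (p00 - p01 + p10 - p11) * 0.25f; // horizontal detail
            tmp[(y + half) * levelSize + x]        = (p00 + p01 - p10 - p11) * 0.25f; // vertical detail
            tmp[(y + half) * levelSize + x + half] = (p00 - p01 - p10 + p11) * 0.25f; // diagonal detail
        }
    }

    // Write the transformed sub-square back into the top-left corner of the image.
    for (int y = 0; y < levelSize; ++y)
        for (int x = 0; x < levelSize; ++x)
            img[y * size + x] = tmp[y * levelSize + x];
}

Calling haarLevel repeatedly on the shrinking top-left square (size, then size/2, then size/4, ...) gives the full multi-level transform, and every step is just additions and subtractions.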

Thanks, I'll look into the Haar stuff, sounds interesting.

I am not doing physics on the GPU, only visuals. The LOD is also only for the visuals. I need to do accurate physics for the whole level, since I am doing multiplayer. At least on the server.

I agree that the physics should be "fixed". I am using Bullet though, so the fixing will most likely be creating a smoother terrain and more movement-friendly physics meshes for the vehicles.

If you're using Bullet then you have access to the source, so it should be fairly easy to update btHeightfieldTerrainShape::getRawHeightFieldValue(int x, int y) to sample at a lower resolution, or with some averaging of points. But obviously you may then get odd issues with the graphics and physics being a little bit out of sync (objects sinking into the ground slightly, or whatever). Still, it should save you having to create an extra heightmap.
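Not Bullet's actual code, just a hypothetical sketch of the sort of thing that function could return instead, assuming the raw 8-bit samples and the grid width are at hand (Bullet's usual height scaling would still be applied on top of this):

// Hypothetical helper, not from Bullet: snap the requested vertex to every
// second grid point so the physics effectively sees a half-resolution grid,
// without allocating a second heightmap.
float sampleAtHalfResolution(const unsigned char* heightData, int width, int x, int y)
{
    int sx = (x / 2) * 2;   // snap to the nearest lower even column
    int sy = (y / 2) * 2;   // snap to the nearest lower even row
    return static_cast<float>(heightData[sy * width + sx]);
}

Averaging a small neighbourhood around the snapped vertex (as in the earlier downscaling sketch) would smooth the result further.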

A wavelet transform might work for you, but you should know that it doesn't differ much from the mean. Linear filters are notorious for eliminating a lot of important data (mostly transitions), while all you want is to eliminate small bumps with as little effect on the rest of the data as possible.
You should really consider non-linear filters, like the bilateral filter or the more general NLM (non-local means) filter. An efficient implementation of these two is a bit tricky but doable, as long as you don't go wild with the filter radius. Also, don't even try to implement them in the naive way, or they'll be terribly slow.
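As an illustration of the bilateral filter idea (a naive, unoptimized sketch with made-up names): each output height is a weighted average in which neighbours are down-weighted both by spatial distance and by how much their height differs from the centre sample, which is what flattens small bumps while preserving the big transitions.

// Hypothetical naive bilateral filter over a square float heightmap (C++17 for std::clamp).
// sigmaSpatial controls how far neighbours contribute, sigmaRange how strongly
// large height differences (cliffs, ridges) are protected from smoothing.
#include <vector>
#include <cmath>
#include <algorithm>

std::vector<float> bilateralFilter(const std::vector<float>& src, int size,
                                   int radius, float sigmaSpatial, float sigmaRange)
{
    std::vector<float> dst(src.size());

    for (int y = 0; y < size; ++y)
    {
        for (int x = 0; x < size; ++x)
        {
            float center = src[y * size + x];
            float sum = 0.0f, weightSum = 0.0f;

            for (int dy = -radius; dy <= radius; ++dy)
            {
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int sx = std::clamp(x + dx, 0, size - 1);
                    int sy = std::clamp(y + dy, 0, size - 1);
                    float sample = src[sy * size + sx];

                    float spatial = std::exp(-(dx * dx + dy * dy) / (2.0f * sigmaSpatial * sigmaSpatial));
                    float range   = std::exp(-(sample - center) * (sample - center) / (2.0f * sigmaRange * sigmaRange));
                    float w = spatial * range;

                    sum += sample * w;
                    weightSum += w;
                }
            }
            dst[y * size + x] = sum / weightSum;
        }
    }
    return dst;
}

With a small radius this is perfectly fine as a one-off pass at load time; the warning about naive implementations only really bites once the radius grows.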
A wavelet transform might work for you, but you should know that it doesn't differ much from the mean.
Indeed, it doesn't (since it's the mean that's being used in the predict step). Though of course there are some quite different (much more complicated) wavelets, too. The reason I pointed them out is that I find the representation of the data extremely convenient for an application that handles heightfields.

You can reconstruct any level of detail you want simply by leaving out some amount of data at the end. No special code, just stop early. And it gives a good solution for streaming terrain, with progressive refinement. All with the same algorithm, no special code paths, and at a very affordable computational cost.
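Continuing the earlier Haar sketch (same made-up names): extracting a coarser heightmap after k transform levels is literally just copying the block of averages out of the top-left corner, no inverse transform required.

// Hypothetical helper matching the haarLevel sketch above: after 'levels'
// decomposition steps, the top-left (size >> levels) square holds the averages,
// which can be used directly as a lower-resolution heightmap.
#include <vector>

std::vector<float> extractCoarseLod(const std::vector<float>& transformed, int size, int levels)
{
    int lodSize = size >> levels;                 // e.g. 2048 >> 2 = 512
    std::vector<float> lod(lodSize * lodSize);

    for (int y = 0; y < lodSize; ++y)
        for (int x = 0; x < lodSize; ++x)
            lod[y * lodSize + x] = transformed[y * size + x];

    return lod;
}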

I'm not even going to mention that it lends itself to (both lossless and lossy) compression. It's just a darn cool representation :-)
I'm not even going to mention that it lends itself to (both lossless and lossy) compression. It's just a darn cool representation :-)

Completely agree. Even if the OP isn't going to use wavelets in this case, knowing even the simplest transform (Haar) gives a whole new perspective on the frequency domain.
