Large heightmap compression

Posted by HexiDave


A little background on the project: essentially I'm building a planet using noise and terrain modifications (noise with different functions builds the planet, and modifiers adjust what the noise functions produce across the landscape). That part isn't a problem, but I'd like to store a good bit of this data for the different levels of detail as I approach the planet. For simplicity I'm using a cube re-normalized to a sphere, with geo-mipmapping as the LOD scheme. Again, not a big issue and it works for the most part, but if I were to compress a large part of the terrain, I believe (I've tested JPEG2000 and a few other methods) I'd have to decompress most of the image all at once, or compress each level of data separately.

Now, I haven't had much time to research, and compression isn't really a field I'd like to get deep into, but what I've gathered so far is roughly this:

* Vector quantization would be ideal since I have all the time in the world to let it compress, but I want fast decompression.
* I've read a bit about pyramid compression schemes, which I think are closer to what I'm looking for in terms of getting different levels of detail without decompressing the entire terrain.
* JPEG2000 does what I want for file sizes, but I haven't found code I could decipher well enough to test speeds, and I believe it won't suit me anyway, as I'd need to decompress sections of the image to varying degrees of detail without blowing the whole thing up.

If anyone can help me out with something that matches the following requirements, be it good reading material or source code (oh, please, let it be for Windows - 20 downloads in a row, *nix only - makes me cry...), I'd be most grateful:

- Only needs to handle greyscale, 8-bit images - either 512x512 or 513x513 per level of detail.
- The levels of detail would work in a quad-tree fashion, so a more detailed page from one 512x512 image would have four 512x512 images, and so on down to the most detailed level.
- Needs relatively fast decompression, but compression can be slow.
- Can be lossy (I'd prefer better than JPEG) - I'll be filling in gaps with noise data anyway.
- Would prefer 40:1 or better compression.

Not sure if I missed anything - my head hurts from reading all these papers tonight. I'm by no means a mathematician, but I've done a good bit of work with noise algorithms, so I'm not a total loss. Thanks for any help you can provide!
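For concreteness, here is a minimal sketch of the quad-tree paging described above, where each 512x512 page at one level of detail splits into four 512x512 pages at the next level; the names (TileKey, childOf, parentOf) are hypothetical, not from any particular library.

#include <cstdint>

// One 512x512 heightmap page in the quad-tree. Level 0 is the coarsest page;
// each page at level L covers the same area as four pages at level L+1.
struct TileKey {
    uint32_t level;
    uint32_t x, y;   // page coordinates within the level, 0 .. (2^level)-1
};

// cx, cy in {0, 1} select one of the four children of a page.
inline TileKey childOf(const TileKey& t, uint32_t cx, uint32_t cy) {
    return TileKey{ t.level + 1, t.x * 2 + cx, t.y * 2 + cy };
}

inline TileKey parentOf(const TileKey& t) {
    return TileKey{ t.level - 1, t.x / 2, t.y / 2 };
}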

If you're generating the world algorithmically, then why do you need to store the output at all?
Would it be possible to just store the parameters used by the algorithm?

Oh, yes, I can run it on the fly; I just want to cut out as much as I can, since it's a bit slow. Plus, I'd like a few features on the planet not produced by the noise (large canyons with specific designs, specialized mountain ranges, etc.) that would be way too much effort to generate and rebuild with the modifiers I was talking about. Seeing as hard drive space is so easy to use these days, I might just go with something like PNG or high-quality JPEGs, but I've only got a few ideas floating in my head on how to break down the next levels. I'm just wondering if there's already a well-established method I overlooked that has good source code available (or research papers that aren't written in an incomprehensible format :D )

From my blog:

Height-compression idea: store the terrain as patches, e.g. 4x4 or 8x8 blocks (like JPEG/DXT). Each block contains a base height plus an offset-pattern value, so 3-4 bytes per block. That gets you down to roughly a 16:1 compression ratio, and I believe you will see no visual difference. I must try this out.

----

I'd probably go for ~8x8 blocks and use a lookup table of 4096 or 16384 (2^12 or 2^14) offset patterns, storing the block's initial height in the remaining 12 or 10 bits.

JPEG2000 is not ideal:
A/ it's designed to store colours as well
B/ offset values offer much better accuracy than absolute values
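A minimal sketch of the decode side of that block scheme, assuming 4 bytes per block (a 16-bit base height plus a 16-bit index into a precomputed table of 8x8 offset patterns); the record layout and names are illustrative, not from the post.

#include <cstdint>
#include <vector>

constexpr int kBlockSize = 8;
constexpr int kBlockSamples = kBlockSize * kBlockSize;

struct BlockRecord {
    uint16_t baseHeight;    // absolute height of the block
    uint16_t patternIndex;  // index into the offset-pattern codebook
};

// codebook holds patternCount * 64 signed offsets, built offline.
// Decoding a block is one table lookup plus 64 additions.
void decodeBlock(const BlockRecord& rec,
                 const std::vector<int8_t>& codebook,
                 uint16_t* outHeights /* 64 samples */) {
    const int8_t* pattern = &codebook[rec.patternIndex * kBlockSamples];
    for (int i = 0; i < kBlockSamples; ++i)
        outHeights[i] = static_cast<uint16_t>(rec.baseHeight + pattern[i]);
}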

Hmm, you mention vector quantization, but I don't know if that's what you'd want to use for compression. Wavelet compression sounds like it would fit the bill much better.

Anyway, here's a gamasutra article on vector quantization. The gist of VQ is to store the data using fewer "sets" of data, via what they call a "code book": you'd basically represent each block (say 8x8) with a predefined 8x8 "code" block, thereby saving space.

Wavelet compression sounds like more of what you're looking for.

MrSID is a great compression format for what you're doing, but it's commercial. You might search for something similar to MrSID that's free for your use.

If you do end up writing your own format, I'd just go with something quadtree-ish. If you don't need to save space, you can totally index the sucker and load only the data points you want. You can also store successive tree depths as scaled relative (offset) values, thereby improving accuracy with each successive iteration.

Depending on the terrain complexity, you could store the top level of the quadtree with as large a block size as you like - 256x256 blocks, 2048x2048 blocks, or more - it's really cheap to store the higher-level blocks since there wouldn't be many of them.

-M
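For the encode side, here is a brute-force sketch of picking the nearest codebook entry for an 8x8 block; the codebook itself would be trained offline (e.g. with an LBG/k-means style pass, not shown). Purely illustrative, and slow is fine since compression time doesn't matter here.

#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

constexpr int kBlockSamples = 64; // 8x8 block

// Return the index of the codebook entry closest (in squared error) to block.
// codebook holds entryCount * 64 samples.
std::size_t nearestCodeword(const uint8_t* block,
                            const std::vector<uint8_t>& codebook) {
    const std::size_t entryCount = codebook.size() / kBlockSamples;
    std::size_t best = 0;
    long long bestErr = std::numeric_limits<long long>::max();
    for (std::size_t e = 0; e < entryCount; ++e) {
        const uint8_t* entry = &codebook[e * kBlockSamples];
        long long err = 0;
        for (int i = 0; i < kBlockSamples; ++i) {
            const long long d = static_cast<long long>(block[i]) - entry[i];
            err += d * d;
        }
        if (err < bestErr) { bestErr = err; best = e; }
    }
    return best;
}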

Quote: Original post by Thr33d

Since you want to compress the heightmap, why not split it into little chunks, calculate the offsets from an average value to minimize the data type size of each entry, and compress the result with zlib? At runtime you stream that data in, decompress it with zlib again, add the average back to your offsets, and you're done.
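A rough sketch of that chunk scheme using zlib's one-shot API (compress offline, call uncompress() when a chunk streams in and add the average back); the chunk layout and the 16-bit height type are assumptions.

#include <cstddef>
#include <cstdint>
#include <vector>
#include <zlib.h>

// Compress one chunk of heights: store the chunk's average once, then
// zlib-compress the signed offsets from that average. Assumes a non-empty chunk.
std::vector<Bytef> compressChunk(const std::vector<int16_t>& heights,
                                 int16_t& outAverage) {
    long long sum = 0;
    for (int16_t h : heights) sum += h;
    outAverage = static_cast<int16_t>(sum / static_cast<long long>(heights.size()));

    std::vector<int16_t> offsets(heights.size());
    for (std::size_t i = 0; i < heights.size(); ++i)
        offsets[i] = static_cast<int16_t>(heights[i] - outAverage);

    const uLong srcLen = static_cast<uLong>(offsets.size() * sizeof(int16_t));
    uLongf dstLen = compressBound(srcLen);
    std::vector<Bytef> packed(dstLen);
    compress2(packed.data(), &dstLen,
              reinterpret_cast<const Bytef*>(offsets.data()), srcLen,
              Z_BEST_COMPRESSION);   // slow compression is fine for this use case
    packed.resize(dstLen);
    return packed;
}

// At runtime: uncompress() the offsets into a buffer of the known chunk size,
// then add outAverage back to each sample.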

If you're set on storing data at resolution level X, then adding the lower resolution levels (a la MIP mapping) will only add about 33% to your overall storage size: each coarser level is a quarter the size of the one below it, and 1/4 + 1/16 + ... = 1/3. Further, if you interpolate using cubic interpolation from the decompressed images, there are many places where you can probably get away with not storing the lowest level, as it'll be pretty flat (or at least predictably curvy).

Thus, I would recommend using some image compression function, like JPEG 2000 if that has quantization that works for you, and storing each level of the terrain as a separate image. The overhead just isn't that much.

An alternate approach would be to, again, store the MIP maps, but then store the next level as the delta from the interpolated previous MIP level; this will generate images with low variation fairly quickly, which might compress better. In fact, they may even compress well enough with lossless compression, such as PNG grayscale or even plain ZLib compressed data.
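As a small sketch of that delta idea: predict each sample of the finer level from the coarser MIP level and keep only the residual, which is what you'd then hand to PNG or zlib. Nearest-neighbour prediction is used below for brevity; the interpolation suggested above would shrink the residuals further.

#include <cstddef>
#include <cstdint>
#include <vector>

// Residual between a fine level (2*coarseW x 2*coarseH) and the prediction
// made from the coarser level. Decompression rebuilds the prediction the same
// way and adds the residual back.
std::vector<int16_t> mipResidual(const std::vector<uint8_t>& coarse, int coarseW, int coarseH,
                                 const std::vector<uint8_t>& fine) {
    const int fineW = coarseW * 2, fineH = coarseH * 2;
    std::vector<int16_t> residual(static_cast<std::size_t>(fineW) * fineH);
    for (int y = 0; y < fineH; ++y)
        for (int x = 0; x < fineW; ++x) {
            const int predicted = coarse[(y / 2) * coarseW + (x / 2)];
            residual[y * fineW + x] =
                static_cast<int16_t>(static_cast<int>(fine[y * fineW + x]) - predicted);
        }
    return residual; // mostly small values, so it compresses well losslessly
}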

Thank you both very much for your input. I was mulling over a lot of the VQ info last night (research papers, lecture notes, tutorial-type pages and some really cryptic source code) and I couldn't tell whether it was right for me or not (it fit the really-fast-decompression part, so I looked further into it). I'm actually kind of glad it's NOT what I want, as the compression times were pretty slow and I'm compressing a good bit of data :D

I messed around with that SDK yesterday, but I got so side-tracked with VQ that I forgot about it... I'll look more into it, though. Space isn't a major issue - I just wanted to avoid creating the entire planet on the fly, since download speeds keep getting faster (Comcast just mailed me to say they doubled my upload speed, hurray!) and hard drive space is so cheap these days that it would be a waste of resources to brute-force creation of everything.

My mind is a lot clearer than last night and I've got a better grip on the material now, thanks very much. If anyone has any more input, I'm still all ears!

* If I come up with anything useful, I'll most likely release the code for it.

EDIT: Two more posts beat mine in - good tips, I'll definitely look into those as well.

In the clipmap terrain paper, they take the high-res terrain and box-filter it down to produce a pyramid of mipmaps. Starting with the lowest-res mipmap, they use interpolation to predict the intermediate values at the next finer level and then store the difference between the prediction and the actual value (the residual). This gives a chain of increasingly sized maps containing residuals; they then compress each level of residuals using a compression technique called PTC. There's a paper available online describing PTC, but no freely available implementations that I've found, and it looks quite tricky to implement from the paper. They claim very good compression ratios with this technique.

It seems like a wavelet-type technique should be well suited to terrain compression, because many of them support progressively decompressing coarser representations of the terrain. For something like clipmapping, though, where you don't want the whole terrain decompressed in memory at once, you want to be able to selectively decompress a particular subregion of an image at high resolution, and as far as I've been able to tell, none of the freely available wavelet compression implementations are designed to support this efficiently.
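For reference, a minimal sketch of the box-filter step that builds such a pyramid: each coarse sample is the rounded average of the corresponding 2x2 block in the finer level. Names are illustrative, not from the paper.

#include <cstddef>
#include <cstdint>
#include <vector>

// Produce the next-coarser level of the pyramid. Assumes fineW and fineH are even.
std::vector<uint8_t> boxFilterDown(const std::vector<uint8_t>& fine, int fineW, int fineH) {
    const int coarseW = fineW / 2, coarseH = fineH / 2;
    std::vector<uint8_t> coarse(static_cast<std::size_t>(coarseW) * coarseH);
    for (int y = 0; y < coarseH; ++y)
        for (int x = 0; x < coarseW; ++x) {
            const int sum = fine[(2 * y) * fineW + 2 * x]
                          + fine[(2 * y) * fineW + 2 * x + 1]
                          + fine[(2 * y + 1) * fineW + 2 * x]
                          + fine[(2 * y + 1) * fineW + 2 * x + 1];
            coarse[y * coarseW + x] = static_cast<uint8_t>((sum + 2) / 4); // rounded average
        }
    return coarse;
}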
