JoeyBlow2

Large heightmaps, how to handle data


I was experimenting with DEM data to load landscapes from around the world. The issue I'm seeing is that the DEM data is at 30-meter resolution, which means a 108 km "grid" of the world is 3600x3600 (about 13 million points). Loading this data and creating vertex data uses an extreme amount of memory. What methods are best to combat this?

1) Reparse these DEM files at a lower resolution, maybe 100 meters (1200x1200, about 1.5 million points). But this degrades the detail.

2) Load the points into system memory, then calculate LOD on them: create high-resolution vertex data around the camera and lower-resolution data for things farther away, rebuilding the vertex data dynamically as things come in and go out of range. But this seems like a lot of calculating. It can be done in a thread, which isn't a problem, but is it worth doing?

3) Analyze the files, keeping detail where the terrain needs it, but scaling down areas that seem similarly flat and leaving hilly areas at higher resolution. This leaves the data static, but requires someone to go through the files and create patches based on the DEM data.

Does anybody have a better idea? And if one of mine is an ideal solution, which one would it be?
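For scale, option 1 can be sketched in a few lines. This is a minimal, hypothetical example (not from the poster), assuming the DEM tile is already loaded into a NumPy array; averaging each 3x3 block loses less detail than simply skipping points:

```python
import numpy as np

def downsample(dem: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a DEM grid by averaging factor x factor blocks."""
    h, w = dem.shape
    assert h % factor == 0 and w % factor == 0
    # Reshape into (h/f, f, w/f, f) blocks and average each block.
    return dem.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Stand-in for a real 30 m DEM tile (random values for illustration).
dem = np.random.rand(3600, 3600).astype(np.float32)
low = downsample(dem, 3)        # 30 m -> 90 m resolution
print(low.shape)                # (1200, 1200)
```

This cuts memory by a factor of nine, but, as noted, the detail is gone for good.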

Well, you already have the highest-detail LOD on disk in the DEM data. All you need to do, I think, is first partition the DEM data into smaller sub-patches (size dependent on how you present them) and then pre-generate some lower-resolution patches from the same data. Then you can load the lower-resolution sub-patches for surrounding areas in the distance, and present only a small number (4, 9, 16, etc.) of the highest-detail sub-patches immediately around the camera. Computing the sub-patches is a little tricky, though, as the edges of the lower-resolution ones have to match up with the higher-resolution ones, or else you will get "tears". As such, it is more than simply removing points at regular intervals.

I take it you will probably be presenting this data from high in the air, as 30 meters between vertices is going to be rather low resolution at ground level for human-sized viewing contexts. ;)
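The partitioning idea above might be sketched like this. It is an illustrative example with made-up names, assuming the DEM tile is a NumPy array; patches share their border row/column so neighbours meet without gaps, though real stitching between patches at *different* LOD levels still needs matched triangulations or skirts:

```python
import numpy as np

def make_patches(dem, patch_size):
    """Split a DEM grid into (patch_size+1)-sided tiles that share
    their border row/column with neighbouring tiles."""
    patches = {}
    h, w = dem.shape
    for y in range(0, h - 1, patch_size):
        for x in range(0, w - 1, patch_size):
            key = (y // patch_size, x // patch_size)
            patches[key] = dem[y:y + patch_size + 1, x:x + patch_size + 1].copy()
    return patches

def lower_lod(patch, step=2):
    # Drop interior vertices; borders survive because the patch side
    # length (patch_size) is a multiple of `step`.
    return patch[::step, ::step]

dem = np.arange(81, dtype=np.float32).reshape(9, 9)
patches = make_patches(dem, 4)
print(len(patches), patches[(0, 0)].shape, lower_lod(patches[(0, 0)]).shape)
```

Because horizontally adjacent patches share a column of vertices, their edges line up exactly at the same LOD; the hard part the post mentions is making a coarse patch's edge agree with a fine neighbour's.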

You might be surprised at how well JPEG can compress a grayscale version of a DEM file without much data loss :)

Haven't tried DXT compression, but I assume there's something workable right there in everyone's hardware.

If one of the DXT compressions does turn out to be acceptable, then it's a big, big win: lower bus traffic, AND the vertex shader can do all the hard work.

..just a thought..
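As a rough feel for the loss involved, here is a sketch (not from the poster) of quantizing a tile to 8-bit grayscale, which is the precision a grayscale JPEG would carry, and measuring the round-trip error. The synthetic terrain and names are made up for illustration:

```python
import numpy as np
import zlib

# Synthetic 1201x1201 tile of elevations in metres (stand-in for real DEM data).
dem = (np.sin(np.linspace(0, 8, 1201))[:, None]
       * np.cos(np.linspace(0, 8, 1201))[None, :] * 500 + 1000).astype(np.float32)

# Map the elevation range onto 0..255, as a grayscale image would.
lo, hi = float(dem.min()), float(dem.max())
gray = np.round((dem - lo) / (hi - lo) * 255).astype(np.uint8)
restored = gray.astype(np.float32) / 255 * (hi - lo) + lo

step = (hi - lo) / 255                      # metres per gray level
print("max round-trip error (m):", float(np.abs(dem - restored).max()))
print("lossless ratio on top:", dem.nbytes / len(zlib.compress(gray.tobytes())))
```

The worst-case error is half a quantisation step, so a tile spanning 1000 m of elevation quantises to within about 2 m per sample, before JPEG or DXT adds its own (lossy) savings.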

The solution is up to you:
1) if a loss of resolution is not a problem for you.
2) probably the best method, because it is dynamic and automatic (no human intervention in the process) and you get scalability (the quality can be adjusted in real time for low-end systems).
3) less "useless" data stored on disk and in memory, but it needs a pre-processing step.

For "2", just have a look at LOD algorithms for terrain rendering (geomipmapping, ROAM, SOAR).
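The core trick of geomipmapping, for instance, is that the per-patch vertex data never changes; only the triangle indices do, sampled at a coarser stride for distant patches. A minimal sketch of the index generation (illustrative only; cracks between neighbouring patches at different strides still need separate handling):

```python
def grid_indices(size, step):
    """Triangle-list indices for a (size x size) vertex patch sampled
    at the given stride. step=1 is full detail; step=2 uses a quarter
    of the quads, and so on."""
    idx = []
    for y in range(0, size - 1, step):
        for x in range(0, size - 1, step):
            # Corner vertices of one quad, clamped to the patch edge.
            a = y * size + x
            b = y * size + min(x + step, size - 1)
            c = min(y + step, size - 1) * size + x
            d = min(y + step, size - 1) * size + min(x + step, size - 1)
            idx += [a, c, b, b, c, d]   # two triangles per quad
    return idx

print(len(grid_indices(3, 1)))  # 24 indices: 4 quads, 8 triangles
print(len(grid_indices(3, 2)))  # 6 indices: 1 quad, 2 triangles
```

Each LOD level is just a different index buffer over the same vertices, which is why the method scales well.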


At the moment, the vertex shader cannot be used to add vertices.

When doing LOD, one option is to either swap the whole mesh for another LOD, or just change the triangle indices and send all the vertices to the GPU (having some extra vertices doesn't hurt much, since the GPU doesn't process them until they are needed).
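In outline, that approach might look like this: each patch keeps one full vertex buffer, and an index buffer is chosen per frame by camera distance. This is a hypothetical sketch; all names and distance thresholds are made up:

```python
def pick_lod(distance, lod_ranges=(100.0, 400.0, 1600.0)):
    """Return the LOD level for a patch at the given camera distance.
    Level 0 is full detail; each later level halves the index stride's
    sampling density."""
    for lod, limit in enumerate(lod_ranges):
        if distance < limit:
            return lod
    return len(lod_ranges)          # beyond all ranges: coarsest level

# One precomputed index buffer per LOD level, all over the same vertices.
index_buffers = {0: "stride 1", 1: "stride 2", 2: "stride 4", 3: "stride 8"}

print(pick_lod(50))    # 0 -> draw with the full-detail index buffer
print(pick_lod(900))   # 2 -> draw with the stride-4 index buffer
```

Since only the (small) index buffer changes between levels, switching LOD is cheap, and unused vertices sit idle on the GPU as the post describes.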

