geoclipmaps, streaming from system memory and disc?

I've just implemented geoclipmaps using a 4096x4096 texture in video memory as base data. But how would I go about using bigger heightmaps? Should I load part of the dataset into system memory and update the video memory from there, or should I update video memory with data read directly from the drive? Should I split the larger dataset into blocks and always keep, say, four blocks in memory, or should I use toroidal updates here as well? I'd appreciate some suggestions; I'm a bit unsure which approach is best.


I feel that the main strength of geoclipmaps is the incremental updates using toroidal array access. If you don't take advantage of this, you might as well implement some other algorithm (e.g. geomipmapping). The toroidal updates are what make the algorithm capable of handling large terrains without unlimited GPU memory.
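To make the idea concrete, here's a minimal sketch (in Python, with a made-up cache size) of how toroidal addressing lets you rewrite only the newly exposed strip of the cache instead of the whole level:

```python
# Sketch of toroidal (wrap-around) addressing for one clipmap level.
# Assumption: a square cache of SIZE x SIZE texels; when the viewer
# moves, only the strip of texels that became newly visible is
# rewritten in place, everything else stays where it is.

SIZE = 8  # cache resolution (would be e.g. 4096 in practice)

def toroidal_index(world_x, world_y, size=SIZE):
    """Map world-space texel coordinates to a fixed-size cache slot."""
    return world_x % size, world_y % size

def dirty_columns(old_origin_x, new_origin_x, size=SIZE):
    """Cache columns that must be refreshed after the origin shifts in x."""
    shift = new_origin_x - old_origin_x
    if abs(shift) >= size:
        return list(range(size))          # moved too far: full refresh
    if shift >= 0:
        cols = range(old_origin_x + size, new_origin_x + size)
    else:
        cols = range(new_origin_x, old_origin_x)
    return sorted({c % size for c in cols})
```

Moving the origin right by one on an 8-wide cache dirties a single column; the same logic applies per row for vertical movement.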

How you should manage the part of the dataset that is not stored on the GPU (in your 4096x4096 texture) depends on how large a dataset you want to support (larger than system memory?) and whether you can come up with a caching scheme that avoids disk hiccups when paging in new data.

In any case, you need to partition your dataset so that it supports efficient spatial access. To achieve this, you could split the dataset into blocks, maybe 4 or 8. When updating the regions, you stream in the additional data using toroidal access as described in the Asirvatham/Hoppe paper. You could then have another thread prefetch neighbouring blocks from disk into application memory. Maybe implement an LRU caching scheme for the blocks and free up memory as you go along.
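A minimal sketch of such an LRU block cache, assuming a hypothetical `load_block(bx, by)` callback that reads one block from disk:

```python
from collections import OrderedDict

# Sketch of an LRU cache for heightmap blocks. load_block(bx, by) is an
# assumed callback that performs the actual (slow) disk read for one block.

class BlockCache:
    def __init__(self, load_block, capacity=4):
        self.load_block = load_block      # e.g. reads from the .raw file
        self.capacity = capacity          # blocks kept in system memory
        self.blocks = OrderedDict()       # (bx, by) -> block data

    def get(self, bx, by):
        key = (bx, by)
        if key in self.blocks:
            self.blocks.move_to_end(key)  # mark as recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[key] = self.load_block(bx, by)
        return self.blocks[key]
```

The prefetch thread would simply call `get` on the neighbouring block coordinates before the viewer reaches them.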

kind regards,
Nicolai

Edited by ndhb

Would system memory as a secondary, medium-sized cache work well? Or would there be no point if I'm loading from the disc anyway?

Anyway, my main issue is that I have barely any idea how to efficiently load data from files. I was hoping for some tips on how this is best done, and maybe some links to information on the topic?

EDIT: You mean splitting it up into several blocks which I load blockwise into system memory, meaning I'm not doing toroidal updates from the disc?

EDIT2: By the way, how fast can you load from a normal 7200 rpm disc? And is there a good way to compress a 16-bit raw file? That would increase the amount of data I can load per second.

[Edited by - Dragon_Strike on November 4, 2007 6:32:50 PM]


I wouldn't build a system that is so dependent on being able to load from disk fast enough. You have very little control over which other applications in the operating system use disk resources (and it's a scarce resource). Imagine a scenario where search indexing, anti-virus or disk defragmentation starts running.

Load blocks of data into application memory well in advance, instead of depending on your read request being at the front of the IO queue. The main problem with loading directly on demand is that the seek latency on disks is very high (above 15 ms). That latency alone puts a hard limit on frame rate (1/0.015 s is roughly 66 fps at best), so in practice you're likely to see the FPS drop whenever a new block is loaded. Since memory access has a MUCH lower latency, you should do your toroidal updates from blocks cached in system memory.

How to load files efficiently really depends on the programming language, data structures, operating system and so on. Just try to load large blocks of data that you're likely to be using, to minimize the number of disk seeks (disk throughput is most likely not the problem).
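As an illustration, if the heightmap is pre-tiled so that each block is stored contiguously in the file, then reading one block costs exactly one seek plus one sequential read. A sketch with assumed block dimensions:

```python
import struct

# Sketch: addressing blocks in a 16-bit .raw heightmap that has been
# pre-tiled into contiguous BLOCK x BLOCK chunks (row-major block order),
# so fetching a block is one seek followed by one sequential read.

BLOCK = 256                    # block edge length in samples (assumed)
BYTES_PER_SAMPLE = 2           # 16-bit heights

def block_offset(bx, by, blocks_per_row):
    """Byte offset of block (bx, by) in the tiled file."""
    index = by * blocks_per_row + bx
    return index * BLOCK * BLOCK * BYTES_PER_SAMPLE

def read_block(f, bx, by, blocks_per_row):
    """Read one block from an open binary file as unsigned 16-bit values."""
    f.seek(block_offset(bx, by, blocks_per_row))
    raw = f.read(BLOCK * BLOCK * BYTES_PER_SAMPLE)
    return struct.unpack("<%dH" % (BLOCK * BLOCK), raw)
```

The one-off tiling pass (rewriting the row-major source image into this block layout) is the only extra preprocessing required.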

In theory it makes sense to load a compressed file to increase throughput, and there are *tons* of lossless compression algorithms you could use. In practice, though, it's not throughput that limits you but the seek latency, and compressing data doesn't reduce latency (in most cases). So the only compelling reason to use compression here is to reduce the storage requirement, and even that reason is not very strong unless you plan on supporting extremely large (> 50Mb ?) terrain datasets.
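If you do want compression for storage, one simple lossless option (my own sketch, not something from the paper) is deflate over delta-encoded heights: neighbouring samples on smooth terrain are close in value, so the deltas compress much better than the raw 16-bit values:

```python
import struct
import zlib

# Sketch: lossless compression of a 16-bit raw height block.
# Delta-encoding first exploits the smoothness of terrain; zlib then
# deflates the (mostly small) deltas. Round-trips exactly.

def compress_heights(heights):
    deltas = [heights[0]] + [
        (heights[i] - heights[i - 1]) & 0xFFFF   # wrap to unsigned 16-bit
        for i in range(1, len(heights))
    ]
    return zlib.compress(struct.pack("<%dH" % len(deltas), *deltas))

def decompress_heights(blob):
    raw = zlib.decompress(blob)
    deltas = struct.unpack("<%dH" % (len(raw) // 2), raw)
    heights, h = [], 0
    for d in deltas:
        h = (h + d) & 0xFFFF                     # undo the delta encoding
        heights.append(h)
    return heights
```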

What I would do is this:
Split the large height map into blocks. At runtime, load blocks from disk in advance (asynchronously if you feel confident about it, and with look-ahead caching based on your viewing direction if your camera moves fast). Implement an efficient caching scheme for the blocks and update your height map from these cached blocks (not directly from disk). Most importantly, don't make it more complicated than you have to:

1) Start by splitting up the height map into sizable blocks.
2) Load them as needed into memory and free up memory from old blocks.
3) Perform updates to the GPU from these.

4) Then, if this is too slow (benchmark to find the bottleneck):
4a) Implement look-ahead caching.
4b) Increase the block size.
4c) Perform loading asynchronously.
4d) Think about other possible optimizations.
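Step 4c could be sketched like this: a worker thread performs the slow reads and hands results back through a thread-safe queue, so the render loop never blocks on disk (`load_block` is again an assumed callback):

```python
import queue
import threading

# Sketch of asynchronous block loading (step 4c). Requests and results
# travel through thread-safe queues; the worker does the blocking reads.

def start_prefetcher(load_block):
    requests, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            key = requests.get()
            if key is None:                # sentinel: shut down the worker
                break
            results.put((key, load_block(*key)))

    threading.Thread(target=worker, daemon=True).start()
    return requests, results

# The render loop would push (bx, by) requests for blocks it expects to
# need soon, then poll `results` with get_nowait() once per frame and
# move finished blocks into the cache.
```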

kind regards,
Nicolai

Edited by ndhb
