Performance of using harddrive to load map data dynamically.

Started by Thevenin · 8 comments, last by Sirisian 17 years, 5 months ago
I'm considering implementation methods for a feature I want to include in my project, but I'm really uncertain whether this particular implementation is feasible, performance-wise. Here's the idea: picture an infinitely sized Pac-Man map, something impractical to store entirely in RAM. Every time you moved left or right, it would load (DISPLAY_HEIGHT / GRID_HEIGHT) tiles from the hard drive, and every time you moved up or down, it would load (DISPLAY_WIDTH / GRID_WIDTH) tiles from the hard drive. IIRC, Ultima used this approach, and probably rightly so; it's an incredibly easy implementation. But my concern is: is this going to be a bottleneck? For every grid movement, it would have to seek() to a location in the file and then read it.
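Roughly what that per-movement seek would look like, as a minimal sketch. The one-byte-per-tile format, the MAP_WIDTH value, and the in-memory demo file are all assumptions for illustration, not your actual format:

```python
import io

TILE_BYTES = 1   # assumption: one byte per tile id
MAP_WIDTH = 8    # tiles per map row (tiny demo; a real map would be huge)

def read_tile(f, row, col):
    """Seek to a single tile in a row-major map file and read its id."""
    f.seek((row * MAP_WIDTH + col) * TILE_BYTES)
    return f.read(TILE_BYTES)[0]

# Demo with an in-memory "file" where tile id == row * MAP_WIDTH + col.
demo = io.BytesIO(bytes(range(MAP_WIDTH * 4)))
print(read_tile(demo, 2, 3))  # prints 19
```

The cost concern is exactly this pattern: one seek plus one tiny read per tile, repeated many times per movement.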
I'd probably do it in larger chunks or blocks, to minimize the file open/close overhead, but other than that, I think you'll just have to test it and see how significant the impact is. You could store each block of tiles as a continuous block, rather than storing row after row, so whenever you need to load a block, you move to a specific offset in the file and read the tiles in one go, instead of continually seeking for each next row of that block.

Just a thought. :)
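A sketch of the block-per-read idea, assuming a hypothetical layout where each block's tiles sit contiguously in the file (the BLOCK and BLOCKS_PER_ROW values are made up for the demo):

```python
import io

BLOCK = 4           # each block is BLOCK x BLOCK tiles, stored contiguously
BLOCKS_PER_ROW = 2  # blocks per map row (hypothetical layout)

def read_block(f, block_row, block_col):
    """One seek and one read fetch an entire block of tiles."""
    block_bytes = BLOCK * BLOCK          # 1 byte per tile
    index = block_row * BLOCKS_PER_ROW + block_col
    f.seek(index * block_bytes)
    return f.read(block_bytes)

# Demo file: 4 blocks, each filled with its own block index.
demo = io.BytesIO(bytes(b for i in range(4) for b in [i] * 16))
print(read_block(demo, 1, 0))  # block index 2: sixteen bytes of value 2
```

Because a block is one contiguous run of bytes, loading it costs a single seek/read pair instead of one per row.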
Create-ivity - a game development blog
Quote:Original post by Captain P
I'd probably do it in larger chunks or blocks, to minimize the file open/close overhead


There would not be a file open/close overhead. I would only open the file once, and close the file once. Between seeks() or reads(), I don't see the need to close and reopen the file.



Quote:Original post by Captain P
You could store each block of tiles as a continuous block, rather than storing row after row, so whenever you need to load a block, you move to a specific offset in the file and read the tiles in one go, instead of continually seeking for each next row of that block.


I do not understand. Assuming the map file stores its tiles row by row (which it does, though I never mentioned this), are you pointing out that reading a row (moving up or down) would work out fine, but reading a column (moving left or right) would require several seek()s, and that a better arrangement of the tile data in the file would therefore be more efficient?
A good rule of thumb is that random accesses to a hard drive are 1,000,000 times slower than RAM.

So, it could be worth it, but only if 1) loading new data is rare, and 2) it can be done well in advance of the time at which it's actually needed.
A naive implementation is likely to cause some nasty stalls.
In Sacred we divided the world into sectors of, say, 64x64 'patches' of e.g. 48x24 pixels (iso view). We read this info on a per-sector basis. Note that the patches only store an id of which graphic to load, plus a few other basic bits of info like 'can we enter this patch'.

One thing which brings huge gains is to compress (e.g. zip) your sectors, since loading into RAM and then uncompressing is way faster than loading an uncompressed file into RAM.
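A minimal sketch of that round trip using Python's zlib. The sector contents here are invented; the point is that repetitive tile/patch data shrinks dramatically, so there is far less to read off the disk:

```python
import zlib

# Hypothetical sector: 64*64 one-byte patch ids, repetitive so it compresses well.
sector = bytes([1, 1, 1, 0] * (64 * 64 // 4))

compressed = zlib.compress(sector)
print(len(sector), len(compressed))   # the compressed copy is far smaller

restored = zlib.decompress(compressed)
assert restored == sector             # round-trip is lossless
```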

Now you only have to combine this system with smart pre-streaming in a separate thread (so I/O doesn't block your render loop).
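A minimal sketch of that kind of streaming thread, with queues standing in for the real request/response plumbing and a sleep standing in for the actual disk read plus decompress (all names here are hypothetical):

```python
import queue
import threading
import time

def loader(requests, results):
    """Background thread: 'loads' sectors so the render loop never blocks on I/O."""
    while True:
        sector_id = requests.get()
        if sector_id is None:        # sentinel: shut the loader down
            break
        time.sleep(0.01)             # stands in for the disk read + decompress
        results.put((sector_id, b"sector-data"))

requests, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=loader, args=(requests, results), daemon=True)
t.start()

requests.put(7)                  # the render loop requests sector 7 ahead of time...
requests.put(None)
t.join()
sector_id, data = results.get()  # ...and later picks it up without stalling
print(sector_id, data)
```

The render loop only ever touches the queues, which never block on the disk; the seek/read/decompress latency is hidden inside the worker thread.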

Just my 2 ct :)
visit my website at www.kalmiya.com
Quote:Original post by Thevenin
There would not be a file open/close overhead. I would only open the file once, and close the file once. Between seeks() or reads(), I don't see the need to close and reopen the file.


Ah, ok. :)

Quote:Original post by Thevenin
I do not understand. Assuming the map file stores its tiles row by row (which it does, though I never mentioned this), are you pointing out that reading a row (moving up or down) would work out fine, but reading a column (moving left or right) would require several seek()s, and that a better arrangement of the tile data in the file would therefore be more efficient?


Yes, exactly. Kitt3n just explained it more thoroughly than I did, though. :) You'd then want to load new patches only when players get near the edge of the current patch, or have another smart prediction system for when new patches need to be loaded. Just because the player moves one tile left doesn't mean you need to load the tiles to his left; maybe he's going to move right again afterwards. You'd want to buffer some data and only load and unload patches when necessary.

Quote:Kitt3n
One thing which brings huge gains is to compress (e.g. zip) your sectors, since loading into RAM and then uncompressing is way faster than loading an uncompressed file into RAM.


Interesting. What performance gain did you get, roughly speaking?
Create-ivity - a game development blog
There are a couple of options for creating a seemingly infinite tiled game: either load the tiles from a file or calculate them dynamically. Before we get into that, you need to think about how to compress the tiles as much as possible. For example, in your Pac-Man game you will probably have just two options per tile: wall, or empty space to walk through. In that case, each tile can be represented by a single bit, so if your screen consisted of 32x32 tiles, it would take only 128 bytes.
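A sketch of that bit-packing, assuming a hypothetical 32x32 wall/empty grid: 32*32 tiles at 1 bit each is 1024 bits, i.e. 128 bytes.

```python
# Hypothetical 32x32 wall/empty grid: 1 bit per tile -> 32*32/8 = 128 bytes.
W = H = 32

def pack(grid):
    """grid: list of H rows, each a list of W 0/1 values -> 128 packed bytes."""
    bits = [cell for row in grid for cell in row]
    out = bytearray(W * H // 8)
    for i, bit in enumerate(bits):
        if bit:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

def unpack(data):
    """Inverse of pack(): 128 bytes back into an HxW grid of 0/1 values."""
    bits = [(data[i // 8] >> (i % 8)) & 1 for i in range(W * H)]
    return [bits[r * W:(r + 1) * W] for r in range(H)]

grid = [[(r + c) % 2 for c in range(W)] for r in range(H)]  # checkerboard
packed = pack(grid)
print(len(packed))  # prints 128
```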

Option 1: Loading from disk
If you load the tiles from a file then you must do two important things: load them in a separate thread, and preload. Think of a 3x3 grid of cells and imagine you are in the center cell (I'm calling a "cell" the 32x32 tiles that take up the whole screen). A background thread should load all 8 surrounding cells every time you change to a different screen. Actually, once you have the initial 9 cells loaded, you will only need to load 3 new cells each time you move to a new cell, because the other 6 are already in memory.
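A sketch of that bookkeeping, using hypothetical (row, col) cell coordinates; the set difference shows that after a one-cell move, only 3 cells are new and need loading:

```python
# 3x3 cache of cells centered on the player (hypothetical coordinates).

def needed_cells(center_row, center_col):
    """The 9 cells that should be resident when the player is in this cell."""
    return {(center_row + dr, center_col + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)}

before = needed_cells(5, 5)
after = needed_cells(5, 6)          # player moved one cell to the right
to_load = after - before            # only the new rightmost column of 3 cells
print(sorted(to_load))              # prints [(4, 7), (5, 7), (6, 7)]
```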

Option 2: Calculating Dynamically
If you consider your tiles as an array of bits, you could calculate them dynamically based on your cell number. Create a hash function that takes your cell number (row & column) and returns a 32x32 grid of bits representing the current screen cell. This option would be much more efficient, but it may be a little difficult coming up with the math for your hash function.
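One possible shape for such a hash function, sketched here with SHA-256 purely for illustration (a real game would more likely use a cheap noise or hash function, and would constrain the output to produce playable mazes):

```python
import hashlib

W = H = 32

def cell_grid(row, col):
    """Deterministically derive a 32x32 bit grid from the cell coordinates,
    so the 'infinite' map never touches the disk (illustrative sketch)."""
    seed = f"{row},{col}".encode()
    data = b""
    counter = 0
    while len(data) < W * H // 8:          # need 128 bytes of pseudo-random bits
        data += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    bits = [(data[i // 8] >> (i % 8)) & 1 for i in range(W * H)]
    return [bits[r * W:(r + 1) * W] for r in range(H)]

g = cell_grid(10, -3)
print(len(g), len(g[0]))  # prints 32 32
```

The key property is determinism: revisiting a cell regenerates exactly the same tiles, so nothing needs to be stored.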
www.slimcalcs.com
As pointed out, going to disk is waaay slower than not, and going to CD or DVD is waaay slower than going to HD. That said, streaming is a necessity for big games. Depending on a number of factors (including size of grid cells, max movement speed, decompression if any, your use of threads), it may or may not be a problem.

Btw, one way to avoid the multiple seeks per read is to store the data twice - once arranged by rows and once arranged by columns. If you're moving vertically, you use your row-based data. If you're moving horizontally, you use your column-based data. That way, you do one seek/read pair per movement.
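A sketch of that double-storage layout on a tiny hypothetical map, using in-memory byte strings in place of the two file copies:

```python
# Store the map twice: once row-major, once column-major (hypothetical layout).
# Vertical movement reads a row slice from the first copy; horizontal movement
# reads a column slice from the second -- one seek + one read either way.

MAP_W, MAP_H = 6, 4
tiles = [[r * MAP_W + c for c in range(MAP_W)] for r in range(MAP_H)]

row_major = bytes(t for row in tiles for t in row)
col_major = bytes(tiles[r][c] for c in range(MAP_W) for r in range(MAP_H))

def read_row(r):
    return row_major[r * MAP_W:(r + 1) * MAP_W]       # contiguous: 1 read

def read_col(c):
    return col_major[c * MAP_H:(c + 1) * MAP_H]       # contiguous: 1 read

print(list(read_row(1)))  # prints [6, 7, 8, 9, 10, 11]
print(list(read_col(2)))  # prints [2, 8, 14, 20]
```

The trade-off is the obvious one: double the storage for half the worst-case seeking.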

You can take this to a further extreme by storing 8 (or more) copies of the data, for different motion patterns. To take it to a really silly extreme, you could have 36 copies of the data, arranged for the read requirements of 10 degree arcs. But, you'd probably only want to do that if your seek times were bad (like CD/DVD) and you had the space to blow. :)
Quote:Kitt3n
One thing which brings huge gains is to compress (e.g. zip) your sectors, since loading into RAM and then uncompressing is way faster than loading an uncompressed file into RAM.


Interesting. What performance gain did you get, roughly speaking?

Hmm... it was some time ago and I don't remember the details, especially since we also did another optimization at the same time to reduce the amount of data. I just remember it was really worthwhile.

For example, we had 2D trees; by 'tiling' them, cutting away the irrelevant parts (e.g. next to the trunk there is lots of 'air'), storing u/v maps into that, and afterwards combining them again, we dramatically reduced our asset size. :)

These two optimizations made it from "too slow" to "okay" :)

Having the i/o run in a thread would have improved it to "great".

/R
visit my website at www.kalmiya.com
Binary files combined with a smart map editor would also suffice, and zipping wouldn't be needed, though zipping accomplishes the same basic thing with a little more compression. For continuous map files, binary is the way to go.

This topic is closed to new replies.
