Optimizing 2D Tile Map


Hello GameDev community!

How many here have valuable experience with 2D tile maps?

I have a question that may be valuable to any readers of this page, and a good opportunity for you to share your expertise.

To begin, I have what I'll call here an 'Action Screen,' which loads a matrix of what I'll call 'Maps.' Each map is 100 * 100 tiles, and the tiles also contain objects (in this implementation).

The Action Screen works great, and allows a player to travel across any number of maps.

The challenge is getting it to save and load maps efficiently. Currently, maps are serialized and deserialized as needed, along with a vector of Tile objects within them.

Rendering and displaying are both very fast. The challenge currently lies in loading the maps from a directory (which takes a lot of time, due to the number of objects within them).

One solution I've considered is to serialize and deserialize tiles individually, which would work very well for loading, but horribly for saving new maps, which contain unique tiles. Also, that would be an ungodly number of individual files.

Is there anyone out there who might be able to field this one, or point me to some relevant resources?

Thanks in advance!


It's all going to come down to how big those map files are, how large their dependencies (if any) are, and what their structure is. In general, I wouldn't have thought they'd be large enough to form a bottleneck, so I would guess that either you are storing a lot of things in the file, storing it inefficiently, or perhaps booting dependencies of the previously-loaded maps and then reloading them anew, which if true is probably the real source of the inefficiency -- I think your last paragraph indicates that this is what you're doing.

I would absolutely keep tile images separate from their maps. There may be tiles that are unique to a map, but it's still probably better to store them separately. You for sure want to share common tiles between maps, and once you're doing that, making a special case to support tiles also coming from within the map itself is just extra work for no real gain. But you can do that if it floats your boat.

If you only have map data inside the file and it's still a bottleneck, then you probably want to look first at whether you are storing your maps efficiently -- preferably in something that can be loaded directly into memory with minimal translation. Text formats like XML or JSON are right out as a runtime format (though they might be appropriate during production); even binary formats that require fine-grained parsing can be troublesome.

If you're sure that the size and shape of your data isn't the problem, then you'll want to look at breaking the map into even smaller sections (by the way, one of the advantages of moving tile data out of the map file itself is that you can break up the map into smaller sections without worrying about which file those unique tiles land in) -- maybe 64x64, 50x50, or 32x32. Even if you end up loading the same amount of a map, you can load the nearest parts first and continue on your way.
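For illustration, a minimal sketch of what that chunking might look like, assuming 50x50 chunks and a hypothetical chunk_<x>_<y>.map naming scheme (both are made up for the example; nearest-first loading would then walk the returned list in order):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical chunk size; the post suggests 64x64, 50x50, or 32x32.
constexpr int kChunkSize = 50;

struct ChunkCoord { int cx, cy; };

// Map a world tile coordinate to the chunk that contains it.
ChunkCoord chunkFor(int tileX, int tileY)
{
    return { tileX / kChunkSize, tileY / kChunkSize };
}

// Collect the filenames of the player's chunk and its eight neighbours.
std::vector<std::string> chunksAround(int tileX, int tileY)
{
    ChunkCoord c = chunkFor(tileX, tileY);
    std::vector<std::string> files;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
        {
            char name[64];
            std::snprintf(name, sizeof(name), "chunk_%d_%d.map",
                          c.cx + dx, c.cy + dy);
            files.emplace_back(name);
        }
    return files;
}
```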

Finally, since it's often faster to load and decompress compressed data than it is to read the uncompressed data from a disk, compression is another avenue that you could explore, either on its own with larger maps or combined with smaller maps for even further reduced load times. Very simple run-length encoding will do pretty well; Huffman encoding will do even better -- it's more complex, but there are very good free libraries for it. You could also use a .zip archive library for all of your game assets and let it handle it all -- again, there are good free libraries for this.
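As a concrete example of the run-length encoding idea, here's a minimal sketch over a buffer of one-byte tile IDs, stored as (count, value) pairs with counts capped at 255. A real format would need to handle wider tile IDs; this is only to show the technique:

```cpp
#include <cstdint>
#include <vector>

// Encode runs of identical tile IDs as (count, value) byte pairs.
std::vector<uint8_t> rleEncode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    for (size_t i = 0; i < in.size(); )
    {
        uint8_t value = in[i];
        uint8_t count = 1;
        while (i + count < in.size() && in[i + count] == value && count < 255)
            ++count;
        out.push_back(count);
        out.push_back(value);
        i += count;
    }
    return out;
}

// Expand (count, value) pairs back into the original buffer.
std::vector<uint8_t> rleDecode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    for (size_t i = 0; i + 1 < in.size(); i += 2)
        out.insert(out.end(), in[i], in[i + 1]); // count copies of value
    return out;
}
```

For large uniform regions (like the forest maps mentioned later in the thread), runs compress extremely well, which is why this simple scheme is often worth trying before heavier dependencies.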

If you can describe the contents/format/size of your map files in more detail, we can help you out more, or I can give you a better idea of what parts of the above are most relevant to your situation.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Load speeds mostly come down to how your file format is laid out, and the amount of content being loaded.

What do your map files look like? Are they text files? If so, you can pretty much get an instant speed boost by using binary files instead.

Servant of the Lord, yes, they are text files, although by using Boost, switching to binary is very simple.

Ravyne, the maps do contain a lot of data. Again, there are the 10,000 tiles in each (I know, an atrocity), as well as all of the objects. One example is a forest map (100 * 100) containing about 5000 trees. Currently, the whole purpose of the tiles is simply rendering. If I know which tiles are active, then I can render just the objects within those tiles.

Will look into compressing files, as that addresses one major question that I had - having a data store that allows loading files as needed, while at the same time being able to compartmentalize them.

One other direct application of the same functionality was to create a 300 * 300 tile vector in the Action Screen and pass all of the objects into it. This way, if an AI unit wants to travel from map to map, it won't actually have to load very many objects (just things like buildings and geographical data) in order to determine travel time and long-term pathfinding.

Understanding that this is not comprehensive, I will give you the data here, so you may be able to help me.

Each map is currently 170 KB, and removing the Tile portion of each will reduce that number to about 100 KB. Saving objects separately will reduce it even more dramatically.

The maps contain a few private integers and a string variable, as well as a private vector of objects. When a map is saved, it clears a global vector of objects, passes its objects into it, and then serializes all of the data.

The funny thing is that I don't even use tile images yet!

Will change serialization to binary, save objects in a folder separate from the map itself, and implement object passing to the Action Screen Tile vector.

Thank you for your insights. Just with this, the bottleneck will be almost gone, aside from the need to still load all of a map's objects.

If by objects, you mean things that have position on the map but are not "tiles" in the strictest sense, then no, you don't need to separate them from the tilemap, although you can if you want. What I meant was that it sounded like your map files also contained tile/object images inside of it, and that's what would be bad.

It looks like you're using about 17.4 bytes per tile space, which could probably be reduced but is not unreasonable either. My guess is that simply switching to binary will save you a good deal of time, you'll probably see load times cut in half or better just by that one change. If you can reduce the size of the map file, either by reducing the size of atomic data elements, reducing the map area contained in a file, or by using compression, you will see still-further gains that will be roughly linear (e.g. reducing the file-size by half will reduce load-time by half, since IO is such a bottleneck.)
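(Checking the arithmetic on that figure: 170 KB is about 174,080 bytes, and 174,080 / 10,000 tiles ≈ 17.4 bytes per tile.)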

Other things you should consider, which I neglected to mention before: make sure you pre-allocate all the memory your data structures will need. For example, don't just create an empty vector and then push items onto it because it'll grow as needed. Even though this works perfectly well, it's far more efficient to tell the vector to pre-allocate space for the 10k tiles when you create it -- or before you start loading the tiles in -- and you can still just push items onto it afterwards. Pre-allocate everything you can: if you know how many of something you need before you need space for it, you can pre-allocate it -- and if your file format doesn't tell you how many tiles/objects to expect, add that information to the file so you can use it. If you can't, just pre-allocate to a reasonable guess; you might waste some memory, but it's not a problem unless you're tight on memory already. That'll be another worthwhile boost if you're not already pre-allocating.
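A minimal sketch of the pre-allocation point, assuming a hypothetical Tile struct and a loader that knows the map dimensions up front:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical tile type; the real one presumably holds more data.
struct Tile { int terrain; int objectId; };

void loadTiles(std::vector<Tile>& tiles, int mapWidth, int mapHeight)
{
    // One allocation up front instead of the repeated grow-and-copy
    // cycles that plain push_back would otherwise trigger.
    tiles.reserve(static_cast<std::size_t>(mapWidth) * mapHeight);

    for (int i = 0; i < mapWidth * mapHeight; ++i)
    {
        Tile t{};            // in a real loader this comes from the file
        tiles.push_back(t);  // no reallocation happens now
    }
}
```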

Finally, make sure you're walking memory linearly -- sometimes people walk memory down their columns rather than down their rows. When you start talking about contiguous memory regions larger than 64k, you're guaranteed to be blowing out your L1 data cache if you're walking memory the wrong way; you'll have a cache miss literally every time (and that's on the newest Haswell processors, too). You'll probably see adverse effects even at 32k due to other tasks on the CPU. At the sizes you're talking about, between 128k and 256k, that's large enough to even impact the L2 cache. If you're doing that wrong, fixing it will be another worthwhile win -- the effect on load time will be nice, but properly walking memory during runtime will pay big dividends too.
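To make the traversal-order point concrete, here's a sketch assuming tiles are stored row-major (index = y * width + x); only the loop nesting differs between the two versions:

```cpp
// Hypothetical tile; only its layout in a flat array matters here.
struct Tile { int terrain; };

// Assumes row-major storage: index = y * width + x.
void touchAllTiles(Tile* tiles, int width, int height)
{
    // Cache-friendly: the inner loop advances along x, so memory is
    // touched front to back in sequential order.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            tiles[y * width + x].terrain += 1;

    // Cache-hostile version (don't do this): swapping the loops strides
    // `width` elements per step, missing cache on nearly every access
    // once the map is larger than the cache.
    // for (int x = 0; x < width; ++x)
    //     for (int y = 0; y < height; ++y)
    //         tiles[y * width + x].terrain += 1;
}
```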

throw table_exception("(╯°□°)╯︵ ┻━┻");

Have you actually measured what's being slow?

For example, a gzip-compressed text file can easily load far faster than certain binary formats. The CPU can do an awful lot of decompression and text parsing in the time it takes to load one file off of disk. Just switching to a binary file may help, but not as much as just gzipping your current file (text files tend to compress better than binary ones, though you can of course also compress binary files and compare the two to see which works better for your data).

Time how long it takes to read the data. Time how long it takes to parse that data. Time how long it takes to populate the game objects in your world. The actual slowdown could be something silly like your "insert physics object" call taking >1ms each (I've seen it happen in weird cases on engines that have shipped far larger AAA games... weird stuff happens, and you need to be experienced at profiling to find it).
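A minimal sketch of that kind of stage-by-stage timing using std::chrono (the stage names and functions in the usage comment are hypothetical):

```cpp
#include <chrono>
#include <cstdio>

// Run one stage of the loader and print how long it took.
template <typename F>
void timeStage(const char* label, F&& stage)
{
    auto start = std::chrono::steady_clock::now();
    stage();
    auto end = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::printf("%s: %.2f ms\n", label, ms);
}

// Usage (names are placeholders for your own loading steps):
// timeStage("read file",     [&]{ raw = readFile("map01.dat"); });
// timeStage("parse",         [&]{ map = parse(raw); });
// timeStage("spawn objects", [&]{ world.populate(map); });
```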

Ravyne's advice is sound, but it's an awful lot of work that might be unnecessary to get your game to an acceptable performance envelope. Figure out your actual problem then just fix that until you have the time to fully revisit your asset conditioning and content loading pipelines.

Sean Middleditch – Game Systems Engineer – Join my team!

Servant of the Lord, yes, they are text files, although by using Boost, switching to binary is very simple.

Hold on a second. I don't mean just switching from Load(MyTextFile, AsText) to Load(MyTextFile, AsBinary). It's still a text file (even if you save it as "binary").

I mean rewriting your file format entirely, so nothing is stored in strings, and things are stored directly as integers and bools and whatnot.

I don't mean saving it as an integer stored as a string saved as a 'binary' file. I mean zero conversion from strings to integers.

It's almost certainly the string parsing that is killing your load times.

If you have hundreds of files you're trying to load at once, it's probably the hard drive overhead for each file. But if you have a map stored in a single file, and that single file is taking a long time to load, you need to redesign your file format, not just swap out one function call for another. You need to write it so you don't have to process the loaded data (converting strings to ints or whatever). Load it already in a suitable format.
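For illustration, a minimal sketch of the kind of "zero conversion" format being described, assuming a hypothetical fixed-size plain-old-data tile record, same-platform saving and loading (no endianness handling), and minimal error checking:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical fixed-size record; field names are made up.
struct TileRecord { uint16_t terrain; uint16_t objectId; };

// Write a tile count followed by the raw records -- no string conversion.
bool saveMap(const char* path, const std::vector<TileRecord>& tiles)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    uint32_t count = static_cast<uint32_t>(tiles.size());
    std::fwrite(&count, sizeof(count), 1, f);
    std::fwrite(tiles.data(), sizeof(TileRecord), tiles.size(), f);
    std::fclose(f);
    return true;
}

// Read the count, size the vector once, then read the records straight in.
bool loadMap(const char* path, std::vector<TileRecord>& tiles)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    uint32_t count = 0;
    std::fread(&count, sizeof(count), 1, f);
    tiles.resize(count);
    std::fread(tiles.data(), sizeof(TileRecord), count, f);
    std::fclose(f);
    return true;
}
```

Storing the count in the header also covers the pre-allocation advice from earlier in the thread: the loader sizes the vector exactly once.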

Compression is likely not the issue. Databases are likely not the solution.

170 KB? Not a problem. A single photograph can be several MB (ten times your map file, and that's with compression), but ten-year-old computers can open them nigh-instantaneously.

Post your map-loading function.

SeanMiddleditch, thank you for your input. All things considered, compression is an obvious benefit in this scenario. I've tested all of the approaches and functions, and it is most definitely the loading from disk.

Servant of the Lord, thank you as well. I would not have imagined that switching from strings to a binary format would be another approach.

Obviously, I have much to learn about this. Everyone here has given me a wealth of information on the subject, and I've found all of it useful; overall it's a better approach than what I was using. Not only that, it meets all of my optimization goals (even the lofty ones).

Will leave an update after everything is switched over, with time differences, just for reference's sake.

Thanks again!


Ravyne's advice is sound, but it's an awful lot of work that might be unnecessary to get your game to an acceptable performance envelope. Figure out your actual problem then just fix that until you have the time to fully revisit your asset conditioning and content loading pipelines.

In this case, it's sounding more and more like compression would act as a sort of magic bullet. If the OP is okay taking a compression library as a dependency, it might really be the path of least resistance while also giving the biggest gains -- and as you said, if the data is semi-regular in either form, compressed ASCII ought to be just a bit larger than compressed binary (the number of dictionary entries should be the same or about the same, but the entries themselves will likely be a bit larger). I generally assume that people want to put off dependencies if they can, which is where the other advice is relevant.
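If zlib were that dependency, its one-shot API is about as simple as it gets. A minimal sketch (error handling omitted; note that the original size has to be stored somewhere, such as the file header, for decompression):

```cpp
#include <zlib.h>

#include <cstddef>
#include <vector>

// Compress a serialized map buffer with zlib's one-shot API.
std::vector<unsigned char> packBuffer(const std::vector<unsigned char>& raw)
{
    uLongf packedLen = compressBound(static_cast<uLong>(raw.size()));
    std::vector<unsigned char> packed(packedLen);
    compress(packed.data(), &packedLen, raw.data(),
             static_cast<uLong>(raw.size()));
    packed.resize(packedLen);  // shrink to the actual compressed size
    return packed;
}

// Decompression needs the uncompressed size, read from the file header.
std::vector<unsigned char> unpackBuffer(const std::vector<unsigned char>& packed,
                                        std::size_t originalSize)
{
    std::vector<unsigned char> raw(originalSize);
    uLongf rawLen = static_cast<uLongf>(originalSize);
    uncompress(raw.data(), &rawLen, packed.data(),
               static_cast<uLong>(packed.size()));
    raw.resize(rawLen);
    return raw;
}
```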

I think the thing I would do in either case is to just evaluate whether the OP's current format is wasteful -- are you writing 32-bit values to disk when the data would fit in 16 bits? Are you padding ASCII strings with additional "empty" characters to make parsing easier? Stop doing those kinds of things first. Also do pre-allocate large data structures and profile for unexpected bottlenecks (like in your physics-system insert example) -- then, if performance still leaves something to be desired at that point, compression is the next step.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Another thing that might be affecting you if you're a Visual Studio user is _ITERATOR_DEBUG_LEVEL -- in release mode this defaults to level 0, which is no checks, and in debug mode it defaults to level 2, which is full iterator debugging. Iterator performance is markedly different between these two settings; I've seen differences of as much as an order of magnitude. You can also set it to level 1, which performs basic iterator checks that cost less than level 2. (Level 2 requires the debug runtime, so it isn't available in release builds.)

To set these to another value, for each configuration of the solution:

  • In Property Pages / Configuration Properties / C/C++ / Preprocessor / Preprocessor Definitions, add "_ITERATOR_DEBUG_LEVEL=<LEVEL>", where <LEVEL> is the level of iterator debugging that you want for that configuration. (A code-level alternative is sketched below.)
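If it's more convenient, the same macro can be defined in code instead of in the project settings; a minimal sketch:

```cpp
// Alternative to the project setting: define the macro in code. It must
// appear before any standard library include, and it must be identical
// across every translation unit and every library you link against, or
// MSVC will report an _ITERATOR_DEBUG_LEVEL mismatch at link time.
#define _ITERATOR_DEBUG_LEVEL 0
#include <vector>
```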

throw table_exception("(╯°□°)╯︵ ┻━┻");

This topic is closed to new replies.
