Kjansen92

Unity Optimizing 2D Tile Map


Recommended Posts

Hello GameDev community!

 

How many here have valuable experience with 2D tile maps?

 

I have a question that may be valuable to any readers of this page, and a good opportunity for you to share your expertise.

 

To begin, I have what I'll call here an 'Action Screen,' which loads a matrix of what I'll call 'Maps.' Each map is 100 * 100 tiles, and the tiles also contain objects (in this implementation).

 

The Action Screen works great, and allows a player to travel across any number of maps.

 

The challenge is getting it to save and load maps efficiently. Currently, maps are serialized and deserialized as needed, along with a vector of Tile objects within them. 

 

Rendering and displaying are both very fast. The challenge currently lies in loading the maps from a directory (which takes a lot of time, due to the number of objects within them).

 

One solution I've considered is to serialize and deserialize tiles individually, which would work very well for loading, but horribly for saving new maps, which contain unique tiles. Also, that is an ungodly number of individual files.

 

 

Is there anyone out there who might have what it takes to field this one, or some relevant resources?

 

 

Thanks in advance!


It's all going to come down to how big those map files are, how large their dependencies (if any) are, and what their structure is. In general, I wouldn't have thought they'd be large enough to form a bottleneck, so I would guess that you are storing a lot of things in the file, storing it inefficiently, or perhaps booting dependencies of the previously-loaded maps and then reloading them anew -- if true, that is probably the real source of the inefficiency. I think your last paragraph indicates that this is what you're doing.

 

I would absolutely keep tile images separate from their maps. There may be tiles that are unique to a map, but it's still probably better to store them separately. You for sure want to share common tiles between maps, and once you're doing that, making a special case to support tiles also coming from within the map itself is just extra work for no real gain. But you can do that if it floats your boat.

 

If you only have map data inside the file and it's still a bottleneck, then you probably want to look first at whether you are storing your maps efficiently -- preferably in a format that can be loaded directly into memory with minimal translation. Text formats like XML or JSON are right out as a runtime format (though they might be appropriate during production); even binary formats that require fine-grained parsing can be troublesome.

 

If you're sure that the size and shape of your data isn't the problem, then you'll want to look at breaking the map into even smaller sections -- maybe 64x64, 50x50, or 32x32. (By the way, one advantage of moving tile data out of the map file itself is that you can break up the map into smaller sections without affecting which file those unique tiles land in.) Even if you end up loading the same amount of a map, you can load the nearest parts first and continue on your way.

 

Finally, since it's often faster to load and decompress compressed data than it is to read the uncompressed data from disk, compression is another avenue you could explore, either on its own with larger maps or combined with smaller maps for even further reduced load times. Very simple run-length encoding will do pretty well; Huffman encoding will do even better -- it's more complex, but there are very good free libraries for it. You could also use a .zip archive library for all of your game assets and let it handle everything -- again, there are good free libraries for this.

 

 

If you can describe the contents/format/size of your map files in more detail, we can help you out more, or I can give you a better idea of what parts of the above are most relevant to your situation.


Load speeds mostly come down to how your file format is laid out, and the amount of content being loaded.

 

What do your map files look like? Are they text files? If so, you can pretty much get an instant speed boost by using binary files instead.


Servant of the Lord, yes, they are text files, although by using Boost, switching to binary is very simple.

 

Ravyne, the maps do contain a lot of data. Again, there are the 10,000 tiles in each (I know, an atrocity), as well as all of the objects. One example is a forest map (100 * 100) containing about 5,000 trees. Currently, the whole purpose of the tiles is simply rendering: if I know which tiles are active, then I can render just the objects within those tiles.

 

Will look into compressing files, as that touches on one major question I had: having a database that allows loading files as needed, while at the same time being able to compartmentalize them.

 

One other direct application of the same functionality was to create a 300 * 300 tile vector in the Action Screen and pass all of the objects into it. This way, if an AI unit wants to travel from map to map, it won't actually have to load very many objects (just things like buildings and geographical data) in order to determine travel time and long-term pathfinding.

 

Understanding that this is not comprehensive, I will give you the data here so you may be able to help me.

 

Each map is currently 170 KB; removing the Tile portion of each will reduce that to about 100 KB. Saving objects separately will reduce it even more dramatically.

 

The maps contain a few private integers and a string variable, as well as a private vector of objects. When a map is saved, it clears a global vector of objects, passes its objects into it, and then serializes all of the data.

 

The funny thing is that I don't even use tile images yet!

 

Will change serialization to binary, save objects in a folder separate from the map itself, and implement object passing to the Action Screen Tile vector.

 

Thank you for your insights. Just with this, the bottleneck will be almost gone, aside from the need to still load all of a map's objects.

Have you actually measured what's being slow?

For example, a gzip-compressed text file can easily load far faster than certain binary formats. The CPU can do an awful lot of decompression and text parsing in the time it takes to load one file off of disk. Just switching to a binary file may help, but not help as much as just gzipping your current file (text files tend to compress better than binary ones, though you can of course also compress binary files and compare the two to see which works better for your data).

Time how long it takes to read the data. Time how long it takes to parse that data. Time how long it takes to populate the game objects in your world. The actual slow down could be something silly like your "insert physics object" taking >1ms each (seen it happen in weird cases on engines that have shipped far larger AAA games... weird stuff happens and you need to be experienced at profiling to find it).

Ravyne's advice is sound, but it's an awful lot of work that might be unnecessary to get your game to an acceptable performance envelope. Figure out your actual problem then just fix that until you have the time to fully revisit your asset conditioning and content loading pipelines.


SeanMiddleditch, thank you for your input. All things considered, compression is an obvious benefit in this scenario. I've tested all of the approaches and functions, and it is most definitely the loading from disk.

 

Servant of the Lord, thank you as well. I would not have imagined that changing strings into binary format would be another approach.

 

Obviously, I have much to learn about this. Everyone here has given me a wealth of information on the subject, and I've found all of it useful and overall a better approach than what I was using. Not only that, it meets all of my optimization goals (even the lofty ones).

 

Will leave an update after everything is switched over, with timing differences, for reference's sake.

 

Thanks again!



Ravyne's advice is sound, but it's an awful lot of work that might be unnecessary to get your game to an acceptable performance envelope. Figure out your actual problem then just fix that until you have the time to fully revisit your asset conditioning and content loading pipelines.

 

In this case, it's sounding more and more like compression would act as a sort of magic bullet. If the OP is okay taking a compression library as a dependency, it might really be the path of least resistance while also giving the biggest gains -- and as you said, if the data is semi-regular in either form, compressed ASCII ought to be just a bit larger than compressed binary (the number of dictionary entries should be the same or about the same, but the entries themselves will likely be a bit larger). I generally assume that people want to put off dependencies if they can, which is where the other advice is relevant.

 

I think the thing I would do in either case is to evaluate whether the OP's current format is wasteful -- are you writing 32-bit values to disk when the data would fit in 16 bits? Are you padding ASCII strings with additional "empty" characters to make parsing easier? Stop doing those kinds of things first. Also pre-allocate large data structures and profile for unexpected bottlenecks (like in your physics-system insert example) -- then, if performance still leaves something to be desired, compression is the next step.


Another thing that might be affecting you if you're a Visual Studio user is _ITERATOR_DEBUG_LEVEL. In release mode, this defaults to level 0, which is no checks; in debug mode it defaults to level 2, which is full iterator debugging. Iterator performance is markedly different between these two settings -- I've seen differences as large as an order of magnitude. You can also set it to level 1, which performs basic iterator checks that cost less than level 2. You can't use level 2 in release mode.

 

To set these to another value, for each configuration of the solution:

  • In Property Pages / Configuration Properties / C/C++ / Preprocessor / Preprocessor Definitions, add "_ITERATOR_DEBUG_LEVEL=<LEVEL>", where <LEVEL> is the level of iterator debugging that you want for that configuration.
