If by objects you mean things that have a position on the map but are not "tiles" in the strictest sense, then no, you don't need to separate them from the tilemap, although you can if you want. What I meant was that it sounded like your map files also contained the tile/object images themselves, and that's what would be bad.
It looks like you're using about 17.4 bytes per tile space, which could probably be reduced but isn't unreasonable either. My guess is that simply switching to binary will save you a good deal of time; you'll probably see load times cut in half or better from that one change alone. If you can reduce the size of the map file further -- by shrinking the atomic data elements, reducing the map area contained in a single file, or using compression -- you'll see additional gains that are roughly linear (e.g. halving the file size will halve the load time, since IO is such a bottleneck).
Another thing you should consider, which I neglected to mention before, is making sure you pre-allocate all the memory your data structures will need. For example, don't just create an empty vector and push items onto it, letting it grow as needed. That works perfectly well, but it's far more efficient to tell the vector to pre-allocate space for the 10k tiles when you create it -- or before you start loading the tiles in -- and you can still just push items onto it afterwards. Pre-allocate everything you can: if you know how many of something you need before you need space for it, you can pre-allocate it. If your file format doesn't tell you how many tiles/objects to expect, add that information to the file so you can use it. If you can't, pre-allocate to a reasonable guess; you might waste some memory, but that's not a problem unless you're already tight on memory. This will be another worthwhile boost if you're not pre-allocating already.
Finally, make sure you're walking memory linearly -- sometimes people walk down columns rather than down rows. Once you start talking about contiguous memory regions larger than 64k, you're guaranteed to blow out your L1 data cache if you walk memory the wrong way; you'll get a cache miss on literally every access (and that's on the newest Haswell processors, too). You'll probably see adverse effects even at 32k, due to other tasks sharing the CPU. At the sizes you're talking about, between 128k and 256k, that's large enough to impact even the L2 cache. If you're doing this wrong, fixing it will be another worthwhile win -- the effect on load time will be nice, but properly walking memory during runtime will pay big dividends too.