I'm working on an infinite terrain generator, and so far, so good. But I've spotted what is likely to become a problem in the future: the player save data.
The terrain itself reads from Perlin noise, with some modifications that let me control its result. When the player changes anything, that object's relevant data is packed into a ulong and put in a list. Every so many loops, I check the complete list to see whether each altered tile's position is within the render area. If it is, I put it into an array that corresponds with the tile, so my terrain maker can access the data without looping through the entire list. My saved data doesn't attempt to save the entire infinite world, just the player's alterations.
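To make the setup concrete, here's a rough sketch of what I mean (in Python for brevity; the field widths and names are just placeholders, not my actual encoding):

```python
# Sketch of the scheme described above: each alteration is packed into one
# 64-bit integer (assumed layout: 24 bits x, 24 bits y, 16 bits of tile
# data), kept in a flat list, and periodically filtered into a dict keyed
# by tile position so the terrain maker can look alterations up directly.

MASK24 = (1 << 24) - 1
MASK16 = (1 << 16) - 1

def pack(x, y, payload):
    # offset x and y by 2^23 so negative coordinates fit in 24 unsigned bits
    return ((x + (1 << 23)) << 40) | ((y + (1 << 23)) << 16) | (payload & MASK16)

def unpack(packed):
    x = ((packed >> 40) & MASK24) - (1 << 23)
    y = ((packed >> 16) & MASK24) - (1 << 23)
    return x, y, packed & MASK16

def build_render_lookup(alterations, cx, cy, radius):
    """Linear scan: keep only alterations inside the square render area."""
    lookup = {}
    for packed in alterations:
        x, y, payload = unpack(packed)
        if abs(x - cx) <= radius and abs(y - cy) <= radius:
            lookup[(x, y)] = payload
    return lookup
```

The point is that `build_render_lookup` walks the *whole* list every time it runs, which is what worries me.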
Right now there are no problems whatsoever, but I'm only testing, so I have maybe 10-100 tiles in the list, and the periodic "InRenderArea" check only has to process those 10-100 items. But players could accumulate 1,000 to 100,000, or more. Since 100,000 tiles would be 0.8 MB, I'm guessing it would have an impact on performance, given that I'm repopulating my render-area array every 10th loop. I'm not sure how much data is too much to process in a single loop.
I'm not sure of the best way to deal with this. I thought about building a fairly complex system that splits the world up into grids and stores the data in structures that correspond to them. But, being lazy, I'd rather find an easier solution. My lazy solution was to keep a "near list" and a "far list": every loop, process maybe 100-500 "far list" items, and any that are close to the render area get moved into the "near list".
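For what it's worth, the grid idea might be less complex than I feared. A minimal sketch, assuming square chunks keyed by integer chunk coordinates (the chunk size is an arbitrary assumption):

```python
# Instead of one flat list, alterations live in a dict of per-chunk dicts,
# so the render-area pass only visits the handful of chunks overlapping
# the view rather than every alteration the player has ever made.

CHUNK = 32  # chunk side length in tiles (assumed value)

def chunk_key(x, y):
    # floor division keeps negative coordinates in the correct chunk
    return (x // CHUNK, y // CHUNK)

class AlterationGrid:
    def __init__(self):
        self.chunks = {}  # (cx, cy) -> {(x, y): payload}

    def set(self, x, y, payload):
        self.chunks.setdefault(chunk_key(x, y), {})[(x, y)] = payload

    def in_render_area(self, cx, cy, radius):
        """Yield alterations whose tile lies within the square render area."""
        lo_kx, lo_ky = chunk_key(cx - radius, cy - radius)
        hi_kx, hi_ky = chunk_key(cx + radius, cy + radius)
        for kx in range(lo_kx, hi_kx + 1):
            for ky in range(lo_ky, hi_ky + 1):
                for (x, y), payload in self.chunks.get((kx, ky), {}).items():
                    if abs(x - cx) <= radius and abs(y - cy) <= radius:
                        yield (x, y), payload
```

With this, the per-loop cost depends on the render radius, not on the total number of alterations, so 100,000 saved tiles on the far side of the world cost nothing at query time.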
Would this be sufficient to avoid a performance hit (or late updates resulting in missing tiles), or should I go for the more elaborate multi-grid solution?
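The near/far scan I have in mind would be roughly this (batch size and "close" margin are numbers I made up, not tuned):

```python
# Each loop, advance a cursor over one batch of the far list and promote
# anything close to the render area into the near list. A full sweep of
# 100,000 items at 250 per loop takes 400 loops, which is the latency
# risk I'm asking about.

BATCH = 250   # far-list items examined per loop (assumed)
MARGIN = 16   # promote tiles this far outside the render radius (assumed)

class NearFarLists:
    def __init__(self, alterations):
        self.far = list(alterations)   # (x, y, payload) tuples
        self.near = []
        self.cursor = 0

    def step(self, cx, cy, radius):
        """Process one batch of the far list, promoting close items."""
        end = min(self.cursor + BATCH, len(self.far))
        keep = []
        for item in self.far[self.cursor:end]:
            x, y, _ = item
            if abs(x - cx) <= radius + MARGIN and abs(y - cy) <= radius + MARGIN:
                self.near.append(item)
            else:
                keep.append(item)
        # splice unpromoted items back, advance the cursor, wrap at the end
        self.far[self.cursor:end] = keep
        self.cursor += len(keep)
        if self.cursor >= len(self.far):
            self.cursor = 0
```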
I apologize for the lengthy preamble, but I figured details were necessary.