So, someone was asking about how the game Factorio keeps tons of data and AI loaded in memory, seemingly without having enough RAM to hold it all. Using a simple image example, I think I understand what they do, but I'd like to hear some input (because I'm a newbie and could be talking nonsense):
So, instead of having:
1 address per tilesheet, or one for all of them (since you may or may not use only one sheet per map type)
alternatively, x addresses for x individual tiles of that/those sheets (since you may not want to search the tilesheets every time, especially when we're talking about other data structures that are bigger and/or more abstract/complex than images)
2 for the double buffer (for anything that needs buffering)
1 for the applied data (the rendered image, etc.)
You now have:
1 address per tilesheet
1 for ALL tiles
1 for a preBuffer (where you put a tile address's data before it gets overwritten by the new data)
2 for the double buffer (the first of which receives the preBuffer data)
1 for rendered image
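If I'm understanding my own scheme right, the saving basically comes from what's usually called the flyweight pattern: keep one copy of each tile's actual data, and have the map store only small indices into that shared table. A toy Python sketch (all the names and sizes here are made up by me, nothing to do with Factorio's real code):

```python
# Flyweight-style tile storage: one copy of each tile's pixel data,
# and the map grid stores only small indices into that shared table.
# All names and numbers are illustrative, not from any real engine.

TILE_W, TILE_H = 32, 32

# "1 address per tilesheet": one shared list of tile bitmaps.
tilesheet = [bytes([i]) * (TILE_W * TILE_H) for i in range(16)]  # 16 dummy tiles

# "1 for ALL tiles": the whole map is just indices into the sheet.
map_w, map_h = 1000, 1000
tile_map = [0] * (map_w * map_h)  # one small int per cell

# A naive approach would store a full bitmap per cell instead:
naive_bytes = map_w * map_h * TILE_W * TILE_H
shared_bytes = map_w * map_h + sum(len(t) for t in tilesheet)
print(naive_bytes // shared_bytes)  # roughly a 1000x saving for these numbers
```

The trade-off is exactly the one I'm asking about below: every lookup now goes through an extra level of indirection (index into the sheet) instead of reading the data in place.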
Now, you can probably see where I'm going with this once you consider things that are more dynamic across the worldspace, and not just output to the screen (logical changes, AI, and such).
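The double-buffer part of the idea carries over to logic/AI updates too, not just rendering: every entity reads the *previous* tick's state and writes into the *next* one, then the buffers swap, so update order within a tick doesn't matter. A minimal sketch under those assumptions (the one-dimensional "machines" and the toy rule are entirely mine):

```python
# Double-buffered tick update: read from `current`, write into `nxt`,
# then swap references instead of copying. Names are illustrative only.

current = [0, 1, 0, 1, 0]   # e.g. on/off state of some machines
nxt = current[:]            # second buffer, same shape

def step(state, i):
    # toy rule: a cell turns on if its left neighbour was on last tick
    return state[i - 1] if i > 0 else state[i]

for tick in range(3):
    for i in range(len(current)):
        nxt[i] = step(current, i)
    current, nxt = nxt, current  # swap buffers, no data copied
```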
My question is:
While this should (I assume) be good for saving memory, I'm thinking it'll probably eat into the processing budget a lot, since you're doing more work per second? But in general, can having multiple scopes of memory management within one scope of statements be useful?
I mean, for Factorio it shouldn't be that bad, since it's probably not the most demanding code anyway, and you may still want to design LOD/mipmapping not just for rendering but for AI and other things too.
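By "LOD for AI" I mean something like update-rate LOD: entities far from the player get stepped on fewer ticks, so the per-second cost scales down with distance. A toy sketch (the distance thresholds and intervals are numbers I made up):

```python
# Update-rate LOD: far-away entities are updated less often.
# Thresholds and intervals below are made-up illustrative values.

def update_interval(distance):
    if distance < 100:
        return 1    # near: update every tick
    elif distance < 500:
        return 4    # mid-range: every 4th tick
    return 16       # far: every 16th tick

entities = [{"dist": 50, "ticks": 0},
            {"dist": 300, "ticks": 0},
            {"dist": 900, "ticks": 0}]

for tick in range(64):
    for e in entities:
        if tick % update_interval(e["dist"]) == 0:
            e["ticks"] += 1  # stands in for the real AI work
```

Over 64 ticks the near entity does 16x the work of the far one, which is the kind of budget saving I'm imagining.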
Just a newbie/sophomoric question. Thanks for any input.