Posted 07 February 2014 - 12:25 AM
What exactly do you mean by "load"? You say you want to load a 512x512 grid of tiles, but you also assume they are so large that you cannot have more than a few in memory at the same time.
If you're just processing them, you'll want the data you actually need for all 512x512 tiles stored in one contiguous block of your data file, so you can take one disk hit instead of roughly 250,000 of them. As an example, game programmers often do this with height information, pathing data, and tons of other things by just packing the data into a texture. An added bonus is that the data is then easily accessible in shaders if we want it. You can do this with any data you want.
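To make the idea concrete, here's a minimal sketch in C++. The file layout is an assumption for illustration: one float of height data per tile, for the whole 512x512 grid, packed contiguously in row-major order, so a single fread replaces a quarter-million tiny reads. The name LoadHeights and the path are hypothetical.

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Assumed layout: one float per tile for a 512x512 grid,
// packed contiguously in row-major order.
constexpr int kGridSize = 512;

std::vector<float> LoadHeights(const char* path) {
    std::vector<float> heights(kGridSize * kGridSize);

    FILE* f = std::fopen(path, "rb");
    if (!f) return {};

    // One big block read instead of ~262,000 per-tile disk hits.
    size_t got = std::fread(heights.data(), sizeof(float), heights.size(), f);
    std::fclose(f);
    if (got != heights.size()) return {};

    // heights[y * kGridSize + x] now holds the height of tile (x, y).
    return heights;
}
```

The same pattern works for pathing flags or any other per-tile attribute; the only requirement is that you decide the layout up front so the reader and writer agree on it.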
Beyond that, make sure you're doing block reads instead of byte-by-byte or variable-by-variable reads. Read the whole file into a char* buffer and sscanf off of it, or use a std::stringstream if you don't like sscanf, or whatever the equivalent is in your language if you're not using C/C++.
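For the stringstream flavor of this, a rough sketch: pull the entire file into memory in one shot, then parse values out of the in-memory buffer. The whitespace-separated-integers format and the function name ParseNumbers are assumptions for the example.

```cpp
#include <cassert>
#include <fstream>
#include <sstream>
#include <vector>

// Assumed format: whitespace-separated integers.
std::vector<int> ParseNumbers(const char* path) {
    std::ifstream file(path, std::ios::binary);

    // One bulk read of the whole file; no per-value disk hits.
    std::stringstream buffer;
    buffer << file.rdbuf();

    // All parsing now happens against the in-memory buffer.
    std::vector<int> values;
    int v;
    while (buffer >> v) values.push_back(v);
    return values;
}
```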
If that doesn't get the performance you want, you should really provide more details. Optimization is very application-specific, so it's hard to guess what will get you orders-of-magnitude improvements, or whether that's even possible, without knowing what you need to do.