
# binary save, adding vars, and changing savegame file formats

## 29 posts in this topic

One of the problems I run into building games, and have never found a satisfactory solution for, is save games: getting saves to run fast by using binary files, adding variables to structs that get saved, the resulting file format changes, and their impact on saved playtest games.

Long-term playtesting is done concurrently with development. Many added features require additional variables that need to be loaded and saved. Each time you add a variable, the save game file format changes, and you have to convert your save game files, use a game editor to edit a new game up to where you were, or add a one-shot routine that "edits up" a new game after it starts.

All fine and good until the file size gets big and your "quick save" is pushing 10 seconds, almost enough time to grab a beer, and definitely too much if you're doing it every 30 seconds or so because you die a lot. You spend a lot of your testing time waiting for saves.

So then you go binary, and your 10+ second save drops to 2-3 seconds, quite reasonable for capturing the entire state of a simulation program. But as soon as you add a new variable to a struct that already gets saved, the file format changes. Then you need old and new versions of the struct declaration, plus load and save routines that convert the saved playtest games.

This often drives me to reuse or double up on the usage of existing variables that already get saved, to avoid converting save games. However, I consider this poor design. While doubling up on variable usage does speed up loads and saves and avoids file format changes (which is the reason I do it), it must be done with care to ensure the two usages of the variable never overlap, and it requires good commenting to make clear what's going on. This leads to readability and maintainability issues, hence poor design.

Another thing I've tried is adding filler fields to the end of struct declarations, such as: unsigned char filler[100]; // 100 bytes of filler

Then when you add a new variable, you take it out of the filler, and the file format doesn't change. The problem with this is that adding enough filler to guarantee room to grow slows the binary save down to almost the speed of the text-file save.

Another thing I've considered is declaring structs with just a few obvious variables plus something like a data[10] array to hold whatever else is needed. As new vars are needed, data[n] is assigned that duty.

here's the challenge:

You want a way to save and load data so that you can add a new variable to the file format and easily convert all old-format files to the new format, with less hassle than the usual routine for changing binary file formats (i.e. old and new format declarations and versions of load and save). I found that with text files I could just add new variables to the end of load and save: you init a blank game to default values, then load. Any variables not in the file because it's an old format simply don't get loaded, and the default values already set are used. I guess I'm looking for similar functionality at binary speeds.

Does anyone have any suggestions?

Clever database tricks were never my strong suit.


##### Share on other sites
For structs, there are generally two problems:

- You add a new field, but the old file doesn't have it.
- You remove a field, but the old file still has it.

There are two main types of implementations I've seen:

=== Option 1 ===

Keep a version number for each struct, serialize the version number (either per struct or once for the entire save file), then write a deserializer which can handle any version up to now. The main benefit here is that you can fread the entire struct if its version number matches your code's struct. If not, you fall back to a generalized reader which if/else reads each individual field based on whether it existed at that version or not (including skipping fields that you've removed). The idea is that it only uses the slow deserializer when the app gets patched, and then when you overwrite the save file, it will be blazing fast again.

Terraria uses the if/else approach, but most of its loading time is spent rebuilding data that isn't actually saved to the file.

The code can look like:
if (version == kCurrentVersion)
{
    ReadEntireStruct(); // fread in C; other languages may or may not have a fast equivalent.
}
else
{
    // Illustrative field-by-field fallback; the reader names are placeholders.
    ReadOriginalFields();

    if (version > 12)
        ReadFieldAddedInVersion13();

    if (version > 13) // Needed to increase bitfield size to 32 bits from 16
        ReadFlagsAsInt32();
    else
        ReadFlagsAsInt16();
}

=== OPTION 2 ===

Each field keeps a 'field id', and the serialized struct stores that field ID before the field contents. In addition, if you want the ability to deserialize "unknown" (removed) fields, you need to add more data that indicates how long (or what type) the field is.

The code here can be written in various ways. For example, protobuf.net can generate deserialization classes on the fly and compile them with JIT.
while (fieldInfo = ReadFieldId())
{
switch (fieldInfo.id)
{
case 1: field1 = ReadString(); break;
case 2: field2 = ReadInt32(); break;
case 3: field3 = ReadInt16(); break;
case 4: field3 = ReadInt32(); break;   // notice the field id also had to change when changing the field's type.

default: SkipField(); break;
}
}


##### Share on other sites
It's a basic serialization problem. Google can find a few bajillion web pages on serialization methods.

The solution I prefer is to store objects in the format:

{size, id, version, key1, a, key2, b, key3, c ... }

Read and write values by keys.

When serializing an object give it a writer object that accepts keys and values. When you get that back, encode the size, version, and then the table of values.

When deserializing the object it reads the next /size/ bytes, the ID so it knows what to write, the version in case you need to completely invalidate the values with a different structure version, followed by the table of values.

You then can have a family of functions:
bool writer.WriteBool ( u32 uniqueKey, bool value) {}
bool writer.WriteInt16( u32 uniqueKey, s16 value) {}
bool writer.WriteUInt16( u32 uniqueKey, u16 value) {}
bool writer.WriteInt32( u32 uniqueKey, s32 value) {}
bool writer.WriteString( u32 uniqueKey, const string& value) {}
....

...

If a value wasn't written in the last save version, or if the version doesn't match, you can replace it with your own expected default.
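The write side of this scheme can be sketched in C. All names here are illustrative, and the exact layout, {size, key, payload} per field, is an assumption based on the format described above:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative writer: each value is appended as {size, key, payload},
   so a reader that doesn't recognize a key can still skip the field. */
typedef struct { unsigned char *data; size_t len, cap; } Writer;

static void w_bytes(Writer *w, const void *src, size_t n)
{
    if (w->len + n > w->cap) {                  /* grow the buffer as needed */
        w->cap = w->cap ? w->cap * 2 : 64;
        while (w->cap < w->len + n) w->cap *= 2;
        w->data = realloc(w->data, w->cap);
    }
    memcpy(w->data + w->len, src, n);
    w->len += n;
}

int writer_write_int32(Writer *w, uint32_t unique_key, int32_t value)
{
    uint32_t size = sizeof value;  /* per-field size lets readers skip unknown keys */
    w_bytes(w, &size, sizeof size);
    w_bytes(w, &unique_key, sizeof unique_key);
    w_bytes(w, &value, sizeof value);
    return 1;
}
```

Once the object's table is complete, the outer {size, id, version} header would be prefixed in front of the accumulated buffer before writing it out.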

##### Share on other sites

The solution I prefer is to store objects in the format:

{size, id, version, key1, a, key2, b, key3, c ... }

How do you handle the case where an unknown key occurs? It seems like you have to abandon the remainder of the struct at that point because you don't have info to skip individual fields.

Do your keys include some bits that identify the field's size?

##### Share on other sites

The solution I prefer is to store objects in the format: {size, id, version, key1, a, key2, b, key3, c ... }

How do you handle the case where an unknown key occurs? It seems like you have to abandon the remainder of the struct at that point because you don't have info to skip individual fields. Do your keys include some bits that identify the field's size?
As stated in the first post, when loading a value there is a default value parameter if the key/value pair does not exist.

So perhaps:

If the field exists then mystruct.Color is loaded with the rgba value that was persisted. If the field does not exist, then mystruct.Color is set to a default value.
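A minimal C stand-in for that behavior, with hypothetical names (kColorField, kDefaultColor, SavedField are mine, not from the poster's code), might look like:

```c
#include <stdint.h>

/* Illustrative constants; not from an actual codebase. */
enum { kColorField = 12 };
static const uint32_t kDefaultColor = 0xFF00FF00u;

typedef struct { uint32_t key; uint32_t value; int present; } SavedField;

/* Returns the persisted value when the key exists in the save data,
   otherwise the caller-supplied default. */
uint32_t read_uint32_or_default(const SavedField *saved, int count,
                                uint32_t key, uint32_t fallback)
{
    for (int i = 0; i < count; i++)
        if (saved[i].present && saved[i].key == key)
            return saved[i].value;
    return fallback;
}
```

So the load line would be something like `mystruct.Color = read_uint32_or_default(saved, n, kColorField, kDefaultColor);`.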

##### Share on other sites

For structs, there are generally two problems:

- You add a new field, but the old file doesn't have it.
- You remove a field, but the old file still has it.

Yep, that's it in a nutshell. Actually, it's case #1 that's my big problem. If I stop using a var, I just leave it in the save game file. When I need a new var, I already have a slot ready to go (assuming sizeof() is the same). Anything unused gets stripped out at the end, in the final format.

Keep a version number for each struct

This is the hassle I'm trying to avoid.

Granted, it's not that much work to copy and paste a struct, call it struct2, add a field, copy and paste load and save game, call them loadgame2 and savegame2, and change them to use struct2. But after a while you get a lot of struct declarations. Since I try to keep all in-house saved playtest games updated as the file format changes, I usually only have 2 versions of a struct going at once. And all that copying and pasting is much more work than adding writefileint(f,newvar); and newvar=readfileint(f);.

Each field keeps a 'field id', and the serialized struct stores that field ID before the field contents.

Saving more data slows things down, the same problem as adding filler. I'm already using fwrite_nolock to get 3-second save times. Plus you can't get that nice big write of an entire array at once, and it's probably as much work to code as keeping multiple versions of structs. It would be nice to have the speed of binary and the ease of coding of text.

I did consider something along these lines: instead of saving an entire struct (or array of structs) as one binary chunk of data, you save the individual fields one at a time as binary. No tags, flags, counts, etc., just read and write in the order declared. When you add a var, you add it to the end of the load and save routines. It does require looping through the array again to write the new field, but you're not writing any extra data, and you're still at fast binary speed. But many small writes vs. a few big ones may still be slow. On the load side, you test for EOF and only attempt a read of a new var if not at EOF.
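That per-field scheme can be sketched in C. The helper names mirror the writefileint/readfileint mentioned earlier in the thread, but these implementations are guesses:

```c
#include <stdio.h>

/* Fields are written in declaration order with no tags or counts. A variable
   added in a newer build simply isn't present in an old file, so the read
   hits EOF and the preset default survives. */
void writefileint(FILE *f, int v)
{
    fwrite(&v, sizeof v, 1, f);
}

/* Overwrites *v only if a value could actually be read; at EOF the
   default already stored in *v is left untouched. */
void readfileint(FILE *f, int *v)
{
    int tmp;
    if (fread(&tmp, sizeof tmp, 1, f) == 1)
        *v = tmp;
}
```

The key property: init everything to defaults, then call readfileint in the same order as the saves; new fields at the end of the routine silently keep their defaults when loading an old file.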


##### Share on other sites
I think it's not worth writing whole structs unformatted, even in a binary file. It's totally inflexible and doesn't gain much.

Many small writes is not a problem if you buffer the data, and the processing time to write individual fields is negligible.

For versioning just store a version number in the file and write loading code that can handle old versions.

##### Share on other sites

>> "quick save" is pushing on 10 seconds

Even an inexpensive 5400 RPM HDD can handle around 100 MB/sec in bursts. I really don't think the size of your save data is the real issue. Unless you are trying to save out some massive 600+ MB data files, or you are writing to a relatively slow SD card or USB storage, you likely have a bottleneck somewhere else.

You could test this by commenting out the code that actually writes to files. Profile your app and figure out where the real bottlenecks are taking place.

Over the years I've seen quite a few things that can slow a persistence system down: using the wrong file I/O functions, or using the correct functions poorly, such as writing out single bytes at a time instead of larger blocks; using fancy binary packers to save space (which can make sense if you have tightly enforced file sizes, such as on a game console); using improper buffering techniques, such as a std::vector that did not have reserved space and is forced to resize itself many times over its lifetime; poor navigation of the tree of objects to be saved. Etc, etc.

You cannot just blame the size of an individual struct as the root cause of slow writes. With ten seconds you should easily be able to get a gigabyte or more of raw output.

##### Share on other sites

Many small writes is not a problem if you buffer the data

so if writing one field at a time in binary is too slow, write 2048 bytes to a buffer, then fwrite() the buffer, eh?

>> "quick save" is pushing on 10 seconds

That was the speed of the text-mode save I was using at first. It was easier to add new vars to text load and save, but when I got to 10 seconds, I switched to binary, which was the plan all along. That 10-second text save dropped to 2-3 seconds with binary fwrite_nolock.

Unless you are trying to save out some massive 600+ MB data files, or you are writing to a relatively slow SD card or USB storage, you likely have a bottleneck somewhere else.

Savegame backs up the last save, then overwrites the last save game, then saves the 10 local maps currently cached in memory.

A savegame file is 57 meg. The backup is also 57 meg (obviously). Each local map is 273K. Grand total: 116.73 meg.

I may need to speed up the copy routine used for the backup. It takes about as long as a save. The local maps take practically no time by comparison.

Without the automatic backup, save times would be on the order of 1-2 seconds.


##### Share on other sites

Many small writes is not a problem if you buffer the data

so if writing one field at a time in binary is too slow, write 2048 bytes to a buffer, then fwrite() the buffer, eh?

Yes. Always buffer your I/O. Doing it one item at a time, you will be killed by the overhead of millions of unnecessary function calls. Don't do one thing at a time when you can do millions of things at once for the same effort.

>> "quick save" is pushing on 10 seconds

That was the speed of the text-mode save I was using at first. It was easier to add new vars to text load and save, but when I got to 10 seconds, I switched to binary, which was the plan all along. That 10-second text save dropped to 2-3 seconds with binary fwrite_nolock.

It should not have taken ten seconds.

Our save game system generates about 50MB of data, and writes it out in about a half second, with another half second or so to figure out exactly what to save.  The bottleneck is not writing the data.

Unless you are trying to save out some massive 600+ MB data files, or you are writing to a relatively slow SD card or USB storage, you likely have a bottleneck somewhere else.

Savegame backs up the last save, then overwrites the last save game, then saves the 10 local maps currently cached in memory.

A savegame file is 57 meg. The backup is also 57 meg (obviously). Each local map is 273K. Grand total: 116.73 meg.

I may need to speed up the copy routine used for the backup. It takes about as long as a save. The local maps take practically no time by comparison.

Without the automatic backup, save times would be on the order of 1-2 seconds.

Again, with the PC game I'm currently working on we write about 50MB in a half second.   The bottleneck is not in writing the data to disk; our performance hotspot is pruning the tree of what to include in the save data and what to leave behind.

If your write time really is too large you can buffer the entire save file in memory (100MB is cheap these days with virtual memory) and then start an asynchronous write.  Then the only time required is the time to navigate your object tree and actually serialize the data into a buffer.
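That asynchronous write might be sketched like this, assuming POSIX threads are available; the SaveJob struct and all names here are illustrative, not from anyone's engine:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Serialize the whole save into one malloc'd buffer, then let a worker
   thread do the actual file write while the game keeps running. */
typedef struct { char path[260]; unsigned char *data; size_t len; } SaveJob;

static void *write_job(void *arg)
{
    SaveJob *job = arg;
    FILE *f = fopen(job->path, "wb");
    if (f) {
        fwrite(job->data, 1, job->len, f);
        fclose(f);
    }
    free(job->data);
    free(job);
    return NULL;
}

/* Takes ownership of 'data' (must be heap-allocated). The caller should
   join the returned thread before starting the next save or shutting down. */
pthread_t start_async_save(const char *path, unsigned char *data, size_t len)
{
    SaveJob *job = malloc(sizeof *job);
    snprintf(job->path, sizeof job->path, "%s", path);
    job->data = data;
    job->len = len;
    pthread_t t;
    pthread_create(&t, NULL, write_job, job);
    return t;
}
```

With this, the pause the player sees is only the time to fill the buffer; the disk write overlaps gameplay.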


##### Share on other sites

The solution I prefer is to store objects in the format: {size, id, version, key1, a, key2, b, key3, c ... }

How do you handle the case where an unknown key occurs? It seems like you have to abandon the remainder of the struct at that point because you don't have info to skip individual fields. Do your keys include some bits that identify the field's size?

As stated in the first post, when loading a value there is a default value parameter if the key/value pair does not exist.

So perhaps:

If the field exists then mystruct.Color is loaded with the rgba value that was persisted. If the field does not exist, then mystruct.Color is set to a default value.
That's not what I meant. I meant problem #2 in my first post.

Version 1 of your code has a struct with four fields:

firstname,
lastname,
age,
phone

Then you save this to a file.

Version 2, you alter the format to support multiple phone numbers per person. But when you load the file, the OLD phone field has to be deserialized in order to advance the file pointer correctly. But if your program no longer knows what data type the old "phone" field is, how can you advance the file pointer the correct amount?

##### Share on other sites

Keep a version number for each struct

this is the hassle i'm trying to avoid.

Granted, it's not that much work to copy and paste a struct, call it struct2, add a field, copy and paste load and save game, call them loadgame2 and savegame2, and change them to use struct2. But after a while you get a lot of struct declarations. Since I try to keep all in-house saved playtest games updated as the file format changes, I usually only have 2 versions of a struct going at once. And all that copying and pasting is much more work than adding writefileint(f,newvar); and newvar=readfileint(f);.
You should never need to copy/paste the actual struct. You only have the most up-to-date structs in your actual code, but you have a function which knows which field(s) changed at each version. If the data you're about to load matches your up-to-date version, then you block copy the entire struct. Otherwise you read one field at a time using if/elses.

Each field keeps a 'field id', and the serialized struct stores that field ID before the field contents.

Saving more data slows things down, the same problem as adding filler. I'm already using fwrite_nolock to get 3-second save times. Plus you can't get that nice big write of an entire array at once, and it's probably as much work to code as keeping multiple versions of structs. It would be nice to have the speed of binary and the ease of coding of text.

I did consider something along these lines: instead of saving an entire struct (or array of structs) as one binary chunk of data, you save the individual fields one at a time as binary. No tags, flags, counts, etc., just read and write in the order declared. When you add a var, you add it to the end of the load and save routines. It does require looping through the array again to write the new field, but you're not writing any extra data, and you're still at fast binary speed. But many small writes vs. a few big ones may still be slow. On the load side, you test for EOF and only attempt a read of a new var if not at EOF.
Overall, saving or loading entire structs (or even the entire file at once, then doing pointer fixup) is more suited for constant data that ships with your game, such as levels or scripts. This is generally called "memory-ready" serialization, and doesn't support versioning.

Text files are essentially the same as per-field serialization, with two major disadvantages:
1. You have to parse numeric fields from text, which is costly.
2. You have to manually escape certain data to prevent ambiguity with your text file delimiters (newlines, quotes, commas, whatever).

Buffering your I/O will eliminate the cost of reading each field in a separate call. As long as you assume that only one thread can access the file at a time, you don't need any synchronization, so all your code will be doing is copying memory. Your memory access patterns will be cache-friendly, as well.

##### Share on other sites

That's not what I meant. I meant problem #2 in my first post.

Version 1 of your code has a struct with four fields:

firstname,
lastname,
age,
phone

Then you save this to a file.

Version 2, you alter the format to support multiple phone numbers per person. But when you load the file, the OLD phone field has to be deserialized in order to advance the file pointer correctly. But if your program no longer knows what data type the old "phone" field is, how can you advance the file pointer the correct amount?

There is no file pointer.

Recall per earlier in the discussion we are reading everything in advance, in bulk, as a single large read.

Once this huge block of data is read into memory, each individual object is deserialized.  Each node exists in the save file, with this format:

{ size, ID, key1=a, key2=b, key3=c, ...}

In memory the reader class essentially looks like this:

std::map<uint, void*> entries;  // In reality we have a flat table rather than the collection of pointers required for a map, but it is functionally equivalent.

When the line that reads the color field is encountered, the function will look up kColorField in the map. If the entry exists, it will convert the pointed-to data into an unsigned int and store the value. If the entry doesn't exist, it will set the field to kDefaultColor.

There is no file pointer that needs to be advanced.
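The bulk first pass that makes this possible might look like the following C sketch, under the stated assumption that each field in the in-memory blob is laid out as {u32 size, u32 key, size bytes of data}:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One linear scan over the already-loaded blob records where every field's
   data lives; no file pointer is ever advanced afterwards. */
typedef struct { uint32_t key; uint32_t size; const unsigned char *data; } Entry;

/* Returns the number of fields indexed, up to max_entries. */
int index_fields(const unsigned char *blob, size_t blob_len,
                 Entry *entries, int max_entries)
{
    size_t pos = 0;
    int n = 0;
    while (pos + 8 <= blob_len && n < max_entries) {
        uint32_t size, key;
        memcpy(&size, blob + pos, sizeof size);
        memcpy(&key, blob + pos + 4, sizeof key);
        if (pos + 8 + size > blob_len)
            break;                       /* truncated field: stop indexing */
        entries[n].key = key;
        entries[n].size = size;
        entries[n].data = blob + pos + 8;
        n++;
        pos += 8 + size;  /* the per-field size says where the next field starts */
    }
    return n;
}
```

Unknown keys simply end up as entries no one ever looks up; the per-field size is what lets the scan step over them safely.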


##### Share on other sites

The description of your first pass seems to indicate that it deserializes UNTYPED data into an untyped associative array/map, which the second pass then requests individual elements from and does the type conversion then. That makes perfect sense.

But, how do you determine where the start and end of each field's untyped data is during your first pass?

Or do you literally have a text file with comma delimiters (like JSON)?

##### Share on other sites

The description of your first pass seems to indicate that it deserializes UNTYPED data into an untyped associative array/map, which the second pass then requests individual elements from and does the type conversion then. That makes perfect sense.

But, how do you determine where the start and end of each field's untyped data is during your first pass?

Again, when serializing, you nearly always go in the format:

{SIZE, ID, data }

The first piece of data you read is the size.  You always know exactly how big it is.

##### Share on other sites
OK. So you have the size of each field, not one size for the entire struct. That's what I wanted to clarify.

If you only had the size of the entire struct, but not of each field, you could not skip an individual field unless every field had a delimiter/id or a fixed width or homogeneous type; you would have to skip to the end of the whole struct, abandoning all remaining fields, because you have no way to know where any later field starts.

##### Share on other sites

savegame backs up the last save, then overwrites the last save game, then saves the 10 local maps currently cached in memory.

a savegame file is 57 meg. the back up is also 57 meg (obviously). each local map is 273K. grand total: 116.73 meg.

i may need to speed up the copy routine used for the backup. it takes about as long as a save. the local maps take practically no time by comparison.

without the automatic backup, save times would be on the order of 1-2 seconds.

Why don't you just rename the old file and then write a new one, or create a new one with a temporary name and, if successful, delete the old and rename the new, instead of copying and overwriting?


##### Share on other sites

Why don't you just rename the old file and then write a new one, or create a new one with a temporary name and, if successful, delete the old and rename the new, instead of copying and overwriting?

Yes, I was thinking that a rename, or some type of round-robin file naming convention, might speed things up by reducing it to a single save with no copy.

The whole idea is to make sure the player's progress isn't lost to a power outage during the overwrite of a save game.

One could simply toggle between save1 and save2 for filenames, overwriting the older of the two on save and loading the newer of the two on load.
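That toggle could be sketched in C by stamping each save with an incrementing counter at the head of the file; the file names and the counter header here are illustrative assumptions:

```c
#include <stdio.h>

/* Reads the counter stamped at the head of a slot.
   Returns -1 if the slot doesn't exist or is unreadable. */
static long slot_counter(const char *path)
{
    long c = -1;
    FILE *f = fopen(path, "rb");
    if (f) {
        if (fread(&c, sizeof c, 1, f) != 1) c = -1;
        fclose(f);
    }
    return c;
}

/* Path to overwrite on the next save: the older (or missing) slot. The save
   routine would write max(counter_a, counter_b) + 1 as the new header. */
const char *next_save_slot(void)
{
    return slot_counter("save_a.bin") <= slot_counter("save_b.bin")
         ? "save_a.bin" : "save_b.bin";
}

/* Path to load: the newer slot. */
const char *load_slot(void)
{
    return slot_counter("save_a.bin") > slot_counter("save_b.bin")
         ? "save_a.bin" : "save_b.bin";
}
```

Since a save never touches the newer slot, power loss mid-write can only corrupt the slot being overwritten, and the previous save is still intact to load.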


##### Share on other sites

Here's the code for the binary save in the game. Other than things like only writing active records in the databases, there's not much left to slice out.

Am I correct in assuming that buffering the write of an entire static array of structs will not get any speedups?

void new_savegame_core2(char *s)
{
FILE *f;
f=outfilebin(s);
_fwrite_nolock((unsigned char *)&is_raining,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&creeklvl,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&year,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&day,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&hour,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&minute,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&second,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&frame,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&cm0,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)&pressure,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&temp,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&cloudcover,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&windspd,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&winddir,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&Dpressure,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&watertable,1,(size_t)sizeof(float),f);
_fwrite_nolock((unsigned char *)&difflvl,1,(size_t)sizeof(int),f);
_fwrite_nolock((unsigned char *)savename,1,(size_t)100,f);
_fwrite_nolock((unsigned char *)numconvos,1,(size_t)sizeof(int)*maxcavemen*maxnpcs,f);           // number of conversations today
_fwrite_nolock((unsigned char *)npc,1,(size_t)sizeof(npcrec)*maxnpcs,f);
_fwrite_nolock((unsigned char *)relation,1,(size_t)sizeof(int)*(maxcavemen+maxnpcs)*(maxcavemen+maxnpcs),f);
_fwrite_nolock((unsigned char *)pit,1,(size_t)sizeof(pitrec)*maxstoragepits,f);                 // the list of storepits
_fwrite_nolock((unsigned char *)cm,1,(size_t)sizeof(cavemanrec)*maxcavemen,f);              // the list of bandmembers
_fwrite_nolock((unsigned char *)map,1,(size_t)sizeof(maprec)*mapsize*mapsize,f);      // the world map
_fwrite_nolock((unsigned char *)pmap,1,(size_t)sizeof(maprec)*mapsize*mapsize,f);        // player's world map
_fwrite_nolock((unsigned char *)worldobj,1,(size_t)sizeof(objrec2)*max_world_objects,f);            // the list of world objects
_fwrite_nolock((unsigned char *)animal,1,(size_t)sizeof(animalrec)*maxanimals,f);                   // the list of active animals
_fwrite_nolock((unsigned char *)missile,1,(size_t)sizeof(missilerec)*maxmissiles,f);                 // the list of active missiles
_fwrite_nolock((unsigned char *)plant,1,(size_t)sizeof(plantrec)*maxplants,f);     // location of trees, scrub brush, and jungle trees
_fwrite_nolock((unsigned char *)plant2,1,(size_t)sizeof(plantrec)*maxplants,f);     // location of berry bushes or fruit trees
_fwrite_nolock((unsigned char *)plant3,1,(size_t)sizeof(plantrec)*maxplants,f);     // location of rocks terrain
_fwrite_nolock((unsigned char *)plant4,1,(size_t)sizeof(plantrec)*maxplants,f);     // location of misc rocks (used by dirt terrain)
_fwrite_nolock((unsigned char *)woods_grass_map,1,(size_t)sizeof(int)*woods_grass_map_size*woods_grass_map_size,f);       // plants in woods terrain
fclose(f);
}

##### Share on other sites
You've got thirty-something calls to fwrite. That is overhead that can be trivially avoided.

You know how big things are going to be. Create a really big buffer of that known size, dump everything into that big buffer, and make one call.
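That advice might look like the following C sketch: a growable buffer with one put call per field and a single write at the end. The names are illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Append every field into one big malloc'd buffer, then hand the whole
   thing to a single fwrite. */
typedef struct { unsigned char *data; size_t len, cap; } SaveBuf;

void sb_init(SaveBuf *b, size_t cap)
{
    b->data = malloc(cap);
    b->len = 0;
    b->cap = cap;
}

void sb_put(SaveBuf *b, const void *src, size_t n)
{
    if (b->len + n > b->cap) {          /* grow if the size estimate was low */
        if (b->cap == 0) b->cap = 64;
        while (b->cap < b->len + n) b->cap *= 2;
        b->data = realloc(b->data, b->cap);
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
}

size_t sb_flush(SaveBuf *b, FILE *f)    /* one big write, then free */
{
    size_t written = fwrite(b->data, 1, b->len, f);
    free(b->data);
    b->data = NULL;
    return written;
}
```

In new_savegame_core2, each _fwrite_nolock call would become an sb_put, followed by a single flush of the buffer to the file.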

##### Share on other sites

There are three problems with that code:

1) multiple calls to fwrite instead of a single call as frob mentions.

2) fopen/fclose are horribly slow on most platforms due to permission checks and all that, you really want all the data (and your files) in a single big archive file which you just keep open.

3) just "omg" I want to run and hide after seeing that. :)

Ignoring 1 and 2 for the time being, even in C you can simplify all of that by writing, at worst, a macro. Macros may be evil, but they will save you huge amounts of typing and possibility of error:

#define WRITE_ITEM( type, item ) _fwrite_nolock((unsigned char*)(item), 1, sizeof(type), f)  // one possible completion; the caller passes a pointer, e.g. WRITE_ITEM(int, &is_raining)

That's just scary code to see: all those casts and sizeofs written out over and over.

Worrying about #1 and #2 are not worth discussion at this time.


##### Share on other sites
Maybe I'm biased from my current job, but I totally second the initial comment from Nypyren. The Google Protobuf approach is amazingly powerful, as you can continue to tack on as many additional "optional" fields as you want after the fact. Both pre-version and post-version parsers will ignore unknown field ids (added or removed), so as long as you
1) never change a given field (for field Id 1, the type stays the same.)
2) never re-use a given field number (really the same as 1 restated)
3) use optional as often as possible (because you can't un-required a field later if you have data stored with that field)
Then you avoid nearly all the versioning problems.

If you're really afraid of the data overhead this adds, just shove your data stream through zlib before writing it out to disk.

##### Share on other sites
Generally what I do for buffering is use something like C#'s BinaryWriter and a MemoryStream.

MemoryStream is a wrapper around a byte array which allows the array to dynamically grow, and also keeps a Position pointer, which tracks where to read/write next.

BinaryWriter is a fairly simple class that just converts values to bytes, copies them to the underlying stream, and increments the stream's position pointer as it goes.

After you're done writing to the MemoryStream, you just grab its internal byte array and dump the entire thing into your file in a single call.

It's trivial to do this in C++: you can either use existing stream classes which support most of this, or, if you want to use a C interface, you can write it from scratch with minimal effort (it's a hundred lines of code or less, basically a few dozen functions with 1-3 lines apiece).

##### Share on other sites

You've got thirty-something calls to fwrite. That is overhead that can be trivially avoided.

You know how big things are going to be. Create a really big buffer of that known size, dump everything into that big buffer, and make one call.

good point.


##### Share on other sites

2) fopen/fclose are horribly slow on most platforms due to permission checks and all that, you really want all the data (and your files) in a single big archive file which you just keep open.

Won't that be less bulletproof in case of machine lockup or power outage? I'm trying to design a bulletproof save, so that even a power outage during an overwrite won't wipe the user's game. I would think you could kiss goodbye any file that's open when the power goes out. That was the nice thing about the copy followed by the overwrite: if you lost power during the copy, the original was still there; if you lost power during the overwrite, you had a backup copy. All you lost was the progress the player was trying to save when the power died.

I've now gone to a round-robin naming scheme (save_a and save_b), overwriting the older on save and loading the newer on load. Save is now down to two seconds, which is probably acceptable, but I may try for 1 second.

