
## Recommended Posts

What happens if the computer crashes while a program is writing to a file? My understanding is that only part of the file may end up on disk.

In C++ we can use std::ofstream to write to a file, and if the file already exists its old content is replaced. If a crash happens mid-write, both the new and the old content could be destroyed. At least that's what I think could happen; I'm not sure whether the OS does something to prevent it.

If this happened in a game while saving, it could destroy the whole savefile, and if there are no backups the player has to start from the beginning again.

I thought that maybe we can avoid this problem by first renaming the file we want to replace and then writing the new file; if nothing goes wrong we remove the old file, otherwise we rename the old file back to its original name. Something like this:
```cpp
#include <cstdio>
#include <fstream>
#include <string>

void writeToFile(const std::string& text) {
    std::rename("file.txt", "file.backup");
    std::ofstream file("file.txt");
    file << text;
    if (file) {
        std::remove("file.backup");
    } else {
        std::remove("file.txt");
        std::rename("file.backup", "file.txt");
    }
}
```
Now if a crash happens we still have the old file (under another name), so we have not lost everything. Do you think this will work? What happens if a crash occurs during a call to std::rename? Can the file somehow disappear in that case?

I'm not sure if this is a situation worth worrying about, but I know that savefiles that stop working can be a pain, so I want to minimize the risk if possible.

##### Share on other sites
Are you talking about the program itself crashing while it has a file open, or are you talking about the whole machine going down?

##### Share on other sites
The normal way to handle this is to not touch the original file. Instead, write the data to a file in a temporary location, then atomically rename the temporary file to the original name. The OS filesystem should guarantee that the rename either succeeds completely or does not occur at all. This has two advantages over renaming first:

• The old file is intact right until it is overwritten. For some non-technical users, renaming a file ".backup" is effectively the same thing as losing the file. They will not know how to restore it*.

• If there is a crash, the potentially corrupt file is clearly in a temporary folder, which means the user (or the system) can be reasonably sure that it is safe to remove it.

\* You could write code to automatically detect and restore ".backup" files, but that is more work still.
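The write-to-temp-then-rename scheme described above can be sketched in portable C++. This is only a sketch: the file and function names are made up, and note that C++ iostreams cannot fsync, so real protection against power loss would also need an OS-level flush (fsync/FlushFileBuffers) before the rename.

```cpp
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <stdexcept>
#include <string>

// Sketch of write-to-temp-then-rename (names are illustrative).
// Assumes the filesystem's rename is atomic, which POSIX rename(2)
// guarantees; C++17's std::filesystem::rename replaces an existing
// destination file on both POSIX and Windows.
void writeFileAtomically(const std::string& path, const std::string& text) {
    const std::string tmp = path + ".tmp"; // use a unique temp name in real code
    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        out << text;
        out.flush();
        if (!out) {
            std::remove(tmp.c_str());
            throw std::runtime_error("failed to write " + tmp);
        }
    } // stream is closed here, before the rename
    // Caveat: iostreams cannot force data to the physical disk; for crash
    // safety against power loss, fsync the file via an OS call first.
    std::filesystem::rename(tmp, path); // atomically replaces the old file
}
```

If the program crashes before the rename, the original file is untouched and only the `.tmp` file is left behind.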

##### Share on other sites
See here for some pointers.

In short, it depends completely and entirely on the OS and file system used. C++ alone cannot safely perform such an operation; one needs to use OS-specific facilities.

> I'm not sure if this is a situation worth worrying about but I know that savefiles that stop to work can be a pain so I want to minimize the risk if possible.

An operation is either safe or it's not. File corruption of this type is fairly rare, but it does happen. I once had to manually patch the bytes of a WordPerfect file corrupted by exactly such a problem.

The risk can at the very least be minimized by not reusing old files. That way, if something does go wrong, there are at least previous versions left.
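Keeping a few previous versions around could look something like the rotation below (a sketch only; the file names, `rotateAndSave`, and the rotation depth are made up for illustration):

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Sketch: keep up to `keep` previous versions instead of reusing one file.
// save.txt.1 is the most recent previous version, save.txt.2 the one before, etc.
void rotateAndSave(const std::string& path, const std::string& text, int keep = 3) {
    // Shift save.txt.2 -> save.txt.3, save.txt.1 -> save.txt.2, ...
    for (int i = keep - 1; i >= 1; --i) {
        fs::path from = path + "." + std::to_string(i);
        fs::path to   = path + "." + std::to_string(i + 1);
        if (fs::exists(from)) fs::rename(from, to);
    }
    // Current file becomes version 1, then write the new data.
    if (fs::exists(path)) fs::rename(path, path + ".1");
    std::ofstream(path, std::ios::binary) << text;
}
```

If the newest file turns out to be corrupt, the player can fall back to `save.txt.1` or older.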

##### Share on other sites
ApochPiQ, I'm talking about the whole machine going down.

rip-off, thanks, your scheme is much better. What is the best way to rename the temporary file to the original name? On Windows, std::rename will fail if the destination already exists. Maybe I have to use system-specific functions to do this.

##### Share on other sites
If you're at risk of your machine dying, all bets are off. There are techniques for ensuring files get written correctly even in the case of a program crash (see Structured Exception Handling on Windows, for example), but if the OS itself is crapping out, there's nothing you can do but cross your fingers and hope.

Your best bet is redundancy: have a separate machine that acts as a watchdog, send all your log data to it, and if the primary machine goes down, the watchdog detects the disconnect and flushes to disk. Of course, that only defers the risk; if the watchdog dies, you're up a creek. This quickly devolves into a philosophically impossible problem, and you can sink a ridiculous amount of time into trying to combat the unknown. It really depends on how vital your data is and how much you are willing to spend (in terms of time, effort, and money) to protect it.
