But really, that's overkill.
yeah, probably, but what can i say? i'm a one-percenter. just because the engineering products we develop are for entertainment purposes doesn't mean we should become complacent and not try to deliver as solid a product as possible to the consumer. loading and saving files in a manner that can handle power outages and other crashes is not that difficult.
I think the point people are making here is that the environment of a typical PC gamer's computer is not controlled enough to be able to 100% guarantee that data will not be lost anyway. You may be able to get something incredibly reliable by using the atomic swap idiom on certified hard drive hardware/firmware/drivers with code that's been shown to be bug-free by a theorem prover, the whole thing being backed up by a triple redundant storage system inside a shielded, radiation-hardened nuclear bunker with its own power source.
But the average gamer has a basic hard drive with generic drivers and consumer firmware (which is probably lying to you anyway when asked whether it has actually flushed data to disk) and a power supply that can go out at any instant. Not to mention that your program could simply be killed by a bluescreen or a hard freeze of the system, that the hard drive could fail just as you finish writing to it, or that a cosmic ray could flip a critical bit in memory, causing your program to write corrupted data to disk without even being aware of it.
This is a bit reminiscent of Amdahl's law: you seem to be under the impression that you can achieve 100% reliability simply by writing "good code" for saving files to disk, that there exists some sequence of statements that essentially guarantees no player progress will ever be lost. But it turns out that you don't control all the variables; many environmental factors are independent of your program and will still find a way to screw up the process *no matter* how good your code is. In other words, there is an upper limit on how reliable you can make the process given those factors, and that limit is effectively what you have to "work with". It might be 95%, 99%, or even 99.9% for the everyday user (depending on what your program does, the assumptions it makes, and what it ultimately runs on), but it's not going to be 100%, because there are far more bizarre things that can happen than conveniently timed power outages (with increasingly low probability, thankfully).
In short, don't spend too much time trying to solve issues whose underlying causes you have no control over. Instead, work with what you have to produce something that works almost all of the time under normal circumstances and, should failure occur, gives the user a chance to recover in most cases, again under normal circumstances.

In this particular instance, the usual trick of writing the savefile to a temporary file and "atomically" moving it to the right place once everything has been written works pretty reliably on today's systems. In the unlikely event that something does go wrong in a way that is well-defined at the level of your program, something it can detect and respond to (such as "execution halts while writing to the temporary file, garbage was written, but the temporary file was never renamed to a proper savefile"), the player will hopefully still have their previous savefile to continue from, perhaps along with a notification that something went wrong with the last save, instead of being presented with a corrupted save. So this is a good, cheap and easy solution that seems to work well, and indeed it has been used for many years with very few problems.
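For the sake of concreteness, here's a minimal sketch of that temp-file-plus-rename trick in Python (the function name and temp-file prefix are my own invention, not anything standard). If the process dies mid-write, the old savefile is untouched and only the temp file is garbage:

```python
import os
import tempfile

def atomic_save(path, data):
    """Write `data` (bytes) to a temp file in the same directory as
    `path`, then rename it over `path`. A crash during the write leaves
    the previous savefile intact; only the temp file is lost."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Temp file must be on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".save-tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.replace is atomic on POSIX; on Windows it also overwrites
        # an existing destination in one step.
        os.replace(tmp_path, path)
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)  # clean up the half-written temp file
        raise
```

Note that the temp file is created in the destination directory on purpose: a rename is only atomic within a single filesystem, so writing to some system-wide temp directory and renaming across mount points would silently fall back to a copy.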
If you do want to do even better than that (assuming "better" exists on at least some system/hardware you are targeting), then, as people said before, instead of rolling your own you may want to look into how high-reliability software like databases has solved the problem. Don't be surprised if you find that, practically speaking, their approach is not that much better than yours when you have little to no control over the environment your program runs in; in general it addresses a few more corner cases, has special code for various hardware combinations, and is more conservative overall in how it handles data, covering correspondingly rarer failure cases. (This is not to say that databases are unreliable; it's just to point out that the less control you have, the fewer guarantees you can make, and the less you can do.)
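One of those extra corner cases is durability: a rename only reorders metadata in the OS cache, so a power cut right after the rename can still lose the new save. Database-style code additionally asks the OS to flush both the file contents and the directory entry before declaring victory. A hedged sketch of what that looks like (again, the function name is mine; and the drive firmware may still lie about having flushed):

```python
import os
import tempfile

def durable_save(path, data):
    """Temp-file-plus-rename, plus fsync of the file contents and, on
    POSIX, of the containing directory, so the rename itself is pushed
    toward the disk rather than sitting in the OS cache."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".save-tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # flush file contents to the device
        os.replace(tmp_path, path)
        if os.name == "posix":
            # Persist the directory entry (i.e. the rename) as well.
            dir_fd = os.open(dirname, os.O_RDONLY)
            try:
                os.fsync(dir_fd)
            finally:
                os.close(dir_fd)
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```

Whether those extra fsync calls are worth the latency for a game savefile is exactly the kind of cost/benefit judgment being discussed above.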