(MM)O(RPG) database design

I haven't checked recently to see what kind of checkpointing and save mechanisms exist in current MySQL versions.

If there is no sufficiently fast/non-disruptive mechanism (one where the DB continues to operate while the checkpoint is in progress),
there are ways to program around the limitation (the same applies to a homegrown DB).

You front-end the 'DB' with a mode switch: in one mode writes go directly to the DB (normal operation); in the other the DB is used read-only (while its image is being
checkpoint-copied) and a temporary patch DB is used along with transaction logging. Once the checkpoint copy is complete, the primary DB is updated from the transaction logs
and then resumes normal operation (in the interim the copy is used read-only together with the temporary patch DB, while the log is applied to the master to bring it up to date).
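A minimal sketch of that mode-switch front end, assuming the 'DB' can be treated as a simple key/value store; the class and method names (DBFrontend, begin_checkpoint, etc.) are hypothetical, not from any real library:

```python
from enum import Enum, auto


class Mode(Enum):
    NORMAL = auto()      # writes go straight to the primary DB
    CHECKPOINT = auto()  # primary is read-only; writes go to a patch DB + log


class DBFrontend:
    def __init__(self, primary: dict):
        self.primary = primary   # stand-in for the real primary DB
        self.patch = {}          # temporary patch DB used during the checkpoint
        self.tlog = []           # transaction log accumulated during the checkpoint
        self.mode = Mode.NORMAL

    def write(self, key, value):
        if self.mode is Mode.NORMAL:
            self.primary[key] = value
        else:
            # Primary is read-only while its image is being copied:
            # record the write in the patch DB and in the transaction log.
            self.patch[key] = value
            self.tlog.append((key, value))

    def read(self, key):
        # The patch DB overrides the (frozen) primary during a checkpoint.
        if self.mode is Mode.CHECKPOINT and key in self.patch:
            return self.patch[key]
        return self.primary[key]

    def begin_checkpoint(self):
        self.mode = Mode.CHECKPOINT  # freeze the primary; copy its image elsewhere

    def end_checkpoint(self):
        # Checkpoint copy is done: replay the transaction log onto the primary
        # to bring it up to date, then resume normal operation.
        for key, value in self.tlog:
            self.primary[key] = value
        self.patch.clear()
        self.tlog.clear()
        self.mode = Mode.NORMAL
```

The actual image copy would run in the background between begin_checkpoint() and end_checkpoint() (ideally metered, as sketched further down).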

Separate server machines would host the different image/log/archive file sets, and RAID etc. could handle the simpler failures.

The image copy then gets moved off to archive while the master has already resumed. This meets the frequent-backup requirement with cohesive
checkpoint images, without stopping operations. The size of the DB files is an important limiting factor. I had looked at one case of 4GB (seconds to copy), and that
copy speed was part of the decision of how often you would want to checkpoint -- WITH a metered copy so you wouldn't slow down primary operations (see the sketch below).
Network speed between servers (a 10K backplane would be nice..) to move the copies is another limiting factor.
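A minimal sketch of such a metered copy, assuming a plain file copy is sufficient; the chunk size and bandwidth cap are illustrative values, not tuned recommendations:

```python
import time


def metered_copy(src_path: str, dst_path: str,
                 max_bytes_per_sec: int = 50 * 1024 * 1024,
                 chunk_size: int = 1 * 1024 * 1024) -> None:
    """Copy src to dst in chunks, sleeping as needed to stay under the cap
    so the copy does not starve the primary DB of disk/network bandwidth."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        start = time.monotonic()
        copied = 0
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
            # If we are ahead of the allowed rate, sleep off the difference.
            expected = copied / max_bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
```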

Transaction logging would be the first redundancy mechanism: on a primary DB failure you recover from the last checkpoint image plus the transaction log set
(no data lost, only a short patching delay, which also minimizes the vulnerable period while a new primary image is rebuilt and the redundancy isn't available; further schemes of redundant transaction logging
can also be added). If you want, a second write-through DB image could also be used to harden it.
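A minimal sketch of that recovery path, assuming the checkpoint image is a pickled dict and the log is a pickled list of (key, value) writes recorded since that checkpoint; the file formats and names are hypothetical:

```python
import pickle


def recover(checkpoint_path: str, tlog_path: str) -> dict:
    """Rebuild the primary DB state: load the last checkpoint image, then
    replay every logged write that happened after it was taken."""
    with open(checkpoint_path, "rb") as f:
        state = pickle.load(f)      # last cohesive checkpoint image
    with open(tlog_path, "rb") as f:
        tlog = pickle.load(f)       # writes logged since that checkpoint
    for key, value in tlog:         # replay in order to reach the
        state[key] = value          # pre-failure state (no data lost)
    return state
```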

Of course it is much easier if the native DB has mechanisms like these built in, though how much control you have when integrating ALL the features may be an issue (redundancy + non-disruptive checkpoints).
