
Backup architecture

nfn Veteran

I have two backup servers that pull data from the production VPS.
I use rsnapshot on both servers, so the data is archived the same way and I always have two backups in two distinct locations.

With two servers would you setup a different architecture? If so, what would you do?

Note: I have some GBs of data, so archiving (tar, gz) is not an option
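For reference, a pull setup like the one described boils down to a small rsnapshot.conf on each backup server. A minimal sketch (hostnames, paths and retention counts below are examples, not the poster's actual config; note that rsnapshot requires literal tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf on each backup server (fields TAB-separated)
snapshot_root	/backups/production/

retain	hourly	6
retain	daily	7
retain	weekly	4

# pull from the production VPS over ssh
backup	root@prod.example.com:/var/www/	prod/
backup	root@prod.example.com:/etc/	prod/
```

Because rsnapshot hard-links unchanged files between snapshots, the retained copies cost little extra space even for tens of GB.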

Comments

  • NickMNXio Member, Host Rep
    edited October 2016

    When I architect a backup solution -- there is generally more information required before I can suggest changes:

    • What type of app is running here?
    • What type of data?
    • How much data?
    • How often is the data changing?
    • How much data / time can you afford to lose?
    • What are your recovery time objectives?
    • How often should you send your data offsite?
    • How many copies of your data do you need to retain?

    Many times -- user/admin deletion of data is a cause for restoration. Consider how your current backup strategy deals with this.

    Some additional thoughts...

    • Use Percona innobackupex to perform consistent MySQL backups. Send them offsite, and/or run them from an offsite replica.
    • Keep a replica/slave of your database offsite.
    • Use a tool like R1soft CDP from backupsy.com -- and run the R1soft server on one or both of your offsite servers.
    • Consider using attic-backup or Obnam as part of your strategy.

    And maybe most important... TEST and document your recovery process.
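    For anyone wondering what the innobackupex step looks like in practice, a rough sketch (paths, credentials and the timestamped directory name are examples; this assumes Percona XtraBackup 2.x is installed alongside the running MySQL server):

    ```
    # take a consistent full backup while MySQL keeps serving traffic
    innobackupex --user=backup --password=secret /backups/mysql/

    # replay the log so the copied data files are consistent before any restore
    innobackupex --apply-log /backups/mysql/2016-10-12_03-00-05/

    # then ship the directory offsite, e.g.:
    rsync -a /backups/mysql/ backup1:/backups/mysql/
    ```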

  • @NickMNXio said:
    When I architect a backup solution -- there is generally more information required to make suggested changes [...] And maybe most important... TEST and document your recovery process.

    +1 on this, especially the very last part.

    While not really part of a backup strategy, I also won't and don't use services based on a single disk in the first place.

    As backup, I push rdiff-backups to an external location so I can do various restores from within a time span, with automatic rotation in place. That comes in handy if users mess up their data and need things only partially restored some days later.
    I also tend to keep another mirror of the whole rdiff-backup storage elsewhere.

    Further, I use rsync to mirror the whole box to another location for a second daily mirror. That rsync archive is in turn mirrored to a third location using borg (an attic clone) with deduplication and rotation.

    On the servers themselves I keep rotating MySQL backups done locally with automysqlbackup, and those of course are rsynced and rdiffed to the backup storage with the rest of the box, as described above...

    When it comes to dedicated servers, I use Proxmox to virtualize them and make snapshots via vzdump to different external locations. Using sparse images for the VMs, with the right filesystem for the guest OS, helps keep the size of those backups low ;-)
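    The rdiff-backup/rsync/borg layering described above might look roughly like this (hostnames, paths, retention windows and excludes are illustrative examples, and the borg invocations assume borg 1.x):

    ```
    # push an incremental, rotated backup of the box to location 1
    rdiff-backup --exclude /proc --exclude /sys --exclude /tmp / backup1::/backups/box1/
    rdiff-backup --remove-older-than 4W backup1::/backups/box1/

    # mirror the whole rdiff-backup storage to a second location
    rsync -aH --delete backup1:/backups/box1/ /mirror/box1/

    # deduplicated, rotated copy of that mirror in a third location
    borg create backup2:/backups/borg::box1-{now} /mirror/box1/
    borg prune --keep-daily 7 --keep-weekly 4 backup2:/backups/borg
    ```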

    Thanked by: nfn, karjaj
  • joepie91 Member, Patron Provider

    nfn said: I have two backup servers that pull data from the production VPS.

    For security reasons, I would always go with push backups, specifically to an endpoint of some sort that only accepts writes, not modifications/deletions/listing.

    If your backup server gets compromised, they can't get access to your production system. If your production system is compromised, they can't mess with the backups.

    Thanked by: nfn, deadbeef
  • nfn Veteran

    Thanks for your replies.

    What type of app is running here? XenForo.
    What type of data? PHP files, MySQL dumps and attachments.
    How much data? 60GB.
    How often is the data changing? It's always changing.
    How often should you send your data offsite? I run rsnapshot for all data, and I dump MySQL once a day.
    How many copies of your data do you need to retain? I have 4/4h, with a weekly full and a monthly full.

    I'm thinking about using my Raspberry Pi at home to pull backups to a disk too.

    @joepie91 the backup servers can only execute rsync on the production box. If someone accesses one of the backup servers, they can't SSH to production unless they gain access to the root user.

  • joepie91 said: If your backup server gets compromised, they can't get access to your production system. If your production system is compromised, they can't mess with the backups.

    I GPG the backups on the source, then upload them in one step (tar + gzip and encryption in a single pipe over ssh/scp) to one server, which syncs them to the others.
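    That encrypt-and-ship pipe can be sketched as follows. The demo below uses a symmetric passphrase and a local output file so it is self-contained; in a real setup you would encrypt with `--recipient <your key>` and pipe into `ssh backuphost 'cat > site.tar.gz.gpg'` instead (all paths here are examples):

    ```shell
    #!/bin/sh
    set -e
    mkdir -p /tmp/demo-src /tmp/demo-dst
    echo "site data" > /tmp/demo-src/index.php

    # tar + gzip + encrypt in a single pipe: no plaintext archive touches disk
    tar -C /tmp/demo-src -cz . \
      | gpg --batch --yes --pinentry-mode loopback --passphrase demo \
            --symmetric -o /tmp/demo-dst/site.tar.gz.gpg

    # round-trip check: decrypt and list the archive contents
    gpg --batch --quiet --pinentry-mode loopback --passphrase demo \
        --decrypt /tmp/demo-dst/site.tar.gz.gpg | tar -tz
    ```

    The nice property is that the backup hosts only ever store ciphertext, so a compromised backup server leaks nothing readable.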

    Thanked by: nfn
  • I need to set up a decent backup strategy, I've been winging it for months with a cobbled-together mess...

  • nfn Veteran

    Nekki said: I need to set up a decent backup strategy

    Me too :)

  • Me 3

  • raindog308 Administrator, Veteran

    nfn said: @joepie91 the backup servers can only execute rsync on the production box. If someone accesses one of the backup servers, they can't SSH to production unless they gain access to the root user.

    You missed the point:

    1. You have server1 and you do a backup using rsync.

    2. I break into server1 and nuke your most precious assets.

    3. On my way out, I notice your rsync job in cron, so I helpfully kick it off for you.

    4. rsync works and destroys the backup server's copy.

    5. You have no more precious assets.

    "unless he/she could access the root user"...well, why have that loophole? My backups are exactly as @joepie91 recommends:

    • client can only SFTP via key.

    • client pushes his backup to a directory on the backup server. His account can only use SFTP (locked down in sshd_config) and is chrooted to a directory

    • on the backup server, a job regularly moves and chmods files on that system so they cannot be listed, fetched, or deleted by the client
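    For reference, the sshd side of such a locked-down SFTP account might look like this in sshd_config (the username and path are examples; note that the chroot directory itself must be owned by root and not writable by the client, with a client-writable subdirectory inside it):

    ```
    Subsystem sftp internal-sftp

    Match User backupclient
        ChrootDirectory /srv/backups/backupclient
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
        PasswordAuthentication no
    ```

    The server-side job that moves and chmods incoming files out of the client's reach is then just an ordinary cron script running as root.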

    Thanked by: mehargags, deadbeef
  • @Jorbox said:
    Me 3

    Me 4 - ad hoc backups are not really the way to go .. :(

  • I like to 'pull' changes with rsnapshot onto a LUKS-rootfs Kimsufi box that does NOTHING ELSE.

    Then I encrypt and push changed/new blocks to another DC (time4vps) with borgbackup (attic's modern fork). (Only the borg binary is allowed in authorized_keys on the destination server.)

    That satisfies several criteria of the backup 'rule of 3'.
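    Restricting the key to the borg binary on the destination is done with a forced command in authorized_keys. A sketch, assuming borg 1.x, OpenSSH 7.2+ (for the `restrict` option), and an example repository path (the key material is elided):

    ```
    command="borg serve --append-only --restrict-to-path /backups/client1",restrict ssh-ed25519 AAAA... client1-backup-key
    ```

    With `--append-only`, a compromised client can add data to the repository but cannot prune or overwrite existing archives, which is the property that matters here.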

  • With 2 identical servers, I'd set up a complete mirror of the primary (database, CSS, PHP/HTML, etc.). The backup server monitors the primary to make sure everything is OK. If not, the monitoring script should automatically change the DNS record to point to the backup server. You should also display a notification message on the index.php/html page saying that visitors are on the backup server because there's an issue with the primary server.

    This gives you 2N redundancy + failover. If you use the plain backup method instead, it takes time to get your main server back online (a hardware failure can take a very long time) plus the time to restore your backup.
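    A minimal sketch of such a monitor, run from cron on the backup server (the health-check URL is a placeholder, and `update-dns-record` is a hypothetical helper standing in for whatever API call your DNS provider offers):

    ```shell
    #!/bin/sh
    # fail over only after three consecutive failed checks, to avoid
    # flapping on a single dropped request
    PRIMARY_URL="https://example.com/health"   # placeholder URL

    for attempt in 1 2 3; do
        if curl -fsS --max-time 10 "$PRIMARY_URL" > /dev/null; then
            exit 0                  # primary is healthy, nothing to do
        fi
        sleep 30
    done

    # three failures in a row: point DNS at the backup server
    # (hypothetical command -- substitute your DNS provider's API)
    update-dns-record example.com A 203.0.113.2
    ```

    Keep the DNS record's TTL low (e.g. 60s) beforehand, or the switch will take far longer to propagate than the outage check itself.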

    Thanked by: Jorbox
  • @black said:
    With 2 identical servers, I'd set up a complete mirror of the other (database, css, php/html/etc) [...] This gives you 2N redundancy + failover.

    What will happen to the changes on server2 if the data is updated while server1 is off? Will those changes survive when server1 comes back up, or will server2 restore the old data from server1 and wipe out the edits?

  • Jorbox said: What will happen to the changes on server2 if the data is updated while server1 is off? Will those changes survive when server1 comes back up, or will server2 restore the old data from server1 and wipe out the edits?


    That would depend on your website and the service you're offering. You have a couple of options.

    1) On your backup site, the index.php/html can show an error message saying that the primary server is down and that transactions made on the backup server will not be saved.

    2) You can merge the backup's data with the data on your primary server, which will require some work.

    3) You can replace the primary server's data with the backup's (since the backup will have the most up-to-date information). However, there might be some information loss depending on how often you sync the servers.

    When dealing with databases, it's best to set them up as a redundant system, like master-master replication or whatever you want to use.
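    For the master-master case, the core of the MySQL setup is just a few my.cnf settings per server (the IDs and offsets below are the conventional two-server values; the replication user, GRANTs and CHANGE MASTER statements are omitted for brevity):

    ```
    # server 1, /etc/my.cnf
    server-id                = 1
    log_bin                  = mysql-bin
    auto_increment_increment = 2   # both servers step IDs by 2...
    auto_increment_offset    = 1   # ...server 1 takes the odd IDs

    # server 2, /etc/my.cnf
    server-id                = 2
    log_bin                  = mysql-bin
    auto_increment_increment = 2
    auto_increment_offset    = 2   # server 2 takes the even IDs
    ```

    The increment/offset pair is what prevents both masters from handing out the same auto-increment primary key during a split.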

    Thanked by: Jorbox
  • @black said:
    That would depend on your website and the service you're offering. [...] When dealing with databases, it's best to set them up as a redundant system, like master-master replication.

    Logically it's hard to understand, because server1 will always send the backups to server2, even the old ones.
    Thank you very much for this information.

  • nfn Veteran

    @raindog308 let's suppose server1 is production and we have backup1 and backup2.

    When I said rsync, we're talking about rsnapshot, which does hard-link backups using rsync.
    There is no cron job running on server1.

    Now, since I use pull backups and both backup1 and backup2 pull the same data (they do not sync with each other), if someone breaks into server1 there is no way to get into backup1 or backup2.

    Even if they delete all the data, I have the previous backup intact on both backup servers.

    With push backups, if someone breaks into server1, I think it's easier for them to get into backup1 and backup2 than with pull backups.

    Thanked by: Jorbox
  • raindog308 Administrator, Veteran

    nfn said: With push backups, if someone breaks into server1, I think it's easier for them to get into backup1 and backup2 than with pull backups.

    Not if you restrict the user to SFTP, chroot his access, etc.

    Also, you have a different problem now: if someone breaks into backup1, they can get into server1, server2, server3, ... server1024, etc.

    Really, it's up to you, but there's been a lot of discussion here and people seem to come down on pushing to a server with write-only permissions.

    Thanked by: nfn
  • sin Member

    Right now I just use rsync and automysqlbackup to push to a dedicated backup server and a storage VPS at another provider. It has worked out really well for me, but I plan on redoing the setup.

  • Push or pull -- either is fine; it depends on which servers you trust more. You can also mitigate the security concerns with a PGP-style protocol: have production sign the backup with its private key and encrypt it with the backup server's public key. That way only the production server can create backups, and only the backup servers can decrypt them. An attacker would need to compromise both types of servers to insert a malicious update.

    If you are dealing with a lot of critical backups, consider adding parity data to detect on-disk corruption, or at least use checksums.
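    The checksum idea can be as simple as keeping a manifest next to the backups and verifying it before any restore. A self-contained sketch using example paths and a fake dump file:

    ```shell
    #!/bin/sh
    set -e
    # example backup directory with one (fake) database dump in it
    mkdir -p /tmp/backups
    echo "-- dump data --" > /tmp/backups/db-2016-10-12.sql

    # write a manifest when the backup is taken...
    ( cd /tmp/backups && sha256sum ./*.sql > MANIFEST.sha256 )

    # ...and verify it on the backup host, or before restoring
    ( cd /tmp/backups && sha256sum -c MANIFEST.sha256 )
    ```

    For actual parity (repairing corruption, not just detecting it), a tool like par2 over the backup files does the same job with recovery blocks.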

    My own needs are fairly simple: a cronjob that rclones to Dropbox. But here is a list of backup software from my bookmarks:

    Duplicity and its clones Duplicati and Duplicacy.

    Rdedup, Cryptomator, Borg, Restic.
