
Well, duplicity sucks. Any other backup recs?


Comments

  • I also advocate keeping the backup data in at least two different formats, to mitigate bugs in a format implementation (e.g., a future bug in borg's chunking/checksumming algorithm).

    I have rsnapshot pulls onto a LUKS-encrypted kimsufi, plus a borg push from that kimsufi to time4vps.
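
    A minimal sketch of such a push, assuming borg 1.x and hypothetical host/repo names (the passphrase file is an assumption too):

        # one-time: create an encrypted repo on the remote
        export BORG_PASSCOMMAND='cat /root/.borg-passphrase'
        borg init --encryption=repokey ssh://backup@time4vps.example/./borg-repo

        # nightly: push an archive, then prune old ones
        borg create --stats --compression lz4 \
            ssh://backup@time4vps.example/./borg-repo::'{hostname}-{now}' \
            /etc /home /var/www
        borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
            ssh://backup@time4vps.example/./borg-repo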

  • Try git-annex. It's unfortunately named, but is a very customizable backup option: gpg support, lots of remote sync options.

    raindog308 said: Also, it meant building a chroot environment for the user, which is a maintenance headache (e.g., you have to have a bin, an etc, and so on in the user's chroot, and then either you have 50 copies of that for every user, or one shared environment, or a lot of symlink trickery).

    Can't this be solved by a read-only bind mount, even if you need something that complex?
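
    A read-only bind mount should indeed do it; a sketch assuming one shared base tree at a hypothetical /srv/chroot-base and a user "alice" (the two-step remount is the portable way to make a bind mount read-only):

        # share one base tree into each user's chroot instead of 50 copies
        mkdir -p /home/alice/chroot/bin
        mount --bind /srv/chroot-base/bin /home/alice/chroot/bin
        mount -o remount,ro,bind /home/alice/chroot/bin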

  • emg Veteran

    duplicity: deceitfulness; double-dealing

    Source: The dictionary that comes with the operating system on my computer.

  • raindog308 said: i.e., there's no versioned backups in rsync. Also no full vs. incr. Also no encryption. Also...

    That is why g-d made ZFS.

    https://github.com/willgrz/wBak-Autobackup-ZFS

    500k snapshots of various machines and counting; works fine.
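
    The mechanics underneath are just cheap recursive snapshots; a minimal sketch (pool/dataset names hypothetical):

        # timestamped, recursive snapshot of everything under tank/backups
        zfs snapshot -r "tank/backups@$(date +%Y%m%d-%H%M)"
        zfs list -t snapshot -r tank/backups      # browse the history
        zfs destroy tank/backups@20170201-0300    # prune an old one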

  • @William said:

    raindog308 said: i.e., there's no versioned backups in rsync. Also no full vs. incr. Also no encryption. Also...

    That is why g-d made ZFS.

    Aren't we supporting Israel to help GOD? (Intentionally written to annoy all of you)

  • dfroe Member, Host Rep

    +1 for ZFS.

    On my backup storage KVM VPS I run Debian GNU/Linux with LUKS (for encryption) and ZFS on Linux (for snapshots). Depending on the backup source I use zfs send|recv, rsync for one-way synchronization, or unison for two-way synchronization. Transfers over WAN go through SSH.

    If you know the tools, it's quite easy to write some scripts to automate your particular daily workflows. Personally I prefer self-made scripts built on such tools over an all-in-one tool: own scripts are usually easier to customize, and all-in-one tools rarely match my particular requirements. :)

    I have been using ZFS on FreeBSD for 9 years and ZFS on Linux for 2, and I am still impressed by how rock solid and effective it is.
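
    For example, a one-way zfs send|recv replication over SSH might look like this (dataset and host names hypothetical):

        # initial full replication
        zfs snapshot tank/data@2017-02-11
        zfs send tank/data@2017-02-11 | ssh backup@vps.example zfs recv -u backup/data

        # later: incremental, sends only the delta between two snapshots
        zfs snapshot tank/data@2017-02-12
        zfs send -i tank/data@2017-02-11 tank/data@2017-02-12 \
            | ssh backup@vps.example zfs recv -u backup/data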

  • ZFS is better used on Ubuntu though, especially in the extreme snapshot count scenario.

    There are some operational quirks, like listing snapshots taking 5 minutes and the general RAM weirdness of deduplication, but it works fine on this i7. Notably, I use a hardware RAID5 and give ZFS just the single volume (after LUKS) as a RAID0/single disk; RAID-Z1 or higher would cost a lot of CPU.

    The real saver here is not compression but deduplication, which runs across the snapshots and thus needs only about the same space as the rsync diff (see the sketch at the end of this post).

    I like ZFS on BSD, but I don't like BSD (or Solaris) much, so this was the best option available for now.

    "Full" encryption is obviously not implementable in my setup, as the latest snapshot always needs to be open for the rsync diff. However, the external sync option (badly implemented, but it works) encrypts with GPG (hence AES-NI in most cases, fast) and then syncs the file to the remote.

    This obviously needs 1:1 space for each backup, since syncing incremental encrypted changes as files is mostly pointless, or at least hard to reassemble (multiple sources have to keep working if you recycle old space, and so on; not simple).

    ZFS also adds the ability to use iSCSI or Ceph (GlusterFS and others too) volumes over the network as a pool, and then automatically do either a second backup run (two syncs, maintaining two independent snapshot trees) or a copy of each snapshot (saves inbound bandwidth and some minor disk space) to the network:

    https://prnt.li/f/94dc458af4c2ecd38df31351eb06f7f4-yiedoomu5h.png

    This allows near-live sync to an external node, plus read-only mounts (for this tree; read-write for another local one) on other backup nodes and clients. It's easier than zfs send (not in setup, it being Ceph) and more reliable too.
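
    The dedup sketch mentioned above, with a hypothetical pool name (the dedup table is the RAM hog referred to earlier, so check its size before enabling dedup everywhere):

        zfs set dedup=on tank        # per-dataset property, inherited by children
        zpool get dedupratio tank    # how much dedup is actually saving
        zdb -DD tank                 # DDT histogram: estimate the table's RAM footprint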

  • zfs is an excellent proposition for a VPS as VPSs tend to have plenty of GB of RAM ...

    @raindog308

    I also don't know of any reliable backup solution with a reasonable feature set. There is plenty of "does deduplication and also paints your doors along the way" stuff but nothing I'd consider reasonable, reliable, and practical.

    I myself currently do backups with a mksh script that works through a list of dirs/files to back up. I'm not really happy with it, but it does the job, and reliably so. Maybe something like that could solve your problem too.
    Plus some cron for timing. Example: at 01:20 the DB gets backed up to a backup directory (which is in my list of dirs to back up), incremental daily plus a full backup on Sundays. Then some other scripts run (e.g., reports, logs from the past day, ...), and finally, as the last one, my backup script pushes the whole shebang to my backup server through scp.
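
    A minimal sketch of that pattern, with hypothetical paths and host names:

        #!/bin/mksh
        # read a list of dirs/files, tar each into a staging dir, push via scp
        LIST=/etc/backup.list             # one path per line
        STAGE=/var/backups/staging
        DAY=$(date +%Y%m%d)
        mkdir -p "$STAGE"
        while read -r path; do
            [ -e "$path" ] || continue
            name=$(echo "$path" | tr / _)
            tar czf "$STAGE/$name-$DAY.tgz" "$path"
        done < "$LIST"
        scp "$STAGE"/*-"$DAY".tgz backup@backuphost.example:/srv/backups/

    And the cron side of the schedule above (script names hypothetical):

        # crontab: DB dump first, push last
        20 1 * * * /usr/local/sbin/db-backup.sh
        45 1 * * * /usr/local/sbin/push-backups.sh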

  • WSS Member
    edited February 2017

    @bsdguy said:
    zfs is an excellent proposition for a VPS as VPSs tend to have plenty of GB of RAM ...

    Hammer.

  • @WSS said:

    @bsdguy said:
    zfs is an excellent proposition for a VPS as VPSs tend to have plenty of GB of RAM ...

    Hammer.

    HAMMER site says:

    > General Administrative Notes
    >
    > HAMMER is designed for use on storage media greater than 50GB.

    Plus: Dragonfly is somewhat irky, behind bigger BSDs, and has other problems due to its very small community (but it's certainly a very cool project; just not for professional use)

  • @bsdguy said:
    Plus: Dragonfly is somewhat irky, behind bigger BSDs, and has other problems due to its very small community (but it's certainly a very cool project; just not for professional use)

    I thought you were being snarky ("VPSs tend to have plenty of GB of RAM"), so I suggested a very hungry filesystem.

    I tried to get DFly set up on a 512MB VPS, but it just wouldn't work right, dangit. I respect the author; he's pretty smart, but I still run Theo's and (formerly) jkh's systems for production, except when Debian-based.

  • I don't have any issues running ZFS on 1GB RAM VPSes.
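
    For anyone trying the same, the usual knob is capping the ARC; a sketch for ZFS on Linux (the 256 MB figure is just illustrative):

        # /etc/modprobe.d/zfs.conf -- keep the ARC small on a 1GB box
        options zfs zfs_arc_max=268435456
        # or apply at runtime (value in bytes):
        echo 268435456 > /sys/module/zfs/parameters/zfs_arc_max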

  • @raindog308 said:

    nullnothere said: @raindog308 - did @vimalware's suggestion of running borg serve via authorized_keys not work out?

    nullnothere said: I'm still a bit confused (for lack of a better term) on what the concern is in allowing borg to run similarly

    I confess I didn't look at it deeply, but my impression was that

    • if you want to chroot, you must run borg serve
    • borg serve will chroot for you
    • but the user still must have a valid shell (e.g., bash)
    • hence the user could login normally, which I would prefer to not allow

    Also, it meant building a chroot environment for the user, which is a maintenance headache (e.g., you have to have a bin, an etc, and so on in the user's chroot, and then either you have 50 copies of that for every user, or one shared environment, or a lot of symlink trickery).

    Unless you're running an ancient OpenSSH server, you can block users from getting a shell by configuring an sftponly group once and adding your users there.
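
    A sketch of that pattern (group name and chroot path are just conventions); with internal-sftp there are no binaries to copy into the chroot at all:

        # /etc/ssh/sshd_config -- members of "sftponly" never get a shell
        Match Group sftponly
            ChrootDirectory /srv/chroots/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

    For borg specifically, the equivalent trick is a forced command in the user's authorized_keys, e.g. command="borg serve --restrict-to-path /srv/backups/alice",restrict ssh-ed25519 AAAA... (the restrict option needs OpenSSH 7.2+).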

    @bsdguy said:
    I also don't know of any reliable backup solution with a reasonable feature set. There is plenty of "does deduplication and also paints your doors along the way" stuff but nothing I'd consider reasonable, reliable, and practical.

    I myself currently do backup with a mksh script working through a list of dirs/files to back up.

    Can you feel the cognitive dissonance between the 2 paragraphs? :D

  • lurch Member
    edited February 2017

    Have you looked at syncthing?

  • vimalware Member
    edited February 2017

    @lurch said:
    Have you looked at syncthing?

    Syncing is usually not the ideal paradigm for 'backup'. (Yes, some complex sync software like Seafile and Dropbox does handle file-level versioning, but it's still not ideal for 'backup'.)

    Why no love for good old rsnapshot? Edit: saw a couple of mentions. :)

    It's really nice to simply check a regular Unix folder labelled hourly.0 for that client SQL dump you fat-finger-deleted on Friday night at 3:42am.

    Quick restores let you go to bed sooner.

    Works best from your local NAS/RPi 2/3, and with file counts in the low thousands.
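
    That workflow in concrete terms, with hypothetical paths (note rsnapshot.conf is tab-separated):

        # rsnapshot.conf excerpt
        retain  hourly  6
        retain  daily   7
        backup  root@prod.example:/var/backups/    prod/

        # a restore is just a copy out of a plain directory tree
        cp -a /srv/rsnapshot/hourly.0/prod/var/backups/clients.sql.gz /tmp/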

  • @deadbeef said:

    @bsdguy said:
    I also don't know of any reliable backup solution with a reasonable feature set. There is plenty of "does deduplication and also paints your doors along the way" stuff but nothing I'd consider reasonable, reliable, and practical.

    I myself currently do backup with a mksh script working through a list of dirs/files to back up.

    Can you feel the cognitive dissonance between the 2 paragraphs? :D

    Feel free to enlighten me ...

  • raindog308 said: Chiefly that I'm backing up, not synchronizing.

    i.e., there's no versioned backups in rsync. Also no full vs. incr. Also no encryption. Also...

    I rsync-over-ssh from production-server to backup-server.

    Then build redundancy on backup-server (with rdiff-backup).

    Then mirror backup-server to backup-server-2.

    Then some bash script fun for restores....
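
    A sketch of that chain, with hypothetical hosts and paths:

        # stage 1: plain mirror from production
        rsync -aH --delete -e ssh root@prod.example:/var/www/ /srv/mirror/www/

        # stage 2: versioned reverse increments of the mirror
        rdiff-backup /srv/mirror /srv/versions

        # stage 3: mirror everything to the second backup box
        rsync -aH --delete /srv/versions/ backup2.example:/srv/versions/

        # restore example: the tree as it looked 3 days ago
        rdiff-backup -r 3D /srv/versions/www /tmp/restore-www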

  • @bsdguy said:
    Feel free to enlighten me ...

    Paragraph 1: There is no backup software that is reliable with enough features

    Paragraph 2: Made my own shell scripts

    Newsflash: the probability that your shell scripts are more robust than the backup software out there is, like... zero.

  • Interesting. Mine has worked well for quite a while now. Checked and verified.
    But then, it's quite easy to write a solution for a strongly limited scenario, and hard to write backup software that can deal with all the diverse scenarios out there in the wild.

  • Maounique Host Rep, Veteran

    deadbeef said: The probability that your shell scripts are more robust

    Robust as in never breaks? I think complex things have far more bugs, and more features you don't need, than your own simple scripts doing exactly what you want.
    A few lines are much easier to debug and change, too.
    But again, it depends on the usage scenario. Some scenarios demand so many features that you cannot easily build and maintain them yourself.
