Comments
I also advocate keeping the backup data in at least two different formats, to mitigate bugs in a format's implementation (e.g. a future bug in Borg's chunking/checksumming algorithm).
I have rsnapshot pulls on a LUKS-encrypted Kimsufi box, plus a Borg push from that Kimsufi to Time4VPS.
Try git-annex. It's unfortunately named, but it's a very customizable backup option: GPG support, lots of remote sync options.
Can't this be solved by a read-only bind mount, even if you need something that complex?
What about this
Set up an incremental backup with duplicity, rsync, and backupninja on Debian
https://uname.pingveno.net/blog/index.php/post/2015/02/17/Set-up-files-and-database-incremental-backup-with-duplicity,-rsync,-and-backupninja-on-Debian
duplicity: deceitfulness; double-dealing
Source: The dictionary that comes with the operating system on my computer.
That is why g-d made ZFS.
https://github.com/willgrz/wBak-Autobackup-ZFS
500k snapshots of various machines and counting; works fine.
Aren't we supporting Israel to help GOD? (Intentionally written to annoy all of you)
+1 for ZFS.
On my backup-storage KVM VPS I am running Debian GNU/Linux with LUKS (for encryption) and ZFS on Linux (for snapshots). Depending on the backup source I use either zfs send|recv, rsync for one-way synchronization, or unison for two-way synchronization. Transfers over WAN go through SSH.

If you know the tools, it's quite easy to write some scripts to automate your particular daily workflows. Personally I prefer self-made scripts built on such tools over an all-in-one tool: it is usually easier to customize my own scripts, and my particular requirements are rarely matched by all-in-one tools.

I have been using ZFS on FreeBSD for 9 years and ZFS on Linux for 2 years, and I am still impressed by how rock-solid and effective it is.
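For the zfs send|recv-over-SSH part, a minimal sketch (the dataset, snapshot, and host names below are assumptions, not the poster's actual setup):

```shell
# Incremental zfs send|recv over SSH; tank/data, backup-host and
# backup/data are invented names. Guarded so it is a no-op on a
# machine where the pool does not exist.
SRC="tank/data"
SNAP="${SRC}@daily-$(date +%Y-%m-%d)"
PREV="${SRC}@daily-previous"   # hypothetical name of the last snapshot

if command -v zfs >/dev/null 2>&1 && zpool list tank >/dev/null 2>&1; then
    zfs snapshot "$SNAP"
    # Send only the delta since the previous snapshot; receive unmounted.
    zfs send -i "$PREV" "$SNAP" | ssh backup-host "zfs recv -u backup/data"
fi
```

The incremental `-i` send is what makes daily runs cheap: only blocks changed since the previous snapshot cross the wire.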
ZFS should rather be used on Ubuntu though, especially in the extreme snapshot amount scenario.
I do see some operational quirks, like listing snapshots taking 5 minutes and the general RAM weirdness of deduplication, but it works fine on this i7. Notably, I use a HW RAID5 and expose just the single volume (after LUKS) to ZFS as a RAID0/single disk; RAID-Z1 or higher would cost a lot of CPU.
The real saver here is not compression but deduplication, which runs across the snapshots and thus needs only about the same space as the rsync diff.
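If anyone wants to check the same thing on their own pool, the relevant knobs look roughly like this (pool and dataset names are assumptions, and remember the RAM cost of dedup mentioned above):

```shell
# Enable dedup on a backup dataset and read back the pool-wide ratio.
# "backup" and "backup/snapshots" are invented names; guarded to be a
# no-op on machines without that pool.
POOL="backup"
if command -v zfs >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
    zfs set dedup=on "${POOL}/snapshots"
    # dedupratio reports the savings factor across the pool, e.g. "3.50x".
    zpool get -H -o value dedupratio "$POOL"
fi
```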
I like ZFS on BSD, but not BSD (or Solaris) much, so that was the best option available for now.
"Full" encryption is obviously not implementable in my setup, as the latest snapshot always needs to be open for the rsync diff. However, the external sync option (badly implemented, but it works) encrypts with GPG (thus AES-NI in most cases, i.e. fast) and then syncs the file to the remote.
This obviously needs 1:1 space for each backup, since syncing incremental encrypted changes as files is mostly pointless, or at least hard to reassemble (keeping multiple sources working while recycling old space is not simple).
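The encrypt-then-sync step might look roughly like this (the recipient key, snapshot name, paths, and host are all assumptions):

```shell
# Encrypt a full snapshot stream with GPG, then sync the resulting file
# to a remote; every name here is invented. Guarded to be a no-op on a
# machine without that ZFS dataset.
SRC="tank/data@daily-2024-01-01"
OUT="/backups/$(echo "$SRC" | tr '/@' '__').zfs.gpg"

if command -v zfs >/dev/null 2>&1 && zfs list "$SRC" >/dev/null 2>&1; then
    # Full (non-incremental) stream: each backup restores on its own,
    # at the 1:1 space cost described above.
    zfs send "$SRC" | gpg --encrypt --recipient backup@example.com > "$OUT"
    rsync -a "$OUT" backup-host:/remote/backups/
fi
```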
ZFS also adds the ability to use iSCSI or Ceph (GlusterFS or others too) volumes over the network as a pool, and then either do a second backup run (two syncs, maintaining two independent snapshot trees) or automatically copy each snapshot to the network pool (which saves inbound bandwidth and some disk space):
https://prnt.li/f/94dc458af4c2ecd38df31351eb06f7f4-yiedoomu5h.png
This allows near-live sync to an external pool and RO mounts (for this tree; RW for another local one) on other backup nodes and clients. It's easier than zfs send (not in setup, this being Ceph) and more reliable too.
zfs is an excellent proposition for a VPS as VPSs tend to have plenty of GB of RAM ...
@raindog308
I also don't know of any reliable backup solution with a reasonable feature set. There is plenty of "does deduplication and also paints your doors along the way" stuff but nothing I'd consider reasonable, reliable, and practical.
I myself currently do backup with a mksh script working through a list of dirs/files to back up. I'm not really happy but it does the job and reliably so. Maybe something like that could solve your problem, too.
Plus some usage of cron for timing. Example: At 01:20 the DB gets backed up to a backup directory (which is in my list of dirs to back up); incremental daily plus a full backup on Sundays. Then some other scripts (e.g. reports, logs from the past day, ...), and finally, as the last one, my backup script that pushes the whole shebang to my backup server through scp.
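A sketch of that cron wiring; the times match the description above, but the script names and paths are made up:

```
# m  h  dom mon dow  command
20   1  *   *   1-6  /usr/local/bin/db-backup.sh          # incremental DB dump Mon-Sat
20   1  *   *   0    /usr/local/bin/db-backup.sh --full   # full dump on Sundays
40   1  *   *   *    /usr/local/bin/daily-reports.sh      # reports, logs from the past day
0    2  *   *   *    /usr/local/bin/backup.sh             # last: scp everything to the backup server
```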
Hammer.
Plus: DragonFly is somewhat irksome, lags behind the bigger BSDs, and has other problems due to its very small community (but it's certainly a very cool project; just not for professional use).
I thought you were being snarky, "VPS tend to have plenty of GB of RAM", so I suggested a very hungry filesystem.
I tried to get DFly setup in a 512MB VPS, but it just wouldn't work right, dangit. I respect the author- he's pretty smart, but I still run Theo and (formerly) jkh for production, except when Debian based.
I don't have any issues running ZFS on 1GB RAM VPSes.
Unless you're running an ancient OpenSSH server, you can block users from getting a shell by configuring an sftponly group once and adding your users there.
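For reference, a minimal sketch of that setup in /etc/ssh/sshd_config (the group name sftponly follows the convention mentioned above):

```
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Then create the group (`groupadd sftponly`) and add users to it (`usermod -aG sftponly <user>`). Note that ChrootDirectory requires the target directory to be root-owned and not group- or world-writable.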
Can you feel the cognitive dissonance between the 2 paragraphs?
Have you looked at syncthing?
Syncing is usually not the ideal paradigm for 'backup'. (yes, some complex sync software like seafile and Dropbox do handle file level versioning, but it's not ideal for 'backup')
Why no love for good old rsnapshot? Edit: saw a couple of mentions. It's really nice to simply check a regular Unix folder labelled hourly.0 for that client sqldump you fat-finger deleted on Friday night at 3:42am.
Quick restores let you go to bed sooner.
Works best from your local NAS/rpi2/3 and lower counts of thousands of files.
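That "just copy it back out of hourly.0" restore can be shown with a toy snapshot tree (all paths below are invented; a real rsnapshot root is wherever snapshot_root points in rsnapshot.conf):

```shell
# Toy demo: rsnapshot keeps plain directory trees, so a restore is an
# ordinary cp out of hourly.0. Paths and file names here are made up.
SNAPROOT=$(mktemp -d)
mkdir -p "${SNAPROOT}/hourly.0/client1/var/backups"
printf 'dump' > "${SNAPROOT}/hourly.0/client1/var/backups/db.sql"

# The "restore": copy straight out of the newest hourly snapshot.
cp "${SNAPROOT}/hourly.0/client1/var/backups/db.sql" "${SNAPROOT}/restored.sql"
```

No special tooling needed for the restore, which is exactly the appeal.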
Feel free to enlighten me ...
I rsync-over-ssh from production-server to backup-server.
Then build redundancy on backup-server (with rdiff-backup).
Then mirror backup-server to backup-server-2.
Then some bash script fun for restores....
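Spelled out as commands, that pipeline might look like this (hosts and paths are assumptions; the DRYRUN wrapper just prints what would run):

```shell
# Three stages: pull from production, build versioned history with
# rdiff-backup, mirror to a second backup box. All names are invented.
DRYRUN=1   # set to 0 to actually execute
run() { [ "$DRYRUN" -eq 1 ] && echo "+ $*" || "$@"; }

run rsync -a --delete prod-host:/var/www/ /srv/mirror/www/   # 1. pull over SSH
run rdiff-backup /srv/mirror/www/ /srv/history/www/          # 2. add versioned redundancy
run rsync -a /srv/history/ backup2-host:/srv/history/        # 3. mirror to second server
```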
Paragraph 1: There is no backup software that is reliable with enough features
Paragraph 2: Made my own shell scripts
Newsflash: The possibility that your shell scripts are more robust than the backup software out there, is like ... zero.
Interesting. It has worked well for quite a while now. Checked and verified.
But then, it's quite easy to write a solution for a strongly limited scenario, and hard to write backup software capable of dealing with all the diverse scenarios out there in the wild.
Robust as in never breaks? I think complex things have many more bugs, and features you don't need, than your own simple scripts doing exactly what you want.
A few lines are much easier to debug and change too.
But, again, it depends on the usage scenario. Some demand a lot of features that you can't easily build yourself while keeping things maintainable.