Comments
Thanks to all!
Does rsync work with databases?
Not by just copying the files. For a live DB, you will almost certainly end up with a corrupted database. You have to either dump the DB or use other tools to copy it live (e.g. MySQL's own sync/replication tools).
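The usual pattern is to dump the database to a file first and let the file-based backup pick that up. A minimal sketch, wrapped as a shell function so it can sit in a cron script (the paths and database name are placeholders):

```shell
# Dump a MySQL database to a dated, gzip-compressed file; the regular
# file-level backup (rsync, borg, etc.) then archives the dump.
# Paths and database name below are placeholders.
dump_mysql() {
  db="$1"
  out="/var/backups/mysql/${db}-$(date +%F).sql.gz"
  mkdir -p /var/backups/mysql
  # --single-transaction gives a consistent snapshot for InnoDB tables
  # without holding locks for the duration of the dump.
  mysqldump --single-transaction --routines "$db" | gzip > "$out"
}
```

Run it as `dump_mysql mydb` from cron just before the rsync job fires.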
cronjob + FTP storages
I use rclone to push a tarball to Google Cloud Storage, plus some storage VPSes that pull data with rsnapshot.
If you have a snapshot-capable filesystem, you may also place a write lock on the database, flush logs, create a snapshot of the filesystem, release the lock, and afterwards copy/rsync the files from the snapshot.
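Sketched out for MySQL on btrfs (all paths are placeholders, and this assumes the data directory is a btrfs subvolume). The key point is that the read lock is only held while the issuing session stays open, so the snapshot command has to run from inside the mysql client:

```shell
# Lock, flush, snapshot, unlock - all within a single mysql session so
# FLUSH TABLES WITH READ LOCK is still held when the snapshot is taken.
# 'system' runs a shell command from inside the mysql client.
snapshot_mysql() {
  mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
system btrfs subvolume snapshot -r /var/lib/mysql /var/lib/mysql-snap
UNLOCK TABLES;
EOF
  # The snapshot is now consistent; copy it without touching the live DB.
  rsync -a /var/lib/mysql-snap/ backup-host:/srv/backups/mysql/
  btrfs subvolume delete /var/lib/mysql-snap
}
```

LVM or ZFS snapshots slot in the same way; only the snapshot commands change.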
Bugs in one tool is definitely one of the reasons I'm using both borg and restic.
I prefer to use restic because it allows a greater choice of storage backend for the backups: I can use my own dedicated server, S3 buckets or an equivalent object-based storage, so if someday I decide to store my backup elsewhere I just have to switch destinations.
btrfs send | gzip | ssh
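Spelled out with an incremental parent snapshot (assuming /data is a btrfs subvolume; the host, paths and snapshot names are placeholders), that pipeline looks like:

```shell
# Take a new read-only snapshot, send only the delta against the previous
# snapshot, compress in flight, and store the stream on a remote host.
btrfs_backup() {
  today="/snapshots/data-$(date +%F)"
  btrfs subvolume snapshot -r /data "$today"
  sync   # ensure the snapshot is on disk before sending
  btrfs send -p /snapshots/data-prev "$today" \
    | gzip \
    | ssh backup-host "cat > /srv/backups/data-$(date +%F).btrfs.gz"
}
```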
I've been using gdrive for a few months
https://www.lowendtalk.com/discussion/123053/automating-mysql-backup-and-uploading-to-gdrive
Works quite well. @irm is an awesome human.
I usually write my own custom backup scripts tailored to whatever I intend to back up. For example, Centmin Mod database backups via dbbackup.sh, with Amazon S3 and storage-class support, up to 3x remote FTP backups, smarter per-database characterset/collation detection, and mobile push notifications on backup completion.
Right now I'm in the middle of updating them all to use Facebook's zstd compression algorithm instead of gzip/bzip2/xz, as it's faster and more efficient. There's a tar + zstd benchmark comparison write-up I did ^_^ or jump straight to the final benchmark results here.
Folks doing backups seriously need to check out zstd compression! Benchmarks I did compared to other compression algorithms here.
For several years I had a single storage VPS that automatically backed up all of my various servers using rdiff-backup, which was a really nice solution that worked extremely well. The only notable issue was that restoring anything but the latest backup required a brief review of the rdiff-backup documentation, as I could never remember the exact syntax for pulling older versions of backed-up files. Nowadays I only have a single server, and I don't bother with anything beyond semi-frequent manual runs of an rsync-based script that backs it up to my home computer, which in turn is backed up continuously offsite to Backblaze. I certainly miss having automated rdiff-backup from time to time, though.
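For anyone else who forgets the restore syntax, it's the `-r` / `--restore-as-of` flag that does it. A sketch, with hosts and paths as placeholders:

```shell
# Pull /etc from a remote server into a local rdiff-backup repository,
# which stores the current mirror plus reverse diffs for history.
rdiff_pull() {
  rdiff-backup root@server::/etc /srv/backups/server-etc
}

# Restore a file as it existed 10 days ago; the time spec also accepts
# dates (e.g. 2018-10-01) and session counts (3B = three backups ago).
rdiff_restore() {
  rdiff-backup -r 10D /srv/backups/server-etc/nginx /tmp/nginx-10d-ago
}
```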
In my experience Duplicity does fairly well at incremental backups, and because I'm obsessed with compartmentalizing my servers using Docker, I found an image that wraps Duplicity into a nice package. This basically gives me a drop-in Docker image for backing up my servers.
I'm using Wasabi for storage.
What type of database? I use backupninja, which uses `mysqldump` to output a .sql file of a MySQL database, and then I include that file in my backups. There's also `mysqlhotcopy` as an alternative, but I think it only works on MyISAM tables, whereas most are InnoDB these days.
I used Duplicity for many years. The issue I ended up having with it is how it works: it does one full backup, then layers each incremental backup on top of that. That means the size of the backup keeps growing until you do a new full backup and delete all the old ones, which wastes bandwidth, as even files that haven't changed since the last full backup get copied across again. Essentially, its mechanism to "prune" old backups is to start completely from scratch.
I ended up switching to Borgbackup, which has a nicer deduplication model. Rather than having a full backup with incremental backups layered on top, it stores backups as "chunks" of data. Chunks only need to be transferred once, and when no backups use a particular chunk any more (e.g. an old backup is pruned), it deletes that chunk. This avoids the main downside of Duplicity's approach, as identical data never needs to be retransferred over the network.
Interesting info, thanks for detailing. To be quite honest I didn’t put too much thought into it since my backups are usually quite small, at least for my VPSs. I was put off from borg because I felt it was a little less approachable as it doesn’t come with easy destinations (e.g. GDrive) baked-in, unlike Duplicity. I assume rsync or something similar could cover that limitation, but it feels like a lot of setup.
My go-to backup strategy is:
https://github.com/gilbertchen/duplicacy
duplicacy every 12 hours. This tool can deduplicate, encrypt and compress.
duplicacy also runs a pre-backup job to dump MySQL.
duplicacy supports backup to almost any cloud/remote storage out of the box (no need for rclone).
So I run two daily jobs that also back up MySQL (I can run them any minute since everything is deduplicated), and in real time I upload the backups to DigitalOcean Spaces.
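For reference, a sketch of what that setup looks like with duplicacy's CLI. The storage URL, snapshot id, database name and retention numbers are all placeholders:

```shell
# One-time: bind this directory to a storage backend (duplicacy also
# speaks B2, GCS, Azure, SFTP, etc. natively).
duplicacy_init() {
  cd /srv/site || return 1
  duplicacy init site-backups s3://region@endpoint/bucket-name
}

# Each run: dump MySQL first, then do an incremental, deduplicated
# backup and thin out old snapshots ("-keep n:m" = keep one snapshot
# every n days for snapshots older than m days).
duplicacy_run() {
  cd /srv/site || return 1
  mysqldump --single-transaction mydb > db-dump.sql
  duplicacy backup -stats
  duplicacy prune -keep 30:180 -keep 7:30 -keep 1:7
}
```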
I run a cloud storage website, and I wrote some scripts to back up data to a G Suite drive. Currently we have about 28TB of data, and it is backed up every day.
How about folks doing backups uncompressed? Storage is cheap these days.
btrfs send | ssh | btrfs receive
Even faster than rsync-based stuff for incremental backups.
Maybe cheap but not unlimited
zstd at a low compression level is still fast: anywhere from level -1 up to the default 3 gets you speeds close to disk speed while still giving a decent compression ratio. That helps when moving backup files over a network where speed is limited, i.e. those 100-250Mbps capped servers - sending 1GB uncompressed vs ~90MB compressed over a 100Mbps link.
Well, I had some spare time to benchmark tar + zstd versus uncompressed tar, and with some tuned zstd settings you can make tar + zstd run faster than uncompressed tar while still reducing the file size: 1.60GB for the uncompressed tar vs 908MB zstd-compressed. Full benchmarks and info at https://community.centminmod.com/threads/custom-tar-archiver-rpm-build-with-facebook-zstd-compression-support.16243/#post-69994
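The drop-in change is just swapping the compressor, assuming GNU tar (paths here are placeholders):

```shell
# GNU tar's -I (--use-compress-program) pipes the archive stream through
# an arbitrary compressor; -T0 tells zstd to use all CPU cores.
backup_tar_zstd() {
  tar -I 'zstd -3 -T0' -cf /var/backups/site.tar.zst /srv/www
}

restore_tar_zstd() {
  tar -I zstd -xf /var/backups/site.tar.zst -C /srv/restore
}
```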
Man, that was looking pretty awesome until I saw they charge $99/year for commercial use.
You might be interested in this: zstd just added an interesting "adaptive" mode that dynamically adjusts the compression level based on disk / network speed (if the network slows down, the compression level is increased, since the compression is not the bottleneck): https://code.fb.com/core-data/zstandard/
From https://github.com/facebook/zstd/releases/tag/v1.3.6:
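In practice the adaptive mode is a single flag in the pipeline. A sketch, with the host and paths as placeholders:

```shell
# --adapt lets zstd raise the compression level when the downstream pipe
# (here, ssh over a slow link) is the bottleneck, and lower it when CPU
# is. Requires zstd >= 1.3.6.
adaptive_stream() {
  tar -cf - /srv/www \
    | zstd --adapt \
    | ssh backup-host "cat > /srv/backups/www.tar.zst"
}
```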
So zstd is better than .xz/LZMA?
Yup, I benchmarked zstd adaptive mode at https://community.centminmod.com/threads/custom-tar-archiver-rpm-build-with-facebook-zstd-compression-support.16243/#post-69530 - see the benchmark comparisons.
Yep! It's a newer, more advanced algorithm. The zstd site has a comparison with a bunch of other algorithms: https://facebook.github.io/zstd/. In general the compression ratio is better than other algorithms, while also being faster in a lot of cases.
Thanks.
Ahh, I didn't see that. Thanks for the link.
By the way, I didn't know you're Aussie. I'm from Melbourne but am living in the USA now.
yup Brisbanite
@eva2000 Thanks for your work on centminmod, very informative.
Especially the CC IP range(s).
You're welcome. zstd can also be used to further compress your logs during log rotation - here's a guide on using zstd for logrotate compression of nginx and php-fpm logs: https://community.centminmod.com/threads/compressed-log-files-rotation-with-facebook-zstd-for-smaller-log-sizes.16371/
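For reference, switching logrotate to zstd is mostly a matter of pointing it at a different compressor. A sketch of the relevant stanza (paths, rotation count and compression options are placeholders):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    missingok
    compress
    compresscmd /usr/bin/zstd
    compressext .zst
    compressoptions -9 -T0
    uncompresscmd /usr/bin/unzstd
}
```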
as you can tell, zstd is my new fav tool