Low End Backups - How do you backup your small VPS?

Comments

  • Thanks to all!

  • does rsync work with databases?

  • cybertech said: does rsync work with databases?

    Not by just copying the files. For a live DB, you will almost certainly end up with a corrupted database. You have to either dump the DB or use other tools to copy it live (e.g. mysql sync).
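
    A consistent dump with mysqldump can be as simple as this (rough sketch, assuming InnoDB tables and credentials in ~/.my.cnf; paths are placeholders):

        # --single-transaction gives a consistent snapshot of InnoDB tables without a global lock
        mysqldump --single-transaction --routines --all-databases | gzip > /backup/all-dbs-$(date +%F).sql.gz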

    Thanked by 2: cybertech, vimalware
  • cronjob + FTP storages

  • nfn Veteran

    I use rclone to push a tarball to Google Cloud Storage, plus some storage VPSes that pull data with rsnapshot.
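
    Roughly like this (sketch; the "gcs" remote and all paths are placeholders, the remote is set up beforehand with rclone config):

        tar -czf /tmp/vps-$(date +%F).tar.gz /etc /home /var/www
        rclone copy /tmp/vps-$(date +%F).tar.gz gcs:my-backup-bucket/vps/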

  • dfroe Member, Host Rep
    edited January 2019

    @cybertech said: does rsync work with databases?

    @jvnadr said:
    Not by just copying the files. For a live DB, you will almost certainly end up with a corrupted database. You have to either dump the DB or use other tools to copy it live (e.g. mysql sync).

    If you have a snapshot-capable filesystem you may also place a write lock on the database, flush logs, create a snapshot of the filesystem, release the lock and afterwards copy/rsync the files from the snapshot.
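
    With MySQL on LVM that could look roughly like this (sketch only; volume names, mount points and the lock window are placeholders):

        # hold a global read lock in a background session while the snapshot is taken
        mysql -e "FLUSH TABLES WITH READ LOCK; FLUSH LOGS; SELECT SLEEP(30);" &
        sleep 2                                                    # give the lock a moment to be acquired
        lvcreate --snapshot --size 5G --name dbsnap /dev/vg0/data  # snapshot the datadir volume while locked
        wait                                                       # lock session ends, lock is released
        mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
        rsync -a /mnt/dbsnap/mysql/ backup@remote:/backups/mysql/
        umount /mnt/dbsnap && lvremove -f /dev/vg0/dbsnap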

    Thanked by 2: uptime, eol
  • @mfs said:

    quicksilver03 said: dedicated server running borg and another dedicated running restic (which I prefer over borg)

    Those should be pretty similar; any reason to use two different tools? If a disaster happens, I'd prefer to follow a recovery path that is as straightforward and unambiguous as possible, and having to deal with two different tools seems counterintuitive. Maybe you fear a bug in one of those tools? Also, why do you prefer restic in this scenario (your own dedi with SSH access)?

    A bug in one of the tools is definitely one of the reasons I'm using both borg and restic.

    I prefer restic because it allows a greater choice of storage backends for the backups: I can use my own dedicated server, S3 buckets or equivalent object storage, so if someday I decide to store my backups elsewhere I just have to switch destinations.
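
    The switch really is just the repository URL (sketch; hosts, bucket names and credential handling are placeholders):

        # same backup command, two different backends
        restic -r sftp:backup@my-dedi.example.com:/srv/restic-repo backup /etc /var/www
        restic -r s3:s3.amazonaws.com/my-backup-bucket backup /etc /var/www   # needs AWS_* env vars exported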

    Thanked by 1: mfs
  • btrfs send | gzip | ssh

  • JohnMiller92 Member
    edited January 2019

    I've been using gdrive for a few months

    https://www.lowendtalk.com/discussion/123053/automating-mysql-backup-and-uploading-to-gdrive

    Works quite well. @irm is an awesome human.

  • eva2000 Veteran
    edited January 2019

    I usually write my own custom backup scripts tailored to whatever I intend to back up. An example is Centmin Mod database backups via dbbackup.sh, with Amazon S3 and storage class support, up to 3x remote FTP backups, smarter per-database character set/collation detection and mobile push notifications on backup completion.
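
    The S3 storage class part is just a flag on the upload, something like this (illustrative sketch only, not the actual dbbackup.sh code; bucket and paths are placeholders):

        mysqldump --single-transaction mydb | gzip > /backup/mydb-$(date +%F).sql.gz
        aws s3 cp /backup/mydb-$(date +%F).sql.gz s3://my-db-backups/mysql/ --storage-class STANDARD_IA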

    Right now I'm in the middle of updating them all to use Facebook's zstd compression algorithm instead of gzip/bzip2/xz, as it's faster and more efficient. There's a tar + zstd benchmark comparison write-up I did ^_^ or jump straight to the final benchmark results here.

    Folks doing backups seriously need to check out zstd compression instead!!! Benchmarks I did compared to other compression algorithms are here.

  • JTR Member

    For several years I had a single storage VPS that automatically backed up all of my various servers using rdiff-backup, which was a really nice solution that worked extremely well (the only notable issue was that restoring anything but the last backup required a brief review of the rdiff-backup documentation, as I could never remember the exact syntax for pulling older versions of backed-up files). Nowadays I only have a single server, and I don't bother with anything beyond semi-frequent manual runs of an rsync-based script to back it up to my home computer, which in turn is backed up continuously offsite to Backblaze, although I certainly miss having automated rdiff-backup from time to time.
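
    For reference, the syntax in question goes roughly like this (sketch; paths are placeholders):

        rdiff-backup --list-increments /backups/web01                              # see which versions exist
        rdiff-backup -r 10D /backups/web01/etc/nginx/nginx.conf /tmp/nginx.conf    # the file as it was 10 days ago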

  • In my experience Duplicity seems to do fairly well at incremental backups, and because I'm obsessed with compartmentalizing my servers using Docker I found an image that wraps up Duplicity into a nice package. This basically lets me have a drop-in docker image for backing up my servers.

    I'm using Wasabi for storage.

  • Daniel15 Veteran
    edited January 2019

    cybertech said: does rsync work with database?

    What type of database? I use backupninja, which uses mysqldump to output a .sql file of a MySQL database, and then I include that in my backups. There's also mysqlhotcopy as an alternative, but I think it only works on MyISAM tables, whereas most are InnoDB these days.

    ouvoun said: In my experience Duplicity seems to do fairly well at incremental backups

    I used Duplicity for many years. The issue I ended up having with it is how it works: it does one full backup, then layers each incremental backup on top of that. That means the size of the backup keeps getting bigger and bigger until you do a new full backup and delete all the old ones (which wastes bandwidth, as even files that haven't changed since the last full backup are copied across again). Essentially, its mechanism to "prune" old backups is to start completely from scratch again.

    I ended up switching to Borgbackup, which has a nicer deduplication model. Rather than having a full backup with incremental backups layered on top, it stores backups as "chunks" of data. Chunks only need to be transferred once, and when no backups use a particular chunk any more (e.g. an old backup is pruned), it deletes that chunk. This avoids the main downside of Duplicity's approach, as identical data never needs to be retransferred over the network.
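
    In practice that workflow is only a few commands (sketch; the repo location, retention policy and passphrase handling are placeholders):

        borg init --encryption=repokey backup@remote:/srv/borg-repo                    # one-off
        borg create --stats backup@remote:/srv/borg-repo::{hostname}-{now} /etc /var/www
        borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 backup@remote:/srv/borg-repo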

    Thanked by 1: uptime
  • @Daniel15 said:
    This avoids the main downside of Duplicity's approach, as identical data never needs to be retransferred over the network.

    Interesting info, thanks for the details. To be quite honest I didn't put too much thought into it since my backups are usually quite small, at least for my VPSes. I was put off borg because I felt it was a little less approachable, as it doesn't come with easy destinations (e.g. GDrive) baked in, unlike Duplicity. I assume rsync or something similar could cover that limitation, but it feels like a lot of setup.

  • sdfantini Member
    edited January 2019

    My go-to backup strategy is:

    https://github.com/gilbertchen/duplicacy

    I run duplicacy every 12 hours. This tool can deduplicate, encrypt and compress.

    duplicacy also runs a pre-backup job to dump MySQL.

    duplicacy also supports backup to almost any cloud/remote storage out of the box (no need to use rclone).

    So I run 2 daily jobs that also back up MySQL (I can run them any minute since it's deduplicated), and in real time the backups are uploaded to DigitalOcean Spaces.
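
    The whole setup is only a few commands (sketch; the snapshot id is a placeholder and the DO Spaces storage URL is a guess, check duplicacy's storage backend docs for the exact format):

        cd /var/www                                                                  # the directory tree to back up
        duplicacy init -e mysite s3://nyc3@nyc3.digitaloceanspaces.com/my-backups    # one-off, -e = encrypted
        mysqldump --single-transaction --all-databases > db-dumps/all.sql            # pre-backup dump, included in the backup
        duplicacy backup -stats                                                      # deduplicated, encrypted upload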

    Thanked by 2: eol, jaden
  • I run a cloud storage website, and I wrote some scripts to back up data to G Suite Drive. Currently we have about 28TB of data, and it is backed up every day.

  • @eva2000 said:
    Folks doing backups seriously need to check out zstd compression instead!!! Benchmarks I did compared to other compression algorithms are here.

    How about folks doing backups uncompressed? ;) Storage is cheap these days.

  • @Zerpy said:

    @eva2000 said:
    Folks doing backups seriously need to check out zstd compression instead!!! Benchmarks I did compared to other compression algorithms are here.

    How about folks doing backups uncompressed? ;) Storage is cheap these days.

    btrfs send | ssh | btrfs receive

    Even faster than rsync-based stuff for incremental backups.
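
    An incremental send is roughly (sketch; subvolume and snapshot names are placeholders):

        btrfs subvolume snapshot -r / /snapshots/root-$(date +%F)
        # full send the first time, afterwards only the delta against the previous snapshot
        btrfs send -p /snapshots/root-2019-01-01 /snapshots/root-$(date +%F) \
            | ssh backup@remote btrfs receive /backups/vps1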

  • @Zerpy said:

    @eva2000 said:
    Folks doing backups seriously need to check out zstd compression instead!!! Benchmarks I did compared to other compression algorithms are here.

    How about folks doing backups uncompressed? ;) Storage is cheap these days.

    Maybe cheap, but not unlimited.

    zstd at low compression levels is still fast: you can go from level 1 up to the default 3 and get speeds close to disk speed while still getting a decent compression ratio, which helps if you're moving backup files over the network where speed is limited, i.e. those 100-250Mbps network-capped servers. It's the difference between sending 1GB uncompressed or ~90MB compressed over a 100Mbps network.
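
    For example (sketch; paths are placeholders):

        tar -cf - /var/www | zstd -3 -T0 > /backup/www-$(date +%F).tar.zst   # default level 3, all cores
        tar -cf - /var/www | zstd -1 -T0 > /backup/www-$(date +%F).tar.zst   # level 1: faster, slightly larger
        zstd -dc /backup/www-2019-01-31.tar.zst | tar -xf -                  # restore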

  • eva2000 Veteran
    edited January 2019

    @Zerpy said:
    How about folks doing backups uncompressed? ;) Storage is cheap these days.

    Well, I had some spare time to benchmark tar + zstd versus uncompressed tar, and with some tuned zstd settings you can get tar + zstd running faster than uncompressed tar while still cutting the file size from 1.60GB (uncompressed tar) to 908MB (tar + zstd). Full benchmarks and info at https://community.centminmod.com/threads/custom-tar-archiver-rpm-build-with-facebook-zstd-compression-support.16243/#post-69994 :)

    Thanked by 2: eol, Daniel15
  • @sdfantini said:
    My go-to backup strategy is:

    https://github.com/gilbertchen/duplicacy

    I run duplicacy every 12 hours. This tool can deduplicate, encrypt and compress.

    duplicacy also runs a pre-backup job to dump MySQL.

    duplicacy also supports backup to almost any cloud/remote storage out of the box (no need to use rclone).

    So I run 2 daily jobs that also back up MySQL (I can run them any minute since it's deduplicated), and in real time the backups are uploaded to DigitalOcean Spaces.

    Man, that was looking pretty awesome until I saw they charge $99/year for commercial use.

  • Daniel15 Veteran
    edited January 2019

    eva2000 said: if you're moving backup files over the network where speed is limited

    You might be interested in this: zstd just added an interesting "adaptive" mode that dynamically adjusts the compression level based on disk / network speed (if the network slows down, the compression level is increased, since the compression is not the bottleneck): https://code.fb.com/core-data/zstandard/

    The last feature introduced for large data streams is automatic level determination (--adapt). Adaptive mode measures the speed of the input and output files, or pipes, and adjusts the compression level to match the bottleneck. This mode can be combined with multithreading and long range mode and finely controlled through lower and upper bounds. Combining all three is perfect for server backups because the algorithm can opportunistically convert wait time due to network congestion into improved compression ratio. In initial experiments, we measured an improvement of approximately 10 percent in compression ratio for equivalent or better transmission time.

    From https://github.com/facebook/zstd/releases/tag/v1.3.6:

    A new command --adapt, makes it possible to pipe gigantic amount of data between servers (typically for backup scenarios), and let the compressor automatically adjust compression level based on perceived network conditions. When the network becomes slower, zstd will use available time to compress more, and accelerate again when bandwidth permit. It reduces the need to "pre-calibrate" speed and compression level, and is a good simplification for system administrators. It also results in gains for both dimensions (better compression ratio and better speed) compared to the more traditional "fixed" compression level strategy.
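
    In a backup pipe that ends up looking like this (sketch; host and paths are placeholders):

        tar -cf - /var/www | zstd --adapt -T0 | ssh backup@remote "cat > /backups/www-$(date +%F).tar.zst"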

  • eol Member

    So zstd is better than .xz/LZMA?

  • eol said: So zstd is better than .xz/LZMA?

    Yep! It's a newer, more advanced algorithm. The zstd site has a comparison with a bunch of other algorithms: https://facebook.github.io/zstd/. In general the compression ratio is better than other algorithms, while also being faster in a lot of cases.

    Thanked by 2: eol, eva2000
  • Ahh, I didn't see that. Thanks for the link.

    By the way, I didn't know you're Aussie. I'm from Melbourne but am living in the USA now. :)

  • Daniel15 said: By the way, I didn't know you're Aussie. I'm from Melbourne but am living in the USA now

    yup Brisbanite :)

  • eol Member

    @eva2000 Thanks for your work on centminmod, very informative.
    Especially the CC IP range(s).

    Thanked by 1: eva2000
  • eva2000 Veteran
    edited January 2019

    @eol said:
    @eva2000 Thanks for your work on centminmod, very informative.
    Especially the CC IP range(s).

    You're welcome. zstd can also be used to further compress your logs during log rotation - there's a guide on using zstd for logrotate compression of nginx and php-fpm logs at https://community.centminmod.com/threads/compressed-log-files-rotation-with-facebook-zstd-for-smaller-log-sizes.16371/ :)
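
    The logrotate side is basically just swapping the compressor (rough sketch; the exact compressoptions that work best are in the linked guide, and binary paths depend on your distro):

        /var/log/nginx/*.log {
            daily
            rotate 14
            missingok
            compress
            compresscmd /usr/bin/zstd
            uncompresscmd /usr/bin/unzstd
            compressext .zst
            compressoptions -T0 -6
        }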

    as you can tell, zstd is my new fav tool

    Thanked by 2: eol, Daniel15