What is your favorite server backup tool and why?

For me the best two remain Restic (which is what I use now) and Borg. By "best" I mean first and foremost reliable, then comes the speed and features.

Restic in particular just works and is ridiculously easy to set up and use without any complicated commands. And it's exceptionally reliable: I have had to do restores many times with this tool from small and large repositories and it has never failed me, never.
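
For anyone who hasn't tried it, this is roughly the workflow I mean; the repository path and directories below are just placeholders:

```bash
# Create an encrypted repository (restic asks for a password)
restic init --repo /srv/backups/restic

# Back up a directory; repeated runs are incremental and deduplicated
restic -r /srv/backups/restic backup /var/www

# List snapshots and restore the latest one somewhere safe
restic -r /srv/backups/restic snapshots
restic -r /srv/backups/restic restore latest --target /tmp/restore
```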

Borg has always looked rock solid to me as well.

Another good one, which I kinda like for the nice UI, is Duplicacy. I have a license, but restoring is slow: it takes ages to find the list of files from the snapshot to restore. But many people are happy with its reliability, so perhaps I should try it again.

The one that has disappointed me the most is Kopia. It's promising, but it's not reliable at all at this stage. On the one server where I was testing it, it corrupted the repository three times in a row, until I gave up and switched back to Restic.

What do you use to back up your servers? I mean "old school" servers where you install stuff directly or with Docker etc. I am not considering advanced stuff like Kubernetes environments in this question (for Kubernetes I use Velero and Kasten anyway).


Comments

  • I have positive experience with Veeam Backup & Replication.

  • I use Borg for backups, often in conjunction with LVM snapshots. While deduplication may not work as well with this combination, Borg is still a great tool.

    In addition to Borg, I also use some small shell scripts to rsync important files from my servers to a backup server, following the "pull" principle to prevent the servers being backed up from having direct access to the backup server.
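
    A rough sketch of what such a pull job can look like, run from the backup server (hostnames and paths here are made up):

    ```bash
    # The backup server initiates the transfer, so the production host
    # never gets credentials for the backup storage.
    rsync -aHAX --delete --numeric-ids root@web01.example.com:/etc/     /backups/web01/etc/
    rsync -aHAX --delete --numeric-ids root@web01.example.com:/var/www/ /backups/web01/var/www/
    ```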

    I do virtualize everything, so if my backup plans somehow fail, I still have the Proxmox Backup Server backups. The only downside of using Proxmox Backup Server is that the VM may experience some lag when there is high IO load within the VM and the backup storage struggles to keep up.

    This strategy has never let me down :smile:

  • Restic.

  • WebProject Host Rep, Veteran

    Restic can be combined with rclone. I personally like rclone as it gives more options to back up to different services and to encrypt the backup files.
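
    If it helps anyone, restic can use an rclone remote directly as its repository backend; something along these lines, assuming a remote named "b2" has already been set up with rclone config:

    ```bash
    restic -r rclone:b2:my-backup-bucket/server1 init
    restic -r rclone:b2:my-backup-bucket/server1 backup /home /etc
    ```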

  • @let_rocks said:
    I have positive experience with Veeam Backup & Replication.

    Using hardened repository is becoming a necessity in this age of crypto ransomware shit.

    But I think they've gotten too big and development is slowing down...

  • @TimboJones said:

    @let_rocks said:
    I have positive experience with Veeam Backup & Replication.

    Using hardened repository is becoming a necessity in this age of crypto ransomware shit.

    But I think they've gotten too big and development slowing down...

    If the product is good and stable I don’t mind slow development, unless they increase prices without giving something in return or anything like that.

  • Restic 100%!

    Several disaster recovery restores of whole machines already, never disappointed, very versatile and robust, one good example of German Quality.

    In this day and age important stuff: fully encrypted, so put/send stuff anywhere and sleep well. Gained compression support last year, so finally efficient in space usage, too. And very, VERY performant.

    Finally, full open source, find a bug, you get to fix it. But, it won't be easy (to find a bug, that is). :smile:

    Whatever you're using right now, give restic a chance and see what it can do, you'll be amazed.

    To give at least one objection: if you're VERY low on RAM and happen to have LOTS of very small files in there, then it could be a problem, because restic likes to cache lots of metadata (for performance) and that can get memory intensive. But we're speaking of the VERY low end, like < 1 GB RAM; otherwise it just works...

  • @maverick said:
    Restic 100%!

    Several disaster recovery restores of whole machines already, never dissapointed, very versatile and robust, one good example of German Quality.

    In this day and age important stuff: fully encrypted, so put/send stuff anywhere and sleep well. Gained compression support last year, so finally efficient in space usage, too. And very, VERY performant.

    Finally, full open source, find a bug, you get to fix it. But, it won't be easy (to find a bug, that is). :smile:

    Whatever you're using right now, give restic a chance and see what it can do, you'll be amazed.

    To give at least some objections, if you're VERY low on RAM, and happen to have LOTS of very small files in there, then it could be a problem. Because, it likes to cache lots of metadata (for performance) and it can get memory intensive. But, speaking of VERY low end, like < 1 GB RAM, otherwise it just works...

    Yeah I love it too. As for the RAM usage issues, I think there was an environment variable that you can set to reduce usage

  • @vitobotta said:
    Yeah I love it too. As for the RAM usage issues, I think there was an environment variable that you can set to reduce usage

    There is, but it's not very helpful. Restic being written in Go, those variables just tweak the Go garbage collection a bit, to recover memory a bit sooner, but that ends up with 10-15% memory usage savings, which might or might not help.
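
    Presumably the variable meant here is Go's GOGC (plus GOMEMLIMIT on newer Go runtimes), which restic respects like any Go program; a hedged example:

    ```bash
    # Make the Go GC run more often: lower peak memory, slightly more CPU
    GOGC=20 restic -r /srv/backups/restic backup /var/www

    # Newer Go runtimes also accept a soft memory cap
    GOMEMLIMIT=500MiB GOGC=20 restic -r /srv/backups/restic backup /var/www
    ```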

    Let me be a bit more specific: I never ran into these issues personally, I just happen to know that it is possible. I happen to have some millions of files on the computer I'm typing this comment on, and I know for a fact that restic needs a few GB of RAM to back it all up. And it happens in a few minutes every night, bearing in mind that it also has to be sent to a very distant location. It's really fast, but it needs some RAM to do it that fast. It's a compromise.

    So, if anybody has millions (or probably several hundred thousand) of files on a very low RAM machine, it would certainly be a problem. But anything else would just work, and be really fast.

    Also, one other thing: restic allows very elaborate retention policies. When I said millions of files, what I really mean is that you could pick one of those files and request recovery of its copy from 2020, and I could do it. All these snapshots are very well packed and encrypted on disk storage; still, some RAM is required to handle all that enormous data storage efficiently.
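
    To illustrate the retention point, a sketch of the kind of policy and point-in-time restore meant here (snapshot ID, paths and keep counts are only examples):

    ```bash
    # Keep 7 daily, 4 weekly, 12 monthly, 3 yearly snapshots; drop and prune the rest
    restic -r /srv/backups/restic forget \
        --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 3 --prune

    # Pick an old snapshot and restore a single file from it
    restic -r /srv/backups/restic snapshots
    restic -r /srv/backups/restic restore a1b2c3d4 \
        --target /tmp/restore --include /home/user/report.ods
    ```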

  • @let_rocks said:

    @TimboJones said:

    @let_rocks said:
    I have positive experience with Veeam Backup & Replication.

    Using hardened repository is becoming a necessity in this age of crypto ransomware shit.

    But I think they've gotten too big and development slowing down...

    If the product is good and stable I don’t mind slow development, unless they increase prices without giving something in return or anything like that.

    I use Veeam as well.

    I'm a bit concerned about how slow they are to support new OS releases. But on the other hand, I really like how their software supports the use of multiple repositories that can run on virtually anything (VMs as well as physical servers).

    An important difference between Veeam and most of the other backup options mentioned so far is that it uses block-based backups. When you're backing up systems with many millions of small files, this can be a critical difference in performance.

  • I am testing Duplicacy with my Mastodon server now. It's nice and seems fast. I tried it on my Mac previously but ended up using Arq there.

  • I've been using duplicacy for a few years now (just the CLI though, no GUI). It's worked well and while restores are slow and somewhat cumbersome, they've always worked.

  • I am running a cluster, so one node is not open to the web and I run backup operations there. All nodes are synced via Syncthing. I back up the files and a MySQL DB dump to OneDrive through Duplicity.
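
    Roughly this kind of job, I assume (the dump path and remote folder are made up, and it assumes Duplicity's onedrive backend is configured):

    ```bash
    # Dump the database, then let duplicity handle encrypted incremental uploads
    mysqldump --single-transaction --all-databases > /backup/staging/all.sql
    duplicity /backup/staging onedrive://server-backups/db

    # Expire old backup chains after a month
    duplicity remove-older-than 1M --force onedrive://server-backups/db
    ```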

  • I've been too lazy to set anything up, so I just back up the application layer.

    Hosting a bunch of WP sites and that’s it, so the underlying stack I could install in like 15 minutes.

    However, might be worth fixing for easier restoration in case something goes tits up.

  • NetDynamics24 Member, Host Rep

    Acronis, JetBackup, and the cPanel and DA embedded backup systems.

  • Shazan Member, Host Rep

    BackupPC. It allows pull backups, very efficient deduplication across many clients, compression, and it is fast.

  • WebProject Host Rep, Veteran

    @NetDynamics24 said:
    Acronis, jetbackup, cpanel, and DA embedded backup system.

    A common issue with backup options integrated into control panels, like the cPanel backup feature, is that they have occasionally failed to back up in the past, so it's much better to use a dedicated tool like JetBackup to back up the data 👌

    I haven't tried Duplicity yet, I need to do a few tests.

  • From testing Duplicacy:

    • my Macs: Duplicacy doesn't support creating APFS snapshots on external drives, so it cannot create consistent point-in-time backups of external drives if you have files open, which is disappointing. Back to using Arq Backup for the Macs; I have it configured to back up to both IDrive e2 and Backblaze B2.
    • Mastodon server: I will stick with Restic. Just simpler to reason about and works well. With Duplicacy there are various tasks to take into account to keep the repository in good shape.
  • zfs/lvm snapshots for Linux rigs
    Clonezilla for Windows (yes, I know I'm probably in the minority)
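
    For the ZFS side, a minimal snapshot-and-send sketch (pool, dataset and the remote host are placeholders):

    ```bash
    # Point-in-time snapshot of the dataset holding the data
    zfs snapshot tank/www@nightly-2023-01-15

    # Incremental replication to another box, based on the previous snapshot
    zfs send -i tank/www@nightly-2023-01-14 tank/www@nightly-2023-01-15 \
        | ssh backup01 zfs receive -F backup/www
    ```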

  • I have a DIY bash script that runs on cron. It simply runs some commands / containers to export the DBs etc., compresses and encrypts them, and then uploads to my remote cloud. There's nothing incremental or advanced about it.

    The script itself takes only some kilobytes (less than 50 MB including the containers that run commands dedicated to this backup), it's portable, and it runs everywhere.
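
    A stripped-down sketch of that kind of cron script (database, GPG recipient and rclone remote are placeholders):

    ```bash
    #!/usr/bin/env bash
    set -euo pipefail
    stamp=$(date +%F)

    # Export the DB from its container, compress, encrypt, upload, clean up
    docker exec db mysqldump --single-transaction app > /tmp/app-$stamp.sql
    zstd --rm /tmp/app-$stamp.sql
    gpg --batch --yes -r backups@example.com -e /tmp/app-$stamp.sql.zst
    rclone copy /tmp/app-$stamp.sql.zst.gpg remote:server-backups/$stamp/
    rm -f /tmp/app-$stamp.sql.zst /tmp/app-$stamp.sql.zst.gpg
    ```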

    I was thinking about kopia.io; Duplicaty looks cool too, thanks for the information.

  • Restic+Rclone: Easy to set up in a new server, fast, easy to restore, never failed (reliable).

    I used to be a Duplicati user. It was resource intensive and very slow. Then I switched to a shell-script-based backup: basically it makes archives for my different applications, then backs them up using rclone move.
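
    Something like this, roughly (archive path and remote are illustrative):

    ```bash
    # Archive an application directory, then move the archive off-box
    tar -czf /backup/app-$(date +%F).tar.gz /opt/app
    rclone move /backup/ remote:server-backups/ --include "app-*.tar.gz"
    ```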

    I am thinking about setting up Kopia as a secondary backup solution. You know, just in case…

  • @aj_potc said:

    @let_rocks said:

    @TimboJones said:

    @let_rocks said:
    I have positive experience with Veeam Backup & Replication.

    Using hardened repository is becoming a necessity in this age of crypto ransomware shit.

    But I think they've gotten too big and development slowing down...

    If the product is good and stable I don’t mind slow development, unless they increase prices without giving something in return or anything like that.

    I use Veeam as well.

    I'm a bit concerned about how slow they are to support new OS releases. But on the other hand, I really like how their software supports the use of multiple repositories that can run on virtually anything (VMs as well as physical servers).

    Their support attitude in the forums is like, "we target new releases within X days and it's your fault if users upgrade to new releases before Veeam supports it." In my case, it triggered daily warning emails that the users had upgraded to some unsupported technical preview of Windows 11, despite being on Windows 10. None of the users would even know how to join Insiders. And now they're long past that initial 90 days or whatever, and then it just becomes a soft target, not some official release schedule.

    An important difference between Veeam and most of the other backup options mentioned so far is that it uses block-based backups. When you're backing up systems with many millions of small files, this can be a critical difference in performance.

    There's something wrong with Veeam's block based restores that annoys me. It's noticeably slower than Acronis doing image based restores. Say you have 500GB of data on a 1TB drive. Acronis will say it's restoring 500GB and Veeam will say it's restoring 1TB, with reported disk transfer speeds that are nonsensical, and the time required to restore will be somewhere between what you'd expect for 500GB and 1000GB transferred. That made it completely useless to predict the restoration time, and it took significantly longer than it should have.

    Don't get me wrong, Acronis estimate of restoring has been horribly wrong for over a decade, but restores much faster than Veeam in my experience.

  • Jord Moderator, Host Rep

    I've been testing Comet backups on a couple of servers; it's not too bad. Seems to do the trick.

  • @arda said:
    I have a DIY bash script that runs on cron. It simply runs some commands / containers to export the DBs, etc. compress and encrypt, and then upload to my remote cloud. There's nothing incremental and advanced about it.

    It takes only some kilobytes (less than 50 MBs including the running containers that runs command dedicated to this backup), portable, and runs everywhere.

    I was thinking about kopia.io, Duplicaty looks cool, thanks for the information

    I recommend against Kopia. It looks good but it's just not ready for production use yet. It corrupted the repository 3 times in a row for me.

    @Hakim said:

    Restic+Rclone: Easy to set up in a new server, fast, easy to restore, never failed (reliable).

    Why do you use Restic with Rclone? Isn't it slower? Is it because you use storage not supported by Restic?

    I used to be a Duplicati user. It was resources intensive & very slow. Then switched to shell script based backup. Basically makes archive for my different applications then backup them using rclone move.

    Duplicati was very unreliable with large backups for me. I actually never managed to do a full restore with Duplicati, and I know lots of people have had trouble with its unreliability.

    I am thinking about setup kopia as a secondary backup solution. You know, just in case…

    Warning for you too... Kopia on paper looks good, but it doesn't seem to be reliable enough yet. I wouldn't recommend it to back up important stuff just yet.

  • @Jord said:
    I've been testing comet backups on a couple of servers, it's not to bad. Seems to do the trick.

    Looks similar to Jungle Disk but much, much cheaper.

  • eva2000 Veteran
    edited January 2023

    @vitobotta said: What do you use to back up your servers?

    I write my own backup scripts with the features I want, as speed of backup and restoration is important to me:

    1. extensive error checking, so backup retention routines don't delete older backups until the recent backup has been verified to have completed 100%
    2. mobile push and email notification on successful backups
    3. multiple remote backup location support via S3 compatible providers and FTP
    4. utilise newer tar and rsync versions with native zstd compression support, so I can tune the compression levels for near network/disk line-speed backups.
    5. support several methods of MariaDB/MySQL backup, including multi-threaded mysqldump via tab-separated CSV files or standard single-threaded SQL backups, mydumper, and also multi-threaded MariaBackup, so I can choose based on how fast I want to be able to back up and restore data.

    My old benchmarks with the newer versions of tar and rsync and zstd can be found at https://blog.centminmod.com/2021/01/30/2214/fast-tar-and-rsync-transfer-speed-for-linux-backups-using-zstd-compression/ :)

    Here are old benchmarks with rsync 3.2.3 with zstd compression at fast negative levels (-30, -60, -150, -350 and -8000) versus the system rsync 3.1.2, to showcase how flexible zstd is in letting you choose speed versus compression ratio/size. For rsync 3.2.3 with zstd there are compression levels from -131072 to 22 and a choice between the newer xxhash and the traditional MD5 checksum algorithm.
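
    Roughly the kind of invocations those benchmarks compare (paths, host and levels are illustrative):

    ```bash
    # Multi-threaded zstd while tarring; tune the level for speed vs size
    tar -I 'zstd -T0 -3' -cf /backup/data.tar.zst /home/nginx/domains

    # rsync 3.2+ with zstd wire compression at a fast negative level
    rsync -a --compress --compress-choice=zstd --compress-level=-60 \
        /backup/data.tar.zst backup01:/backups/
    ```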

  • @eva2000 said:

    @vitobotta said: What do you use to back up your servers?

    I write my own backup scripts with features I want as speed of backup and restoration are important to me:

    1. extensive error checking, so backup retention routines don't delete older backups until the recent backup has been verified to have completed 100%
    2. mobile push and email notification on successful backups
    3. multiple remote backup location support via S3 compatible providers and FTP
    4. utilise newer tar and rsync versions with native zstd compression support so can tune the compression levels for near network/disk line speed backup speeds.
    5. support several methods of MariaDB MySQL backup including multi-threaded mysqldump via tab separated CSV files or standard single threaded SQL backups, mydumper and also multi-threaded MariaBackup. So I can choose based on how fast I want to be able to backup and restore data

    My old benchmarks with the newer versions of tar and rsync and zstd can be found at https://blog.centminmod.com/2021/01/30/2214/fast-tar-and-rsync-transfer-speed-for-linux-backups-using-zstd-compression/ :)

    Here’s old benchmarks with Rsync 3.2.3 with zstd compression fast negative levels for -30, -60, -150, -350 and -8000 vs system Rsync 3.1.2 to showcase how flexible zstd is in allowing you to choose speed versus compression ratio/sizes. For rsync 3.2.3 zstd there are compression levels from -131072 to 22 and a choice between newer xxhash vs traditional md5 checksum algorithms.

    Could you share these scripts?

  • @lala_th said: Could you share these scripts?

    Nope for my specific use only :)

  • @TimboJones said:

    @aj_potc said:

    @let_rocks said:

    @TimboJones said:

    @let_rocks said:
    I have positive experience with Veeam Backup & Replication.

    Using hardened repository is becoming a necessity in this age of crypto ransomware shit.

    But I think they've gotten too big and development slowing down...

    If the product is good and stable I don’t mind slow development, unless they increase prices without giving something in return or anything like that.

    I use Veeam as well.

    I'm a bit concerned about how slow they are to support new OS releases. But on the other hand, I really like how their software supports the use of multiple repositories that can run on virtually anything (VMs as well as physical servers).

    Their support attitude in the forums like, "we target new releases within X days and it's your fault if users upgrade to new releases before Veeam supports it." In my case, it triggered daily warning emails that the users upgraded to some unsupported technical preview Windows 11, despite being on Windows 10. None of the users would even know how to join Insiders. And now they're long past that initial 90 days or whatever and then it just becomes a soft target, not some official release schedule.

    Yeah, Veeam's stance on this really grinds my gears. In Veeam's understanding, any point release update of a Linux OS counts as a "version upgrade," for which we aren't supposed to expect support from them for 90+ days. I don't know about everybody else, but I'm not letting a Web-facing Linux instance go unpatched for 90+ days after OS updates are available. I use stable Linux distros precisely so I can apply their patches rapidly. I guess Veeam expects us to be testing these patches internally for 6 months before we update production systems... :neutral:

    And Veeam still doesn't support any RHEL derivatives since the demise of CentOS, which is a big issue for me.

    There's something wrong with Veeam's block based restores that annoys me. It's noticeably slower than Acronis doing image based restores. Say you have 500GB of data on a 1TB drive. Acronis will say it's restoring 500GB and Veeam will say it's restoring 1TB, with reported disk transfer speeds that are nonsensical and the time required to restore will be some time between what you'd expect for between 500 and 1000GB transferred. It made it completely useless to predict the restoration time and took significantly longer than it should have.

    Don't get me wrong, Acronis estimate of restoring has been horribly wrong for over a decade, but restores much faster than Veeam in my experience.

    I've only done limited testing of Acronis, but in the end decided against it because it has complex requirements if you want to host your own repositories and not pay another provider for this. With Acronis I also found no ability to create backup copy jobs to send backups to multiple repos, like Veeam supports. On the other hand, I was impressed by the Acronis UI and its extensive OS support, which was bigger than Veeam's.

    If I really had to restore my bigger systems with Veeam, it would take days and would be an absolute last resort if all other restoration attempts failed.

  • Do you back up your installed software or just data? Shouldn't backups be only for user data? A good portion of the disk is taken up by the OS and by apps that are already in distro repositories, so backing those up is superfluous.
