Fastest backup tool for Linux? Possibly FOSS
I have been using Kopia for a few months and I like it; it's faster than Restic. But now I am restoring 300+ GB and it's estimating 3h20m (the backup is in iDrive e2). What is the fastest backup tool for Linux, preferably FOSS, that supports S3?
Second question, would backup and restores be faster using Borg and a Hetzner Storage Box, therefore over SSH?
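For reference, this is roughly what the Borg-over-SSH setup against a Storage Box would look like (the u123456 sub-account and paths below are made up; Hetzner Storage Boxes accept SSH on port 23):

```shell
# Hedged sketch only: write the Borg commands to a script rather than
# running them here, since the repo and SSH account are hypothetical.
cat > borg-hetzner.sh <<'EOF'
#!/bin/sh
export BORG_REPO='ssh://u123456@u123456.your-storagebox.de:23/./backups'
# one-time repository setup
borg init --encryption=repokey-blake2 "$BORG_REPO"
# incremental, deduplicated backup run
borg create --stats --compression lz4 ::{hostname}-{now} /srv/data
EOF
chmod +x borg-hetzner.sh
```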
Comments
Is the issue really the software choice or is it iDrive or the network in between?
1gbps is 125MB/sec theoretical (you'll never see that). You're only getting 25MB/sec though so surely there is room for improvement. Is this a VPS?
iDrive was even faster than Wasabi in my other tests, purely in terms of storage speed. And it's a ton faster than Backblaze B2 and others. It's a Hetzner dedicated server.
Expect all storage providers to throttle your I/O in one way or another.
What is the real question here? If you want real block performance, it will
cost. Not extremely expensive, but it will cost. All those shitty "backup solutions"
are not really worth what you are paying. Want a small life-hack? Post it on Usenet.
But you must be somewhat tech-savvy about how it works; then you can have
5000+ days of retention. Just protect it with rar+password and store even 1PB.
The download speed will be 1gbit+ from all the providers, but again, not actually
recommending anything here, just check how piracy works today before it reaches
all the torrent trackers.
What I am really looking for is the fastest free backup tool for Linux.
Never look for a "tool"; look for a flexible solution, so to speak.
A tool? dd/tar/7z - your magic trio.
Then even back it up to a mega.nz drive using the CLI (actually quite good performance)
or other clouds. The idea is making it encrypted + shared among many providers,
so that when you need to download it, you can saturate 10gbit links without thinking.
I give too many ideas here for free
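A minimal, locally runnable version of the tar leg of that trio: archive a directory, and the resulting file is what you would then encrypt and push to one or more remotes (any remote names are hypothetical and left as a comment):

```shell
# Create some demo data and archive it with tar + gzip.
mkdir -p demo-data
echo "hello" > demo-data/file.txt
tar -czf backup.tar.gz demo-data
# Next step would be something like:
#   rclone copy backup.tar.gz remote:bucket/
# repeated per provider, so restores can pull from several at once.
```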
I've been happy using Rclone with B2. The
--transfers
switch lets you tune it to get the most out of your bandwidth, or not, as appropriate.
May I strongly disagree here?
Depends who your provider is, 1gbps is not the limit for syncing stuff between continents.
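To make the --transfers tuning mentioned above concrete, here is a hedged sketch (the "e2" remote and bucket name are hypothetical; configure yours with `rclone config` first):

```shell
# Write the tuned rclone invocation to a reusable script; parallelism
# flags like --transfers and --checkers are how you saturate a fat pipe.
cat > rclone-backup.sh <<'EOF'
#!/bin/sh
rclone copy /srv/data e2:my-bucket/backups \
  --transfers 16 \
  --checkers 16 \
  --s3-upload-concurrency 8
EOF
chmod +x rclone-backup.sh
```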
Duplicacy (https://duplicacy.com/) is good, but it's not open source which is why I haven't tried it myself yet.
That only gives you one backup - ideally you want to keep multiple backups (e.g. one per week) and not have to transfer all the data each time (incremental or differential backup).
Bruh.
I do it on Google, Azure Cloud, Ya.Disk and many others.
And then I download the parts at 10gbit/s from all of them to my fresh server in the EU.
For HEAVY backups, I post it on Usenet, 50+TB (Well, it's heavy for me) and they
store it on their backbones. And when I need everything back, 10gbit+ flowing to me.
I don't think you can, actually, because of physics :-) But I may have not been clear.
I was saying that 125MB/sec is the maximum theoretical speed for a 1gbps port, and you'll never see that in actual practice unless you're in a lab. Trying to set a boundary on the maximum the OP is going to see with a 1gbps pipe.
Definitely there are larger pipes out there.
Maybe define "fast". Fast initial backup or fast incremental backups? How about space usage? Without compression is faster, but larger. So many tradeoffs. I wrote some of them up in our FAQ. Be sure to follow the link to the official Borg docs. Many tips to get better speed moved there recently. E.g. you can ignore extended attrs to make it faster, if you don't need those.
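A hedged sketch of the flags behind those tips (--noxattrs and --noacls need Borg 1.2+; the repository path is hypothetical, so the command is written to a script rather than run):

```shell
# lz4 is the fast-compression tradeoff; --noxattrs/--noacls skip
# extended attributes and ACLs for extra speed, if you don't need them.
cat > borg-fast.sh <<'EOF'
#!/bin/sh
borg create --compression lz4 \
  --noxattrs --noacls \
  /path/to/repo::{hostname}-{now} /srv/data
EOF
chmod +x borg-fast.sh
```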
At BorgBase.com we don't throttle anything, but bandwidth and disk IO are shared. If you stick to less busy times (not 12am/pm and not on the full hour) you will have it mostly to yourself.
I believe he said fastest, not necessarily a 1gbit port
There are faster ways to backup/extract stuff these days, that was my point.
It seems that your service is Borg-specific; what's the difference between backing up to your service and to a Hetzner Storage Box? Hetzner is $35.40/year for 1TB vs $80 for BorgBase.
I have 1 Gbps ports but, generally speaking, I am looking for the tool that is fastest at both backing up and especially restoring, for when something goes wrong. "Fastest" meaning under the same conditions: hardware specs and network.
That's right. BorgBase is specialized in Borg only and comes with much easier setup and more features than a generic storage provider. This includes isolated repos, multiple regions, monitoring for missed backups, append-only mode, server-side compactions to name the main ones.
We also provide expert support to set up your backup flow and maintain or sponsor all relevant projects in the Borg ecosystem. This includes Borg itself, Borgmatic, Pika Backup (for Gnome) and Vorta (macOS and Linux). Keeping those FOSS projects healthy ultimately benefits everyone, including those using other providers.
@vitobotta Make your hot files smaller, that way you can restore your core system faster and get all the colder data over time (eg: 25gb hot data, 275gb cold data)
For some use cases it makes sense... and it is very fast.
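A locally runnable sketch of that hot/cold split (directory names are made up): keep two separate archives so the small "hot" set restores first and the bulk follows later.

```shell
# Split data into a small hot set (configs) and a large cold set (media).
mkdir -p hot cold
echo "config" > hot/app.conf
echo "media"  > cold/video.bin
tar -czf hot-backup.tar.gz hot    # restore this first to get running again
tar -czf cold-backup.tar.gz cold  # pull this back over time
```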
One variation on that is to use a configuration management tool like Ansible. No more backing up of /etc, /var, etc., because all of that is taken care of by my scripts.