
Life after CrashPlan: why they sucked, why Backblaze & B2 suck, Glacier vs Others


Comments

  • rm_ IPv6 Advocate, Veteran
    edited November 2017

    If you upload many small files, the speed will be less than 10 Mbit, because completing each upload can take several seconds. Try with files of 1GB or more and check what speed you get during the actual upload.
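
    A quick way to test that, as a rough sketch (assuming a Linux box and an rclone remote already set up for hubiC; "hubic:default" is a placeholder name):

    # 1 GiB of incompressible test data
    dd if=/dev/urandom of=test1G.bin bs=1M count=1024
    # time one large upload to see the sustained rate
    time rclone copy -v test1G.bin hubic:default

    If a single big file uploads fast, the bottleneck is per-file overhead rather than raw bandwidth.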

    But at this point, from all your posts in this thread going on and on, it almost seems like you didn't actually want the product, but just got it for a chance to give it bad reviews and do some whining about OVH (well, of course: they step on your toes in the hosting market more and more, now expanding into the US and all that).

  • KuJoe Member, Host Rep

    rm_ said: But at this point, from all your posts in this thread going on and on, it almost seems like you didn't actually want the product, but just got it for a chance to give it bad reviews and do some whining about OVH (well, of course: they step on your toes in the hosting market more and more, now expanding into the US and all that).

    I wanted to be wrong. I wanted it to be a decent solution. I don't consider them a competitor of my company, so them being bad or good only hurts me personally, not professionally. In my past experience with them they were never this bad. If you look at my past reviews of them on LET you'll notice that the speeds I am getting today and the speeds I was getting before are very different. I knew their plans were too good to be true, but I figured since this was OVH they wouldn't be horrible. I was wrong.

    And you are right, I didn't want the service. Before paying the invoice there were already plenty of red flags that their service was horrible, but again I figured "it's OVH, how bad can they be?" only to find that the reviews and my past experience didn't even come close to capturing how bad the service is.

    5 days into the upload and I'm sitting at 10.18GB uploaded so far. To compare hubiC to Amazon Drive, the initial backup took 5 Days, 12 Hours, 13 Minutes, and 47 Seconds to Amazon Drive.

  • @KuJoe Are you using the official client or something like rclone?

    The official Windows client hasn't been updated for ages, and the "backup" feature only allows for 3 simultaneous transfers. The official Hubic Linux client is Mono-based so it's not great either.

  • KuJoe Member, Host Rep

    @M_Ordinateur said:
    @KuJoe Are you using the official client or something like rclone?

    The official Windows client hasn't been updated for ages, and the "backup" feature only allows for 3 simultaneous transfers. The official Hubic Linux client is Mono-based so it's not great either.

    I'm using the Synology client (Hyper Backup).

  • I haven't found an Internet backup provider that doesn't suck.

    Find someone (perhaps on LET) with whom you can have a reciprocal arrangement - you send a small computer with an 8 TB drive, that person sends one to you (or you send each other disk images for the same hardware on each end), and you each back up to each other's location. Two Raspberry Pis and two 8 TB USB drives would be perfect.
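
    A minimal sketch of the reciprocal job each side could cron (the hostnames, paths, and pre-existing SSH key auth are all assumptions here):

    # push local backups to your buddy's Pi at 2am; they run the mirror image against yours
    0 2 * * * rsync -az --delete /data/backups/ buddy-pi.example.com:/mnt/usb8tb/yourname/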

  • Maounique Host Rep, Veteran

    johnklos said: you send a small computer with an 8 TB drive

    Why send anything? Would the drives or computers at the other end suck if you hadn't sent them yourself? Just agree to buy the same hardware, maybe used from eBay or something, and be done with it.

    Thanked by 1rm_
  • Maounique said:

    Why send anything

    I think the idea might be to send a drive full of data (your backups) to avoid having to upload them through a slow ISP connection. Then further backups are just incremental updates.

    I use data center storage with regular old servers for stuff like that though, or storage offerings like Online C14, Hetzner Storagebox, etc. Stuff priced like Hubic is obviously going to be crap. There is no free lunch.

  • Currently using Syncthing to sync to a Kimsufi 2TB server. The Kimsufi is set to rsync to another Kimsufi every 12 hours. At home, an additional sync goes to two portable WD 3TB drives. Been going fine so far.
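
    For anyone copying that setup, a sketch of the 12-hourly job on the first Kimsufi (the hostname and paths are placeholders):

    # crontab: mirror the synced data to the second Kimsufi every 12 hours
    0 */12 * * * rsync -az --delete /home/sync/ ks2.example.com:/home/sync-mirror/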

  • There's a lot of band-width in a station wagon. -Fred Gruenberger, 1971

  • KuJoe Member, Host Rep

    7 days of uploading and I'm at 14.40GB, and my other backups are 7 days behind. :( I gave it a valiant effort but I'm abandoning hubiC. Oh well, I wanted it to not be horrible, but it failed me. :(

  • KuJoe Member, Host Rep
    edited November 2017

    Just another update for anybody who cares and is still looking for a solution. The lengthy hubiC upload killed my Amazon Drive backup job on my NAS, so I had to create a new one. 5 days in and 412.34GB uploaded so far, compared to the 10.18GB I was able to upload to hubiC in the same time frame.

    I've got 4 days left on my Backblaze trial and I think I'm going to stick with it for $50 a year. It's only backing up 102GB of my critical data, but after the initial upload (about a day or two) I switched the performance settings to "automatic" and it's been able to keep up with files as I change them, including extracting large archives. The GUI estimates it can upload a max of 59GB per day, which is plenty for my needs. It's still no CrashPlan replacement though: it only stores 30 days' worth of versions of my files, which is a real letdown because it's usually the old versions that save me in times of need (I remember going more than 2 years back in my CrashPlan backups to find a license key I never thought I'd need again, and it ended up saving me a lot of money). Then again, when the time comes for Backblaze to raise their pricing or discontinue the service, I won't feel horrible about cancelling, because 30 days of versions isn't much to give up.

    Thanked by 1jaden
  • I made one payment to hubiC in May for the 10TB plan. My payment is "still processing" and my data is still there. Kind of funny; it was a test run of 350GB of video. I would like to use it, and the speed is acceptable, but I don't want my data to disappear and then have to upload it all over again.

  • Giving Backblaze a try on my Windows machine. Not hating it so far, though it will be months before everything is backed up. I guess it's worth the $5/month just to back up my music collection, which is >1 TB and on an external drive.

    Seems odd that they have a Python-based command-line tool for using the B2 cloud storage from Mac OS but they don't support Linux. How much different can it be?

  • szarka said: Seems odd that they have a Python-based command-line tool for using the B2 cloud storage from Mac OS but they don't support Linux. How much different can it be?

    They don't have Linux software because they offer unlimited storage for personal backups and don't want someone to back up their mega storage hosting boxes.

  • @jetchirag said:

    szarka said: Seems odd that they have a Python-based command-line tool for using the B2 cloud storage from Mac OS but they don't support Linux. How much different can it be?

    They don't have Linux software because they offer unlimited storage for personal backups and don't want someone to back up their mega storage hosting boxes.

    B2 is charged by the GB, though. So they should be happy to have people backing up their mega storage hosting boxes to it!

  • Maounique Host Rep, Veteran

    willie said: I think the idea might be to send a drive full of data (your backups) to avoid having to upload them through a slow ISP connection.

    If that is the case, the whole thing is useless: when you need your data you will have to get the drive back through the mail, because otherwise it comes back down over the same slow connection.

  • B2 has a Linux client

    Thanked by 1szarka
  • @lurch said:
    B2 has a Linux client

    So that command-line Python tool also works on Linux? Why don't they say so at https://www.backblaze.com/b2/docs/quick_command_line.html ? Lots of talk about third-party integrations for Linux on their site, but for the one tool I actually want to use they never mention Linux!

  • @szarka said:

    @lurch said:
    B2 has a Linux client

    So that command-line Python tool also works on Linux? Why don't they say so at https://www.backblaze.com/b2/docs/quick_command_line.html ? Lots of talk about third-party integrations for Linux on their site, but for the one tool I actually want to use they never mention Linux!

    Yep, using the git method.
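
    For reference, a sketch of that method (the repo is Backblaze's published one; the exact install step may vary by version):

    git clone https://github.com/Backblaze/B2_Command_Line_Tool.git
    cd B2_Command_Line_Tool
    pip install .    # or: pip install b2 (from PyPI)
    # then authenticate, per Backblaze's CLI docs
    b2 authorize-account <accountId> <applicationKey>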

    Thanked by 1szarka
  • @szarka said:

    @jetchirag said:

    szarka said: Seems odd that they have a Python-based command-line tool for using the B2 cloud storage from Mac OS but they don't support Linux. How much different can it be?

    They don't have Linux software because they offer unlimited storage for personal backups and don't want someone to back up their mega storage hosting boxes.


    B2 is charged by the GB, though. So they should be happy to have people backing up their mega storage hosting boxes to it!

    Sorry, I missed that it was B2. AFAIK it works with Linux.

  • @Maounique said:
    If that is the case, the whole thing is useless: when you need your data you will have to get the drive back through the mail, because otherwise it comes back down over the same slow connection.

    It'd be silly to think you'd need it all at once unless something catastrophic happened, and if that does happen, then yes, you'd probably ship it.

    I use backups for... wait for it... backing things up. I don't use backups for storage, but I can see how some people might want to do that.

  • Maounique Host Rep, Veteran

    johnklos said: I use backups for... wait for it... backing things up. I don't use backups for storage, but I can see how some people might want to do that.

    I do not understand. Of course I am talking about backing things up. When your data is wiped for some reason, you will want your backups available. If you have to wait for your disk to arrive through the post, then either the data is not important or you can live without it for long enough.
    In both cases, remote storage should do; only contact someone with a good upload, just in case.

  • KuJoe Member, Host Rep
    edited November 2017

    If you have your backups set up correctly, your local backups are there if you need to restore a few files, and your off-site backups are there for when your house burns down. Off-site backups are worthless if you're not able to do a complete restore in a reasonable amount of time.

  • Harambe Member, Host Rep

    Maounique said: If you have to wait for your disk to arrive through the post, then either the data is not important or you can live without it for long enough.

    Yeah, I think you guys are just arguing about the type/importance of your data. I can wait a week or two for a backup copy of family photos/videos to arrive, but not business-critical files.

    I wouldn't rely on a $50/yr personal backup service to quickly recover files needed to operate my business - I have those mirrored across several of my own servers and S3 for good measure.

  • Maounique Host Rep, Veteran

    I have 2 layers of backups, or rather tiers.
    One is huge; it is personal and is backed up at home and on USB disks at another home in another city.
    The other is more technical but also includes writings, and is backed up "in the cloud"; only a couple of TB or so.
    I do not currently have a third backup of the first tier of data, nor a second of the second tier, which is not a very good idea, but the cost would be too much for very low risks. Let's say that if Google blows up together with my house, I have more important things to worry about than my data.

  • Not sure why some people like to talk badly about Backblaze... they are one of the best backup services I use, and probably the cheapest.

    I back up 1TB+ daily to Backblaze B2 using rclone https://rclone.org/b2/ on an Ubuntu server with 10 Gbps... and no problems.
    You only need a script that compresses the small files into a larger archive, plus enough concurrent threads set for uploading.
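
    A sketch of such a script (the paths, remote, and bucket names are placeholders):

    #!/bin/sh
    # pack today's small files into one big archive, then upload the
    # backups directory with plenty of parallel transfers
    D=$(date +%Y%m%d)
    tar -czf /home/backups/daily_$D.tar.gz /home/data/smallfiles
    rclone copy -v --transfers 32 /home/backups remote:bucketname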

    Fastest would be Google Cloud Storage, using their client... but it's also much pricier.

    Or you can create your own backup cluster with GlusterFS... if you know what you're doing.

  • raindog308 Administrator, Veteran

    Update...been testing B2.

    I tried their published CLI client but it has issues, chiefly an inability to handle symlinks. It just follows them blindly... hours later I discovered an innocent link to my Dropbox folder resulted in... yeah...

    However, rclone sync seems OK. There are some drawbacks:

    • you can tell it not to follow links, but if you were ever to sync the other way, the links won't be restored, which is not surprising. I need to write a "restore all links that existed at time of backup" script and stick its output in the backup or something (see the sketch after this list).

    • if you rename a top directory or move things around, you'll be reuploading all of that. It tends to freeze top-level directory names and encourage cautious, gradual reorganization. With CP's dedupe none of this mattered.

    I looked at duplicity but the way it works is off-putting... if I ever needed a restore, I'd need 2x the size to stage and uncompress (I think...?). However, I could break up my volumes a bit; if none were bigger than a couple hundred GB that might work.
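
    A minimal sketch of that symlink script (assuming GNU find; the paths are placeholders):

    #!/bin/sh
    # snapshot: record every symlink under the backup root before uploading
    ROOT=/home/me/data
    find "$ROOT" -type l -printf '%p\t%l\n' > "$ROOT/symlinks.tsv"

    # restore: recreate each recorded link after pulling the data back down
    while IFS="$(printf '\t')" read -r linkname target; do
        ln -sfn "$target" "$linkname"
    done < "$ROOT/symlinks.tsv"

    The .tsv rides along inside the backup, so the links can be rebuilt even though rclone won't restore them.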

    Thanked by 1szarka
  • @raindog308 I think you may want to plan your directory structure a bit more.
    For example, an archive where you put stuff that doesn't change often (and keep it in sync with rclone) and another delta archive, for changing or recent stuff.

    • Stick to a standard name, such as YYYYMMDD/directoryname/YYYYMMDD_backup.zip (a sketch after this list shows the idea).

    • Don't rename the archive directory and contents, just add / delete new stuff daily from delta.

    • Have a different bucket for different stuff... don't put everything inside the same one, and point rclone at specific, smaller directories (it starts faster when there are fewer files).

    • Set a time limit on your bucket, so you don't accumulate and pay for TBs of data from "2 years ago" in the future.

    • Use at least 16 or 32 threads.

    • Use copy instead of sync, when possible... this is what I use:

    # copy never deletes on the remote, so the recovery bucket only grows
    /usr/bin/rclone copy -v --stats=15s --transfers 32 /home/backups/recovery myremotename:bucketnamerecovery

    # sync mirrors deletions too, so the delta bucket tracks the source exactly
    /usr/bin/rclone sync -v --stats=15s --transfers 64 /home/backups/delta myremotename:bucketnamedelta
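
    And a sketch of building the dated archive named as above (the directory and bucket names are placeholders):

    #!/bin/sh
    # create today's YYYYMMDD archive, then push the archive tree with copy
    D=$(date +%Y%m%d)
    mkdir -p /home/backups/archive/$D/projectname
    zip -r /home/backups/archive/$D/projectname/${D}_backup.zip /home/data/projectname
    /usr/bin/rclone copy -v --stats=15s --transfers 32 /home/backups/archive myremotename:bucketnamearchive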

  • This is how I roll:

    • I've got 4 different storage VPSes, each with between 1-2 TB of storage. Each has Resilio Sync on it, pointed to different volumes on various home computers.
    • I'm a big documentary fan so I download quite a bit (got about 4TB worth). Many of these documentaries show up on YouTube (the search filter for duration > 20 minutes is helpful), so that's basically free backup for a good portion of them. Just like camelcamelcamel.com tracks Amazon prices, I suppose you could write a scraper that checks the odds of any of your media titles being on YouTube at a given time.
    • I use a private torrent tracker for getting these documentaries. If you build up a good ratio at a tracker, that's basically free re-download credit in the event of a local drive failure. You just need to cron an ls > movies.txt to keep a text file of all the titles you've downloaded, and print that out if you need a backup (see the sketch below).
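
    A sketch of that cron line (the paths are placeholders):

    # dump the media listing nightly so the titles survive a drive failure
    0 3 * * * ls /mnt/documentaries > /home/me/movies.txt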

    Backblaze only has the data stored in a single datacenter in Sacramento, California, though they have opened a new one in Phoenix, Arizona.

    Those are all within Kim Jong Un range... no thanks.

  • @boxelder said:
    Those are all within Kim Jong Un range .. no thanks.

    Your data won't really matter, since the planet will be gone before a restoration could complete, anyhow.
