
HostHatch Amsterdam Node Failure | All Data Lost | Still Reliable?

135 Comments

  • ariq01ariq01 Member

    Man.....
    every time my HK VPS on HH suddenly went offline, I worried about "data loss" :joy:

  • unit2xunit2x Member

    @ariq01 said:
    Man.....
    every time my HK VPS on HH suddenly went offline, I worried about "data loss" :joy:

    the same

  • lowendclientlowendclient Member

    @ariq01 said:
    Man.....
    every time my HK VPS on HH suddenly went offline, I worried about "data loss" :joy:

    Rclone to Backblaze every day and you won't worry.. (rough sketch of such a nightly job after this comment)

    Thanked by 1ariq01
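A minimal sketch of the kind of nightly rclone-to-Backblaze job suggested above, assuming rclone is installed and a B2 remote has already been configured with rclone config; the remote name, bucket, and paths are placeholders rather than anything from the posters' actual setups:

    #!/usr/bin/env python3
    """Nightly backup sketch: mirror /srv/data to a Backblaze B2 bucket with
    rclone. Run it from cron, e.g. 0 3 * * * python3 /root/nightly_backup.py
    """
    import subprocess
    import sys
    from datetime import date

    SOURCE = "/srv/data"                    # placeholder local data directory
    REMOTE = "b2:my-vps-backups/daily"      # placeholder rclone remote + bucket

    def main() -> int:
        cmd = [
            "rclone", "sync", SOURCE, REMOTE,
            "--transfers", "8",             # parallel uploads
            "--fast-list",                  # fewer API calls on large trees
            "--log-level", "INFO",
            "--log-file", f"/var/log/rclone-{date.today()}.log",
        ]
        # sync makes the remote an exact mirror of SOURCE, so deleted local
        # files disappear remotely too; keep B2 file versioning on for history.
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(main())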
  • bdlbdl Member

    @LiliLabs said:
    I restored from the backup of the backup of the backup of the backup actually :)

    @ariq01 said:
    Man.....
    every time my HK VPS on HH suddenly went offline, I worried about "data loss" :joy:

    that's why you backup

    Thanked by 2ariq01 lorian
  • AnotherAnother Member
    edited June 2022

    I had a 2 TB server, all data lost. I was very happy with the service, but now I am scared.

    @lowendclient said: Rclone to Backblaze every day and you won't worry..

    Backing it up to Backblaze will cost me more than double the price of the VPS itself.

  • AdvinAdvin Member, Patron Provider

    @Another said:
    I had a 2 TB server, all data lost. I was very happy with the service, but now I am scared.

    @lowendclient said: Rclone to Backblaze every day and you won't worry..

    Backing it up to Backblaze will cost me more than double the price of the VPS itself.

    Backup to Google Drive then :)
    $7-$20/month unlimited storage

    Thanked by 1Abd
  • AbdAbd Member, Patron Provider

    @Advin said:

    Backup to Google Drive then :)
    $7-$20/month unlimited storage

    can't get past that daily upload limit 😬

  • AdvinAdvin Member, Patron Provider
    edited June 2022

    @Abd said:

    @Advin said:

    Backup to Google Drive then :)
    $7-$20/month unlimited storage

    can't get past that daily upload limit 😬

    750 GB per day per account.
    Just make multiple (free) accounts, add them to a shared drive, and run sync operations via Rclone, or just do incremental backups (a rough example follows this comment). Anyway, if you have a 2 TB VPS, 750 GB/day should be enough unless your data is constantly changing :)

    Thanked by 2bdl Abd
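For anyone trying the shared-drive route, a rough daily job that stays inside the 750 GB/day per-account quota might look like the sketch below; the gdrive: remote and source path are placeholders, and --max-transfer / --drive-stop-on-upload-limit are stock rclone flags:

    #!/usr/bin/env python3
    """Daily incremental push to a Google Drive remote, capped below the
    750 GB/day per-account upload quota. "gdrive:backups" is a placeholder.
    """
    import subprocess

    CMD = [
        "rclone", "copy", "/srv/data", "gdrive:backups",
        "--max-transfer", "700G",            # stop a bit before the daily cap
        "--drive-stop-on-upload-limit",      # treat Drive's quota error as fatal
        "--transfers", "4",
        "--log-level", "INFO",
    ]

    # "copy" is incremental: unchanged files are skipped, so after the initial
    # seeding a 2 TB dataset fits comfortably inside one daily quota window.
    subprocess.run(CMD, check=False)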
  • AnotherAnother Member
    edited June 2022

    @Advin said: $7-$20/month unlimited storage

    Still more than the server price: $90 for 2 years on Black Friday. Anyway, I had backups of scripts and configs, so I'm now slowly restoring.

  • @stevewatson301 said:
    Is this egress only for georeplication, or do you actually egress that much data for general use? Do you mind sharing a bit more about what you run on these servers (e.g. NFS/Minio/Borgbackup etc.)

    I run Minio; the egress figure is without accounting for deduplication.

    Interesting. If I run the numbers, assuming 20 TB of replicated data, thus reducing the actual amount of data to 5 TB, you still get a price of $0.00985/GB, which happens to be the same as S3's One Zone-IA. Not bad, I'd say.

    I have 20 TB of non-replicated data, stored in chunks across the VPSes. I can withstand one failure and still be able to rebuild.

    Thanked by 3bulbasaur Abd maverick
  • @Abd said:

    @Advin said:

    Backup to Google Drive then :)
    $7-$20/month unlimited storage

    can't get past that daily upload limit 😬

    Just create service accounts. It's an easy workaround, and if you need auto-swapping, there is a forked rclone that does it (rough sketch of the idea after this comment).
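The auto-swapping forks essentially rotate service-account keys for you; a hand-rolled version of the same idea with stock rclone's --drive-service-account-file flag could look roughly like this (the key paths and the sharedrive: remote are placeholders):

    #!/usr/bin/env python3
    """Rotate through Google service-account keys so each rclone run gets a
    fresh 750 GB/day quota. The sa*.json paths and remote name are placeholders.
    """
    import glob
    import subprocess

    for sa_file in sorted(glob.glob("/root/service-accounts/sa*.json")):
        result = subprocess.run([
            "rclone", "copy", "/srv/data", "sharedrive:backups",
            "--drive-service-account-file", sa_file,   # per-key quota
            "--max-transfer", "700G",
            "--drive-stop-on-upload-limit",
        ])
        if result.returncode == 0:
            break      # the whole copy finished within this key's quota
        # non-zero exit: quota (or another error) was hit, try the next key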

  • hosthatchhosthatch Patron Provider, Top Host, Veteran

    @ariq01 said:
    Man.....
    every time my HK VPS on HH suddenly went offline, I worried about "data loss" :joy:

    We don’t do Storage VMs in Hong Kong.

    Our NVMe nodes have software RAID-10, and none of them use hardware RAID cards, which were the culprit in the last few incidents.

    When NVMe drives fail, rebuild times are usually measured in hours rather than days, given the capacity and speed of NVMe drives (rough numbers after this comment).

    So no - none of the issues we’ve had in the recent past would apply to them. That said though, please always keep backups of your data.

    And on Storage nodes, we have mitigated this issue (or lowered the risk by a huge factor) by using multiple RAID cards and multiple RAID arrays per node, instead of single large arrays.

    Not sure if your comment was meant more as a joke, but I hope the info above helps calm your nerves. :)
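The hours-versus-days point is easy to sanity-check: a RAID-10 rebuild only has to re-copy one mirror member, so rebuild time is roughly capacity divided by sustained write speed. The throughput figures below are generic assumptions, not HostHatch's actual hardware numbers:

    #!/usr/bin/env python3
    """Back-of-envelope RAID-10 mirror rebuild times (capacity / write speed).
    Drive sizes and write speeds are illustrative assumptions only.
    """
    def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
        # 1 TB = 1,000,000 MB (decimal), result in hours
        return capacity_tb * 1_000_000 / write_mb_s / 3600

    print(f"2 TB NVMe @ 1500 MB/s: {rebuild_hours(2, 1500):5.1f} h")   # ~0.4 h
    print(f"4 TB NVMe @ 1000 MB/s: {rebuild_hours(4, 1000):5.1f} h")   # ~1.1 h
    print(f"16 TB HDD @  150 MB/s: {rebuild_hours(16, 150):5.1f} h")   # ~29.6 h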

  • ariq01ariq01 Member

    @hosthatch said:
    We don’t do Storage VMs in Hong Kong.

    Hi,
    mine is Promotional Package NVMe 4G (E5)

    @hosthatch said: Not sure if your comment was meant more as a joke, but I hope the info above helps calm your nerves. :)

    Thanks for clarifying, I didn't mean it as a joke :sweat_smile: Sometimes UptimeRobot is unable to ping my HK server (down and then up again); I guess it's because the network is unreachable or requests time out for a while.

    But when I logged in via SSH, uptime was still on track (75+ days). So, yeah, my VPS runs smoothly.

  • TimboJonesTimboJones Member
    edited June 2022

    @hosthatch said:
    And on Storage nodes, we have mitigated this issue (or lowered the risk by a huge factor) by using multiple RAID cards and multiple RAID arrays per node, instead of single large arrays.

    LowEndStatisticians, is this true?

    Adding more hardware RAID cards increases the failure rate and reduces MTBF. It sounds like decreasing the array size is intended to speed up rebuild times, but is that actually true? Is the rebuild time for a 10TB drive the same in a 4 drive vs 8 drive RAID 10 array? It's still writing 10TB in both cases.

    Or is this halving the data loss rate by reducing lost data by 50% when failures occur?

    I thought risk usually goes down as disk count is increased (probability or something spread over more units?)? I never took statistics in high school and regret that.

  • @TimboJones said:

    @hosthatch said:
    And on Storage nodes, we have mitigated this issue (or lowered the risk by a huge factor) by using multiple RAID cards and multiple RAID arrays per node, instead of single large arrays.

    LowEndStatisticians, is this true?

    Adding more hardware RAID cards increases the failure rate and reduces MTBF. It sounds like decreasing the array size is intended to speed up rebuild times, but is that actually true? Is the rebuild time for a 10TB drive the same in a 4 drive vs 8 drive RAID 10 array? It's still writing 10TB in both cases.

    Or is this halving the data loss rate by reducing lost data by 50% when failures occur?

    I thought risk usually goes down as disk count is increased (probability or something spread over more units?)? I never took statistics in high school and regret that.

    Smaller arrays do rebuild faster, yes. Smaller redundant arrays also give them a better chance of recovering from inevitable drive failures. Risk goes up as disk count increases because each drive has the same chance of failing, so more drives means more chances of a failure. That's why RAID 0 actually gives you negative redundancy: any single drive failure kills the whole array, and the chance of at least one drive failing grows roughly in proportion to the number of drives. Multiple small arrays limit your exposure to a single drive failing, though: instead of rebuilding a 30-drive array, you might just have to rebuild a 5-drive array (a toy calculation follows this comment). Again, HH is acting in pretty good faith here, no need to jump down their throats :)

    Thanked by 2bulbasaur TimboJones
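To put toy numbers on the "more drives, more chances of a failure" point above (the 3% annual failure rate is an assumed placeholder, not a measured figure):

    #!/usr/bin/env python3
    """P(at least one drive failure in a year) grows with drive count, which is
    why RAID 0 gets worse as you add disks and why several small redundant
    arrays limit how much data any single rebuild puts at risk.
    """
    AFR = 0.03   # assumed annual failure rate per drive

    def p_any_failure(drives: int, afr: float = AFR) -> float:
        """Chance that at least one of `drives` fails within a year."""
        return 1 - (1 - afr) ** drives

    for n in (1, 4, 8, 30):
        print(f"{n:2d} drives: {p_any_failure(n):.1%} chance of >=1 failure/year")

    # A 30-drive pool split into five 6-drive arrays sees the same number of
    # failures overall, but each one triggers a 6-drive rebuild instead of a
    # 30-drive one, so less data is degraded, and for less time, per incident.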
  • add_iTadd_iT Member
    edited June 2022

    delete

  • duckeeyuckduckeeyuck Member

    @wa44io4 said: I'll be moving away from HostHatch for proper projects and storing backups, they've become very unreliable lately.

    Very unreliable lately because of one failure?
    Because one node stopped working? Isn't that normal?

  • MumblyMumbly Member

    @duckeeyuck said:

    @wa44io4 said: I'll be moving away from HostHatch for proper projects and storing backups, they've become very unreliable lately.

    Very unreliable lately because of one failure?
    Because one node stopped working? Isn't that normal?

    One? What are you even talking about?
    In a very short time, 3 different locations failed. Not to mention the numerous complaints all over the forum about tickets left unanswered for weeks.
    As much as I liked them in the past, yes, they have become very unreliable.

    Thanked by 4default nick_ adly foitin
  • defaultdefault Veteran

    Remember when we had a provider that people kept finding excuses for (including me), all because of the low prices and offers? This is how the term "involucrated" got created around here.

    Thanked by 1foitin
  • @duckeeyuck said: Very unreliable lately because of one failure?

    One failure? They haven't been able to fix the network in Chicago for many months. It takes them at least six months to go from ignoring a problem to acknowledging it.

    this is how it looks right now

  • DPDP Administrator, The Domain Guy

    @default said: This is how the term "involucrated" got created around here

    Correct me if I'm wrong but I don't think the word "involucrated" "got created" around here.

    It was just a word cociu has been using for several years in his replies/communications :smiley:

    Thanked by 3bulbasaur bdl K4Y5
  • defaultdefault Veteran

    @DP said:

    @default said: This is how the term "involucrated" got created around here

    Correct me if I'm wrong but I don't think the word "involucrated" "got created" around here.

    It was just a word cociu has been using for several years in his replies/communications :smiley:

    True. And we adopted the term after he closed his business and services without explanation.

  • LiliLabsLiliLabs Member

    @Samael said:
    In a very short time, 3 different locations failed. Not to mention the numerous complaints all over the forum about tickets left unanswered for weeks.
    As much as I liked them in the past, yes, they have become very unreliable.

    To be fair, 3 different nodes in 3 locations failed. Considering how many they run, it seems somewhat expected, although not good. I run several dozen storage servers personally and I lose around one node every few years. Drive/hardware failures happen. About the tickets, I'm sure it happens, but every ticket I've opened is answered same/next day. I don't understand why people are having this issue, I've found HH support decently fast. Not denying there are issues, I've just experienced the opposite.

  • stevewatson301stevewatson301 Member

    @LiliLabs said:

    @Samael said:
    In a very short time, 3 different locations failed. Not to mention the numerous complaints all over the forum about tickets left unanswered for weeks.
    As much as I liked them in the past, yes, they have become very unreliable.

    To be fair, 3 different nodes in 3 locations failed. Considering how many they run, it seems somewhat expected, although not good. I run several dozen storage servers personally and I lose around one node every few years. Drive/hardware failures happen. About the tickets, I'm sure it happens, but every ticket I've opened is answered same/next day. I don't understand why people are having this issue, I've found HH support decently fast. Not denying there are issues, I've just experienced the opposite.

    As far as I've seen, the trick with HH is to 1) have an actual issue, 2) collect all tech info that you can, and 3) wait for 16-20 hours and then file a high priority ticket directly stating the impact (and no, it can't be "losing millions").

    I once had a routing issue which was promptly solved this way, though HH never replied to my ticket.

    Thanked by 1fluffernutter
  • MumblyMumbly Member
    edited June 2022

    How many of those "tricks" have worked for you, that you can be so opinionated about the case? Just curious. :)

  • LiliLabsLiliLabs Member

    @stevewatson301 said:
    As far as I've seen, the trick with HH is to 1) have an actual issue, 2) collect all tech info that you can, and 3) wait for 16-20 hours and then file a high priority ticket directly stating the impact (and no, it can't be "losing millions").

    I once had a routing issue which was promptly solved this way, though HH never replied to my ticket.

    I just try to be explicit about what needs to be fixed, e.g. "Server IOwait high on instance IP address" and not just "server slow fix now". Including as much data as possible in your ticket just seems like good manners at this point; anything less feels like an insult to your provider's time, especially if you're on a promo plan (a quick sketch of pulling such numbers is after this comment). The last time I had an issue I got a ticket response in under 5 minutes with some advice from Mike (love that guy). Expecting your provider to do detective work on your service that doesn't give them any margin is insulting and it's no wonder why those types of tickets take longer.
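As a small illustration of the "collect the tech info first" step, something like the sketch below grabs uptime and the cumulative CPU iowait share on a Linux VPS, so a ticket like "Server IOwait high on instance IP address" can include concrete numbers (Linux-only; it reads /proc directly):

    #!/usr/bin/env python3
    """Collect a couple of concrete figures to paste into a support ticket
    instead of "server slow fix now". Reads /proc/stat and /proc/uptime.
    """
    def cpu_iowait_percent() -> float:
        with open("/proc/stat") as f:
            # first line: cpu user nice system idle iowait irq softirq steal ...
            fields = [float(x) for x in f.readline().split()[1:]]
        return 100 * fields[4] / sum(fields)

    def uptime_days() -> float:
        with open("/proc/uptime") as f:
            return float(f.read().split()[0]) / 86400

    if __name__ == "__main__":
        print(f"uptime: {uptime_days():.1f} days")
        print(f"cumulative CPU iowait since boot: {cpu_iowait_percent():.2f}%")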

  • MumblyMumbly Member
    edited June 2022

    @LiliLabs said: Expecting your provider to do detective work on your service that doesn't give them any margin is insulting and it's no wonder why those types of tickets take longer.

    How do you know that all those people (and there are really a lot of them complaining recently) expected the provider to do detective work for them?
    Just because you personally have a good experience SO FAR (you may change your opinion anytime) doesn't mean that other ticket requests are less legit or less helpful in describing a problem than yours, so please just stop making things up about other people's support tickets ...

    Thanked by 1foitin
  • bulbasaurbulbasaur Member
    edited June 2022

    @Samael said:

    @LiliLabs said: Expecting your provider to do detective work on your service that doesn't give them any margin is insulting and it's no wonder why those types of tickets take longer.

    How do you know that all those people (and there are really a lot of them complaining recently) expected the provider to do detective work for them?

    Just work a tech support job at any company. The number of customers who refuse to cooperate or provide any information and yet demand that their issue be fixed "RIGHT NAO" is the primary driver of attrition in those jobs.

    Or maybe you could create a new thread with the contents of your tickets for us to decide where you lie. I'm expecting a bellicose answer to this request though :)

  • @Samael said:
    How do you know that all those people (and there are really a lot of them complaining recently) expected the provider to do detective work for them?
    Just because you personally have a good experience SO FAR (you may change your opinion anytime) doesn't mean that other ticket requests are less legit or less helpful in describing a problem than yours, so please just stop making things up about other people's support tickets ...

    I've done work with web hosting providers for the past 6 years. Customers are just like this.

  • @Samael said:

    @LiliLabs said: Expecting your provider to do detective work on your service that doesn't give them any margin is insulting and it's no wonder why those types of tickets take longer.

    How do you know that all those people (and there are really a lot of them complaining recently) expected the provider to do detective work for them?
    Just because you personally have a good experience SO FAR (you may change your opinion anytime) doesn't mean that other ticket requests are less legit or less helpful in describing a problem than yours, so please just stop making things up about other people's support tickets ...

    And somehow you can ignore good experiences with HostHatch support simply because some people have been complaining? The reality is that the most vocal people tend to be the ones who want to complain about something.

    Thanked by 1fluffernutter