
HostHatch Amsterdam Node Failure | All Data Lost | Still Reliable?


Comments

  • jsg Member, Resident Benchmarker

    @miu said:
    I know providers who use hot spares and have preconfigured automatic array rebuilds that start immediately when the controller finds any disk offline (without technician intervention needed). Also, when controller manufacturers and engineers include options for adding hot spares and auto-starting rebuilds (as standard in every better, more expensive RAID controller), that must have some sane reason and sense behind it = they believe that this can work and be used successfully....

    For example, one provider (whom I personally consider one of the best ever) - LiteServer - used hot spares and auto rebuilds started immediately by the controller on RAID10 arrays. So I still assume that on RAID10 nodes that are not extremely oversold and whose hardware is not very outdated, it is not a problem to use hot spares and let the controller initiate an auto rebuild immediately when a faulty disk occurs and has been detected, and it is not necessary to take the server offline for this purpose...

    Wrong department. The reason for that is simply one of personal priority. Some providers go for safety and take the affected VMs or even the node offline, while others go, to put it simply, for no downtime plus the earliest rebuild possible ("my uptime numbers and my equipment are the priority") and don't care much about performance for users during the rebuild.

    Plus, evidently it's a question of which segment a provider is in. RAID 5 and 6 are the cheapest, so in a segment (the low and mid end of LET) where price is a, and likely the, decisive factor, providers tend to go for RAID 6 plus one or two spare drives for the whole location (not per node or array). Why RAID 6? Because it's the "sweet spot" in that segment; just one additional drive gives "double protection" over RAID 5 (or so they like to think, and so they read in marketing material) but in most cases is still much cheaper than RAID 1 or 10.

    Plus evidently "automatic array rebuild without technician intervention needed" is irrelevant with a decent provider because they have someone in the NOC 24/7 anyway. But hey soon you'll see marketing material saying that their equipment (incl. drives and Raid controllers) are AI controlled!!!, hurray /sarcasm

  • TODO Member

    Did anyone receive the compensation? Mine still says

    Next payment is due on 2023-12-30 for a total of $90.00.

  • TimboJones Member

    @TODO said:
    Did anyone receive the compensation? Mine still says

    Next payment is due on 2023-12-30 for a total of $90.00.

    That only has relevance if we knew your service was paid up for 1.5 years already.

    Thanked by: TODO
  • TODO Member
    edited June 2022

    @TimboJones said:

    @TODO said:
    Did anyone receive the compensation? Mine still says

    Next payment is due on 2023-12-30 for a total of $90.00.

    That only has relevance if we knew your service was paid up for 1.5 years already.

    Yeah sorry, that wasn't clear, not sure why I just pasted that hahaha
    This was the last Black Friday deal with 2 years' prepayment, and it was deployed on 31-12-2021, even though they said this in their deployment email:

    Your billing date will also start from 10th January 2022 instead of today.

    But that didn't happen either, which kinda makes me a bit confused.

  • TimboJones Member

    @TODO said:

    @TimboJones said:

    @TODO said:
    Did anyone receive the compensation? Mine still says

    Next payment is due on 2023-12-30 for a total of $90.00.

    That only has relevance if we knew your service was paid up for 1.5 years already.

    Yeah sorry, that wasn't clear, not sure why I just pasted that hahaha
    This was the last Black Friday deal with 2 years' prepayment, and it was deployed on 31-12-2021, even though they said this in their deployment email:

    Your billing date will also start from 10th January 2022 instead of today.

    But that didn't happen either, which kinda makes me a bit confused.

    Sounds like you have plenty of time to ticket and have that corrected.

    Thanked by: TODO
  • TODO Member

    @TimboJones said:

    @TODO said:

    @TimboJones said:

    @TODO said:
    Did anyone receive the compensation? Mine still says

    Next payment is due on 2023-12-30 for a total of $90.00.

    That only has relevance if we knew your service was paid up for 1.5 years already.

    Yeah sorry, that wasn't clear, not sure why I just pasted that hahaha
    This was the last Black Friday deal with 2 years' prepayment, and it was deployed on 31-12-2021, even though they said this in their deployment email:

    Your billing date will also start from 10th January 2022 instead of today.

    But that didn't happen either, which kinda makes me a bit confused.

    Sounds like you have plenty of time to ticket and have that corrected.

    Yeah, I am just asking here first because I read that they take a very long time to reply to tickets, and I thought the info might be correct elsewhere because of the new portal / old portal thing (I signed up when only the new portal was available, and it didn't have some features). I'll wait a couple of days to see if anyone got the 3 months or if it's only me, and if so I will contact them.
    TBH I really like their offer, and I wouldn't be mad whether I get the compensation or not.

  • Daniel15 Member

    @MeAtExampleDotCom said: This is why multiple backup generations are important: if you only have a single copy, and that copy was refreshed soon before the failure, there is a risk that these pre-complete-death failure patterns mean the only backup is corrupt

    For any backups you care about, I'd really recommend using Borgbackup's append-only mode. It prevents the client from deleting any data from the backup. It means that even if your server gets hacked and the attacker also deletes the backups, you can still recover. That's a lot safer than just a blind rsync mirror with no historical backups. Borg's deduping is pretty good, so I keep daily backups for the past month as well as monthly backups indefinitely.
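
    In case it helps anyone copying this setup, here is a minimal sketch of such a routine (Python driving the borg CLI; the repository URL, paths and retention numbers are placeholders rather than my actual config). The append-only part is enforced on the repository side, for example by creating the repo with borg init --append-only, so a compromised client can add new archives but cannot really get rid of existing ones:

        #!/usr/bin/env python3
        # Sketch of a daily Borg job against an append-only repository.
        # REPO and PATHS are placeholders; the retention numbers only
        # approximate "a month of dailies plus long-lived monthlies".
        import subprocess

        REPO = "ssh://backup@storage.example.com/./backups/myserver"  # placeholder
        PATHS = ["/etc", "/home", "/var/www"]                         # placeholder

        def run(cmd):
            subprocess.run(cmd, check=True)

        # One-time setup (run by hand): create the repo in append-only mode so
        # deletions pushed from the client are not honoured:
        #   borg init --encryption=repokey --append-only <REPO>

        # Daily archive; borg expands the {now} placeholder itself.
        run(["borg", "create", "--stats", "--compression", "lz4",
             REPO + "::daily-{now:%Y-%m-%d}"] + PATHS)

        # Retention: roughly a month of dailies plus long-lived monthlies.
        # In append-only mode this does not actually free space until the
        # repository owner lifts the flag and compacts from the trusted side.
        run(["borg", "prune", "--keep-daily", "30", "--keep-monthly", "120", REPO])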

    @hosthatch said: another node that was recently put into production. It has two independent and smaller RAID60 arrays which makes it more reliable without sacrificing too much on the rebuild time

    Will the new nodes in Los Angeles have the same config? I'm still using my storage VPS that experienced data loss, but I think I'd like to migrate to a newer node at some point (once my other VPSes can be migrated to the new panel, as I'm using the internal network between all of them).

    Thanked by: ariq01
  • edited June 2022

    @Daniel15 said:

    @MeAtExampleDotCom said: This is why multiple backup generations are important: if you only have a single copy, and that copy was refreshed soon before the failure, there is a risk that these pre-complete-death failure patterns mean the only backup is corrupt

    For any backups you care about, I'd really recommend using Borgbackup's append-only mode. It prevents the client from deleting any data from the backup.

    My hacked-together-with-rsync backups use filesystem-based snapshots (not terribly efficient, but it works: many hardlinks) to keep old versions of files.

    It means that even if your server gets hacked and the attacker also deletes the backups, you can still recover.

    Unless they properly kill the backups by gaining access to the backup system, rather than pushing an empty volume from the client. I use a soft-offline arrangement to protect a bit against that, and true offline copies for a few really important bits of information.

    Borg's deduping is pretty good so I keep daily backups for the past month as well as monthly backups indefinitely.

    I like to be wary of deduplication, as the whole point of backups is strategic redundancy. My snapshots do the same thing: many daily/other snapshots hardlink to the same unchanged files, but I have more than one snapshot chain, so there isn't just one physical copy of each file. Otherwise you only really have one backup copy of a file that never changes; it could be a vital file not to lose, and if you get a bad sector where that one much-referenced copy is held, all copies are gone. This is less of an issue if you follow good practice (3-2-1 or similar) and have more than one physical backup location, which I do too, but storage is relatively cheap these days, so there's no harm in being paranoid enough to have multiple copies at each location as well.

    [I have nothing against Borg BTW, it seems a well-regarded and very capable tool, but I've been rolling my own setup to achieve the layers of protection I need/want since long before it existed]
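
    For the curious, the core of that kind of hardlink snapshot scheme is roughly the sketch below (Python wrapping rsync --link-dest; the host and paths are invented, and a real setup needs locking, error handling and the extra chains / offline copies described above):

        #!/usr/bin/env python3
        # Sketch of rsync hardlink snapshots: every run gets its own dated
        # directory, and unchanged files are hardlinked against the previous
        # snapshot, so each snapshot is a full tree but costs little extra space.
        # SOURCE and DEST_ROOT are placeholders.
        import datetime
        import os
        import subprocess

        SOURCE = "root@server.example.com:/srv/data/"   # placeholder
        DEST_ROOT = "/backups/server"                   # placeholder

        today = datetime.date.today().isoformat()
        previous = sorted(d for d in os.listdir(DEST_ROOT)
                          if d != today and os.path.isdir(os.path.join(DEST_ROOT, d)))

        cmd = ["rsync", "-a", "--delete"]
        if previous:
            # Hardlink unchanged files against the most recent earlier snapshot.
            cmd += ["--link-dest", os.path.join(DEST_ROOT, previous[-1])]
        cmd += [SOURCE, os.path.join(DEST_ROOT, today)]
        subprocess.run(cmd, check=True)

    Running two or more independent chains of this (to different destination disks or hosts) is what gives you more than one physical copy of each unchanged file.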

    Thanked by: quicksilver03
  • Levi Member

    Again, the provider is basically saying that customers are idiots :D

  • Another Member

    HostHatch is down again for me, anyone else facing the same issue?

    Amsterdam Node

    Thanked by: darkimmortal
  • Looks to be down, got a message from support saying it's being looked into. Hope they didn't get hit by the bathtub curve on those new drives :(

    Thanked by: darkimmortal
  • @Another said:
    HostHatch is down again for me, anyone else facing the same issue?

    Amsterdam Node

    meeeeeee

  • Void Member

    Makes sense why whoever has their box is eager to transfer it off

  • BetaMaster Member
    edited June 2022

    @Another said:
    HostHatch is down again for me, anyone else facing the same issue?

    Amsterdam Node

    Yeah, me too. Hopefully it's not a drive failure again. Is this the end of my journey?

  • @jmaxwell said:
    Makes sense why whoever has their box is eager to transfer it off

    I'll happily take any transfers :)

    Thanked by: ariq01
  • darkimmortal Member
    edited June 2022

    Also down again in Amsterdam since approx 5am UTC. Symptoms are different from last time - instantly dead rather than IO-related freezes.

    Server page in their customer panel says: "The node this VM resides on has had a hardware failure that is preventing it from booting up. We are currently looking at replacing the failed hardware."

  • @jmaxwell said: Makes sense why whoever has their box is eager to transfer it off

    Except for this downtime and the node failure last time, the experience has been amazing. Great bandwidth and fast hardware.

    Anyway, my server is up again.

    Thanked by: darkimmortal
  • darkimmortal Member
    edited June 2022

    The server is back up as per the previous post, but I noticed it has a slower CPU: it was an EPYC 7551P with a 671 single-core Geekbench score, now an E5-2620 v4 scoring 518.

    Real-world performance loss for me is within 10%, rather than the 30% suggested by Geekbench, so it's not the end of the world. But with 1/4 the physical cores, I guess there is less CPU to go around.

    Edit: Actually, don't mind me; it turns out it was never sold specifically as EPYC, just luck of the draw.
