
HostHatch Los Angeles storage data corruption (Was: HostHatch Los Angeles storage down)


Comments

  • darkimmortal Member
    edited May 2022

    @TimboJones said:

    @darkimmortal said:

    @xetsys said:

    @tetech said:

    @skorous said: To be fair, was it really a known issue until Chicago happened? They had one occurrence on one node of many in LA without a definitive cause. Two instances start to indicate a pattern, and, as they said, they are going to accelerate plans to migrate everybody to new hardware as a result.

    Seems to be the same as last December in LA. Below is the notice from December 2021.

    Hello,

    As you may be aware, there have been multiple outages that have affected your active storage VM in Los Angeles, hosted on the node "STOR4.LAX".

    From our troubleshooting so far, the RAID card in the server appears to be failing and kicking out healthy drives. This is an extremely rare situation, but it is happening at the moment.

    Unfortunately, we cannot guarantee the integrity of the data on the array and are working on moving VMs away from this node. We have the following two options available for you:

    1) We create a new VM for you. You move over the data yourself or restore from your backups. We remove the old VM and move over your IP address (if that is needed by you).
    2) We migrate your VM to another (healthy) node. However, depending on the size of your storage VM, it may take a long time to migrate during which your VM will remain offline.

    Please reply to this email and let us know what you prefer out of these two options.

    We would also like to ask you to take a fresh backup of your important data on the VM as soon as possible, in case we have to deal with the worst-case scenario of complete data loss.

    Apologies for the inconvenience; we are doing our best on our end to get this resolved ASAP.

    Kindest Regards,
    Your HostHatch team

    I wonder if there has been any incident that involved disk failure and successful data recovery, or have all of them been "RAID card failure"? This feels more and more like a RAID 0 configuration. A VM won't be able to tell whether the host storage is RAID 6 or RAID 0; that info is probably not exposed to the virtual environment.

    The symptoms don’t really match a RAID 0 failure.

    Great argument! Oh wait...

    A RAID 0 failure would be either total loss or, if they tried to hide it, runs of null bytes everywhere.

    At a stretch you could see these symptoms if they picked another disk in the RAID 0 array at random and copied its contents to the replacement disk, but that is getting into FUD territory. Scanning for long zero runs, as in the sketch below, would be one way to check.
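
    For illustration, a minimal sketch of that zero-run scan, assuming Python 3 and read access to the disk image or block device inside the guest. The script name, chunk size, and 1 MiB threshold are made-up values for the example, and it only detects runs at chunk granularity, which is enough to spot the large-scale zeroing a hidden RAID 0 member loss would leave behind:

    ```python
    # zeroruns.py -- flag long runs of null bytes in a disk image or
    # block device. Hypothetical helper for this thread: the path,
    # chunk size, and threshold are illustrative, nothing provider-specific.
    import sys

    CHUNK = 1 << 20       # read 1 MiB at a time
    THRESHOLD = 1 << 20   # report zero runs of at least 1 MiB

    def find_zero_runs(path, threshold=THRESHOLD):
        """Yield (offset, length) of all-zero regions, at CHUNK granularity."""
        run_start, run_len, offset = None, 0, 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                if chunk.count(0) == len(chunk):
                    # chunk is entirely zeros: start or extend a run
                    if run_start is None:
                        run_start = offset
                    run_len += len(chunk)
                else:
                    # run ended; report it if it was long enough
                    if run_start is not None and run_len >= threshold:
                        yield run_start, run_len
                    run_start, run_len = None, 0
                offset += len(chunk)
        # report a run that extends to the end of the device
        if run_start is not None and run_len >= threshold:
            yield run_start, run_len

    if __name__ == "__main__":
        for start, length in find_zero_runs(sys.argv[1]):
            print(f"zero run at offset {start:#x}, {length >> 20} MiB")
    ```

    Running it against a dd image (or a device like /dev/vda as root) and finding large zero runs in regions that previously held data would point at the null-byte scenario rather than ordinary bit rot.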

  • sidewinder Member

    Is this the worst company ever? I requested to be moved to cloud a week ago. It feels like the Twilight Zone dealing with these people.

  • tetech Member

    @sidewinder said:
    I requested to be moved to cloud a week ago. It feels like the twilight zone dealing with these people

    You mean something like teleporting?

    Thanked by 1: bulbasaur
  • Daniel15 Veteran
    edited July 2022

    @hosthatch Sorry to bump this again, but do you have plans to do a mass migration of VPSes on the affected node/nodes to new hardware, or is it still case-by-case / by request only?

  • darkimmortal Member
    edited July 2022

    @Daniel15 said:
    @hosthatch Sorry to bump this again, but do you have plans to do a mass migration of VPSes on the affected node/nodes to new hardware, or is it still case-by-case / by request only?

    After the total data loss event on the Epyc platform, be careful what you wish for!

    https://lowendtalk.com/discussion/179541/hosthatch-amsterdam-node-failure-all-data-lost-still-reliable/p1
