Down - OVH - SBG - Lots and lots of tears.


Comments

  • Neoon Community Contributor, Veteran
    edited March 2021

    Bad news, everyone: it has been delayed.
    Monday, 22 March is the date for now.

    https://www.ovh.ie/news/press/cpl1786.fire-our-strasbourg-site

  • deank Member, Troll

    It is unlikely that OVH has control over how fast they can turn stuff on.

    Inspections need to be made by the fire department, there is the insurance company to deal with, and the power company is calling the shots too.

  • amarms Member
    edited March 2021

    OVH is the epic winner of this fire. Oles is such a chad. I've been so impressed with their handling of this situation that I bought 5 more servers with them. They'll make more money than ever. The real suckers are the people screaming on Twitter who had no backups of their CrItIcAl dAtA.

    As for the reason... data centers don't usually catch fire, right? Right?? I think they had data that somebody really wanted gone, so the entire DC had to be taken down. Sabotage is the only plausible explanation at this point. I think further down the line it'll become very clear this was what happened here.

    Rigging up the site for a massive UPS/battery bank/generator explosion would've been far more believable than a fire, but would've left no survivors. The staff should be really grateful.

    And remember. Today it's OVH, tomorrow it might be your data center. Once somebody is determined enough, they're able to accomplish anything.

  • deank Member, Troll

    I do recall hearing about DC fires occasionally but, as far as I know, none of the incidents I've heard of was able to spread, probably due to concrete floors.

  • @amarms said: Ye yee... these bitches deserved a good burnin'

    This hashtag is a goldmine

  • Neoon Community Contributor, Veteran

    Sadly, some of these people are clearly trolls who just joined this month.

    Thanked by 1: webcraft
  • yoursunny Member, IPv6 Advocate

    Most people are demanding access to their burnt data.
    Meanwhile, some are asking for the data to be re-deleted:


  • OVH has put a status page where you can check if your service is recoverable:

    https://www.ovhcloud.com/en/lp/status-services-backup-strasbourg/

    Most interesting thing from my perspective is that it seems they lost almost all data from the "cloud archive" service, aka long-term storage.

  • @yoursunny said: Meanwhile, some are asking for the data to be re-deleted:

    Ooh... Ooh... double burn :wink:

  • Hello,
    I'm looking for someone affected by the accident who had backups somewhere, started a new server in another DC, and used their FAILOVER IP to successfully move it to the new server.
    That way I'd know whether this service (failover IP) is reliable for situations like this.
    Thanks.

  • yoursunny Member, IPv6 Advocate

    @lowdod said:
    I'm looking for someone affected by the accident who had backups somewhere, started a new server in another DC, and used their FAILOVER IP to successfully move it to the new server.
    That way I'd know whether this service (failover IP) is reliable for situations like this.

    According to Twitter, users were receiving error messages when trying to move a failover IP out of SBG. Some users tried for many hours and finally got through.
    It would be faster to switch the DNS records instead; a sketch of that follows.
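    For reference, repointing a record is a single API call when the zone is hosted with a DNS provider that exposes one. A minimal sketch against the Cloudflare API (the zone ID, record ID, token, and IP are placeholders, not details from this thread):

        # repoint the A record at the standby server
        curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
          -H "Authorization: Bearer $API_TOKEN" \
          -H "Content-Type: application/json" \
          --data '{"type":"A","name":"www.example.com","content":"203.0.113.7","ttl":60}'

    The catch is TTL: this only fails over quickly if the record's TTL was already short before the disaster.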

    Thanked by 1: xms
  • coolice Member
    edited March 2021

    We have it better (we do storage replication every ~5 minutes between different OVH datacenters) + a cold reserve (storage replicated every hour to another provider) + backups, BUT:

    The failover IP movement function got clogged in this case, and through the day there was service degradation (of the movement) across the entire EU.

    We started an IP movement from SBG to GRA at 4:40 AM CET and it took more than 6, maybe 7 hours - I was ready to give up and start the VMs with new IPs.

    Then, through the day, I started moving another IP I don't use between GRA and RBX (timing dropped to 2-3 hours), but it was still slow... In the evening it was back to normal, 3-5 minutes.

    They claimed that they have added capacity to that service so that it will not happen again.
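    For anyone who has to do this again, the move can be scripted instead of clicked. A minimal shell sketch, assuming OVH's POST /ip/{ip}/move endpoint and their request-signing scheme; the IP, target server name, and all credentials below are placeholders:

        # move failover IP 203.0.113.7 to server ns1234 via the OVH API
        AK=app_key; AS=app_secret; CK=consumer_key          # placeholder credentials
        TS=$(curl -s https://eu.api.ovh.com/1.0/auth/time)  # use OVH's clock for the signature
        URL='https://eu.api.ovh.com/1.0/ip/203.0.113.7%2F32/move'
        BODY='{"to":"ns1234.ip-203-0-113.eu"}'
        # signature = "$1$" + SHA1(secret+consumerKey+method+url+body+timestamp)
        SIG='$1$'$(printf '%s+%s+POST+%s+%s+%s' "$AS" "$CK" "$URL" "$BODY" "$TS" | sha1sum | awk '{print $1}')
        curl -s -X POST "$URL" \
          -H "X-Ovh-Application: $AK" -H "X-Ovh-Consumer: $CK" \
          -H "X-Ovh-Timestamp: $TS" -H "X-Ovh-Signature: $SIG" \
          -H 'Content-Type: application/json' --data "$BODY"

    Scripting it won't make OVH's queue move any faster, of course; it just lets you retry without babysitting the panel.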

    Thanked by 1: yoursunny
  • vero Member, Host Rep

    You wouldn't guess what message popped up when I entered the OVH website..

    The protection of your data is our priority.

    Thanked by 3: lentro, jsg, TimboJones
  • deank Member, Troll
    edited March 2021

    @vero said:
    You wouldn't guess what message popped up when I entered the OVH website..

    The protection of your data is our priority.

    Well, to be fair, they did send customers' data to clouds, actual clouds.

    Now, that is pure dedication on their part. It must have cost them a lot, too. Show me one provider who does that for ya, none.

    OVH forever.

  • vero Member, Host Rep

    @deank said:
    Well, to be fair, they did send customers' data to clouds, actual clouds.

    First real cloud provider. No one promised that the data would be retrievable.

    Thanked by 1: webcraft
  • deank Member, Troll

    Those data in the clouds won't ever be breached. That's the best security.

  • cazrz Member

    It's data in the smoke, not in the clouds.

    Thanked by 1: webcraft
  • deank Member, Troll

    Meh, that's a bureaucratic choice of words.

    Smoke, clouds - they are all tiny particles of something in the air. To OVH, they are the same.

    Thanked by 1: webcraft
  • pulseone Member
    edited March 2021

    I thought some colors on
    https://www.ovhcloud.com/en/lp/status-services-backup-strasbourg/
    would be beneficial, so I give you this little Chrome console snippet ;)

        // colorize the status cells: green = recoverable, red = non-recoverable,
        // yellow = still under investigation
        $('td').each((k, v) => {
          v = $(v);
          switch (v.text()) {
            case "Recoverable": v.css("background-color", "#00ee0099"); break;
            case "Non-recoverable": v.css("background-color", "#ee000099"); break;
            case "Under investigation": v.css("background-color", "#eeee0099"); break;
          }
        });

    Note the many reds in OpenStack zones 1, 2, 4...

  • lowdod Member
    edited March 2021

    @coolice said:
    We have it better (we do storage replication every ~5 minutes between different OVH datacenters) + a cold reserve (storage replicated every hour to another provider) + backups, BUT:
    ...
    We started an IP movement from SBG to GRA at 4:40 AM CET and it took more than 6, maybe 7 hours - I was ready to give up and start the VMs with new IPs.

    Thanks for sharing... so it seems one should go directly for an IP change in such an event, since it can be predicted that the service will degrade.

    By the way, how do you replicate the storage every 5 minutes between DCs?

  • @lowdod said:

    @coolice said:
    We have it better (we do storage replication every ~5 minutes between different OVH datacenters) + a cold reserve (storage replicated every hour to another provider) + backups, BUT:
    ...
    We started an IP movement from SBG to GRA at 4:40 AM CET and it took more than 6, maybe 7 hours - I was ready to give up and start the VMs with new IPs.

    Thanks for sharing... so it seems one should go directly for an IP change in such an event, since it can be predicted that the service will degrade.

    By the way, how do you replicate the storage every 5 minutes between DCs?

    I hope they will fix it, because there is no point in having failover IPs when you cannot fail over in time - and they advertised them for exactly that, at least in the past...

    For one domain, DNS failover is OK, but for hundreds - when half of them don't use your DNS because some forum told them not to use the provider's DNS - it becomes complicated.

    Proxmox ZFS storage replication: https://pve.proxmox.com/wiki/Storage_Replication

    After the first sync it is super efficient and takes only seconds to sync changes.
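    For anyone curious, a minimal sketch of setting such a job up with Proxmox's pvesr CLI (the VM ID, target node, and rate cap below are hypothetical):

        # replicate VM 100's disks to node pve2 every 5 minutes, capped at 10 MB/s
        pvesr create-local-job 100-0 pve2 --schedule '*/5' --rate 10

        # list all replication jobs with their last run and current state
        pvesr status

    Under the hood this is incremental ZFS snapshot send/receive, which is why only changed blocks move after the first full sync.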

    Thanked by 1: webcraft
  • @Jason18 said:
    OVH has put a status page where you can check if your service is recoverable:

    https://www.ovhcloud.com/en/lp/status-services-backup-strasbourg/

    Most interesting thing from my perspective is that it seems they lost almost all data from the "cloud archive" service, aka long-term storage.

    Not sure if I understand this correctly, but what are the "Internal Backup (DC/Service)" entries, e.g. "Public Cloud Archive (os-pca-sbg)", that are in the same rows as the VPS zones? Does it mean they replicated the data from the VPSes even if you didn't pay for or select any backup option? Or are those for the paid Public Cloud Archive option? I'm mostly concerned about what "Internal" means in this case.

    I was never particularly worried about backups, but I was always concerned about whether data actually disappears from existence the moment I delete it from a VPS. This situation of non-recovery luckily seems to confirm that it does, in the case of OVH at least.

  • Proxmox ZFS storage replication: https://pve.proxmox.com/wiki/Storage_Replication

    After the first sync it is super efficient and takes only seconds to sync changes.

    Thanks! While sync is fast, is there any reliable way to check that the copy is correct and consistent?
    Some years ago I found a bug in Bacula that caused some percentage of files to actually be missing from the backups; since then I'm really paranoid about this. I was curious how many people at the time didn't realize that the backups they were making were useless :)

  • coolice Member
    edited March 2021

    @lowdod said:
    Thanks! While sync is fast, is there any reliable way to check that the copy is correct and consistent?
    Some years ago I found a bug in Bacula that caused some percentage of files to actually be missing from the backups; since then I'm really paranoid about this. I was curious how many people at the time didn't realize that the backups they were making were useless :)

    Bacula and ZFS are not comparable, as ZFS has way more users, from data hoarders running FreeNAS to Rsync.net, Hetzner storage boxes, and other big businesses... (OpenZFS, I mean; the non-open one is hoarded by Oracle.)

    Can a bug happen? Yes it can... as one did 6 years ago: https://github.com/openzfs/zfs/issues/4050

    But can a bug happen before the community finds it, while at the exact same moment the main server burns down, the hot spare has a file broken by a ZFS bug (not an entire file system, just a file), the cold storage is affected on the exact same file, and on top of that the regular rsync JetBackup fails at the exact same moment?

    Yes, it can...

    But then it would be an act of God, and it would be very hard to sue your $2.95/mo shared hosting provider and prove negligence, given that they took all those steps to protect your data.
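    That said, a cheap spot-check is possible. A minimal sketch for a replicated ZFS filesystem dataset, assuming the Proxmox setup above (the pool and dataset names are hypothetical, and zvol-backed VM disks would need a different approach): replication leaves matching snapshots on both nodes, so hash a snapshot's contents on each side and diff the two lists.

        # run on both nodes, then compare the resulting files
        SNAP=$(zfs list -H -t snapshot -d 1 -o name -s creation tank/data | tail -n 1 | cut -d@ -f2)
        cd /tank/data/.zfs/snapshot/"$SNAP"
        find . -type f -print0 | sort -z | xargs -0 sha256sum > /tmp/data.sha256

        # independently, a scrub verifies every on-disk checksum, snapshots included
        zpool scrub tank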

  • TimboJones Member
    edited March 2021

    @yoursunny said:
    Most people are demanding access to their burnt data.
    Meanwhile, some are asking for the data to be re-deleted:

    Jesus Christ, you went 0/2. They specifically DON'T want ANYONE to access it, and they want it DESTROYED, not re-deleted. The tweets were pretty clear.

    It's perfectly valid to want to prevent damaged hard drives from being sent to the recyclers and then having their data read (and the information misused). The fact that you're Chinese and actively dismissing this valid scenario makes you super suspicious.

    SMH

  • yoursunny Member, IPv6 Advocate

    @TimboJones said:

    @yoursunny said:
    Most people are demanding access to their burnt data.
    Meanwhile, some are asking for the data to be re-deleted:

    Jesus Christ, you went 0/2. They specifically DON'T want ANYONE to access it, and they want it DESTROYED, not re-deleted. The tweets were pretty clear.

    It's perfectly valid to want to prevent damaged hard drives from being sent to the recyclers and then having their data read (and the information misused). The fact that you're Chinese and actively dismissing this valid scenario makes you super suspicious.

    SMH

    "re-delete" and "destroy" mean the same:

    dd if=/dev/urandom of=/dev/sdb
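    # fills the disk with random bytes; nothing on /dev/sdb survives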
    

    I run this when I need to return a disk to the supplier.

  • @yoursunny said:

    @TimboJones said:

    @yoursunny said:
    Most people are demanding access to their burnt data.
    Meanwhile, some are asking for the data to be re-deleted:


    Jesus Christ, you went 0/2. They specifically DON'T want ANYONE to access it, and they want it DESTROYED, not re-deleted. The tweets were pretty clear.

    It's perfectly valid to want to prevent damaged hard drives from being sent to the recyclers and then having their data read (and the information misused). The fact that you're Chinese and actively dismissing this valid scenario makes you super suspicious.

    SMH

    "re-delete" and "destroy" mean the same:

    dd if=/dev/urandom of=/dev/sdb
    

    I run this when I need to return a disk to the supplier.

    They don't, don't be obtuse. There will be zero documents on hard drive shredding from any company that includes that dd command. It's also ridiculous to ask OVH to run dd on the pile. Right, because nobody fucking did.

    Re-delete and destroy are not interchangeable in any use case where someone really does mean "re-delete", such as editing a document.

  • lentro Member, Host Rep

    @TimboJones said: The fact that you're Chinese and actively dismissing this valid scenario makes you super suspicious.

    @yoursunny was simply pointing out an interesting contrast. No need to make the miscommunication a huge deal.

  • EncroChat was hosted at OVH ^^
