Comments
Just found this on Twitter, among the responses to the OVHcloud CEO's tweet about the fire.
They indeed say they do not have any data recovery plan:
And this is (supposedly) "a leading platform for collecting national and international news", at least according to google translate.
Goes to show you can't "node" 14nm++++++++++++++++++++++++++++++++++++++++++++++++ forever.
SBG2 and some SBG1 servers are on the clouds now
Pray for rain?
Done.
It doesn't look like... it is.
Actually, did the Wishhosting Black Friday Ryzen get burnt down??? Oh shit, I'm affected
@Hetzner_OL In light of this can you maybe shed some light what measures you have in place for such a thing?
We recently introduced France DDoS-protected services hosted in SBG2 for gaming users, and today it's destroyed.
Should never have used wooden racks, I say!
What a firesale.
Here is a reply
Official statement,
https://www.ovh.ie/news/press/cpl1786.fire-our-strasbourg-site
And make sure they are regularly tested. The worst time to find your backups have not been updating properly is when you try to restore something from them.
Not a host myself, and this isn't practical for them (at least not at LET prices!), but for my personal stuff (mail server, couple of small web server VMs) I have a small VM for each that daily pulls in the latest off-site backup and restores it so I can do a quick check (if the mail server copy is up and has mail from yesterday, my latest backup of that is likely OK, etc.).

For backups of simple storage: regular checksums on everything, and compare those between "live", backup & off-site backup. Also check older snapshots by checksum & compare the results from the two backups.

Have your checks email you on completion even if all is OK (not just if there is an alert), so the absence of an email is itself a problem. If you are only alerted when an issue occurs, silence doesn't always mean no issue: it could mean a nasty problem has broken the checks too.
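The checksum-compare step above can be sketched in a few lines of Python. This is just an illustration of the idea, not anyone's actual tooling: hash every file under two directory trees (say, live data vs. a restored backup) and report anything missing or different. The paths in the usage are hypothetical.

```python
# Sketch of the "regular checksums & compare" idea: SHA-256 every file
# under two trees and report missing/extra/mismatched files.
import hashlib
from pathlib import Path

def tree_checksums(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    root_path = Path(root)
    sums = {}
    for f in sorted(root_path.rglob("*")):
        if f.is_file():
            rel = str(f.relative_to(root_path))
            sums[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
    return sums

def compare_trees(live: str, backup: str) -> list[str]:
    """Return human-readable problems; an empty list means the trees match."""
    a, b = tree_checksums(live), tree_checksums(backup)
    problems = []
    for rel in sorted(a.keys() | b.keys()):
        if rel not in b:
            problems.append(f"missing from backup: {rel}")
        elif rel not in a:
            problems.append(f"extra in backup: {rel}")
        elif a[rel] != b[rel]:
            problems.append(f"checksum mismatch: {rel}")
    return problems

# Hypothetical usage: run from cron against the mounted restore,
# and email the result either way (empty list = all OK).
# problems = compare_trees("/srv/mail", "/mnt/restored-backup/mail")
```

In practice you'd wrap this in a cron job that always sends the report, so a missing email means the check itself is broken.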
time to actually test those DR plans and Backups
http://travaux.ovh.net/?do=details&id=49017
http://travaux.ovh.net/?do=details&id=49016
maybe related?
No matter how much you trust a provider you shouldn't have all your eggs (live data and backups) in one basket. Remove as many single points of failure as possible, if you only use one company then they are one of those potential points of failure.
Nothing wrong with having your primary backups with the same provider as your live systems, chances are you can restore faster this way when needed, as long as you have another backup elsewhere too.
I doubt they already know the real cause, and even if they did, I doubt they'd rush into such maintenance decisions right now, especially at totally different locations...
Hey, at least no one got hurt. On the bright side, the replies are basically content farms and will probably feed LET for a day or two.
Tell them that even the big three providers' servers are not redundant in the case of a datacenter-ending event
Oh fuck! Just realized that too. Luckily, it was just a backup RDP box for me with nothing that I don't already have backed-up.
At least they have AutoBoot™
Bad news. I'm just wondering why the internal firefighting system wasn't used. Using water against a fire in a data center is not a good idea, actually; all the equipment will have been destroyed by water now. Gas or foam should be used instead...
No fucking shit, who would've known?
On a more serious note, the most sane explanation would be that it simply failed; OVH isn't some two-bit summer host that is blind to the concept of fire suppression.
Exact designs are going to vary by DC, there are many ways to skin this cat and they all have pros & cons (usually the big con of the best options otherwise is cost, the next one being danger to humans, and environmental concerns are a thing too).
They almost certainly have automatic gas/foam-based systems in the server rooms and other machine rooms that can cope with (contain & extinguish) a small fire starting in one of those areas, but cost/safety/other implications likely prohibit using that for the whole building. And the fire may not have started in a server room; it'll be a while before investigators are in a position to release solid information about where it did start and how it spread.
Once you get to the point of calling the external firefighting service, all bets are off: they will bring their own kit that they are trained to use, and their job is to protect people and surrounding buildings. Avoiding wetting your servers is not high on their priority list, if it is even there at all. Risk of further problems from using water near electrical systems? The solution there is to pull the plug on the electricity supply (mains and backup generators).
any refugee deals?
A couple more night photos of the OVHcloud fire in Strasbourg: