Comments
If one generator caught fire, that would be the '+1'; he should still have his 'N' to carry the load.
There’s been no mention of fire at OVH, so that’s completely irrelevant. Per Oles we’re talking about a simple ‘aaaarrrggg, both my public feeds are down’ scenario, which is exactly what your generators are for.
You just saved me opening a ticket :P
Even their status page is down. Would be nice if it was hosted at another ISP.
My servers in RBX2 & RBX4 are down too.
2 servers in RBX down here
You seem to be more outraged than the actual users of the service. Found another opportunity to take potshots at others :P ?
Lol a bunch of my stuff in RBX is down too, what the fuck.
RBX down here..
Seems like a problem on their fibre network. They're going to have a great day. And customers leaving. (But I guess you get what you pay for.)
@Clouvider any refugee offer? ;-)
I pay them well over 500EUR in total, and a couple of the servers are 100EUR+ models, so I think I'm at least a little entitled to complain. Then again, they're generally fine; it's not like I'm going to go and cry for a refund now.
At first it was only our Game servers that got affected in Strasbourg, now all of our servers in Strasbourg are offline.
From Oles' Twitter:
depends on the SLA you signed with them...
Public cloud in GRA and dedis in RBX are up (for now), though I can't log in and therefore don't know/remember which DC exactly. SYS in SBG all down.
Agreed on that, no problem. And shit happens; it's possible they just had bad luck. But there seem to be many angry customers. Time for them to learn how to set up redundancy / automated failover for important stuff, I guess.
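The kind of customer-side failover mentioned above can start as something very small. A minimal sketch, assuming hypothetical host names (`primary.example.com` at OVH, `standby.example.com` at a second provider) — not any particular user's setup:

```python
import socket

# Hypothetical endpoints: primary box at OVH, warm standby at another provider.
ENDPOINTS = [
    ("primary.example.com", 443),
    ("standby.example.com", 443),
]

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint(endpoints):
    """Return the first reachable (host, port) pair, or None if all are down."""
    for host, port in endpoints:
        if is_reachable(host, port):
            return (host, port)
    return None
```

Run from cron, the result of `pick_endpoint` could drive a low-TTL DNS update or a proxy reconfiguration; the health check itself is the easy part.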
I’m assuming they lost some connectivity that was operated from affected DCs.
Nothing as of yet; we’re always happy to do square deals, no matter if OVH is down or not.
Yeah, I don’t tolerate fuck ups. They wind me up. Plan for the worst, hope for the best.
Guess I'll need to check mine after to see just how entitled I really am, I never needed it before.
Your RBX dedis are up? Mine aren't as far as I can tell.
They're usually pretty good, surprising that they let almost all their stuff go down like this.
Octave tweets:
We have a general optical issue on our whole optical network in Europe: the chassis in every POP shut down all the 100G links simultaneously (!!). RBX SBG GRA LIM ERI are down. P19 WAW BHS are UP.
For what it's worth, the burden of this screw-up will fall on OVH engineers this weekend. I don’t envy them.
Remember, even with a 99.9% average, roughly 8.76 hours of downtime per year still counts as no issue ;-)
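The arithmetic behind that figure is simple back-of-the-envelope math (mine, not anything OVH publishes):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(uptime_pct):
    """Hours of downtime per year still allowed by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.9, 99.95, 99.99):
    print(f"{pct}% uptime -> {downtime_hours(pct):.2f} h/year of downtime allowed")
# 99.9%  -> 8.76 h/year
# 99.95% -> 4.38 h/year
# 99.99% -> 0.88 h/year
```

So a three-hour outage alone doesn't necessarily breach a 99.9% yearly SLA; it depends on how the SLA window is measured.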
maybe I memorised the location wrong and they are in Gravelines too...
I agree. OVH probably thought being prepared for power outages was enough, so they ran a lot of routing for other locations through SBG as well, with no way around it... Well, fuck.
Cisco equipment using SNMP? g
Yeah, I forgot what it was for the OVH main brand stuff; I feel like it was 99.95% or something. I'd check, but their site is down. Of course they can put as many 9s as they want; it doesn't actually mean they'll be up that much.
Seems likely, my stuff in GRA is fine.
Almost 3 hours offline... has anyone called OVH? Do they have any idea when they'll fix it?
RBX: all optical links 100G from RBX to TH2, GSW, LDN, BRU, FRA, AMS are down.
Now that’s what I love to hear ^ /s
Would be neat if there was even a vague ETA tbh
My server is in SBG-1.
Octave wrote that one generator was restarted, but SBG is still down, so WTF?
Apparently they have some centralized routing at RBX, and since it went down, everything went down.
I guess this incident will teach them a lesson about being too lenient on power feeds and failover.
https://www.heise.de/security/meldung/Tausende-Cisco-Switches-offen-im-Internet-Angriffe-laufen-bereits-3882810.html