New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
nevermind, old..
I've got a feeling they're having issues with NVMe / Ceph again, because some VPS Cloud servers are still in service mode.
Good chance you will be fine then. Worst case scenario it will require a little bit of a massage, but your data will be OK — unless you were running write-back with no BBU or something.
@Amitz Oh god dammit just when I finally got over the last batch of shitty trance stuck in my head.
No, not as far as I know. I'm still hoping that the control panel hasn't been lying to me about my servers being up this whole time, and it's a matter of connectivity.
And that’s why you use Juniper, mmm... I love Juniper and their commits
Sigh. I'm already thinking about reinstalling the VPS and restoring from backup. That's an absolutely nightmarish project.
Well, bad luck all the way.
OVH is trying to make the transition from very cheap to reliable, hiking the prices along the way, and then this happens.
This reminds me of a joke:
-Hey, I notice you are billing me for air conditioning but the room I rented in your motel does not have one!
-That is the thing, sir, we are gathering money to buy one!
Most of my servers are back up now in SBG1. No array issues (Thank the Gods!).
That's why you have a BBU, otherwise you will have a headache.
Will you relocate?
But I'm interested: who will relocate after this incident?
Naa, let's be honest, I don't use OVH because of quality, I use them because they are cheap and in doing so I accept the pitfalls, for me, it obviously had no impact as I have a failover system in place.
Anyone that did not have a failover/redundancy is either not doing anything important or their business does not justify it, in which case, a day or so of downtime a year is probably not a big deal anyway given the cost.
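The cost argument in that post can be sketched in a few lines. This is a back-of-the-envelope illustration only; every figure and the function name are made-up assumptions, not numbers from the thread:

```python
# Rough downtime-cost trade-off sketch. All numbers below are
# hypothetical assumptions used purely for illustration.

def redundancy_worth_it(revenue_per_hour, expected_downtime_hours_per_year,
                        failover_cost_per_year):
    """True if the expected yearly downtime loss exceeds the failover cost."""
    expected_loss = revenue_per_hour * expected_downtime_hours_per_year
    return expected_loss > failover_cost_per_year

# A hobby box earning ~$1/hour with one day of outage a year:
print(redundancy_worth_it(1.0, 24, 120))   # 24 < 120  -> False
# A business box earning $50/hour with the same outage:
print(redundancy_worth_it(50.0, 24, 120))  # 1200 > 120 -> True
```

Which is the post's point exactly: at LowEnd prices, a day of downtime a year often costs less than the redundancy would.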
Still 10+ servers down and in maintenance mode. - Phew. But it doesn't matter; now it's a good night's sleep and then damage assessment first thing in the morning. - Phew again.
And almost all of the discussions he started are about OVH.
Funny stuff. Huh.
Anyway, regarding this incident, I think we generally blame them too much, myself included, given their low prices and actually good service. I wish them the best recovering from this incident.
The most epic thing is that I lost access in the morning because of the servers in RBX. Whatever server I buy on SYS or KS, it always lands in RBX, even the Atoms. The panels, billing and the rest are mostly located in RBX and GRA.
Current status on the SBG nodes: four servers are down, i7-6700k with 16 VPS on each, so 16 * 4 ≈ 64 VPS. They have not come back up yet, but I'm not nervous; they may just need to be restored. Judging by the MRTG monitoring, the server is powered off. The remaining 3 servers in SBG came back up normally.
In SBG we mainly have i7-6700k servers, with an audience of Minecraft gaming clients, and they are not really in a hurry to do anything — they just wait.
GRA, where we have hundreds of 1245v2s, held up. Still, this whole epic was quite the ride, and there were a lot of tickets too. In the CIS, half of all sites are hosted on OVH.
I already relocated some personal services from OVH to a Raspberry Pi at home, and to a VPS on my larger home server.
It also turns out that having MX1/MX2 in the same DC isn't the best idea, but I was able to rapidly add a third one as soon as the downtime started.
Won't relocate exactly. But will definitely re-evaluate dependency on OVH for anything remotely critical.
EDIT: I'm pissed, I suffered almost 12hrs downtime across 12 servers (OVH/SYS). However I have to keep in mind that you get what you pay for, and it's my own fault for hosting anything of major importance on OVH without appropriate redundancies in place.
Where's all of the misplaced rage!? C'mon, this is LET. You were/are losing millions of dollars per second you're not able to mine dogecoins, aren't you!?!!
Fixed.
That's the spirit! What's your infrastructure look like at home?
Very true. Being down for a while is no big issue for me, more of an annoyance. I usually use 4 locations for anything of importance. Today one of them (Leaseweb) took a hit from a DDoS attack while at the same time both failover systems in SBG and RBX were down. The one remaining server (i3d) held up, luckily, but that's walking on thin ice. The fact that 2 of their DCs were down most of the day is a shitty experience. I know they're cheap, but it does qualify as a major fuckup and a reason to migrate.
I waited an hour and had already gathered the dumps into a pile to restore at online.net or Hetzner, when RBX came back to life mid-restore. My dependence on them will be a little less now; in any case I'll make sure I'm safe.
This incident showed how many customers they have, and who they are. Actually a lot of them.
In general, everything fails eventually. And backups solve everything.
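Since "backups solve everything" is the takeaway of half this thread, here is a tiny sketch of the idea: archive a directory under a timestamped name so each run keeps a separate copy. The function name and paths are hypothetical, and a real setup would push the archive off the box (the whole lesson of this outage):

```python
# Minimal timestamped-backup sketch using only the standard library.
import os
import shutil
import tempfile
from datetime import datetime, timezone

def backup_dir(src, dest_dir):
    """Create dest_dir/<srcname>-<UTC timestamp>.tar.gz and return its path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    base = os.path.join(dest_dir, f"{os.path.basename(src)}-{stamp}")
    # shutil.make_archive appends the .tar.gz suffix itself
    return shutil.make_archive(base, "gztar", root_dir=src)

# Demo on throwaway temp directories
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "data.txt"), "w") as f:
    f.write("important")
archive = backup_dir(src, dst)
print(archive.endswith(".tar.gz"))  # True
```

The point stands regardless of tooling: a backup that lives in the same DC as the server (SBG, in this case) is not a backup.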
Good point.
If you're going to cancel a fuckton of them, give LET a heads-up this time? Those 4Cs were an awesome deal!
Those 4Cs need a Gbps port, GBPS!
They should have been BHS i5-3570s, but it did not work out, so we decided to keep them.
In fact, so far there haven't been any particular cancellations.
From the same promo in February, 50 servers were taken at the cheap server price, and nobody has given them up.
Customers at SBG have been complaining on the channel for a week that it's lagging.
And I'm not alone; there are other people on the team who also resell servers from different DCs. This is the total number.
I was referring to Online.net because of what this person said:
I fail to see how it is related?