Comments
Mine were down too, until I received the mail about the incident and restarted it.
Unfortunately the mail wasn't sent until four hours after the situation had been resolved, so that meant another four hours of downtime for my instance.
I'm a new customer, and this is a new server, so I hadn't set up any monitoring yet. Guess I need to get going on that...
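If anyone else is in the same boat, even a tiny self-hosted check beats nothing until you set up proper monitoring. Here's a minimal sketch (the host address and port are placeholders, not anything from this thread) that just tests whether a TCP port, e.g. SSH, is reachable:

```python
import socket

def is_reachable(host, port=22, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical instance address; swap in your own and run this from cron.
if __name__ == "__main__":
    if not is_reachable("203.0.113.10", 22):
        print("instance appears down")
```

Drop that in a cron job on some other box and have it mail or ping you on failure; it would have flagged this outage hours earlier.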
8 hours of downtime?
so much for high availability
you may want to come down from the clouds now and back to the "ordinary" VPSes.
Not really, it has been sent when the HA was turned off, not 4 hours after that.
Okay. Double-checking the mail headers, I can see now that I was confused by the many timezones and daylight-savings offsets involved (including a forward from the address you sent to), and that all hops stay within the same minute and second, more or less.
So you are probably right, and I'm wrong. Sorry about that.
@Maounique Are you still facing problems? My instance has been horribly slow today.
Bah it was just @Maounique trying to do a runner but he forgot how again.
Could have been something on my side too. The load went to 5+ for no apparent reason, ssh connections were getting dropped, etc. Rebooted the instance and everything has been smooth so far.
It's probably hundreds of VMs being booted up and set up after the downtime.
Not really: they either booted in the time before the announcement or were left down after we turned off HA. It must have been a local issue; at this time there are no known problems in the whole infrastructure apart from the disabled HA. Actually, load is lower than usual, which means either some VMs are still down (can't be more than 20-30) or the restarts cleared internal issues with some instances.
Reason for stopping: Migrating to the same host.
This is a bug we are investigating.
That is one of the reasons to not build large as hell clusters.
Very expensive to upgrade.
We keep our clusters sized at 10-20 hosts.