Dediserve: data loss confirmed in one Dublin region
hostingwizard_net
Member
No announcement, no ticket, and no email (yet), but this was confirmed:
Earlier today we suffered multiple disk failures on our storage array for HV10 on our DUB3 cloud platform. The data centre engineer was dispatched to replace these disks; unfortunately, due to human error, the wrong disks were pulled, causing a cascading failure on this RAID array with the loss of all data. Once the issue came to light, senior engineers from dediserve were dispatched to investigate; unfortunately, by that time the data was unrecoverable and lost. The storage array was then rebuilt and brought back online.
Comments
Holy crap! I wonder if that guy has a job anymore? I don't know how I feel about this...
Tough times ahead at least...
As their TOS says, "You agree to maintain an appropriate backup of files and data stored on the dediserve clouds", but this will cost them a handful of customers for sure!
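Worth taking that clause literally. As a minimal sketch of what "an appropriate backup" could look like, here's a daily date-stamped archive written to a NAS mount; the source path, mount point, and retention count are all assumptions for illustration, not anything dediserve provides:

```python
import tarfile
import time
from pathlib import Path

SOURCE = Path("/var/www")        # whatever you can't afford to lose (assumed path)
DEST = Path("/mnt/nas-backup")   # assumed NAS mount point
KEEP = 7                         # keep the last 7 daily archives

def daily_backup() -> Path:
    """Write a date-stamped tar.gz of SOURCE to DEST, then prune old copies."""
    archive = DEST / f"backup-{time.strftime('%Y-%m-%d')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    # ISO dates sort lexicographically, so the oldest archives sort first.
    for stale in sorted(DEST.glob("backup-*.tar.gz"))[:-KEEP]:
        stale.unlink()
    return archive

if __name__ == "__main__":
    print(f"Wrote {daily_backup()}")
```

Pushing the archive to a second provider instead of a local NAS mount would also cover the case where a whole region goes away.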
Enjoy your premium expensive cloud.
Obligatory.
Time to make use of their "1000% SLA".
The issue impacted one Array, paired with one hypervisor, in a single cloud location.
Total impact was to 12 customers, hence it didn't warrant a public release.
I'm curious where that email was even released?
Holy cow, my 1-year+ resource is on Dublin-C3, Ireland.
and all good, no issue...
thank god
High availability RAID 5? Again, uptime and SLA are overrated. Backups come in handy...
We run RAID 50 with a spare in all arrays, this was an exceedingly unfortunate case of DC hands and eyes / human error.
Fortunately the impact was very minor, and anyone impacted has been contacted and compensated.
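For anyone wondering how one wrong pull takes out a RAID 50: the stripe spans several RAID 5 parity groups, and each group survives exactly one missing disk. Pulling a healthy disk from the group that is already degraded is fatal to the whole stripe. A toy model (the group counts are made up for illustration, not dediserve's actual layout):

```python
from collections import Counter

def raid50_alive(groups: int, failed: list[tuple[int, int]]) -> bool:
    """failed holds (group, disk) pairs that are dead or pulled."""
    per_group = Counter(g for g, _ in failed)
    # Each RAID 5 parity group tolerates at most one missing disk;
    # the RAID 0 stripe across groups dies if any single group loses two.
    return all(per_group[g] <= 1 for g in range(groups))

print(raid50_alive(2, [(0, 3)]))           # True: degraded but alive
print(raid50_alive(2, [(0, 3), (0, 1)]))   # False: second disk gone from the same group
print(raid50_alive(2, [(0, 3), (1, 1)]))   # True: a wrong pull in the OTHER group survives
```

A hot spare doesn't change this: it only buys rebuild time, and can't protect against a second disk leaving the same group while the rebuild is running.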
Looks like nothing escapes the Sauron eye of LET.
I feel for you; shit happens, esp. human error. And I meant uptime / SLA are mostly overrated from the customer's POV ;-)
Nice to see you're honest about what happened, and luckily it didn't have a bigger impact.
No tape backup?
+1000
Cloud? What happened to failover?
Or do you, like most, provide just a KVM "cloud"?
Hang on a minute: if disks failed, how come they pulled the wrong ones? Surely the failed disks were flashing amber?
Oh wait, no, these perfectly healthy green ones look broken... *pulls them out!*
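That's usually exactly what goes wrong with remote hands. Before anything gets pulled, drive health can at least be double-checked from the OS side; a minimal sketch assuming smartmontools is installed and the placeholder device names below map to the right slots:

```python
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # placeholder device names

for dev in DEVICES:
    # `smartctl -H` prints the drive's overall health self-assessment.
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    verdict = "FAILED" if "FAILED" in result.stdout else "ok"
    print(f"{dev}: {verdict}")
```

None of which replaces the enclosure LEDs or the RAID controller's own slot report, but it's a second opinion before a disk leaves the chassis.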
Stinks of RAID 0.
I remember, about 6 years ago, getting a call because the new guy doing the fire drill tests pressed the wrong button. To be fair, they were both red... but on opposite sides of the DC. Sadly, the one he pressed was the EPO, which shut everything down and left him in darkness.
That was a fun day...week...month.
There are buttons that can go nuclear just from being pressed by accident? Awesome!!!
Normal in Dublin DC
The techs probably need to do 1000 tasks per hour to meet their hourly quota.
@jarland maybe fix the title of this thread?
Not really a 'Full data loss'
OMG that was awesome.
As a customer, I don't consider two days of downtime and the loss of the VM as something minor. And so far I have been neither contacted nor compensated. OTOH, this VM had run without problems for 14 months.
The title is fine. Quoting dediserve: "the loss of all data"
This is not the end of the world but this is not something that should be ignored either.
We provide free 10 GB snapshot space with all plans, and customers can enable a daily snapshot to NAS. Anyone with such a snapshot was rolled back to it. Luckily that covered 8 of the 12 impacted customers.
In terms of failover, that's for hardware failure, not this.
Shit happens. A few years ago, Wooservers did a similar thing. My VPS was completely wiped, and I had to reinstall from scratch and restore from backups.
If you were impacted and not contacted, please PM me your account ID to investigate!