"Extremely serious virtual machine bug threatens cloud providers everywhere"
Haven't read it yet, so might be nothing, but seems interesting.
Edit: seems like it's not nothing!
Comments
My initial impression was that you're only vulnerable if your KVM or Xen guests have a floppy device. However, Red Hat's security advisory claims it's exploitable even without a floppy device configured for the guests.
Patches are ready for Xen and QEMU, but no news for KVM as far as I've read.
@MrGeneral @MCHPhil @cociu @sambling @incloudibly @hbjlee17 @MeanServers @MSPNick @SkylarM @ndelaespada @HostSailor
Still partially vulnerable even if the floppy device is not enabled; here's more info: http://venom.crowdstrike.com/
Do you need to apply patches manually, or can this be done with a simple yum upgrade?
KVM uses qemu for its device emulation, so updating qemu should resolve the issue for KVM.
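One caveat worth spelling out: updating the qemu package on the host isn't enough by itself, because each running guest keeps executing the old binary until it is powered off and started again (or live-migrated). A minimal sketch, assuming a Linux host, of how you might spot guests still on the pre-update binary (a replaced executable shows up as "(deleted)" in `/proc/<pid>/exe`):

```shell
# Sketch: after updating the qemu package, find processes still running
# the old (now deleted) qemu binary. Pure read-only checks; no root needed
# for your own processes.
stale=0
for exe in /proc/[0-9]*/exe; do
    target=$(readlink "$exe" 2>/dev/null) || continue   # skip pids we can't read
    case "$target" in
        *qemu*"(deleted)"*) echo "needs restart: ${exe%/exe}"; stale=1 ;;
    esac
done
[ "$stale" -eq 0 ] && echo "no stale qemu processes found"
```

Any process this flags is a guest that will stay vulnerable until it's fully stopped and started (a reboot from inside the guest is not enough, since that doesn't replace the qemu process).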
@HostVenom
Thanks mate, more detailed information about this: http://xenbits.xen.org/xsa/advisory-133.html
Although, we're fine in our case :-).
Thank you once again, @Traffic , great job!
I prefer OVZ anyway. An SSD OVZ is just incredible for speed. Since no kernel is virtualized, it saves RAM and CPU and speeds up reboots.
Thanks! No problem here; anyway, we updated qemu yesterday too.
Kernel Care ? Seriously ?
This isn't a kernel bug; it's in the userspace qemu-kvm binary, so something like KernelCare wouldn't have helped (though VZ would still not be vulnerable to this class of vulnerability, since it has no ongoing userspace component).
To clear up some of the confusion I see going around:
Let me know if I missed anything major.
The bug pertains to the floppy disk emulation in qemu, nothing OpenVZ has to worry about.
That's fun: basically you can't be sure anymore that any data you ever had on any KVM/HVM VPS since 2004 wasn't copied away or tampered with, even without any knowledge on the provider's side. It's naive to assume this bug only just became known; very likely the "black hat" circles have had it up their sleeves for a long, long time.
All the more reason to migrate all remotely important stuff to dedis; even the cheapest Atom dedi is orders of magnitude more secure and private, not to mention of course immune to this kind of shit.
A provider I use set up a new node with the patch applied and then live migrated VMs to the patched node, with no downtime at all.
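For reference, that kind of zero-downtime fix boils down to a libvirt live migration onto an already-patched host. A rough sketch only: the guest name `guest01` and destination `patched-node` are placeholders, and this is not necessarily the provider's actual procedure.

```shell
# Hypothetical live migration of one guest to a patched node via libvirt.
# Guest name and destination host are made-up placeholders.
if command -v virsh >/dev/null 2>&1; then
    virsh migrate --live --persistent guest01 \
        qemu+ssh://patched-node/system || true   # moves the running guest
    mode=virsh
else
    echo "virsh not available here; sequence shown for illustration only"
    mode=dry-run
fi
```

After migration the guest runs under the patched qemu on the destination, which is why no guest-visible downtime is needed.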
This. I've never understood why people think a VPS is equivalent to a small dedicated server in terms of privacy/security. It may be more cost-effective, cheaper, or easier to scale, but assuming that your VPS is "private" is as ridiculous as saying "my neighbor is more likely to see my insertprivatethinghere than my roommate is".
Vultr?
That was a really professional way to deal with it.
Although in my case, I'll be recommending that my clients shut down their instances and start them again.
Edit: Nodes have been patched a few hours ago. I've sent an email to my clients as well.
It's good that LET people spread the news so fast. I'm glad to be part of this community.
No, this provider doesn't fall within LE* pricing. I don't think Vultr have yet commented on whether they're affected or not.
LunaNode have. They're having a full reboot later tonight.
Linode have confirmed they're not affected.
We are performing a global reboot on Friday 15 May 2015 at 11:00 pm EDT per the announcement email. We are opting not to perform an immediate reboot to give customers the chance to reboot their virtual machines on their own schedule, because we have security mechanisms in place that limit the malicious actions that could be performed from an exploit of this vulnerability. VMs provisioned or stopped/started after 2:30 pm EDT will not need to be rebooted during the global reboot.
Volume-backed instances (i.e., VMs with their root partition in a volume) will simply be live-migrated, and thus will not need downtime. This live migration is in progress, and customers will be notified this evening of which VMs were live migrated.
These exams screwed my brain up thought 15th was today -.-
Fair point my friend. The honest truth is you can never guarantee 100% safety while a machine outside of your physical reach is accessible by the internet, but you can definitely reduce the potential points of failure.
I think that's very unlikely. This is the internet where word can spread like wild fire.
Their KVM node is affected (they've only got one):
"Hello,
Our administrators have detected an issue affecting the physical hardware your Linode resides on. We're working to resolve the issue as quickly as possible and will update this ticket as soon as we have more information.
Your patience and understanding is greatly appreciated."
My test KVM was also rebooted.
RamNode already sent out an email regarding this and that they're going to fix that soon. (Y)
EDIS does reboots now as well.
No need for a full node reboot though, as I understood it: stop the running KVM/HVM VMs, update qemu, then start those VMs back up?
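For clarity, the stop → patch → start sequence described above could be sketched roughly as follows. The guest names, yum as the package manager, and libvirt are all assumptions; adapt to your own stack.

```shell
# Assumed stop -> patch -> start cycle; guest names are placeholders.
guests="guest01 guest02"
if command -v virsh >/dev/null 2>&1 && command -v yum >/dev/null 2>&1; then
    for vm in $guests; do virsh shutdown "$vm" || true; done  # stop guests cleanly
    yum -y update qemu-kvm || true                            # patch the host package
    for vm in $guests; do virsh start "$vm" || true; done     # restart on patched qemu
    result=applied
else
    echo "virsh/yum not present; sequence shown for illustration only"
    result=dry-run
fi
```

The key point is the full stop/start: a reboot issued from inside the guest reuses the same (old) qemu process, so only a power-off and power-on, or a live migration, actually picks up the patched binary.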
This is the best example of why guest users on virtualization systems need a strong solution for encrypting their data/system (one that leaves an attacker no chance to dump RAM or pull similar tricks!).