"Extremely serious virtual machine bug threatens cloud providers everywhere"

perennate Member, Host Rep
edited May 2015 in General

http://arstechnica.com/security/2015/05/extremely-serious-virtual-machine-bug-threatens-cloud-providers-everywhere/

Haven't read it yet, so might be nothing, but seems interesting.

Edit: seems like it's not nothing!


Comments

  • NickM Member
    edited May 2015

    My initial impression is that you're only vulnerable if your KVM or Xen guests have a floppy device.

    Red Hat's security advisory, however, says it's exploitable even if no floppy device is configured for the guest.
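
    Not from the advisory, just a rough libvirt-python sketch (assuming the usual qemu:///system URI) that lists which guests actually have a floppy device defined in their domain XML, if you want to gauge your exposure either way:

    ```python
    # Hedged sketch: list libvirt guests and whether their XML defines a floppy.
    # Per Red Hat, the FDC code can be reachable even with no floppy defined.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")   # local hypervisor (assumed URI)
    for dom in conn.listAllDomains():
        root = ET.fromstring(dom.XMLDesc())
        # floppy drives appear as <disk device="floppy"> under <devices>
        floppies = root.findall("./devices/disk[@device='floppy']")
        if floppies:
            print(f"{dom.name()}: floppy device configured")
        else:
            print(f"{dom.name()}: no floppy device (FDC code may still be reachable)")
    conn.close()
    ```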

  • erkin Member

    Patches are ready for Xen and QEMU but no news for KVM, as far as I've read.

  • vfuse Member, Host Rep

    Still partially vulnerable even if the floppy device is not enabled. Here's more info: http://venom.crowdstrike.com/

  • Do you need to apply patches manually, or can this be done with a simple yum upgrade?

  • NickM Member

    erkin said: Patches are ready for Xen and QEMU but no news for KVM, as far as I've read.

    KVM uses qemu for its emulation, so updating qemu should resolve the issue for KVM.
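
    To the "can a simple yum upgrade do it" question above: a minimal sketch, assuming an RPM-based host and illustrative package names, that just prints the installed qemu package versions so you can compare them against the fixed versions in your distro's advisory. The running guests still need a full stop/start afterwards.

    ```python
    # Hedged sketch: report installed qemu package versions (RPM-based distro
    # assumed; package names differ on Debian/Ubuntu, e.g. qemu-system-x86).
    import subprocess

    PACKAGES = ["qemu-kvm", "qemu-img"]  # illustrative; adjust for your distro

    for pkg in PACKAGES:
        try:
            out = subprocess.check_output(["rpm", "-q", pkg], text=True)
            print(out.strip())
        except subprocess.CalledProcessError:
            print(f"{pkg}: not installed")
    ```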

    Thanked by 1: erkin
  • MikePT Moderator, Patron Provider, Veteran

    @Traffic said:
    MrGeneral MCHPhil cociu sambling incloudibly hbjlee17 MeanServers MSPNick SkylarM ndelaespada HostSailor

    Thanks mate, more detailed information about this: http://xenbits.xen.org/xsa/advisory-133.html

    We're fine in our case, though :-).

    Thank you once again, @Traffic , great job!

    Thanked by 1: Traffic
  • @Jack said:
    Haha! So much for XEN/KVM beating OVZ in things, at least with OVZ kernel care would prevent downtime with something like this.. Don't think it'd work with XEN kernels or has a patch been released?

    I prefer OVZ anyway. An SSD OVZ is just incredible for speed. Since no kernel is virtualized, it saves RAM and CPU and makes reboots faster.

  • cociu Member

    Thanks, no problem here. Anyway, we updated qemu yesterday too.

  • BharatB Member, Patron Provider

    @Jack said:
    Haha! So much for XEN/KVM beating OVZ in things, at least with OVZ kernel care would prevent downtime with something like this.. Don't think it'd work with XEN kernels or has a patch been released?

    KernelCare? Seriously?

  • @Jack said:
    Haha! So much for XEN/KVM beating OVZ in things, at least with OVZ kernel care would prevent downtime with something like this.. Don't think it'd work with XEN kernels or has a patch been released?

    This isn't a kernel bug; it's in the userspace qemu-kvm binary, so something like KernelCare wouldn't have helped (though VZ would still not be vulnerable to this class of vulnerability, since it has no ongoing userspace component).

    To clear up some of the confusion I see going around:

    • The patches for qemu were released at 8am US/Eastern this morning. If you patched prior to this, you are not currently patched.
    • Patching alone is not enough to secure your infrastructure. You must shut down all guests and start them again. Simply restarting guests is also insufficient. (A rough way to spot guests still running the old binary is sketched at the end of this post.)
    • Patches for the major distros are out already, but as of this writing not all mirrors have finished replicating them. Make sure you have actually patched to the latest release version for your distro.
    • The guest does not need to have a floppy drive configured in KVM; the vulnerable code is reachable even without a floppy device in /dev/.
    • Xen paravirtualization is not vulnerable, but Xen HVM is. KVM and VirtualBox are vulnerable, as is plain qemu.

    Let me know if I missed anything major.
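
    Not part of the official guidance, but one rough way (Linux host assumed) to spot guests still running the pre-update binary, since the kernel marks a deleted-but-still-mapped executable in /proc:

    ```python
    # Hedged sketch: after the qemu package update, any guest process still
    # using the old binary shows "/path/to/qemu... (deleted)" as its exe link.
    # Those guests still need a full stop/start. Run as root.
    import os

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
        except OSError:
            continue  # process exited, kernel thread, or insufficient privileges
        if "qemu" in exe and exe.endswith("(deleted)"):
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
            print(f"PID {pid} still running old qemu: {cmdline[:80]}")
    ```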

    Thanked by 2: MikePT, Nick_A
  • KuJoe Member, Host Rep

    @Jack said:
    RedKrieg didn't read what the bug was, ploop is similar to userspace on vz isn't it?

    The bug pertains to the floppy disk emulation in qemu; it's nothing OpenVZ has to worry about.

  • rm_ IPv6 Advocate, Veteran
    edited May 2015

    The bug has existed since 2004.

    That's fun. Basically, you can't be sure anymore that any data you ever had on any KVM/HVM VPS since 2004 wasn't copied away or tampered with, even without any knowledge on the provider's side. It's naive to assume this bug only just became known to humanity; very likely the "black hat" circles have had it up their sleeves for a long, long time.

    All the more reasons to migrate all remotely important stuff to dedis, even the cheapest Atom dedi is orders of magnitude more secure and private, not to mention of course immune to this kind of shit.

  • J1021 Member

    RedKrieg said: Patching alone is not enough to secure your infrastructure. You must shut down all guests and start them again. Simply restarting guests is also insufficient.

    A provider I use set up a new node with the patch applied and then live-migrated VMs to the patched node, no downtime at all.
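
    For reference, and not necessarily how that provider did it: a hedged libvirt-python sketch of the same idea. The hostnames and guest name are placeholders, and a real setup needs shared (or separately migrated) storage.

    ```python
    # Sketch: live-migrate a guest from an unpatched node to a freshly patched
    # one, so it ends up running under a patched qemu process with no downtime.
    import libvirt

    src = libvirt.open("qemu+ssh://old-node.example.com/system")  # placeholder hosts
    dst = libvirt.open("qemu+ssh://new-node.example.com/system")

    dom = src.lookupByName("guest-name")  # placeholder guest
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE
    dom.migrate(dst, flags, None, None, 0)  # keep the guest running throughout

    src.close()
    dst.close()
    ```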

    Thanked by 1: MikePT
  • Jonchun Member

    @rm_ said:
    All the more reasons to migrate all remotely important stuff to dedis, even the cheapest Atom dedi is orders of magnitude more secure and private, not to mention of course immune to this kind of shit.

    This. I've never understood why people think a VPS is equivalent to a small dedicated server in terms of privacy/security. It may be more cost-effective, cheaper, or easier to scale, but assuming that your VPS is "private" is as ridiculous as saying "my neighbor is more likely to see my insertprivatethinghere than my roommate is".

  • @kcaj said:
    A provider I use set up a new node with the patch applied and then live-migrated VMs to the patched node, no downtime at all.

    Vultr?

  • MikePT Moderator, Patron Provider, Veteran
    edited May 2015

    @kcaj said:
    A provider I use set up a new node with the patch applied and then live-migrated VMs to the patched node, no downtime at all.

    That was a really professional way to deal with it.

    In my case, though, I'll be recommending that my clients shut down their instances and start them again.

    Edit: Nodes were patched a few hours ago. I've sent an email to my clients as well.

    It's great that LET people spread the news so fast. I'm glad to be part of this community.

  • J1021 Member

    No, this provider doesn't fall within LE* pricing. I don't think Vultr have yet commented on whether they're affected or not.

  • @kcaj said:
    No, this provider doesn't fall within LE* pricing. I don't think Vultr have yet commented on whether they're affected or not.

    LunaNode have. They're having a full reboot later tonight.

  • J1021 Member

    Linode have confirmed they're not affected.

  • perennate Member, Host Rep
    edited May 2015

    TinyTunnel_Tom said: LunaNode have. They're having a full reboot later tonight.

    We are performing a global reboot on Friday 15 May 2015 at 11:00 pm EDT per the announcement email. We are opting not to perform an immediate reboot to give customers the chance to reboot their virtual machines on their own schedule, because we have security mechanisms in place that limit the malicious actions that could be performed from an exploit of this vulnerability. VMs provisioned or stopped/started after 2:30 pm EDT will not need to be rebooted during the global reboot.

    Volume-backed instances (i.e., VMs with their root partition in a volume) will simply be live-migrated, and thus will not need downtime. This live migration is in progress, and customers will be notified of which VMs were live migrated today evening.

  • @perennate said:
    Volume-backed instances (i.e., VMs with their root partition in a volume) will simply be live-migrated, and thus will not need downtime. This live migration is in progress, and customers will be notified of which VMs were live migrated today evening.

    These exams screwed my brain up; I thought the 15th was today -.-

  • jar Patron Provider, Top Host, Veteran

    rm_ said: All the more reasons to migrate all remotely important stuff to dedis, even the cheapest Atom dedi is orders of magnitude more secure and private, not to mention of course immune to this kind of shit.

    Fair point, my friend. The honest truth is that you can never guarantee 100% safety while a machine outside of your physical reach is accessible from the internet, but you can definitely reduce the potential points of failure.

    Thanked by 2: Pwner, yomero
  • J1021 Member

    rm_ said: very likely the "black hat" circles had it in their sleeves since a long long time ago.

    I think that's very unlikely. This is the internet, where word can spread like wildfire.

    Thanked by 1: Maounique
  • MikePT Moderator, Patron Provider, Veteran

    @kcaj said:
    Linode have confirmed they're not affected.

    Their KVM node is affected, though (they only have one):

    "Hello,

    Our administrators have detected an issue affecting the physical hardware your Linode resides on. We're working to resolve the issue as quickly as possible and will update this ticket as soon as we have more information.

    Your patience and understanding is greatly appreciated."

    My test KVM was also rebooted.

  • RamNode have already sent out an email regarding this, saying they're going to fix it soon. (Y)

  • William Member

    EDIS does reboots now as well.

  • AnthonySmith Member, Patron Provider

    No need for a full node reboot though, as I understood it: stop the running KVM/HVM VMs, update qemu, then start the stopped VMs back up?
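
    Something like this hedged libvirt-python sketch of that sequence, with the qemu update itself done out-of-band via yum/apt (guest handling and timings are illustrative):

    ```python
    # Sketch: cleanly stop the running guests, update qemu outside this script,
    # then start them again so each gets a fresh, patched qemu process.
    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    running = [d for d in conn.listAllDomains() if d.isActive()]

    for dom in running:
        dom.shutdown()          # ACPI shutdown request; stubborn guests may need destroy()

    # ... wait for guests to power off, then update qemu (yum/apt) here ...

    for dom in running:
        while dom.isActive():   # don't start until the old process is gone
            time.sleep(5)
        dom.create()            # boots the guest under the new binary
    print(f"restarted {len(running)} guest(s)")
    ```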

    Thanked by 1: yomero
  • getvps Member

    This is the best example of why guests on virtualization systems need a strong solution for encrypting their data/system (one that doesn't leave an attacker the chance to dump RAM or do other things like that!).
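
    Purely illustrative, using the third-party cryptography package: encrypting data at rest inside the guest helps for disks and backups, but the key still lives in guest RAM, which a compromised hypervisor could dump, so it's only a partial answer to a bug like this.

    ```python
    # Sketch of at-rest encryption inside the guest (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this somewhere outside the VPS
    f = Fernet(key)

    token = f.encrypt(b"customer database dump")   # store/back up the ciphertext
    print(f.decrypt(token))                        # b'customer database dump'
    ```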
