New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Yeah, no. That's not going to make any difference here. FDE only protects data 'at rest' (i.e. powered off), and the host node can always access the RAM of a running VM if the operator is sufficiently competent; that's just how virtualization works. Even a dedicated server doesn't completely eliminate that risk, although it requires physical access - an attacker can just plug into any DMA-capable slot (PCI, FireWire, ...)
@joepie91, the huge issue is not whether "legit" people can access your plaintext data, but that one private bug like this, created and used by bad guys, is very dangerous.
@SpeedBus is CrownCloud patched? Thank you
@drserver
Has SolusVM put out an announcement? I know they do roll their own Xen RPMs for some clients.
Meh, two hosts at @MCHPhil have been down for the last two hours, as well as the billing area.
Yes. And what you suggested doesn't protect you against that.
Apologies for the change of plan, but we have decided to proceed with the suspend/save/restore/resume operation tonight instead of the global restart on Friday.
I think it is best to shut down all VMs; you cannot rely on the customers to do it. In my view they have to be shut down even if the advisory says clearly that this is not required.
As for safety: since you cannot trust your host, VM or dedi, the risk is the same. Any host has access to your data on any kind of medium. The only way to keep it private is full encryption, and never mounting it on a machine that isn't fully in your home/premises where you have complete control. As for external access due to vulnerabilities, that is the same on a VM or a dedi.
The only case where there is a difference is this: horizontal access through a compromised (or compromisable) virtualization layer.
Even that won't cut it. Somebody can still break into your house and take your server.
If that is encrypted too and you unmount everything anyway when not in use, not really.
But given enough time and resources, any encryption can be broken; torture can also make you give up the key.
Depends who is after you. In the end, if they want to plant something, nobody can stop them, so there is really no escape for the little guy, except moving to a country where the rule of law is respected.
If it's unmounted, it's not a very useful server.
What if this VENOM thing has been exploited silently since 2007? Why was it only found/released yesterday? Now all my privacy is gone.
You can't rule out that a black hat group, or even a state security or spying agency, found this some years ago and kept it to themselves. Or maybe multiple groups at once. Knowledge about widely exploitable but not yet patched stuff like this can be worth a lot of money, so why would they disclose it? It took until 2015 for some of the actual security experts to look closely at the same piece of code and figure it out, but who can guarantee that they were the first to do so?
Sure, but such vulnerabilities can exist in any software interacting with untrusted input; we've seen critical SSL and web server bugs in the past. So why is a dedicated server "orders of magnitude" more secure than a virtualized one? I don't think anyone's saying that dedicated isn't more secure, just that there are other factors too.
Well, for one, the web server runs as its own user and not as root, so a web server exploit won't expose your email if it's handled by the same server, for example.
Because a virtualized server is vulnerable to everything a dedicated server is, and then a whole new class of instant-root-over-the-whole-system possible exploits.
Also, it's a proverbial "RAID0" of vulnerabilities: with a VPS, the security of your system depends on both your system not being compromised AND the provider's host OS on the node not being compromised. Even by basic probability theory, that alone counts as "an order of magnitude" less reliable. Your security depends on two systems, one of which you don't control and don't even know much about (e.g.: is it updated with security patches? How often? Who has the credentials to manage it? Do they store those securely? etc.)
Many more than that, actually. If a breakout vulnerability exists, every VM on the system is a possible attack vector for your service.
Exactly - it only needs one VM on the same node as you to be badly configured, and the bad guys have a vector to use VENOM against the host and access all the VMs.
What's a typical number of VMs per host?
The moral of the story: F*ck floppy drives.
Yup, all done last night http://status.crowncloud.net/open_issue.php?id=52
If you run C5 and Xen 3.4.x we have some RPMs in testing:
https://documentation.solusvm.com/display/DOCS/Xen+3.4.x+RPM+Releases
For KVM you will need to yum update qemu-kvm and then stop and start all VMs.
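The KVM flow described above can be sketched as a small script, assuming the guests are managed through libvirt's virsh; the DRY_RUN guard and the "example-guest" domain name are illustrative, not from this thread:

```shell
# Sketch of the KVM remediation flow: update qemu-kvm, then power-cycle each
# guest so it runs under the patched binary. A reboot from inside the guest
# is NOT enough -- the qemu process itself must exit and be re-created.
# DRY_RUN=1 (the default) only records/prints the commands; set DRY_RUN=0
# to actually run them on a host.
DRY_RUN=${DRY_RUN:-1}
LOG=""

run() {
  LOG="$LOG $*;"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run yum -y update qemu-kvm

# Enumerate running guests (a placeholder name in the dry run).
if [ "$DRY_RUN" -eq 1 ]; then
  doms="example-guest"
else
  doms=$(virsh list --name)
fi

for dom in $doms; do
  run virsh shutdown "$dom"
  # ...wait until 'virsh domstate' reports "shut off", then:
  run virsh start "$dom"
done
```

The shutdown/start pair (rather than virsh reboot) is the important part: only a fresh qemu process picks up the updated binary.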
Depends on the node. Older hardware can't hold too many, since memory and CPU are insufficient; newer hardware with quad E5s or equivalent, each with 6+ cores and 512 GB of RAM, can hold hundreds of VMs without being oversold.
What is the minimum required here? I know you have to stop then boot the guests but can you do that one at a time after updating QEMU on the host or do you have to stop all of them before starting any of them up again?
If that is the case, wouldn't it just be easier to reboot the host? Or whatever sort of global restart is necessary to stop the guests.
You can do them individually. The issue is in the qemu binary itself, so once the updated qemu binary is installed, any new qemu processes will be safe from the issue.
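One way to double-check which qemu processes are still running the old binary: when the package update replaces the file on disk, the kernel marks the old, unlinked image with "(deleted)" in /proc/PID/exe. A minimal sketch, where the is_stale helper name is mine, not from the thread:

```shell
# A qemu process started before the update still maps the old, now-unlinked
# binary; /proc/<pid>/exe then ends in " (deleted)".
is_stale() {
  case "$1" in
    *"(deleted)"*) return 0 ;;  # old binary still in use -> needs stop/start
    *)             return 1 ;;  # running the on-disk (patched) binary
  esac
}

# List any qemu processes that still need a power-cycle.
pids=$(pgrep -f qemu 2>/dev/null || true)
for pid in $pids; do
  exe=$(readlink "/proc/$pid/exe" 2>/dev/null) || continue
  if is_stale "$exe"; then
    echo "PID $pid still runs the old qemu binary: $exe"
  fi
done
```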
Yes if it is KVM you can do 1 at a time after updating qemu-kvm.
With Xen you will need to reboot. In fact, with Xen I would suggest you do an orderly shutdown of all guests, then reboot into the standard kernel, update Xen, then reboot into the latest Xen version.
I don't do Xen. I guess I may as well just reboot my KVM hosts. Less work than going through and powering down and starting each guest.
Wouldn't pausing/suspending them also work, as presumably that would also kill the qemu-kvm process, forcing a new one to be started on resume?
I don't think so. The Red Hat bug report specifically says you need to power off or migrate the guest. If pause/suspend were enough, they would have said so.
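For what it's worth, the reason plain pause/resume can't help is that pausing only stops scheduling the existing process; the same in-memory qemu image survives. A toy demonstration, with a sleep process standing in for qemu-kvm and SIGSTOP/SIGCONT playing the role of pause/resume (an analogy, not the actual remediation procedure):

```shell
# Start a stand-in "guest" process.
sleep 300 &
pid_before=$!

# "Pause" and "resume" it: scheduling stops, but the process survives,
# so a vulnerable binary mapped into it would survive too.
kill -STOP "$pid_before"
kill -CONT "$pid_before"

alive_after_pause=0
kill -0 "$pid_before" 2>/dev/null && alive_after_pause=1

# Only killing and re-starting it yields a fresh process (new PID),
# which in the qemu case is what loads the patched binary.
kill "$pid_before" 2>/dev/null
wait "$pid_before" 2>/dev/null || true
sleep 300 &
pid_after=$!

echo "before=$pid_before after=$pid_after alive_after_pause=$alive_after_pause"
kill "$pid_after" 2>/dev/null   # clean up the second stand-in
```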
Can you give some input to this ticket?
130934