New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
If it takes them 12-24 hours to manually write a patch, they're just not doing things right. It needs to be as fully automated as possible or it's near useless. If the update isn't security-related, a reboot probably isn't needed anyway; if it is security-related, it needs to be out in under 60 minutes or it's just pointless.
In a perfect world, yes. :P
2 kernel panics while preparing for reboot. One is back up. The other has been running fsck for the last 2 hours. FML
Just been through that, only to find that once it did boot into the latest kernel there was zero network connectivity. Switch back to 090.3 and the network is fine again; .4 and .5 absolutely defy all logic, with all network connectivity dead.
Thinking of switching back to .3 and migrating to ploop to mitigate. This is why I love Xen!
VPSDime? Mine has been down for some time.
Yes, fsck just finished 5 minutes ago. Containers are booting at the moment. Check back in 10 minutes; if it's not online, please open a ticket.
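For anyone watching containers come back after a node reboot like this, one way to spot stragglers is to filter the container list for anything not in the "running" state. This is a sketch; the `ctid,status` column layout fed to the filter is an assumption based on OpenVZ's standard `vzlist` fields:

```shell
#!/bin/sh
# Print the CTIDs of containers that are not in the "running" state.
# On a live node you would feed this from: vzlist -a -H -o ctid,status

not_running() {
    # stdin: lines of "<ctid> <status>"
    awk '$2 != "running" { print $1 }'
}

# On a real OpenVZ node (errors suppressed here so the sketch is harmless elsewhere):
vzlist -a -H -o ctid,status 2>/dev/null | not_running
```

An empty result means every container the node knows about is back up.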
finally all sorted... well at least until the next critical update tomorrow.....
In regards to ksplice:
within an hour.
-Samson
http://www.cloudlinux.com/blog/clnews/kernelcare-cve20143519-critical-virtuozzoopenvzpcs-vulnerability-patch.php
Too late!
KernelCare
Security Update Issued
An update for KernelCare was just released to address the critical OpenVZ / VZ / PCS SimFS security flaw that was reported earlier.
Official Link:
http://cloudlinux.com/blog/clnews/504.php
Ongoing Discussion via WHT:
lol
It would appear both ksplice and KernelCare released their updates at the exact same time.
This seems really bad on the surface, but as far as I can tell only people with an existing container can exploit this. It's not a wide-open thing where everyone on the planet can exploit it, which is an important distinction. If you have a bunch of customers who are the type that are willing and able to hack their own provider, I would say get better customers.
So statistically, the odds of getting exploited by this, compared to the bugs anyone on the internet can exploit, are extremely small.
That's true, but it depends on the people. Maybe I get a VPS at some provider here, and I have the exploit and want to mess with other people; that would not be good for the other customers.
It's still important to update. You don't know who is next door (on the same node).
Yes, so I would be careful not to put anyone who signed up recently on a server that has not been updated.
Tough day for those involved no doubt. Seems these sorts of things are popping up with more regularity lately. Props to @tragic who posted this initially.
The OpenVZ guys seemed to fix this rather quickly. I think it's mostly just one guy, but he's really good. I believe he is one of the core Linux kernel developers.
We fixed this fast after seeing the issue posted on Twitter. Also, a few hosts have been hacked recently; does anyone know whether that had anything to do with this?
I got notifications from chicagovps, wable, ramnode, vpsdime, and buyvm regarding this issue.
But I have no news from bandwagonhost and xvmlabs, and my VPSes with them were not restarted in the past 24 hours. I am not sure if this means their service is not affected by this vulnerability. @dcc, any comments?
@webflier
We patched everything about 12 hours ago (we normally rely on ksplice or kernelcare depending on the node, but in this case we decided to do a full reboot). You can verify this with 'uname -a'
The only 2 hosts I have yet to see an update from are Crissic and XVMLabs. Both still show 2.6.32-042stab088.4 for me, and the container has not been rebooted. Ticketed in to Crissic (#447029), but Skyler replied that the nodes were fixed. Anyone seeing this same issue on Crissic's OVZ05 node?
@dcc Thanks, you are the only provider I am with that fixed this issue in time and without a reboot. Good job!
One Crissic node I'm on is running : https://wiki.openvz.org/Download/kernel/rhel6/042stab084.20
In short, I don't think they patched all their machines.
Upgrading the kernel is not 100% necessary to guard against this bug; doing this for all containers should do the trick too:
So how can I be sure it's been fixed? My kernel is "2.6.32-042stab088.4" and my uptime has been 6 days and counting. Does this fix require a reboot?
uname -a should show 042stab090.5 in the kernel version.
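That check can be scripted so you don't have to eyeball the version string on every container. This is a minimal sketch: it assumes OpenVZ's `042stabMAJOR.MINOR` release naming and takes 042stab090.5 (the patched version mentioned above) as the threshold:

```shell
#!/bin/sh
# Report whether the running kernel's OpenVZ stab revision is at least
# 042stab090.5, the first release said to carry the SimFS fix.

stab_ok() {
    # $1 = kernel release string, e.g. "2.6.32-042stab088.4"
    rel=$(echo "$1" | sed -n 's/.*042stab\([0-9]*\)\.\([0-9]*\).*/\1 \2/p')
    [ -n "$rel" ] || return 1          # not an OpenVZ stab kernel
    set -- $rel
    major=$1; minor=$2
    [ "$major" -gt 90 ] && return 0
    [ "$major" -eq 90 ] && [ "$minor" -ge 5 ] && return 0
    return 1
}

if stab_ok "$(uname -r)"; then
    echo "kernel looks patched"
else
    echo "kernel looks vulnerable (or not an OpenVZ stab kernel)"
fi
```

Run it inside the container; a host that patched live with ksplice/KernelCare may still report the old version string, so treat "vulnerable" as a prompt to ask your provider, not proof.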
We're both on older kernels - different variants.
@SkylarM has to slap his techs around.
I got a lot of Emergency Reboot emails this morning...
No mention of RHEL5 2.6.18 kernels.