OpenVZ Security Update - Page 3


Comments

  • KuJoeKuJoe Member, Host Rep
    edited June 2014

    @sonotse @Kris Reboot your VPS via the control panel (SolusVM?) and it will show the correct kernel in your VPS. When OpenVZ does a node reboot, the default setting is to suspend each VPS and take a snapshot of its running processes and open files. That snapshot does not change anything inside the VPS, so the VPS reports the same kernel as before (which also explains why your uptime is unchanged after the node was rebooted). Even though your VPS may be "running" an older kernel, OpenVZ still uses the node's kernel, so the uname -a output is merely cosmetic for the end user.
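    A quick way to see this for yourself (a sketch; the vzctl line runs on the node, and <CTID> stays a placeholder for your container ID):

```shell
# Inside the container: uname reports a kernel string, but it is always the
# node's kernel. After a suspend/restore it can be the node's *previous*
# kernel string, which is the cosmetic effect described above.
uname -r

# On the node (root required): a clean restart refreshes it.
# vzctl restart <CTID>
```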

    Thanked by 2: Magiobiwan, 0xdragon
  • My VPSes from VMBox and CrownCloud look like they were updated, but StrongSwan and OpenVPN no longer work after the update. Is there a fix?

  • KuJoeKuJoe Member, Host Rep

    @catding The nodes probably need the iptables modules loaded. We had some issues with some iptables modules not wanting to load on our nodes and some of them would only load after we stopped the vz service and then ran the modprobe command.
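    For reference, the iptables modules handed to containers come from the IPTABLES line in /etc/vz/vz.conf on the node; a typical entry looks like this (the module list here is only an example, adjust it for your setup):

```shell
# /etc/vz/vz.conf -- node-side config fragment (example module list)
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"
```

    Changing it only takes effect after the vz service restarts, which is consistent with the stop-then-modprobe dance above.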

  • catdingcatding Member
    edited June 2014

    Hi KuJoe

    thanks for your reply.
    I got this error from OpenVPN:
    ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)

    and this crash from strongSwan:
    08[LIB] dumping 2 stack frame addresses:
    08[LIB] @ 0xb774e000 (__kernel_sigreturn+0x0) [0xb774e500]
    08[LIB] /usr/local/lib/ipsec/plugins/libstrongswan-kernel-netlink.so @ 0xb7214000 [0xb72175f2]
    11[DMN] thread 11 received 11
    09[DMN] thread 9 received 11
    11[LIB] dumping 48 stack frame addresses:
    09[LIB] dumping 2 stack frame addresses:
    11[LIB] @ 0xb774e000 (__kernel_sigreturn+0x0) [0xb774e500]
    09[LIB] @ 0xb774e000 (__kernel_sigreturn+0x0) [0xb774e500]
    11[LIB] /usr/local/lib/ipsec/plugins/libstrongswan-kernel-netlink.so @ 0xb7214000 [0xb7218122]
    09[LIB] /lib/i386-linux-gnu/libpthread.so.0 @ 0xb7663000 [0xb767aff4]
    11[LIB] -> /root/src/tmp/strongswan-5.1.3/src/libhydra/plugins/kernel_netlink/kernel_netlink_ipsec.c:2263

    This looks unrelated to iptables, but I don't know what the problem is.

    @KuJoe said:
    catding The nodes probably need the iptables modules loaded. We had some issues with some iptables modules not wanting to load on our nodes and some of them would only load after we stopped the vz service and then ran the modprobe command.

  • I have enabled TUN/TAP and PPP in SolusVM, but I still have this problem.

  • rskrsk Member, Patron Provider

    catding said: I have enabled TUN/TAP and PPP in SolusVM, but I still have this problem.

    Contact your provider, they should perform some commands to enable it on the node level.
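    Those node-level commands usually look something like this (a sketch based on the common OpenVZ TUN/TAP recipe; CTID 101 is a placeholder and everything here needs root on the node):

```shell
CTID=101  # placeholder container ID

if command -v vzctl >/dev/null 2>&1; then
    # Load the tun module on the node and let the container use the device.
    modprobe tun
    vzctl set "$CTID" --devices c:10:200:rw --save
    vzctl set "$CTID" --capability net_admin:on --save
    # Create the device node inside the container.
    vzctl exec "$CTID" mkdir -p /dev/net
    vzctl exec "$CTID" mknod /dev/net/tun c 10 200
    vzctl exec "$CTID" chmod 600 /dev/net/tun
    status="configured"
else
    status="vzctl not found; run this on the OpenVZ node"
fi
echo "$status"
```

    If the device node already exists inside the container, the mknod step just complains and can be ignored.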

  • I have contacted them, but no response from them yet. :(

    @rsk said:
    Contact your provider, they should perform some commands to enable it on the node level.

  • SpeedBusSpeedBus Member, Host Rep

    @catding All nodes here have tun/tap, ppp etc. enabled as of last night; put in a ticket if it still isn't working and we'll fix it up :)

  • Hi Boss,
    thanks for your reply. It is working now, but my VM at VMBox still has the problem...

    @SpeedBus said:
    catding All nodes here have tun/tap, ppp etc. enabled as of last night; put in a ticket if it still isn't working and we'll fix it up :)

  • Here's a serious question:

    What's wrong with ploop? (why not use it, is there any reason that would make ploop unfavorable in comparison to the outdated simfs chroot?)

  • rds100rds100 Member

    @GoodHosting it's too new, so it hasn't been tested much yet.

  • @rds100 said:
    GoodHosting it's too new, so it hasn't been tested much yet.

    Ahh, fair enough. So not necessarily anything wrong yet as much as it's "not yet old enough".
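    For what it's worth, converting an existing simfs container to ploop is supported directly by vzctl (4.0 and later); a sketch, with CTID 101 as a placeholder (the container is stopped during conversion):

```shell
CTID=101  # placeholder container ID

if command -v vzctl >/dev/null 2>&1; then
    vzctl stop "$CTID"
    # Converts the simfs private area into a ploop image in place.
    vzctl convert "$CTID" --layout ploop
    vzctl start "$CTID"
    converted="yes"
else
    converted="no: vzctl not found"
fi
echo "$converted"
```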

  • smansman Member
    edited June 2014

    @KuJoe said:
    sonotse Kris Reboot your VPS via the control panel (SolusVM?) and it will show the correct kernel in your VPS. When OpenVZ does a node reboot, the default setting is to suspend each VPS and take a snapshot of its running processes and open files. That snapshot does not change anything inside the VPS, so the VPS reports the same kernel as before (which also explains why your uptime is unchanged after the node was rebooted). Even though your VPS may be "running" an older kernel, OpenVZ still uses the node's kernel, so the uname -a output is merely cosmetic for the end user.

    Huh? Containers don't run kernels. uname -r in a container will always show the current node kernel as far as I know. Snapshots are userspace. You can install/update kernels in a container but they don't do anything. Sometimes an app wants to see the kernel in the /boot directory and that's about all it's good for.

  • @sman said:
    Huh? Containers don't run kernels. uname -r in a container will always show the current node kernel as far as I know. Snapshots are userspace.

    The container will (sometimes) show the old kernel information if it was "paused" while the host node rebooted, though. A reboot of the container fixes this.

  • @sman said:
    Huh? Containers don't run kernels. uname -r in a container will always show the current node kernel as far as I know. Snapshots are userspace.

    He's right in this case. Not sure why, but I was able to confirm with a VPS: it showed the old kernel, I rebooted that VPS, and it's fine now. It's all tied into how OpenVZ does a SUSPEND, which is why things like your uptime stats may not always reset to 0 when a node is rebooted (if the reboot was done cleanly).

    Thanked by 1: Kris
  • smansman Member
    edited June 2014

    I've never seen that. Are these CentOS 6 containers?

  • This is because OpenVZ by default suspends the containers on reboot, and when they are resumed the old kernel still shows until the container is rebooted.

    This is OpenVZ 101, and frankly any OVZ "provider" who doesn't know that is lame.

  • smansman Member
    edited June 2014

    @kri_s_2000 said:
    This is because OpenVZ by default suspends the containers on reboot, and when they are resumed the old kernel still shows until the container is rebooted.

    This is OpenVZ 101, and frankly any OVZ "provider" who doesn't know that is lame.

    Better you project your feelings of inadequacy on me than your cat I guess. I am apparently so special that you felt the need to create a new handle just so you can snipe from the bushes.

  • PatrickPatrick Member
    edited June 2014

    As above, it suspends and creates a snapshot of the VM, which is why it shows the old kernel; if you reboot the VM it will show the current one from the node.

    If you use KernelCare/KSplice this won't show up, as the patching happens inline on the HN.
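    The difference is easy to check: uname -r always reports the booted kernel, while KernelCare tracks the effective patch level separately (kcarectl is KernelCare's CLI and only exists where it is installed):

```shell
# The booted kernel, as both the node and its containers report it:
uname -r

# The effective (live-patched) kernel level, if KernelCare is present:
if command -v kcarectl >/dev/null 2>&1; then
    kcarectl --uname
else
    echo "kcarectl not installed"
fi
```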

  • KrisKris Member

    sman said: Huh? Containers don't run kernels. uname -r in a container will always show the current node kernel as far as I know.

    I thought the same, and luckily got out of any OVZ machines just as I was reluctantly trying to get an ex-company I managed to use CentOS 6 instead of CentOS 5 / swapless as the norm.

    That never did quite happen, and I've been Xen-happy since. I had much to learn in terms of true VMs vs. containers, but the stability is amazing on the Xen platform (when not slabbed / nested, that is).

    Thanks for clarifying @INIZ and @SkylarM. Explains the extraordinary uptime regardless of reboots as well.

  • @rsk said:
    Contact your provider, they should perform some commands to enable it on the node level.

    Looks like the TUN/TAP module didn't load on the node again. A simple "modprobe tun" would fix the issue (in theory).

  • KuJoeKuJoe Member, Host Rep

    sman said: Huh? Containers don't run kernels.

    Hence why I put running in quotes. I also said:

    KuJoe said: Even though your VPS may be "running" an older kernel, OpenVZ still uses the node's kernel so the uname -a output is merely cosmetic for the end user.

    Sorry about any confusion my post caused you.

  • MaouniqueMaounique Host Rep, Veteran

    OVZ is tricky to deal with; we prefer the standard approach rather than suspend in most cases, since we had issues before. For example, a weird issue where the 127.0.0.1 address was unavailable after suspending and resuming. That was definitely hard to diagnose, especially since the complaints came days later, once people had finished diagnosing why their lo interfaces were no longer working. It took an accidental crash of our beloved OVZ nodes to fix that, and, since it was not just one incident, I think it is better to update everything from time to time and do a full orderly reboot. Sorry for people's uptime stats; please join the Xen crowd, which counts uptime in years.

  • shovenoseshovenose Member, Host Rep

    I would like to thank @KuJoe for providing the most detailed writeup of the incident in the mass email. All VPS providers I'm with sent notifications about it promptly, but only one actually gave a detailed, complete explanation of the problem, the solution, etc.

    I appreciated not needing to dig through a billion forum threads while at work; I could just read one email that summed it all up.

  • rskrsk Member, Patron Provider

    Magiobiwan said: Looks like the TUN/TAP module didn't load on the node again. A simple "modprobe tun" would fix the issue (in theory).

    As well as the other modprobe ppp_* commands :P
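    Those look roughly like this (the module names are the commonly needed ones, not an authoritative list; loading them needs root on the node):

```shell
# Try to load each PPP-related module and report the result.
for m in ppp_generic ppp_async ppp_deflate ppp_mppe; do
    if modprobe "$m" 2>/dev/null; then
        echo "loaded $m"
    else
        echo "could not load $m (not root, or module unavailable)"
    fi
done
```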

  • Is this kernel secure? 2.6.32-042stab085.17

  • KuJoeKuJoe Member, Host Rep
    edited June 2014

    @stallion said:
    Is this kernel secure? 2.6.32-042stab085.17

    Nope. The only OpenVZ.org kernel that is secure is 042stab090.5

  • shovenoseshovenose Member, Host Rep

    @stallion said:
    Is this kernel secure? 2.6.32-042stab085.17

    Try rebooting your VPS. Check again.
