OpenVZ is pointless

Comments

  • Francisco Top Host, Host Rep, Veteran

    angstrom said: Do you think that LXC has the potential to replace OpenVZ down the road?

    No, because that isn't the goal of it.

    There are a lot of things they would need to integrate that aren't really needed in LXC.

    For instance, most container products out there (Docker, etc.) use NAT via the host node anyway, so a VENET offering isn't needed at all. IP locks can be built with things like ebtables and such, but since it's really unlikely a VENET-like interface will ever exist, you'll have to do all of that by hand (a rough sketch follows this comment).

    For storage, people will just say to use BTRFS or LVM if you need to enforce limits, but most people using LXC don't really care about the disk usage of their VMs, since it'll be about as much as if they were running the app without the containerization.

    Francisco

    Thanked by 2: angstrom, eva2000
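
    A rough sketch of the hand-rolled ebtables IP lock described above; the bridge port name (veth101) and address (203.0.113.10) are made-up examples, not anything from this thread:

        # Drop IPv4 and ARP traffic leaving the container's veth port with a
        # source address other than the one assigned to it (names/IPs illustrative)
        ebtables -A FORWARD -i veth101 -p IPv4 --ip-src ! 203.0.113.10 -j DROP
        ebtables -A FORWARD -i veth101 -p ARP --arp-ip-src ! 203.0.113.10 -j DROP
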
  • FredQc said: it's meant for idling anyway ;)

    That's the tough question: does OVZ idle better than KVM? Idling your own kernel is more fun but uses more resources, so it's not exactly eco-friendly. OVZ might make more sense for idling.

  • Crandolph said: What's the point in purchasing anything other than dedicated (KVM instead of OpenVZ)?

    It comes down to price point for end users, at least in my own experience. You have to know the limitations of OpenVZ and decide whether they're acceptable for your intended usage. I don't mind buying up $25/yr 2GB HostUS OpenVZ VPS instances for different types of development and testing of my Centmin Mod stack. But if your usage requires TCP-level tuning and/or other kernels and/or kernel module support, e.g. IPSET for a better-performing firewall when dealing with large numbers of IP addresses, then OpenVZ isn't ideal. For instance, I managed to scale one of my WordPress sites to a 10,000-concurrent-user blitz.io load test on a 2GB DigitalOcean KVM VPS, and that requires tuning at the TCP level, which you won't be able to do on a typical OpenVZ VPS (see the sketch after this comment).

    OpenVZ or KVM? The answer: use the right tool for the intended job :)

    Thanked by 1: Crandolph
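
    To illustrate the TCP-level tuning point above, a few sysctl writes of the kind that need a dedicated kernel; the values are placeholders, and inside a typical OpenVZ container these writes usually fail or are ignored because the kernel is shared with the host:

        # Illustrative values only, not a recommended tuning profile
        sysctl -w net.core.somaxconn=65535
        sysctl -w net.ipv4.tcp_max_syn_backlog=65536
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
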
  • @Francisco doesn't Proxmox deal with some of the issues where LXC lags behind OpenVZ in their own LXC implementation? E.g. networking models, some security, and disk quotas?

  • Francisco Top Host, Host Rep, Veteran

    @jvnadr said:
    @Francisco doesn't Proxmox deal with some of the issues where LXC lags behind OpenVZ in their own LXC implementation? E.g. networking models, some security, and disk quotas?

    Even proxmox covers their own asses in regards to the security comments:

    The LXC team thinks unprivileged containers are safe by design.

    Proxmox still suffers from all of the issues I mentioned.

    I'm pretty sure the reason Proxmox stopped supporting OpenVZ is that they didn't want to keep shipping two kernels (one OpenVZ, one close to stable mainline) for their product to work. No one wants to run KVM on a 2.6.32-based kernel; it's going to run like ass.

    Francisco

    Thanked by 2: jvnadr, eva2000
  • @Francisco said:

    • Storage. Disk quotas are not easy to implement. Since there is no simfs or ploop, hosts have to use LVM (which means they can't oversell their space at all), BTRFS (which means they don't give a single shit about customer data), or ZFS (which works mostly fine but has problems documented in the following point). Sub-quotas (so the user can set quotas for their own users) only work on LVM and require a lot of patching around to make them supposedly play nice. Now that you're on LVM, shrinking a user's volume is going to be iffy, with a much greater chance of data loss or other issues. You also can't bump a user's inodes without also bumping their total disk allocation. If you formatted the drive to allow many inodes then that's simple enough, but still.

    XFS has directory quotas, or you could mount qcow2 images using qemu-nbd, so I wouldn't say LVM is the only option (rough sketch below).
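
    A minimal sketch of the XFS directory (project) quota idea, assuming the container's root directory lives under an XFS mount at /containers; the paths, project name, and limits are made up for illustration:

        # Filesystem must be mounted with the prjquota option.
        # Register a project for the container's directory, then cap blocks and inodes.
        mkdir -p /containers/ct101
        echo "101:/containers/ct101" >> /etc/projects
        echo "ct101:101" >> /etc/projid
        xfs_quota -x -c 'project -s ct101' /containers
        xfs_quota -x -c 'limit -p bhard=20g ihard=1000000 ct101' /containers
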

  • Francisco Top Host, Host Rep, Veteran

    mksh said: XFS has directory quotas, or you could mount qcow2 images using qemu-nbd, so I wouldn't say LVM is the only option.

    XFS is container aware? I've heard nothing about that. Just because it has quotas on a folder doesn't mean the end user is going to see that limit in df -h or similar.

    Great, so now you're using NBD, and when it crashes (because it's going to crash) your VM's going down hard :( The only perk it provides is that QEMU images are thin provisioned at the start, but once a user blows up their space to the maximum it's never going to shrink again. You might as well just use LVM in that case. You'll likely get some users that won't max out their drives, so you'll save a bit there, but you can get the same results with LVM thin provisioning without adding another point of failure (sketch after this comment).

    Francisco
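
    For reference, a bare-bones sketch of the LVM thin provisioning mentioned above; the volume group, pool, and volume names and the sizes are placeholders:

        # Create a thin pool, then carve an oversubscribable volume out of it
        lvcreate -L 500G --thinpool ctpool vg0
        lvcreate -V 50G --thin -n ct101_disk vg0/ctpool
        mkfs.ext4 /dev/vg0/ct101_disk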

  • mksh Member
    edited February 2018

    @Francisco said:

    mksh said: XFS has directory quotas, or you could mount qcow2 images using qemu-nbd, so I wouldn't say LVM is the only option.

    XFS is container aware? I've heard nothing about that. Just because it has quotas on a folder doesn't mean the end user is going to see that limit in df -h or similar.

    Yeah, you got a point there. I kinda pulled that out of my ass. Overselling will be obvious in this scenario.

    Great, so now you're using NBD and when it crashes (because it's going to crash) your VM's going down hard :(

    Well, I can't agree here. I've used NBD quite a lot and I can't remember it crashing even a single time.

    The only perk it provides is that QEMU images are thin provisioned at the start, but once a user blows up their space to the maximum it's never going to shrink again.

    Yeah, shrinking qcow2 is tricky, sadly. I'm quite positive it would be possible to swap the file behind NBD for a shrunken version, even if it might need a tiny bit of hacking (rough outline below).
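
    A rough outline of that "swap the file behind NBD" idea, assuming the container is stopped first; the filenames and nbd device are hypothetical, and qemu-img convert only compacts unused clusters rather than shrinking the virtual disk size:

        # Detach, rewrite the image without the dead space, swap it in, reattach
        qemu-nbd --disconnect /dev/nbd0
        qemu-img convert -O qcow2 ct101.qcow2 ct101-compact.qcow2
        mv ct101-compact.qcow2 ct101.qcow2
        qemu-nbd --connect=/dev/nbd0 ct101.qcow2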

  • Francisco Top Host, Host Rep, Veteran
    edited February 2018

    mksh said: Yeah, you got a point there. I kinda pulled that out of my ass. Overselling will be obvious in this scenario.

    With df -h not working, it's going to lead to more tickets, leaking of your configuration, etc., as well as no way for the user to know how much space they really have. If the XFS directory quotas don't also do inode accounting, you're going to end up with some user having 20 million inodes in their VM, and they become a liability if you ever have to fsck or migrate them.

    mksh said: Well, I can't agree here. I've used NBD quite a lot and I can't remember it crashing even a single time.

    There are plenty of posts out there documenting the crashes, and in many cases no plans to fix the problem. When it crashes, the drive simply disappears, and now you've got a CT looking for a volume that doesn't exist. I wonder if a container would even be killable in such a state, or would it be a completely zombied instance stuck in D state?

    mksh said: Yeah, shrinking qcow2 is tricky, sadly. I'm quite positive it would be possible to swap the file behind NBD for a shrunken version, even if it might need a tiny bit of hacking.

    What a scary as hell thing to try to do :P

    Francisco

  • @Francisco said:

    mksh said: Yeah, you got a point there. I kinda pulled that out of my ass. Overselling will be obvious in this scenario.
    mksh said: Well, I can't agree here. I've used NBD quite a lot and I can't remember it crashing even a single time.

    There are plenty of posts out there documenting the crashes, and in many cases no plans to fix the problem. When it crashes, the drive simply disappears, and now you've got a CT looking for a volume that doesn't exist. I wonder if a container would even be killable in such a state, or would it be a completely zombied instance stuck in D state?

    No idea, really. I have a bunch of NBD devices that have been running pretty much 24/7 for years, and as I said, they've never crashed on me.

    mksh said: Yeah, shrinking qcow2 is tricky, sadly. I'm quite positive it would be possible to swap the file behind NBD for a shrunken version, even if it might need a tiny bit of hacking.

    What a scary as hell thing to try to do :P

    Come on have some faith ;)

    Thanked by 1: Francisco
  • Shazan Member, Host Rep

    @Francisco said:
    No one wants to run KVM on a 2.6.32 based kernel, it's going to run like ass.

    Are you talking about performance? Is the difference so evident?

  • desperand Member
    edited February 2018

    @bugrakoc said:
    OpenVZ is much more efficient. It is good for most applications, and even better than KVM for some.

    It's a shame that Virtuozzo 7 still isn't production-ready.

    No, it's not.

    Memory allocation is impossible to predict. Memory consumption and the whole memory management model are very different from the classic model. You think your app will use 500MB of RAM on OVZ? You are very wrong! You can't know how much it will use: maybe 2GB, maybe 1.4GB, maybe 900MB, but not 500MB (see the note after this comment).

    Tons of limitations because of the shared kernel.

    A very old kernel with very old hardware support and features, which knows nothing about newer processors or the features actively used in production by other products.

    This type of virtualization NEVER WAS and NEVER WILL be production ready. Just greedy merchants trying to grab a lot of money from fresh clients who know nothing.

    It's like selling Sandboxie sandboxes or droplets, or kubases.

    Always voting for XEN / KVM / VMware only.

    Thanked by 2: tarasis, rm_
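
    The memory behaviour complained about above can at least be inspected from inside an OpenVZ container; this is only a way to look at the accounting, not a fix:

        # Beancounter limits and failcnt show where the "unpredictable" usage goes
        cat /proc/user_beancounters
        free -m    # the numbers reported here come from the host's shared kernel
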
  • Francisco Top Host, Host Rep, Veteran

    Shazan said: Are you talking about performance? Is the difference so evident?

    Extremely. 2.6.32 came out in 2009, and while CentOS 6's kernel is a little newer than that, there have been incredible performance gains in KVM since then.

    Not to mention better compatibility with hardware and things like that. Slabbing OpenVZ nodes is more or less a must these days if you want to use any recent gear.

    Francisco

  • SplitIce Member, Host Rep

    Francisco said: Load Averages. Load averages are not calculated on a per container basis like OpenVZ. This means that a user with a completely idle container is going to see the full load of the node and more likely than not will complain to the host or even cancel because "the VPS is broken/node is oversold".

    Could I tempt you with a patch to scale the load average between 0 and 0.1?

  • @Francisco said: It is extremely easy to make a container have complete full root access to a node.

    Do you see that happening easily even with unprivileged containers also? Thanks for the useful insights.

  • Francisco Top Host, Host Rep, Veteran

    @akb said:

    @Francisco said: It is extremely easy to make a container have complete full root access to a node.

    Do you see that happening easily even with unprivileged containers also? Thanks for the useful insights.

    I haven't heard of any new breakouts as of late, but what I'm getting at is that it's very easy to screw up and end up with privileged VMs instead of unprivileged ones (see the config sketch after this comment).

    Francisco
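
    For context on that privileged vs. unprivileged point: in LXC it largely comes down to whether the container config maps its IDs into an unprivileged range. A sketch using the common default range as an assumption (LXC 2.1+ key names; older releases use lxc.id_map):

        # /var/lib/lxc/ct101/config (excerpt) -- without idmap lines like these,
        # root inside the container is real root on the node, i.e. a privileged CT
        lxc.idmap = u 0 100000 65536
        lxc.idmap = g 0 100000 65536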

  • @SplitIce said:

    Francisco said: Load Averages. Load averages are not calculated on a per container basis like OpenVZ. This means that a user with a completely idle container is going to see the full load of the node and more likely than not will complain to the host or even cancel because "the VPS is broken/node is oversold".

    Could I tempt you with a patch to scale the load average between 0 and 0.1?

    It’ll be hilarious when people complain that their VPS takes 10 seconds to run wget but the load average displayed is still under 0.1 :)

  • @Crandolph OpenVZ shares the same kernel between host and guest and it's much more efficient: think jail shells on the same machine vs. cpu instruction emulation (hardware accelerated or not). Of course, one shouldn't overload the machine by provisioning more total resources to users than it has.

    I've been running OpenVZ guests on very underpowered cheap AMD "servers" and they ran just fine - no worse than running on the host node itself, with the added benefit of user/container isolation, simply because those extra layers of software abstraction aren't there (quick illustration below).
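
    A quick way to see the shared-kernel point being made here, purely illustrative:

        # Inside an OpenVZ CT this prints the host node's kernel version;
        # inside a KVM guest it prints whatever kernel the guest itself booted
        uname -r
        # An idle CT also shows only its own handful of processes - no second kernel
        ps aux | wc -l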

  • chihcherng Veteran
    edited February 2018

    Without OpenVZ, will there still be cheap VPS in the future?

  • @chihcherng said:
    Without OpenVZ, will there still be cheap VPS in the future?

    Not as cheap as one would hope.

  • @chihcherng said:
    Without OpenVZ, will there still be cheap VPS in the future?

    Only time will tell; maybe a new virtualization system/type will do better and replace OVZ, though probably not in the near future.

  • rm_ IPv6 Advocate, Veteran
    edited February 2018

    Janevski said: jail shells on the same machine vs. cpu instruction emulation (hardware accelerated or not, anyhow).

    You think KVM is "cpu instruction emulation"? Seriously?...

    Janevski said: no worse than running on the host node itself

    Wow what an achievement.

  • omelas Member
    edited February 2018

    @rm_ said

    Well, if we go to the low end (128/256MB), the memory usage of the kernel starts to matter. But maybe that's too low end.

  • @ricardo said:

    Crandolph said: Don't use a random one then. Linode, Digitalocean, Vultr, etc, all not bad choices with pros and cons. At least companies that won't disappear overnight.

    You missed the context of cheaper

    @omelas said:
    @rm_ said

    Well, if we go to the low end (128/256MB), the memory usage of the kernel starts to matter. But maybe that's too low end.

    I can get OpenVPN to run on Debian with 128 or 192MB of RAM on KVM. That's an example of KVM being used at the low end.

    Could probably get a 128MB Debian KVM to run a low-traffic nginx site or something too.

    OpenVZ does prosper a bit on the super low end for obvious reasons, though.

  • I have 3 VPSes at the moment and 2 of them are OpenVZ. They work well, just a little problem with the OS. It might cost much more money if I used KVM.

  • edited February 2018

    Yawn, here we go again.

    @Crandolph said:
    The only positive of OpenVZ is it needs slightly less resources.

    You obviously have never used it as an administrator. Smells more like trolling to me.

    You can put twice as many virtual machines on it without overselling, and the more you put on, the more it kicks KVM's ass. That's because it scales far better, whereas each KVM guest is yet another kernel and virtual hardware interface running. It's not even close.

    They shouldn't even be compared, because it's apples and oranges anyway.

  • edited February 2018

    @Crandolph said:
    ...Because of overselling of course.

    Drink!

    Thanked by 1: bugrakoc
  • risharde Patron Provider, Veteran

    I usually choose OpenVZ because of the amazing boot times... It might be trivial to some of you guys, but it's what keeps me sane when doing reboots, since I usually have to reboot a lot while developing and testing.

  • @risharde to be fair everything boots fast with SSDs.

    Thanked by 1: Aidan