New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
No, because that isn't the goal of it.
There are a lot of things they would need to integrate that aren't really needed in LXC.
For instance, most container products out there (Docker, etc.) are using NAT via the host node anyway, so a VENET offering isn't needed at all. IP locks can be built with things like ebtables and such, but since it's really unlikely a VENET-like interface will ever exist, you'll have to do all that by hand.
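For anyone curious, a hand-rolled IP lock with ebtables might look roughly like this. This is only a sketch: `veth101` and `192.0.2.10` are made-up placeholders for the container's host-side veth interface and its assigned address.

```sh
# Drop bridged frames leaving the container's veth that don't use its
# assigned source IP, so the container can't spoof other addresses.
# veth101 / 192.0.2.10 are placeholders for your real interface and IP.
ebtables -A FORWARD -i veth101 -p IPv4 --ip-src ! 192.0.2.10 -j DROP
# Same idea for ARP, so it can't claim addresses it doesn't own.
ebtables -A FORWARD -i veth101 -p ARP --arp-ip-src ! 192.0.2.10 -j DROP
```

You'd need one pair of rules per container, which is exactly the "do it all by hand" part.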
For storage, people will just say to use BTRFS or LVM if you need to enforce limits, but most people using LXC don't really care about the disk usage of their VMs, since it'll be about as much as if they were running the app without the containerization.
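A minimal sketch of the LVM approach: one fixed-size logical volume per container, where the filesystem size is the hard limit the container sees. `vg0`, `ct101`, and the mount path are hypothetical names, not anything from a real setup.

```sh
# Carve a fixed-size LV out of volume group vg0 for container ct101;
# the 20G filesystem itself becomes the container's disk limit.
lvcreate -L 20G -n ct101 vg0
mkfs.ext4 /dev/vg0/ct101
mount /dev/vg0/ct101 /var/lib/lxc/ct101/rootfs
```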
Francisco
That's the tough question: does OVZ idle better than KVM? Idling your own kernel is more fun but uses more resources, which isn't very eco-friendly. OVZ might make more sense for idling.
Comes down to price point for end users, at least in my experience. You have to know the limitations of OpenVZ and decide whether they're acceptable for your intended usage. I don't mind buying up $25/yr 2GB HostUS OpenVZ VPS instances for different types of development and testing of my Centmin Mod stack. But if your usage requires TCP-level tuning, other kernels, or kernel module support, i.e. IPSET for a better-performing firewall when dealing with large numbers of IP addresses, then OpenVZ isn't ideal. For instance, I managed to scale one of my WordPress sites to a 10,000-concurrent-user blitz.io load test on a 2GB DigitalOcean KVM VPS, and that requires tuning at the TCP level which you won't be able to do on a typical OpenVZ VPS.
OpenVZ or KVM? = answer, use the right tool for the intended job
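To illustrate the point above: these are the kinds of host-level TCP sysctls you can set on a KVM VPS but typically can't touch inside an OpenVZ container, because the kernel belongs to the host. The values here are illustrative examples only, not tuning recommendations.

```sh
# /etc/sysctl.d/99-tuning.conf -- example values, not recommendations
net.core.somaxconn = 65535              # bigger listen backlog
net.ipv4.tcp_max_syn_backlog = 65535    # survive connection floods
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_congestion_control = htcp  # choose a congestion algorithm
```

On OpenVZ, writes to most of these simply fail or are ignored inside the container.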
@Francisco doesn't Proxmox deal with some of the issues where LXC falls short of OpenVZ in their own LXC implementation? E.g. networking models, some security, and disk quotas?
Even proxmox covers their own asses in regards to the security comments:
Proxmox still suffers from all of the issues I mentioned.
I'm pretty sure the reason Proxmox stopped supporting OpenVZ is that they didn't want to keep shipping two kernels (one OpenVZ, one close to stable mainline) for their product to work. No one wants to run KVM on a 2.6.32-based kernel; it's going to run like ass.
Francisco
XFS has directory quotas or you could mount qcow2 images using qemu-nbd so i wouldn't say LVM is the only option.
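For reference, the qemu-nbd approach mentioned above roughly looks like this. The device node and paths are examples; `ct101.qcow2` is a hypothetical image.

```sh
# Load the NBD driver, then attach the qcow2 image to /dev/nbd0.
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /var/lib/images/ct101.qcow2
mkfs.ext4 /dev/nbd0                      # first use only
mount /dev/nbd0 /var/lib/lxc/ct101/rootfs
# ...and to tear it down again:
umount /var/lib/lxc/ct101/rootfs
qemu-nbd --disconnect /dev/nbd0
```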
XFS is container aware? I've heard nothing about that. Just because it has quotas on a folder doesn't mean the end user is going to see that limit in
df -h
or similar.

Great, so now you're using NBD, and when it crashes (because it's going to crash) your VMs are going down hard. The only perk it provides is that QEMU images are thin provisioned at the start, but once a user blows their space up to the maximum it's never going to shrink again. You might as well just use LVM in that case. You'll likely get some users that won't max their drives so you'll save a bit there, but you can get the same results with LVM thin provisioning without adding another point of failure.
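The LVM thin provisioning mentioned here might look something like the following sketch. `vg0`, `thinpool`, and `ct101` are placeholder names; sizes are arbitrary examples.

```sh
# Create a thin pool once, then hand out oversubscribed volumes from it.
lvcreate -L 500G --thinpool thinpool vg0
# Each container sees a 50G volume, but blocks are only allocated on write,
# so users who never fill their disk cost you almost nothing.
lvcreate -V 50G --thin -n ct101 vg0/thinpool
mkfs.ext4 /dev/vg0/ct101
```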
Francisco
Yeah, you got a point there. I kinda pulled that out of my ass. Overselling will be obvious in this scenario.
Well, I can't agree here. I've used NBD quite a lot and I can't remember it crashing even a single time.
Yeah, shrinking qcow2 is tricky, sadly. I'm quite positive it would be possible to swap the file behind NBD for a shrunken version, even if it might need a tiny bit of hacking.
With
df -h
not working, it's going to lead to more tickets, leaking of your configuration, and no way for the user to know how much space they really have. If the XFS directory quotas don't also do inode accounting, you're going to end up with some user having 20 million inodes in their VM, and they become a liability if you ever have to fsck or migrate them.

There's plenty of posts out there documenting the crashes, and in many cases no plans to fix the problem. When it crashes, the drive simply disappears, and now you've got a CT looking for a volume that doesn't exist. I wonder if a container would even be killable in such a state, or would it be a completely zombied instance stuck in D state?
What a scary as hell thing to try to do :P
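For what it's worth, XFS project quotas can enforce inode limits as well as block limits via xfs_quota. A rough sketch, assuming a filesystem mounted at `/srv` with the `pquota` mount option; the directory, project name, and limits are all made up.

```sh
# Map the container directory to project ID 101 (names are examples).
echo "101:/srv/ct101" >> /etc/projects
echo "ct101:101"      >> /etc/projid
xfs_quota -x -c 'project -s ct101' /srv
# Enforce both block and inode limits on the project.
xfs_quota -x -c 'limit -p bsoft=18g bhard=20g isoft=900000 ihard=1000000 ct101' /srv
```

That said, this doesn't answer Francisco's objection: even with the limits enforced, `df` inside the container still reports the whole host filesystem.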
Francisco
No idea really. I have a bunch of NBD devices running pretty much 24/7 for years and, as I said, they've never crashed on me.
Come on have some faith
Are you talking about performance? Is the difference so evident?
No, it's not.
Memory allocation is impossible to predict. Memory consumption and the whole memory management model are very different from the classic model. You think your app will use 500MB of RAM on OVZ? You are very wrong! You can't know how much it will use: maybe 2GB, maybe 1.4GB, maybe 900MB, but not 500MB.
Tons of limitations because of the shared kernel.
A very old kernel with very old hardware support, which knows nothing about new processors and features that are actively used in production by other products.
This type of virtualization NEVER WAS and NEVER WILL BE production ready. Just greedy merchants trying to grab a lot of money from fresh clients who know nothing.
It's like selling Sandboxie sandboxes or droplets, or kubases.
Always voting for XEN / KVM / VMware only.
Extremely. 2.6.32 came out in 2009, and while CentOS 6's kernel is a little newer than that, there have been incredible performance gains in KVM since then.
Not to mention better compatibility with hardware and things like that. Slabbing of OpenVZ nodes is more or less a must these days if you're wanting to use any recent gear.
Francisco
Could I tempt you with a patch to scale the load average between 0 and 0.1?
Do you see that happening easily even with unprivileged containers also? Thanks for the useful insights.
Not heard of any new breakouts as of late, but what I'm getting at is that it's very easy to screw up and make it so VMs are privileged rather than unprivileged.
Francisco
It’ll be hilarious when people complain that their VPS takes 10 seconds to run wget but the load average displayed is still under 0.1
@Crandolph OpenVZ shares the same kernel between host and guests, and it's much more efficient: think jail shells on the same machine vs. CPU instruction emulation (hardware accelerated or not). Granted, one shouldn't overload the machine and provision more total resources for users than it has.
I've been running OpenVZ guests on very underpowered cheap AMD "servers" and they ran just fine, no worse than running on the host node itself, with the added benefit of user/container isolation, simply because those additional layers of software abstraction just aren't there.
Without OpenVZ, will there still be cheap VPS in the future?
Not as cheap as one would hope.
Only time will answer that. Maybe a new virtualization system/type would do better and replace OVZ, though maybe not in the near future.
You think KVM is "cpu instruction emulation"? Seriously?...
Wow what an achievement.
@rm_ said
Well, if we go to the low end (128/256MB), the memory usage of the kernel starts to matter. But maybe that's too low end.
I can get OpenVPN to run on Debian with 128 or 192MB of RAM on KVM. That's an example of KVM being used low end.
Could probably get a 128MB Debian KVM to run a low traffic nginx site or something too.
OpenVZ does prosper a bit on the super low end for obvious reasons though.
Have 3 VPS at the moment and 2 of them are OpenVZ. They work well, just a little problem with OS choice. It may cost much more money if I use KVM.
Yawn, here we go again.
You obviously have never used it as an administrator. Smells more like trolling to me.
You can put twice as many virtual machines on it without overselling, and the more you put on, the more it kicks KVM's ass. That's because it scales far better, whereas each KVM guest is yet another kernel and virtual hardware interface running. It's not even close.
They should not even be compared because it's apples and oranges anyways.
Drink!
I usually choose OpenVZ because of the amazing boot times... It might be trivial to some of you guys, but to this day it's what keeps me sane when doing reboots, since I usually have to reboot a lot when developing and testing.
@risharde to be fair everything boots fast with SSDs.