OpenVZ vs other virtualization offers ratio
OpenVZ virtualization is, as far as I can see, the most popular among low-end offers. I haven't gathered statistics thoroughly - this is just based on what I've personally seen - but OpenVZ offers are roughly 80% of them all. If you have more real-life stats, please correct me with a reference to the data.
However, from my viewpoint, OpenVZ has several serious drawbacks. One is the obvious limit on what can be run (certain Linux distributions only); another is the lack of certain capabilities.
Personally, I find the absence of components such as the ipset netfilter extension a serious design flaw; I can explain why to those interested. One may notice, however, that the OpenVZ team doesn't consider it a flaw.
The question is: are low-endians (those offering low-end VPSes/whatever), in general, planning to abandon OpenVZ in favor of more capable technologies?
I understand that OpenVZ allows cramming many more VPSes onto a single server. But perhaps the quality aspect will prevail in some low-endian minds?
As for me, I am phasing OpenVZ VPSes out of my hosting farm.
Comments
Well, I suppose this is more directed at us, as we have been out of stock on OVZ for some months (except OverZold).
From a provider's point of view, OVZ is OK, but unstable. Servers crash out of nowhere, and exploits to "facilitate" those crashes appear all the time.
That is improving, however; lately we have had far fewer crashes. But the facts remain:
Pros
-OVZ has one of the lowest overheads; only vServer does better, at the price of far fewer features;
-OVZ is scalable: when you need a CPU boost it is there, and containers can be given resources, or have them taken away, without so much as a reboot;
-OVZ is cheap, not only because its lower overhead lets more containers fit on a server, but also because it takes advantage of unused resources much better than other virtualization methods. That allows better utilization of the node's resources, making it cheaper per unit of resource and more environmentally friendly.
Cons
-Not full virtualization: you cannot use your own kernel, some programs don't work, and some features require special modules and patches;
-It is an easy slippery slope to allocate far more resources than the node can handle at peak ("overselling", which in practice means overloading);
-Since it is such a collection of patches, and context switching is heavy on nodes with many containers (hence many threads), it can crash quite easily even without exploits, so it is rather unsuitable for mission-critical environments. Any provider and any virtualization can go down, but OVZ, especially at overloaded providers, is more prone to failure.
In spite of all that, OVZ is great for hosting: unless you need special firewalling, everything else works well, especially on the vSwap kernels. It is also great if you need a lot of resources at a low price and/or wish to reduce your carbon footprint and waste in general.
I do not agree with dogmatic approaches, as in "I switch from OVZ because it is oversold and bad in general."
More like: I switch away from OVZ because I need special firewalls, a special kernel, special installation options, unsupported OSes, etc. But if you needed all those, you would never have gone to OVZ in the first place.
I thought I stated my primary reason for moving away from OpenVZ somewhere else: kernel modules, starting with ipset. In short: wherever I need low-level (kernel-related) control, OpenVZ is simply out of the question.
If I need something "exotic" (Windows, Solaris, *BSD, rarer Linux distributions, etc.), OpenVZ just can't be used, so that's no problem at all.
I suggest not using emotional qualifiers such as "good", "bad". Perhaps "efficient", "less efficient", "incompatible" etc should be used instead. Personal preferences are very important, but they rarely suit technical discussions.
Note that the above hardly applies to me - I do not use emotional arguments when choosing virtualization technology.
Every virtualization technology has pros and cons. It's really about what works best for the individual's circumstances, IMO.
Well, to give a ratio:
Xen+KVM VMs are roughly a third of all; OVZ makes up the other 2/3. This was taken from Solus, and it does not count our storage plans (Xen+KVM - we have no OVZ storage plans), Windows plans, or instances in the cloud. However, OVZ is still far ahead of all the other VMs combined, even though we have been out of stock for some 2 months (to be fair, KVM is also out of stock, and we still have a few OverZolds).
It depends on what you are doing. For me OpenVZ is excellent.
Administration is easier:
-There is only one kernel to deal with, so there are fewer things for clients to screw up (i.e. no individual kernels), and troubleshooting VPS problems is easier;
-I can easily browse VPS directories, and setting up and moving VPSes between nodes is a breeze;
-Very mature technology, and the templating system works well;
-Simpler for clients, since they are only running in userspace;
-Multiple ways to enter a VPS: from the node's command line (vzctl enter), by browsing its directories, or via SSH;
-Quick VPS boot time, since there is no kernel to boot;
-More features in control panels like SolusVM: you and the client see bandwidth, drive space, and CPU usage monitoring, whereas with KVM you only get bandwidth;
-An often overlooked advantage is the lower memory requirement per VPS, since there is no kernel. Unfortunately that's a hard feature to sell, so it's never mentioned. Needless to say, a 256MB OpenVZ VPS gives the customer more usable memory than a 256MB KVM VPS.
Also, probably most important, it is very lightweight and makes very efficient use of hardware. Any unused memory is available. You can allocate CPU cores like KVM, but unlike KVM you can also allocate a percentage of CPU cycles - very handy if someone is trying to hog CPU. I also find it handles disk I/O sharing more gracefully than KVM.
The disadvantages are well known, and there is no shortage of people willing to point them out all the time. I don't know why people like to hate on OpenVZ so much; maybe because they never took the time to understand it and configure it properly on the admin end of things. On the client end, concern about being on an oversold server is legitimate - but that is a legitimate concern on KVM as well, and it is entirely a choice of the admin, nothing to do with how good or bad the virtualization is. As an administrator, it's awesome. My customers don't care about having their own kernel; all they notice is the quick reboot time and all the control panel monitoring.
PS: not sure what the OP is doing to get crashes. I had a bit of a problem on the 2.6.18 kernel where the network interface would stop working once in a while, but the 2.6.32 kernel has been solid as a rock. I have servers with hundreds of days of uptime running almost 100 VPSes each. The last reboot was for a kernel update.
That is very far from working properly. In theory, yes, very nice, but in practice, nope. If you limit the container like that, it will still lock the cores and its load will go up - it is the worst of both worlds in many cases. You can convince the user to leave that way, but it doesn't do what is needed to manage load efficiently.
On KVM and Xen it works.
I don't know what to tell you, other than you must be doing it wrong, because it works for me. There are actually two settings: CPU units and CPU %. CPU units should always be 1000; CPU % can be whatever. Nothing "locks up"... ever.
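For reference, a hedged sketch of how those two settings map onto vzctl parameters (container ID 101 and the values here are examples for illustration, not a recommendation):

```shell
# Relative CPU weight of this container vs. others on the node
# (the "CPU units" setting discussed above):
vzctl set 101 --cpuunits 1000 --save

# Hard cap as a percentage of one core (the "CPU %" setting;
# 100 = one full core):
vzctl set 101 --cpulimit 25 --save

# The number of cores the container sees can be limited separately:
vzctl set 101 --cpus 2 --save
```

Since these only reconfigure the container's resource limits, no reboot of the container is needed for them to take effect.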
You cannot do that on KVM as far as I know, so I don't know what you are talking about there. The only thing you can do on KVM is allocate the number of CPU cores, and that's it.
Well, we don't talk about those who hate. Flamewars were, are and will be.
I can once again repeat the magic word 'ipset', but it looks like only a few people in this world have grasped its convenience. With it, I filter hundreds of thousands of IPs/IP networks on a medium-power VPS without visible speed degradation. Now please attempt to do that with vanilla iptables.
I suppose, however, that once ipsets are more widely appreciated, the OpenVZ team will make the corresponding kernel modifications to allow them as well.
That won't happen with low-level tuning, however. Too bad so few OpenVZ providers optimize their nodes properly, so that memory- and network-related settings are at least close to optimal.
Apart from that, yes, it's quicker, simpler and easier to administer. No sense in denying the advantages.
Also, a subset of OpenVZ is now included in the mainline kernel, so you can create OpenVZ containers with a standard kernel. You just won't get all the features unless you use the OpenVZ kernel.
Yes, starting with version 4 of vzctl.
Once again: as long as OpenVZ lacks functionality that is significant from my viewpoint, it's of little interest to me. However, I do monitor recent changes.
Do you mean the ipset netfilter extension? I've never heard of it or used it myself, and I'm doing fairly typical things. Why is it a significant missing feature? Significant if you really need it, but I don't think a lot of people do. Is there any reason that feature cannot be added at some point? They are adding things all the time, and netfilter extensions are one thing that does get updated from time to time.
It does have drawbacks, which can be show-stoppers if they are a problem for you. But, the vast majority of users don't need the functionality OpenVZ does not provide.
Well, perhaps I should enlighten people about ipset. It was created to make iptables-based filtering easier and more efficient.
An example would be an ipset vs. traditional iptables comparison when mass-blocking IP addresses.
ipsets are not supported by many iptables automation tools; I suppose that's why they are not widely used.
Personally, I use it to block known malicious traffic sources (botnet drones, the Spamhaus DROP list, StopForumSpam comment spammers, etc.), as well as to build my own list of misbehaving IPs.
At the time of writing, 218023 entries (179425 addresses and 38598 networks) are filtered out on the VPS running my sites. This filtering costs approx. 0.05 seconds of additional latency on average when accessing a site, and the rules consume 7638976 bytes of memory.
The VPS is a KVM-type VPS with 1 CPU core (3.2GHz), running CentOS 6.*.
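For the curious, a minimal sketch of the ipset approach described above (the set name and addresses are made up for illustration):

```shell
# One hash:net set can hold both single addresses and CIDR networks:
ipset create blocklist hash:net maxelem 262144

# Populate it - in practice this would be scripted from feeds such as
# the Spamhaus DROP list or StopForumSpam:
ipset add blocklist 192.0.2.1
ipset add blocklist 198.51.100.0/24

# A single iptables rule then matches the whole set via hash lookup,
# instead of one linearly-scanned rule per address:
iptables -I INPUT -m set --match-set blocklist src -j DROP
```

The hash lookup is what keeps latency nearly flat even at hundreds of thousands of entries, whereas plain iptables walks its rule chain linearly.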
That's perfectly normal. If people are happy with OpenVZ, they simply won't appear in this discussion.
As a customer, I'm more than happy to have a mix of KVM reverse proxies and OVZ backends. Like Maounique said, for things that just don't work on OVZ, people don't use OVZ - but there's plenty of things that do work just fine so being a stickler about it seems a bit odd.
P.S. quality hosting doesn't just depend on the virt.
(sigh)
I have said OpenVZ ceases to fit my goals at the moment. If it does fit yours/whoever's, no one prevents you from using it, correct?
I'm a bit confused then. Your OP question is laced with personal prejudices and purports that OVZ is a universally inferior choice and that hosters need to abandon it. Everyone above me has posted the idea of 'using whatever fits your needs', which you seem to understand clearly.
You seem to just want confirmation that you've made the right choice. I don't know if we can even give you that considering - as we've all agreed so far - that it's a reflection of personal requirements.
Please prove at least one of these statements by citing my words.
Note: don't forget to mention when I stress "for my purposes" and "for my goals".
What I want is a simple answer to my question: the ratio between virtualization technologies, and the trends in it. All the rest is but your assumptions.
I use both. OpenVZ is my first choice, with a bit of KVM sprinkled in for those things that won't work on OpenVZ. Dedicated servers are pretty cheap nowadays, so there's no reason you could not have both. Or use Proxmox, which can do both on one box.
Well, I actually use/test all types of hosting. However, I keep only those that are most cost-efficient and help me handle everyday tasks. I use OpenVZ-driven backup VPSes without any complaints, for example.
@Master_Bo - check out Docker/LXC. A bit better suited for application/platform isolation, but really good stuff.
Don't forget another downside of OpenVZ: it's an absolute "glass house". A hoster (or their $2/hour outsourced "support specialist" working from their mom's basement) can enter your VPS stealthily with one "vzctl enter", poke at your files, then view/edit/copy any of them. It should be totally out of the question for hosting anything even remotely private - for example, any sort of website or forum with user registration and a login system.
In fact, any VM can be accessed in a more or less stealthy way; however, OpenVZ is the simplest case, I agree.
If the server is physically accessible to third parties, even a dedicated server can't be considered safe enough against unauthorized data access.
With Xen/KVM it's more involved; and then, if they want to edit any data, they will need to power off/reboot the VPS in question, which makes it at least somewhat noticeable (even though it can be masked as an outage or maintenance).
Which should not be a reason to just throw your hands up in the air and sheepishly use the most insecure of all available options (OpenVZ).
It depends on how much the adversary wants your data. The only (to some extent) sure way to store private data outside your house/company is to use encrypted containers on storage space exported somehow (NFS/iSCSI). That is a block device, and encryption/decryption does not happen on the remote machine - only your computer knows the passphrase. Of course, an attacker with a LOT of resources can break a lot of encryption after a few years, or can simply torture you; it is not impossible, just impractical.
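A sketch of that setup, assuming an NFS share already mounted at /mnt/nfs (paths and sizes here are illustrative): all the crypto happens on the local machine, so the storage host only ever sees ciphertext blocks.

```shell
# Create a 1 GiB container file on the remote share:
dd if=/dev/zero of=/mnt/nfs/vault.img bs=1M count=1024

# Set up LUKS on it; the passphrase never leaves this machine:
cryptsetup luksFormat /mnt/nfs/vault.img
cryptsetup luksOpen /mnt/nfs/vault.img vault

# The filesystem lives inside the mapped (decrypted) device:
mkfs.ext4 /dev/mapper/vault
mount /dev/mapper/vault /mnt/secure
```

The remote host only ever stores and serves the encrypted vault.img; nothing on it can decrypt the contents.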
...and yet there are a few LEB providers who ignore this advice and host their WHMCS billing system offsite on an OpenVZ VPS at another provider.
I may be mistaken, but accessing (reading) data may well go completely unnoticed. Editing data could be somewhat tricky, though.
That's an interesting subject, yes. Since a VM's RAM can be read relatively easily, encryption should ensure it doesn't keep large chunks of unencrypted data. Then again, if the VM decrypts data automatically, that can be intercepted; if it's done manually, chances are it can be intercepted as well.
Of course it all depends on how much the intruder party needs to access the data in question.
We can only conclude that OpenVZ, other conditions being equal, is the least secure of them all.
Linux Vserver, LXC (used on some Joe's Cloud VPSes), FreeBSD jails, and Solaris Containers are probably equally insecure, because all four are containerization rather than true virtualization, and their file systems can easily be accessed by anyone with root privileges. Their market penetration is much lower, though (and Solaris Containers are usually used only for internal company purposes rather than for providing virtualized services to 3rd parties).
You are only looking at it as an end user. As an admin, the so-called "glass house" you speak of is a huge advantage: it makes it that much easier to administer the VPSes. If you are doing semi-managed hosting, going into end users' VPSes is a necessity. KVM is more of a black box, so it is harder to administer.
I will give you a real example of the advantage of this for the admin as well as the end user. I run some particular apps on all my VPSes, and a few times vulnerabilities were found and made public, so a bunch of VPSes had the c99 hacker script installed. With KVM you would have to go into each VPS to scan for and remove the malicious script and patch the vulnerability. With OpenVZ, I ran an automated bash routine on the node that scanned all VPSes, removed the malicious script, and installed the patch. That's minutes vs. hours of work.
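A hypothetical sketch of such a node-side sweep (the signature string, paths, and quarantine scheme are invented for illustration, not the poster's actual script):

```shell
#!/bin/sh
# scan_containers ROOT SIGNATURE QUARANTINE
# Walks every container directory under ROOT (e.g. /vz/private on a real
# OpenVZ node), finds files containing SIGNATURE, reports them, and moves
# them into QUARANTINE so they are no longer served.
scan_containers() {
    root=$1; sig=$2; quarantine=$3
    mkdir -p "$quarantine"
    for ct in "$root"/*/; do
        [ -d "$ct" ] || continue
        # grep -rl lists every file under this container's filesystem
        # whose contents match the signature
        grep -rl "$sig" "$ct" 2>/dev/null | while IFS= read -r f; do
            echo "infected: $f"
            mv "$f" "$quarantine/"
        done
    done
}

# On a real node this might be invoked as:
# scan_containers /vz/private c99shell /root/quarantine
```

Patching the vulnerable app could be another step in the same loop; the point is that a single pass on the node reaches every container's filesystem, which is impossible with KVM guests.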
Btw, the admin can make a backup/clone of your KVM and change the root password anytime they want, so it's no more "secure" - it just takes a little more effort. Admins always have 100% control of your VPS, so talking about it in terms of security is not correct anyway. If you cannot trust your admin, or are doing things you need to hide, then you should never use a VPS.
If you have something to hide and want to use a VPS, then by all means use KVM. It won't stop snoopy admins, or the FBI if they get a warrant. You won't know about it if they make a clone, either.
That's basically what I said, so not sure with what you're (dis)agreeing here...
Yes!!! That's who most of us are around here, end users.
Couldn't care less; I simply won't buy this from you, period. Many others who in time get burnt by OpenVZ limitations won't either.
OK, good point, one shouldn't rely too much on extra privacy in KVM/Xen, it's still a VPS and is still stored on a server where someone else is 'root'.
However, it is indeed possible to have a very secure system on KVM - specifically, one with the rootfs encrypted, asking for the decryption password on boot (which you will need to enter via the VNC console). Of course they could still keylog the VNC console, but that would require a level of determination far beyond that of a casual "snooper".
...hmm, or by downloading the decryption key over HTTPS from a remote server, where you make it available manually only for a short time, when you know that your VPS is indeed booting up and you are following the process. That way the keylogging issue is avoided.
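That idea might look something like this during the boot's unlock step (the URL, device, and mapping name are made up; cryptsetup does accept a key on stdin via --key-file=-):

```shell
# Fetch the key only while the owner has it published, and pipe it
# straight into the unlock - the key never touches the VPS's disk:
curl -fsS https://keys.example.com/vps-rootfs.key \
  | cryptsetup luksOpen /dev/vda2 cryptroot --key-file=-
```

Once the root device is mapped, the key is taken off the remote server again, so there is nothing left to keylog or snoop.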
In any case, there are a lot of such possibilities with KVM, and absolutely none with OpenVZ: even if you have a super-duper encrypted FS on OpenVZ, once you have it mounted, a 'vzctl enter' gives the host a full view of it.
The OpenVZ vs KVM/Xen/Dedicated security issue has nothing to do with "having something to hide". It has everything to do with reducing the risk of your users'/customers' personal information (or your company's data) being compromised by a 3rd party gaining access to it.