Comments
On every KVM VPS I've ever had with every provider (Linode, Vultr, etc), a reboot is required for extra IPs to become usable.
This is even stated in their documentation, e.g.:
"If you purchase additional IP addresses for your Linode, you must reboot your machine before the IP addresses will function correctly."
Source: https://www.linode.com/blog/networking/additional-ip-addresses/
"Note: You must restart the server via the control panel before using the IP address. Rebooting via SSH is not sufficient."
Source: https://www.vultr.com/docs/vultr-reserved-ips/#Assign_a_Reserved_IP_to_an_Instance
If I'm missing something here and could use some education, I'm all ears - nothing beats learning something new every day.
It depends on the back-end. In most cases, adding an IP to a running instance works without a reboot, as long as the routers don't block it. If you need to add another interface to the instance, with a different MAC and so on, then yes, in some cases a reboot might be needed.
Price, and knowing the VPS has low resource consumption during the long periods it sits idle. With KVM it feels more like I am leaving a machine running 24/7.
Blame the crappy systems at incompetent providers.
Dynamic IP change is just one command in the guest machine:
ip addr add 2001:db8::2/64 dev uplink
(IPv4 is same syntax)
If filters are in place, they can be changed dynamically too.
Example competent provider with awesome system: Oracle Cloud.
You can change IP assignments all you want without reboot.
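For reference, the live change amounts to something like this (the interface name, addresses, and nftables table/chain names are placeholders, assuming rules already exist):

```shell
# Add a secondary IPv4 address to a running guest - no reboot needed
ip addr add 203.0.113.10/24 dev uplink

# If guest-side filters are in place, they can be updated live as well,
# e.g. with nftables (table "filter" and chain "input" are assumptions):
nft add rule inet filter input ip daddr 203.0.113.10 accept
```

Whether this works end to end still depends on the provider's routers accepting traffic for the new address, which is exactly the back-end difference discussed above.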
That applies to KVM too. However, OVZ containers share the kernel of the host. For very small instances (think 128-512 MB) the kernel may take a significant part of the available RAM, so it's a win for both customers (more RAM for their apps) and the provider (more instances to squeeze). Combined with NAT, they can reach the rock-bottom price (and performance).
As people's expectations change and hardware becomes more powerful, that advantage has pretty much eroded. Even if you have a very small workload, you're better off managing your own Docker container on the usual KVM offers.
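As a sketch of that approach (the image, limits, and port mapping here are arbitrary examples, not anyone's actual setup):

```shell
# Run a tiny workload in Docker on an ordinary KVM VPS,
# capping memory and CPU so it behaves like a small container plan
docker run -d --name tinyapp \
  --memory=128m --cpus=0.5 \
  -p 8080:80 nginx:alpine
```

You get the small-footprint feel of an OVZ container, but with your own kernel underneath.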
I would say price, but I dislike OpenVZ so much that I try to avoid it. The same goes for LXC. There are just some situations where either KVM prices for equivalent resources are unjustifiably high, or an equivalent KVM is just not available. I prefer to never use OpenVZ. Less than 10% of the systems I have are OpenVZ.
Docker-on-KVM is the answer for very small workloads, when the developer has multiple such workloads in the same location, which can fill a KVM.
I have a very large KVM (the famous 9999 plan, my last one), in which I made several LXC containers, in which I made several Docker containers.
It's Docker-in-LXC-on-KVM, for better isolation.
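With LXD, that nesting can be sketched roughly like this (container name and image are arbitrary; `security.nesting` must be enabled for Docker to run inside an LXC container):

```shell
# On the KVM guest: create an LXC container that is allowed to nest
lxc launch ubuntu:22.04 dockerhost -c security.nesting=true

# Inside it, install Docker and run workloads as usual
lxc exec dockerhost -- sh -c "apt-get update && apt-get install -y docker.io"
lxc exec dockerhost -- docker run --rm hello-world
```

Each LXC container gets its own filesystem and network namespace, so a misbehaving Docker workload is boxed in twice before it can touch anything else on the KVM.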
A different case is having very small workloads in many different locations.
This is when I want to acquire very small OpenVZ or LXC containers in many different locations.
I used to have WebHorizon NAT bundle but now switched to MicroNode Instances.
It can only be for price. Because of kernel restrictions, firewalld has had numerous problems running properly on OpenVZ, and a silently failing firewall can even get you hacked: the user doesn't realize anything is wrong because their own traffic still works, without realizing it works for everyone else too...
I gave up on openvz years ago for this reason.
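A quick sanity check catches that failure mode: verify the firewall daemon is actually running and that rules really made it into the kernel, instead of trusting that traffic "works":

```shell
# Confirm firewalld is actually running, not just installed
firewall-cmd --state          # prints "running" when healthy

# Double-check that rules were actually loaded into the kernel;
# a default-policy-only list on a supposedly firewalled box is a red flag
iptables -S | head
```

On a restricted OVZ kernel, the first command failing while your services stay reachable is exactly the dangerous situation described above.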
OpenVZ is cheaper, but it comes without support from Virtuozzo and is NOT up to date with the latest (security) updates. It's a best-effort project from the OpenVZ team (version 7 or newer).
I migrated my stuff to LXD years ago because of this lack of support and updates. Using Ubuntu LTS with ZFS storage and I love it!
I don't see any reason to use OpenVZ, plus I am curious about the current status with the devs from Russia.
Also: Don’t forget security. The provider can always access your container instantly with OpenVZ.
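An LXD-on-ZFS setup like the one mentioned above can be sketched as follows (the pool name and backing device are assumptions for illustration):

```shell
# Create a ZFS-backed storage pool for LXD (device path is an assumption)
lxc storage create tank zfs source=/dev/sdb

# Launch containers onto that pool
lxc launch ubuntu:22.04 web --storage tank
```

ZFS gives you cheap per-container snapshots and clones, which is a big part of why the combination is pleasant to run.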
Rather LXC than OpenVZ. And KVM if possible.
So I guess it's not really by choice.
OpenVZ is more forgiving when it comes to overselling, thus allowing providers to set more competitive pricing and attract more customers (hence the demand for OpenVZ).
Can I ask why?
I was making up shitty jokes.
Ahahah! I got fully owned :P
Why has this question been raised so many times, and now by @nessa?
Straight answer: OpenVZ is FUD and will eventually be dead in the near future, and so will RamNode.
There is no cost difference or advantage at all whatsoever, other than oversold nodes, which made providers a lot of profit; now they create discussions here and there on OVZ for their own promotion.
For private servers, or for NAT VPSes, LXC is an option.
Indeed, we do this live, without any reboots needed.
When I used OVZ, it was only because of price, and I really found no difference between KVM and OpenVZ.
A few years later, I started using Docker, which can't be used in OVZ 6 (I guess), so I started buying KVM ones, which are more expensive.
Then I heard OVZ 7 can run Docker too, but by then I had mostly dropped all unused VPSes and was using a few KVMs.
It's affordable.
In my opinion, OpenVZ is shit.
More than a decade ago, I used OpenVZ because it was much less expensive than KVM and other choices. My feeling at the time was OpenVZ offerings were 1/2 to 2/3 the price of comparable KVM offerings.
OpenVZ was very common. KVM was much less common. Xen was uncommon. If you dug into the Xen offerings, they seemed to be mostly PV (paravirtualization, like OpenVZ), not HVM (virtual machine, like KVM). A few providers offered VMware.
That was a long time ago. These days, KVM offerings are common and priced in the same ranges as OpenVZ. If there is a significant performance hit with KVM, I never noticed it. I like KVM because most providers offer CD/DVD images mounted on request with a console to let me do my own server installations (or remote boot installation).
So yeah, KVM. I won't be lookin' back at OpenVZ anytime soon. I have one OpenVZ remaining, and I already arranged to let it expire without renewal in a couple weeks. The company that rented it to me started raising the price every year. I should have cancelled it years ago.
If you make your KVM prices the same as your OpenVZ ones (same specs, ofc), no one will prefer it.
TBH that's what we're evaluating...
A container is a glorified version of chroot, while a paravirtualized machine has its own kernel, which is absolutely different from a container.
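The shared-kernel point is easy to verify for yourself: in a container the kernel version always matches the host's, while a KVM guest boots whatever kernel it installed. A quick sketch:

```shell
# On the host:
uname -r    # the host's kernel version

# Inside an OpenVZ/LXC container on that host:
uname -r    # the same version - the kernel is shared

# Inside a KVM guest:
uname -r    # the guest's own kernel, independent of the host
```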
Restrictive, outdated, etc... Just way less headache with LXC than with OpenVZ overall. Plus if I need to run containers, I'd just spin LXC or Podman up anyways. There's no place for OpenVZ whatsoever.
Edit: Since others in the thread had already spoken, I think I don't have to continue to keep being polite anymore. So, here it goes.
OpenVZ is SHIIIIIIIIT
if we are talking containers, I prefer lxc. otherwise KVM/bare metal.
My main VMs are KVM-based, but I also have many OVZ-based VMs, and I'm satisfied with them. They're perfectly adequate for a lot of tasks (I don't need more for what I run on them, for example: hosting some websites, IRC tools, OpenVPN, etc.). And of course, it's cheaper.
OpenVZ is popular due to a number of important factors. First off, OpenVZ provides outstanding resource efficiency. Multiple containers use the same kernel thanks to container-based virtualization. Because containers share common libraries and resources, this strategy maximises resource utilisation while requiring less overhead than complete virtualization systems. The effective use of resources results in lower costs and better performance.
”A number of important factors”
mentions one
There are other ways of doing that, some even lighter on resources.
Yes, docker does suck in many aspects, but we also have LXC which ticks all the boxes for a container-style "virtualization".
Just curious, why are people preferring OVZ over LXC? Not intending to start a polemic here, when Proxmox moved to LXC from OVZ I didn't like it, but now I think it was a good move and curious to hear other people's experiences.