Comments
If you have nested virtualization, a lot more of your VPS's potential can be unlocked.
Some examples?
With nested virtualization you could create many virtual machines (KVM) or LXC containers on top of your VPS.
Performance-wise, with VMX/AMD-V enabled you will get somewhat better performance than with it disabled, but nothing significant, maybe around 5-10%.
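To make that concrete, here is a minimal sketch (Python, assuming a Linux guest) of how you can check from inside your VPS whether the vmx/svm CPU flags needed for nested KVM are actually exposed:

```python
# Minimal sketch: check from inside the VPS whether the CPU exposes the
# hardware virtualization flags (vmx for Intel, svm for AMD) that nested
# KVM needs. LXC containers do not need these flags.

def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Nested KVM possible:", has_hw_virt())
```

If the flags are missing, KVM guests inside the VPS can only run with slow software emulation.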
So from the provider's point of view, why don't they all enable this? The only issue I can find is that it causes problems with live migration on some hypervisors.
It limits overselling potential, which is a key metric in the low-end price bracket. Overselling is good; most come out of it as winners.
How does it limit overselling?
I should clarify that I'm making an assumption, but if people are upset about it not being there I think it helps lend credit to the assumption. It's sort of like how I don't sell over 100GB storage for email. If I sell 15GB, almost no one who buys it uses 15GB. If I sell 100GB, almost everyone who buys it uses 100GB. There are certain thresholds and variables that increase average usage.
Well, if you offer VM-x (AMD-V), then you need to allow for the possibility that customers will really take advantage of it, and if they do, this will mean a heavier load on the host node, which will mean that you should sell fewer VPSes on that node in order to avoid too heavy a load on that node.
In sum, if you offer VM-x (AMD-V) and you are prudent, you can oversell CPU less than if you don't offer VM-x (AMD-V).
From a customer's perspective, if a provider offers VM-x (AMD-V), the customer tends to feel more confident that the node is less oversold than if the provider doesn't offer VM-x (AMD-V), but naturally, this feeling may not always turn out to be correct.
LXC doesn't need nested virtualization to be enabled, as it's just a container (OpenVZ-like), not VM-in-VM.
This is not correct either: unless you're actually doing nested KVM, these features don't affect any other use.
As for the OP question, I am kind of wondering myself. Maybe if you got an 8-16GB RAM KVM, you might want to split it further... But splitting a typical LEB with 1 GB RAM or so seems weird. And in both cases, LXC just screams to be the better choice for multiple personal VMs (i.e. not separate customers each).
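For completeness, a minimal sketch of that LXC route (Python calling the classic lxc-* tools; the container name and image are just placeholders). Because containers share the host kernel, this works whether or not VM-x (AMD-V) is passed through:

```python
# Minimal sketch, assuming the classic LXC userspace tools (lxc-create,
# lxc-start, lxc-attach) are installed inside the VPS. No /dev/kvm and no
# vmx/svm flags are required, since containers share the host kernel.
import subprocess

NAME = "test-ct"  # hypothetical container name

# Download a Debian 12 rootfs and start it.
subprocess.run(["lxc-create", "-n", NAME, "-t", "download",
                "--", "-d", "debian", "-r", "bookworm", "-a", "amd64"],
               check=True)
subprocess.run(["lxc-start", "-n", NAME], check=True)

# The container reports the same kernel as the VPS itself.
subprocess.run(["lxc-attach", "-n", NAME, "--", "uname", "-r"], check=True)
```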
So if the node has VM-x (AMD-V), you're saying people may use the server for nesting? Or what exactly? I understand that if you have customers who sell nested virtual machines, on average those will be heavier loads.
Thank you for correcting the LXC section. A somewhat interesting test using KVM & CPU Host from @Not_Oles:
https://talk.lowendspirit.com/discussion/comment/69084#Comment_69084
Yes: if VM-x (AMD-V) is passed through, people may use the server for nesting.
Yes, but one doesn't need to assume that customers actually sell nested virtual machines -- only that they install/use them.
But my main point is that the "obsession" with VM-x (AMD-V) -- though I wouldn't call it an obsession -- has more to do with a perception or feeling on the part of customers that they're getting a better VPS for their money. This perception or feeling comes from the idea that if a provider offers VM-x (AMD-V) on the node, then the node is probably less oversold with respect to CPU than if the provider didn't offer VM-x (AMD-V) on the node (as I tried to explain above).
From his table, what's being compared is no VM-x (AMD-V) for the VM at all (extremely slow), while the other two rows are AES pass-through on/off. So it doesn't seem much related.
And sure you can run a VPS using QEMU-only (unaccelerated). This doesn't require VM-x (AMD-V) pass-through, but will be 10 times slower and will pretty much always use 100% CPU on the host VM, risking getting into trouble for CPU abuse.
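A quick sketch of the difference being described, assuming QEMU is installed in the VPS and using a placeholder disk image: the same guest boots with KVM acceleration when /dev/kvm is available (i.e. VM-x/AMD-V is passed through), and falls back to the much slower TCG emulation otherwise:

```python
# Minimal sketch: start the same guest with KVM acceleration if /dev/kvm
# exists, otherwise with plain TCG emulation (works anywhere, but roughly
# an order of magnitude slower and CPU-hungry). Disk image is hypothetical.
import os
import subprocess

DISK = "guest.qcow2"  # hypothetical disk image

accel = "kvm" if os.path.exists("/dev/kvm") else "tcg"
cmd = [
    "qemu-system-x86_64",
    "-accel", accel,                       # kvm needs VM-x/AMD-V passthrough
    "-m", "1024",
    "-drive", f"file={DISK},format=qcow2",
    "-nographic",
]
print("Starting guest with accel =", accel)
subprocess.run(cmd, check=True)
```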
The node always has VM-x (AMD-V); without that, no KVM VPSes would run at all (as said above, a QEMU VPS can be run, but 10x slower).
The question is whether that feature is also passed through into user VMs, for them to have their own VMs inside. That is what's called nesting. And the pass-through can be toggled on the host node.
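For reference, a rough sketch of what that host-side toggle looks like: the kvm_intel / kvm_amd kernel modules expose a "nested" parameter that a provider can check (and typically set via modprobe options). The guest CPU model also has to expose vmx/svm (e.g. host-passthrough in libvirt) for the flag to actually reach the VPS:

```python
# Minimal sketch: read the 'nested' parameter of the loaded KVM module on
# the host node. Values are "Y"/"N" or "1"/"0" depending on kernel version.
from pathlib import Path

def nested_enabled():
    for module in ("kvm_intel", "kvm_amd"):
        p = Path(f"/sys/module/{module}/parameters/nested")
        if p.exists():
            return p.read_text().strip() in ("1", "Y")
    return False  # KVM modules not loaded at all

print("Nested virtualization enabled on this host:", nested_enabled())
```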
Maybe some people like passthrough because passthrough allows running a second operating system without buying a second VPS?
Or because passthrough allows running a second instance of the same operating system, but differently set up for testing?
With passthrough the second instance might be quick and easy to reinstall or remove with low risk of damage to the main VPS operating system?
Greetings! 🌎🌍
Maybe, but I guess most people don't really do that. There are some use cases for emulators; I know most Android emulators require it.
From the POV of a provider, I don't see why they wouldn't have it enabled, except maybe for providers like Contabo, where it used to be popular to see summer hosts nesting their servers.