Comments
If you need direct access to the NIC, it's better to just get a dedi with dedicated resources. SR-IOV inside a virtual machine is a niche requirement, not well suited to a public environment like a VPS provider.
Many, if not most, low end providers have no idea what SR-IOV even is. So better to buy some shitbox and play with a dedicated machine.
The number of $7 VPSes on their hosts is probably also way over the limit of VFs their NICs can split out.
Oracle Cloud has SR-IOV in paid plans.
For SR-IOV, the switch / router must accept multiple MACs on the same port.
Hetzner redacted is known to send out "unauthorized MAC address" notices.
We shall poke @Not_Oles to try this on crunchy though.
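For what it's worth, the MAC side is manageable: each VF can be pinned to a fixed MAC from the PF, so the upstream switch (or an "unauthorized MAC" filter) only ever sees addresses you declared. A minimal sketch, assuming a hypothetical PF name and a made-up locally administered MAC prefix:

```python
#!/usr/bin/env python3
"""Pin a deterministic MAC to every SR-IOV VF on a PF (sketch).

The PF name and MAC prefix below are placeholders; adjust for your host.
Wraps plain `ip link` so the upstream switch only sees known MACs.
"""
import subprocess
from pathlib import Path

PF = "enp65s0f0"                  # assumed PF interface name
MAC_PREFIX = "02:00:00:00:00:"    # locally administered, made-up prefix

numvfs = int(Path(f"/sys/class/net/{PF}/device/sriov_numvfs").read_text())

for vf in range(numvfs):
    mac = f"{MAC_PREFIX}{vf:02x}"
    # `ip link set <pf> vf <n> mac <mac>` programs the MAC in the PF driver,
    # so the guest cannot present a different address on the wire.
    subprocess.run(["ip", "link", "set", PF, "vf", str(vf), "mac", mac],
                   check=True)
    # Optional hardening: drop traffic if the guest changes its MAC anyway.
    subprocess.run(["ip", "link", "set", PF, "vf", str(vf), "spoofchk", "on"],
                   check=True)
    print(f"VF {vf} -> {mac}")
```

The spoofchk part is exactly the case those abuse notices are about: the guest trying to put an undeclared MAC on the wire.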
On a low-density host (e.g. 8 customers), it's totally possible to set up SR-IOV.
Wrong answer.
Deploying 8 dedicated servers would consume 4U of space and 8 switchports.
It's more efficient to deploy 8 VDS in one dedicated server, consuming 2U of space and 1 switchport.
If you need high density and don't care about virtualization overhead, virtual machines are better. If you need high performance, virtualization overhead is not acceptable in the first place.
I can't imagine any widespread use case where a customer can justify SR-IOV and isn't concerned about NIC resource sharing amongst KVM guests.
Think high performance VDS plans.
The EPYC 7443 has a handy feature: each processor can be configured to expose 1, 2, or 4 NUMA nodes per socket (NPS1/NPS2/NPS4).
With this and SR-IOV, you can make 8 VMs, each with 8 cores and 32 GB of RAM.
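If you want to sanity-check that layout before defining the guests, the per-node CPU and memory split is visible in sysfs. A rough sketch of printing a one-VM-per-node plan (the sysfs paths are standard; mapping one VM and one VF to each node is just my reading of the layout above):

```python
#!/usr/bin/env python3
"""Print a one-VM-per-NUMA-node plan from sysfs (sketch).

Reads the standard paths under /sys/devices/system/node; the idea of
pinning one VM plus one SR-IOV VF to each node is the layout discussed
above, not something the kernel enforces.
"""
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[4:]))

for node in nodes:
    cpus = node.joinpath("cpulist").read_text().strip()
    # meminfo line looks like: "Node 0 MemTotal:  32718276 kB"
    mem_kb = next(int(line.split()[3])
                  for line in node.joinpath("meminfo").read_text().splitlines()
                  if "MemTotal" in line)
    print(f"{node.name}: pin one VM to CPUs {cpus}, "
          f"~{mem_kb // 1024 // 1024} GB RAM, plus one SR-IOV VF")
```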
Sounds reasonable, but there is a major drawback: because of SR-IOV you can't live-migrate such VMs, so moving them means downtime or a shutdown. That's why a purely software, XDP-based solution for high-performance networking looks to me like a more efficient and virtualization-native approach from the provider's perspective.
Given hardware support, XDP can significantly reduce the overhead of the default QEMU tap configuration.
If you want to come aboard and check, it's okay. You're always welcome! :-)
So SR-IOV can only support 8 guests?
ConnectX-5 supports up to 95 VFs per PF.
https://docs.nvidia.com/networking/display/mlnxofedv461000/single+root+io+virtualization+(sr-iov)#src-12013542_safe-id-U2luZ2xlUm9vdElPVmlydHVhbGl6YXRpb24oU1JJT1YpLUNvbmZpZ3VyaW5nU1ItSU9WZm9yQ29ubmVjdFgtNC9Db25uZWN0LUlCL0Nvbm5lY3RYLTUoSW5maW5pQmFuZCk
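The VF ceiling is advertised by the PF itself, so it's easy to check before planning around it. A minimal sketch, assuming a placeholder interface name (sriov_totalvfs / sriov_numvfs are the standard kernel SR-IOV attributes):

```python
#!/usr/bin/env python3
"""Check how many VFs a PF supports and enable them (sketch).

sriov_totalvfs / sriov_numvfs are the standard kernel SR-IOV attributes;
the interface name is a placeholder for whatever your PF is called.
"""
import sys
from pathlib import Path

pf = sys.argv[1] if len(sys.argv) > 1 else "enp65s0f0"   # assumed PF name
dev = Path(f"/sys/class/net/{pf}/device")

total = int((dev / "sriov_totalvfs").read_text())
current = int((dev / "sriov_numvfs").read_text())
print(f"{pf}: {current} VFs enabled, hardware maximum {total}")

# Enabling all of them (the kernel rejects a nonzero write while VFs are
# already enabled, so only write when the current count is 0):
if current == 0:
    (dev / "sriov_numvfs").write_text(str(total))
    print(f"enabled {total} VFs on {pf}")
```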
Oh, OK, so I guess the bottleneck is the limited number of guests it can be split across, plus needing specific NICs.
Hmm, never mind, it seems the Intel 82599, a NIC from 2009 that costs only $30 now, can have 63 VFs, which seems like a pretty large number. Certainly possible for medium-priced hosts, maybe not for the lowest end.
But this saves CPU on the hypervisor, so it can allow increased density.