SOYOUSTART / OVH: IPv6 only VMs help!!! :(
Hey guys, I'm a little annoyed at OVH, but I'm also clearly out of my depth at this level of networking, especially on Ubuntu. I've spent about three days trying to get this going.
I'm running an Ubuntu host on SYS. The idea was to use the IPv6 /64 to create my personal VMs (Oracle VirtualBox) for testing, coding, etc.
The issue is: the OVH docs say you must have a vMAC for IPv6 to work. Support told me the same thing. But that just doesn't fit my needs, because it would mean that for every single VM I want an IPv6 for, I'd have to purchase an IPv4 per VM. Like - WTH??? I'd like to fire up 20 VMs, not all of which need to be online simultaneously, without having to reconfigure networking every time I want to use one of them. What's the point of millions of IPv6 addresses if I can't use them independently? Maybe I'm missing something, or this could be my legit rant.
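For what it's worth, a single /64 really does hold enough addresses for any number of VMs; picking a static suffix per VM is just arithmetic on the prefix. A quick sketch with Python's ipaddress module (the prefix below is a placeholder - substitute the /64 OVH actually assigned to your server):

```python
import ipaddress

# Placeholder prefix - use the /64 from your SYS control panel.
prefix = ipaddress.IPv6Network("2001:41d0:1234:5678::/64")

# Carve out one static address per VM, e.g. ::100, ::101, ...
vm_addrs = [prefix.network_address + 0x100 + i for i in range(20)]
for i, addr in enumerate(vm_addrs):
    print(f"vm{i:02d}: {addr}/64")
```

Assigning the addresses is the easy part; getting OVH's routers to deliver traffic for them to the right VM is the actual problem.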
I found a 2016 article (which gave me some hope) and tried it, but my VM just doesn't want to ping -6 Google: https://ninefinity.org/post/kvm-guest-with-ipv6-only-on-ovh/
So firstly, I need a good samaritan who can just confirm that this article doesn't work anymore. And secondly (optionally), hint at the terminology I should be using to figure out how to get IPv6 working on VMs.
Any pointers or articles out there that I might have missed?
Thanks so much!!!
Comments
OVH IPv6 doesn't need a vMAC for VM use; a MAC address for IPv6 isn't even an option at OVH. I've never used IPv6-only VMs, though, so I don't have anything useful to share on that.
Thanks @MikeA - so according to the docs (which do work), you must have the VM configured with an IPv4 vMAC. You can then add an IPv6 to the VM and it works. They claim it's something about how they do routing (not something I understand or can visualize), but I suspect the vMAC is registered somewhere in their network so traffic gets routed accordingly. This way, however, basically requires you to pay for an IPv4 per VM, which makes no sense to me.
I do the same on Kimsufi: I have a few KVM VMs in libvirt on a bridge (br0). My incoming internet connection is on eth0 (not in the bridge), with an IPv4 and an IPv6 that is listed as a /128 but in practice can be used as a /64.
Because OVH doesn't allow using multiple IPv6 addresses on Kimsufi at the same time, I found the solution, ndppd, in a good Proxmox-on-Kimsufi writeup on LET; it proxies the IPv6 neighbor discovery messages from the bridge to eth0.
I don't know enough about SYS rules regarding IPv6 but this solution might work.
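For reference, the ndppd setup described above boils down to a small config file. A minimal sketch, assuming the uplink is eth0, the VM bridge is br0, and 2001:41d0:1234:5678::/64 stands in for your real prefix:

```
# /etc/ndppd.conf (sketch - substitute your own interface and /64)
proxy eth0 {
    rule 2001:41d0:1234:5678::/64 {
        # answer neighbor solicitations on eth0 for any address in
        # the /64, so OVH's router sends that traffic to this host
        auto
    }
}
```

You also need IPv6 forwarding enabled on the host (sysctl net.ipv6.conf.all.forwarding=1) so the traffic ndppd attracts actually gets routed on to the bridge.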
Thanks @adns appreciate the insight!
I might just have to use cloudflare tunnels after all...
You should check out the tutorial from @Maounique. It's written for Proxmox, giving VMs full IPv6, but it should work with VirtualBox as well.
https://lowendtalk.com/discussion/182736/basic-setup-of-proxmox-in-an-ovh-environment-kimsufi-example-with-nat-and-ipv6-enabled-containers/p1
Customizing it for VirtualBox or another type 2 hypervisor is pretty easy, but my tutorial does not explain that step by step. Maybe it would be a good idea to expand it and comb through the iptables rules a bit to make them more robust for complex environments with many bridges and complicated NAT.
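On the NAT side, the core of that tutorial is only a few rules. A hedged sketch of the host firewall configuration (interface and subnet names are assumptions here - eth0 as the public uplink, br0 as the VM bridge carrying a private 192.168.100.0/24):

```
# enable IPv4 forwarding on the host
sysctl -w net.ipv4.ip_forward=1

# masquerade VM traffic behind the host's single public IPv4
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

# forward VM traffic out; only established/related replies come back in
iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

That gives every VM an IPv4 exit without buying extra addresses, while the /64 handles inbound IPv6.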
If you set up ndppd on the host, then it will just work
IPv6-only, that is. I am experienced in running IPv6-only VMs, but it is not for everyone. At least some kind of IPv4 exit is desirable, and it is not that difficult.
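To complete the picture, the guest side of an ndppd setup is just a static address from the /64 plus a default route via the host. A sketch for an Ubuntu guest using netplan (the addresses, interface name, and file name are placeholders - the gateway is the host's own address on the bridge):

```yaml
# /etc/netplan/60-static-ipv6.yaml (inside the VM)
network:
  version: 2
  ethernets:
    enp0s3:                              # guest NIC attached to br0
      addresses:
        - "2001:41d0:1234:5678::101/64"  # one address from your /64
      routes:
        - to: "::/0"
          via: "2001:41d0:1234:5678::1"  # host's address on the bridge
      nameservers:
        addresses: ["2001:4860:4860::8888"]
```

After netplan apply, ping -6 from the guest should start working once ndppd on the host is answering neighbor solicitations for its address.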