IPv6 issue (SYS ignoring?)
Hi guys!
I have 3 dedicated servers at SYS, let's call them srv1, srv2 and srv3.
All servers are in Proxmox cluster.
I have a problem with IPv6, but ONLY on srv1.
All 3 servers have /64 IPv6 subnet that is divided into 256 /72 IPv6 subnets.
All servers (hypervisors) are not IPv6 configured (bridge vmbr0) but gateway is known for all of them.
And on srv2 and srv3 IPv6 works fine and I can ping everything.
On srv1 it does not work and also I cannot ping the gateway, but IPv6 config for me is 100% correct:
VM on srv1 (IP edited):
```
root@glpi:~# ip neigh show
192.168.1.1 dev nat1 lladdr 62:ab:cd:ef:b8:e6 STALE
fe80:41d0:1234:8dff:ff:ff:ff:ff dev eth0 FAILED
```
IPv6 on srv1 (hypervisor):
```
root@srv1:~# ip neigh show
10.10.168.254 dev vmbr0 lladdr 00:00:0c:9f:f0:01 REACHABLE
100.10.168.252 dev vmbr0 lladdr 00:ee:ab:09:bb:2b STALE
100.10.168.253 dev vmbr0 lladdr 38:90:a5:2e:f1:61 STALE
192.168.1.2 dev vmbr1 lladdr 7a:cb:16:6a:65:d1 STALE
fe80::58db:adff:febc:e12e dev vmbr0 lladdr 5a:db:ad:bc:e1:2e STALE
fe80:41d0:1234:8dff:ff:ff:ff:ff dev vmbr0 lladdr 00:05:73:a0:00:00 router STALE
```
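As a side note, the plain `fe80::` entries in the neighbor table above are EUI-64 link-local addresses derived from the MACs, so they can be sanity-checked. A minimal Python sketch of the mapping, using the MAC/address pair from the output above:

```python
# Sketch: derive the EUI-64 link-local address from a MAC address,
# to cross-check `ip neigh` entries (MAC taken from the table above).
import ipaddress

def mac_to_link_local(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                          # flip the universal/local bit
    eui64 = b[:3] + b"\xff\xfe" + b[3:]   # insert ff:fe in the middle
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + bytes(eui64)))

print(mac_to_link_local("5a:db:ad:bc:e1:2e"))  # fe80::58db:adff:febc:e12e
```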
As you can see, the gateway (fe80:41d0:...) is the same on the VM and on the hypervisor.
And look what happens when trying to ping gateway:
```
root@glpi:~# ping -c 3 fe80:41d0:1234:8dff:ff:ff:ff:ff
PING fe80:41d0:1234:8dff:ff:ff:ff:ff(2001:41d0:1008:1cff:ff:ff:ff:ff) 56 data bytes
From fe80:41d0:1234:8da7:100:: icmp_seq=1 Destination unreachable: Address unreachable
From fe80:41d0:1234:8da7:100:: icmp_seq=2 Destination unreachable: Address unreachable
From fe80:41d0:1234:8da7:100:: icmp_seq=3 Destination unreachable: Address unreachable

--- fe80:41d0:1234:8dff:ff:ff:ff:ff ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2025ms
```
The VM cannot reach the gateway even though it is exactly the same gateway address the hypervisor uses. The hypervisor reaches the gateway, but the VM does not.
Default route on VM is via gateway provided above.
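Worth noting: the gateway (ending in ...8dff:ff:ff:ff:ff) is not inside the VM's own prefix (...8da7::/64), so it can only be reached via an on-link route, with resolution depending entirely on neighbor discovery working on that segment. A quick check with Python's `ipaddress` (addresses are placeholders mirroring the redacted ones above):

```python
import ipaddress

# Placeholders mirroring the redacted addresses in the output above
net = ipaddress.IPv6Network("2001:db8:1234:8da7::/64")       # VM's /64
gw = ipaddress.IPv6Address("2001:db8:1234:8dff:ff:ff:ff:ff")  # gateway

# The gateway is outside the /64, so it must be an explicit on-link route
print(gw in net)  # False
```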
Does any of you have an idea what is wrong here? To me it looks like an issue in the router configuration on OVH's (SYS's) side...
Public IPv4 and IPv6 addresses have been redacted.
"SYS ignoring" means I reported the issue yesterday and there has been no answer from their side.
Comments
Doesn't OVH/SYS statically route the whole IPv6 network to the MAC address of your server, since they still lack failover IPv6 addresses with custom MAC addresses? In that case a bridged setup with different MACs on the VMs won't work.
So why does it work on two servers and not on the third?
The bridge has the same MAC address as the eth0 interface.
My guess is because they use different model/brand routers or switches for different parts of their datacenter, or the same, but configured a bit differently (by mistake/oversight/lack of care). You might not have much luck getting them to fix this.
Try changing the problem server to use routing to the VMs instead of bridging, and run `ndppd` on the OVH-side interface in "static" mode for the /64.
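The suggested `ndppd` setup might look roughly like this (a sketch only; `vmbr0` is taken from the output above, and the prefix `2001:db8:0:1::/64` is a placeholder for the real, redacted /64):

```
# /etc/ndppd.conf -- answer neighbor solicitations for the whole /64
# on the OVH-facing interface, so the upstream router resolves every
# address in the prefix to the server's own MAC.
proxy vmbr0 {
    rule 2001:db8:0:1::/64 {
        static
    }
}
```

With that in place the hypervisor answers NDP for the whole prefix with its own MAC, and forwards the traffic to the VMs over routed interfaces, so the VMs' MAC addresses never have to be visible to the OVH router.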