Does anyone have a (working) tutorial for getting Proxmox to work with Kimsufi using IPv6?

ljseals Member
edited April 2017 in Help

Greetings, I am trying to install and run Proxmox on a Kimsufi server via IPv6 without using NAT. I have followed several tutorials without success; with the last one, I am able to ping the container but cannot get access to the internet.

The first tutorial:
https://www.kiloroot.com/proxmox-kimsufi-ovh-soyoustart-ipv6-host-multiple-containers-and-virtual-machines-on-a-single-kimsufi-server-using-ipv6-and-proxmox/

It simply does not work. Geraldo states:

I finally discovered the problem. I dug into how Proxmox 3.4 manages OpenVZ networking and found that it creates (internally, as you are not shown this in the interface) a new network adapter. It then adds your second address to your primary adapter and establishes SNAT and DNAT routes with the container.

Proxmox 4 doesn't do that anymore. When you create an LXC container, it directly bridges it with your network interface. That's OK, but Kimsufi (at least with my server) has restricted the MAC-learning capability of their switches or routers to only one MAC, so all packets from any other MAC are automatically dropped.
Having found that out, I reinstalled Proxmox 4, created a new bridge interface, assigned it a private IPv6 address and put a container there.
Then I set SNAT and DNAT routes for it and added the public IPv6 I want to use to the main interface. And… it worked!
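
(For reference, the SNAT/DNAT arrangement Geraldo describes would look roughly like the following with ip6tables, on a kernel that supports IPv6 NAT. The ULA prefix, public addresses and bridge name below are placeholders for illustration, not values from the tutorial.)

    # Source-NAT the containers' private (ULA) range to the host's public IPv6 on the way out
    ip6tables -t nat -A POSTROUTING -s fd00:db8:1::/64 -o vmbr0 -j SNAT --to-source 2001:db8::100
    # Forward traffic for an extra public IPv6 held on the main interface to one container
    ip6tables -t nat -A PREROUTING -d 2001:db8::101 -j DNAT --to-destination fd00:db8:1::2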

The second tutorial works, but only partially.
microsofttranslator.com/bv.aspx?from=&to=en&a=http%3A%2F%2Fsngr.org%2F2017%2F02%2F%E5%8D%95ipv4%E6%9C%8D%E5%8A%A1%E5%99%A8%E5%BB%BA%E7%BA%AFipv6%E7%9A%84vps%E7%94%A8%E4%BA%8Eweb%E7%AE%A1%E7%90%86pt%E6%8C%82%E6%9C%BA.html

I disabled multicast and it still does not work. I am running CentOS 7 as a CT.

Does anyone have a tutorial on how to make this work? In essence, I am looking to install Proxmox on Kimsufi to run web-app containers without using NAT. God bless you!

Comments

  • I used to run a Proxmox V4 node on a Kimsufi and the setup for the network was pretty straightforward, as I recall.

    This isn't a tutorial, but here is the actual interfaces file from that install (no longer live), and the helper script which dealt with broadcasting the IPv6 addresses, setting up routing, and doing some inbound IPv4 NAT.

    Hopefully you can make out the principles. You'll need to use a SNAT or MASQUERADE iptables rule to allow outbound IPv4 connections from the containers (an explicit SNAT equivalent is sketched after the scripts below). This worked for me for CentOS and Windows containers.

    /etc/network/interfaces

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # for Routing
    ##auto vmbr1
    ##iface vmbr1 inet manual
    ##  post-up /etc/pve/kvm-networking.sh
    ##  bridge_ports dummy0
    ##  bridge_stp off
    ##  bridge_fd 0
    auto vmbr1
    iface vmbr1 inet static
        address 192.168.253.254/24
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0
    
    iface vmbr1 inet6 static
        address 2001:41d0:1:6505:192:168:253:254
        netmask 64
        # Enable NDP proxying and IPv6 forwarding, then register the containers'
        # addresses and routes via the helper script below.
        post-up echo 1 > /proc/sys/net/ipv6/conf/all/proxy_ndp
        post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
        post-up echo 1 > /proc/sys/net/ipv6/conf/default/forwarding
        post-up /usr/local/bin/vmbr1-up.sh
    
    # vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
    auto vmbr0
    iface vmbr0 inet static
        address 91.121.18.5
        netmask 255.255.255.0
        network 91.121.18.0
        broadcast 91.121.18.255
        gateway 91.121.18.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up /sbin/iptables -t nat -A POSTROUTING -o vmbr0 -j MASQUERADE
    
    iface vmbr0 inet6 static
        address 2001:41d0:1:6505::1
        netmask 64
        # OVH's IPv6 gateway sits outside the /64, so add a host route to it first,
        # then point the default route at it (and remove both on the way down).
        post-up /sbin/ip -f inet6 route add 2001:41d0:1:65ff:ff:ff:ff:ff dev vmbr0
        post-up /sbin/ip -f inet6 route add default via 2001:41d0:1:65ff:ff:ff:ff:ff
        pre-down /sbin/ip -f inet6 route del default via 2001:41d0:1:65ff:ff:ff:ff:ff
        pre-down /sbin/ip -f inet6 route del 2001:41d0:1:65ff:ff:ff:ff:ff dev vmbr0
    

    /usr/local/bin/vmbr1-up.sh

    # Proxy NDP: answer neighbour solicitations on the public bridge (vmbr0) for the
    # containers' public IPv6 addresses, so the OVH routers send their traffic to this host.
    /sbin/ip -f inet6 neigh add proxy 2001:41d0:1:6505:192:168:253:251 dev vmbr0
    /sbin/ip -f inet6 neigh add proxy 2001:41d0:1:6505:192:168:253:252 dev vmbr0
    /sbin/ip -f inet6 neigh add proxy 2001:41d0:1:6505:192:168:253:253 dev vmbr0
    # Route each proxied address towards the internal bridge the containers sit on.
    /sbin/ip -f inet6 route add 2001:41d0:1:6505:192:168:253:251 dev vmbr1
    /sbin/ip -f inet6 route add 2001:41d0:1:6505:192:168:253:252 dev vmbr1
    /sbin/ip -f inet6 route add 2001:41d0:1:6505:192:168:253:253 dev vmbr1
    # Inbound IPv4: DNAT TCP ports 2223 and 2222 on the host to two of the containers.
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2223 -j DNAT --to 192.168.253.252
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 192.168.253.253
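
    As mentioned above, outbound IPv4 from the containers relies on the MASQUERADE post-up rule on vmbr0. An explicit SNAT equivalent, together with the IPv4 forwarding sysctl that both variants depend on (if it isn't already enabled on your host), would look roughly like this, using the internal subnet and public address from the interfaces file above:

    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -s 192.168.253.0/24 -o vmbr0 -j SNAT --to-source 91.121.18.5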
    
  • ljseals Member
    edited April 2017

    Thanks @cochon, I will try it again; however, I must say that it is foreign to me. I pray for the Lord's help. Thank you for your setup.

  • https://raymii.org/s/tutorials/Proxmox_VE_One_Public_IP.html

    This is similar to your setup, but do you have to have NAT if you have IPv6? Sorry if that is a newb question. God bless!

  • No, not for IPv6; the NAT rules were just in my config, so I mentioned them for completeness. Having no IPv4 connectivity at all can be hard work.

    The important bits for Kimsufi (OVH) are the '/sbin/ip -f inet6 neigh add proxy' lines, which advertise the additional IPv6 addresses of the internal containers (on vmbr1) out of the public interface (on vmbr0) so that the OVH routers see them and know to route these extra addresses to your box.
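
    For example, each additional container needs one proxy entry on the public bridge and one route towards the internal bridge (plus the matching address configured inside the container). A hypothetical fourth container on the same /64 might use:

    # hypothetical extra container address, following the same pattern as the three above
    /sbin/ip -f inet6 neigh add proxy 2001:41d0:1:6505:192:168:253:250 dev vmbr0
    /sbin/ip -f inet6 route add 2001:41d0:1:6505:192:168:253:250 dev vmbr1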

    If you can ping6 the internal interfaces of your container from outside, it suggests routing may be working. Can you ping6 back out to a numerical address (e.g. Google's DNS servers on 2001:4860:4860::8888)? DNS could be your issue: what are you using as the DNS resolver address in your containers? If you have no IPv4 connectivity and don't want it, it has to be an IPv6 resolver like the example given.
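
    For instance, a minimal /etc/resolv.conf inside an IPv6-only container, using Google's public IPv6 resolvers (the second address is their secondary, added here for completeness):

    nameserver 2001:4860:4860::8888
    nameserver 2001:4860:4860::8844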

  • Neoon Community Contributor, Veteran
  • Yes, I was using Google DNS servers. I was able to ping from inside the container and reach yum repositories; however, I was not able to ping the container from my computer, nor GitHub. I will try it again...

  • Thanks @Neoon
