Basic setup of Proxmox in an OVH environment (Kimsufi example) with NAT and IPv6 enabled containers


Comments

  • Maounique Host Rep, Veteran
    edited November 2022

    @SeederKun said: is it possible to assign /80s to these CTs?

    It is, but what would the benefit be? You can do subnetting (which can be useful in some cases when you have many VMs or some VLANs, but a /64 should be the minimal unit in IPv6); just make sure you have a rule statement for each subnet in /etc/ndppd.conf.
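
    For illustration only, assuming you did split the fictitious /64 used in this guide into /80s (the subnets below are made up), /etc/ndppd.conf would simply carry one rule block per subnet:

    route-ttl 30000
    proxy vmbr0 {
    router yes
    timeout 500
    ttl 30000
    rule 2001:41d0:8:aaaa::/80 {
    static
    }
    rule 2001:41d0:8:aaaa:1::/80 {
    static
    }
    }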

  • Hi. This is a great tutorial! However, I am having issues connecting my VM to the internet. From my client, I can access local IPs, but I can't seem to get the internet to work. I'm using Ubuntu as my VM. I was wondering what settings I have to use (subnet for IPv4, prefix for IPv6). In addition, I can't seem to get IPv6 to work. What could be wrong?

  • Maounique Host Rep, Veteran
    edited December 2022

    @premudeshi said:
    Hi. This is a great tutorial! However, I am having issues connecting my VM to the internet. From my client, I can access local IPs, but I can't seem to get the internet to work. I'm using Ubuntu as my VM. I was wondering what settings I have to use (subnet for IPv4, prefix for IPv6). In addition, I can't seem to get IPv6 to work. What could be wrong?

    If both IPv4 and IPv6 do not seem to work, it is likely you are attaching to the wrong bridge; check that you are attaching the VM to vmbr6 and not vmbr0 (in case you have kept the names I gave; if not, replace them with yours).
    If you post the configs here I can take a look (better to send them in a PM in case you forget to redact some IP).

    Thanked by premudeshi
  • Hi,
    if someone has the problem of losing access to HTTPS or HTTP in a container after opening/forwarding the ports,
    adding the interface to the rules should resolve it:
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.100
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.100

  • Maounique Host Rep, Veteran
    edited December 2022

    Based on feedback I have received, I revised the guide a bit adding more info and some common troubleshooting:

    So, you are the proud owner of an OVH bare metal machine. This covers many kinds, from the KS-1 (yes, the Atom with a clunky HDD attached) to various other options with only one IPv4 and only an advertised /128 of IPv6.

    Now, OVH offers Proxmox installations on bare metal and you would love to run at least some containers on it (the KS-1 WORKS with containers, and even KVM without virtualization support, but KVM on it is painfully slow).

    The problem is that you have to do NAT for IPv4, with all the bad things that brings, and OVH routers do not allow more than one IPv6 address to go out to the internet at any given time, although you can use any address from the /64 and not only the one they give you in the settings.

    In this tutorial I will try to explain everything about setting up Proxmox 7 with containers (for VMs the settings are similar), from custom partitioning at installation to NAT-ing the IPv4 and proxying IPv6 to the containers, including sample configs with fictitious IPs. I will explain it on the lowest possible product, the KS-1, and we will set up a container on it, but the settings are similar for all the products in this situation (only a single IPv4 and a single IPv6 offered).

    As soon as you get the product, you can go to the control panel and install Proxmox 7.
    You could, of course, go with the defaults, but I strongly recommend you customize the installation, especially if you have 2 disks. You may want to install on only one disk and give it a larger swap partition. The storage is mounted on /var/lib/vz and it is LVM by default, but you can change it to ext4 even at this stage, just not the mount point (at the time of writing, the interface does not allow that, but you can change everything later in /etc/fstab).

    If you do go with the defaults, you will not need parts of this tutorial, just the network section. For a simple KS-1, defaults are just fine as it has only one clunky disk anyway.

    Network section

    (the most important part of this tutorial, which will turn your poor little box into a fully-fledged Proxmox server capable of running at least multiple containers).
    So, we need to look mainly at one file which is located at /etc/network/interfaces.
    The initial config is something like this (for a KS-1):

    # network interface settings; autogenerated
    # Please do NOT modify this file directly, unless you know what
    # you're doing.
    #
    # If you want to manage parts of the network configuration manually,
    # please utilize the 'source' or 'source-directory' directives to do
    # so.
    # PVE will preserve these directives, but will NOT read its network
    # configuration from sourced files, so do not attempt to move any of
    # the PVE managed interfaces into external files!
    
    auto lo
    iface lo inet loopback
    
    iface eno0 inet manual
    
    auto vmbr0
    iface vmbr0 inet static
        address 123.123.123.123/24
        gateway 123.123.123.1
        bridge-ports eno0
        bridge-stp off
        bridge-fd 0

    Pay attention to this line: "iface eno0 inet manual". It gives you the name of the interface connected to the OVH routers. If yours is not eno0, replace it with the relevant interface throughout the config. Also, replace 123.123.123.123 with your IPv4 and 123.123.123.1 with your gateway; usually they are already there, but in case you have a DHCP config, like:

    auto vmbr0
    iface vmbr0 inet dhcp

    You should change it to static, taking the values for the IPv4 and gateway from your control panel.

    Some KS-1 will have an IPv6 config as well, and it could be automatic with some DHCP equivalent or plain static. We do not care as we will replace that config anyway.

    The first changes will be made to the IPv6 config for the bridge facing the internet, vmbr0. Assuming the IPv6 shown in the panel for your service is 2001:41d0:8:aaaa::1/128, it should look something like this:

    iface vmbr0 inet6 static
        address 2001:41d0:8:aaaa::ffff/128
        post-up sleep 5; /sbin/ip -6 route add  2001:41d0:8:aaff:ff:ff:ff:ff dev vmbr0
        post-up sleep 5; /sbin/ip -6 route add default via 2001:41d0:8:aaff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del default via 2001:41d0:8:aaff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del 2001:41d0:8:aaff:ff:ff:ff:ff dev vmbr0

    This adds an IPv6 to the machine (the one you can use to access it from the internet 2001:41d0:8:aaaa::ffff here) and routes for it considering the OVH peculiar situation. It will also delete the routes at shutdown (not required, but it is good to form nice habits).
    I won't go into details about the literal form of IPv6; it is beyond the scope of this newbie-level tutorial. Just make sure you get the correct number of "f"s: normally 8 would suffice for a /64, but OVH is actually routing a /56 and the gateway belongs to that, therefore you need 10. Pay attention and replace with your actual IPv6 prefix.
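
    Spelled out with the fictitious prefix used throughout this guide (your real prefix goes in the same positions):

    assigned /64 :  2001:41d0:8:aaaa::/64
    routed /56   :  2001:41d0:8:aa00::/56   (the first 56 bits, i.e. 2001:41d0:0008:aa)
    OVH gateway  :  2001:41d0:8:aaff:ff:ff:ff:ff   (those 56 bits with ff:ff:ff:ff:ff appended)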

    Save and test the config with

    ifup --no-act vmbr0

    If you get no errors, it is probably okay.

    Now we have IPv6 connectivity for the machine (if we didn't before; some come with it, some don't), and you should be able to ping google.com over IPv6 after a restart. Do not use the apply-configuration button in the Proxmox interface, it will not work; just do a restart to make sure the changes work AND that you can still reach your server over SSH. If for some reason you cannot ping your IPv4 nor your IPv6 (changed from the 2001:41d0:8:aaaa::1 you got to 2001:41d0:8:aaaa::ffff in this example), then reinstalling is the simplest thing. It usually means you made some typo, so you will have to start over, paying more attention.
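
    A quick sanity check from the host after the reboot could look like this (plain ping/iproute2 commands, nothing OVH-specific):

    ip -6 addr show dev vmbr0        # should list 2001:41d0:8:aaaa::ffff/128
    ip -6 route show | grep default  # default via 2001:41d0:8:aaff:ff:ff:ff:ff dev vmbr0
    ping -6 -c 3 google.com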

    The next changes will not affect basic connectivity, so, if you screwed up something, you can always go back and change /etc/network/interfaces.

    We will now add a bridge (virtual interface like vmbr0) for the containers and virtual machines. You should name it vmbr6 or vmbr46 as it would provide both IPv4 and IPv6 connectivity to the VMs and CTs. I will use vmbr6 in this example.

    Append it as follows:

    auto vmbr6
    iface vmbr6 inet static
        address 10.0.0.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

    This creates the interface you will bridge the VMs/CTs to (vmbr6), gives it an IP and sets up NAT for your containers. If you only want NAT in your containers, you are already good to go: save the file, do an ifup vmbr6, create your container, give it an IP like 10.0.0.xxx/24 with gateway 10.0.0.254 (the bridge IP), bridge it to vmbr6 and you have IPv4 connectivity inside the container.
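
    If you prefer the command line over the GUI for creating that container, a sketch with pct would be something like the following (the VMID, hostname and template file name are just examples, use whatever pveam actually downloaded for you):

    pct create 101 local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst \
        --hostname ct101 --unprivileged 1 \
        --net0 name=eth0,bridge=vmbr6,ip=10.0.0.101/24,gw=10.0.0.254
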
    If you need (and should use) IPv6 inside your container, read on!

    We will now add IPv6 connectivity to the vmbr6 bridge. Append this to your /etc/network/interfaces:

    iface vmbr6 inet6 static
        address 2001:41d0:8:aaaa::1/64

    That is basically the IPv6 you have in your panel with the /128 at the end replaced by /64. It will be the gateway for your containers.
    STOP! Saving and doing ifdown vmbr6 and ifup vmbr6 will not be enough to give you IPv6 connectivity in the containers, because of the peculiarity of OVH routers allowing only one IPv6 at a time to go out over the internet. We will have to send all traffic through the address

    2001:41d0:8:aaaa::ffff

    we set to face the internet.
    This is not as simple as enabling forwarding and NAT. While IPv6 NAT is possible, we would like to have FULL IPv6 access over the internet: all ports available on all IPv6-enabled containers, directly accessible from the internet over IPv6, etc.
    IPv6 comes with NDP, the Neighbour Discovery Protocol, which basically allows for autoconfiguration in "normal" IPv6 environments, but it requires the routers to let the OS do that, and, as noted, OVH routers will not allow more than one IPv6 address out at a time, so it will not work. We will therefore proxy all our neighbour discovery through that one address, and there is a neat tool for this, a daemon named ndppd.

    ndppd is not part of a standard Debian or Proxmox install, so we will have to install it:

    apt install ndppd

    then configure it in its own config file, which does not exist yet and has to be created at /etc/ndppd.conf.
    Edit it like this, according to our example IP:

    route-ttl 30000
    proxy vmbr0 {
    router yes
    timeout 500
    ttl 30000
    rule 2001:41d0:8:aaaa::/64 {
    static
    }
    }
    

    And save.
    There is no 1 or ffff after the ::, that is not a typo.
    ndppd will log a warning about the low prefix length (i.e. the range being big) when using the 'static' method, but it will work just fine for our small machine.

    Now, we need to start and daemonize it:

    ndppd -d -c /etc/ndppd.conf
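
    Started by hand like that, it will not survive a reboot. The Debian package also ships an init script (you can see it in the service status outputs further down this thread), and it reads /etc/ndppd.conf by default, so on a stock Proxmox install you can simply let systemd manage it instead:

    systemctl enable --now ndppd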

    We are not done yet. By default, Proxmox installs from OVH templates forward neither IPv4 nor IPv6 packets, which I find odd since hosting VMs is its job, but never mind that, we will enable it.
    For IPv4 I did it in the post-up stanza of vmbr6 (post-up echo 1 > /proc/sys/net/ipv4/ip_forward), which writes a 1 (meaning enabled) directly, but for IPv6 we are better off enabling it permanently in /etc/sysctl.conf.
    Find this line:

    #net.ipv6.conf.all.forwarding=1

    and uncomment it (remove the "#" in front), then save and apply:

    sysctl -p /etc/sysctl.conf
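
    You can verify that both forwarding switches are actually on:

    sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
    # both should print "... = 1"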

    Now you should be able to give NAT-ed IPv4 and full IPv6 access to your containers.

    This should work for most people, but what if you need some ports forwarded for some app which still has no (or poor) support for IPv6, or which has to be available over IPv4 from the wide internet too?
    We can do this in the IPv4 section of vmbr6; just add lines like these (always add and delete the rules, so there are 2 lines for each; adding only would still work, but as good practice deleting them on the way down is not a bad idea).

    post-up iptables -t nat -A PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
    post-down iptables -t nat -D PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203

    This forwards UDP port 60203 to the container (or VM) with the IP 10.0.0.203. You can make it TCP, just change the udp part into tcp. You can, obviously, change the IP it is forwarded to as well; these are the parts you will vary most often.
    For a port range, do this:

    post-up iptables -t nat -A PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
    post-down iptables -t nat -D PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253

    With that we forward TCP ports 10000 to 60000 to the container or VM with IP 10.0.0.253.

    With both these things, your vmbr6 section for IPv4 would look like this:

    auto vmbr6
    iface vmbr6 inet static
        address 10.0.0.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
        post-up iptables -t nat -A PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
        post-up iptables -t nat -A PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
        post-down iptables -t nat -D PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
        post-down iptables -t nat -D PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

    That is about it: basic IPv4 and IPv6 connectivity for your containers or virtual machines is assured as long as you assign correct IPs to them and bridge them to vmbr6.
    Assigning IPv4 is trivial, I suppose; for IPv6 you would assign something like 2001:41d0:8:aaaa::1001/64 in the container with ID 1001, for example, and the gateway will be 2001:41d0:8:aaaa::1.
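
    For reference, inside a Debian container configured by hand (rather than through the Proxmox GUI, which writes this file for you), /etc/network/interfaces would look roughly like this for the example container 1001 (the 10.0.0.101 address is just an example):

    auto eth0
    iface eth0 inet static
        address 10.0.0.101/24
        gateway 10.0.0.254

    iface eth0 inet6 static
        address 2001:41d0:8:aaaa::1001/64
        gateway 2001:41d0:8:aaaa::1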

    After the first revision of this guide I was asked to provide a way to forward various external ports to the same port in various VMs. That is possible, but a better approach in my view is to handle it inside the VM or container. For example, say I need to forward ports 9999 and 9998 to port 80 in two different VMs. First, we do the usual forwarding of TCP ports 9999 and 9998 to the relevant IPs, for example 10.0.0.110 and 10.0.0.111 (in Proxmox's /etc/network/interfaces, the vmbr6 IPv4 section!):

    post-up iptables -t nat -A PREROUTING -p tcp --dport 9998 -j DNAT --to-destination 10.0.0.110
    post-up iptables -t nat -A PREROUTING -p tcp --dport 9999 -j DNAT --to-destination 10.0.0.111
    post-down iptables -t nat -D PREROUTING -p tcp --dport 9998 -j DNAT --to-destination 10.0.0.110
    post-down iptables -t nat -D PREROUTING -p tcp --dport 9999 -j DNAT --to-destination 10.0.0.111


    Note: Once again, the post-down statements are not absolutely necessary for normal operation of the node, when you just reboot it, but if you repeatedly take a single interface down and up, cleaning up the rules is a good idea.
    You would then add this to the container's /etc/network/interfaces (Debian, the 10.0.0.110 one):

      post-up iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80
      post-down iptables -D PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80


    Do the same thing in the other VM or container, just replace the --dport 9998 with --dport 9999.
    I assume that:
    1. you have iptables installed, as many container images come without it (for example, Debian 11). If not:

    apt install iptables

    2. your interface in the VM is eth0 (in most cases it is);
    3. if your distro is a different one, please refer to its manual on how to apply the rule at each boot.

    If you only need the forwarding once:

    iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80

    If you want it available every time but do not want to load it via the ifup method, you can apply it once and save the rules (Debian example):

    /sbin/iptables-save > /etc/iptables/rules.v4

    You could manually load the saved rules:

    /sbin/iptables-restore < /etc/iptables/rules.v4

    Or install the "persistent" service to do that for you at every boot:

    apt install iptables-persistent

    and enable it:

    systemctl enable netfilter-persistent.service

    That being said, maybe you need more, like sharing a NAS inside your KS for network boot and the like, and you do not want that traffic NAT-ed to the public interface at all, or you have some other reason for a purely internal network.
    You can add another interface to your container or VM and bridge it to another bridge, which you add to your /etc/network/interfaces like this:

    auto vmbr64
    iface vmbr64 inet static
        address 192.168.100.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

    Just add a second interface in your container or VM, bridge it to vmbr64 and give it an IP like (in this case) 192.168.100.100/24. No gateway is needed.
    Your NAS (if you use one, or any other service) will be able to serve the containers or VMs inside the KS at that address, as long as they each have a secondary virtual interface with an IP in that range bridged to vmbr64.
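
    Inside a Debian container, the second NIC (assuming Proxmox presents it as eth1) would then just get a static stanza with no gateway:

    auto eth1
    iface eth1 inet static
        address 192.168.100.100/24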

    Extra partitioning

    Now, if you have another disk, this is the time to format it, add it to /etc/fstab and then to Proxmox as another storage.
    First, create a mount point:

    mkdir /disk2

    Now, find the second disk:

    lsblk

    You will see something like this:
    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop0      7:0    0     8G  0 loop
    sda        8:0    0 894.3G  0 disk
    ├─sda1     8:1    0     1G  0 part /boot
    ├─sda2     8:2    0 125.7G  0 part /
    ├─sda3     8:3    0     1G  0 part [SWAP]
    ├─sda4     8:4    0     1K  0 part
    ├─sda5     8:5    0 766.5G  0 part /var/lib/vz
    └─sda6     8:6    0   1.7M  0 part
    sdb        8:16   0 894.3G  0 disk

    You can see here that, as in most cases, the second disk is called sdb.
    We will use parted to create the partition, as it is included with Proxmox:

    parted /dev/sdb
    select /dev/sdb
    mklabel msdos
    mkpart
    p
    2048
    100%

    This is what you type, pressing Enter after each of parted's prompts (accept the default if it also asks for a file system type). We create an msdos label because that is the default, and since we only need one partition, the four-primary-partition limit of msdos tables is not a problem. You can additionally type print to check that the partition takes all the space (minus the 2048 reserved at the start for compatibility reasons), and then type quit to exit parted.
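
    If you prefer to skip the prompts, the same result can be obtained non-interactively (a sketch; 2048s means "start at sector 2048", a 1 MiB offset, and you should double-check the device name before running it):

    parted -s /dev/sdb mklabel msdos mkpart primary ext4 2048s 100%
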
    Either way, parted reminds you that /etc/fstab needs updating, which we will do after we format the partition:

    mkfs.ext4 -L disk2 /dev/sdb1

    Now, time to update /etc/fstab for which we would need the UUID of the partition.

    blkid

    Look for the /dev/sdb1, in my case:

    /dev/sdb1: LABEL="disk2" UUID="f5739cf0-617f-4cb3-b202-c6220474489c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="36adfec1-01"

    My /etc/fstab before adding the disk is like this:

    UUID=ee299698-a265-4940-9523-b1b18b5e5ccf       /       ext4    defaults        0       1
    UUID=4899814d-eb64-44ea-be91-9936f196816e       /boot   ext4    defaults        0       0
    UUID=ee15ff96-31fe-46b6-9767-f647468eb50e       /var/lib/vz     ext4    defaults        0       0
    UUID=8b17be41-2d4c-4984-a372-bd3a6c967f62       swap    swap    defaults        0       0

    Now we need to add a new line:

    UUID=f5739cf0-617f-4cb3-b202-c6220474489c   /disk2  ext4    defaults    0   0

    While we are here, if you would like to change the mount point for the storage on the first disk from /var/lib/vz to something else, such as, say, /mainstorage, you can do so here; just make sure that mount point exists (mkdir /mainstorage). If you already have a container or VM, their disks will appear to have moved and you will have to adjust their configs accordingly. Also, before saving, make sure there is an empty line at the end of the file.
    Before restarting, run

    mount -a

    for extra checks. If you forgot to create a mount point, you will be reminded here.
    After the restart, you can simply go to the Proxmox interface at https://youripv4:8006 and add the new disk to the storage as a directory. In case you would like to access the interface over IPv6 only, that is possible and recommended; it is one of the reasons I chose ffff instead of the regular 1 for the host IPv6, besides sanity reasons and the ease of setting the gateway in the containers. There are tutorials about doing that in Proxmox and they apply verbatim; it is regular Proxmox stuff not related to OVH peculiarities.
    There are also tutorials on how to install a Let's Encrypt cert, although I prefer to just add exceptions.

    Troubleshooting connectivity

    (based on feedback I received since posting the guide)

    -IPv6 does not work in the container or VM.

    Check that you have a different IPv6 on vmbr0 than on vmbr6. The one I chose to talk to the OVH routers (on vmbr0) ends in ::ffff/128 and the one I use as a gateway for the VMs and containers bridged to vmbr6 ends in ::1/64. Mind the network mask as well, /128 vs /64.
    This is only my recommendation; you could always use a different scheme, just remember to apply your choice consistently everywhere.
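
    A quick way to compare the two from the host:

    ip -6 addr show dev vmbr0   # should show 2001:41d0:8:aaaa::ffff/128
    ip -6 addr show dev vmbr6   # should show 2001:41d0:8:aaaa::1/64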

    -IPv4 AND IPv6 are not working in the container/VM.

    Check that you bridged to the correct bridge. By default, the Proxmox network settings offer vmbr0 for bridging. That will not work; you need to bridge to the bridge not facing the internet, which in this tutorial I called vmbr6.

    -After forwarding ports 443 and 80 to a VM/container, browsing from inside that VM no longer works.

    I am not sure why that happens; I was not able to reproduce this issue reported by multiple people, and logically it should never be a problem. Fortunately, it is only temporary: just shut down and boot the container/VM if it happens and it should then work as intended.

    -After reloading vmbr6 (for example with ifdown/ifup) the containers/VMs lose all connectivity.

    Shutdown/boot the containers/VMs. Connectivity will be restored.

    I am still taking feedback; if it is not working for you, PM me and I will look into it.

  • twain Member
    edited December 2022

    Good tutorial. I'm wondering why you are encouraging various non-standard ports on the frontend/public IP when perhaps a better approach, in my view, would be to implement a catch-all HAProxy VM that your host iptables rules send all inbound connections on the normal ports to, and then HAProxy does host/domain-based routing/filtering to the appropriate target VM endpoints.

    Just another way to go about it, and you don't have to go crazy with all kinds of ports on the frontend. I only glanced over the tutorial, so maybe you actually did implement something that uses the other approach.

  • Maounique Host Rep, Veteran
    edited December 2022

    I have tried to make it simple for newbies who only want to paste some stuff and have the containers running with IPv4 and IPv6.
    I am not sure an HAProxy setup would be easier. Besides, I am supposing people will run various things on those ports; for example, I am running Tor bridges on ports 80, 443, 110, 995, etc. While that scenario would not need HAProxy, as simple port forwarding suffices, we can't assume everyone will run similar services, or even standard services, on those ports.
    Having something that works in all cases without another SPoF is probably a better approach in a tutorial for newbies who would probably have a hard time troubleshooting.

    This also draws from personal experience. Many years ago I avoided iptables rules, deeming them too complicated, and used things like floppy routers for my home needs.
    The moment I saw an example and was able to paste my first rule and see it working (enabling NAT), I was very happy. People will first see the rules, modify them to suit their needs, then start to understand the logic, I hope. I have used various notations for many things on purpose, not only for iptables rules, to that very end.

  • buddermilch Member
    edited December 2022

    I canceled my subscription with OVH to change to Hetzner, namely because I had problems with the OVH IP being on the UCEPROTECTL3 blacklist (they can s*** my d***, I'm not paying 45 CHF for the removal). Also, I was doing daily backups of a certain container that had grown in size and I needed the faster uplink.
    Anyway, Hetzner offers Proxmox as an "installed on Debian" solution, so the network is set up as you would expect, with eno1 carrying the primary IP. As I did not like the fact that Proxmox was installed on top of Debian, and I wanted to install Proxmox from the ISO with ZFS RAID1, I created a QEMU instance in rescue mode and installed Proxmox over a VNC connection to the disk attached to that QEMU instance.

    Long story short, I set up the bare connectivity in the QEMU instance and edited the interfaces file after booting into the real Proxmox environment. I did a setup using the routed configuration, where the config sits on the interface itself, in this case eno1. It seems to work way better out of the box than the bridged setup using vmbr0 (which is the default for a fresh Proxmox installation). IPv6 works as soon as the container is started and overall it seems to have fewer hiccups than on OVH.

    Granted, I don't know if that's because Hetzner openly gives access to the whole /64 (even including rDNS settings for each individual IPv6 address) and therefore has better integration.

    Maybe it's something that people with connection problems should try out. Basically, rename vmbr0 to the real interface, in my case eno1.

    auto lo
    iface lo inet loopback
    
    auto eno1
    iface eno1 inet static
        address 123.123.123.123/24
        gateway 123.123.123.1
    
    iface eno1 inet6 static
        address 2001:41d0:8:aaaa::ffff/128
        post-up sleep 5; /sbin/ip -6 route add  2001:41d0:8:aaff:ff:ff:ff:ff dev eno1
        post-up sleep 5; /sbin/ip -6 route add default via 2001:41d0:8:aaff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del default via 2001:41d0:8:aaff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del 2001:41d0:8:aaff:ff:ff:ff:ff dev eno1
    
    auto vmbr6
    iface vmbr6 inet static
        address 10.0.0.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE
    

    The rest of the setup stays the same. The only downside (or upside), referring to the Proxmox documentation, is that "The network, in turn, sees each virtual machine as having its own MAC", which now it doesn't, because everything is sent through eno1: "This makes sure that all network packets use the same MAC address."

    A small edit: I had a hard time finding out why Docker does not work with IPv6 inside a container/KVM the way it does with IPv4 (i.e. NAT-ed). It seems it cannot create the needed ip6tables entries, at least in the standard configuration. It is weird to use a fresh IPv6 address for each individual container, which is the expected behaviour according to the Docker documentation; I think it is wasteful and not really needed in my case, and it could also pose a security risk, exposing a real IPv6 for just a container without a firewall. IPv6 NAT can be enabled by including the following in /etc/docker/daemon.json:

    { 
    "ipv6": true,
    "fixed-cidr-v6": "fd00::/80",
    "experimental": true,
    "ip6tables": true
    }
    

    Containers now behave exactly like in IPv4-only mode. Also, thanks for not including that in the Docker docs and making me read through issue reports on GitHub...
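
    After editing /etc/docker/daemon.json, restart the daemon and test from a throwaway container (assuming the busybox image's ping6 applet; any IPv6-capable image will do):

    systemctl restart docker
    docker run --rm busybox ping6 -c 4 google.com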

  • Hi guys,
    I have 2 SYS-5 boxes and configured both according to these instructions; on one, everything works right away.
    On the second, IPv6 does not work for the containers.
    But if a specific container address (for example 2001:41d0:8:aaaa:100::) is pinged from outside, IPv6 for that container starts working fine, until the next reboot.
    What could be the problem?

  • Another strange thing came up:
    if, immediately after a reboot, I run this command in the container:
    ping -6 -c 4 ipv6.google.com
    100% of the packets are lost.
    After a successful run of
    traceroute ipv6.google.com
    the ping -6 -c 4 ipv6.google.com runs without error.
    Any ideas how to fix this?

  • Swiftnode Member, Host Rep
    edited February 2023

    @buddermilch said:
    I canceled my subscription with OVH to change to Hetzner, namely because I had problems with the OVH IP being on the UCEPROTECTL3 blacklist (they can s*** my d***, I'm not paying 45 CHF for the removal).

    Does anyone even use that UCEPROTECTL3 list? Pretty much any large ASN is listed as a risk on one or more of their lists, all with an "express delisting" payment page.

    Our upstream AS is included, and I've never had a customer complain about any mail or anything else being blocked because of that listing.

    The website/blacklist seems to be run by an extortionist group, or one of the dumbest humans on the planet. I don't think it genuinely gets used anywhere.

  • @Swiftnode said:

    @buddermilch said:
    I canceled my subscription with OVH to change to Hetzner, namely because I had problems with the OVH IP being on the UCEPROTECTL3 blacklist (they can s*** my d***, I'm not paying 45 CHF for the removal).

    Does anyone even use that UCEPROTECTL3 list? Pretty much any large ASN is listed as a risk on one or more of their lists, all with an "express delisting" payment page.

    Our upstream AS is included, and I've never had a customer complain about any mail or anything else being blocked because of that listing.

    The website/blacklist seems to be run by an extortionist group, or one of the dumbest humans on the planet. I don't think it genuinely gets used anywhere.

    Yes, sadly it is used as a filter by two companies I regularly send emails to. Obviously I could have just asked the companies to whitelist my IP, but I was still in the process of transitioning from my big-corp mail to self-hosted. I was also limited by network speed later on, and the Hetzner box is a really good fit for all of my needs now.

    Regardless of that, I am using an SMTP relay now because of trouble with other email providers blocking mail, not because of blacklisting but because of other stupid stuff. It is so annoying.

  • Swiftnode Member, Host Rep

    @buddermilch said:

    Yes, sadly it is used as a filter by two companies I regularly send emails to. Obviously I could have just asked the companies to whitelist my IP, but I was still in the process of transitioning from my big-corp mail to self-hosted. I was also limited by network speed later on, and the Hetzner box is a really good fit for all of my needs now.

    Regardless of that, I am using an SMTP relay now because of trouble with other email providers blocking mail, not because of blacklisting but because of other stupid stuff. It is so annoying.

    You should name and shame those 2 companies; if they're intentionally using a blacklist that operates with the sole purpose of extorting people, they're contributing to the problem.

  • Maounique Host Rep, Veteran
    edited March 2023

    @Smbat said: What could be the problem?

    The problem is that only one IPv6 (any of them) at a time is allowed to talk to the OVH routers.

    If you have that issue, it is likely that your IPv6-enabled VMs/containers are bridged directly to vmbr0 instead of the bridge with the IPv6 voodoo, and your gateway is the one from OVH, not the one from your own setup.

    Once you ping from outside, the router drops the old IPv6, enables the new one and it works. But it can stop working at any time, whenever another IPv6 gets picked, or at reboot.

    @Smbat said: any ideas how to fix this?

    Likely the same issue.

  • ebony Member
    edited April 2023

    On my new server I cannot get this to work for the life of me. Have they changed something?

    I have a strange IPv6 address that might be getting me confused; it is

    2001:41d0:111:aa1::1 (I have changed it, but the format is the same).

    The gateway is the really confusing part, as it is shown with the zeros in the panel; without the zeros I don't get any IPv6 on the host:

    2001:41d0:0111:0aff:00ff:00ff:00ff:00ff

    I just cannot ping 2001:41d0:111:aa1::1/64, but 2001:41d0:111:aa1::ffff/128 works.

    Anything in the VM does not work over IPv6 either: 2001:41d0:111:aa1::1001/64, vmbr46, gateway 2001:41d0:111:aa1::1 (IPv4 works in the VM).

  • Maounique Host Rep, Veteran

    AFAIK it still works, at least on my KS servers.

    2001:41d0:111:aa1::1/64 is the gateway for the VMs inside your node. You should not use it to access the node.

  • ebony Member
    edited April 2023

    My network config:

    auto lo
    iface lo inet loopback
    
    iface eno1 inet manual
    
    auto vmbr0
    iface vmbr0 inet static
        address 217.x.x.x/24
        gateway 217.x.x.x
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
    
    iface vmbr0 inet6 static
        address 2001:41d0:111:aa2::ffff/128
        post-up sleep 5; /sbin/ip -6 route add  2001:41d0:111:aff:ff:ff:ff:ff dev vmbr0
        post-up sleep 5; /sbin/ip -6 route add default via 2001:41d0:111:aff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del default via 2001:41d0:111:aff:ff:ff:ff:ff
        pre-down /sbin/ip -6 route del 2001:41d0:111:aff:ff:ff:ff:ff dev vmbr0
    
    
    auto vmbr46
    iface vmbr46 inet static
        address 10.0.0.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
    
    iface vmbr46 inet6 static
        address 2001:41d0:111:aa2::1/64
    

    /etc/ndppd.conf

    route-ttl 30000
    proxy vmbr0 {
    router yes
    timeout 500
    ttl 30000
    rule 2001:41d0:111:aa2::/64 {
    static
    }
    }
    

    vm config

    vmbr46
    10.0.0.1/24 gw 10.0.0.254
    2001:41d0:111:aa2::1001/64 gw 2001:41d0:111:aa2::1
    

    Reinstalled a few times and I just cannot get IPv6 working inside the VM; I can ping the ::1 from the VM, I just cannot reach the network.

    Thanked by ehab
  • Maounique Host Rep, Veteran
    edited April 2023

    @ebony said: Reinstalled a few times and I just cannot get IPv6 working inside the VM; I can ping the ::1 from the VM, I just cannot reach the network.

    I am not sure what you mean. You can ping the ::1 but not the internet?
    Check the following (quick check commands below the list):

    VM is bridged to vmbr46
    ndppd daemon is started and active
    ipv6 forwarding is activated.
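
    Quick ways to check the last two (standard commands on the Proxmox host):

    systemctl status ndppd
    sysctl net.ipv6.conf.all.forwarding   # should print 1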

  • ebony Member
    edited April 2023

    @Maounique said:

    @ebony said: Reinstalled a few times and I just cannot get IPv6 working inside the VM; I can ping the ::1 from the VM, I just cannot reach the network.

    I am not sure what you mean. You can ping the ::1 but not the internet?
    Check the following:

    Inside the VM I can ping the gateway on vmbr46, but if I ping Google or any other IPv6 address I get nothing back.

    ndppd.service - LSB: NDP Proxy Daemon
         Loaded: loaded (/etc/init.d/ndppd; generated)
         Active: active (running) since Fri 2023-04-21 22:27:58 BST; 1h 21min ago
           Docs: man:systemd-sysv-generator(8)
        Process: 907 ExecStart=/etc/init.d/ndppd start (code=exited, status=0/SUCCESS)
          Tasks: 1 (limit: 38388)
         Memory: 952.0K
            CPU: 1.112s
         CGroup: /system.slice/ndppd.service
                 └─945 /usr/sbin/ndppd -d -p /var/run/ndppd.pid
    
    Apr 21 22:27:58 Blossom systemd[1]: Starting LSB: NDP Proxy Daemon...
    Apr 21 22:27:58 Blossom ndppd[934]: (notice) ndppd (NDP Proxy Daemon) version 0.2.4
    Apr 21 22:27:58 Blossom ndppd[934]: (notice) Using configuration file '/etc/ndppd.conf'
    Apr 21 22:27:58 Blossom ndppd[934]: (warning) Low prefix length (64 <= 120) when using 'static' method
    Apr 21 22:27:58 Blossom systemd[1]: Started LSB: NDP Proxy Daemon.
    

    working now thanks

  • Maounique Host Rep, Veteran

    @ebony said: working now thanks

    Glad you have managed :)

  • qeba Member

    @ebony said: working now thanks

    I am currently facing the same issue. How did you solve it? I am able to ping the gateway IP, but pinging Google over IPv6 does not work.

  • Maounique Host Rep, Veteran

    Same answer.

    Check that:
    VM is bridged to vmbr46
    ndppd daemon is started and active
    ipv6 forwarding is activated.

  • qeba Member
    edited June 2023

    This is my configuration

    My IPv6 details are as below; this was provided to me by the provider.
    IPv6 Range: 200a:c001:05c3:f46e::/64
    Gateway: 200a:c000::1

    The /etc/network/interfaces as below:

    auto lo
    iface lo inet loopback
    
    iface ens3 inet manual
    
    auto vmbr0
    iface vmbr0 inet static
            address 141.xxx.xxx.xxx/24
            gateway 141.xxx.xxx.xxx
            bridge-ports ens3
            bridge-stp off
            bridge-fd 0
    
    iface vmbr0 inet6 static
      address 200a:c001:05c3:f46e::ffff/128
      post-up sleep 5; /sbin/ip -6 route add  200a:c000::1 dev vmbr0
      post-up sleep 5; /sbin/ip -6 route add default via 200a:c000::1
      pre-down /sbin/ip -6 route del default via 200a:c000::1
      pre-down /sbin/ip -6 route del 200a:c000::1 dev vmbr0
    
    auto vmbr1
    iface vmbr1 inet static
      address 10.10.100.1/24
      bridge-ports none
      bridge-stp off
      bridge-fd 0
      post-up echo 1 > /proc/sys/net/ipv4/ip_forward
      post-up iptables -t nat -A POSTROUTING -s '10.10.100.1/24' -o vmbr0 -j MASQUERADE
      post-down iptables -t nat -D POSTROUTING -s '10.10.100.1/24' -o vmbr0 -j MASQUERADE
    
    iface vmbr1 inet6 static
      address 200a:c001:05c3:f46e::1/64
    
    

    The /etc/ndppd.conf file

    route-ttl 30000
    proxy vmbr0 {
    router yes
    timeout 500
    ttl 30000
    rule 200a:c001:05c3:f46e::/64 {
    static
    }
    }
    

    The ndppd service:

    ● ndppd.service - LSB: NDP Proxy Daemon
         Loaded: loaded (/etc/init.d/ndppd; generated)
         Active: active (running) since Mon 2023-06-12 15:03:21 +08; 6min ago
           Docs: man:systemd-sysv-generator(8)
        Process: 807 ExecStart=/etc/init.d/ndppd start (code=exited, status=0/SUCCESS)
          Tasks: 1 (limit: 23916)
         Memory: 1004.0K
            CPU: 597ms
         CGroup: /system.slice/ndppd.service
                 └─834 /usr/sbin/ndppd -d -p /var/run/ndppd.pid
    
    Jun 12 15:03:21 tr systemd[1]: Starting LSB: NDP Proxy Daemon...
    Jun 12 15:03:21 tr ndppd[832]: (notice) ndppd (NDP Proxy Daemon) version 0.2.4
    Jun 12 15:03:21 tr ndppd[832]: (notice) Using configuration file '/etc/ndppd.conf'
    Jun 12 15:03:21 tr ndppd[832]: (warning) Low prefix length (64 <= 120) when using 'static' method
    Jun 12 15:03:21 tr systemd[1]: Started LSB: NDP Proxy Daemon.
    
    

    This is the VM config (AlmaLinux 8.7):

    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=dhcp
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=no
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=eui64
    NAME=ens18
    UUID=xxxxxxxx.xxx.xx
    DEVICE=ens18
    ONBOOT=yes
    IPV6ADDR= 200a:c001:05c3:f46e::200/64
    IPV6_DEFAULTGW=200a:c001:05c3:f46e::1
    

    From the host to the internet there are no issues, I can ping out. The NAT-ed IPv4 inside the VM works perfectly, I am able to get out to the internet.

    I can ping between the VM and the Proxmox host using IPv6, and I am also able to ping the bridge IPv6 (200a:c001:05c3:f46e::1) from the VM, but if I try to ping Google it gives me the error below.

    [root@localhost ~]# ping google.com
    PING google.com(sof02s44-in-x0e.1e100.net (2a00:1450:4017:80d::200e)) 56 data bytes
    From _gateway (200a:c001:05c3:f46e::1) icmp_seq=9 Destination unreachable: Address unreachable
    From _gateway (200a:c001:05c3:f46e::1) icmp_seq=10 Destination unreachable: Address unreachable
    From _gateway (200a:c001:05c3:f46e::1) icmp_seq=11 Destination unreachable: Address unreachable
    

    I also tried to ping the VM (200a:c001:05c3:f46e::200) from my Proxmox host; that works too, and if I stop the VM and try to ping, it fails (since the VM is off). So it seems to me the two can already talk IPv6 to each other (the ndppd service is working).

    I just still cannot figure out why I cannot get out to the internet over IPv6 from the VM.

  • SeederKun Member
    edited June 2023

    @qeba said:
    This is my configuration [...] I just still cannot figure out why I cannot get out to the internet over IPv6 from the VM.

    Do you have net.ipv6.conf.all.forwarding=1 uncommented (no leading #) in /etc/sysctl.conf on the host? If not, do that and apply the change with sysctl -p /etc/sysctl.conf.

    I think this is the cause of the issue.

  • qeba Member

    Do you have net.ipv6.conf.all.forwarding=1 uncommented (no leading #) in /etc/sysctl.conf on the host? If not, do that and apply the change with sysctl -p /etc/sysctl.conf.

    Yes, I already did that... still not working. :'(

  • SeederKun Member
    edited June 2023

    @qeba said:

    Do you have net.ipv6.conf.all.forwarding=1 uncommented (no leading #) in /etc/sysctl.conf on the host? If not, do that and apply the change with sysctl -p /etc/sysctl.conf.

    Yes, I already did that... still not working. :'(

    Try setting the gateway in vmbr1 to 200a:c001:05c3:f46e::ffff/64 so it will reach the host's internet gateway? Like this:

    iface vmbr1 inet6 static
      address 200a:c001:05c3:f46e::1/64
      up ip -6 route add 200a:c001:05c3:f46e::ffff/64 dev vmbr1
    

    Make sure you have some sort of IPMI/VNC/KVM access so that if things go bad you can revert the changes.
    I am not a networking expert by any means, but setting up networking varies from one provider to another.

  • Maounique Host Rep, Veteran

    This tutorial has been written for a very specific situation, the one in which OVH was not allowing more than one IPv6 to get out over the internet.
    If your provider does NOT use the same shitty "technique", then you can simply use the provider's gateway directly.
    In order to test this, add a new interface to the VM, bridge it to vmbr0, give it one IPv6 plus the provider's gateway, and remove any IPv6 settings from the interface bridged to vmbr1.

  • lc475 Member

    Thanks for your detailed tutorial. IPv6 is working on my Kimsufi box now. :)

    Thanked by Maounique
  • ebony Member

    @qeba said:

    @ebony said: working now thanks

    I am currently facing the same issue. How did you solve it? I am able to ping the gateway IP, but pinging Google over IPv6 does not work.

    I had to run traceroute ipv6.google.com every time to get IPv6 to fire up in the VM, and a few hours later it would die again unless you kept a ping6 open. I had very bad ping6 on that server; support fixed the ::1 address but would not fix anything else about the IPv6. I ordered a new server, lower price, less RAM :( but that server just works fine, no problems at all. I run my Minecraft server on an IPv6 and it works great!

  • How do you create multiple VMs running webservers on ports 80/443 while using this NAT setup?
