Basic setup of Proxmox in an OVH environment (Kimsufi example) with NAT and IPv6 enabled containers


Maounique Member
edited December 2022 in Tutorials

Based on feedback I have received, I revised the guide a bit, adding more info and some common troubleshooting:

So, you are the proud owner of an OVH bare metal machine. This covers many kinds, from the KS-1 (yes, the Atom with a clunky HDD attached) up to various other options with only one IPv4 address and only an advertised /128 of IPv6.

Now, OVH offers Proxmox installations on bare metal, and you would love to run at least your own containers with it (the KS-1 works with containers, and even with KVM despite lacking hardware virtualization support, but KVM on it is painfully slow).

The problem is that you would have to use NAT for IPv4, with all the drawbacks that brings, and OVH routers do not allow more than one IPv6 address to reach the internet at any given time, although you can use any address from the /64, not only the one they give you in the settings.

In this tutorial I will try to explain everything about setting up Proxmox 7 with containers (for VMs the settings are similar), from custom partitioning at installation to NAT-ing IPv4 and proxying IPv6 to the containers, including sample configs with fictitious IPs. I will explain using the lowest-end product, the KS-1, and we will set up a container on it, but the settings are similar for all products in this situation (only a single IPv4 and IPv6 offered).

As soon as you get the product, you can go to the control panel and install Proxmox 7.
You could, of course, go with the defaults, but I strongly recommend you customize the installation, especially if you have 2 disks. You would want to install on only one disk and give it a larger SWAP partition. The storage is mounted on /var/lib/vz and is LVM by default, but you can change it to ext4 even at this stage, just not the mount point (at the time of writing, the interface does not allow that, but you can change everything later in /etc/fstab).

If you do go with the defaults, you will not need parts of this tutorial, just the network section. For a simple KS-1, the defaults are just fine, as it has only one clunky disk anyway.

Network section

(This is the most important part of this tutorial; it will turn your poor little box into a fully-fledged Proxmox server capable of running at least multiple containers.)
We mainly need to look at one file, /etc/network/interfaces.
The initial config looks something like this (for a KS-1):

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 123.123.123.123/24
    gateway 123.123.123.1
    bridge-ports eno0
    bridge-stp off
    bridge-fd 0

Pay attention to this line: "iface eno0 inet manual". It gives you the name of the interface connected to the OVH routers. If yours is not eno0, replace it with the relevant interface name throughout the config. Also, replace 123.123.123.123 with your IPv4 address and 123.123.123.1 with your gateway; usually they are already there, but in case you have a DHCP config, like:

auto vmbr0
iface vmbr0 inet dhcp

you should change it to static, taking the values for the IPv4 address and the gateway from your control panel.

Some KS-1s will have an IPv6 config as well; it could be automatic via some DHCP equivalent, or plain static. We do not care, as we will replace that config anyway.

The first changes go into the IPv6 config of the bridge facing the internet, vmbr0. Assuming the IPv6 address in the panel for your service is 2001:41d0:8:aaaa::1/128, it should look something like this:

iface vmbr0 inet6 static
    address 2001:41d0:8:aaaa::ffff/128
    post-up sleep 5; /sbin/ip -6 route add 2001:41d0:8:aaff:ff:ff:ff:ff dev vmbr0
    post-up sleep 5; /sbin/ip -6 route add default via 2001:41d0:8:aaff:ff:ff:ff:ff
    pre-down /sbin/ip -6 route del default via 2001:41d0:8:aaff:ff:ff:ff:ff
    pre-down /sbin/ip -6 route del 2001:41d0:8:aaff:ff:ff:ff:ff dev vmbr0

This adds an IPv6 address to the machine (the one you can use to access it from the internet, 2001:41d0:8:aaaa::ffff here) and the routes for it, taking into account OVH's peculiar setup. It also deletes the routes at shutdown (not required, but it is good to form nice habits).
I won't go into the details of IPv6 literal notation, as that is beyond the scope of this newbie-level tutorial; just make sure you get the correct number of "f"s. Normally 8 would suffice for a /64, but OVH actually routes a /56 and the gateway is for that, so you need 10. Pay attention and substitute your actual IPv6 prefix.
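The gateway literal can also be derived mechanically from the panel address. A minimal sketch, assuming the fictitious example address from this guide and the convention described above (a routed /56, gateway ending in ff:ff:ff:ff):

```shell
#!/bin/sh
# Derive the OVH-style IPv6 gateway from the address shown in the panel.
# ASSUMPTION: your service follows the convention above (the gateway is
# the first four hextets with the low byte of the fourth forced to ff,
# followed by :ff:ff:ff:ff), and the fourth hextet is written with four
# hex digits. The address below is the fictitious example from this guide.
panel_ip="2001:41d0:8:aaaa::1"

# First three hextets stay as they are.
head=$(printf '%s' "$panel_ip" | cut -d: -f1-3)
# Fourth hextet: replace its last two hex digits with "ff".
fourth=$(printf '%s' "$panel_ip" | cut -d: -f4 | sed 's/..$/ff/')

gateway="$head:$fourth:ff:ff:ff:ff"
echo "$gateway"    # 2001:41d0:8:aaff:ff:ff:ff:ff
```

This is only a convenience for double-checking the number of "f"s; always compare the result against the gateway OVH shows you.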

Save and test the config with

ifup --no-act vmbr0

If you get no errors, it is probably okay.

Now we have IPv6 connectivity for the machine (if we didn't before; some come with it, some do not), and you should be able to ping google.com over IPv6 after a restart. Do not use "apply configuration" in the Proxmox interface, it will not work; just restart, to make sure the changes work AND that you can still reach your server over SSH. If for some reason you can ping neither your IPv4 nor your IPv6 (changed from the one you got, 2001:41d0:8:aaaa::1, to 2001:41d0:8:aaaa::ffff in this example), the simplest thing is to reinstall. It usually means you made a typo somewhere, so you will have to start over, paying more attention.

The next changes will not affect basic connectivity, so if you screw something up, you can always go back and edit /etc/network/interfaces.

We will now add a bridge (a virtual interface like vmbr0) for the containers and virtual machines. You could name it vmbr6, or vmbr46, as it will provide both IPv4 and IPv6 connectivity to the VMs and CTs. I will use vmbr6 in this example.

Append it as follows:

auto vmbr6
iface vmbr6 inet static
    address 10.0.0.254/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

This creates the interface you will bridge the VMs/CTs to (vmbr6), gives it an IP, and sets up NAT for your containers. If NAT-ed IPv4 is all you want in your containers, you are already good to go: save the file, run ifup vmbr6, create your container, give it an IP like 10.0.0.xxx/24 with gateway 10.0.0.254 (the bridge IP), bridge it to vmbr6, and you have IPv4 connectivity inside the container.
If you need (and you should use) IPv6 inside your container, read on!

We will now add IPv6 connectivity to the vmbr6 bridge. Append this to your /etc/network/interfaces:

iface vmbr6 inet6 static
    address 2001:41d0:8:aaaa::1/64

That is basically the IPv6 address you have in your panel, with the /128 at the end replaced by /64. It will be the gateway for your containers.
STOP! Saving and doing ifdown vmbr6 / ifup vmbr6 will not be enough to give you IPv6 connectivity in the containers, because of the OVH routers' peculiarity of allowing only one IPv6 address at a time out to the internet. We will have to send all traffic through the address

2001:41d0:8:aaaa::ffff

we set to face the internet.
This is not as simple as enabling forwarding and NAT. While IPv6 NAT is possible, we would like FULL IPv6 access over the internet: all ports available to all IPv6-enabled containers, direct accessibility from the internet over IPv6, and so on.
IPv6 comes with NDP, the Neighbour Discovery Protocol, which basically allows autoconfiguration in "normal" IPv6 environments. But it requires the routers to let the OS do that, and, as noted, OVH routers will not give IPv6 access to more than one address at a time, so it will not work. Instead, we will proxy all our requests through that one address, and there is a neat tool for this: a daemon named ndppd.

ndppd is not part of the standard Debian or Proxmox install, so we will have to install it:

apt install ndppd

then configure it in its own config file, which does not exist yet and has to be created at /etc/ndppd.conf.
Edit it like this, according to our example IP:

route-ttl 30000
proxy vmbr0 {
    router yes
    timeout 500
    ttl 30000
    rule 2001:41d0:8:aaaa::/64 {
        static
    }
}

And save.
There is no 1 or ffff after the ::, that is not a typo.
ndppd may complain that the range is too big, but it will work just fine for our small machine.

Now, we need to start and daemonize it:

ndppd -d -c /etc/ndppd.conf

We are not done yet. By default, Proxmox installs from OVH templates forward neither IPv4 nor IPv6 packets, which I find odd since hosting VMs is their job, but never mind that, we will enable it.
For IPv4 I did it in the post-up stanza of vmbr6 (post-up echo 1 > /proc/sys/net/ipv4/ip_forward), which writes 1, meaning enabled, directly; for IPv6 we are better off enabling it permanently in /etc/sysctl.conf.
Find this line:

#net.ipv6.conf.all.forwarding=1

and uncomment it (remove the "#" in front), then save and apply:

sysctl -p /etc/sysctl.conf
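If you prefer to keep both toggles in one place instead of the post-up echo, the stock Debian /etc/sysctl.conf also carries a commented IPv4 line you can uncomment the same way. After editing, the relevant lines should read something like:

```
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
```

With both uncommented, forwarding survives reboots regardless of which interfaces come up, so this is a reasonable belt-and-braces addition to the post-up approach above.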

Now you should be able to give NAT-ed IPv4 and full IPv6 access to your containers.

This should work for most people, but what if you need some ports forwarded for an app that still has no (or poor) IPv6 support, or that has to be reachable over IPv4 from the wider internet too?
We can do this in the IPv4 section of vmbr6; just add lines like these (always add both the add and the delete rule, so there are 2 lines for each; add-only would still work, but cleaning up after yourself is good practice):

post-up iptables -t nat -A PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
post-down iptables -t nat -D PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203

This forwards UDP port 60203 to the container (or VM) with the IP 10.0.0.203. You can make it TCP, just change the udp part to tcp. You can, obviously, change the destination IP as well; those are the parts that vary most.
For a port range, do this:

post-up iptables -t nat -A PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
post-down iptables -t nat -D PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253

With that, we forward TCP ports 10000 through 60000 to the container or VM with IP 10.0.0.253.
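Keeping the -A and -D lines in sync by hand is error-prone once you have several forwards. A small generator script is one way to keep the pairs matched; a sketch, using the example triples from above (adjust the protocol/port/IP list to your containers):

```shell
#!/bin/sh
# Emit matching post-up/post-down DNAT rule pairs from a list of
# "proto dport destination" triples, so the add and delete lines
# can never drift apart. The triples below are this guide's examples.
forwards="udp 60203 10.0.0.203
tcp 10000:60000 10.0.0.253"

printf '%s\n' "$forwards" | while read -r proto dport dest; do
    printf '    post-up iptables -t nat -A PREROUTING -p %s --dport %s -j DNAT --to-destination %s\n' "$proto" "$dport" "$dest"
    printf '    post-down iptables -t nat -D PREROUTING -p %s --dport %s -j DNAT --to-destination %s\n' "$proto" "$dport" "$dest"
done
```

Run it and paste the output into the vmbr6 stanza in /etc/network/interfaces.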

With both these things, your vmbr6 section for IPv4 would look like this:

auto vmbr6
iface vmbr6 inet static
    address 10.0.0.254/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
    post-up iptables -t nat -A PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
    post-down iptables -t nat -D PREROUTING -p tcp --dport 10000:60000 -j DNAT --to 10.0.0.253
    post-down iptables -t nat -D PREROUTING -p udp --dport 60203 -j DNAT --to-destination 10.0.0.203
    post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

That is about it; basic IPv4 and IPv6 connectivity for your containers or virtual machines is assured, as long as you assign correct IPs to them and bridge them to vmbr6.
Assigning IPv4 is trivial, I suppose; for IPv6, assign something like 2001:41d0:8:aaaa::1001/64 in the container with ID 1001, for example, and the gateway will be 2001:41d0:8:aaaa::1.
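To make the addressing concrete, a Debian-style /etc/network/interfaces inside such a container would look roughly like this (eth0, the ::1001 suffix and 10.0.0.101 are just example values; with Proxmox CTs you would normally enter these in the GUI when creating the container rather than editing the file):

```
auto eth0
iface eth0 inet static
    address 10.0.0.101/24
    gateway 10.0.0.254

iface eth0 inet6 static
    address 2001:41d0:8:aaaa::1001/64
    gateway 2001:41d0:8:aaaa::1
```

Note that both gateways are the addresses we gave vmbr6, not vmbr0.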

After the first revision of this guide I was asked for a way to forward various ports to the same port in various VMs. That is possible, but a better approach in my view is to handle it inside the VM or container. For example, say I need to forward ports 9999 and 9998 to port 80 in two different VMs. Then we do the usual forwarding of TCP ports 9998 and 9999 to the relevant IPs, for example 10.0.0.110 and 10.0.0.111 (in Proxmox's /etc/network/interfaces, in the vmbr6 IPv4 section!):

post-up iptables -t nat -A PREROUTING -p tcp --dport 9998 -j DNAT --to-destination 10.0.0.110
post-up iptables -t nat -A PREROUTING -p tcp --dport 9999 -j DNAT --to-destination 10.0.0.111
post-down iptables -t nat -D PREROUTING -p tcp --dport 9998 -j DNAT --to-destination 10.0.0.110
post-down iptables -t nat -D PREROUTING -p tcp --dport 9999 -j DNAT --to-destination 10.0.0.111


Note: once again, the post-down statements are not strictly necessary in normal operation of the node, when you just reboot it, but if you repeatedly bring a single interface up and down, cleaning up the rules is a good idea.
You would then add, in the container's /etc/network/interfaces (Debian) (10.0.0.110):

  post-up iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80
  post-down iptables -D PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80


The same thing goes in the other VM or container, just replace --dport 9998 with --dport 9999.
I assume that:
1. you have iptables installed, as many container images come without it (for example, Debian 11). If not:

apt install iptables

2. your interface in the VM is eth0 (in most cases it is);
3. if your distro is a different one, please refer to its manual for how to do this at each boot.

If you only need the forwarding once:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 9998 -j REDIRECT --to-port 80

If you wish it to be available every time, but do not want to load it via the ifup method, you can do it once and then save the rules (Debian example):

/sbin/iptables-save > /etc/iptables/rules.v4

You could manually load the saved rules:

/sbin/iptables-restore < /etc/iptables/rules.v4

Or install the "persistent" service to do that for you at every boot:

apt install iptables-persistent

and enable it:

systemctl enable netfilter-persistent.service

That being said, maybe you need more, like sharing a NAS inside your KS for network boot and the like, without it even being NAT-ed to the public interface, or for any other reason.
You can add another interface to your container or VM and bridge it to another bridge, which you add to your /etc/network/interfaces like this:

auto vmbr64
iface vmbr64 inet static
    address 192.168.100.254/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

Just add a second interface in your container or VM, bridge it to vmbr64 and give it an IP like (in this case) 192.168.100.100/24. No gateway is needed.
Your NAS (or any other service) will be able to serve the containers and VMs inside the KS at that address, as long as they have a secondary virtual interface with an IP in that range bridged to vmbr64.
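The second interface inside the container would then be configured along these lines (eth1 is an assumption; with Proxmox CTs you would normally add it as net1 in the GUI):

```
auto eth1
iface eth1 inet static
    address 192.168.100.100/24
```

Since there is no gateway, this traffic can never leave the internal bridge, which is exactly the point.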

Extra partitioning

Now, if you have another disk, this is the time to format it, add it to /etc/fstab, and then add it to Proxmox as another storage.
First, create a mount point:

mkdir /disk2

Now, find the second disk:

lsblk

You will see something like this:
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0      7:0    0     8G  0 loop
sda        8:0    0 894.3G  0 disk
├─sda1     8:1    0     1G  0 part /boot
├─sda2     8:2    0 125.7G  0 part /
├─sda3     8:3    0     1G  0 part [SWAP]
├─sda4     8:4    0     1K  0 part
├─sda5     8:5    0 766.5G  0 part /var/lib/vz
└─sda6     8:6    0   1.7M  0 part
sdb        8:16   0 894.3G  0 disk

You see here, like in most cases, the second disk is called sdb.
We will use parted to create the partition, as it is included with Proxmox:

parted /dev/sdb
mklabel msdos
mkpart
p
ext4
2048s
100%

This is what you type, pressing Enter at each question (partition type primary, filesystem type ext4, start, end). We created an msdos label because that is the default, and since we will have only one partition, the msdos limit of 4 primary partitions is no problem. Note that the start is given as 2048s (sector 2048, i.e. 1 MiB reserved at the start for alignment and compatibility); a bare 2048 would be interpreted by parted as megabytes. You can additionally type print to check that the partition takes all the remaining space, and then type

quit

Parted tells you that we may need to update /etc/fstab, which we will do after we format the partition:

mkfs.ext4 -L disk2 /dev/sdb1

Now it is time to update /etc/fstab, for which we need the UUID of the partition:

blkid

Look for /dev/sdb1; in my case:

/dev/sdb1: LABEL="disk2" UUID="f5739cf0-617f-4cb3-b202-c6220474489c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="36adfec1-01"

My /etc/fstab before adding the disk is like this:

UUID=ee299698-a265-4940-9523-b1b18b5e5ccf       /       ext4    defaults        0       1
UUID=4899814d-eb64-44ea-be91-9936f196816e       /boot   ext4    defaults        0       0
UUID=ee15ff96-31fe-46b6-9767-f647468eb50e       /var/lib/vz     ext4    defaults        0       0
UUID=8b17be41-2d4c-4984-a372-bd3a6c967f62       swap    swap    defaults        0       0

Now we need to add a new line:

UUID=f5739cf0-617f-4cb3-b202-c6220474489c   /disk2  ext4    defaults    0   0
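If you want to script this step rather than copy the UUID by hand, something like the following works. The blkid line is the sample from above; on a real node you would capture it with line=$(blkid /dev/sdb1), and your UUID will of course differ:

```shell
#!/bin/sh
# Build the /etc/fstab line for the new disk from blkid's output.
# ASSUMPTION: the sample line below; substitute your real blkid output.
line='/dev/sdb1: LABEL="disk2" UUID="f5739cf0-617f-4cb3-b202-c6220474489c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="36adfec1-01"'

# Pull out the UUID="..." field; the leading space in the pattern
# keeps PARTUUID= from matching.
uuid=$(printf '%s\n' "$line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')

printf 'UUID=%s\t/disk2\text4\tdefaults\t0\t0\n' "$uuid"
```

Append the printed line to /etc/fstab (after checking it looks right).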

While we are here: if you would like to change the mount point for the storage on the first disk from /var/lib/vz to something else, say /mainstorage, you can do so here; just make sure that mount point exists (mkdir /mainstorage). If you already have a container or VM, their disks will appear to have moved and you will have to adjust their configs accordingly. Also, before saving, make sure there is an empty line at the end of the file.
Before restarting, run

mount -a

as an extra check. If you forgot to create a mount point, you will be reminded here.
After the restart, you can simply go to the Proxmox interface at https://youripv4:8006 and add the new disk to the storage as a directory. If you would like to access the interface over IPv6 only, that is possible and recommended; it is one of the reasons I chose ffff instead of the usual 1 for the host's IPv6, besides sanity reasons and the ease of setting the gateway in the containers. There are tutorials about doing that in Proxmox which apply verbatim; that is regular Proxmox stuff, not related to OVH peculiarities.
There are also tutorials about how to install a Let's Encrypt cert, although I prefer to add browser exceptions.

Troubleshooting connectivity

(based on feedback I received since posting the guide)

-IPv6 does not work in the container or VM.

Check that you have a different IPv6 address on vmbr0 than on vmbr6. The one I chose to contact the OVH routers (on vmbr0) ends in ::ffff/128, and the one I use as the gateway for VMs and containers bridged to vmbr6 ends in ::1/64. Mind the network mask as well: /128 vs /64.
This is only my recommendation; you can always use a different scheme, just remember to apply your choice consistently everywhere.

-IPv4 AND IPv6 are not working in the container/VM.

Check that you bridged to the correct bridge. By default, the Proxmox network settings offer vmbr0 for bridging. That will not work; you need to bridge to the interface (bridge) not facing the internet, which in this tutorial I called vmbr6.

-After forwarding port 443, 80 to a VM/container browsing from inside the VM no longer works.

I am not sure why this happens; I was not able to reproduce this issue, reported by multiple people, and logically it should never be a problem. Fortunately, it is only temporary: just shut down and boot the container/VM if it happens and it should work as intended.

-After reloading vmbr6 (for example with ifdown/ifup) the containers/VMs lose all connectivity.

Shut down and boot the containers/VMs. Connectivity will be restored.

I am still taking feedback; if it is not working for you, PM me and I will look into it.


Comments

  • hi,
    Thanks for the tutorial.
    How to access vm with public ip ? Can you make a tutorial ?

    =lets say I have one dedicated, and have 10 vm on there.
    I want to connect the VM using public IP. How ?

  • @mrlongshen said: I want to connect the VM using public IP. How ?

    You can using the IPv6 as per this tutorial. If you mean IPv4:
    1. Do you have extra IPs assigned?
    2. If the answer at 1 is "no" then you can using NAT and port forwarding as detailed in the tutorial.
    3. If the answer is yes, then you can simply add the IP to the main interface of the CT/VM and choose vmbr0 as a bridge. I am assuming the extra IPs are failover ones and not a subnet routed to you, that would be a bit more complicated.

  • @mrlongshen said:
    hi,
    Thanks for the tutorial.
    How to access vm with public ip ? Can you make a tutorial ?

    =lets say I have one dedicated, and have 10 vm on there.
    I want to connect the VM using public IP. How ?

    Add to the vmbr6 after post-up iptables -t the below (eg https traffic)

    post-up iptables -t nat -A PREROUTING -p tcp --dport 443:443-j DNAT --to 10.0.0.25 (VM IP)

    post-down iptables -t nat -D PREROUTING -p udp --dport 443-j DNAT --to-destination 10.0.0.25 (VM IP)

Maounique Member
    edited November 2022

    @wii747 said: post-up iptables -t nat -A PREROUTING -p tcp --dport 443:443-j DNAT --to 10.0.0.25 (VM IP)

    post-down iptables -t nat -D PREROUTING -p udp --dport 443-j DNAT --to-destination 10.0.0.25 (VM IP)

    you put up port forwarding for tcp and down for udp.

To access the VM over the internet on port 443 TCP, which is the HTTPS default, you do that (with the minor correction that you take down TCP as well) and use the URL
https://yourproxmoxipv4 and it will go to the VM's webserver, not the node.

  • @Maounique said:

    @mrlongshen said: I want to connect the VM using public IP. How ?

    You can using the IPv6 as per this tutorial. If you mean IPv4:
    1. Do you have extra IPs assigned?
    2. If the answer at 1 is "no" then you can using NAT and port forwarding as detailed in the tutorial.
    3. If the answer is yes, then you can simply add the IP to the main interface of the CT/VM and choose vmbr0 as a bridge. I am assuming the extra IPs are failover ones and not a subnet routed to you, that would be a bit more complicated.

    I want to use ipv4 .. It more easiest..

    1. Yes I have extra IP. but its for failover. What that means ?
Maounique Member
    edited November 2022

    @mrlongshen said: Yes I have extra IP. but its for failover. What that means ?

If you got them in relation to a specific machine (So you Start (SYS), for example), then you can create a VM there, bridge it to vmbr0, and it should work.
You can use them under special conditions on other nodes, in the same DC, for example.
Same as before, add a VM or container and bridge it to vmbr0.
In many cases, though, KS boxes don't accept failover IPs, as their routers only accept one IP per MAC, both IPv4 and IPv6.
These IPs are not available as an offer for Kimsufi, but work for SYS or game servers.

trungkien Member
    edited November 2022

    Great works!
    Very detailed guide.
    I had similar setup like this on several ovh & hetzner servers with NAT and Ip failover. For /etc/fstab, I always use Label (disk2) instead of UUID to save some time. Will try ipv6 part on my new KS LE just bought today.

  • @trungkien said: I always use Label (disk2) instead of UUID

    Me too, but it is best to teach newbies newer practices, not our old ways. They will grow up with UUIDs whether we like it or not :P

jugganuts Member
    edited November 2022

    good tut! glad someone finally put all the relevant info in one place for the non standard setup on ovh routers...

  • Awasome guide, worked a treat thanks @Maounique

  • Just curious - do you guys leave your proxmox web UI port open on your internet side, and if so are there any known exploits, or do you lock it down to internal / wireguard only?

I leave it on IPv6 only. It is unlikely someone would scan IPv6 with any measure of success.

  • Thanks again, everything works including ipv6 following this tutorial.

  • Thanks for the tutorial. I had some time this evening and messed around with proxmox for a bit. It's my first time ever working with it and I was a bit confused on how to secure the webpage. I found the documentation for pveproxy which handles access to the web interface (it was publicly available). I usually try and make Web interfaces accessable only through a Wireguard connection to my servers. So I added my Wireguard IP range (10.7.0.1/24) at the pveproxy config but this wasn't working. So I just added the IP of my client (10.7.0.2). This seems to work and my panel is only accessable when connected to Wireguard. The port is still open to public, which makes sense as the interface still uses the public IP for connection.

    Is there a way to change the web interface IP to the Wireguard gateway, for example 10.7.0.1?

Maounique Member
    edited November 2022

    @buddermilch said: Is there a way to change the web interface IP to the Wireguard gateway, for example 10.7.0.1?

    Why would you want it to be the gateway? From your perspective, the proxmox UI is just another service within the "local" virtual network you created with Wireguard.
    As such it makes no sense to be the gateway.

    That being said:
    1. Proxmox is actively maintained, the exploits would probably be rare;
    2. I am only enabling it over IPv6 because it is really unlikely someone would scan IPv6;
    3. If you are really paranoid, just allow only localhost and connect with a SSH tunnel and key, over Wireguard if you like.

  • @Maounique said: 2. I am only enabling it over IPv6 because it is really unlikely someone would scan IPv6;

    This is actually a good idea I never thought about that!

    Right now it only allows local connection from wireguard so, regarding your reply, it should be fine i guess. Thanks again!

  • @buddermilch said:
    Thanks for the tutorial. I had some time this evening and messed around with proxmox for a bit. It's my first time ever working with it and I was a bit confused on how to secure the webpage. I found the documentation for pveproxy which handles access to the web interface (it was publicly available). I usually try and make Web interfaces accessable only through a Wireguard connection to my servers. So I added my Wireguard IP range (10.7.0.1/24) at the pveproxy config but this wasn't working. So I just added the IP of my client (10.7.0.2). This seems to work and my panel is only accessable when connected to Wireguard. The port is still open to public, which makes sense as the interface still uses the public IP for connection.

    Is there a way to change the web interface IP to the Wireguard gateway, for example 10.7.0.1?

    Set-up 2FA auth in proxmox, that should be pretty safe then.

buddermilch Member
    edited November 2022

    Thanks again for the tutorial! Everything seems to work i can ping the container from my other server on ipv4 (as expected) and even better IPv6. But i still dont know why host resolving is not working. I set it on google dns for both the host and the container but I cant resolve hostnames for whatever reason. I tried adding dns-nameserver on the interface as suggested by the Debain documentation but it's still not working. Can anyone help me out? It seems to resolve on 8.8.8.8 doesnt it? Is this a port forwarding problem?

    ; <<>> DiG 9.16.33-Debian <<>> google.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32726
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 512
    ;; QUESTION SECTION:
    ;google.com.                    IN      A
    
    ;; ANSWER SECTION:
    google.com.             300     IN      A       142.250.75.238
    
    ;; Query time: 11 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Sat Nov 26 20:22:21 UTC 2022
    ;; MSG SIZE  rcvd: 55
    

    Ok whatever it started to work all of a sudden. I dont get it...

    And another Edit. Is it possible that the lack of Iptables on the fresh container is the cause of the problem? Because right before writing this comment it was not working and I installed Iptables, wrote the comment and tried "ping" again afterwards and it worked.

Maounique Member
    edited November 2022

It should not need iptables unless you are forwarding something or firewalling.
Usually, when something is not working and then starts working all of a sudden, it is IPv6-related, but this is not your case.
It is possible you have a wrong IPv6 resolver and resolution is preferred over IPv6. In that case, there is a bit of lag, a few seconds, before IPv6 starts working. Just make sure you have IPv6 resolvers too:
    2001:4860:4860::8888
    2001:4860:4860::8844

    OR, recommended, OpenDNS:
    2620:119:35::35
    2620:119:53::53

  • when i nat port 443 (https://.. ) then vm's lose ipv4 internet. fine without 443 though does this setup not work for that?

  • And again im completely clueless. Everything works as expected, my VM has a real IPv6 address. I can ping it from my PC, I can check the TCP port from my PC. What I can't is connecting to it from my browser, for a webinterface for example.

    Ipv6 test gives 10/10 from my home connection. Putty is working with the native IPv6 address of my VM. Why can I not open the Webinterface in Chrome/Firefox when putting in the IPv6 IP ([xx:xx:xx:xx::x]:port) ?

buddermilch Member
    edited November 2022

    @buddermilch said:
    And again im completely clueless. Everything works as expected, my VM has a real IPv6 address. I can ping it from my PC, I can check the TCP port from my PC. What I can't is connecting to it from my browser, for a webinterface for example.

    Ipv6 test gives 10/10 from my home connection. Putty is working with the native IPv6 address of my VM. Why can I not open the Webinterface in Chrome/Firefox when putting in the IPv6 IP ([xx:xx:xx:xx::x]:port) ?

    Forget about it I'm stupid. After setting up a small web page and testing it out on the container it was indeed working no problem. I assumed Docker is working with IPv6 out of the box which it apperently isnt...

    Edit: That is way more complex than I thought but it's actually really fun. Probably my best BF purchase :smiley:

Maounique Member
    edited November 2022

    @ebony said: when i nat port 443 (https://.. ) then vm's lose ipv4 internet. fine without 443 though does this setup not work for that?

    I will need more info for that, what rule do you use and which VMs lose IPv4 internet? Which RFC 1918 IPs are you using in the VMs which are losing IPv4 internet? Where are you forwarding port 443? Which IP and VM? Does that still have IPv4 internet?

    Put here your vmbr6 config with and without forwarding.

Maounique Member
    edited November 2022

    @buddermilch said: That is way more complex than I thought but it's actually really fun. Probably my best BF purchase

    Yep, one of the reasons having a KS is to learn and experiment with your own contained "LAN".
    The constraints are "helping" you to experiment things you are not normally doing.

  • @Maounique said:

    @ebony said: when i nat port 443 (https://.. ) then vm's lose ipv4 internet. fine without 443 though does this setup not work for that?

    I will need more info for that: what rule do you use, and which VMs lose IPv4 internet? Which RFC 1918 IPs are the affected VMs using? Where are you forwarding port 443, to which IP and which VM? Does that VM still have IPv4 internet?

    Post your vmbr46 config here, with and without forwarding.

        # for ipv6
        auto vmbr46
        iface vmbr46 inet static
            address 10.0.0.254/24
            bridge-ports none
            bridge-stp off
            bridge-fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
            post-up iptables -t nat -A PREROUTING -p tcp --dport 446 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p udp --dport 1194 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 8090 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 21 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 40110:40210 -j DNAT --to 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 465 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 110 -j DNAT --to-destination 10.0.0.102
            post-up iptables -t nat -A PREROUTING -p tcp --dport 993 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 446 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p udp --dport 1194 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 8090 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 21 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 40110:40210 -j DNAT --to 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 465 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 110 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D PREROUTING -p tcp --dport 993 -j DNAT --to-destination 10.0.0.102
            post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
    
        iface vmbr46 inet6 static
            address 2001:41d0:2:c96b::1/64
    

    It seems I can ping 8.8.8.8 and other sites, but I cannot access HTTP or HTTPS on any site.

  • Maounique Member
    edited November 2022

    Remove the IPv6 config that exposes your IP.

    The rules are okay, but you didn't answer the other questions.
    Did you lose access on the node or in the VM?
    As far as I can tell, this should not happen. I just did a full run, reinstalling one of the KS-1s I have and following the tutorial step by step to make sure I hadn't made a mistake, and this does not happen.

  • ebony Member
    edited November 2022

    @Maounique said:
    Remove the IPv6 config that exposes your IP.

    The rules are okay, but you didn't answer the other questions.
    Did you lose access on the node or in the VM?
    As far as I can tell, this should not happen. I just did a full run, reinstalling one of the KS-1s I have and following the tutorial step by step to make sure I hadn't made a mistake, and this does not happen.

    Sorry, it was the VMs that cannot access HTTP/HTTPS sites when I port-forward 80/443, and it's all VMs.

    They can ping sites like 8.8.8.8 though; it's like a DNS problem.
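
    [Editor's note] A quick way to test the "DNS problem" theory from inside a container: if raw IPs are reachable but names don't resolve, the NAT rules are fine and only the resolver is broken. A tiny sketch under that assumption (getent goes through the same resolver path as most programs; the hostname is just an example):

```shell
#!/bin/sh
# Report whether a name resolves through the system resolver.
# If this fails while `ping 8.8.8.8` works, the problem is DNS, not NAT.
check() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "$1: resolves"
    else
        echo "$1: DNS failure"
    fi
}
check localhost
```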

  • It's OK, all fixed now, not sure what was wrong.

  • Maounique Member
    edited November 2022

    Not really, it could be something different; try adding the Google DNS servers to make sure.
    The ports you forward should not influence anything, as outgoing connections have nothing to do with that.
    I was forwarding everything and wasn't able to replicate your issue; forwarding just makes it look like your VM is directly on the internet for TCP and UDP, that is all (if you do like me and forward 1-65535).

    If you do NOT forward those ports (remove the rules, bring vmbr46 down and back up, then reboot the VM, or bring the interface inside it down and up), does browsing from the VM magically work?

    EDIT: Never mind, I missed your reply.
    And that leaves me with a mystery; I was also curious what went wrong :P
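
    [Editor's note] Maounique's suggestion to add the Google DNS servers boils down to pointing the container's resolver at them. A sketch, writing to a demo path so it can be tried safely; on the real container the target would be /etc/resolv.conf (which some distros regenerate on boot), and on Proxmox it is usually cleaner to set the container's DNS from the node with `pct set <vmid> --nameserver 8.8.8.8`:

```shell
#!/bin/sh
# Write a resolv.conf pointing at Google's public resolvers.
# RESOLV is a stand-in path for this demo; inside the container
# it would be /etc/resolv.conf.
RESOLV=./resolv.conf.demo
printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' > "$RESOLV"
cat "$RESOLV"
```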

  • Thanks for this great in-depth tutorial.
    Is it possible to assign /80s to these CTs?
