VM using a different public IP with bridge setting in Proxmox isn't pinging outside.


Comments

  • fredo1664 Member

    @yongsiklee said:

    @BingoBongo said:

    This config should work out of the box unless you are missing some crucial details

    That config was the first config I did but that did not work. Hence...

    Yes, we are back at the start. I suspect it doesn't work because it's a VPS and the provider is tying both IP addresses to the virtual MAC address of the VPS (like you would do with Proxmox running on a dedicated server), and the LXC container has a different MAC address. But that's just a hypothesis.
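
    One way to check that hypothesis (a diagnostic sketch, not something done in this thread) is to watch which source MAC the container's traffic actually carries on the host's uplink, e.g. assuming the uplink is eth0:

    # on the Proxmox host, show link-layer headers while pinging from the container
    tcpdump -e -n -i eth0 arp or icmp

    If replies only ever come back for frames sourced from the VPS's own MAC, that points to the provider binding both IPs to that MAC.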

  • yongsiklee Member, Patron Provider

    @fredo1664 said:

    @yongsiklee said:

    @BingoBongo said:

    This config should work out of the box unless you are missing some crucial details

    That config was the first config I did but that did not work. Hence...

    ...I suspect it doesn't work because it's a VPS and the provider is tying both IP addresses to the virtual MAC address of the VPS...,and the LXC container has a different MAC address. But that's just an hypothesis.

    I totally suspect that is true. I also can't get a private container to access the internet using NAT. It all comes down to the single MAC on the VPS interface being the only one that works. They must have a MAC filter on the switch.

  • yongsiklee Member, Patron Provider

    @yongsiklee said:
    ...It all comes down to the single MAC on the VPS interface being the only one that works. They must have a MAC filter on the switch.

    They just said there is no filter, just one MAC per VPS.

  • Falzo Member

    I think you're still mixing things up too much. Let's do this step by step.

    1) a bridged setup most likely won't work, because the IPs are already routed and delivered directly to your vNIC/VM, i.e. the add-on IP is not set up as a virtualization IP.

    2) now you need a routed setup, but make sure to decide which IP is for what and bind them to the right interfaces (or not), so that the routing works properly.
    host routes / pointopoint have worked pretty well for me so far, so I'd stick with that.

    3) let's define xx.xx.65.139 as the IP used for the host and xx.xx.66.37 as the one for the LXC guest

    4) host config (/etc/network/interfaces on the Proxmox host):

    # uplink: host IP as a /32 with the provider gateway as point-to-point peer
    auto eth0
    iface eth0 inet static
        address xx.xx.65.139
        netmask 255.255.255.255
        pointopoint xx.xx.64.1
        gateway xx.xx.64.1
        # allow routing between the uplink and the bridge
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward

    # internal bridge for the guest; the guest's public IP is routed to it as a host route
    auto vmbr0
    iface vmbr0 inet static
        address xx.xx.65.139
        netmask 255.255.255.255
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        up route add -host xx.xx.66.37 dev vmbr0
    

    5) guest config, via GUI

    this kind of setup has worked for me in several cases. However, of course it can be done differently and might not solve everyone's problem ;-) ;-)

    6) for using an internal IP/subnet like 10.0.0.0/24 instead and forwarding all traffic, I'd suggest setting up a separate bridge of course, and make sure not to forget the correct masquerading rules for iptables. Also make sure those go into the correct table (-t nat), and specify source and destination IPs correctly, not just the device (a rough sketch follows below).
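
    On the guest side (step 5), the GUI settings would presumably be the container's IP xx.xx.66.37 (netmask /32) with the host's vmbr0 address xx.xx.65.139 as the gateway; that detail isn't spelled out above, so it's an assumption.

    For step 6, a minimal sketch of what such a NAT setup could look like, assuming a second bridge vmbr1, the internal subnet 10.0.0.0/24, and eth0 as the uplink (these names are only examples):

    auto vmbr1
    iface vmbr1 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # masquerade everything from the internal subnet out via the uplink
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE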

    Thanked by: Maounique
  • yongsiklee Member, Patron Provider

    @Falzo said:
    I think you're still mixing things up too much. Let's do this step by step.
    [routed setup with host routes / pointopoint; full config quoted above]

    I almost gave up after I tried yours.

    BUT,

    after adding this line to yours, post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp,

    it FINALLY works!!!!!!

    THANK YOU VERY MUCH! :D
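
    For reference, the uplink stanza with that line added would presumably look like the sketch below. Note the interface is called eno0 here while the example above used eth0; use whatever your host NIC is actually named:

    auto eno0
    iface eno0 inet static
        address xx.xx.65.139
        netmask 255.255.255.255
        pointopoint xx.xx.64.1
        gateway xx.xx.64.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        # answer ARP for the routed guest IP on the upstream interface
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp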

  • yongsiklee Member, Patron Provider

    Now SSH into the VM is not accepting the password. The remaining trouble is how I can access my VM from outside on its public address.

  • Falzo Member
    edited March 2023

    @yongsiklee said:
    Now SSH into the VM is not accepting the password. The remaining trouble is how I can access my VM from outside on its public address.

    I'd guess you have some leftover iptables rules or the like from earlier attempts that might be interfering?

    PS: can you see the failed logins in the guest's log files? If so, it is probably an issue with your SSH config; did you enable password authentication at all?
    Only if you can't find anything inside the guest would it still be a firewall or routing issue on the host node...

    Thanked by: yongsiklee
  • yongsiklee Member, Patron Provider
    edited March 2023

    @Falzo said:

    @yongsiklee said:
    Now SSH into the VM is not accepting the password. The remaining trouble is how I can access my VM from outside on its public address.

    I'd guess you have some leftover iptables rules or the like from earlier attempts that might be interfering?

    PS: can you see the failed logins in the guest's log files? If so, it is probably an issue with your SSH config; did you enable password authentication at all?
    Only if you can't find anything inside the guest would it still be a firewall or routing issue on the host node...

    It was the SSH config that I had to edit. All is fine now. Thank you so much for your configuration. This config is here to stay. B)
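
    For anyone who hits the same thing: the usual culprit is password authentication being disabled in the guest's sshd_config. A sketch of the typical fix (a guess, not the exact change made here):

    # /etc/ssh/sshd_config inside the container
    PasswordAuthentication yes
    # only if you really want root password logins:
    PermitRootLogin yes

    followed by restarting the SSH service (systemctl restart sshd, or ssh on Debian-based guests).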

  • vsys_host Member, Patron Provider

    When the same subnet is used on different interfaces, the reverse path filter should be disabled:
    echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
    echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
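
    To make that survive a reboot, one option (an illustration, not from this thread) is a sysctl drop-in:

    # /etc/sysctl.d/99-rp-filter.conf
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0

    applied with sysctl --system.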

  • Maounique Host Rep, Veteran
    edited March 2023

    While routed would work, I would generally prefer point 6, bridge and NAT, because I might want to have another container in the future. Containers share RAM and CPU very well, and I am a one-container-per-task kind of person. It would also work with only one IP.

    Sure, some voodoo with port forwarding would be needed, but since it is one container per task, it usually needs only a limited number of ports anyway.
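
    The port-forwarding part would typically be one DNAT rule per exposed service, e.g. something like this sketch (assuming the container sits at 10.0.0.10 behind the NAT bridge, the uplink is eth0, and you want to reach its SSH via port 2222 on the host; all of these values are only examples):

    # forward TCP 2222 on the host's public interface to the container's SSH
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 10.0.0.10:22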

    Thanked by: Falzo