Comments
Yes, we are back at the start. I suspect it doesn't work because it's a VPS and the provider is tying both IP addresses to the VPS's virtual MAC address (like you would do with Proxmox running on a dedicated server), and the LXC container has a different MAC address. But that's just a hypothesis.
> @fredo1664 said:
I suspect that's true. I also can't get a private container to access the internet using NAT. It all comes down to a single MAC on the VPS interface being the only one that works. They must have a MAC filter on the switch.
They just said there's no filter, just one MAC per VPS.
I think you're still mixing things up too much. Let's do this step by step.
1) a bridged setup most likely won't work, because the IPs are already routed and available directly at your vNIC/VM, i.e. the add-on IP is not set up as a virtualization IP.
2) so you need a routed setup instead, but make sure to decide which IP is for what and bind them correctly to the interfaces (or deliberately not), so that the routing works properly.
Host routes/pointopoint have worked pretty well for me so far, so I'd stick with that.
3) let's define xx.xx.65.139 as IP used for the host and xx.xx.66.37 as the one for the LXC guest
4) host config
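The actual host config snippet seems to have been lost from the post. A minimal sketch of what a routed host config typically looks like in `/etc/network/interfaces` (the interface names `eno0`/`vmbr0` and the `GATEWAY_IP` placeholder are assumptions; the xx.xx addresses follow the thread's placeholders):

```shell
# /etc/network/interfaces on the host -- routed setup sketch
auto eno0
iface eno0 inet static
    address xx.xx.65.139/32
    pointopoint GATEWAY_IP    # provider gateway, reached point-to-point
    gateway GATEWAY_IP

auto vmbr0
iface vmbr0 inet static
    address xx.xx.65.139/32   # reuse the host IP on the bridge
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # host route: send the guest's add-on IP into the bridge
    up ip route add xx.xx.66.37/32 dev vmbr0
```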
5) guest config, via GUI
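For reference, what entering the IP and gateway in the GUI roughly amounts to inside the LXC guest (a sketch; `eth0` is the common Proxmox default, and the on-link gateway trick assumes the routed setup above):

```shell
# /etc/network/interfaces inside the LXC guest -- sketch
auto eth0
iface eth0 inet static
    address xx.xx.66.37/32
    # the host answers for us (host route / proxy ARP), so the
    # host's own IP acts as the on-link default gateway
    gateway xx.xx.65.139
```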
This kind of setup has worked for me in several cases. However, it can of course be done differently, and it might not solve everyone's problem ;-)
6) for using an internal IP/subnet like 10.0.0.0/24 instead and forwarding all traffic, I'd suggest setting up a separate bridge, of course, and don't forget the correct masquerading rules for iptables. Also make sure those go into the correct table (-t nat) and specify source and destination IPs properly, not just the device.
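The masquerading point 6 mentions could look roughly like this (a sketch: the uplink name `eno0` and a private bridge holding 10.0.0.1/24 are assumptions, the 10.0.0.0/24 subnet is from the post):

```shell
# enable IPv4 forwarding on the host so guest traffic can pass through
echo 1 > /proc/sys/net/ipv4/ip_forward

# masquerade traffic from the internal subnet leaving via the uplink;
# note the rule goes into the nat table (-t nat) and names an explicit
# source subnet, not just the device
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eno0 -j MASQUERADE
```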
I almost gave up after I tried yours.
BUT,
after adding this line to yours, `post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp`,
it FINALLY works!
THANK YOU VERY MUCH!
Now SSH into the VM is not accepting my password. The remaining trouble is how to access my VM from outside via its public address.
I'd guess you have some leftover iptables rules or the like from earlier attempts that might be interfering?
PS: can you see the failed logins in the guest's log files? If so, it's an issue with your SSH config; did you enable password auth at all?
Only if you can't find anything inside the guest might it still be a firewall or routing issue on the host node...
It was ssh config that I had to edit. All is fine now. Thank you so much for your configuration. This config is here to stay.
When the same subnet is used on different interfaces, the reverse path filter should be disabled:
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
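Those echo commands only last until reboot. To make them persistent, the usual approach is a sysctl drop-in (the file name is an assumption; any `/etc/sysctl.d/*.conf` works):

```shell
# /etc/sysctl.d/99-rp-filter.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
```

then apply it with `sysctl --system`.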
While routed would work, I would generally prefer point 6, bridge and NAT, because I might want another container in the future. Containers share RAM and CPU very well, and I'm a one-container-per-task kind of person. It would also work with only one IP.
Sure, some voodoo with port forwarding would be needed, but since it's one container per task, it usually needs only a limited number of ports anyway.
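That port-forwarding voodoo would typically be one DNAT rule per service. A sketch for forwarding SSH to a container behind the NAT bridge (the container IP 10.0.0.2 and the port numbers are made-up examples; `eno0` is the assumed uplink):

```shell
# rewrite TCP port 2222 arriving on the uplink to SSH in the container
iptables -t nat -A PREROUTING -i eno0 -p tcp --dport 2222 \
    -j DNAT --to-destination 10.0.0.2:22

# allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -d 10.0.0.2/32 -p tcp --dport 22 -j ACCEPT
```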