How to link several servers together via VPN

raindog308 Administrator, Veteran

Forgive me, but advanced networking is not one of my strengths.

I have three VPSes, each in different DCs with different providers. They talk to each other using a protocol which itself does not offer encryption. I would like their traffic to be encrypted. It's a set of ports, but I'd be OK just saying "if you want to talk to one of the other 2 servers, you reach them on this encrypted network for everything".

I'm thinking VPN. Note that the servers also talk to the general public Internet, so they'd need to talk on their public IP and then also on a VPN.

So is there a way I can create a tunnel between each VPS to the other two using...OpenVPN? I'd want something I can automate so it comes up if I reboot the nodes, etc. And IPs would need to be fixed.

Then presumably I can create 10.x.x.x DNS entries and tell the nodes to talk on those.

Any howto I should look at? I assume OpenVPN is the way to go, but other than running Nyr's automated script to set up VPNs, I've never used OpenVPN for server-to-server communication. But it could be any VPN really.
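
For reference, the kind of tunnel being asked about can be done with OpenVPN's static-key point-to-point mode. A minimal sketch along the lines of OpenVPN's static key mini-HOWTO; the hostnames, addresses, and paths are placeholders:

    # Generate a shared key once, then copy it to both ends:
    #   openvpn --genkey --secret /etc/openvpn/static.key

    # /etc/openvpn/to-vps-b.conf on vps-a
    dev tun
    remote vps-b.example.com      # omit on the listening side
    ifconfig 10.9.0.1 10.9.0.2    # local / remote tunnel IPs (swapped on vps-b)
    secret /etc/openvpn/static.key
    keepalive 10 60
    persist-tun
    persist-key

    # Start at boot (Debian-style packaging):
    #   systemctl enable --now openvpn@to-vps-b

With three nodes that means one such config per pair; the mesh-oriented suggestions below (Tinc, PeerVPN) avoid the per-pair bookkeeping.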

Comments

  • Sady Member

    TincVPN is the way to go.

  • +1 for Tinc, fits your need exactly and is also far easier to scale than OpenVPN for a private network setup.
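
    As a rough sketch of what a three-node Tinc mesh involves (the netname "mynet", node names, and addresses are made up for illustration):

        # /etc/tinc/mynet/tinc.conf on node vpsa
        Name = vpsa
        Interface = tun0
        ConnectTo = vpsb
        ConnectTo = vpsc

        # /etc/tinc/mynet/hosts/vpsa -- exchanged with every peer;
        # "tincd -n mynet -K" appends this node's public key to it
        Address = vpsa.example.com
        Subnet = 10.0.0.1/32

        # /etc/tinc/mynet/tinc-up
        #!/bin/sh
        ip addr add 10.0.0.1/24 dev $INTERFACE
        ip link set $INTERFACE up

    Each node carries the hosts/ files of its peers; tinc meshes and routes between them on its own from there.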

  • howardsl2 Member
    edited August 2016

    https://github.com/hwdsl2/setup-ipsec-vpn

    Note: OpenVZ VPS is not supported.

    After connecting via IPsec/L2TP, the VPN server has IP 192.168.42.1 within the VPN, and VPN clients are assigned IPs starting from 192.168.42.10.

    Client setup instructions, including Linux:
    https://git.io/vpnclients

    If connecting from a Linux VPS, don't forget to add a route to your own public IP before running the last command, so that your SSH connection is not affected.
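
    On a typical Linux VPS that route is a single command; a sketch with placeholder addresses (203.0.113.50 being the public IP you are SSHing in from, 198.51.100.1 the VPS's original default gateway):

        ip route add 203.0.113.50 via 198.51.100.1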

  • The simplest, least resource-intensive way to achieve what you want is already on your system.
    Use ssh to create a tunnel forwarding the port(s) you need to the remote machine. Very secure, very simple. Use SSH key pairs to allow the machines to log into each other without a password.
    You can also do this to create a VPN to a machine that does not support tun/tap, btw.
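
    A minimal sketch of that approach, with assumed hostnames and an arbitrary example port (5432):

        # One-time: create a key pair and install it on the remote machine
        ssh-keygen -t ed25519 -f ~/.ssh/tunnel_key -N ''
        ssh-copy-id -i ~/.ssh/tunnel_key.pub tunnel@vps-b.example.com

        # Forward local port 5432 to port 5432 on vps-b, encrypted by SSH
        # (-N: run no remote command, -f: background after authentication)
        ssh -i ~/.ssh/tunnel_key -f -N \
            -L 127.0.0.1:5432:127.0.0.1:5432 tunnel@vps-b.example.com

    For the reboot-survival requirement, something like autossh under a systemd unit is the usual way to keep such a tunnel up.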

    Thanked by: muratai
  • raindog308 Administrator, Veteran

    mycosys said: The simplest, least resource-intensive way to achieve what you want is already on your system. Use ssh to create a tunnel forwarding the port(s) you need to the remote machine. Very secure, very simple. Use SSH key pairs to allow the machines to log into each other without a password. You can also do this to create a VPN to a machine that does not support tun/tap, btw.

    Oh, there is also spiped: http://www.tarsnap.com/spiped.html

    Hadn't thought of that.
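
    For the "set of ports" case, a sketch modeled on spiped's own documentation (hostnames and the 8025 -> 25 port pair are placeholders):

        # Generate a shared secret and copy it to both machines
        dd if=/dev/urandom bs=32 count=1 of=keyfile

        # On vps-b: decrypt connections arriving on 8025, hand them to local port 25
        spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:25' -k keyfile

        # On vps-a: encrypt connections made to local 8025, send them to vps-b
        spiped -e -s '[127.0.0.1]:8025' -t 'vps-b.example.com:8025' -k keyfile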

    Thanked by: muratai
  • Tinc is a nice choice; however, I found it transfers slowly compared to direct connections. I mean, a lot slower. CPU usage is pretty high on my i7. I tweaked it a bit but couldn't improve it much, so I gave up.

  • raindog308 said: I have three VPSes, each in different DCs with different providers. They talk to each other using a protocol which itself does not offer encryption. I would like their traffic to be encrypted. It's a set of ports, but I'd be OK just saying "if you want to talk to one of the other 2 servers, you reach them on this encrypted network for everything".

    Assuming you can do IPSEC, your use case is a perfect match for IPSEC transport mode. See https://www.strongswan.org/testing/testresults/ikev2/host2host-transport/
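
    A hedged sketch of what that host-to-host transport setup looks like in strongSwan's classic ipsec.conf (addresses and the PSK are placeholders; mirror left/right on the other host):

        # /etc/ipsec.conf on vps-a
        conn vpsa-vpsb
            keyexchange=ikev2
            type=transport
            left=198.51.100.10     # this host's public IP
            right=203.0.113.20     # peer's public IP
            authby=secret
            auto=start

        # /etc/ipsec.secrets
        198.51.100.10 203.0.113.20 : PSK "long-random-shared-secret"

    Traffic between the two public IPs is then encrypted in place, with no tunnel interface or second set of addresses to manage.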

  • @msg7086 said:
    I found it transfers slowly compared to direct connections

    By direct, do you mean unencrypted? I found my CPU usage dropped noticeably switching from point-to-point OpenVPN to Tinc, and switching Tinc to 'Cipher = camellia-128-cbc' gave another noticeable boost (and I'm talking low-spec, CPU-bound Atom/Bay Trail servers, no encryption acceleration).
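
    For reference, that cipher is set per host in Tinc 1.0's host config files; a one-line sketch (the path and host name are placeholders):

        # /etc/tinc/mynet/hosts/vpsa
        Cipher = camellia-128-cbc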

  • Shot2 Member
    edited August 2016

    I have PeerVPN (homemade compile/deb-packaged/systemd service) deployed as a mesh over IPv4 and/or IPv6 between 6 DCs. Each machine gets a new tap ("vpn0") interface with some user-configurable "local" ipv4 and ipv6 (10.x.x.x / fd00:x::x), then they talk gently to each other :)

    Very easy to set up (a simple 8-line .conf file), reasonably low on resources, no impact on xfer speed...
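
    For reference, a sketch of that kind of .conf, modeled on PeerVPN's example config (network name, secret, and peer addresses are made up):

        port 7000
        networkname MYMESH
        psk some-long-shared-secret
        enabletunneling yes
        interface vpn0
        ifconfig4 10.8.0.1/24
        initpeers vps-b.example.com 7000 vps-c.example.com 7000

    The same file works on every node apart from ifconfig4, and initpeers only needs to point at nodes that are already up.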

  • I'm using OpenVPN with VirtWire.

    It connects my home servers to my remote dedi and VMs.

  • emg Veteran

    @ipasces6 said: Assuming you can do IPSEC, your use case is a perfect match for IPSEC transport mode. See https://www.strongswan.org/testing/testresults/ikev2/host2host-transport/

    @ipasces6's comment makes sense to me. I do not know the specific implementations being discussed here, but I can say:

    • IPSec sits lower in the network stack compared with SSL/TLS. Less wasted overhead.
    • IPSec can run more efficiently than SSL/TLS. Much of the packet (datagram) handling can be done in the kernel. E.g., plaintext packets that are not related to the VPN connection can be examined quickly at interrupt time and then passed up the network stack with minimal delay and interference. Encrypted packets can be handled more efficiently, too. Note that it depends somewhat on how IPSec is shimmed into the network stack. Do SSL/TLS VPN implementations run in user mode, with associated context switching? If so, then expect IPSec to be far more efficient.
    • If you decide to use IPSec, then transport mode is the correct choice. Tunnel mode adds extra overhead for no benefit at all in this particular use case.
  • @Shot2 said:
    I have PeerVPN (homemade compile/deb-packaged/systemd service) deployed as a mesh over IPv4 and/or IPv6 between 6 DCs. Each machine gets a new tap ("vpn0") interface with some user-configurable "local" ipv4 and ipv6 (10.x.x.x / fd00:x::x), then they talk gently to each other :)

    Very easy to set up (a simple 8-line .conf file), reasonably low on resources, no impact on xfer speed...

    I'm also using PeerVPN; it works pretty well. Though I am considering using something else that supports multiple VPN connections. I have some servers that need to be connected to multiple VPN networks. I'm thinking about using tinc because it supports this. Though PeerVPN should also do this if I run the software twice.

  • @cochon said:

    @msg7086 said:
    I found it transfers slowly compared to direct connections

    By direct, do you mean unencrypted? I found my CPU usage dropped noticeably switching from point-to-point OpenVPN to Tinc, and switching Tinc to 'Cipher = camellia-128-cbc' gave another noticeable boost (and I'm talking low-spec, CPU-bound Atom/Bay Trail servers, no encryption acceleration).

    By direct I mean connecting to its public IP directly, via FTP etc. After switching to Tinc, I see tinc running with high CPU usage even on my i7-4770. The servers are i3-level, so not even that low end.

    I might give your suggestion a try when I have some time. Thanks for the info.

  • @CFarence said:
    I'm also using PeerVPN; it works pretty well. Though I am considering using something else that supports multiple VPN connections. I have some servers that need to be connected to multiple VPN networks. I'm thinking about using tinc because it supports this. Though PeerVPN should also do this if I run the software twice.

    Yep ;) once integrated into systemd, it's as easy as creating a "newVPNblah.conf" file and then running "systemctl enable peervpn@newVPNblah".
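
    PeerVPN doesn't ship a unit file, so that template is homemade; a sketch of what a peervpn@.service could look like (binary and config paths are assumptions):

        # /etc/systemd/system/peervpn@.service
        [Unit]
        Description=PeerVPN network %i
        After=network-online.target

        [Service]
        ExecStart=/usr/local/sbin/peervpn /etc/peervpn/%i.conf
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target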

  • YKM Member

    SoftEther
