Anyone know how to SSH Tunnel like this?

pubcrawler Banned
edited December 2012 in General

Have two remote sites where the connectivity between them sucks. Both are dedicated servers on gigabit ports. Speed between them is 2MB/s.

Needing to move a few hundred gigabytes around, so it will take forever and a day.

Have a 3rd machine with good speed to both sites. Thinking about using that 3rd machine (a VPS) just to route traffic between the two ends.

Wanting to do this:

1.1.1.1 (new server) ----> 8.8.8.8 (VPS in middle) ---> 2.2.2.2 (remote server)

Basically just use the VPS in the middle to route traffic in and out of for rsync.

Anyone have a clue how to get this to work with rsync and SSH?


Comments

  • How about :

    1.1.1.1 rsync to 8.8.8.8

    8.8.8.8 rsync to 2.2.2.2

    Or, 8.8.8.8 download the file from 1.1.1.1
    2.2.2.2 download from 8.8.8.8
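
    Spelled out in shell, that staging approach would be roughly the following (paths illustrative, and it does need enough disk on 8.8.8.8 to hold the data):

    # run on the middle VPS: pull from A, then push to B
    rsync -av user@1.1.1.1:/path/to/source/ /tmp/staging/
    rsync -av /tmp/staging/ user@2.2.2.2:/path/to/dest/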

  • Set up OpenVPN, assign them both private IPs, and then use said private IPs.

    Easiest way to go about it.
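
    A rough sketch of one way to wire that up (the 10.8.0.x range, cert file names and users are illustrative; assumes the provider has TUN/TAP enabled on the VPS and that certs were already generated, e.g. with easy-rsa). The client-to-client directive is what makes the VPS relay packets between the two dedis:

    # server.conf on the VPS (8.8.8.8)
    port 1194
    proto udp
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh.pem
    server 10.8.0.0 255.255.255.0
    client-to-client
    keepalive 10 60

    # client.conf on each dedi (swap in that dedi's own cert/key)
    client
    dev tun
    proto udp
    remote 8.8.8.8 1194
    ca ca.crt
    cert dedi1.crt
    key dedi1.key

    # then rsync straight to the other dedi's tunnel address, e.g.
    rsync -av /path/to/source/ user@10.8.0.6:/path/to/dest/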

  • Is the VPS KVM? If yes, you could set up GRE tunnels dedi1-VPS and dedi2-VPS, then add some routes and voila.
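
    For what it's worth, a hedged sketch of what that might look like (addresses from the original post, 10.10.x.x subnets made up; needs a KVM or dedicated middle box with forwarding enabled, so not an option on an unprivileged OpenVZ VPS):

    # on dedi1 (1.1.1.1): a GRE tunnel to the VPS, with a small subnet on it
    ip tunnel add gre1 mode gre local 1.1.1.1 remote 8.8.8.8 ttl 255
    ip addr add 10.10.1.1/30 dev gre1
    ip link set gre1 up
    ip route add 10.10.2.0/30 dev gre1

    # on the VPS (8.8.8.8): mirror-image tunnels to both dedis, plus forwarding
    sysctl -w net.ipv4.ip_forward=1

    # dedi2 (2.2.2.2) gets the same treatment with the subnets swapped;
    # after that, rsync between the 10.10.x.x addresses as usual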

  • @ErawanArifNugroho, that won't work since the intermediate VPS doesn't have the disk space to store the files.

  • @rds100, this is OpenVZ and I am just a customer, no control of the server.

  • Well, we can use a VPN as Wintereise suggested :)

  • From machine a :
    ssh -L 2222:localhost:2222 machineb

    Then, from b:
    ssh -L 2222:localhost:22 machinec

    Replace :22 with the rsync port number if you're using rsync directly

    Now from machinea ssh localhost:2222 should get you to machinec

    Not sure this will give you any real speed boost though...

  • Seems like the right direction @tehdan. Didn't seem to work though....

    The OpenVPN approach I am unclear about, because I need the node in the middle as a sort of "relay", just to pass packets back and forth.

    Sucks that the route between A and C is all over HE's network. Lately I see way too many HE slowdowns, saturation, etc.

  • @pubcrawler - it will work just fine; I do it all the time to get to machines behind firewalls. However, the command "ssh localhost:2222" is a bit wrong; you probably want "ssh -p2222 localhost", which is more portable.

    A couple of things to remember

    Keep the sessions alive - the lazy way to do this is to ssh from a-b, then b-c in the same window, then leave top running in that window. More elegant solutions are available.

    The 2222 needs to be the same number throughout; anything over 1024 that's not in use is fine. The final :22 should be the destination service port, i.e. 22 for ssh or 873 for rsync.
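
    Putting the two hops together with rsync on top, a sketch (usernames and port 2222 are illustrative):

    # on A (1.1.1.1): forward local port 2222 to port 2222 on the middle VPS
    ssh -L 2222:localhost:2222 user@8.8.8.8

    # on the VPS (8.8.8.8), e.g. inside that session: forward its 2222 on to C's sshd
    ssh -L 2222:localhost:22 user@2.2.2.2

    # back on A: rsync rides the chained tunnel; "localhost" here is really C
    rsync -av -e "ssh -p 2222" /path/to/source/ user@localhost:/path/to/dest/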

  • About to plug an SSHFS thing together that should suffice... probably the easiest, best-known thing we fuss with all the time. Totally skipped my frazzled mind...

  • I predict the end result will be slower than just moving the files from dedi A to dedi B over the slow internet, but it doesn't hurt to try.
    My experience with scp-ing big data between servers on the same gigabit LAN is that there is quite a lot of overhead from the encryption / decryption.

  • Xinetd on the middle vps with redirect will do it.
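
    If it helps, a minimal sketch of that (service name, port 2222 and paths are made up):

    # /etc/xinetd.d/ssh-relay on the middle VPS
    service ssh-relay
    {
        type        = UNLISTED
        socket_type = stream
        protocol    = tcp
        port        = 2222
        wait        = no
        user        = nobody
        redirect    = 2.2.2.2 22
    }

    # reload xinetd, then from dedi A the VPS's port 2222 lands on dedi B's sshd:
    service xinetd restart
    rsync -av -e "ssh -p 2222" /path/to/source/ user@8.8.8.8:/path/to/dest/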

  • justinb Member
    edited December 2012
  • Good recommendation @Keith. Need to get up to speed on Xinetd, so putting that on my list of learning to-do's.

    @rds100, so far I have more than tripled speed with the SSHFS route. Still slow, but heading in a better direction. Instead of 2MB/s, it's up to 6-7MB/s.

    Server A to the VPS in the middle is capable of 19MB/s. The VPS in the middle to server B is capable of 40MB/s.

    Still more headroom, but I might just let it run and get some sleep :)

  • SSH, however, is slow. See if hooking up a HTTP server is possible.
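
    If plaintext is acceptable, something as crude as this might do (Python 2 era one-liner; paths illustrative, and you would still need to relay it through the VPS with one of the tricks above):

    # on the source box: serve the directory over HTTP
    cd /path/to/source && python -m SimpleHTTPServer 8080

    # on the destination: mirror it
    wget -r -np -nH http://1.1.1.1:8080/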

  • I'd rather it be slow than feeding data to the public switched internet data recorders :)

  • klikli Member
    edited December 2012

    @pubcrawler said: I'd rather it be slow than feeding data to the public switched internet data recorders :)

    HTTPS with both server and client certificates (?)
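
    Hand-waving a bit, the client side of that could look like the line below (assumes a CA plus server and client certs already exist, and that the web server on 1.1.1.1 requires a client cert, e.g. nginx with ssl_verify_client on):

    curl --cacert ca.crt --cert client.crt --key client.key \
         https://1.1.1.1/data.tar.gz -o data.tar.gz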

  • jar Patron Provider, Top Host, Veteran
    edited December 2012

    Would be my suggestion. I could probably be accused of overusing SSHFS, but I love it.

  • @jarland, I am a huge fan of SSHFS. Requires some command line tweaking here and there depending on the situation, but it works very well.

    Kind of funny mapping a remote drive to a VPS then mirroring that from another machine over rsync :) Yeah that's overhead, but it worked in a pinch with funky connection performance issues in the middle.

    Way easier than monkeying around with SSH tunneling and 3 different commands to perfect just to get the thing working.

    Thanks to @Francisco who kindled the idea by recommending NFS actually.
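
    For anyone following along, one plausible wiring of that setup (users, paths and mountpoint are illustrative):

    # on the VPS (8.8.8.8): mount server A's source dir over sshfs
    sshfs userA@1.1.1.1:/path/to/source /mnt/a-data

    # on server B (2.2.2.2): mirror it from the VPS over ordinary rsync+ssh
    rsync -av userV@8.8.8.8:/mnt/a-data/ /path/to/dest/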

  • Install a Squid proxy on the VPS and route rsync through it? http://superuser.com/questions/87827/how-to-force-the-rsync-to-use-my-proxy-settings
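
    Roughly, and only a sketch: assuming squid on the VPS listens on 8.8.8.8:3128 and allows CONNECT to port 873, and dedi B runs an rsync daemon exporting a (made-up) module called data:

    # on dedi A: point rsync's proxy support at the squid box, then push
    export RSYNC_PROXY=8.8.8.8:3128
    rsync -av /path/to/source/ rsync://2.2.2.2/data/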

  • Interesting hack there @NickW!

  • Similar to NickW's suggestion but not passing rsync in plaintext, you can proxy your SSH session through the middle VPS running squid, using corkscrew:

    In your ssh_config:

    Host foo
        ProxyCommand corkscrew proxy.domain 3128 %h %p

    If this is a one-time data move, rsync will incur additional overhead you don't need - particularly if you're moving a lot of small files. A tar piped via ssh will run quicker (assuming you have stable connectivity).
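
    With that Host entry in place, the transfer itself could be as simple as the line below (assumes a HostName line pointing foo at the destination dedi, and that squid on the VPS allows CONNECT to port 22, which a default config blocks):

    rsync -av /path/to/source/ foo:/path/to/dest/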

  • Fancy @craigb!

    I absolutely love the creative solutions (although might not be simplest to implement in a pinch).

  • TheLinuxBug Member
    edited December 2012

    How bout 'redir' ?
    apt-get install redir

    # redir
    usage:
            redir --lport=<n> --cport=<n> [options]
            redir --inetd --cport=<n>

            Options are:-
                    --lport=<n>             port to listen on
                    --laddr=IP              address of interface to listen on
                    --cport=<n>             port to connect to
                    --caddr=<host>          remote host to connect to
                    --inetd         run from inetd
                    --debug         output debugging info
                    --timeout=<n>   set timeout to n seconds
                    --syslog        log messages to syslog
                    --name=<str>    tag syslog messages with 'str'
                    --connect=<str> CONNECT string passed to proxy server
                                    Also used as service name for TCP wrappers
                    --bind_addr=IP  bind() outgoing IP to given addr
                    --ftp=<type>            redirect ftp connections
                            where type is either port, pasv, both
                    --transproxy    run in linux's transparent proxy mode
                    --bufsize=<octets>      size of the buffer
                    --max_bandwidth=<bps>   limit the bandwidth
                    --random_wait=<ms>      wait before each packet
                    --wait_in_out=<flag>    1 wait for in, 2 out, 3 in&out

            Version 2.2.1.
    

    (Run this on the middle server of course)

    Cheers!
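
    For this thread's setup, a hypothetical invocation: have redir listen on 2222 on the VPS and hand connections straight to dedi B's sshd:

    # on the middle VPS (8.8.8.8); leave it running in a screen session
    redir --lport=2222 --caddr=2.2.2.2 --cport=22

    # then from dedi A, connections to the VPS's 2222 actually land on B:
    rsync -av -e "ssh -p 2222" /path/to/source/ user@8.8.8.8:/path/to/dest/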

  • @TheLinuxBug, redir looks totally badass. Somehow it's new to me.

  • @pubcrawler in a pinch, try this one liner:

    tar zcvf - /path/to/source/file | ssh -T middlevps "ssh targetvps '(cd /tmp && tar zxvf - )'"

  • Could you split your source files and run like 20 separate rsync?
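
    If you did go that route, something along these lines might work (GNU xargs, 20 jobs at a time, splitting on top-level directories; all names illustrative):

    cd /path/to/source && ls | xargs -P20 -I{} rsync -a "{}" user@2.2.2.2:/path/to/dest/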

  • I could @herbyscrub, but that would be a nightmare and possibly problematic...

  • Lol yeah. I ran into this problem once, but I didn't want to tax my brain too much trying to figure it out.
