Slow transmission speeds at high ping

SashkaPro Member

Hi,
I have an issue that I can't fix by myself.

The problem is that at "high" latency, I get slow transfer speeds via rsync/scp/sftp.

So, let's say I have three servers in "country A," "country B," and "country C." All of them have 1G connections.
"Country A" > "Country C" has around 16ms ping, and when I transferred a 1 TB file, I got ~600Mbps. Iperf between A and C shows around 900-1000 Mbps.
So no problem at all between A & C.

"Country B" > "Country C" has around 50 ms of latency, and when I transfer the same 1 TB file, I can reach only ~250Mbps.
BUT! Iperf between B & C can reach the same 900-1000 Mbps.

If I only had servers B and C, I might think, "Hey, it may be a disk problem or something else," but A > C reaches 600Mbps.

Okay, for test purposes, I grabbed a server from another ISP in country B and tried to upload from "Country B" to "Country B2" (latency around 5 ms), and my 1 TB file upload easily reached 1Gbps.

So. My question is:
1) What to do? How can more than 250 Mbps be reached from "B" to "C"?
2) How do people use storage VPSs that are so far away from them, like from Asia to the US/EU?
3) I tried to do some jump hosts but had no luck (the speed stayed the same, around 250 Mbps. Maybe it's the wrong location, I don't know).
4) If routing is shit (as Google answers say), then how could iperf3 reach 1G between B and C?

Notes:
BBR is enabled on A, B, and C.
Iperf3 can reach full 1G between A, B, and C.

There seems to be some TCP problem (maybe ACKs take too long to come back because of the latency).

I think I should try to do GRE between B and C and rsync over GRE (so I can avoid the TCP layer on the WAN), but I haven't tried it yet.
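
For reference, a minimal sketch of what that GRE setup might look like (the public and tunnel IPs below are placeholders; note that the rsync TCP session inside the tunnel would still see the full B-to-C latency):

# on server B (assuming B's public IP is 203.0.113.10 and C's is 198.51.100.20)
ip tunnel add gre1 mode gre local 203.0.113.10 remote 198.51.100.20 ttl 255
ip addr add 10.10.10.1/30 dev gre1
ip link set gre1 up

# on server C (mirror of the above)
ip tunnel add gre1 mode gre local 198.51.100.20 remote 203.0.113.10 ttl 255
ip addr add 10.10.10.2/30 dev gre1
ip link set gre1 up

# then transfer over the tunnel address
rsync -avP bigfile.img root@10.10.10.2:/data/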

Any other ideas?

Comments

  • SashkaPro Member
    edited April 14

    Yep, thanks!
    Forgot to mention that A, B, and C have

    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    net.core.wmem_max = 16777216
    net.core.rmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 87380 16777216
    

    In sysctl.conf.
    I don't know exactly what those parameters do (I just Googled them a couple of years ago), but maybe I should at least try removing them.
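
    For what it's worth, a rough way to sanity-check those buffer sizes against the B > C path (assuming the ~47 ms RTT and 1G links; the destination IP is a placeholder):

    # needed window ~= bandwidth x RTT (the BDP):
    # 1 Gbit/s = 125 MB/s; 125 MB/s x 0.047 s ~= 5.9 MB,
    # so the 16 MB rmem/wmem ceilings above should already be more than enough.
    # watch what the kernel actually uses during a transfer (run on the sender):
    ss -tin dst 198.51.100.20
    # if cwnd x mss in that output stays far below ~6 MB, the window
    # (or the application on top of it) is the limiter, not the buffers.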

  • @SashkaPro said:

    Yep, thanks!
    Forgot to mention that A, B, and C have

    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    net.core.wmem_max = 16777216
    net.core.rmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 87380 16777216
    

    In sysctl.conf.
    I don't know exactly what those parameters do (I just Googled them a couple of years ago), but maybe I should at least try removing them.

    Have you removed them? Is it the culprit?

  • @Motion3549 said: Have you removed them? Is it the culprit?

    Nah, not yet. I will try to remove them over the weekend.
    Until then, I will try to understand what those parameters actually do. :)

  • skorupion Member, Host Rep

    How is Country B to Country A?
    As for your questions:

    1) What to do? How can more than 250 Mbps be reached from "B" to "C"?

    A) Not sure, though do some more benchmarks [like B->A], and check if there might be just more limitations.

    2) How do people use storage VPSs that are so far away from them, like from Asia to the US/EU?

    A) I have a backup server in the US, while my server is in the EU [Germany]. There is around 150ms of latency between them. I do not care about the speeds, as long as the backups are successful. I don't expect to reach X Gbps on shared spinning rust that's basically on the other side of the globe.

    3) I tried to do some jump hosts but had no luck (the speed stayed the same, around 250 Mbps. Maybe it's the wrong location, I don't know).

    A) Very possible, especially if your stuff needs low latency and high speeds.

    4) If routing is shit (as Google answers say), then how could iperf3 reach 1G between B and C?

    A) iperf3 is meh at best. When we check speeds, we usually ask clients to use speedtest.net, as it gives more real-world results, though still not perfect. [Not the Python version, the official one.]

    In general, I'd say one thing: artificial benchmarks ≠ real-life performance.

    Obligatory AI answer from Claude:

    Why Higher Latency Dramatically Impacts File Transfer (But Not iperf3)

    This situation illustrates the fundamental interaction between TCP, latency, and throughput - often called the "long fat network" problem.

    The TCP Window and BDP Concept

    TCP uses a "window" of bytes that can be in flight (transmitted but not yet acknowledged). The maximum theoretical throughput is limited by:

    Max Throughput = TCP Window Size / Round Trip Time (RTT)

    This is the bandwidth-delay product (BDP) relationship. With his current settings:

    For 16ms RTT (A→C): 16,777,216 bytes / 0.016 seconds = 1048.6 MB/s (~8.4 Gbps)
    For 50ms RTT (B→C): 16,777,216 bytes / 0.050 seconds = 335.5 MB/s (~2.7 Gbps)

    Theoretically, his window settings should allow for full throughput, but in practice, several factors interfere.
    Why File Transfers Are More Affected Than iperf3

    1. Protocol Behavior Differences:
       - iperf3 just pushes bytes and can use multiple parallel TCP streams (-P)
       - rsync/scp/sftp use a single TCP stream with additional overhead

    2. TCP Slow Start and Congestion Control:
       - TCP starts with a small window and gradually increases it
       - Higher latency means it takes longer to ramp up to full speed
       - BBR helps but doesn't completely solve this for large file transfers

    3. Application vs. Raw Testing:
       - iperf3 optimizes for testing raw throughput (zero-filled memory buffers)
       - File transfers involve disk reads, encryption, protocol handshakes, etc.
       - These additional operations introduce micro-delays that compound with latency

    4. Interactive Protocol Nature:
       - SSH-based transfers involve many small back-and-forth exchanges
       - Each of these small exchanges is hit by the full latency penalty
       - These "chatty" protocols suffer more as latency increases

    5. Single Stream Limitations (see the iperf3 sketch after this answer):
       - A single TCP stream can't fully utilize bandwidth on high-latency links
       - The RTT becomes the limiting factor

    This explains why the same connection can have such different performance characteristics between iperf3 and actual file transfers, and why the gap widens as latency increases.

    It's not that his tuning is wrong - his current settings are reasonable. The fundamental issue is that SSH-based file transfers become increasingly inefficient as latency grows, while iperf3 (being designed specifically for testing) continues to perform well through its optimized approach.
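
    As a quick check of the single-stream point above, you could compare one stream against several with iperf3 between B and C (the hostname is a placeholder; -P sets the number of parallel streams):

    # single TCP stream (the iperf3 default)
    iperf3 -c server-c.example.com -t 30 -P 1
    # eight parallel TCP streams
    iperf3 -c server-c.example.com -t 30 -P 8

    If eight streams together get much closer to 1G than one stream does, the path is fine and the single TCP/SSH session is the limit.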
    
  • @skorupion said: A) Not sure, though do some more benchmarks [like B->A], and check if there might be just more limitations.

    Sure, I tested B > A.
    There are no problems at all: around 600 Mbps.

    So only B > C has slow speeds.

  • skorupion Member, Host Rep

    @SashkaPro said:

    @skorupion said: A) Not sure, though do some more benchmarks [like B->A], and check if there might be just more limitations.

    Sure, I tested B > A.
    There are no problems at all: around 600 Mbps.

    So only B > C has slow speeds.

    How many ms between them?

    Thanked by: nghialele
  • @skorupion said: How many ms between them?

    B > A = 27 ms (around 600 Mbps)
    B > C = 47 ms (around 250 Mbps)
    A > C = 18 ms (around 600 Mbps)

  • skorupion Member, Host Rep

    @SashkaPro said:

    @skorupion said: How many ms between them?

    B > A = 27 ms (around 600 Mbps)
    B > C = 47 ms (around 250 Mbps)
    A > C = 18 ms (around 600 Mbps)

    In that case, I'd personally run a quick speedtest.net test, preferably against the host's own server. You can find the IDs of speedtest servers by hovering over a server name on the speedtest.net website and looking at the link it leads to.
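
    A sketch of how that could look with the official Ookla CLI, assuming you found a server ID of 12345 for the host (the ID is a placeholder):

    # list nearby servers, then test against a specific one
    speedtest --servers
    speedtest --server-id=12345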

  • tototo Member

    @SashkaPro said: The problem is that at "high" latency, I get slow transfer speeds via rsync/scp/sftp.

    I recommend using rclone + webdav - in my case, switching to rclone significantly improved transfer speed.
    https://lowendtalk.com/discussion/comment/4157756#Comment_4157756
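
    A minimal sketch of that approach, assuming a WebDAV remote called "remote" has already been set up with rclone config (names and paths are placeholders):

    # several parallel transfers help over high-latency links when copying many files;
    # --multi-thread-streams splits large files into parallel streams where the backend supports it
    rclone copy /data/backups remote:backups --transfers 8 --multi-thread-streams 4 --progress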

    Thanked by: SashkaPro
  • Packet loss?
    Maybe bad routing to the countries?
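
    A quick way to check both at once (the destination IP is a placeholder):

    # 100 probes in report mode; shows per-hop loss and latency along the route
    mtr -rwc 100 198.51.100.20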

  • sysctl -q -w net.core.default_qdisc=fq
    sysctl -q -w net.ipv4.tcp_congestion_control=bbr

    try running this!

  • @SillyGoose said:
    sysctl -q -w net.core.default_qdisc=fq
    sysctl -q -w net.ipv4.tcp_congestion_control=bbr

    try running this!

    As I wrote in my first post, BBR is enabled :)

    @gbzret4d said: Packet loss?

    There is no packet loss between B and C.

    @Motion3549 said: Have you removed them? Is it the culprit?

    I tried removing the sysctl.conf params. Speed is the same.
    I tried replacing them with the params from the @tototo thread. The same. :(

    @tototo said: I recommend using rclone + webdav - in my case, switching to rclone significantly improved transfer speed.

    Helpful thread, thanks!
    I tried all the tweaks, so I should go with rclone.

    But anyway, why scp/rsync is still so slow remains a mystery.

  • AFAIK scp has high overhead (encryption, authenticity checking, etc.) compared to other protocols.
    Have you tried raising/lowering the threads? Try it with a single thread and raise it until the speed won't rise any more.
    Try rsync, as already mentioned here.
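
    If the encryption overhead does turn out to matter, one option is forcing a lighter cipher (file, user, and host are placeholders; aes128-gcm@openssh.com is a standard OpenSSH cipher):

    scp -c aes128-gcm@openssh.com bigfile.img user@server-c:/data/
    # or the rsync equivalent
    rsync -avP -e "ssh -c aes128-gcm@openssh.com" bigfile.img user@server-c:/data/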

  • @gbzret4d said: Have you tried raising/lowering the threads? Try it with a single thread and raise it until the speed won't rise any more.

    Try rsync, as already mentioned here.

    Sorry, I do not understand this part.
    If I am correct, rsync cannot "multithread" one file.
    At least I can try uploading the files at the same time, just for interest, but that won't solve the problem with "one-file-transfer-via-rsync."

    Or are you speaking about something else?

  • gbzret4d Member
    edited April 14

    @SashkaPro said:

    @gbzret4d said: Have you tried raising/lowering the threads? Try it with a single thread and raise it until the speed won't rise any more.

    Try rsync, as already mentioned here.

    Sorry, I do not understand this part.
    If I am correct, rsync cannot "multithread" one file.
    At least I can try uploading the files at the same time, just for interest, but that won't solve the problem with "one-file-transfer-via-rsync."

    Or are you speaking about something else?

    I mean scp, if you really want to stay with it.

  • CharityHost_org Member, Patron Provider
    edited April 15

    Without knowing what your IO is at the DST, it's hard to tell what the bottleneck is. If you have slow writes at the DST, then that may be your problem. Check your max write throughput at your DST server. iperf barely touches your disk, if at all, so it's not a good test for this.

    Also note: "QoS provided by public backbones can depend on the ISP's configuration and agreements with other networks, but it is not uniformly applied across the public internet."

    I think latency, potential dropped/errored packets, and QoS may be the issue.
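
    A rough way to check the write side on the DST box (the test path is a placeholder; conv=fdatasync makes dd wait until the data has actually hit the disk):

    dd if=/dev/zero of=/data/ddtest.bin bs=1M count=4096 conv=fdatasync
    rm /data/ddtest.bin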

  • It might be worth looking into TCP window scaling or adjusting your MTU settings. Have you thought about testing some different transfer protocols?
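
    For the MTU part, a quick path-MTU probe toward the destination could look like this (the hostname is a placeholder; 1472 bytes of payload plus 28 bytes of headers equals a standard 1500 MTU):

    # -M do sets the don't-fragment bit; shrink -s until replies come back
    ping -M do -s 1472 -c 4 server-c.example.com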

  • @greenhost_cloud said: Have you thought about testing some different transfer protocols?

    Yeah, I'm thinking about @tototo's post about rclone + webdav.
    But anyway, if the same protocol/tool works great over short distances, why does it work badly over long distances? That is the question.

  • @SashkaPro Can you try sftp/rsync over direct WireGuard? That way the inter-country hop would be over UDP. Just a stupid thought, wondering.
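
    A minimal WireGuard sketch for that test, assuming keys have already been generated with wg genkey (all addresses, ports, and keys below are placeholders; the TCP session inside the tunnel still sees the full RTT, so this mainly tests whether the path treats plain TCP/SSH flows differently):

    # /etc/wireguard/wg0.conf on server B
    [Interface]
    PrivateKey = <B-private-key>
    Address = 10.20.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <C-public-key>
    Endpoint = 198.51.100.20:51820
    AllowedIPs = 10.20.0.2/32

    Mirror the config on server C, then bring the tunnel up and transfer over the tunnel address:

    wg-quick up wg0
    rsync -avP bigfile.img user@10.20.0.2:/data/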

  • rcy026 Member

    @SashkaPro said:

    @greenhost_cloud said: Have you thought about testing some different transfer protocols?

    Yeah, I'm thinking about @tototo's post about rclone + webdav.
    But anyway, if the same protocol/tool works great over short distances, why does it work badly over long distances? That is the question.

    Because ssh is a very chatty protocol: it has a lot of small back-and-forth packets that suffer from the latency. It was never designed to move files over long distances.
