Slow transmission speeds at high ping
Hi,
I have an issue that I can't fix by myself.
The problem is that at "high" latency, I get slow transfer speeds via rsync/scp/sftp.
So, let's say I have three servers in "country A," "country B," and "country C." All of them have 1G connections.
"Country A" > "Country C" has around 16 ms ping, and when I transferred a 1 TB file, I got ~600 Mbps. Iperf between A and C shows around 900-1000 Mbps.
So no problem at all between A & C.
"Country B" > "Country C" has around 50 ms ping, and when I transfer the same 1 TB file, I can reach only ~250 Mbps.
BUT! Iperf between B & C can reach the same 900-1000 Mbps.
If I only had servers B and C, I might think, "Hey, it may be a disk problem or something else," but A > C reaches 600 Mbps.
Okay, for test purposes, I grabbed a server on another ISP in country B and tried to upload from "Country B" to "Country B2" (latency around 5 ms), and my 1 TB file upload reached 1 Gbps easily.
So. My question is:
1) What can I do? How can I get more than 250 Mbps from "B" to "C"?
2) How do people use storage VPSs that are so far away from them, like from Asia to the US/EU?
3) I tried to do some jump hosts but had no luck (the speed stayed the same, around 250 Mbps. Maybe it's the wrong location, I don't know).
4) If routing is shit (as Google answers say), then how could iperf3 reach 1G between B and C?
Notes:
BBR is enabled on A, B, and C.
Iperf3 can reach full 1G between A, B, and C.
There seems to be some TCP problem (maybe too much time for an answer due to latency).
I think I should try setting up a GRE tunnel between B and C and running rsync over it (so the WAN path sees GRE packets instead of raw TCP, in case something on the path interferes with TCP flows), but I haven't tried it yet.
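Rough back-of-the-envelope math for the TCP theory (my numbers, assuming a single stream limited by one window in flight):
throughput ≈ window / RTT
~250 Mbps at 50 ms → effective window ≈ 31.25 MB/s × 0.05 s ≈ 1.6 MB
1 Gbps at 50 ms → needed window ≈ 125 MB/s × 0.05 s ≈ 6.3 MB
So the transfer acts as if only ~1.6 MB is ever in flight; iperf3 with big TCP windows doesn't have that cap, which would fit what I'm seeing.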
Any other ideas?
Comments
might be helpful
https://cloud.google.com/compute/docs/networking/tcp-optimization-for-network-performance-in-gcp-and-hybrid
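The usual knobs from that kind of guide are the TCP buffer ceilings; a minimal sketch (values are illustrative, sized for roughly a 6 MB bandwidth-delay product, not copied from the page):
sysctl -q -w net.core.rmem_max=16777216
sysctl -q -w net.core.wmem_max=16777216
sysctl -q -w net.ipv4.tcp_rmem="4096 131072 16777216"
sysctl -q -w net.ipv4.tcp_wmem="4096 131072 16777216"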
Yep, thanks!
Forgot to mention that A, B, and C have some custom tuning parameters in sysctl.conf.
I don't know exactly what those parameters do (I just Googled them a couple of years ago), but maybe I should at least try removing them.
Have you removed them? Are they the culprit?
Nah, not yet. I'll try removing them on the weekend.
Until the weekend, I'll try to understand what those parameters do.
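A read-only way to see what's currently in effect while you research them:
sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem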
How is Country B to Country A?
As for your questions:
1) Not sure, though do some more benchmarks [like B -> A], and check whether there might just be more limitations.
2) I have a backup server in the US, while my server is in the EU [Germany]. There is around 150 ms latency between them. I do not care about the speeds, as long as the backups are successful. I don't expect to reach X Gbps on shared spinning rust that's basically on the other side of the globe.
3) Very possible, especially if your stuff needs low latency and high speeds.
4) iperf3 is meh at best. When we check speeds, we usually ask clients to use speedtest.net, as it gives more real-world results, though it's still not perfect. [Not the python version, the official one]
In general, I'd say one thing: artificial benchmarks ≠ real-life performance.
Obligatory AI answer from Claude:
Sure, I tested B > A.
There are no problems at all: ~600 Mbps.
So only B > C has slow speeds.
How many ms between them?
B > A = 27 ms (around 600 Mbps)
B > C = 47 ms (around 250 Mbps)
A > C = 18 ms (around 600 Mbps)
In that case, I'd personally run a quick speedtest.net test, preferably against the host's network. You can find the IDs of speedtest servers by hovering over the server name directly on the speedtest.net website and then looking at the link it leads to.
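Then, with the official Ookla CLI (the server ID here is a made-up example):
speedtest --server-id=1234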

I recommend using rclone + webdav - in my case, switching to rclone significantly improved transfer speed.
https://lowendtalk.com/discussion/comment/4157756#Comment_4157756
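A minimal sketch, assuming a WebDAV server is already running on C (the remote name, URL, and paths are placeholders):
rclone config create serverc webdav url=https://serverc.example.com/dav vendor=other
rclone copy /data/bigfile.img serverc:backups --progress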
Packet loss?
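e.g., running something like this from B would show per-hop loss over 100 probes (hostname is a placeholder):
mtr --report --report-cycles 100 serverc.example.com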
maybe bad routing to the countries
sysctl -q -w net.core.default_qdisc=fq
sysctl -q -w net.ipv4.tcp_congestion_control=bbr
try running this!
As I wrote in my first post, BBR is enabled
There is no packet loss between B and C.
Tried removing the sysctl.conf params. The speed is the same.
I tried replacing them with the params from the @tototo thread. Same result.
Helpful thread, thanks!
I tried all the tweaks, so I should go with rclone.
But why SCP/rsync is still so slow remains a mystery.
AFAIK scp has high overhead (encryption, authenticity checking, etc.) compared to other protocols.
Have you tried raising/lowering the thread count? Try it with a single thread and raise it until the speed won't rise any more.
Try rsync as already mentioned here
Sorry, I do not understand this part.
If I'm correct, rsync cannot "multithread" a single file.
At least I can try uploading several files at the same time, just out of interest, but that won't solve the problem with "one-file-transfer-via-rsync."
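Something naive like this, maybe (host and paths are made up), four rsyncs at a time:
ls /data/*.img | xargs -P 4 -I{} rsync -a {} user@serverc:/backup/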
Or are you speaking about something else?
i mean scp if you really want to stay with it
Without knowing what your IO is at the DST, it's hard to tell what the bottleneck is. If you have slow writes at the DST, that may be your problem. Check your max write throughput on your DST server. iperf barely touches your disk, if at all, so it's not a good test for that.
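A rough way to measure sequential write speed on the DST box (the path is a placeholder; oflag=direct bypasses the page cache so you measure the disk, not RAM):
dd if=/dev/zero of=/backup/ddtest.bin bs=1M count=4096 oflag=direct
rm /backup/ddtest.bin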
Also note that "QoS provided by public backbones can depend on the ISP's configuration and agreements with other networks, but it is not uniformly applied across the public internet."
I think latency, potentially dropped/corrupted packets, and QoS may be the issue.
It might be worth looking into TCP window scaling or adjusting your MTU settings. Have you thought about testing different transfer protocols?
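Quick checks for both (interface name and host are placeholders):
sysctl net.ipv4.tcp_window_scaling   # 1 = window scaling enabled
ip link show dev eth0                # look for the mtu value
ping -M do -s 1472 serverc.example.com   # 1472 + 28 bytes of headers = 1500; fails if the path MTU is smaller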
Yeah, I'm thinking about @tototo's post about rclone + webdav.
But anyway, if the same protocol/tool works great over short distances, why does it work badly over long distances? That is the question.
@SashkaPro can you try sftp/rsync over direct WireGuard? So the inter-country traffic would be over UDP. Just a stupid idea, but I'm wondering.
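A minimal sketch of the B side, in case it helps (keys, IPs, and the endpoint are all placeholders; note the WAN would see UDP, but TCP still runs inside the tunnel):
# /etc/wireguard/wg0.conf on B
[Interface]
PrivateKey = <B-private-key>
Address = 10.9.0.1/24

[Peer]
PublicKey = <C-public-key>
Endpoint = serverc.example.com:51820
AllowedIPs = 10.9.0.2/32
Then: wg-quick up wg0, and rsync to user@10.9.0.2 instead of the public IP.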
Because ssh is a very chatty protocol: it has a lot of small "back and forth" packets that suffer from the latency. It was never designed to move files over long distances.
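One way to see it directly is to push the same file over raw TCP and over ssh on the same path; if raw TCP is much faster, the network is fine and ssh is the bottleneck (hosts, port, and filename are placeholders; assumes pv and a modern nc on both ends):
nc -l 5001 > /dev/null                         # on C: discard whatever arrives on port 5001
pv bigfile.img | nc serverc.example.com 5001   # on B: raw TCP, pv shows live throughput
pv bigfile.img | ssh user@serverc.example.com 'cat > /dev/null'   # on B: same file over ssh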