TheLinuxBug said: I can understand why they are doing this, outbound bandwidth is what is expensive
Stop being so retarded, for fuck's sake. No, you cannot defend a provider who advertises a server as 250 Mbit unmetered but then has a hidden hard limit of 5 Mbit. Shove your fucking "2 cents" up your ass and just stop posting, ffs.
Luckily, it's not all that bad; this looks like a simple misconfiguration somewhere, something they might actually fix, rather than malicious false advertising and whatnot.
"Misconfiguration" - Both BHS and FR ARM servers are having the same issue. It's false advertisement. Period. I had this issue a while ago, opened multiple support ticket and OVH's incompetent staff team, constantly asking the same stupid questions over and over, was unable to help in any way.
CConner said: "Misconfiguration" - Both BHS and FR ARM servers are having the same issue
Nope. And? They use the same hardware regardless of location, so that doesn't preclude the misconfiguration hypothesis. Moreover, as we now see, their latest story is an "ARM driver problem", so it is indeed a misconfiguration, but also one they don't want to put in the effort to fix, or to refund anyone who isn't satisfied with a product that has this issue.
K4Y5 said: Feel free to join the conversation or retweet
I tweeted Octave about that Kimsufi network issue a while back, and while he did get support to contact me ASAP... it just ended up going in circles again and nothing got done.
What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...
@twain said:
What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...
Tried both Debian 9 and Ubuntu 16.04, same issue...
mvpp2 f10f0000.ethernet
Where am I supposed to find the driver used?
Picked up the Canada one (Montreal, I believe), the 2G ARM (cheapest one, $5.99/mo)...
DL/UL from/to HOTServers LLC (Montreal, QC) - 0.63 km away from OVH (or perhaps in OVH DC?):
# speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from OVH Hosting (54.39.x.x)...
Selecting best server based on latency...
Hosted by HOTServers LLC (Montreal, QC) [0.63 km]: 4.066 ms
Testing download speed........................................
Download: 558.62 Mbit/s
Testing upload speed..................................................
Upload: 152.64 Mbit/s
DL/UL from/to two Ottawa providers (~166 km from OVH DC):
# speedtest-cli --server 17396
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from OVH Hosting (54.39.x.x)...
Hosted by Bell Canada (Ottawa, ON) [166.04 km]: 26.691 ms
Testing download speed........................................
Download: 473.71 Mbit/s
Testing upload speed..................................................
Upload: 30.18 Mbit/s
# speedtest-cli --server 18556
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from OVH Hosting (54.39.x.x)...
Hosted by Rogers (Ottawa, Ontario) [166.04 km]: 60.513 ms
Testing download speed........................................
Download: 393.51 Mbit/s
Testing upload speed..................................................
Upload: 28.81 Mbit/s
DL/UL from/to Hivelocity Tampa:
# speedtest-cli --server 2137
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from OVH Hosting (54.39.x.x)...
Hosted by Hivelocity Hosting (Tampa, FL) [2101.15 km]: 43.886 ms
Testing download speed........................................
Download: 306.51 Mbit/s
Testing upload speed..................................................
Upload: 33.40 Mbit/s
====
I believe HOTServers is in OVH's DC - can anyone confirm? The IP was 198.50.237.8 and whois shows OVH.
This is not as bad a cap as the 5 Mbit/s that others were seeing for upload, but still nowhere close to 250 Mbit/s.
Seems fairly clear what's going on here with the upload speeds (if HOTServers is indeed in the OVH DC), and it doesn't appear to be related to any ARM hardware issue as OVH claims, but who knows, I guess? Outbound must be capped at an OVH edge device, I would think. It seems unlikely that 166 km of distance alone would drop the upload speed from 152 to 30 Mbit/s.
====
For some comparison, here is from Netcup to Ottawa and Tampa:
# speedtest-cli --server 18556
Retrieving speedtest.net configuration...
Testing from netcup GmbH (185.244.x.x)...
Retrieving speedtest.net server list...
Retrieving information for the selected server...
Hosted by Rogers (Ottawa, Ontario) [5966.25 km]: 129.439 ms
Testing download speed................................................................................
Download: 173.42 Mbit/s
Testing upload speed......................................................................................................
Upload: 135.79 Mbit/s
# speedtest-cli --server 2137
Retrieving speedtest.net configuration...
Testing from netcup GmbH (185.244.x.x)...
Retrieving speedtest.net server list...
Retrieving information for the selected server...
Hosted by Hivelocity Hosting (Tampa, FL) [7748.55 km]: 121.273 ms
Testing download speed................................................................................
Download: 129.34 Mbit/s
Testing upload speed......................................................................................................
Upload: 123.23 Mbit/s
Hotservers is in the OVH Montreal (Beauharnois) DC, yep.
Actually, a phucked-up networking driver might come into play by enforcing conservative measures whenever a TCP link is wrongly deemed "high latency" and such. It would be interesting to have a look at the driver, if only I could locate it.
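For anyone who wants to poke at that theory themselves, a few stock commands show what the TCP stack and the NIC driver are actually doing during a slow transfer (eth0 and the destination IP are placeholders, adjust to your box):
# which congestion control algorithm the kernel is using
sysctl net.ipv4.tcp_congestion_control
# offload features the driver advertises (TSO/GSO/GRO etc.)
ethtool -k eth0
# live cwnd and rtt of an ongoing TCP connection to a given host
ss -ti dst 203.0.113.1
If cwnd stays tiny or the rtt estimate looks wildly wrong while a transfer crawls, that would point at the stack/driver; if those look sane and the rate is still pinned, shaping somewhere upstream is the more likely culprit.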
@twain said:
What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...
Tried both Debian 9 and Ubuntu 16.04, same issue...
mvpp2 f10f0000.ethernet
Where am I supposed to find the driver used?
On Linux you can do something like this:
# ethtool -i ens2
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:02.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
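The output above is from a VPS with a virtio NIC, not from the ARM box; on the ARM box the same idea plus sysfs should tell you whether mvpp2 is loaded as a module or built into their kernel (eth0 is a placeholder for whatever the interface is called there):
# driver bound to the interface, resolved through sysfs
readlink -f /sys/class/net/eth0/device/driver
# path of the module file, if it was built as a module at all
modinfo -n mvpp2
# if modinfo errors out, the driver is most likely compiled into the kernel image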
I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I might get a refund if I requested one...
I think it's carefully worded to tell me that they won't change anything, but it also seems like an attempt not to add fuel to the fire by putting the blame on something that technically would not make sense (e.g. the drivers) and would therefore lead to further discussion ;-)
As for why they implement such useless, annoying crap, we will most likely never know. Apps that cause high traffic, like torrenting or scraping, usually use a high number of concurrent connections anyway, so this limit won't stop or help anything at all there.
Still... very cheap storage, so I'll most likely keep it anyway. As written before, for my use case I should be able to work around that odd limitation quite easily.
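Purely as an illustration of the per-flow (rather than total) nature of the cap, and not necessarily anyone's actual workaround: if each TCP connection tops out around 5 Mbit/s, pulling a file from the ARM over several connections at once gets you roughly N times that. Something like aria2c on the receiving end does this out of the box, assuming the ARM serves the file over HTTP with range-request support (the URL is a placeholder):
# open 8 connections to the same server and split the file across them
aria2c -x 8 -s 8 http://arm.example.com/backup.tar
The same logic is why torrent/scraper-style workloads, with their many parallel connections, barely notice this limit, as noted above.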
@Falzo said:
I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I might get a refund if I requested one...
I think it's carefully worded to tell me that they won't change anything, but it also seems like an attempt not to add fuel to the fire by putting the blame on something that technically would not make sense (e.g. the drivers) and would therefore lead to further discussion ;-)
As for why they implement such useless, annoying crap, we will most likely never know. Apps that cause high traffic, like torrenting or scraping, usually use a high number of concurrent connections anyway, so this limit won't stop or help anything at all there.
Still... very cheap storage, so I'll most likely keep it anyway. As written before, for my use case I should be able to work around that odd limitation quite easily.
Poor single-thread/flow performance might also result from a bad-quality or misconfigured switch, if not from a saturated link as well; bonding is quite limited on cheaper switches. Who knows what they use in this budget line.
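One way to tell a per-flow limit apart from a saturated uplink or a weak switch is to compare a single stream against several parallel ones; if eight streams get roughly eight times the throughput of one, the bottleneck is per-connection, not the pipe. This assumes an iperf3 server you control at the far end (the hostname is a placeholder):
# single TCP stream out of the ARM box
iperf3 -c iperf.example.com -t 10
# eight parallel TCP streams to the same host
iperf3 -c iperf.example.com -t 10 -P 8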
@Falzo said:
I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I might get a refund if I requested one...
Care to copy their original answer here (obfuscating anything private)? Many thanks in advance!
@Neoon said:
Even ARM => GRA => TINC => PL gets throttled... the fuck.
Seems to confirm a driver issue.
Try ARM => TINC => PL, making sure you use UDP transport only, and please report back.
I do not see the point, since GRA => PL maxes out the ports. So changing the protocol of tinc to UDP just takes my rsync TCP and puts it into UDP. Apparently iperf on a freshly installed Ubuntu just transferred at 1.60 Gbit/s.
That's the point, precisely. Forget about GRA. In my experience, there is no noticeable throttling when doing ARM => rest of the world if using UDP (hence the need to encapsulate TCP connections into UDP, e.g. with a tunnel or whatever). Someone previously suggested using other protocols, or IPv6 (which is unfortunately broken on these SyS ARM boxes).
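One concrete way to do that encapsulation (not necessarily what anyone here is running, just a minimal sketch): an OpenVPN point-to-point tunnel over UDP with a static key, so every TCP flow leaving the ARM box looks like UDP on the wire. The addresses and key path are placeholders:
# once, on either end, then copy the key to both machines
openvpn --genkey --secret /etc/openvpn/p2p.key
# on the ARM box (listens on UDP 1194)
openvpn --dev tun0 --proto udp --port 1194 --ifconfig 10.9.0.1 10.9.0.2 --secret /etc/openvpn/p2p.key
# on the remote box
openvpn --remote arm.example.com 1194 --dev tun0 --proto udp --ifconfig 10.9.0.2 10.9.0.1 --secret /etc/openvpn/p2p.key
# then point rsync/scp/wget at 10.9.0.1 instead of the public IP
As the wget comparison further down shows, the crypto overhead can itself become the ceiling on these little ARM CPUs, so a lighter cipher (or no tunnel encryption at all, where acceptable) may matter as much as the encapsulation.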
@Shot2 - install and run hwinfo, which will reveal the path to the driver. You can apt install hwinfo on Ubuntu; it's likely in the standard Debian repo as well. Anyway, I believe the path is /bus/platform/drivers/mvpp2
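That path lives under /sys, so you can confirm it without installing anything; if the mvpp2 platform driver is registered, the directory below exists and lists the devices bound to it (f10f0000.ethernet, matching what ethtool already reported):
# devices currently bound to the mvpp2 platform driver
ls /sys/bus/platform/drivers/mvpp2/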
Poor single-thread/flow performance might also result from a bad-quality or misconfigured switch, if not from a saturated link as well; bonding is quite limited on cheaper switches. Who knows what they use in this budget line.
I agree, and I do think that's some kind of poor man's QoS scheme running on their switches to avoid congestion. Maybe they can't even turn it off without replacing the switches altogether...
@Neoon said:
No idea, but with Ubuntu it works, it works too well, but why?
I used the same command from BHS to online.net and he.net with a cap of 5 Mbit/s to both. This is with Ubuntu installed. Did you do a reinstall recently?
Maybe someone is interested in the Unix Benchmark result of the So You Start Storage ARM-2T.
I had decent speeds between a Kimsufi in BHS and the ARM in BHS.
Can anyone confirm slow speeds between an ARM in France and a Kimsufi in BHS?
Thanks!
solved!
It seems you are downloading on your ARM box? It's downloading FROM the ARM that is slow...
"Misconfiguration" - Both BHS and FR ARM servers are having the same issue. It's false advertisement. Period. I had this issue a while ago, opened multiple support ticket and OVH's incompetent staff team, constantly asking the same stupid questions over and over, was unable to help in any way.
Nope.
And? They use the same hardware regardless of location, so that doesn't preclude the misconfiguration hypothesis. Moreover, as we see their latest story is "ARM driver problem", so that's not only a misconfiguration indeed, but also one where they don't want to put in effort to fix it, or to refund anyone who's not satisfied with the product having this issue.
So, I tested the following:
ARM => GRA (GATEWAY) => PL
In theory, things should not fuck up, since it's a proxy, right?
GRA => PL: 11.19 MB/s
ARM => GRA: 11.36 MB/s
But...
ARM => GRA => PL: 562.39 kB/s
edit:
Even ARM => GRA => TINC => PL gets throttled... the fuck.
Seems to confirm a driver issue.
Nope, I just get the same answer I already had: mvpp2. It doesn't point at the driver files/source.
You're right. I checked the kernel source, and grepping through it doesn't match anything with mvvp. Maybe it's a binary, non-open-source driver?
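For what it's worth, mainline kernels do carry a Marvell PPv2 driver under drivers/net/ethernet/marvell/ (config symbol CONFIG_MVPP2), so it isn't inherently a closed binary blob; whether OVH's customized kernel uses that code or something of their own is another question. Two quick checks, one against a kernel tree and one against the running system:
# in a kernel source tree: files mentioning the driver
grep -rl mvpp2 drivers/net/ethernet/marvell/
# on the box itself: is there a loadable module for it at all?
find /lib/modules/$(uname -r) -name 'mvpp2*'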
No idea, but with Ubuntu it works, it works too well, but why?
So, on Ubuntu there is no throttle and everything works like it should? Or am I interpreting that the wrong way?
ARM Gravelines -> direct TCP (11.0 ms, 7 hops) -> VMHaus London, wget: 560 KB/s
ARM Gravelines -> TCP-in-UDP encrypted tunnel (11.3 ms, 1 hop) -> VMHaus London, wget: 5 MB/s (and that's strongly limited by the encryption, maxing out the ARM CPU)
Gimme back my TCP connectionz to ze outside woorld pweeeeaaaase
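A crypto-free way to double-check the same thing, assuming an iperf3 server you control outside OVH (the hostname is a placeholder):
# plain TCP from the ARM box outward
iperf3 -c iperf.example.com -t 10
# UDP at a 200 Mbit/s target rate for comparison
iperf3 -c iperf.example.com -u -b 200M -t 10
If the UDP run lands near the requested rate while the TCP run is stuck in the single-digit Mbit/s range, that's the TCP-only throttle in isolation, with no tunnel or cipher in the way.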
Try an HE.net IPv6 tunnel; it's also masked as non-TCP to the outside, but requires no encryption.
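For reference, the usual tunnelbroker.net setup boils down to a 6in4 tunnel (IP protocol 41, so neither TCP nor UDP as far as any shaper is concerned); the values below are placeholders for what HE's tunnel details page gives you:
# 203.0.113.10 = HE tunnel server IPv4, 198.51.100.20 = the ARM box's IPv4
ip tunnel add he-ipv6 mode sit remote 203.0.113.10 local 198.51.100.20 ttl 255
ip link set he-ipv6 up
# client IPv6 address from the tunnel details page
ip -6 addr add 2001:db8:1234::2/64 dev he-ipv6
ip -6 route add ::/0 dev he-ipv6
Given the reply below about the tunnel never coming up, a kernel missing the sit module would be one mundane explanation.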
Which exit point is best?
Anyway, my point was not to get top speeds (I'll cancel that crap as soon as I get a pathetic answer from their support), just to give some more information regarding the TCP-only throttling.
edit: never mind. The tunnel never gets created. Something must be fucked up in their customized-kernel-joke v6 stack too.
edit2: got the support answer.
I reported 1/ the contradictory claims on their webpage, 2/ the lack of the advertised failover IPs and IPv6, and 3/ the very poor outbound bandwidth compared to the claimed 250 Mbit/s. Their answer: "we don't provide failover IPs with arm, cheers" - what a fucking joke is that?!