Comments
Maybe someone is interested in the UnixBench result of the So You Start Storage ARM-2T
I had decent speeds between a Kimsufi in BHS and the ARM in BHS.
Can anyone confirm slow speeds between the ARM in France and a Kimsufi in BHS?
Thanks !
solved!
It seems you are downloading to your ARM box? It's downloading FROM the ARM that is slow...
"Misconfiguration" - Both the BHS and FR ARM servers have the same issue. It's false advertising. Period. I had this issue a while ago, opened multiple support tickets, and OVH's incompetent staff, constantly asking the same stupid questions over and over, were unable to help in any way.
Nope.
And? They use the same hardware regardless of location, so that doesn't preclude the misconfiguration hypothesis. Moreover, as we see their latest story is "ARM driver problem", so that's not only a misconfiguration indeed, but also one where they don't want to put in effort to fix it, or to refund anyone who's not satisfied with the product having this issue.
I tweeted Octave about that Kimsufi network issue a while back, and while he did get support to contact me ASAP... it just ended up going in circles again and nothing got done.
What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...
Tried both Debian 9 and Ubuntu 16.04, same issue...
mvpp2 f10f0000.ethernet
where am I supposed to find the driver used?
Picked up the Canada one (Montreal I believe) 2G ARM (cheapest one, $5.99/mo)...
DL/UL from/to HOTServers LLC (Montreal, QC) - 0.63 km away from OVH (or perhaps in OVH DC?):
DL/UL from/to two Ottawa providers (~166 km from OVH DC)
DL/UL from/to Hivelocity Tampa:
====
Believe HOTServers is in OVH's DC?? Can anyone confirm? The IP was 198.50.237.8 and whois shows OVH.
This is not as bad a cap as the 5Mbit/s that others were seeing for upload, but still not close to 250Mbit/s.
Seems fairly clear what's going on here for upload speeds (if indeed HOTServers is in the OVH DC), and it doesn't appear that it could be related to any ARM HW issue as OVH claims, but who knows I guess? Outbound must be capped at the OVH edge device, I would think. It seems unlikely that a distance of 166 km alone could decrease the upload speed from 152 to 30 Mbit/s.
====
For some comparison, here is from Netcup to Ottawa and Tampa:
Hotservers is in the OVH Montreal (Beauharnois) DC, yep.
Actually a phucked-up networking driver might eventually come into play by enforcing conservative measures whenever a TCP link is wrongly deemed "high latency" and such. Would be interesting to have a look at the driver, if only I could locate it.
On Linux you can do something like this:
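The command itself didn't survive in the thread; a sketch of the usual sysfs approach (no extra tools needed, works for any interface under `/sys/class/net`):

```shell
#!/bin/sh
# List every network interface together with the kernel driver bound to it,
# by resolving the device/driver symlink in sysfs. Virtual interfaces such
# as the loopback have no such link and are reported accordingly.
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    if [ -e "$iface/device/driver" ]; then
        echo "$name: $(basename "$(readlink "$iface/device/driver")")"
    else
        echo "$name: (virtual interface, no hardware driver)"
    fi
done
```

On these boxes this should print the same `mvpp2` name reported above; `ethtool -i eth0` shows the same thing in its `driver:` field.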
So, I tested the following:
ARM => GRA (GATEWAY) => PL
In theory, things should not fuck up, since it's a proxy, right?
GRA => PL 11.19MB/sec
ARM => GRA 11.36MB/sec
But...
ARM => GRA => PL 562.39kB/s
edit:
Even ARM => GRA => TINC => PL gets throttled... the fuck.
Seems to confirm a driver issue.
Nope, I just get the same answer as before: mvpp2. It doesn't point at the driver files/source.
Try ARM => TINC => PL, making sure you use udp transport only, and please report
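For reference, tinc carries data over UDP by default and only falls back to TCP (metadata always goes over TCP), so the thing to check is that `TCPOnly` is not enabled anywhere. A minimal config sketch with placeholder names:

```
# /etc/tinc/mynet/tinc.conf on the ARM box (net and node names are placeholders)
Name = armbox
ConnectTo = plbox

# In the host files (hosts/armbox, hosts/plbox), make sure
# "TCPOnly = yes" is NOT set anywhere - otherwise all data is
# tunnelled over TCP and hits the same per-flow throttle.
```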
You're right. I checked the kernel source, and grep-ing through it doesn't match anything with mvvp. Maybe it's a binary, non-open-source driver?
I got a final answer from the German support, saying that it is an issue already known to the network administration for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I might get a refund if I request one...
I think it's carefully worded to tell me that they won't change anything, but also to avoid adding fuel to the fire by putting the blame on something that technically would not make sense (e.g. the drivers) and thereby leading to further discussions ;-)
As to why they implement such useless, annoying throttling we will most likely never know. Especially since apps that cause high traffic, like torrenting or scraping, usually use a high number of concurrent connections anyway, so this limit won't stop or help anything at all.
Still... very cheap storage, so I'll most likely keep it anyway. As written before, for my use case I should be able to work around that odd limitation quite easily.
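Since the cap appears to be per TCP flow rather than per host, one workaround sketch is splitting a transfer across many parallel connections, e.g. with aria2 (the URL below is a placeholder, not a real test file):

```shell
# Hypothetical workaround: fetch one file over N parallel TCP streams,
# so each stream stays under the per-flow cap but the aggregate does not.
# -x = max connections per server, -s = number of pieces to split into.
multi_fetch() {
    url=$1
    streams=${2:-16}
    aria2c -x "$streams" -s "$streams" "$url"
}

# Usage (placeholder URL):
# multi_fetch "http://example.com/bigfile.bin" 16
```

This obviously only helps for tools that can split transfers; a single rsync stream still gets capped.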
Poor single-thread/flow performance might also result from a bad-quality or misconfigured switch, if not a saturated link; per-flow bandwidth is often limited on cheaper switches. Who knows what they use in this budget line.
Care to copy their original answer here? (obfuscating anything private).
Thanks in advance!
I don't see the point, since GRA => PL maxes out the ports.
So switching tinc's protocol to UDP just takes my rsync TCP stream and encapsulates it in UDP.
Apparently iperf just transferred at 1.60 Gbit/s on a freshly installed Ubuntu.
That's the point, precisely. Forget about GRA. In my experience, there is no throttling noticeable when doing ARM => rest of the world, if using UDP (hence the need to encapsulate TCP connections into UDP, e.g. with a tunnel or whatever). Someone previously suggested using other protocols, or IPv6 (which is unfortunately broken with these SyS ARM)
No idea, but with Ubuntu it works - it works too well, but why?
So, on Ubuntu there is no throttle and everything works like it should? Or am I interpreting it the wrong way?
@Shot2 - install and run hwinfo, which will reveal the path to the driver. You can apt install hwinfo in Ubuntu; it's likely in the standard Debian repo as well. Anyway, I believe the path is /bus/platform/drivers/mvpp2
Sure:
I agree and do think that's some kind of poor man's QoS scheme running on their switches to avoid congestion. Maybe they can't even turn that off without replacing the switches altogether...
ARM Gravelines -> direct tcp (11.0 ms, 7 hops) -> VMHaus London, wget : 560 KB/s
ARM Gravelines -> tcp-in-udp encrypted tunnel (11.3 ms, 1 hop) -> VMHaus London, wget : 5 MB/s (and it's strongly limited by the encryption, maxing the ARM CPU)
Gimme back my TCP connectionz to ze outside woorld pweeeeaaaase
I used the same command from BHS to online.net and he.net with a cap of 5Mbits/sec to both. This is with Ubuntu installed. Did you do a reinstall recently?
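For anyone reproducing this, a single-stream vs. multi-stream iperf3 comparison makes a per-flow cap visible directly (the server hostname is a placeholder; substitute any public iperf3 server or your own box):

```shell
# Compare one TCP flow against eight parallel flows to the same server.
# If the throttle is per-flow, the -P 8 run should show a much higher
# aggregate than the single-stream run.
flow_test() {
    server=$1
    iperf3 -c "$server" -t 10          # single TCP stream
    iperf3 -c "$server" -t 10 -P 8     # eight parallel streams
}

# flow_test iperf.example.net
```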
Try HE.net IPv6 tunnel, it's also masked as non-TCP for the outside, but requires no encryption.
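A bring-up sketch for such an HE.net 6in4 tunnel (all addresses below are placeholders standing in for a hypothetical tunnelbroker.net allocation; the point is that 6in4 is IP protocol 41, not TCP, so a per-TCP-flow throttle shouldn't see it):

```shell
# Create a 6in4 (sit) tunnel to a HE.net tunnel server and route all IPv6
# through it. Requires root; every address here is a placeholder that must
# be replaced with the values from your tunnelbroker.net tunnel details.
he_tunnel_up() {
    he_server=$1      # HE tunnel server IPv4 ("Server IPv4 Address")
    local_ip=$2       # this box's public IPv4 ("Client IPv4 Address")
    ip tunnel add he-ipv6 mode sit remote "$he_server" local "$local_ip" ttl 255
    ip link set he-ipv6 up
    ip addr add 2001:db8:1234::2/64 dev he-ipv6   # your "Client IPv6 Address"
    ip route add ::/0 dev he-ipv6
}

# he_tunnel_up 203.0.113.1 198.51.100.2
```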
Which exit point is best?
Anyway my point was not to get top speeds (I'll cancel that crap as soon as I get a pathetic answer from their support), just to give some more information regarding TCP-only throttling.
edit: nevermind. The tunnel never gets created. Something must be fucked-up in their customized-kernel-joke v6 stack too.
edit2: got the support answer. I reported 1/ the contradictory claims on their webpage, 2/ the lack of the advertised IP failovers and IPv6, 3/ the very poor outbound external bandwidth compared to the claimed 250 Mbit/s. Their answer: "we don't provide failover IPs with arm, cheers" - what a fucking joke?!