
New SoYouStart 2018 Prices


Comments

  • sin Member

    Opened a ticket now too:

    Ticket 9699176

    BHS6 - Rack: T06G35 - Server ID: 308751

  • Shot2 Member
    edited June 2018

    @OVH_Matt

    [ YOU HAVE 6193 NEW TICKETS ]

  • Neoon Community Contributor, Veteran
    edited June 2018

    That's a flood.

  • Well, hopefully most of you only paid for a month...

  • TheLinuxBug Member
    edited June 2018

    BTW, as for the ARM servers, it's a per-thread limit of 50Mbit. If you want better speeds, use multiple threads. Use something like Parsync set to multiple threads and you can easily hit 500Mbit that way. I can understand why they are doing this: outbound bandwidth is what's expensive, and there are a bunch of servers constantly competing for it. Their solution to keep costs reasonable is to apply per-thread QoS on outbound traffic. What is more interesting is that, depending on the destination, you will actually see higher bursts, but anything using international bandwidth (read: outside Europe) will see this limitation. I believe someone earlier in this thread said that pretty much anything local to OVH, or that peers directly with OVH, is less likely to be limited, while external traffic is; that matches my experience as well.
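    I'm not sure of Parsync's exact flags off the top of my head, so here is a rough equivalent with plain rsync run in parallel (paths and hostname are placeholders):

    # run one rsync per top-level directory, six at a time, to work around the per-stream cap
    ls /data | xargs -P6 -I{} rsync -a /data/{} user@backup.example.com:/backup/{}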

    For iperf tests, add something like -P <x> (where x is a number greater than 1) to the command to run parallel streams. If you do this for the upload test, you will see your overall outbound rate come out a lot higher than with a single stream.
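    For example (the target hostname is a placeholder and needs iperf -s running on it):

    # single stream - the per-stream QoS caps this
    iperf -c iperf.example.net -i 1
    # six parallel streams - the aggregate is roughly the sum of the per-stream rates
    iperf -c iperf.example.net -i 1 -P 6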

    I have a ticket open with them to get kernel sources for the three kernels they use. The kernel on all their Ubuntu/Debian templates lacks tons of features, most notably the ones needed for LUKS encryption. For now, the only really usable template with this in mind is their OpenMediaVault (32-bit) template running an older 4.9.2 kernel, which seems to actually have the needed features, but then you have to fight to strip out all the OpenMediaVault settings to get a usable OS. Has anyone else been able to get the kernel sources or modules for the other kernel sets? My ticket has gone unanswered since Friday and I expect it won't be answered until sometime later this week.
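    If you want to check whether a given template's kernel can do LUKS at all before wiping anything, something like this should tell you (the config path can differ between their templates):

    # look for dm-crypt and XTS support in the running kernel's config
    grep -E 'CONFIG_DM_CRYPT|CONFIG_CRYPTO_XTS' /boot/config-$(uname -r)
    # or just try loading the module; if this fails, LUKS won't work on this kernel
    modprobe dm_crypt && lsmod | grep dm_crypt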

    my 2 cents.

    Cheers!

    Thanked by ehab, vimalware
  • Falzo Member

    @TheLinuxBug said:

    you totally figured it out already...

  • Shot2 Member
    edited June 2018

    @TheLinuxBug Didn't know outbound bandwidth was expensive only for TCP traffic... (no such limit when sending UDP packets towards UK and US ASNs...)
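    (For reference, a UDP run looks like this; the far end needs iperf -s -u running and the hostname is a placeholder:)

    # offer 250 Mbit/s of UDP towards a US endpoint and compare the reported rate with a TCP run
    iperf -c us.example.net -u -b 250M -i 1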

    Besides the falsely advertised features, these ARM servers are definitely underwhelming, and that's entirely down to OVH. Stupid boot system, outdated OS images, crippled kernels, no KVM (or an overpriced one) - looks like they take every possible step to keep them unattractive and needlessly contorted. The sudden price drop is puzzling in this respect.

    Thanked by rm_
  • Shot2 said: Besides the falsely advertised features, these ARM servers are definitely underwhelming, and that's entirely down to OVH. Stupid boot system, outdated OS images, crippled kernels, no KVM (or an overpriced one) - looks like they take every possible step to keep them unattractive and needlessly contorted. The sudden price drop is puzzling in this respect.

    Honestly, I haven't had it long enough to provide a real assessment, but what I can say is that if they provided the kernel source so you could compile your own kernel, then even without KVM or a console this is a good deal if you look at it for what it is: cheap storage. Period.

    You can use it for some limited services as well if you like. If you take advantage of things like the crypto engine by recompiling OpenSSL, as demonstrated by @rm_ in another thread here on LET, you get pretty efficient transfer rates, and there is enough RAM for limited services like NFS, SSHFS, a simple web portal, a VPN server, etc.
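    A quick way to see whether the engine is actually being picked up after the rebuild (the engine name is a guess, it depends on whether you went the afalg or cryptodev route):

    # software AES baseline
    openssl speed -evp aes-128-cbc
    # same cipher through the kernel crypto interface; a big jump means the engine is in use
    openssl speed -engine afalg -evp aes-128-cbc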

    If you are expecting more than that, then you are right, probably not the right fit.

    my 2 cents.

    Cheers!

  • rm_ IPv6 Advocate, Veteran
    edited June 2018

    TheLinuxBug said: I can understand why they are doing this, outbound bandwidth is what is expensive

    Stop being so retarded for fuck's sake, no you can not defend a provider who advertises a server as 250 Mbit unmetered, but then has a hidden hard limit of 5 Mbit. Shove your fucking "2 cents" up your ass and just stop posting ffs.

    Luckily, it's not all that bad, and this looks like a simple misconfiguration somewhere, something they might be looking to fix, rather than malicious false advertising and whatnot.

    Thanked by K4Y5
  • TheLinuxBug Member
    edited June 2018

    rm_ said: Stop being so retarded for fuck's sake, no you can not defend a provider who advertises a server as 250 Mbit unmetered, but then has a hidden hard limit of 5 Mbit. Shove your fucking "2 cents" up your ass and just stop posting ffs.

    Using iperf with -P 6 I can push 100MB/sec to one of my servers in the Netherlands:

    http://prntscr.com/jvrek3
    http://prntscr.com/jvre7p
    http://prntscr.com/jvrdkz

    I haven't tested against US servers yet, but again, it looks like with multiple threads you can possibly get substantially more than you paid for.

    Thanks for your usual positive and uplifting reply.

    Cheers!

    Thanked by lion, vimalware
  • Jona4s Member
    edited June 2018

    These ARM A9 boxes are pretty decent. I can serve 20,000 HTTP requests per second across networks before maxing out 190% CPU (2 cores).

    Pretty sure the machine can do even more. Right now I'm trying to lower the software interrupts (NET_RX hogging 30% CPU) so I can increase the req/s.
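    In case it helps anyone, this is roughly what I'm looking at (interface name assumed to be eth0):

    # see how NET_RX softirqs are spread across the two cores
    grep -E 'CPU|NET_RX' /proc/softirqs
    # enable receive packet steering so both cores share the receive work (mask 3 = CPU0+CPU1)
    echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpus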

  • Falzo Member

    TheLinuxBug said: Using iperf with -P 6 I can push 100MB/sec to one of my servers in the Netherlands

    how about telling something that wasn't written pages ago...

    Thanked by Shot2, K4Y5
  • FredQc Member
    root@arm:~# iperf -c iperf.ovh.net -i 1
    ------------------------------------------------------------
    Client connecting to iperf.ovh.net, TCP port 5001
    TCP window size: 43.8 KByte (default)
    ------------------------------------------------------------
    [  3] local x.x.x.x port 39268 connected with 188.165.12.136 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0- 1.0 sec  18.4 MBytes   154 Mbits/sec
    [  3]  1.0- 2.0 sec  38.1 MBytes   320 Mbits/sec
    [  3]  2.0- 3.0 sec  37.5 MBytes   315 Mbits/sec
    [  3]  3.0- 4.0 sec  38.2 MBytes   321 Mbits/sec
    [  3]  4.0- 5.0 sec  38.1 MBytes   320 Mbits/sec
    [  3]  5.0- 6.0 sec  35.9 MBytes   301 Mbits/sec
    [  3]  6.0- 7.0 sec  37.8 MBytes   317 Mbits/sec
    [  3]  7.0- 8.0 sec  37.8 MBytes   317 Mbits/sec
    [  3]  8.0- 9.0 sec  37.8 MBytes   317 Mbits/sec
    [  3]  9.0-10.0 sec  37.8 MBytes   317 Mbits/sec
    [  3]  0.0-10.0 sec   357 MBytes   299 Mbits/sec
    

    root@arm:~# iperf -c proof.ovh.ca -i 1
    ------------------------------------------------------------
    Client connecting to proof.ovh.ca, TCP port 5001
    TCP window size: 43.8 KByte (default)
    ------------------------------------------------------------
    [  3] local x.x.x.x port 55588 connected with 192.99.19.165 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0- 1.0 sec   194 MBytes  1.63 Gbits/sec
    [  3]  1.0- 2.0 sec   195 MBytes  1.64 Gbits/sec
    [  3]  2.0- 3.0 sec   195 MBytes  1.63 Gbits/sec
    [  3]  3.0- 4.0 sec   195 MBytes  1.63 Gbits/sec
    [  3]  4.0- 5.0 sec   195 MBytes  1.63 Gbits/sec
    [  3]  5.0- 6.0 sec   195 MBytes  1.64 Gbits/sec
    [  3]  6.0- 7.0 sec   195 MBytes  1.63 Gbits/sec
    [  3]  7.0- 8.0 sec   195 MBytes  1.64 Gbits/sec
    [  3]  8.0- 9.0 sec   195 MBytes  1.64 Gbits/sec
    [  3]  9.0-10.0 sec   195 MBytes  1.63 Gbits/sec
    [  3]  0.0-10.0 sec  1.90 GBytes  1.63 Gbits/sec
    
  • Neoon Community Contributor, Veteran

    @Falzo said:

    TheLinuxBug said: Using iperf with -P 6 I can push 100MB/sec to one of my servers in the Netherlands

    how about telling something that wasn't written pages ago...

    HE IS A GOD, KNEEL BEFORE HIM, INFIDEL!

  • rm_ IPv6 Advocate, Veteran

    Jona4s said: trying to lower the software interrupts

    Check with ethtool -k eth0 | grep -v fixed which hardware offloads on the NIC are off but can still be changed.
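    Something along these lines (the offload names vary by driver, so treat them as examples):

    # list offloads that are currently off but not fixed, i.e. can still be toggled
    ethtool -k eth0 | grep -v fixed | grep ': off'
    # try enabling GRO/GSO to cut per-packet softirq overhead
    ethtool -K eth0 gro on gso on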

  • @TheLinuxBug said:
    BTW, as for the ARM servers, it's a per-thread limit of 50Mbit. [...] I can understand why they are doing this: outbound bandwidth is what's expensive, and there are a bunch of servers constantly competing for it. [...]

    Nope, inbound is fine. Outbound is capped no matter whether you push from actual disk I/O or from /dev/zero.
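    (Easy to reproduce with something like the following; the far end just needs a listener such as nc -l -p 5001 > /dev/null, and the hostname and file path are placeholders:)

    # push 500 MB of zeroes straight out of RAM; dd prints the achieved rate when it finishes
    dd if=/dev/zero bs=1M count=500 | nc target.example.com 5001
    # same test, but reading a real file off the disk
    dd if=/srv/testfile bs=1M count=500 | nc target.example.com 5001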

  • ofit Member

    Answer from my ticket. hope they fix it fast.

    Hello,

    After investigating the case, we have found that the issue is due to the ARM
    network driver, this is not a problem we can fix, this has to be addressed to
    ARM directly.

    Thank you for your understanding!

    For any other questions or concerns, please feel free to contact us through a
    support ticket or through our toll-free line at 1-844-768-7827. We’re here
    24/7 to help you!

    We thank you again for choosing SoyouStart,

    Daniel
    Customer Advocate
    Make sure to visit our FAQ: http://docs.ovh.ca/en/faqs.html

  • Shot2 Member

    @ofit said:
    Answer from my ticket. hope they fix it fast.

    I interpret that answer as "not my fault, wontfix, deal with it".

  • Falzo said: how about telling something that wasn't written pages ago...

  • FredQc Member

    ofit said: hope they fix it fast.

    They won't fix it.

    Thanked by sin, Aidan
  • @ofit said:

    After investigating the case, we have found that the issue is due to the ARM network driver, this is not a problem we can fix, this has to be addressed to ARM directly.

    So, if they don't fix it... could we report it to ARM? How?
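    A first step would be finding out exactly which driver these boxes use, so a report lands in the right place (the driver name below is only a guess):

    # report the driver name and version bound to the interface
    ethtool -i eth0
    # look for the driver's probe messages (mvneta is a guess for these Marvell boards)
    dmesg | grep -i -E 'eth0|mvneta'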

  • Well, move your data elsewhere and don't renew it. I don't buy the explanation for the upstream-only anomaly, though.

  • FredQc Member

    These ARM issues remind me of the online.net BIOS fiasco that resulted in abysmal HDD speeds with no fix available.

    Thanked by mtsbatalha
  • sin Member
    edited June 2018

    -edit- never mind, saw that they replied to someone else with the same thing

    They just replied to my ticket and they said:

    Hello,
    
    Thank you for contacting SoyouStart regarding your bandwidth issue.
    
    After investigating the case, we have found that the issue is due to the ARM network driver, this is not a problem we can fix, this has to be addressed to ARM directly.
    
    Thank you for your understanding!
    

    I take that as they're not going to do anything about it.

    So they just deployed all these ARM servers without actually testing them?

  • hawkjohn7 Member
    edited June 2018

    Location: BHS, 2TB storage. I got 470 Mbps to iperf.he.net.

    root@nsxxxxx:~# iperf -c ping.online.net -i 1
    Client connecting to ping.online.net, TCP port 5001
    TCP window size: 43.8 KByte (default)
    [  3] local 54.39.62.214 port 25343 connected with 62.210.18.40 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0- 1.0 sec  9.38 MBytes  78.6 Mbits/sec
    [  3]  1.0- 2.0 sec  41.8 MBytes   350 Mbits/sec
    [  3]  2.0- 3.0 sec  41.5 MBytes   348 Mbits/sec
    [  3]  3.0- 4.0 sec  44.0 MBytes   369 Mbits/sec
    [  3]  4.0- 5.0 sec  40.0 MBytes   336 Mbits/sec
    [  3]  5.0- 6.0 sec  44.0 MBytes   369 Mbits/sec
    [  3]  6.0- 7.0 sec  44.0 MBytes   369 Mbits/sec
    [  3]  7.0- 8.0 sec  39.5 MBytes   331 Mbits/sec
    [  3]  8.0- 9.0 sec  43.6 MBytes   366 Mbits/sec
    [  3]  9.0-10.0 sec  41.6 MBytes   349 Mbits/sec
    [  3]  0.0-10.1 sec   389 MBytes   324 Mbits/sec

    root@nsxxxx:~# iperf -c iperf.he.net -i 1
    Client connecting to iperf.he.net, TCP port 5001
    TCP window size: 324 KByte (default)
    [  3] local 54.39.62.214 port 61156 connected with 216.218.227.10 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0- 1.0 sec  31.1 MBytes   261 Mbits/sec
    [  3]  1.0- 2.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  2.0- 3.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  3.0- 4.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  4.0- 5.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  5.0- 6.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  6.0- 7.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  7.0- 8.0 sec  56.0 MBytes   470 Mbits/sec
    [  3]  8.0- 9.0 sec  56.0 MBytes   470 Mbits/sec

  • @FredQc said:
    [iperf results quoted above: ~300 Mbits/sec to iperf.ovh.net, ~1.63 Gbits/sec to proof.ovh.ca]
  • Neoon Community Contributor, Veteran

    @sin said:
    [...] "After investigating the case, we have found that the issue is due to the ARM network driver, this is not a problem we can fix, this has to be addressed to ARM directly." [...] So they just deployed all these ARM servers without actually testing them?

    The fuck? So the ARM network driver decides "that's not the OVH network, throttle it"?

    Biggest bullshit I've ever heard.

    Thanked by sin, Falzo, atomi
  • K4Y5 Member
    edited June 2018

    @sin said:
    [...] "After investigating the case, we have found that the issue is due to the ARM network driver, this is not a problem we can fix, this has to be addressed to ARM directly." [...] So they just deployed all these ARM servers without actually testing them?

    @hawkjohn7 said:
    [FredQc's iperf results quoted above: ~300 Mbits/sec to iperf.ovh.net, ~1.63 Gbits/sec to proof.ovh.ca]

    @OVH_Matt care to elaborate?

    @others Feel free to join the conversation or retweet - https://mobile.twitter.com/K4Y5/status/1008159669850968064

  • I got the canned response, too. So their suggestion is that it's on their paying customers to contact the hardware vendor about this issue? Why should this fall on customers? I am assuming this affects all the ARM servers, so there is no way they can meet the promised bandwidth of 250Mbps.

    I'll just let this one expire, but this is a pretty crappy thing to do to your customers.

  • This issue feels a lot like my old kimsufi issue: https://www.lowendtalk.com/discussion/116410/kimsufi-fr-crappy-network#p1 which was eventually (2 months) resolved by the staff. Ticket #5836123691 on the kimsufi sub-brand if you need a reference for it.
