New SoYouStart 2018 Prices


Comments

  • ofit Member
    edited June 2018

    Maybe someone is interested in a Unix Benchmark result for the So You Start Storage ARM-2T:

       #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
       #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
       #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
       #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
       #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
        ####   #    #  #  #    #          #####   ######  #    #   ####   #    #
    
       Version 5.1.3                      Based on the Byte Magazine Unix Benchmark
    
       Multi-CPU version                  Version 5 revisions by Ian Smith,
                                          Sunnyvale, CA, USA
       January 13, 2011                   johantheghost at yahoo period com
    
    Use of uninitialized value in printf at ./Run line 1379.
    Use of uninitialized value in printf at ./Run line 1380.
    Use of uninitialized value in printf at ./Run line 1379.
    Use of uninitialized value in printf at ./Run line 1380.
    Use of uninitialized value in printf at ./Run line 1589.
    Use of uninitialized value in printf at ./Run line 1590.
    Use of uninitialized value in printf at ./Run line 1589.
    Use of uninitialized value in printf at ./Run line 1590.
    
    1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10
    
    1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10
    
    1 x Execl Throughput  1 2 3
    
    1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3
    
    1 x File Copy 256 bufsize 500 maxblocks  1 2 3
    
    1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3
    
    1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10
    
    1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10
    
    1 x Process Creation  1 2 3
    
    1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10
    
    1 x Shell Scripts (1 concurrent)  1 2 3
    
    1 x Shell Scripts (8 concurrent)  1 2 3
    
    2 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10
    
    2 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10
    
    2 x Execl Throughput  1 2 3
    
    2 x File Copy 1024 bufsize 2000 maxblocks  1 2 3
    
    2 x File Copy 256 bufsize 500 maxblocks  1 2 3
    
    2 x File Copy 4096 bufsize 8000 maxblocks  1 2 3
    
    2 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10
    
    2 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10
    
    2 x Process Creation  1 2 3
    
    2 x System Call Overhead  1 2 3 4 5 6 7 8 9 10
    
    2 x Shell Scripts (1 concurrent)  1 2 3
    
    2 x Shell Scripts (8 concurrent)  1 2 3
    
    ========================================================================
       BYTE UNIX Benchmarks (Version 5.1.3)
    
       System: ns342342: GNU/Linux
       OS: GNU/Linux -- 4.9.58-armada375 -- #1 SMP Thu Nov 2 14:45:09 CET 2017
       Machine: armv7l (unknown)
       Language: en_US.utf8 (charmap="UTF-8", collate="ANSI_X3.4-1968")
       CPU 0: ARMv7 Processor rev 1 (v7l) (0.0 bogomips)
    
       CPU 1: ARMv7 Processor rev 1 (v7l) (0.0 bogomips)
    
       10:28:21 up 4 days, 19:48,  1 user,  load average: 0.08, 0.02, 0.00; runlevel 5
    
    ------------------------------------------------------------------------
    Benchmark Run: Sun Jun 17 2018 10:28:21 - 10:56:44
    2 CPUs in system; running 1 parallel copy of tests
    
    Dhrystone 2 using register variables        3639192.7 lps   (10.0 s, 7 samples)
    Double-Precision Whetstone                      641.9 MWIPS (10.0 s, 7 samples)
    Execl Throughput                                571.5 lps   (29.9 s, 2 samples)
    File Copy 1024 bufsize 2000 maxblocks        105391.6 KBps  (30.0 s, 2 samples)
    File Copy 256 bufsize 500 maxblocks           31626.5 KBps  (30.0 s, 2 samples)
    File Copy 4096 bufsize 8000 maxblocks        238573.3 KBps  (30.0 s, 2 samples)
    Pipe Throughput                              197621.2 lps   (10.0 s, 7 samples)
    Pipe-based Context Switching                  57623.6 lps   (10.0 s, 7 samples)
    Process Creation                               1331.1 lps   (30.0 s, 2 samples)
    Shell Scripts (1 concurrent)                   1133.4 lpm   (60.1 s, 2 samples)
    Shell Scripts (8 concurrent)                    201.2 lpm   (60.2 s, 2 samples)
    System Call Overhead                         383243.4 lps   (10.0 s, 7 samples)
    
    System Benchmarks Index Values               BASELINE       RESULT    INDEX
    Dhrystone 2 using register variables         116700.0    3639192.7    311.8
    Double-Precision Whetstone                       55.0        641.9    116.7
    Execl Throughput                                 43.0        571.5    132.9
    File Copy 1024 bufsize 2000 maxblocks          3960.0     105391.6    266.1
    File Copy 256 bufsize 500 maxblocks            1655.0      31626.5    191.1
    File Copy 4096 bufsize 8000 maxblocks          5800.0     238573.3    411.3
    Pipe Throughput                               12440.0     197621.2    158.9
    Pipe-based Context Switching                   4000.0      57623.6    144.1
    Process Creation                                126.0       1331.1    105.6
    Shell Scripts (1 concurrent)                     42.4       1133.4    267.3
    Shell Scripts (8 concurrent)                      6.0        201.2    335.3
    System Call Overhead                          15000.0     383243.4    255.5
                                                                       ========
    System Benchmarks Index Score                                         205.3
    
    ------------------------------------------------------------------------
    Benchmark Run: Sun Jun 17 2018 10:56:44 - 11:25:08
    2 CPUs in system; running 2 parallel copies of tests
    
    Dhrystone 2 using register variables        7278508.4 lps   (10.0 s, 7 samples)
    Double-Precision Whetstone                     1283.7 MWIPS (10.0 s, 7 samples)
    Execl Throughput                               1021.7 lps   (29.9 s, 2 samples)
    File Copy 1024 bufsize 2000 maxblocks        185645.0 KBps  (30.0 s, 2 samples)
    File Copy 256 bufsize 500 maxblocks           52799.7 KBps  (30.0 s, 2 samples)
    File Copy 4096 bufsize 8000 maxblocks        408280.3 KBps  (30.0 s, 2 samples)
    Pipe Throughput                              380406.0 lps   (10.0 s, 7 samples)
    Pipe-based Context Switching                 110925.9 lps   (10.0 s, 7 samples)
    Process Creation                               2118.4 lps   (30.0 s, 2 samples)
    Shell Scripts (1 concurrent)                   1469.8 lpm   (60.0 s, 2 samples)
    Shell Scripts (8 concurrent)                    203.6 lpm   (60.3 s, 2 samples)
    System Call Overhead                         739135.7 lps   (10.0 s, 7 samples)
    
    System Benchmarks Index Values               BASELINE       RESULT    INDEX
    Dhrystone 2 using register variables         116700.0    7278508.4    623.7
    Double-Precision Whetstone                       55.0       1283.7    233.4
    Execl Throughput                                 43.0       1021.7    237.6
    File Copy 1024 bufsize 2000 maxblocks          3960.0     185645.0    468.8
    File Copy 256 bufsize 500 maxblocks            1655.0      52799.7    319.0
    File Copy 4096 bufsize 8000 maxblocks          5800.0     408280.3    703.9
    Pipe Throughput                               12440.0     380406.0    305.8
    Pipe-based Context Switching                   4000.0     110925.9    277.3
    Process Creation                                126.0       2118.4    168.1
    Shell Scripts (1 concurrent)                     42.4       1469.8    346.7
    Shell Scripts (8 concurrent)                      6.0        203.6    339.3
    System Call Overhead                          15000.0     739135.7    492.8
                                                                       ========
    System Benchmarks Index Score                                         346.6
    
    ======= Script description and score comparison completed! ======= 
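
    (For reference, the output above looks like a stock byte-unixbench run; something along these lines should reproduce it on a Debian/Ubuntu box. The package names and repository URL are the usual ones, not taken from the post.)

    # a minimal sketch, assuming build tools are available
    apt-get install -y build-essential perl git
    git clone https://github.com/kdlucas/byte-unixbench.git
    cd byte-unixbench/UnixBench
    ./Run    # runs the full suite once per copy count (1x and 2x on this dual-core box)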
    
  • I had decent speeds between a Kimsufi in BHS and the ARM in BHS.

    Can anyone confirm slow speeds between the ARM in France and a Kimsufi in BHS?

    Thanks!

  • openos Member
    edited June 2018

    solved!

    (speed test screenshots)

  • @openos said:
    solved!

    (speed test screenshots)

    It seems you are downloading on your ARM box? It's downloading FROM the ARM that is slow...

  • CConner Member, Host Rep
    edited June 2018

    @rm_ said:

    TheLinuxBug said: I can understand why they are doing this, outbound bandwidth is what is expensive

    Stop being so retarded for fuck's sake, no you can not defend a provider who advertises a server as 250 Mbit unmetered, but then has a hidden hard limit of 5 Mbit. Shove your fucking "2 cents" up your ass and just stop posting ffs.

    Luckily, it's not all that bad; this looks like a simple misconfiguration somewhere, something they might be looking to fix, rather than malicious false advertising and whatnot.

    "Misconfiguration" - Both BHS and FR ARM servers are having the same issue. It's false advertisement. Period. I had this issue a while ago, opened multiple support ticket and OVH's incompetent staff team, constantly asking the same stupid questions over and over, was unable to help in any way.

  • openos Member

    @wlambrechts said:

    @openos said:
    solved!

    (speed test screenshots)

    It seems you are downloading on your ARM box? It's downloading FROM the ARM that is slow...

    (more screenshots)

  • Nope.

    # speedtest-cli --server 3165
    Retrieving speedtest.net configuration...
    Testing from OVH Hosting (54.39.xx.xxx)...
    Retrieving speedtest.net server list...
    Retrieving information for the selected server...
    Hosted by Georgia Institute of Technology (Atlanta, GA) [1598.25 km]: 64.266 ms
    Testing download speed................................................................................
    Download: 215.31 Mbit/s
    Testing upload speed......................................................................................................
    Upload: 68.40 Mbit/s
    
  • rm_ IPv6 Advocate, Veteran

    CConner said: "Misconfiguration" - Both BHS and FR ARM servers are having the same issue

    And? They use the same hardware regardless of location, so that doesn't preclude the misconfiguration hypothesis. Moreover, as we now see, their latest story is an "ARM driver problem", so it is indeed a misconfiguration, just one they don't want to put in the effort to fix, or to refund anyone who's not satisfied with the product because of it.

  • sin Member

    K4Y5 said: Feel free to join the conversation or retweet

    I tweeted Octave about that Kimsufi network issue a while back, and while he did get support to contact me ASAP... it just ended up going around in circles again and nothing got done.

  • twain Member

    What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...

  • Shot2 Member

    @twain said:
    What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...

    Tried both Debian derivatives (Debian 9 and Ubuntu 16.04), same issue...

    mvpp2 f10f0000.ethernet
    where am I supposed to find the driver used?
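
    (A generic way to chase that down on Linux, assuming the interface is named eth0: the bound driver is exposed as a sysfs symlink whether it is a module or built in.)

    # interface name eth0 is an assumption; substitute your own
    readlink /sys/class/net/eth0/device/driver
    # e.g. ../../../../bus/platform/drivers/mvpp2 for a built-in platform driver
    ls /sys/class/net/eth0/device/driver/module 2>/dev/null || echo "built into the kernel, no .ko file"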

  • twain Member
    edited June 2018

    Picked up the Canada one (Montreal I believe) 2G ARM (cheapest one, $5.99/mo)...

    DL/UL from/to HOTServers LLC (Montreal, QC) - 0.63 km away from OVH (or perhaps in OVH DC?):

    # speedtest-cli
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from OVH Hosting (54.39.x.x)...
    Selecting best server based on latency...
    Hosted by HOTServers LLC (Montreal, QC) [0.63 km]: 4.066 ms
    Testing download speed........................................
    Download: 558.62 Mbit/s
    Testing upload speed..................................................
    Upload: 152.64 Mbit/s
    

    DL/UL from/to two Ottawa providers (~166 km from OVH DC):

    # speedtest-cli --server 17396
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from OVH Hosting (54.39.x.x)...
    Hosted by Bell Canada (Ottawa, ON) [166.04 km]: 26.691 ms
    Testing download speed........................................
    Download: 473.71 Mbit/s
    Testing upload speed..................................................
    Upload: 30.18 Mbit/s
    
    # speedtest-cli --server 18556
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from OVH Hosting (54.39.x.x)...
    Hosted by Rogers (Ottawa, Ontario) [166.04 km]: 60.513 ms
    Testing download speed........................................
    Download: 393.51 Mbit/s
    Testing upload speed..................................................
    Upload: 28.81 Mbit/s
    

    DL/UL from/to Hivelocity Tampa:

    # speedtest-cli --server 2137
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from OVH Hosting (54.39.x.x)...
    Hosted by Hivelocity Hosting (Tampa, FL) [2101.15 km]: 43.886 ms
    Testing download speed........................................
    Download: 306.51 Mbit/s
    Testing upload speed..................................................
    Upload: 33.40 Mbit/s
    

    ====

    I believe HOTServers is in OVH's DC? Can anyone confirm? The IP was 198.50.237.8 and whois shows OVH.

    This is not as bad a cap as the 5 Mbit/s that others were seeing for upload, but still nowhere near 250 Mbit/s.

    It seems fairly clear what's going on here for upload speeds (if HOTServers is indeed in the OVH DC), and it doesn't appear that it could be related to any ARM hardware issue as OVH claims, but who knows, I guess? Outbound must be capped at the OVH edge device, I would think. It seems unlikely that 166 km of distance alone would cut the upload speed from 152 to 30 Mbit/s.
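
    One way to tell a per-flow cap from an aggregate port cap is to compare a single TCP stream against several in parallel; a minimal sketch with iperf3, assuming it is installed on both ends, the far side runs "iperf3 -s", and the hostname is a placeholder:

    # single outbound TCP stream from the ARM box (the client sends by default)
    iperf3 -c remote.example.com -t 30
    # four parallel streams: if each one gets roughly the same rate as the single
    # stream did, the cap is per flow; if the total stays the same, it is aggregate
    iperf3 -c remote.example.com -t 30 -P 4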

    ====

    For some comparison, here are results from Netcup to Ottawa and Tampa:

    # speedtest-cli --server 18556
    Retrieving speedtest.net configuration...
    Testing from netcup GmbH (185.244.x.x)...
    Retrieving speedtest.net server list...
    Retrieving information for the selected server...
    Hosted by Rogers (Ottawa, Ontario) [5966.25 km]: 129.439 ms
    Testing download speed................................................................................
    Download: 173.42 Mbit/s
    Testing upload speed......................................................................................................
    Upload: 135.79 Mbit/s
    
    # speedtest-cli --server 2137
    Retrieving speedtest.net configuration...
    Testing from netcup GmbH (185.244.x.x)...
    Retrieving speedtest.net server list...
    Retrieving information for the selected server...
    Hosted by Hivelocity Hosting (Tampa, FL) [7748.55 km]: 121.273 ms
    Testing download speed................................................................................
    Download: 129.34 Mbit/s
    Testing upload speed......................................................................................................
    Upload: 123.23 Mbit/s
    
  • Shot2 Member
    edited June 2018

    Hotservers is in the OVH Montreal (Beauharnois) DC, yep.

    Actually, a phucked-up networking driver might come into play by enforcing conservative measures whenever a TCP link is wrongly deemed "high latency" and such. Would be interesting to have a look at the driver, if only I could locate it.
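
    If something really is punishing TCP flows it deems high-latency, the per-connection state is visible from userspace; a quick sketch to run while a slow transfer is in progress (nothing here is specific to these boxes):

    # congestion control algorithm currently in use
    sysctl net.ipv4.tcp_congestion_control
    # per-socket cwnd, rtt and retransmit counters for established connections
    ss -ti state established
    # switching the algorithm is possible only if the kernel ships it, e.g.
    # sysctl -w net.ipv4.tcp_congestion_control=bbr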

  • zkyez Member

    @Shot2 said:

    @twain said:
    What OS is being used for all the tests, Debian? Any way to get a different ARM-supported OS on these things, or mess around with network drivers etc? What is the NIC anyway? Don't think I've seen that listed in the thread...

    Tried both Debian derivatives (Debian 9 and Ubuntu 16.04), same issue...

    mvpp2 f10f0000.ethernet
    where am I supposed to find the driver used?

    On Linux you can do something like this:

    xx@yy:~# ethtool -i ens2
    driver: virtio_net
    version: 1.0.0
    firmware-version:
    expansion-rom-version:
    bus-info: 0000:00:02.0
    supports-statistics: yes
    supports-test: no
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: no
    
  • Neoon Community Contributor, Veteran
    edited June 2018

    So, I tested the following:

    ARM => GRA (GATEWAY) => PL

    In theory, things should not fuck up, since it's a proxy, right?

    GRA => PL 11.19MB/sec

    ARM => GRA 11.36MB/sec

    But...

    ARM => GRA => PL 562.39kB/s

    edit:

    Even ARM => GRA => TINC => PL gets throttled... the fuck.

    Seems to confirm a driver issue.
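
    (For reference, a relay test like that can be staged with nothing more than SSH, assuming shell access on the GRA gateway; hostnames are placeholders and this is not necessarily how the test above was run.)

    # direct leg: ARM -> PL
    scp bigfile user@pl.example:/tmp/
    # relayed leg: ARM -> GRA -> PL, bouncing through the gateway as a jump host
    scp -o ProxyJump=user@gra.example bigfile user@pl.example:/tmp/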

  • Shot2 Member

    @zkyez said:
    On Linux you can do something like this:

    xx@yy:~# ethtool -i ens2
    driver: virtio_net
    version: 1.0.0
    firmware-version:
    expansion-rom-version:
    bus-info: 0000:00:02.0
    supports-statistics: yes
    supports-test: no
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: no
    

    Nope, I just get the same answer I already had: mvpp2. It doesn't point at the driver files/source.

    @Neoon said:
    Even ARM => GRA => TINC => PL gets throttled... the fuck.
    Seems to confirm a driver issue.

    Try ARM => TINC => PL, making sure you use udp transport only, and please report :)
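
    Before blaming the tunnel itself, raw UDP throughput out of the box can be sanity-checked; a sketch with iperf3 in UDP mode, assuming a server is already listening on the far end (hostname is a placeholder):

    # -u selects UDP, -b sets the offered rate; TCP-only throttling should not bite here
    iperf3 -c pl.example.com -u -b 200M -t 20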

  • zkyez Member

    You're right. I checked the kernel source and grepping through it doesn't match anything with mvpp. Maybe it's a binary, non-open-source driver?
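
    For what it's worth, mvpp2 is an in-tree Marvell driver; in a 4.9-era tree it should show up under drivers/net/ethernet/marvell. A quick check, assuming you are sitting in the kernel source root:

    grep -rl "mvpp2" drivers/net/ethernet/marvell/
    # typically matches mvpp2.c plus the Kconfig/Makefile entries
    modinfo mvpp2 || echo "built into the running kernel, no separate module"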

  • Falzo Member

    I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I could get a refund if I requested one...

    I think it's carefully worded to tell me that they won't change anything, but it also seems like an attempt not to add fuel to the fire by putting the blame on something which technically would not make sense (e.g. the drivers) and would therefore lead to further discussion ;-)

    As to why they implement such useless, annoying crap we will most likely never know. Apps that cause high traffic, like torrenting or scraping or whatever, usually use a high number of concurrent connections anyway, so this limit won't stop or help anything at all there.

    Still... very cheap storage, so I will most likely keep it anyway. As written before, for my use case I should be able to work around that odd limitation quite easily.

  • Clouvider Member, Patron Provider

    @Falzo said:
    I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I could get a refund if I requested one...

    I think it's carefully worded to tell me that they won't change anything, but it also seems like an attempt not to add fuel to the fire by putting the blame on something which technically would not make sense (e.g. the drivers) and would therefore lead to further discussion ;-)

    As to why they implement such useless, annoying crap we will most likely never know. Apps that cause high traffic, like torrenting or scraping or whatever, usually use a high number of concurrent connections anyway, so this limit won't stop or help anything at all there.

    Still... very cheap storage, so I will most likely keep it anyway. As written before, for my use case I should be able to work around that odd limitation quite easily.

    Poor single-thread/flow performance might also result from a bad-quality or misconfigured switch, or from a saturated link; bonding is quite limited on cheaper switches. Who knows what they use in this budget line.

  • Shot2 Member
    edited June 2018

    @Falzo said:
    I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I could get a refund if I requested one...

    Care to copy their original answer here? (obfuscating anything private).

    Many thanks in advance!

  • Neoon Community Contributor, Veteran
    edited June 2018

    @Shot2 said:

    @Neoon said:
    Even ARM => GRA => TINC => PL gets throttled... the fuck.
    Seems to confirm a driver issue.

    Try ARM => TINC => PL, making sure you use udp transport only, and please report :)

    I do not see the point, since GRA => PL is maxing out the ports.

    So changing tinc's transport to UDP just takes my rsync TCP stream and wraps it in UDP.

    Apparently iperf just transferred at 1.60 Gbit/s on a freshly installed Ubuntu.

  • Shot2 Member
    edited June 2018

    @Neoon said:

    @Shot2 said:

    @Neoon said:
    Even ARM => GRA => TINC => PL gets throttled... the fuck.
    Seems to confirm a driver issue.

    Try ARM => TINC => PL, making sure you use udp transport only, and please report :)

    I do not see the point, since GRA => PL is maxing out the ports.

    That's the point, precisely. Forget about GRA. In my experience, there is no noticeable throttling when going ARM => rest of the world over UDP (hence the need to encapsulate TCP connections in UDP, e.g. with a tunnel or whatever). Someone previously suggested using other protocols, or IPv6 (which is unfortunately broken on these SyS ARMs).

  • Neoon Community Contributor, Veteran
    edited June 2018

    No idea, but with Ubuntu it works, and it works too well. But why?

  • mosan7763 Member
    edited June 2018

    @Neoon said:

    Apparently iperf just transferred at 1.60 Gbit/s on a freshly installed Ubuntu.

    No idea, but with Ubuntu it works, and it works too well. But why?

    So, on Ubuntu there is no throttle and everything works like it should? Or am I interpreting it the wrong way?

  • twain Member
    edited June 2018

    @Shot2 - install and run hwinfo, which will reveal the path to the driver. You can apt install hwinfo in Ubuntu; it's likely in the standard Debian repo as well. Anyway, I believe the path is /bus/platform/drivers/mvpp2
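
    A sketch of that, assuming a Debian/Ubuntu userland (the --netcard selector and the grep pattern are assumptions, adjust as needed):

    apt install hwinfo
    hwinfo --short --netcard
    # the full record should include Driver and SysFS lines pointing at the mvpp2 platform driver
    hwinfo --netcard | grep -iE 'driver|sysfs'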

  • Falzo Member

    @Shot2 said:

    @Falzo said:
    I got a final answer from the German support, saying that it is an issue the network administration has known about for quite some time, with no solution available at this very moment. It also says there is no ETA for when it might be solved, most likely not in the short term, and that I could get a refund if I requested one...

    Care to copy their original answer here? (obfuscating anything private).

    Many thanks in advance!

    Sure:

    @Clouvider said:

    Poor single-thread/flow performance might also result from a bad-quality or misconfigured switch, or from a saturated link; bonding is quite limited on cheaper switches. Who knows what they use in this budget line.

    I agree and think it's some kind of poor man's QoS scheme running on their switches to avoid congestion. Maybe they can't even turn that off without replacing the switches altogether...

  • Shot2 Member
    edited June 2018

    ARM Gravelines -> direct TCP (11.0 ms, 7 hops) -> VMHaus London, wget: 560 KB/s

    ARM Gravelines -> tcp-in-udp encrypted tunnel (11.3 ms, 1 hop) -> VMHaus London, wget: 5 MB/s (and it's strongly limited by the encryption, maxing the ARM CPU)

    Gimme back my TCP connectionz to ze outside woorld pweeeeaaaase
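
    One way to build a comparable tcp-in-udp tunnel (not necessarily what was used above) is WireGuard, which runs purely over UDP; a minimal sketch, with keys, addresses and the peer endpoint all placeholders:

    # /etc/wireguard/wg0.conf on the ARM box
    [Interface]
    PrivateKey = <arm-private-key>
    Address = 10.9.0.2/24

    [Peer]
    PublicKey = <remote-public-key>
    Endpoint = vmhaus.example:51820
    AllowedIPs = 10.9.0.1/32

    # bring it up and fetch through the tunnel address instead of the public IP
    wg-quick up wg0
    wget http://10.9.0.1/testfile

    Whichever tunnel is used, the encryption work lands on the weak ARM cores, which matches the CPU-bound 5 MB/s seen above.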

  • @Neoon said:
    No idea, but with Ubuntu it works, and it works too well. But why?

    I used the same command from BHS to online.net and he.net with a cap of 5 Mbit/s to both. This is with Ubuntu installed. Did you do a reinstall recently?

  • rm_ IPv6 Advocate, Veteran

    Shot2 said: ARM Gravelines -> tcp-in-udp encrypted tunnel (11.3 ms, 1 hop) -> VMHaus London, wget: 5 MB/s (and it's strongly limited by the encryption, maxing the ARM CPU)

    Try an HE.net IPv6 tunnel; it also appears as non-TCP to the outside, but requires no encryption.
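
    For reference, an HE.net tunnel is plain 6in4 (IP protocol 41, no crypto), so per-TCP-flow shaping on the IPv4 path should not see the inner streams. The usual setup looks roughly like this, the addresses being placeholders copied from the tunnelbroker.net details page:

    ip tunnel add he-ipv6 mode sit remote <HE-server-IPv4> local <ARM-public-IPv4> ttl 255
    ip link set he-ipv6 up
    ip addr add <client-IPv6-from-HE>/64 dev he-ipv6
    ip route add ::/0 dev he-ipv6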

  • Shot2 Member
    edited June 2018

    Which exit point is best?
    Anyway my point was not to get top speeds (I'll cancel that crap as soon as I get a pathetic answer from their support), just to give some more information regarding TCP-only throttling.

    edit: never mind. The tunnel never gets created. Something must be fucked up in their customized-kernel-joke v6 stack too.

    edit2: got the support answer :) I reported 1) the contradictory claims on their webpage, 2) the lack of the advertised IP failovers and IPv6, and 3) the very poor outbound external bandwidth compared to the claimed 250 Mbps. Their answer: "we don't provide failover IPs with ARM, cheers" - what a fucking joke is this?!
