
VPS Benchmark Scripts

Reaperofpower Member
edited December 2019 in General

These are some of the benchmark scripts I've found and used in the past. Are there any others that are better, or do you recommend sticking with one of these?

Bench.sh

Usage example

$ wget -qO- bench.sh | bash

Nench

Usage example

(curl -s wget.racing/nench.sh | bash; curl -s wget.racing/nench.sh | bash) 2>&1 | tee nench.log

(wget -qO- wget.racing/nench.sh | bash; wget -qO- wget.racing/nench.sh | bash) 2>&1 | tee nench.log

VPSbench

Usage example

bash <(wget --no-check-certificate -O - https://raw.github.com/mgutz/vpsbench/master/vpsbench)

VPS Benchmark

Usage example

$ wget http://busylog.net/FILES2DW/busytest.sh -O - -o /dev/null | bash

Linux Bench

Usage example

wget https://raw.githubusercontent.com/STH-Dev/linux-bench/master/linux-bench.sh && chmod +x linux-bench.sh && ./linux-bench.sh

Bench-sh-2

Usage example

$ wget https://raw.githubusercontent.com/hidden-refuge/bench-sh-2/master/bench.sh && chmod +x bench.sh && ./bench.sh

unixbench.sh

Usage example

wget --no-check-certificate https://github.com/teddysun/across/raw/master/unixbench.sh
chmod +x unixbench.sh
./unixbench.sh

Thanked by 1Ganonk

Comments

  • dedipromo Member
    edited December 2019

    I'm kinda curious about that bench.sh: how is it possible to display a web page when viewing it in a browser, while downloading a shell script instead when the exact same link is fetched with wget/curl?

    EDIT: OK, so the website basically detects my user agent info; if I'm using a browser, it shows the webpage at https://bench.sh; if it detects something like this "User-Agent: Wget/1.13.4 (linux-gnu)," it will 302 redirect to "http://86.re/bench.sh" and download the script.
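
    One quick way to see that behaviour for yourself; the user-agent strings here are just examples, not necessarily what the site actually checks for:

    # fetch the same URL with a browser-like UA and a wget-like UA, showing only response headers
    curl -s -o /dev/null -D - -A "Mozilla/5.0" https://bench.sh | head -n 5
    # per the post above, this one should come back as a 302 pointing at the script
    curl -s -o /dev/null -D - -A "Wget/1.13.4 (linux-gnu)" https://bench.sh | head -n 5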

  • Those that use dd are not reporting useful information for disk writes. YABS by @MasonR is the most accurate for bandwidth throughput because it uses iperf, but the problem is the lack of public APAC servers.

  • @poisson said:
    Those that use dd are not reporting useful information for disk writes. YABS by @MasonR is the most accurate for bandwidth throughput because it uses iperf, but the problem is the lack of public APAC servers.

    iperf.cc

    should do (not mine).

  • @eKo said:

    @poisson said:
    Those that use dd are not reporting useful information for disk writes. YABS by @MasonR is the most accurate for bandwidth throughput because it uses iperf, but the problem is the lack of public APAC servers.

    iperf.cc

    should do (not mine).

    Those are the servers that are used in @MasonR's YABS
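
    For reference, a typical iperf3 run against one of those public servers looks something like this; the hostname and port here are placeholders, not entries from any particular list:

    # download test first (-R makes the server send to us), then the default upload direction
    iperf3 -c speedtest.example.net -p 5201 -t 10 -R
    iperf3 -c speedtest.example.net -p 5201 -t 10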

  • some thoughtful work went into @jsg's vpsbench: https://www.lowendtalk.com/discussion/144821/new-vps-specific-benchmark-update-download-avail/ is the beginning of some interesting discussion, worth digging into a bit more for some useful insights relating specifically to VPS performance

    Thanked by 2seriesn jsg
  • @uptime said:
    some thoughtful work went into @jsg's vpsbench: https://www.lowendtalk.com/discussion/144821/new-vps-specific-benchmark-update-download-avail/ is the beginning of some interesting discussion, worth digging into a bit more for some useful insights relating specifically to VPS performance

    I share his concerns, and my homebrew LEBRE Xtended bench (currently three VPSes are a quarter of the way through testing; aiming for 30 days of data) now includes:

    • Geekbench 5 for CPU/RAM
    • fio 4k, 64k and 256k block-size random read/write to test IOPS and bandwidth
    • ioping for latency
    • iperf for bandwidth throughput to USA, Europe and APAC (using private servers)

    The first batch of results will only be out around mid-January.
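
    For illustration, commands along these lines would cover the tests listed above; the block sizes, run times, and hostnames here are placeholders rather than the exact settings used:

    # fio: 4k random read/write with direct IO for 30s (repeat with --bs=64k and --bs=256k)
    fio --name=randrw4k --rw=randrw --bs=4k --size=256m --direct=1 \
        --runtime=30 --time_based --group_reporting
    # ioping: disk latency, 10 requests against the current directory
    ioping -c 10 .
    # iperf3: bandwidth throughput to a private endpoint (hostname is a placeholder)
    iperf3 -c iperf.example.net -t 10 -R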

  • @uptime said:
    some thoughtful work went into @jsg's vpsbench: https://www.lowendtalk.com/discussion/144821/new-vps-specific-benchmark-update-download-avail/ is the beginning of some interesting discussion, worth digging into a bit more for some useful insights relating specifically to VPS performance

    Have you used it, though? It's the most inaccurate benchmark I've ever used. I've only ever seen one person ask about it, and when there was no reply I understood that no one used it and no one gave a shit, so I've just ignored it.

    Run it on your shittiest servers and then wonder why it gives disk results in the GB/s range (free Oracle VMs with 50MB/s actual limits and Cloud at Cost with 6-60MB/s actual limits). Then run it on your NVMe servers and wonder why those results are lower. The numbers are nonsensical.

    I gave up because the author is never wrong, just clumsy, and doesn't give a shit about doing good work.

    Thanked by 1uptime
  • @TimboJones said:
    Have you used it, though?

    I have not used it - but I find the ideas intriguing, and I would like to subscribe to the newsletter!

  • Jord Moderator, Host Rep

    @MasonR is prem.

    Thanked by 1MasonR
  • jsg Member, Resident Benchmarker

    For the sake of fairness and correctness:

    I've done quite a few benchmarks and reviews using my vpsbench, and many of those benchmarks were done at the providers' request. I also know of multiple providers using my vpsbench internally to check server hardware and software, optimize servers, evaluate offered systems, etc.

    Quite a few have publicly thanked me for my benchmarks and none doubted the results of my benchmarks.

    So that's quite a few professionals whose income is influenced by my benchmarks/reviews -against- 1 guy who loves to trash talk me and my work.

    Thanked by 2seriesn poisson
  • I wouldn't worry about people who have no problem assuming they must be correct even if they lack the ability to understand. Asperger's is a real thing.

  • @jsg said:
    For the sake of fairness and correctness:

    I've done quite a few benchmarks and reviews using my vpsbench, and many of those benchmarks were done at the providers' request. I also know of multiple providers using my vpsbench internally to check server hardware and software, optimize servers, evaluate offered systems, etc.

    Quite a few have publicly thanked me for my benchmarks and none doubted the results of my benchmarks.

    So that's quite a few professionals whose income is influenced by my benchmarks/reviews -against- 1 guy who loves to trash talk me and my work.

    I made a point of saying your benchmark reports GB/s for shitty services and your response is that providers don't complain. No shit, Sherlock. You really don't understand much.

    Why a provider would ever use your benchmark to check their system makes no sense to me and makes me think the provider is garbage.

    What you said has no relevance to the validity of the test results. I have yet to see someone else post results from your benchmark, and @poisson doesn't even use your benchmarks. In fact, he does a MUCH better job than you.

    I've questioned the validity of your results, as well as your commentary on those results, as inconsistent and indicative of problems nearly every time. You really don't have any validation that your results are valid other than "people use it and thank me". Yet you're in the security field? Haha, haha, haha.

  • jsg Member, Resident Benchmarker

    @TimboJones

    Did it ever come to your mind that you might be the one who fails to understand? Of course not.

    Plus you are lying. My results are quite consistently lower ("worse") than those from other benchmarks and that's because I want realistic numbers rather than optimal ones.

    This is a typical case of TimboJones modus operandi. You smear others who actually contribute to the community while you yourself have not written any reviews afaik, let alone ones based on real world numbers.

    Thanked by 1poisson
  • @jsg said:
    @TimboJones

    Did it ever come to your mind that you might be the one who fails to understand? Of course not.

    Your greatest trolling skill is to repeat the thing you keep doing while pretending (oblivious?) you don't.

    This is a typical case of TimboJones modus operandi. You smear others who actually contribute to the community while you yourself have not written any reviews afaik, let alone ones based on real world numbers.

    Others? It's just you. It's not a smear, you are putting out garbage and acting like it's gospel. Worse, people are actually misled into believing you know wtf you're talking about, but you really don't. It's really, really pathetic of you; you're like Slashdot's version of the legendary asshole apk.

    Plus you are lying. My results are quite consistently lower ("worse") than those from other benchmarks and that's because I want realistic numbers rather than optimal ones.

    Realistic? He says, without providing any proof. Which specific other benchmarks would those be? You are tone deaf. I gave you ample warning to check yourself before you're humiliated further. You asked for it, asshole:

    Free tier Oracle VM:

    # ./vpsb.linux-x64 my.targets
    Using "my.targets" as target list
    Machine: amd64, Arch.: x86_64, Model: AMD EPYC 7551 32-Core Processor
    OS, version: Linux 3.10.0, Mem.: 987 MB
    CPU - Cores: 2, Family/Model/Stepping: 23/1/2
    Cache: 64K/64K L1d/L1i, 512K L2, 16M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 fma cx16
              sse4_1 sse4_2 popcnt aes xsave osxsave avx f16c rdrnd hypervisor
    Ext. Flags: fsgsbase bmi1 avx2 smep bmi2 syscall nx mmxext fxsr_opt pdpe1gb
              rdtscp lm lahf_lm cmp_legacy cr8_legacy lzcnt sse4a misalignsse
              3dnowprefetch osvw topoext perfctr_core
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  137.28 MB/s
    --- proc/mem/performance test multi-core ---
    ................
    4 times 64 rounds ~ 4.00 GB ->  141.26 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    769.36 MB/s
    Random writing     .................................................................................................................................
    1.133 GB/s
    Sequential reading .................................................................................................................................
    206.67 MB/s
    Random reading     .................................................................................................................................
    4.772 GB/s
    --- network test - target       100KB  1MB  10MB   -> 64 MB ---
    http://speedtest.fra02.softlayer.com/downloads/test100.zip      DE,FRA: .......
            2.2 Mb/s   6.2 Mb/s   27.9 Mb/s    -> 27.7 Mb/s
    http://speedtest.par01.softlayer.com/downloads/test100.zip      FR,PAR: .......
            2.2 Mb/s   7.0 Mb/s   32.4 Mb/s    -> 28.2 Mb/s
    http://speedtest.ams01.softlayer.com/downloads/test500.zip      NL,AMS: .......
            2.3 Mb/s   7.2 Mb/s   33.4 Mb/s    -> 29.2 Mb/s
    
    # curl -s https://raw.githubusercontent.com/masonr/yet-another-bench-script/master/yabs.sh | bash
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2019-10-08                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Mon Nov 18 10:44:41 GMT 2019
    
    Basic System Information:
    ---------------------------------
    Processor  : AMD EPYC 7551 32-Core Processor
    CPU cores  : 2 @ 1996.246 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 987M
    Swap       : 8.0G
    Disk       : 39G
    
    Disk Speed Tests:
    ---------------------------------
           | Test 1      | Test 2      | Test 3      | Avg        
           |             |             |             |            
    Write  | 55.70  MB/s | 51.30  MB/s | 51.30  MB/s | 52.77  MB/s
    Read   | 51.27  MB/s | 51.06  MB/s | 51.15  MB/s | 51.16  MB/s
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider                  | Location (Link)           | Send Speed      | Recv Speed     
                              |                           |                 |                
    Bouygues Telecom          | Paris, FR (10G)           | 47.4 Mbits/sec  | 45.9 Mbits/sec 
    Online.net                | Paris, FR (10G)           | 47.8 Mbits/sec  | 42.2 Mbits/sec 
    Severius                  | The Netherlands (10G)     | 47.8 Mbits/sec  | 44.3 Mbits/sec 
    Worldstream               | The Netherlands (10G)     | 48.2 Mbits/sec  | 47.9 Mbits/sec 
    wilhelm.tel               | Hamburg, DE (10G)         | 47.7 Mbits/sec  | 0.00 bits/sec  
    Biznet                    | Bogor, Indonesia (1G)     | 0.00 bits/sec   | 0.00 bits/sec  
    Hostkey                   | Moscow, RU (1G)           | 46.8 Mbits/sec  | busy           
    Velocity Online           | Tallahassee, FL, US (10G) | 48.0 Mbits/sec  | 49.0 Mbits/sec 
    Airstream Communications  | Eau Claire, WI, US (10G)  | 49.2 Mbits/sec  | 49.6 Mbits/sec 
    Hurricane Electric        | Fremont, CA, US (10G)     | 47.3 Mbits/sec  | busy           
    
    Geekbench 4 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 1472                          
    Multi Core      | 1582                          
    Full Test       | https://browser.geekbench.com/v4/cpu/14941852
    

    CloudatCost V3 server:

    $ ./vpsb.linux-x64 -s -c -d -p /tmp
    disk test path set to /tmp
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
    OS, version: Linux 4.18.0, Mem.: 1.805 GB
    CPU - Cores: 4, Family/Model/Stepping: 6/45/7
    Cache: 32K/32K L1d/L1i, 256K L2, 20M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh ds mmx fxsr sse sse2 ss htt sse3 pclmulqdq ssse3 cx16
              sse4_1 sse4_2 popcnt aes xsave osxsave avx hypervisor
    Ext. Flags: syscall nx rdtscp lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  262.32 MB/s
    --- proc/mem/performance test multi-core ---
    ................................
    8 times 64 rounds ~ 8.00 GB ->  872.28 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    1.369 GB/s
    Random writing     .................................................................................................................................
    2.111 GB/s
    Sequential reading .................................................................................................................................
    1.223 GB/s
    Random reading     .................................................................................................................................
    2.400 GB/s
    $ 
    $ sleep 120
    $ 
    $ curl https://raw.githubusercontent.com/masonr/yet-another-bench-script/master/yabs.sh -o yabs.sh; chmod +x yabs.sh
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  9848  100  9848    0     0  71883      0 --:--:-- --:--:-- --:--:-- 71883
    $ ./yabs.sh -i
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2019-10-08                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sat Dec 21 18:27:23 EST 2019
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
    CPU cores  : 4 @ 2900.000 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 1.8Gi
    Swap       : 2.1Gi
    Disk       : 37G
    
    Disk Speed Tests:
    ---------------------------------
           | Test 1      | Test 2      | Test 3      | Avg        
           |             |             |             |            
    Write  | 22.90  MB/s | 24.10  MB/s | 68.60  MB/s | 38.53  MB/s
    Read   | 85.15  MB/s | 121.36 MB/s | 101.48 MB/s | 102.67 MB/s
    
    Geekbench 4 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 3017                          
    Multi Core      | 7518                          
    Full Test       | https://browser.geekbench.com/v4/cpu/15060893
    
    $ sleep 120
    
    $ ./vpsb.linux-x64 -s -c -d -p /root
    disk test path set to /root
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
    OS, version: Linux 4.18.0, Mem.: 1.805 GB
    CPU - Cores: 4, Family/Model/Stepping: 6/45/7
    Cache: 32K/32K L1d/L1i, 256K L2, 20M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh ds mmx fxsr sse sse2 ss htt sse3 pclmulqdq ssse3 cx16
              sse4_1 sse4_2 popcnt aes xsave osxsave avx hypervisor
    Ext. Flags: syscall nx rdtscp lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  262.69 MB/s
    --- proc/mem/performance test multi-core ---
    ................................
    8 times 64 rounds ~ 8.00 GB ->  845.52 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    13.394 GB/s
    Random writing     .................................................................................................................................
    7.828 GB/s
    Sequential reading .................................................................................................................................
    13.464 GB/s
    Random reading     .................................................................................................................................
    7.462 GB/s
    

    CloudatCost V1 server:

    # ./vpsb.linux-x64 -s -c -d -p /tmp
    disk test path set to /tmp
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Xeon(R) CPU           E5405  @ 2.00GHz
    OS, version: Linux 4.9.0, Mem.: 3.883 GB
    CPU - Cores: 8, Family/Model/Stepping: 6/23/6
    Cache: 32K/32K L1d/L1i, 6M L2, ? L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh ds mmx fxsr sse sse2 ss htt sse3 ssse3 cx16 sse4_1 x2apic
              hypervisor
    Ext. Flags: syscall nx lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  173.27 MB/s
    --- proc/mem/performance test multi-core ---
    ................................................................
    16 times 64 rounds ~ 16.00 GB ->  131.02 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    390.11 MB/s
    Random writing     .................................................................................................................................
    259.79 MB/s
    Sequential reading .................................................................................................................................
    1.071 GB/s
    Random reading     .................................................................................................................................
    803.67 MB/s
    # 
    # sleep 120
    # 
    # curl https://raw.githubusercontent.com/masonr/yet-another-bench-script/master/yabs.sh -o yabs.sh; chmod +x yabs.sh
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  9848  100  9848    0     0   138k      0 --:--:-- --:--:-- --:--:--  139k
    root@ubnt:/tmp/vpsbench# ./yabs.sh -i
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2019-10-08                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sat 21 Dec 2019 06:30:06 PM EST
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Xeon(R) CPU           E5405  @ 2.00GHz
    CPU cores  : 8 @ 1999.496 MHz
    AES-NI     : ❌ Disabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 3.9Gi
    Swap       : 459Mi
    Disk       : 19G
    
    Disk Speed Tests:
    ---------------------------------
           | Test 1      | Test 2      | Test 3      | Avg        
           |             |             |             |            
    Write  | 13.00  MB/s | 12.90  MB/s | 12.90  MB/s | 12.93  MB/s
    Read   | 52.31  MB/s | 50.81  MB/s | 53.07  MB/s | 52.07  MB/s
    
  • LETBox:

    # ./vpsb.linux-x64 -s -c -d -p /tmp      
    disk test path set to /tmp
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
    OS, version: Linux 4.18.0, Mem.: 7.804 GB
    CPU - Cores: 2, Family/Model/Stepping: 6/62/4
    Cache: 32K/32K L1d/L1i, 2M L2, 16M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh mmx fxsr sse sse2 ss sse3 pclmulqdq vmx ssse3 cx16 pcid
              sse4_1 sse4_2 x2apic popcnt tsc_deadline aes xsave osxsave avx f16c
              rdrnd hypervisor
    Ext. Flags: fsgsbase tsc_adjust smep erms syscall nx pdpe1gb rdtscp lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  317.24 MB/s
    --- proc/mem/performance test multi-core ---
    ................
    4 times 64 rounds ~ 4.00 GB ->  658.98 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    1.099 GB/s
    Random writing     .................................................................................................................................
    1.802 GB/s
    Sequential reading .................................................................................................................................
    4.544 GB/s
    Random reading     .................................................................................................................................
    3.221 GB/s
    
    # curl https://raw.githubusercontent.com/masonr/yet-another-bench-script/master/yabs.sh -o yabs.sh; chmod +x yabs.sh
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  9848  100  9848    0     0  60790      0 --:--:-- --:--:-- --:--:-- 60790
    [root@butters vpsbench]# ./yabs.sh -i
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2019-10-08                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sat Dec 21 18:23:08 EST 2019
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
    CPU cores  : 2 @ 2999.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 7.8Gi
    Swap       : 0B
    Disk       : 3.0T
    
    Disk Speed Tests:
    ---------------------------------
           | Test 1      | Test 2      | Test 3      | Avg        
           |             |             |             |            
    Write  | 741.00 MB/s | 836.00 MB/s | 835.00 MB/s | 804.00 MB/s
    Read   | 506.64 MB/s | 529.10 MB/s | 1102.42 MB/s | 712.72 MB/s
    
    Geekbench 4 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 2994                          
    Multi Core      | 5541                          
    Full Test       | https://browser.geekbench.com/v4/cpu/15060871
    

    VMBox:

    # ./vpsb.linux-x64 -s -c -d
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
    OS, version: Linux 2.6.32, Mem.: 2.0 GB
    CPU - Cores: 2, Family/Model/Stepping: 6/45/6
    Cache: 32K/32K L1d/L1i, 256K L2, 20M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh ds acpi mmx fxsr sse sse2 ss htt tm pbe sse3 pclmulqdq
              dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca
              sse4_1 sse4_2 x2apic popcnt tsc_deadline aes xsave osxsave avx
    Ext. Flags: syscall nx pdpe1gb rdtscp lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  91.49 MB/s
    --- proc/mem/performance test multi-core ---
    ................
    4 times 64 rounds ~ 4.00 GB ->  176.46 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    228.29 MB/s
    Random writing     .................................................................................................................................
    440.13 MB/s
    Sequential reading .................................................................................................................................
    922.32 MB/s
    Random reading     .................................................................................................................................
    692.86 MB/s
    #
    # sleep 120
    #
    # curl https://raw.githubusercontent.com/masonr/yet-another-bench-script/master/yabs.sh -o yabs.sh; chmod +x yabs.sh
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  9848  100  9848    0     0  26852      0 --:--:-- --:--:-- --:--:-- 26907
    [root@kenny vpsbench]# ./yabs.sh -i
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2019-10-08                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sat Dec 21 15:51:40 PST 2019
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
    CPU cores  : 2 @ 1200.000 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 2.0G
    Swap       : 0B
    Disk       : 50G
    
    Disk Speed Tests:
    ---------------------------------
           | Test 1      | Test 2      | Test 3      | Avg
           |             |             |             |
    Write  | 151.00 MB/s | 57.50  MB/s | 68.70  MB/s | 92.40  MB/s
    Read   | 25.54  MB/s | 161.40 MB/s | 165.72 MB/s | 117.55 MB/s
    

    Expected excuses:
    1. You test on FreeBSD 12 and these are all linux based
    2. I'm somehow doing it wrong
    3. Server suddenly became super busy the moment after doing vpsbenchmark
    4. Because Facebook
    5. Because someone was on your lawn

  • jsg Member, Resident Benchmarker
    edited December 2019

    @TimboJones

    Thank you for finally providing evidence for what I have said: you are utterly clueless and do not know what you're talking about.

    • yabs uses dd to benchmark disk writes and ioping (of unknown origin) to benchmark reads. Those tools may provide a first impression but they are not benchmarks. Why he uses two different tools for read and write testing is a question you must ask the author, not me.
    • yabs uses the direct flag both for reading (ioping -D) and writing (oflag=direct)

    Of course you didn't know that, but the 'direct' flag usually does dramatically slow down IO by disabling (or trying to disable) OS caching. Btw, vpsbench can overwhelm even RAID controller caches, while those tools are limited to (largely) disabling the OS cache.

    You can verify that by simply running dd if=/dev/zero of=/test_path/test.out bs=64k count=16k a couple of times with and then a couple of times without the oflag=direct argument. On my desktop (linux 64-bit) the result without the 'direct' flag is about 100% faster than with it.
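
    Spelled out, the comparison suggested above looks like this (the path is a placeholder; run each variant a few times and compare the reported speeds):

    # buffered write: the OS page cache absorbs much of it, so the reported speed is usually far higher
    dd if=/dev/zero of=/tmp/ddtest.out bs=64k count=16k
    # direct write: oflag=direct bypasses the page cache, so the reported speed drops
    dd if=/dev/zero of=/tmp/ddtest.out bs=64k count=16k oflag=direct
    rm -f /tmp/ddtest.out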

    In other words: yabs disk results are bound to be artificially slower than mine. Simply leaving out the 'direct' flag will dramatically change the situation.

    My vpsbench, on the other hand, does not (ab)use other tools to do the work; it uses its own code that actually reflects what potential customers can expect from a VPS's disk. Plus, vpsbench doesn't stupidly write nonsense data or, even worse, extremely cacheable and compressible zeroes, but real-world data that is very hard to cache and to compress.

    As for the timing, dd, ioping and vpsbench are all based on the best high-resolution clock available on a system, typically CLOCK_MONOTONIC.

    TL;DR You are a clueless idiot who does not contribute anything himself but consistently tries to smear people who actually wrote benchmark tools, have lots of experience and know what they are doing, while you - provably - do not even understand the tools - written by others - you use.

    And you call me an a__hole? Well that's probably because that is the playground you really know and live in ...

    P.S. Kudos to @MasonR - that's a nice bit of shell scripting! I would suggest though to base it on sh rather than bash because bash is not available everywhere. Anyway, his tool is useful to get a first impression of a system

    Thanked by 2poisson ouvoun
  • @jsg it seems to me fio does a better job measuring disk performance on linux. The rate of drop of iops with increasing block sizes seems like a better indicator of disk performance than dd, and fio can disable disk cache (it was at least designed with that possibility). I use ioping for latency tests. I hope my rationale for using fio and ioping on Linux is sound.

  • jsg Member, Resident Benchmarker
    edited December 2019

    @poisson said:
    @jsg it seems to me fio does a better job measuring disk performance on linux. The rate of drop of iops with increasing block sizes seems like a better indicator of disk performance than dd, and fio can disable disk cache (it was at least designed with that possibility). I use ioping for latency tests. I hope my rationale for using fio and ioping on Linux is sound.

    I don't want to judge that as my interest in both tools is very limited (re. benchmarking) but your assumption that fio is better because fio can disable the cache is wrong. Both dd and fio (and ioping) can use or (largely) disable the disk cache.
    If anything, ioping might be better because it was meant to be a measuring tool, while dd was meant for other purposes but often gets (ab)used for benchmarking because it also reports its throughput ("speed").
    As for fio I personally wouldn't consider it for doing my benchmarks because just like dd fio can be used for that purpose but is (a) not really a benchmark tool, and (b) overkill. For its real purpose though fio is a fine tool.

    Sidenote: both dd's and ioping's direct flag/parameter only address the OS caching which is of limited use on servers because those often have hardware Raid with its own cache. In fact that factor is one of the reasons why I decided to design and implement my own benchmark test.

    I'm planning to write an enhanced and extended vpsbench 2.0 (closed source this time) but I'm willing to provide you a library with my disk test mechanics if you are interested. Of course I would also provide a description of what it does and how it works. You could then use that for whatever benchmark you are building (e.g. in Python or whatever). Just contact me if interested.

    Thanked by 1poisson
  • MasonR Community Contributor

    @jsg said:
    I would suggest though to base it on sh rather than bash because bash is not available everywhere.

    Might look into this in the future. I'm most familiar with (and all my personal/work scripts are) bash, so that's what I stuck to for this when I wrote it up. I haven't encountered any distros that don't have bash included by default (at least none that I use regularly), but I'd be interested in testing and getting this to work on one if I can identify one (maybe alpine?).

    Anyway, his tool is useful to get a first impression of a system

    That's pretty much all it was meant for, just to get some basic info on the server that's being analyzed. I wrote it first just to do the iperf tests since every other bench I've seen does single threaded http downloads (or uses speedtest.net -- which I dislike for a few reasons). And I added in geekbench because I run that on most of my machines anyways, so it was more out of convenience.

    As for the disk tests -- I agree completely that they are not the best measure and the results can vary widely under different setups, leading to misleading conclusions regarding disk performance. If I could completely leave it out, I would, but I think that it's something that everyone has come to expect to see from a benchmarking script. I pretty much just copied what nench/bench were doing with their speed tests. Was originally using dd for read/write, then thought ioping would give "truer" results for reads, but now I'm not really sure.

    I wanted the script to not rely on any external dependencies to be installed on the system (which is quite limiting to what you can use for the tests). I attempted to compile fio and use that instead, but couldn't get it to work. If I can find something better for the disk tests, I'll certainly swap them out. The only requirement I have is that it won't need the end user to install (or compile) anything in order to run it.

    I don't try to pretend that I know much about benchmarking a system and charting its performance. I just know what I look for and wrote it up in a script, which others might benefit from. Obviously people that want to get a better idea of server performance should be using something much more sophisticated.

    Thanked by 3poisson uptime jsg
  • jsg Member, Resident Benchmarker
    edited December 2019

    @MasonR said:
    Might look into this in the future. I'm most familiar with (and all my personal/work scripts are) bash, so that's what I stuck to for this when I wrote it up. I haven't encountered any distros that don't have bash included by default (at least none that I use regularly), but I'd be interested in testing and getting this to work on one if I can identify one (maybe alpine?).

    Yes, alpine is an example; its default shell is 'ash', which is a variant of 'sh'. The BSDs also have bash packages/ports but don't have it installed by default (they ship a csh or ksh variant instead); they do, however, have an 'sh' installed.
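
    As a minimal sketch of one portable pattern (an assumption on my part, not how yabs actually works): start the script under /bin/sh and re-exec it under bash where bash exists:

    #!/bin/sh
    # re-exec under bash if we are not already in it and bash is available
    if [ -z "$BASH_VERSION" ] && command -v bash >/dev/null 2>&1; then
        exec bash "$0" "$@"
    fi
    echo "running under: ${BASH_VERSION:-plain sh}"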

    Also thanks for explaining what your goal was.

    Btw, if you are interested you too can get my new (yet to be done) vpsbench library (or binaries) for a potential new version of your script.

    Thanked by 1MasonR
  • @jsg said:

    @poisson said:
    @jsg it seems to me fio does a better job measuring disk performance on linux. The rate of drop of iops with increasing block sizes seems like a better indicator of disk performance than dd, and fio can disable disk cache (it was at least designed with that possibility). I use ioping for latency tests. I hope my rationale for using fio and ioping on Linux is sound.

    I don't want to judge that as my interest in both tools is very limited (re. benchmarking) but your assumption that fio is better because fio can disable the cache is wrong. Both dd and fio (and ioping) can use or (largely) disable the disk cache.
    If anything, ioping might be better because it was meant to be a measuring tool, while dd was meant for other purposes but often gets (ab)used for benchmarking because it also reports its throughput ("speed").
    As for fio I personally wouldn't consider it for doing my benchmarks because just like dd fio can be used for that purpose but is (a) not really a benchmark tool, and (b) overkill. For its real purpose though fio is a fine tool.

    Sidenote: both dd's and ioping's direct flag/parameter only address the OS caching which is of limited use on servers because those often have hardware Raid with its own cache. In fact that factor is one of the reasons why I decided to design and implement my own benchmark test.

    I'm planning to write an enhanced and extended vpsbench 2.0 (closed source this time) but I'm willing to provide you a library with my disk test mechanics if you are interested. Of course I would also provide a description of what it does and how it works. You could then use that for whatever benchmark you are building (e.g. in Python or whatever). Just contact me if interested.

    Reading your reply, I think perhaps there is an issue with the meaning of disk performance. You are right: they measure different things and users of benchmarking scripts don't understand what they are measuring. I consider fio important because many applications on servers run on databases and random read write IO performance is crucial.

    Your benchmark measures other important disk mechanics (and I don't have the means to account for RAID data pollution) so your tool is very important in a server setting. If it can be compiled and run on debian, I would like to use it in addition to fio to determine true disk read write capabilities.

    Thanked by 1uptime
  • Btw, if you are interested you too can get my new (yet to be done) vpsbench library (or binaries) for a potential new version of your script.

    That would be cool! YABS has become a standard tool for me for lightweight testing on new servers and having more realistic disk IO benchmarks would be awesome.

    Thanked by 2MasonR jsg
  • poisson Member
    edited December 2019

    @MasonR I just roll with my homebrew script and install all the dependencies I need. It's for my own testing, not for idiot-proof public consumption, so it works fine for me. Thanks for YABS though! It was the basis of my homebrew script.

    Thanked by 1MasonR
  • jsg Member, Resident Benchmarker

    @poisson said:
    ... I consider fio important because many applications on servers run on databases and random read write IO performance is crucial.

    Your benchmark measures other important disk mechanics ...

    No, my benchmark measures both, sequential and (truly!) random reading and writing and the new version will add some more capabilities (e.g. effective latency).

    And yes, the library (or binary) of my (yet to be done) version 2 that I offered will run on debian, virtually all other linux distros, and FreeBSD. There will be 4 binaries/libraries, two for linux and two for FreeBSD, 32-bit (still in use sometimes) and 64-bit. All your scripts have to do is find out which OS they are running on and whether it is 32- or 64-bit, and then use the appropriate version of my lib/bin.
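
    A minimal sketch of that selection logic; the 32-bit file names here are guesses, only vpsb.linux-x64 and vpsb.fbsd-x64 actually appear in this thread:

    #!/bin/sh
    # pick the matching vpsbench binary from OS and architecture
    case "$(uname -s)-$(uname -m)" in
        Linux-x86_64)   bin=vpsb.linux-x64 ;;
        Linux-i?86)     bin=vpsb.linux-x86 ;;   # hypothetical 32-bit name
        FreeBSD-amd64)  bin=vpsb.fbsd-x64  ;;
        FreeBSD-i386)   bin=vpsb.fbsd-x86  ;;   # hypothetical 32-bit name
        *) echo "unsupported platform" >&2; exit 1 ;;
    esac
    ./"$bin" -s -c -d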

    @ouvoun said:
    That would be cool! YABS has become a standard tool for me for lightweight testing on new servers and having more realistic disk IO benchmarks would be awesome.

    It's up to @MasonR to decide on that; it's his tool. In case he wants my binary (in his case) he'll get it (or more precisely the 4 relevant versions). But again, you'll have to ask him.

  • @jsg said:

    @poisson said:
    ... I consider fio important because many applications on servers run on databases and random read write IO performance is crucial.

    Your benchmark measures other important disk mechanics ...

    No, my benchmark measures both, sequential and (truly!) random reading and writing and the new version will add some more capabilities (e.g. effective latency).

    And yes, the library (or binary) of my (yet to be done) version 2 that I offered will run on debian, virtually all other linux distros, and FreeBSD. There will be 4 binaries/libraries, two for linux and two for FreeBSD, 32-bit (still in use sometimes) and 64-bit. All your scripts have to do is find out which OS they are running on and whether it is 32- or 64-bit, and then use the appropriate version of my lib/bin.

    @ouvoun said:
    That would be cool! YABS has become a standard tool for me for lightweight testing on new servers and having more realistic disk IO benchmarks would be awesome.

    It's up to @MasonR to decide on that; it's his tool. In case he wants my binary (in his case) he'll get it (or more precisely the 4 relevant versions). But again, you'll have to ask him.

    Wow. OK when you are done I will ditch fio and ioping. Right now I am doing extended tests of VPS disk based on these tools. If one tool gets the job done, I prefer one tool for parsimony. Will be good to document how to interpret the results. I don't have the technical chops, but writing a piece of good documentation is something I can help with.

    Thanked by 1jsg
  • jsg Member, Resident Benchmarker

    @poisson said:
    Wow. OK when you are done I will ditch fio and ioping. Right now I am doing extended tests of VPS disk based on these tools. If one tool gets the job done, I prefer one tool for parsimony. Will be good to document how to interpret the results. I don't have the technical chops, but writing a piece of good documentation is something I can help with.

    I already provide small but reasonable documentation for my current version. For the new version I intend to do even better, and in particular to explain the "mechanics" used. But you can probably make it a lot more user-friendly; I'm not really good at that.

  • @jsg said:
    @TimboJones

    Thank you for finally providing evidence for what I have said: you are utterly clueless and do not know what you're talking about.

    Fact check: You stated "My results are quite consistently lower ("worse") than those from other benchmarks and that's because I want realistic numbers rather than optimal ones."

    Actual fact: You're zero for two. Your results were consistently HIGHER than those from other benchmarks, and your results are nowhere near realistic. You were just proven wrong. I think you have problems with the English language, as you are constantly using words to mean the opposite of what they actually mean.

    • yabs uses dd to benchmark disk writes and ioping (of unknown origin) to benchmark reads. Those tools may provide a first impression but they are not benchmarks. Why he uses two different tools for read and write testing is a question you must ask the author, not me.
    • yabs uses the direct flag both for reading (ioping -D) and writing (oflag=direct)

    Of course you didn't know that, but the 'direct' flag usually does dramatically slow down IO by disabling (or trying to disable) OS caching. Btw, vpsbench can overwhelm even RAID controller caches, while those tools are limited to (largely) disabling the OS cache.

    The objective is to run the test and see results indicative of the user's experience. Saying shit about another benchmark and not your own extremely wrong results is weak sauce. In the end, does the result meet that expectation? In your case, it's a big, fat fail.

    You can verify that by simply running dd if=/dev/zero of=/test_path/test.out bs=64k count=16k a couple of times with and then a couple of times without the oflag=direct argument. On my desktop (linux 64-bit) the result without the 'direct' flag is about 100% faster than with it.

    In other words: yabs disk results are bound to be artificially slower than mine. Simply leaving out the 'direct' flag will dramatically change the situation.

    My vpsbench, on the other hand, does not (ab)use other tools to do the work; it uses its own code that actually reflects what potential customers can expect from a VPS's disk. Plus, vpsbench doesn't stupidly write nonsense data or, even worse, extremely cacheable and compressible zeroes, but real-world data that is very hard to cache and to compress.

    Fuck, you are seriously dense. Your numbers report astronomically high results that are not valid. It doesn't reflect any performance a user would experience!!! CAN YOU UNDERSTAND THIS?!?! WHAT LANGUAGE DO I NEED TO SAY THIS IN? FORGET OTHER BENCHMARKS, FIX YOUR OWN SHIT!

    Sequential writing .................................................................................................................................
    13.394 GB/s
    Random writing     .................................................................................................................................
    7.828 GB/s
    Sequential reading .................................................................................................................................
    13.464 GB/s
    Random reading     .................................................................................................................................
    7.462 GB/s
    

    As for the timing, dd, ioping and vpsbench are all based on the best high-resolution clock available on a system, typically CLOCK_MONOTONIC.

    TL;DR You are a clueless idiot who does not contribute anything himself but consistently tries to smear people who actually wrote benchmark tools, have lots of experience and know what they are doing, while you - provably - do not even understand the tools - written by others - you use.

    Again, not trying to "smear" any other person but you, and you know this but keep repeating it. Don't be dumb. I've told you before that Centminmod is the gold standard of benchmarks (and you look like a basic bitch in comparison) and @poisson does a MUCH better job than you do in every way. You don't even get the importance of comparative testing and flip flop between real world and synthetic testing.

    Your benchmark is the worst I have ever seen. It's not even in the same timezone as being close to representative performance. It is VERY CLEAR I do not understand this tool - written by you - because it is nonsensical and all you can do is attack other benchmarks as being wrong but have no verification of your own.

    And you call me an a__hole? Well that's probably because that is the playground you really know and live in ...

    Right, the guy who couldn't take criticism, makes claims without proof, calls me a liar and then attacks the benchmark once you're proven to be full of shit. You're never wrong, just clumsy. You were "clumsy" when you said your benchmark reports lower than all the others, right?

    P.S. Kudos to @MasonR - that's a nice bit of shell scripting! I would suggest though to base it on sh rather than bash because bash is not available everywhere. Anyway, his tool is useful to get a first impression of a system

    You are truly, truly mad (as in the apk insane kind, not in the emotional sense). You attack yabs but have no comment about the nonsensical disk performance numbers your own program reports? This is Cloud at Cost, notoriously one of the most oversold and POS providers there is (they intentionally limit disk performance). And you think the problem is how yabs uses dd.... get a grip. Look inward for the problem, not outward.

    Here is a new Cloud at Cost FreeBSD benchmark:

    # ./vpsb.fbsd-x64 -s -c -d
    Machine: amd64, Arch.: amd64, Model: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
    OS, version: FreeBSD 12.0, Mem.: 1.981 GB
    CPU - Cores: 2, Family/Model/Stepping: 6/45/7
    Cache: 32K/32K L1d/L1i, 256K L2, 20M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh ds mmx fxsr sse sse2 ss sse3 pclmulqdq ssse3 cx16 sse4_1
              sse4_2 popcnt aes xsave osxsave avx hypervisor
    Ext. Flags: syscall nx rdtscp lm lahf_lm
    
    --- proc/mem/performance test single core ---
    ................................................................
    64 rounds~ 1.00 GB ->  214.54 MB/s
    --- proc/mem/performance test multi-core ---
    ................
    4 times 64 rounds ~ 4.00 GB ->  478.72 MB/s
    --- disk test ---
    Sequential writing .................................................................................................................................
    299.63 MB/s
    Random writing     .................................................................................................................................
    4.09 MB/s
    Sequential reading .................................................................................................................................
    1.676 GB/s
    Random reading     .................................................................................................................................
    507.32 MB/s
    

    tl;dr Your program reports performance MUCH HIGHER THAN ACTUAL AND IN NO WAY USEFUL OR REPRESENTATIVE OF USER'S EXPERIENCE. 1.676 GB/s is laughable.

    Anyone who has a really crappy server can run your shitty benchmark against yabs or any other benchmark and see which one is closer to their user experience. Thing is, no one seems to give a shit about running your program.

  • jsg Member, Resident Benchmarker

    @TimboJones

    You are amusing me. You can throw as many results at me as you like, that doesn't change the fact that you are - provably and proven by yourself - utterly clueless and do not even know what you are doing and you try to compensate for that by getting ever more aggressive and vulgar.

    Just read what the author of yabs himself wrote here.

    And again: What have you so far contributed here - besides smearing, attacking and trying to insult users who actually contributed? Why don't you write and publish a good benchmark, Mr. AlwaysKnowsBetter?

  • @jsg said:
    @TimboJones

    You are amusing me. You can throw as many results at me as you like, that doesn't change the fact that you are - provably and proven by yourself - utterly clueless and do not even know what you are doing and you try to compensate for that by getting ever more aggressive and vulgar.

    Just read what the author of yabs himself wrote here.

    You mean honest about what it does and its limitations? Take a hint! I see no explanation for your nonsensical numbers; why don't you stay on topic instead of ignoring your bugs?

    And again: What have you so far contributed here - besides smearing, attacking and trying to insult users who actually contributed? Why don't you write and publish a good benchmark, Mr. AlwaysKnowsBetter?

    Please, tell me where the smear is? Who is being attacked besides someone claiming their benchmark does things better than anyone else's when it is pure fraud? Tell me where I'm wrong; I'm just running your benchmark and getting nothing but garbage results. You have not shown how these results are remotely valid. You have no response for why your app spits out obviously wrong numbers, and that you don't see that is mind-blowing. Like I said, you're just another apk, too blinded by your ego to know your shit stinks to everyone else but you.

    Do you think contributing a useless benchmark is somehow better than nothing? It's worse than having nothing. Worse!

    Anyone who uses your benchmark should know it is just utterly useless as is the author.

    Stop whining like you're personally being attacked and just fix your nonsensical app.
