HostBrr | Black Friday Deals! Ryzen & i9 under €1/GB RAM - Storage VPS Under €2/TB, More Inside!

Comments

  • @labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?

  • labze Member, Patron Provider

    @Andrii said:
    @labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?

    Not sure. Kinda depends what you upgrade from.

  • @labze said:

    @Andrii said:
    @labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?

    Not sure. Kinda depends what you upgrade from.

    Could you take a quick look at 9750870? Thanks!

  • Invoice 15110

  • @labze said:

    @Andrii said:
    @labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?

    Not sure. Kinda depends what you upgrade from.

    I have Budget Storage VPS 1TB and want to make an upgrade to Hybrid Storage 2 TB.

  • labze Member, Patron Provider

    @Andrii said:

    @labze said:

    @Andrii said:
    @labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?

    Not sure. Kinda depends what you upgrade from.

    I have Budget Storage VPS 1TB and want to make an upgrade to Hybrid Storage 2 TB.

    Unfortunately there's no upgrade path between the two. You'd have to create a new server and manually migrate data over.

  • Invoice# 15142

  • Invoice #: 15143

    Thank you for 10% cashback, 50% more bandwidth and 3x backup slots!

  • Invoice #15158 - Hybrid Storage 2 TB
    50% bandwidth + x3 backup slots please. Thank you!

  • @icry said:
    Got One!
    Invoice #14982

    Forgot me :#

  • Invoice #15159

  • I am sure it must be me, but I am getting the following speeds on the HDD, which don't seem in line with the benches posted earlier!

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 16.74 MB/s    (4.1k) | 1.54 MB/s       (24)
    Write      | 16.74 MB/s    (4.1k) | 1.65 MB/s       (25)
    Total      | 33.48 MB/s    (8.3k) | 3.20 MB/s       (49)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 52.41 MB/s     (102) | 89.60 MB/s      (87)
    Write      | 55.10 MB/s     (107) | 95.57 MB/s      (93)
    Total      | 107.51 MB/s    (209) | 185.18 MB/s    (180)
    
  • @Astro said:
    I am sure it must be me, but I am getting the following speeds on the HDD, which don't seem in line with the benches posted earlier!

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 16.74 MB/s    (4.1k) | 1.54 MB/s       (24)
    Write      | 16.74 MB/s    (4.1k) | 1.65 MB/s       (25)
    Total      | 33.48 MB/s    (8.3k) | 3.20 MB/s       (49)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 52.41 MB/s     (102) | 89.60 MB/s      (87)
    Write      | 55.10 MB/s     (107) | 95.57 MB/s      (93)
    Total      | 107.51 MB/s    (209) | 185.18 MB/s    (180)
    

    I think that's quite good if you're running on an HDD.
    I also ordered a hybrid storage server and get similar IOPS to yours.

  • @marquelin said:

    @Astro said:
    I am sure it must be me, but I am getting the following speeds on the HDD, which don't seem in line with the benches posted earlier!

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 16.74 MB/s    (4.1k) | 1.54 MB/s       (24)
    Write      | 16.74 MB/s    (4.1k) | 1.65 MB/s       (25)
    Total      | 33.48 MB/s    (8.3k) | 3.20 MB/s       (49)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 52.41 MB/s     (102) | 89.60 MB/s      (87)
    Write      | 55.10 MB/s     (107) | 95.57 MB/s      (93)
    Total      | 107.51 MB/s    (209) | 185.18 MB/s    (180)
    

    I think that's quite good if you're running on an HDD.
    I also ordered a hybrid storage server and get similar IOPS to yours.

    https://lowendspirit.com/discussion/6258/hostbrr-premium-storage-promo-7950xd-w-block-storage-only-7-budget-storage-from-2-8-tb

    and

    https://lowendtalk.com/discussion/comment/3670047/#Comment_3670047

  • labze Member, Patron Provider

    @Astro said:

    @marquelin said:

    @Astro said:
    I am sure it must be me, but I am getting the following speeds on the HDD, which don't seem in line with the benches posted earlier!

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 16.74 MB/s    (4.1k) | 1.54 MB/s       (24)
    Write      | 16.74 MB/s    (4.1k) | 1.65 MB/s       (25)
    Total      | 33.48 MB/s    (8.3k) | 3.20 MB/s       (49)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 52.41 MB/s     (102) | 89.60 MB/s      (87)
    Write      | 55.10 MB/s     (107) | 95.57 MB/s      (93)
    Total      | 107.51 MB/s    (209) | 185.18 MB/s    (180)
    

    I think that's quite good if you're running on an HDD.
    I also ordered a hybrid storage server and get similar IOPS to yours.

    https://lowendspirit.com/discussion/6258/hostbrr-premium-storage-promo-7950xd-w-block-storage-only-7-budget-storage-from-2-8-tb

    and

    https://lowendtalk.com/discussion/comment/3670047/#Comment_3670047

    This would be the Finland server, right? That server specifically has high IO usage at the moment, which is likely causing the lower performance. It'll get better when usage slows down.
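
    (From inside a guest, a quick way to see whether IO pressure is the bottleneck is to watch device utilisation and iowait; this assumes the sysstat package is installed:)

    # extended device stats every 5 seconds for one minute (requires sysstat)
    iostat -x 5 12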

  • tl;dr My Hybrid Storage is "OK". Especially at BF pricing. This is not a bashing post. This is a perhaps overly investigative review from a BF-signup customer.

    This is gonna be a long one. I've had a few days, so time for a full review!

    So I jumped on one of the Hybrid Storage 16 TB deals on Black Friday. After an initial OS template issue that took my stubbornness a couple days to give up on and switch OSes, it seems to be what it claims: 16 TB of storage.

    Unfortunately, the network characteristics make it unsuitable for live backups; it really only works for at-rest backups. Using mtr from a couple of other hosts (dedis, not VPSes) outside Hetzner, with known-good networks themselves, I see one to a few percent of packet loss, at random, at the gateway IP. Pings are fine: it's the general speeds being lower than expected and the random packet loss that slightly concern me. Interactive SSH sessions generally have the responsiveness I'd expect, with occasional temporary session hangs: this is consistent with the issue being throughput rather than ping, combined with hiccups of packet loss.
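
    (If anyone wants to check the same thing, this is roughly the kind of mtr run I mean; the target address below is just a placeholder, not the actual gateway:)

    # report-mode mtr, 100 probes, run from a host with a known-good network
    # 203.0.113.10 is a placeholder -- substitute the VPS or gateway IP
    mtr -rw -c 100 203.0.113.10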

    Random (read: not cherry-picked; I did it when I wrote this part of the post) Ookla Speedtest result. (I have seen the open source speedtest-cli be far too optimistic too many times now, so only official client for me!)

    # speedtest
    
       Speedtest by Ookla
    
          Server: Netcom Kassel Gesellschaft für Telekommunikation mbH - Kassel (id: 53619)
             ISP: Hetzner Online
    Idle Latency:    23.59 ms   (jitter: 0.09ms, low: 23.50ms, high: 23.69ms)
        Download:   386.61 Mbps (data used: 362.8 MB)
                     23.72 ms   (jitter: 1.34ms, low: 23.17ms, high: 31.12ms)
          Upload:   541.19 Mbps (data used: 512.1 MB)
                     25.28 ms   (jitter: 0.38ms, low: 23.97ms, high: 30.62ms)
     Packet Loss:     0.0%
      Result URL: https://www.speedtest.net/result/c/83fbc6d4-3a96-494f-92cf-14b04794055c
    

    So the network is frustrating. But not a show stopper. My problem is... the actual storage part of the VPS. :( Since yabs only benchmarks the root filesystem, this is very similar to what yabs runs, but both for the root filesystem (/dev/vda) and the storage filesystem (/dev/vdb).

    /# fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_1m: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.33
    Starting 2 processes
    rand_rw_1m: Laying out IO file (1 file / 2048MiB)
    
    rand_rw_1m: (groupid=0, jobs=2): err= 0: pid=44684: Sun Dec  3 17:03:26 2023
      read: IOPS=3016, BW=3017MiB/s (3163MB/s)(1982MiB/657msec)
       bw (  MiB/s): min= 3070, max= 3070, per=100.00%, avg=3070.00, stdev= 0.00, samples=2
       iops        : min= 3070, max= 3070, avg=3070.00, stdev= 0.00, samples=2
      write: IOPS=3217, BW=3218MiB/s (3374MB/s)(2114MiB/657msec); 0 zone resets
       bw (  MiB/s): min= 3140, max= 3140, per=97.59%, avg=3140.00, stdev= 0.00, samples=2
       iops        : min= 3140, max= 3140, avg=3140.00, stdev= 0.00, samples=2
      cpu          : usr=5.46%, sys=5.53%, ctx=3458, majf=0, minf=17
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwts: total=1982,2114,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: bw=3017MiB/s (3163MB/s), 3017MiB/s-3017MiB/s (3163MB/s-3163MB/s), io=1982MiB (2078MB), run=657-657msec
      WRITE: bw=3218MiB/s (3374MB/s), 3218MiB/s-3218MiB/s (3374MB/s-3374MB/s), io=2114MiB (2217MB), run=657-657msec
    
    Disk stats (read/write):
      vda: ios=1548/1658, merge=0/0, ticks=19942/44521, in_queue=64463, util=79.58%
    
    /storage# fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_1m: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.33
    Starting 2 processes
    rand_rw_1m: Laying out IO file (1 file / 2048MiB)
    Jobs: 2 (f=2): [m(2)][90.5%][r=69.0MiB/s,w=54.0MiB/s][r=69,w=54 IOPS][eta 00m:02s]
    rand_rw_1m: (groupid=0, jobs=2): err= 0: pid=44699: Sun Dec  3 17:10:10 2023
      read: IOPS=106, BW=106MiB/s (111MB/s)(1982MiB/18640msec)
       bw (  KiB/s): min=14315, max=997376, per=100.00%, avg=355228.09, stdev=135667.34, samples=22
       iops        : min=   13, max=  974, avg=346.73, stdev=132.60, samples=22
      write: IOPS=113, BW=113MiB/s (119MB/s)(2114MiB/18640msec); 0 zone resets
       bw (  KiB/s): min=22482, max=1036288, per=100.00%, avg=392206.14, stdev=138202.36, samples=21
       iops        : min=   20, max= 1012, avg=382.83, stdev=135.09, samples=21
      cpu          : usr=0.25%, sys=0.60%, ctx=3101, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwts: total=1982,2114,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1982MiB (2078MB), run=18640-18640msec
      WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=2114MiB (2217MB), run=18640-18640msec
    
    Disk stats (read/write):
      vdb: ios=1808/1895, merge=0/0, ticks=770063/1405908, in_queue=2175971, util=99.50%
    

    As you can see, the speeds for / are Just Fine(TM). No complaints there for an OS disk on a storage VPS. But /storage (not its real mountpoint; names changed to protect the innocent, etc.)? Whoo boy. Storage speeds of 100 MB/s mean the absolute best case can't fill the bandwidth pipes, as the volume also appears to be mounted via a gigabit link. This is... unfortunate. But also maybe not a concern, since Speedtest can only use half the rated speed anyhow. But those IOPS are... not great. At all. Also, since I tried to match YABS, these numbers are (like YABS) slightly more optimistic (read: they look better) than how real-world performance tends to feel.

    Since I tend to like bonnie++ more than fio (and thereby YABS), I ran that too. But, in the interest of full disclosure, bonnie++ isn't perfect. Neither is fio, but I don't know anyone who's benchmarked the benchmarks to the same degree as bonnie++ (mostly by virtue of bonnie++ being a much older and more widely used [outside of LET, or even just Linux] benchmark). All the tools are suspect, and comparing their output is more useful. As the root filesystem numbers, both above and via bonnie, are totally non-problematic for a storage VPS root filesystem, and this is getting way longer than I anticipated, let's just look at the bonnie++ output for the storage filesystem (it's the device anyone really cares about anyhow, let's be honest):

    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    my.hostname   31912M 1309k  44 14.6m   7 28.3m   4 3638k  77  200m  15 337.8   2
    Latency             47609us   10280ms   18261ms     123ms    3563ms    5037ms
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    my.hostname          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 16384   0 +++++ +++ 16384   0 16384   0 +++++ +++ 16384   1
    Latency              5765ms     283us     861ms   74355ms      45us    2965ms
    

    What does this tell us? Well... the storage is slow. Up to 75 seconds to create a file. That is over a whole wall clock minute. And the reads were literally timing out. This is... not very good for "I need this file", in honesty.
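
    (If anyone wants to reproduce the table above: I haven't pasted my exact command line, but a near-default run along these lines, pointed at a scratch directory on the storage mount, gives the same kind of output. The path is a placeholder; bonnie++ defaults to a working set of roughly twice RAM, which matches the 31912M above.)

    # -d points at a scratch directory on the storage volume; -u sets the user to run as
    bonnie++ -d /storage/bench -u $(whoami)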

    Now for real-world, can't-prove-things-very-well results... I normally see ~10-150 Mbit/s (huge range, yes) when trying to back up to my storage volume from elsewhere. I cannot pin down why. Literally nothing I can control seems to correspond to the speed. Filesystem choice seems irrelevant, as I tried a few early on with the same results. Linux distro seems irrelevant, as I didn't start on this distro and originally blamed the speeds on the original distro choice not cooperating with the HostBrr network. It's not the remote transfer protocol either, as I've tried at least FTP, FTPS, SFTP, CIFS, NFSv4, and 9P, all with the same results, most with various performance tuning after the defaults showed much the same behavior. My only conclusion is that the huge IOPS throttle is killing the performance.
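
    (The crude sanity check I keep coming back to, to take any particular remote-filesystem protocol out of the equation, is pushing a raw stream over plain SSH and letting dd report the effective rate; the hostname and path below are placeholders:)

    # push ~512 MB of zeroes over SSH; dd prints the achieved throughput when it finishes
    dd if=/dev/zero bs=1M count=512 | ssh user@storage-vps 'cat > /storage/throughput.test'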

    Do I think HostBrr's offering is terrible? No. Do I think it's great? No, especially at Black Friday pricing. If you're planning on cold-storage, ship-it-and-forget-it backups, it's probably great. But if you're expecting to send or read reasonably large files (I'd suggest the behaviors I've seen would be noticeable at all sizes but would feel less like "lag" at about 100 megs or so), in a "warm" fashion (backing them up as needed with occasional live retrieval)... I'd suggest testing ASAP to figure out whether your results match mine. I might just be completely cursed, after all! 😈

    And finally a yabs since that's the LET standard (but with -i since the network bit is mostly unreliable these days, at best). But in this case, it wasn't telling the full story and I want to be fair to both @labze and other potential customers. In fact, I want to be so fair that I'm running this yabs after writing the rest of this post! :sunglasses:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2023-11-30                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sun Dec  3 07:05:18 PM GMT 2023
    
    Basic System Information:
    ---------------------------------
    Uptime     : 2 days, 12 hours, 24 minutes
    Processor  : Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz
    CPU cores  : 4 @ 3911.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 15.6 GiB
    Swap       : 10.3 GiB
    Disk       : 249.9 GiB
    Distro     : Debian GNU/Linux 12 (bookworm)
    Kernel     : 6.1.0-13-amd64
    VM Type    : KVM
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : Hetzner Online GmbH
    ASN        : AS24940 Hetzner Online GmbH
    Location   : Tuusula, Uusimaa (18)
    Country    : Finland
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 321.66 MB/s  (80.4k) | 3.53 GB/s    (55.2k)
    Write      | 322.51 MB/s  (80.6k) | 3.55 GB/s    (55.5k)
    Total      | 644.18 MB/s (161.0k) | 7.09 GB/s   (110.8k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 3.62 GB/s     (7.0k) | 3.78 GB/s     (3.6k)
    Write      | 3.81 GB/s     (7.4k) | 4.03 GB/s     (3.9k)
    Total      | 7.43 GB/s    (14.5k) | 7.82 GB/s     (7.6k)
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1421
    Multi Core      | 4101
    Full Test       | https://browser.geekbench.com/v6/cpu/3839828
    
    YABS completed in 7 min 10 sec
    
  • labze Member, Patron Provider

    @lewellyn said:
    tl;dr My Hybrid Storage is "OK". Especially at BF pricing. This is not a bashing post. This is a perhaps overly investigative review from a BF-signup customer.

    Thank you for the thorough review. Always appreciate detailed and honest feedback.

    To address a few concerns: what might be the biggest factor right now is that there are still large amounts of data being transferred onto the server. Over a 24-hour span the typical average network usage is around 400 Mbps both up and down, with peaks much higher than that. That would correspond well to your benchmark only getting around half of the port bandwidth. This seems to be the situation every time a new storage server is launched, and it should settle down once people are done setting up their systems.

    With large amounts of data entering the server, IO performance is also affected. The average write for the last 60 minutes has been 70 MB/s and often reaches over 100 MB/s. Furthermore, the 60-minute average read is currently 350 MB/s. Not sure why; I'll keep an eye on that. It is likely causing higher IO wait.

    I am not sure what is going on with the Bonnie++ benchmark; I haven't used that myself. My personal Nextcloud server is set up on the Finland storage server, and I just completed an upload test with a bunch of small files, which finished as one would expect. Likewise, going through the Nextcloud gallery does not hint at any real-world performance slowdown. At least not in this scenario.

    That's not to take away from your results. I am going to investigate these potential issues and see whether something more is going on than just the high server load at the moment.

    What is more concerning is the packet loss. If you could send some MTR results from the storage server to the VPS and from the VPS to the storage server, that would be much appreciated. I'll also see if I can replicate it. Even under heavy load, the network shouldn't be dropping packets on a frequent basis.

    As always, if there is something really problematic going on, I urge you and others to open a ticket. Most issues can usually be resolved reasonably fast. I just cannot always be aware of them if they are not reported :-)

    By the way, you can perform a YABS test on the block storage; you simply need to run the test while inside the mount point (cd /storage).
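
    For example, assuming the volume is mounted at /storage (the -i flag skips the network portion, as in the review above):

    cd /storage
    curl -sL yabs.sh | bash -s -- -i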

    Thanked by: zormal
  • This is an HDD yabs of my Hybrid Storage 2 TB. It was worse earlier, but it's better now.
    Hopefully, I'm still eligible for the bandwidth and backup slot upgrades. Invoice #15158

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 15.00 MB/s    (3.7k) | 54.76 MB/s     (855)
    Write      | 15.02 MB/s    (3.7k) | 55.03 MB/s     (859)
    Total      | 30.02 MB/s    (7.5k) | 109.80 MB/s   (1.7k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 117.66 MB/s    (229) | 104.21 MB/s    (101)
    Write      | 123.91 MB/s    (242) | 111.16 MB/s    (108)
    Total      | 241.57 MB/s    (471) | 215.37 MB/s    (209)
    
  • Germany location offline?

  • @jedhost said:
    Germany location offline?

    https://status.hostbrr.com/

  • Astro Member
    edited December 2023

    @labze Finland write speeds are still terrible! Found anything?

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 17.11 MB/s    (4.2k) | 15.62 MB/s     (244)
    Write      | 17.12 MB/s    (4.2k) | 16.17 MB/s     (252)
    Total      | 34.24 MB/s    (8.5k) | 31.79 MB/s     (496)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 70.00 MB/s     (136) | 58.21 MB/s      (56)
    Write      | 73.72 MB/s     (143) | 62.59 MB/s      (61)
    Total      | 143.72 MB/s    (279) | 120.80 MB/s    (117)
    
  • @wuck said:

    @jedhost said:
    Germany location offline?

    https://status.hostbrr.com/

    Yes, I know. My monitor showed 3 servers in Germany down. It was a brief downtime, 7 minutes.

  • @labze said:

    Thank you for the thorough review. Always appreciate detailed and honest feedback.

    To address a few concerns: what might be the biggest factor right now is that there are still large amounts of data being transferred onto the server. Over a 24-hour span the typical average network usage is around 400 Mbps both up and down, with peaks much higher than that. That would correspond well to your benchmark only getting around half of the port bandwidth. This seems to be the situation every time a new storage server is launched, and it should settle down once people are done setting up their systems.

    With large amounts of data entering the server, IO performance is also affected. The average write for the last 60 minutes has been 70 MB/s and often reaches over 100 MB/s. Furthermore, the 60-minute average read is currently 350 MB/s. Not sure why; I'll keep an eye on that. It is likely causing higher IO wait.

    I am not sure what is going on with the Bonnie++ benchmark; I haven't used that myself. My personal Nextcloud server is set up on the Finland storage server, and I just completed an upload test with a bunch of small files, which finished as one would expect. Likewise, going through the Nextcloud gallery does not hint at any real-world performance slowdown. At least not in this scenario.

    That's not to take away from your results. I am going to investigate these potential issues and see whether something more is going on than just the high server load at the moment.

    What is more concerning is the packet loss. If you could send some MTR results from the storage server to the VPS and from the VPS to the storage server, that would be much appreciated. I'll also see if I can replicate it. Even under heavy load, the network shouldn't be dropping packets on a frequent basis.

    As always, if there is something really problematic going on, I urge you and others to open a ticket. Most issues can usually be resolved reasonably fast. I just cannot always be aware of them if they are not reported :-)

    By the way, you can perform a YABS test on the block storage; you simply need to run the test while inside the mount point (cd /storage).

    I'm glad you didn't take my post as "OMG everything sucks!" :) I figured it's been a week now, so things should have settled down.

    But bonnie is kind of in line with what I'm experiencing: small operations feel like they take forever. Even a directory listing sometimes takes a literal minute to come back, though it's usually fast. I suspect it has something to do with the way the storage system's scheduler works, combined with however many of us are trying to get backups going, but that's pure speculation.
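
    (A trivial way to put a number on that next time it happens, with a made-up path, is simply:)

    # time a directory listing on the storage volume
    time ls -l /storage/backups > /dev/null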

    I'll try to get mtr data together in the next couple of days, if it continues, and put it in a ticket.

    I don't think I ever thought about yabs benchmarking the current directory. I just always kind of naively assumed it benchmarked /!

    I don't foresee yabs differing greatly from my fio output (as yabs inspired it anyhow) at the moment. So perhaps I'll run it again later this week and we can see if things are changing for the better. :)

    Speeds are more miserable for me tonight than they have been, so perhaps it really just is a lot of people setting things up to sync. Hopefully the first ones finish soon, so the rest of us get a chance! :D

  • labze Member, Patron Provider

    @jedhost said:
    Germany location offline?

    Depends on which server you mean. There was a faulty switch at the datacenter tonight, which was promptly replaced, causing around 10 minutes of downtime for 3 servers.

    @Astro said:
    @labze Finland write speeds are still terrible! Found anything?

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 17.11 MB/s    (4.2k) | 15.62 MB/s     (244)
    Write      | 17.12 MB/s    (4.2k) | 16.17 MB/s     (252)
    Total      | 34.24 MB/s    (8.5k) | 31.79 MB/s     (496)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 70.00 MB/s     (136) | 58.21 MB/s      (56)
    Write      | 73.72 MB/s     (143) | 62.59 MB/s      (61)
    Total      | 143.72 MB/s    (279) | 120.80 MB/s    (117)
    

    The RAID array has initiated a re-sync. That'll probably last another 24 hours and is causing a slowdown in performance. Furthermore, as long as people are still constantly running YABS and loading data onto the server, it will not be at peak performance.
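
    (For context, and assuming a Linux software RAID (md) array, the host-side resync progress and ETA show up in /proc/mdstat:)

    # on the host node: array state, resync progress and estimated finish time
    cat /proc/mdstat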

  • labze Member, Patron Provider

    @lewellyn said:

    @labze said:

    @lewellyn said:
    tl;dr My Hybrid Storage is "OK". Especially at BF pricing. This is not a bashing post. This is a perhaps overly investigative review from a BF-signup customer.

    This is gonna be a long one. I've had a few days, so time for a full review!

    So I jumped on one of the Hybrid Storage 16 TB deals on Black Friday. After an initial OS template issue that took my stubbornness a couple days to give up on and switch OSes, it seems to be what it claims: 16 TB of storage.

    Unfortunately, the network characteristics make it unsuitable for live backups; it really only works for at-rest backups. Using mtr from a couple of other hosts (dedis, not VPSes) outside Hetzner, with known-good networks themselves, I see one to a few percent of packet loss, at random, at the gateway IP. Pings are fine: it's the general speeds being lower than expected and the random packet loss that slightly concern me. Interactive SSH sessions generally have the responsiveness I'd expect, with occasional temporary session hangs: this is consistent with the issue being throughput rather than ping, combined with hiccups of packet loss.

    Random (read: not cherry-picked; I did it when I wrote this part of the post) Ookla Speedtest result. (I have seen the open source speedtest-cli be far too optimistic too many times now, so only official client for me!)

    # speedtest
    
       Speedtest by Ookla
    
          Server: Netcom Kassel Gesellschaft für Telekommunikation mbH - Kassel (id: 53619)
             ISP: Hetzner Online
    Idle Latency:    23.59 ms   (jitter: 0.09ms, low: 23.50ms, high: 23.69ms)
        Download:   386.61 Mbps (data used: 362.8 MB)
                     23.72 ms   (jitter: 1.34ms, low: 23.17ms, high: 31.12ms)
          Upload:   541.19 Mbps (data used: 512.1 MB)
                     25.28 ms   (jitter: 0.38ms, low: 23.97ms, high: 30.62ms)
     Packet Loss:     0.0%
      Result URL: https://www.speedtest.net/result/c/83fbc6d4-3a96-494f-92cf-14b04794055c
    

    So the network is frustrating. But not a show stopper. My problem is... the actual storage part of the VPS. :( Since yabs only benchmarks the root filesystem, this is very similar to what yabs runs, but both for the root filesystem (/dev/vda) and the storage filesystem (/dev/vdb).

    /# fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_1m: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.33
    Starting 2 processes
    rand_rw_1m: Laying out IO file (1 file / 2048MiB)
    
    rand_rw_1m: (groupid=0, jobs=2): err= 0: pid=44684: Sun Dec  3 17:03:26 2023
      read: IOPS=3016, BW=3017MiB/s (3163MB/s)(1982MiB/657msec)
       bw (  MiB/s): min= 3070, max= 3070, per=100.00%, avg=3070.00, stdev= 0.00, samples=2
       iops        : min= 3070, max= 3070, avg=3070.00, stdev= 0.00, samples=2
      write: IOPS=3217, BW=3218MiB/s (3374MB/s)(2114MiB/657msec); 0 zone resets
       bw (  MiB/s): min= 3140, max= 3140, per=97.59%, avg=3140.00, stdev= 0.00, samples=2
       iops        : min= 3140, max= 3140, avg=3140.00, stdev= 0.00, samples=2
      cpu          : usr=5.46%, sys=5.53%, ctx=3458, majf=0, minf=17
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwts: total=1982,2114,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: bw=3017MiB/s (3163MB/s), 3017MiB/s-3017MiB/s (3163MB/s-3163MB/s), io=1982MiB (2078MB), run=657-657msec
      WRITE: bw=3218MiB/s (3374MB/s), 3218MiB/s-3218MiB/s (3374MB/s-3374MB/s), io=2114MiB (2217MB), run=657-657msec
    
    Disk stats (read/write):
      vda: ios=1548/1658, merge=0/0, ticks=19942/44521, in_queue=64463, util=79.58%
    
    /storage# fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_1m: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.33
    Starting 2 processes
    rand_rw_1m: Laying out IO file (1 file / 2048MiB)
    Jobs: 2 (f=2): [m(2)][90.5%][r=69.0MiB/s,w=54.0MiB/s][r=69,w=54 IOPS][eta 00m:02s]
    rand_rw_1m: (groupid=0, jobs=2): err= 0: pid=44699: Sun Dec  3 17:10:10 2023
      read: IOPS=106, BW=106MiB/s (111MB/s)(1982MiB/18640msec)
       bw (  KiB/s): min=14315, max=997376, per=100.00%, avg=355228.09, stdev=135667.34, samples=22
       iops        : min=   13, max=  974, avg=346.73, stdev=132.60, samples=22
      write: IOPS=113, BW=113MiB/s (119MB/s)(2114MiB/18640msec); 0 zone resets
       bw (  KiB/s): min=22482, max=1036288, per=100.00%, avg=392206.14, stdev=138202.36, samples=21
       iops        : min=   20, max= 1012, avg=382.83, stdev=135.09, samples=21
      cpu          : usr=0.25%, sys=0.60%, ctx=3101, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwts: total=1982,2114,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1982MiB (2078MB), run=18640-18640msec
      WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=2114MiB (2217MB), run=18640-18640msec
    
    Disk stats (read/write):
      vdb: ios=1808/1895, merge=0/0, ticks=770063/1405908, in_queue=2175971, util=99.50%
    

    As you can see, the speeds for / are Just Fine(TM). No complaints there for an OS disk on a storage VPS. But /storage (not its real mountpoint; names changed to protect the innocent, etc.)? Whoo boy. Storage speeds of ~100 MB/s mean that even the absolute best case can't fill the bandwidth pipes, and it also appears the volume is attached via a gigabit link. This is... unfortunate. But maybe not a concern either, since Speedtest could only use about half the rated port speed anyhow. Those IOPS, though, are... not great. At all. Also, since I tried to match YABS, these numbers are (like YABS) slightly optimistic compared to how the real-world performance tends to feel.

    Since I tend to like bonnie++ more than fio (and thereby YABS), I ran that too. In the interest of full disclosure, bonnie++ isn't perfect. Neither is fio, but I don't know of anyone who has benchmarked the benchmarks to the same degree as bonnie++ (mostly by virtue of bonnie++ being a much older and more widely used benchmark, outside of LET or even just Linux). All the tools are suspect, and comparing their output is more useful than trusting any one of them. Since the root filesystem numbers, both above and via bonnie, are totally non-problematic for a storage VPS root filesystem, and this is getting way longer than I anticipated, let's just look at the bonnie++ output for the storage filesystem (it's the device anyone really cares about anyhow, let's be honest):

    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    my.hostname   31912M 1309k  44 14.6m   7 28.3m   4 3638k  77  200m  15 337.8   2
    Latency             47609us   10280ms   18261ms     123ms    3563ms    5037ms
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    my.hostname          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 16384   0 +++++ +++ 16384   0 16384   0 +++++ +++ 16384   1
    Latency              5765ms     283us     861ms   74355ms      45us    2965ms
    

    What does this tell us? Well... the storage is slow. Up to 74 seconds of latency just to create a file. That is over a whole wall-clock minute. (The +++++ entries, for what it's worth, aren't timeouts; bonnie++ prints those when a sub-test completes too quickly to report a meaningful figure.) This is... not very good for "I need this file", in honesty.
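
    For completeness, I didn't capture the exact bonnie++ invocation above. A typical command that produces output in this format would be something along these lines (the mount point, user, and label here are assumptions; bonnie++'s defaults of a 2x-RAM test size and 16*1024 small files line up with the 31912M and "16" columns in the table):

    # hypothetical bonnie++ run against the storage volume
    # -d: directory to test in, -u: user to run as, -m: label shown in the output
    bonnie++ -d /storage -u root -m my.hostname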

    Now for the real-world, can't-prove-things-very-well results... I normally see ~10-150 Mbit/s (huge range, yes) when trying to back up to my storage volume from elsewhere. I cannot pin down why; literally nothing I can control seems to correspond to the speed. Filesystem choice seems irrelevant, as I tried a few early on with the same results. Linux distro seems irrelevant, as I didn't start on this distro and originally blamed the speeds on the original distro not cooperating with the HostBrr network. It's not the remote filesystem protocol either, as I've tried at least FTP, FTPS, SFTP, CIFS, NFSv4, and 9P, all with the same results and most with various performance tuning after the defaults showed much the same behavior. My only conclusion is that a huge IOPS throttle is killing the performance; a quick way to sanity-check that is sketched below.
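
    One way to check whether per-IO latency (rather than raw throughput) is the bottleneck is a queue-depth-1 random read test, since that roughly measures the service time of a single small operation. This is just a sketch; the path and sizes are assumptions:

    # queue-depth-1 4k random reads; the reported completion latency approximates
    # the per-IO service time, which is what small operations actually "feel" like
    fio --name=lat_check --ioengine=libaio --rw=randread --bs=4k \
        --iodepth=1 --numjobs=1 --size=1G --runtime=30 --direct=1 \
        --filename=/storage/lat_check.fio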

    Do I think HostBrr's offering is terrible? No. Do I think it's great? Also no, especially at Black Friday pricing. If you're planning on cold-storage ship-it-and-forget-it backups, it's probably great. But if you're expecting to be able to send or read reasonably large files (I'd suggest the behaviors I've seen would be noticeable at all sizes but would feel less like "lag" at about 100 megs or so) in a "warm" fashion (backing them up as needed, with occasional live retrieval)... I'd suggest testing ASAP to figure out whether your results match mine. I might just be completely cursed, after all! 😈

    And finally a yabs, since that's the LET standard (but with -i, since the network portion of yabs is mostly unreliable these days, at best). In this case, though, it wasn't telling the full story, and I want to be fair to both @labze and other potential customers. In fact, I want to be so fair that I'm running this yabs after writing the rest of this post! :sunglasses:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2023-11-30                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Sun Dec  3 07:05:18 PM GMT 2023
    
    Basic System Information:
    ---------------------------------
    Uptime     : 2 days, 12 hours, 24 minutes
    Processor  : Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz
    CPU cores  : 4 @ 3911.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 15.6 GiB
    Swap       : 10.3 GiB
    Disk       : 249.9 GiB
    Distro     : Debian GNU/Linux 12 (bookworm)
    Kernel     : 6.1.0-13-amd64
    VM Type    : KVM
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : Hetzner Online GmbH
    ASN        : AS24940 Hetzner Online GmbH
    Location   : Tuusula, Uusimaa (18)
    Country    : Finland
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 321.66 MB/s  (80.4k) | 3.53 GB/s    (55.2k)
    Write      | 322.51 MB/s  (80.6k) | 3.55 GB/s    (55.5k)
    Total      | 644.18 MB/s (161.0k) | 7.09 GB/s   (110.8k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 3.62 GB/s     (7.0k) | 3.78 GB/s     (3.6k)
    Write      | 3.81 GB/s     (7.4k) | 4.03 GB/s     (3.9k)
    Total      | 7.43 GB/s    (14.5k) | 7.82 GB/s     (7.6k)
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1421
    Multi Core      | 4101
    Full Test       | https://browser.geekbench.com/v6/cpu/3839828
    
    YABS completed in 7 min 10 sec
    

    Thank you for the thorough review. Always appreciate detailed and honest feedback.

    To address a few concerns - what might be the biggest factor right now is that there are still large amounts of data being transferred into the server. Over a 24-hour span the typical average network usage is around 400 Mbps both up and down, with peaks much higher than that. That would correspond well with your benchmark only getting around half of the port bandwidth during the test. This seems to be the situation every time a new storage server is launched, and it should settle down as people finish setting up their systems.

    With large amounts of data entering the server, IO performance is also affected. The average write rate over the last 60 minutes has been 70 MB/s and often reaches over 100 MB/s. Furthermore, the 60-minute average read rate is currently 350 MB/s. I'm not sure why; I'll keep an eye on that. It is likely causing higher IO wait.
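
    If anyone wants to watch this kind of load from inside their own VPS, the usual tools are iostat (from the sysstat package) and vmstat; the device name below is just an example:

    # per-device throughput, queue depth and await times, refreshed every 5 seconds
    iostat -dxm 5 /dev/vdb
    # the "wa" column is the share of CPU time spent waiting on IO
    vmstat 5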

    I am not sure what is going on with the Bonnie++ benchmark. I haven't used that tool myself. My personal Nextcloud server is set up on the Finland storage server, and I just completed an upload test with a bunch of small files, which finished as one would expect. Likewise, going through the Nextcloud gallery does not hint at any real-world performance slowdown. At least not in that scenario.

    That's not to take away from your results. I am going to investigate these potential issues and see whether something more is going on than just the high server load at the moment.

    What is more concerning is the packet loss. If you could send some MTR results from the storage server to the VPS and from the VPS to the storage server, that would be much appreciated. I'll also see if I can replicate it. Even during heavy load the network shouldn't be dropping packets on a frequent basis.
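
    For reference, a report-mode mtr run in each direction along these lines would be ideal (the target IP here is just a placeholder):

    # 100 cycles, wide report, show both hostnames and IPs
    mtr -rwbc 100 203.0.113.10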

    As always, if there is something really problematic going on I urge you and others to open a ticket. Most issues can usually be resolved reasonably fast. I just cannot always be aware of them if they are not reported :-)

    By the way, you can perform a YABS test on the block storage; you simply need to run the test from a folder on it (cd /storage), as sketched below.
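
    Something like this should work, assuming the volume is mounted at /storage (the -i flag skips the network tests, as in the review above):

    # run yabs with its working directory on the storage volume so the fio
    # tests hit the block device instead of the root disk
    cd /storage && curl -sL yabs.sh | bash -s -- -i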

    I'm glad you didn't take my post as "OMG everything sucks!" :) I figured it's been a week now, so things should have settled down.

    But bonnie kind of is in line with what I'm experiencing: small operations feel like they take forever. Even a directory listing sometimes takes a literal minute to come back, though it's usually fast. I suspect it has something to do with how the storage system's scheduler works, combined with however many of us are trying to get backups going, but that's pure speculation.

    I'll try to get mtr data together over the next couple of days, if it continues, and put it in a ticket.

    I don't think I ever thought about yabs benchmarking the current directory. I just always kind of naively assumed it benchmarked /!

    I don't foresee yabs differing greatly from my fio output at the moment (yabs inspired that command anyhow). So perhaps I'll run it again later this week and we can see if things are changing for the better. :)

    Speeds are more miserable for me tonight than they have been, so perhaps it really just is a lot of people setting things up to sync. Hopefully the first ones finish soon, so the rest of us get a chance! :D

    The fact that a RAID re-sync is in progress does not help performance at the moment. But I do find some of these issues strange. I run a Nextcloud server on the Finland storage server along with quite a few other Docker applications, and while performance (of course) isn't NVMe-snappy, I do not experience the same slowness you describe.

    If issues persist I will do an in-depth round of troubleshooting and optimization. However, the Finland node is busier than the Germany one while also undergoing a re-sync, and it does not make sense to troubleshoot while these factors are the likely cause.

  • @maverick said:
    Invoice #: 15143

    Thank you for 10% cashback, 50% more bandwidth and 3x backup slots!

    @labze hopefully you have not forgotten our extras. ;)

    @labze said: If issues persist I will do an in-depth round of troubleshooting and optimization. However, the Finland node is busier than the Germany one while also undergoing a re-sync, and it does not make sense to troubleshoot while these factors are the likely cause.

    The last few days Finland was very slow for me; probably that array re-sync was the biggest culprit. But today I'd say everything is just fine. Has the re-sync finally finished?

    I'm attaching current disk benches for both DE & FI, and I consider them very good, bearing in mind it is a shared HDD array. Add in the good price and excellent support, and I consider this one of the best BF deals this year.

    Germany
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 16.59 MB/s    (4.1k) | 77.11 MB/s    (1.2k)
    Write      | 16.59 MB/s    (4.1k) | 77.52 MB/s    (1.2k)
    Total      | 33.18 MB/s    (8.2k) | 154.64 MB/s   (2.4k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 111.48 MB/s    (217) | 101.92 MB/s     (99)
    Write      | 117.41 MB/s    (229) | 108.71 MB/s    (106)
    Total      | 228.89 MB/s    (446) | 210.64 MB/s    (205)
    
    Finland
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 21.99 MB/s    (5.4k) | 34.03 MB/s     (531)
    Write      | 22.01 MB/s    (5.5k) | 34.17 MB/s     (533)
    Total      | 44.01 MB/s   (11.0k) | 68.20 MB/s    (1.0k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 89.78 MB/s     (175) | 117.40 MB/s    (114)
    Write      | 94.55 MB/s     (184) | 125.22 MB/s    (122)
    Total      | 184.34 MB/s    (359) | 242.62 MB/s    (236)
    
  • Have you fulfilled your back orders?

  • How will you handle DMCA?

  • labzelabze Member, Patron Provider

    @maverick said:

    @maverick said:
    Invoice #: 15143

    Thank you for 10% cashback, 50% more bandwidth and 3x backup slots!

    @labze hopefully you have not forgotten our extras. ;)

    @labze said: If issues persist I will do an in-depth round of troubleshooting and optimization. However, the Finland node is busier than the Germany one while also undergoing a re-sync, and it does not make sense to troubleshoot while these factors are the likely cause.

    The last few days Finland was very slow for me; probably that array re-sync was the biggest culprit. But today I'd say everything is just fine. Has the re-sync finally finished?

    I'm attaching current disk benches for both DE & FI, and I consider them very good, bearing in mind it is a shared HDD array. Add in the good price and excellent support, and I consider this one of the best BF deals this year.

    The array has indeed finished syncing and performance seems to be back around normal. Performance will probably improve a bit over time as usage decreases and the cache builds up.

  • labzelabze Member, Patron Provider

    @SlowDD said:
    Have you fulfilled your back orders?

    I think all orders have been provisioned.

    @wangqing said:
    How will you handle DMCA?

    DMCA notices will be forwarded and should be handled within 24 hours, or your service will be suspended. Repeat offences will lead to service termination.

Sign In or Register to comment.