Share your NVMe disk performance

IKIHOST Member
edited September 2019 in Help

I need a little info. Could someone share the read/write speeds or benchmark results of the NVMe disk you are using? If possible, include the brand, software/hardware RAID, and whether it's a VPS or dedicated server. :)
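For reference, a sequential-read job like the `seq_read.fio` used in the replies below likely looks something like this (parameters inferred from the fio output in the thread: 128 KiB blocks, libaio, iodepth 32, 1 GiB total; `direct=1` and the filename are assumptions, since the actual job file isn't posted):

```ini
; seq_read.fio -- sketch reconstructed from the fio output in this thread
[myjob]
rw=read
bs=128k
ioengine=libaio
iodepth=32
size=1g
direct=1            ; assumed: bypass the page cache
filename=fio.test   ; assumed scratch file
```

Run it with `fio seq_read.fio` and delete the scratch file afterwards.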

Comments

  • here is my HostHatch NVMe against SparkVPS SSD:

     ssh spark fio seq_read.fio
    myjob: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
    fio-3.1
    Starting 1 process
    
    myjob: (groupid=0, jobs=1): err= 0: pid=19710: Wed Sep 25 06:52:42 2019
       read: IOPS=17.8k, BW=2221MiB/s (2329MB/s)(1024MiB/461msec)
        slat (usec): min=15, max=849, avg=33.77, stdev=42.84
        clat (usec): min=726, max=21654, avg=1711.29, stdev=1621.46
         lat (usec): min=748, max=22406, avg=1747.68, stdev=1637.24
        clat percentiles (usec):
         |  1.00th=[ 1012],  5.00th=[ 1139], 10.00th=[ 1221], 20.00th=[ 1303],
         | 30.00th=[ 1352], 40.00th=[ 1401], 50.00th=[ 1450], 60.00th=[ 1483],
         | 70.00th=[ 1532], 80.00th=[ 1598], 90.00th=[ 1680], 95.00th=[ 1811],
         | 99.00th=[ 9765], 99.50th=[15401], 99.90th=[16450], 99.95th=[18744],
         | 99.99th=[21627]
      lat (usec)   : 750=0.01%, 1000=0.83%
      lat (msec)   : 2=95.26%, 4=0.81%, 10=2.14%, 20=0.92%, 50=0.04%
      cpu          : usr=25.22%, sys=61.09%, ctx=34, majf=0, minf=1036
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=99.6%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwt: total=8192,0,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=2221MiB/s (2329MB/s), 2221MiB/s-2221MiB/s (2329MB/s-2329MB/s), io=1024MiB (1074MB), run=461-461msec
    
    Disk stats (read/write):
      vda: ios=6975/0, merge=0/0, ticks=7232/0, in_queue=6920, util=75.76%
    

    So approx 2.2 GB/s (1 GiB read in ~0.5 sec) with SparkVPS.
    NVMe with HostHatch:

     ssh hh2 fio seq_read.fio
    myjob: (g=0): rw=read, bs=128K-128K/128K-128K/128K-128K, ioengine=libaio, iodepth=32
    fio-2.16
    Starting 1 process
    
    myjob: (groupid=0, jobs=1): err= 0: pid=5181: Wed Sep 25 05:55:31 2019
      read : io=1024.0MB, bw=95049KB/s, iops=742, runt= 11032msec
        slat (usec): min=5, max=451, avg=13.13, stdev=13.64
        clat (usec): min=60, max=295174, avg=43047.30, stdev=65627.48
         lat (usec): min=109, max=295192, avg=43061.32, stdev=65627.46
        clat percentiles (usec):
         |  1.00th=[  205],  5.00th=[  266], 10.00th=[  310], 20.00th=[  394],
         | 30.00th=[  470], 40.00th=[  548], 50.00th=[  636], 60.00th=[  764],
         | 70.00th=[ 3056], 80.00th=[142336], 90.00th=[146432], 95.00th=[148480],
         | 99.00th=[148480], 99.50th=[150528], 99.90th=[150528], 99.95th=[150528],
         | 99.99th=[296960]
        lat (usec) : 100=0.04%, 250=3.34%, 500=30.70%, 750=25.34%, 1000=8.83%
        lat (msec) : 2=1.53%, 4=0.45%, 10=0.13%, 100=0.39%, 250=29.21%
        lat (msec) : 500=0.04%
      cpu          : usr=0.62%, sys=1.02%, ctx=2233, majf=0, minf=1035
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=99.6%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued    : total=r=8192/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: io=1024.0MB, aggrb=95048KB/s, minb=95048KB/s, maxb=95048KB/s, mint=11032msec, maxt=11032msec
    
    Disk stats (read/write):
      vda: ios=8184/2, merge=0/21, ticks=347516/488, in_queue=348148, util=99.20%
    

    approx 95 MB/s (1 GiB in 11 sec) with HostHatch.
    YMMV, Howie
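A quick sanity check: both headline figures follow directly from the io size and runtime that fio reports (values taken from the two runs above):

```python
# Recompute bandwidth from fio's io size and runtime for both runs above.
size_mib = 1024               # io=1024MiB in both tests

spark_s = 0.461               # run=461msec  (SparkVPS)
hh_s = 11.032                 # runt=11032msec (HostHatch)

print(f"SparkVPS:  {size_mib / spark_s:.0f} MiB/s")   # ~2221, matches bw=2221MiB/s
print(f"HostHatch: {size_mib / hh_s:.1f} MiB/s")      # ~92.8, matches bw=95049KB/s
```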

  • uptime Member
    edited September 2019

    @n4af - thanks for that - the second one is SparkVPS? (it's currently also labeled HostHatch)... indeed, I see now: the first is SparkVPS, the second is HostHatch

  • muffin Member
    edited September 2019

    @uptime said:
    @n4af - thanks for that - the second one is SparkVPS? (it's currently also labeled HostHatch)

    Pretty sure the first one is sparkvps, the second being hosthatch

    Thanked by: uptime
  • hosthatch Patron Provider, Top Host, Veteran

    Or it might be the one bad node we have in Chicago that has been the reason for 99% of the negative feedback in the past few days. It will be discarded / migrated by the end of this week.

    Thanked by: uptime, coreflux
  • Abdullah was right. Sequential read - new numbers today:
    WOW!

    ++++Host Hatch 1coreNVME++++++++++
       READ: io=1024.0MB, aggrb=5505.4MB/s, minb=5505.4MB/s, maxb=5505.4MB/s, mint=186msec, maxt=186msec
    +++++ SPARK 2core KVM++++++
       READ: bw=1816MiB/s (1904MB/s), 1816MiB/s-1816MiB/s (1904MB/s-1904MB/s), io=1024MiB (1074MB), run=564-564msec
    ++++++ SD2 2core******
       READ: io=1024.0MB, aggrb=364848KB/s, minb=364848KB/s, maxb=364848KB/s, mint=2874msec, maxt=2874msec
    
  • Jord Moderator, Host Rep

    We should call on the NVMe Lord @Zerpy

  • Nekki Veteran
    edited October 2019
    ...............…………………………._¸„„„„_
    …………………….…………...„--~*'¯…….'\
    ………….…………………… („-~~--„¸_….,/ì'Ì
    …….…………………….¸„-^"¯ : : : : :¸-¯"¯/'
    ……………………¸„„-^"¯ : : : : : : : '\¸„„,-"
    **¯¯¯'^^~-„„„----~^*'"¯ : : : : : : : : : :¸-"
    .:.:.:.:.„-^" : : : : : : : : : : : : : : : : :„-"
    :.:.:.:.:.:.:.:.:.:.: : : : : : : : : : ¸„-^¯
    .::.:.:.:.:.:.:.:. : : : : : : : ¸„„-^¯
    :.' : : '\ : : : : : : : ;¸„„-~"
    :.:.:: :"-„""***/*'ì¸'¯
    :.': : : : :"-„ : : :"\
    .:.:.: : : : :" : : : : \,
    :.: : : : : : : : : : : : 'Ì
    : : : : : : :, : : : : : :/
    "-„_::::_„-*__„„~"
    
  • @Nekki said:

    > [ASCII art quoted; snipped]

    Thank you, great man.

    My advice: you should post something positive, so that others can appreciate it.

  • @n4af said:
    Abdullah was right. Sequential Read - new numbers today>
    WOW !

    > ++++Host Hatch 1coreNVME++++++++++
    >    READ: io=1024.0MB, aggrb=5505.4MB/s, minb=5505.4MB/s, maxb=5505.4MB/s, mint=186msec, maxt=186msec
    > +++++ SPARK 2core KVM++++++
    >    READ: bw=1816MiB/s (1904MB/s), 1816MiB/s-1816MiB/s (1904MB/s-1904MB/s), io=1024MiB (1074MB), run=564-564msec
    > ++++++ SD2 2core******
    >    READ: io=1024.0MB, aggrb=364848KB/s, minb=364848KB/s, maxb=364848KB/s, mint=2874msec, maxt=2874msec
    > 

    What's the NVMe brand/type on this node?

  • @WPF said:

    @Nekki said:

    > > [ASCII art quoted again; snipped]

    Thank you, great man.

    My advice: you should post something positive, so that others can appreciate it.

    Horsecock?

  • @Nekki said:

    Horsecock?

    Hi @Nekki,

    Actually, it is your right to write anything, and I really appreciate it, but I hope you will not reappear in the posts that we make in the future. We have never bothered you, let alone harmed you, and we never would.

    You could spend that time helping other people on this forum instead. We think there is no profit for you in insulting each of our posts.

    So please do not post negative things on our next post. Instead of wasting your time harassing our company, if you need something, or are perhaps looking for free services like hosting or a VPS, we would be happy to open a discussion.

    Thank you very much for your understanding and for taking the time to read this post. :)

  • @WPF said:

    [previous reply quoted in full; snipped]

    Horsecock:-/

  • LETBox NVMe 25GB 3TB storage deal

    NVMe

    root@box:~# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.284724 s, 943 MB/s
    root@box:~# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.280062 s, 958 MB/s
    root@box:~# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.258824 s, 1.0 GB/s

    BlockStorage

    root@box:/data# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.881922 s, 304 MB/s
    root@box:/data# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.940352 s, 285 MB/s
    root@box:/data# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.946729 s, 284 MB/s
    root@box:/data# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.94389 s, 284 MB/s
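    dd already prints the throughput, but it's easy to double-check from the byte count and elapsed time (first run of each set above):

```python
# Throughput = bytes copied / seconds elapsed, from dd's own output lines.
bytes_copied = 268435456                   # 256 MiB

nvme_mbs = bytes_copied / 0.284724 / 1e6   # first NVMe run
block_mbs = bytes_copied / 0.881922 / 1e6  # first block-storage run

print(f"NVMe: ~{nvme_mbs:.0f} MB/s, block storage: ~{block_mbs:.0f} MB/s")
# ~943 MB/s and ~304 MB/s, matching dd's reported figures
```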

  • ----------------------------------------------------------------------
    I/O speed(1st run)   : 2.2 GB/s
    I/O speed(2nd run)   : 2.1 GB/s
    I/O speed(3rd run)   : 1.9 GB/s
    Average I/O speed    : 2116.3 MB/s
    ----------------------------------------------------------------------
    ioping: seek rate
        min/avg/max/mdev = 92.6 us / 178.5 us / 3.35 ms / 55.0 us
    ioping: sequential read speed
        generated 10.6 k requests in 5.00 s, 2.59 GiB, 2.12 k iops, 530.7 MiB/s
    
    dd: sequential write speed
        1st run:    2098.08 MiB/s
        2nd run:    2002.72 MiB/s
        3rd run:    2002.72 MiB/s
        average:    2034.51 MiB/s
    ----------------------------------------------------------------------
    [root@localhost ~]# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync;unlink test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 0.124754 s, 2.2 GB/s
    ----------------------------------------------------------------------
    test: (groupid=0, jobs=1): err= 0: pid=2630: Sun Oct  6 22:00:21 2019
      read : io=3073.7MB, bw=238080KB/s, iops=59519 , runt= 13220msec
      write: io=1022.4MB, bw=79190KB/s, iops=19797 , runt= 13220msec
    ----------------------------------------------------------------------
    
    Thanked by: mohamed
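    Those last read and write lines come from a single mixed job: 238080 KB/s divided by 59519 IOPS is about 4 KiB per request, and the read share of the total io (3073.7 MB of 4096 MB) is about 75%. A job along these lines would produce similar output (all parameters inferred or assumed, not taken from the post):

```ini
; sketch of a 4 KiB mixed random read/write job (~75/25 split inferred)
[test]
rw=randrw
rwmixread=75
bs=4k
ioengine=libaio
iodepth=64          ; assumed
size=4g             ; assumed from total io
direct=1
filename=fio.test   ; assumed scratch file
```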
  • In my opinion, NVMe SSDs are much better and faster than normal SSDs (I think on average 3 to 4x faster), but I don't think it's worth the price difference.

  • @cybertech said:
    [benchmark results quoted in full; snipped]

    Do you have results from an fio test?
