
Black Friday 2019: 100 GB SSD storage - 4,95 EUR monthly


Comments

  • dataforest Member, Patron Provider

    Regarding the benchmarks: Please note that this particular benchmark (popular here, I've forgotten the name) isn't that great as it only involves a simple dd call for testing disk performance. Without the right block size even NVMe SSDs don't perform that well here. If you use a larger block size, our NVMe servers deliver 6-12 GB/s (!) sequential read performance without any problems. Of course the small servers here won't deliver the same, but when it comes to IOPS, the SAS setup really rocks. I personally like to test random read/write with fio:

    root@bench:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(2049MiB/21260msec)
      write: IOPS=24.6k, BW=96.3MiB/s (101MB/s)(2047MiB/21260msec); 0 zone resets
    

    (Performed on a Black Friday mini VM)
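
    For anyone who hasn't used fio before, a quick rundown of what the flags in the call above do (just annotation, nothing new measured):

    # --direct=1         O_DIRECT, bypass the page cache so the disk itself is measured
    # --ioengine=libaio  Linux native asynchronous I/O
    # --bs=4k            4 KiB blocks: stresses IOPS rather than bandwidth
    # --iodepth=64       keep up to 64 requests in flight
    # --readwrite=randrw --rwmixread=50   mixed random workload, 50% reads
    # --size=4G          total amount of I/O per run
    # --randrepeat=1     repeatable random pattern across runs
    # --gtod_reduce=1    fewer timestamp syscalls, less measurement overhead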

    The IOPS might even increase if we add more SSDs to the nodes =)

    FYI: Our NVMe nodes deliver up to 80k random IOPS (read/write).

    Sequential read:

    root@bench:~ # dd if=/dev/vda of=/dev/null bs=32M iflag=direct count=128
    128+0 records in
    128+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 2.0196 s, 2.1 GB/s
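
    For reference, what the dd flags above do:

    # iflag=direct   read with O_DIRECT so the page cache can't inflate the result
    # bs=32M         large blocks: raw throughput dominates, not per-request overhead
    # count=128      128 x 32 MiB = 4 GiB read in total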
    

    I think that's great, not only regarding the price but in general. And now, hammer the nodes with your own benchmarks!

    Best Regards,
    Tim

  • @akb said:
    Hi @thedp @Falzo @rchurch @sonic,

    If this smaller VPS from them got deployed for you, I would really appreciate it if you could post the CPU flags and a bench/nench.

    Here it is: https://browser.geekbench.com/v4/cpu/15001384

    Thanked by dataforest, akb
  • akb Member
    edited December 2019

    Thanks :) Can you also post the output of a bench.sh test, as it does some basic network tests, and the CPU flags (cat /proc/cpuinfo)?
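
    If it helps, something like this prints just the interesting flags instead of the whole cpuinfo dump (a sketch, adjust the flag list as needed):

    # one flag per line, filtered for the usual suspects
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -wE 'aes|vmx|svm|avx'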

  • @PHP_Friends said:
    We have delivered all orders that came in overnight. Now it's time for some coffee :D

    Thanked by dataforest
  • Falzo Member
    edited December 2019

    @PHP_Friends said:

    fio

    can confirm, getting about 20k+20k on 4k blocksize, random read/write mix 50%. this goes down to 9k+9k with 64k blocksize which still equals >500MB/s in rw speed.

    definitely good numbers!
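
    quick sanity check on that claim, since bandwidth = blocksize * iops (shell sketch with the numbers from above):

    awk 'BEGIN { print 9000 * 64 / 1024 " MiB/s per direction" }'   # ~562 MiB/s, so >500 MB/s checks out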

    while I agree with @PHP_Friends that dd and wget are not the best tools for a benchmark, here is a quick nench anyway:

    -------------------------------------------------
     nench.sh v2019.07.20 -- https://git.io/nench.sh
     benchmark timestamp:    2019-12-05 12:36:39 UTC
    -------------------------------------------------
    
    Processor:    Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
    CPU cores:    1
    Frequency:    2100.000 MHz
    RAM:          3.9G
    Swap:         -
    Kernel:       Linux 4.15.0-43-generic x86_64
    
    Disks:
    vda     50G  HDD
    
    CPU: SHA256-hashing 500 MB
        4.020 seconds
    CPU: bzip2-compressing 500 MB
        7.631 seconds
    CPU: AES-encrypting 500 MB
        2.591 seconds
    
    ioping: seek rate
        min/avg/max/mdev = 121.9 us / 237.1 us / 14.3 ms / 165.9 us
    ioping: sequential read speed
        generated 10.8 k requests in 5.00 s, 2.64 GiB, 2.16 k iops, 540.8 MiB/s
    
    dd: sequential write speed
        1st run:    295.64 MiB/s
        2nd run:    496.86 MiB/s
        3rd run:    652.31 MiB/s
        average:    481.61 MiB/s
    
    IPv4 speedtests
    
        Cachefly CDN:         53.50 MiB/s
        Leaseweb (NL):        39.90 MiB/s
        Softlayer DAL (US):   14.34 MiB/s
        Online.net (FR):      41.12 MiB/s
        OVH BHS (CA):         5.98 MiB/s
    
    IPv6 speedtests
    
        Leaseweb (NL):        1.95 MiB/s
        Softlayer DAL (US):   14.33 MiB/s
        Online.net (FR):      44.86 MiB/s
        OVH BHS (CA):         7.74 MiB/s
    -------------------------------------------------
    
  • Thanks @Falzo for that nench. A nench/bench script outputs some network throughput numbers, and that is what I was after. The CPU is probably passed through too, but can you also post the CPU flags exposed to the VM?

  • @akb said:

    here are more speeds with iperf from YABS:

    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider                  | Location (Link)           | Send Speed      | Recv Speed     
                              |                           |                 |                
    Bouygues Telecom          | Paris, FR (10G)           | 545 Mbits/sec   | 468 Mbits/sec  
    Online.net                | Paris, FR (10G)           | busy            | 0.00 bits/sec  
    Severius                  | The Netherlands (10G)     | 545 Mbits/sec   | 394 Mbits/sec  
    Worldstream               | The Netherlands (10G)     | 545 Mbits/sec   | 465 Mbits/sec  
    wilhelm.tel               | Hamburg, DE (10G)         | 544 Mbits/sec   | 464 Mbits/sec  
    Biznet                    | Bogor, Indonesia (1G)     | busy            | busy           
    Hostkey                   | Moscow, RU (1G)           | 493 Mbits/sec   | 446 Mbits/sec  
    Velocity Online           | Tallahassee, FL, US (10G) | 437 Mbits/sec   | 247 Mbits/sec  
    Airstream Communications  | Eau Claire, WI, US (10G)  | 469 Mbits/sec   | 236 Mbits/sec  
    Hurricane Electric        | Fremont, CA, US (10G)     | busy            | 140 Mbits/sec  
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider                  | Location (Link)           | Send Speed      | Recv Speed     
                              |                           |                 |                
    Bouygues Telecom          | Paris, FR (10G)           | 545 Mbits/sec   | 467 Mbits/sec  
    Online.net                | Paris, FR (10G)           | busy            | busy           
    Severius                  | The Netherlands (10G)     | 545 Mbits/sec   | 423 Mbits/sec  
    Worldstream               | The Netherlands (10G)     | 544 Mbits/sec   | 468 Mbits/sec  
    wilhelm.tel               | Hamburg, DE (10G)         | 545 Mbits/sec   | 462 Mbits/sec  
    Airstream Communications  | Eau Claire, WI, US (10G)  | 459 Mbits/sec   | 143 Mbits/sec  
    Hurricane Electric        | Fremont, CA, US (10G)     | 454 Mbits/sec   | busy           
    

    for CPU flags, AES-NI and VMX/VT-x are enabled/passed through...

    Thanked by dataforest
  • @PHP_Friends said:
    Regarding the benchmarks: Please note that this particular benchmark (popular here, I've forgotten the name) isn't that great as it only involves a simple dd call for testing disk performance. Without the right block size even NVMe SSDs don't perform that well here.

    I personally always test dd directly on the command line, with a large enough block size. Most benchmarks go for a very quick test; people don't like to wait anymore. Same goes for ioping.
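
    A minimal sketch of the kind of dd test I mean (illustrative file name and sizes, run in a scratch directory):

    dd if=/dev/zero of=ddtest bs=16M count=256 oflag=direct   # ~4 GiB sequential write, page cache bypassed
    dd if=ddtest of=/dev/null bs=16M iflag=direct             # read it back the same way
    rm -f ddtest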

    (...) I personally like to test random read/write with fio:

    I didn't know fio, so I performed two tests. To be perfectly clear: these are not on the Black Friday VPS but on the Schnupperspecial.

    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=1113, BW=4452KiB/s (4559kB/s)(2049MiB/471345msec)
      write: IOPS=1111, BW=4446KiB/s (4553kB/s)(2047MiB/471345msec); 0 zone resets
    
    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=1104, BW=4416KiB/s (4522kB/s)(2049MiB/475179msec)
      write: IOPS=1102, BW=4411KiB/s (4516kB/s)(2047MiB/475179msec); 0 zone resets
    

    Given that these are 4K blocks, I'd think the IOPS on an SSD would be a bit better. That said, I don't know if I'm reading these correctly. The VM is probably not connected to the same SAS SSDs, so I understand I can't compare these directly to the BF special, but still. Could be me, though. In my understanding of IOPS I would expect at least ~10K with a 4K block size.

    (mind that these two tests were done 16 hours apart)

  • @debaser said:
    In my understanding of IOPS I would expect at least ~10K with a 4K block size.

    true. probably a full node with a lot going on (noisy neighbours?) or IOPS-limited from the beginning ;-)
    what do you get as a result for 64k blocksize?

    Thanked by debaser
  • dataforest Member, Patron Provider

    @debaser In the SSD G2 we still have some SATA nodes running. Could you open a ticket about that? We'll check :)

  • debaser Member
    edited December 2019

    @Falzo said:
    true. probably a full node with a lot going on (noisy neighbours?) or IOPS-limited from the beginning ;-)

    It's good that you say this, because I'm obviously an idiot. I was on a very busy or full node, which led to some performance issues. To make sure I had tried everything before opening a ticket, I set the virtual disk driver to IDE. Tim from @PHP_Friends migrated me to another node within an hour. And I forgot to switch the driver back to virtio.
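
    (A quick way to see which driver is active from inside the VM, for anyone wondering: virtio-blk disks show up as /dev/vdX, emulated IDE/SATA as /dev/sdX.)

    lsblk -d -o NAME,SIZE,TYPE   # vda = virtio, sda = emulated controller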

    Big difference:

    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=41.2k, BW=161MiB/s (169MB/s)(2049MiB/12722msec)
      write: IOPS=41.2k, BW=161MiB/s (169MB/s)(2047MiB/12722msec); 0 zone resets
    
    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=20.7k, BW=1291MiB/s (1354MB/s)(2046MiB/1585msec)
      write: IOPS=20.7k, BW=1293MiB/s (1356MB/s)(2050MiB/1585msec); 0 zone resets
    
    Thanked by dataforest, Falzo
  • dataforest Member, Patron Provider

    @debaser said:
    I'm obviously an idiot

    We all are. Sometimes. :D

    :heart:

    Thanked by debaser
  • dataforest Member, Patron Provider

    I was curious and ran the test on a managed server running on an older SATA node. Even there we see way more IOPS:

    root@mngmt01:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep -i iops
      read : io=2046.9MB, bw=29805KB/s, iops=7451, runt= 70323msec
      write: io=2049.2MB, bw=29838KB/s, iops=7459, runt= 70323msec
    

    So yes, the bad benchmark result was only caused by the IDE driver. However, if 15k IOPS are not enough, we always offer a free migration to a SAS node for G2 customers :)

    The Black Friday nodes (more specifically: all machines we bought since the end of 2016) are always powered by SAS + datacenter SSDs.

    Best Regards,
    Tim

    Thanked by cybertech
  • I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

  • @poisson said:
    I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

    The second test in my post is with 64K blocks.

  • @poisson said:
    I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

    in general, 4k bs in fio should give you a good idea of the maximum iops possible.
    as bandwidth equals bs*iops, a higher blocksize like 64k is usually a better indicator for bandwidth limitations.
    obviously the iops with a bigger bs will decrease accordingly once you reach the max bw...
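
    to make that concrete, a worked example with the 64k numbers debaser posted above:

    awk 'BEGIN { print 20700 * 64 / 1024 " MiB/s" }'   # ~1294 MiB/s -- matches the reported BW=1291MiB/s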

    TL;DR: multiple runs with different blocksizes like @debaser did make a lot of sense ;-)

    Thanked by dataforest, vimalware
  • @debaser said:

    @poisson said:
    I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

    The second test in my post is with 64K blocks.

    Wow. Payload increased by a factor of 16, IOPS only drop by a factor of 2. Good stuff.
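
    Quick check on those factors (shell sketch, numbers from the two runs above):

    awk 'BEGIN { print 64/4 "x the payload, " 41200/20700 "x fewer IOPS" }'   # 16x payload, ~2x fewer IOPS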

    Thanked by dataforest
  • @Falzo said:

    @poisson said:
    I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

    in general, 4k bs in fio should give you a good idea of the maximum iops possible.
    as bandwidth equals bs*iops, a higher blocksize like 64k is usually a better indicator for bandwidth limitations.
    obviously the iops with a bigger bs will decrease accordingly once you reach the max bw...

    TL;DR: multiple runs with different blocksizes like @debaser did make a lot of sense ;-)

    Based on what you said, if iops decline less rapidly relative to blocksize, we are looking at premium potassium, because much heavier loads can be pushed without a corresponding penalty on iops. @debaser is on an excellent node now, based on this reasoning.

  • @poisson said:

    @Falzo said:

    @poisson said:
    I read that a 4k block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32k?

    in general, 4k bs in fio should give you a good idea of the maximum iops possible.
    as bandwidth equals bs*iops, a higher blocksize like 64k is usually a better indicator for bandwidth limitations.
    obviously the iops with a bigger bs will decrease accordingly once you reach the max bw...

    TL;DR: multiple runs with different blocksizes like @debaser did make a lot of sense ;-)

    Based on what you said, if iops decline less rapidly relative to blocksize, we are looking at premium potassium, because much heavier loads can be pushed without a corresponding penalty on iops. @debaser is on an excellent node now, based on this reasoning.

    exactly! it's not linear all the way, because you have two limits, one being the max iops your storage potato is able to achieve and the second being the size of your sata/sas hose you need to get your watering data through.

    in the example above you'd probably be able to achieve the same iops of ~40k for 4k, 8k, 16k and 32k bs, because io is the major limiting factor. only after that does it change to bw, and that's why iops decrease proportionally with larger blocksizes from that point.
    which probably doesn't matter much, as at that point you're into large files / sequential writes anyway ;-)
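
    a little sketch of that two-limit model (hypothetical caps eyeballed from debaser's runs: ~40k iops, ~1300 MiB/s):

    for bs in 4 8 16 32 64 128; do
        # effective iops = min(iops cap, bandwidth cap / blocksize)
        awk -v bs=$bs 'BEGIN { bw = 1300 * 1024; io = 40000; x = bw / bs;
            print bs "k: " (x < io ? x " iops (bw-limited)" : io " iops (iops-limited)") }'
    done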

    Thanked by poisson, vimalware
  • @debaser said:

    @Falzo said:
    true. probably a full node with a lot going on (noisy neighbours?) or IOPS-limited from the beginning ;-)

    It's good that you say this, because I'm obviously an idiot. I was on a very busy or full node, which led to some performance issues. To make sure I had tried everything before opening a ticket, I set the virtual disk driver to IDE. Tim from @PHP_Friends migrated me to another node within an hour. And I forgot to switch the driver back to virtio.

    Big difference:

    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=41.2k, BW=161MiB/s (169MB/s)(2049MiB/12722msec)
      write: IOPS=41.2k, BW=161MiB/s (169MB/s)(2047MiB/12722msec); 0 zone resets
    
    root@redacted:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50 | grep IOPS
      read: IOPS=20.7k, BW=1291MiB/s (1354MB/s)(2046MiB/1585msec)
      write: IOPS=20.7k, BW=1293MiB/s (1356MB/s)(2050MiB/1585msec); 0 zone resets
    

    Good to see the numbers, thanks @PHP_Friends

    Thanked by dataforest
  • @thedp said:

    Virtualization: KVM
    Processor: Intel Xeon E5-CPU
    CPU: 1
    RAM: 4GB ECC RAM
    Disk: 50GB SSD
    Network: 500 MBit/s
    DDoS Protection: Yes
    IP: IPv4 + IPv6 (/64) with RDNS Support
    Contract: 12 Months
    

    Price: €29.70/year (Inclusive of 19% VAT)
    Order Link: https://php-friends.de/add-to-cart/id:148

    Is 500 Mbit/s a good network speed for high-traffic websites?

  • @pkr said:

    @thedp said:

    Virtualization: KVM
    Processor: Intel Xeon E5-CPU
    CPU: 1
    RAM: 4GB ECC RAM
    Disk: 50GB SSD
    Network: 500 MBit/s
    DDoS Protection: Yes
    IP: IPv4 + IPv6 (/64) with RDNS Support
    Contract: 12 Months
    

    Price: €29.70/year (Inclusive of 19% VAT)
    Order Link: https://php-friends.de/add-to-cart/id:148

    Is 500 Mbit/s a good network speed for high-traffic websites?

    Absolutely.
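
    Some napkin math behind that, assuming ~1 MB average page weight (a rough sketch, not a guarantee):

    # 500 Mbit/s = 62.5 MB/s flat out
    awk 'BEGIN { print 500/8 " MB/s => ~" int(500/8) " pages/s => ~" int(500/8*86400/1e6) "M page views/day" }'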

    Thanked by pkr