Typical disk speed for SSD RAID 10?

Based on your experience, is the following a typical disk speed for SSD RAID 10?

This is on a dedicated server.

Below/Above average?

fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 82.36 MB/s   (20.5k) | 478.74 MB/s   (7.4k)
Write      | 82.58 MB/s   (20.6k) | 481.26 MB/s   (7.5k)
Total      | 164.94 MB/s  (41.2k) | 960.01 MB/s  (14.9k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 751.05 MB/s   (1.4k) | 749.74 MB/s    (732)
Write      | 790.96 MB/s   (1.5k) | 799.68 MB/s    (780)
Total      | 1.54 GB/s     (3.0k) | 1.54 GB/s     (1.5k)
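
For reference, results in this format come from fio mixed random read/write runs. An invocation along these lines reproduces the 4k test (a sketch only; the exact job parameters and test file path are assumptions):

    # Mixed 50/50 random read/write at 4k block size. Parameters such as
    # iodepth, numjobs, size, and the test file path are assumptions here.
    fio --name=rand_rw_4k --ioengine=libaio --direct=1 --rw=randrw \
        --rwmixread=50 --bs=4k --iodepth=64 --numjobs=2 --size=2G \
        --runtime=30 --time_based --group_reporting --filename=/mnt/fio.test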

TIA

Comments

  • Pretty good for RAID 10 SSD.

    Thanked by 1Kassem
  • PeterP Member, Host Rep

    How many drives? Enterprise or consumer?

  • Kassem Member

    @PeterP said: How many drives? Enterprise or consumer?

    4x 1.92 TB enterprise drives.

  • PeterP Member, Host Rep

    @Kassem said:

    @PeterP said: How many drives? Enterprise or consumer?

    4x 1.92 TB enterprise drives.

    It's a bit below average IMHO, but about as expected. It all depends on the make and model of each drive and what they're rated for (endurance vs. performance, etc.).

    Thanked by 1Kassem
  • SW or HW RAID?

  • Your SSD performance is better than mine

    fio Disk Speed Tests (Mixed R/W 50/50):

    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 9.35 MB/s     (2.3k) | 131.81 MB/s   (2.0k)
    Write      | 9.38 MB/s     (2.3k) | 132.50 MB/s   (2.0k)
    Total      | 18.74 MB/s    (4.6k) | 264.32 MB/s   (4.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 450.76 MB/s    (880) | 696.48 MB/s    (680)
    Write      | 474.71 MB/s    (927) | 742.87 MB/s    (725)
    Total      | 925.48 MB/s   (1.8k) | 1.43 GB/s     (1.4k)
  • MannDude Host Rep, Veteran
    edited June 2021

    Not that it answers your question, but for those who want to compare RAID10 SSD to RAID-1 NVMe, here are the results of a small and idle node we have with RAID-1 NVMe drives:

    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 613.65 MB/s (153.4k) | 1.62 GB/s    (25.3k)
    Write      | 615.26 MB/s (153.8k) | 1.63 GB/s    (25.4k)
    Total      | 1.22 GB/s   (307.2k) | 3.25 GB/s    (50.8k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 1.70 GB/s     (3.3k) | 1.77 GB/s     (1.7k)
    Write      | 1.79 GB/s     (3.5k) | 1.89 GB/s     (1.8k)
    Total      | 3.50 GB/s     (6.8k) | 3.66 GB/s     (3.5k)
    

    The SSD RAID-10 speeds look fine and seem about on par with what I recall from similar setups in the past.

    Thanked by 1Kassem
  • Kassem Member

    @logaritse Your result is also for SSD with RAID 10?

    @MannDude You tried RAID 10 with NVMe drives? Wonder what the speed will look like.

  • jsg Member, Resident Benchmarker

    I don't trust those benchmarks, but assuming you ran them on a halfway recent Linux kernel, the results are not great. Not particularly poor, but what I'd call lower-end midrange, possibly even upper-end low range considering that it's a HW RAID 10.

    But then it of course depends on what you will use them for. For a DB? Not a great idea. For "you know, stuff, OS, files"? Damn good enough.

  • @Kassem said:
    @logaritse Your result is also for SSD with RAID 10?

    Yes, RAID 10 with enterprise SSDs (1.92 TB).

    Thanked by 1Kassem
  • letbox Member, Patron Provider
    edited June 2021

    @Kassem said:
    @logaritse Your result is also for SSD with RAID 10?

    @MannDude You tried RAID 10 with NVMe drives? Wonder what the speed will look like.

    I believe it will be capped by the CPU/motherboard PCIe, so there's no point in doing NVMe RAID 10.

    Regards

    Thanked by 1Kassem
  • ViridWeb Member, Host Rep

    The CPU also plays a role. Which one are you using?

  • Kassem Member

    @jsg said: I don't trust those benchmarks, but assuming you ran them on a halfway recent Linux kernel, the results are not great. Not particularly poor, but what I'd call lower-end midrange, possibly even upper-end low range considering that it's a HW RAID 10.

    But then it of course depends on what you will use them for. For a DB? Not a great idea. For "you know, stuff, OS, files"? Damn good enough.

    It was run on CentOS 7.9, kernel 3.10.0-1160.31.1.el7.x86_64.

    The server was idle while running it, and it had just been provisioned as well.

    This is the server in question: https://www.hetzner.com/dedicated-rootserver/px93

    The adapter is Adaptec ASR8405 https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-8405/

    If this is below the expected speed for this server and SSD RAID 10 disks, would it be worth checking with support, or is it not a problem for support as long as the disks are running?

    All 4 disks show ~20K Power_On_Hours.

    @ViridWeb Linked the server info above.
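
    For reference, those SMART values for drives behind an Adaptec controller can be pulled roughly like this (a sketch: the /dev/sg names are placeholders, and some Adaptec setups need smartctl's -d aacraid device type instead of -d sat):

    # Print power-on hours and wear indicators for each physical SSD
    # behind the controller; the /dev/sg* names are placeholders.
    for dev in /dev/sg1 /dev/sg2 /dev/sg3 /dev/sg4; do
        echo "== $dev =="
        smartctl -a -d sat "$dev" | grep -iE 'power_on_hours|wear'
    done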

  • jsg Member, Resident Benchmarker
    edited June 2021

    @Kassem said:
    It was run on CentOS 7.9, kernel 3.10.0-1160.31.1.el7.x86_64.

    I presume that's a recent version? (Sorry, I don't care much for CentOS and hence am not up to date.)

    The server was idle while running it, and it had just been provisioned as well.

    That's good.

    Adaptec adapters are usually OK. And I presume the RAID setup/configuration was done via the BIOS and you simply get your 4 disks presented as one Linux device, correct?
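
    A quick way to confirm that is something like this (a minimal sketch):

    # One HW RAID logical drive should show up as a single block device.
    lsblk -o NAME,SIZE,TYPE,MODEL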

    If this is below the expected speed for this server and SSD RAID 10 disks, would it be worth checking with support, or is it not a problem for support as long as the disks are running?

    Now that I know the RAID adapter is not some old board but a relatively decent one that should be good for 1 GB/s (maybe even a bit more, depending on details), and presumably configured by Hetzner, I'm even less excited about the performance numbers you presented.

    Caveat: I don't know that dedi product. Maybe it's one that doesn't promise speed-demon disks, so that RAID array is kind of normal for that product. Keep in mind that there are many who automatically click on "SSD" and don't think or care about what they actually get as long as it's SSD, and obviously providers adapt to that ...

    All 4 disks show ~20K Power_On_Hours.

    That doesn't mean a whole lot. What's relevant is how many GB or TB have been written to them during those slightly more than two years (and obviously how many errors the devices had). 20K hours can mean anything between "works great" and "pretty much worn-out drives".

    What do you intend to use them for?

  • Kassem Member

    @jsg said: I presume that's a recent version? (Sorry, I don't care much for CentOS and hence am not up to date.)

    I prefer Ubuntu but CentOS is what WHM/cPanel likes atm. Circa 2020.

    @jsg said: Adaptec adapters are usually OK. And I presume the RAID setup/configuration was done via the BIOS and you simply get your 4 disks presented as one Linux device, correct?

    I set up RAID 10 in Hetzner's rescue system using Adaptec's tool, and now it is a single logical device.

    Controllers found: 1
    ----------------------------------------------------------------------
    Logical device information
    ----------------------------------------------------------------------
    Logical Device number 0
       Logical Device name                        : LogicalDrv 0
       Block Size of member drives                : 512 Bytes
       RAID level                                 : 10
       Unique Identifier                          : 9364AC22
       Status of Logical Device                   : Optimal
       Additional details                         : Quick initialized
       Size                                       : 3655670 MB
       Parity space                               : 3655680 MB
       Stripe-unit size                           : 256 KB
       Interface Type                             : Serial ATA
       Device Type                                : SSD
       Read-cache setting                         : Enabled
       Read-cache status                          : On
       Write-cache setting                        : Enabled
       Write-cache status                         : On
       Partitioned                                : Yes
       Protected by Hot-Spare                     : No
       Bootable                                   : Yes
       Failed stripes                             : No
       Power settings                             : Disabled
       --------------------------------------------------------
       Logical Device segment information
       --------------------------------------------------------
       Group 0, Segment 0                         : Present (1831420MB, SATA, SSD, Connector:0, Device:0)         
       Group 0, Segment 1                         : Present (1831420MB, SATA, SSD, Connector:0, Device:1)         
       Group 1, Segment 0                         : Present (1831420MB, SATA, SSD, Connector:0, Device:2)         
       Group 1, Segment 1                         : Present (1831420MB, SATA, SSD, Connector:0, Device:3)         
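
    For reference, that report is what Adaptec's CLI prints with something along these lines (controller number 1 is an assumption):

    # Show logical device information on controller 1 via Adaptec's arcconf.
    arcconf getconfig 1 ld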
    
    
    

    @jsg said: That doesn't mean a whole lot. What's relevant is how many GB or TB have been written to them during those slightly more than two years (and obviously how many errors the devices had). 20K hours can mean anything between "works great" and "pretty much worn-out drives".

    Per SMART, about 3% of their lifetime is used. Full output: https://pastebin.ubuntu.com/p/RXkk9Q2ZV3/

    @jsg said: What do you intend to use them for?

    Busy WHM/cPanel server.

  • jsg Member, Resident Benchmarker

    @Kassem said:
    I prefer Ubuntu but CentOS is what WHM/cPanel likes atm. Circa 2020.

    Good enough

    I set up RAID 10 in Hetzner's rescue system using Adaptec's tool, and now it is a single logical device.

    [status report]

    Looks good to me, except maybe

    Stripe-unit size : 256 KB

    which might be somewhat large. Usually with SSDs I'd go for smaller stripe sizes.

    Per SMART, about 3% of their lifetime is used. Full output: https://pastebin.ubuntu.com/p/RXkk9Q2ZV3/

    Thanks. Looks good to me. Just the roughly 150 TB written ... hmm, what's the TBW for those disks? Preferably solidly north of 500. Plus 'unexpected power down events'? Doesn't that Adaptec have a battery? If not, you are betting on the SSDs being enterprise types with onboard "UPS" caps.

    Busy WHM/cPanel server.

    With MySQL I guess? Get your stripe size lower, max 16 KB if possible.
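
    The stripe unit is set at array creation time, so changing it means recreating the array. With Adaptec's CLI that looks roughly like this (a sketch only: this destroys the existing array and its data, and the channel/device numbers are assumptions):

    # Recreate the RAID 10 logical drive with a 16 KB stripe unit.
    # WARNING: this wipes the existing array. Channel/device pairs
    # (0 0, 0 1, 0 2, 0 3) are placeholders for this setup.
    arcconf create 1 logicaldrive stripesize 16 max 10 0 0 0 1 0 2 0 3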

  • Kassem Member

    Thanks @jsg

    @jsg said: what's the TBW for those disks?

    3500 TB per their datasheet at the ECO endurance level for the 1.92 TB drives: https://gzhls.at/blob/ldb/2/7/2/a/5694e04e1e329ba28a48d6fe5f6cf598a7d7.pdf

    @jsg said: Doesn't that Adaptec have a battery?

    It doesn't.

    @jsg said: If not, you are betting on the SSDs being enterprise types with onboard "UPS" caps.

    They mention this: Enhanced power-loss data protection with data protection capacitor monitoring

    I guess this is what you mean by "UPS" capacitors.

    @jsg said: With MySQL I guess? Get your stripe size lower, max 16 KB if possible.

    Yes, MySQL too. What would be the benefit of reducing it to 16 KB?

  • jsg Member, Resident Benchmarker
    edited June 2021

    @Kassem said:
    Thanks @jsg

    @jsg said: what's the TBW for those disks?

    3500 TB per their datasheet at the ECO endurance level for the 1.92 TB drives.

    Nice, so no worries on that front. (Roughly 150 TB written against a 3500 TBW rating is only about 4%, which lines up with the ~3% lifetime used that SMART reports.)

    @jsg said: If not, you are betting on the SSDs being enterprise types with onboard "UPS" caps.

    They mention this: Enhanced power-loss data protection with data protection capacitor monitoring

    I guess this is what you mean by "UPS" capacitors.

    Yes, I guess so too.

    @jsg said: With MySQL I guess? Get your stripe size lower, max 16 KB if possible.

    Yes, MySQL too. What would be the benefit of reducing it to 16 KB?

    Way longer SSD life and highly likely better performance for your use case. Plus, keep in mind how DBs write to disk ...
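
    For instance, InnoDB writes in fixed-size pages, and its default page size is exactly 16 KB; a minimal sketch to check it:

    # InnoDB's default page size is 16 KB. Matching the stripe unit to it
    # lets a page write hit one stripe unit instead of part of a 256 KB one.
    mysql -e "SHOW VARIABLES LIKE 'innodb_page_size';"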

    Thanked by 1Kassem
  • @key900 said: I believe it will be capped by the CPU/motherboard PCIe, so there's no point in doing NVMe RAID 10.

    It's a good thing you didn't tell that to Intel or AMD, who both have NVMe RAID support in their CPUs, like Threadripper and above.

  • letbox Member, Patron Provider

    @TimboJones said: It's a good thing you didn't tell that to Intel or AMD, who both have NVMe RAID support in their CPUs, like Threadripper and above.

    Well, then it's good to know! But for the record, my last motherboard (an X470D4U2) provided half speed with 2 M.2 sticks. I know there are things supporting it now, but I still believe there's no point in doing it now. Maybe in the future?

    Regards

  • jsg Member, Resident Benchmarker

    @key900 said: Well, then it's good to know! But for the record, my last motherboard (an X470D4U2) provided half speed with 2 M.2 sticks. [...]

    The reason invariably is PCIe speed and not enough PCIe lanes.

  • TimboJones Member
    edited June 2021

    @key900 said: Well, then it's good to know! But for the record, my last motherboard (an X470D4U2) provided half speed with 2 M.2 sticks. [...]

    PCIe 3.0 x2.

    That's a shitty board. It's expected to be gimped when using x2 lanes. It makes no sense to me why vendors bothered with them.

    With other CPUs and motherboards, using PCIe bifurcation you can have 8-11 (or more?) NVMe drives, each on PCIe 3.0 x4 lanes, in a system without an external RAID card.
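
    For anyone checking whether an M.2 slot is gimped like that, the negotiated link width shows up in lspci (a sketch; the 01:00.0 address is a placeholder for your NVMe device):

    # Compare the link the device supports (LnkCap) with what it actually
    # negotiated (LnkSta); x2 instead of x4 explains half speed.
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'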
