NVME SSDs in Software RAID1 vs Software RAID 10

Umair Member

Hello,

Just wondering if there is going to be any "real" benefit of using NVMe drives in software RAID 10 vs software RAID 1.

For your info:
The drives will be Samsung PM9A3 (likely 1.92TB).
The server will have an AMD EPYC 7543 processor.
Usage: KVM host node with a bunch of high-traffic WP sites
(we are re-organizing a few of our setups).
We normally use raw LVM volumes for the VM disks. (If that makes any difference.)

Ignore the cost and capacity; I am just wondering if there is going to be a reasonable "performance" benefit with soft RAID 10. (We already have soft RAID 1 on a different server, but have never used soft RAID 10 with NVMe drives.)

Or should I just stick with software RAID 1?

Is anyone here using NVMe drives in soft RAID 10?
Any pros / cons? Kindly give your insight ...

Ask me anything if you need more info.

Thanks

Comments

  • rustelekom Member, Patron Provider

    It does not matter whether it is NVMe, SATA SSD, or spinning disk: RAID 10 provides higher read performance and better resilience for data recovery. So, in general, RAID 10 is better than RAID 1.

  • darkimmortal Member

    You don’t mention the number of drives, but with 2 drives RAID10 far2 is faster than RAID1 for all types of drives, including NVMe

    Thanked by 1maverick
  • jackb Member, Host Rep

    @darkimmortal said:
    You don’t mention the number of drives, but with 2 drives raid10 far2 is faster than raid1 for all types of drives including nvme

    Two drive raid10 is just raid0...

    Thanked by 1jsg
  • darkimmortal Member

    @jackb said:

    @darkimmortal said:
    You don’t mention the number of drives, but with 2 drives raid10 far2 is faster than raid1 for all types of drives including nvme

    Two drive raid10 is just raid0...

    Not in Linux md

    https://wiki.archlinux.org/title/RAID#RAID_level_comparison
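
    For reference, this is the sort of two-device "RAID10" md will accept; the far-2 layout stripes reads like RAID0 while still keeping two copies of every chunk. A minimal sketch; the device names here are hypothetical:

```shell
# 2-device md RAID10 with the "far 2" layout (device names are hypothetical)
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1
# verify the layout md actually chose
mdadm --detail /dev/md0 | grep -i layout
```

    Unlike a true two-disk RAID0, such an array keeps both copies of the data on different disks, so it survives the loss of either drive.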

  • rustelekom Member, Patron Provider

    Obviously, RAID 10 requires at least four disks: it is built from at least two RAID 1 pairs, and each pair needs two disks, so you need four disks in total to combine them into RAID 10.

    Thanked by 2jsg Xrmaddness
  • jackb Member, Host Rep
    edited March 22

    That's because you can create a 4 drive raid10 array with two missing members.

    Guess what a 4 drive raid10 array with two missing members is? Spoiler, it's functionally the same as raid0. No redundancy, loss of either drive means loss of the whole array.

    Thanked by 1jsg
  • PureVoltage Member, Patron Provider

    RAID 10 is quite a bit better; if you have the budget for it, we highly suggest it.

    Thanked by 1jsg
  • darkimmortal Member
    edited March 23

    @jackb said:

    That's because you can create a 4 drive raid10 array with two missing members.

    Guess what a 4 drive raid10 array with two missing members is? Spoiler, it's functionally the same as raid0. No redundancy, loss of either drive means loss of the whole array.

    That would be the case for traditional raid10, but not with Linux software raid

  • jsg Member, Resident Benchmarker

    @darkimmortal said:

    @jackb said:

    @darkimmortal said:
    You don’t mention the number of drives, but with 2 drives raid10 far2 is faster than raid1 for all types of drives including nvme

    Two drive raid10 is just raid0...

    Not in linux md

    https://wiki.archlinux.org/title/RAID#RAID_level_comparison

    Sorry, but that table (and page) is very questionable, arguably even wrong, especially regarding "min drives".

    Moreover, when you look a bit deeper rather than merely scratching the surface, things get quite complex. To give an example: whether the drives have a local cache (hint: spinning rust traditionally did not, for decades) can change things drastically; on the other hand, I have seen (and benchmarked) spinning-rust R10 that outperformed NVMe R1. Similarly, and of relevance here, both the load and the usage pattern (mostly sequential reads, say streaming, vs wildly random writes, say DB updates; single-tenant host vs heavily shared host; etc.) have a major impact, as does HW RAID vs SW RAID, along with quite a few other factors.

    But at least as a general orientation, R10 can be considered very significantly faster than R1. So for a production DB, for example, R10 will be well worth the higher drive cost; that said, wink wink nudge nudge, it will quite likely also be well worth the cost of high-quality drives and probably even a good RAID controller (with plenty of fast cache plus a battery/supercap).
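
    One way to probe the usage-pattern point on your own array is to run fio twice on the same target: once with a mixed random small-block workload and once with sequential large-block reads. A sketch only; the mount point and file name are hypothetical:

```shell
# DB-like pattern: 4k random reads+writes, deep queue (path is hypothetical)
fio --name=randrw --filename=/mnt/md0/fio.test --size=4G --direct=1 \
    --ioengine=libaio --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 \
    --runtime=30 --time_based --group_reporting
# streaming-like pattern: 1M sequential reads
fio --name=seqread --filename=/mnt/md0/fio.test --size=4G --direct=1 \
    --ioengine=libaio --rw=read --bs=1M --iodepth=8 \
    --runtime=30 --time_based --group_reporting
```

    Comparing the two runs on R1 vs R10 shows where the layout actually helps your workload, rather than relying on general rules of thumb.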

    Thanked by 2host_c HostDoc
  • jackb Member, Host Rep
    edited March 23

    @darkimmortal said:
    That would be the case for traditional raid10, but not with Linux software raid

    You are wrong. You aren't going to get a functional raid10 with two drives.

    You can get a "raid10" array that acts like raid0 or raid1, but that's all you're getting.

    People use this if they want to create a new array and migrate data to it, but don't have enough drive bays to do it with all drives present.

    It isn't an option to recommend for real use.

  • @jackb said:

    @darkimmortal said:
    That would be the case for traditional raid10, but not with Linux software raid

    You are wrong. You aren't going to get a functional raid10 with two drives.

    You can get a "raid10" array that acts like raid0 or raid1, but that's all you're getting.

    People use this if they want to create a new array and migrate data to it, but don't have enough drive bays to do it with all drives present.

    It isn't an option to recommend for real use.

    https://serverfault.com/questions/139022/explain-mds-raid10-f2
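
    The far-2 placement described in that answer can be sketched in a few lines. This is a toy model of the layout (illustrative arithmetic only, not the actual md driver code):

```python
# Toy model of md RAID10 "far 2" chunk placement (not the real driver math).
# Each logical chunk gets two copies: one in the "near" half of the disks,
# striped like RAID0, and one in the "far" half, rotated by one disk.
def far2_copies(chunk, n_disks, chunks_per_disk):
    """Return the (disk, offset) pairs holding the two copies of a chunk."""
    half = chunks_per_disk // 2
    stripe, pos = divmod(chunk, n_disks)
    near = (pos, stripe)                        # striped first copy
    far = ((pos + 1) % n_disks, half + stripe)  # rotated second copy
    return [near, far]

# With 2 disks, reads stripe across both (RAID0-like speed) while every
# chunk still lives on both disks (RAID1-like redundancy).
for c in range(4):
    print(c, far2_copies(c, n_disks=2, chunks_per_disk=8))
```

    In this model the two copies of any chunk always land on different disks, which is why a two-device far-2 array is not simply RAID0.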

    Thanked by 1maverick
  • Umair Member

    @jackb said:

    Two drive raid10 is just raid0...

    Obviously I meant 4 drives in RAID 10.
    I was purely asking whether 2 drives in RAID 1 vs 4 drives in RAID 10 will be significantly better for a KVM node.

    Since NVMe is pretty fast already.

  • Umair Member

    @jsg said:
    But at least as a general orientation R10 can be considered very significantly faster than R1. So, for a production DB, for an example, R10 will be well worth the higher drive cost; that said, wink wink nudge nudge, it highly likely will also be well worth the cost of high quality drives and probably even a good Raid controller (with plenty and fast cache plus accu/supercap).

    As I said, ignore the cost.
    Also, I was specifically asking about a software RAID setup for NVMe.

  • HostMayo Member, Patron Provider

    Not a single good response... OP actually wants to know the real-world gain of RAID 10 compared to RAID 1, i.e. does the performance gain really matter for NVMe, since NVMe is already very fast, and will he be able to hit those IOPS?

    Thanked by 1Umair
  • darkimmortal Member
    edited March 23

    @HostMayo said:
    Not a single good response... OP actually wants to know the real-world gain of RAID 10 compared to RAID 1, i.e. does the performance gain really matter for NVMe, since NVMe is already very fast, and will he be able to hit those IOPS?

    With the high IOPS of NVMe you can use RAID 10 with a small chunk size (c. 64k) with any number of disks and it will always be faster than RAID 1 in any workload. There are no performance reasons to choose RAID 1
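
    For anyone wanting to try the ~64k suggestion above: mdadm's --chunk option takes kibibytes and is set at array creation time. A sketch with hypothetical device names:

```shell
# 4-device md RAID10 with a 64k chunk size (device names are hypothetical)
mdadm --create /dev/md0 --level=10 --chunk=64 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```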

    Thanked by 1Umair
  • Umair Member

    @HostMayo said:
    Not a single good response... OP actually wants to know the real-world gain of RAID 10 compared to RAID 1, i.e. does the performance gain really matter for NVMe, since NVMe is already very fast, and will he be able to hit those IOPS?

    Thanks for saying that out loud :smile:

  • Umair Member

    For anyone interested, I went ahead with NVMe RAID 10. Taking one for the team :)
    Here are the results of NVMe RAID 1 vs NVMe RAID 10.

    RAID1

        fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
        ---------------------------------
        Block Size | 4k            (IOPS) | 64k           (IOPS)
          ------   | ---            ----  | ----           ----
        Read       | 218.96 MB/s  (54.7k) | 1.25 GB/s    (19.6k)
        Write      | 219.54 MB/s  (54.8k) | 1.26 GB/s    (19.7k)
        Total      | 438.50 MB/s (109.6k) | 2.51 GB/s    (39.3k)
                   |                      |
        Block Size | 512k          (IOPS) | 1m            (IOPS)
          ------   | ---            ----  | ----           ----
        Read       | 1.20 GB/s     (2.3k) | 1.15 GB/s     (1.1k)
        Write      | 1.26 GB/s     (2.4k) | 1.23 GB/s     (1.2k)
        Total      | 2.46 GB/s     (4.8k) | 2.39 GB/s     (2.3k)
    
        YABS completed in 31 sec
    

    RAID 10

        fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
        ---------------------------------
        Block Size | 4k            (IOPS) | 64k           (IOPS)
          ------   | ---            ----  | ----           ----
        Read       | 212.15 MB/s  (53.0k) | 2.25 GB/s    (35.2k)
        Write      | 212.71 MB/s  (53.1k) | 2.26 GB/s    (35.4k)
        Total      | 424.86 MB/s (106.2k) | 4.52 GB/s    (70.6k)
                   |                      |
        Block Size | 512k          (IOPS) | 1m            (IOPS)
          ------   | ---            ----  | ----           ----
        Read       | 3.87 GB/s     (7.5k) | 4.06 GB/s     (3.9k)
        Write      | 4.08 GB/s     (7.9k) | 4.33 GB/s     (4.2k)
        Total      | 7.95 GB/s    (15.5k) | 8.40 GB/s     (8.2k)
    
        YABS completed in 17 sec 
    

    The tests were done inside the VM, using LVM (raw).
    I ran the YABS disk test only, and I intentionally included the YABS completion time.

    I guess that explains it. I still have one more day of testing; if you want me to run any other disk-related tests, let me know.
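
    For what it's worth, the RAID10-to-RAID1 throughput ratios from the two tables above work out as follows (quick arithmetic on the "Total" rows, in MB/s):

```python
# RAID10 / RAID1 total throughput per block size, from the YABS tables above
raid1 = {"4k": 438.50, "64k": 2510.0, "512k": 2460.0, "1m": 2390.0}   # MB/s
raid10 = {"4k": 424.86, "64k": 4520.0, "512k": 7950.0, "1m": 8400.0}  # MB/s
for bs in raid1:
    print(f"{bs}: {raid10[bs] / raid1[bs]:.2f}x")
```

    Roughly parity at 4k, about 1.8x at 64k, and over 3x at the large block sizes, which matches the expectation that RAID 10 helps most with larger, more parallel I/O.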

    Thanked by 2ebietsy HostMayo
  • host_c Patron Provider, Top Host, Megathread Squad

    RAID 10 of 2 drives???? Wtf???? :D

    Man, did something get re-invented in the past weeks that I did not know about???

    :D :D :D

  • Umair Member
    edited March 31

    @host_c said:
    RAID 10 of 2 drives???? Wtf???? :D

    Man, did something get re-invented in the past weeks that I did not know about???

    Where did I say RAID 10 with 2 drives?? :|
    If you care to read above, that was a discussion of 2 NVMe in RAID 1 vs 4 NVMe in RAID 10, seeing whether RAID 10 is worth it.

  • HostMayo Member, Patron Provider

    Was the RAID 10 using software RAID? Is the server a custom build or a branded one?

  • Umair Member

    @HostMayo said:
    Was the RAID 10 using software RAID? Is the server a custom build or a branded one?

    Yes, both are using software RAID.
    It's a branded server, a Dell PowerEdge R7525.
