NVMe SSDs in Software RAID 1 vs Software RAID 10
Hello,
Just wondering if there is going to be any "real" benefit to using NVMe drives in Software RAID 10 vs Software RAID 1.
For your info:
The drives will be Samsung PM9A3 (likely 1.92TB).
The server will have an AMD EPYC 7543 processor.
Usage: KVM host node with a bunch of high-traffic WordPress sites.
(We are re-organizing a few of our setups.)
We normally use raw LVM volumes for the VM setup (if that makes any difference).
Ignore the cost and capacity; I'm just wondering if there is going to be a reasonable "performance" benefit with Soft RAID 10. (We already have Soft RAID 1 on a different server, but have never used Soft RAID 10 with NVMe drives.)
Or should I just stick with Software RAID 1?
Is anyone here using NVMe drives in Soft RAID 10?
Any pros/cons? Kindly share your insight...
Ask me anything if you need more info.
Thanks
Comments
It does not matter whether it is NVMe, SATA SSD, or spinning disk. It is well known that RAID 10 provides higher read performance and better redundancy for recovery. Therefore, in general, RAID 10 is always better than RAID 1.
You don't mention the number of drives, but with 2 drives, raid10 in the far2 layout is faster than raid1 for all types of drives, including NVMe.
Two-drive raid10 is just raid0...
Not in Linux md.
https://wiki.archlinux.org/title/RAID#RAID_level_comparison
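For reference, a minimal sketch of what that two-drive far2 claim looks like in practice with mdadm (device names are placeholders, not from this thread; substitute your own NVMe namespaces):

```
# Linux md accepts raid10 with only two devices when using a far layout
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Confirm the layout md actually built
mdadm --detail /dev/md0 | grep -i layout
cat /proc/mdstat
```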
Obviously, RAID 10 requires at least four disks: you create at least two RAID 1 pairs, each pair needs two disks, and you combine the pairs with RAID 0, so four disks in total.
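Spelled out with mdadm, that traditional nesting is two mirrors with a stripe on top; Linux md can also build the equivalent natively in one array. A hedged sketch with placeholder device names:

```
# Traditional RAID 1+0: two RAID 1 pairs (mirrors)...
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1

# ...striped together with RAID 0
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2

# Native Linux md equivalent in a single array:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```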
That's because you can create a 4-drive raid10 array with two missing members.
Guess what a 4-drive raid10 array with two missing members is? Spoiler: it's functionally the same as raid0. No redundancy; loss of either drive means loss of the whole array.
RAID 10 is quite a bit better; if you have the budget for it, we highly suggest it.
That would be the case for traditional raid10, but not with Linux software raid
Sorry, but that table (and page) is very questionable and arguably even wrong, especially regarding "min drives".
Moreover, when you look a bit deeper rather than merely scratching the surface, things get quite complex. To give an example: whether the drives have local cache or not (hint: spinning rust traditionally did not, for decades) can change things drastically; on the other hand, I've seen (and benchmarked) spinning-rust R10 that outperformed NVMe R1. Similarly, and of relevance here, both the load and the usage pattern (mostly sequential reads, say streaming, vs wildly random writes, say DB updates; single-tenant host vs heavily shared host; etc.) have a major impact, and so do HW RAID vs SW RAID and quite a few other factors.
But at least as a general orientation, R10 can be considered very significantly faster than R1. So for a production DB, for example, R10 will be well worth the higher drive cost; that said, wink wink nudge nudge, it will quite likely also be worth the cost of high-quality drives and probably even a good RAID controller (with plenty of fast cache plus a battery/supercap).
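To make the usage-pattern point concrete, here is a hedged fio sketch contrasting the two extremes mentioned above (streaming-style sequential reads vs DB-style random writes). /dev/md0 is a placeholder, and writing to a raw device destroys its contents, so only point this at a scratch array:

```
# WARNING: these read/write the raw device directly; use a scratch array only.
# Streaming-like workload: large sequential reads
fio --name=seqread --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --iodepth=32 --runtime=60 --time_based

# DB-like workload: small random writes from several jobs
fio --name=randwrite --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=64 --numjobs=4 --group_reporting \
    --runtime=60 --time_based
```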
You are wrong. You aren't going to get a functional raid10 with two drives.
You can get a "raid10" array that acts like raid0 or raid1, but that's all you're getting.
People use this if they want to create a new array and migrate data to it, but don't have enough drive bays to do it with all drives present.
It isn't an option to recommend for real use.
https://serverfault.com/questions/139022/explain-mds-raid10-f2
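For the curious, that migration trick looks roughly like this (a sketch with placeholder device names; with the default near layout, adjacent devices form the mirror pairs, so one "missing" per pair leaves zero redundancy, exactly as described):

```
# 4-member raid10 created with one absent drive in each mirror pair --
# full capacity, but effectively raid0 until the mirrors are added
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 missing /dev/nvme1n1 missing

# After migrating data onto the array, add the real mirror drives
mdadm --add /dev/md0 /dev/nvme2n1
mdadm --add /dev/md0 /dev/nvme3n1
```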
Obviously I meant 4 drives in RAID 10.
I was purely asking whether 4 drives in RAID 10 will be significantly better than 2 drives in RAID 1 for a KVM node, since NVMe drives are pretty fast already.
As I said, ignore the cost.
Also, I was specifically asking about a software RAID setup with NVMe drives.
Not a single good response... OP actually wants to know the real-world gain with RAID 10 compared to RAID 1, i.e. does the performance gain really matter for NVMe, since NVMe is already very fast, and will he be able to hit the IOPS?
With the high IOPS of NVMe, you can use RAID 10 with a small chunk size (c. 64K) with any number of disks, and it will always be faster than RAID 1 in any workload. There are no performance reasons to choose RAID 1.
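The chunk size is set when the array is created; a hedged sketch with placeholder devices (mdadm's --chunk value is in kibibytes by default):

```
# 4-drive raid10, near layout, 64K chunk size
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=64 \
    --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```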
Thanks for saying that out loud
For anyone interested, I went ahead with NVMe RAID 10. Taking one for the team.
And here are the results of NVMe RAID 1 vs NVMe RAID 10.
RAID 1: [YABS results screenshot]
RAID 10: [YABS results screenshot]
The tests were done inside the VM, using LVM (raw).
I used the YABS disk test only. I intentionally included the YABS completion time.
I guess that explains it. I still have one more day of testing; if you want me to run any other disk-related tests, let me know.
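For anyone wanting to reproduce the numbers: YABS can be limited to the fio disk tests by skipping the iperf and Geekbench portions (flags as documented in the YABS README; double-check against the current script):

```
# -i skips the iperf network tests, -g skips Geekbench; the fio disk tests remain
curl -sL yabs.sh | bash -s -- -i -g
```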
RAID 10 with 2 drives???? Wtf????
Man, did something else get re-invented in the past few weeks that I did not know about???
Where did I say RAID 10 with 2 drives??
If you care to read above, the discussion was about 2 NVMe drives in RAID 1 vs 4 NVMe drives in RAID 10... and seeing if RAID 10 is worth it.
Was the RAID 10 using software RAID? Is the server a custom build or a branded one?
Yes, both are using software RAID.
It's a branded server: a Dell PowerEdge R7525.