
Software RAID0 to RAID10 possible without reinstall?

mikewazar Member
edited March 2022 in Help

Hi all

Is it possible to grow the existing RAID0 array into a RAID10 by attaching new disks, or do I have to go the reinstallation route? This is Proxmox, btw.

Comments

  • Falzo Member

    depends on where you are coming from, aka what your base system looks like, but in most cases yes, growing is possible, maybe even online.
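
    If you're not sure exactly what you're starting from, something along these lines should show the current layout before deciding (md1 is just borrowed from the examples further down - substitute your own array name):

    cat /proc/mdstat
    mdadm --detail /dev/md1
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT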

    Thanked by 1: mikewazar
  • Shazan Member, Host Rep

    As a last resort you could build another RAID 0 array with 2 additional disks and configure the 2 RAID 0 arrays in RAID 1. This would be pretty easy, not sure about performance.
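
    For illustration, that would look roughly like this with mdadm (device and array names are made up, and note the second command writes fresh metadata - it is not an in-place conversion of an md1 that already holds data):

    # second stripe set on the two new disks
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # mirror the two stripe sets; on a live system you would build this
    # mirror fresh and copy the data over rather than reuse md1 directly
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/md1 /dev/md2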

  • edited March 2022

    @Shazan said: 2 RAID 0 arrays in RAID 1

    That would be RAID0+1 rather than RAID1+0 / RAID10 - a perfectly valid nested solution, but there are differences that make R10 the preferred choice. That said, the relative ease of going from 0 to 0+1 compared with reshaping R0→R10 might outweigh those benefits from mikewazar's point of view.

    @Shazan said: This would be pretty easy, not sure about performance.

    In theory 1+0 can perform better for reads because the striping is done at the outer level, though that depends on the controller and your IO patterns - I suspect the difference is minimal, or not even detectable, in the vast majority of cases.

    Resilience of 0+1 is lower, and that is the main reason 1+0 is better. With four drives there are 6 possible two-drives-fail-at-the-same-time combinations: a stripe set of mirrors (R10) only dies when both halves of one mirror go, so it survives 4 of the 6, while a mirror of stripe sets (0+1) only survives the 2 cases where both failed drives sit in the same stripe set, making R10 statistically safer. How much this matters depends on what other protections you have in place (if any), any uptime, recovery-time and other guarantees you need to meet, and your level of concern about multi-drive failures.

  • I had a scan around out of interest, and growing from 0 to 10 might actually be easy, judging by the various similar reshapes that came up (I didn't find an explicit 0->10 example though). For instance https://superuser.com/a/1532270/4129 talks about going from R10 to R5 via R0. Assuming your current R0 is the device md1, built from sda1 & sdb1, and your new partitions are sdc1 & sdd1, something like:

    mdadm --grow /dev/md1 --level=10 --raid-devices=4 --add /dev/sdc1 /dev/sdd1
    

    might well do the trick. I recommend making sure your backups are in a good state before proceeding with anything like this, and perhaps testing the process elsewhere first (e.g. install Linux in a VM with 2 small vdisks in RAID0, then add two more and try the grow-to-RAID10 method).
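
    If you'd rather not spin up a VM, loop devices are enough to rehearse the reshape itself - a rough sketch, with arbitrary file/loop/md names:

    # four 1 GiB scratch files, each attached to a free loop device
    for i in 0 1 2 3; do
        truncate -s 1G /tmp/raidtest$i.img
        losetup --find --show /tmp/raidtest$i.img   # prints the loop device it picked
    done

    # start with a 2-disk RAID0 and put some data on it
    # (substitute the loop names losetup printed)
    mdadm --create /dev/md9 --level=0 --raid-devices=2 /dev/loop0 /dev/loop1
    mkfs.ext4 /dev/md9 && mount /dev/md9 /mnt && cp -a /etc /mnt/

    # same grow as above, then keep an eye on the rebuild
    mdadm --grow /dev/md9 --level=10 --raid-devices=4 --add /dev/loop2 /dev/loop3
    cat /proc/mdstat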

  • edited March 2022

    Confirmed. Worked smoothly on a simple Debian install. Be careful as adding the new drives might change any /dev/sd* designations (on my VM the new drives were sdb & sdd, moving the old sdb to sdc):

    Before adding new drives:

    root@rtest:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb1[0] sda1[1]
          7805952 blocks super 1.2 [2/2] [UU]
    md1 : active raid0 sdb2[0] sda2[1]
          50747392 blocks super 1.2 512k chunks
    unused devices: <none>
    

    After adding new drives:

    root@rtest:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc1[0] sda1[1]
          7805952 blocks super 1.2 [2/2] [UU]
    md1 : active raid0 sdc2[0] sda2[1]
          50747392 blocks super 1.2 512k chunks
    unused devices: <none>
    

    Off we go (you'll need to partition the new drives first, of course):

    root@rtest:~# mdadm --grow /dev/md1 --level=10 --raid-devices=4 --add /dev/sdb2 /dev/sdd2
    mdadm: level of /dev/md1 changed to raid10
    mdadm: added /dev/sdb2
    mdadm: added /dev/sdd2
    root@rtest:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda1[1] sdc1[0]
          7805952 blocks super 1.2 [2/2] [UU]
    md1 : active raid10 sdd2[5] sdb2[4] sda2[1] sdc2[0]
          50747392 blocks super 1.2 512K chunks 2 near-copies [4/2] [U_U_]
          [===>.................]  recovery = 17.3% (4411136/25373696) finish=1.6min speed=210054K/sec
    unused devices: <none>
    

    Once complete:

    root@rtest:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda1[1] sdc1[0]
          7805952 blocks super 1.2 [2/2] [UU]
    md1 : active raid10 sdd2[5] sdb2[4] sda2[1] sdc2[0]
          50747392 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
    unused devices: <none>
    

    Data on a filesystem in an LV in a VG using md1 as its only PV was perfectly usable all the way through, just a little slower due to IO contention with the rebuild. Though as noted above: do refresh and retest your backups before doing this, just in case!

    If you grow to a larger size (perhaps moving to R5 instead of R10) you'll need to pvresize and so on to make use of the newly available space. You don't need to worry about that when going from R0 to R10, of course, as you are only gaining redundancy, not extra space.
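
    For reference, growing into new space would look something like this (the VG/LV names here are just placeholders):

    pvresize /dev/md1                    # let LVM see the bigger PV
    lvextend -l +100%FREE /dev/vg0/data
    resize2fs /dev/vg0/data              # or the matching grow tool for your filesystem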

    If, like this sample setup, you have your system partitions (root, boot, swap) on RAID1 on the same drives as the old RAID0 array, consider adding the new drives to that array too and making sure the bootloader is installed on all of them.
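
    Roughly, using this test VM's device names and assuming a BIOS/GRUB setup (a UEFI or proxmox-boot-tool install needs its own ESP handling, and for GPT you may prefer sgdisk for the copy):

    # if a new drive isn't partitioned yet, copy the layout from an old one
    sfdisk -d /dev/sda | sfdisk /dev/sdd
    # grow the RAID1 so the new drives' first partitions become full members
    mdadm --grow /dev/md0 --raid-devices=4 --add /dev/sdb1 /dev/sdd1
    # make sure each drive can boot on its own
    grub-install /dev/sdb
    grub-install /dev/sdd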
