Comments
Depends on where you are coming from, i.e. what your base system looks like, but in most cases yes, growing is possible, maybe even online.
As a last resort you could build another RAID 0 array with 2 additional disks and configure the 2 RAID 0 arrays in RAID 1. This would be pretty easy, though I'm not sure about performance.
That would be RAID0+1 rather than RAID1+0 / RAID10 - a perfectly valid nested solution, but there are differences that make R10 preferred. The speed of upgrading from 0 to 0+1 compared to reshaping R0→R10 might negate these benefits from mikewazar's point of view.
In theory 1+0 can perform better for reading because the striping is done at the outer level, though that depends on controller and your IO patterns - I suspect the difference is minimal or not even detectable in the vast majority of cases.
Resilience of 0+1 is lower; that is the main reason 1+0 is better. With a stripe set of mirrors (1+0) the array survives 4 of the 6 possible two-drives-fail-at-the-same-time combinations, whereas a mirror built of stripe sets (0+1) only survives the other 2, making R10 statistically safer. How much this matters depends on what other protections you have in place (if any), any uptime, recovery-time or other guarantees you need to meet, and your level of concern about multi-drive failures.
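To make the 4-of-6 vs 2-of-6 point concrete, assuming the same four drives A-D are grouped as (A,B) and (C,D) in both layouts, the six possible two-drive failures work out like this:

```
{A,B} or {C,D}                -> 1+0 loses a whole mirror and dies;
                                 0+1 loses one stripe set but the other copy survives
{A,C}, {A,D}, {B,C} or {B,D}  -> 1+0 still has one half of each mirror and survives;
                                 0+1 has both stripe sets broken and dies
```

So 1+0 survives 4 of the 6 combinations, 0+1 only 2.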
I had a scan around out of interest, and growing from 0 to 10 might actually be easy given various similar things that came up (I didn't find an explicit 0->10 example). For instance https://superuser.com/a/1532270/4129 talks about going from R10 to R5 via R0. Assuming your current R0 is the device md1 and built using sda1 & sdb1 and your new parts are sdc1 & sdd1, something like:
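(untested sketch only - the flags are standard mdadm ones, but check the GROW MODE section of mdadm(8) on your version before relying on this combination)

```
# add the two new partitions and reshape the 2-disk RAID0 into a 4-disk RAID10
# sketch only - exact invocation can differ between mdadm/kernel versions
mdadm /dev/md1 --grow --level=10 --raid-devices=4 --add /dev/sdc1 /dev/sdd1
# keep an eye on the reshape/resync
cat /proc/mdstat
```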
might well do the trick. I recommend you make sure your backups are in a good state before proceeding with anything like this, and perhaps test the process elsewhere first (maybe install Linux on a VM with 2 small vdisks in RAID0, then add two more and try the grow-to-RAID10 method).
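For the VM test, something along these lines would give you a throwaway starting point (device names are only examples - use whatever your VM presents):

```
# build a small 2-disk RAID0 to practise on, then put a filesystem or LVM PV
# plus some dummy data on top of it before trying the conversion
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
```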
Confirmed. Worked smoothly on a simple Debian install. Be careful, as adding the new drives may change existing /dev/sd* designations (on my VM the new drives were sdb & sdd, moving the old sdb to sdc):
Before adding new drives:
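```
# illustrative, abridged /proc/mdstat - the old RAID0 is just sda1 + sdb1
# (the system RAID1 arrays are omitted here)
md1 : active raid0 sdb1[1] sda1[0]
```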
After adding new drives:
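```
# illustrative, abridged - note the old sdb1 now appears as sdc1; the brand-new
# drives (sdb & sdd) don't show up yet as they haven't been partitioned or added
md1 : active raid0 sdc1[1] sda1[0]
```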
Off we go (you'll need to partition the new drives first, of course):
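```
# along these lines (illustrative - adjust device names to your system;
# here the new drives are sdb & sdd as noted above)
sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the existing partition layout
sfdisk -d /dev/sda | sfdisk /dev/sdd    # onto each new drive
mdadm /dev/md1 --grow --level=10 --raid-devices=4 --add /dev/sdb1 /dev/sdd1
watch cat /proc/mdstat                  # follow the reshape/rebuild
```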
Once complete:
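```
# illustrative, abridged - all four partitions now active in the RAID10
md1 : active raid10 sdd1[3] sdb1[2] sdc1[1] sda1[0]
```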
Data on a filesystem, in an LV, in a VG using md1 as its only PV, was perfectly usable all the way through, other than being a little slower due to IO contention with the rebuild. Though as noted above: do refresh and retest your backups before doing this, just in case!
If you grow to a larger size (perhaps moving to R5 instead of R10) you'll need to pvresize and so forth to make use of the newly available space. Of course you don't need to worry about that when going from R0 to R10, as you are only gaining redundancy and not extra space.
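For that larger-array case the LVM side would be roughly as follows (illustrative - the VG/LV names are just examples):

```
pvresize /dev/md1                      # let LVM see the bigger PV
lvextend -l +100%FREE /dev/vg0/data    # grow the LV into the new space (example names)
resize2fs /dev/vg0/data                # grow the filesystem; use the tool matching your FS
```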
If, like this sample setup, you have your system parts (root, boot, swap) on RAID1 on the same drives as the old RAID0 array, consider adding the new drives to that array too and making sure the bootloader is installed on them all.
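Something like this, assuming md0 is the system RAID1 and the matching partitions on the new drives are sdb2 & sdd2 (all example names - adjust to your layout):

```
mdadm /dev/md0 --add /dev/sdb2 /dev/sdd2          # join the new partitions to the mirror
mdadm /dev/md0 --grow --raid-devices=4            # make it a 4-way mirror rather than leaving spares
grub-install /dev/sdb && grub-install /dev/sdd    # so the machine can boot from any drive
```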