
RAID with 3 Disks ?


Comments

  • raindog308 Administrator, Veteran

    reliablevps_us said: Answer to your question: setting up RAID 1 on three disks just does not make sense, since RAID 1 is basically a mirror of your data on 2 disks; if one fails, the second one will cover.

    Performance-wise you get double the read speed, and you are capped at one disk's write speed.

    Doesn't write performance go down with every disk you add to a RAID 1? It's got to write to more places before the write is complete. I don't know personally - I've only used 2-disk RAID 1.

  • reliablevps_us Member, Patron Provider

    @raindog308
    RAID 1 was specifically designed for 2 disks to be mirrored together, for redundancy and read performance.
    Adding another disk as a spare won't increase or decrease performance, as it will just be waiting for one of the 2 disks to fail so it can take its place.

    The best cost per GB for 3 disks would be RAID 5, with enough redundancy to sustain a single disk failure. For example, three 4 TB disks give about 8 TB usable in RAID 5 (the capacity of n-1 disks), versus 4 TB for a 2-disk mirror plus spare.
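
    For reference, a 3-disk RAID 5 under mdadm would be created along these lines (the md device and disk names are placeholders; adapt them to your system):

        # 3-disk RAID 5: ~2 disks' worth of usable space, survives one disk failure
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1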

  • raindog308 Administrator, Veteran
    edited August 2019

    reliablevps_us said: RAID 1 was specifically designed for 2 disks to be mirrored together, for redundancy and read performance. Adding another disk as a spare won't increase or decrease performance, as it will just be waiting for one of the 2 disks to fail so it can take its place.

    I've never done a 3-disk RAID 1, but some googling shows it can be done. You'll have a primary with 2 mirrors.

    e.g.: https://www.thegeekdiary.com/how-to-add-a-3rd-disk-to-create-a-3-way-mirror-raid1-md-device-centos-rhel-7/

    Alternatively, you could set up the 3rd disk as a hot spare.
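
    Something like this should do it with mdadm - untested by me, and the device names (/dev/md0, /dev/sdb1, etc.) are placeholders, so adapt them to your system:

        # 3-way mirror: all three disks hold a full, live copy
        mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # or grow an existing 2-disk mirror into a 3-way one (what the linked article does)
        mdadm /dev/md0 --add /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=3

        # or keep a 2-way mirror and attach the 3rd disk as a hot spare
        mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1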

  • reliablevps_us Member, Patron Provider

    raindog308 said: I've never done a 3-disk RAID 1, but some googling shows it can be done. You'll have a primary with 2 mirrors.

    e.g.: https://www.thegeekdiary.com/how-to-add-a-3rd-disk-to-create-a-3-way-mirror-raid1-md-device-centos-rhel-7/

    Alternatively, you could set up the 3rd disk as a hot spare.

    Well, that is something new; I have never tested anything like that.
    Theoretically, if you make a 3-way RAID 1, you should get 3x the read speed, while
    write speed will remain at 1x a single disk's speed, since you are writing the same data to three different disks at the same time.
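
    If you ever want to measure that, a quick fio sketch (assuming the array is /dev/md0; the parameters are just a starting point). Note that, as far as I know, Linux md RAID 1 balances reads per request, so the extra read speed mostly shows up with concurrent readers - hence numjobs=3 here:

        # parallel random reads; md RAID 1 can serve concurrent requests from different mirrors
        fio --name=mirror-read --filename=/dev/md0 --ioengine=libaio --direct=1 \
            --rw=randread --bs=64k --numjobs=3 --iodepth=8 --runtime=30 \
            --time_based --group_reporting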

    The only thing I can add to this thread: if you are using software RAID, ALWAYS back up your boot loader.
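
    One common way to cover that with GRUB on an mdadm mirror is to install the boot loader on every member disk (disk names are placeholders), so the machine can still boot no matter which disk dies:

        # put GRUB on each member disk, not just the first one
        grub-install /dev/sdb
        grub-install /dev/sdc
        grub-install /dev/sdd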

  • jsg Member, Resident Benchmarker
    edited August 2019

    @raindog308 said:
    I've grown to hate RAID5. My home file server has 6x6TB disks, and I do them as three 2-disk pairs. Costs more disk, but I got tired of RAID failures. I've also noticed that I have far fewer RAID problems period with RAID1.

    ...but I've had tons more problems with RAID 5 on mdadm than I ever had with RAID 1.

    RAID 5 (or whatever) is a spec; Linux RAID is an implementation - and sadly not exactly the best one (for RAID 5 and higher).

    • @reliablevps_us (tagging you because you seem to be interested in the matter)

    To answer the question "what is RAID XYZ?" properly, one must know and keep in mind quite a few things, including the seemingly simple question "what is writing to disk?", because there's much more going on than most people think.

    In old times (say, IDE times and before), writing to a disk meant making a syscall, which led to the OS writing out some sectors; when the disk controller was done, the OS told the application "job successfully done".
    Nowadays, with OSs running hundreds (or more) of tasks, more or less complex caching, reading/writing over a usually serial bus (PCIe), and significantly more complex hardware - for example "soft-RAID" controllers built into the bridge chips - things aren't that simple anymore, and "I'm done" can mean very little (like: it's in the cache and will eventually be written out; so use fsync(), which unfortunately isn't a full guarantee either).
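
    To make that concrete from userland (the path is a placeholder):

        # without conv=fsync, dd can exit while the data still sits only in the page cache;
        # conv=fsync makes dd call fsync() on the output file before reporting success
        dd if=/dev/urandom of=/mnt/raid/test.bin bs=1M count=16 conv=fsync

    And even then, the drive's own write cache may still hold the data - which is exactly why "I'm done" can mean very little.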

    Good and relevant example: soft-RAID. What is that, really? Well, usually it's a disk controller (be it a chip or a block within a bridge) that is capable of a slightly extended set of SATA (and/or SAS) commands.
    Explanation: "writing to disk" practically means sending (mostly "write") commands, some info (sector, etc.), and data blocks to a disk controller, which then "translates" that into e.g. SATA commands to the disk(s). One decisive factor is which commands, and what complexity, the controller is capable of. Classically, controllers were very simple, and if one wanted to write some data to 2 disks ("RAID 1"), one had to do the whole thing twice... until some engineers invented command queuing, which (usually) allows sending the 2 commands but the data only once - and that is very relevant, because usually it's the data transfers that are costly (in terms of using up bus capacity). Soft-RAID is but one child of that idea: it tells the OS (via its driver) that it can do RAID 0 and/or 1.

    RAID 5 and 6 are more complex - RAID 5 just slightly, but RAID 6 (which is basically RAID 5 plus a second parity algorithm, allowing for 2 disks to fail) is computationally quite expensive. The RAID 5 parity is just the XOR of the data blocks in a stripe (P = D1 ⊕ D2), so a lost block can be rebuilt as D1 = P ⊕ D2. But there are "soft-RAID 5" controllers out there (which I would not trust at all, in terms of both resilience and performance).
    Of course, all of that can also be put into the OS itself, at least up to RAID 5, but the price you pay is that your system gets slower - plus, and that's worse, you have something that looks like RAID but is not really, depending on what you want.

    So, to put it simply, doing RAID 1 in the OS does make sense if your requirements are modest and basically boil down to "I want a real-time, live copy of my disk, just in case it fails".
    In a professional setting (usually meaning "commercial"), however, the requirements are much higher and include very high resilience, availability, and redundancy, but often also the ability to run backups with only a very small slowdown of the disk subsystem.

    The usual approaches are (in increasing order of coolness and price) hardware RAID controllers, attached (or remote) storage, and redundant storage. The advantages of a hardware RAID controller are well known, I assume. Attached or remote storage can be seen (somewhat simplified) as the RAID controller and disks not being part of the system, which also has advantages for performance. But the king is obviously a redundant storage system - and that does not simply mean dual storage systems, but rather storage systems run on fully redundant hardware: redundant as in CPU, bus(es), memory, everything, with a failover time in the low microseconds, controlled by hardware.

    Which connects quite well with the last big player: ZFS, which is still the only major technology that offers real RAID functionality - in fact even better, because it doesn't simply think in terms of sectors but also has knowledge of e.g. inodes, metadata, etc.
    But here's the but: no matter how you turn it, ZFS, like any software solution, uses resources and eats cycles. So either buy your servers somewhat larger or - and that is probably the best solution both for home and for high-end use - run it on a dedicated storage server that is linked via a (more or less extremely) fast connection.
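
    For the 3-disk case this thread started with, the ZFS equivalents would look roughly like this (pool name "tank" and device names are placeholders):

        # raidz1: single parity, survives one disk failure (the RAID 5 analogue),
        # with per-block checksums and self-healing on top
        zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

        # or a 3-way mirror, the ZFS take on the 3-disk RAID 1 discussed above
        zpool create tank mirror /dev/sdb /dev/sdc /dev/sdd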

    TL;DR: use soft-RAID or OS RAID (e.g. mdadm) only for modest "I want a live backup of some disk" use cases, and prefer hardware solutions for any real and serious RAID needs (keeping in mind that "hardware solution" can - and often should - mean ZFS on dedicated hardware).
