RAID5 on Backup Space

FlorinMarianFlorinMarian Member, Host Rep

Hi, guys!
Because my physical server will be delivered very soon, I come with a question.
I intend to provide, among other things, backup space stored on 12 HDDs (7200 rpm each).
I know that RAID10 provides 100% fault tolerance, but it makes only 50% of the physical capacity usable.

Because all VPS HDDs will have just a 100 Mbps network connection (like Kimsufi) to make them acceptable for backup space only, I would like to know:
How dangerous is it to have RAID5 on 12 disks used ONLY for backup space?

Any feedback is welcome :smile: .

Comments

  • dfroedfroe Member, Host Rep

    RAID 5 on 12 disks? Seems to be some Romanian thing? ;) Some other guy from your country tried it in the past. You may know how it ended.

    Most likely nobody with a certain level of experience with storage would consider such a setup. The chances are just too high that you encounter two errors before or during a rebuild, at which point the whole array with all its data may die.

    You may consider simply using each disk alone (without any RAID) and distribute your backup data. If one disk fails, you only need to retransfer the data for that disk.

    If you want to build one big pool, you should add some more parity and go for RAID 6. Or, with 10+ years of experience, I would personally prefer a ZFS raidz2, which is basically similar, but the way it handles pools and filesystems, plus the ability to send incremental snapshots, is just awesome.

  • @FlorinMarian said:
    I know that RAID10 provides 100% fault tolerance

    Woah! This is incredibly misleading.

    raid10 can fail with 2 disks lost. If you lose both halves of a mirror, a big chunk of your data is gone. If the disks fail at random there’s only a 1/11 chance that your second disk failure is catastrophic, but this gets worse as you lose a 3rd, 4th disk… which does happen during recovery (what disks are you using? What is their BER?)

    I would consider a raid 6 of 12 disks (10 disks of usable capacity), but better still a raid 60 (8 disks of capacity), personally.

    The level of risk you can tolerate of course depends on many factors… but know your array failure risks :).
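The 1/11 figure above is easy to sanity-check by brute force. A quick sketch, assuming the common RAID10 layout of six two-disk mirrors striped together (your controller may pair disks differently):

```python
from itertools import combinations

# Hypothetical layout: 12 disks grouped into 6 RAID1 mirrors,
# pairs (0,1), (2,3), ..., (10,11), striped together as RAID10.
mirrors = [(2 * i, 2 * i + 1) for i in range(6)]

def survives(failed):
    # The array survives as long as no mirror loses both of its disks.
    return not any(a in failed and b in failed for a, b in mirrors)

doubles = list(combinations(range(12), 2))
fatal = [d for d in doubles if not survives(set(d))]
print(f"{len(fatal)} of {len(doubles)} double failures are fatal")  # 6 of 66
print(len(fatal) / len(doubles))  # 0.0909... == 1/11
```

Only the 6 same-mirror pairs out of 66 possible double failures are fatal, which is exactly the 1/11 chance mentioned above.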

    Thanked by 1FlorinMarian
  • FlorinMarianFlorinMarian Member, Host Rep

    @dfroe said:
    RAID 5 on 12 disks? Seems to be some Romanian thing? ;) Some other guy from your country tried it in the past. You may know how it ended.

    Most likely nobody with a certain level of experience with storage would consider such a setup. The chances are just too high that you encounter two errors before or during a rebuild, at which point the whole array with all its data may die.

    You may consider simply using each disk alone (without any RAID) and distribute your backup data. If one disk fails, you only need to retransfer the data for that disk.

    If you want to build one big pool, you should add some more parity and go for RAID 6. Or, with 10+ years of experience, I would personally prefer a ZFS raidz2, which is basically similar, but the way it handles pools and filesystems, plus the ability to send incremental snapshots, is just awesome.

    Thank you for your comment!
    At the end of the day, I'll go with RAID10; it gives a smaller amount of space but is absolutely secure.

    Best regards, Florin.

  • @FlorinMarian said: At the end of the day, I'll go with RAID10; it gives a smaller amount of space but is absolutely secure.

    hold your horses and read @tehdan's advice. raid10 isn't necessarily better/safer for this number of disks and what you want to achieve. try to read and understand more about the general topic of raid levels and implementations before you rush into any decision.

    Thanked by 2FlorinMarian tehdan
  • MeAtExampleDotComMeAtExampleDotCom Member
    edited October 2021

    @FlorinMarian said:
    How dangerous is it to have RAID5 on 12 disks used ONLY for backup space?

    You'll find RAID5 rebuild times much longer than RAID10's, so the main danger is exactly that: the risk that another drive fails while the rebuild is happening. With that many drives I'd use RAID6 instead; then any two drives can be buggered at the same time and the array will survive. Rebuild times for R6 are similar to R5.

    If the RAID10 is set up as an R0 stripeset over a set of R1 pairs, then some combinations of double failure will take out the whole array but others won't (in theory, with your 12 drives, up to 6 could fail at once and you'd be fine, if they are all from different pairs). Check your controller's documentation though; it might distribute data differently to eke a little extra performance out of some IO patterns. With R6 any two drives can fail, but a third at the same time will definitely kill the array; with R5 any two at once will kill everything.
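Those survivability claims can be checked by enumerating failure sets. A sketch under the assumption of six fixed R1 pairs striped by R0 (real controllers may lay data out differently):

```python
from itertools import combinations

# Assumed RAID10 layout: 6 mirror pairs (0,1), (2,3), ..., (10,11).
mirrors = [(2 * i, 2 * i + 1) for i in range(6)]

def raid10_survives(failed):
    # The array is dead only if some mirror has lost both members.
    return not any(a in failed and b in failed for a, b in mirrors)

# One disk from each pair (6 disks!) can fail and the array lives...
assert raid10_survives({0, 2, 4, 6, 8, 10})
# ...but any 7 failed disks must wipe out at least one whole mirror.
assert all(not raid10_survives(set(f)) for f in combinations(range(12), 7))

# RAID6: any 2 simultaneous failures are survivable, any 3 are fatal.
def raid6_survives(failed):
    return len(failed) <= 2

assert all(raid6_survives(set(f)) for f in combinations(range(12), 2))
assert not any(raid6_survives(set(f)) for f in combinations(range(12), 3))
```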

    You could consider R50 if your controller supports it, or R0 over four R5 arrays, set up more manually, if not. That way when a drive fails you only have to rebuild from the two other drives in its group, which, depending on your controller/software and other IO patterns, may be faster; but I'd still prefer R10.

    As another aside: general write performance can be significantly worse for R5 or R6 compared to R10, though this might not be much of a concern for backups from external sources, since you usually won't need to write faster than your network link (unless you have 10Gbps or better at the backup site and the other sites might shovel data at that sort of rate in total).
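To put rough numbers on that write penalty: a small random write costs about 2 back-end IOs on R10, 4 on R5 (read data + read parity, write data + write parity) and 6 on R6. A back-of-envelope sketch, assuming ~75 random IOPS per 7200 rpm disk (an assumption, not a measurement):

```python
DISKS, PER_DISK_IOPS = 12, 75  # assumed per-disk random IOPS for 7200rpm HDDs

# Classic small-write penalties: back-end IOs needed per front-end write.
write_penalty = {"raid10": 2, "raid5": 4, "raid6": 6}

for level, penalty in write_penalty.items():
    effective = DISKS * PER_DISK_IOPS // penalty
    print(f"{level}: ~{effective} random write IOPS")
# raid10: ~450, raid5: ~225, raid6: ~150
```

Sequential backup streams are far kinder than this worst case, which is why R5/R6 is often tolerable for backup targets.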

    As dfroe suggested, depending on your OS do give ZFS some consideration as that has other benefits (checksums for better error detection, support for advanced caching options if you shove a smaller faster drive or two in there too, snapshots, easier array growth, ...).

    Thanked by 1FlorinMarian
  • FalzoFalzo Member
    edited October 2021

    @MeAtExampleDotCom said: As dfroe suggested, depending on your OS do give ZFS some consideration as that has other benefits (checksums for better error detection, support for advanced caching options if you shove a smaller faster drive or two in there too, snapshots, easier array growth, ...).

    this. add a small ssd as a caching device if possible.
    and if you want more redundancy there is even raid-z3, buying in on another parity disk. it still offers 1.5x the space of regular raid-10.
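The space comparison works out as follows, assuming 3 TB drives and ignoring filesystem/metadata overhead (a quick sketch of raw usable capacity per layout):

```python
DISKS, SIZE_TB = 12, 3  # 12 drives of 3 TB each (assumed)

# Usable data disks per layout, after mirror/parity overhead.
usable_disks = {
    "raid10": DISKS // 2,   # mirrors eat half
    "raid5":  DISKS - 1,    # 1 disk's worth of parity
    "raid6":  DISKS - 2,    # 2 disks' worth of parity
    "raidz3": DISKS - 3,    # 3 disks' worth of parity
}

for level, n in usable_disks.items():
    print(f"{level}: {n * SIZE_TB} TB usable")
# raidz3 gives 27 TB vs 18 TB for raid10: 1.5x the space
```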

    edit: oh, and think about disaster recovery strategies. what exactly do you plan on doing in case one disk fails? just replace it and rebuild/scrub? or rather back up all data to somewhere else first (would you manage to get space for that?) before you start anything else, etc.

    also think about the time involved, depending on the size of the whole array. what happens if you need to fsck all of it? can your clients handle the downtime?

    Thanked by 1FlorinMarian
  • FlorinMarianFlorinMarian Member, Host Rep

    Thank you @MeAtExampleDotCom and @Falzo .
    Somehow, you opened my eyes.
    About disaster data recovery, it shouldn't hurt so badly because all drives are 3TB (so, only 36TB per server, non-RAID).
    My server has been in the custody of customs for 3 days now, and while they keep it there, I have time to think about it.

    About the controller, it is an LSI SAS 9305-16i HBA (Full Height PCIe-x8 SAS Controller), if someone can say anything about it.

    Best regards, Florin.

  • AlexBarakovAlexBarakov Patron Provider, Veteran

    @FlorinMarian said: About the controller, it is an LSI SAS 9305-16i HBA (Full Height PCIe-x8 SAS Controller), if someone can say anything about it.

    Sure. That's an HBA, not a RAID controller. The 9361, for example, is a RAID controller.

    Thanked by 1FlorinMarian
  • FlorinMarianFlorinMarian Member, Host Rep
    edited October 2021

    @AlexBarakov said:

    @FlorinMarian said: About the controller, it is an LSI SAS 9305-16i HBA (Full Height PCIe-x8 SAS Controller), if someone can say anything about it.

    Sure. That's an HBA, not a RAID controller. The 9361, for example, is a RAID controller.

    I had a look after replying here.
    So it will probably be software RAID, cached with RAID1 SSDs.

    Best regards, Florin.

    SMALL UPDATE: Drives Are Hitachi (HUS723030ALS640)

  • jugganutsjugganuts Member
    edited October 2021

    If you're doing HBA cards, there's no reason not to do ZFS imo.

    Thanked by 1FlorinMarian