SSD RAID1 vs RAID10

Umair Member

Hello,

I am looking to get a second dedicated server and have a few questions about which config to go with. I have not decided on a provider yet. (Server specs will most likely be an E5 with 64GB RAM.)

I need suggestions on which disk config to go with:
2 x 500GB SSDs in soft RAID 1
4 x 500GB SSDs in soft RAID 10

Or do you think one should get a RAID card with RAID 10?

I want to keep the overall cost within my budget. Frankly speaking, with most providers, getting 4 SSDs in soft RAID 10 pushes my budget to its limits.

The server will be hosting 10-15 VPSes.


Comments

  • RAID 10, you can go with 4 x 250 GB if 500 GB is pushing your budget to its limits. And a RAID card is recommended but not a necessity.

  • rds100 Member

    I wouldn't do a software RAID10 of 4 SSDs, there would be no performance gain over software RAID1 with two SSDs due to SATA throughput limitations. And RAID1 would be much more reliable and much easier to recover from a failure.
    So either do a hardware RAID10 (with a good card), or software RAID1.
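For reference, both software layouts are a single mdadm command; a sketch with placeholder device names (substitute your actual SSDs):

```shell
# Software RAID 1 across two SSDs:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Software RAID 10 across four SSDs (near-2 layout, the md default,
# i.e. two mirrored pairs striped together):
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Either way, watch array health and any rebuild progress with:
cat /proc/mdstat
```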

    Thanked by Umair, alexvolk, Lee
  • Firstheberg Member
    edited June 2015

    Hi,
    in some cases we rent our customers servers with 4 Samsung SSDs (120 GB to 1 TB) in software RAID 0, 1, 5 or 10.
    I think, if you don't have a big budget, you should choose a dedicated server with 2 disks, because it will be more expensive to rent a dedi with 4 disks than with 2, even if you get the same storage.

    If you want an offer on an R410 with 2x L5640 and 64 GB RAM, you can PM me.

    Damien.

  • Clouvider Member, Patron Provider
    edited June 2015

    RAID 1 and RAID 10 are fine.

    RAID 5 and RAID 6 should be avoided at all costs as writing checksums has a terrible effect on the SSDs lifespan (many, many, MANY write cycles).

    Software RAID on most of the systems will give you the TRIM feature.
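To illustrate why parity RAID multiplies write cycles, here are the textbook I/O counts for one small random write at each level (a worst-case sketch; real controllers and md can batch full-stripe writes and do better):

```python
# Classic read-modify-write accounting for one small random write.
def ios_per_small_write(level: str) -> int:
    costs = {
        "raid1": 2,   # write the same block to both mirrors
        "raid10": 2,  # write the block to both disks of one mirrored pair
        "raid5": 4,   # read old data, read old parity, write data, write parity
        "raid6": 6,   # as RAID 5, plus a second parity block to read and write
    }
    return costs[level]

for level in ("raid1", "raid10", "raid5", "raid6"):
    print(level, ios_per_small_write(level))
```

The point is that RAID 5/6 turns every small write into extra parity reads and writes, which is exactly what burns through SSD write cycles.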

    Thanked by Umair
  • Umair Member

    @rds100 said:
    I wouldn't do a software RAID10 of 4 SSDs, there would be no performance gain over software RAID1 with two SSDs due to SATA throughput limitations. And RAID1 would be much more reliable and much easier to recover from a failure.
    So either do a hardware RAID10 (with a good card), or software RAID1.

    That is exactly what is going on in my mind. I mean, with good SSDs (enterprise / DC level) I guess soft RAID 1 should be good enough.

    Soft RAID 10 with 4 SSDs 'might' give a little more performance, but it ain't worth the hassle. And with HW RAID 10, obviously it's going to cost a lot more.

  • Clouvider Member, Patron Provider

    Umair said: Soft RAID 10 with 4 SSDs 'might' give a lil more performance but ain't worth the hassle

    More space and more IOPS, since the load can be spread across 4 drives.

  • Umair Member

    @Clouvider said:
    RAID 1 and RAID 10 are fine.

    RAID 5 and RAID 6 should be avoided at all costs as writing checksums has a terrible effect on the SSDs lifespan (many, many, MANY write cycles).

    Software RAID on most of the systems will give you the TRIM feature.

    Well, which one would you go for between RAID 1 and RAID 10, and why?
    (I'm not even considering RAID 5/6.)

  • Would 4 x 250GB SSDs leave you with enough budget to get a decent RAID card for RAID 10? That'd still give you the same 500GB usable capacity as the RAID 1 option.

    Generally for a small node with a small number of VPS, SSDs in RAID 1 should be fine.

    Thanked by Umair
  • Clouvider Member, Patron Provider
    edited June 2015

    @Umair I'd personally go for RAID 10, even if that's software, especially if you are not going to use DC series drives. The more resiliency the better, plus it gives you additional performance.

    If your SSD has issues working without TRIM (for example, some 840 EVOs used to have this problem), HW RAID might not be an option.

    Thanked by Umair
  • Shoaib_A Member
    edited June 2015

    @rds100 said:
    I wouldn't do a software RAID10 of 4 SSDs, there would be no performance gain over software RAID1 with two SSDs due to SATA throughput limitations. And RAID1 would be much more reliable and much easier to recover from a failure.
    So either do a hardware RAID10 (with a good card), or software RAID1.

    If you have a good disaster/failure recovery plan and know what you are doing, I would recommend software RAID 10 over RAID 1. Disk failures do happen, but that does not mean you should stop using soft RAID 10. And since OP is going to be hosting VPSes, keep in mind the degradation in performance during a soft RAID 1 rebuild compared to that of RAID 10. So considering what OP is going to use his server for, I have to say RAID 10 will be the better option for him.

    Thanked by Clouvider, Umair
  • sepei Member

    How would a good hardware RAID controller change the performance?
    I mean, normally all cached RAID controllers should be slower because of the extra step, and often no faster than the SSDs themselves, or am I wrong about this?
    What do you consider a "good" hardware RAID card?
    Also, shouldn't the onboard Intel controller (if available) be faster than a HW RAID card?

  • Umair Member

    @WSCallum said:
    Would 4 x 250GB SSDs leave you with enough budget to get a decent RAID card for RAID 10? That'd still give you the same 500GB usable capacity as the RAID 1 option.

    Generally for a small node with a small number of VPS, SSDs in RAID 1 should be fine.

    Well, the only reason I was not looking into smaller SSDs is that they also tend to have lower read/write speeds and IOPS. (Like the Intel DC 3700 series.)

    @Clouvider said:
    Umair I'd personally go for RAID 10, even if that's software, especially if you are not going to use DC series drives. The more resiliency the better, plus it gives you additional performance.

    If your SSD has issues working without TRIM (for example, some 840 EVOs used to have this problem), HW RAID might not be an option.

    I said above that I am only going to use DC/enterprise series drives. (I am not sure which ones right now as it depends on the provider, but I won't go for low-quality SSDs.)

    @Shoaib_A said:
    If you have a good disaster/failure recovery plan and know what you are doing, I would recommend software RAID 10 over RAID 1. Disk failures do happen, but that does not mean you should stop using soft RAID 10. And since OP is going to be hosting VPSes, keep in mind the degradation in performance during a soft RAID 1 rebuild compared to that of RAID 10. So considering what OP is going to use his server for, I have to say RAID 10 will be the better option for him.

    Well yeah, I also think RAID 10 will be the better option. I guess if I get a good quote from the provider I will go with HW RAID, or else I might just go with soft RAID.

  • sc754 Member

    I really don't see the point of having SSDs in RAID 10; surely they're fast enough in RAID 1?

  • Clouvider Member, Patron Provider

    @sc754 More performance and reliability; you can have up to 2 disks dead before the array is gone for good. It also takes less time to rebuild the array during recovery.

    Thanked by sc754
  • Umair said: Well, the only reason I was not looking into smaller SSDs is that they also tend to have lower read/write speeds and IOPS. (Like the Intel DC 3700 series.)

    IOPS you almost surely won't make use of. While they exist, applications requiring tens of thousands of IOPS are few and far between. IOPS requirements haven't jumped 100x since SSDs came out.

    Thanked by linuxthefish
  • Microlinux Member
    edited June 2015

    Clouvider said: @sc754 more performance and reliability you can have up to 2 disks dead

    To clarify, if the wrong two disks fail, you're screwed. Statistically, with the reliability of good quality SSDs, I doubt RAID 10 is really much "safer". Performance-wise, it's almost surely performance you'll never need.

  • sc754 Member
    edited June 2015

    @Microlinux said:

    I was thinking, given the longer lifetime of an SSD, it seems very, very improbable that two disks would fail at once in the same server. Exactly, the performance of an SSD is already crazy high compared to traditional drives.

  • jbiloh Administrator, Veteran

    Depends on your budget but we always recommend customers go for RAID 10 with LSI controllers. The 9271 is a beast.

    Thanked by Umair
  • If you don't have a huge budget, shoot us a ticket at https://w3hostingservices.com/submitticket.php and we can see if we can get you 4 disks within your pricing range. Don't forget to include your specs. We can also give you alternative quotes within your price range for you to consider.

  • @Microlinux said:
    To clarify, if the wrong two disks fail, you're screwed. Statistically, with the reliability of good quality SSDs, I doubt RAID 10 is really much "safer". Performance-wise, it's almost surely performance you'll never need.

    What if both disks fail simultaneously in RAID 1? To be honest, you should always have a backup & disaster management strategy regardless of whatever RAID level you are using on your servers.

  • perennate Member, Host Rep

    vpslegend said: What if both disks fail simultaneously in RAID 1? To be honest, you should always have a backup & disaster management strategy regardless of whatever RAID level you are using on your servers.

    What if your backup fails at the same time?

    Thanked by vpslegend, howardsl2
  • @perennate said:
    What if your backup fails at the same time?

    Keep a backup of your backup, and a backup of your backup's backup as well :D

  • BuyAds Member

    Two disks failing at the same time might happen once in 100 years :)

  • @vpslegend said:
    What if both disks fail simultaneously in RAID 1?

    The same thing you do when any other RAID array fails: restore from backup.

  • @sc754 said:
    it seems very very very improbable that two disks would fail at once in the same server.

    Right, I mean certainly it's possible - but generally speaking, if you have two disks fail at the same time, you probably have an environmental issue that could affect any number of disks.

  • vpslegend Member
    edited June 2015

    @Microlinux said:
    The same thing you do when any other RAID array fails, restore from backup.

    So then RAID 10 still has a lower chance of failure. With RAID 1, when 2 disks fail you lose all data and have to restore from backup, whereas with RAID 10 the 2 failed disks may or may not be from the same mirrored pair. Hence RAID 10 is 50% safer than RAID 1 when considering simultaneous failure of 2 disks.

  • perennate Member, Host Rep

    vpslegend said: So then RAID 10 still has a lower chance of failure. With RAID 1, when 2 disks fail you lose all data and have to restore from backup, whereas with RAID 10 the 2 failed disks may or may not be from the same mirrored pair. Hence RAID 10 is 50% safer than RAID 1 when considering simultaneous failure of 2 disks.

    You're forgetting that you have four disks, so there's a higher probability of a disk failing.
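Both sides of this can be checked with a quick enumeration; a sketch assuming RAID 10 is two mirrored pairs, failures are independent, and an illustrative (not measured) 2% annual per-disk failure probability:

```python
from fractions import Fraction
from itertools import combinations

# RAID 10 as two mirrored pairs: (a1, a2) striped with (b1, b2).
disks = ["a1", "a2", "b1", "b2"]
pairs = [{"a1", "a2"}, {"b1", "b2"}]

# Given exactly two simultaneous failures, the array dies only if both
# failures hit the same mirrored pair.
doubles = list(combinations(disks, 2))           # 6 possible pairs of failures
fatal = [d for d in doubles if set(d) in pairs]  # 2 of them kill the array
print(Fraction(len(doubles) - len(fatal), len(doubles)))  # 2/3 survive

# Absolute yearly data-loss odds with per-disk failure probability p
# (independent failures, ignoring rebuild windows):
p = 0.02
raid1_loss = p ** 2                  # the single mirror loses both disks
raid10_loss = 1 - (1 - p ** 2) ** 2  # either mirrored pair loses both disks
print(raid1_loss, raid10_loss)       # RAID 10's loss odds are roughly 2x RAID 1's
```

Under these assumptions RAID 10 survives two thirds of simultaneous double failures (vpslegend's point), but its absolute chance of a fatal double failure is roughly twice RAID 1's, because there are more disks that can fail (perennate's point).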

  • Microlinux Member
    edited June 2015

    @vpslegend said: Hence RAID 10 is 50% more safe than RAID 1 when considering simultaneous failure of 2 disks.

    50% sounds like a big happy number until you look at it as relating to an event that probably happens fractions of a percent of the time.

    Theoretically, RAID 10 is safer, sure. In the real, random world, given high-quality hardware, you have to decide if the additional complexity (mostly with software RAID) and cost are worth the slightly increased probability of surviving a very unlikely event.

    Pre-SSD, I gave RAID 10 a little more merit.

    Thanked by Umair
  • Clouvider Member, Patron Provider
    edited June 2015

    @Microlinux I said up to 2 disks, depending on which disks fail.

    Also, RAID 10 rebuilds faster, and the process affects the array less (under average workloads you won't notice that a rebuild is taking place).

    @perennate It's actually less. You have less load per drive, hence the average lifespan of the entire array is longer and the failure of one drive has less effect on the array.

    Thanked by Umair
  • perennate Member, Host Rep

    Clouvider said: @perennate. It's actually less. You have less load per drive, hence the average lifespan of the entire array is longer and failure of one drive has less effect on the array.

    Sure. I was merely pointing out the undeniable incorrectness of @vpslegend's statement.
