RAID 1 VS RAID 5

vovler Member
edited October 2017 in Help

I'm getting OVH's SP-64 for the 'winter' shared hosting I'm creating.
(All jokes aside, I'm calling it 'winter' just for the gags.)

I can't decide which is better:

  • Soft RAID 1 w/ 2x 1.2TB NVMe
  • Hard RAID 5 + FastPath w/ 3x 960GB SSDs

Software RAID will decrease the IO of the NVMe drives.

FastPath will increase the IO of the SSDs.

In case of power loss, hardware RAID with a battery is safer.
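
For what it's worth, here's the kind of quick-and-dirty test I plan to run on whichever box I end up with before putting customers on it (just a sketch - /mnt/test is a made-up path, and the fio numbers matter more for hosting than the dd one):

  # sequential write, bypassing the page cache
  dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=10000 oflag=direct
  # random 4k reads, closer to what shared hosting actually does (needs the fio package)
  fio --name=randread --filename=/mnt/test/fiotest --rw=randread --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --size=2G --runtime=30 --group_reporting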

Which one would you choose?

Poll
  Which one would you choose? (62 votes)
    1. Soft RAID 1 w/ 2x 1.2TB NVMe - 41.94%
    2. Hard RAID 5 + FastPath w/ 3x 960GB SSDs - 58.06%

Comments

  • oneilonline Member, Host Rep

    Was there a question?

  • @oneilonline said:
    Was there a question?

    Edited and added a poll

  • GTHost Member, Patron Provider

    For SSDs, software RAID 1 is much better than hardware RAID 5.

  • WSS Member

    Soft RAID 0, obviously.

  • @GTHost said:
    For SSDs, software RAID 1 is much better than hardware RAID 5.

    Would you be so kind as to explain why?

  • GTHost Member, Patron Provider

    Do you know which RAID controller OVH will use, and how fast that controller is? Most RAID controllers are designed for HDDs, not for fast SSDs.

  • @WSS said:
    Soft RAID 0, obviously.

  • @GTHost said:
    Do you know which RAID controller OVH will use, and how fast that controller is? Most RAID controllers are designed for HDDs, not for fast SSDs.

    "LSI Megaraid 9271-4i"

  • GTHost Member, Patron Provider
    edited October 2017

    OVH installs not-very-reliable SSDs. The Intel SSD DC P3520 1.2TB is rated at only 0.3 DWPD. That's the endurance of a regular desktop SSD.

    Do you know which 960GB SSD it is?

    We install the Micron 5100 MAX - 5 DWPD. That's a real enterprise SSD.

  • vovler Member
    edited October 2017

    @GTHost said:
    OVH installs not-very-reliable SSDs. The Intel SSD DC P3520 1.2TB is rated at only 0.3 DWPD. That's the endurance of a regular desktop SSD.

    Do you know which 960GB SSD it is?

    We install the Micron 5100 MAX - 5 DWPD. That's a real enterprise SSD.

    "Disk 960GB SSD - Datacenter - Intel - S3xxx (0.3 DWPD min)"

    Unfortunately you don't provide the same prices as OVH does :|
    (Although it's nearly impossible to compete with their prices)

  • GTHost Member, Patron Provider

    yes, we don't use low-cost hardware

  • GTHost Member, Patron Provider
    edited October 2017

    Our price: E3-1270v6, 64GB, 2x 960GB Micron 5100 MAX (5 DWPD), IPMI, 200M unmetered - $179/mo.
    The OVH offer with 2x 1.2TB is $152/mo.

    We offer real server hardware.

  • MikeA Member, Patron Provider

    @GTHost said:
    Our price: E3-1270v6, 64GB, 2x 960GB Micron 5100 MAX (5 DWPD), IPMI, 200M unmetered - $179/mo.
    The OVH offer with 2x 1.2TB is $152/mo.

    We offer real server hardware.

    Nice!

  • @GTHost Don't be a dick.

  • oneilonline Member, Host Rep

    @vovler - I thought you already answered your question?

    @gthost - I thought this was a "Help" section, not an offer section?

    @vovler - What kind of shared hosting? Shared web hosting? VPS hosting?

  • @oneilonline said:
    @vovler - What kind of shared hosting? Shared web hosting? VPS hosting?

    Shared web hosting

  • Don't use RAID 5. It's been said before and I'll say it again: it's a horrible choice. RAID 6 with 5+ drives, RAID 10 with 4 drives, and raid 1 with 2 drugs

  • Lee Veteran

    sureiam said: raid 1 with 2 drugs

    Something distract you there?

  • sureiam said: Don't use RAID 5. It's been said before and I'll say it again: it's a horrible choice

    It's SSDs, not HDDs. Out of the given options I'd opt for the HW RAID 5 thingy :-)

  • IonSwitch_Stan Member, Host Rep
    edited October 2017

    Most RAID controllers are designed for HDDs, not for fast SSDs.

    In practice this doesn't seem to be an issue on modern-ish servers (HP DL360 G8+, Dell R620+). Here is a Dell H710 512MB in its default configuration on a 6-disk RAID 10 set from one of our hypervisors:

    dd if=/dev/zero of=test bs=1M count=10000
    10000+0 records in
    10000+0 records out
    10485760000 bytes (10 GB) copied, 7.32244 s, 1.4 GB/s

    HP P410s and later perform similarly.

  • LOL makes so much sense now.

  • sureiam Member
    edited October 2017

    @Lee said:

    sureiam said: raid 1 with 2 drugs

    Something distract you there?

    Haha, pitfall of swipe typing on a phone. Geez, that's super not work-friendly! Lol

  • @Falzo said:

    sureiam said: Don't use RAID 5. It's been said before and I'll say it again: it's a horrible choice

    It's SSDs, not HDDs. Out of the given options I'd opt for the HW RAID 5 thingy :-)

    True, failure rates during recovery will be drastically lower, but you're still taxing those drives like crazy under RAID 5 with a single redundancy. Just seems too risky. Then again, if they are under 500GB each it'll probably be just fine. RAID 5 is really an issue, from what I've read, at 1TB+. So probably OK, but performance will definitely suck.

  • Amazing how certain matters come up again and again.

    For a start, no, FastPath does not increase your SSD I/O (it might, however, make the drives die sooner). FastPath is a typical marketing game; pretty much all RAID controllers (which nowadays again pretty much translates to LSI anyway) have been optimized over decades for spindles and had a rude awakening when SSDs came up. FastPath is buzzword bingo that really means "we have changed some internal stuff in a hurry to at least not look obviously yesteryear wrt SSDs and to brake them a little less".

    Secondly, keep in mind that a RAID controller is basically another computer, namely one specialized to deal with (mostly spinning) disks. Just imagine every RAID controller (save those on the bridge or SoC itself) to be basically a Synology or whatever external drive box, linked by a more (e.g. 10 Gb Ethernet) or less (USB 2) fast cable/bus.

    Now, the really important thing with SSDs is that they are built and work quite differently. For instance, they are very low-latency fast, but touchy in terms of writes. Moreover, there is the famous IOPS parameter, which is basically another computer-within-the-computer thing, because an SSD is itself a tiny computer that deals with all the SSD-specific stuff, which is quite complex - hence the SSD controller rat race for IOPS and speed.

    So this issue comes largely down to a few questions, some of which are standard disk questions like IO/s (from the OS view), mostly read or mostly write, etc., and one of which is very SSD-specific: how well is the SSD's onboard controller supported? That usually translates to a driver question, and pretty much every major OS does that dimensionally better than any RAID controller. Linux or BSD (and maybe even Windows) will run circles around an LSI controller.

    Considering this, plus the fact that any decent OS already does buffering very well and that RAID 1 is all but gratis (no expensive operations), the answer is clear: OS RAID 1, no LSI controller, better to add some extra RAM, done, next question, please.

    Caveat: the real advantage of a HW RAID controller is that it may have a battery. OTOH there are some SSDs with very similar functionality (power-loss protection).
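
    To be concrete, OS RAID 1 on the two NVMe drives is a handful of commands (a rough sketch; I'm assuming the drives show up as /dev/nvme0n1 and /dev/nvme1n1 and that you're fine mirroring the whole devices - adjust to whatever the box actually presents):

      # create the mirror, put a filesystem on it, mount it
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
      mkfs.ext4 /dev/md0
      mount /dev/md0 /srv
      # watch the initial resync and general array health
      cat /proc/mdstat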

  • @sureiam said:

    @Falzo said:

    sureiam said: Don't use raid 5. Its been said before and I'll say it again it's a horrible choice

    it's SSDs not HDDs. out of the given options I'd opt for the hw raid5 thingy :-)

    True, failure rates during recovery will be drastically lower, but you're still taxing those drives like crazy under RAID 5 with a single redundancy. Just seems too risky.

    RAID 5 is really an issue, from what I've read, at 1TB+.

    No, you're mixing things up. On spinning drives the problem is that you might hit another URE during a rebuild, which makes you lose the whole array. This gets worse the more and the bigger the disks you have.
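
    Quick napkin math on that, for anyone curious (assuming the classic 1-per-1e14-bits URE spec of consumer spinning rust and roughly 2 TB of surviving data to read back during the rebuild):

      awk 'BEGIN { bits = 2e12 * 8; p = 1 - exp(bits * log(1 - 1e-14)); printf "~%.0f%% chance of at least one URE during the rebuild\n", p * 100 }'

    With enterprise drives rated at 1e-15 that drops to a couple of percent, which is a big part of why RAID 6 got pushed so hard for large spinning arrays.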

    SSDs don't suffer UREs in the same way, so this risk largely isn't there anymore (there is something like UBER, but those numbers are much better, far away from problematic).

    Then there are quite some discussions around how RAID 5 could also amplify write operations and therefore have an impact on the expected lifespan of the SSDs.
    On a proper hardware RAID, the controller should at least optimize writes and not rewrite old, unchanged data blocks, etc.
    Which leaves you with the often-mentioned bigger block size that gets called out as problematic for lifespan - guess what, with RAID 1 you also write that same bigger block size to both drives ;-)

    @vovler posted above that the drives are supposed to be Intel S3520 960GB in that RAID 5 config. Those SSDs come with MLC and a whopping 1750 TBW ...
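
    For scale, the usual rule of thumb is TBW ≈ DWPD × capacity in TB × 365 × warranty years (here assuming the 5-year warranty those drives carry):

      awk 'BEGIN { printf "1.0 DWPD on 960GB over 5y = %.0f TBW\n", 1.0 * 0.96 * 365 * 5 }'   # ~1752, matches the ~1750 TBW spec
      awk 'BEGIN { printf "0.3 DWPD on 960GB over 5y = %.0f TBW\n", 0.3 * 0.96 * 365 * 5 }'   # ~526, the floor OVH quotes for the S3xxx line

    So that 1750 TBW figure works out to roughly one full drive write per day, well above the 0.3 DWPD minimum in OVH's listing.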

    As said above, I'd go with RAID 5 on a hardware RAID without a doubt.

    see also: https://mangolassi.it/topic/5895/understanding-raid-5-with-ssd-solid-state-drives

  • At this point, try to go with ZFS or BTRFS in RAID 1 or RAID 10 modes. ZFS RAIDZ is a good option if you want parity, but BTRFS parity is a little too risky, in my opinion.

    FreeBSD with ZFS would be my first thought when thinking about shared hosting. The libraries being decoupled from the OS is a great feature, and ZFS has some great features.

    Web hosting is all about reading files, so I would be inclined to go with NVMe, if possible. Plus, NVMe has a high buzzword quotient.

    On the question of RAID controllers. Modern storage filesystems, procs, and SSDs make them obsolete. The point of RAID controllers was to offload checksum work to free up CPU cycles and provide a fast cache between the OS and slow disks. Modern procs can handle the workload just fine, and SSDs can process IO requests very quickly. Modern storage filesystems compress all of these layers into software, and as such, they have a much better idea of what is going on with the data.
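
    For the ZFS mirror route the whole setup is basically one command (a sketch - I'm assuming the two NVMe devices show up as /dev/nvme0n1 and /dev/nvme1n1 and that you want the pool at /tank; FreeBSD is the same idea, just with its own device names):

      # mirrored pool with lz4 compression; ashift=12 keeps writes 4K-aligned, which SSDs want
      zpool create -o ashift=12 -O compression=lz4 -m /tank tank mirror /dev/nvme0n1 /dev/nvme1n1
      zpool status tank   # shows both sides of the mirror and any resilver in progress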

  • vovler Member
    edited October 2017

    @bsdguy
    But doesn't soft RAID slow down the IO?

    @Falzo
    They mention S3xxx, not S3520. It's just a lottery: either you get an almost-new drive, or an old, somewhat outdated one that's about to die.

  • raindog308 Administrator, Veteran

    flatland_spider said: FreeBSD with ZFS would be my first thought when thinking about shared hosting.

    It'd be my last because none of the major panels (e.g., cpanel) supports it, and your users will likely want a panel.

    flatland_spider said: On the question of RAID controllers. Modern storage filesystems, procs, and SSDs make them obsolete.

    Ah, no.

  • vovler Member
    edited October 2017

    @raindog308 said:

    flatland_spider said: FreeBSD with ZFS would be my first thought when thinking about shared hosting.

    It'd be my last because none of the major panels (e.g., cpanel) supports it, and your users will likely want a panel.

    Exactly. You are mostly stuck with CentOS for shared hosting, unless you have some custom panel.

  • bsdguy Member
    edited October 2017

    @vovler said:
    @bsdguy
    But doesn't soft RAID slow down the IO?

    Why would it? The OS knows the SSD's controller no less well (and quite probably better) than a RAID card does, and RAID 1 is virtually free for an OS. SSDs hardly fill a single PCIe lane, and NVMe drives are usually connected through PCIe x4, so unless your NVMe drives do more than 10 Gb/s each (and that's assuming old PCIe 2.0), where should the problem be?
    (In fact, a modern OS might even just issue the ioctls twice (one per drive) but not double the payload transfer.)
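
    To put rough numbers on the bus headroom (rule-of-thumb per-lane figures: PCIe 2.0 ≈ 500 MB/s after 8b/10b encoding, PCIe 3.0 ≈ 985 MB/s after 128b/130b):

      awk 'BEGIN { printf "PCIe 2.0 x4: %d MB/s (~%d Gb/s)\n", 4 * 500, 4 * 500 * 8 / 1000 }'
      awk 'BEGIN { printf "PCIe 3.0 x4: %d MB/s (~%d Gb/s)\n", 4 * 985, 4 * 985 * 8 / 1000 }'

    And each NVMe drive gets its own x4 link, so the drives' own controllers give out long before the lanes do.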

    The IO bottleneck is not the bus, it's the on-drive controller. Moreover, again, HW RAID controllers have been designed and optimized over decades for spindles, i.e. for devices with considerably slower IO, and FastPath is just a marketing term trying to make "well, we try to limp a bit faster with SSDs" sound like "the new speed daemon!!!". Of COURSE they do; the alternative (honesty) would be to say "don't use our expensive cards with SSDs. It's a waste, at least with RAID 0 and 1".

    Now with RAID 5 and RAID 6 the story starts to change. RAID 5 is just XOR; that's quite a cheap operation. With RAID 6 a HW RAID controller quite probably is indeed faster (due to the considerably more expensive calculations).
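
    (Toy illustration of why the parity itself is cheap: with single-parity RAID 5 the parity block is just the XOR of the data blocks, and any one missing block XORs back out of the rest. Byte-sized sketch, obviously not a real on-disk layout:)

      d1=0xA5; d2=0x3C
      p=$(( d1 ^ d2 ))                              # the parity "block"
      printf 'parity=%#x  rebuilt d1=%#x\n' "$p" "$(( p ^ d2 ))"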

    I answered the way I did because NVMe was mentioned and because RAID 1 is pretty much always faster than RAID 5 (except for very expensive setups), so you get speed and redundancy and independence = safety with OS RAID 1. (It is not unheard of that a replacement RAID card, same model, messed things up because of a firmware release difference. That won't happen to you with (F)OSS SW RAID.)

    P.S. The whole question is weird in that we're talking about an a priori difference of roughly a factor of 3 (writing) to 5 (reading) between NVMe and SATA SSD. So what miracle is supposed to make OS SW RAID 1 with NVMe somehow slower than HW RAID 5 on a HW controller?
