RAID 1 VS RAID 5


Comments

  • raindog308 Administrator, Veteran

    bsdguy said: I answered the way I did because nvmes were mentioned and because R1 is pretty much always faster than R5

    Is that specific to nvme?

  • GTHost Member, Patron Provider

    Hardware RAID-5 will be faster than any RAID-1 if you use 4 or more HDDs. For SSDs and NVMe it's better to use software RAID-1.

  • bsdguy Member
    edited October 2017

    @raindog308 said:

    bsdguy said: I answered the way I did because nvmes were mentioned and because R1 is pretty much always faster than R5

    Is that specific to nvme?

    No, it's due to the operations performed. R1 is but "write the payload twice", while R5 is XOR, which is a cheap operation but still an operation (which i.a. means that the whole cake has to walk through the CPU, with all the unfunny side effects that brings, e.g. fucking up the caches). A minimal sketch of the idea follows at the end of this post.

    There is another factor which, however, is largely theoretical nowadays (at least on servers), namely the bus issue. The nice thing about hw controllers is that they "bundle" the stuff (e.g. 1 bus transfer rather than 2 for R1) and that they keep your processor free for probably more interesting things than calculating R5 or even R6. On the other hand those cards typically have processors that aren't exactly speed daemons although they typically have some raid stuff (certainly xor) in hw.

    Which leads us again to the "spindle" thing. The world (from the view of a controller) was fucking different then (with spindles), and you can't just double or quadruple a design (say by using more powerful processors, because everything you do brings other factors with it which change the whole finely tuned game, e.g. more power required). These controllers are meticulously designed, finely tuned and evolved beasts - for a world with spindles, which i.a. defines the frame of what's fast; comparatively speaking that world was a slow and comfortable one.

    Btw, I'm generally against hw controllers in hosting (from the customer's perspective) because I as a customer have no control over the hardware and have to blindly rely on the hoster to fucking know those details and to have, for instance, a good stock of release-identical spares - or else ... my data are fucked up. So I prefer to have 2 physical disks (no matter the kind) and to do sw R1 -plus- diligent backups. Having hw raid, particularly R5 or R6, may be (and usually is) a blessing for me as a customer, but it may also turn out to be a curse.

    As for the application's view of speed, I have very rarely seen a significant difference, and throwing more RAM at it almost always more than compensates.
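
    To make the point above concrete, here is a minimal sketch of the two write paths in Python (a toy model, not a real RAID driver; the function names and the 3-chunk stripe are invented for this illustration):

        # Toy model of the two write paths (not a real RAID driver).
        def raid1_write(payload: bytes, mirrors: int = 2) -> list:
            # RAID 1: the payload is simply written to every mirror.
            return [payload for _ in range(mirrors)]

        def raid5_parity(chunks: list) -> bytes:
            # RAID 5: parity is the byte-wise XOR of all data chunks.
            # Cheap, but the CPU still has to touch every byte once.
            parity = bytearray(len(chunks[0]))
            for chunk in chunks:
                for i, b in enumerate(chunk):
                    parity[i] ^= b
            return bytes(parity)

        # Example: a 3-data-disk stripe; a 4th device would hold the parity chunk.
        data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
        print(raid5_parity(data).hex())  # ee22
        # If any one data chunk is lost, XOR of the remaining chunks with
        # the parity chunk reconstructs it.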

  • @vovler said:

    @Falzo
    They mention S3XXX, not S3520. It's just a lottery: either you get an almost new drive, or an old, somewhat outdated one that is about to die.

    afaik there is no other S3xxx 960GB Intel SSD besides the S3520.

  • @Falzo said:

    afaik there is no other S3xxx 960GB Intel SSD besides the S3520.

    There is the Intel S3320 960GB.

    With 840 TBW, nearly half that of the S3520.

  • After going a bit deeper into benchmarks, I decided that soft RAID 1 is probably the option I'm going with. As you mentioned, there doesn't seem to be any I/O performance degradation.

    In case of soft RAID, what happens in case of power failure?

  • @vovler said:
    In case of soft RAID, what happens in case of power failure?

    BANG happens. Data loss and a funny fsck cycle ...

    But then I'm basing that on the assumption that your hoster has proper UPS and N+1 power supplies in the boxen.

  • FredQc Member
    edited October 2017

    Maybe I am wrong, but you will have to sell a hell of a lot of shared hosting accounts to cover the expense of these servers. This doesn't look very sustainable to me...

  • @FredQc said:
    Maybe I am wrong, but you will have to sell a hell of a lot of shared hosting accounts to cover the expense of these servers. This doesn't look very sustainable to me...

    I'm not going for LET prices.

    $152/m for the server + $130 for software licenses

    1TB of storage, to give some space for the garbage collector.

    Each GB costs about $0.28/m (not including customer support and the cost of customer acquisition); the arithmetic is sketched after this post. Let's say I will give the support myself in the first months, and customer acquisition is about $10.

    Yes, prices won't be LET-like.
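
    A quick sketch of that per-GB figure (assuming, as the $0.28 number implies, that the $130 in licenses is also a monthly cost; the post does not state the billing period):

        # Rough cost-per-GB check; the monthly license cost is an assumption.
        server_cost = 152.0     # $/month, from the post
        license_cost = 130.0    # $/month (assumed monthly)
        usable_gb = 1000        # ~1 TB set aside, per the post
        print((server_cost + license_cost) / usable_gb)  # ~0.28 $/GB/month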

  • @raindog308

    Meh. I'm fine with BSD-based hosts, as long as they can be managed easily (e.g. NearlyFreeSpeech). Other than that, CentOS seems to be the standard, I guess.

  • Maounique Host Rep, Veteran

    I generally avoid RAID controllers where fewer than 4 disks are involved, and I am evaluating options with 5 and 6 depending on many factors, but from 7 and up there is no real choice.
    Yes, you have to take some risks, but we trade risk vs performance and usability every day.
    In this case, software RAID, if possible RAID 5. With mirroring the data is written twice; with RAID 5, once plus one parity. The disk controller will likely bundle stuff together, so the number of writes is probably on par. I wish I knew in much detail what the controller does in all SSDs, but some things must remain black boxes, otherwise we would go crazy.
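
    A rough sketch of the write counts being compared, assuming whole-stripe writes and ignoring controller caching (the helper names are invented for this illustration):

        # Device writes generated by one logical write (very rough model;
        # ignores caches, partial stripes and journalling).
        def raid1_device_writes(mirrors: int = 2) -> int:
            # Every mirror receives a full copy of the data.
            return mirrors

        def raid5_full_stripe(disks: int):
            # A full-stripe write touches (disks - 1) data chunks plus 1 parity chunk.
            chunks_written = disks
            amplification = disks / (disks - 1)
            return chunks_written, amplification

        print(raid1_device_writes(2))    # 2 -> 2.0x write amplification
        print(raid5_full_stripe(4))      # (4, 1.333...) -> lower than mirroring
        # Small random writes on RAID 5 are worse: a read-modify-write costs
        # 2 reads + 2 writes (old data + old parity, then new data + new parity).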

  • @Maounique said:
    I generally avoid RAID controllers where fewer than 4 disks are involved, and I am evaluating options with 5 and 6 depending on many factors, but from 7 and up there is no real choice.
    Yes, you have to take some risks, but we trade risk vs performance and usability every day.
    In this case, software RAID, if possible RAID 5. With mirroring the data is written twice; with RAID 5, once plus one parity. The disk controller will likely bundle stuff together, so the number of writes is probably on par. I wish I knew in much detail what the controller does in all SSDs, but some things must remain black boxes, otherwise we would go crazy.

    Sidenote: A major part of the problem is that the x86 world is brutally marketing-driven and not really professional (unlike, say, Power or Sparc), and the (currently so-called) IoT universe is largely driven by gadgets and idiocy.

    That said, have a look at the "GnuBee" - tl;dr -> same as or better than a 4-disk Synology or other brand, but a) just somewhat more than half the price, plus b) 6 x 2.5" bays. Downside: no real case.

    For more technically inclined persons (who care little about brand names or shiny design) it's a really attractive solution. They themselves (dimensioning things more honestly than the brand players) say that they do R0, 1, 10, but looking at the 2c/4t CPU I guess R5 will work too, albeit certainly not at SSD speeds (but then, for many things spindles are still preferable).
    It offers 2 x 1 Gb, 1 x USB 3, plus some (I forgot the number) USB 2 ports.

    It runs Linux and LEDE, and the hw is fully open source, so hardcore guys could add e.g. battery backup and whatnot.

  • @WSS said:
    Soft RAID 0, obviously.

    YOLORAID is the best!

  • Zerpy Member
    edited October 2017

    @vovler said:

    @Falzo said:

    afaik there is no other S3xxx 960GB Intel SSD besides the S3520.

    There is the Intel S3320 960GB.

    With 840 TBW, nearly half that of the S3520.

    OVH uses the Intel DC S3520 for their 960GB drives; it's rated at 1750 TBW.

    @GTHost said:
    OVH installs not-very-reliable SSDs. The Intel SSD DC P3520 1.2 TB is only 0.3 DWPD. That's the reliability of a regular desktop SSD.

    Do you know what the 960GB SSD is?

    We install the Micron 5100 MAX - 5 DWPD. It is a real enterprise SSD.

    OVH's SSDs are completely fine; they use the normal mixed-workload Intel enterprise SSDs for their servers, and they're by no means at the reliability level of a regular desktop SSD.

    In the case of NVMe, OVH uses the P3520, which is rated at 1480 TBW - that's 0.3 drive writes per day measured over a period of 5 years.

    If we take the Samsung 850 EVO 1TB (a consumer disk), it's rated for 150 TBW over a period of 5 years.

    So basically the P3520 (Intel enterprise NVMe) and the S3520 (Intel enterprise SSD) are rated at roughly 10x a consumer drive.

    1480 or even 1750 TBW is plenty of write endurance for most people, especially because people tend to switch servers every 2-3 years. If we base it on 3 years, you can do the following:

    P3520 1.2TB - ~1.35 TB written per day, ~1.13 DWPD
    S3520 960GB - ~1.60 TB written per day, ~1.66 DWPD
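
    Those figures are just the TBW rating divided out; a quick sketch of the arithmetic (365-day years, illustrative function names):

        # Reproducing the endurance figures above (3-year horizon, 365-day years).
        def tb_per_day(tbw_rating: float, years: float = 3) -> float:
            return tbw_rating / (years * 365)

        def dwpd(tbw_rating: float, capacity_tb: float, years: float = 3) -> float:
            # Drive writes per day = daily write budget / drive capacity.
            return tb_per_day(tbw_rating, years) / capacity_tb

        print(tb_per_day(1480), dwpd(1480, 1.2))     # P3520 1.2TB:  ~1.35 TB/day, ~1.13 DWPD
        print(tb_per_day(1750), dwpd(1750, 0.96))    # S3520 960GB:  ~1.60 TB/day, ~1.66 DWPD
        print(tb_per_day(150, 5), dwpd(150, 1, 5))   # 850 EVO 1TB over 5y: ~0.08 TB/day, ~0.08 DWPD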
    

    Your Micron 5100 MAX drives are write-intensive disks, thus they'll have higher endurance - which makes complete sense, but to be honest, there's absolutely no reason to go for write-intensive disks if you're not doing write-intensive workloads.

    OVH based their disk selection on the P35xx and S35xx series because of the general workload they see from their customers; if all their customers did write-intensive workloads, OVH would have selected the other drives, because that would be cheaper for them in the long run.

    But anyway, the point is: the P3520 and S3520 are by no means at the same reliability level as consumer drives, but sure - also not as "reliable" as disks designed for write-intensive tasks.

    It just seems like you want to make a sale, and then put out odd numbers :-) That's a bit low.

  • @vovler said:
    They mention S3XXX, not S3520. It's just a lottery: either you get an almost new drive, or an old, somewhat outdated one that is about to die.

    No, you do not get drives that are about to die. If they use a single old disk, it will often have between 0 and 10k hours (which is by no means dying). In the roughly 200 servers I've installed for people at OVH with SSD configs, 80% of the time both drives were completely new; in the remaining 20% there was 1 new and 1 old drive, and in 90% of those cases the old drive was below 1000 hours (41 days of runtime). A single one was at about 8k hours, and the endurance on that one (according to smartctl) was still at 99% - which means the drive didn't really perform any major writes in almost a year.

  • MCHPhil Member
    edited October 2017

    So LET is doing all your market research googling for you and you are not even going to have a product for the board that is helping you? Savage.

  • vovler Member
    edited October 2017

    @MCHPhil said:
    So LET is doing all your market research googling for you and you are not even going to have a product for the board that is helping you? Savage.

    I didn't ask anyone to google for me. I asked for the opinion of people that are experienced with this, something far different and more valuable than "googling".

    The product will be available to anyone, even for LET. But it's not gonna have LET prices, therefore I will not promote it here on LET.

  • MCHPhil Member
    edited October 2017

    @vovler said:

    @MCHPhil said:
    So LET is doing all your market research googling for you and you are not even going to have a product for the board that is helping you? Savage.

    I didn't ask anyone to google for me. I asked for the opinion of people that are experienced with this, something far different and more valuable than "googling".

    The product will be available to anyone, even for LET. But it's not gonna have LET prices, therefore I will not promote it here on LET.

    Spin it any way you wish. There are TONS of posted experiences around the web related to SSDs and RAID arrays. By industry leaders, no less (no offense to anyone here). Why would LET be the best place to get that information?

    It's comedy to me. Good luck with your venture.

    Edit: Honestly, your question is answered by RTFM on raid. :S

  • @MCHPhil said:

    @vovler said:

    @MCHPhil said:
    So LET is doing all your market research googling for you and you are not even going to have a product for the board that is helping you? Savage.

    I didn't ask anyone to google for me. I asked for the opinion of people that are experienced with this, something far different and more valuable than "googling".

    The product will be available to anyone, even for LET. But it's not gonna have LET prices, therefore I will not promote it here on LET.

    Spin it any way you wish. There are TONS of posted experiences around the web related to SSDs and RAID arrays. By industry leaders, no less (no offense to anyone here). Why would LET be the best place to get that information?

    It's comedy to me. Good luck with your venture.

    Edit: Honestly, your question is answered by RTFM on raid. :S

    Unlike you, there are people here that I consider to be very experienced and to have a good amount of knowledge.

    If this post makes you laugh, I will tell you that every single post you make about your KVMs using RAID 0 makes me laugh.
    Why would anyone go with your VPSes when even your pricing is a joke?

  • MCHPhil Member
    edited October 2017

    You are very classy :) You asked that a few weeks ago, remember? Did you read the answers you got? I saw a lot of praise as far as it's concerned.

    Very clear you are an actual Winter Host. Good luck though. I know others here see through it all also :)

    As I said good luck buddy.

    edit: damn muscle memory

  • @MCHPhil

    What is so clear to you that it's a winter host? Asking for someone's opinion on LET? Worrying about customer satisfaction and data?

    Sure, you have good reviews now, but that RAID 0 will bite you in the ass sooner or later.
    Customers will ignore that and won't do backups, and will get mad when they lose all their data. But once again, I don't see why anyone with brains would trust your service for production.

  • Maounique Host Rep, Veteran

    bsdguy said: certainly not at ssd speeds

    Oh, gee... Using SSDs in a low-end NAS is such a waste...
    Personally I believe SSDs are not suitable even in million-dollar SANs in our usage scenario (for VPSes), but they may be in other cases where many IOPS and low data flow are actually required - those are probably fringe, extreme cases.
    No, a NAS is for spindles; if you need high-bandwidth, high-IOPS storage you don't put it in a NAS, certainly not one powered by a 2c/4t low-power CPU with minimal caching and possibly a low-end network controller.

  • raindog308 Administrator, Veteran

    FlamesRunner said: Other than that, CentOS seems to be the standard, I guess.

    The reason is that it's the only OS supported by cPanel.

    doghouch said: YOLORAID is the best!

    RAID 0 = YOLORAID. I love that.

    bsdguy said: No, it's due to the operations performed. R1 is but "write the payload twice", while R5 is XOR, which is a cheap operation but still an operation (which i.a. means that the whole cake has to walk through the CPU, with all the unfunny side effects that brings, e.g. fucking up the caches).

    Interesting!

  • edited October 2017

    @raindog308 said:
    It'd be my last because none of the major panels (e.g., cpanel) supports it, and your users will likely want a panel.


    Come on, go shell only. :) I forgot people like that sort of thing.

    flatland_spider said: On the question of RAID controllers. Modern storage filesystems, procs, and SSDs make them obsolete.

    Ah, no.

    Care to expand on that? Aside from needing them for expansion over 4 disks, I'm having a hard time thinking of a reason a full RAID card would be needed versus one that just does JBOD.

  • bsdguy Member
    edited October 2017

    @raindog308 said:

    bsdguy said: No, it's due to the operations performed. R1 is but "write the payload twice", while R5 is XOR, which is a cheap operation but still an operation (which i.a. means that the whole cake has to walk through the CPU, with all the unfunny side effects that brings, e.g. fucking up the caches).

    Interesting!

    Is that so? If you want to know more just ask your favourite agent.

  • Maounique Host Rep, Veteran

    flatland_spider said: one that just does JBOD

    I can't figure out why anyone would need one that does JBOD. Can't you just determine a way to partition your disks and add them together?
    Since I have disks of various sizes in my "NAS", I am making my "RAID" from that kind of partitions, spanning 2 disks in some cases. I need RAID 6 due to this, but it is worth it, as I can use every bit of the disks with some basic math involved.
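
    The "basic math" works out roughly like this (an illustrative sketch, assuming equal-size partitions are used as array members and bigger disks contribute two members; the disk sizes below are made up):

        # Rough capacity math for a mixed-size-disk RAID 6 (illustrative only).
        # One dead physical disk can take out two members at once, which is
        # why RAID 6 (tolerates two failed members) is needed here.
        def raid6_usable(members: int, member_size_tb: float) -> float:
            # RAID 6 spends two members' worth of space on parity.
            return (members - 2) * member_size_tb

        disks_tb = [1, 1, 2, 2]                             # hypothetical mixed-size disks
        member = min(disks_tb)                              # cut everything into 1 TB members
        members = sum(int(d // member) for d in disks_tb)   # 6 members
        print(raid6_usable(members, member))                # 4.0 TB usable, nothing wasted

        # A naive RAID 6 over whole disks is limited by the smallest disk:
        print(raid6_usable(len(disks_tb), min(disks_tb)))   # 2.0 TB usable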

  • @Maounique said:

    with some basic math involved.

    Therein lies the basic problem. Most cannot count, let alone much else, anymore.

  • WSS Member

    @AuroraZ said:
    Therein lies the basic problem. Most cannot count, let alone much else, anymore.

    $7

  • vovler said: I'm getting OVH's SP-64, for the 'winter' shared hosting I'm creating

    If this question is meant to be understood as "which one won't / is less likely to require off-site backups?", then neither.

  • MCHPhil Member
    edited October 2017

    vovler said: But once again, I don't see why anyone with brains would trust your service for production.

    What you see and reality can and will be two different things :)

    Edit: You remind me of @GoodHosting. :S
