How important it is to have RAID 10 set up on your VPS?


Comments

  • Also,

    Software RAID10 is fine.
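
    As a rough illustration of what that can look like in practice (nothing here is from this thread; device names and paths are assumptions), here is a minimal Python sketch that builds a 4-disk software RAID10 with mdadm and records it so the array assembles at boot:

    ```python
    # Hypothetical sketch: create a 4-disk mdadm RAID10 and persist its config.
    # /dev/sdb-/dev/sde and /dev/md0 are placeholders; assumes mdadm is
    # installed, the disks are blank, and a Debian-style /etc/mdadm/mdadm.conf.
    import subprocess

    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

    # Create the array (default near-2 RAID10 layout across the four members).
    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=10", f"--raid-devices={len(disks)}", *disks],
        check=True,
    )

    # Append the array definition so it is assembled automatically on boot.
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          check=True, capture_output=True, text=True)
    with open("/etc/mdadm/mdadm.conf", "a") as conf:
        conf.write(scan.stdout)
    ```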

  • CloudxtnyHostCloudxtnyHost Member, Host Rep

    @shardhost

    If you are going to do RAID, then software RAID shouldn't be considered.

    Here's why: http://www.cyberciti.biz/tips/raid-hardware-vs-raid-software.html

  • I still think it depends. Everyone is comparing RAID10 to RAID1 without considering specifics.

    RAID1 with SSDs, like I said, will probably be more reliable than even HW RAID10. SSDs (especially higher-grade ones) fail even less often than RAID cards do.

    SW RAID1 also removes the need for a RAID card = one less point of failure.

    We use RAID10 in most deployments because it makes sense.

    I think the argument and battle here is the attitude and mindset of "RAID10 or you suck". There's no hard and fast rule.

    It doesn't help if you use drives that always fail, even in RAID10. The type of drives used is also important, as is the RAID card. If you have a crappy RAID card that crashes all day long, is there a point?

    I might be going to extremes, but it's the whole package, including the price point and how it's sold.

    This is more like an opinionated religious debate right now.

  • edited February 2013

    @httpzoom said: @shardhost

    If you are going to do RAID, then software RAID shouldn't be considered.

    Here's why: http://www.cyberciti.biz/tips/raid-hardware-vs-raid-software.html

    Some of that data is out of date. I can throw out five other articles that explain why HW RAID shouldn't be used over software RAID.

    I'll make comments based on my experience using software RAID on over 100 nodes for over a year.

  • @httpzoom said: If you are going to do RAID, then software RAID shouldn't be considered.

    In your experience, why do you think software (mdadm) RAID shouldn't be considered? Have you had one fail on you or something?

  • concerto49concerto49 Member
    edited February 2013

    @ShardHost said: Some of that data is out of date. I can throw out five other articles that explain why HW RAID shouldn't be used over software RAID.

    I'll make comments based on my experience using software RAID on over 100 nodes for over a year.

    Then please do, but the articles don't help. They're out of date, and it's not a numbers game. Please. I can imagine someone saying, "I can throw out 5 articles saying SW RAID is better." Endless loop.

    mdadm has improved over the years. For RAID1 or RAID10 it might work. RAID5 or RAID6, etc., is a different story.

  • @httpzoom said: If you are going to do RAID, then software RAID shouldn't be considered.

    As a blanket statement, I really disagree with this. Depending on the situation, there may be real benefits to hardware RAID; in other cases, Linux software RAID may be the perfect solution.

  • jarjar Patron Provider, Top Host, Veteran
    edited February 2013

    @MiguelQ said: In your experience, why do you think software (mdadm) RAID shouldn't be considered? Have you had one fail on you or something?

    Depends on desired capacity and CPU on the node. This is assuming no SSD cache. If you're trying to achieve with SW RAID10 what others are getting from HW RAID10, you most often won't (unless you're SSD caching and/or underselling CPU). Also, giving up hot swap and dealing with big performance hits on a rebuild isn't very fun. Of course, everyone has to weigh their priorities and their clients' expectations. It's not that it's "bad", but I certainly wouldn't suggest it to a person providing their first-ever services on a big RAID. Underselling CPU is also a smart thing to do, but if you're using a CPU that caps out pretty easily, don't do it.

  • @jarland said: Also, giving up hot swap and dealing with big performance hits on a rebuild isn't very fun. Of course, everyone has to weigh their priorities and their clients' expectations.

    How does it not have hot swap? It's been supported for a long time.

  • @concerto49 said: Then please do, but the articles don't help. They're out of date, and it's not a numbers game. Please. I can imagine someone saying, "I can throw out 5 articles saying SW RAID is better." Endless loop.

    mdadm has improved over the years. For RAID1 or RAID10 it might work. RAID5 or RAID6, etc., is a different story.

    My original point was about software RAID10. For anything with parity, HW RAID would be preferred, but not required.

    @jarland said: Depends on desired capacity and CPU on the node. This is assuming no SSD cache. If you're trying to achieve with SW RAID10 what others are getting from HW RAID10, you most often won't. Also, giving up hot swap and dealing with big performance hits on a rebuild isn't very fun. Of course, everyone has to weigh their priorities and their clients' expectations.

    SW RAID supports hot swap.
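
    For anyone curious, a hypothetical hot-swap sequence with mdadm looks roughly like the sketch below (array and device names are placeholders, and it assumes the controller/backplane supports hot-plug):

    ```python
    # Hypothetical sketch: replace a dying member of a running mdadm array.
    # /dev/md0 and /dev/sdc stand in for the real array and failed disk.
    import subprocess

    def mdadm(*args):
        subprocess.run(["mdadm", *args], check=True)

    mdadm("/dev/md0", "--fail", "/dev/sdc")    # mark the member as failed
    mdadm("/dev/md0", "--remove", "/dev/sdc")  # detach it from the array
    # ... physically swap the drive while the node stays online ...
    mdadm("/dev/md0", "--add", "/dev/sdc")     # add the replacement; resync starts
    ```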

  • jarjar Patron Provider, Top Host, Veteran

    @concerto49 said: How does it not have hot swap? It's been supported for a long time.

    I wasn't aware you could hot swap with SW RAID. Even so, the CPU concern stands. If you're using a CPU that caps out pretty easily and you place the rebuild on the CPU, your clients aren't going to be happy. It's all situational.

  • @jarland said: If you're using a CPU that caps out pretty easily and you place the rebuild on the CPU, your clients aren't going to be happy.

    What caps out during an mdadm rebuild is I/O bandwidth, not CPU.
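
    That is also the part you can throttle. A small read-only sketch (standard Linux md paths, nothing specific to this thread) for checking rebuild progress and the per-device resync speed limits:

    ```python
    # Sketch: inspect mdadm rebuild progress and the kernel's resync throttle.
    # dev.raid.speed_limit_min/max are in KB/s per device; lowering the max
    # trades a longer rebuild for less impact on client I/O.
    from pathlib import Path

    print(Path("/proc/mdstat").read_text())  # shows resync %, speed, and ETA
    for limit in ("speed_limit_min", "speed_limit_max"):
        value = Path(f"/proc/sys/dev/raid/{limit}").read_text().strip()
        print(f"{limit} = {value} KB/s")
    ```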

  • jarjar Patron Provider, Top Host, Veteran
    edited February 2013

    @MiguelQ said: What caps out during an mdadm rebuild is I/O bandwidth, not CPU.

    Depends on your CPU. That's why you should be careful who you tell, "Oh yeah, that'll be fine." The one cutting financial corners won't be rocking the same CPU that some of us are. I won't tell them that it's fine, because for them it usually isn't.

    The young provider gaining experience and asking for advice should not be running SW RAID1/10 on 2-4 cheap drives and a brand-new eBayed single L5420. I encourage people to learn, and sometimes you learn by doing. I'd like people to know things before they start providing, but if they ask me, I'm not going to encourage them to take the cheapest start they can get. They'll regret it every time. A seasoned administrator is a different beast. It's not about attacking good providers; it's about not encouraging people to get started on a budget of $50.

    I started with an AMD CPU and SW RAID10 and my biggest regret was listening to the people who said "Oh it's fine" instead of the ones who said "You'll regret it." So that's the true story behind why I'll strongly suggest that young providers either go all in or don't bother.

  • @jarland said: It's all situational.

    Exactly. I don't want to fight for either camp, and shouldn't. SW RAID isn't worse than a RAID card with no cache buffer + BBU, in RAID1 or maybe even RAID10.

    Software RAID has its downfalls and so does hardware RAID. Hardware RAID can be faster. There are also things like CacheCade. There are problems with FlashCache and write caching, etc.

    The boot partition can only be RAID1 in software RAID, so that's something. You can get away with putting it on a USB drive :)

  • @concerto49 said: Exactly. I don't want to fight for either camp, and shouldn't. SW RAID isn't worse than a RAID card with no cache buffer + BBU, in RAID1 or maybe even RAID10.

    Software RAID has its downfalls and so does hardware RAID. Hardware RAID can be faster. There are also things like CacheCade. There are problems with FlashCache and write caching, etc.

    The boot partition can only be RAID1 in software RAID, so that's something. You can get away with putting it on a USB drive :)

    I totally agree. We use a mixture of both (when/where it is appropriate). Our new nodes will be hardware RAID to take advantage of CacheCade and FastPath.

    The boot partition can be mirrored across all disks as RAID1; generally you are not looking for striping performance on /boot :)
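
    The usual companion step, sketched below with placeholder device names (assumes a BIOS box with grub2 and Debian-style tooling): install the bootloader to every member of the /boot mirror so the node still boots after losing any single drive.

    ```python
    # Hypothetical sketch: put GRUB on every member of the /boot RAID1 mirror.
    # Disk names are placeholders; assumes grub-install/update-grub are present.
    import subprocess

    for disk in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]:
        subprocess.run(["grub-install", disk], check=True)

    subprocess.run(["update-grub"], check=True)  # regenerate grub.cfg once at the end
    ```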

  • @ShardHost said: The boot partition can be mirrored across all disks as RAID1; generally you are not looking for striping performance on /boot :)

    I like to have my update-grub done FAST! ;)

  • The concept of a HW RAID card being 'better' is a myth. Yes, it may be faster, but your data is no safer, and as with any hardware, the more of it there is, the more downtime you get. Using a HW RAID card without backups is foolish, as the data actually put on the disks may not be easily recoverable.

  • jarjar Patron Provider, Top Host, Veteran
    edited February 2013

    The concept of a HW RAID card being 'better' is a myth. Yes, it may be faster

    Then it's better....

    Lol

  • XSXXSX Member, Host Rep
    edited February 2013

    RAID10 is not marketing. In terms of the ratio of I/O performance to data-security risk, it is one of the most mature options available. (Note: RAID10 is not a backup.)
