How many hosts are using RAID 1? - Page 2

How many hosts are using RAID 1?


Comments

  • Hehe

    How many nodes do you currently have running?

  • Just to throw this out, we do daily backups to a backup server on another rack. (ie Rack 1 backups to rack 3, rack 2 backups to rack 4, etc.)

    We don't do raid. Our servers are pretty much filled so it would be hard to get another drive or board in there. We currently run 2 drives in each server though so it probably would be doable but I would rather have the OS on one drive and client data on another.

  • What's your company drmike?

  • @drmike said: Just to throw this out

    You are fully managed web hosting, so no bringing oranges to our apple stand.

  • @sleddog said: For the RAID10, there's a 50% chance that it's catastrophic.

    If I'm thinking right, it would be a 33% chance of it being catastrophic. :)

  • @drmike
    On your last comment: RAID is slow and useless in an HA "must have the data at all times" setup.
    I know big players in the mail-archiving (and so forth) industry don't use RAID; they just keep three copies of all data, written in three geographically separate places (a three-site cluster setup with fiber as transport).

  • @Kuro said: If I'm thinking right, it would be a 33% chance of it being catastrophic. :)

    Nope :)

    As VMPort described, RAID10 is two RAID1's joined together.

    If the second disk failure happens on the same RAID1 as the first disk failure, then catastrophe.

    If the second disk failure happens on the other RAID1, then overall the RAID10 continues to function.

    So 50% chance of complete failure on loss of the second disk.

  • JustinBoshoffJustinBoshoff Member
    edited November 2011

    The idea behind RAID 10 is that you stripe over mirrors, so in a 4-disk RAID 10 you should get the write speed of a 2-disk RAID 0 and the read speed of two mirrors.
    So in essence it's a 2-disk RAID 0 with the read speed of two 2-disk RAID 0's and the redundancy of two mirrors.
    Please somebody correct me if I'm wrong?

  • RAID 1 IMO works fine for budget performance. If you're looking for enterprise-class VPS, then RAID 10 is the ideal solution.

  • I think the raw write speeds are overrated.
    If most of your disk reads are cached in RAM and SSD, you won't feel 320 MB/s shared over 1,000 VMs; that's why "Sun Oracle" ZFS-based storage works so well.

  • @VMPort said:

        root@vmport:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        16384+0 records in
        16384+0 records out
        1073741824 bytes (1.1 GB) copied, 3.98297 s, 270 MB/s

    That's some nice speed there :) The new SAS 15K drives are quite blazing.

  • @sleddog The way I was looking at it is: if 1 disk fails, there are 3 disks left. Each of the 3 remaining disks has an equal chance of being the next to fail, which is 33% each. Thus, there is a 33% chance that the disk in the same mirror as the first failure is the next to fail, and a 67% chance that a disk in the other mirror fails ;)

  • @Kuro - makes sense when you put it like that!
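The 50% vs. 33% disagreement above is easy to settle by brute force. A quick sketch that enumerates every two-disk failure combination on a hypothetical 4-disk RAID 10 (disks 0 and 1 form mirror A, disks 2 and 3 form mirror B):

```shell
# Hypothetical 4-disk RAID 10: disks 0,1 form mirror A; disks 2,3 form mirror B.
# The array only dies when BOTH disks of the same mirror fail.
fatal=0
total=0
for a in 0 1 2 3; do
  for b in 0 1 2 3; do
    [ "$a" -lt "$b" ] || continue        # count each unordered pair once
    total=$((total + 1))
    mirror_a=$((a / 2))                  # 0 for disks 0-1, 1 for disks 2-3
    mirror_b=$((b / 2))
    if [ "$mirror_a" -eq "$mirror_b" ]; then
      fatal=$((fatal + 1))               # both failures in the same mirror
    fi
  done
done
echo "$fatal of $total two-disk failure combos are fatal"
```

This prints `2 of 6 two-disk failure combos are fatal`: a third of the pairs kill the array, and once one disk is already dead, only 1 of the 3 survivors shares its mirror, which matches Kuro's 33%.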

  • @kiloserve said: That's some nice speed there :) The new SAS 15K drives are quite blazing.

    Yep, that's impressive.

    How many users on the node where you can have these results?

  • kiloservekiloserve Member
    edited November 2011

    I'm not sure how the math (probability percent) works but here's the breakdown

    RAID 10 is really just a bunch of RAID1 mirrors added/"striped" to each other....hence the "real name" of RAID10 is actually "RAID 1+0" (mirror+striping)

    So, if you have 4 disk RAID 10, you can survive 2 drive failures as long as they are not in the same RAID1 mirror set.

    If you have an 8-disk RAID 10, you can survive up to 4 drive failures, as long as no two of them are in the same RAID1 mirror set.

    If you have 2 failures in the same RAID1 mirror set, you're dead regardless of how many drives you have.

  • @EaseVPS Actually, enterprise-class VPS should run on RAID 6 (if you have only 4 drives). This is the only configuration where you can lose any two drives at the same time. But it is slower. The second best is RAID 5 + spare; performance is a bit better, but you cannot lose a second drive immediately after the first one.

  • drmikedrmike Member
    edited November 2011

    Let me ask this. I have a pair of drives in each server with no real option to add in anything else hardware-wise. What RAID should/could I be using in there? Please kindly remember that this is pretty much not an area I'm used to, so you're going to have to explain a bit.

  • edited November 2011

    performance -> RAID 0

    safety -> RAID 1

    With only two drives, there is no other choice.

  • kalamkalam Member
    edited November 2011

    With 2 drives wouldn't the only solution be RAID 1? My take on it was that with disk mirroring you gained the redundancy of being able to lose one drive, and disk reads were generally improved as opposed to not using a RAID solution.

    You'd also want to look at hardware vs software implementations. Software RAID 1 through Linux seems to be pretty decent.
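For the Linux software route kalam mentions, a minimal mdadm sketch looks like this. The device names `/dev/sdb` and `/dev/sdc` are placeholders, not from the thread; verify your own with `lsblk` before running anything.

```shell
# Create a two-disk RAID 1 (mirror) array from two spare disks.
# /dev/sdb and /dev/sdc are example names only.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial sync and check ongoing array health.
cat /proc/mdstat
mdadm --detail /dev/md0

# Put a filesystem on the mirror and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```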

  • @drmike

    Same setup as the one I mentioned above, maybe? Two 15K disks in RAID1; it seems quite clear that one 15K disk has the same, if not higher, I/O than four 7K disks in RAID10.

  • @kiloserve

    Thanks :) Not as good as one of the tests I saw on one of your nodes a few days back :P

    Yes, they are super fast; imagine 4 of them in RAID10, that would be a serious result. I daren't even look at the prices for those drives at the minute though; I know our DC has slammed £50 setup fees per disk for the 15K SAS2's if you use them in one of their leased configs.

    The sad thing is that these prices are hiked because of the floods. I can almost guarantee some providers are going to milk this as much as they can, and the HD prices probably won't ever go back down to what they once were.

  • kiloservekiloserve Member
    edited November 2011

    @drmike said: I have a pair of drives in each server with no real option to add in anything else hardware wise. What raid should/could I be using in there

    If you have important stuff (hosting clients), then it should be RAID1. RAID 1 will allow one drive to die and the other will keep running without missing a beat. If you don't care about the data, you can use RAID 0 for speed. Absolutely not recommended to run RAID 0 on anything you care about.

    With RAID 0 --> 1 hard drive failure means all data is lost between both drives... say goodbye to your data.
    With RAID 1 --> 1 hard drive failure is not a show stopper, the 2nd drive will kick in.
    

    A word of caution for onboard/software RAID 1, you usually have to take the server down to rebuild the RAID set. So for example if 1 drive dies, you can keep going. However, you should schedule some quality time with your clients to allow a RAID mirror rebuild. Most software RAID will require you to manually interact with the RAID "bios" interface and have you manually tell it "Here's the new drive replacement, rebuild the set".

    Usually when I do software RAID rebuilds, I'll clone the "good" drive just in case before I do the rebuild. There have been times when software RAID won't rebuild the Raid1 set properly.

    In contrast, with good hardware RAID, you can just insert the replacement drive and if automatic rebuild is set to on, you shouldn't see any downtime.
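To make kiloserve's rebuild workflow concrete for Linux mdadm specifically, the replace-and-rebuild flow looks roughly like this (the array `/dev/md0` and disk `/dev/sdb` are hypothetical example names; check `/proc/mdstat` for yours first):

```shell
# Mark the dying disk failed and pull it out of the array.
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb

# Physically swap the drive, then add the replacement;
# the kernel rebuilds the mirror in the background.
mdadm --manage /dev/md0 --add /dev/sdb

# Follow rebuild progress.
watch cat /proc/mdstat
```

As kiloserve suggests, imaging the surviving good drive before kicking off the rebuild is cheap insurance.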

  • @drmike said: I have a pair of drives in each server with no real option to add in anything else hardware wise. What raid should/could I be using in there?

    Since you are using FreeBSD: the Concatenated Disk Driver (ccd) is their version of software RAID, or hardware RAID is a valid option. With only 2 drives, though, it's RAID1, so you can suffer the loss of one drive; the other is mirrored so you keep going, and then you hot-swap a replacement drive and rebuild the mirror.

    http://www.freebsd.org/doc/handbook/raid.html

  • KuJoeKuJoe Member, Host Rep

    Well we finally got the RAID10 built. Here's the results:

    RAID1: ~75MB/s
    RAID10: ~176MB/s

    Keep in mind that these results are from the same hard drives, but the RAID controller is different.

  • Screw it. I'll stick with my daily backups.

    Thanks though.

  • @KuJoe said: Well we finally got the RAID10 built. Here's the results:

    RAID1: ~75MB/s
    RAID10: ~176MB/s

    Keep in mind that these results are from the same hard drives, but the RAID controller is different.

    Both are hardware RAID?

    I'm interested in seeing how software RAID performs. I've looked into it, and the performance difference for RAID 1 or RAID 10 doesn't seem to be that big. Of course, rebuilding the array does suck for software RAID, but otherwise I'm not sure hardware RAID is worth it (if you're renting).

  • KuJoeKuJoe Member, Host Rep
    edited November 2011

    Actually the RAID1 is a "fakeRAID" (I think that's what it's called, I just recently found out about it). It's not really software RAID but it's not really a hardware RAID either, it's configured before the OS is installed on the SAS controller. I'll run some tests to see if the fakeRAID performs better than a software RAID. We don't rent our servers so a one-time expense is worth it for us.

  • @KuJoe said: Actually the RAID1 is a "fakeRAID"

    fakeRAID uses the RAID built into most modern BIOSes, but cards like the LSI 9211 are sort of fake too: not really hardware RAID, but they run circles around fakeRAID.

  • KuJoeKuJoe Member, Host Rep

    I just looked it up and it looks like we have an LSI SAS 1068 according to Dell so I guess fakeRAID was the wrong term.
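For anyone else trying to work out what their controller actually is, you can usually identify it from a running Linux box rather than trusting the vendor's spec sheet (this assumes `lspci` is installed; commands are read-only):

```shell
# List storage controllers. fakeRAID typically shows up as a plain
# SATA/SAS chip, while real hardware RAID appears as a dedicated
# RAID controller device.
lspci | grep -i -E 'raid|sas|sata'

# If the OS sees individual member disks plus an md device,
# you're on software RAID rather than hardware RAID, which
# presents a single logical disk.
cat /proc/mdstat
lsblk
```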
