How many hosts are using RAID 1? - Page 3

Comments

  • @Kairus said: Both are hardware RAID?

    I'm interested in seeing how software RAID performs. I've looked into it, and the performance difference for RAID 1 or RAID 10 doesn't seem to be that big, of course rebuilding the array does suck for software RAID but otherwise I'm not sure if hardware RAID is worth it (if you're renting).

    If you don't have a good RAID card, software RAID is better IMHO. With a modern CPU it takes next to nothing to perform as well as hardware RAID, you can mix in a different drive, and you don't need expensive RAID-edition drives with TLER enabled.

    If, due to the floods, you can't find the exact same drive, it's fine.

    If you have to put your drives in another machine you can; you don't need to have the exact same RAID card.

    Hardware RAID is worth it only if it's free and comes with a good card and a BBU, IMHO :)
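As a sketch of the software-RAID route described above: a two-disk RAID 1 mirror on Linux comes down to a few mdadm commands. The device names, mount-config path, and array name below are placeholders; adjust for your own hardware.

```shell
# Build a two-disk software RAID 1 mirror.
# WARNING: destroys data on the member disks; /dev/sdb and /dev/sdc
# are placeholders for your two blank drives.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# The array is usable immediately; the initial sync runs in the background.
cat /proc/mdstat

# Record the array so it assembles on boot (path per CentOS convention).
mdadm --detail --scan >> /etc/mdadm.conf
```

Because the RAID metadata lives on the disks themselves, the mirror can be assembled on any machine with mdadm installed, which is the portability point made above.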

  • KuJoeKuJoe Member, Host Rep

    Well damn! After 24 hours of running, CentOS 6.0 failed us once again (damn ip6tables-related kernel panics). We rebuilt the server with CentOS 5.7 and the write speeds have dropped to 148MB/s on our RAID10 setup, so it's only about double the speed of the RAID1. :(

  • miTgiBmiTgiB Member
    edited November 2011

    @KuJoe said: We rebuilt the server with CentOS 5.7

    yum install e4fsprogs

    mkfs.ext4 /dev/sdaX

    mount /dev/sdaX /vz

  • KuJoeKuJoe Member, Host Rep

    @miTgiB said: yum install e4fsprogs

    mkfs.ext4 /dev/sdaX
    mount /dev/sdaX /vz

    Already done. ;)

  • @KuJoe what disks are you using in that RAID10 setup, and is that a new node or an occupied one?

  • KuJoeKuJoe Member, Host Rep

    I forget the brand off-hand but they are the 10K SAS drives provided by Dell. It's a brand new node without any clients on it. I figured I'd compare apples to apples by running the tests on completely empty nodes.

  • @KuJoe is that RAID10? Are you using WB or WT? Strange, I get around 210MB/s with a WDC 7200 SATA drive, but it's not quite stable while I'm using WT cache.

  • KuJoeKuJoe Member, Host Rep

    I have the RAID set for WB. We were able to get over 240MB/s write speeds with CentOS 6.0, but it appears the 2.6.18 kernel in CentOS 5.7 does not have the IO improvements of 2.6.32, which makes me sad to see. :(
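For reference, on LSI cards the WB/WT cache policy being discussed can be inspected and changed with the MegaCli utility. The install path below is a common default and may differ on your system.

```shell
# Show the current cache policy (WB vs WT) for all logical drives
# on adapter 0.
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -a0

# Switch to write-back. Only do this with a healthy BBU, since
# cached writes are otherwise lost on power failure.
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -LAll -a0
```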

  • Mon5t3rMon5t3r Member
    edited November 2011

    Mmm, I think this is from my RAID card. I'm using an LSI MegaRAID 9260-4i with CentOS 5.7 too. Try playing around with the stripe size; I saw my card does better with 128k or 256k, not 64k or even 512k/1m stripe sizes. I forgot to take screenshots or copy-paste the results, but as far as I can remember this is what I got:

    Stripe Size :

    64k = ~130MB/s

    128k = ~180MB/s

    256k = ~210MB/s

    512k = ~110MB/s <== what???

    1m = I gave up; I'm tired of using my VPS to run the Supermicro IPMI viewer in a browser. Yes, I have 3 mouse cursors on the screen! Hahaha, blaming my home internet connection!!!
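One caveat for anyone repeating this experiment: stripe size is fixed when the logical drive is created, so comparing sizes means rebuilding the array each time. To see what an existing MegaRAID volume is using (the MegaCli path is a common default, adjust as needed):

```shell
# Print the stripe ("strip") size of every logical drive on adapter 0.
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LAll -a0 | grep -i strip
```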

  • VS_MattVS_Matt Member
    edited November 2011

    The sad thing is, though, that these prices are hiked because of the floods. I can almost guarantee some providers are going to milk this as much as they can, and the HD prices probably won't ever go back down to what they once were.

    We have no intention of making a profit out of the situation ourselves; however, when we are paying £180 for a 500GB disk that was £35 a month ago, we have no choice but to pass some of the cost on to clients. We are still charging £5/month for a 500GB disk at the moment, as we bought a bulk of disks a couple of days before the price hike. Whether those last us until production starts again will determine what happens to pricing; if we have to buy any more disks at the higher price, we are likely to have to charge £10-£15, or a setup fee, to cover the extra cost of the disk.

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @VS_Matt said: We have no intention of making a profit out of the situation ourselves however when we are paying out £180 for a 500GB disk which was £35 a month ago we have no choice but to pass some of the cost on to client's. We are still charging £5/month for a 500GB disk at the moment as we bought a bulk of disks a couple of days before the price hike. Depending whether those last us until product starts again will depend on what happens to pricing as if we have to buy any more disks at the higher price then we are likely to have to charge £10-£15 to cover the extra cost of the disk or a setup fee.

    Troll Tim was jabbering about numbers not returning until well into next year, not just Spring like we were hoping :(

    Francisco

  • VS_MattVS_Matt Member
    edited November 2011

    @Francisco said: Troll Tim was jabbering about numbers not returning until well into next year, not just Spring like we were hoping :(

    Originally we were hearing March; however, as you say, our suppliers are now talking about Q3/Q4, which is why we went for the bulk order. We have also been told to start looking at buying disks direct from HP with our servers, as they are not expecting any price hikes due to their disks being sourced from China. Although they are generally more expensive, they are at least in line with the rest of the market at the moment :).

  • KuJoeKuJoe Member, Host Rep

    At my day job we were told by HP that getting orders filled for hard drives will be very hard to do these days. I was surprised because we pay a premium for hardware with them (then again we also bill departments >$3000 per VPS).

  • KuJoeKuJoe Member, Host Rep

    @Mon5t3r said: try to play around with stripe size

    Wish I thought of this before putting the server into production. We were happy with ~150MB/s so we didn't look into it any further. On our next build we'll play with the settings to see what we can get out of them. Thanks for the input. :)

  • I run my 10 VPS servers on 20TB of RAID 10 storage, which gives the data more protection.

  • @KuJoe

    I was going to say that is low for RAID10; we usually see 230MB/s when the box is empty. And for 10K drives that's a weird reading.

  • @Francisco said: numbers not returning until well into next year,

    Some of the estimates I've been reading in various reports have supply stabilizing in 2012Q2, but pricing not returning to pre-flood levels until well into 2012Q4; they justify this by stating that stock levels will be completely exhausted and need to be rebuilt before demand subsides and pricing can fall.

  • KuJoeKuJoe Member, Host Rep

    @VMPort said: I was going to say that is low for RAID10; we usually see 230MB/s when the box is empty. And for 10K drives that's a weird reading.

    I agree, we were seeing over 240MB/s on the same drives using the same RAID with a different OS.

  • Yep, I always find Ubuntu reports the best dd results.
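For anyone reproducing these numbers: the MB/s figures in this thread are from dd-style sequential write tests. A minimal version looks like the following; the target path is just a stand-in, so point it at a file on the array you actually want to measure.

```shell
# 1 GiB sequential write. conv=fdatasync forces a flush before dd
# exits, so the reported speed reflects the disks rather than the
# page cache.
TESTFILE=/tmp/dd-bench.img
dd if=/dev/zero of="$TESTFILE" bs=1M count=1024 conv=fdatasync
rm -f "$TESTFILE"
```

As the thread shows, different distributions and kernels can report noticeably different numbers on identical hardware, so only compare runs made on the same OS.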

  • KuJoeKuJoe Member, Host Rep

    CentOS 6.0 was better than CentOS 5.7 but CentOS 5.7 is more stable for us. :(

  • It's more stable for everyone at the moment, I think haha

  • KuJoeKuJoe Member, Host Rep

    Yeah, we only have 1 node running CentOS 6.0 and that's because it handles the AFS drives better.

  • I'm using RAID 1 on my dedibox that hosts webshops... didn't know what to do with 2TB of HD space, and RAID 1 seemed the better way to go.
