New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
If you don't have a good RAID card, software RAID is better IMHO. With a modern CPU it takes next to nothing to perform as well as hardware RAID, you can swap in a different drive model, and you don't need expensive RAID-edition drives with TLER enabled.
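The portability point above can be sketched with Linux md (software RAID). This is a minimal sketch assuming a 4-disk RAID10; the device names are examples, and these commands need root on a box with real spare disks:

```shell
# Create a 4-disk software RAID10 array. The md metadata lives on the
# disks themselves, so the set can be moved to any Linux machine --
# no matching controller card required.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

# On the new machine, the array is reassembled just by scanning:
mdadm --assemble --scan
cat /proc/mdstat   # check that md0 came up and is in sync
```
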
If due to the floods you can't find the exact same drive, it's fine.
If you have to put your drives in another machine you can, you don't need to have the exact raid card.
hw is worth it only if it's free and with a good card & a BBU imho
Well damn! After 24 hours of running, CentOS 6.0 failed me once again (damn ip6tables-related kernel panics). We rebuilt the server with CentOS 5.7 and the write speeds have dropped to 148MB/s on our RAID10 setup, so it's only about double the speed of the RAID1.
yum install e4fsprogs        # provides ext4 tools on CentOS 5
mkfs.ext4 /dev/sdaX          # format the partition as ext4
mount /dev/sdaX /vz          # mount it at /vz
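For what it's worth, the MB/s figures quoted in this thread typically come from a dd run with a forced flush. A minimal sketch (the target path and 1GB size are just examples; point it at the filesystem under test, e.g. /vz on the RAID10):

```shell
# Sequential write benchmark. conv=fdatasync makes dd flush to disk
# before reporting, so the MB/s figure reflects the array rather than
# the page cache.
TARGET=${TARGET:-/tmp/ddtest.bin}
dd if=/dev/zero of="$TARGET" bs=1M count=1024 conv=fdatasync
rm -f "$TARGET"
```
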
Already done.
@KuJoe what disks are you using in that RAID10 setup, and is that a new node or an occupied one?
I forget the brand off-hand but they are the 10K SAS drives provided by Dell. It's a brand new node without any clients on it. I figured I'd compare apples to apples by running the tests on completely empty nodes.
@KuJoe is that RAID10? Are you using WB or WT? Strange, I get around 210MB/s with WDC 7200RPM SATA drives, though it's not quite stable when I'm using WT cache.
I have the RAID set for WB. We were able to get over 240MB/s write speeds with CentOS 6.0 but it appears the 2.6.18 kernel for CentOS 5.7 does not have the IO improvements of 2.6.32 which makes me sad to see.
Hmm, I think this is down to my RAID card. I'm using an LSI MegaRAID 9260-4i with CentOS 5.7 too. Try playing around with stripe size; I found my card does better with 128k or 256k, not 64k or even 512k/1m. I forgot to take screenshots or copy the results, but as far as I remember:
Stripe Size :
64k = ~130MB/s
128k = ~180MB/s
256k = ~210MB/s
512k = ~110MB/s <== what???
1m = untested; I got tired of running the Supermicro IPMI console through a browser on my VPS (three mouse cursors on screen, haha, blaming my home internet connection!)
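Changing the stripe size on a MegaRAID card generally means recreating the logical drive. A rough MegaCli sketch, assuming a 4-drive RAID10 on adapter 0 (the enclosure:slot pairs are placeholders, and exact flag spellings vary by MegaCli version, so check the card's documentation first):

```shell
# Show the current logical drive config, including stripe size (read-only).
MegaCli64 -LDInfo -Lall -aAll

# List physical drives to find your real enclosure:slot IDs.
MegaCli64 -PDList -aAll

# Recreate a 4-drive RAID10 with a 256k stripe and write-back cache.
# [252:0] etc. are example enclosure:slot pairs -- substitute your own.
# WARNING: this destroys any existing data on those drives.
MegaCli64 -CfgSpanAdd -r10 -Array0[252:0,252:1] -Array1[252:2,252:3] \
          WB -strpsz 256 -a0
```
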
We have no intention of profiting from the situation ourselves, but when we are paying £180 for a 500GB disk that cost £35 a month ago, we have no choice but to pass some of the cost on to clients. We are still charging £5/month for a 500GB disk at the moment, as we bought a batch of disks a couple of days before the price hike. Whether those last us until production restarts will determine what happens to our pricing: if we have to buy more disks at the higher price, we will likely have to charge £10-£15, or add a setup fee, to cover the extra cost.
Troll Tim was jabbering about numbers not returning until well into next year, not just Spring like we were hoping
Francisco
Originally we were hearing March; however, as you say, our suppliers are now talking about Q3/Q4, which is why we went for the bulk order. We have also been told to look at buying disks direct from HP with our servers, as HP is not expecting any price hikes because its disks are sourced from China. Although they are generally more expensive, they are at least in line with the rest of the market at the moment.
At my day job we were told by HP that getting orders filled for hard drives will be very hard to do these days. I was surprised because we pay a premium for hardware with them (then again we also bill departments >$3000 per VPS).
Wish I thought of this before putting the server into production. We were happy with ~150MB/s so we didn't look into it any further. On our next build we'll play with the settings to see what we can get out of them. Thanks for the input.
I run my 10 VPS servers on 20TB of storage with RAID10, which gives the data more protection.
@KuJoe
I was going to say that's low for RAID10; I usually see 230MB/s when the box is empty. And for 10K drives that's a weird reading.
Some of the estimates I've been reading in various reports have supply stabilizing in 2012 Q2 but pricing not returning to pre-flood levels until well into 2012 Q4; they justify this by saying stock levels will be completely exhausted and need to be rebuilt before demand subsides and pricing can fall.
I agree, we were seeing over 240MB/s on the same drives using the same RAID with a different OS.
Yep, I always find Ubuntu reports the best dd results.
CentOS 6.0 was better than CentOS 5.7 but CentOS 5.7 is more stable for us.
It's more stable for everyone at the moment, I think, haha.
Yeah, we only have 1 node running CentOS 6.0 and that's because it handles the AFS drives better.
I'm using RAID1 on my dedibox that hosts webshops... Didn't know what to do with 2TB of HD space, and RAID1 seemed the better way to go.