Is this Bad I/O for this setup?
I have two WD Caviar Greens in RAID0, and I get around 50MB/s.
The Drives: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136939 (They say they have 100MB/s sustained)
I set them up as RAID0.
Comments
That does seem a little low. Are the disks new? Are you using a RAID card or MDADM?
Grr. My host basically told me there is NOTHING they can do.
BurstNET by any chance? I know they use green drives.
Are you using a RAID card or MDADM?
Single WD Caviar Green drive writes at 40 MB/s and reads at 60 MB/s, so you should see much more than 50 MB/s in RAID 0.
I'm not 100% sure, but I think it's Software RAID (Anybody know how to tell?).
No, I'm going with CheetahHost, who have their servers colo'd by CorporateColocation.
These drives say they have 100MB/s transfer, and people seem to get that speed too. Keep in mind these seem to be a new model.
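One way to tell (a sketch: Linux software RAID shows its arrays in /proc/mdstat, while a hardware controller would instead appear on the PCI bus):

```shell
# Software RAID (mdadm): the md driver exposes its arrays in /proc/mdstat.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no /proc/mdstat: md software RAID is not in use"
fi

# A hardware RAID controller would instead show up as a PCI device:
lspci 2>/dev/null | grep -i raid || echo "no RAID controller visible on the PCI bus"
```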
[root@191 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
511988 blocks super 1.0 [2/2] [UU]
md1 : active raid0 sda2[0] sdb2[1]
1952495616 blocks super 1.1 512k chunks
unused devices: <none>
Ok so you have a software RAID1 and a software RAID0 partition.
Chances are you're getting that speed on the RAID1 partition which would make some sense.
If you want to test the RAID0 partition try: hdparm -t /dev/md1
For RAID 0, software RAID will perform just as well as hardware. I don't think that's the issue here.
edit: Just noticed the RAID 1 partition, why would they set it up like that?
Ok, I'm confused. Are you saying I have RAID0 AND RAID1?
[root@191 ~]# hdparm -t /dev/md1
/dev/md1:
Timing buffered disk reads: 324 MB in 3.01 seconds = 107.81 MB/sec
I was actually thinking a dodgy RAID card.
On different partitions, yes. You can use mount to show where they're mounted.
[root@191 ~]# hdparm -t /dev/md1
/dev/md1:
Timing buffered disk reads: 28 MB in 3.05 seconds = 9.19 MB/sec
Well that's worrying. Presumably this is a new server with nothing else running?
# Created by anaconda on Thu Mar 15 06:36:37 2012
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/mapper/vg_191-LogVol00 / ext4 defaults 1 1
UUID=ac3e0760-8311-4942-acdd-40f70eb8b71a /boot ext4 defaults 1 2
/dev/mapper/vg_191-LogVol02 /vz ext4 defaults 1 2
/dev/mapper/vg_191-LogVol01 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
You have new mail in /var/spool/mail/root
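The fstab above routes everything through LVM; a quick sketch (assuming lsblk is available, with df as a plain fallback) to see how those /dev/mapper volumes stack on top of md1 and the physical disks:

```shell
# lsblk draws the whole block-device stack: disks -> md arrays -> LVM -> mounts.
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT 2>/dev/null
# Fallback view: mounted filesystems and the devices backing them.
df -h
```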
The only thing on the server is SolusVM and two virtual machines that aren't really even active (one has Kloxo installed with no sites being served, and the other is blank).
Oh, you've got LVM on an OpenVZ system for some unknown reason:
Also try the hdparm test for each disk. One might be dodgy:
PV Name /dev/md1
VG Name vg_191
PV Size 1.82 TiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476683
Free PE 1
Allocated PE 476682
PV UUID DzLmxJ-2K0l-7FNv-o58B-hMMp-wz5R-L47X40
/dev/sda1:
Timing buffered disk reads: 158 MB in 3.01 seconds = 52.43 MB/sec
/dev/sda2:
Timing buffered disk reads: 22 MB in 3.59 seconds = 6.12 MB/sec
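To run the same read test across every partition in one go — a sketch assuming the device names from this thread, and that hdparm is installed and run as root:

```shell
# Benchmark each partition so a single slow device stands out.
for dev in /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2; do
    if [ -b "$dev" ]; then
        hdparm -t "$dev" || echo "hdparm failed on $dev (needs root?)"
    else
        echo "skipping $dev: not present"
    fi
done
```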
My partitions are setup as:
/ = 1.92Gb
/Boot = 484mb
/vz = 1.78 Tb
Ok, so it looks like one dodgy partition, /dev/sda2, is slowing down the system.
What can I do to fix this?
To make things worse, I did an I/O test on one of the VPSes under SolusVM, and the I/O is horrible:
Contact your host and show them the hdparm results; tell them the disk is dying.
What he said
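Before filing the ticket, SMART data is good supporting evidence alongside the hdparm numbers — a sketch assuming smartmontools is installed (reallocated/pending sector counts are the usual sign of a dying disk):

```shell
# Pull the SMART health verdict and the sector-health attributes for each disk.
for dev in /dev/sda /dev/sdb; do
    if [ -b "$dev" ] && command -v smartctl >/dev/null 2>&1; then
        smartctl -H "$dev" || true       # overall PASSED/FAILED verdict
        smartctl -A "$dev" | grep -Ei 'reallocated|pending|uncorrect' || true
    else
        echo "skipping $dev: no device or smartmontools not installed"
    fi
done
```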
Ok, I'll talk to the host about it. The I/O on my whole server has pretty much dropped to 10MB/s.
Yeah, now would be a good time to ask for a RAID1 setup across the whole server as well, as RAID0 for hosting VPSs is anything but advisable.
Green drives suck; I just traded my greens in. The WD10EARS drives I had were only doing 5400 RPM, but Western Digital does not display this on their site.
I recommend the Samsung HD103UJ; I have two drives in RAID1 getting 96MB/s consistently.
All right, I've already sent in a ticket to look at the faulty drives. Thanks for all the help!
Single WD Green drive (250GB)
At least for the /boot partition you "must" use RAID1: the bootloader can't read a striped array, but it sees a RAID1 member as a plain partition.
The green drives are going to be inconsistent in their performance because they have variable RPM.
If this is also a racked server it could be due to vibration. Greens are not vibration proof like the REs or Blacks. We have seen their speed drop to 2-3 MB/s when used in a rack with many servers.
WD has a specific tool called WDidle3 to disable the aggressive idle head-parking timer.
Sounds about right. They are nice if you are conscious of your power bill (and I hear they can be pretty quiet), but performance is not their strong point at all. I think their primary use is in a home desktop/server application.
I think only the Black and RAID Edition drives are meant for enterprise usage; they might claim the Raptor is as well, but I think its only purpose is RMA trading.