How long does it take to build a RAID 10 array fresh?
So I'm building a RAID 10 array on an Adaptec 6405E with four 300GB WD Raptors. I did a 'secure erase' from the Adaptec menu and then selected 'build & verify' on the RAID creation menu. I am only getting about 87MB/s on a dd test from within the CentOS 6 install I did... shouldn't I be getting in excess of 200MB/s? I checked the drives with the latest version of smartctl and they are all fine.
(To get the 6405E to work with CentOS I installed the drivers Adaptec made available pre-install; then, once installed, I upgraded the kernel and installed the kmod-aacraid module, because the Adaptec drivers do not stick across kernel updates.)
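Roughly the sequence, for anyone else hitting this (I'm assuming here that kmod-aacraid comes from a kABI-tracking repo such as ELRepo; adjust for wherever you actually get the package):
# assumed: a kABI-tracking kmod repo (e.g. ELRepo) provides kmod-aacraid for EL6
yum install kmod-aacraid      # weak-updates linking is what keeps the module across kernel updates
yum update kernel
reboot
lsmod | grep aacraid          # confirm the controller module loaded on the new kernel
modinfo aacraid | head        # check which driver version actually got picked up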
Comments
Less than an hour
@ShardHost Well, this was over 12 hours ago... I'm pretty confused as to the cause of the slow IO. I'm using a 512k block size, but it doesn't matter what block size I use on the dd test; it's pretty much the same speed.
Is it all Raid 10?
@ShardHost 4 disks in raid10 yes.
Have you tested the disk speeds individually?
@vdnet I did not this time, but I have tested them before and I know they get over 100MB/s each unless they are bad. SMART is reporting they are good, and I also ran SMART self-tests on these drives and they pass.
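For what it's worth, this is roughly what I run per drive; the /dev/sg numbers and the -d sat passthrough are assumptions that happen to work behind the aacraid driver here and may differ elsewhere:
# assumed: the drives behind the 6405E show up as SCSI generic devices (/dev/sg1, /dev/sg2, ...)
smartctl -d sat -a /dev/sg1            # health, attributes and error log for one drive
smartctl -d sat -t short /dev/sg1      # start a short self-test
smartctl -d sat -l selftest /dev/sg1   # read the self-test log afterwards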
Update with dd->
[[email protected] ~]# dd if=/dev/zero of=test count=4096 bs=1024k
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.7691 s, 312 MB/s
[[email protected] ~]# dd if=/dev/zero of=test count=4096 bs=1024k
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 47.9431 s, 89.6 MB/s
[[email protected] ~]# rm test
rm: remove regular file `test'? y
[[email protected] ~]# dd if=/dev/zero of=test count=4096 bs=1024k
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.9655 s, 308 MB/s
[[email protected] ~]#
If the file 'test' doesn't exist I get 312MB/s
If the file 'test' does exist I get 89.6MB/s
Then you can see I remove the file and run the test again and get close to 312MB/s again.
If I use conv=fdatasync I get about 87MB/s - I can't make sense of what is going on here when the disks should have over 100MB/s each.
[[email protected] ~]# rm test
rm: remove regular file `test'? y
[[email protected] ~]# dd if=/dev/zero of=test count=4096 bs=1024k conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 48.9699 s, 87.7 MB/s
In all my tests, on all the servers that I have, it is faster if you remove the test file first... but I am still unable to produce anything over 100MB/s, and it's ticking me off. Even the IOPS are terrible!
--- /dev/sda2 (device 557.5 Gb) ioping statistics ---
445 requests completed in 3003.6 ms, 150 iops, 0.6 mb/s
min/avg/max/mdev = 2.0/6.7/15.7/2.1 ms
I get that on a busy raid 1 node with two of the same disks...
--- /dev/sda2 (device 278.5 Gb) ioping statistics ---
425 requests completed in 3004.4 ms, 143 iops, 0.6 mb/s
min/avg/max/mdev = 1.8/7.0/18.8/2.7 ms
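To take the page cache out of the picture on the sequential side, the next thing I'll try is a direct-I/O run next to the fdatasync one, something like (assuming a GNU dd with O_DIRECT support):
dd if=/dev/zero of=test bs=1024k count=4096 oflag=direct     # bypasses the page cache entirely
dd if=/dev/zero of=test bs=1024k count=4096 conv=fdatasync   # cached writes, but flushed before dd reports the rate
rm -f test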
You try kicking it? It's a very technical approach but it works.
No - here is the info from the controller; maybe someone can help with this extra detail. (No, I do not have the write cache enabled. Does it need to be enabled to get the performance I am looking for? Without it, do you get less performance than a RAID 1 setup? I thought RAID 10 basically writes in a RAID 0 fashion (a stripe of mirrors), so I should be getting double the performance of RAID 1 out of the box with no cache, right?)
What is this - Performance Mode : Default/Dynamic ?????
Controller Status : Optimal
Channel description : SAS/SATA
Controller Model : Adaptec 6405E
Controller Serial Number : 1A4211CC920
Physical Slot : 128
Temperature : 54 C/ 129 F (Normal)
Installed memory : 128 MB
Copyback : Disabled
Background consistency check : Disabled
Automatic Failover : Enabled
Global task priority : High
Performance Mode : Default/Dynamic
Stayawake period : Disabled
Spinup limit internal drives : 0
Spinup limit external drives : 0
Defunct disk drive count : 0
Logical devices/Failed/Degraded : 1/0/0
MaxCache Read, Write Balance Factor : 3,1
NCQ status : Enabled
Statistics data collection mode : Enabled
Controller Version Information
BIOS : 5.2-0 (18512)
Firmware : 5.2-0 (18512)
Driver : 1.1-7 (28000)
Boot Flash : 5.2-0 (18512)
Logical device information
Logical device number 0
Logical device name : somr10
RAID level : 10
Status of logical device : Optimal
Size : 571382 MB
Stripe-unit size : 256 KB
Read-cache mode : Enabled
Write-cache mode : Disabled (write-through)
Write-cache setting : Disabled (write-through)
Partitioned : Yes
Protected by Hot-Spare : No
Bootable : Yes
Failed stripes : No
Power settings : Disabled
Logical device segment information
Group 0, Segment 0 : Present (Controller:1,Connector:0,Device:0) WD-WXD1E71MJJE7
Group 0, Segment 1 : Present (Controller:1,Connector:0,Device:1) WD-WXL408048838
Group 1, Segment 0 : Present (Controller:1,Connector:0,Device:2) WD-WXF1E81PXPE2
Group 1, Segment 1 : Present (Controller:1,Connector:0,Device:3) WD-WXC0CA9K7262
Physical Device information
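If the write cache does turn out to be the issue, this is roughly how I'd flip it from the OS with arcconf (controller 1 and logical drive 0 are taken from the output above; double-check the setcache syntax against your arcconf version):
arcconf getconfig 1 ld 0                 # confirm the current read/write cache modes
arcconf setcache 1 logicaldrive 0 wb     # write-back; only really safe with a battery/flash module or trusted power
arcconf setcache 1 logicaldrive 0 wt     # back to write-through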
@Corey - Can you try building a RAID 1 first using disks #0 and #2, and besides that, enable the write cache? The strange thing is that you are building the RAID using two different types of HDDs (WD3000HLHX and WD3000HLFS).
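Something like this should build that test mirror from the CLI, going by the segment list above (controller 1, channel 0, devices 0 and 2; the exact arcconf create syntax may vary by version):
arcconf create 1 logicaldrive max 1 0 0 0 2   # RAID 1 across channel 0 devices 0 and 2, using the full size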
Just so you know, the ioping results you listed aren't labeled, so we don't know which type of test each one is. For example:
starting ioping tests...
ioping disk I/O test (default 1MB working set)
disk I/O: /dev/sda
--- /dev/sda (device 3726.0 Gb) ioping statistics ---
5 requests completed in 4077.4 ms, 65 iops, 0.3 mb/s
min/avg/max/mdev = 12.6/15.3/21.3/3.1 ms
seek rate test (default 1MB working set)
seek rate: /dev/sda
--- /dev/sda (device 3726.0 Gb) ioping statistics ---
215 requests completed in 3013.7 ms, 72 iops, 0.3 mb/s
min/avg/max/mdev = 4.6/13.9/26.0/3.7 ms
sequential test (default 1MB working set)
**********************************************
sequential: /dev/sda
--- /dev/sda (device 3726.0 Gb) ioping statistics ---
3643 requests completed in 3000.0 ms, 1316 iops, 329.1 mb/s
min/avg/max/mdev = 0.4/0.8/15.9/0.5 ms
sequential cached I/O: /dev/sda
--- /dev/sda (device 3726.0 Gb) ioping statistics ---
7155 requests completed in 3001.3 ms, 2805 iops, 701.3 mb/s
min/avg/max/mdev = 0.1/0.4/25.5/0.9 ms
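For reference, those map onto ioping roughly like this (flags as in the ioping builds the usual benchmark scripts ship with; check ioping -h on your version):
ioping -c 10 /dev/sda     # default latency test, a handful of individual requests
ioping -R /dev/sda        # seek rate: as many random requests as it can issue in ~3 seconds
ioping -RL /dev/sda       # sequential test with larger requests
ioping -RLC /dev/sda      # sequential cached I/O, so mostly measuring RAM rather than the disk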
@gbshouse I tested just the HLFS and it was getting over 90MB/s by itself. I've also tested other HLHX drives and they get over 100MB/s by themselves... so that only leaves the GLFS, which from my research is ON PAR with the HLFS and was released after it. These drives are all very similar, and it isn't odd at all to use different model disks in an array.
@PAD
It was the 'Measure disk seek rate (iops, avg)' test - so it shouldn't be any higher on the RAID 10 volume anyway; it would only be higher if I ran the 'Measure disk sequential speed' test, right?
Disk Sequential tests
Raid1
--- /dev/sda2 (device 278.5 Gb) ioping statistics ---
1292 requests completed in 3001.9 ms, 442 iops, 110.4 mb/s
min/avg/max/mdev = 0.5/2.3/87.5/3.0 ms
Raid10
--- /dev/sda2 (device 557.5 Gb) ioping statistics ---
1682 requests completed in 3001.4 ms, 582 iops, 145.6 mb/s
min/avg/max/mdev = 1.6/1.7/10.4/0.4 ms
Still not impressive...
@Corey - Since the WD3000HLHX is twice as fast as the WD3000HLFS, I would try those first: create a simple RAID 1 with the write cache enabled and check the results.
@gbshouse They are not twice as fast; they just have an interface that is twice as fast. These drives can't push over 300MB/s on their own. Why do you suggest enabling the write cache?
@Corey - "network administrators know that enabling the RAID controller cache offers significant performance benefits, such as reduced latency in I/O requests, bandwidth and queue depths that surpass software application limits, and on-the-fly parity calculations on sequential writes."
@gbshouse I do know this, but I'm expecting double the performance WITHOUT write cache, shouldn't I be getting it?
@Corey - In my view, you should get better results WITH the cache enabled.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.5846 s, 163 MB/s
This is with the write cache on... why do you get the expected performance with the write cache on, but not when it is off? (Actually I would expect to get over 1GB/s with the cache, but I'm guessing fdatasync prevents that.) Does software RAID use some sort of write cache to help saturate the disk interface as well? What about cards that have no write cache at all - how would they have the same performance?
Also - it seems like turning the disk cache on or off makes no difference, and the disk cache can't be backed up by a BBU. Am I correct?
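This is roughly how I've been checking the drive cache and the battery from the OS, in case it matters (the greps are approximate, since the exact field names depend on the firmware):
arcconf getconfig 1 pd | grep -i "write cache"   # per-drive cache state as the controller reports it
arcconf getconfig 1 ad | grep -i -A3 battery     # whether a battery/flash module is reported at all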