Comments
I was getting 45 MB/s read speed from a RAID 10 array of Seagate 15K SAS disks on an LSI MegaRAID controller, and I believe that's very bad. Their reply was that yes, it's bad, but around 150 MB/s is usual — and that too is unacceptable to me. So I'm starting this thread to collect a few outputs showing there are people out there easily getting 200-300 MB/s on HDDs. How the hell can you even think of satisfying me with such low rates?
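For reference, a minimal sketch of the dd write test most replies in this thread are quoting (assumption: GNU coreutils dd and a filesystem on the array under test; sb-io-test is just a scratch file name):

```shell
# The usual LowEndTalk-style dd write test (run it in a directory on the array):
#
#   dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync; rm -f sb-io-test
#
# Sanity-check the test size: 64 KiB blocks x 16384 blocks = 1 GiB written.
bs=$((64 * 1024))
count=$((16 * 1024))
total=$((bs * count))
echo "$total"   # 1073741824 bytes — the "1.1 GB" dd reports
```

The `conv=fdatasync` matters: without it, dd reports how fast the page cache absorbed the writes, not how fast the array did.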
With 15k sas drives that would be really low.
With SAS, wouldn't 200 be the low end?
@24khost - what would an ideal speed be?
If you want people to help you out and contribute, quit acting like a fucking prick.
@MrObvious - I seriously don't know about that. Do you have any technical info on speeds, or dd outputs of such an array?
This.
But a quick search came up with this thread in particular, just have to do some legwork to filter through Raid-10 setups. http://www.webhostingtalk.com/showthread.php?t=1032917
Looks like 200MB/s is about the norm. Low results can also indicate that the volume is doing a background initialization.
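A hedged sketch of how to check for that: on an LSI MegaRAID controller, background-initialization progress can be queried with MegaCli (binary name, path, and flags vary by install; these are from LSI's MegaCli documentation, and the sample output line below is hypothetical):

```shell
# Query background-initialization (BGI) progress on all logical drives:
#
#   MegaCli64 -LDBI -ShowProg -LAll -aAll
#
# A hypothetical output line looks like the sample below; extract the percentage:
sample='Background Initialization Progress on VD #0 Completed 37% in 84 Minutes.'
pct=$(printf '%s\n' "$sample" | grep -o '[0-9]\+%' | head -n 1)
echo "$pct"
```

If a BGI is still running, benchmark again after it finishes before drawing conclusions.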
No, 200MB/s is way below what I expect of them
He is saying that is about as good as you're going to get, on average
@dougmanes, read 24khost's comment, you'll understand.
It's not a bad speed at all if you have no BBU and write cache enabled. Google will tell you why; you should learn how to use it over the summer.
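To expand on that a bit: without a healthy BBU, MegaRAID controllers typically fall back to WriteThrough, which hurts sequential dd numbers badly. The current policy appears in the `MegaCli -LDInfo -LAll -aAll` output; the sample line below is hypothetical, and this is only a sketch of how to read it:

```shell
# "Current Cache Policy" is a real field in MegaCli LDInfo output;
# the values here are made up for illustration.
sample='Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU'
case "$sample" in
  *WriteBack*)    mode=write-back ;;
  *WriteThrough*) mode=write-through ;;
  *)              mode=unknown ;;
esac
echo "$mode"   # write-through: expect dd results well below cached figures
```

If it reports WriteThrough with a "No Write Cache if Bad BBU" policy, check the BBU before blaming the disks.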
On average our nodes do 350MB/s with raid 10 and 4 x 2TB drives.
Is that when you run your servers in a vat of custard? For all the information and variables given, it might as well be.
I never understood why every benchmark here is 99% sequential writes and 1% reads, when real-world use is more like 99% reads and 1% sequential writes.
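A rough read-side counterpart to the write tests in this thread, as a sketch (assumes root, and a leftover sb-io-test file from a prior write run). Without dropping caches first you benchmark RAM, not the disks:

```shell
# Read test — drop the page cache first, then read the scratch file back:
#
#   sync; echo 3 > /proc/sys/vm/drop_caches
#   dd if=sb-io-test of=/dev/null bs=64k
#
# For scale: at the OP's 45 MB/s, reading the 1 GiB test file would take
# (dd's MB here means 10^6 bytes):
secs=$(awk 'BEGIN { printf "%.1f\n", 1073741824 / 45000000 }')
echo "$secs seconds"
```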
@BenND
What kind of setup are you running? Drives/raid card / configuration.
What stripe size did you use for the array? Was it completely synced up when you did the tests?
It would be much more productive to tell us exactly how your array is setup so we can help narrow down any problems, rather than shove a bunch of random results from an Internet forum in front of your host and saying "I want that!".
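A sketch of what "tell us exactly how your array is set up" means in practice — collect the basics into one pasteable report (these are common Linux tools; any that are missing or unreadable on your box are simply noted in the report):

```shell
# Gather array/disk basics into a single file to paste in the thread.
# Add your RAID vendor's CLI (e.g. MegaCli, arcconf) to the list if installed.
report=array-info.txt
: > "$report"
for cmd in 'cat /proc/mdstat' 'lsblk -o NAME,SIZE,ROTA,TYPE' 'lspci'; do
    printf '== %s ==\n' "$cmd" >> "$report"
    $cmd >> "$report" 2>&1 || printf '(not available)\n' >> "$report"
done
wc -l < "$report"   # prints the report's line count
```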
1073741824 bytes (1.1 GB) copied, 6.0971 s, 176 MB/s
(RAID10, 4x2TB Hitachi, LSI RAID)
SSD RAID10, 12 drives (6+6), SLC, SAS2 (6Gbit dedicated per port) on an E5 system (not EDIS related):
root@storage-li-04:/ssd# dd if=/dev/zero of=sb-io-test bs=64k count=160k conv=fdatasync; rm sb-io-test
10737418240 bytes (11 GB) copied, 2.61926 s, 4.1 GB/s
@Alex_Liquidhost
Adaptec 6405 with 4 x 1 or 2 TB barracuda drives
Wow.
OK, I'll also play the irrelevant-test game:
dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync; rm -f sb-io-test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.455404 s, 2.4 GB/s
dd if=/dev/zero of=sb-io-test bs=128k count=144k conv=fdatasync; rm -f sb-io-test
147456+0 records in
147456+0 records out
19327352832 bytes (19 GB) copied, 4.45218 s, 4.3 GB/s
Software raid10 on 4x1TB SATA drives on MB integrated controller.
4x 300GB SAS 15K RPM, adaptec 2405 RAID 10 with write cache disabled
about 310 MB/s
I could have sworn SAS did better than that.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.0736 s, 107 MB/s
4 x 600GB 15K SAS drives
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.35623 seconds, 200 MB/s
[root@server2436 ~]# dd if=sb-io-test of=sb-io-test2 bs=1M count=1k conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.3636 seconds, 200 MB/s
[root@server2436 ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.57498 seconds, 235 MB/s
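A note on those two runs, since they sync differently: `conv=fdatasync` issues a single fdatasync() after all writes, while `oflag=dsync` opens the file with O_DSYNC and syncs every 1 MiB block — a controller with a write-back cache can hide most of that per-block penalty by acknowledging each sync from cache. dd's MB/s figure is just bytes / seconds / 10^6; as a sketch, reproducing the 235 MB/s line from the dsync run above:

```shell
# dd reports MB/s as total bytes divided by elapsed seconds, in 10^6-byte MB.
mbs=$(awk 'BEGIN { printf "%.0f\n", 1073741824 / 4.57498 / 1000000 }')
echo "$mbs MB/s"   # matches dd's reported 235 MB/s
```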
@rethinkvps: THANK YOU!!! Exactly what I needed. How many VMs on that node now?
I actually find it good for the number of drives.
Maybe you can share your findings then OP?
30
@rethinkvps
We have a full node with 6 x 400GB 15k SAS drives and it does 300MB/s :O
Our node is full at 30; we monitor it and realized anything beyond 30 would start compromising the I/O, so we stopped orders on it.
It's N E A T