Comments
We have run a few different builds in the past few months. Will post the results and SSD models soon for benchmark comparison.
I love you.
Surely. I wouldn't be surprised if @Nick_A had at least tested some. I know he loves to toy with different configurations.
It's useless unless you're talking about Haswell. There aren't enough SATA3 ports. That was the only reason to get a RAID card.
Xeon E3 V3 boards should have enough SATA3 ports?
I did say unless you are talking about Haswell?
@concerto49 ok, correct
Is this a marketing thread?
get out of here
Is it possible to get a larger disk on the VPSDime 6GB RAM plan?
It became a marketing thread
Forgot to reply to your message... I will think about which plan I want first. Thanks for the GST help.
smh
4 x Intel 530 240GB SWRAID10 Results:
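For context, a 4-drive SWRAID10 like this is typically built with mdadm along these lines (device names and array name are placeholders, not taken from the thread):

```shell
# Create a 4-drive software RAID10 array; /dev/sd[b-e] stand in for the SSDs.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync; benchmarks taken before it finishes will be low.
cat /proc/mdstat
```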
@serverian this doesn't look good. Is the raid resyncing at the moment or something?
Also what are the results for RAID1 with the same drives?
And have you tweaked the /sys/class/block/sd?/queue/ settings?
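For anyone unfamiliar, those are the per-device block queue knobs; a typical SSD tuning pass looks something like this (values are illustrative, not a recommendation from this thread, and sdX is a placeholder):

```shell
# Illustrative per-SSD block queue tuning; run as root, sdX is a placeholder.
echo noop > /sys/class/block/sdX/queue/scheduler    # skip elevator reordering on SSDs
echo 0    > /sys/class/block/sdX/queue/rotational   # hint that the device is non-rotational
echo 1024 > /sys/class/block/sdX/queue/nr_requests  # allow a deeper request queue
```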
Haswell's C226-based boards with 6x SATA3 choke at around 800M-1G based on RAID0 testing; you won't get >1G write across all drives, so RAID10 would land around what you see now, which is 500M or so.
Only way to get close to 1G on RAID10 is to run 2 drives onboard and 2 drives on a separate controller. LSI 9220s are cheap, support SATA3 and can do 1.6G across 4x SSD SoftRAID0.
Do you mean using LSI 9220 as HBA and not RAID and instead doing the SWRAID with drives connected to it?
What kernel is that running on? I get more than that with 2x Samsung 830s in mdadm RAID1 on 3.4.x kernels. Post your cat /proc/mdstat output please; I suspect you have bitmapping enabled to get such poor ioping results.
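To check for both the resync and the write-intent bitmap mentioned here, something like the following should show them (md0 is a placeholder for the array):

```shell
# Shows resync progress and a "bitmap:" line if an internal bitmap is enabled.
cat /proc/mdstat

# Remove the internal write-intent bitmap, which can hurt write latency.
mdadm --grow --bitmap=none /dev/md0
```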
We are using 4x250GB Samsung EVO in SW RAID10 for all new shared hosting setups, and 4x160GB Intel 320 in SW RAID10 for all earlier setups. Performance, reliability and account capacity are significantly better than our previous SW RAID10 SATA setups.
I hope there's either a resync/rebuild in progress or you have a bitmap cache enabled, as I would expect far better than that. I have servers with 4x 1TB SATA SW RAID10 pulling over 300MB/s.
Yes, in my earlier tests I managed to do only 1.3G RAID0 HWRAID on the LSI 9220 likely due to the lack of cache on the card itself.
LSI 9220 (4) HWRAID0 = 1.3G Write
LSI 9220 (4) MDRAID0 = 1.5G Write
LSI 9220 (2) + Intel C204 SATA3 (2) MDRAID = 1.6G Write, 2.0G Read
Intel C226 SATA3 (4) MDRAID = 800M-1G Write, 1G Read
My conclusions were:
1) MDRAID performed better than HWRAID on cacheless LSI 9220
2) Intel C226 with 6 SATA3 ports can only achieve max throughput of 1G. I assume C224 with 4 SATA3 ports should be similar. Looking at Intel's block diagram for the C22x chipsets, I'm very suspicious the total bandwidth is only 6Gb/sec across all the ports instead of having sufficient bandwidth per-port, but I didn't do enough testing to confirm this.
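For anyone comparing against these numbers, write figures like these usually come from a dd sequential test; a sketch of that test (the path and size are my assumptions, not the exact command used above) is:

```shell
# Sequential write test; conv=fdatasync flushes to disk at the end so the
# reported speed reflects the drives, not the page cache. Path and size
# are examples only.
dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fdatasync
rm -f ./ddtest
```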
@Ash_Hawkridge : So you have another business started? Will you sell it again in the future?
It would max out at 600-750 GB/sec then, not at 800-1000.
The limit is the 4GB/s DMI link from the C226 chipset to the CPU on Lynx Point. It is bi-directional.
Also, RAID0 on C226 tops out at about 1.2GB/s. So that's the limitation.
Yeah, but the numbers are pretty close around there.
Guess that's the limitation of the chipset; 6x SATA3 ports would pull 200M/port on average. Might as well stick to SATA2, but I've already standardized my purchases on the X10SLH, so oh well.
You can use 4 drives, making it 300M/port.
Maybe try with the X10SL7-F (LSI2308 integrated)?
I use this. The dd result without virtualization is 1GB/s; with Xen virtualization the speed sometimes shows 1GB/s, sometimes 500MB/s to 700MB/s.
@rds100 GB/sec
BTW, my Seagate HDD gave me this:
Avg. Read and Write Rate: 150 MB/s
@MikeIn ok, make that MB/sec