Anyone here with 2TB or 4TB drives at Hetzner?
I wonder if anyone here has a server at Hetzner with 2TB or 4TB drives in software RAID-1 and would be willing to share the IO those drives are capable of delivering?
The infamous "dd" test and (if possible) "ioping" results would be great. I already have servers with the 3TB drives and would love to see whether the other models are a tad faster.
Thanks a lot in advance & kind regards
Amitz
P.S.: For comparison, here are the results of the 3TB drives
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.56555 s, 96.5 MB/s

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.67656 s, 94.6 MB/s

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.59846 s, 95.9 MB/s
ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.1 s, 693 iops, 2.71 MiB/s
min/avg/max/mdev = 195 us / 1.44 ms / 19.1 ms / 4.01 ms

ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.0 s, 2.46 k iops, 9.62 MiB/s
min/avg/max/mdev = 195 us / 405 us / 7.36 ms / 994 us

ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.1 s, 811 iops, 3.17 MiB/s
min/avg/max/mdev = 198 us / 1.23 ms / 15.4 ms / 3.16 ms
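For anyone who wants to reproduce these sequential-write numbers, the dd run above can be wrapped in a small loop. This is just a sketch: it assumes GNU coreutils dd (as on Hetzner's Linux images) and writes a temporary file named "ddtest" in the current directory.

```shell
#!/bin/sh
# Run the same sequential-write test three times and keep only the
# throughput line from each run. dd prints its statistics on stderr,
# so redirect that to stdout before filtering. conv=fdatasync forces
# the data to disk before dd reports, so the figure reflects the
# drives, not the page cache.
for run in 1 2 3; do
    dd bs=1M count=512 if=/dev/zero of=ddtest conv=fdatasync 2>&1 | tail -n 1
    rm -f ddtest
done
```

Three runs give a rough feel for the variance; for a single comparable figure, average the MB/s values it prints.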
Comments
What 3TB drives are these? 7200RPM or 5400/5900RPM? Are your partitions aligned on 4K?
To get meaningful ioping results, use
ioping -R /
2 x 2 but in LVM
Both are 2x2 and LVMed
4TB drives not in RAID1 though (JBOD)
Thank you, guys!
Didn't read it all, but did you try enabling the write cache?
hdparm -W1 /dev/sdX
(hdparm -W /dev/sdX without a value only reports the current setting.) Replace X with your device and run the tests again.
No change. I guess that write cache was already enabled...
But to be precise - It's not that I am unhappy with the disk speed. I just wanted to know whether a model change would also bring a benefit concerning disk IO. Obviously not (or at least not significantly).