Test the disk I/O of your VPS
A personal VPS of mine on our Germany Node.
[root@test /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 15.1204 seconds, 71.0 MB/s
IntoVPS (512MB) where we host our master
[root@master ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 18.8047 seconds, 57.1 MB/s
prefiber.nl (OpenVZ - VPS Linux #2)
ksx4system@domare:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 32.5573 s, 33.0 MB/s
ultimahost.pl (OpenVZ - ovz.512MB, it's not lowendbox)
ksx4system@maryland:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 16.0894 s, 66.7 MB/s
ramhost.us (OpenVZ - custom plan)
ksx4system@magnus:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.63024 s, 141 MB/s
Again: hostigation 64 MB
VPSunlimited 1 GB XEN
Guys, have you tried ioping for random disk I/O latency tests as well as sequential disk tests? http://vbtechsupport.com/1239/
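If anyone wants a quick try, a minimal run looks something like this (the target directory is just an example; the default request size is 4KiB):
ioping -c 10 /var/tmp
It prints the latency of every request plus a min/avg/max summary at the end, which is exactly what a dd sequential test can't tell you.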
This is my QualityServers.co.uk OpenVZ Eliminator on VZ2UK...
root@localhost:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 8.66047 s, 124 MB/s
And my 128MB on VZ3UK...
root@localhost:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 17.5009 s, 61.4 MB/s
New 512MB BuyVM OpenVZ VPS, fresh out of the box so to speak.
ioping.sh random disk I/O latency results (the default is a 4K request size), which also include a dd sequential disk test.
Proof that dd sequential tests don't tell the whole story:
A 2nd VPS with 1.5GB of RAM and 15GB of fast SSD disk space on an Intel X25-M. While it dd-tests slower than BuyVM, its random disk I/O is 3-4x faster:
1.2/1.6 MB/s vs 4.6/4.3 MB/s
The problem is that you should not use dd with an SSD. SSDs and HDDs handle disk writes differently.
So what would be a better way to compare SSD vs non-SSD sequential disk I/O performance?
The reason the SSD is slower here is that the Intel X25-M's rated sequential write speed is a lot lower than that of the (probably non-SSD) VPS, not that SSDs are slower in general.
The comparison (the ioping results) was meant to highlight random disk I/O.
The reason is that SSDs are normally much faster at reading than writing, because before you can write to an SSD you first have to erase the block you want to write to. Additionally, the SSD performs wear-leveling in order to preserve the blocks: it distributes write attempts across the whole SSD rather than placing them directly one after another as on an HDD. The write speed of an SSD will steadily fall as it fills up, because the chip no longer finds an empty block, so it has to read the block first, erase its data and then write to it (read-modify-write). And this costs time. And as dd just copies sector-wise, it is not a good fit for an SSD. For SSDs, use hdparm -tT <device> instead.
Yeah, I know SSDs work differently from non-SSD disks; I'm just wondering what would be a better tool for comparing sequential disk I/O performance of non-SSDs vs SSDs if dd isn't suited. Bonnie++, sysbench?
Of course that's getting away from the point of my ioping results, which is that random disk I/O is probably a better indicator of responsiveness/performance than sequential disk I/O in a server environment.
I didn't try Bonnie++. I just used hdparm. Maybe I will have a look at Bonnie++.
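For reference, a typical invocation is something like the line below (a sketch; /dev/sda is just a placeholder, and inside an OpenVZ container you usually can't reach the raw device at all):
hdparm -tT /dev/sda
-T times cached reads (essentially memory/cache bandwidth) while -t times buffered reads from the disk itself, so run it a couple of times on an otherwise idle box.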
One might also add that the provider has to take care to optimise the use of an SSD in virtual environments, and so does your virtual Linux guest (noatime, data=writeback, elevator=noop).
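As a rough sketch of those guest-side settings (device name, mount point and ext4 are assumptions, adapt them to your own setup):
# /etc/fstab: mount an SSD-backed filesystem with noatime and writeback journaling
/dev/vda1  /srv  ext4  noatime,data=writeback  0  2
# switch the virtual disk's I/O scheduler to noop (run as root)
echo noop > /sys/block/vda/queue/scheduler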
Yeah, I usually use bonnie++ v1.03e http://www.coker.com.au/bonnie++/ but there's also the 1.96 experimental release http://www.coker.com.au/bonnie++/experimental/ which seems to tie the file size to the allocated memory: the tested file size needs to be twice the memory size.
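A minimal run along those lines could look like this (directory, sizes and user are only examples; keep -s at least twice -r):
bonnie++ -d /tmp -s 2048 -r 1024 -u nobody
-d is the test directory, -s the total file size in MB, -r the RAM size bonnie++ should assume, and -u the user to drop to when started as root.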
SSDs are slower at writing, but since there is no seek time to speak of, they excel at reading data.
Personally I do not think SSDs are ready for production VPS use: there is no RAID, hardware or software, that supports TRIM or GC. I'm sure a solution for this is around the corner.
What is the problem with SSD and RAID?
ATA TRIM does not work across multiple SSDs (RAID), only with individual SSDs. There is, however, sometimes a "garbage collection" on some SSD chipsets, but this simply works like a plain defrag. The problem with SSDs is that you should not fill more than about 85-90% of the space, otherwise you might run out of free blocks.
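If you want to check whether a drive even advertises TRIM, something like this works on the bare host (not from inside a container; /dev/sda is only an example):
hdparm -I /dev/sda | grep -i trim
A drive that supports it reports a "Data Set Management TRIM supported" line.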
Then create a partition of 80% of the whole SSD and create a software RAID1 using that partition, so 20% of the SSD will never be used or written to. Am I missing something?
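Something like the following, assuming two SSDs with 80%-sized partitions sda1/sdb1 already created (a sketch of the idea, not a recommendation):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
The untouched 20% is then left to the controller as extra spare area for wear-leveling.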
That still does not help with the lack of ATA TRIM or with the read-modify-write problem.
Nice and stable, as it's the only VPS on my test server.
BuyVM OVZ 256MB:
Quality Servers Xen Eliminator:
ZazoomVPS KVM 1GB:
@Infinity sorry for the delay in response; that VPS is not in an active state. As mentioned, there's not much more I can provide as you're not the actual client.
Secure Dragon OpenVZ 96MB:
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 24.0446 s, 44.7 MB/s
dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
1073741824 bytes (1.1 GB) copied, 27.6076 s, 38.9 MB/s
uptimevps OpenVZ 128MB:
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.0486 s, 89.1 MB/s
dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
1073741824 bytes (1.1 GB) copied, 13.2432 s, 81.1 MB/s
Ok, we get a little redundancy in here... ^-^
host1plus.com, 1 GB Xen, really slow
I guess that is average. But I never liked host1plus. Is that meant to be a "cloud" VPS?
From our BudgetBox XenPV plans... the server is almost full, maybe 3 to 5 more clients left to go, so the indicative speed should be relatively accurate for a fully loaded server.
My cell phone
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.32847 seconds, 323 MB/s
But that isn't with sync :P