Your disk I/O

Did a quick search and couldn't find any results, so I thought I'd go ahead and make a thread. Post your disk I/O results here.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
Here's one of mine:
1073741824 bytes (1.1 GB) copied, 5.78699 s, 186 MB/s
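For context: that command writes 16,384 blocks of 64 KB (1 GiB total) from /dev/zero, and conv=fdatasync forces the data to be flushed to disk before dd reports, so the figure reflects real write throughput rather than the page cache. An annotated version of the same test (the file name "iotest" is just an arbitrary choice):

# sequential write: 16384 x 64 KB = 1 GiB, flushed before the speed is reported
dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
rm -f iotest   # clean up the test file afterwards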
Comments
For the 156th time?
Turns out there's one here: http://www.lowendtalk.com/discussion/42/test-the-disk-io-of-your-vps
Search wasn't exactly being effective...
http://puu.sh/WRYJ
Wrong keywords it seems.
Let's at least discuss and propose some other standard tests in addition to the boring dd test.
Maybe also:
ioping -c 10 .
ioping -R .
@serverbear any ideas for the tests? This is strictly about disk IO tests, not CPU/network/etc.
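For anyone who hasn't used ioping: -c 10 issues ten requests against the given directory and reports per-request latency, while -R runs a seek-rate test and reports requests per second, which is a rough IOPS number. A minimal sketch, assuming ioping is installed and you run it from the filesystem you want to test:

# request latency: 10 requests against the current directory
ioping -c 10 .
# seek rate: random requests for a few seconds, reported as requests/s (rough IOPS)
ioping -R .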
[root@testvps ~]# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
CPU model : Intel(R) Xeon(R) CPU L5420 @ 2.50GHz
Number of cores : 8
CPU frequency : 2500.224 MHz
Total amount of ram : 2048 MB
Total amount of swap : 3096 MB
System uptime : 16 min,
Download speed from CacheFly: 54.1MB/s
Download speed from Linode, Atlanta GA: 8.39MB/s
Download speed from Linode, Dallas, TX: 15.3MB/s
Download speed from Linode, Tokyo, JP: 6.45MB/s
Download speed from Linode, London, UK: 8.42MB/s
Download speed from Leaseweb, Haarlem, NL: 10.3MB/s
Download speed from Softlayer, Singapore: 5.94MB/s
Download speed from Softlayer, Seattle, WA: 13.8MB/s
Download speed from Softlayer, San Jose, CA: 17.9MB/s
Download speed from Softlayer, Washington, DC: 42.7MB/s
I/O speed : 126 MB/s
Nobody uses a VPS for anything other than running some tests. We should prepare a template with only busybox, ssh, dd and wget :P
How would they run unixbench?
A busybox alias would return cached results taken once a day on a similar VPS.
So there goes 1/2 your customer base
There are more options than just dd:
IOPing
Iozone
Bonnie++
FIO
hdparm
We're implementing FIO at the moment for more accurate read/write speeds + IOPS.
Here's an example of outputs from the different tests: http://liuyonggang.blog.com/2011/10/21/disk-performance-characterization-dd-fio-hdparm-bonnie-seekmark-and-etc/
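If anyone wants to try fio before that's live, here's a minimal sketch of a random read/write IOPS job. This is not ServerBear's actual job; the libaio engine, file name, size, mix and runtime are all assumptions you can adjust:

# 4k random read/write (75% reads), direct I/O, queue depth 16, 60 s max
fio --name=randrw-test --filename=fio.test --size=256M \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=75 \
    --bs=4k --iodepth=16 --runtime=60 --group_reporting
rm -f fio.test   # remove the test file when done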
You need to learn how to search properly - there are hundreds of threads about this.
http://www.lowendtalk.com/search?Search=disk+speed
Iozone
Bonnie++
FIO
hdparm
These tests are usually run on owned/dedicated systems. Remember that virtual machines share the I/O subsystem with dozens or hundreds of other neighbors, and high I/O loads can affect production performance.
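If you do run a heavy test on a shared box, one small courtesy is to put it in the idle I/O scheduling class so it yields to everything else. Two caveats: inside a VPS this mainly protects your own processes rather than other tenants on the node, and it only has an effect with an I/O scheduler that honours priorities (e.g. CFQ):

# run the dd test in the idle I/O scheduling class
ionice -c3 dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
rm -f test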
My HDD copies at 100000000mb/s
I was just about to add IOzone + Bonnie to my VPS benchmark.
But I saw Francisco post that we're not allowed to run those in a shared environment such as a VPS, so I didn't use them (unless the host asks me to, or allows me to run them).
For a hammering test, I run all of this:
and some hard benchmarks, which pushed the load up to 600.
So I just use a normal benchmark, like ServerBear did.
100% Pure IDE Power!
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 21.2225 s, 50.6 MB/s
You're all jealous, of course...
I get around 15 MB/s on my USB drive. Jelly.
?
Way too slow; submitted a ticket complaining. @prometeus needs to pick better hardware.
Our new SSD VPS nodes in InterNap Dallas:
[root@vetest /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync;unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 1.25966 s, 852 MB/s
wow
Now that I look back, I laugh at my I/O, LOL.
@serverbear, how about a shell script which downloads/compiles/installs the required tools, then runs the tests and summarizes the results?
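A rough sketch of what that could look like; the package names assume a Debian/Ubuntu template (swap in yum/dnf elsewhere), and the individual tests are just the ones already discussed in this thread:

#!/bin/sh
# quick-and-dirty disk benchmark wrapper: install tools, run tests, print a summary
set -e
apt-get install -y ioping fio >/dev/null    # assumes Debian/Ubuntu and root

echo "== sequential write (dd) =="
dd if=/dev/zero of=bench.tmp bs=64k count=16k conv=fdatasync 2>&1 | tail -n 1

echo "== request latency (ioping) =="
ioping -c 10 . | tail -n 2

echo "== random read IOPS (fio) =="
fio --name=randread --filename=bench.fio --size=256M --ioengine=libaio \
    --direct=1 --rw=randread --bs=4k --iodepth=16 --runtime=30 \
    --group_reporting | grep -i iops

rm -f bench.tmp bench.fio   # clean up test files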
My VPS from @prometeus
I thought it would be more
Let's also share ioping results.
Which node are you on? PM14?
I'm on PM12
PM12 runs on a slightly old but rock-solid Fibre Channel SAN, which handles load without problems. Sequential speed isn't everything.
Yeah, never had a problem with it.
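Which is exactly why a single dd number can be misleading on SAN-backed storage: dd from /dev/zero only measures sequential write throughput, while most real workloads care about random latency and IOPS. Running both on the same mount shows the difference (test file name is arbitrary, same as earlier):

dd if=/dev/zero of=seq.tmp bs=64k count=16k conv=fdatasync   # sequential write MB/s
ioping -c 20 .                                               # random request latency
rm -f seq.tmp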