Test the disk I/O of your VPS - Page 12

Comments

  • DalCompDalComp Member
    edited November 2013

    @Maounique said:
    Gee, that 1 tb disk is really slow unless you are using it for other things too.

    I think so, too. It's idling. Received last night, just doing some benchmarks. Now testing their automated reinstall feature. Will see if their support can do something about the slow disk.
    ServerBear result, if anyone is interested: http://serverbear.com/benchmark/2013/11/01/9qxMf4n5q3i7xkjG

    Thanked by 1earl
  • Maybe they gave you one of those 5400 rpm green drives..

  • @earl said:
    Maybe they gave you one of those 5400 rpm green drives..

    GB1000EAMYC. Google says it's HP 7200 rpm.

  • providerservice Xen 512

    #dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.413 seconds, 145 MB/s

    Blueevm OpenVZ 64

    #dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.90788 seconds, 369 MB/s

    Edis VRS Basic Storage Sweden

    -----> http://funkyimg.com/i/DRjZ.jpg <----- :O

    #dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 4.73815 s, 227 MB/s

    Ramnode SSD 512 40gb

    #dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.51473 seconds, 709 MB/s

    OVH KS2G

    #dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.4945 s, 93.4 MB/s

    among others

    :wq
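    The MB/s figure in these results is just bytes copied divided by elapsed seconds, in decimal megabytes. A quick sketch reproducing the 145 MB/s from the Xen 512 run above out of its raw numbers:

    ```shell
    # dd's reported rate = bytes copied / elapsed seconds, in decimal MB (10^6 bytes)
    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 7.413 / 1000000 }'
    # prints: 145 MB/s
    ```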

  • @DalComp said:
    GB1000EAMYC. Google says it's HP 7200 rpm.

    Hmm... I'm starting to think that anything HP and hard-drive related is just a bad thing. Ever had those Smart Array E200/E200i controllers? Just terrible!

    Thanked by 1DalComp
  • With this, customers have data to compare when they need to buy a VPS, and can see which hosting providers have great I/O speed.

    Thanks to the author of this thread.

  • Ramnode NL ssd:

    dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    0.867189 s, 1.2 GB/s

  • MaouniqueMaounique Host Rep, Veteran
    edited November 2013

    Anything over 30 MB/s works well for me; this is kind of a who-has-the-biggest contest.
    I admit we entered the race and won it on production servers, but it is for fun, I assure you. Unless you have busy databases where IOPS count, so you need SSD or 24+ spindles, anything above 200 MB/s is for show.
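    To put the "SSD or 24+ spindles" remark in rough numbers (the 80 IOPS per disk figure is a common rule of thumb, not something measured in this thread):

    ```shell
    # Back-of-the-envelope random-IOPS capacity of a spindle array:
    # ~80 random IOPS per 7200 rpm disk (rule-of-thumb assumption)
    awk 'BEGIN { per_disk = 80; spindles = 24; print per_disk * spindles, "IOPS" }'
    # prints: 1920 IOPS
    ```

    A single SATA SSD comfortably exceeds that, which is why IOPS-bound workloads end up on SSD.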

  • causecause Member
    edited November 2013

    Urpad, Houston

    --- . (simfs /vz/private/13234) ioping statistics ---
    10 requests completed in 10353.8 ms, 7 iops, 0.0 mb/s
    min/avg/max/mdev = 8.1/135.2/960.8/281.1 ms
    
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 34.4707 s, 31.1 MB/s
    

    Urpad, LA

    --- . (simfs /vz/private/12831) ioping statistics ---
    10 requests completed in 9.1 s, 115 iops, 461.3 kb/s
    min/avg/max/mdev = 193 us / 8.7 ms / 17.1 ms / 7.1 ms
    
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.4674 s, 93.6 MB/s
    
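    The iops figure ioping prints is essentially the reciprocal of the average request latency, which is how the Houston box's 135.2 ms average works out to 7 iops:

    ```shell
    # iops ~= 1 / average latency: 135.2 ms per request -> ~7 requests/second
    awk 'BEGIN { printf "%.0f iops\n", 1000 / 135.2 }'
    # prints: 7 iops
    ```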
  • @earl said:
    ever had those smart array e200/e200i controllers? just terrible!

    Haven't tried HP HDDs, but their new RAID cards are amazing; we get great performance with the P420 in the latest Gen8.

    Thanked by 1earl
  • @Zshen

    That can't be right for Catalyst Host? I've not seen below 200 MB/s, it averages around 250 MB/s for me...

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.90237 s, 275 MB/s
    

    Did you send them a ticket?

    Thanked by 1Zshen
  • VpscrazeVpscraze Member
    edited November 2013

    1024MB OVZ VPS I keep for personal use on one of my nodes.

    [root@vpscrazepersonal ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 4.91332 s, 219 MB/s
    [root@vpscrazepersonal ~]#
    
  • RadiRadi Host Rep, Veteran
    edited November 2013

    InceptionHosting NL:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 15.5108 s, 69.2 MB/s
    

    InceptionHosting US:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 9.2854 s, 116 MB/s
    

    LowEndSpirit IT:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.84768 s, 377 MB/s
    

    LowEndSpirit UK:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 19.6877 s, 54.5 MB/s
    

    LowEndSpirit NL:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 86.4796 s, 12.4 MB/s
    

    FtpIt Chicago:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.07842 seconds, 996 MB/s
    

    Will be posting some more soon...

  • ZshenZshen Member
    edited November 2013

    connercg

    Something funky must have been going on when I ran that yesterday. Everything seems much better today. I've never had a performance issue with Catalyst, so it didn't really faze me when I ran it. Ryan has contacted me to look into it further.

    Here is today's.

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.67759 seconds, 140 MB/s

  • dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in 
    16384+0 records out 
    1073741824 bytes (1,1 GB) copied, 13,9246 s, 77,1 MB/s
    

    digitalocean, disaster

  • URPAD Node LA2

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.09607 seconds, 133 MB/s
    
  • Digital Ocean KVM 512MB NY2

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.10045 s, 511 MB/s

    Ramnode KVM 512M Seattle

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.50796 s, 428 MB/s

    Ramnode KVM 1GIG Seattle

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.54406 s, 422 MB/s

    Ramnode OpenVZ 2gig Seattle

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.99845 s, 153 MB/s

    @RamNode The OpenVZ is kind of slow, isn't it?

  • Nick_ANick_A Member, Top Host, Host Rep

    @hdpixel - send in a ticket

    Thanked by 2lukesUbuntu ahmiq
  • @RamNode Ticket Created #120970

  • ZeroCoolZeroCool Member
    edited November 2013

    RamNode and INIZ are still on top; judging by the many results posted, those providers have the best I/O.

  • StrikerrStrikerr Member
    edited November 2013

    lowendspirit IT
    [root@server1 ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.04564 seconds, 353 MB/s
    VDSInside UA
    [root@server2 ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.03756 s, 178 MB/s
    bpsnode US
    root@server3:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.93052 s, 273 MB/s

  • DotVPS 64MB OVZ

    best

    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.72725 s, 187 MB/s
    

    worst

    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.93803 s, 120 MB/s
    
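    Best/worst spreads like this are normal on shared nodes, so it is worth repeating the test instead of trusting a single pass. A minimal sketch (using count=1k, a 64 MiB file, so the demo is quick; the runs in this thread use count=16k for 1 GiB):

    ```shell
    # Repeat the write test three times, keeping only each run's summary line;
    # results vary with neighbour load on shared hosts.
    for i in 1 2 3; do
        dd if=/dev/zero of=iotest bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1
        rm -f iotest
    done
    ```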
  • My colocated server with a makeshift SSD cache

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 13.0313 s, 82.4 MB/s
    
  • SreeSree Member
    edited November 2013
    Weloveservers 1GB Vps
    
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.49424 s, 195 MB/s
    
    Urpad 256MB
    
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.03228 s, 178 MB/s
    
  • kontamkontam Member
    edited November 2013

    StylexNetworks SSD

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.44749 s, 311 MB/s

  • This is one reason why you want to host with @RamNode. In my previous post, I advised Nick_A that the performance of my OpenVZ vm was unusual. He immediately responded, made some adjustment---causing no downtime, and voila! The VM now has the usual performance you come to expect from @RamNode.

    Ramnode OpenVZ 2gig Seattle

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.25019 s, 330 MB/s
    
  • Nick_ANick_A Member, Top Host, Host Rep

    @hdpixel said:
    This is one reason why you want to host with RamNode. In my previous post, I advised Nick_A that the performance of my OpenVZ vm was unusual. He immediately responded, made some adjustment---causing no downtime, and voila! The VM now has the usual performance you come to expect from RamNode.

    My silly self forgot to enable a pretty important setting on that node... I caught the mistake thanks to this thread!

  • HWAYSHWAYS Member
    edited December 2013

    FAPVPS:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.3365 s, 803 MB/s

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.20521 s, 891 MB/s

  • formosanformosan Member
    edited December 2013

    Reprisehosting, AKA VPShostingdeal:

    ovzstarter plan 128mb ram

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 44.9084 s, 23.9 MB/s
    
  • @formosan said:
    Reprisehosting, AKA VPShostingdeal:

    ovzstarter plan 128mb ram

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 44.9084 s, 23.9 MB/s
    

    I have an 8-year-old Western Digital PATA drive in an old Pentium 4 that outperforms that VPS :)

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 40.4345 s, 26.6 MB/s

    Model Family: Western Digital Caviar

    Device Model: WDC WD400BB-23DEA0

    Power_On_Hours 48117

This discussion has been closed.