What's your worst disk I/O of your VPS?

huluwahuluwa Member
edited March 2012 in General

Mine:
It was Virpus, cancelled.
[screenshot: dd result in kB/s]


Comments

  • netguynetguy Member
    edited March 2012

    @huluwa said: It was Virpus, cancelled

    I had a similar Virpus result, slightly faster:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 37.2198 seconds, 28.8 MB/s
  • AndriAndri Member

    Cloudstra Cloud VPS 512MB RAM

    dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    16777216 bytes (17 MB) copied, 14.3709 seconds, 1.2 MB/s
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 42.9748 seconds, 25.0 MB/s
  • @netguy said: 1073741824 bytes (1.1 GB) copied, 37.2198 seconds, 28.8 MB/s

    @netguy said: I had a similar Virpus result, slightly faster:

    I suppose you missed the "kB/s" in the OP's screenshot. :)

    Thanked by 2: Kuro, Infinity
  • Go59954Go59954 Member
    edited March 2012

    I haven't seen worse than what Hostrail offered for several months, especially during their claimed maintenance period. For months, logging in, wget, processes, and I/O all took forever at the same time (speeds in kilobytes, if not bytes), so I vote Hostrail.

  • AlienVPS NY

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 123.531 s, 8.7 MB/s

  • pcanpcan Member

    My Cloudstra cloud VM (512MB) is faster on big files but even slower on small files.

    dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    16777216 bytes (17 MB) copied, 20.0514 s, 837 kB/s

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 16.9797 s, 63.2 MB/s

    They should tune the storage area network. For comparison, the following tests are from a cloud server with 512MB RAM, same OS, on VMware ESXi 4.1 + PowerVault MD3200i SAS (Dell entry-level SAN).

    RAID 5 performance:

    dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    16777216 bytes (17 MB) copied, 3.35084 s, 5.0 MB/s

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.25361 s, 130 MB/s

    RAID 10 performance:

    dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    16777216 bytes (17 MB) copied, 3.21622 s, 5.2 MB/s

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.58367 s, 142 MB/s

  • Just out of curiosity, are these OpenVZ containers using vswap instead of burst RAM?

    We had an incident on one of our nodes where vswap was causing extremely poor disk I/O by artificially slowing down the container.

    @pcan said: They should tune the storage area network.

    People want to see big numbers. We get tagged for our network being "too slow", since each connection is limited to 20 Mbit/s, but you can pull multiple 20 Mbit/s connections, and most people find 2.2 MB/s adequate for most things. Some people want to see 60 MB/s, even though they'll never use it.
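
For reference, the 2.2 MB/s figure follows from simple unit conversion: 20 Mbit/s divided by 8 bits per byte gives 2.5 MB/s raw, and protocol overhead accounts for the rest. A quick shell sketch of the arithmetic (the ~12% overhead factor is an estimate, not a measured value):

```shell
#!/bin/sh
# Convert a link cap in Mbit/s to achievable bytes/s.
mbit=20
raw_bytes=$(( mbit * 1000000 / 8 ))        # 2,500,000 B/s = 2.5 MB/s raw
usable_bytes=$(( raw_bytes * 88 / 100 ))   # minus ~12% TCP/IP overhead (rough guess)
echo "${mbit} Mbit/s = ${raw_bytes} B/s raw, ~${usable_bytes} B/s usable"
```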

  • @netguy said: I had a similar Virpus result, slightly faster:

    Read again, it says kB/s

  • 123Systems

    root@farore:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 83.0655 s, 12.9 MB/s
    
    Thanked by 1: VictorZhang
  • One of our crappier nodes is down to 45 MB/s, which I think is atrociously slow. Then I see these results here... Are you guys still using these?

  • LegendlinkLegendlink Member
    edited March 2012

    @DotVPS That's the worst I've ever seen it at; I have no doubt it has been ~9 MB/s or less.

  • Ran it just now on a WEBTROPIA VPS:
    268435456 bytes (268 MB) copied, 41.4583 seconds, 6.5 MB/s

  • @netguy said: I had a similar Virpus result, slightly faster:

    You're doing better; mine is in kB/s...

  • FlipperHost:
    root@flipper:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 7.40488 s, 36.3 MB/s
    root@flipper:~# dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    16777216 bytes (17 MB) copied, 322.795 s, 52.0 kB/s

  • Not maintaining the same sync type between tests may skew the results a bit...
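
The point above matters: `conv=fdatasync` buffers the writes and calls fdatasync() once at the end (roughly sequential throughput), while `oflag=dsync` waits for every block to hit disk (per-write latency), so the two MB/s figures in this thread aren't directly comparable. A minimal sketch of running both variants on identical sizes so at least the labels are consistent (the test file name is arbitrary):

```shell
#!/bin/sh
# Run both common dd benchmarks with the same block size and count,
# so the only difference between the two results is the sync mode.
TESTFILE=dd_io_test

# conv=fdatasync: buffered writes, single sync at the end.
dd if=/dev/zero of="$TESTFILE" bs=64k count=4k conv=fdatasync 2>&1 | tail -n 1

# oflag=dsync: synchronous write for every 64k block; expect a much
# lower MB/s figure on the same disk.
dd if=/dev/zero of="$TESTFILE" bs=64k count=4k oflag=dsync 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Either flag is fine for a rough number; just quote which one you used alongside the result.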

  • komokomo Member

    PrivateLayer... but still OK for Squid, OpenVPN, etc.

    root@srv:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 78.103 s, 3.4 MB/s

    Thanked by 1: TheHackBox
  • I got a result in kB/s with Virpus too. I tried to use the VPS for 2-3 days, but even SSH was super slow. I cancelled it and wasted $4 =D. I should have listened to others about them. Virpus = wasted $$, lol

  • With Ubservers

    root@serv2:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 33.456 s, 8.0 MB/s

  • InfinityInfinity Member, Host Rep

    DMBHosting, w00t!

    [root@gallium ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 88.0479 seconds, 12.2 MB/s

    Followed by IonVM

    root@lion [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 27.8133 seconds, 38.6 MB/s
  • My worst I/O experience has been with 123Systems and Hostrail.
    123Systems was between 4 and 6 MB/s; now it's a bit better, close to 16 MB/s.
    Hostrail was never higher than 4 MB/s.

  • justinbjustinb Member
    edited April 2012

    About 7 kB/s @ Hostrail

  • KB? kb? Either way, that's insane lol

  • vedranvedran Veteran

    IPAP!

    # dd if=/dev/zero of=test bs=64k count=1 conv=fdatasync
    1+0 records in
    1+0 records out
    65536 bytes (66 kB) copied, 21.9595 s, 3.0 kB/s
  • InfinityInfinity Member, Host Rep

    @vedran said: IPAP!

    That's dead; you saved your dd results?

  • NightNight Member
    edited April 2012

    Surprisingly my 123Systems Minecraft VPS isn't THAT bad:

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 22.1843 s, 48.4 MB/s

    Apart from the random downtimes, this VPS hasn't been too bad. If BuyVM allows MC on their servers, I might try grabbing one.

  • What disk I/O is expected from LowEndBox servers in your opinion?

  • @EricCubixCloud said: What disk I/O is expected from LowEndBox servers in your opinion?

    40 MB/s+

  • MrAndroidMrAndroid Member
    edited April 2012

    I cancelled it, but LeaseWeb gave me 8.0 MB/s.

    Apparently that was normal.

  • vedranvedran Veteran

    @Infinity said: That's dead, you save your DD results?

    No, I found it in a LEB post.

  • Some VPS in Sweden:

    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 50.5277 s, 5.3 MB/s
