Disk I/O performance 9.3 MB/s

Chuck Member

My VPS :-(.


Comments

  • matthewvz Member, Host Rep

    Is this a KVM or OpenVZ VPS? If it's KVM, try enabling VirtIO in SolusVM.

  • rm_ Member, IPv6 Advocate

    Your speedtest link http://www.speedtest.net/result/3572436661.png
    shows the ISP to be "2267921 Ontario", and after some investigative work...
    surely that's CloudAtCost? http://lowendtalk.com/discussion/28041/cloudatcost-50-off-today-1-core-512mb-10gb-vps-for-17-5-one-time-payment/p1
    Given the nature of their offer, I don't think anyone would be too shocked to learn their VPSes are this oversold. But in any case, their ticketing system is that way =====>

  • Chuck Member

    I don't think their support can help me if they have already oversold.

  • yywudi Member
    edited June 2014

    I have a BlueVM CA KVM VPS with a dd result of about 8 MB/s...

    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 126.672 s, 8.5 MB/s

  • Sunn Member
    edited June 2014

    Whoa, that is low... Logging in must take hours.

  • J1021 Member

    What do you need over 9.3MB/s disk I/O for?

  • I don't see why there is a problem; it's not that bad for a one-time-payment VPS.

  • Shoaib_A Member
    edited June 2014

    @1e10 said:
    What do you need over 9.3MB/s disk I/O for?

    Disk I/O of less than 500 MB/s, or anything other than SSD (neither of which is needed in 99.99% of cases), is unacceptable at LET, thanks to those providers who try to fool customers with dd porn that is basically just controller cache memory. False advertising and fooling customers FTW.
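The cache-inflation point above is easy to demonstrate. A minimal sketch (the file name `ddtest` is arbitrary, and the size is kept to 64 MB for illustration): the same write is run with and without `conv=fdatasync`, and the unsynced run typically reports a much higher, cache-inflated figure.

```shell
# Without a sync flag, dd reports the speed of writing into the page
# cache; the data may still be sitting in RAM when dd exits.
dd if=/dev/zero of=ddtest bs=64k count=1k 2>&1 | tail -n1

# conv=fdatasync forces the data to disk before dd prints its timing,
# so this number reflects actual disk throughput.
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync 2>&1 | tail -n1

rm -f ddtest
```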

  • I get like 11MB/s from my OVH VPS, and it's never impacted the performance of the box.

  • J1021 Member

    @hostnoob said:
    I get like 11MB/s from my OVH VPS, and it's never impacted the performance of the box.

    The OVH Classic VPS range has disk I/O limited to around 11MB/s.

  • @hostnoob said:
    I get like 11MB/s from my OVH VPS, and it's never impacted the performance of the box.

    Disk I/O is not an issue in 99.99% of cases where providers knows what he is doing

  • I am happy with my 11 MB/s from the SDHC card on the Raspberry Pi. It's weird to see VPSs being slower than that.

  • @1e10 said:
    The OVH Classic VPS range has disk I/O limited to around 11MB/s.

    Yeah, so I've heard. A small price to pay for an excellent-value VPS from a reputable company.

  • J1021 Member

    I think I prefer VPS hosts imposing limits on I/O and network speed (e.g. VM restricted to 100Mb/s with node connected at 1Gb/s) as it creates a more stable environment.

  • Pwner Member

    @Sunn said:
    Whoa, that is low... Logging in must take hours.

    That makes no sense.

  • J1021 Member

    @Pwner said:
    That makes no sense.

    I think (and hope) sarcasm was intended.

  • Pwner Member

    @1e10 said:
    I think (and hope) sarcasm was intended.

    Yeah, don't worry. I've had my fair share of low disk I/O and slow logins while using Host1Free. Average disk I/O with them is 20 MB/s on a good day.

  • @Pwner said:
    Yeah, don't worry. I've had my fair share of low disk I/O and slow logins while using Host1Free. Average disk I/O with them is 20 MB/s on a good day.

    I once got over 100MB/s on my H1F VPS... I was so happy, then it dropped to 8MB/s...

  • FredQc Member

    dd porn ?

    [[email protected] ~]# dd if=/dev/zero of=test_$$ bs=64k count=16k conv=fdatasync && rm -f test_$$
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.47996 s, 726 MB/s
    
  • c0y Member

    @FredQc said:
    dd porn ?

    [[email protected] ~]# dd if=/dev/zero of=test_$$ bs=64k count=16k conv=fdatasync && rm -f test_$$
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.47996 s, 726 MB/s
    

    Just dd if=/dev/zero of=/dev/shm/test_$$ bs=64k count=16k conv=fdatasync && rm -f /dev/shm/test_$$ if you want porn
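For anyone who misses the joke: `/dev/shm` is normally a tmpfs (RAM-backed) mount, so a dd there measures memory bandwidth rather than disk speed. A quick sketch (file name arbitrary, assumes a Linux box with the usual tmpfs mount):

```shell
# Confirm /dev/shm is tmpfs, i.e. backed by RAM.
grep /dev/shm /proc/mounts

# This dd never hits a disk; fdatasync is essentially free on tmpfs,
# so the reported speed is memory bandwidth, not disk throughput.
dd if=/dev/zero of=/dev/shm/test_$$ bs=64k count=4k conv=fdatasync 2>&1 | tail -n1
rm -f /dev/shm/test_$$
```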

  • FredQc Member

    @c0y said:
    Just dd if=/dev/zero of=/dev/shm/test_$$ bs=64k count=16k conv=fdatasync && rm -f /dev/shm/test_$$ if you want porn

    Thanks for the advice.

  • Okay, whoa, my VPS with the highest I/O is at GVH. Well, that's a plot twist.

    [[email protected] ~]# dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.17826 s, 493 MB/s

    Sorry if that's really off-topic; I'd never tested my I/O before and found that really surprising.

  • Keith Member
    edited June 2014

    For dd porn try
    dd if=/dev/zero of=/dev/null bs=64k count=16k

    For Provisionhost

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.0567883 s, 18.9 GB/s
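To be clear about what that command measures: with `if=/dev/zero` and `of=/dev/null`, no disk is involved at all; the kernel hands out zero pages and immediately discards them, so the figure is raw memory/CPU copy speed:

```shell
# Pure kernel-to-kernel copy: no filesystem, no disk, just
# memcpy-grade numbers. Fine for "dd porn", useless as a disk benchmark.
dd if=/dev/zero of=/dev/null bs=64k count=16k 2>&1 | tail -n1
```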

  • I/O Pings

    ioping -c 10
    10 requests completed in 9002.3 ms, 9606 iops, 37.5 mb/s
    

    I/O Seek Test (No Cache)

    ioping -RD
    18330 iops, 71.6 mb/s
    min/avg/max/mdev = 0.0/0.1/3.6/0.0 ms
    

    I/O Reads - Sequential

    ioping -RL
    6293 iops, 1573.2 mb/s
    min/avg/max/mdev = 0.1/0.2/3.8/0.1 ms
    

    I/O Reads - Cached

    ioping -RC
    530465 iops, 2072.1 mb/s
    min/avg/max/mdev = 0.0/0.0/0.1/0.0 ms
    

    DD

    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    0.753467 s, 1.4 GB/s
    dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    0.713217 s, 1.5 GB/s
    dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1.01799 s, 1.1 GB/s
    dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    3.50467 s, 306 MB/s
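The gap between the `conv=fdatasync` and `oflag=dsync` runs above is expected: `conv=fdatasync` issues a single fdatasync() after all the writes, while `oflag=dsync` opens the file with O_DSYNC so every individual write is flushed, which hurts most at small block sizes (16k flushes at bs=64k versus one). A reduced sketch of the two modes (64 MB rather than 1 GB, file name as used above):

```shell
# One flush at the end: close to raw sequential throughput.
dd if=/dev/zero of=sb-io-test bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

# A flush after every 64k write: dominated by sync latency, much slower.
dd if=/dev/zero of=sb-io-test bs=64k count=1k oflag=dsync 2>&1 | tail -n1

rm -f sb-io-test
```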
    

    FIO

    Read IOPS   137569.0
    Read Bandwidth  550.2 MB/second
    Write IOPS  135367.0
    Write Bandwidth 541.4 MB/second
    

    Raw FIO Output

    FIO random reads:
    randomreads: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    randomreads: Laying out IO file(s) (1 file(s) / 1024MB)
    
    randomreads: (groupid=0, jobs=1): err= 0: pid=11920: Thu Jun 19 06:28:03 2014
      read : io=1024.3MB, bw=550277KB/s, iops=137569 , runt=  1906msec
      cpu          : usr=14.38%, sys=75.22%, ctx=1713, majf=0, minf=89
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=262207/w=0/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=1024.3MB, aggrb=550277KB/s, minb=550277KB/s, maxb=550277KB/s, mint=1906msec, maxt=1906msec
    
    Disk stats (read/write):
      vda: ios=237218/0, merge=0/0, ticks=35342/0, in_queue=35305, util=88.29%
    Done
    
    FIO random writes:
    randomwrites: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    
    randomwrites: (groupid=0, jobs=1): err= 0: pid=11924: Thu Jun 19 06:28:06 2014
      write: io=1024.3MB, bw=541470KB/s, iops=135367 , runt=  1937msec
      cpu          : usr=17.25%, sys=75.93%, ctx=1178, majf=0, minf=25
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=262207/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
      WRITE: io=1024.3MB, aggrb=541470KB/s, minb=541470KB/s, maxb=541470KB/s, mint=1937msec, maxt=1937msec
    
    Disk stats (read/write):
      vda: ios=0/258951, merge=0/0, ticks=0/38234, in_queue=38187, util=93.60%
    Done
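For anyone wanting to reproduce those FIO numbers, the raw output implies roughly the jobs below (4k random read/write, libaio engine, iodepth 64, 1 GB test file). `--direct=1` is an assumption on my part; the output does not show whether the page cache was bypassed.

```shell
# Hypothetical reconstruction of the benchmark's fio jobs; requires fio
# and, for the libaio engine, a Linux box with libaio available.
fio --name=randomreads --rw=randread --bs=4k --ioengine=libaio \
    --iodepth=64 --size=1G --direct=1 --filename=fio-test

fio --name=randomwrites --rw=randwrite --bs=4k --ioengine=libaio \
    --iodepth=64 --size=1G --direct=1 --filename=fio-test

rm -f fio-test
```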
    
  • FredQc Member

    @Keith said:
    For dd porn try
    dd if=/dev/zero of=/dev/null bs=64k count=16k

    For Provisionhost

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.0567883 s, 18.9 GB/s

    [[email protected] ~]# dd if=/dev/zero of=/dev/null bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.0973807 s, 11.0 GB/s
    

    Not bad either

    This is Linode

  • [[email protected] ~]# dd if=/dev/zero of=/dev/null bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.0549651 s, 19.5 GB/s
    

    Why are we doing this again?

  • linuxthefish Member
    edited June 2014

    @Pwner said:
    Yeah, don't worry. I've had my fair share of low disk I/O and slow logins while using Host1Free. Average disk I/O with them is 20 MB/s on a good day.

    You (and @eddynetweb) must have bad luck; I get around 200 to 300 MB/s on mine, and I'd had 80 days of uptime before I rebooted. I'm using it to run a Kippo honeypot!

    CPU model :  Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
    Number of cores : 1
    CPU frequency :  848.092 MHz
    Total amount of ram : 128 MB
    Total amount of swap : 256 MB
    System uptime :   6 days, 10:55,
    Download speed from CacheFly: 15.0MB/s
    Download speed from Coloat, Atlanta GA: 11.1MB/s
    Download speed from Softlayer, Dallas, TX: 10.9MB/s
    Download speed from Linode, Tokyo, JP: 3.46MB/s
    Download speed from i3d.net, Rotterdam, NL: 28.4MB/s
    Download speed from Leaseweb, Haarlem, NL: 30.5MB/s
    Download speed from Softlayer, Singapore: 4.13MB/s
    Download speed from Softlayer, Seattle, WA: 8.97MB/s
    Download speed from Softlayer, San Jose, CA: 8.37MB/s
    Download speed from Softlayer, Washington, DC: 15.0MB/s
    I/O speed :  231 MB/s
  • @linuxthefish What node are you on? I just tested mine and got 250MB/s. Node 18 <3

  • @linuxthefish said:
    I/O speed : 231 MB/s

    :0 And how can you get network speeds that good? I get 1 Mbps from CacheFly :/

  • Pwner Member

    @linuxthefish @eddynetweb

    I know this is a little off-topic, but on your H1F VPSs, do you need to shut them down from time to time to clear out the memory cache? I find mine lags after a while and I have to do the 10 second shutdown.
