Post your VPS iops

Comments

  • AnthonySmith Member, Patron Provider

    @rds100 said: There is nothing strange. ioping writes a 64MB file and then does random reads from the file, dropping OS caches between the reads.

    Modern HDDs have a 64MB buffer, so if the data alignment is right and there is no other IO happening during the ioping tests, all the reads can come from the HDD cache without ever touching the platters. This is when you see the big numbers (60k iops) - it is reading from the HDD's memory buffer.
    Or, in the case of a RAID controller with a buffer, it is reading from the RAID controller's memory.

    Interesting, so without also specifying the type of drives/cache/controller and whatever other IO is happening at the same time (which could be momentary), the results are never like for like.
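
    One way to reduce that cache effect, sketched here on the assumption that your ioping build supports the -S (working set size) option, is to spread the reads over a working set far larger than any plausible drive or controller cache:

    # default working set (~64MB) can fit entirely inside an HDD or RAID cache
    ioping -c 10 .

    # assumed -S option: use an 8GB working set so most reads have to come
    # from the platters/flash instead of a cache
    ioping -c 10 -S 8G .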

  • Maounique Host Rep, Veteran

    So, what would the solution be to make it more reliable? Increase the file to 100 MB? 1 GB?

  • @pubcrawler it can only affect the kernel caching, not the hardware caching, i.e. the RAID controller and the HDD itself will still cache in their own memory; you can't instruct them not to cache with ioping.

  • I'm noticing that running ioping with the -D (direct I/O) flag gives higher latency figures.
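
    For reference, a quick side-by-side of the two modes, using only the -c and -D flags already mentioned in this thread (path and count are arbitrary):

    # cached reads: the kernel page cache may serve part of the requests
    ioping -c 10 .

    # direct reads (-D): bypass the kernel page cache, so latencies tend to be
    # higher and closer to the raw device, although hardware caches below the
    # kernel still apply
    ioping -c 10 -D .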

  • @Maounique said: So, what would the solution be to make it more reliable? Increase the file to 100 MB? 1 GB?

    There is no solution really. What if the RAID card has 2GB memory?

  • And additionally, the "direct" I/O test doesn't really have any merit beyond being "neat", because the system is going to use cached I/O whenever it can, and most people will want to know how their numbers perform in a 'real world' setting, which the cached test better represents.

  • pubcrawler Banned
    edited February 2013

    @rds100, true, the hardware caching is rather out of our hands / not really controllable without disabling it via the specific controller software. There are always issues when testing with caches involved.

    The numbers here for the SSDs are truly odd. I'm leaning towards similar effects on some of the spinning drives also, probably where large drives sit behind a controller with a big cache.

    @Damian - I agree. Part of the recommendation was to look at things from multiple directions and debug, or at least attempt to explain, some of the odd numbers.

  • Ubuntu 12.10 1GB KVM VPS on a Datashack host machine (2x L5420 / 8GB / 1x 500GB disk) running CentOS 6.3 (host machine idle).

    10 requests completed in 9006.1 ms, 2079 iops, 8.1 mb/s

    I have another, but it's in the sub-100 iops range, so I will just leave it out :) - Although flexible, NFS backends over Ethernet can be pretty slow for IOPS, I have found.

  • @pubcrawler said: Part of the recommendation was to look at things from multiple directions and debug, or at least attempt to explain, some of the odd numbers.

    Understood; just wanted to state it before people start saying "omfg my providers direct io suckzzzzz" :)

  • @pubcrawler said: Anyone have bare SSD to test minus a fancy card? Process of elimination.

    Here you go, 2 x SSDSA2CW120G3 < mdadm raid1 < lvm < ext4

    --- . (ext4 /dev/mapper/beta-root) ioping statistics ---
    10 requests completed in 9003.3 ms, 4082 iops, 15.9 mb/s
    min/avg/max/mdev = 0.2/0.2/0.3/0.0 ms

  • @MiguelQ if you skip the lvm it will be better.

  • @MiguelQ, those are the Intel 320. 64MB of cache.

    Did you get consistent test speeds (i.e. did you run this multiple times with multiple results that were consistent)?

  • Also with SSDs it is very important how you align your partitions. Improper alignment can kill the performance. And there are other tricks that help with SSDs.
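
    One way to check alignment, assuming parted's align-check command is available (device and partition number are placeholders):

    # reports whether partition 1 starts on an optimally aligned boundary
    parted /dev/sda align-check optimal 1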

  • @rds100 said: if you skip the lvm it will be better.

    I can't, I need the logical volumes :S

  • @pubcrawler said: Did you get consistent test speeds (i.e. did you run this multiple times with multiple results that were consistent)?

    Yes, I got the same consistent result across several runs; the host was also idle during all that time.

  • @rds100 said: And there are other tricks that help with SSDs.

    I think I have that covered (noatime, elevator=noop, no swap if you have enough memory). Anything you would like to add?
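
    For anyone following along, a rough sketch of those three tweaks (device names and mount points are placeholders; adjust to your own layout):

    # 1) noatime: add it to the mount options in /etc/fstab, e.g.
    #    /dev/mapper/vg-root  /  ext4  defaults,noatime  0 1

    # 2) noop elevator: boot with elevator=noop on the kernel command line,
    #    or switch a single disk at runtime:
    echo noop > /sys/block/sda/queue/scheduler

    # 3) no swap (if you have enough memory): turn it off and remove the swap
    #    entry from /etc/fstab
    swapoff -a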

  • @MiguelQ underprovision, i.e. on a 240GB SSD use only 200GB and leave the rest unused (even unpartitioned).
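
    One way to do that at partitioning time, as a sketch with a made-up device name and sizes: create a partition that stops well short of the end of the drive and never touch the rest.

    # hypothetical 240GB drive at /dev/sdb: partition only the first 200GiB
    # and leave the tail unpartitioned for the drive to use internally
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 1MiB 200GiB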

  • @rds100 said: underprovision, i.e. on a 240GB SSD use only 200GB and leave the rest unused (even unpartitioned).

    Do you still have to do that with current-gen SSDs? I was under the impression that these drives are already over-provisioned internally; they also give you a SMART counter to check on that:

    232 Available_Reservd_Space 0x0033 100 100 010 Pre-fail Always - 0
    233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age Always - 0
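
    Those two counters can be read with smartctl from smartmontools; a minimal check (the drive path is a placeholder):

    # print the SMART attribute table and pick out the two counters above
    smartctl -A /dev/sda | grep -E 'Reservd_Space|Media_Wearout'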

  • rds100 Member
    edited February 2013

    @MiguelQ I do it anyway :) In all cases it can only help performance and longevity, not hurt it.

    And by the way, I have never seen the "Available reserved space" and "media wearout indicator" change, even for SSDs that have been in service for a year.

  • @rds100 said: And by the way, I have never seen the "Available reserved space" and "media wearout indicator" change, even for SSDs that have been in service for a year.

    On the 320 series? You shouldn't. One year is too little time for a 320 to show signs of degradation, unless you are writing/deleting several TBs each day.

  • cause Member
    edited February 2013

    How about posting /proc/stat?
    It contains iowait and steal time since reboot. IMO, it could give us a more pragmatic, stable result about storage and also the CPU.

    For those too lazy to calculate,

    awk '/^cpu / {print $0 "\niowait " $6 / ($2+$3+$4) ", steal " $9 / ($2+$3+$4) }; /^btime/ {print "uptime " systime() - $2}' /proc/stat

    will give iowait/used and steal/used (edit: added uptime line).

    from SpotVPS vegas node. https://www.dropbox.com/s/56yp8cy75xa0i1g/cpu-day_nv.png

    cpu 80194 145749 27393 58853812 77846 0 0 913

    iowait 0.307284, steal 0.00360391

    from one of URPad, Chicago. https://www.dropbox.com/s/shgm73aycivd8kd/cpu-day_il.png

    cpu 85235 4173 53367 291144884 3698 0 0 118217

    iowait 0.0259009, steal 0.827995

    So SpotVPS has somewhat poor storage and URPad has a somewhat busy CPU?
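
    As a sanity check of the awk output, the SpotVPS ratios can be reproduced by hand from the posted cpu line (user=80194, nice=145749, system=27393, iowait=77846, steal=913):

    # iowait/used and steal/used, same definition as in the awk one-liner
    echo 'scale=6; 77846 / (80194 + 145749 + 27393)' | bc   # ~0.3073
    echo 'scale=6; 913 / (80194 + 145749 + 27393)' | bc     # ~0.0036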

  • AnthonySmith Member, Patron Provider

    @cause interesting.

    I think we can say that once again the best test is as follows:

    1) Does it feel ok: y/n

    If yes, continue to question 2.

    If no, then rule out any network latency issues and either move on to question 2 or get a server with a network that meets your requirements.

    2) Does my app stack/web site/web app etc run ok: y/n

    If yes, then you don't have a problem. End.

    If no, find the root cause and dedicate at least 2 days to finding it, so you have a tangible requirement for your next host to fulfil.

    end.

    Forget ioping and dd; you're on a shared-resource environment, and if you don't understand that statement then you have no business running the tests in the first place.

  • @ynzheng Can you submit a ticket? The IOPS seem way too low.

    Thanks

  • @cause:

    awk '/^cpu / {print $0 "\niowait " $6 / ($2+$3+$4) ", steal " $9 / ($2+$3+$4) }' /proc/stat

    Care to share here what the CPU and steal numbers mean? i.e. a definition of what we see with that command.

  • @pubcrawler said: Care to share here what the CPU and steal numbers mean? i.e. a definition of what we see with that command.

    IOWAIT means how long processes are blocked by busy IO, mostly storage or perhaps the network. STEAL means how long processes are blocked by a lack of CPU timeslice on the host node. Higher iowait per used CPU time (= USER + NICE + SYSTEM) indicates busy IO; higher STEAL per used indicates a busy CPU.

    The command is for lazy people who don't want to calculate, like me.

    [root@ ~]# tac /proc/stat | awk '/^btime/ {up=systime()-$2;print "up " up/86400 "d"}; /^cpu / {print "user " $2/up "%, nice " $3/up "%, sys " $4/up "%, idle " $5/up "%, iowait " $6/up "%, steal " $9/up "%\niowait/used " $6 / ($2+$3+$4) ", steal/used " $9 / ($2+$3+$4) }'

    up 8.65752d
    user 0.11762%, nice 0.0055788%, sys 0.0735979%, idle 399.25%, iowait 0.0051871%, steal 0.165549%
    iowait/used 0.0263576, steal/used 0.841216

    would be more readable.
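
    For anyone decoding the awk: the field positions it relies on come straight from the cpu line of /proc/stat (cumulative ticks, typically 1/100 s each).

    # /proc/stat cpu line layout (field numbers as seen by awk)
    #   $1 label  $2 user  $3 nice  $4 system  $5 idle
    #   $6 iowait $7 irq   $8 softirq  $9 steal
    head -1 /proc/stat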

  • Amitz Member
    edited February 2013

    Wouldn't a 'sar' (sysstat) output tell me the same in a nice table view? And even automated e.g. every 5 minutes?
    Example from the daily sar output that each of my VPSes sends me every day close to midnight:

    http://pastebin.com/tuhURHKp
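
    For reference, the relevant sar invocations (assuming the sysstat collector is enabled so the daily history files exist) look like this:

    # live sampling: CPU utilisation, including %iowait and %steal, every 5 s, 3 samples
    sar -u 5 3

    # or read back today's collected history
    sar -u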

  • Thanks @cause. Still trying to get my head around the numbers, but very interesting. Thanks for bringing this metric up as well.

  • @BenND said: Can you submit a ticket? The IOPS seem way too low.

    I submitted Ticket #816696 on 23/01/2013,
    but after I told you "it seems nothing is fixed yet", you just closed the ticket without any further reply.

    So I had no choice but to cancel that KVM VPS, even though I had paid for 5 months without ever using it.

  • RAMNode KVM SSD

    :: root % ./ioping -c 10 /
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=1 time=0.1 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=2 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=3 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=4 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=5 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=6 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=7 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=8 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=9 time=0.2 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13): request=10 time=0.2 ms
    
    --- / (ext4 /dev/disk/by-uuid/d3cfac00-b10b-4a1f-87c6-617de180ee13) ioping statistics ---
    10 requests completed in 9003.1 ms, 5365 iops, 21.0 mb/s
    min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms
    
    :: root % dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.06186 s, 1.0 GB/s
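
    The same cached-vs-direct caveat discussed above applies to dd; a variant that bypasses the page cache (same made-up file name and sizes) would be:

    # O_DIRECT writes: skip the kernel page cache per request, which usually
    # gives lower but more repeatable throughput than conv=fdatasync
    dd if=/dev/zero of=test bs=64k count=16k oflag=direct; unlink test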
    