Detect overselling on OpenVZ - Page 2

Comments

  • Either that or it's being run when other folks are running theirs. Seems to happen occasionally on the main site when folks are posting reviews.

  • @kiloserve the issue with the slow VPS is random IO, not speed. ioping showed results in the 150-500ms range on the slow VPS and 0.1-0.4ms on the fast VPS.

  • I just use vzfree and check how much RAM is under (Swap:)
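That check can be scripted. Below is a minimal sketch that parses the `Swap:` line of vzfree-style output; the 16 MB threshold is an arbitrary assumption, not anything vzfree itself defines, and a sample line is piped in so the sketch is self-contained:

```shell
# Parse the "Swap:" line of vzfree-style output and flag heavy swapping.
# On a real OpenVZ guest you would pipe `vzfree` in instead of printf.
check_swap() {
  awk '/^Swap:/ {
    gsub(/M/, "", $2)                    # strip the "M" unit suffix
    if ($2 + 0 > 16) print "WARN: " $2 "MB swapped"
    else             print "OK: " $2 "MB swapped"
  }'
}

printf 'Swap: 2.25M (0.6%% of Committed)\n' | check_swap
# -> OK: 2.25MB swapped
```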

  • kiloservekiloserve Member
    edited September 2011

    dmmcintyre3 said: the issue with the slow VPS is random IO, not speed. ioping showed results in the 150-500ms range on the slow VPS and 0.1-0.4ms on the fast VPS.

    This should affect sequential write speeds as well. There is some disparity between random and sequential, but they should be close: you won't get a poor sequential speed and then blazing-fast random writes, or vice versa.

    50 MB/s is an average write speed for DD. A server with that score should also have average random write speeds: not poor, but not great either.

    Conversely, if I have a DD score of 10MB/s, my random writes will also be poor.

    Or if I have a DD score of 300 MB/s, my random write scores will also be excellent.

  • kiloservekiloserve Member
    edited September 2011

    If anybody actually wants to test it out, I do have 2 servers I can provide you for a few days.

    One is off our XenPV nodes and scores about 250-300MB/s in DD.

    The other is another providers box which scores around 100MB/s in DD.

    I can give you access to both boxes and you can run random write I/O tests and post the results here.

    My guess is that under any random or sequential disk benchmark the 250MB/s box beats the 100MB/s box.

    If you take me up on the offer, you must agree not to disclose the other provider's name/IP/etc, must not be a VPS provider yourself, and must be a regular LEB/LET poster here in good standing.

  • Better random IO, but low dd score:

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync;rm test -f; ioping -c 10 /
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 40.1735 s, 26.7 MB/s
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=1 time=11.2 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=2 time=8.9 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=3 time=0.5 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=4 time=17.1 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=5 time=2.4 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=6 time=0.5 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=7 time=15.2 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=8 time=0.5 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=9 time=0.5 ms
    4096 bytes from / (ext4 /dev/mapper/vg_claw-lv_root): request=10 time=0.5 ms
    
    --- / (ext4 /dev/mapper/vg_claw-lv_root) ioping statistics ---
    10 requests completed in 9080.9 ms, 175 iops, 0.7 mb/s
    min/avg/max/mdev = 0.5/5.7/17.1/6.4 ms

    Decent dd score, but bad small file reads:

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync;rm test -f; ioping -c 10 /
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 16.7651 s, 64.0 MB/s
    4096 bytes from / (simfs /dev/simfs): request=1 time=2.0 ms
    4096 bytes from / (simfs /dev/simfs): request=2 time=9.4 ms
    4096 bytes from / (simfs /dev/simfs): request=3 time=27.2 ms
    4096 bytes from / (simfs /dev/simfs): request=4 time=58.9 ms
    4096 bytes from / (simfs /dev/simfs): request=5 time=43.0 ms
    4096 bytes from / (simfs /dev/simfs): request=6 time=112.6 ms
    4096 bytes from / (simfs /dev/simfs): request=7 time=740.8 ms
    4096 bytes from / (simfs /dev/simfs): request=8 time=238.4 ms
    4096 bytes from / (simfs /dev/simfs): request=9 time=12.1 ms
    4096 bytes from / (simfs /dev/simfs): request=10 time=30.7 ms
    
    --- / (simfs /dev/simfs) ioping statistics ---
    10 requests completed in 10284.6 ms, 8 iops, 0.0 mb/s
    min/avg/max/mdev = 2.0/127.5/740.8/215.2 ms

    The first one is an over-10-year-old PC with basically nothing running; the second is a VPS that got ~130 MB/s when I first got it.
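The difference between the two runs above is easiest to see in ioping's summary line. Here is a small sketch that classifies a run by its average latency; the 5 ms cut-off is an arbitrary assumption for illustration, not an ioping feature:

```shell
# Read an ioping summary line such as:
#   min/avg/max/mdev = 0.5/5.7/17.1/6.4 ms
# and classify the disk by the avg field.
classify_ioping() {
  awk -F'[=/ ]+' '/min\/avg\/max\/mdev/ {
    avg = $6 + 0            # fields after split: min avg max mdev, then 4 numbers, then ms
    if (avg > 5) print "contended: avg " avg " ms"
    else         print "healthy: avg " avg " ms"
  }'
}

printf 'min/avg/max/mdev = 2.0/127.5/740.8/215.2 ms\n' | classify_ioping
# -> contended: avg 127.5 ms
```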

  • kiloservekiloserve Member
    edited September 2011

    dmmcintyre3 said: First one is a over 10 year old PC with basically nothing running

    Thanks for running the tests Doc, interesting indeed.

    This one is understandable as the disk I/O is dedicated rather than in a multi-user environment. If you simulate a multi-user environment by adding another disk process like copying a directory with a bunch of random files while running your tests, you will get a more accurate representation of DD on a VPS.

    I am guessing with simultaneous disk access (like you have in a VPS) you will see both DD scores and read scores get more in line with your VPS testing.

    On your second VPS, when you were getting 130MB/s, wasn't the random I/O great back then? I think the two go hand in hand: when the sequential DD score drops that much, so will the random scores.
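The suggestion above, adding another disk process while testing, can be sketched as a script: start a background writer to stand in for a noisy neighbour, run the usual benchmark while it works, then clean up. The paths and sizes here are illustrative only:

```shell
# Background writer simulating another tenant's disk activity.
dd if=/dev/zero of=/tmp/bg_load bs=1M count=64 conv=fdatasync 2>/dev/null &
BG=$!

# The usual benchmark, now competing for the disk.
dd if=/dev/zero of=/tmp/bench bs=16k count=1k conv=fdatasync

# Wait for the background writer, then remove both test files.
wait "$BG"
rm -f /tmp/bg_load /tmp/bench
```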

  • kiloservekiloserve Member
    edited September 2011

    Just for comparison, here's another VPS test. Similar DD scores with similar random reads.

    The DD score was slightly higher, and the random reads were also slightly higher.

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync;rm test -f; /root/ioping/ioping -c 10 /
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 14.6845 seconds, 73.1 MB/s
    4096 bytes from / (simfs /dev/simfs): request=1 time=200.5 ms
    4096 bytes from / (simfs /dev/simfs): request=2 time=179.6 ms
    4096 bytes from / (simfs /dev/simfs): request=3 time=58.0 ms
    4096 bytes from / (simfs /dev/simfs): request=4 time=0.1 ms
    4096 bytes from / (simfs /dev/simfs): request=5 time=99.3 ms
    4096 bytes from / (simfs /dev/simfs): request=6 time=11.0 ms
    4096 bytes from / (simfs /dev/simfs): request=7 time=25.5 ms
    4096 bytes from / (simfs /dev/simfs): request=8 time=173.6 ms
    4096 bytes from / (simfs /dev/simfs): request=9 time=4.0 ms
    4096 bytes from / (simfs /dev/simfs): request=10 time=0.2 ms
    
    --- / (simfs /dev/simfs) ioping statistics ---
    10 requests completed in 9972.0 ms, 13 iops, 0.1 mb/s
    min/avg/max/mdev = 0.1/75.2/200.5/77.5 ms
  • kiloservekiloserve Member
    edited September 2011

    And here's a comparison on a node with almost 20 VPS on it.

    As you can see here, the DD score correlates with the random reads as well. As the DD score goes down, I suspect the random read times will increase.

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync;rm test -f; ./ioping -c 10 /
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 3.78859 seconds, 283 MB/s
    4096 bytes from / (ext3 /dev/root): request=1 time=0.1 ms
    4096 bytes from / (ext3 /dev/root): request=2 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=3 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=4 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=5 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=6 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=7 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=8 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=9 time=0.2 ms
    4096 bytes from / (ext3 /dev/root): request=10 time=0.2 ms
    
    --- / (ext3 /dev/root) ioping statistics ---
    10 requests completed in 9036.4 ms, 5807 iops, 22.7 mb/s
    min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms
  • I never ran ioping on the VPS when I first got it, but it was definitely much faster back then.

  •  dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync;rm test -f; ioping -c 10 /
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 18.0077 seconds, 59.6 MB/s
    4096 bytes from / (ext3 /dev/root): request=1 time=0.8 ms
    4096 bytes from / (ext3 /dev/root): request=2 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=3 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=4 time=0.3 ms
    4096 bytes from / (ext3 /dev/root): request=5 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=6 time=0.9 ms
    4096 bytes from / (ext3 /dev/root): request=7 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=8 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=9 time=0.7 ms
    4096 bytes from / (ext3 /dev/root): request=10 time=0.9 ms
    
    --- / (ext3 /dev/root) ioping statistics ---
    10 requests completed in 9018.9 ms, 1417 iops, 5.5 mb/s
    min/avg/max/mdev = 0.3/0.7/0.9/0.1 ms

    This is a VPS with a nice, stable ioping result and an OK dd result. Much better disk performance than the VPS with the 64 MB/s dd result.

  • kiloservekiloserve Member
    edited September 2011

    dmmcintyre3 said: Much better disk performance than the VPS with a 64MB/s dd result.

    The disk reads are excellent but the write speed is slower. So if a user just wants excellent read speeds, this is a good choice, but at the same time the write speeds are slower. I can't really say that slower write speeds are better; it's more of a personal choice depending on whether reads or writes are more important.

    The other VPS can write faster, this one can read faster; but both are average VPSes, and considerably slower than a VPS that posts 283 MB/s in DD.

    1073741824 bytes (1.1 GB) copied, 3.78859 seconds, 283 MB/s
    10 requests completed in 9036.4 ms, 5807 iops, 22.7 mb/s
    min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms
  • Let's try some small writes:

    dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.04281 s, 95.7 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.006481 s, 632 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.011782 s, 348 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.010918 s, 375 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.022222 s, 184 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.049247 s, 83.2 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.047279 s, 86.6 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.275647 s, 14.9 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.075216 s, 54.5 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.074935 s, 54.7 kB/s
    dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0103258 seconds, 397 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0437495 seconds, 93.6 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0071096 seconds, 576 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0166829 seconds, 246 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00639409 seconds, 641 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0198869 seconds, 206 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00516449 seconds, 793 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00678288 seconds, 604 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0165196 seconds, 248 kB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00605218 seconds, 677 kB/s

    The first is the VPS with the 64 MB/s DD and the second is the VPS with the 59 MB/s DD.
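The long chained command above can be written as a loop, which is easier to tweak (iteration count, block size, interval) when repeating the test on different boxes:

```shell
# Ten 4 KB synced writes, one second apart, equivalent to the chained
# command above. dd reports the speed on stderr, hence the 2>&1.
for i in $(seq 1 10); do
  dd if=/dev/zero of=/tmp/ddtest bs=4096 count=1 conv=fdatasync 2>&1 | grep 'copied'
  sleep 1
done
rm -f /tmp/ddtest
```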

  • kiloservekiloserve Member
    edited September 2011

    Here's the result from the 283MB/s DD server; as you can see, it is far superior to the two average-speed servers, just as the speed indicates.

    The two average servers also seem to write at about the same max speed.

    The trend is still the same: two average DD scores give you average results, while a high-DD-score server outperforms the average ones.

    The ~60 MB/s average DD server is not going to beat a 283 MB/s DD server.

    64MB/s DD = 632 kb/s max
    59MB/s DD = 793 kb/s max
    283MB/s DD = 1.3 MB/s max (I left out the 3.2MB/s, that's an outlier)
    
    dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;dd if=/dev/zero of=test bs=4096 count=1 conv=fdatasync;sleep 1;
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0013 seconds, 3.2 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003329 seconds, 1.2 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003658 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00376 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003617 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003661 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00374 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.00408 seconds, 1.0 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003625 seconds, 1.1 MB/s
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.003265 seconds, 1.3 MB/s
    
  • yomeroyomero Member
    edited September 2011

    Small writes are less accurate than big writes imho.

    Maybe a good test would be concurrent small writes.
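A concurrent small-write test could look like this sketch: launch a few synced writers in parallel and time the batch. The file names, writer count, and sizes are arbitrary; on a heavily contended node the wall-clock time for the batch should grow sharply compared to a single writer:

```shell
# Four concurrent 256 KB synced writers, timed as a batch.
start=$(date +%s)
for i in 1 2 3 4; do
  dd if=/dev/zero of=/tmp/cw_$i bs=4096 count=64 conv=fdatasync 2>/dev/null &
done
wait   # block until all background writers finish
echo "4 concurrent writers took $(( $(date +%s) - start ))s"
rm -f /tmp/cw_1 /tmp/cw_2 /tmp/cw_3 /tmp/cw_4
```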

  • Folks, please take a look at this:

    Kernel: 2048.00M 20.43M 2027.57M
    Allocate: 4096.00M 677.49M 3418.51M (2048M Guaranteed)
    Commit: 2048.00M 366.80M 1681.20M (51.1% of Allocated)
    Swap: 2.25M (0.6% of Committed)

    Is it good or bad? :D

  • WhizzWrWhizzWr Member
    edited December 2011

    Sorry if I bumped such an old thread, but I'm curious about my vzfree results, which are:

    Total Used Free
    Kernel: 2048.00M 10.29M 2037.71M
    Allocate: 1024.00M 497.64M 526.36M (512M Guaranteed)
    Commit: 512.00M 259.51M 252.49M (50.1% of Allocated)
    Swap: 0.11M (0.0% of Committed)

    From what I understand, my node has been swapping, but the amount of memory that got swapped (0.11M) is negligible relative to the amount of RAM allocated (0.0-ish %).

    Does it matter? I mean, will it cause any performance degradation?
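As a back-of-the-envelope check using the figures quoted above (0.11M swapped against 259.51M committed), the swapped fraction really is tiny:

```shell
# Swapped MB relative to committed MB, from the vzfree output above.
awk 'BEGIN {
  swapped = 0.11; committed = 259.51
  printf "%.2f%% of committed memory is swapped\n", swapped / committed * 100
}'
# -> 0.04% of committed memory is swapped
```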

  • @WhizzWr said: Does it matter? I mean, will it cause any performance degradation?

    I don't think so. Get worried when more RAM starts to move to the swap

  • WhizzWrWhizzWr Member
    edited December 2011

    @yomero said: I don't think so. Get worried when more RAM starts to move to the swap

    Hey, thanks for commenting Yomero :)
    Well, it goes back to 0.00M for now; I guess I was just being paranoid. No?

    Would love to hear from other LEB master too.

  • @WhizzWr said: Would love to hear from other LEB master too.

    LOL, nah, I am not a master :P but thanks.

    I remember when I had my hostrail boxes and that were moving more than the half of the RAM used to the swap :S

  • InfinityInfinity Member, Host Rep

    @yomero said: I remember when I had my hostrail boxes and that were moving more than the half of the RAM used to the swap :S

    Lol, yeah. My first ever VPS was from HostFail.

  • JacobJacob Member
    edited December 2011

    I don't believe it is actually possible to detect whether a provider is "really" overselling; measuring disk I/O, network speed, etc. is not an accurate way.

    Your provider will measure the nodes on CPU load; this determines how many clients a single node can handle and what the CPU threshold on a node would be.

    You can still make a profit on nodes without majorly overselling, even if you're a budget provider.
