Optimize KVM performance

goexodusgoexodus Member
edited March 2013 in General

I was trying to find ways to optimize KVM installs (CentOS 6.3 or Ubuntu 12.04) and could not find much good information beyond the following:

1) modify /boot/grub/menu.lst or /boot/grub/grub.cfg to set the VPS I/O scheduler to noop
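For example (a rough sketch; the exact file and existing kernel options vary per install, since Ubuntu 12.04 uses GRUB2 and CentOS 6 uses legacy GRUB):
# GRUB2 (Ubuntu 12.04): add elevator=noop to the default kernel command line
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&elevator=noop /' /etc/default/grub
update-grub
# Legacy GRUB (CentOS 6): append elevator=noop by hand to the kernel line
# in /boot/grub/menu.lst (grub.conf), then reboot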

2) use Virtio Drivers
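To confirm virtio is actually in use inside the guest (a quick sketch; assumes the usual vda/eth0 names):
lsmod | grep virtio             # virtio_blk / virtio_net should be loaded
ls /sys/block                   # disks appear as vda, vdb, ... instead of sda with virtio-blk
ethtool -i eth0 | grep driver   # should report virtio_net for the NIC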

3) fix the swap partition if it's 0
free -m
swapoff /dev/vda2
mkswap /dev/vda2
swapon /dev/vda2

4) improve I/O performance
echo 0 > /sys/block/vda/queue/rotational
echo 0 > /sys/block/vda/queue/rq_affinity
echo noop > /sys/block/vda/queue/scheduler
echo "echo 0 > /sys/block/vda/queue/rotational" >> /etc/rc.local
echo "echo 0 > /sys/block/vda/queue/rq_affinity" >> /etc/rc.local
echo "echo noop > /sys/block/vda/queue/scheduler" >> /etc/rc.local
echo 'vm.swappiness=5' >> /etc/sysctl.conf
echo 'vm.vfs_cache_pressure=50' >> /etc/sysctl.conf
sysctl -p

5) mount the partition with noatime.
vi /etc/fstab
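An example fstab line (sketch only; the device, filesystem and existing options are placeholders, so keep a backup of fstab first):
/dev/vda1   /   ext4   defaults,noatime   0   1
mount -o remount,noatime /   # apply without a reboot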

any other links, tips or ideas?

RamNode seems to have a well-done guide:
https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=44

Comments

  • there's not much you can do to optimize KVM performance. What issues do you have right now?

  • Move to a faster node :p

  • goexodusgoexodus Member
    edited March 2013

    there's not much you can do to optimize KVM performance. What issues do you have right now?

    The virtio driver instead of legacy IDE made a noticeable difference, at least on CentOS.

  • @goexodus said: The virtio driver instead of legacy IDE made a noticeable difference, at least on CentOS.

    yeah, I meant aside from the methods you already described

  • KeithKeith Member

    Use ext4 instead of ext3 when installing.
    The kernel could also be recompiled; my choices are:
    Not compiling for size
    Preemption model: No Forced Preemption (Server)
    Processor family: depends on the CPU used
    Reduce the maximum number of CPUs
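
    If it helps, those choices roughly map to the following .config symbols (a sketch only; exact names depend on kernel version, and the processor family / CPU count here are just example values):

    # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
    CONFIG_PREEMPT_NONE=y
    CONFIG_MCORE2=y
    CONFIG_NR_CPUS=4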

  • yomeroyomero Member
    edited March 2013

    None of those instructions are oriented to SSDs except this one:

    echo 0 > /sys/block/vda/queue/rotational

    The nobarrier option seems to be suggested because they use BBU RAID controllers.
    noatime isn't really recommended; apparently "relatime" is a better choice.
    The noop scheduler is the recommended one for KVM too.
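
    You can check what the guest is currently using before changing anything (assuming the disk is vda):

    cat /sys/block/vda/queue/scheduler    # the active scheduler is shown in [brackets]
    cat /sys/block/vda/queue/rotational   # 1 = treated as rotating disk, 0 = SSD/non-rotational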

    @Keith said: Use ext4 instead of ext3 when installing.

    Why this?

  • flyfly Member

    noop is good for KVM

  • @fly said: noop is good for KVM

    +1

  • Nick_ANick_A Member, Top Host, Host Rep

    If you're having any issues on a RamNode VPS, don't hesitate to open a ticket.

    Otherwise, carry on! Perhaps I'll learn a thing or two in this thread as well.

  • Adding the following lines to your /etc/sysctl.conf file can increase network throughput:

    net.core.rmem_max=16777216
    net.core.wmem_max=16777216
    net.ipv4.tcp_rmem=4096 87380 16777216
    net.ipv4.tcp_wmem=4096 65536 16777216
    

    You can also try the following /etc/fstab options on your ext4 mount

    noatime,data=writeback,barrier=0,nobh
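
    The sysctl lines can be applied without a reboot and sanity-checked like this (a sketch, using the values above):

    sysctl -p
    sysctl net.core.rmem_max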
  • I don't like the noatime one =/

    Also, I am afraid of the cache one.

  • goexodusgoexodus Member
    edited March 2013

    But save all the settings first in case you need to roll back ...

  • Nick_ANick_A Member, Top Host, Host Rep

    I've found that double writeback is not helpful. Just let the hardware handle writeback in my opinion.

  • @Nick_A said: I've found that double writeback is not helpful. Just let the hardware handle writeback in my opinion.

    So, in the end, the "perfect" settings depend on the provider :P

  • @Nick_A said: I've found that double writeback is not helpful. Just let the hardware handle writeback in my opinion.

    I believe this isn't a provider-specific topic, so there's always a chance someone's provider isn't using writeback.

  • Nick_ANick_A Member, Top Host, Host Rep

    Sure, sorry.

    Anyway, have you guys seen a KVM guest become very slow after applying tweaks like the ones above?

  • Does a higher count of threads or CPUs have any predictable effect, or does it make things worse as in the case of VMware?

  • @goexodus said: as in the case of VMware?

    :O What you mean?

  • goexodusgoexodus Member
    edited March 2013

    VMware does not behave well when you assign multiple vCores to multiple VMs; you actually pay a penalty in that case. VMware dispatches vCores using a time-slicing algorithm (a fairness approach), not an interrupt-driven algorithm as is typically used to dispatch processes in a regular operating system. What this means in practice is that if you assign, say, 2 vCores to a guest, then 2 physical cores must be AVAILABLE AT THE SAME TIME before that guest can run at all.

    I was just wondering how to offer increased performance to certain customers who need it and are willing to pay for it, using a KVM platform.

  • @goexodus said: AVAILABLE AT THE SAME TIME before that guest can run at all.

    Wow, that sounds a little bit dumb :(

  • Wow, that sounds a little bit dumb :(

    No it's not, actually; quite the opposite. It's a complicated subject and statistical in nature ...

  • @goexodus You get more performance with more vCores. Take a look at the UnixBench results in ServerBear's benchmarks: if you have more vCores you get more points. It works very well with Linux KVM.

    You can pin each vCore to a specific core if you want to get the best results.
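
    On the host with libvirt, pinning can look something like this (a sketch only; "guest1" and the core numbers are placeholders):

    virsh vcpupin guest1 0 0    # pin vCPU 0 of guest1 to physical core 0
    virsh vcpupin guest1 1 1    # pin vCPU 1 to physical core 1
    virsh vcpuinfo guest1       # verify the resulting CPU affinity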

  • @fileMEDIA said: You get more performance with more vCores.

    Which sounds obvious lol

    @goexodus said: It's a complicated subject and statistical in nature

    I guess so. VMware stuff has years and years of development

  • I guess so. VMware stuff has years and years of development

    But if you check their licensing you will get a heart attack ...
