Does Virtualization matter for network speed

Assuming I'm in the same data center with the same provider, is network speed the same for both OVZ and KVM?

Comments

  • I would assume so; I don't see why a 100mbit line would change itself to something else.

    Poor I/O or high CPU could potentially slow the response time down but the line speed would remain the same.

    Stop getting your post count up, I know you want to post those offers ;)

    Thanked by 2jcaleb Mark_R
  • jarjar Patron Provider, Top Host, Veteran

    Network driver might cause a variation in results, as would port congestion, but I would assume everyone would have their servers on the same or equiv. infrastructure.

    Thanked by 1jcaleb
  • I think OVZ has a slight, negligible edge over KVM/Virtio. Could be wrong.

    Thanked by 1jcaleb
  • JeffreyJeffrey Member
    edited February 2014

    The network speed would remain the same, whether it's one virtualization or another. However, disk I/O does factor into loading speed.

    Thanked by 1jcaleb
  • c0yc0y Member
    edited February 2014

    @ihatetonyy said:
    I think OVZ has a slight, negligible edge over KVM/Virtio. Could be wrong.

    Correct.

  • @Jeffrey said:
    The network speed would remain the same, whether it's one virtualization or another. However, disk I/O does factor into loading speed.

    Disk I/O runs over the same architecture as network I/O... Why do you think they're both referred to as I/O?

    Please refrain from commenting if you only know peanuts about how such systems work internally.

    Thanked by 1Jeffrey
  • ATHK said: Stop getting your post count up, I know you want to post those offers ;)

    I'm not a host.

  • raindog308raindog308 Administrator, Veteran
    edited February 2014

    @jcaleb said:
    I'm not a host.

    ...yet.

    Hang around here much and sooner or later you'll catch the disease.

  • raindog308 said: ...yet.
    Hang around here much and sooner or later you'll catch the disease.

    Not my cup of tea... I have some plans for software as a service, though. I'm developing/programming something now for a small company in Manila.

    Thanked by 1raindog308
  • MaouniqueMaounique Host Rep, Veteran
    edited February 2014

    In short, it does.
    The long version:
    1. It will not matter on 100 Mbps links; the difference will be negligible even with poorer chipsets and E3s (dual E5 boards are usually better built). I am not talking about i3s here, nor Atoms;
    2. At 1 Gbps and a lot of pps it matters a lot. I have seen a lot of outgoing DDoSes, and the pps are at least about double on OVZ, while KVM usually can't saturate the port even with UDP attacks. That is over many attacks over time, though I haven't checked whether virtio was enabled; I should assume in at least some cases it was, because we have virtio on by default. Xen does better than KVM, but OVZ still does better.
    3. That being said, in a real-world scenario, unless you run some illegal satellite-receiver card-sharing program which goes as high as 80k pps, you won't have to worry.
    4. One last thing: size matters. Small Xen/KVM VMs without some tweaks will have TCP buffers that are too small, and they will impact speed even in regular tests; you need to do some tweaking that OVZ will not, since this is handled by the node. Even bigger VMs might need those tweaks if your network needs to peak very high from time to time.
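    A rough sanity check on point 4: maximum TCP throughput is bounded by window size divided by round-trip time, so a small default buffer caps speed no matter the port. A sketch, assuming a common 87380-byte default receive window and a hypothetical 50 ms RTT:

```shell
# Bandwidth-delay product: max TCP throughput ~= window / RTT.
# 87380 bytes is a common default receive window; the 50 ms RTT is assumed.
awk 'BEGIN { printf "%.1f MB/s\n", 87380 / 0.05 / 1e6 }'
```

    That is nowhere near a gigabit port, which is why untuned small VMs fall short.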

    Thanked by 1Mark_R
  • jcalebjcaleb Member
    edited February 2014

    Maounique said: 4. One last thing: size matters. Small Xen/KVM VMs without some tweaks will have TCP buffers that are too small, and they will impact speed even in regular tests; you need to do some tweaking that OVZ will not, since this is handled by the node.

    I am thinking of just comparing using the speedtest.net command line tool or wget like others use here. Will I get different results on a 128MB RAM box vs a 1GB box?

  • @jarland said:
    Network driver might cause a variation in results, as would port congestion, but I would assume everyone would have their servers on the same or equiv. infrastructure.

    The network driver can make a massive difference in port speed and how it deals with congestion, mostly in the case of VMware; not such an issue with OpenVZ and KVM as far as I know.

    Thanked by 1jcaleb
  • MaouniqueMaounique Host Rep, Veteran

    If KVM/Xen, yes. If both OVZ, unlikely. You can tweak it in sysctl:

    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_no_metrics_save = 1
    net.ipv4.tcp_moderate_rcvbuf = 1
    net.core.netdev_max_backlog = 2500
    net.ipv4.tcp_sack = 0
    net.ipv4.tcp_window_scaling = 1
    

    Some will help more, some less: at least the first two are needed, and the next two will help too, but the others not so much, depending on your configuration, packet rate, and frame size. They are generally good to have.
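    If anyone wants to persist those settings, a minimal sketch (the file name is arbitrary; installing system-wide needs root, so that step is left commented out):

```shell
# Write the two most important settings to a sysctl fragment.
cat > /tmp/90-net-buffers.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF
# As root you would then install and load it:
#   cp /tmp/90-net-buffers.conf /etc/sysctl.d/ && sysctl --system
cat /tmp/90-net-buffers.conf
```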

    Thanked by 1jcaleb
  • Maounique said: If KVM/Xen

    Will it be noticeable? Like 30% difference in download speed?

  • MaouniqueMaounique Host Rep, Veteran

    It could be as much as 10 times, depending on the free memory you had when you did the test without those settings. But that is in extreme cases.

    Thanked by 1jcaleb
  • @jcaleb said:
    I'm not a host.

    That was a joke :)

    Thanked by 1jcaleb
  • concerto49concerto49 Member
    edited February 2014

    MitchellRobert said: Why do you think they're both referred to as I/O?

    Because they both do input and output? That's all I/O means. It doesn't have to be the same system.

  • Thanks @Maounique, I will try this on the same host. Maybe I'll compare my 128MB OVZ, 256MB KVM, and 1GB Xen, all in Milan.

  • MaouniqueMaounique Host Rep, Veteran

    They all have the same connection, so it should show the differences more clearly. Also, make sure that the speedtest location is the same and close (Varese would be a good one), and that you run them in parallel. If you choose a distant endpoint you will probably not maximize the port(s), not even close. A location on GARR would be best.
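    For the actual measurement, a minimal sketch; the URL is a placeholder (a file:// target so it runs offline), so swap in a real test file close to the boxes, e.g. one hosted on GARR:

```shell
# Print the average download speed curl measured for a URL.
# file:///etc/hosts is a placeholder; substitute a real test-file URL.
URL="file:///etc/hosts"
curl -s -o /dev/null -w '%{speed_download} bytes/s\n' "$URL"
```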

    Thanked by 1jcaleb
  • @MitchellRobert said:
    Disk I/O runs over the same architecture as network I/O... Why do you think they're both referred to as I/O?

    Please refrain from commenting if you only know peanuts about how such systems work internally.

    Except the two are separate. VFS and network subsystems are independent and accessed differently. cough

    KVM is going to be slightly slower even with virtio (network and/or block device drivers) because these two subsystems run deeper than just the driver. Specifically, the buffers, memory managers, and schedulers used by each subsystem are all still running under emulation. VFS tends to be more entangled with the memory subsystem, so there's going to be a penalty there.

  • @MitchellRobert I was referring to the speed at which the HDD writes.

  • MaouniqueMaounique Host Rep, Veteran

    @Jeffrey said:
    MitchellRobert I was referring to the speed at which the HDD writes.

    Exactly. With RAID 10 you can theoretically achieve something like 900 MB/s (with SSDs that write at a full 450 MB/s each); it will never be more than 500 on SSD-cached setups (unless the cache itself is RAID 0, which is highly unlikely).
    The reason we see higher speeds today is that the tests run in the RAID controller cache, or even on specially tweaked nodes with RAM buffers.
    Some hosts use various techniques to show off unrealistic performance which has nothing to do with the drives, nor with real-life applications, for which 100 MB/s is largely enough and 300 MB/s should be much more than needed, unless you have some truly special usage or very busy unoptimised databases, which would do better in RAM or on a dedi most of the time.

    Thanked by 2wcypierre Dylan
  • DewlanceVPSDewlanceVPS Member, Patron Provider

    Without virtualization the dd test shows 1GB/s, and with virtualization it shows 200MB/s.

    Download/upload speed is the same.
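    For reference, the usual dd write test looks something like this (64 MB here to keep it quick; conv=fdatasync forces a flush so caches inflate the number less):

```shell
# Sequential-write test; dd prints its throughput summary on the last line (stderr).
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```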

    Thanked by 1jcaleb
  • MaouniqueMaounique Host Rep, Veteran

    @DewlanceVPS said:
    Without virtualization the dd test shows 1GB/s, and with virtualization it shows 200MB/s.

    Download/upload speed is the same.

    You have some issue with the storage driver inside the VMs. There is a difference, but it should not be that big even with KVM if using virtio.

  • I haven't seen much speed difference between different types of virtualization; however, I have noticed that OVZ latency is better than KVM's. I've seen almost a 10ms difference.

    Thanked by 1jcaleb
  • @tchen said:
    Except the two are separate. VFS and network subsystems are independent and accessed differently. cough

    I was talking at the virtualization level, where they're both virtio, which runs over emulated PCI.

    What the I/O subsystems individually handle is irrelevant to the fact that you're bound to lose ~10% performance over QEMU's emulated I/O channels, unless maybe you pass through an entire drive or RAID card with VT-d enabled.
