Does Linux distribution influence network speed?

colingpt Member
edited October 2015 in Help

Hi guys,

Just found something interesting.

Today, when I was using CentOS 5 x86 on DigitalOcean, I ran a network speed test with:
wget --no-check-certificate freevps.us/downloads/bench.sh -O - -o /dev/null | bash
and got 4.5 MB/s from Cachefly, with other locations around 2-3 MB/s.

When I changed to CentOS 6 x86 and Debian 8 x86 and ran the same test, the result became 22 MB/s from Cachefly and 5-8 MB/s from other locations. About 5 times faster.
(Screenshots: CentOS 6 x86 and Debian 8 x86 results)

I didn't test on Ubuntu.

What's wrong here? Just curious... Is this an issue related to the OS, a DO template problem, or something else?

My VPS is currently in Singapore. All systems were tested several times with a clean install on the same instance, and the tests were done within a short time period, so the difference should not be down to the time of testing.

Any ideas? I don't have other IDCs to run this test on; you may want to try it elsewhere.

Thanks.

Comments

  • alexnjh Member
    edited October 2015

    DO Singapore's network isn't really good tbh; I get high latency even though I am based in Singapore.

    But I don't think the distro will affect the network that much.

  • singsing Member
    edited October 2015

    My advice is to try and exclude your IP from screenshots when posting here ...

  • Can you confirm this is the same machine? You'd want to limit the number of variables that change.

    That said, there are a few differences between distributions: kernel, default sysctl TCP settings, and Ethernet driver. All of these can have an effect on TCP performance.

    Running iperf between machines with both UDP and TCP tests is one of the best ways to determine actual bandwidth capacity and TCP performance, respectively.
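
    For example, a minimal sketch (the hostname is a placeholder, and this assumes iperf is installed on both ends):

        # on the far machine: start a server (add -u for the UDP test)
        iperf -s

        # on the VPS: 30-second TCP throughput test
        iperf -c remote.example.com -t 30

        # UDP test at a fixed offered rate; reports packet loss and jitter
        iperf -c remote.example.com -u -b 100M -t 30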

  • @singsing said:
    My advice is to try and exclude your IP from screenshots when posting here ...

    It's not my private IP anyway, just a university IP, but I've changed it. Thanks for the warning.

  • @vinny said:
    Can you confirm this is the same machine? You'd want to limit the number of variables that change.

    That said, there are a few differences between distributions: kernel, default sysctl TCP settings, and Ethernet driver. All of these can have an effect on TCP performance.

    Running iperf between machines with both UDP and TCP tests is one of the best ways to determine actual bandwidth capacity and TCP performance, respectively.

    Thanks for the information. This is definitely the same instance. The only thing I do is change the OS from the control panel and then run the test. The tests are done within a short time period, and the result has been repeatable over the last 3 days since I started using the DO service.

    Don't know if someone would like to try this at another IDC to see the difference.

  • @alexnjh said:
    DO Singapore's network isn't really good tbh; I get high latency even though I am based in Singapore.

    But I don't think the distro will affect the network that much.

    Neither do I, but it happens, just curious :)

  • This looks like a statistical fluke. Even if there are minor differences in TCP settings, c'mon, how is that going to account for a 5x-10x higher transfer rate?

    Compare with the magnitude of changes due to shifts in TCP implementation (from https://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm):

    In tests performed by Google, PRR resulted in a 3–10% reduction in average latency and recovery timeouts reduced by 5%.[14] PRR is used by default in Linux kernels since version 3.2.[15]

  • @singsing said:
    This looks like a statistical fluke. Even if there are minor differences in TCP settings, c'mon, how is that going to account for a 5x-10x higher transfer rate?

    Compare with the magnitude of changes due to shifts in TCP implementation (from https://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm):

    In tests performed by Google, PRR resulted in a 3–10% reduction in average latency and recovery timeouts reduced by 5%.[14] PRR is used by default in Linux kernels since version 3.2.[15]

    I agree with that, but this result has been repeatable over the last 3 days... not like a fluke, so I asked here for possible ideas.

  • Do a sysctl -a on all three, and check for differences. I am guessing that some TCP buffer/window defaults changed with kernel versions.
    I know that increasing RMEM/WMEM gets me better performance.

    Can you check if this is a kernel thing? See if CentOS 5 gives you the same performance as another (old) distro with the same kernel version.
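
    For example, a rough sketch (the filenames are placeholders; you'd copy each dump off the box before reinstalling):

        # dump all kernel parameters on each install
        sysctl -a 2>/dev/null | sort > /tmp/centos5.txt
        # repeat on the other distro, then compare the dumps
        diff /tmp/centos5.txt /tmp/debian8.txt | grep -Ei 'tcp|rmem|wmem'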

  • Different distributions include different kernel versions as well as drivers.

    So CentOS 5 will use a different kernel from, say, Ubuntu 14, and with it a different set of drivers.

    I have been doing some extreme benchmarking and tweaking on Ubuntu 14 and found I can get around 6-7% higher network performance on the latest Vivid kernel than on the stock Ubuntu 14.04 one.

    I've also found that certain CPU benchmarks perform better on Ubuntu 14 than on CentOS 6, but that seems to be down to the kernel versions, v2.6 vs v3.

    I don't think you'll see 5x higher performance like in the OP's posting between major distributions. Between CentOS 5 and Debian 8 I'd expect some performance difference, but nothing that drastic.

    The only thing I can think of is that this is KVM and CentOS 5 doesn't have native virtio drivers, so it probably falls back to some awful emulated Realtek driver, whereas Debian 8 supports virtio natively. You can check which driver is in use; see the sketch below.

    I'll test it at the weekend.
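
    A minimal check (assuming the interface is eth0 and ethtool/lspci are installed):

        # which driver is bound to the NIC? virtio_net means paravirtualized
        ethtool -i eth0
        # or inspect the emulated PCI hardware
        lspci | grep -i ethernet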

  • MarkTurner said: The only thing I can think of is that this is KVM and CentOS 5 doesn't have native virtio drivers, so it probably falls back to some awful emulated Realtek driver, whereas Debian 8 supports virtio natively.

    OP still gets great speeds to Singapore in the CentOS 5 test set, in fact better than in the other test sets.

  • @MarkTurner said:
    I don't think you'll see 5x higher performance like in the OP's posting between major distributions. Between CentOS 5 and Debian 8 I'd expect some performance difference, but nothing that drastic.

    The only thing I can think of is that this is KVM and CentOS 5 doesn't have native virtio drivers, so it probably falls back to some awful emulated Realtek driver, whereas Debian 8 supports virtio natively.

    I'll test it at the weekend.

    Totally agree, configuration and drivers are my guess as well. But I'm really new here and don't know how to test different drivers on CentOS 5.

  • @singsing said:
    OP still gets great speeds to Singapore in the CentOS 5 test set, in fact better than in the other test sets.

    Good point! I didn't notice that.

    Maybe the reason the speed to Singapore is high is that the distance is too short to show the difference?

  • colingpt said: Maybe the reason the speed to Singapore is high is that the distance is too short to show the difference?

    The point is, it proves that virtio vs emulated Realtek is not what's causing the performance problem.

  • Agreed. Looks more like a latency thing. Small buffers will kill throughput on a high-latency link, since TCP throughput is capped at roughly the window size divided by the round-trip time. Look at the rmem and wmem settings in sysctl. Also look at selective acknowledgements (net.ipv4.tcp_sack).
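
    For example, a rough sketch (the values are illustrative, sized to roughly bandwidth x RTT, e.g. 60 MB/s x 0.2 s is about 12 MB):

        # raise the socket buffer ceilings (bytes)
        sysctl -w net.core.rmem_max=12582912
        sysctl -w net.core.wmem_max=12582912
        # TCP min / default / max buffer sizes (bytes)
        sysctl -w net.ipv4.tcp_rmem="10240 87380 12582912"
        sysctl -w net.ipv4.tcp_wmem="10240 87380 12582912"
        # make sure selective acknowledgements are enabled
        sysctl -w net.ipv4.tcp_sack=1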

  • @rincewind said:
    Agreed. Looks more like a latency thing. Small buffers will kill throughput on a high-latency link. Look at the rmem and wmem settings in sysctl. Also look at selective acknowledgements (net.ipv4.tcp_sack).

    I changed those values based on this page, but the result didn't change.

  • colingpt said: I changed those values based on this page, but the result didn't change.

    I just tried it now and it worked. Maybe you forgot net.core.rmem_max and net.core.wmem_max.

    Results:

        Download speed from CacheFly: 25.8MB/s
        Download speed from Coloat, Atlanta GA: 3.47MB/s
        Download speed from Softlayer, Dallas, TX: 8.55MB/s
        Download speed from Linode, Tokyo, JP: 19.5MB/s
        Download speed from i3d.net, Rotterdam, NL: 4.31MB/s
        Download speed from Leaseweb, Haarlem, NL: 7.03MB/s
        Download speed from Softlayer, Singapore: 103MB/s
        Download speed from Softlayer, Seattle, WA: 11.6MB/s
        Download speed from Softlayer, San Jose, CA: 10.5MB/s
        Download speed from Softlayer, Washington, DC: 9.01MB/s
        I/O speed : 118 MB/s

        [root@centos5 ~]# sysctl -a | grep mem
        net.ipv4.udp_wmem_min = 4096
        net.ipv4.udp_rmem_min = 4096
        net.ipv4.udp_mem = 49056 65408 98112
        net.ipv4.tcp_rmem = 10240 87380 12582912
        net.ipv4.tcp_wmem = 10240 87380 12582912
        net.ipv4.tcp_mem = 12288 16384 24576
        net.ipv4.igmp_max_memberships = 20
        net.core.optmem_max = 10240
        net.core.rmem_default = 110592
        net.core.wmem_default = 110592
        net.core.rmem_max = 12582912
        net.core.wmem_max = 12582912
        vm.lowmem_reserve_ratio = 256 256 32
        vm.overcommit_memory = 0
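
    To keep these across reboots, standard sysctl practice (nothing DO-specific) is to put the same key = value lines into /etc/sysctl.conf and reload:

        # apply the settings from /etc/sysctl.conf without a reboot
        sysctl -p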

  • @rincewind said:

    Thank you! It definitely works now!!! I used your config directly.
    It's weird that I don't see much difference between yours and the ones from that page. Maybe I missed some part of it. Anyway, many thanks!

  • No problem. I did use the same config as your cyberciti link.

    If you want to understand TCP tuning in more depth, you should look at FasterData.
