Fixing Network Speed in KVM
If you're using a KVM VPS and not getting the network speed you should (e.g. 100Mbps when you're supposed to get 1Gbps), you can try the following.
Open /etc/sysctl.conf on your server and append the following lines:
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
Now run the following command in a terminal:
sysctl -p
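To confirm the new values actually took effect, you can read them back (each tcp triplet prints as min, default, max in bytes):
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem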
Now check your network speed again. Here is how it went for me:
Stock options:
Download speed from CacheFly: 8.29MB/s
Download speed from Coloat, Atlanta GA: 2.31MB/s
Download speed from Softlayer, Dallas, TX: 3.45MB/s
After the sysctl.conf edit:
Download speed from CacheFly: 51.2MB/s
Download speed from Coloat, Atlanta GA: 19.8MB/s
Download speed from Softlayer, Dallas, TX: 53.0MB/s
Download speed from Linode, Tokyo, JP: 10.6MB/s
Download speed from i3d.net, Rotterdam, NL: 5.12MB/s
Download speed from Leaseweb, Haarlem, NL: 14.2MB/s
Download speed from Softlayer, Singapore: 8.87MB/s
Download speed from Softlayer, Seattle, WA: 36.7MB/s
Download speed from Softlayer, San Jose, CA: 33.6MB/s
Download speed from Softlayer, Washington, DC: 76.5MB/s
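(These numbers came from the usual wget-based bench script; for a rough manual equivalent, assuming the commonly used CacheFly test file is still up:)
wget -O /dev/null http://cachefly.cachefly.net/100mb.test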
Comments
Being a support guy, this is the one thing I have to continuously tell people to do. Thanks @ironhide
Add a note/tip for 100/100 connections as well. The right values for those are tricky, since it's a kernel tweak for networking(?).
Does this change the amount of RAM usable for networking?
According to http://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php it adjusts the maximum and default send/receive buffer sizes for TCP connections. This means the TCP algorithm is allowed to use a larger window, and downloads also start off faster due to the higher default. It uses a bit more memory (not noticeable to the user, though; TCP doesn't need much memory).
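To put rough numbers on it: a TCP sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at roughly window / RTT. With the values above (illustrative figures, assuming a 100 ms round-trip path):
throughput ≈ window / RTT
16777216 B / 0.1 s ≈ 160 MB/s (the new 16 MB maximum)
87380 B / 0.1 s ≈ 0.87 MB/s (the default, before autotuning grows the window)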
Any similar tips for Windows?
I think it's taken from https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=56
I saw these tips years ago, only at RamNode.
Meh: https://www.google.com/search?q=net.core.rmem_max=16777216; some results date back to 2007.
With the right drivers installed, you don't need any other tuning.
That's silly. It's not "super secret" or "proprietary" stuff that's available only on one specific site. And just because you didn't see it anywhere else doesn't mean it isn't available elsewhere. It's a common solution to problems like this.
Even just searching for the line "net.ipv4.tcp_rmem=4096 87380 16777216" brings up 138,000 results on Google. Time to start using Google, eh?
Does this trick also apply to Xen-based VPSes?
Yep, works well under a Xen VPS.
All OSes benefit from it; the ones tuned for low memory use, e.g. Debian-based, benefit the most. The virtualization platform doesn't matter, obviously; I mean actual virtualization, not containers using the host kernel.
And this is in our FAQ too. We just hid it well.
Thanks for this! Worked perfectly on BlueVM KVM2!
Works just fine with VMware.
Works on an OS without virtualization too.
Before making this change on my Kimsufi Atom:
After making the change:
So a little inconclusive, but maybe better.
OpenVZ:
It's sad because I thought I could do better with this:
Guys, these are for KVM guests. You're not going to see a big difference on a host OS (no virtualization), and it's not going to work on OpenVZ.
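If you're not sure which you're on, a quick check on systemd-based distros (systemd-detect-virt ships standard there) prints kvm, xen, openvz, lxc, etc., or none on bare metal:
systemd-detect-virt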
Hi,
I know this is an old thread, but I just tried it on a KVM VPS running Debian 9 and network speed improved! So I guess this is still valid for Debian 9, Ubuntu 18.04, and CentOS 7?
Are you guys using this sysctl.conf "optimization"?
No. If you have a proper host, you don't need to. If your speed is less than advertised, I would try a different kernel rather than this hack.
This isn't a hack. Linux was always expected to be tuned like this to the application's needs (the same as ANY OS); people just stopped tweaking because there are trade-offs and it requires testing, so they stick with the defaults.
It's best to find blogs posted by people who have worked on high-capacity, high-throughput servers for their experience and recommendations. People generally seek these out when researching bottlenecks in their existing setup with default settings (i.e. gigabit and 10Gb links). They will point out the worst default settings, which should be changed for almost everyone.
I stopped doing sysctl tuning years ago, when I stopped using iptables scripts and switched to firewalld.
Nowadays, just enabling BBR on a supported kernel is the easiest return on effort.
Agreed.
Google BBR seems interesting. Never tried it before. Can anyone recommend a link to a good tutorial on how to enable it on Debian 9 or Ubuntu 18?
Thanks
You need to modify the kernel config and compile it.
Just look for "compiling a kernel on Debian".
EDIT:
It's included from kernel 4.9 and up.
For a VPS with kernel 4.9+, you generally just need to edit /etc/sysctl.conf (e.g. sudo nano /etc/sysctl.conf) and add:
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
Save and run:
sudo sysctl -p
Check that bbr is loaded:
lsmod | grep bbr
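To also confirm it's actually in use, not just loaded, both of these should echo back the values you set:
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc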
The whole thing takes around 30 seconds.
but "hack" and bbr in combination are counterproductive?
Thanks for the tip on BBR.
After updating the kernel: lower memory usage and faster network speed.
What was that 80MB/s figure before BBR was enabled, do you recall?
Cannot remember, but I was doing a single download test which maxed out at 9.3MB/s before the kernel upgrade + BBR.
After doing so it was consistently around 18MB/s.
Observed similar results on most VPSes using the 3.10 kernel from CentOS 7.6.
Thanks
Before: ~200 Mbps
After: ~800 Mbps
No suggestions about using the "hack" + BBR? Is the combination of the two recommended or not?