Comments
It's possible:
https://fasterdata.es.net/host-tuning/linux/#toc-anchor-2
Here are my base tweaks. I don't think they're perfect yet, and they're something I'm still benchmarking and changing for improvements as time goes by, but they already give me much better results than the stock settings on 1-10Gbps servers. Happy for feedback on these too.
For the qdisc I use `fq_codel` by default instead of `fq` (it is explicitly set because Linux still defaults to `pfifo_fast`; `fq_codel` being the default is a systemd thing, so it's not there on, as an example, Alpine Linux), because `fq` is more tuned for TCP workloads. In my case I usually have mixed TCP/UDP use-cases (mostly real-time multimedia streaming, QUIC, and tunnels/VPNs, which all use UDP). I use `cake` on some servers too (and my home network :P), but it depends on the use-case.

For congestion control, I use BBRv3 on my servers (compared to the BBRv1 the kernel includes). This needs a custom kernel; for Debian/Ubuntu you can use XanMod as an easy way to get it (plus it includes a few other changes).
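In sysctl terms that part is just the following (a sketch; the file name is an example, and `bbr` selects whichever BBR the running kernel ships, so you only get v3 on a kernel like XanMod that carries it):

```
# /etc/sysctl.d/99-network.conf (illustrative name)
net.core.default_qdisc = fq_codel
net.ipv4.tcp_congestion_control = bbr
```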
MSS is `1240` to fit within the default `1280` MTU. Depending on the server's use-case I might adjust this upwards if it won't be communicating too often with clients, but otherwise I err on the side of people on lower MTUs (Tailscale for example uses MTU `1280`, and this also applies to the "exit node" function a lot of people use; same thing for those who use Cloudflare's WARP tunnel), since `tcp_mtu_probing` should increase it as long as the connection stays open.
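To make that concrete, it's along these lines (a sketch only; the iptables clamp is just one way to set the MSS, it could equally be done per-route with `ip route ... advmss 1240`):

```
# Let TCP probe for a larger usable MSS on long-lived connections
sysctl -w net.ipv4.tcp_mtu_probing=1

# Clamp outgoing SYNs to MSS 1240 (illustrative; your ruleset will vary)
iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1240
```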
The `rmem`/`wmem` values are what I want to benchmark further and what I tweak most often depending on server/client locations and, of course, the bandwidth available on a given server. But they work fine for me at the moment as a base (sketched after the link below) where latency is in a normal ~100ms range, whether for high-bandwidth file serving to another continent, streaming HD video via UDP, or my VPNs.

There are some TCP tuning tools floating around in Chinese communities; here is one that I use occasionally (use your browser's translator):
https://omnitt.com/
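For reference, the base I mentioned looks something like this (illustrative numbers, not a recommendation; the right maximums depend on your bandwidth-delay product, per the fasterdata guide linked earlier):

```
# Maximum socket buffer sizes (64 MiB, for high-bandwidth ~100ms paths)
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# min / default / max; TCP autotuning moves within these bounds
net.ipv4.tcp_rmem = 4096 131072 67108864
net.ipv4.tcp_wmem = 4096 87380 67108864
```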
Apparently there are no universal settings. You'll have to tweak based on your port speed and use cases.
Also found a one-click script, though I haven't used it myself:
https://github.com/BlackSheep-cry/TCP-Optimization-Tool
Thanks guys!
Also, does anyone know how to use spare RAM to make writing to HDDs faster? As in caching using some RAM. RAID0 is not cutting it on some servers.
That's depressing, and every time you do this, a @yoursunny dies.
Block RAM Disk
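If that means the brd kernel module, a minimal sketch (sizes are examples; everything on it is lost at reboot, so it only suits disposable write staging):

```
# One 4 GiB RAM-backed block device at /dev/ram0 (rd_size is in KiB)
modprobe brd rd_nr=1 rd_size=4194304
mkdir -p /mnt/ramdisk
mkfs.ext4 /dev/ram0
mount /dev/ram0 /mnt/ramdisk
```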
I'm actually using this: https://github.com/k4yt3x/sysctl/blob/master/sysctl.conf
I use https://tuned-project.org/ for tweaking host profiles.
Same choice
It's installed by default on the Linux distro I'm using
this + custom profile per box using throughput-performance as base
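For anyone curious, such a profile is just a directory under /etc/tuned; a hypothetical example (the profile name and the sysctl override are made up):

```
# /etc/tuned/my-box/tuned.conf
[main]
include=throughput-performance

[sysctl]
net.core.default_qdisc=fq_codel
```

Activate it with `tuned-adm profile my-box`.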
"I NEED NAT! FUCK IPV6"
I did try that, trust me mate, it ain't worth the effort. Switch to ZFS, mind your CPU and RAM usage, tweak ZFS for your use-case, and enjoy.
LOL, how is that good tweaking! Anyway, I hate IPv6.
Can recommend `kernel.unprivileged_userns_clone=0`. I've lost count of the number of CVEs that having this disabled made me immune to.
Play with `vm.dirty_ratio` and `vm.dirty_background_ratio`, and increase the commit time (fs-dependent); a sketch is below.

I'm actually working on a sysctl.conf configurator for IncogNET. You can either select from the default VPS plans or enter custom values, select the purpose of the server, and add some random additional features.
It works, but needs more testing.
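The dirty-page part of that, as a rough starting point (arbitrary numbers; higher ratios buffer more writes in RAM but risk losing more on a crash):

```
# Start background writeback later, and allow more dirty pages overall
vm.dirty_background_ratio = 10
vm.dirty_ratio = 40
```

The commit time is filesystem-specific; on ext4 it's the `commit=` mount option (5 seconds by default).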
On all my personal stuff, I usually swap the kernel for Xanmod and do various sysctl tweaks.
I just googled this and you're right. Wow.

And in the (yet unreleased) 6.15 Linux kernel, that tunable seems to be completely gone:

I wonder why.
See: `user.max_user_namespaces`
Because it's an Ubuntu/Debian-specific patch that adds that tunable; you won't find it in a mainline kernel.
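On mainline the closest knob is the one mentioned above; note that setting it to 0 blocks user-namespace creation for everyone, not just unprivileged users (a sketch, not a blanket recommendation):

```
# Mainline alternative to the Debian/Ubuntu-only unprivileged_userns_clone patch
user.max_user_namespaces = 0
```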
Oh, sh*t... didn't see that coming.
Thanks for the info!
Good thread. Gotta admit the only one I use consistently is disabling IPv6.
The thread of sanity