cons of modifying sysctl.conf
Recently, I checked some websites about how to modify
sysctl.conf to solve the problem where nginx gives a 502 error or "Resource temporarily unavailable" when using a sock file to connect to MySQL. People suggest this is because Linux concurrent connections are limited by default. After changing
sysctl.conf as stated there, it looks like the problem can be solved.
My question is: why doesn't Linux set these by default to improve performance, instead requiring an experienced system administrator to do it manually? Are there any downsides to changing sysctl.conf?
Make a backup before you edit anything with:
sysctl -a > sysctl.backup
so you can change things back if you need to.
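A sketch of that backup step (note: `sysctl -a` output isn't directly reloadable as a sysctl.conf, but it records every value so you can restore individual keys; the key name below is just an illustration):

```shell
# Dump all current kernel parameters to a backup file (read-only, no root needed).
sysctl -a 2>/dev/null > /tmp/sysctl.backup

# Later, look up the old value in the backup and restore it with sysctl -w
# (writing requires root; net.core.somaxconn is only an example key):
#   grep '^net.core.somaxconn' /tmp/sysctl.backup
#   sysctl -w net.core.somaxconn=128

# Confirm the backup actually captured something:
grep -c '=' /tmp/sysctl.backup
```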
No idea why the limits are set how they are, though changing them should not cause issues unless you set insane amounts (e.g. unlimited, 0, or 100x the default).
Yeah, I did a bit of research; people just talk about how to modify it, but I didn't find why there is such an inefficient default setting for us to configure...
It might improve performance in your use case, but Linux never tried to be one size fits all. If doing a little bit of research/tuning is beyond you, perhaps you should look for some shared hosting?
Maybe to make it work on the maximum number of servers?
If you check the DO thread I mentioned, you'll find it is a global conf to allow more concurrent system connections and buffering, which is not just my case but, I assume, suitable for most servers.
Do you mean an inefficient conf can be used in more general situations? As far as I know, this kind of setting doesn't influence RAM consumption by itself; it only allows certain software to use connections or other resources at a higher level, so the actual resource consumption depends on the software itself. I'm not sure about this, though, which is why I asked here, to see if any experienced system administrator knows something about it.
'most servers' is disconcertingly vague. I promise you the kernel devs have picked a reasonable default, and your minimal understanding of kernel internals does not supersede their judgment.
As I said, if you're not willing to tinker a little, shared hosting may be a better product for you.
sysctl is a tool that is used to modify kernel parameters at runtime. It can only modify parameters that are exposed under "/proc/sys/*". So it's nothing more than a tool to change some kernel parameters. One of these is the open-file-limit parameter. In your case you had to increase it to fix the issue with your MySQL server, because the default value was probably simply too low for your setup.
You can change many things with sysctl. A few examples:
This https://klaver.it/linux/sysctl.conf seems like a pretty detailed setup with comments and sources for settings.
In the end I'd say that, depending on what you do, you should adjust values as you need.
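To make the read/write mechanics concrete, here's a small sketch (the key `net.core.somaxconn` is a standard Linux parameter, but the value 1024 is purely illustrative):

```shell
# Read a parameter (no root needed):
sysctl net.core.somaxconn          # max queued connections on a listening socket
cat /proc/sys/net/core/somaxconn   # same value, straight from /proc

# Change it for the running kernel only (root needed, lost on reboot):
#   sysctl -w net.core.somaxconn=1024

# Make it permanent by adding a line to /etc/sysctl.conf and reloading:
#   echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
#   sysctl -p
```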
lol, amazing point, your promise is really worth a penny. I do know the kernel devs know much more than me, which is exactly why I'd like to ask why they did it that way. We know they are great, so we would like to learn from them.
Show me the "a little bit of research/tuning" which is beyond me, and help me find the answer to my curiosity. I will thank you; otherwise you don't even know what I am asking for, and bitching around is not helpful at all..
I don't use shared hosting at all (well, except some free ones, to learn more about what shared hosting is). I am not running a website; I am learning Linux. And IT IS NOT YOUR RESPONSIBILITY TO JUDGE WHICH I SHOULD USE.
Go to the cesspit if you think that's better for you. Please don't bump this up with unrelated stuff. Thanks for your comprehension.
Great resource!! Thanks, much appreciated. I will check it through. It really carries a lot of comments about each parameter.
Based on the things you mentioned, the basic default settings are just for a "minimal setting to work" purpose, so tuning the kernel may have a bad effect, or no effect, for general users?
From https://klaver.it/linux/sysctl.conf, I saw this at the beginning:
Optimised and tuned for high-performance web/ftp/mail/dns servers with high connection-rates
DO NOT USE at busy networks or xDSL/Cable connections where packetloss can be expected
It looks like when "high-speed networks with loads of RAM and bandwidth" are not available, the given high connection rates may cause problems. :P
It's like MySQLTuner and my.cnf settings.
Comes with a sane default that's OK for most.
You can tune it and optimize it if you'd like for faster FS access, TCP optimization, etc.
Make sure to not go overboard and set limits meant for beefy bare metal on a small VPS, etc. You can do the same with my.cnf, try to over optimize it, and actually slow things down.
Tweak sysctl if you need to or are seeking improvements at a granular level. Otherwise beyond some open file limits you shouldn't need to poke around too much.
I would suggest making the settings at runtime to make sure nothing goes wrong before applying them to sysctl.conf, just a preference.
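That "runtime first, persist later" workflow might look like this (the key and value are illustrative, not specific recommendations):

```shell
# 1. Note the current value so you can roll back:
old=$(sysctl -n net.core.somaxconn)

# 2. Apply the new value at runtime only (root needed; lost on reboot):
#    sysctl -w net.core.somaxconn=1024

# 3. Watch the application under load; if anything misbehaves, roll back:
#    sysctl -w net.core.somaxconn="$old"

# 4. Only once you're happy, persist it in /etc/sysctl.conf and load it:
#    echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf && sysctl -p

echo "current somaxconn: $old"
```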
Some DSL and internet setups require lower MTUs; one example is when the layer 2 HW cannot handle TCP fragmentation. You do not want to throw sysctl settings meant for servers with 1Gbps uplinks at your home connection.
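Checking the current MTU costs nothing and needs no root; lowering it is a one-liner (the interface name `eth0` and the value 1492 are assumptions, 1492 being common for PPPoE DSL links):

```shell
# Read the MTU of an interface from sysfs (loopback shown; no root needed):
cat /sys/class/net/lo/mtu

# Lowering the MTU on a real interface (root needed; 'eth0' and 1492 are
# just examples):
#   ip link set dev eth0 mtu 1492
```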
If it ain't broke, don't fix it. If you do want to poke around, back up as @sin mentioned, and have a copy of Finnix ready in case it all goes wrong.
really well explained! thank you! I will check through :-)
People who compile Linux distribution images use the lowest common denominator so they (the distro images) will work even on old and weak machines, or on just any machine right after install. But as we all know, there is no one size fits all. Also remember that every server may be used for a different purpose.
You get the vanilla install of an OS and then you tweak to perform at its max capability for you.
I cannot go into detail about every single setting here, because it is probably covered in many other guides and it would take forever, but these limits are set by default to prevent system "instability" in many cases.
For example, "nofile" limits are set so a process can only open this maximum number of files at a time. Think of malicious or buggy code that starts spawning new processes and then opening new file handles. You would soon have a non-responsive server, depending on the size of your VPS or the power of your bare metal server.
You can increase these limits if you deem the new ones suitable, but take these steps with caution and google for guidelines on each and every setting and how it affects the server's performance.
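Before raising any of those limits it's worth seeing where you currently stand; all of this is readable without root:

```shell
# Per-process open-file limit for the current shell:
ulimit -n

# System-wide ceiling on open file handles:
cat /proc/sys/fs/file-max

# How many handles are actually in use right now (allocated, free, max):
cat /proc/sys/fs/file-nr
```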
I worked a lot with Oracle software (not only database) and in many cases Oracle deployment guides ask for the same thing, i.e. increasing the default Linux kernel limits, so this practice is nothing new in the industry.
You are given a powerful tool in form of Linux operating system. This tweaking is done so it serves your purpose in the best possible way.
I've optimized sysctl recently, using a source that gives context and sane warnings.
People had already answered your question - the default is not set according to whatever @colingpt wants to do with Linux today.
And I wasn't bitching - your suggestion that tweaking a simple kernel parameter constitutes something only 'experienced system administrator[s]' should touch is a fair sign you might be struggling.
It's likely that a sysctl tweak isn't even needed; rather, troubleshooting at the application or coding level is, e.g. via MySQL slow query log activation in my.cnf, to see what is causing the bottleneck.
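Turning on the slow query log is only a couple of lines in my.cnf; the file path and the 2-second threshold below are illustrative values, not something from this thread:

```ini
# In the [mysqld] section of my.cnf -- example values, adjust to taste
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 2    ; log queries slower than 2 seconds
log_queries_not_using_indexes = 1
```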
Guessing the context : http://serverfault.com/questions/398972/need-to-increase-nginx-throughput-to-an-upstream-unix-socket-linux-kernel-tun
I would also make sure the SQL DBs are not fragmented and are optimized. I usually use:
This repairs and then optimizes all MySQL DBs on the server.
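The exact command isn't quoted above, but a commonly used one matching that description is `mysqlcheck`, which ships with the MySQL client tools (the flags below are my assumption about what was meant, not a quote from the poster):

```shell
# Repair, then optimize, every database on the server (needs MySQL credentials).
# Guarded so the snippet is harmless on machines without MySQL installed.
if command -v mysqlcheck >/dev/null 2>&1; then
    mysqlcheck --all-databases --auto-repair --optimize
else
    echo "mysqlcheck not installed"
fi
```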
Doubting that's the issue though. Performance-wise it's a small percentage boost; it wouldn't bottleneck Nginx.
You should run MySQLTuner and see if it finds something in terms of values that need to be changed, or perhaps add timeouts to my.cnf to free up some nginx sockets.
You can hire someone to do it as well... Having learned Linux by trial and error myself, I figure: why not have a go at fixing things if they're backed up and not in production?
No way to learn otherwise.
thank you for suggestions and practical examples.
I am not a sysctl.conf expert. I assumed modifying settings has an influence on max simultaneous connections; if that's true, it does indeed have an effect on RAM consumption, as RAM is allocated for each connection.
So yes, I think you have low values by default to make it work on most systems.
But again, I am not an expert.
The link I provided, which contains the sysctl.conf of a high end dedicated server with a high end Internet connection, was just a reference for you to learn everything about the values used. I added it because it had all the comments that actually explain what the values mean and do, plus all the sources.
At the end of my previous post I said that you should of course adjust it as you need it. To do this you of course need knowledge about the values that you adjust.