Optimising nginx for static file serving
I've set up an nginx server purely for static file serving (like S3).
My problem is that even when load-testing against the tiniest possible file on the server (32 bytes total!), I start getting loads of TCP errors at only 600 concurrent users. The bandwidth stays in the 100-200 KB/s region, so it clearly isn't the problem.
These are the results I get in Blitz.io:
17 TCP Connection reset 544
23 TCP Connection timeout 6
I'd like to push this to 5000+ users, not 600.
Can you tell me what should I change in the default nginx configuration to make it possible to handle this many users?
Comments
Check the nginx error logs to see if there's more information of what caused the TCP connections reset and get back to us.
There's probably something useful for you here: http://engineering.chartbeat.com/2014/01/02/part-1-lessons-learned-tuning-tcp-and-nginx-in-ec2/
net.core.somaxconn
In particular, there's a limit to the number of connections queued by the OS before they're passed on to the webserver. The default is usually around ~128. When you exceed that amount, the client TCP connection will just get dropped rather than being served by the webserver.
Nginx cache settings: http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache
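For illustration, here's roughly how those two tips combine in a server block. The directive names are from the linked docs, but the paths and the 4096/10000 figures are my assumptions, not from the thread:

```nginx
# hypothetical example server block
server {
    # raising nginx's listen backlog only helps if the kernel-side queue
    # is raised to match, e.g.: sysctl -w net.core.somaxconn=4096
    listen 80 backlog=4096;

    root /var/www/static;

    # cache open file descriptors, sizes and mtimes for hot static files,
    # so nginx doesn't re-open/stat the same tiny file on every request
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
```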
Make sure to test epoll in nginx http://nginx.org/en/docs/events.html
Set a nice high limit for nginx worker_processes, turn multi_accept on, and raise worker_connections.
http://nginx.org/en/docs/ngx_core_module.html
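Pulling the events-related tips together, a minimal sketch (the values are illustrative, not from the thread):

```nginx
# top of nginx.conf -- illustrative values only
# one worker per CPU core; each worker can hold worker_connections sockets
worker_processes auto;

events {
    use epoll;            # efficient event notification on Linux
    multi_accept on;      # accept all queued connections per wakeup, not one
    worker_connections 8192;
}
```

Max concurrent clients is roughly worker_processes × worker_connections, so 2 workers at 8192 connections comfortably covers a 5000-user target.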
Thanks for all this, I'm looking into them. There is nothing in the logs, BTW.
Caching static files may help:
It wouldn't help with load-testing services. Also, the files are going to change quite frequently; it'll be an HTTP server for a Linux repo.
How much RAM on the server?
How many CPU cores?
What is worker_processes x; set to in /etc/nginx/nginx.conf?
what is worker_connections x; set to in /etc/nginx/nginx.conf?
Well, you can probably get a bit more perf out of it by using ngx_http_gzip_static along with pre-gzipped content.
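Roughly, with ngx_http_gzip_static compiled in, that looks like the following (directive name from the nginx docs; the location is made up):

```nginx
location / {
    # for a request to foo.css, serve a pre-made foo.css.gz when the client
    # sends Accept-Encoding: gzip, instead of compressing on the fly
    gzip_static on;
}
```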
gzip is a good idea in general but I don't know if it'll help in this instance @Rallias. If the CPU is the bottleneck it certainly won't make things much better.
Do you have cache control on, OP?
It's *Optimizing
In American English, but not everyone is American, you dipshit.
You don't know what ngx_http_gzip_static does, do you?
No I don't, TIL. To be honest I don't know much about the non-default modules in nginx, it's a pain to recompile. I just saw gzip in your post.
Didn't actually think of any other countries when I read it wrong, just shows how selfish us Americans are.
Nginx will sometimes have to decompress these files, in case the user's browser doesn't support gzip.
Then don't recompile. Use nginx-core, nginx-full, nginx-naxsi, or nginx-extras
Nope. That's only the case if you use ngx_http_gunzip
Those are just in the Debian repos though, right? They're not in the CentOS nginx.org repos, I don't think. Not that I use anything Red Hat based.
Well then that's their fault for using a decrepit OS
Have you seen some of the decrepit old Red Hat sys admins? They can barely see the terminal anymore, never mind learn something new
don't crucify me with your walking sticks red hat fans
I believe nginx packages are available from EPEL. gzip_static does significantly reduce system load, but it can be tricky to configure. From what I remember, you create a cron job to pre-compress static files in desired directories; nginx serves these pre-compressed files as opposed to compressing them as they're requested.
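The cron job described above could look something like this. The function name and file patterns are my own; gzip -c is used (rather than -k, which older gzip versions lack) so the uncompressed original is kept for clients that don't accept gzip:

```shell
# pre-compress compressible static files so gzip_static can serve them;
# intended to run from cron, e.g.: */10 * * * * precompress /var/www/static
precompress() {
    docroot=$1
    find "$docroot" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) |
    while read -r f; do
        # (re)compress only if the .gz is missing or older than the source;
        # writing via -c keeps the original file, which nginx still needs
        # for clients that don't send Accept-Encoding: gzip
        if [ ! -e "$f.gz" ] || [ "$f" -nt "$f.gz" ]; then
            gzip -9 -c "$f" > "$f.gz"
        fi
    done
}
```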
I'm sure it's not quite up to date, but it'll give you an idea of what you need to change in nginx AND in system (OS-level) settings:
http://dak1n1.com/blog/12-nginx-performance-tuning
OpenVZ VPSes tend to have poor network performance, as they share the kernel with the host server, as well as connection limits.
I once ran a 200 req/s service on an OpenVZ VPS and the host server crashed after running for a few hours, while the same service used only 2% of CPU when I later switched to an i5 dedi.
As you didn't mention what type of server you are running (if I didn't miss it), I suggest you switch to KVM if you are on VZ.
It's an iwStack 1GB instance, KVM, 2 vCPUs. CPU usage is something like 5%, so it can't be the bottleneck.
Thanks for all the tips in this thread, I'll start with the open file cache and try all other tweaks mentioned.
Also, I'm using nginx-light 1.6.0 from dotdeb, Debian 7 netinst 64-bit.
Yeah, so you should have ngx_http_gzip_static
Thanks for the comments in this thread, as well as the following articles:
http://dak1n1.com/blog/12-nginx-performance-tuning
http://www.slashroot.in/nginx-web-server-performance-tuning-how-to-do-it
https://news.ycombinator.com/item?id=6748767
http://blog.zachorr.com/nginx-setup/
I've come up with a config which works with 8000 concurrent connections on this iwstack VPS. Here is the config:
I've also needed to add
to /etc/security/limits.conf
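(The OP's actual lines weren't quoted; a typical limits.conf fragment for this purpose, with values that are my assumption, would be:)

```
# /etc/security/limits.conf -- hypothetical values
# raise the open-file limit so each nginx worker can hold thousands of sockets
www-data  soft  nofile  65536
www-data  hard  nofile  65536
```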
you probably need to change /etc/sysctl.conf too
I've read in the guides that that's quite low-level system tweaking and isn't needed for the usage I'm expecting. What kind of settings do you think I should use?
something like this
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_congestion_control=htcp
net.ipv4.tcp_mtu_probing=1