HTTP/2 HTTPS Benchmarking Litespeed vs Nginx - make use of your idle servers
Thought this would be of interest for all the folks here with idle servers doing nothing. I came across a WHT thread at https://www.webhostingtalk.com/showthread.php?t=1775139 which asked whether the HTTP/2 HTTPS benchmarks posted at https://http2benchmark.org/ are real, and I chimed in on that thread with my thoughts.
TLDR
- They created a Github repo suite with all scripts to install both client and server side applications used for benchmarking including litespeed and nginx. I only tested on CentOS 7 but they support Ubuntu too. Repo at https://github.com/http2benchmark/http2benchmark
- I wasn't happy with the test parameters and configurations tested for Litespeed vs Nginx. They're testing a slightly more optimised Litespeed 5.4 configuration than the one used by a Litespeed 5.4 install out of the box, versus nginx.org repo stable Nginx 1.16 with normal defaults, which aren't optimal and miss some key performance related options. Also there are no TCP/kernel optimisations on either the server or client side out of the box.
- So I forked their repo and extended their script/suite at https://github.com/centminmod/http2benchmark/tree/extended-tests - the added stuff is explained on the extended-test branch readme.
- Server to Client tests may be constrained by the network connectivity of your VPS, i.e. 250Mbit to 1Gbit, compared to having a 40+ Gbps pipe to test with.
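On the TCP/kernel point, here's a minimal sketch of the kind of sysctl tuning the stock suite omits. The values are illustrative examples of commonly raised limits, not taken from either repo:

```shell
# Illustrative sysctl tunables often raised before HTTP load testing;
# written to a file for review rather than applied directly.
cat > /tmp/99-benchmark-tuning.conf <<'EOF'
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.core.netdev_max_backlog = 65535
EOF
# apply as root on both server and client with:
#   sysctl -p /tmp/99-benchmark-tuning.conf
cat /tmp/99-benchmark-tuning.conf
```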
So benchmark away folks ! ^_^
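For reference, the suite drives its tests with h2load from nghttp2. A sketch of a typical invocation follows; the wrapper name, target host, and numbers are placeholders of mine, not the suite's actual parameters:

```shell
# run_h2load is a hypothetical wrapper around nghttp2's h2load benchmark
# tool: -n total requests, -c concurrent clients, -m max concurrent
# streams per client.
run_h2load() {
  h2load -n 5000 -c 100 -m 10 "https://$1/"
}
# usage: run_h2load ipaddr
type run_h2load
```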
Comments
Waiting for some results. The suite is too heavy for my tiny idle servers.
What's the conclusion?
I don't understand what test you want here. Hosing our providers' internet pipes won't do anything about benchmarking the software. Hetzner Cloud no longer has 10gbps uplinks apparently. Is there anything we can test that will reveal anything not already known? If Litespeed is beating nginx somehow, can we fix it? Should we really even care about nginx these days? Litespeed is more in the Apache vein, I thought. So maybe there is a way to graft an async approach onto Apache.
@eva2000 Is there any chance that you have some sort of fetish on benchmarking servers and there is, let's say, 'video' material about what's happening with you during benchmark and after it?
Litespeed 5.4 is faster than Litespeed 5.3 and faster than their tested default distro installed Nginx 1.16, which runs a default non-optimal configuration. But Nginx can be configured and tuned to match Litespeed for some usage scenarios if you know how to configure Nginx. If you don't, then Litespeed 5.4 is probably a better fit.
True to some extent. I'm limited by 1Gbps, so I thought some folks here, including web hosts who have access to >1Gbps up to 40Gbps pipes, might want to test, especially on my forked version, which adds a fairer comparison with a more optimally configured Nginx wordpress setup (the coachbloggzip test target) and shows a much closer race between Litespeed 5.4 and Nginx when properly configured.
^_^ wouldn't you like to know
From https://www.webhostingtalk.com/showthread.php?t=1775139&p=10167968#post10167968 example
I've tried to install the test suite on my Debian 9/Ubuntu 16.04 VPS but it failed.
So I've spun up two CentOS 7 VMs on LunaNode, since I have many dollars of credit there.
The script of "coachblog" is broken, so there's no result of it.
Here's my result.
https://docs.google.com/spreadsheets/d/1pYBbvBdErKOO4vkrZmueZY_iWwT4Js8RnuJ1LknJmrk/edit?usp=sharing
will double check thanks
Should be some issue of file path. During server install, the script says cp: path not found (sth like that)
I forgot to save the server install log, so sorry
no worries, it was due to a typo in the directory name for the download directory
So I updated my http2benchmark extended-tests branch fork at https://github.com/centminmod/http2benchmark/tree/extended-tests to fix the setup/server/server.sh setup for the coachblog and coachbloggzip wordpress test targets. Update server.sh and re-run it; it should reinstall everything. Ignore the error about httpd/apache, as that comes from the original http2benchmark script and is because the CentOS 7 tests don't install Apache right now.
There's also a fix for setup/client/client.sh, so re-run it on the client side too.
example results https://gist.github.com/centminmod/6980694c38dc39c5fc9325b581cfd036
First thanks a lot for your efforts.
But I don't care about it and do not even take it serious. Reasons (among others):
Sure, you or I will massage the config as well as the ./configure to get the optimum for any given site and constellation, but most users don't. Keep in mind for whom e.g. Ubuntu or CentOS are made.
And then there is the question about http/2. I experimented a lot based on large promises from the http/2 crowd, but nowadays my sites are back to 1.1 because (among other reasons) I strongly dislike the "httpS only and everywhere!" hype and because, in fact, http/2 is not significantly faster than http/1.1. In fact (due to the httpS hype) http/2 sites are often slower.
As for your benchmark I absolutely do not intend to be harsh to you but I couldn't help but to get the impression that the title of your benchmark should be "Desperate attempts to somehow make nginx vs OLS look better".
Well noted, I run both (and current versions too) on diverse servers (dedi and VPS) and I see OLS to consistently beat nginx, often even brutally (e.g. with dynamic PHP sites).
About the only thing were I perceive nginx to be better is config syntax. Apache (XML) syntax is a PITA and wasting space. On the other hand OLS comes with some kind of config and stats GUI.
So in summary when setting up new servers I tend to go OLS - and with good reasons. Serving e.g. WP much faster is one of the reasons.
DO = Digital Ocean.
Droplet = VPS.
@jsg said:
I know that. But VPS come in many flavours and usually don't have a dedicated core. And even if they do they still are a poor base for that type of benchmarking. When not comparing VPSs but rather software and in a performance vs performance test, one should have a stable, solid base (read: dedi) and one should properly describe the base (e.g. what kind of disks, controller, relevant processor flags, etc).
Glad to see LiteSpeed performing so well
Yes, litespeed's LSAPI based PHP is definitely faster than php-fpm for non-cached dynamic PHP. The purpose of my forked benchmarks is mentioned in the git repo readme and the WHT thread: to highlight that yes, litespeed can be faster, but depending on how nginx/php-fpm is installed and configured it can be a much closer race. Case in point: my openlitespeed vs Centmin Mod WordPress benchmarks with a similar WordPress cache setup, part 2, with part 1 linked at https://community.centminmod.com/threads/wordpress-webpagetest-pagespeed-comparison-for-cyberpanel-1-7-rc-openlitespeed-vs-centmin-mod-lemp.15211/. Of course those benchmarks are fairly old now, so I will eventually revisit such tests as I plan to add litespeed/openlitespeed support to the Centmin Mod stack as well.
And specifically regarding pre-gzip and nginx gzip_static tests https://community.centminmod.com/threads/wordpress-webpagetest-pagespeed-comparison-for-cyberpanel-1-7-rc-openlitespeed-vs-centmin-mod-lemp.15211/#post-65227
@eva2000
I appreciate you taking my post so constructively as I did indeed in no way intend to attack you.
Notes:
I don't care about LSAPI and I don't believe in miracles or nonsensical optimizations. The only real and effective way to optimize PHP is to get rid of it and to use e.g. lua - and not to squeeze out some more % with trickery (like LSAPI).
So my WP runs with php-fpm and not with LSAPI.
I didn't look closely at the code of OLS or of nginx, but I merely fixed some small issues and things I disliked. So I can't make a solid statement as to the "why?" but I can clearly state that looking at the current official (stable and avail. in distros) versions OLS is clearly faster with dynamic content (read PHP sh_t like WP). One likely candidate would be OLS running a better (tighter, better controlled) event loop. At least that's what my tests with libev and libuv suggest. One striking example is dyn. alloc. structures (typ. per connection) vs. pre-allocating them and managing them yourself. I saw major performance differences and I optimized some software projects considerably (in terms of performance) by merely enhancing event loop management and handling.
Be that as it may be, using what comes pre-packaged by the distros and configuring the servers reasonably smart, which probably is what could be called the positive case for most actual installations out there, OLS clearly outperforms nginx. And well noted, I did not like to come to that conclusion because I actually like nginx and would have loved it to come out at least close to OLS.
And that, the 85% of users/installations, is IMO the relevant measure, not some hand-tuned setup.
That said, all those benchmarks are probably in vain anyway, because for most users even a standard, not exactly smart config is plenty good enough, simply because the vast majority don't have websites needing to serve more than a couple of dozen requests per sec, if even that. So 97+% of all nginx lovers can stay with nginx anyway and need not care about OLS being quite a bit faster.
Litespeed LSAPI PHP isn't trickery - I used it on production Drupal and WordPress sites with 500,000+ unique IP visitors/day, and there was a definite performance gain for dynamic non-cached PHP requests, years back before I started developing the Centmin Mod stack.
Centmin Mod Nginx is built using jemalloc instead of regular system glibc malloc and, from my own tests, suffers less from the performance overhead of the kernel-level Meltdown and Spectre mitigation fixes (overhead was 5% with jemalloc vs 15% with glibc). And supporting GCC 7/8/9 and Clang 6/7/8 gives another 5-30% boost depending on CPU pairing. But yes, most folks won't have the traffic to even properly differentiate between Apache, Litespeed/OpenLiteSpeed or Nginx's generic default configuration or distribution install methods. My view is skewed as I work with and optimize some of the largest forum (Centmin Mod powers 10% of the largest XenForo forums online) and WordPress based web sites on the internet, so every optimisation cumulatively adds up and more or less gets poured into how Centmin Mod gets developed and configured for Nginx/PHP-FPM and MariaDB MySQL (and eventually for my litespeed and openlitespeed integration).
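For anyone curious whether their own nginx binary actually picked up jemalloc, one common hedged check is to inspect its dynamic linkage; `check_jemalloc` below is a hypothetical helper of mine, not part of the Centmin Mod build scripts:

```shell
# Report whether a binary is dynamically linked against jemalloc.
check_jemalloc() {
  if ldd "$1" 2>/dev/null | grep -q jemalloc; then
    echo "jemalloc"
  else
    echo "glibc malloc (or statically linked)"
  fi
}
# usage: check_jemalloc /usr/local/sbin/nginx
check_jemalloc /bin/ls
```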
Well, in my view it is trickery. First because it tries to squeeze out a bit more performance of a technology (PHP) that itself is a major problem. Plus unlike fastCGI it's not well supported on all unix platforms and not even on all linux distros.
So? There are even faster allocators but that's not the point. Of course one can somehow massage a server to be faster -but- that's not what most users do nor can they do it.
If you really want, we can go further along that line but you won't win, because I have written servers that just blow anything you have out of the water. Unfortunately though those programs are bespoke, made for a single customer, and not available to others - just like highly optimized http + PHP solutions like the one you created.
Your view is also skewed because you are bound to a certain stack, and all you do is squeeze performance out of that stack - but that stack is poor no matter how nicely you massage it. It is poor because it's centered around PHP.
Again, I respect your work and your benchmark but I wanted to bring up the question of usefulness for the many who have to or want to live with whatever apt-get or yum delivers to them.
Yes, dedicated servers would be better for more accurate benchmarks compared to VPSes, though I can only work with what I have access to.
Aren't all optimisations of any kind classed as 'squeezing out' more performance within the confines of the framework you have to work with? Guess you can call it whatever you want. But if such performance improvements weren't important, then we would not see the performance progression from PHP 4 vs PHP 5 vs PHP 7, or from the MySQL 3.23.x days to MySQL in all its current glory. Of course you can say NoSQL databases for some workloads, or non-PHP languages, are better - but if you have to work within the confines of your framework requirements, then utilising the best tools within their class for the specific task is required, i.e. using PHP for PHP web apps.
I don't disagree with you, as I tested a nodejs based cache web server setup ages back which served static files at least 2x faster than Nginx. And hence why Nginx with gzip_static and other tweaks to offload PHP requests to static requests works to close the gap with Litespeed in the above http2benchmark coachbloggzip wordpress test scenario; that's the whole purpose of my forked http2benchmark extended tests, which were run against Nginx's own yum/apt repo distributed versions.
But as I stated, if you are running a PHP web app, i.e. wordpress, then you use what you're confined to, i.e. PHP. That is as much a reality as the statement that most folks will use whatever Linux distro provided versions of web servers, as opposed to rolling their own customised stack. But I guess you and I aren't most folks. Which is why benchmarks like the above are required, as most folks won't realise such differences unless they're highlighted and discussed (like we're doing right now). So benchmarks are not entirely useless. If benchmarks weren't done and shared like above, then regular folks would be confined to distro default or non-optimal setups and not learn anything about optimisation/tweaks or 'squeezing' out more performance where possible, and such tweaks would stay contained to folks like us only. With that said, I would love to see your custom web server performance/benchmark comparisons if you have any - I love reading about such things (even with some of the more privy details removed), as you probably have gathered.
I understand well that one has to work with what one has available, but still: not respecting that leads to meaningless results.
No. Using, e.g. Hack (a "compiled PHP") is not trickery. Also using PHP 7 instead of PHP 5.x is not trickery. But using a given constellation and some not widely compatible hacks like e.g. squeezing a bit more performance out over fastCGI is trickery in my book.
I disagree. Both nginx and OLS can use gzipping and both can cache dynamically created content (but OLS's caching is a bit better)
I disagree in part because a benchmark without practical relevance is of little use.
Won't happen because I almost always have to sign NDAs (which is perfectly normal in my field). But it's not at all complicated to understand when one considers what happens once PHP (or Python or even Lua) are out of the game and everything is done by compiled code and using a good basic structure (like worker threads and AIO, depending on the job).
And even if I would show you code it would be pretty worthless because most of it is not in widely known languages and also has a rather high formal part (e.g. for static analysis, passing to Z3, etc).
Yeah, that comes down to OpenLiteSpeed/Litespeed also having a user configurable small static file memory mapped cache, usually for <4KB files, with 20-60MB allocated to such cache in the out-of-the-box defaults. Nginx only has file mapped caching. So it really depends on the size of the static files you're testing too.
Yeah, totally understand that - Nginx has thread pools, and I recently added optional patch support to Centmin Mod 123.09beta01's Nginx to use the Linux 5.1+ kernel's io_uring interface for improved buffered AIO and fewer system calls when using the Nginx aio directive https://lwn.net/Articles/776703/. The Nginx io_uring patch is disabled by default, to let users enable it themselves via an option variable and test it out if they use Linux 5.1+ with their Centmin Mod setups. I am still doing tests myself.
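For context, the stock (non-io_uring) threaded AIO setup in nginx looks roughly like this. This is an assumed minimal snippet for illustration, not the Centmin Mod patch or its defaults:

```shell
# Write out a minimal nginx threaded-AIO config fragment for inspection.
cat > /tmp/aio-demo.conf <<'EOF'
# in the main context:
thread_pool default threads=32 max_queue=65536;
# in an http/server/location context:
aio threads=default;
directio 4m;
EOF
cat /tmp/aio-demo.conf
```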
io_uring author's own benchmarks against libaio and user space implementations like spdk https://lore.kernel.org/linux-block/[email protected]/
Yeah, I have to sign NDAs for my clients too, so I definitely understand - hence why I said 'remove the more privy info', even if it's just the results themselves.
Not really. OLS has multiple cache models one of which is memory mapped caching mainly for small files. IIRC it's something like small files in a mem mapped cache and larger ones in a file cache (which can be a ram disk).
Frankly, I'm not so sure about nginx's future since it has been bought by F5. In fact that was one of the reasons that confirmed my desire to look at alternatives. OLS is also owned by a company and copylefted (GPL 3) but in the case of OLS there is a long history so we can reasonably assume that their open source efforts are - and stay - honest. AFAIC I wouldn't put any significant work into nginx for some time until we have a clearer picture about F5's attitude and behaviour.
Regarding my work I can't tell a lot, but I can share one thing that kind of surprised me. I was always based on the assumption that my clients are concerned about security. Well, my experience suggests that a few really are, but most of them seem to care mostly about liability ("If our software is provably and verifiably secure we can't be liable"). Somewhat sad, but oh well ...
Yeah, true - I just left out the file mapped cache for OLS/LS so as to highlight where Nginx and OLS/LS differ rather than where they are similar with regards to caching.
It's a wait-and-see approach - but it's true that Nginx's commercial product focus with Nginx Plus has some folks concerned. It would be suicide for Nginx to mess up the open source free version's development, though. Nginx mainline is due for HTTP/3 QUIC support, so we'll see https://trac.nginx.org/nginx/roadmap. Meanwhile, Litespeed/OpenLiteSpeed already have QUIC support.
But I'm not too concerned for Centmin Mod, as I have been working on integration for other web servers too, like Litespeed/OpenLitespeed, Caddy, h2o etc. I evaluate and test them all.
FYI, for anyone running these http2benchmark tests: can you verify on the server side whether litespeed is actually running the h2load tests in HTTP/2 mode or falling back to HTTP/1.1? Details outlined at https://github.com/http2benchmark/http2benchmark/issues/7
On the server side on CentOS, run the appropriate command (where ipaddr is your server's IP address) and verify whether the application protocol tested is http/1.1 or h2 (http/2).
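The exact commands from the original post didn't survive here, but two hedged examples of the usual ways to check the negotiated protocol follow. The function names are mine, and ipaddr remains a placeholder for your server's IP:

```shell
# check_alpn: openssl prints e.g. "ALPN protocol: h2" when HTTP/2 was
# negotiated during the TLS handshake.
check_alpn() {
  echo | openssl s_client -alpn h2,http/1.1 -connect "$1:443" 2>/dev/null \
    | grep -i 'ALPN protocol'
}
# check_http_version: curl reports the HTTP version it actually used
# (e.g. "2" for HTTP/2, "1.1" for HTTP/1.1).
check_http_version() {
  curl -skI -o /dev/null -w '%{http_version}\n' "https://$1/"
}
# usage: check_alpn ipaddr    or    check_http_version ipaddr
```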
In meantime I also added ECDSA SSL certificate and SSL cipher config testing for Litespeed and Nginx on CentOS 7 (haven't tested on Ubuntu). Example results and test steps at https://github.com/centminmod/http2benchmark/blob/extended-tests/examples/ecdsa-http2benchmark-h2load-low.md
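For anyone reproducing the ECDSA runs by hand, here's a sketch of generating the self-signed prime256v1 key/cert pair that such ECC cipher tests rely on. The paths and subject name are illustrative, not the fork's actual filenames:

```shell
# Generate an ECDSA P-256 private key and a 30-day self-signed certificate.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/ecc.key
openssl req -new -x509 -key /tmp/ecc.key -out /tmp/ecc.crt -days 30 \
  -subj '/CN=benchmark.test'
# confirm the certificate really carries an EC public key
openssl x509 -in /tmp/ecc.crt -noout -text | grep -i 'Public Key Algorithm'
```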
Updated my forked http2benchmarks with more ECDSA SSL certificate/cipher tests for h2load HTTP/2 HTTPS comparisons between Litespeed 5.4.1 vs Nginx 1.16.1. at https://github.com/centminmod/http2benchmark/blob/extended-tests/examples/ecdsa-http2benchmark-h2load-low-lsws-5.4.1-nginx-1.16.1-run1.md
@eva2000 can we assume the same results for OLS 1.5?
Haven't tested yet - the http2benchmark script does sort of support OLS testing too, but I haven't tried it. Probably next on the list, along with the original http2benchmark's updated Apache install part, so eventually it should be possible to test litespeed vs openlitespeed vs nginx vs apache.
@eva2000 - not sure what's up with upcloud, e.g. for the
upcloud seems to do better for nginx; however, in my test it's quite a lot different. One thing to point out is that the two VMs below have the CPU flags exposed, and I'm not sure whether that's the case with upcloud. One thing I believe is worth noting is the CPU usage difference between nginx and litespeed in the benchmark: nginx is well above double the CPU usage, so from a scalability point of view you'll still, from a litespeed perspective, be able to pull a lot more traffic from the same resources.
@Zerpy Thanks for sharing your numbers
Yeah, once I perfect the http2benchmark forked script on upcloud - using free credits - I'll try other web host providers to see the differences, i.e. a fatter network pipe to test with.
Yeah, if the nginx tests hit PHP-FPM, more CPU usage is incurred compared to the Wordpress with LSCache tests. But Nginx with PHP-FPM fastcgi_cache isn't the fastest method for Wordpress full page caching. If you know how to configure nginx/php-fpm to bypass php-fpm for some use cases, i.e. wordpress static html full page cache + precompression, then that paints a different picture too.
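A minimal sketch of that bypass idea: pre-compress the cached HTML so nginx's gzip_static module can hand out the .gz file without touching PHP at all. Paths here are examples; on the nginx side the matching location just needs `gzip_static on;`:

```shell
# Pre-compress a cached page; nginx with "gzip_static on;" will then serve
# index.html.gz directly to clients that accept gzip, skipping PHP-FPM.
mkdir -p /tmp/cache-demo
printf '<html><body>cached page</body></html>\n' > /tmp/cache-demo/index.html
gzip -9 -c /tmp/cache-demo/index.html > /tmp/cache-demo/index.html.gz
ls -l /tmp/cache-demo
```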
The CPU load average peaks for the coachbloggzip test runs illustrate this, with lsws 5.4.1 vs nginx 1.16.1 for the h2load-ecc256 test below:
The stats are too long to post, so they're in a gist at https://gist.github.com/centminmod/71ec6d12e67fcfb437b1f1b57ee685ce
On the other hand, I don't really feel that "load average" is useful in this case, since the load average is averaged over the last minute, but every run is literally sub-second, so you'd have to run the test for minutes on end to get the "real" load - checking the actual CPU usage spikes is better in that case.
e.g. this:
No point in measuring a 1-minute average load when we ran the test in 0.11 seconds.
Looking at top during the runs, you'll see nginx spiking a lot more in CPU per test than LSWS, and that's really the metric to base it on, when we're only testing 5000 requests in total per run.
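A crude sketch of that sampling idea: read a single process's actual CPU usage during the run rather than the 1-minute load average. The `yes` process here just stands in for a busy nginx/lsws worker:

```shell
# Start a busy stand-in process, let it burn CPU briefly, then read its
# %CPU straight from ps instead of consulting the load average.
yes > /dev/null &
busy_pid=$!
sleep 1
ps -o %cpu= -p "$busy_pid"
kill "$busy_pid"
```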