New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
drop it
noted..
Just sat down to read the Caddy EULA, which applies to their official Caddy binaries, and my conclusion is I probably need to build my own binaries from source anyway https://github.com/mholt/caddy/blob/master/dist/EULA.txt
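For reference, building from source sidesteps the binary EULA entirely. A rough sketch using the pre-Go-modules toolchain of that era (assumes Go and GOPATH are already set up; the repo path matches the URL above, but flags may differ by version):

```shell
# fetch Caddy's source into GOPATH (pre-modules workflow)
go get -u github.com/mholt/caddy

# build your own binary from the main package
cd "$GOPATH/src/github.com/mholt/caddy/caddy"
go build -o caddy

# sanity check
./caddy -version
```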
So I shouldn't have published benchmarks for the official Caddy binaries? I've never come across a EULA that prohibits sharing benchmark info.
I'm trying to combine/integrate several web servers under one roof on a single server - nginx, OpenLiteSpeed, LiteSpeed, Apache 2.4, Caddy and/or H2O.
(h) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software;
This is utter bullshit.
Won't Debian/Ubuntu/Arch compile their own binaries anyway? Why does the EULA matter? shrug
No idea. It's too messed up right now.
True, but pretty sure most folks right now are getting their Caddy binaries from official builds.
Confirmed with Matt: the EULA on official Caddy binaries now prohibits benchmark results from being published or shared https://caddy.community/t/caddy-eula-section-3-1-h-benchmarking-info-clarification/2727
Read his comments, it doesn't sound good. Looks like he's burning all bridges with the open source community.
Matt responds https://caddy.community/t/the-realities-of-being-a-foss-maintainer/2728
Hmm, not sure what to say to this or whether it's true for most 'maintainers'. Or is there a distinction between 'developer' and 'maintainer'? Guess it depends on which end of the spectrum you're on, 'user' vs 'maintainer', i.e. folks who develop and maintain the GCC compiler vs 'users' who rely on GCC. Would GCC devs care if everyone switched over to Clang instead?
That guy is referring to developers (the maintainers).
Summary (my understanding of that paragraph): I don't depend on the project, I don't need it, you guys are the ones that depend on it. If you want it to keep succeeding we need to monetize it, otherwise fuck off.
Well, if you don't like what he wrote, you can always get a refund.
I've never found it that bad. The alternatives (IIS, Nginx) are really awful, so it might be a perspective thing.
Sendmail is horrible.
Which is, what, 5% of people in IT?
I always thought Caddy was aiming for the WebDevs/DevOps crowd. They don't really have to know how anything works. They just have to know how to click some buttons in Puppet Enterprise, and voila, they have a server on AWS.
I think that's the first time I've heard anyone say that.
The guy probably never got further than trying to understand nginx config syntax then...
Caddy performs like shit, no wonder Matt tries to ban benchmarks. People should build from source and post benchmarks all over the internet.
Hey Matt, you don't give a shit about Caddy? Then stick your software and EULA up your arse, you lying fuck. Everybody knows that you want to be the next Igor Sysoev.
In all fairness, development-wise Caddy is still in its infancy, while Nginx has more years of development behind it to get where it is. But that's why sharing benchmarks is important: it gives you an idea of performance progress or regression with each Caddy release. Ages back when I asked Matt, he did say the focus is first on Caddy's feature set and not on performance, which is understandable. But if you're charging for commercial licensing, folks may expect at least some form of feature and performance parity.
I did some Part 2 Caddy vs Nginx benchmarks, this time doing HTTP/2-based HTTPS load testing via the h2load tester, and Caddy 0.10.9 does look better than previous versions - maybe because I built it from source against the more performant Go 1.9? https://community.centminmod.com/threads/caddy-http-2-server-benchmarks-part-2.12873/
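For anyone wanting to reproduce this kind of run, an h2load invocation along these lines should do (the URL and load levels here are placeholders, not the exact parameters from the linked thread):

```shell
# 10,000 requests over 100 HTTP/2 connections, up to 10 concurrent streams per connection
h2load -n 10000 -c 100 -m 10 https://your-caddy-host/
```

Watch the succeeded/failed counts in the summary; anything under 100% completed requests makes the timing numbers hard to compare between servers.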
Which Apache has an abundance of. I think the reason Nginx became so popular was because of performance generally, but also that it had a decent feature set that overlapped a lot of what Apache could do. Nginx took advantage of the more modern polling techniques, and is better suited for the C10k problem and avoiding slowloris type attacks which Apache had to adapt to overcome.
It does seem like this young developer naively thought he could front-run an industry leading web server...
nginx offered great performance as a proxy, allowing people who were shit at designing software to offload the static work to nginx, without the overhead of modules compiled into Apache. One hit serving 30 static images with mod_php (for example) compiled in has a huge memory footprint compared to a simple forking service.
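That offload pattern can be sketched as a minimal nginx config (the hostnames, ports, and paths here are hypothetical):

```nginx
server {
    listen 80;
    server_name example.com;

    # serve static assets straight from disk, never touching the heavy mod_php backend
    location /static/ {
        root /var/www/example;
        expires 30d;
    }

    # everything dynamic is proxied to the Apache backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```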
Apache's downfall is generally due to its API. There are several ways to make it more resistant to malicious users, but any webservice can be taken down given enough resources and/or determination.
As @eva2000 pointed out, the header added to Caddy actually made it SLOWER. @Jarland 's "Why should a few bytes matter?" statement actually did have merit in this world, because IT DID DECREASE PERFORMANCE.
The icing on the cake for me is the new EULA forbidding sharing performance data. Using a baseless assertion that "things are complicated and you're too dumb to test it properly", I am just astounded that this guy even managed to make it to college.
Matt, you aren't smart enough to be the next Pixelon, so don't bother. After you burn bridges with everyone who has worked on your project, you'll become yet another pointless project when everyone else has moved on.
This guy is completely misunderstanding how much open source has made this project possible.
Without the built-in go webserver caddy wouldn't have existed.
Time for him to finish school, get a real job and learn how much of a special snowflake he really is...
I wouldn't say it's shit to do things that way, it does what it does well enough. I've tinkered with standalone servers speaking HTTP, and thinking about compression, caching ... well, the list goes on.
I'd place it more as a burden, myself. Static content shouldn't need to go through Apache+PHP+Python+mod_security+.... But if it was designed better with, you know, a subdomain, it'd be much easier to separate, rather than having to maintain a forward-facing proxy for content that could easily be separated.
Now, back to Matt and his Techniheader Dreamcoat.
Ah, yeah I misread what you meant.. for static, good information architecture and nginx itself is perfect.
Well, according to the benchmarks I found, the performance is shit.
Maybe that's a reason why they hide it; Nginx still seems to be the best when you're handling high amounts of traffic.
Updated h2load HTTP/2 HTTPS benchmarks including GCC 7.1.1 compiled binaries out of interest + Caddy http.cache proxy caching tests https://community.centminmod.com/threads/caddy-http-2-server-benchmarks-part-2.12873/#post-54715. Nginx is non-cache results.
Why do you have the GCC 4.8.5/7.1.1 builds of the 10,000-request tests in the lower table? Just to show that it has become roughly 10% better? The fact that the older build is outright dropping responses when it gets bogged down bothers me. How badly does http.cache handle 50,000 and up?
Above are 2 separate tables; the 2nd table is from tests done much earlier in the day. The 1st table is updated tests at just the lower load levels, at which Caddy had 100% completed requests, as the higher levels tested earlier clearly had <100% completed requests. Haven't tested Caddy http.cache at high load levels yet. Maybe tests later tonight.
I assumed the bottom table was later, with the fact that it had the newer GCC builds in it.
I find the bump from the GCC build incredible, but I won't pretend to be able to read Go's code to understand where/why. I wonder how it does under Clang.
Yeah will investigate how it does with Clang 3.4, 4.0.1 and 5.0 later on
So... just keep running Nginx and Apache? Got it.
I ran a simple comparison with ab:
ab -kc 20 -n 50000 -p post.txt -m POST -T application/json http://testing:15000/
The POST file is 10 bytes in size. nginx/Caddy pretty much standard configs.
Server Software: nginx/1.12.1
Server Software: Caddy
Server Software: home-made single threaded epoll
@eva2000 did you test Lighttpd performance?