PHP-CGI vs PHP-FPM on NGINX

After optimizing our main website server today, I decided to share my findings with you guys. Let me know what you think, as I am no expert in this field and there is probably room for improvement.

http://www.bitaccel.com/blog/php-cgi-vs-php-fpm-on-nginx/

Thanked by: Makenai

Comments

  • Nice blog. I've never used php-cgi on my VPS, only php-fpm, so I can't confirm if the results you saw were typical or not, but I know my VPS has a very low load usually.

  • Yeah, so the special thing about FastCGI vs CGI is that with CGI every new request causes a fork, pretty much spawning a new process. With FastCGI, a pool of pre-forked processes is created; it scales up and down based on how many workers are in use, and it won't kill a worker until that worker has processed x requests or there are too many idle workers. If you rerun your FastCGI benchmark with no max request count (or whatever it's called), you will get better numbers, but a higher chance of memory leaks.
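The pool behavior described above maps onto a handful of php-fpm pool directives. A minimal sketch, assuming a stock `www.conf`-style pool file; all values here are illustrative, not tuned recommendations:

```ini
; /etc/php5/fpm/pool.d/www.conf (values are illustrative)
pm = dynamic              ; keep a pool of pre-forked workers
pm.max_children = 8       ; hard cap on concurrent workers
pm.start_servers = 4      ; workers forked at startup
pm.min_spare_servers = 2  ; fork more when idle count drops below this
pm.max_spare_servers = 6  ; kill workers when idle count exceeds this
pm.max_requests = 500     ; recycle a worker after 500 requests
                          ; (0 = never recycle: faster, but any memory
                          ;  leak accumulates, as noted above)
```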

  • @CharlesA said:
    Nice blog. I've never used php-cgi on my VPS, only php-fpm, so I can't confirm if the results you saw were typical or not, but I know my VPS has a very low load usually.

    Yes, I ran php-cgi for over a year with very low load and low RAM usage, but what happens if your traffic explodes? You have to plan for the future. Also, if you are able to serve requests faster, load times go down.

    wojons said: Yeah, so the special thing about FastCGI vs CGI is that with CGI every new request causes a fork, pretty much spawning a new process. [...] If you rerun your FastCGI benchmark with no max request count, you will get better numbers, but a higher chance of memory leaks.

    I think the lowendbox script sets up fastcgi, can you confirm? https://github.com/lowendbox/lowendscript/blob/master/setup-debian.sh

  • @Corey said:

    I'm running Varnish in front of Nginx.

  • You're not serving requests faster.

  • @CharlesA said:
    I'm running Varnish in front of Nginx.

    All that does is cache static content, right?

  • @tchen said:
    You're not serving requests faster.

    More requests in the same time period != serving requests faster?

  • Corey said: All that does is cache static content right?

    You can cache things like WordPress and PHP files if you really want to. The majority of my site is static, though.

  • Corey (Member)
    edited February 2014

    @CharlesA said:
    You can cache things like wordpress and php files if you really want to. The majority of my site is static, though.

    Right, but the point of this article is serving more php requests per second and efficiency.

  • Corey said: Right, but the point of this article is serving more php requests per second and efficiency.

    Good point. :) I haven't benchmarked my VPS yet, so I'm not really sure how well it'll handle a bunch of php requests. Kinda makes me wonder, though.

  • hhvm fcgi > php-fpm (run it in static mode, it sucks at dynamic process management -- which kinda was the point) > php-cgi, in that order.

    Though, any proper site should already be utilizing opcode AND output caching along with compression.
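On the opcode-caching point: PHP 5.5+ ships with OPcache bundled, so enabling it is just a few ini lines. A sketch with illustrative values (tune to your own codebase size and memory):

```ini
; php.ini — opcode caching (PHP 5.5+ bundles OPcache; values illustrative)
opcache.enable = 1
opcache.memory_consumption = 64     ; MB of shared memory for compiled scripts
opcache.max_accelerated_files = 4000
opcache.revalidate_freq = 60        ; recheck file mtimes at most every 60s
```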

  • @CharlesA said:
    I'm running Varnish in front of Nginx.

    You know you can have nginx cache that static content in memory.

    Corey said: I think the lowendbox script sets up fastcgi, can you confirm? https://github.com/lowendbox/lowendscript/blob/master/setup-debian.sh

    I will read the script again when I am not so tired, but from the looks of it, it's using the CGI module, which I am not sure uses FastCGI. Normally I use the FPM module, which handles FastCGI.

  • @wojons said:

    Nope. I haven't really looked into it tbh. Suggestions?

  • DrJinglesMD (Member)
    edited February 2014

    Can't complain: 234 requests per second on average, all good responses.

    I'm sure there's more tweaking to do, but seeing as I don't plan on serving 2 million hits per day right this second, life will go on :)

  • @CharlesA said:
    Nope. I haven't really looked into it tbh. Suggestions?

    This is a good example of caching at the reverse-proxy layer:
    http://serverfault.com/questions/30705/how-to-set-up-nginx-as-a-caching-reverse-proxy

    If you plan on caching static content, the Linux page cache will do the trick, and nginx will read from that if I remember correctly.
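The reverse-proxy caching from that serverfault link boils down to a `proxy_cache_path` zone plus `proxy_cache` in the location. A minimal sketch; the backend address, zone name, paths, and timings are all illustrative:

```nginx
# nginx as a caching reverse proxy (values are illustrative)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:10m
                 max_size=256m inactive=60m;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # the backend being cached
        proxy_cache edge;
        proxy_cache_valid 200 302 10m;     # cache good responses 10 min
        proxy_cache_valid 404      1m;
    }
}
```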

    Thanked by: CharlesA
  • Thanks!

  • On Debian 7

    (this one assumes pre-installed nginx)

    apt-get install hhvm-fastcgi

    Need to get 60.9 MB of archives.
    After this operation, 160 MB of additional disk space will be used.

    apt-get install apache2 libapache2-mod-php5

    Need to get 17.6 MB of archives.
    After this operation, 62.9 MB of additional disk space will be used.

    apt-get install apache2-mpm-worker libapache2-mod-fcgid php5-fpm php-apc

    Need to get 15.1 MB of archives.
    After this operation, 54.6 MB of additional disk space will be used.

    I think installed sizes are another thing to keep in mind when using "offer" low-ends.

  • PHP-FPM gives better permission control, which is essential in a shared hosting environment.

    Otherwise, you can run suexec along with php-cgi, but php-cgi by itself already has worse performance than PHP-FPM.
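The permission control mentioned above comes from php-fpm's per-pool `user`/`group` directives: each customer's PHP runs as its own user. A sketch of one such pool; the pool name, usernames, and socket path are made up for illustration:

```ini
; /etc/php5/fpm/pool.d/site1.conf — one pool per customer (illustrative)
[site1]
user = site1                ; PHP code executes as this user...
group = site1               ; ...so it can't read other customers' files
listen = /var/run/php5-fpm-site1.sock
listen.owner = www-data     ; nginx must be able to connect to the socket
listen.group = www-data
pm = ondemand               ; fork workers only when this site gets traffic
pm.max_children = 5
```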

  • You can try HHVM, developed by Facebook. It is faster than PHP-FPM. I use it on my production server.

  • edited February 2015

    @Corey,

    Can you post your nginx config for php-cgi and php-fpm here (also the php-cgi daemon config)?

    BTW, I am using spawn-fcgi for the php-cgi daemon.

  • @Corey said: All that does is cache static content right?

    While Varnish is a PITA to set up, especially the first time, you can specify precisely what you want to cache and for how long, even only a few seconds. Once you get the hang of it, you can cache practically anything, depending on your specific application quirks, of course.

  • @aglodek said:
    While Varnish is a PITA to set up, especially the first time, you can specify precisely what you want to cache and for how long, even only a few seconds. Once you get the hang of it, you can cache practically anything, depending on your specific application quirks, of course.

    Instead of using Varnish, it's better to use the nginx cache (called a microcache when using a short cache time).
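For the microcache idea: nginx can cache the PHP responses themselves via `fastcgi_cache`, even for just one second, which absorbs traffic spikes without the staleness problems of long-lived caches. A sketch; the zone name, paths, and socket are illustrative:

```nginx
# nginx "microcache": cache PHP output for just 1 second (illustrative)
fastcgi_cache_path /var/cache/nginx/micro levels=1:2
                   keys_zone=micro:10m max_size=64m;

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_cache micro;
    fastcgi_cache_key $scheme$request_method$host$request_uri;
    fastcgi_cache_valid 200 1s;        ; even 1s collapses a burst of
                                       ; identical requests into one PHP hit
    fastcgi_cache_use_stale updating;  ; serve stale while refreshing
}
```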

  • aglodek (Member)
    edited February 2015

    @mustafaramadhan said: Instead of using Varnish, it's better to use the nginx cache (called a microcache when using a short cache time).

    For simple scenarios, caching static content, yes, by all means. Much easier to set up. Can't speak to performance as I've never run any Nginx vs Varnish comparisons. For more sophisticated setups, though, e.g. Drupal membership sites, I vote for Varnish's configurability. This said, any contrarian views are always welcome ;)

  • I'm aware, thanks. However, those are generic benchmarks. Like I already said, Varnish's advantage is configurability to specific scenarios. Otherwise, I'm in agreement with you re Nginx as the preferred reverse proxy.

  • Use uWSGI with embedded PHP.

  • How about HHVM and php-cgi? Have you used both of them before? And how about benchmarking? Because php-fpm is very weight :(

  • What's the meaning of 'php-fpm is very weight'?

  • @mustafaramadhan said:
    What's the meaning of 'php-fpm is very weight'?

    I am not sure. If I use php-fpm with nginx, my site always returns 502 Bad Gateway; when I switched to HHVM, my site is very smooth and works properly. Maybe the real visitors are around 20K/day.

  • @obh_ridwan,

    You need to optimize pm (static, dynamic, or ondemand), pm.max_children (depends on your memory; possibly up to 20-30 per 1 GB of RAM), and pm.max_requests (500-2000).
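Putting those numbers into a pool file might look like this. A sketch for a 1 GB box, assuming roughly 30 MB per PHP worker; the figures are illustrative, measure your own workers before committing to them:

```ini
; pool tuned per the advice above (values are illustrative)
pm = ondemand                  ; fork workers only when requests arrive
pm.max_children = 25           ; ~1 GB RAM / ~30 MB per worker, with headroom
pm.process_idle_timeout = 10s  ; reap workers idle longer than this
pm.max_requests = 1000         ; within the suggested 500-2000 range
```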

    Thanked by: obh_ridwan