Nginx & php-fpm - Unix Socket or TCP/IP for fastcgi_pass?

Amitz Member
edited January 2013 in Help

Dear all,

I have always run Nginx in combination with PHP-FPM listening via TCP/IP (fastcgi_pass 127.0.0.1:9000;).
Then I tried letting it listen on a Unix socket instead (fastcgi_pass unix:fastcgi.socket;). Even though the latter is often recommended as the generally faster option, I do not see any significant boost in performance. To be honest, I did not run any artificial benchmark: it is just the impression I get from observing a live website under some load with both setups. Not only does the Unix socket variant not seem noticeably faster, the general server load is also higher than with the TCP/IP solution. I have the impression that it adds around 0.3-0.5 to the load.

I wonder what your experiences with both solutions are: Maybe I just did something wrong so I am very much looking forward to your opinions...
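
For reference, here is a rough sketch of the two nginx variants I am comparing (the location block is standard boilerplate, and the socket path is only an example; my real setup uses a per-vhost socket):

    # Variant 1: PHP-FPM listening on a loopback TCP/IP port
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }

    # Variant 2: PHP-FPM listening on a Unix domain socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }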

Cheers
Amitz


Comments

  • OttoYiu Member
    edited January 2013

    By using TCP sockets instead of Unix sockets, you're technically adding unnecessary overhead to every connection, but it seems like the overhead is pretty negligible: http://i.amniels.com/nginx-fast-cgi-performance-unix-socket-vs-tcp-ip

  • Amitz Member
    edited January 2013

    Yes, from a technical point of view the socket connection should be a lot faster, with lower latency, than the TCP/IP connection. The thing is: I do not see this "in real life". What I see instead is higher server load and no speed-up that is noticeable to human perception.

  • Thanks for sharing your live website experience. I always use TCP/IP because it's the default and I'm too lazy to change.

  • There is literally no way it should be contributing to a higher server load unless your hosting environment is so bad that it can't handle the measly I/O.

  • Amitz Member
    edited January 2013

    @Wintereise said: There is literally no way it should be contributing to a higher server load unless your hosting environment is so bad that it can't handle the measly I/O.

    I agree. This is how it should be in theory. But it isn't in (my) reality. That makes me wonder whether I did something wrong. The thing is: there is not much one can do wrong when configuring php-fpm to listen on either a socket or a TCP/IP port.

    The general environment is well tuned. The website handles around 2,000,000 - 3,000,000 hits from 6,000 - 8,000 unique visitors each day. Everything is smooth; server load is seldom above 0.4-0.6. But as soon as I switch to Unix sockets, it goes up to 0.8-1.1 and stays in that range. Really strange.

  • @jcaleb said: Thanks for sharing your live website experience. I always use TCP/IP because it's the default and I'm too lazy to change.

    It's literally a one-line change in each of your fpm pool and site files. I don't notice a huge difference though, since my site only gets 100 uniques per day.
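
    Roughly, the two lines in question look like this (the pool name, paths and Debian-style file locations are only examples):

        ; php-fpm pool file, e.g. /etc/php5/fpm/pool.d/www.conf - pick one listen target
        listen = 127.0.0.1:9000
        ;listen = /var/run/php5-fpm.sock
        ; when using the socket, ownership/permissions may also need setting:
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0660

        # nginx site file - must point at the same thing the pool listens on
        fastcgi_pass 127.0.0.1:9000;
        # or: fastcgi_pass unix:/var/run/php5-fpm.sock;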

  • @Amitz said: The website handles around 2,000,000 - 3,000,000 hits a day.

    Can this be handled by a VPS?

  • Amitz Member
    edited January 2013

    @jcaleb said: Can this be handled by a VPS?

    To be honest: not by every VPS, as it seems. I have tried nearly every provider here on LET/LEB and even some of the bigger players like FutureHosting, WiredTree and Knownhost. Let's put it like this: a cPanel VPS in its standard configuration had big trouble with it. Furthermore, neighbors on a badly oversold node give you headaches too. A fine-tuned nginx + php-fpm + APC setup on a VPS from a good provider makes it perfectly possible, however.

    The site is running simply GREAT on a Prometeus SSD-VPS and has a failover backup on a SSD-cached VPS from RamNode. I tested it live on both of them and it is running VERY smooth with server loads as mentioned above and CPU usage of less than half a core most of the time.

  • @Amitz said: The site is running fine on a Prometeus SSD-VPS and has a failover backup on a SSD-cached VPS from RamNode. I tested it live on both of them and it is running VERY smooth with server loads as mentioned above and CPU usage of half a core most of the time.

    Nice. Thanks for the info!

  • Amitz Member
    edited January 2013

    I changed my "fine" to "great" after you quoted me... ;-)
    Indeed, I never had a better (and cheaper) VPS in every aspect than the ones from Prometeus and RamNode. And, as said, I nearly tried "them all". However, this was not planned to be a review thread... ;-)

  • @Amitz said: I changed my "fine" to "great" after you quoted me... ;-)

    Indeed, I never had a better (and cheaper) VPS in every aspect than the ones from Prometeus and RamNode. However, this was not planned to be a review thread... ;-)

    If the Prometeus is down, can your RamNode VPS handle those numbers?

  • Amitz Member
    edited January 2013

    Yes, as said - I tested it on both. I mainly use the Prometeus VPS as the primary location because of the higher BW allocation and because it is geographically closer to me.
    I use the VZSSD7 plan at @Prometeus and the 512MB CVZ plan at @Nick_A. Those two VPSes under $10 replaced the $90 cPanel server that I used for the site before without any failover solution. Makes me happy. :-)

    However - Back to the Socket vs. TCP/IP thing...

  • @Amitz said: I use the VZSSD7

    Wow, that's just under 7 bucks a month for that traffic.

  • Amitz Member
    edited January 2013

    @jcaleb said: Wow, that's just under 7 bucks a month for that traffic.

    One simply HAS TO LOVE cheap virtualization done right. ;-)
    I was afraid that my site and its usage could harm the neighbors on my node, but after looking at my Munin graphs, @Maounique (working @Prometeus) told me that they would be glad if every user had such reasonable usage as me. So obviously I am no menace to others.

  • What PHP version are you running @Amitz ?

    https://github.com/perusio/php-fpm-example-config

    "Up until PHP 5.3.8 TCP sockets, although, theoretically slower, if your setup is on the loopback, behave better than UNXI sockets for high-traffic sites."

  • I am using 5.3.19 in that setup.

  • bnmkl Member
    edited January 2013

    Strange then, but very interesting.

    EDIT: /dev/shm/php-fpm.sock @Amitz ?

  • Amitz Member
    edited January 2013
    /var/run/php5-fpm-name_of_virtual_host_user.sock

    If that is what you wanted to know... ;)

  • bnmkl Member
    edited January 2013

    Cheers @Amitz :)

    Would moving php5-fpm-name_of_virtual_host_user.sock to ramdisk make a difference for you then @Amitz ?
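
    Something like this is what I mean, assuming /dev/shm is available as a ramdisk inside your container (the socket name is just taken from your example):

        ; php-fpm pool file
        listen = /dev/shm/php5-fpm-name_of_virtual_host_user.sock

        # nginx site file
        fastcgi_pass unix:/dev/shm/php5-fpm-name_of_virtual_host_user.sock;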

  • Amitz Member
    edited January 2013

    Ah! Now I got the point... ;-)
    Will try that today!

  • lbft Member
    edited January 2013

    @bnmkl said: Would moving php5-fpm-name_of_virtual_host_user.sock to ramdisk make a difference for you then @Amitz ?

    /var/run might already be a tmpfs.

    Edit: doh, not on OpenVZ, and it depends on the distro.

  • bnmkl Member
    edited January 2013

    Still a good thought process @lbft :)

    @Zen, you're so sexy when you talk like that. My knickers are drenched !

  • @bnmkl said: My knickers are drenched !

    This is not #frantech, Ladies! ;-)

  • @Amitz & @Zen - Haha :)

  • When I moved to a new VPS and had to reinstall php5-fpm, I noticed the default had switched to the socket config.

  • @Amitz what are you serving? Static? PHP/MySQL?

  • It's a PHP/XML-driven gallery. No MySQL.

  • Amitz Member
    edited January 2013

    As a little follow-up, and so as not to spread wrong information:
    The higher server load had nothing to do with switching from TCP/IP to the Unix socket. Another process was going wild - unfortunately at exactly the same time as my test.

    I can now say that the switch has no (or only a very, very, very small) influence on the load. The predicted advantage of listening on a socket instead of a TCP/IP connection, namely lower latency, is not noticeable to me as a human being without running benchmarks.

    So the conclusion is: I (personally) do not feel any difference between the two configurations. I will keep an eye on the general server load over the next 24 hours, but it seems as if it does not matter which solution you choose, as long as you are not running a very high-traffic site with thousands of simultaneous connections.

    Cheers and thank you for the fish,
    Amitz

  • I never tested using a socket, though. But I love using nginx.

  • Love nginx too, especially with caching.
