PHP-FPM shared process

zsero Member
edited September 2012 in General

I've just split my hosts into many small users, based on whether I'm hosting them for friends or for specific projects, etc. Now I've ended up with a really clean directory structure with 5-6 users, all good.

My problem is that I'm using a per-user php-fpm config, as in minstall. This all seemed like a good idea until I realized that even if a tiny WordPress website is loaded, that user's php-fpm starts and stays in memory at 25-30 MB forever. Oh no! For 6 users on a 128 MB plan it's not gonna work.

My question is: do you know how to make a shared php-fpm process for all users? I'm open to any kind of explanation/guide. At the moment, this is how it's done the minstall way:
conf file:

[zsero]

listen = /home/zsero/http/private/php.socket
user = zsero
group = zsero
pm = dynamic
pm.start_servers = 1
pm.max_children = 4
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 500
php_flag[expose_php] = off
php_flag[short_open_tag] = on
php_value[max_execution_time] = 120
php_value[memory_limit] = 64M

Included at the server confs:

location ~ \.php$ {

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/home/zsero/http/private/php.socket;
include fastcgi_params;
try_files $uri =404;
}

So what should I do? Make a php-fpm user, add it to the www-data group, set up a socket for it, and modify the user/group in the global conf?

Actually, I'm most confused about the user/group thing. What is the best approach for it? I like the idea of separating users by PHP process, but if it uses users * 35 MB of my 128 MB plan, it's not gonna work. Maybe for 1 GB+ plans.


Comments

  • I'm confused. The default 'www' pool, owned and run by user www-data, can be used as a shared pool.

  • i think it's a minstall question... you need to change what minstall did...

  • I have no problem changing what minstall did, I'm in the process of developing a manager for it. I just don't know how to change users and groups, I mean what logic would be best.

  • NanoG6 Member
    edited September 2012

    If you want to give each user write permission (they can log in via ftp / scp then write files / directories), you have to create a separate pool per user, i.e. with 6 users you have to create 6 pools.
    Otherwise, just create one pool with user = nginx or www-data
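
    A minimal sketch of that single shared pool, assuming a Debian-style www-data account and a hypothetical socket path (the other values mirror the opening post):

    [www]
    listen = /var/run/php-fpm-www.sock
    user = www-data
    group = www-data
    pm = dynamic
    pm.start_servers = 1
    pm.max_children = 4
    pm.min_spare_servers = 1
    pm.max_spare_servers = 2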

  • I see! So what you are saying is that if I want user-based security I definitely need a high-memory server (i.e. 384 MB+) with php-fpm?

  • I'm afraid yes.
    With pm.start_servers = 1 and 6 users/pools, you'll have at least 6 idle php-fpm processes at start.

  • @NanoG6 said: I'm afraid yes.

    With pm.start_servers = 1 and 6 users/pools, you'll have at least 6 idle php-fpm processes at start.

    Not if you set pm = ondemand. Then there's one master php-fpm process and child processes are spawned when needed.
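
    For illustration, a sketch of the opening post's pool switched over to ondemand (pm.process_idle_timeout controls how long children linger; more on it below):

    [zsero]
    listen = /home/zsero/http/private/php.socket
    user = zsero
    group = zsero
    pm = ondemand
    pm.max_children = 4
    pm.process_idle_timeout = 10s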

  • It's not even about the processes being idle, but that they stay at around 25-30 MB after the first page view. Are you saying that with ondemand they'll disappear after viewing?

    OK, I think I'll make a 'shared-environment' mode for the manager.

    Can you tell me what would be the best user-permission logic for a shared environment? Do all files/folders have to be chown'ed to the same user?

  • @zsero said: It's not even about the processes being idle, but that they stay at around 25-30 MB after the first page view. Are you saying that with ondemand they'll disappear after viewing?

    They will stay alive for x seconds, where x is the pm.process_idle_timeout setting in the pool configuration. This setting only applies when using pm = ondemand.

    OK, I think I'll make a 'shared-environment' mode for the manager.

    Can you tell me what would be the best user-permission logic for a shared environment? Do all files/folders have to be chown'ed to the same user?

    Only files/folders that need to be writable by the PHP process need to be owned by the PHP pool user. Most files/folders don't need to be writable (and should not be writable) by the PHP process. And generally, files written & managed by PHP shouldn't be manipulated by other users.
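
    As a rough illustration of that rule, assuming a hypothetical WordPress tree under /home/zsero/http/public served by a www-data pool:

    # readable by everyone (including the pool user), writable only by the account owner
    chown -R zsero:zsero /home/zsero/http/public
    chmod -R u=rwX,go=rX /home/zsero/http/public
    # hand only what PHP must write to over to the pool user
    chown -R www-data:www-data /home/zsero/http/public/wp-content/uploads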

  • zsero Member
    edited September 2012

    Do they need to be owned, or is it enough if they have group write permission?

    My question is: would it be possible to keep the user / scp security while having a shared PHP pool in the following way?
    1. only a single php conf, pm = dynamic, all sites using that one
    2. pool-user: www-data, pool-group: www-data
    3. all user files set up with group read permissions, group write on the ones PHP needs to write

  • sleddog Member
    edited September 2012

    @zsero said: 1. only a single php conf, pm = dynamic, all sites using that one
    2. pool-user: www-data, pool-group: www-data
    3. all user files set up with group read permissions, group write on the ones PHP needs to write

    For #3 to work (some files writable by PHP user www-data via group permissions) one of two criteria would have to be met. Say your site owner / sftp user is "bob":

    1. The files would have user/group ownership bob:www-data and chmod 664, or

    2. User 'www-data' is added to bob's group. Ownership is bob:bob and chmod is again 664.

    Option 1 presents difficulties, as an ordinary user (bob) cannot change group ownership of files. So he could not do chgrp www-data myfile.

    Option 2 is workable if the system root administrator adds 'www-data' to bob's group. Then bob can make files writable by PHP simply by changing the group permissions (which an ordinary user can do), e.g. chmod 664 myfile. I seem to recall that Webmin/Virtualmin takes this approach.
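
    A small sketch of option 2, with hypothetical paths (the usermod runs as root, the chmods as bob):

    # root: add the shared PHP user to bob's group (one-time)
    usermod -aG bob www-data
    # bob: opt individual files/directories into web-writability via their group bits
    chmod 664 /home/bob/www/cache/feed.xml
    chmod 775 /home/bob/www/uploads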

  • @sleddog, thanks for the amazing help! So just to check if I understand it right.
    1. I create the users, bob, alice, etc.
    2. I set up the home folders/files in such a way that they cannot see each other's files. All files are chown-ed to bob:bob, alice:alice
    3. www-data is added to the group of bob, alice, etc., i.e. all the users
    4. the php pool is run as www-data:www-data
    5. Now the only requirement for a PHP script to run is permissions x4x, and to write to a file, permissions x6x?

    So pretty much if I set 660 on all files and 775 on all folders it should work perfectly? Is there any chance that this poses a security risk? Like, what if bob has a PHP script that lists alice's files?

  • zsero Member
    edited September 2012

    OK, I've changed to ondemand and it works perfectly! Seems absolutely cutting edge; this wouldn't have been possible even a year ago! But with the Dotdeb repo everything works out of the box!

    Basically it's just an amazing thing that we can now set pm.start_servers = 0

    So here is everything you need to lower your memory usage without going shared.

    pm = ondemand

    pm.start_servers = 0

    I might still go and implement a shared pool for situations where there are a lot of users on a LEB, but ondemand is a very good option now! Thanks to everyone who helped!

  • sleddog Member
    edited September 2012

    'pm.start_servers' applies only when pm = dynamic is set. It does nothing if you're using pm = ondemand.

    For pm = ondemand you should review...

    pm.max_children = 5

    The maximum no. of concurrent PHP processes across all pools. You can use it to limit the maximum possible memory usage by PHP. Set it conservatively, enable the php-fpm status page, and watch for an incrementing 'max children reached' counter.

    pm.process_idle_timeout = 10s

    How long, in seconds, a PHP process lingers after serving a request. You may want to set it low if you're serving from a bunch of different pools and are trying to conserve memory.

    Regarding security: shared hosting is inherently insecure. There are benefits and tradeoffs to every approach. I think the best thing to do is monitor your servers and get to know your clients.....
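
    Putting those two settings together, a hypothetical ondemand pool with the status page enabled (pm.status_path also needs a matching location block in nginx to be reachable):

    [zsero]
    pm = ondemand
    pm.max_children = 5
    pm.process_idle_timeout = 10s
    ; watch 'max children reached' here to spot a too-low limit
    pm.status_path = /fpm-status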

  • Be careful with one shared PHP process, as one user can potentially access another's data/files

  • @bdtech said: Be careful with one shared PHP process, as one user can potentially access another's data/files

    If you're going to say that, you should provide some evidence. Do you mean a PHP vulnerability (CVE)? I don't doubt for a minute that vulnerabilities are lurking in PHP :) but please show us the basis of your claim.

  • @bdtech said: Be careful with one shared PHP process, as one user can potentially access another's data/files

    Yes, I realized this. For example, 'bob' can write a PHP script to list all PHP files in 'alice's' directory. But maybe it's not this simple.

    @sleddog: about
    pm.max_children = 5
    The maximum no. of concurrent PHP processes across all pools. I've just tested it and it's per pool. Run this:

    siege -b -t1s host1 &
    siege -b -t1s host2 &
    siege -b -t1s host3 &

    You'll see 15 threads, not 5. Or did you mean something else?

  • @zsero said: The maximum no. of concurrent PHP processes across all pools. I've just tested it and it's per pool.

    Looks like I was wrong there, sorry :(

  • NanoG6 Member
    edited September 2012

    As an alternative you can use the LiteSpeed free version. You just need to chown each htdocs to its user, and then all PHP5 processes will run on behalf of that directory's owner automatically (DocRoot UID).
    PHP5 processes are created only when needed (like ondemand in php-fpm).

    :~# pstree
    init-+-cron
         |-6*[getty]
         |-litespeed-+-httpd
         |           `-litespeed---admin_php (only one process when idle)
         |-2*[logsave]
         |-mysqld_safe-+-logger
         |             `-mysqld---9*[{mysqld}]
         |-sendmail-mta
         |-sshd---sshd---bash---pstree
         |-syslogd
         `-udevd---2*[udevd]
    
  • zsero Member
    edited September 2012

    I think I'll settle on ondemand; it's a tiny bit slower on a cold boot, but otherwise there is no reason not to use it. Any reason for LiteSpeed over nginx + php-fpm w/ ondemand?

  • @zsero said: Any reason for LiteSpeed over nginx + php-fpm w/ ondemand?

    Apache / .htaccess compatible

  • @NanoG6 said: Apache / .htaccess compatible

    Actually, after spending an evening figuring out how to make editable password protection in nginx I can understand it.

  • @sleddog said: If you're going to say that, you should provide some evidence. Do you mean a PHP vulnerability (CVE)? I don't doubt for a minute that vulnerabilities are lurking in PHP :) but please show us the basis of your claim.

    Think about it for a second. If the shared pool is serving, let's say, 5 domains with WordPress blogs for 5 different users, it must have read permission to all of them. And for full WordPress functionality (installing/disabling plugins, uploading images, etc.) PHP will also need write access. If you're a PHP programmer it's trivial to list or read files in various directories, or who knows, you could find a db config with passwords, etc.

    http://www.howtoforge.com/php-fpm-nginx-security-in-shared-hosting-environments-debian-ubuntu

  • @NanoG6 said: Apache / .htaccess compatible

    And not being allowed to host 18+/adult content anymore :')

  • @bdtech said: Think about it for a second. If the shared pool is serving, let's say, 5 domains with WordPress blogs for 5 different users, it must have read permission to all of them. And for full WordPress functionality (installing/disabling plugins, uploading images, etc.) PHP will also need write access. If you're a PHP programmer it's trivial to list or read files in various directories, or who knows, you could find a db config with passwords, etc.

    Ah, I misunderstood you. What you describe is essentially what I meant when I said shared hosting is inherently insecure.

    Running separate PHP processes can help alleviate cross-account issues, but can introduce new issues within an account -- if PHP is run as the account owner.

    The idea I've been toying with is that a hosting account should have two users:

    User: bob ; Group: bob
    User: bob-script ; Group: bob

    bob is a system user and can be granted ssh, sftp, ftp, etc. access.
    bob-script is a passwordless unprivileged user. PHP and other scripts (cgi) run as bob-script.

    Directory /home/bob is owned bob:bob and chmod 750. The web tree might start at /home/bob/www.

    (Another account might have users jane and jane-script and be located at /home/jane.)

    All bob's files are owned bob:bob. bob can make them web-writable by making them group-writable.

    User jane and scripts running as jane-script would have no access to anything under /home/bob.

    An exploited script at, say, /home/bob/www/forum would be able to modify only those files that are group-writable in bob's account.
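
    A sketch of how such an account might be created on a Debian-style system (names illustrative, run as root; useradd -m also creates the 'bob' group):

    useradd -m -d /home/bob -s /bin/bash bob
    # passwordless, unprivileged script user placed in bob's group
    useradd -M -s /usr/sbin/nologin -g bob bob-script
    chmod 750 /home/bob
    mkdir -p /home/bob/www && chown bob:bob /home/bob/www
    # the matching php-fpm pool would then set: user = bob-script, group = bob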

  • Over a month on...

    @zsero:
    1. Are you still using ondemand?
    2. What is your user:group implementation now?

    @sleddog:
    1. Are you using the bob & bob-script idea now?

    @zsero: I'm not sure if the code snippet in your opening post was a typo, but you should do try_files before passing the request to PHP. :)

  • @bnmkl:
    I'm still using it, it's been 100% fast and reliable. It's been implemented as the default method in Minstall and I've made my own minAdmin with this setting. Have a look at it:
    http://www.lowendtalk.com/discussion/5247/minadmin-leb-beta-now

    That try_files is not a typo; it's just a one-line hack to avoid some exploit. It's the same in the official nginx documentation too.

  • bnmkl Member
    edited October 2012

    @zsero : That's great news! :-) Thanks for the link.

    I was referring to the exploit. Depending on your PHP settings, the exploit will work if try_files isn't included before, as the tactic utilises the fact that PHP will use any file it can find in the path.

    /path/file_exists.gif/file_does_not_exist.php

    PHP will execute /path/file_exists.gif

    If it is like that in the documentation, then it's kind of cool having that ability, but I thought it had to be stated in a logical sequence.
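
    In nginx terms, the ordering described above might look like this (socket path from the opening post):

    location ~ \.php$ {
        # refuse the request if the mapped script doesn't exist on disk,
        # so /file_exists.gif/file_does_not_exist.php never reaches PHP
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/home/zsero/http/private/php.socket;
    }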

  • bnmkl Member
    edited October 2012

    Thanks again to you ( @zsero ) and @maxexcloo. :-)

  • @bnmkl:
    From: http://forum.nginx.org/read.php?2,88845,222567#msg-222567

    Since php 5.3.9 the fpm sapi has 'security.limit_extensions' (defaults to '.php') which limits the extensions of the main script FPM will allow to parse.

    It should prevent poor configuration mistakes.

    I think we can remove that line altogether, at least for Dotdeb. I've asked on the nginx list.

    But you are right, that line might need to be the first one!
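
    For reference, the setting quoted above lives in the pool conf and, per that quote, already defaults to .php on 5.3.9+:

    ; refuse to execute anything whose extension isn't listed
    security.limit_extensions = .php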
