How would you host the most websites?

BlueVM Member
edited February 2012 in General

Alright so let's get into some nitty gritty stuff since I like talking about theoretical projects and how they'd run.

Assuming you had 1 GB of RAM and wanted to host an ever-growing number of websites, keeping them as optimized as possible... how would you do it?

Limitations: I'm aware there's no such thing as unlimited storage or bandwidth (no way? really?). That isn't a factor... pure and simple: keep RAM/CPU usage to a minimum...

Comments

  • Nginx + PHP 5.3.10 + APC enabled as an opcode cache and for caching frequently used queries. MySQL tuned to the point of no more tuning.

    Thanked by 2: djvdorp, FLVPS
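A minimal sketch of what that stack might look like on the nginx side (the paths, server name, and socket location are assumptions for illustration, not details from the thread):

```nginx
# Hypothetical nginx server block handing PHP off to php-fpm;
# APC then acts as the opcode cache inside the PHP processes.
server {
    listen 80;
    server_name example.com;
    root /var/www/example;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # php-fpm listening on a local unix socket (assumed path)
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```

On the PHP side, the APC part would amount to something like `apc.enabled=1` and a modest `apc.shm_size` in php.ini, sized to whatever RAM is left over after MySQL and the web server.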
  • @Derek said: Nginx + PHP 5.3.10 + APC enabled as an opcode cache and for caching frequently used queries. MySQL tuned to the point of no more tuning.

    This.

  • @BlueVM see above ...can also ask browser to respond to gzip compression.
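For the gzip suggestion, a hedged example of typical nginx compression settings on a low-RAM box (a moderate compression level keeps the CPU cost down; nginx compresses text/html by default, so only the extra types need listing):

```nginx
# On-the-fly gzip for text responses; level 4 is a CPU/size trade-off
gzip on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_types text/css application/javascript application/json text/plain;
gzip_vary on;
```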

  • @Derek said: Nginx + PHP 5.3.10 + APC enabled as an opcode cache and for caching frequently used queries. MySQL tuned to the point of no more tuning.

    This plus Supervisord and Beanstalkd
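For the Supervisord + Beanstalkd part, a sketch of how background queue workers might be kept alive so slow jobs stay off the web request path (the worker script is hypothetical, not from the thread):

```ini
; supervisord.conf fragment: keep a hypothetical Beanstalkd
; queue consumer running and restart it if it dies
[program:queue-worker]
command=php /var/www/example/worker.php
numprocs=2
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
stdout_logfile=/var/log/queue-worker.log
```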

  • Maounique Host Rep, Veteran
    edited February 2012

    @natestamm said: see above ...can also ask browser to respond to gzip compression.

    I am not sure using gzip will be easier on the CPU or RAM.
    M

  • Pre-Gzip everything, and then force the web server to send the gzipped versions.
    I'm sure nginx supports this.

    You could throw CloudFlare onto it as well, reduce the requests a bit.
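A sketch of the pre-compression step (the directory layout and file names here are made up for the demo). nginx's `gzip_static` module can then serve `file.gz` directly to clients that accept gzip, spending zero CPU on compression at request time:

```shell
# Build a demo static tree and pre-compress the text assets at max level
mkdir -p demo/static
echo 'body { color: #333; }' > demo/static/site.css
echo 'console.log("hi");'   > demo/static/app.js

# Write foo.css.gz next to foo.css; originals stay for non-gzip clients
find demo/static -type f \( -name '*.css' -o -name '*.js' \) \
    -exec sh -c 'gzip -9 -c "$1" > "$1.gz"' _ {} \;
```

With the `.gz` siblings in place, `gzip_static on;` in the relevant nginx location block is all that's needed (the module must be compiled in).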

  • OneTwo Member
    edited February 2012

    1 main web server: lighttpd (gzip) + fastcgi + php-xcache
    1 mysql server: mysql only (to cache shit)
    2 front-end servers: nginx (gzip)

    iptables rules to the backend servers for security and shit. that would work for many req/s.

    i'm running a static-content site that has averaged 8 req/s since I restarted the webserver. it runs awesome on a single 256MB RAM VPS with lighttpd (gzip). the only problem is the bandwidth.
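The lighttpd gzip piece of that setup would look roughly like this mod_compress fragment (the cache directory path is an assumption; it must exist and be writable by lighttpd):

```nginx
# lighttpd.conf fragment: on-the-fly gzip via mod_compress,
# with compressed copies cached on disk to save repeat CPU work
server.modules += ( "mod_compress" )
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype  = ( "text/html", "text/plain", "text/css", "application/javascript" )
```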

  • How about trying varnish cache?
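A minimal sketch of a Varnish setup from that era (2.x/3.x VCL): Varnish takes port 80 and the web server moves to 8080 behind it (an assumed layout, not from the thread):

```nginx
# Varnish default.vcl sketch: backend web server on an assumed port
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies for static assets so Varnish can cache them
    if (req.url ~ "\.(css|js|png|jpg|gif|ico)$") {
        unset req.http.Cookie;
    }
}
```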

  • Getting a Xeon E7 on a gigabit connection would work too.

  • squid in reverse proxy mode (without disk cache) + cherokee + php-fpm on bsd sockets + apc

    Works flawless on a 128mb xen vps.

  • nginx's FastCGI cache options can provide a significant boost for non-static content if you don't have a proxy cache like Squid or Varnish in front of it; it turns serving a dynamic page into serving a static page from disk.

    It's simple to set up and will make pretty much any request that can be safely cached for a set period of time (minutes/hours) more efficient. Speedy page loading is also one of the factors Google uses to rank search results, for added benefit.

    Thanked by 1: djvdorp
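A hedged sketch of what that FastCGI cache config could look like (zone name, cache path, socket path, and TTLs are all assumptions): cached responses are served straight from disk without touching PHP at all.

```nginx
# http-level: where cached FastCGI responses live on disk
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:16m inactive=60m;

server {
    listen 80;
    root /var/www/example;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;

        fastcgi_cache phpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;  # serve cached 200s for 10 minutes
    }
}
```

The key caveat is choosing what's safe to cache: anything personalized (logged-in pages, carts) needs to be excluded or keyed more carefully.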
  • @lbft that's interesting, care to share some config and explain better?

  • I'm messing around with this idea on cPanel with 1024 MB of RAM. I'll keep you updated on how it goes.

  • nginx + unicorn for ruby/rails

    nginx for GoLang code. I also use nginx + thttpd for GoLang when I need to run in CGI.

    lighttpd for ikiwiki / perl (CGI) / CGI
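For the nginx + Unicorn pairing, the usual shape is nginx proxying to the Unicorn master over a unix socket (the socket path and server name here are assumptions):

```nginx
# nginx fronting a Ruby/Rails app served by Unicorn workers
upstream unicorn {
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unicorn;
    }
}
```

nginx buffers slow clients so the small pool of Unicorn workers is never tied up waiting on the network, which is what makes this pairing RAM-efficient.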

  • @Steve81 said: squid in reverse proxy mode (without disk cache) + cherokee + php-fpm on bsd sockets + apc

    interesting config, might wanna share the details :P ?

  • @djvdorp said: interesting config, might wanna share the details :P ?

    There aren't a lot of details; it's quite a simple configuration.

    However:

    • cherokee listening on 127.0.0.1:80 with gzip/deflate compression enabled (except for php), compression level set to max.
    • Squid listening on IP:80
      • Squid config:
        acl all src all
        acl manager proto cache_object
        acl purge method PURGE
        acl CONNECT method CONNECT
        
        # "default" is, on my vps, a vhost that returns a 401 Unauthorized;
        # if someone doesn't know the hostname of my website,
        # they don't need to see it
        http_port IP:80 accel defaultsite=default vhost
        
        # cache_mem depends on the available RAM on the machine;
        # I use 8 MB on a 64MB vps
        cache_mem 8 MB
        
        # No disk cache
        cache_dir null /var/spool/squid
        
        access_log none
        cache_log none
        cache_store_log none
        icp_port 0
        udp_incoming_address 127.0.0.1
        hosts_file /etc/hosts
        coredump_dir /var/spool/squid
        
        # cherokee is listening on 127.0.0.1:80
        cache_peer 127.0.0.1 parent 80 0 no-query no-digest no-netdb-exchange originserver name=cherokee login=PASS
        
        # All requests are sent to the local webserver
        # Just to avoid writing a list with all available vhosts
        cache_peer_access cherokee allow all
        
        # I don't want to expose that I cache through Squid
        header_access Via deny all
        
        # Needed so cherokee can recover the real client IP via X-Forwarded-For
        forwarded_for on
        header_access X-Forwarded-For allow all
        
        http_access deny manager all
        http_access deny purge all
        http_access deny CONNECT all
        
        # All requests are sent to the local webserver
        http_access allow all
        
    Thanked by 1: Mon5t3r
  • might be a dumb question, but are there advantages to Cherokee over Nginx, for example?

  • Steve81 Member
    edited February 2012

    @djvdorp said: might be a dumb question, but are there advantages to Cherokee over Nginx, for example?

    I ran into some problems with php-fpm on unix sockets & nginx. Apart from that, Cherokee has a web interface to easily configure the webserver.

    Thanked by 1djvdorp