NGINX Load Balance & Proxy Pass?

I have a CDN for my small site that serves images, JS and other assets for my projects. The CDN works through an API, so it's not 100% static content: it also includes dynamic content generated on the fly, which is served through a proxy_pass on Server 1.

I have 3 servers:

  • Server 1 - Main CDN Server
  • Server 2 - Empty Server that I'd like to use to load balance Server 1
  • Server 3 - Empty Server that I'd like to use to load balance Server 1

Is there a way to use both Server 2 & 3 to load balance Server 1? What exact steps would I have to think about/take? Do I need another server which would use Server 2 & 3 to load balance Server 1?

Thanks!

Comments

  • Hey! There are definitely some options for that. I'd recommend getting two load-balancing servers with a floating IP, though, so you have some failover/HA ability. Example configs can be found here: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/

    If you're not sure what you're doing, though, and this is production data, I'd recommend not going down this route; get a load balancer from a big provider like DO, Vultr, OVH, Linode, AWS, etc., and hang your servers behind that.

    If you want to try anyway, just shoot me a PM or drop a message down here if you need any help!

  • sgno1 Member
    edited November 2021

    @FoxelVox said:
    Hey! There are definitely some options for that. I'd recommend getting two load-balancing servers with a floating IP, though, so you have some failover/HA ability. Example configs can be found here: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/

    If you're not sure what you're doing, though, and this is production data, I'd recommend not going down this route; get a load balancer from a big provider like DO, Vultr, OVH, Linode, AWS, etc., and hang your servers behind that.

    If you want to try anyway, just shoot me a PM or drop a message down here if you need any help!

    So just to clear this up, I need an initial server that sits in front of the load-balancing servers, meaning I need a 4th server? Or is that incorrect?

    i.e.

    Server 1 -> CDN
    Server 4 -> Load Balancing Server 2, 3 which point towards Server 1

    --

    Or is there a way to use Servers 2/3 together to load balance each other, pointing towards Server 1?

  • FoxelVox Member
    edited November 2021

    Hey @sgno1, not exactly. More like this (note: all IPs in the diagram below are just examples; 10.10.10.* is the internal network and 192.168.1.1 is the external floating IP. The solid lines are active connections, the dotted ones are failover connections):

    What you'll need:

    The CDN (caching) server's configuration in NGINX, for example, which saves cached data in /tmp and then proxies requests to the backends. You'll need to change the server_name values and install an SSL certificate, or set that up with Let's Encrypt:

    http {
        # Cache zone for the load balancer: cached responses live under /tmp,
        # keyed in the 30 MB "STATIC" zone, dropped after 24h of inactivity,
        # and capped at 15 GB on disk.
        proxy_cache_path /tmp levels=1:2 keys_zone=STATIC:30m
                         inactive=24h max_size=15g;

        # Backend pool: send each request to the backend with the fewest
        # active connections.
        upstream myapp1 {
            least_conn;
            server 10.10.10.11;
            server 10.10.10.12;
        }

        # Redirect all plain-HTTP traffic to HTTPS.
        server {
            listen 80;
            server_name example.com www.example.com;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443 ssl http2;
            server_name example.com www.example.com;
            # Uncomment and point these at your own certificate, or use Let's Encrypt.
            # ssl_certificate     /etc/nginx/ssl/www.example.com.crt;
            # ssl_certificate_key /etc/nginx/ssl/www.example.com.key;
            ssl_protocols TLSv1.2 TLSv1.3;
            ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

            # Proxy everything to the backend pool and cache 200 responses for a day;
            # serve stale content if the backends error out or time out.
            location / {
                proxy_pass             http://myapp1;
                proxy_set_header       Host $host;
                proxy_buffering        on;
                proxy_cache            STATIC;
                proxy_cache_valid      200  1d;
                proxy_cache_use_stale  error timeout invalid_header updating http_500 http_502 http_503 http_504;
            }
        }
    }
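
    Not shown in the config above, but if you want to verify from the client side that the load balancer is really answering from its cache, one commonly used addition is exposing the $upstream_cache_status variable as a response header inside that same location / block (a small sketch, assuming the STATIC zone defined above):

        location / {
            # ... the proxy_* directives from above, plus:
            add_header X-Cache-Status $upstream_cache_status;   # HIT, MISS, EXPIRED, STALE, ...
        }

    Requesting the same asset twice with curl -I should then typically show X-Cache-Status: MISS on the first request and HIT on the second.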
    

    Then on the backend webservers you can run, for example:

    server {
            listen 80;
            root /var/www/html;
            index index.php index.html index.htm index.nginx-debian.html;
            server_name example.com www.example.com;

            location / {
                    try_files $uri $uri/ =404;
            }

            # Hand PHP requests to PHP-FPM over its unix socket.
            location ~ \.php$ {
                    include snippets/fastcgi-php.conf;
                    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
            }

            # Never serve .htaccess/.htpasswd files.
            location ~ /\.ht {
                    deny all;
            }
    }
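
    The load balancer config above only forwards the Host header, so the backends will see the LB's internal address as the client. If the application needs the visitor's real IP, a common addition (not in the configs above) is an X-Real-IP/X-Forwarded-For header on the load balancer plus the realip module on the backends; a sketch, assuming the 10.10.10.0/24 internal network from the example and a stock nginx build that includes ngx_http_realip_module:

        # On the load balancer, inside location /:
        proxy_set_header X-Real-IP        $remote_addr;
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;

        # On each backend, in the http or server context:
        set_real_ip_from 10.10.10.0/24;   # trust the LB's internal network
        real_ip_header   X-Real-IP;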
    
    

    Note that I threw the above together really quickly, so it might contain some errors that need modification or tweaking, but that's the basics of a 3-server setup. If you want to run HA like the drawing above:

    https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-setup-with-heartbeat-and-floating-ips-on-ubuntu-16-04
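
    For the floating-IP part, where the network allows plain VRRP you could also use keepalived instead of heartbeat; on most cloud providers you would move the IP via their API instead, as that tutorial does. A minimal sketch of /etc/keepalived/keepalived.conf on the active load balancer; the interface name, password and the 192.168.1.1 address are placeholders:

        vrrp_instance VI_1 {
            state MASTER              # the standby LB uses state BACKUP
            interface eth0            # NIC that should carry the floating IP
            virtual_router_id 51
            priority 101              # give the standby a lower priority, e.g. 100
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass change-me
            }
            virtual_ipaddress {
                192.168.1.1/24        # the floating address clients connect to
            }
        }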

  • @FoxelVox said:
    Hey @sgno1, not exactly. More like this (note: all IPs in the diagram below are just examples; 10.10.10.* is the internal network and 192.168.1.1 is the external floating IP). [...]

    What if I don't want a floating IP, can I do this setup instead:

    Server 1 -> Holds Load Balancer 1 & 2 -> Balancers point to CDN

  • Unless there is something I don't understand, why would you need load balancing that points to one single location in the end? Where is the balancing? (Unless you are doing it to hide the backend or something.)

    The main benefit of load balancing comes when you have multiple backend servers running the same application and you want visitors to be distributed across them.

  • sgno1 Member
    edited November 2021

    @afn said:
    Unless there is something I don't understand, why would you need load balancing that points to one single location in the end? Where is the balancing? (Unless you are doing it to hide the backend or something.)

    The main benefit of load balancing comes when you have multiple backend servers running the same application and you want visitors to be distributed across them.

    I thought I could load balance the network to relieve the stress on Server 1? Or will that not work?

    Load Balancers 1 & 2 are both pointing towards the CDN, so I was wondering if I could load balance between them to spread the network load across both servers.


    But I guess not; in that case I would need to run separate instances of the app on each machine, which I don't have the power to do.

  • afn Member
    edited November 2021

    @sgno1 said: Load Balancers 1 & 2 are both pointing towards the CDN, so I was wondering if I could load balance between them to spread the network load across both servers.

    I am not even sure whether what you are saying at this point makes any sense, or if it's just me who is unable to understand.

    Load balancing: having one (or multiple) receptionist(s) and multiple rooms in a building. When a visitor arrives, the receptionist tells them "go to room 1, there is some space for you", or room 2, etc.

    What you seem to be trying to do: I have one room, I am gonna hire 2 receptionists, and each one will tell the visitor "go to room 1". Why? Just hire no receptionists at all and let everyone who arrives go to the room.

    Unless you are using these servers for some caching (which the CDN does already!), there is no meaning or usefulness in what you are trying to do (assuming I understand the context correctly). You mentioned the non-cached content is dynamic (generated via user interaction?); these "idling servers" can't help with that.

  • You'll need to do either CDN -> LB -> webservers, or CDN -> webservers directly. The CDN should do the heavy lifting in that case and should lower your server load. If one server isn't enough to handle it, upgrade your backend servers first or implement something like caching with Varnish.

    Also, which CDN are you using? If it's Cloudflare you won't need to do anything with NGINX for load balancing (LB), since they can do it pretty well.

  • afn Member
    edited November 2021

    CDN -> LB -> webservers,

    Would make sense if there are many backend servers. But...

    @FoxelVox said: upgrade your backend servers

    I am afraid he actually has 1 single backend server (that's what I understand from OP).

    @FoxelVox said: Also, which CDN are you using? If it's Cloudflare you won't need to do anything with NGINX for load balancing (LB), since they can do it pretty well.

    I second that, +1

  • @afn said:

    CDN -> LB -> webservers,

    Would make sense if there are many backend servers. But...

    @FoxelVox said: upgrade your backend servers

    I am afraid he actually has 1 single backend server (that's what I understand from OP).

    @FoxelVox said: Also, which CDN are you using? If it's Cloudflare you won't need to do anything with NGINX for load balancing (LB), since they can do it pretty well.

    I second that, +1

    Thanks! I'll look into it further!

  • Does everything need to sit on "Server 1"?

    You can just do basic round robin across 3 servers that have the same data, or divide resources like assets.example.com (JS, CSS) & images.example.com and point those to different server IPs.

    If your servers are 1 Gbit, you now get a peak of 2-3 Gbps instead of routing everything via one 1 Gbit server.

    If you go with round robin and you use Cloudflare, you can automatically check the health of the servers and disable them via the Cloudflare API if they become unhealthy.
    And now you have $0 HA load balancing.
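
    The round-robin part is just several A records on the same name; a sketch with made-up IPs and a low TTL, shown in zone-file notation (in Cloudflare you would simply add three A records for the same hostname, and the health-check piece would be a small script that flips those records on and off through Cloudflare's DNS API):

        cdn.example.com.    300  IN  A  203.0.113.11   ; Server 1
        cdn.example.com.    300  IN  A  203.0.113.12   ; Server 2
        cdn.example.com.    300  IN  A  203.0.113.13   ; Server 3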

  • @AXYZE said:
    Does everything need to sit on "Server 1"?

    You can just do basic round robin across 3 servers that have the same data, or divide resources like assets.example.com (JS, CSS) & images.example.com and point those to different server IPs.

    If your servers are 1 Gbit, you now get a peak of 2-3 Gbps instead of routing everything via one 1 Gbit server.

    If you go with round robin and you use Cloudflare, you can automatically check the health of the servers and disable them via the Cloudflare API if they become unhealthy.
    And now you have $0 HA load balancing.

    Yes, everything sits on Server 1; it has all images, JS and CSS in one place under one IP.

  • @sgno1 said:
    Yes, everything sits on Server 1; it has all images, JS and CSS in one place under one IP.

    But does it need to? Mounting the same drive to different servers, or synchronizing it, would be my first choice instead of choking everything through one server and calling it balancing; @afn is right.

  • dann00 Member, Patron Provider

    You'd be best off syncing that drive to a local directory on each server instead of serving content directly from the mounted drive. If the server hosting the drive you're mounting (say Server 1) dies, the content won't be available on Servers 2/3 even if they're still online and running.

  • @AXYZE said:
    Does everything need to sit on "Server 1"?

    You can just do basic round robin across 3 servers that have the same data, or divide resources like assets.example.com (JS, CSS) & images.example.com and point those to different server IPs.

    If your servers are 1 Gbit, you now get a peak of 2-3 Gbps instead of routing everything via one 1 Gbit server.

    If you go with round robin and you use Cloudflare, you can automatically check the health of the servers and disable them via the Cloudflare API if they become unhealthy.
    And now you have $0 HA load balancing.

    If I'm not misunderstanding you here, you're suggesting DNS-based round robin by simply creating 3 A/AAAA records (one per server), and if a server stops responding to pings, having its record removed via the CF API with a script?

    You'll still have to account for DNS caching…

    The problem I'm having with this is that he has all his eggs in one basket. If one of the servers fails you're shit out of luck and your site is broken if you use DNS RR. DNS caching can be a bitch, even if you set the TTL low; Google DNS, for example, always caches for at least 15 minutes even if your TTL is different.

    With something like a Varnish, NGINX, HAProxy, Apache, LiteSpeed or Caddy load balancer as a software-based LB, or even CF's real LB functions, you can always disable one backend automatically with health checks.
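
    With plain open-source NGINX as the LB, the health checking is passive: a backend gets marked as failed after a few errors using the max_fails/fail_timeout parameters (active health_check probes are an NGINX Plus feature). A sketch building on the upstream block from earlier in the thread:

        upstream myapp1 {
            least_conn;
            server 10.10.10.11 max_fails=3 fail_timeout=30s;   # taken out of rotation for 30s after 3 failures
            server 10.10.10.12 max_fails=3 fail_timeout=30s;
        }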
