NGINX Load Balance & Proxy Pass?
I have a CDN for my small site which serves images, JS, and other assets for my projects. My CDN works through an API, so it isn't 100% static content: it also serves dynamic content generated on the fly, which means it is put live through a proxy_pass on Server 1.
I have 3 servers,
- Server 1 - Main CDN Server
- Server 2 - Empty Server that I'd like to use to load balance Server 1
- Server 3 - Empty Server that I'd like to use to load balance Server 1
Is there a way to use both Server 2 & 3 to load balance Server 1? What exact steps would I have to think about/take? Do I need another server which would use Server 2 & 3 to load balance Server 1?
Thanks!
Comments
Hey! There are definitely some options for that. I'd recommend getting two load-balancing servers with a floating IP, though, so you have some failover/HA ability. Example configs can be found here: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
If you're not sure what you're doing, though, and this is production data, I'd recommend not going down this route; get a load balancer from a big provider like DO, Vultr, OVH, Linode, AWS, etc., and put your servers behind it.
If you want to try anyway, just shoot me a PM or drop a message down here if you need any help!
So just to clear this up: I need an initial server that sits in front of the load-balancing servers, meaning I need a 4th server? Or is that incorrect?
i.e.
Server 1 -> CDN
Server 4 -> Load Balancing Server 2, 3 which point towards Server 1
--
Or is there a way to use Server 2/3 together to load balance each other, pointing towards Server 1?
Hey @sgno1, not exactly. More like this (note: all IPs in the diagram below are just examples; 10.10.10.* is the internal network, 192.168.1.1 = the external IP of a floating IP. The solid lines are active connections, the dotted ones are failover connections):
What you'll need:
The CDN (caching) servers' configuration in NGINX, for example, which saves cached data in /tmp and then proxies requests to the backends. You'll need to change the server_names and install an SSL certificate, or set that up with Let's Encrypt:
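A minimal sketch of what that caching config could look like (untested; the 10.10.10.* addresses, hostname, and cache sizes are placeholders you'd adapt to your own setup):

```nginx
# Cache responses on disk in /tmp and proxy everything to the backends.
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=cdn_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

upstream cdn_backends {
    server 10.10.10.2;   # backend webserver (example internal IP)
    server 10.10.10.3;   # backend webserver (example internal IP)
}

server {
    listen 80;
    server_name cdn.example.com;   # change to your own hostname

    location / {
        proxy_cache cdn_cache;
        proxy_cache_valid 200 301 302 60m;   # cache successful responses for 60 min
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://cdn_backends;
    }
}
```

You'd still add the SSL/Let's Encrypt part on top of this.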
Then on the backend webservers you can run for example:
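For example, a bare-bones backend server block (server_name and root path are placeholders):

```nginx
# Plain webserver config on each backend; just serves the files.
server {
    listen 80;
    server_name cdn.example.com;   # same hostname the caching tier forwards

    root /var/www/cdn;             # wherever your assets live

    location / {
        try_files $uri $uri/ =404; # serve the file or return 404
    }
}
```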
Note that I threw the above together really quickly, so it might contain some errors that need modification or tweaking. But that's the basics of a 3-server setup. If you want to run HA like the drawing above:
https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-setup-with-heartbeat-and-floating-ips-on-ubuntu-16-04
What if I don't want a floating IP? Can I do this setup instead:
Server 1 -> Holds Load Balancer 1 & 2 -> Balancers point to CDN
Unless there is something I don't understand, why would you need load balancing that points to one single location in the end? Where is the balancing? (Unless you are doing it to hide the backend or something.)
The main benefit of load balancing comes when you have multiple backend servers running the same backend and you want visitors distributed across them.
I thought I could load balance the network to relieve the stress from server 1? Or will that not work?
Load Balancer 1 & 2 are both pointing towards the CDN, so I was wondering if I could load balance between them to spread the network load across both servers.
But I guess not; in that case I would need to run separate instances of the app on each machine, which I don't have the capacity to do.
I am not even sure if what you are saying at this point makes any sense, or is it just me who is unable to understand.
Load balancing: having one (or multiple) clerk(s)/receptionist(s) and multiple rooms in a building. When a visitor arrives, the receptionist tells them "go to room 1, there is some space for you", or room 2, etc.
What you seem to be trying to do: I have one room, and I am going to hire 2 receptionists, each of whom will tell the visitor "go to room 1". Why? Just hire no receptionists at all and let everyone who arrives go to the room.
Unless you are using these servers as some kind of cache (which the CDN does already!), there is no meaning or usefulness in what you are trying to do (assuming I understand the context correctly). You mentioned the non-cached content is dynamic (generated via user interaction?); these "idling servers" can't help with that.
You'll need to do either CDN -> LB -> webservers, or CDN -> webservers directly. The CDN should do the heavy lifting in that case and should lower your server load. If one server isn't enough to handle it, upgrade your backend servers first or implement something like caching with Varnish.
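A rough sketch of the CDN -> LB -> webservers option in NGINX (all IPs and hostnames below are just examples, and the max_fails/fail_timeout values are arbitrary starting points):

```nginx
# The LB spreads requests over two backends; max_fails/fail_timeout
# give you passive health checks in open-source NGINX, so a dead
# backend is temporarily taken out of rotation.
upstream web_backends {
    server 10.10.10.2 max_fails=3 fail_timeout=30s;
    server 10.10.10.3 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name lb.example.com;   # placeholder; the CDN points here

    location / {
        proxy_set_header Host $host;
        proxy_pass http://web_backends;
    }
}
```

This only makes sense if both backends actually run the same app, per the discussion above.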
Also, which CDN are you using? If it's Cloudflare, you won't need to do anything with NGINX for load balancing (LB), since they can do it pretty well.
Would make sense if there are many backend servers. But...
I am afraid he actually has 1 single backend server (that's what I understand from OP).
I second that, +1
Thanks! I'll further look into it!
Does everything need to sit on "Server 1"?
You can just do basic round robin across 3 servers that hold the same data, or divide resources, e.g. assets.example.com (JS, CSS) and images.example.com, and point these to different server IPs.
If your servers are 1Gbit, you then get a peak of 2–3Gbps instead of routing everything via one 1Gbit server.
If you go with round robin and you use Cloudflare then you can automatically check health of servers and disable them if they become unhealthy via Cloudflare API.
And now you have $0 HA load balancing.
Yes everything sits on Server 1, it has all images, js css at one point under 1 IP.
But does it need to? Mounting the same drive on different servers, or synchronization, would be my first choice instead of choking everything through one server and calling it balancing. @afn is right.
You'd be best to sync that drive to a local directory on each server instead of serving content directly from the mounted drive, too. If the server hosting the drive you're mounting (say Server 1) dies, the content won't be available on Servers 2/3 even if they're still online and running.
If I don't understand you wrong, you're suggesting DNS-based round robin by simply creating 3 A/AAAA records (one per server), and if one stops responding, having it removed via the CF API with a script?
You’ll still have to account for DNS caching…
The problem I'm having with this is that he has all his eggs in one basket. If one of the servers fails, you're shit out of luck and your site is broken if you use DNS RR. DNS caching can be a bitch, even if you set the TTL low. Google DNS, for example, always caches for at least 15 minutes even if your TTL is lower.
With something like a Varnish, NGINX, HAProxy, Apache, LiteSpeed, or Caddy load balancer as a software-based LB, or even CF's real LB functions, you can always disable one backend automatically with health checks.
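For instance, in an NGINX upstream you can take backends in and out of rotation without touching DNS at all (addresses below are just examples):

```nginx
# "down" manually disables a backend, "backup" keeps a spare that only
# receives traffic when the primaries fail; max_fails/fail_timeout are
# open-source NGINX's passive health checks.
upstream app {
    server 10.10.10.2 max_fails=2 fail_timeout=10s;
    server 10.10.10.3 down;     # manually taken out of rotation
    server 10.10.10.4 backup;   # only used if the others are unavailable
}
```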