Is there a way to load balance like this?

I have 4 servers; 3 of them run the same web app (a Node script). The 4th server is pretty cheap, capped at roughly a 250Mbps port, while the other 3 have ports ranging from 1Gbps to 10Gbps. My idea is to use the 4th server as an entry point without capping users at the limits of its 250Mbps port, so the other servers' ports can be used at full capacity. After reading some documentation and reaching out to a friend, it seems NGINX as a reverse proxy won't really work here, since all traffic would then flow through the 4th server.

Is there another way to use the 4th server to pick the best of the 3 servers running the script (the one with the least load, for example) and then send the user to that server?

Comments

  • Have each server regularly send the 4th one a request containing its current load levels, then when a request comes in, redirect based on which looks the least busy, and don't send anything to one that hasn't reported in recently. Measuring load won't be a single number, though; there's a mix of factors depending on what your scripts do: CPU, IO, bandwidth, ... Also, where the user is calling from may matter if the servers are not all in the same location.

    You can't just pass traffic through, in the sense of the 4th machine acting as a proxy; that way you'd be limited by its connection. You'll need to use an HTTP redirect so the client makes a new request directly to the chosen server.

    As a really simple first cut you could just forward users on a round-robin basis: first to server 1, second to 2, then 3, then back to 1, and only get clever once you are using significant resources.

    If your scripts use a database or any other state then you have the extra issue of keeping the three synchronised in that respect. Another complication is that load could grow too high on one server from the requests it already has, even after you stop sending it new ones, so you might want some mechanism for the app servers to pass sessions back to the balancer, or directly to another app server, in that situation.

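    A rough sketch of what that redirect-based balancer could look like on the 4th server, assuming the app servers push their load figures as JSON to a hypothetical /report endpoint (the URLs, port, and 15-second staleness window below are illustrative, not from the thread):

    ```ts
    // Sketch of a 302-redirect balancer: backends POST {server, load} to /report,
    // every other request gets redirected to the least-loaded backend that has
    // reported recently. Runs on the cheap 4th box; payloads never touch its port.
    import * as http from "node:http";

    interface Report { load: number; lastSeen: number; }

    // Hypothetical backend URLs - replace with the real 1-10Gbps servers.
    const backends: Record<string, Report> = {
      "https://app1.example.com": { load: 0, lastSeen: 0 },
      "https://app2.example.com": { load: 0, lastSeen: 0 },
      "https://app3.example.com": { load: 0, lastSeen: 0 },
    };

    const STALE_MS = 15_000; // skip servers that have stopped reporting

    function pickBackend(): string | null {
      const now = Date.now();
      const alive = Object.entries(backends).filter(([, r]) => now - r.lastSeen < STALE_MS);
      if (alive.length === 0) return null;
      // Least load wins; "load" is whatever metric the agents report (CPU, bandwidth, ...).
      alive.sort((a, b) => a[1].load - b[1].load);
      return alive[0][0];
    }

    http.createServer((req, res) => {
      if (req.method === "POST" && req.url === "/report") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          try {
            const { server, load } = JSON.parse(body); // e.g. {"server":"https://app1.example.com","load":0.42}
            if (backends[server]) backends[server] = { load, lastSeen: Date.now() };
            res.writeHead(204).end();
          } catch {
            res.writeHead(400).end();
          }
        });
        return;
      }
      const target = pickBackend();
      if (!target) {
        res.writeHead(503).end("no healthy backend");
        return;
      }
      // 302 so the client re-requests directly from the big-port server,
      // keeping the actual payload off this box's 250Mbps connection.
      res.writeHead(302, { Location: target + (req.url ?? "/") }).end();
    }).listen(8080);
    ```

    The round-robin variant would simply replace pickBackend() with a counter that cycles through the three URLs.
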
  • jeghjegh Member
    edited February 2022

    You should take a look at round-robin DNS, which will accomplish what you want without even needing your 4th server. All load balancing is done at the DNS level. The drawback, however, is that load is distributed between servers essentially at random, so there's no opportunity to factor in each server's load when deciding which one handles a request.

    Depending on your use case, if you're serving mostly video or large-file content, having your 4th server issue an HTTP 302 redirect to one of servers 1-3 could work well too. For smaller content such as website pages/CSS/images, it's probably not worth the added latency of the extra HTTP request.
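
    For reference, round-robin DNS is just several A records on the same name; resolvers hand them back in varying order, so clients spread roughly evenly across the three servers. A BIND-style zone snippet with placeholder hostname and documentation IPs (not from the thread) could look like:

    ```
    ; three A records on one name = round-robin DNS
    app.example.com.  300  IN  A  203.0.113.11   ; server 1
    app.example.com.  300  IN  A  203.0.113.12   ; server 2
    app.example.com.  300  IN  A  203.0.113.13   ; server 3
    ```

    A low TTL (300 seconds here) makes it quicker to pull a dead server out of rotation, since plain round-robin DNS does no health checking of its own.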

  • @MeAtExampleDotCom said:
    Have each server regularly send the 4th one a request containing its current load levels, then when a request comes in, redirect based on which looks the least busy, and don't send anything to one that hasn't reported in recently. Measuring load won't be a single number, though; there's a mix of factors depending on what your scripts do: CPU, IO, bandwidth, ... Also, where the user is calling from may matter if the servers are not all in the same location.

    You can't just pass traffic through, in the sense of the 4th machine acting as a proxy; that way you'd be limited by its connection. You'll need to use an HTTP redirect so the client makes a new request directly to the chosen server.

    As a really simple first cut you could just forward users on a round-robin basis: first to server 1, second to 2, then 3, then back to 1, and only get clever once you are using significant resources.

    If your scripts use a database or any other state then you have the extra issue of keeping the three synchronised in that respect. Another complication is that load could grow too high on one server from the requests it already has, even after you stop sending it new ones, so you might want some mechanism for the app servers to pass sessions back to the balancer, or directly to another app server, in that situation.

    Thanks, I took this route.

    Each server (excluding the 4th) has a script installed which monitors network and CPU load. The 4th server handles incoming requests, picks the server with the least load, and responds with a 302 redirect.
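
    A minimal sketch of what that per-server reporting script could look like, assuming Node 18+ (for the built-in fetch) and the same hypothetical /report endpoint on the 4th server; the CPU-only load figure is illustrative and could be swapped for network counters:

    ```ts
    // Runs on each app server: every 5 seconds, push a load figure to the balancer.
    import * as os from "node:os";

    const BALANCER = "http://balancer.example.com:8080/report"; // hypothetical balancer URL
    const SELF = "https://app1.example.com";                     // this server's public URL

    async function report(): Promise<void> {
      // 1-minute load average normalised by core count; swap in network/IO
      // metrics if bandwidth rather than CPU is the real bottleneck.
      const load = os.loadavg()[0] / os.cpus().length;
      await fetch(BALANCER, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ server: SELF, load }),
      }).catch(() => { /* balancer unreachable; it will mark this server stale */ });
    }

    setInterval(report, 5_000);
    report();
    ```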

  • Steering traffic from a 250Mbps port?
    Cancel it and get Cloudflare Weighted Load Balancing instead, which steers traffic using predefined weights/percentages, least connections, or server usage monitored by their agent.

  • @AXYZE said:
    Steering traffic from a 250Mbps port?
    Cancel it and get Cloudflare Weighted Load Balancing instead, which steers traffic using predefined weights/percentages, least connections, or server usage monitored by their agent.

    Too pricey
