Load balancing: HTTP response time or CPU load

Hello,

Do you have any recommendations for load balancing according to HTTP(S) response time or CPU load?
So far I've found a technique called dynamic round robin here, but I can't seem to find anything on how to actually set it up.
Any idea how to do dynamic round robin on nginx's free version?

Are there other tools that are better for such a setup?

Comments

  • http://nginx.org/en/docs/http/load_balancing.html

    30 seconds on google, you are welcome :)

  • lkjl Member
    edited December 2017

    @qtwrk said:
    http://nginx.org/en/docs/http/load_balancing.html

    30 seconds on google, you are welcome :)

    Did you read the post at all? I did refer to nginx in the thread. I'm asking about dynamic round robin (or a similar functionality elsewhere).

    Weighted round robin – A weight is assigned to each server based on criteria chosen by the site administrator; the most commonly used criterion is the server’s traffic-handling capacity. The higher the weight, the larger the proportion of client requests the server receives. If, for example, server A is assigned a weight of 3 and server B a weight of 1, the load balancer forwards 3 requests to server A for each 1 it sends to server B.
    Dynamic round robin – A weight is assigned to each server dynamically, based on real-time data about the server’s current load and idle capacity
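    For context, static weighted round robin is easy enough in plain nginx; a minimal sketch (addresses and weights are just placeholders) is below. What I'm after is having those weights adjust automatically from real-time load, which stock nginx doesn't appear to do.

        upstream backend {
            # fixed weights: roughly 3 requests to A for every 1 to B
            server 10.0.0.1 weight=3;
            server 10.0.0.2 weight=1;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backend;
            }
        }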

  • @lkjl said:

    @qtwrk said:
    http://nginx.org/en/docs/http/load_balancing.html

    30 seconds on google, you are welcome :)

    Did you read the post at all? I did refer to nginx in the thread. I'm asking about dynamic round robin (or a similar functionality elsewhere).

    Weighted round robin – A weight is assigned to each server based on criteria chosen by the site administrator; the most commonly used criterion is the server’s traffic-handling capacity. The higher the weight, the larger the proportion of client requests the server receives. If, for example, server A is assigned a weight of 3 and server B a weight of 1, the load balancer forwards 3 requests to server A for each 1 it sends to server B.
    Dynamic round robin – A weight is assigned to each server dynamically, based on real-time data about the server’s current load and idle capacity

    Yeah, my bad, sorry. Maybe you should read a little bit further:

    http://nginx.org/en/docs/http/ngx_http_upstream_module.html#least_conn

    I think you mean least connections or least time?
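    Roughly like this (backend addresses are just examples); with least_conn nginx sends each new request to the server with the fewest active connections:

        upstream backend {
            least_conn;
            server 10.0.0.1;
            server 10.0.0.2;
        }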

  • Fewer connections don't mean less load, especially on systems with different CPUs.

  • Get a load balancer, or implement one yourself, basically. An old-school approach was to set up private SNMP, query load/connection stats from the backends, and steer the proxying accordingly (rough sketch of the polling half below). I'd probably do something similar with a proprietary, non-SNMP measurement myself, but I haven't had the need to implement one.

    You might consider checking out recent OSS CDN software as well.
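    Rough sketch of the polling half, assuming snmpd is running on each backend (address and community string are placeholders):

        # 1-minute load average of a backend via UCD-SNMP-MIB
        snmpget -v2c -c private 10.0.0.1 UCD-SNMP-MIB::laLoad.1
        # repeat per backend, then rewrite the balancer's weights from the results and reload it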

  • Vita Member
    edited December 2017

    You can distribute load by using haproxy with keepalived: float the IP to the haproxy instance with the lower load. That way you not only get redundancy in case something fails, but you can also distribute services across two haproxy nodes. Keepalived supports writing your own check scripts for this.
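    A minimal keepalived sketch of the idea (interface, VIP and the check script path are placeholders you'd adapt): when the check script starts failing on the busy node, its priority drops and the VIP floats over to the other haproxy.

        vrrp_script chk_load {
            script "/usr/local/bin/check_load.sh"   # your own script: exit non-zero when this node is overloaded
            interval 5
            weight -20                              # lower the priority while the check fails
        }

        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 100
            virtual_ipaddress {
                192.0.2.10/24
            }
            track_script {
                chk_load
            }
        }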

    Thanked by 2: WSS, lkjl
  • @lkjl

    Traefik (https://docs.traefik.io/basics/#backends) supports both weighted and dynamic round robin. You can also configure health checks and stickiness.

    Tengine (NGINX fork) also supports consistent hashing, I think.
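    From memory of the v1 docs, a backend definition looks roughly like this (names and URLs are placeholders); "wrr" is weighted and "drr" is dynamic round robin:

        [backends]
          [backends.backend1]
            [backends.backend1.loadBalancer]
              method = "drr"
            [backends.backend1.servers.server1]
              url = "http://10.0.0.1:80"
              weight = 3
            [backends.backend1.servers.server2]
              url = "http://10.0.0.2:80"
              weight = 1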

    Thanked by 1: lkjl
  • @rincewind said:
    @lkjl

    Traefik (https://docs.traefik.io/basics/#backends) supports both weighted and dynamic round robin. You can also configure health checks and stickiness.

    Tengine (NGINX fork) also supports consistent hashing, I think.

    Thanks! Never heard of these. I'm checking them now.
