
Hetzner Floating IP - How reliable is it?

Hi,

I am building a bigger setup with load balancers and 20-30 backend servers (web, db etc.).
For maximum performance, I will run my own LBs and use a floating IP for HA.

But... if this floating IP goes down, all my servers would be "down".
How stable and reliable is the Hetzner floating IP? Has anyone been using it for a while? The whole idea of HA and LB dies if the floating IP is down - it is a single point of failure. Even though I will have multiple LB instances and fully HA backend services with a MySQL cluster and web servers, it all depends on this floating IP anyway.

What is your experience?

Comments

  • Neoon Community Contributor, Veteran

    If you consider using floating IPs, I would request that Hetzner provide access to at least one temp sensor.

    If you had that, you would be able to switch your IPs before the DC burns down.
    When SBG burned down, their system was dead in the water and people got fucked big time.

    I guess it depends; there is no 100% guarantee that it will work if something bigger happens within Hetzner.

  • I believe Hetzner's LB is already HA so you don't have to use a floating IP.

  • @zakkuuno said:
    I believe Hetzner's LB is already HA so you don't have to use a floating IP.

    I will not use Hetzner's LB. I will build my own HA setup.

    Sorry, I don't think this is a valid concern here; you're conflating "single point of failure" with "reliability". An IP address does not simply "go down". Either the IP range stops getting announced or the server itself goes down. The former is pretty unlikely: it's extremely rare for a service provider to just suddenly stop announcing a single IP range unless the router for that range itself goes down, which is not very common.

    Also, you can't have a single IP address that corresponds to multiple load balancers. Each load balancer will need its own IP address, because one IP address can only be associated with one server, so I have no idea what you're trying to do with your setup.

  • I used a custom HA LB setup with floating IP for a while a few years ago using haproxy + keepalived, when they didn't have managed load balancers yet. It worked well and I didn't have any problems with the floating IP or anything. But why the hassle instead of just using a load balancer since they have them now?
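
    For reference, roughly what that setup looked like - just a sketch; the IPs, IDs and token are placeholders. Note that a Hetzner Cloud floating IP also has to be re-assigned through their API, so the notify script is what does the real failover:

    # /etc/keepalived/keepalived.conf on lb1
    # (lb2 mirrors this with state BACKUP and a lower priority)
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        advert_int 1
        virtual_ipaddress {
            203.0.113.10/32 dev eth0    # the floating IP (placeholder)
        }
        # run when this node becomes MASTER
        notify_master /etc/keepalived/assign-fip.sh
    }

    #!/bin/sh
    # /etc/keepalived/assign-fip.sh - point the floating IP at this server.
    # Floating IP ID (123456) and server ID (4711) are placeholders.
    curl -s -X POST \
      -H "Authorization: Bearer $HCLOUD_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"server": 4711}' \
      "https://api.hetzner.cloud/v1/floating_ips/123456/actions/assign"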

  • @ehhthing said:
    Sorry, I don't think this is a valid concern here; you're conflating "single point of failure" with "reliability". An IP address does not simply "go down". Either the IP range stops getting announced or the server itself goes down. The former is pretty unlikely: it's extremely rare for a service provider to just suddenly stop announcing a single IP range unless the router for that range itself goes down, which is not very common.

    Also, you can't have a single IP address that corresponds to multiple load balancers. Each load balancer will need its own IP address, because one IP address can only be associated with one server, so I have no idea what you're trying to do with your setup.

    It is definitely a valid concern. I have been with Hetzner for almost 10-12 years, mostly dedicated servers. I have experienced their router going offline several times - not for 1 minute, but for several hours. This has occurred more than 6-7 times, if I recall correctly, over the past 5 years.

    I am aware that a floating IP doesn't work without having an IPv4 on each instance.

    What I am trying to do is very simple and very common. I will use HAProxy with a floating IP as the single entry point for my clients' services, for several purposes - mainly to keep a streamlined, strict SSL policy and to distribute and forward the traffic. It has many advantages for performance, security etc.
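
    Roughly like this on the HAProxy side (a sketch; the cert directory and backend addresses are placeholders):

    global
        # TLS policy enforced once, for every bind line
        ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend fe_https
        # terminate TLS for all client domains in one place
        bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
        default_backend be_web

    backend be_web
        balance roundrobin
        server web1 10.0.0.11:8080 check
        server web2 10.0.0.12:8080 check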

  • @vitobotta said:
    I used a custom HA LB setup with floating IP for a while a few years ago using haproxy + keepalived, when they didn't have managed load balancers yet. It worked well and I didn't have any problems with the floating IP or anything. But why the hassle instead of just using a load balancer since they have them now?

    That's exactly my setup as well. Their LB is based on HAProxy too, but it has too many limitations. The limits are also very low, and so is the performance.

  • @lowprofile said:

    @vitobotta said:
    I used a custom HA LB setup with floating IP for a while a few years ago using haproxy + keepalived, when they didn't have managed load balancers yet. It worked well and I didn't have any problems with the floating IP or anything. But why the hassle instead of just using a load balancer since they have them now?

    That's exactly my setup as well. Their LB is based on HAProxy too, but it has too many limitations. The limits are also very low, and so is the performance.

    What limits are low exactly?

  • @vitobotta said:

    What limits are low exactly?

    Probably all of them. I run my own load balancers and have more than 50 SSL certificates, and definitely more than 20 TB of traffic. Those are the limits on their highest advertised "LB31" load balancer. Also, a 40,000 connection limit for 150 backends (targets)? Seems very low to me too.

  • @fatchan said:

    @vitobotta said:

    What limits are low exactly?

    Probably all of them. I run my own load balancers and have more than 50 SSL certificates, and definitely more than 20 TB of traffic. Those are the limits on their highest advertised "LB31" load balancer. Also, a 40,000 connection limit for 150 backends (targets)? Seems very low to me too.

    How much traffic in average req/sec do you handle, just to get an idea?

  • FYI: I recently did some tests with floating IPs. It takes about one minute for a floating IP to come online between NBG<->FSN, and under 10 seconds within the same data center.

  • @vitobotta said:

    How much traffic in average req/sec do you handle, just to get an idea?

    Low request rate, only 100 rps off-peak and 2,000 rps at peak times. Traffic is 100-120 TB/mo. It all runs on HAProxy. The traffic is higher because of a lot of multimedia content, and the number of certificates is higher because of a lot of auxiliary services and different domains.

    I don't think the Hetzner LB is that bad, but the limits are pretty strange/unbalanced imo. 50 certificates is too low, and the traffic would become a problem quickly unless they let you book more.

  • @fatchan said:

    @vitobotta said:

    How much traffic in average req/sec do you handle, just to get an idea?

    Low request rate, only 100 rps off-peak and 2,000 rps at peak times. Traffic is 100-120 TB/mo. It all runs on HAProxy. The traffic is higher because of a lot of multimedia content, and the number of certificates is higher because of a lot of auxiliary services and different domains.

    I don't think the Hetzner LB is that bad, but the limits are pretty strange/unbalanced imo. 50 certificates is too low, and the traffic would become a problem quickly unless they let you book more.

    And 40k connections are not enough for 2k requests per second?

  • fatchan Member

    @vitobotta said:
    And 40k connections are not enough for 2k requests per second?

    That's fine, but the other limits would make it a non-starter, is what I'm saying.

  • ProHosting24 Member, Patron Provider

    I like to go with a primary/primary self-managed load balancing setup.

    There are two VRRP IPs, one being the primary IP on lb1 and the other being the primary IP on lb2.

    Both VRRP IPs are resolved from the domain; as soon as one LB fails, both IPs run on the remaining LB.

    I like this approach because it makes use of all resources for balancing and because you always know that both LBs work.

    We have run this setup with both LBs running nginx on 12 EPYC cores each, and we were able to mitigate layer 7 DDoS attacks with specific nginx settings at up to 80,000 concurrent connections. A very cool setup to run.

    With this many connections we still only had 60% load on the machines 😄
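
    In keepalived terms this is just two VRRP instances mirrored across the two boxes (a sketch with placeholder IPs; on lb2 the states and priorities are swapped):

    vrrp_instance VI_A {
        state MASTER            # lb1 owns this VIP by default
        interface eth0
        virtual_router_id 51
        priority 150
        virtual_ipaddress {
            70.70.70.75/32
        }
    }

    vrrp_instance VI_B {
        state BACKUP            # lb2 owns this VIP by default
        interface eth0
        virtual_router_id 52
        priority 100
        virtual_ipaddress {
            70.70.70.76/32
        }
    }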

  • @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    There are two VRRP IPs, one being the primary IP on lb1 and the other being the primary IP on lb2.

    Both VRRP IPs are resolved from the domain; as soon as one LB fails, both IPs run on the remaining LB.

    I like this approach because it makes use of all resources for balancing and because you always know that both LBs work.

    We have run this setup with both LBs running nginx on 12 EPYC cores each, and we were able to mitigate layer 7 DDoS attacks with specific nginx settings at up to 80,000 concurrent connections. A very cool setup to run.

    With this many connections we still only had 60% load on the machines 😄

    That's a nice setup actually

  • @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    There are two VRRP IPs, one being the primary IP on lb1 and the other being the primary IP on lb2.

    Both VRRP IPs are resolved from the domain; as soon as one LB fails, both IPs run on the remaining LB.

    I like this approach because it makes use of all resources for balancing and because you always know that both LBs work.

    We have run this setup with both LBs running nginx on 12 EPYC cores each, and we were able to mitigate layer 7 DDoS attacks with specific nginx settings at up to 80,000 concurrent connections. A very cool setup to run.

    With this many connections we still only had 60% load on the machines 😄

    Does this mean you always define two A records at the @ level for every domain or service in DNS?

  • Arirang Member

    @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    The architecture for my side project is similar, but mine has 4 instances (two instances per data center).

    Well-programmed applications implementing RFC 6724 fail over automatically between multiple A records - for example modern browsers, curl, or OkHttp (the Java client).

  • ProHosting24 Member, Patron Provider

    @lowprofile said:

    @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    There are two VRRP IPs, one being the primary IP on lb1 and the other being the primary IP on lb2.

    Both VRRP IPs are resolved from the domain; as soon as one LB fails, both IPs run on the remaining LB.

    I like this approach because it makes use of all resources for balancing and because you always know that both LBs work.

    We have run this setup with both LBs running nginx on 12 EPYC cores each, and we were able to mitigate layer 7 DDoS attacks with specific nginx settings at up to 80,000 concurrent connections. A very cool setup to run.

    With this many connections we still only had 60% load on the machines 😄

    Does this mean you always define two A records at the @ level for every domain or service in DNS?

    No, that would be an antipattern.
    You have one "main" domain, e.g. the @ record, which has both of the VRRP IPs, e.g.:

    @ domain.net
    A 70.70.70.75 (currently the primary IP on lb1)
    A 70.70.70.76 (currently the primary IP on lb2)

    cp.domain.net
    CNAME domain.net

    other-domain.net
    CNAME domain.net

    This way you can easily migrate the IPs.

  • quicksilver03 Member, Host Rep

    I implemented the same setup a few years back with OVH dedicated servers and a couple of IP addresses on their vRack solution; moving the IP from one server to another was very quick (much quicker than OVH's failover IP product).

  • lowprofile Member

    Ah, I see it now. Your main LB domain has the two IPs defined; the rest of the services and clients use a CNAME which points to your LB domain. Is this standards-compliant?

  • Levi Member

    First of all: consider the fact that Hetzner may boot you out without providing a reason. If you put all your eggs into Hetzner's basket, you are making an insane mistake. Serious setups should be hosted with a time-proven provider.

  • ProHosting24 Member, Patron Provider

    @lowprofile said:

    @ProHosting24 said:

    @lowprofile said:

    @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    There are two VRRP IPs, one being the primary IP on lb1 and the other being the primary IP on lb2.

    Both VRRP IPs are resolved from the domain; as soon as one LB fails, both IPs run on the remaining LB.

    I like this approach because it makes use of all resources for balancing and because you always know that both LBs work.

    We have run this setup with both LBs running nginx on 12 EPYC cores each, and we were able to mitigate layer 7 DDoS attacks with specific nginx settings at up to 80,000 concurrent connections. A very cool setup to run.

    With this many connections we still only had 60% load on the machines 😄

    Does this mean you always define two A records at the @ level for every domain or service in DNS?

    No, that would be an antipattern.
    You have one "main" domain, e.g. the @ record, which has both of the VRRP IPs, e.g.:

    @ domain.net
    A 70.70.70.75 (currently the primary IP on lb1)
    A 70.70.70.76 (currently the primary IP on lb2)

    cp.domain.net
    CNAME domain.net

    other-domain.net
    CNAME domain.net

    This way you can easily migrate the IPs.

    The rest of the services and clients use a CNAME which points to your LB domain. Is this standards-compliant?

    It is just the most straightforward setup, with less hassle when changing IPs, but unfortunately CNAME records are not intended to be used at the @ level (the zone apex).

    Cloudflare, though, allows you to set that and then just creates the corresponding A records in the background for you.
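
    The underlying reason, sketched in zone-file terms: a CNAME must be the only record at its name, and the apex always carries SOA and NS records, so this is invalid:

    ; invalid - a CNAME cannot coexist with the SOA/NS records at the apex
    domain.net.      SOA    ns1.domain.net. hostmaster.domain.net. ( ... )
    domain.net.      NS     ns1.domain.net.
    domain.net.      CNAME  lb.domain.net.   ; not allowed

    ; fine - a CNAME on any subdomain
    cp.domain.net.   CNAME  domain.net.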

  • @Levi said:
    First of all: consider the fact that Hetzner may boot you out without providing a reason. If you put all your eggs into Hetzner's basket, you are making an insane mistake. Serious setups should be hosted with a time-proven provider.

    My idea was to use 2-3 LBs located at different providers/DCs and provide CNAMEs to my clients - in case anyone boots me out, I have the option to switch without involving the clients.

  • emgh Member

    @Arirang said:
    FYI: I recently did some tests with floating IPs. It takes about one minute for a floating IP to come online between NBG<->FSN, and under 10 seconds within the same data center.

    When I tested this, I remember floating IPs from one DC in particular routed much faster to the others than the others did. Can't remember which. Maybe it was temporary. But there was a very real difference.

  • Arirang Member

    @ProHosting24 said:
    This way you can easily migrate the IPs.

    @Arirang said:

    @ProHosting24 said:
    I like to go with a primary/primary self-managed load balancing setup.

    The architecture for my side project is similar, but mine has 4 instances (two instances per data center).

    Well-programmed applications implementing RFC 6724 fail over automatically between multiple A records - for example modern browsers, curl, or OkHttp (the Java client).

    While researching the HA configuration, I found out that this is called Happy Eyeballs (RFC 8305). It's an interesting topic.

    Failover between IPv4 addresses depends on the timeout of the application; curl takes quite a long time unless you use the --connect-timeout option separately. For example, if the timeout is 5 seconds and there are two A records, the second record is only attempted after 2.5 seconds have been spent on the first. Browsers sometimes require quite long timeouts.

    When I tested Happy Eyeballs with curl, it tried to establish IPv4 and IPv6 connections in parallel; even if one protocol fails to connect, it immediately connects using the other protocol.
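
    For example (the host is a placeholder; --happy-eyeballs-timeout-ms needs curl >= 7.59.0):

    # cap the per-connection wait so failover to the next A record is fast
    curl --connect-timeout 3 https://example.com/

    # shorten the Happy Eyeballs delay between the IPv6 and IPv4 attempts
    # (the default is 200 ms)
    curl --happy-eyeballs-timeout-ms 100 https://example.com/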

  • @Neoon said:
    If you had that, you would be able to switch your IPs before the DC burns down.
    When SBG burned down, their system was dead in the water and people got fucked big time.

    That was OVH, not Hetzner.

  • Neoon Community Contributor, Veteran

    @MikusR said:

    @Neoon said:
    If you had that, you would be able to switch your IPs before the DC burns down.
    When SBG burned down, their system was dead in the water and people got fucked big time.

    That was OVH, not Hetzner.

    Yeah, SBG = Strasbourg.
    However, my guess is that the IP failover system doesn't autoscale at @Hetzner_OL either.
