Anyone running Kubernetes on dedicated servers (e.g. Hetzner)?

At work we're using GCP and it's costing a fortune. I figured that if we migrated to something like Hetzner (it's in the EU, affordable and reliable) we would save a massive amount.

Moving to dedis would mean managing more stuff ourselves, but we have the expertise (myself especially, more than others on the team), so it wouldn't be a big deal. The only annoying thing would be losing the flexibility we have with GCP.

Of course there's Hetzner Cloud, which would make several things easier, but dedis would give much better performance.

Is anyone here doing this at the moment?

Comments

  • If this is for business and it's growing, then it makes sense to keep it managed by providers.

  • I ran K8s on some colo'd servers for a few years. It was just for personal use and sometimes a pain, so I recently stopped.

    I found persistent storage to be quite a headache. I used GlusterFS for the entire life of the cluster, but support for it seems to have been dropped. I don't know what a good persistent storage option is right now; Ceph, I think.

    I used Traefik for ingress/HTTPS, sticking to v1 since it supported KV store clustering. I used the etcd that ships with K8s as the backing KV store, and needed to update certs every year (I'd always forget and everything would break; the cert-manager sketch at the end of this comment is how you would automate that nowadays).

    I also used a single IP mounted on both servers for services that listen on ports, but that was because I was new to K8s at the time.

    Overall, I'd say running your own K8s cluster is pretty easy, and once it's set up it's not a lot of work to maintain (storage excluded). The kubeadm setup and cluster joining are very easy (roughly the flow in the first sketch at the end of this comment), and you can be running things in 10 minutes or so. Replacing/adding nodes is quick as well.

    PS: I ran very large K8s clusters at work, so it stopped being fun as a hobby, and that's another reason I moved off it :p
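
    For anyone wondering what that kubeadm flow actually looks like on a couple of dedis, here is a minimal sketch. It assumes containerd and the kubeadm/kubelet/kubectl packages are already installed on each node; the hostname, pod CIDR and Flannel manifest URL are only illustrative, so adjust them for your own CNI and versions.

      # On the first dedi (control plane)
      kubeadm init \
        --control-plane-endpoint=cp.example.com \
        --pod-network-cidr=10.244.0.0/16

      # Make kubectl usable for the local user
      mkdir -p $HOME/.kube
      sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

      # Install a CNI; Flannel is shown here because it matches the pod CIDR above
      kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

      # Print a join command for the workers
      kubeadm token create --print-join-command

      # On each additional dedi, run the printed command, which looks like:
      # kubeadm join cp.example.com:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>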
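
    On the expiring-certificates pain: the usual fix nowadays is cert-manager, which renews Let's Encrypt certificates automatically. A rough sketch, assuming Traefik is the ingress class; the version, issuer file name and Ingress name below are placeholders.

      # Install cert-manager from its static manifest (pin a current version; v1.14.4 is just an example)
      kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

      # Write a Let's Encrypt ClusterIssuer (HTTP-01 solved through the Traefik ingress class)
      # to a file of your own, then apply it
      kubectl apply -f letsencrypt-issuer.yaml

      # Request a cert for an Ingress that already has a tls: section with a secretName;
      # cert-manager then issues and renews the secret on its own
      kubectl annotate ingress my-app cert-manager.io/cluster-issuer=letsencrypt-prod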

  • @HackedServer said:
    I ran K8s on some colo'd servers for a few years. […]

    So you use dedis, right?

    @ehab said:
    If this is for business and it's growing, then it makes sense to keep it managed by providers.

    I know, but cost is a problem with GCP.

  • You will get a network abuse ticket from Hetzner. Be prepared to explain Kubernetes networking to them or they will freeze your server (as they did mine).

  • seekborrow Member
    edited April 2023

    @HackedServer said:
    I ran K8s on some colo'd servers for a few years. […]

    Hetzner will issue you a network abuse ticket as a result of your actions. The issue with GCP is the expense.

  • imgmoney Member
    edited April 2023

    I had 10+ dedicated servers with Hetzner, with 10G switches and a 10G LAN between them, and I never received any tickets, even though I was using the 10Gbit LAN most of the time.

    @vitobotta I want to try K8s with Hetzner dedis, but the setup will be a pain unless we have a tool like the one you developed.

  • m4nu Member, Patron Provider

    Ran a Ceph cluster on Hetzner dedis for a while. Lots of rebalancing between nodes. Also didn't get any ticket.
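
    On the storage question above (GlusterFS support being dropped), Rook-managed Ceph is the usual replacement now. A rough sketch of the bootstrap, assuming each dedi has a spare raw disk for Ceph to consume; the release branch below is only an example, and everything comes from the upstream Rook example manifests.

      # Rook operator and a Ceph cluster from the upstream examples
      ROOK=https://raw.githubusercontent.com/rook/rook/release-1.11/deploy/examples
      kubectl apply -f $ROOK/crds.yaml -f $ROOK/common.yaml -f $ROOK/operator.yaml
      kubectl apply -f $ROOK/cluster.yaml       # consumes spare raw disks on the nodes

      # RBD-backed StorageClass (named rook-ceph-block in the example manifest)
      kubectl apply -f $ROOK/csi/rbd/storageclass.yaml

      # PVCs then just set storageClassName: rook-ceph-block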

  • @mosquitoguy said:
    You will get a network abuse ticket from Hetzner. Be prepared to explain Kubernetes networking to them or they will freeze your server (as they did mine).

    If you build it correctly with vSwitches, why would you get a warning?
    I had Proxmox with 10Gb NICs and lots of rebalancing while I was testing things daily for a month, hundreds of gigs of replication traffic. No warning.
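
    For reference, the vSwitch side of that is just a tagged VLAN interface on each dedi. A minimal sketch with iproute2; the NIC name, VLAN ID and addresses are examples (the VLAN ID comes from the vSwitch you create in Robot, and Hetzner requires an MTU of 1400 on vSwitch VLANs).

      # Create the VLAN interface on the physical NIC and give it a private address
      ip link add link enp0s31f6 name enp0s31f6.4000 type vlan id 4000
      ip link set enp0s31f6.4000 mtu 1400
      ip link set enp0s31f6.4000 up
      ip addr add 192.168.100.1/24 dev enp0s31f6.4000

      # Run node-to-node traffic (CNI, etcd, storage replication) over this private network
      # so it never shows up as odd traffic on the public interface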
