
Kubernetes ready providers

What lesser known/used providers are Kubernetes ready? By Kubernetes ready I mean they have their own CSI driver for their platform and a Cloud Controller Manager. It could be an open-source one written by other people rather than the provider themselves. Or, any provider where the certified installers work seamlessly without necessarily having a CSI driver (open to using Longhorn etc.) or a Cloud Controller Manager. So far I know of Hetzner and OVH. I have also heard of success stories on Contabo with Rancher.
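
To make the definition concrete, here is a rough sketch of the two things I mean (the storage class name is just an example and depends on the provider's CSI driver; e.g. Hetzner's installs hcloud-volumes and DigitalOcean's installs do-block-storage):

    # A claim the provider's CSI driver can satisfy with block storage
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hcloud-volumes
      resources:
        requests:
          storage: 10Gi
    ---
    # With a Cloud Controller Manager installed, a LoadBalancer Service gets
    # a real provider load balancer instead of sitting in <pending> forever
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080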

Comments

  • AWS, Azure, GCP
    DigitalOcean, Linode, Vultr

    Every provider is ready for Kubernetes if you use Rancher.

  • Tejy Member

    Scaleway, as they offer a "managed" Kubernetes service...

  • tsoft Member

    civo

    Thanked by ehab
  • quanhua92 Member
    edited March 2022

    I have experience using Vultr Kubernetes & writing docs for them. However, the price is not very attractive to me ($10-12 per 2GB of RAM). There are extra costs for the Load Balancer & Block Storage too.
    I am using K3s to set up multiple clusters on promo VPSes.
    TerraHost -> US.
    Hybula -> EU.
    GreenCloudVPS -> US, EU, SG.
    The experience is good so far. Each Ryzen/Epyc VPS costs only ~$5-$6 for 2GB RAM & lots of bandwidth. My system is not critical (a blog & a simple e-commerce store), so I can take the risk. I also love that I can use multiple VPS providers at the same time, and the deployments are seamless.

    If you want managed K8s, Block Storage, and a Load Balancer, then you should use DO, Vultr, Linode, or Civo.com. If you have more $, then go for AWS, Azure, or GCP.
    Another note is that Vultr and DigitalOcean do not always use multiple masters to manage your cluster. You can check the DigitalOcean community and see that control-plane performance is also scaled with the number of nodes in the cluster. So they are managed, but you can't be sure the master will always respond.

    With your own K3s, you can set up 3 masters on 3 different providers to minimize the risk of master failure.
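
    A minimal sketch of that layout with K3s' embedded etcd (the token, IPs, and DNS name are placeholders; each master sits on a different provider):

      # master 1, on provider A, starts the embedded etcd cluster
      curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
        --cluster-init --tls-san <api.example.com>

      # masters 2 and 3, on providers B and C, join it
      curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
        --server https://<master-1-ip>:6443 --tls-san <api.example.com>

      # workers join as agents against any master
      curl -sfL https://get.k3s.io | K3S_URL=https://<master-1-ip>:6443 \
        K3S_TOKEN=<shared-secret> sh -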

    For Hetzner, you can take a look at this hetzner-k3s repo.

    Thanked by ehab, abtdw
  • ehab Member

    @quanhua92 said:
    I have experience using Vultr Kubernetes & writing docs for them. However, the price is not very attractive to me ($10-12 per 2GB of RAM). There are extra costs for the Load Balancer & Block Storage too.
    I am using K3s to set up multiple clusters on promo VPSes.
    TerraHost -> US.
    Hybula -> EU.
    GreenCloudVPS -> US, EU, SG.
    The experience is good so far. Each Ryzen/Epyc VPS costs only ~$5-$6 for 2GB RAM & lots of bandwidth. My system is not critical (a blog & a simple e-commerce store), so I can take the risk.
    If you want managed K8s, Block Storage, and a Load Balancer, then you should use DO, Vultr, Linode, or Civo.com. If you have more $, then go for AWS, Azure, or GCP.
    For Hetzner, you can take a look at this hetzner-k3s repo.

    I agree.

  • tsoft Member

    civo is trash, imho.

  • ehab Member

    @tsoft said:
    civo is trash, imho.

    Mind saying why?

  • tsoft Member
    edited March 2022

    They ban accounts without a reason.
    I run a search engine; they banned me because it connects to paid proxy servers.
    I run the same thing everywhere (AWS, GCP, Microsoft, Vultr, DO, Hetzner) and never get any issues.

    Today I tried Civo and got banned. I talked to support, and they said I run a proxy on my server.

  • tsoft Member

    From support:
    From our end, it looked like you were running an open web proxy, which is against our terms of service

    Even though I do not run one :)

  • tsoft Member

    Fake reason to ban = not reliable imho.

    Only once did I have an issue, with Google Cloud: they said I was mining crypto due to CPU usage, then they unblocked me, and now there are no issues. The others are OK, plus Scaleway and VMHaus everywhere :)

  • Arkas Moderator

    Vultr's Kubernetes offering is technically still in beta.

  • You can use my tool with Hetzner Cloud and save money compared to others. See https://github.com/vitobotta/hetzner-k3s

    Hetzner doesn't have managed Kubernetes yet, but it has an official CSI driver and Cloud Controller Manager. My tool creates a production-ready cluster with a highly available control plane in a couple of minutes, ready to provision persistent volumes (using block storage) as well as load balancers out of the box, like a managed Kubernetes service.

    It uses k3s as the Kubernetes distribution, so it consumes fewer resources than other options and is very fast to install and upgrade.
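
    Once the cluster is up, a quick way to see that the Hetzner pieces are wired in (nothing tool-specific, just kubectl; the output below is roughly what you should expect):

      # the CSI driver registers a storage class backed by Hetzner block storage
      kubectl get storageclass
      #   NAME             PROVISIONER         ...
      #   hcloud-volumes   csi.hetzner.cloud   ...

      # and any Service of type LoadBalancer gets a Hetzner load balancer
      # from the Cloud Controller Manager
      kubectl get svc --all-namespaces | grep LoadBalancer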

  • tsoft Member

    Civo even misleads: after talking to support, it appears that they lied about the reason.

    Actual reason was:
    "The fear we had with your usage of Civo was the scale at which your VM was using CPU. It had the potential to affect other users of the platform."

    So the issue was not about proxies after all.

  • akhfa Member

    @quanhua92 said:
    I have experience using Vultr Kubernetes & writing docs for them. However, the price is not very attractive to me ($10-12 per 2GB of RAM). There are extra costs for the Load Balancer & Block Storage too.
    I am using K3s to set up multiple clusters on promo VPSes.
    TerraHost -> US.
    Hybula -> EU.
    GreenCloudVPS -> US, EU, SG.
    The experience is good so far. Each Ryzen/Epyc VPS costs only ~$5-$6 for 2GB RAM & lots of bandwidth. My system is not critical (a blog & a simple e-commerce store), so I can take the risk. I also love that I can use multiple VPS providers at the same time, and the deployments are seamless.

    If you want managed K8s, Block Storage, and a Load Balancer, then you should use DO, Vultr, Linode, or Civo.com. If you have more $, then go for AWS, Azure, or GCP.
    Another note is that Vultr and DigitalOcean do not always use multiple masters to manage your cluster. You can check the DigitalOcean community and see that control-plane performance is also scaled with the number of nodes in the cluster. So they are managed, but you can't be sure the master will always respond.

    With your own K3s, you can set up 3 masters on 3 different providers to minimize the risk of master failure.

    For Hetzner, you can take a look at this hetzner-k3s repo.

    Do you have any latency issues?
    How do you manage the dynamic storage provisioner in a high-latency cluster?

  • @quanhua92 said:
    I have experience using Vultr Kubernetes & writing docs for them. However, the price is not very attractive to me ($10-12 per 2GB of RAM). There are extra costs for the Load Balancer & Block Storage too.
    I am using K3s to set up multiple clusters on promo VPSes.
    TerraHost -> US.
    Hybula -> EU.
    GreenCloudVPS -> US, EU, SG.
    The experience is good so far. Each Ryzen/Epyc VPS costs only ~$5-$6 for 2GB RAM & lots of bandwidth. My system is not critical (a blog & a simple e-commerce store), so I can take the risk. I also love that I can use multiple VPS providers at the same time, and the deployments are seamless.

    If you want managed K8s, Block Storage, and a Load Balancer, then you should use DO, Vultr, Linode, or Civo.com. If you have more $, then go for AWS, Azure, or GCP.
    Another note is that Vultr and DigitalOcean do not always use multiple masters to manage your cluster. You can check the DigitalOcean community and see that control-plane performance is also scaled with the number of nodes in the cluster. So they are managed, but you can't be sure the master will always respond.

    With your own K3s, you can set up 3 masters on 3 different providers to minimize the risk of master failure.

    For Hetzner, you can take a look at this hetzner-k3s repo.

    I hadn't noticed before that you already linked to my tool for Hetzner :)

    Thanked by quanhua92
  • leapswitch Patron Provider, Veteran

    We have one-click Kubernetes clusters on CloudJiffy.com.

  • @akhfa said:
    Do you have any latency issues?
    How do you manage the dynamic storage provisioner in a high-latency cluster?

    I don't run everything in one cluster; each region is a separate cluster, so there is no latency issue. I use Google DNS for geo routing. The clusters can talk to each other through a service mesh (Linkerd, Istio).

    Thanked by akhfa
  • akhfa Member

    @quanhua92 said:

    @akhfa said:
    Do you have any latency issues?
    How do you manage the dynamic storage provisioner in a high-latency cluster?

    I don't run everything in one cluster; each region is a separate cluster, so there is no latency issue. I use Google DNS for geo routing. The clusters can talk to each other through a service mesh (Linkerd, Istio).

    What do you use for the persistent volume?

  • @akhfa said:
    What do you use for the persistent volume?

    Host path for the database & Longhorn for the others.
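
    Roughly like this (a sketch; I'm assuming the local-path provisioner that ships with K3s for the host-path side, and Longhorn's default storage class for the rest):

      # database: node-local storage (K3s ships the "local-path" provisioner,
      # which is hostPath under the hood, so the pod stays on that node)
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: db-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-path
        resources:
          requests:
            storage: 20Gi
      ---
      # everything else: replicated Longhorn volumes that can move between nodes
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: app-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn
        resources:
          requests:
            storage: 10Gi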

  • @akhfa said:

    @quanhua92 said:

    @akhfa said:
    Do you have any latency issues?
    How do you manage the dynamic storage provisioner in a high-latency cluster?

    I don't run everything in one cluster; each region is a separate cluster, so there is no latency issue. I use Google DNS for geo routing. The clusters can talk to each other through a service mesh (Linkerd, Istio).

    What do you use for the persistent volume?

    I run a K3s cluster spanning two Hetzner dedicated servers, a phpfriends server, and a few Oracle Cloud servers (with the control plane running on Hetzner Cloud in a VPC). The easiest way for me was to run WireGuard between the nodes for the CNI to use for inter-node connectivity, since on some providers I didn't have a way to restrict traffic natively through a firewall.
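
    (K3s can also take care of the tunnels itself: flannel has a WireGuard backend, so each node just needs a couple of flags. A sketch with placeholder values; the backend is called wireguard-native on recent releases and wireguard on older ones.)

      # server, advertising its public address so nodes on other providers can reach it
      curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
        --flannel-backend=wireguard-native \
        --node-external-ip=<this-node-public-ip>

      # agents on the other providers
      curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
        K3S_TOKEN=<shared-secret> sh -s - agent \
        --node-external-ip=<this-node-public-ip>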

    Storage is done on the host via hostPath or local PVCs via OpenEBS. It's a constraint: wherever the PVC is created is where the pod/service will be scheduled. Without doing something really fancy/expensive, that's the best I can make of the situation.

    Thanked by akhfa
  • @daxterfellowes said:
    Storage is done on the host via hostPath or local PVCs via OpenEBS. It's a constraint: wherever the PVC is created is where the pod/service will be scheduled. Without doing something really fancy/expensive, that's the best I can make of the situation.

    You can try openebs-nfs, which will create NFS storage that can be shared across multiple pods at the same time. It is not optimized, but it is useful for applications like WordPress. Longhorn RWX can also offer the same feature.
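
    The only change on the workload side is the claim itself: ask for ReadWriteMany and point it at an RWX-capable storage class. A sketch assuming Longhorn >= 1.1 with its default class (openebs-nfs installs its own class; check its chart for the name):

      # a volume several pods (e.g. WordPress replicas) can mount at once;
      # Longhorn serves RWX volumes over NFS via a share-manager pod
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: wordpress-files
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: longhorn
        resources:
          requests:
            storage: 5Gi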

  • @quanhua92 said:

    @daxterfellowes said:
    Storage is done on the host via hostPath or local PVCs via OpenEBS. It's a constraint: wherever the PVC is created is where the pod/service will be scheduled. Without doing something really fancy/expensive, that's the best I can make of the situation.

    You can try openebs-nfs, which will create NFS storage that can be shared across multiple pods at the same time. It is not optimized, but it is useful for applications like WordPress. Longhorn RWX can also offer the same feature.

    I think I'll give that another look now. Previously I always thought of NFS as a bit of a headache, but for most anything I'll host, it'll be totally fine. Thanks for forcing a re-look; it seems very doable.
